DL Research Content

A conversation with Luis Oscar Ramirez, founder & CEO of Mawari Network

Illustration: Gwen P; Source: Mawari Network

Luis Oscar Ramirez is the Founder & CEO of Mawari Network, which powers real-time streaming of immersive content through a global spatial computing network. A trained mathematician and digital arts enthusiast, Luis has been advancing XR technology and exploring AI agents and digital presence since 2017. Luis believes XR can transform how we connect, create, and experience the world.

What first inspired you to merge AI and extended reality back in 2017, and how has that vision evolved?

Let’s rewind a little. In 2017, we started Mawari with the mission of making immersive content more accessible and delivering these kinds of experiences at scale.

However, the aha moment happened in 2018 when we received an unusual request from KDDI, one of Japan’s largest telcos: they asked if we could help them stream a digital human to smart glasses.

This was not a simple task. The application ran on Unreal Engine 4 for real-time rendering and was connected to IBM Watson and Google’s speech services to give the digital human reasoning and a voice.

It quickly became clear that smartphones and smart glasses couldn’t handle this type of application on their own. That’s when we realised we needed a dedicated 3D-streaming tech stack for augmented reality, and that this would also require AI processing. The big challenge was creating an architecture that could distribute both 3D rendering and AI inference, running in real time on mobile or other lightweight devices.

That’s really where the story of Mawari’s proprietary technology began. Fast-forward to today: we’ve built proprietary 3D-streaming technology that delivers XR + AI at scale, and we’ve also realised that the only way to achieve true scale is through distributed compute, leading us to build the world’s first immersive compute network.

What are some of the biggest misconceptions about extended reality and spatial computing that you’ve encountered?

One of the biggest misconceptions is that spatial computing is just a more immersive version of today’s technology, like dropping a digital sofa in your living room or pinning a couple of 2D screens in mid-air as an extended desktop. Real spatial computing is about truly interactive, immersive content (what we call the sense of presence): real-time, intelligent, responsive interaction with digital beings and environments. Achieving that level of presence requires a completely different tech stack.

People also underestimate the technical complexity. The internet we use today was not built for the latency, compute distribution, or data load that real-time 3D at scale demands. For example, streaming a 3D AI avatar is nothing like buffering a YouTube video; it is closer to simulating life, and that calls for an entirely new approach.

Lastly, many still think spatial computing is decades away. We said the same about AI just a few years ago, yet here we are, chatting with AI every day. Spatial computing will have its ChatGPT moment much sooner than most people imagine.

Why did you transition Mawari to a DePIN model, and what advantages does it offer over traditional cloud services?

It became obvious that the promise of immersive content will never be unlocked while it depends on the same centralised data centers built for flat video. Real-time 3D needs compute that sits close to the user, spins up on demand, and scales with global traffic. Trying to force that workload through yesterday’s cloud felt like streaming Netflix over dial-up.

In short, immersive content needed a native network. Building it as a DePIN lets us scale faster, reduce costs, and keep control in the hands of builders rather than hyperscalers.

What roles do Guardian Nodes play in the Mawari Network, and what incentives exist for operators?

Guardian Nodes are the network’s auditors. They run on lightweight CPU hardware and double-check every frame rendered by a Spatial Streamer Node, tracking latency, bandwidth, jitter, and visual quality, then writing those metrics on-chain. Their job is to keep the entire mesh honest and give us a live picture of performance.
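As a rough sketch of what a Guardian Node audit record might look like (field names and thresholds here are illustrative assumptions, not Mawari’s actual schema):

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class QualityReport:
    """One audit record a Guardian Node might write on-chain.

    Field names and thresholds are illustrative, not Mawari's schema.
    """
    streamer_node_id: str
    frame_id: int
    latency_ms: float        # end-to-end render-to-display latency
    bandwidth_mbps: float    # measured throughput for this stream
    jitter_ms: float         # variation in inter-frame arrival time
    visual_quality: float    # e.g. a VMAF-style score in [0, 100]


def within_sla(report: QualityReport,
               max_latency_ms: float = 50.0,
               max_jitter_ms: float = 10.0,
               min_quality: float = 80.0) -> bool:
    """Check a report against hypothetical service-level thresholds."""
    return (report.latency_ms <= max_latency_ms
            and report.jitter_ms <= max_jitter_ms
            and report.visual_quality >= min_quality)


report = QualityReport("streamer-42", 1001, 23.5, 85.0, 4.2, 92.0)
payload = json.dumps(asdict(report))  # serialised form of the metrics record
print(within_sla(report))
```

The point of the sketch is the shape of the workflow: each rendered frame is measured against objective thresholds, and the resulting record is serialised so it can be committed on-chain for anyone to verify.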

Guardian Node operators earn Network Monitoring Rewards worth 20% of the Mawari Network’s total net revenue, scaling directly with global demand and usage. In addition, 11% of the total token supply is reserved for a three-year Early Operator Incentive program. Licenses come in Visionary, Balance, and Catalyst tiers and cost just $333 to run on a single CPU core (or you can delegate to an infrastructure partner).

Will there be opportunities for other types of nodes in the future?

Yes. Beyond Guardian Nodes, the network already relies on three other node classes, each critical to keeping our immersive-content pipeline running smoothly:

  • Spatial Streamer Nodes – GPU workhorses that handle the heavy lifting: real-time rendering, AI inference, and adaptive streaming of 3D scenes to end-user devices.
  • QoS Nodes – CPU-based monitors operated by vetted partners. They run specialised tests to validate latency, geographic coverage, and overall service reliability, making sure baseline performance standards are met as we scale.
  • Pulse Nodes – lightweight, app-based clients on everyday user devices that crowd-source live telemetry (latency, jitter, FPS, bitrate). Their real-world data helps us fine-tune the network in real time.

Together with Guardian Nodes, these roles form the backbone of a resilient, low-latency infrastructure purpose-built for streaming immersive content at global scale.

What does Mawari’s recent $10.8 million funding round mean for your roadmap?

It gives us fuel to move faster: accelerate R&D, grow the team, and expand ecosystem support.