
There’s an ongoing rallying cry in the XR world that AI is advancing at the right time to breathe new life into it. This can be seen in pitch decks, trade press, industry chatter, and recent talks at the industry’s biggest trade show and gravitational center: Augmented World Expo (AWE).
This narrative counters the hot take in the generalist tech press that AI replaces XR. Though some reporters love to call everything an “XYZ killer” – with XR as the latest victim – they know deep down that the world isn’t as binary as their appeasements to the clickbait gods make it seem.
The reality is that AI elevates XR’s value in several ways. These include developer-facing tools that can streamline and automate rote aspects of 3D experience creation, a la Snap’s GenAI Suite in Lens Studio. And on the user end, we’re getting closer to what we call generative AR.
The latter involves AR that’s generated through user prompts. Though it has a long way to go, the idea is that it brings AR from a pre-ordained experience to something that’s created on the fly. Already seen in early forms from Snap, it could vastly expand AR’s creative possibilities and utility.
Space Race
All the above focuses on how AI supports XR, as that’s a prevalent discussion point these days. But is the reverse value chain – XR supporting AI – a bigger deal? Discussed far less frequently, XR’s role in supporting AI may be underrated relative to its potential.
Unpacking that a bit, XR can elevate AI in several ways. But one area where it will play a critical role is AI’s sleeping giant: physical AI. As its name indicates, this unleashes AI’s capabilities on the physical world – a concept teased in early form by the IoT movement and DePIN (decentralized physical infrastructure networks).
Backing up for context, most AI investment and advancement have been confined to the web. We’re talking large language models, enterprise productivity automation, and creative production. But the larger prize is to bring all that intelligence to the broader canvas of the physical world.
One of the key enablers of that vision is world models. Think of them as LLMs for training and simulating large physical domains. And that’s where XR comes into the picture, as spatial mapping – from spatial geometry to semantic understanding – can be fuel for world models.
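To make that idea concrete, here’s a minimal Python sketch of what spatial-map data might look like as world-model fuel: a single XR capture pairing surface geometry with semantic labels and a device pose, flattened into a training record. Every name here (SemanticSurface, SpatialMapSample, to_training_record) is a hypothetical illustration, not any real SDK’s API.

```python
# Illustrative sketch only: hypothetical types showing how an XR spatial map
# (geometry plus semantic labels) could be packaged as world-model training
# data. None of these names come from a real SDK.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class SemanticSurface:
    label: str            # semantic understanding, e.g. "floor", "door"
    vertices: List[Vec3]  # spatial geometry in world coordinates


@dataclass
class SpatialMapSample:
    """One capture from an XR device: geometry + semantics + viewpoint."""
    surfaces: List[SemanticSurface]
    camera_pose: Tuple[Vec3, Vec3]  # (position, orientation as Euler angles)


def to_training_record(sample: SpatialMapSample) -> dict:
    """Flatten a capture into the kind of record a world model might train on."""
    return {
        "pose": sample.camera_pose,
        "surfaces": [
            {"label": s.label, "vertex_count": len(s.vertices)}
            for s in sample.surfaces
        ],
    }


if __name__ == "__main__":
    sample = SpatialMapSample(
        surfaces=[
            SemanticSurface("floor", [(0, 0, 0), (1, 0, 0), (1, 0, 1)]),
            SemanticSurface("door", [(1, 0, 1), (1, 2, 1), (1.8, 2, 1)]),
        ],
        # An eye-level viewpoint, of the kind smart glasses would capture.
        camera_pose=((0.5, 1.6, -1.0), (0.0, 15.0, 0.0)),
    )
    print(to_training_record(sample))
```

The point of the sketch is the pairing: geometry alone tells a model where surfaces are, while semantic labels tell it what they are – and both together are what make spatial maps useful training fuel.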
“AI can’t understand the real world without XR,” said AWE founder and chair, Ori Inbar. “Large language models can predict, but they don’t actually experience. The real 2026 trend is AI that learns from the world, powered by XR, world models, and embodied intelligence.”
This will be a central motif at the upcoming AWE USA, as Inbar recently revealed. Where the show previously focused on how AI loves XR, it’s now more about how AI needs XR – a shift captured in its new theme: I, Spatial. Players that tap into XR will gain an edge in AI’s space race.
Sleeping Giant
Much of the above is still to come in terms of how XR can unlock and amplify physical AI. But some of it is already happening. For example, Niantic Spatial is applying its signature spatial maps – derived from Pokémon Go play, among other sources – to develop and refine world models.
These models are already gaining traction, as seen in Niantic Spatial’s new partnership with Coco Robotics to guide autonomous delivery robots. Other use cases will emerge wherever autonomous navigation can benefit from world models – a sleeping giant of a market.
The best part is that Niantic Spatial’s models – as valuable and robust as they are – were built on the back of mobile AR. The real opportunity is world models built from smart glasses: human-centric endpoints will be better achieved with models derived from eye-level points of view.
Of course, many things need to happen for that vision to materialize, including smart glasses reaching scale and the right mechanisms to alleviate otherwise-debilitating privacy concerns. But a few puzzle pieces are being placed to reveal what the physical AI picture might someday look like.
