Samsung has stepped into the spotlight as the first company to announce a new mixed reality (MR) headset powered by the freshly unveiled Android XR platform. Dubbed “Project Moohan,” this device is anticipated to hit the consumer market in 2025. I recently had the chance to experience an early version.
It’s important to note that Samsung and Google are holding back some crucial details regarding Project Moohan, such as its resolution, weight, field-of-view, and pricing. During my demo, I wasn’t permitted to take any photos or videos, so for now, we’re relying on official imagery.
When I first encountered Project Moohan, it immediately reminded me of a blend between the Quest and Vision Pro. This isn’t just a casual comparison; the headset itself seems to draw significant design inspiration from Vision Pro. From its color scheme to the button placement and even its calibration process, it’s evident that Samsung has taken notes on what’s out there.
On the software side, if you asked someone to create an operating system combining elements of Horizon OS and VisionOS, Android XR would likely be the result. It’s striking how much Project Moohan, paired with Android XR, feels like an iteration on the two major headset platforms.
However, I’m not here to accuse anyone of intellectual theft. In the tech industry, borrowing and enhancing great ideas from each other is par for the course. As long as Android XR and Project Moohan integrate the best features of their competitors while steering clear of the pitfalls, it’s a win-win for developers and consumers alike.
Now, let’s dive into the actual hands-on experience I had with Samsung’s Project Moohan and Android XR. Starting with the hardware, Project Moohan is a visually appealing device. It sports a goggle-style design reminiscent of the Vision Pro, but Samsung has opted for a rigid strap with a tightener, contrasting with Vision Pro’s softer, albeit less comfortable, strap. This design choice mirrors the ergonomics of the Quest Pro, featuring an open-peripheral style ideal for augmented reality usage. Like Quest Pro, it also includes magnetic snap-on blinders for those seeking a fully immersive experience.
While the headset shares similarities with Vision Pro in looks and button placement, it lacks Vision Pro’s ‘EyeSight’ display, the feature that projects the user’s eyes externally. Though some have critiqued that feature, I find it beneficial and something I’d hope Project Moohan would incorporate. Coming from the Vision Pro experience, not being able to ‘see’ the wearer while they see you feels slightly awkward.
Samsung has remained somewhat secretive about the headset’s technical specifications, citing its prototype status. However, it’s confirmed that a Snapdragon XR2+ Gen 2 processor powers the device, a step up from the chips in Quest 3 and Quest 3S.
During my brief time with Project Moohan, I noticed it uses pancake lenses with automatic IPD adjustment, made possible by integrated eye-tracking. The field-of-view seemed narrower than the Quest 3 or Vision Pro, but I need to try out different forehead pad options to see if they can bring my eyes closer to the lenses for a better field-of-view.
While the field-of-view appears smaller, the headset still feels sufficiently immersive. However, the display’s brightness falls off toward the edges, making the sweet spot feel somewhat constrained. Getting the lenses closer to my eyes might improve this, but as it stands, Meta’s Quest 3 seems to have the upper hand, followed by Vision Pro, with Project Moohan trailing slightly behind.
Samsung hinted that Project Moohan would come with its own controllers, yet I didn’t get to test or even see them. The decision on whether these controllers will be included by default or sold separately hasn’t been finalized.
For now, the headset relies on hand- and eye-tracking for input, a surprisingly effective blend of features from both Horizon OS and VisionOS. You can use raycast cursors like in Horizon OS or opt for eye and pinch gestures similar to those in VisionOS. Samsung has also added downward-facing cameras to detect pinches comfortably made with hands resting on your lap.
Slipping the headset on, I was first struck by the clarity of my hands. The passthrough cameras seemed to produce a sharper image than the Quest 3 and exhibited less motion blur than the Vision Pro, though I tested in optimal lighting conditions. Interestingly, while my hands appeared crisp, objects further away seemed less so, suggesting a passthrough focus set at arm’s length.
Transitioning to Android XR, the software feels like a close combination of Horizon OS and VisionOS. The ‘home screen’ is a transparent backdrop filled with app icons, similar to Vision Pro’s design. Navigating the interface involves a simple look-and-pinch gesture to select an app, which opens in a floating panel; the same gesture opens the home screen itself.
System windows in Android XR resemble Horizon OS more than VisionOS, with mostly opaque backgrounds and the freedom to reposition windows by grabbing what feels like an invisible frame surrounding each panel.
Beyond just flat apps, Android XR supports fully immersive experiences. During my demo, I explored a VR version of Google Maps reminiscent of Google Earth VR, complete with 3D modeling of cities, Street View imagery, and even volumetric captures of interior spaces.
Street View remains in its monoscopic 360 form, while volumetric captures offer a real-time, fully explorable experience. Google described this as a Gaussian splat solution, though details on whether it requires new scanning or utilizes existing imagery remain unclear. While not as sharp as photogrammetry, it’s still impressive, running directly on the device with anticipated improvements in sharpness over time.
In line with this upgrade, Google Photos has been transformed for Android XR, able to automatically convert 2D photos and videos from your library into 3D. In my brief interaction with this feature, the conversions seemed just as impressive as those on Vision Pro.
YouTube has also been enhanced to take full advantage of Android XR. You can watch regular content on a large, curved display or dive into its library of 180, 360, and 3D videos. Although not all of it is high quality, such a broad media selection signals continued support as more headsets adopt these formats.
A YouTube video originally captured in 2D was automatically converted to 3D for viewing on the headset, appearing comparable in quality to Google Photos’ 3D conversion technology. The conversion process wasn’t explicitly outlined; whether creators must opt in or it happens automatically remains to be clarified, but I anticipate more details emerging.
Perhaps the standout advantage of Android XR and Project Moohan, on both the hardware and software level, is conversational AI. Google’s AI agent, Gemini, specifically its ‘Project Astra’ variant, can be activated right from the home screen. Not only does it listen to everything you say, it also continuously ‘sees’ what you see in both the real and virtual realms. This ongoing perception creates a smarter, better-integrated, and more conversational experience than what’s available on current headsets.
While Vision Pro relies on Siri, which only listens and focuses on single tasks, and the Quest has an experimental Meta AI agent limited to the real world, Gemini stands apart. It receives a continuous low-framerate video feed from both real and virtual environments, eliminating awkward pauses typical with other assistants.
Gemini also possesses a memory, giving it a contextual edge. According to Google, it has a rolling 10-minute memory and retains “key details of past conversations,” allowing you to reference things discussed earlier or visual elements you’ve looked at.
I watched a typical AI demo in which you could ask about the contents of the room, and Gemini handled tricky questions impressively without being sidetracked. I tested its translation capabilities by asking it to translate signs in the room, and it quickly returned accurate results. It also handled context-based queries seamlessly, referencing signs I had asked about earlier without any trouble.
Gemini’s talents extend beyond answering questions. While it’s still uncertain how deep these capabilities will go at launch, Google showcased its ability to control the headset. For instance, when I said “take me to the Eiffel Tower,” it pulled up the immersive 3D Google Maps view of the landmark, then seamlessly answered follow-up questions about the tower.
Gemini can also fetch tailored YouTube videos relevant to your inquiry. For example, requesting “a video of the view from the ground” while virtually viewing the Eiffel Tower produced a fitting video answer.
The potential of Gemini extends to traditional assistant tasks like sending messages or setting reminders. Still, its ability to offer XR-specific functions awaits further exploration.
Currently, Gemini on Android XR feels like the most advanced AI agent on any headset, arguably surpassing Meta’s offering on its Ray-Ban smart glasses. But Apple and Meta will undoubtedly aim to match these capabilities, so how long Google’s lead lasts is an open question.
Far beyond just a headset add-on, Gemini appears destined for smaller, daily wearable smart glasses, something I managed to try—though that’s a story for another article.