UX design for augmented and virtual reality experiences, focusing on spatial interaction and immersion.
stellae.design
AR/VR UX covers designing experiences for augmented reality (digital content overlaid on the real world) and virtual reality (fully immersive environments). These platforms require thinking in 3D, attention to physical comfort, and new input methods (hand tracking, gaze, controllers). Key challenges: placing information in 3D, spatial navigation, text readability, and user orientation.
AR/VR UX design addresses the unique challenges of creating user experiences in three-dimensional, immersive environments where traditional 2D interface conventions no longer apply: users interact with spatial content through head movement, hand gestures, eye tracking, and body position rather than mouse clicks and screen taps. The stakes of getting immersive UX wrong extend beyond frustration into physical discomfort. Poorly designed VR experiences cause motion sickness, spatial disorientation, and eye strain, while badly placed AR overlays can obscure critical real-world information and create safety hazards. As spatial computing moves from a gaming niche into mainstream productivity, healthcare, education, and retail applications, disciplined immersive UX practice is becoming as fundamental as responsive web design was a decade ago.
Apple Vision Pro uses eye tracking for targeting and hand pinch gestures for selection, creating an input paradigm that feels natural because it mirrors how humans already look at things they want to interact with and use fine motor hand movements to manipulate them. The system places UI windows at positions in physical space that persist across sessions, allowing users to arrange their workspace spatially in ways that leverage human spatial memory for organization. The comfort-first approach, including passthrough for real-world awareness and gradual immersion controls, demonstrates how spatial computing can integrate into daily workflows rather than demanding full isolation.
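The gaze-for-targeting, pinch-for-selection pattern can be sketched in a few lines of geometry. This is an illustrative model, not any vendor's API: `gaze_target` and the 3-degree cone threshold are assumptions chosen for the example, and real systems add filtering, dwell logic, and per-user calibration.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def gaze_target(gaze_origin, gaze_dir, targets, max_angle_deg=3.0):
    """Return the target closest to the gaze ray within an angular cone.

    Angular (not linear) tolerance matters here: eye-tracking error is
    roughly constant in degrees of visual angle, so hit regions should be
    sized in angle rather than in pixels or meters.
    """
    best, best_angle = None, max_angle_deg
    for name, position in targets.items():
        to_target = [p - o for p, o in zip(position, gaze_origin)]
        a = angle_between(gaze_dir, to_target)
        if a < best_angle:
            best, best_angle = name, a
    return best

# Usage: two buttons 2 m ahead; gaze pointed straight forward.
# "close" sits about 1.4 degrees off-axis (inside the cone);
# "settings" is about 17 degrees off-axis (outside it).
targets = {"close": (0.05, 0.0, 2.0), "settings": (0.6, 0.0, 2.0)}
hovered = gaze_target((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), targets)
```

In a full interaction loop, `hovered` would drive a highlight state, and a separate pinch event from the hand tracker would commit the selection, decoupling "what you are looking at" from "when you act".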
IKEA Place uses AR to render true-to-scale 3D models of furniture in the user's actual room through their phone camera, solving the fundamental problem of imagining whether a piece will fit and look right in a specific space. The interface keeps interactions simple (point the camera, tap to place, pinch to adjust), avoiding complex 3D manipulation gestures that would frustrate casual users who have no experience with spatial interfaces. The success of this application demonstrates how AR delivers the most value when it solves a concrete spatial problem that 2D interfaces cannot address.
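Under the hood, tap-to-place reduces to intersecting a camera ray with a detected surface. The sketch below shows only that core geometry against an idealized horizontal floor plane; real frameworks such as ARKit and ARCore expose hit-testing against detected planes and meshes, so the function name and flat-floor assumption here are illustrative.

```python
def ray_plane_intersection(origin, direction, plane_y=0.0):
    """Intersect a camera ray with a horizontal floor plane at height plane_y.

    Returns the 3D point where the ray hits the floor, or None if the ray
    never reaches it. This is the core of tap-to-place: the tap defines a
    ray through the camera, and the furniture model is anchored where that
    ray meets the floor.
    """
    oy, dy = origin[1], direction[1]
    if abs(dy) < 1e-9:
        return None  # ray parallel to the floor
    t = (plane_y - oy) / dy
    if t <= 0:
        return None  # floor is behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))

# Usage: camera held 1.5 m above the floor, tap ray angled down and forward.
# The model lands 3 m in front of the camera, on the floor.
point = ray_plane_intersection((0.0, 1.5, 0.0), (0.0, -0.5, 1.0))
```

Keeping placement to a single tap like this, rather than exposing full 6-degree-of-freedom manipulation, is exactly the simplicity trade-off the paragraph above describes.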
An enterprise VR training application transplants a desktop-style dropdown menu into 3D space, rendering small text labels as flat panels floating at arm's length that users must precisely point at with a ray-cast controller to select. The text is barely readable at the required distance, the hit targets are so small that users miss them repeatedly, and the flat 2D visual language provides no spatial depth cues that would help with targeting in a 3D environment. Users spend more time fighting the interface than completing training tasks, and the majority report hand fatigue and frustration within ten minutes.
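The hit-target failure above has a simple geometric root: pointing error with hand tracking or ray-cast controllers is roughly constant in visual angle, so a target's required physical size grows with its distance. A minimal sizing helper makes this concrete; the 2-degree floor is an illustrative assumption to be tuned per input method and validated in user testing, not a published standard.

```python
import math

def min_target_size(distance_m, min_angle_deg=2.0):
    """World-space diameter a target must have to subtend min_angle_deg
    of visual angle at distance_m.

    A button at 2 m must be physically larger than one at 0.5 m to be
    equally easy to hit with a ray-cast pointer, because pointing jitter
    is approximately constant in degrees, not in centimeters.
    """
    return 2.0 * distance_m * math.tan(math.radians(min_angle_deg) / 2.0)

for d in (0.5, 1.0, 2.0):
    print(f"{d} m -> {min_target_size(d) * 100:.1f} cm")
# 0.5 m -> 1.7 cm
# 1.0 m -> 3.5 cm
# 2.0 m -> 7.0 cm
```

The training app's arm's-length panels with text-sized hit areas fall well below this kind of threshold, which is why users miss repeatedly and fatigue quickly.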
• The most pervasive mistake is designing immersive interfaces on a flat screen and assuming they will translate: spatial relationships, depth perception, text readability, and interaction ergonomics all change dramatically when experienced through a headset versus previewed in a 2D viewport.
• Another critical error is ignoring physical comfort entirely: fixed UI elements that move with the user's head cause motion sickness, content placed too close triggers eye convergence fatigue, and 60-minute sessions without rest prompts cause physical discomfort that users blame on the product.
• Teams also frequently underestimate the input precision gap between 2D and 3D interaction, designing interfaces that demand the same targeting accuracy as mouse-driven desktop UIs even though hand tracking and controller pointing are inherently less precise.
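The convergence-fatigue pitfall above lends itself to a mechanical guardrail: clamp where UI panels may spawn into a comfort band of distances. The band limits below are illustrative assumptions in the spirit of common headset design guidance, not values from any specific vendor; the right numbers depend on the headset's focal plane and should be verified in testing.

```python
def clamp_ui_distance(requested_m, near_m=0.75, far_m=3.0):
    """Clamp the spawn distance of a UI panel into a vergence-comfort band.

    Content much closer than ~0.75 m forces strong eye convergence against
    a headset's fixed focal plane (vergence-accommodation conflict), while
    content beyond a few meters becomes hard to read at typical text sizes.
    """
    return max(near_m, min(far_m, requested_m))

# Usage: requests inside the band pass through; requests outside get clamped.
assert clamp_ui_distance(1.5) == 1.5   # comfortable as requested
assert clamp_ui_distance(0.3) == 0.75  # too close: pushed out
assert clamp_ui_distance(10.0) == 3.0  # too far: pulled in
```

A rule this small is easy to enforce in code review, which is one way to turn comfort guidelines from documentation into behavior the product cannot violate.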