What AR/VR is, how it works at a high level, what it means for UI/UX designers, and how you can begin to cut your teeth designing for AR and VR applications.

Overview

In the next ten years, especially with Facebook's rebrand to Meta, crypto, NFTs, and online-only content, AR & VR are poised to dominate the cross-platform application marketspace.

How does AR/VR work?

Both AR and VR leverage a device's computational power to generate dynamic, superimposed 2D/3D user interfaces and experiences in real time.

  • AR takes the feed from front-facing cameras, LIDAR (Light Detection and Ranging), and other inputs, builds depth maps, and uses constant field-watching to update the UI and feedback through the device
  • VR takes the device's viewport, hijacks it, and superimposes a mixed 2D/3D environment that the user can interact with
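To make the depth-map step concrete, here is a minimal Python sketch of how a renderer might consult a camera depth map to decide whether a virtual object is hidden behind a real surface. The function and parameter names are invented for illustration; real AR SDKs handle this occlusion test internally.

```python
# Sketch: per-pixel occlusion test against a depth map from the device camera.
# All names here are illustrative, not any real SDK's API.

def is_occluded(depth_map, x, y, virtual_depth_m, tolerance_m=0.02):
    """Return True if the real world is closer to the camera than the
    virtual object at pixel (x, y), i.e. the object should be hidden."""
    real_depth_m = depth_map[y][x]          # metres to nearest real surface
    return real_depth_m + tolerance_m < virtual_depth_m

# A 2x2 depth map: the real wall is 1.5 m away everywhere.
depth_map = [[1.5, 1.5],
             [1.5, 1.5]]

# A virtual cube placed 2.0 m out sits behind the wall -> occluded.
print(is_occluded(depth_map, 0, 0, 2.0))   # True
# A cube at 1.0 m floats in front of the wall -> visible.
print(is_occluded(depth_map, 1, 1, 1.0))   # False
```

The tolerance term keeps noisy depth readings from making objects flicker in and out at near-equal depths.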

Zone 3: Rough view

Extends from the edge of decent view to roughly another 10-15° out, radially. This area is good for information that is not absolutely necessary, but that the user should be able to glance at if they want to.

Approaching the design

Box your AR designs in a way that allows for maximum usability with minimal potential for eye strain. Make sure the user:

  • Can see what they're working with
  • Clearly knows where their interactions will take place
  • Can easily see what needs to be seen without straining their eyes

Bringing it all together

As we march toward a future where AR and VR dominate the cross-platform application marketspace, it is worth learning how to work with and design for AR & VR.

  • Today, we covered: what AR/VR is, how it works at a high level, what it means for UI/UX designers, and how you can begin to cut your teeth designing for AR/VR applications

Real-time Shared Experiences

You will need to account for real-time shared experiences, where multiple users see and interact with the same virtual content at the same time.

What does this mean for UI/UX designers?

AR and VR are VERY different animals: AR layers interfaces over the real world the user is already looking at, while VR replaces that world entirely, so each comes with its own design constraints.

How to start designing for AR/VR

Design for the human eye itself

  • FOV (Field of View) exists between the human eye and the viewport surface
  • Interactions are not confined strictly to the viewport's size, but to the user's ability to see them
  • Zones 1 and 2 are the best for conveying important, immediate information
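One way to reason about these zones is as angular offsets from the eye's central line of sight. Here is a minimal sketch, assuming simple 3D vectors; the helper name is invented for illustration:

```python
import math

def angle_from_gaze(gaze_dir, target_pos, eye_pos=(0.0, 0.0, 0.0)):
    """Angle (degrees) between the eye's central line of sight and the
    direction from the eye to a UI element. Vectors are (x, y, z)."""
    to_target = tuple(t - e for t, e in zip(target_pos, eye_pos))
    dot = sum(g * t for g, t in zip(gaze_dir, to_target))
    mag = math.hypot(*gaze_dir) * math.hypot(*to_target)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

# Looking straight ahead down +z; an element directly ahead is at 0 deg.
print(angle_from_gaze((0, 0, 1), (0.0, 0.0, 2.0)))  # 0.0
# An element well off to the side sits at ~45 deg -- far outside comfort.
print(angle_from_gaze((0, 0, 1), (2.0, 0.0, 2.0)))  # ~45.0
```

Note that the angle, not the on-screen pixel distance, is what determines eye strain: a nearby element can demand a large eye rotation while a distant one does not.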

Zone 4: Nope

AVOID putting anything the user needs to see or interact with outside a radius of about 30° from the eye's central line of sight (this constitutes the average maximum rotation that the human eye is capable of).
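Putting the zones together, here is a sketch of a zone classifier. Only the ~30° outer limit comes from the text above; the Zone 1 and Zone 2 boundaries below are placeholder assumptions for the sketch:

```python
# Illustrative zone thresholds, in degrees from the central line of sight.
# The 30-degree outer limit matches the text; inner boundaries are assumed.
ZONES = [
    (10.0, "Zone 1: comfortable view"),   # assumed boundary
    (20.0, "Zone 2: decent view"),        # assumed boundary
    (30.0, "Zone 3: rough view"),         # ~10-15 deg past decent view
]

def classify(angle_deg):
    """Map an angular offset from the gaze line to a viewing zone."""
    for limit, name in ZONES:
        if angle_deg <= limit:
            return name
    return "Zone 4: nope -- keep required UI out of here"

print(classify(5.0))    # Zone 1: comfortable view
print(classify(25.0))   # Zone 3: rough view
print(classify(40.0))   # Zone 4: nope -- keep required UI out of here
```

A layout system could run every element through such a classifier and flag anything interactive that lands in Zone 4.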

Surface-less interactions

The interactions in AR/VR will be nearly 100% virtual, and all haptics (physical feedback) from these interactions will be device-specific and non-standard.

  • How deep did the user press? Where in space are they targeting? Do you have fixed or dynamic depth? How far out is the object of interaction?
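The "how deep did the user press?" question can be sketched as comparing the fingertip's depth with the virtual surface's depth. The names and the 1 cm activation threshold below are assumptions for illustration, not any framework's API:

```python
def press_depth(fingertip_z, surface_z):
    """How far (metres) the fingertip has pushed past a virtual surface
    facing the user along -z. Returns 0.0 if not yet touching."""
    return max(0.0, surface_z - fingertip_z)

def press_state(depth_m, activation_m=0.01):
    # The 1 cm activation threshold is an arbitrary choice for this sketch.
    if depth_m == 0.0:
        return "hover"
    return "pressed" if depth_m >= activation_m else "touching"

# Button surface 0.5 m from the user; fingertip pushes 12.5 cm past it.
d = press_depth(fingertip_z=0.375, surface_z=0.5)
print(d, press_state(d))  # 0.125 pressed
```

With no physical surface to stop the hand, an explicit activation threshold (plus visual or audio feedback) stands in for the tactile "click" users get from real buttons.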

Multiple input types

Most interactions will be visual, but input will also arrive via eye tracking, thumb controls, reach depth, speech recognition, head-tilt gestures, and a whole lot more

  • These additional input types change the way users interact with your product and shape how they will come to expect it to react
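One common way to cope with so many input types is to normalise them into a single event stream, so the UI logic only deals with intents rather than devices. A sketch, with invented event names and fields:

```python
# Sketch: normalising heterogeneous AR/VR inputs into one event stream so
# UI code doesn't care whether a "select" came from gaze, voice, or touch.
from dataclasses import dataclass

@dataclass
class InputEvent:
    source: str   # "gaze", "voice", "thumbstick", "head_tilt", ...
    action: str   # normalised intent: "select", "dismiss", ...

def normalise(raw):
    """Map device-specific raw gestures onto shared intents."""
    mapping = {
        ("gaze", "dwell"):       "select",
        ("voice", "ok"):         "select",
        ("thumbstick", "click"): "select",
        ("head_tilt", "left"):   "dismiss",
    }
    action = mapping.get((raw["source"], raw["gesture"]))
    return InputEvent(raw["source"], action) if action else None

evt = normalise({"source": "voice", "gesture": "ok"})
print(evt)  # InputEvent(source='voice', action='select')
```

Keeping the raw source on the event lets the UI still tailor feedback per modality (e.g. a voice "select" may deserve an audible confirmation) while the core interaction logic stays device-agnostic.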
