Controller-free VR, Navigating in the 3D World.

Virtual Reality 101 - please, no controllers.

Let's embrace a controller-free experience, unless you are designing a serious game.

Spatial apps are 3D spaces, pure and simple. They are:

  • Not just enlarged versions of your phone or laptop apps.
  • Not replacements for your 21-inch curved 8K monitor.

So, navigation should be reinvented. We are so used to scanning screens from top-left to bottom-right (the F pattern) - that's UX 101 for laptops and smartphones - but it falls flat (pun intended) in 3D. Entire careers were built on designing user experiences for screens and maximizing on-screen real estate to present as much information as possible in an aesthetically pleasing manner. In 3D spaces, however, there is no real-estate challenge: space is infinite, and the user needs to look around.

While it is tempting to draw a parallel with the interior design of physical real estate, that only works if you are creating a real-world replica in the virtual world. You are exposed to virtually unlimited possibilities for creating virtual worlds - in space, underwater, in the desert, in the middle of an afternoon sky with clouds drifting past, and quite literally anywhere else. At that point, the parallel with interior design breaks down.

What you have is an immersive 3D space where physics is optional and real estate is in infinite supply. In addition, users are generally seated with limited physical movement, and navigation and interaction in the 3D space happen through the user's hands, gaze, or a combination of both. With the ground rules established, what's the best way to enable effective navigation inside a spatial computer?

See-through is always enabled

Provide a see-through (passthrough) view that the user can turn ON/OFF based on the kind of experience. This is the eventual merger of augmented reality and virtual reality: turn the real world ON/OFF around a virtual experience, or augment existing reality with selective 3D content. In both cases, having the context of the real world helps users anchor themselves without becoming disoriented. And when they do get disoriented or nauseated, bringing them back to the real world is far easier. Both the Oculus Quest 2 (and later) and the Apple Vision Pro have got this right - it's like learning from a live spatial demonstration!

Use teleportation to cover large distances in the virtual space

We need to utilize the depth that virtual reality provides, and that means travelling large distances. Teleportation should not be a button on a controller; it should be triggered by the user's hand gestures.
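To make the idea concrete, here is a minimal, engine-agnostic sketch in TypeScript. Everything in it is an illustrative assumption, not a real SDK: the gesture system hands us a ray (origin and direction) from a confirmed pinch, and we teleport the user to where that ray meets the ground.

```typescript
// Illustrative sketch of gesture-triggered teleportation.
// Vec3, raycastGround, and teleport are hypothetical names, not a real engine API.
type Vec3 = { x: number; y: number; z: number };

// Intersect the gesture ray with the ground plane y = 0.
function raycastGround(origin: Vec3, dir: Vec3): Vec3 | null {
  if (dir.y >= 0) return null;           // ray points up or level: no ground hit
  const t = -origin.y / dir.y;           // distance along the ray to y = 0
  return { x: origin.x + dir.x * t, y: 0, z: origin.z + dir.z * t };
}

// On a confirmed pinch gesture, move the user to the targeted point.
function teleport(userPos: Vec3, origin: Vec3, dir: Vec3): Vec3 {
  const target = raycastGround(origin, dir);
  return target ?? userPos;              // stay put if the gesture misses the ground
}
```

In a real app the ray would come from the hand-tracking API of your platform, and you would typically add a visible arc or marker so the user can see the destination before committing the gesture.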

Intelligently populate VR assets in AR

Intelligently populate the assets in the real world if the user chooses augmented reality mode instead of virtual reality. Just as a responsive website works well on screens of different sizes, make spatial apps responsive so they work well in both augmented and virtual modes. For example, you can have an out-of-this-world product experience space in virtual reality, but showcase only the product in augmented reality (link to VR101 video no. 4).
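One simple way to sketch this "responsive" behavior: tag each asset with whether it only makes sense in full VR, then filter the scene by the current mode. The asset names and the `Mode` type below are made up for illustration.

```typescript
// Sketch: choose which assets to spawn based on the rendering mode,
// the way a responsive website chooses layouts by screen size.
type Mode = "ar" | "vr";

interface SpatialAsset {
  name: string;
  vrOnly: boolean; // true for set dressing that clashes with the real world
}

const sceneAssets: SpatialAsset[] = [
  { name: "product", vrOnly: false },        // the product shows in both modes
  { name: "skybox", vrOnly: true },          // full environment: VR only
  { name: "floatingIslands", vrOnly: true }, // out-of-this-world set dressing
];

// In AR, keep only the assets that belong against the real world.
function assetsFor(mode: Mode): string[] {
  return sceneAssets
    .filter(a => mode === "vr" || !a.vrOnly)
    .map(a => a.name);
}
```

The same scene definition then drives both modes, so switching between AR and VR is a one-line change rather than two separate scenes.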

Ensure users don’t go through everything

There’s a concept called collision, where 3D objects act as solid bodies instead of passing through each other. It’s important to have collision in virtual reality because the laws of physics are still useful sometimes: one cannot go through walls all the time (this is the new ghosting, I suppose). Let users teleport or walk around, but restrict movement through virtual walls, trees, people, planets, etc. in virtual spaces.
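A minimal version of this restriction can be sketched with an axis-aligned bounding box (AABB) check: reject any move (walk or teleport) that would end inside a solid. The names below are illustrative; real engines such as Unity or RealityKit provide collision systems out of the box.

```typescript
// Sketch: block movement through virtual solids with an AABB check.
type Vec3 = { x: number; y: number; z: number };
interface AABB { min: Vec3; max: Vec3 }

function insideBox(p: Vec3, b: AABB): boolean {
  return p.x >= b.min.x && p.x <= b.max.x &&
         p.y >= b.min.y && p.y <= b.max.y &&
         p.z >= b.min.z && p.z <= b.max.z;
}

// Only accept a move if it does not end inside a solid object.
function tryMove(from: Vec3, to: Vec3, solids: AABB[]): Vec3 {
  return solids.some(b => insideBox(to, b)) ? from : to;
}
```

This only checks the destination point; a production system would also sweep the path between the two points so fast moves cannot tunnel through thin walls.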

Trial and error

This is an evolving discipline, and we will discover new things as more people adopt spatial computers. Watch other spatial apps, learn from them, and run different experiments to see what works over the next 2-3 years.

It is imperative that we get our user experience act together for immersive spaces, and the only way to do that is by spending more time using spatial computers and apps.