Why widgets will be crucial for the future of AR interfaces
I conducted research into AR interfaces to come up with a set of rules for designing unobtrusive UI. I designed a navigation app concept to test my ideas: mocked up in Maquette, built in Unity, and user tested with Tilt5 AR glasses.

Conclusions? Cognitive overload is a serious problem. Designers have to choose carefully what to show and when to show it.

People expect technology to be more integrated with their lives. The popularity of wearables and voice assistants points to that. So, what does that mean for AR apps? Well, widgets are the best route to take.

The challenge facing AR designers is how to show the most essential functionality at the right time, then move out of the way. Apple Watch introduced Glances to tackle this exact issue. So it’s safe to assume the initial AR apps for visionOS will have to rely heavily on iOS widgets.

CarPlay took this concept even further, showing multiple widgets at once. It translates well into 3D and is great for displaying multiple contextual features. This gives me even more confidence that widgets will help our interfaces look less out of place in AR.

While on the move, an obscured view is very dangerous. Giving the user an intuitive way to control where the UI sits is important. Embodied interactions are the way to go here: using gestures to make our actions feel natural, like interacting with a real object.

There are a couple of options here:
  • Making it follow your hand
  • Dragging
  • Snapping it to part of the field of view (billboarding)
  • Snapping to a surface (surface magnetism)
For best results we have to give the user better control over the UI than we are used to. We can move our phone and rotate our watch with our wrist; we need at least the same level of freedom in AR.
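To make the billboarding option above concrete, here is a minimal, framework-agnostic sketch of the idea: keep the widget a fixed distance along the user's gaze, and let it trail the head smoothly rather than sticking to it rigidly. The function names, the head-pose representation (position plus a unit forward vector), and the smoothing constant are my own illustrative assumptions, not code from the project.

```python
def billboard_target(head_pos, head_forward, distance=1.5):
    """Target position for the widget: a fixed distance along the
    user's gaze. Assumes head_forward is a unit vector."""
    return tuple(p + f * distance for p, f in zip(head_pos, head_forward))

def lazy_follow(current, target, smoothing=0.1):
    """Move the widget a fraction of the way toward the target each
    frame, so it lags behind head motion instead of being glued to
    the view, which feels less intrusive and reduces motion sickness."""
    return tuple(c + (t - c) * smoothing for c, t in zip(current, target))

# Example per-frame update: the widget eases toward a point 1.5 m
# ahead of wherever the user is looking.
widget_pos = (0.0, 0.0, 0.0)
head_pos, head_forward = (0.0, 1.6, 0.0), (0.0, 0.0, 1.0)
widget_pos = lazy_follow(widget_pos, billboard_target(head_pos, head_forward))
```

Surface magnetism would replace `billboard_target` with a raycast against real-world geometry, snapping the widget to the nearest hit surface instead of a point in the field of view.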