“It made my eyes go weird. My stomach felt like I was on a rollercoaster some of the time. I don’t think I could use a 3D interface like this for very long without getting a headache…” — #mexuservoice
Dedicate even a small amount of time to testing 3D interfaces and you’ll quickly realise the challenges they present. A significant minority of users report varying degrees of motion sickness, disorientation and visual fatigue when experiencing visual depth in digital interfaces. This occurs across a range of products, from the latest virtual reality headsets to stereoscopic 3D displays.
The difficulties arise from an incomplete interaction model: visual depth is displayed as an output, but the input mechanism (usually a touchscreen) does not allow the user to respond in 3D. The brain finds this disconnect inherently dissatisfying, and users quickly become frustrated, no matter how visually impressive the 3D UI. This is true not just for the immersive 3D experiences emerging on virtual and augmented reality platforms, but also for the subtler sense of visual depth enriching interfaces across a wide range of device classes, from wearables to tablets.
New forms of input, ranging from pressure sensing (such as Apple’s Force Touch) to motion detection (such as the handheld controllers of the HTC Vive), are helping to complete this model and give users a sense of agency over interfaces with visual depth. However, the balance of progress in 3D remains heavily skewed towards the visual dimension. Mainstream adoption will remain low until the input methods catch up.
MEX Pathway #4, entitled ‘Dimensions’ and initiated as far back as July 2010, continues to explore this theme, with a wealth of talks, podcasts and articles to explore in the archive. For starters, try ‘Preparing for Apple’s new interactions’ and Dale Herigstad’s talk on ‘Spatial interface design’.
The principle, part of an emerging series in the MEX journal, is summarised below in a tweetable, shareable graphic. Thank you for citing appropriately.