Google Glass is a welcome experiment in bad user experience

Google Glass - you might look like marginally less of an idiot than you feared

Google Glass remains confined to a small niche of customers because the technology is unreliable, the features are unappealing to most users and the price is too high. In time, technological progress will solve the issues of reliability and price, but far less certain is whether Glass will ever achieve the balance of utility, ease and satisfaction required for mass market adoption.

I spent some time testing it in London recently (June 2014), exploring a space Google designed specifically to showcase Glass. However, it was beset with technical problems which left me surprised and disappointed that it still felt so experimental nearly 18 months after launch:

  • Overheating, causing Glass to display an error message saying it needed to pause for a while to cool down.
  • Lag, with noticeable gaps between input and response, both from voice input and the touch panel.
  • Power drain, consuming about 10% of its battery in 5 minutes. Google staff refreshed the devices frequently, suggesting this was a known problem they were keen to avoid.

These are basic issues which, while frustrating, would be trivial to solve over time. If this were the extent of Glass’ problems, users could simply wait patiently until the chipset development cycle delivered greater efficiency.

However, my short time with Glass also left me questioning the potential of its overall interaction model. Even imagining a subsequent version of the product with no lag and all-day battery life, major problems would remain:

  • Interaction. Input and output seem disjointed. The visual responses on screen don’t feel naturally connected to the control actions of touching the frame-mounted panel or talking to Glass. Some of this was down to the lag, but I think the problem would remain even with instantaneous response. The brain struggles to see the link between input and output because none of the expected affordances are present – like the way users intuitively connect touching an object beneath their fingertip on a touchscreen, or feel the tactile response of a hardware button.
  • Structure. There was no logic to Glass’ visual architecture, meaning I never felt I knew where I was within the platform, how to get back to a central point or how to access the features I wanted. Sometimes a tap or a double swipe would result in the ‘home screen’ showing the time and “ok glass” voice input cue, but on other occasions it took me into seemingly circular carousel menus. Glass doesn’t necessarily need a ‘home screen’ architecture akin to smartphones; indeed, it is more likely some kind of always-present, search-driven approach to accessing services would be more appropriate. However, currently it lacks both, so it feels neither structured nor free-form – just confusing and uncertain. I suspect some of this is a hangover from applying the legacy of Android’s app-driven architecture to the new platform.
  • Social context. Far and away the biggest issue for me was how Glass removes you from the shared moment. I was testing Glass with MEX co-host Andrew Muir Wood and a group of other participants, but the activity of wearing the glasses made the occasion less sociable. We were physically together in the same place, but our minds were engaged in separate, liminal worlds, half immersed in digital services and half in the physical present. It led to the disconcerting feeling of someone looking straight at you with unseeing eyes, their vision instead focused on the glowing postage stamp digitally projected into their field of view.
  • UI design. Quite apart from its structural issues, the visual design of Glass’ digital interface lacks any kind of polish. It is more akin to a set of prototype wireframes than a finished UI. The kind interpretation would be to assume this was a deliberate exercise in minimalism, but I fear this is not the case – it is just very early.

There were, however, some positive surprises. The hardware was well built, light and comfortable to wear. Voice recognition mostly worked, even though we were in a very noisy environment. Also, most people didn’t look like total idiots wearing it – still a long way from being ‘stylish’, but not as offensively futuristic as I had imagined.

My single most important lesson from this brief time with Glass, however, was how little unique value it currently delivers. To clarify, Glass is capable of doing valuable things – like showing you message notifications or giving you Google Maps directions – but none of these experiences felt like they derived unique benefit from being delivered to an eye-mounted screen rather than through a phone, tablet, watch or any other device. If Glass is ever to be perceived as more than an expensive, experimental plaything, it will need a whole new generation of services designed to take advantage of its unique characteristics.

For instance, the first demonstration area in the Glass experience tour was a set of foreign language posters, which users could supposedly translate on-the-fly. In reality, you first had to initiate the translation service with a voice command and then select which language you were trying to translate from a circular carousel menu by scrolling with a fingertip. Once the app was launched, you shuffled back and forth, tilting your head at unusual angles to try to fit the text within the viewfinder target. After a few seconds, if you’d picked the right language and got the positioning right, you saw a pidgin English translation.

How many ways can you think of – digital or otherwise – to achieve the same result which would be quicker, more accurate and less socially awkward? Here’s one to get you started: you could ask another human being!

I am glad Glass exists and applaud Google for investing in this kind of experimentation. It has done the industry a service by committing to making a lab prototype accessible in the real world – but make no mistake, this is still very much an experiment, and one I doubt will ever be suitable for the mass market in anything like its current form. Hopefully its availability will enable others to learn from its mistakes and evolve the concept of augmenting life with digital vision into something more useful.

Unless you have a very specific use case in mind or are happy dropping £1000 on a curiosity, do not even think about buying this to use personally day-to-day. You’ll be very disappointed. Instead, buy one as a group, try it, tinker with designing for it in your labs and share your impressions so the industry can start getting to grips with a usable version of this concept.


Add yours

    Very good and sensible observations from a user experience and value perspective. I am following this whole smart glasses area with great interest and see investment opportunities, as the technology is at the cusp of emerging properly. I agree that Google gets points for launching the awareness campaign, but the sweet spot for this type of product is ripe now in the industrial and professional sector, where companies like Vuzix and SAP are concentrating from a wearable device and applications standpoint. Your comments on autonomy, performance and user interface are right on.

      Marek Pawlowski

      Thanks for sharing your thoughts on this Ken, I know you’ve been tracking this area longer than most! Which specific vertical do you think is most likely to adopt this kind of product first? I’m guessing there’s a natural association with roles which require the user to be hands-free…
