UX risks in Google’s new multi-touchpoint visual responses

Google Home, the speaker built around the company’s voice assistant, will soon be able to display results on nearby TVs and phones. It is a significant new mainstream example of multi-touchpoint experience design, where digital interactions unfold across several devices.

At I/O, Google’s May 2017 developer conference, the company showed a couple of real-life examples:

  1. Asking ‘Where’s my next event?’ prompted Google Home to read out the location. Then simply saying ‘Ok Google, let’s go’ sent the directions to the user’s phone and launched the mapping experience.
  2. ‘Ok Google, show my calendar for Saturday on the TV’ did exactly that, displaying the user’s next three appointments on a nearby Chromecast-enabled TV.

The Verge has a video of these interactions being demonstrated on stage at I/O (from about 30 seconds into the clip).
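To make the flow behind these demos concrete, here is a minimal sketch of how a voice intent might be routed to a target touchpoint. Everything in it is hypothetical – the types and function names are invented for illustration and bear no relation to Google’s actual implementation.

```typescript
// Hypothetical sketch: routing a voice intent to the most suitable touchpoint.
type Touchpoint = "speaker" | "phone" | "tv";

interface VoiceIntent {
  action: "navigate" | "showCalendar";
  explicitTarget?: Touchpoint; // e.g. '...on the TV' names the target directly
}

interface Device {
  kind: Touchpoint;
  online: boolean;
  canDisplay: boolean;
}

// Honour an explicit request first, otherwise fall back to the first
// connected device capable of a visual response, and finally to a
// voice-only answer on the speaker.
function routeIntent(intent: VoiceIntent, devices: Device[]): Touchpoint {
  if (intent.explicitTarget) {
    const named = devices.find(
      (d) => d.kind === intent.explicitTarget && d.online
    );
    if (named) return named.kind;
  }
  const visual = devices.find((d) => d.online && d.canDisplay);
  return visual ? visual.kind : "speaker";
}

// 'Ok Google, let's go' names no target, so the connected phone wins.
const target = routeIntent({ action: "navigate" }, [
  { kind: "speaker", online: true, canDisplay: false },
  { kind: "phone", online: true, canDisplay: true },
  { kind: "tv", online: false, canDisplay: true },
]);
console.log(target); // "phone"
```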

Regular readers of this MEX Journal will know this has been a long time coming. We first codified our investigation into this form of experience design eight years ago at the MEX/6 conference in 2009 and it has remained a focus of the MEX initiative ever since. There’s a dedicated MEX Pathway – #2, multi-touchpoint – which archives all of our content on this theme, but the theme has grown beyond that, becoming a key structural underpinning of how we think about experience design.

Google’s Home developments are important for several reasons.

Firstly, there is the question of scale. While there have already been numerous niche examples of multi-touchpoint experience design in action, the scale of Google’s Android ecosystem and the wider user base for its services is unparalleled. While Home itself remains niche, Google has shown with devices like Chromecast that it can leverage pricing power and user relationships to achieve mass market scale in new product categories.

Then there is the matter of depth. No company knows more about the behaviour, preferences and data of its users than Google. This has the potential to be terrifying and useful in equal measure. Multi-touchpoint experiences thrive on contextual relevance and, for better or worse, Google is uniquely positioned to shape the design of users’ experience according to their behaviour.

However, the brief examples Google demonstrated at I/O also highlight some of the potential pitfalls we’ve formalised over the years in a series of MEX principles for multi-touchpoint experience design. These include:

  1. A touchpoint is an individual element, digital or analogue, which contributes to the user’s experience. It may be visible and interactive, such as a touchscreen, or hidden and passive, such as an environmental sensor.
  2. User tasks are increasingly migrating back and forth across touchpoints. User research must reflect this behaviour.
  3. Multi-touchpoint design should respond first to user intent, prioritising it over more obvious differences such as display size. For instance, user intent when viewing photos changes little whether the task is conducted on a 4-inch phone or a 10-inch tablet.
  4. Users should always understand their relative position in an overall multi-touchpoint experience. Some applications thrive as linear stories, where progress made on one touchpoint is used to unlock features on the next; others favour more dynamic flows.
  5. Multi-touchpoint experiences are most effective when bridging the physical and digital worlds. As more of users’ life achievements become virtual, bridges which convert virtual achievements into real-world benefits become more important. For instance, turning virtual currency into real money.
  6. A systematic approach to multi-touchpoint design separates the experience into functional, aesthetic and emotional elements, providing building blocks from which consistent, but action-specific, experiences can be constructed across touchpoints (a rough sketch of this decomposition follows below).
  7. Employ consistent language across each touchpoint and user mission. Tone should reflect conversational characteristics to aid ease of use.

For more on these, read my earlier essay ‘Principles for natively neighbourly multi-touchpoint experience design’.
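As a rough illustration of principle 6, the decomposition into functional, aesthetic and emotional elements might look something like the sketch below. The interfaces and values are my own invention for illustration, not drawn from any real framework.

```typescript
// Hypothetical sketch: the same action decomposed into functional,
// aesthetic and emotional building blocks, recombined per touchpoint.
interface ExperienceBlock {
  functional: { action: string; inputs: string[] };         // what the user can do
  aesthetic: { layout: "voice" | "card" | "fullscreen" };   // how it is presented
  emotional: { tone: "reassuring" | "urgent" | "neutral" }; // how it should feel
}

// 'Show directions' assembled differently for phone and speaker, while
// the functional core and the tone stay consistent across touchpoints.
const directionsOnPhone: ExperienceBlock = {
  functional: { action: "showDirections", inputs: ["touch", "voice"] },
  aesthetic: { layout: "fullscreen" },
  emotional: { tone: "reassuring" },
};

const directionsOnSpeaker: ExperienceBlock = {
  functional: { action: "showDirections", inputs: ["voice"] },
  aesthetic: { layout: "voice" },
  emotional: { tone: "reassuring" },
};
```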

I’m particularly interested to see how Google addresses the fundamental unreliability of the cloud connections linking touchpoints such as the Home speaker, a Chromecast TV and an Android phone.

For instance, what happens if one of the touchpoints loses network connectivity during these distributed interaction sequences? Have they planned for graceful failure to unfold on the devices which remain connected? Chromecast remains weak in this regard. I’ve personally found my Android phone frequently loses connection to the Chromecast, dropping the virtual remote control from the smartphone screen, while the content continues to play on the TV. The only way to stop it or reconnect is to power off the TV and start all over again.
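One way to plan for that graceful failure is to make peer liveness explicit, so the surviving touchpoints can adapt when another drops off the network. Here is a minimal sketch assuming a simple heartbeat model – all the names are hypothetical and this is emphatically not the Chromecast API.

```typescript
// Hypothetical sketch: heartbeat-based liveness so surviving touchpoints
// can degrade gracefully instead of orphaning the session.
interface SessionTouchpoint {
  name: string;
  lastHeartbeat: number;              // ms timestamp of the last ping received
  onPeerLost: (peer: string) => void; // e.g. surface a local 'stop' control
}

const TIMEOUT_MS = 5000;

function checkSession(touchpoints: SessionTouchpoint[], now: number): void {
  const lost = touchpoints.filter((t) => now - t.lastHeartbeat > TIMEOUT_MS);
  const alive = touchpoints.filter((t) => now - t.lastHeartbeat <= TIMEOUT_MS);
  // Tell every surviving touchpoint which peers vanished, so the TV can
  // keep playing but show its own pause control rather than assuming the
  // phone remote is still reachable.
  for (const survivor of alive) {
    for (const gone of lost) survivor.onPeerLost(gone.name);
  }
}
```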

How have they planned the hierarchy of interruptions across touchpoints? In the I/O example of simply telling the Home speaker ‘Let’s go’ to make mapping directions appear on a smartphone, what happens to the app or content currently active on the phone? The complexity of interrupting user flows multiplies rapidly as you add more touchpoints to the experience.
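Even a crude version of that hierarchy forces useful design decisions into the open. The sketch below shows one possible policy; the activity categories and priority values are invented purely for illustration.

```typescript
// Hypothetical sketch: an explicit interruption hierarchy deciding whether
// an incoming cross-touchpoint request may displace the current activity.
type Activity = "idle" | "music" | "navigation" | "call";

// Higher number = harder to interrupt.
const PRIORITY: Record<Activity, number> = {
  idle: 0,
  music: 1,
  navigation: 2,
  call: 3,
};

// A request the user explicitly initiated ('Ok Google, let's go') gets a
// boost over ambient content, but still defers to an active phone call.
function mayInterrupt(
  current: Activity,
  incoming: Activity,
  userInitiated: boolean
): boolean {
  const boost = userInitiated ? 1 : 0;
  return PRIORITY[incoming] + boost > PRIORITY[current];
}

console.log(mayInterrupt("music", "navigation", true)); // true – directions replace music
console.log(mayInterrupt("call", "navigation", true));  // false – never hijack a call
```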

Google, of course, is not alone in its multi-touchpoint adventures. Apple has been experimenting with AirPlay and Continuity for years. Microsoft is playing in the same space with Continuum. These all feel early and inadequate at the moment, but make no mistake, multi-touchpoint architectures will become the default for experience design – it is just a matter of who gets there first and makes it mainstream.
