If you listen to tech podcasts, you’ll likely already be familiar with the curse of the keyword. When a presenter says ‘Ok Google’ or ‘Hey Siri’ on their podcast, thousands of listeners’ devices all over the world automatically initiate a search. It has been especially prevalent in 2016, as frequent updates to Google’s, Apple’s and Amazon’s voice assistants have provided commentators with plenty of opportunities to talk about these products.
I recently experienced the perfect storm when a podcast, the ‘Ok Google’ keyword, Android Auto and my in-car audio system combined to keep me locked in a 21st century version of Groundhog Day. The presenter mentioned the keyword, my Huawei phone initiated a Google search and the podcast app (in this case, Pocket Casts) paused the audio stream playing via Android Auto to my car’s sound system.
I realised what had happened, smiled, and just waited for it to resume.
However, I’d forgotten Pocket Casts has a ‘convenience feature’, which rewinds the podcast by a few seconds when you resume listening. This is something I usually appreciate as it enables me to regain the thread of the discussion. In this instance, it meant restarting the audio just before the presenter uttered the fateful keyword.
The result: a never-ending feedback loop.
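The loop is easy to see when written out as a tiny simulation. This is a hypothetical sketch, not Pocket Casts’ actual logic: the keyword position and the five-second rewind are illustrative numbers, and the function names are invented for this example.

```python
# Hypothetical sketch of the feedback loop: a podcast player with a
# rewind-on-resume convenience feature, plus a voice assistant that
# pauses playback whenever the wake keyword is heard.
# All names and numbers are illustrative, not from any real app.

KEYWORD_AT = 120.0       # seconds into the episode where 'Ok Google' is spoken
REWIND_ON_RESUME = 5.0   # convenience rewind applied when playback resumes

def simulate(start_position, max_cycles=10):
    """Play from start_position and count how many times the keyword
    re-triggers the pause/rewind cycle before we give up."""
    position = start_position
    cycles = 0
    while cycles < max_cycles:
        if position > KEYWORD_AT:
            break  # playback is already past the keyword: nothing triggers
        # Playback reaches the keyword; the assistant pauses the app...
        cycles += 1
        # ...and resuming rewinds to just before the keyword again.
        position = KEYWORD_AT - REWIND_ON_RESUME
    return cycles

print(simulate(100.0))  # → 10: starting before the keyword, the loop never escapes
print(simulate(130.0))  # → 0: starting after the keyword, nothing triggers
```

Because each resume repositions playback before the trigger point, no individual component ever misbehaves, yet the combined system can only exit the loop through outside intervention.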
With Android Auto running on my phone, the interface is purposely simplified to reduce distractions while driving. Manual intervention was impossible at 70 miles per hour on the motorway.
I pulled over at the next junction to fix it and sat for a moment to consider my brief encounter with this dystopian soundscape.
Individually, none of the components were guilty of poor experience design:
- Pocket Casts had likely implemented its ‘rewind on resume’ feature in response to user research.
- Google’s ‘Ok Google’ was simply performing as it was supposed to.
- The sensitivity of the in-car and smartphone microphones is usually a virtue.
- Android Auto’s interface minimises distractions and keeps me focused on the road.
However, despite all of these individual virtues, I – as an end user – had become locked in a poor experience.
I was reminded of Louisa Heinrich’s talk at MEX/15 on the etiquette of robot user experience, in which she shared her concern about the looming prospect of smart appliances engaged in domestic warfare. In her scenario, arguments raged between the smart windows, the smart refrigerator and the smart house plants over the optimal temperature, while the smart thermostat tried in vain to keep the peace.
Problems like these arise when the experience design process assumes a product to be an individual entity, rather than part of an unpredictable, connected eco-system of multiple touchpoints.
This isn’t a new issue. We began exploring this notion as far back as the MEX/6 cycle in 2009, when MEX Manifesto point number 3 stated:
We believe the number of platforms in users’ lives will continue to increase, leading to exponential growth in the quantity of potential user pathways and creating ever more complex usage scenarios to challenge user experience designers.
I’ve since spoken frequently – at MEX conferences, for our private advisory clients and in university lectures – on the need for a ‘natively neighbourly’ approach to experience design. It is impossible to anticipate every permutation of user context, app, environment and touchpoint (or even a tiny proportion of the billions of possible combinations). Instead, we must assume each experience to be multi-touchpoint by default and consider how it will relate to the gaps between its own experiential sphere and the properties of other digital spheres it may encounter.
Perhaps the most productive way to think about this challenge is how, at its best, an education in early life prepares us not just for specific tasks, but with a set of values and behaviours which allow us to adapt to new scenarios. How might we apply the same principle to designing for the ever multiplying number of digital touchpoints in users’ lives, such that each product or service knows what it means to be a good citizen of the network?
Seven years after we first raised the issue during MEX/6, it remains the most pressing macro design challenge facing digital practitioners. The MEX community has since uncovered and shared numerous techniques to illuminate multi-touchpoint design. Here are some of the most frequently cited to get you started:
- A touchpoint is an individual element, digital or analogue, which contributes to the user’s experience. It may be visible and interactive, such as a touchscreen, or hidden and passive, such as an environment sensor.
- User tasks are increasingly migrating back and forth across touchpoints. User research must reflect this behaviour.
- Multi-touchpoint design should respond first to user intent, prioritising it over more obvious differences such as display size. For instance, user intent when viewing photos changes little regardless of whether it is conducted on a 4-inch phone or a 10-inch tablet.
- Users should always understand their relative position in an overall multi-touchpoint experience. Some applications thrive as linear stories, where progress made on one touchpoint is used to unlock features on the next; others favour more dynamic flows.
- Multi-touchpoint experiences are most effective when bridging the physical and digital worlds. As more of users’ life achievements become virtual, bridges which convert virtual achievements into real-world benefits become more important. For instance, turning virtual currency into real money.
- A systematic approach to multi-touchpoint design separates the experience into functional, aesthetic and emotional elements, creating building blocks from which consistent, but action-specific, experiences can be constructed across touchpoints.
- Employ consistent language across each touchpoint and user mission. Tone should reflect conversational characteristics to aid ease of use.
Do you use others in your own work? I’d love to hear from readers with additional examples. Feel free to post them as comments below or, better still, include a link to a case study.