Robots & digital actors as the new frontier for experience design

Is a new category of digital products emerging, defined by their ability to create physical change in the world, as opposed to the purely virtual outcomes we currently associate with digital experiences?

There are many products which fit this description, from ‘helper robots’ descended from science fiction fantasies to seemingly mundane home appliance controllers. However, is this specific characteristic of having a physical impact on their environment a sufficiently unifying factor that it should define a new approach to their user experience?

Generational shifts in technology – the type significant enough to require new design approaches – are infrequent. They are often proclaimed by marketers, but rarely happen. They are also easier to identify with hindsight than to predict in advance. Looking back through the 20 years I’ve been involved in the digital industry, there have been a couple worthy of consideration:

  • Multi-touchpoint, where digital experiences are manifest across multiple devices – something made possible by the arrival of the ubiquitous communications network. This necessitated a different design approach, where the focus shifted from making products succeed within their own confines to products where success is defined by how well they interconnect with others. This gradual change occurred over many years, but becomes readily apparent when you contrast the PCs of ten years ago – isolated data silos – with the ease with which users today interact with a unified service experience across, say, an Android phone, smart watch, tablet, TV and web browser.
  • Direct manipulation, where interfaces evolved from a series of visible controls, activated using intermediaries like a mouse or keyboard, to visual canvases where the content itself has become the UI. The role of UI chrome as our starting point for controlling the digital world has diminished to the point where, when we see a photo on a screen, we expect to interact with it like a real-world object with physical properties. Early examples were things like ‘pinch to zoom’ on the first iPhones, and now we see the likes of Google extending consistent UI physics across their digital universe in the form of ‘Material Design’.

The significance of these changes is obvious with hindsight, but what can we extract from this historical knowledge to help us evaluate an emerging landscape we might call ‘digital actors’ – those products capable of impacting their physical environment? I’d include within this everything from driverless cars and performative architecture to industrial robots and personal drones.

There are some signals we can look for, which we know represented big changes in previous generational shifts:

  1. Unease. First and foremost, are these developments different enough to cause a sense of unease among users? It is from that disquiet that new design approaches often emerge.
  2. Lifestyle. Do the technologies in question offer the potential to change user lifestyles? Unless something delivers benefits distinctive enough to impact life beyond its own category, it should only ever be defined as iterative.
  3. Inadequate existing methods. Are existing design approaches unlikely to meet users’ expectations, derived from wider cultural norms, of how these products should behave?

Unease

The unease is perhaps the simplest to address. There is an overwhelming body of evidence emerging in mainstream media, academic research and early user stories that the moment a digital experience is capable of moving itself, altering the physical environment or otherwise creating a result outside of its own space, users start to react differently. That ability for a product to be an actor in both the physical and virtual world raises questions of trust, expectation and consequences which simply aren’t present when a product is limited to the virtual sphere.

Self-driving cars provide a useful example of this illogical unease. There is quite convincing evidence that self-driving cars are less likely to crash than those driven by humans. Logically, it is easy to see why: computers react faster and do not suffer from the random distractions which afflict humans. They can also be supported by multiple levels of failsafe redundancy – equivalent to a human driving a car alongside multiple co-pilots capable of intervening if the driver failed to notice something.
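To make the co-pilot analogy concrete, here is a minimal sketch of that layered failsafe pattern – not how any real self-driving system is implemented, and with entirely hypothetical monitor names and thresholds – in which any one of several independent monitors can trigger an emergency stop:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorReading:
    obstacle_distance_m: float   # distance to the nearest obstacle, metres
    closing_speed_ms: float      # speed at which we approach it, metres/second

# Each 'co-pilot' is an independent monitor which decides whether to intervene.
Monitor = Callable[[SensorReading], bool]

def proximity_monitor(r: SensorReading) -> bool:
    # Intervene if an obstacle is simply too close. Threshold is illustrative.
    return r.obstacle_distance_m < 5.0

def time_to_collision_monitor(r: SensorReading) -> bool:
    # Intervene if the projected time to collision drops below two seconds.
    if r.closing_speed_ms <= 0:
        return False
    return r.obstacle_distance_m / r.closing_speed_ms < 2.0

def should_emergency_brake(reading: SensorReading, monitors: List[Monitor]) -> bool:
    # Redundancy: any single monitor is sufficient to trigger the failsafe,
    # so the system is only as inattentive as its most attentive co-pilot.
    return any(monitor(reading) for monitor in monitors)

reading = SensorReading(obstacle_distance_m=8.0, closing_speed_ms=6.0)
print(should_emergency_brake(reading, [proximity_monitor, time_to_collision_monitor]))
# -> True: the time-to-collision monitor fires even though the proximity monitor does not.
```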

Yet humans are almost unanimous in their sense of disquiet about self-driving cars, born of unfamiliarity with the technology married to a very real image of what could happen if it did go wrong. Every human knows what it means to be in a car accident; almost none know how self-driving systems are designed to make those accidents less likely.

Driving is at the far end of the spectrum of unease, shared with other scenarios where handing control to technology is perceived to have sudden and serious consequences for safety. Medicine is another area: no matter how much evidence is presented that a robot surgeon is more accurate and reliable, who can blame someone who feels more comfortable knowing the grey-haired, friendly doctor will be taking care of their operation?

Lifestyle

What of the second test, the ability to change lifestyles? Most humans once spent the majority of their time gathering enough food to survive. Technology has gradually abstracted individual humans from the basic necessities of life. It has begun to chip away at things like distance, weather and the weaknesses of human bodies as limitations on how we can shape the world and our roles within it.

However, almost every one of those technologies – especially those small and cheap enough to be present in our personal lives – remains in the virtual realm; they’re capable of assisting our interactions with the physical environment, but not yet carrying them out themselves. Crucially, they also come with a latency of supervision, meaning the new things they allow us to do or the speed at which we can do them is always offset by the new time we have to allocate to programming them.

Today, most people can make a video call to anywhere in the world using the computer in their pocket, but they’re still physically taking out their own trash, adjusting their home thermostat and opening and closing their own doors.

How would our lives change if the time we spend today on utilitarian tasks – things from which we derive no pleasure – was freed up by digital actors? We only have to look back to the impact of the agricultural and industrial revolutions to understand the sweeping changes to lifestyle which become possible when humans are given more leisure time – for better or worse!

Inadequate existing methods

The question of whether we need different design approaches is the most opaque, and the one I hope to explore in detail at MEX.

Consider an experience like a personal drone. Technology is emerging which could, in theory, allow each of us to carry, employ or host a drone companion capable of hovering beside us, equipped with cameras, environment sensors and the ability to fly off on errands. It makes me think of digital manifestations of the animal daemons in Philip Pullman’s His Dark Materials trilogy, popularised by the film The Golden Compass.

Just writing a description of such a thing highlights the scale of the design challenge. The language alone is a minefield. How would you describe the human relationship with a digital actor like this – is the correct term ‘carry’ or ‘employ’ or ‘host’ or ‘control’ or some other word language is yet to provide?

There are existing design principles which could provide a starting point: for instance, an obvious need for deep ethnography. Similarly, the tools which have emerged to support multi-touchpoint experience design are relevant here, as a digital actor like this would necessarily exist as part of an overall experience combining numerous devices, and gaps between those devices.

However, I think there’s something else here, something deeper. Something changes when a digital actor becomes capable of crossing the line between the virtual world, where the consequences of its actions are by definition limited, and the physical world, where it will be subconsciously evaluated as much for the subtleties of its body language, etiquette and relationship with its environment. It is elevated from the inanimate – the realm of tools under our direct control – to a form of being – the realm of entities of which we have behavioural expectations.

When that line is crossed, the design challenge evolves from one primarily defined by the efficiency and aesthetics we strive for in the virtual sphere to something more complex. It will have elements of behaviour shaping, consequence evaluation and long-term planning which – by virtue of their new and massively expanded importance – probably require new tools, and certainly a reorganisation of the toolkit.

I don’t yet know what that balance between new tools and re-purposing of existing techniques looks like. That’s what I hope will grow from discussions within the MEX community, online and at our events, but I’ll leave you with some half-formed thoughts which might serve as initial lines of enquiry:

  • When a digital actor is capable of impacting its physical environment, what methods might we use to measure and iterate those physical consequences? We measure virtual outcomes all the time – from usage tracking to software error reporting – so how can we do this in a meaningful way when the canvas is the entire physical world? (A rough sketch follows this list.)
  • Will the lifecycle of a digital actor have more in common with the way we relate to domestic pets than the way we interact with digital products limited to the virtual sphere?
  • If we accept the human mind inherently limits the extent to which we can be persuaded by logic alone, how might we consider the role of more esoteric considerations – from belief to fantasy – in shaping the coming generation of digital actors?
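On the first of those questions, here is a rough sketch of what ‘usage tracking’ might look like once the outcomes are physical. Everything in it – the event fields, the log_event helper, the drone delivery scenario – is hypothetical, intended only to suggest that iteration would mean comparing intended against observed physical results, rather than counting clicks or crashes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class PhysicalOutcomeEvent:
    action: str              # what the actor attempted, e.g. 'deliver_package'
    intended_result: str     # what it was asked to achieve
    observed_result: str     # what its sensors report actually happened
    bystanders_nearby: int   # physical context a purely virtual product never logs
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

event_log: List[PhysicalOutcomeEvent] = []

def log_event(event: PhysicalOutcomeEvent) -> None:
    event_log.append(event)

log_event(PhysicalOutcomeEvent(
    action="deliver_package",
    intended_result="package placed on doorstep",
    observed_result="package dropped 0.4m short of doorstep",
    bystanders_nearby=2,
))

# Iterating on physical consequences: diff intended against observed outcomes,
# the physical-world analogue of a software error report.
mismatches = [e for e in event_log if e.intended_result != e.observed_result]
print(len(mismatches))  # -> 1
```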

We are lucky to have Louisa Heinrich to facilitate exploration of this theme – one of several – at the next MEX event in London on 24th – 25th March 2015, with both a keynote presentation and a team creative session. Louisa holds audiences enthralled – often without any notes or slides – as she shares what’s happening at the cutting edge of humans’ interactions with machines. Her domain ranges from physical drones to the invisible systems monitoring users’ lives, always grounded in understanding the potential unease these technologies may bring.
