Human integration and the death of the device, pt. 1
A team of US scientists recently announced they had successfully implanted a device allowing a paralysed patient to control a mouse pointer on-screen, open email and operate a robotic arm. The unit was connected to the motor cortex, the area of the brain which sends signals down the spinal cord and out to the limbs. In this instance, the patient had lost the use of his arms and legs more than four years earlier, but was able to control the devices simply by thinking about the actions he wanted to perform.
It has been heralded by the scientific community as a major breakthrough, with obvious benefits for those who have suffered paralysis through injury or disease.
It also got me thinking about a hypothesis I have been toying with for some time: if technology negates the requirement for a physical handset, will users eventually abandon mobile devices? Is the handset fundamentally a barrier to a good user experience? Is the best user interface one which doesn’t exist at all?
Let’s take a step back and first try to understand the science behind this latest development. Although the mechanisms which process information in the brain are incredibly complex and poorly understood, the signals communicating the brain’s requirements to other parts of the body are easier to interpret. Activity in the region of the brain which controls motor function occurs within particular bands, and it is relatively simple to detect when these areas are active.
Basic functions, such as switching something on or off, or moving left to right, are a natural first step. The complications are likely to arise when scientists attempt something more complex, such as translating brain signals into typed characters. The days of writing a text message simply by thinking about what you want to say are still some way off.
Hypothetically, however, it may be possible to integrate the features we currently access through a mobile phone directly into the brain. My severely limited knowledge of neuroscience prevents me from offering an estimate of when or to what extent this will be possible, but the evidence of this most recent development has given me enough food for thought to start pondering the implications of the question.
User interface designers often talk about metrics such as the ‘number of clicks’ or ‘time to complete’. They are ways of measuring the effectiveness of an interface: i.e. how quickly can a user reach what they want? The goal is always to reduce these figures: fewer clicks and less time equal a more satisfied user.
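As a minimal sketch of what these metrics look like in practice, the snippet below averages clicks and completion time across a handful of test sessions. The data and field names are hypothetical, purely for illustration:

```python
# Hypothetical usability-test log: each entry is one user's attempt at the
# same task, recording clicks made and seconds taken to complete it.
from statistics import mean

sessions = [
    {"clicks": 5, "seconds": 42.0},
    {"clicks": 3, "seconds": 28.5},
    {"clicks": 4, "seconds": 31.2},
]

avg_clicks = mean(s["clicks"] for s in sessions)
avg_seconds = mean(s["seconds"] for s in sessions)

print(f"average clicks: {avg_clicks:.1f}")   # lower is better
print(f"average time:   {avg_seconds:.1f}s")  # lower is better
```

A redesign would be judged an improvement if it pushed both averages down for the same task.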
So what happens if we extrapolate that principle? Is the most effective interface one which requires no clicks and has zero latency? You want to see a video clip of the World Cup final, so you simply imagine it in your head and it is transmitted over a wireless network, through your brain receiver and replayed in your mind’s eye.
Immediately the hypothesis is in trouble. Translating signals from the brain is the area where most progress has been made, but what happens when you require a return path for the transaction? How can the brain be stimulated to replay a video clip? At present this is impossible, so some form of display technology would be required, taking us back into the realm of physical devices.
Confirmation would also be a major issue. Every purchasing mechanism we have today relies on physical confirmation of the participant’s desire to complete the transaction. Whether it is handing over a physical payment token such as a banknote or entering a PIN for a credit card purchase, validation is required. With a direct interface to the brain, there would be no way of differentiating between genuine purchases and imaginary desires. A user may daydream about purchasing a Ferrari, but have no intention of doing so.
These issues all stem from a central problem: everyone thinks in a different way. There is no uniform structure, no process flow which can be easily translated into the rigid language of computer applications. Humans are also easily distracted; the complexity of separating the thoughts relating to a particular application from the myriad ideas floating around in our heads at any one time would be enormous.
This is part one of a two part essay by Marek Pawlowski on the future of user interfaces, mobile handsets and human integration. Part two will be published on Monday next week. Please add your comments on this topic to the blog below.