The season for annual predictions is upon us and they remain as useless as ever, limited as they are to either recounting the obvious or speculating on the unknowable. Instead, I’d like to reflect on a future just becoming visible over the horizon, the potential it has to impact life and what it means for practitioners charged with crafting such experiences.
When I think about how technology might affect the way we live, I focus on developments capable of disrupting physical constraints:
- Connected objects which move themselves
- Personal computing freed from long-standing form factor conventions
- Improving health through real-time monitoring
The last 20 years have been defined by the web eliminating distance as a barrier to sharing information in the virtual sphere. The next leap will come from automating how that virtual data drives physical changes in the real world.
A simple example, and one which has been in the news recently, is Amazon’s flying delivery drones. Shopping from a personal mobile device is well established, but there is a bottleneck constraining future growth: reliance on human labour for the physical fulfilment and delivery of orders. The 2013 holiday season saw demand for home shopping exceed delivery capacity in the United States, with Amazon, Walmart and other retailers unable to make good on promises of pre-Christmas delivery because the delivery networks couldn’t handle the volume.
Amazon believes it is about five years away from launching drones capable of delivering five pound payloads within a 10 mile radius, cutting the time from ordering to delivery from 1 or 2 days to about 30 minutes. That’s a significant enough change from the customer perspective to make home shopping the default choice, where today it accounts for only about 6 percent of US retail sales.
Delivery is just one example of where connected, self-moving objects might drive significant change. Once objects understand their own context and can take physical actions in response, the potential applications are diverse.
Consider agriculture, where watering devices could be improved, sensing not only when the moisture content of the soil is low but also moving themselves to deliver the optimum amount of water to different crops or areas. Transportation is another obvious example, where numerous companies, from Google to existing automotive manufacturers, are already testing self-driving vehicles. Attention has focused initially on automating vehicles similar to the passenger cars we know today, but it seems more likely to me that the full potential will be realised when new classes of objects are able to transport themselves: what if litter bins could sense when they were full, drive themselves to a central facility for emptying and then return to their original position?
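To make the watering example concrete, here is a minimal sketch of the decision logic such a device might run. Everything in it is invented for illustration: the zone names, the moisture scale (a fraction of saturation) and the threshold are assumptions, not a description of any real product.

```python
# Illustrative sketch of a hypothetical self-moving watering device's
# decision step. Zone names, readings and the threshold are invented.

MOISTURE_THRESHOLD = 0.30  # fraction of saturation below which a zone needs water

def zones_needing_water(readings):
    """Return the zones whose soil moisture is below the threshold,
    driest first, so the device can plan its route."""
    dry = {zone: level for zone, level in readings.items()
           if level < MOISTURE_THRESHOLD}
    return sorted(dry, key=dry.get)

readings = {"tomatoes": 0.18, "lettuce": 0.42, "herbs": 0.25}
print(zones_needing_water(readings))  # driest zones first
```

The interesting design work, of course, lies not in this trivial loop but in everything around it: how the device senses each zone, navigates between them and fails safely.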
Objects moving themselves from one location to another is one category, but there is also potential for objects capable of physical movement within a fixed space. How would our notion of displaying visual information change if our homes were equipped with hovering projectors which could be summoned at will to beam screens on to any flat surface?
This is where the potential for self-moving objects intersects with my second area of focus: breaking free from traditional form factor constraints. For decades, really since the arrival of the desktop PC, the display and input components have determined the size and shape of personal digital devices. The brains of personal computers – the processing components – ceased to require meaningful amounts of space many years ago. Instead, it is the size of the physical surfaces required for displaying and inputting information which determines the shape, weight and portability of personal computers.
What will become possible if this constraining link can be broken? The early signs of this are appearing. Google Glass hints at one possibility, where a tiny screen is placed directly in front of the eye, delivering a display experience which feels much larger than it actually is. Improvements in miniature projection technology suggest another possibility, where information could be beamed onto any convenient surface, with the size of the image adjusted depending on the application: a full, widescreen-style projection for watching a video or simply showing a small clock on a wall next to the bed when we wake up.
If devices can be smaller and lighter, there is the obvious benefit of increased portability, but the most significant impact will be in the way they change the flow of digital interactions. Today, almost all our engagements with digital information require enough of our attention that they interrupt our lives. You only have to look around trains, cafes and offices – even the living rooms of our own homes – to see how distracting digital has become, with users glued to the screens of their personal devices, their attention focused on reading and interacting with touch interfaces.
Display technologies like digital eyewear, free space projection and the ability to connect seamlessly with nearby display panels will change the visual part of this equation. On the input side, gesture and voice recognition will make controlling our digital interactions as natural as talking to another human.
By enabling ways to deliver digital interactions which augment rather than interrupt the flow of daily life, new display and input mechanisms will allow the benefits of connectivity to become more pervasive and repair the damage caused to human sociability by the demanding nature of today’s personal computing devices.
An even more direct and measurable impact on human wellbeing is also taking shape. The reduced size, cost and power requirements of personal sensors are enabling new ways for individuals to understand how their actions affect their health. Where today most people base their diet, exercise and even treatments for minor ailments on guesswork, a new class of sensors promises to provide individuals with accurate health data in real time.
The first signs are visible in the proliferation of personal activity trackers, from dedicated devices like Fitbit and Jawbone Up, to apps which integrate with the motion sensors of today’s smartphones and tablets. Moves is one example of this, sitting quietly in the background and tracking how far you walk, run and cycle each day. It delivers daily and weekly summary reports to help users understand if they are meeting their targets. More importantly, Moves allows individuals to connect the data it generates with other wellbeing apps to provide additional analysis and recommendations for improving their health.
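A rough sketch shows the kind of aggregation such a tracker performs behind the scenes. The data format, activity names and target distances below are assumptions made up for the example, not the actual format Moves or any other app uses.

```python
# Illustrative sketch of summarising one day of tracked activity against
# personal targets. Activities, distances and targets are invented.

DAILY_TARGETS = {"walk": 8000, "run": 2000, "cycle": 5000}  # metres per day

def daily_summary(entries):
    """Total the day's distance per activity and flag whether each
    activity's daily target was met."""
    totals = {activity: 0 for activity in DAILY_TARGETS}
    for activity, metres in entries:
        totals[activity] = totals.get(activity, 0) + metres
    return {activity: {"total": total,
                       "target_met": total >= DAILY_TARGETS.get(activity, 0)}
            for activity, total in totals.items()}

day = [("walk", 5200), ("cycle", 6100), ("walk", 3400)]
print(daily_summary(day)["walk"])  # walking total and target status
```

The value, as the article notes, comes less from the totals themselves than from connecting this data to other services that can analyse it and recommend changes.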
Movement, of course, represents just one aspect of personal health. Already there are connected consumer products which also monitor sleep patterns, blood glucose levels and the presence of stress-related chemicals on the surface of the skin. As the efficiency and capabilities of these sensors improve, it will be possible to see accurate metrics about all aspects of health in real time. Users will have the information they need to understand how choices about food and exercise impact their lives, and the long-term data required to identify chronic illnesses earlier, at a more treatable stage.
All three of the trends I’ve focused on in this article – self-moving objects, form factors and health monitoring – share a common design challenge: creating experiences beyond the boundaries of individual products. Where today most digital user experience designers focus on single pieces of hardware, or software which succeeds within the limits of a specific device, tomorrow’s practitioners will require a broader view and a collaborative outlook.
Consider the many strands of designing an experience like Amazon’s delivery drones or digital eyewear, where the myriad physical environments in which they will be used might have just as much impact on their usability as the visual interface. Consider the many stakeholders with an interest in such products and the different requirements each will have for interacting with them.
This is experience design which requires not just the specific, desk-based practitioner skills which define most ‘UX’ roles today, but a breadth of knowledge and creativity of approach that thrives on being willing to get out into the field and truly empathise with your users. For all three of these trends, user experience will be critical to their success. Each challenges existing habits and disrupts established industries, characteristics which place more emphasis than ever on design which respects, guides and empowers users.
I’d love to hear your feedback: which skills will you focus on developing in the coming year? How do you think these trends will take shape? Post your comment below.