
Digging digitally at the intersection of consumption and creativity
It was a Sunday in May and, if we wanted the beans to grow, it was time they were in the ground. So I found myself on the windswept allotment, under the big white clouds and blue of a Norfolk sky, digging over that patch of ground which had become overgrown with weeds and the bolted remnants of last year’s crops.
Digging is a strangely satisfying activity imbued with the same rhythmic quality as rowing or climbing hills on a bike. An area of this size might take a couple of hours’ labour to break the ground, turn the soil and transform it into a plantable bed. I use it as time to catch up on podcasts. I have a pair of Sony SBH80 Bluetooth headphones, which connect to my Samsung Galaxy Note 4 and stream the audio wirelessly. For a job like this, you want a lengthy episode. I searched out the latest Pocketnow Weekly, which is always good for at least 90 minutes – plenty of time to make some progress.
The podcast discussion turned to the presenters’ experiences with the Apple Watch and the satisfying tactility of the digital crown, that knurled rotary dial on the side which serves to both scroll and confirm actions with a click. It seemed to have struck a chord with the Pocketnow team after years of increasingly generic touchscreen interactions through smooth glass. A little spark went off and I was reminded we had a MEX session on this very topic several years ago, when Jason Mesut talked about the virtues of physical, analogue controls in digital devices, drawing on his personal interests in vintage and unusual electronic musical instruments.
Perhaps you know that feeling? You’re consuming information in some way, shape or form – listening or watching or reading – and a connection occurs, where you relate what you’ve just absorbed to a memory or a current task. For me, these moments seem to happen most often when I’m driving or walking (read Robert Macfarlane’s ‘The Old Ways’ for a wonderful disquisition on why walking is the activity most conducive to creative inspiration) and the podcast in my ears, or something I notice with my eyes, generates some kind of mental hyperlink.
However, standing in the middle of a muddy field, the wind blustering in my ears and a garden fork held in gloved hands now covered with dirt, my options for acting on this moment were limited. I could try to commit it to memory, of course, but the uncomfortable reality of the permanently connected, information-abundant life I lead is that there’s every chance it will be pushed out by another thought before I have a chance to make it usable.
Ideally I needed some way of placing a marker in my stream of consumption and adding the basic foundations of the newly created tangent it had prompted, enabling me to pick it up later and build it into something meaningful. Perhaps that moment of inspiration could then become an essay on this blog or some additional context around the video of Mesut’s talk in the MEX archives?
The reality of today’s digital tools, with their fragility and expectation of head-down, immersive attention, is that capturing this moment required an interruption. In this case, it would likely mean:
- Manually pausing the podcast
- Putting down the fork
- Removing the heavy gloves
- Extracting the phone from the deep pocket where it had been placed to avoid the dirt and dust
- Checking my hands to ensure there was no grit to scratch the screen
- Opening a note-taking app
- Typing a note
- …and then reversing this procedure to get back to what I was doing
My example is, of course, grounded in my personal circumstances, but you could substitute any number of other situations for digging: commuting to work by car, where the in-vehicle environment, limited controls and scarce attention all restrict how easily you can switch between a mode of consumption and creation; outdoor sports, from hiking to cross-country skiing; or the kitchen, where the process of cooking presents all the same challenges in what you can do with your hands and how the environment itself may be unsuited to expensive, fragile devices.
User experience often breaks down in the gaps between services and touchpoints. This is hardly surprising given that most products still prioritise perfecting a set of features intended to give the best user experience within their own confines: a messaging app wants to be the best messaging app just as a photo app wants to be the best photo app. The forethought to recognise that a specific app will be enhanced by how well it interacts with the rest of a user’s workflow – how seamlessly it allows users to segue between different activities – is often lacking, and product teams therefore remain incentivised around narrow goals.
These moments of intersection are becoming more frequent for a variety of reasons: the number of digital touchpoints in users’ lives is increasing, so that it is not uncommon to be simultaneously engaged with a phone, tablet, TV and smartwatch; the accessibility of users’ data is also improving, primarily because more of it is stored in cloud services rather than trapped on a particular device; and connectivity is ubiquitous for many in developed markets, bridging from device-to-device with short range technologies like Bluetooth and wrapping everything in a blanket of Wifi and high speed cellular.
I’m interested in what happens to user behaviour in those moments of intersection, especially when they coincide with a user shifting their mode from downstream consumption to acting upon a creative impulse. It seems to be a scenario which calls into question many of the underpinnings of the digital world:
- The role of input methods, from voice recognition to tactile controls
- How operating systems handle links between apps
- How we create personal metadata overlaying streams of often-copyrighted information consumption
- How interaction methods adapt to the external environment, from wind noise to movement
In my particular situation, my hands covered in muddy gloves and the wind preventing any kind of voice control, I suppose physical buttons would be the natural choice.
Sony’s Bluetooth headphones offer several, as it happens, with a separate control array on both earphone cables. These allow for basics like pausing, skipping and controlling the volume, but perhaps there’s an opportunity here for the developer of the podcast app (Pocketcasts, in my case) to embed some kind of bookmarking system? A quick double click of the pause button (the largest of the controls, and therefore most suited to use with gloved hands) could drop a link into the user’s note-taking app of choice (I happen to use Evernote), showing the episode information and linking back to a point in the audio stream 10 or 20 seconds before I clicked the button. There are other opportunities for service integration too, like automatically creating a to-do in my list (I’m a Todoist user), reminding me to go back and review the clip once I’m back at work.
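As a thought experiment, the double-click bookmark described above could be sketched in a few lines. This is purely illustrative: the `podcast://` deep-link scheme, the pre-roll constant and the `capture_bookmark` helper are all hypothetical, standing in for whatever integration a podcast app and note-taking service actually expose.

```python
from dataclasses import dataclass

# Hypothetical: rewind this far so the note links to just before the click
PRE_ROLL_SECONDS = 15

@dataclass
class Bookmark:
    episode: str
    position: int  # seconds into the episode the link points at
    link: str

def capture_bookmark(episode: str, position_seconds: int) -> Bookmark:
    """Build a bookmark pointing slightly before the capture moment,
    clamped so it never rewinds past the start of the episode."""
    start = max(0, position_seconds - PRE_ROLL_SECONDS)
    link = f"podcast://{episode}?t={start}"  # invented deep-link scheme
    return Bookmark(episode=episode, position=start, link=link)

# e.g. double-click arrives 2710 seconds into the episode
bm = capture_bookmark("pocketnow-weekly", 2710)
print(bm.link)  # podcast://pocketnow-weekly?t=2695
```

The same `Bookmark` object could then be handed to whichever services the user has connected – a note in Evernote, a to-do in Todoist – without the podcast app needing to know anything about them.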
It would be impractical for any product owner to anticipate every permutation of these kinds of user needs, but we can start to recognise this basic need for users to shift between different modes of behaviour and different services, and respond by adopting a sense of virtual neighbourliness in the way digital services are architected, ensuring they can integrate more easily with others and adapt to changing environments.
This specific intersection between consumption and creativity is an area of particular interest at the moment, and the subject of an ongoing MEX exploration, building on previous MEX creative sessions where facilitated teams have worked on design challenges in these areas.
I’d love to hear feedback from the community on their own experiences or services you’ve found which address this need. Have you any examples you can share? Please add your comments below.
Neat, I need to overcome the same challenge when I notice new language and vocabulary I want to start to acquire at some later stage – a one tap solution that initiates some kind of effective trajectory of later interactions…
I suspect that the secret to making this kind of thing feel elegant would be how clever the software was at extracting the right meta information about that particular moment. Given that most of these interactions are going to be asynchronous, with quite a significant pause between the moment of capture and the moment of review, it leaves quite a lot of time for the system to do some additional data processing in the background – e.g. maybe it could upload that section of the audio clip to a server, do some speech recognition, and return it as a text string into whatever note it had created?
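The asynchronous pattern described in the comment above – capture instantly, enrich later – might look something like this minimal sketch. All the names are assumptions for illustration, and `transcribe` is a stub standing in for a real speech-recognition service.

```python
import queue

# Placeholder note store and work queue; a real app would persist these
notes: dict = {}
pending = queue.Queue()

def capture(clip_id):
    """Instant step at the moment of interruption: create a placeholder
    note and enqueue the heavier work for later."""
    notes[clip_id] = "(transcript pending)"
    pending.put(clip_id)

def transcribe(clip_id):
    """Stub for a speech-to-text call on the captured audio section."""
    return f"[transcript of {clip_id}]"

def process_pending():
    """Background pass, run whenever convenient: replace each
    placeholder with the recognised text."""
    while not pending.empty():
        clip_id = pending.get()
        notes[clip_id] = transcribe(clip_id)

capture("pocketnow-weekly@44:55")  # the cheap, in-the-field step
process_pending()                  # the slow, back-at-the-desk step
print(notes["pocketnow-weekly@44:55"])
```

The point of the split is that the capture step costs the user almost nothing, while the system has the long gap before review to do the expensive processing.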