Analysing Apple’s Continuity: limited utility today, potential to introduce new experiences

The ability to have calls and texts across Apple devices might offer the most immediate practical utility, but it is the least interesting part of Apple’s wider Continuity strategy. Continuity is based around the notion that each device, like an iPhone, iPad or Apple Watch, is an actor in an overall production, such as an app or web service. Just like actors, each device can give cues to the others, enabling a Mac to pick up where an iPhone left off, or vice versa. The future result may be the ability to create productions which seem impossible today.

Handoff for calls and texts works well, mostly. Over a few weeks of testing, incoming calls rang through to my MacBook and iPad mini with few problems. It didn’t work quite so well the other way around: I had trouble inputting phone numbers and finding contacts on the MacBook and iPad when trying to dial out through the iPhone, but these were relatively minor quibbles.

Overall it delivers a good experience – calls and text messages are available on any of your Apple devices – but this kind of thing is already table stakes, regardless of whether you’re in the Apple, Google or Microsoft ecosystem. It’s only a matter of time before it works pretty well on any device you choose.

But what of Apple’s loftier ambitions for Continuity? It is subtly different to the conceptual framework employed by consumer cloud services, where each device is essentially a passive window through which users can see and interact with a single, centralised chunk of data. For instance, if I open a Google Doc on a PC, phone and tablet at the same time, it feels like I’m editing the same document in real-time, albeit through multiple viewers. All of the data flows from each individual device, up into the cloud to be interpreted centrally, then synced – seemingly in real-time – back down to the other devices.

Continuity is different. Each Apple device periodically reports its status back to the cloud, but devices are also able to sense the localised presence of others using Bluetooth. The Bluetooth-based proximity sensing serves as a trigger, which might prompt a full sync back to the cloud to get the latest data from another device, but this is dependent on user interaction rather than the seamless and ongoing synchronisation model employed by the likes of Google Docs.
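To make the distinction concrete, here is a rough Swift sketch of that ‘local trigger, cloud fetch’ pattern. It is not Apple’s actual implementation – the service UUID and the fetch callback are placeholders I’ve invented for illustration – but it shows how a Bluetooth discovery event can act purely as a cue, with the actual data still coming down from the cloud.

```swift
import CoreBluetooth

// Sketch only: a Bluetooth discovery event used purely as a trigger,
// with the latest data still fetched from the cloud afterwards.
final class ProximityTrigger: NSObject, CBCentralManagerDelegate {
    private let serviceUUID = CBUUID(string: "A1B2")   // placeholder service UUID
    private var central: CBCentralManager!
    private let onNearbyDevice: () -> Void             // e.g. kick off an iCloud fetch

    init(onNearbyDevice: @escaping () -> Void) {
        self.onNearbyDevice = onNearbyDevice
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [serviceUUID], options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // Presence of the other device is only the cue; the latest changes
        // still come down from the cloud, not over this Bluetooth link.
        onNearbyDevice()
    }
}
```

The point is that the Bluetooth link carries no document data itself; it simply tells the device that something nearby is worth syncing about.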

Apple Continuity example – editing Pages document between iPhone and iPad

In practice, it is actually quite limited in this initial version. Examples might include editing a Pages document on your Mac at your desk, then picking up your iPhone to find a little Pages icon in the bottom left-hand corner of the lock screen. This indicates the iPhone is aware you’ve been working in Pages on another device and invites you to swipe up from the icon to open the Pages app on the phone and start editing where you left off on the Mac.
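The mechanism behind that lock screen icon is exposed to developers as the Handoff API, built around NSUserActivity. A minimal Swift sketch of the advertising side might look something like the following; the activity type string and userInfo keys are illustrative and not what Pages itself uses.

```swift
import UIKit

class DocumentViewController: UIViewController {
    // The activity type must also be listed under NSUserActivityTypes in Info.plist.
    // "com.example.notes.editing" is an illustrative identifier, not Apple's.
    func startAdvertisingHandoff(documentID: String) {
        let activity = NSUserActivity(activityType: "com.example.notes.editing")
        activity.title = "Editing document"
        activity.userInfo = ["documentID": documentID]   // state the other device needs
        userActivity = activity          // UIResponder keeps the activity alive
        activity.becomeCurrent()         // makes it eligible to appear on nearby devices
    }

    // Called so userInfo stays fresh, e.g. the current cursor position
    override func updateUserActivityState(_ activity: NSUserActivity) {
        activity.addUserInfoEntries(from: ["cursorPosition": 42])
        super.updateUserActivityState(activity)
    }
}
```

As far as I can tell, once an activity becomes current it is advertised to nearby devices signed into the same iCloud account, which is what produces the icon on the other device’s lock screen.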

Some third-party apps support it as well, such as the task manager Wunderlist, which uses a similar interaction cue to drop you into the same list you were editing on another device. Unfortunately it doesn’t seem able to bring up incomplete entries started on another device, only to take you to the most recently used list. Newsstand publications like the New York Times offer a similar kind of synchronisation, so that Continuity jumps you to the page you last read on a different device.
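The gap I saw in Wunderlist looks like a question of what an app chooses to carry in the activity’s userInfo and how it restores it on the receiving side. A hedged sketch of that receiving side, again with illustrative identifiers and a placeholder navigation method:

```swift
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        guard userActivity.activityType == "com.example.notes.editing",
              let documentID = userActivity.userInfo?["documentID"] as? String else {
            return false
        }
        // Navigate straight to the same document, including any in-progress
        // entry carried in userInfo, rather than just the most recent list.
        openDocument(withID: documentID)
        return true
    }

    private func openDocument(withID id: String) {
        // App-specific navigation; placeholder for illustration.
    }
}
```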

Again, like Handoff for calls and texts, most of these examples worked well. For documents stored in iCloud Drive, I was able to open them on any of my Apple devices, make changes and find that if I picked up another device, it would display the little icon on the lock screen to invite me to continue my work.

It felt slow at times, even over a robust Wi-Fi network, taking about 15 to 20 seconds to open a Pages document, load the latest changes and jump to where I left off. It was also apparent that, while the triggers could be sensed locally over Bluetooth, it relied on an internet connection to sync the latest changes via the cloud. For the moment at least, it seems this couldn’t be used in a situation like being on a plane, where you might have local connectivity between your MacBook and iPad, but no web connection. That’s a shame, as the ability to use local Wi-Fi or Bluetooth when cloud connectivity isn’t present could have been a competitive differentiator.
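For illustration, the kind of purely local transfer I have in mind is already available to third-party developers through Apple’s MultipeerConnectivity framework, which runs over peer-to-peer Wi-Fi and Bluetooth with no internet connection required. A rough sketch, with an illustrative service name and the document payload left abstract:

```swift
import MultipeerConnectivity
import UIKit

// Sketch of an offline, device-to-device document transfer.
// The other device would browse for "mex-handoff" and send an invitation.
final class LocalSync: NSObject, MCSessionDelegate, MCNearbyServiceAdvertiserDelegate {
    private let peerID = MCPeerID(displayName: UIDevice.current.name)
    private lazy var session = MCSession(peer: peerID)
    private lazy var advertiser = MCNearbyServiceAdvertiser(
        peer: peerID, discoveryInfo: nil, serviceType: "mex-handoff")

    func start() {
        session.delegate = self
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()   // visible to nearby devices, no internet needed
    }

    func send(documentData: Data) throws {
        try session.send(documentData, toPeers: session.connectedPeers, with: .reliable)
    }

    // Accept any invitation from a peer running the same service (sketch only)
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }

    // MCSessionDelegate: only the data callback matters for this sketch
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        // Apply the received document state locally
    }
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}
```

Whether Apple could fold something like this into Continuity’s own sync path is an open question, but the plumbing clearly exists.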

Today, Continuity feels like little more than a UI shortcut. Strip away Apple’s marketing spin and what you have is an ability to sense when the user is doing something in a very limited range of apps, then show them an icon on their other devices to load up the same data. As of January 2015, it is disappointingly basic, delivers few benefits compared to existing ‘web native’ cloud services like Google Docs and misses out on many of the features made possible by real-time sync, like live collaboration.

That said, I remain interested in the concept. It is something to watch in the future. Today’s implementation may be about nothing more than displaying shortcuts and pushing calls from one device to another, but the framework laid out by Apple has potential.

We’ve been exploring multi-touchpoint experiences in the MEX initiative for several years now and, almost to the point of cliché, everyone has recognised the importance of being able to sense user context across multiple devices if you want to deliver a good overall experience. Continuity could deliver important bits of contextual information through its localised sensing, enabling apps to know when the user last picked up a particular device, how far they are from another of their touchpoints or whether any devices nearby offer complementary capabilities useful to their current scenario.
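Continuity doesn’t expose that kind of proximity data to developers today, but an app could approximate it by having one device advertise as an iBeacon and ranging it from another with Core Location. A speculative sketch, using a placeholder beacon UUID:

```swift
import CoreLocation

// One way an app might estimate "how far away is my other device?" today:
// the other device advertises as an iBeacon and this one ranges it.
final class TouchpointProximity: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,  // placeholder
        identifier: "my-other-device")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        guard let nearest = beacons.first else { return }
        switch nearest.proximity {
        case .immediate, .near:
            // The other touchpoint is within reach: offer a complementary control surface
            break
        default:
            break
        }
    }
}
```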

Apple has laid the right conceptual foundation – the idea that each device should exist, and interact, not just on the basis of its relationship with the cloud, but with others in the vicinity. What’s needed now is to build on those foundations by expanding the range of contextual information available to third party developers and seeding the market with some inspirational examples of how it could be used.

Here are some first steps: the Apple Watch should know when I’m playing content on my nearby Apple TV and subtly offer me a shortcut to media controls, while my iPad should automatically know when the Apple TV has popped up a text box, and offer to shortcut me into the virtual keyboard of the Remote app. Basic stuff, and the framework is already in place. Next up: the option to unlock straight into some high-fidelity image editing controls on the touchscreen of my iPhone when I’m editing images on my Mac. These are the kind of experiences which will make Continuity feel greater than the sum of its parts and enable developers to start thinking about truly multi-touchpoint design.
