Preparing for Apple’s new interactions

Strip away the hype from Apple’s September 2014 product announcements and you’re left with four new interactions that might change digital user experience at scale:

  1. Magnified depth
  2. Pressure levels
  3. Haptic feedback
  4. Activated confirmation

Their ramifications will ripple out beyond Apple’s own customer base as competitors replicate these methods.

Historically, Apple has used the introduction of new interaction methods to initiate new product categories: the Mac and the mouse, the iPod and the click wheel, the iPhone and capacitive touch (I wrote about this previously). The significance of the Apple Watch, iPhone 6 and Apple Pay is best understood in this context.

Apple’s modus operandi tends to be refining existing technologies rather than initiating wholly new methods. In fact, the MEX community already has a rich resource of best practice methods we can apply to these four interactions.

Consider this a guide to the strategic questions you should be asking.

Magnified depth

Magnified depth in Apple Watch UI

Turning the digital crown on the Apple Watch moves the user’s perspective through layers of magnification. The effect is demonstrated in the photo app, which presents a zoomed-out view of all the user’s photos arranged as a wall of tiny mosaic tiles, then allows the user to magnify the view until a single photo fills the entire screen.

This is just one example of how this interaction method may be applied, but it should be understood for its wider significance. It introduces the notion that the interface canvas can be presented at any level of magnification: the user controls that magnification by turning the digital crown, and pans left-right and up-down by swiping the touchscreen. That relationship, the crown for layers and the touchscreen for panning, is at the heart of the Apple Watch’s interaction model.

Beyond photos, there are many instances where this interaction may change the user experience, from navigating hierarchies (e.g. from individual songs through albums and up to entire genres of music) to revealing layers of granularity in data (e.g. a pie chart which explodes into more levels of detail as the user zooms).
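
To make the concept concrete, here is a minimal sketch of how crown-driven magnification might be modelled. Apple had not published a developer API for the digital crown at the time of writing; this sketch assumes the WKCrownDelegate interface WatchKit later exposed, and the zoom layers, rotation threshold and renderWall routine are hypothetical illustrations of the layered-magnification idea, not Apple’s implementation.

```swift
import WatchKit

class PhotoWallController: WKInterfaceController, WKCrownDelegate {
    // Hypothetical magnification layers: mosaic wall -> album -> single photo
    private let zoomLevels: [Double] = [1.0, 4.0, 16.0]
    private var accumulatedRotation = 0.0
    private var currentLevel = 0

    override func willActivate() {
        super.willActivate()
        crownSequencer.delegate = self
        crownSequencer.focus() // route digital crown events to this controller
    }

    // Turning the crown nudges the view through the magnification layers;
    // panning stays on the touchscreen, as in the Watch's interaction model.
    func crownDidRotate(_ crownSequencer: WKCrownSequencer?, rotationalDelta: Double) {
        accumulatedRotation += rotationalDelta
        // A quarter-turn (hypothetical threshold) moves one layer in or out
        if abs(accumulatedRotation) > 0.25 {
            let step = accumulatedRotation > 0 ? 1 : -1
            currentLevel = min(max(currentLevel + step, 0), zoomLevels.count - 1)
            accumulatedRotation = 0
            renderWall(atMagnification: zoomLevels[currentLevel])
        }
    }

    private func renderWall(atMagnification scale: Double) {
        // Hypothetical drawing routine: redraw the photo mosaic at the new scale
    }
}
```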

MEX began investigating the effects of visual depth some years ago, coinciding with the introduction of the first glasses-free stereo 3D displays, like the Nintendo 3DS and smartphones from LG. While smartphones with this specific technology did not succeed, they opened our eyes to the principles of making visual depth usable and useful.

    MEX Principles

  • Human vision is better described as 2D rather than 3D. Our sight limits perception to 2D layers and it is only through physical interaction that we explore additional spatial dimensions. For instance, in the physical world, the underside of an item placed on a flat surface is invisible until we turn it over. Limiting freedom of navigation in digital 3D interfaces, therefore, helps them to feel natural.
  • User interface design for personal digital devices has always been 2D and constrained within a frame, such as the screen bezel, from desktop computers positioned a few feet away to handheld mobile devices. What happens when you remove the frame and allow the interface to exist within the physical environment?
  • Introducing the concept of spatial depth into the user interface allows designers to change the users’ relationship with content by moving items through three dimensions. This does not require true stereo 3D, but instead can be created through visual tricks – similar to those used by film directors – of moving objects into focus and adjusting their relative size.
  • Designing in 3D is not a purely visual exercise. It succeeds only when input mechanisms are appropriate for navigating and interacting with the 3D content.
  • Don’t over-emphasise the Z axis of depth – the X and Y axes remain vital in spatial interface design.
  • Considering the size, relative scale and weight of interface elements is essential to good spatial interface design.
  • Tapping into additional senses, such as hearing, by using sound effects and changes in music cadence and volume, further enhances spatial exploration.
  • Consider spatial zones. When consuming video, the majority of content might exist in two dimensions. However, a 3rd dimension might be introduced when there is a lull in the action, or a user wants to pull something out of the flat plane for future examination. For instance, a 2D advertisement could be saved for later in a 3D space.
  • — inspired by MEX session summaries, including Dale Herigstad, March 2014; Mattias Andersson, December 2010

Pressure levels

Pressure levels in the Apple Watch UI

The Apple Watch can differentiate between a light tap and a firm press. In effect, this creates the notion in the user’s mind that the screen is no longer a flat surface, but something with virtual depth. Apple has taken note of the principle that interfaces with visual depth require input methods capable of reflecting that depth.

Apple’s demonstration focused on using this command to bring up a contextual menu of additional options in apps, but the actions could potentially be applied to other scenarios. For instance, it may be possible to respond tentatively to an invitation with a light tap, but guarantee attendance with a firm press.
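
On the Watch itself, Apple exposes the firm press to developers only as a trigger for contextual menus, but the same light-versus-firm distinction can be sketched with the continuous force values later reported by 3D Touch hardware on iOS. A minimal sketch of the hypothetical invitation example, assuming UITouch’s force and maximumPossibleForce properties; the threshold and response hooks are illustrative only.

```swift
import UIKit

class InvitationView: UIView {
    // Hypothetical threshold separating a light tap from a firm press
    private let firmPressRatio: CGFloat = 0.6
    private var peakForceRatio: CGFloat = 0

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, touch.maximumPossibleForce > 0 else { return }
        // force is reported relative to the hardware's maximum possible force
        peakForceRatio = max(peakForceRatio, touch.force / touch.maximumPossibleForce)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        if peakForceRatio > firmPressRatio {
            respond(.confirmed)   // firm press: guarantee attendance
        } else {
            respond(.tentative)   // light tap: respond tentatively
        }
        peakForceRatio = 0
    }

    private enum RSVP { case tentative, confirmed }
    private func respond(_ answer: RSVP) {
        // Hypothetical hook for sending the invitation response
    }
}
```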

    MEX Principles

  • Physical controls provide better ‘eyes free’ interaction than touchscreens. They also allow for more nuanced control. The user experience can be enhanced through physical tactility and, in contrast to the current trend towards abstracting controls into software and touchscreens, manufacturers should explore how new types of physical input mechanisms can create distinctive product attributes.
  • Motor memory is embedded in most humans and interfaces which take advantage of this principle enable users to interact much more quickly with complex digital systems.
  • Artefact-based interfaces may be insufficient to cope with the level of contextual data in users’ lives.
  • — inspired by MEX session summaries, including Louisa Heinrich, September 2012; Ben Medlock, December 2011; Jason Mesut, November 2010

Haptic feedback

Haptic feedback in the Apple Watch UI

Apple devices have hitherto expressed themselves entirely through sound and visuals, but the Taptic Engine in the Apple Watch adds a new dimension: haptic feedback. It may be considered both as a distinct sensory communications channel, able to act as a standalone notification method, and as a way of augmenting an overall experience alongside sound and visuals.

Its significance lies in the ability to communicate with users in environments and social contexts where it would otherwise be difficult or inappropriate. For instance, it may be the preferred notification channel in high noise environments, where even the vibration of a phone in the pocket would not be sensed as easily as a vibration delivered directly to the wrist. Or in partial attention environments, from being in a meeting to walking down the street, where vibrations to the wrist may be the only appropriate way to convey information in the background without disrupting the user.
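
For developers, the Taptic Engine is surfaced as a small vocabulary of named haptic patterns which can be played with or without accompanying visuals. A minimal sketch, assuming WatchKit’s WKInterfaceDevice API; the mapping of patterns to situations is my own illustration, not a documented convention.

```swift
import WatchKit

// Play a Taptic Engine pattern as a standalone notification channel.
// WKHapticType offers a fixed vocabulary, including .notification,
// .success, .failure, .start, .stop and .click.
func nudgeWrist(urgent: Bool) {
    let device = WKInterfaceDevice.current()
    // Hypothetical mapping: a sharper pattern for urgent items, a softer
    // click for background information that should not disrupt the user.
    device.play(urgent ? .notification : .click)
}
```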

    MEX Principles

  • Tactile feedback is suited to communicating low-level, simple emotions but unsuited to communicating complex or precise information.
  • Tactility has many dimensions, which can be combined or used individually to convey different sensations, including sharpness, texture, weight, balance and the number of sensory points. For instance, sharpness is most effective at communicating the importance of an interaction.
  • Sensory interactions such as sound and tactility are most effectively understood when preceded by a signalling interaction. This notifies users and allows them to indicate their readiness and preferred sensory channel. A metaphor for this is the game of charades, where universally understood gestures are used to indicate the type of mime, e.g. book or film or person.
  • Tactility and sound are at their most effective when augmenting, rather than replacing, visual UIs. However, these channels can be effective in isolation for experiences where visual privacy is important, or with high emotional content.
  • Sensory interactions become more effective when conveyed in a continuous cycle, rather than as occasional, discrete events.
  • Sensory interfaces take longer to learn than visual UIs. Designers should create tutorials and factor in a training period for users, gradually introducing sensory interactions to play a greater part in the overall experience.
  • Biomimicry can be an effective source of inspiration for tactile and audible interactions. Replicating the rhythms found in nature can make an experience feel natural. Deliberately unnatural interactions can be used to create surprise or when immediate attention is required.
  • Sound and tactility can be aligned with specific visual elements, used to convey motion or reflect spatial dimensions. For instance, in mapping, tracing a fingertip over a visual route could be augmented by sound and haptic feedback.
  • Context, particularly physical environment, creates variation in the way people filter haptic and audible interactions. Designing to dynamically adapt in response to environmental factors can overcome this and it is essential to conduct user testing in real world conditions, not just the lab.
  • User tolerance of playfulness in haptics and sound is greater than that of visual design. This creates possibilities for exploration, especially while users are learning a system for the first time.
  • Sound and tactility can be just as effective as visual interfaces in creating the signature feeling of an experience and should be considered a key part of branding.
  • — inspired by MEX session summaries, including Charlotte Magnusson; Patrick Bergel; Sophie Arkette; all September 2012

Activated confirmation

Activated confirmations with the Apple Pay UI

Apple Pay is interesting and impressive as a commerce tool, but the confirmation interaction it introduces has wider significance and may become the trigger for other applications. Strip away the payment context and you find a classic transaction trigger: a physical token (in this case the NFC chip embedded in the top of the iPhone) which initiates an identity check (in this case the Touch ID sensor embedded in the iPhone’s home button).

Again, this is not new, but Apple’s approach is different. Firstly, they have chosen to place the NFC chip at the top of the phone, overcoming one of the ergonomic challenges afflicting current implementations of NFC in smartphones, which tend to place the chip in the middle of the phone’s rear casing. Simply put, Apple’s positioning is likely to feel more comfortable for most users when, for instance, touching their phone to an NFC reader at the point of sale.

Apple has also designed the overall experience as a single gesture: touching the phone to the NFC reader and holding a finger on the Touch ID sensor can be completed in a single motion. This is where it differs significantly from previous attempts at using NFC for smartphone payments, which required the phone to be held awkwardly to the NFC reader, before prompting for an additional series of inputs on the touchscreen.

If the convenience of this gesture works in practice, it is easy to see it expanding to the many areas of life which require a transaction trigger and confirmation: everything from opening a door to verifying your identity for a flight.
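
From a developer’s perspective, the entire confirmation is delegated to a single system sheet: the app describes the transaction via PassKit and never sees card details, while the NFC tap and Touch ID check happen in one motion. A minimal sketch, with a placeholder merchant identifier and summary items; a real integration would also attach a PKPaymentAuthorizationViewControllerDelegate to receive the authorised payment token.

```swift
import PassKit
import UIKit

func startPayment(from presenter: UIViewController) {
    let request = PKPaymentRequest()
    request.merchantIdentifier = "merchant.com.example.store" // placeholder
    request.supportedNetworks = [.visa, .masterCard, .amex]
    request.merchantCapabilities = .capability3DS
    request.countryCode = "US"
    request.currencyCode = "USD"
    request.paymentSummaryItems = [
        PKPaymentSummaryItem(label: "Example Store",
                             amount: NSDecimalNumber(string: "3.50"))
    ]

    // The system sheet handles the NFC tap and Touch ID check as one gesture;
    // the app only receives a tokenised credential afterwards (via a delegate,
    // omitted here for brevity).
    if let sheet = PKPaymentAuthorizationViewController(paymentRequest: request) {
        presenter.present(sheet, animated: true)
    }
}
```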

Crucially, Apple has designed the interaction to win the users’ trust – once established, this may act as a springboard. The March 2014 creative session at MEX provides insight into the principles behind trust in the digital environment.

    MEX Principles

  • Communicate clear affordances. People trust things that they immediately know how to use.
  • Visibility of system status. Keep users informed of every action within a reasonable time.
  • Facilitate social proof. Build confidence through endorsements from like-minded communities.
  • Set expectations and verify. Establish your brand promise to create understanding, communicate purpose and create a cycle of validation.
  • Culture trumps everything. Speak the users’ language with words, phrases and symbols that are familiar.
  • — inspired by MEX session summaries, including Rich Clayton and team, March 2014

I have felt for some time that introducing visual depth and linking it with corresponding interaction channels like pressure sensing and haptic feedback could represent a new phase in mobile user experience. The work of the MEX initiative over the past several years reflects this and we are lucky to have a wealth of speaker presentations, results from working groups and research pieces to serve as guides to these methods.

Apple’s newfound focus on these areas will no doubt bring them the mainstream attention they’ve been lacking. We’ll be doing more to share best practice from the existing MEX archive and continuing to nurture new ideas by maintaining our exploration of these interactions at the next MEX event in March 2015.

Which do you think will have the most significance for users? Please post your comments below.
