Ken Willes #mexsession on cross-modal digital experience design


Ken Willes talks about the complexity of designing cross-modal digital experiences, offering use cases and practical advice in this 30-minute talk. Drawing on a wealth of experience in the sector, he goes in-depth into how complexity increases exponentially with each modality added, such as combining tilt-motion and touchscreen input. Properly executed, however, these cross-modal pairings can result in a more natural experience for the user.


Insights

  • WIMP (Windows, Icons, Menus, Pointers) has given way to OCMG (Objects, Containers, Motion, Gestures) as the dominant interaction framework.
  • Growing consumer interest in wearable devices is increasing the number of potential sensors and input mechanisms which can be woven into cross-modal experiences.
  • The complexity of designing cross-modal input increases exponentially with each modality added. For instance, even simple examples like combining tilt-motion and touchscreen are non-trivial. However, properly executed, they result in a more natural experience for the user.
  • Cross-modal input is well suited to simultaneous, multi-person user interfaces (SMUIs). For instance, there are now 100-inch touchscreen tables supporting simultaneous use by 10 people. Human interaction etiquette always trumps digital interaction conventions in these situations.
  • Cross-modal input thrives when discrete and continuous modes are paired. For instance, precise touchscreen input and the analogue feel of accelerometer-controlled tilt motion. Some pairings are more obvious than others – focus on what seems natural in context. “Don’t do it just because you can.”
  • Use different modalities to verify intent. For instance, a voice command could confirm the intention of a motion gesture. This does not always need to happen in real-time: historical usage data can be tapped to confirm the likelihood of a particular input.
  • Feedback and boundaries are essential, especially with gestural input: the equivalent of the visual pingback physics at the end of touchscreen lists.
  • Introducing physical tokens can improve the usability of gestural input. For instance, Nintendo's Wii Remote motion controller.
  • Each input modality should be considered an equal citizen in the design process to make it easy to vary their relative importance as the experience design progresses.

Recorded at MEX, March 2014
