Mobile touch is in its infancy


Last week at MWC I met with a wealth of companies doing pioneering work in mobile user experience. There’s a lot of ground to cover, so I’ll be sharing the insights from these meetings in stages. Part 1 is all about the evolution of touch technology.

Keep an eye on the MEX blog for more updates on my MWC insights over the next couple of weeks, or follow my Twitter feed @marekpawlowski.

The technology of touch is still in its infancy.

If we think about touch on mobile products today, it is typically limited to the screen of the handset, but in the future it will encompass many more surfaces and serve many more purposes.

I had several enlightening conversations at Mobile World Congress on this topic: my thanks to Terence Warmbier of Immersion, Guillaume Largillier at Stantum and Dhwani Vyas from Cypress for their insights into this area.

We currently use touch as a simple replacement for the mouse pointer we’re accustomed to on the PC: selecting icons, clicking on scroll buttons and tapping virtual keyboards.

Apple has taken this further with multi-touch on the iPhone, allowing for more natural gestures – a pinch for zooming or a flick to scroll – but still limited to navigating what we see on the screen.

The input method has changed, but how it is applied to the interface is essentially the same.

What if a handset could sense touch input on the side of the casing? Users could scroll through lists by brushing their thumb along the side of the device, in effect creating a virtual scroll wheel and allowing for much more natural, one-handed navigation. Palm will be one of the first to do this with the gesture area beneath the main screen on the Pre.
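
As a thought experiment, the mapping from an edge sensor to a scrolling list is simple enough to sketch in a few lines of Python. Everything here is hypothetical – the EdgeScroller class, the normalised sensor readings and the scroll constant illustrate the idea rather than any vendor’s actual API:

    # Hypothetical sketch: turning a touch strip on the side of a handset
    # into a virtual scroll wheel. All names and numbers are illustrative.

    class EdgeScroller:
        def __init__(self, pixels_per_sweep=400):
            # Pixels scrolled for a full sweep of the thumb along the edge,
            # where the sensor reports a normalised position from 0.0 to 1.0.
            self.pixels_per_sweep = pixels_per_sweep
            self.last_position = None

        def on_touch(self, position):
            if self.last_position is None:
                self.last_position = position
                return 0                      # first contact, nothing to scroll yet
            delta = position - self.last_position
            self.last_position = position
            return round(delta * self.pixels_per_sweep)

        def on_release(self):
            self.last_position = None         # reset for the next brush

    scroller = EdgeScroller()
    for reading in (0.20, 0.25, 0.35, 0.50):  # a simulated upward brush of the thumb
        print("scroll by", scroller.on_touch(reading), "px")
    scroller.on_release()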

What if a handset could sense how firmly a user was gripping it and transmit that information over the network? Haptics on the receiving handset could then deliver a vibration of corresponding intensity, creating an emotional connection with the person on the other end of the call. Mobile touch could become an interface for sending virtual ‘hugs’.

What if the screen were able to detect touch in three dimensions – the harder the user pressed, the deeper they’d dive into the UI? Search results would be transformed from scanning up and down a flat page into an exploratory environment where conceptual tangents could extend outwards in three dimensions.
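
Purely as an illustration of that concept, pressure could be quantised into discrete ‘depth’ levels of a layered results view. The pressure scale, the thresholds and the level names below are invented for the example:

    # Illustrative only: map a pressure reading (0.0 = light touch, 1.0 = hard
    # press) onto discrete depth levels of a layered search results view.

    DEPTH_LEVELS = ["headline results", "related concepts", "deep archive"]

    def depth_for_pressure(pressure):
        pressure = max(0.0, min(1.0, pressure))          # clamp to the valid range
        index = min(int(pressure * len(DEPTH_LEVELS)), len(DEPTH_LEVELS) - 1)
        return DEPTH_LEVELS[index]

    for p in (0.1, 0.5, 0.95):
        print(f"pressure {p:.2f} -> {depth_for_pressure(p)}")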

This will be one of the major themes for discussion at this year’s MEX Mobile User Experience conference on 19th – 20th May.

One of the most impressive demonstrations of touch I saw at MWC this year was given by Guillaume Largillier, founder and Chief Strategy Officer of Stantum.

He showed me several concepts built on Stantum’s resistive controller:

1) Sufficient accuracy and pressure sensitivity to re-create the natural effect of a paint brush on a mobile screen. Guillaume actually had a brush with him and was able to paint realistic strokes on the screen just as you would on a physical canvas.

2) High speed multi-touch sensing. Stantum has developed a demonstration game where you use all of your fingers at the same time to move characters around the screen simultaneously.

3) Pressure sensing, with an accurate visual indication of how hard the user is pressing the screen. The demonstration was limited to a graphical indicator, but this technology could be applied to develop immersive, 3D interfaces.

About five years ago, Guillaume and his co-founders harnessed their passion for music to build Jazzmutant, a company developing pioneering touchscreen-based musical instruments. These instruments are now used by the likes of Bjork to create funky, electronica sounds.

In developing these instruments, Guillaume and his colleagues had to push the boundaries of resistive touchscreen technology, creating touch controllers capable of sensing multiple input points, with a high degree of accuracy and an ability to respond to the relative pressure applied by the finger tip.

The Jazzmutant musical instruments business has since become a separate division and Stantum was formed to commercialise its touchscreen technology in other industries. Backed by a recent USD 10m investment round, the company has a particular focus on mobile telecoms.

For those who aren’t familiar with the area, there are two major types of touchscreen technology used in mobile today: resistive and capacitive.

1) Resistive technology uses two or more conductive layers, separated by a thin gap. When the user presses the screen, the layers make contact; the controller then measures the voltage at the contact point across each layer in turn to determine the x-y co-ordinate of the touch.

This has made resistive technology good for touch interfaces which use a stylus. It provides accurate positioning and can detect fine strokes, such as those required for handwriting recognition or sketching.

However, the weakness of resistive technology has always been its limited ability to sense the kind of light, multi-point touches made famous by Apple’s Multi-Touch interface on the iPhone. The use of multiple layers and coatings can also reduce the clarity of the screen.
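
For readers curious about the arithmetic, a common four-wire resistive design reads each axis as a voltage divider through an ADC, so position is simply proportional to the sampled value. The sketch below illustrates that scaling with invented ADC and screen values; it is a simplification, not a description of Stantum’s controller:

    # Simplified illustration of four-wire resistive sensing: each axis is read
    # as a voltage divider, so the ADC value is proportional to touch position.

    ADC_MAX = 1023                     # 10-bit ADC, a typical choice (assumed)
    SCREEN_W, SCREEN_H = 320, 480      # target resolution in pixels (illustrative)

    def adc_to_screen(adc_x, adc_y):
        # A gradient is driven across one layer while the other is sampled,
        # then the roles are swapped; here we just scale the two raw readings.
        x = adc_x * SCREEN_W // (ADC_MAX + 1)
        y = adc_y * SCREEN_H // (ADC_MAX + 1)
        return x, y

    print(adc_to_screen(512, 256))     # roughly mid-screen in x, upper quarter in y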

2) Multi-touch currently works best with capacitive technology. This is based on electro-static sensing: the screen detects disturbances in its own electro-static field caused by the conductive presence of a human finger.

It is good for multi-touch, doesn’t suffer from the screen clarity issues which can affect resistive panels and can even sense ‘touches’ when the finger is hovering rather than actually pressed against the glass of the screen, leading to a much lighter feel.

Disadvantages include accuracy issues and cost. According to Guillaume at Stantum, capacitive panels currently cost about twice as much to implement in mobile products as resistive screens.
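
In software terms, capacitive sensing usually comes down to comparing each electrode against a calibrated no-touch baseline and flagging any reading that deviates by more than a threshold. The sketch below is a generic illustration of that idea with made-up numbers, not any particular chipset’s firmware:

    # Generic illustration of capacitive touch detection: compare each sensor
    # electrode against its no-touch baseline and flag significant deviations.

    BASELINE = [100, 100, 100, 100, 100]   # calibrated counts with no finger present
    THRESHOLD = 15                         # minimum deviation that counts as a touch

    def detect_touches(readings, baseline=BASELINE, threshold=THRESHOLD):
        touches = []
        for index, (value, base) in enumerate(zip(readings, baseline)):
            delta = value - base
            if abs(delta) > threshold:     # a finger disturbs the electrode's field
                touches.append((index, delta))
        return touches

    # Two fingers near electrodes 1 and 3 produce a multi-touch report.
    print(detect_touches([101, 140, 103, 132, 99]))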

I was excited by Stantum’s technology because it promises to combine multi-touch, high accuracy and pressure sensing in a low-cost, resistive package. In addition to the screen of the device, it can also be used on any surface where there is sufficient ‘flex’ to support the layers of a resistive sensor – the back or sides of a handset, for instance.

Stantum’s model is to license its software IP to chipset manufacturers and work in partnership with glass panel manufacturers. According to Guillaume, it already has agreements in place with two of the major chipset companies and is hoping to see mobile devices coming to market shortly based on its technology.

Something hanging over everyone working in touchscreens at the moment is the threat of litigation. As with all rapidly expanding technology markets, each player is looking to protect its long-term interests by registering proprietary techniques as patents and watching competitors closely for any sign of ‘copycat’ approaches.

Apple, for instance, has made it very clear it intends to mount a robust defence of multi-touch, already levelling barely veiled accusations at Palm over the Pre.

Guillaume believes Stantum has a strong patent portfolio, having developed the bulk of its IP several years before the iPhone was even a gleam in Steve Jobs’ eye.

I also had an interesting conversation with Dhwani Vyas, who leads the business unit responsible for programmable system-on-a-chip (PSoC) products at Cypress.

He explained that litigation in this area would centre on the way in which particular touch interfaces were implemented, rather than the style of interface itself. For instance, it would not be possible for a company to patent a gesture like the well-known iPhone ‘pinch’. Instead, it would have to protect the way that pinch was interpreted and fed back into the UI.
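
That distinction between a gesture and its interpretation is easy to see in code. Below is one possible interpretation of a pinch, turning the change in distance between two touch points into a zoom factor. The class and method names are my own invention, and a real gesture engine would add smoothing, thresholds and velocity handling:

    import math

    # One possible interpretation of a two-finger pinch: the zoom factor is the
    # ratio of the current finger spread to the spread when the gesture began.

    class PinchInterpreter:
        def __init__(self):
            self.start_spread = None

        def begin(self, p1, p2):
            self.start_spread = math.dist(p1, p2)

        def update(self, p1, p2):
            if not self.start_spread:
                return 1.0                                # no gesture in progress
            return math.dist(p1, p2) / self.start_spread  # >1 zooms in, <1 zooms out

    pinch = PinchInterpreter()
    pinch.begin((100, 100), (200, 100))                   # fingers 100 px apart
    print(round(pinch.update((80, 100), (230, 100)), 2))  # spread to 150 px -> 1.5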

Dhwani told me Cypress currently provides a library of around 30 gestures to work with its capacitive touchscreen chipsets. The company works closely with user experience researchers to develop new gestures and provide partners with the tools to implement their own.

He believes we are in the very early stages of development here, and that a much richer gestural language is yet to come.

Today these gestures are expressed through a physical touch on the screen or touchpad of a device. However, Cypress is also developing proximity sensing technology which will allow users to perform gestures in the air space in front of the sensor.

Imagine flicking your hand in front of the screen to scroll through your photos or iTunes CoverFlow. Imagine grabbing the air to pick up an item on the desktop and fling it into a virtual trash can.

Dhwani sees the number of modalities within mobile devices expanding rapidly. Of course, he also hopes a Cypress chipset, which sits alongside the baseband and application processors in a handset, will be at the heart of this evolution.

In the future, a single chip will make it possible to control a laser trackpad for fine navigation of web pages, a capacitive touchscreen for selecting icons and a gesture scroll area on the device casing.

This does raise some questions about confusing the user experience through a multiplicity of options. Single modality devices, like key-driven Nokia products or the simple touchscreen of the iPhone, have a distinct advantage: the user never has to think about which input option to select. With several modalities on the same product, there is a momentary latency while the user performs a sub-conscious evaluation of which method is best suited to their needs, and that hesitation can often outweigh the benefits of having a variety of more efficient input methods available.

All of the developments I’ve talked about so far focus on how we express ourselves to the mobile device. What happens when the device wants to talk back?

Currently that happens primarily through visual feedback on the screen or audible alerts played back through the speaker. However, mobile devices are starting to reach out to a third sense: touch.

Immersion has been working in this area of haptics for many years, across numerous different fields, including medical devices and game controllers. Terence Warmbier, responsible for European business development at the company, sat down with me at MWC to talk about how this technology is evolving. (Disclosure: Immersion is a sponsor of the 2009 MEX Conference & Awards).

It is already appearing in an early form in numerous handsets. Immersion has licensing agreements with most of the world’s major handset manufacturers and Samsung and LG of South Korea have been particularly progressive in building Immersion’s platform into their products. Terence estimated Samsung had released or planned about 70 products featuring haptic feedback.

Most existing products utilise a fairly standard vibration motor within the handset, using Immersion’s technology to activate a number of different vibration styles in response to interface events. For instance, reaching the end of a scrolling list will generate a quick ‘bong’, while crashing your racing car in a driving game will cause a major vibration.

This is currently limited by the centralised nature of the vibration motor. While it is possible to trick the finger tips into feeling the haptics in different ways, all of the feedback originates from a single point within the handset.
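
In software terms, this generation of haptics amounts to a lookup from interface events to vibration effects of differing intensity and duration. The sketch below is a generic illustration of such a mapping, not Immersion’s actual API; the event names and parameters are invented:

    # Generic illustration: map UI events to vibration effects of differing
    # intensity (0.0-1.0) and duration (milliseconds). Not a real haptics API.

    HAPTIC_EFFECTS = {
        "list_end":   {"intensity": 0.3, "duration_ms": 40},    # a quick, light tick
        "key_press":  {"intensity": 0.2, "duration_ms": 25},
        "game_crash": {"intensity": 1.0, "duration_ms": 400},   # a major rumble
    }

    def play_haptic(event, motor=print):
        effect = HAPTIC_EFFECTS.get(event)
        if effect is None:
            return                              # unknown events produce no feedback
        # A real implementation would drive the vibration motor; here we just log.
        motor(f"{event}: intensity={effect['intensity']}, "
              f"duration={effect['duration_ms']}ms")

    play_haptic("list_end")
    play_haptic("game_crash")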

Terence explained the three evolutionary steps required to take haptic technology into the future:

1) The current generation – centralised vibration from a simple motor

2) Piezo-electric – providing local feedback to specific areas of the device case or screen

3) Electro-deformable polymer – passing a current through a particular area of the surface causes the material to physically change shape, allowing buttons which rise from and sink back into the surface of the screen

Stages 2 and 3 will allow for some very natural and human gestures. Imagine, for instance, having a conversation with your partner over a distance of many miles and being able to share the feeling of touch alongside voice and video. A ‘stimulating’ possibility to say the least!

I believe developments in touch input and haptic feedback will be inter-dependent. Developments like pressure sensing and proximity gestures will be significantly enriched by an ability to transmit correspondingly complex feedback to the user.

Similarly, it is difficult to imagine either of these enhancements reaching its full potential without the capabilities of the other. It would be almost like trying to have a heated debate with a statue!

What do you think is going to happen with touch technology and haptics? What are some potential use cases? This is going to be a key topic for discussion at the 2009 MEX Mobile User Experience Conference on 19th – 20th May – I hope you’ll be there to take part. In the meantime, contribute to the debate by posting your comments below.


Comments

  1. Tim Deluca-Smith

    Marek: I came away from MWC with a slightly different feeling towards touchscreens.

    When it comes to handset design and innovation, the mobile industry has a strong herd mentality, often driven by short product lifecycles and a strong reliance on the Asian ODM powerhouses (frequently resulting in a parts-bin approach to handset development).

    One of the strongest currents over the last 12 months has been the trend for touchscreens. After the iPhone and early LG devices, nearly all major handset manufacturers have released devices that feature a touch-input mechanism; even RIM, at the vanguard of business mobility.

    While I would like to believe that this signals a trend towards greater experimentation in user interface design and progression towards gesture-control and other innovations, the cynic in me sees an industry selling itself short and disappointing consumers with ‘me-too’ products that at best fail to meet consumer expectations and, at worst, simply aren’t fit for purpose.

    Remember, the UI is merely the window into the handset. It is the means by which we navigate and interact with on-board features and services. The success of touchscreen must, therefore, be intrinsically linked to a device’s operating system and menu hierarchy.

    Now, I’m by no means an Apple fanboy, but the iPhone works because the device’s OS complements touch navigation. Clean hierarchies, graphical menus, strong integration with hardware (e.g. the accelerometer). Arguably the iPhone OS was only ever designed with touch and gesture in mind; the same can’t be said for any other OS, even Android. Simply porting a touchscreen UI onto a legacy OS, or an iteration such as S60 Touch, is going to yield challenges as legacy code does battle with new input mechanisms and changes in the ways that consumers interact with their devices.

    This is the difference between Touch Enabled and Touch Optimised.

    That’s not to say that everything should be Touch Optimised. Indeed, this is the trap that many seem to fall into. There will always be applications and products that simply don’t suit touch as the primary method of navigation and input. Outside of basic SMS and IM (which can be satisfactorily serviced via virtual keyboards), messaging is an obvious example, and a full QWERTY keyboard will usually win favour amongst business users. Even a basic T9 keypad has the advantage of being operable with one hand.

    A more pragmatic approach to touch should be adopted. Development and design decisions must be based on qualified and considered use cases and evaluation of which interactions benefit the most from the natural flow of touch and gesture.

    Touch, for touch’s sake, may build early market share and tap into the current Zeitgeist, but I suspect that it may come at the cost of long term brand loyalty and credible evolution of touch, gesture and haptic control. Only last week a study of nearly 20,000 consumers by Reevoo.com found that half of the 10 least popular phones were touchscreens. Only the iPhone and LG Renoir made it into the Top 10. Users cited some very basic complaints, including icons that simply weren’t big enough for ‘male fingers’. A quick check of the ‘Introduction to S60 Touch’ guidelines published by Symbian shows a recommendation that icons should be no smaller than 7mm x 7mm with a 1mm gap between. Quick, go grab a ruler; that’s pretty small.

    Clearly there is a need to evolve traditional handset form factors and user interfaces. The way we interact with our devices has moved on, and so too has the core use case for mobile devices. Indeed, one of the touchscreen’s greatest benefits is its ability to free up valuable real estate on the front of a device and allow visual content to finally be viewed on a respectable screen size.

    The way in which we interact with a mobile device will, to a certain extent, always be limited by its physical form factor; a fact that means compromises will always have to be made. The danger is that, in trying to remove these compromises, designers and UI specialists will be tempted to integrate several input methods within a single form factor. Can current form factors accommodate new developments in pressure sensing or touch-sensitive casings (essentially turning the entire unit into an input mechanism)? Can these technologies co-exist, and how long before we overwhelm the user to the point that UI advancements actually become counter-productive?

  2. Marek Pawlowski

    Very much in agreement on the risks faced by handset manufacturers as they rush to join the touchscreen ‘land grab’. I’m sure we will see some truly awful products from OEMs who see the addition of a touch interface as simply ticking a box on the specifications sheet.

    Those who succeed will need to think carefully about how touch impacts all aspects of the user experience, from issues as basic as how fingermarks on the screen can impair viewing in certain light conditions to fundamental changes to interaction flow within applications.

    However, touch-based interfaces will be about much more than the primary, on-screen interactions we see in today’s products. There is the potential for much more subtle use of touch as it extends throughout the surfaces of the device.

    If we think about the overall experience of a device during a particular interaction, I can imagine a situation in the future where input through the main touchscreen may define 60% of that experience, 20% would come from front facing hardware controls, 10% from other parts of the device casing and 10% from gesture sensing.

  3. Philippe J DEWOST

    Marek: it was good to meet in person at the ACCESS booth @ MWC. As so often, if not always, your analysis is fresh, thought-provoking and brightly optimistic, opening new doors and new ways of thinking.

    Tim, thank you for the down-to-earth wisdom you bring to the conversation.

    Reading your discussion thread connects me to a company I am currently helping and investing in, which is addressing the “choice” issue that arises with large touchscreen interfaces: as Tim pointed out, buttons are too small and too numerous (20 soft choices on an iPhone screen), and the call for the right connection between the touchscreen technology, the UI and the OS has never been more timely.

    Tiki has found a rather disruptive yet very simple way to organize navigation within and through contextualized six-item clusters (remember that the human mind can typically hold a maximum of around seven items in working memory at once), which can be used either for navigating through information or for efficient, one-thumb text input.

    They won Nokia’s Mobile Rules innovation contest in 2008 and I do think they can make a very relevant contribution to the debate you have both opened on this page.

    Check http://tikilabs.com and let me know what you think

    Regards

    Phil
