Sound experiences outside the visual canvas
The brain has a remarkable ability to perceive the location of a sound source. If an object bangs on the floor behind you, your brain uses acoustic cues, such as the tiny differences in timing and loudness between your two ears, to form a clear picture of where the impact occurred.
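Those binaural cues are well documented, even if SRS’s own algorithms are proprietary. As a rough illustration, Woodworth’s classic approximation estimates the interaural time difference from the head’s radius and the angle of the source:

```typescript
// Woodworth's approximation of interaural time difference (ITD):
// the extra time sound takes to reach the far ear for a source
// at azimuth `azimuthRad` radians off centre.
const HEAD_RADIUS_M = 0.0875; // average human head radius, ~8.75 cm
const SPEED_OF_SOUND_M_S = 343; // in air at ~20°C

function interauralTimeDifference(azimuthRad: number): number {
  return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (azimuthRad + Math.sin(azimuthRad));
}

// A source directly to one side (90°) yields roughly 0.66 ms of
// delay between the ears — tiny, but easily resolved by the brain.
console.log(interauralTimeDifference(Math.PI / 2) * 1000); // ≈ 0.66 ms
```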
Sound is also an ‘always on’ sense, waking us at night when we hear something unexpected. This instinct is hard-wired into human biology and goes some way towards explaining why most people find sound to be a highly emotional sense: it is often more likely than vision to move us to tears.
The state of audio reproduction on mobile devices, however, is poor. They are typically equipped with low-quality mono speakers, and their operating systems make uninspiring use of sound effects, if they use them at all.
SRS provide audio post-processing technology. They are best known for their work in the television business, enabling flat panel displays to reproduce surround sound-style effects or enhancing the reproduction of 5.1 channel sound. They are also applying that expertise to mobile, embedding their capabilities into chipsets from Qualcomm, TI and ST-Ericsson.
Bob Lyle, Vice President of Mobile at SRS, demonstrated how a mobile device with stereo headphones could be enhanced to reproduce not just higher quality sound, but also the acoustic trickery necessary to convey a sense of where a sound originates. There are obvious applications in content reproduction, such as video and gaming.
The demonstrations included a virtual helicopter that could be repositioned relative to the user, its sound appearing to move through space accordingly. There was also a shooting game, in which the SRS enhancement accurately reproduced the sound of shell casings flying out of the gun and landing behind the gamer.
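SRS’s processing is embedded in silicon, but the same class of effect can be sketched with the standard Web Audio API, whose PannerNode offers an HRTF (head-related transfer function) panning model for binaural 3D positioning. A minimal sketch of a repositionable source, assuming a hypothetical, already-decoded `helicopterBuffer`:

```typescript
// Place a looping sound at a 3D position relative to the listener.
// `helicopterBuffer` is a hypothetical, already-decoded AudioBuffer.
declare const helicopterBuffer: AudioBuffer;

const ctx = new AudioContext();

const source = ctx.createBufferSource();
source.buffer = helicopterBuffer;
source.loop = true;

const panner = new PannerNode(ctx, {
  panningModel: 'HRTF',     // binaural rendering via head-related transfer functions
  distanceModel: 'inverse', // loudness falls off with distance
  positionX: 2,  // to the listener's right...
  positionY: 1,  // ...slightly above...
  positionZ: 3,  // ...and behind (the listener faces -Z by default)
});

source.connect(panner).connect(ctx.destination);
source.start();

// Repositioning the source over time makes it appear to fly past.
panner.positionX.linearRampToValueAtTime(-2, ctx.currentTime + 4);
```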
The effect was at its best through high-quality headphones, but was also apparent even when played through the built-in speakers of a phone or tablet.
The technology has the potential to create transformative experiences beyond enhancing existing content.
Consider how digital interactions might evolve if they took place not just within a visual canvas but within a three-dimensional soundscape. What would a list of messages sound like if some were made acoustically closer to the user than others? What would sharing a media object from your mobile device to a wall-mounted screen sound like if you could manifest that interaction as a sound event in 3D space?
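To make the first of those questions concrete, here is a hypothetical sketch using the same PannerNode approach, mapping a message’s priority to its acoustic distance from the listener:

```typescript
// Hypothetical: more urgent messages sound acoustically closer;
// background chatter recedes into the distance.
declare const chimeBuffer: AudioBuffer; // hypothetical decoded notification sound

function playMessageCue(ctx: AudioContext, priority: number): void {
  // `priority` in [0, 1]: 1 = most urgent, placed nearest the listener.
  const distanceM = 1 + (1 - priority) * 9; // 1 m (urgent) to 10 m (ambient)

  const source = ctx.createBufferSource();
  source.buffer = chimeBuffer;

  const panner = new PannerNode(ctx, {
    panningModel: 'HRTF',
    distanceModel: 'inverse',
    positionZ: -distanceM, // straight ahead, at the mapped distance
  });

  source.connect(panner).connect(ctx.destination);
  source.start();
}
```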
To achieve these kinds of possibilities, audible experiences need to be assigned a new level of strategic priority. All too often, companies like SRS are brought in as a feature ‘bolt-on’ in the final stages of product development: a way for manufacturers to add another tick box to the specification sheet of a new device. If more time were spent getting operating system developers and creative agencies to experiment with audible elements early in the experience development process, we would see a much broader range of use cases emerge.
In addition to these new possibilities, there are also more immediate potential improvements to user experience. Lyle explained how SRS can trick the ear into perceiving headphone sound less directly, reducing the sense of ear fatigue.
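SRS have not published how their fatigue-reduction processing works, but one long-established technique with a similar aim is crossfeed: leaking a delayed, attenuated copy of each channel into the opposite ear, mimicking how loudspeakers in a room are heard by both ears. A generic sketch, again with the Web Audio API:

```typescript
// Simple crossfeed: each channel leaks, delayed and attenuated, into
// the opposite ear, softening headphones' hard left/right separation.
// (A generic technique; SRS's actual processing is proprietary.)
function addCrossfeed(ctx: AudioContext, stereoInput: AudioNode): AudioNode {
  const splitter = ctx.createChannelSplitter(2);
  const merger = ctx.createChannelMerger(2);

  stereoInput.connect(splitter);

  // Direct paths: left -> left, right -> right.
  splitter.connect(merger, 0, 0);
  splitter.connect(merger, 1, 1);

  // Crossfeed paths: ~0.3 ms delay and ~-8 dB, roughly what the
  // listener's head imposes on sound from a loudspeaker opposite.
  for (const [from, to] of [[0, 1], [1, 0]] as const) {
    const delay = new DelayNode(ctx, { delayTime: 0.0003 });
    const gain = new GainNode(ctx, { gain: 0.4 }); // ≈ -8 dB
    splitter.connect(delay, from);
    delay.connect(gain).connect(merger, 0, to);
  }

  return merger; // connect this to ctx.destination
}
```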
The MEX initiative is exploring these issues through the ongoing work on Pathway #9, entitled ‘Expand mobile interactions with the neglected dimensions of sound and tactility’.