Search just got Pinteresting

Pinterest Lens allows users to capture inspiration from the real world and spark an exploration of ideas within the Pinterest service. Launched as a beta in February 2017, the video below shows the concept:

You might recall that in 2013 we introduced ‘Explore’, one of several MEX user modes. These are elemental states of behaviour in digital environments and ‘Explore’ was defined as:

Discovering novelty on an evolving path

Pinterest, even in its 2013 form, represented a particularly strong example of this exploration behaviour. It was the subject of Sofia Svanteson’s poem (starts at 12:15 in the linked video) when she presented on this mode at MEX/13.

With Lens, users can begin their meandering and sometimes whimsical paths of discovery from a new starting point: the myriad colours, textures, objects and visual treats which fill our daily lives in the physical world:

  • A juicy fruit spotted on a market stall could lead to a series of recipes
  • A colourful pigment on a sun-drenched wall could reveal interior design ideas
  • A scarf spotted in a vintage shop could suggest companion items of clothing

By adding this new form of interaction to the overall Pinterest experience, the service becomes present in users’ lives in a new way.

Text-based search requires a certain type of cognitive attention and is unlikely to be used ‘in the moment’. Using a phone camera to capture real world snapshots, however, is an established behaviour and more likely to feel right in these unexpected periods of inspiration.

Visual search is not new. We’ve seen many examples of early augmented reality apps capable of overlaying digital information on the real world through the viewfinder of a smartphone. Where Lens differs from these largely unsuccessful experiments is in its ability to recognise individual objects rather than overlaying secondary data sources such as geo-tagged place names and points of interest.

More importantly still, it taps into Pinterest’s vast knowledge of how images relate to each other in different ways, from colour and texture to recognising specific objects. In doing so, it is more likely to lead to an interesting path of exploration for the user than being able to see the nearest Starbucks plotted as a floating marker within the viewfinder.
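Pinterest has not published the internals of how Lens relates images, but the general idea of ranking a catalogue by visual similarity can be sketched with a deliberately simple colour-histogram comparison. Everything below is illustrative: the bucket size, the cosine measure and the tiny hard-coded ‘images’ are assumptions standing in for real embeddings and a real index.

```python
import math

def colour_histogram(pixels, buckets=4):
    """Coarse RGB histogram: each channel is quantised into `buckets` bins,
    giving a buckets**3-dimensional feature vector, normalised to sum to 1."""
    step = 256 // buckets
    hist = [0] * (buckets ** 3)
    for r, g, b in pixels:
        idx = (r // step) * buckets * buckets + (g // step) * buckets + (b // step)
        hist[idx] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar(query_pixels, catalogue):
    """Rank catalogue images (name -> list of RGB pixels) by similarity
    to the query image, best match first."""
    q = colour_histogram(query_pixels)
    scored = [(name, cosine_similarity(q, colour_histogram(px)))
              for name, px in catalogue.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Hypothetical data: a red-dominated 'fruit' snapshot queried against
# a tiny catalogue of two flat-colour 'images'.
red_fruit = [(200, 30, 30)] * 50 + [(180, 60, 40)] * 30
catalogue = {
    "strawberry_recipe": [(210, 40, 35)] * 60,
    "blue_wall":         [(40, 60, 200)] * 60,
}
ranking = most_similar(red_fruit, catalogue)  # strawberry_recipe ranks first
```

A production system would replace the histogram with learned feature vectors and the linear scan with an approximate nearest-neighbour index, but the shape of the problem — features in, ranked visual neighbours out — is the same.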

We have known for a long time that there is an opportunity to become the ‘Google of object recognition’. I’ve seen a number of companies, large and small, touch on the enormous potential, but ultimately fail to unlock it because they focused solely on engineering and digital imaging prowess rather than user experience. As is often the case, it is Pinterest’s emergence from left-field which suddenly reveals it as an unlikely contender for a leading position in identifying, interpreting and interrelating objects from the real world.

Its ability to be used asynchronously is particularly significant, allowing paths of exploration to start both in the moment and after the event. For instance, it is possible to capture a quick video clip or photo using the phone’s standard camera app, then open it in Pinterest Lens at a later date to start a visual search.

An inspiring example of user-centred design helping an app to expand beyond its original purpose.