Experimenting with Microsoft Hyperlapse
Brief thoughts on the user experience of Microsoft Hyperlapse:
While Hyperlapse corrects motion through a combination of image processing and hiding imperfections through speed, it does not solve the issue of camera positioning. My first experiment was to record a short bike ride, and the challenge proved to be simply getting the camera (in my case a Samsung Galaxy Note 4) mounted in a usable way. I resorted to placing it in a waterproof case on a neck lanyard, but this meant portrait-orientation video and a poor angle whenever I stood on the pedals rather than remaining seated.
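For the curious, the core idea is easy to sketch: keep only every Nth frame to get the speed-up, then compensate for whatever shake remains between the kept frames. The Python/OpenCV snippet below is a toy illustration of that idea only; it is not Microsoft's actual algorithm, and the file names, speed-up factor and decay constant are all made up for the example.

```python
import cv2
import numpy as np

SPEEDUP = 8    # keep one frame in eight (illustrative)
DECAY = 0.9    # how quickly the correction relaxes, so deliberate
               # motion passes through while jitter is damped

cap = cv2.VideoCapture("ride.mp4")          # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
h, w = prev.shape[:2]
out = cv2.VideoWriter("hyperlapse.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))

dx = dy = 0.0   # accumulated corrective shift
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % SPEEDUP:
        continue                            # the speed-up: drop frames
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Estimate the translation between the two kept frames by
    # tracking corner features with sparse optical flow.
    pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 10)
    if pts is not None:
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts, None)
        good = status.ravel() == 1
        if good.any():
            shift = (new_pts[good] - pts[good]).mean(axis=0).ravel()
            # Subtract the observed motion, then let the correction
            # decay: a crude high-pass filter that cancels shake but
            # lets the intentional forward motion through.
            dx = DECAY * (dx - shift[0])
            dy = DECAY * (dy - shift[1])
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    out.write(cv2.warpAffine(frame, M, (w, h)))
    prev_gray = gray

cap.release()
out.release()
```

Run against a phone clip, this translation-only filter produces a watchable but far rougher result than the app does, which is rather the point of using the app.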
For my second attempt, filming a walking trail, I tried to capture the footage as close to ground level as possible, which meant wandering around the countryside stooped over. The results were better and more film-like, but it made for an uncomfortable walk!
Hyperlapses seem to work best as loops, so I made a point of starting and finishing my quick quarter-mile circuit in the same place and with the same aspect.
It feels less like a new form of creativity and more like a way of making an existing form of first-person, accelerated video recording accessible to a group of creators who would otherwise never be able to take these shots. This is software in an empowering form, using large-scale processing and intelligence to overcome the barriers of cost, complex kit and skills normally required to do this kind of filming.
For this experiment, I used a Samsung Galaxy Note 4 and filmed on an esker in the Norfolk countryside. The video and this post were created on the Note 4 and uploaded over Three's HSPA network. There was something strangely satisfying about recording, uploading and blogging about this entire creative act literally in the field, surrounded by birdsong and with a breeze from the sea.
A couple of follow-up observations. Viewed on a bigger desktop screen, the video quality looks poor even at the maximum 720p resolution. In particular, Hyperlapse's smoothing process seems to struggle with small objects like leaves and blades of grass, which just become a homogeneous blur. Also, YouTube detected the shakiness in the video and offered to smooth it out automatically for me, even providing a side-by-side preview comparing the new video with the original, Hyperlapse-generated version. Is Google doing this for all videos created with Hyperlapse, I wonder, as a subtle dig at Microsoft?