Add Depth to VR Video with Sidewinder

December 14, 2017

By Meredith Alexander Kunz, Adobe Research

Is there a low-cost, easy-capture way to improve the realism of VR video? Adobe Research has found a promising path with Sidewinder, an experimental technology presented in a MAX Sneak by Stephen DiVerdi, senior research scientist.

One of the problems with VR video today is that the camera sits in just one place. That means that when you put the headset on to watch the video, you can only see the scene correctly from a single angle. If you move your head or your body while viewing it, the scene doesn’t seem to change with you.

Sidewinder aims to change that by adding one more piece of information about the scene to create more realistic 3D imagery: a depth map.

“In normal VR, you take a panorama from a point in space, and you wrap that around a sphere,” DiVerdi says. “With our technique, we also take a depth map and wrap it. Our system determines the specific distance away of the pixels you see, and projects the world around you so it can be rendered like a full virtual 3D scene.”
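The idea of wrapping a depth map around a sphere can be illustrated with a short sketch. This is not Sidewinder's actual code, just a minimal numpy example of the underlying geometry: each pixel of an equirectangular panorama corresponds to a direction on the sphere, and scaling that direction by the pixel's depth value yields a 3D point that can be rendered like a virtual scene. The function name and the equirectangular layout conventions are assumptions for illustration.

```python
import numpy as np

def equirect_to_points(depth_map):
    """Illustrative only: project an equirectangular depth map to 3D points.

    depth_map: (H, W) array of per-pixel distances from the camera.
    Returns an (H, W, 3) array of 3D positions: each pixel's direction
    on the unit sphere, scaled by its depth.
    """
    h, w = depth_map.shape
    # Assumed layout: longitude spans [-pi, pi) across columns,
    # latitude spans [pi/2, -pi/2] down rows (pixel centers).
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit direction on the sphere for each pixel.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    # Scale each direction by its depth to place it in the scene.
    return depth_map[..., None] * np.stack([x, y, z], axis=-1)
```

With a constant depth map this reduces to an ordinary panorama sphere; varying depths are what let a renderer show correct parallax as the viewer's head moves.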

Without springing for “lightfield” camera setups that can cost upwards of $100,000, VR filmmakers using Sidewinder technology could rely on more basic cameras to capture a scene, yet still provide a more dimensional, parallax-enabled view that lets viewers look around from different viewpoints.

Mid-grade VR cameras in the $10,000 range can record stereo VR video, and standard computer vision techniques can then compute a depth map from that footage. No additional equipment is required.
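The standard computer vision step mentioned here rests on stereo triangulation: a pixel's disparity between the left and right views is inversely proportional to its depth. A minimal sketch of that relation, assuming a simple pinhole model (full omnidirectional stereo is more involved) with illustrative parameter names:

```python
import numpy as np

def disparity_to_depth(disparity, baseline_m, focal_px, eps=1e-6):
    """Illustrative only: convert a stereo disparity map (in pixels)
    to depth using the pinhole relation depth = baseline * focal / disparity.

    baseline_m: distance between the two camera viewpoints, in meters.
    focal_px:   focal length expressed in pixels.
    Near-zero disparities (very distant or unmatched pixels) are
    clamped to eps to avoid division by zero.
    """
    disparity = np.asarray(disparity, dtype=float)
    return baseline_m * focal_px / np.maximum(disparity, eps)
```

In practice the disparity map itself would come from a stereo-matching algorithm run on the camera's left/right video streams; the formula above is just the final conversion to distances that fills in the depth map.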

Viewers who put on a 3D headset to try Sidewinder at MAX felt the tool’s power immediately. “You can move your head to ‘peer’ around objects, and it really does feel very different,” says DiVerdi, echoing remarks by Sneaks host Kumail Nanjiani as he tested it out. “There’s a huge difference in your sense of presence.”

In the future, Adobe Research scientists hope to improve the tool’s results by cleaning up the depth data and allowing users to correct artifacts themselves.

Contributors:

Stephen DiVerdi, senior research scientist; Aaron Hertzmann, principal scientist; Zhili Chen, computer scientist (Adobe Research)

Ana Belen Serrano (Adobe Research intern)

Diego Gutierrez, Belen Masia (Universidad de Zaragoza, Spain)
