Virtual Reality is an exciting new platform for immersive storytelling.

Creating VR video is unlike creating traditional video in almost every way. VR video starts with flat video captured from multiple cameras, which is stitched together into a sphere that surrounds you. Great VR video requires a lot of resolution, up to 50 million pixels per frame!

We need to get these high-resolution video spheres to our viewers without having to transmit giant files. How do we do that? Let’s start with a map.

Mapping flat images correctly onto a round sphere is a tricky problem that mapmakers have worked on for centuries. Google Maps uses the Mercator (mer-KAY-ter) projection, which distorts and stretches anything near the poles. Greenland is NOT the size of Africa. It only looks that way because it is near the pole. If we moved it to the equator, it would be closer in size to Mexico.
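To put a rough number on that stretching: the Mercator projection scales distances near a given latitude by about 1/cos(latitude), so apparent areas grow by the square of that. Here is a quick Python sketch; the latitude used for Greenland is approximate, chosen only for illustration.

```python
import math

def mercator_area_inflation(latitude_deg):
    """Factor by which the Mercator projection inflates apparent area
    at a given latitude (the linear scale is roughly 1/cos(lat))."""
    linear_scale = 1.0 / math.cos(math.radians(latitude_deg))
    return linear_scale ** 2

# Greenland sits mostly between about 60 and 80 degrees north.
print(round(mercator_area_inflation(72), 1))  # ~10.5x apparent area
print(round(mercator_area_inflation(0), 1))   # 1.0 at the equator
```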

Instead of the Mercator, VR video uses an equirectangular projection to fit video into a flat format. It lets you see the whole sphere, but it also adds extra pixels and distortion.
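To make that concrete, here is a minimal Python sketch of how a viewing direction maps to equirectangular texture coordinates. The axis convention (y up, negative z straight ahead) is an assumption for illustration, not a requirement of the format.

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit direction vector to equirectangular texture
    coordinates in [0, 1] x [0, 1] (u = longitude, v = latitude)."""
    longitude = math.atan2(x, -z)                  # -pi..pi around the sphere
    latitude = math.asin(max(-1.0, min(1.0, y)))   # -pi/2..pi/2
    u = 0.5 + longitude / (2.0 * math.pi)
    v = 0.5 + latitude / math.pi
    return u, v

# Every row of the flat frame covers a full circle of longitude, so rows
# near the poles spend the same number of pixels on a much smaller circle.
print(direction_to_equirect_uv(0.0, 0.0, -1.0))  # straight ahead -> (0.5, 0.5)
print(direction_to_equirect_uv(0.0, 1.0, 0.0))   # straight up -> (0.5, 1.0)
```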

Unfortunately, there is no perfect way to correct the projection problem.

Alternative projections can cause less distortion and eliminate the extra pixels, but they can also produce strange artifacts when used for VR video. Video encoders just can’t handle the irregular shapes.

For decades, computer graphics has used cube map projections to represent a sphere. A cube map divides the image into six equal faces, which spreads pixels evenly but doesn’t optimize the view for any single face.
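Here is a small Python sketch of a typical cube-map lookup, picking which face a viewing direction lands on. The face orientation convention is illustrative rather than matching any particular graphics API.

```python
def direction_to_cube_face(x, y, z):
    """Pick which of the six cube-map faces a direction vector hits and
    return (face, u, v) with u, v in [0, 1] on that face."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, major, s, t = ('+x' if x > 0 else '-x'), ax, -z if x > 0 else z, -y
    elif ay >= az:
        face, major, s, t = ('+y' if y > 0 else '-y'), ay, x, z if y > 0 else -z
    else:
        face, major, s, t = ('+z' if z > 0 else '-z'), az, x if z > 0 else -x, -y
    u = 0.5 * (s / major + 1.0)
    v = 0.5 * (t / major + 1.0)
    return face, u, v

# Every face gets the same pixel budget, so the direction the viewer is
# actually facing is treated no better than the floor behind them.
print(direction_to_cube_face(0.0, 0.0, -1.0))  # center of the back face: ('-z', 0.5, 0.5)
```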

VR video is often projected onto a cube. Let’s call it a viewbox. But if we want to increase image quality and lower bandwidth, we must deliver more pixels to one side of the viewbox. We do this with a shape called a “frustum.” A frustum viewbox is like a cube, but we enlarge the front face and shrink the back.
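As a rough illustration of why this helps, the sketch below compares how much of a fixed pixel budget the front face gets on a plain cube versus a frustum viewbox. The edge lengths are made-up values, not Pixvana’s actual parameters, and real allocation depends on the projection rather than surface area alone.

```python
import math

def frustum_face_areas(front_edge, back_edge, depth):
    """Surface areas of a square frustum 'viewbox': one front face,
    one back face, and four identical trapezoidal side faces."""
    slant = math.hypot(depth, (front_edge - back_edge) / 2.0)
    side = 0.5 * (front_edge + back_edge) * slant
    return {"front": front_edge ** 2, "back": back_edge ** 2, "sides": 4 * side}

def front_share(areas):
    """Fraction of the pixel budget the front face receives if pixels
    are spread uniformly over the surface."""
    return areas["front"] / sum(areas.values())

cube = frustum_face_areas(1.0, 1.0, 1.0)      # a plain cube: all faces equal
frustum = frustum_face_areas(1.5, 0.5, 1.0)   # enlarged front, shrunken back
print(round(front_share(cube), 2))     # ~0.17, one sixth of the pixels
print(round(front_share(frustum), 2))  # ~0.32, nearly double the cube's share
```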

 

[Figure: Projection vs. Perception]

[Figure: Projection vs. Perception (skater)]


A frustum is a lot like a square lamp shade where the top and bottom are of unequal size. 
When the viewer watches the video, it still looks like they are inside a sphere, but the video is sharpest when they look straight ahead. We combine multiple viewboxes into a series of optimized viewports. When head motion is detected, the VR video switches between viewports, and each new viewport snaps into perfect focus.

We call this technique Field of View Adaptive Streaming. Viewers get great-looking video, and content creators save money by streaming less data to the places where the viewer rarely looks.
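A simplified sketch of the switching logic: pick the pre-encoded viewport whose sharp “forward” direction is closest to where the viewer is currently looking. The eight-viewport layout below is hypothetical, not Pixvana’s actual configuration.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def pick_viewport(head_direction, viewport_forwards):
    """Choose the pre-encoded viewport whose sharp 'forward' direction is
    closest (by angle) to where the viewer is currently looking."""
    head = normalize(head_direction)
    best_index, best_dot = 0, -2.0
    for i, forward in enumerate(viewport_forwards):
        f = normalize(forward)
        dot = sum(h * c for h, c in zip(head, f))  # cosine of the angle
        if dot > best_dot:
            best_index, best_dot = i, dot
    return best_index

# Hypothetical layout: eight viewports spaced 45 degrees apart around the horizon.
viewports = [(math.sin(math.radians(a)), 0.0, -math.cos(math.radians(a)))
             for a in range(0, 360, 45)]

# Viewer turns their head ~50 degrees to the right: switch to viewport 1 (45 degrees).
head = (math.sin(math.radians(50)), 0.0, -math.cos(math.radians(50)))
print(pick_viewport(head, viewports))  # -> 1
```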

 

 

[Figure: Field of View Adaptive Streaming]

 

Pixvana has built a powerful platform for VR video creation and delivery. Our scalable cloud-based engine is designed for high-speed VR processing.

Higher quality VR video means that VR films will feel more immersive and lifelike, giving everyone a better experience.

This video is part of an ongoing series of technical introductions to the Pixvana Virtual Reality Video Platform. Don’t miss our first video: Why Does VR Video Look Soft?
