VR video cameras use anywhere from as few as 2 lenses and sensors to as many as 20 or more. In all cases, VR cameras produce raw videos that need to be analyzed and “stitched” together to form the raw video elements that are then used to edit a final 180/360 degree, mono or 3D, VR video.
Most commercial camera systems are bundled with software that will perform a high-quality stitch for that camera, but many third-party stitchers offer additional functionality, performance, and quality. Let’s review some of the leading stitchers on the market.
Let’s review the core steps that video stitching software must go through in order to analyze, arrange, composite, and produce a high-resolution master. All VR video stitchers go through these steps; some present each of these functions in their UI and workflow, while others more fully automate the process. Depending on the level of control you need for a given set of videos, you may need more intervention (and thus more cost and time) to “stitch” your video to the quality you need. Generally, more automation with less intervention yields a faster, higher-value, lower-cost workflow, and lets you focus on telling your story with fewer post-production hassles.
VR video cameras produce a lot of video data. If the VR camera writes all of the footage captured for a 360 degree video onto a single memory card, the aggregate data-rate for the entire scene will be relatively low. Other cameras write multiple streams onto multiple cards, which creates data management and organization challenges but yields more raw data from which to produce a high-resolution VR video. Getting all the videos off the camera’s media and onto a local computer where they can be organized is the first step of stitching. For live streaming, you’ll be receiving video signals into a central input switch where the stitching software can receive them.
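To illustrate the ingest-and-organize step, here is a minimal sketch of grouping offloaded clips by lens so each camera’s footage can be handed to a stitcher as one sequence. The `<take>_cam<N>.mp4` naming convention is purely a hypothetical example; real cameras use their own file-naming schemes.

```python
from collections import defaultdict
from pathlib import Path

def group_clips_by_lens(filenames):
    """Group offloaded clips by lens index so each camera's stream can be
    fed to the stitcher as one ordered sequence.

    Assumes a hypothetical '<take>_cam<N>.mp4' naming convention --
    adjust the parsing for your camera's actual file names.
    """
    streams = defaultdict(list)
    for name in filenames:
        stem = Path(name).stem              # e.g. 'take01_cam3'
        lens = int(stem.rsplit('cam', 1)[1])
        streams[lens].append(name)
    for clips in streams.values():
        clips.sort()                        # keep takes in shooting order
    return dict(streams)
```

A rig that wrote `take01_cam1.mp4`, `take02_cam1.mp4`, and `take01_cam2.mp4` across its cards would come back as two per-lens sequences, ready to hand off as stream 1 and stream 2.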
With the videos in one place, the next step for VR video stitching is for the software to analyze the individual video streams and compare each of them to one another, in search of the common features found between the videos. These overlapping features allow the software to “solve” for how the cameras were set up and how their images relate to one another. With this “camera solution”, the software can orient and distort each video so that they can be aligned and composited together, usually as an equirectangular projection map that represents the full 360 degree video as a rectangular image. Some stitching software offers a set of templates for known cameras, but cannot analyze and “solve” for camera rigs it is unfamiliar with.
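The equirectangular mapping at the heart of this step can be sketched in a few lines: a 3D view direction is converted to longitude/latitude angles, which map linearly onto a 2:1 rectangle. This is a simplified illustration of the projection math only, not any particular stitcher’s implementation.

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a 3D view direction onto equirectangular (u, v) pixel coordinates.

    Longitude (around the vertical axis) spans the full image width;
    latitude spans the height, flattening the whole sphere into a
    2:1 rectangle.
    """
    norm = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)            # -pi .. pi
    lat = math.asin(y / norm)         # -pi/2 .. pi/2
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v
```

For a 4096x2048 output, the straight-ahead direction (0, 0, 1) lands at the image center, and straight up lands on the top edge; the solver’s job is to find the per-camera rotations and lens distortions that make every source pixel agree on its place in this map.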
VR video stitching software tends to solve the camera alignment elegantly, but has no good guess as to where the horizon is in a scene, or where you will want to place the “center” of the scene (where the viewer will be gazing by default). This is an inherent challenge, since the camera rig is often rotated on several axes during the shoot to better align the key action with the centers of as many lenses and sensors as possible. Interactive click/drag gestures, along with yaw/pitch/roll sliders, are used in combination to level the horizon and then rotate the field of view to center the gaze in the clip. Some stitchers accomplish these steps with a few clicks; others require more wrestling with the controls.
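Under the hood, those sliders compose into a single rotation that re-orients every view direction before it is sampled into the output map. A minimal sketch, assuming one common axis convention and rotation order (real stitchers vary on both):

```python
import math

def rotation_matrix(yaw, pitch, roll):
    """Compose yaw (Y axis), pitch (X axis), and roll (Z axis), in radians,
    into a single 3x3 rotation matrix applied as Ry * Rx * Rz."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
    rz = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]

    return matmul(ry, matmul(rx, rz))

def rotate(m, v):
    """Apply a 3x3 rotation matrix to a direction vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]
```

Dragging the yaw slider 90 degrees, for example, swings the forward direction (0, 0, 1) around to point along the X axis; applying the same rotation to every pixel’s view direction re-centers the whole panorama.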
The camera solution aligns all the images, but an additional analysis must create a “scene graph” defining where the images are cut and blended together, so that “seams” and hard edges between the videos are less apparent. Depending on the camera setup, there may be a wide range of overlapping areas between the cameras, allowing for more flexibility in choosing the graph/cut; with fewer cameras, there is less image area to choose from. Most stitching software will try to solve this problem fully automatically; others offer tools to manually adjust these boundaries, using spline/area editors similar to the rotoscoping tools found in compositing and visual-effects software.
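The simplest blend applied along a seam is a linear cross-fade across the overlap region, so neither camera’s edge appears as a hard line. This toy version operates on a single row of grayscale values purely to show the idea; production stitchers use multi-band or optical-flow blending.

```python
def feather_blend(left_px, right_px):
    """Linearly cross-fade two overlapping rows of pixel values.

    Weight ramps from fully the left camera at one edge of the overlap
    to fully the right camera at the other, hiding the hard seam.
    """
    n = len(left_px)
    out = []
    for i in range(n):
        w = i / (n - 1)  # 0.0 at the left edge, 1.0 at the right
        out.append(left_px[i] * (1 - w) + right_px[i] * w)
    return out
```

If the two cameras disagree (say one reads 100 and the other 200 across a three-pixel overlap), the blend ramps smoothly between them instead of jumping at a cut line.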
The individual cameras in a VR rig are often set to different exposure settings: for example, one camera may point directly into the sun in an outdoor setting, while the opposite camera points directly away from it. These exposure differences mean that color and brightness/contrast variations must be normalized across the video streams, so that the final stitched video is uniform and appears to have been photographed with a single global exposure. Further grading, color correction, and effects may be added afterwards, but if the original stitch is not balanced, it will be very hard to salvage the shot later in post-production.
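One simple way to normalize exposure is to compare the same overlap region as seen by two cameras and derive a gain that matches their average brightness. This is a deliberately simplified single-channel sketch; real balancers solve gains (and often per-channel color) jointly across all cameras.

```python
def exposure_gain(ref_overlap, src_overlap):
    """Estimate a global gain that matches a source camera's brightness
    to a reference camera, using pixel samples from their shared
    overlap region."""
    ref_mean = sum(ref_overlap) / len(ref_overlap)
    src_mean = sum(src_overlap) / len(src_overlap)
    return ref_mean / src_mean

def apply_gain(pixels, gain):
    """Scale pixel values by the gain, clamping to the 8-bit ceiling."""
    return [min(255.0, p * gain) for p in pixels]
```

A camera whose overlap reads half as bright as its neighbor gets a gain of 2.0, pulling the two streams to a common exposure before the seams are blended.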
Most VR video cameras will be used with the camera firmly balanced and leveled to the horizon on a tripod or stand, which will appear in the shot along with possible rigging or crew members that need to be removed from the scene. The process for cleaning these elements out of the plate is identical to rig removal in traditional video: either a clean plate photographed for this purpose is inserted, or cloning/patching tools in image editors like Adobe Photoshop are used, and the results are composited into the scene. Most VR video stitchers do not treat this step as part of stitching; rather, they leave it to post-processing software later in the production.
Pixvana’s SPIN Studio includes an integrated “Prep” module that can stitch both mono and 3D/stereo footage from a variety of cameras. Just upload the raw videos directly from your camera’s memory cards to our cloud. SPIN Studio will analyze, stitch, color correct, and output at the maximum resolution possible (ranging from 4K to 10K depending on your camera configuration).
Cloud stitching has tremendous “elastic computation” advantages over any desktop software, as dozens or even hundreds of GPU servers can be spun-up in parallel. If your project has 20 shots that are each 3 minutes long, all 60 minutes of footage can be stitched at the same time, divided across several hundred machines in our data-centers. Welcome to super-stitching!
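The fan-out described above can be sketched with a worker pool: every shot is an independent job, so with enough workers the whole project stitches in roughly the time of the longest single shot. The `stitch_shot` function here is a stand-in; in a real cloud pipeline each job would run on its own GPU server rather than a local thread.

```python
from concurrent.futures import ThreadPoolExecutor

def stitch_shot(shot_id):
    """Placeholder for a per-shot stitch job; a cloud pipeline would
    dispatch each of these to a separate GPU server."""
    return f"{shot_id}: stitched"

def stitch_project(shot_ids, workers=4):
    # Shots are independent of one another, so they can all be
    # processed concurrently and collected back in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(stitch_shot, shot_ids))
```

The design point is that stitching parallelizes across shots with no shared state, which is exactly what makes elastic cloud grids so much faster than a single workstation grinding through clips one at a time.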
Post-production software for VR video stitching emphasizes handling different camera configurations, prioritizing high-quality output, and offering flexible workflows. These stitchers tend to have more control, produce higher-resolution outputs at 8K and above, and have a built-in ability to make corrections and re-render the stitch. Post stitching can be done either on desktop computers or in elastic cloud grids, where many more computers can be assigned to a single stitching job, greatly accelerating performance.
Stitchers that handle “live” stitching focus on taking input from multiple video streams, applying a camera solve and color balance, and outputting at 4K or lower resolution, which can then be uplinked to a data center, encoded into various streaming resolutions, and, with as little latency as possible, delivered to VR headsets or YouTube and Facebook web pages. The above image is from Supersphere’s setup for live production. Live stitchers will always be desktop-based, as stitching is done in a production rig on location: there is currently too much data generated by a VR video camera to send it all to the cloud. Stitching happens locally at the event, then the 4K stream is uploaded for simultaneous broadcast.
Choosing between VR video stitching software for Live or Post-Production workflows is easy, as no package on the market does both well. Post-oriented stitchers are great at what they do, offering more control and higher-quality output, including the 8K-and-above resolutions necessary to achieve maximum immersion. Live-streaming stitchers are awesome at what they do, including sending streams to YouTube’s and Facebook’s live 360 video players, where large audiences can be reached. So whether your project calls for Live or Post stitching will drive which application is right for you.
Mistika VR is a desktop-based stitcher that is part of a larger family of compositing and effects tools from Spanish company SGO. Mistika VR focuses on offering a fine level of manual control and correction for rotoscoping edges, applying optical-flow analysis between camera seams, and high performance on a single desktop workstation. Mistika VR includes support for over 40 camera configurations, which come as presets that can be loaded for a given camera system.
Mistika VR produces high quality stitched VR video that can be uploaded and integrated with Pixvana SPIN Studio for editing and publishing.
Kolor Autopano Video is a 15-year-old desktop software product that was originally developed for still-photography panoramas and was later adapted to 360 video. Since the acquisition of Kolor by GoPro in 2015, its development has focused on integration with GoPro cameras and consumer media efforts. It should be noted that this package also requires Autopano Giga to actually edit the stitch: the two apps work together, requiring jumps back and forth between them depending on which configuration/step is being applied.
Kolor Autopano Video produces stitched VR video that can be uploaded and integrated with Pixvana SPIN Studio for editing and publishing.
Nuke is an advanced visual effects desktop application used primarily to manipulate and combine multiple images. It has a steep price and learning curve, but has established itself as the leading high-end visual effects package for feature-film VFX work. In 2016 Foundry, the company that makes Nuke, added a specialized plugin package for working with VR video to make the process more controllable and flexible.
The Cara VR plugin is NOT a “press the button and stitch” type of solution, and was not designed to be. A skilled VFX artist who is already familiar with Nuke can refine every step in the stitching process with tremendous control. Cara exposes the camera solver, adjustments, warping, and compositing functions, and all individual parameters and values can be manipulated using the entire suite of Nuke features. Cara also includes a stabilizer and a 3D tracker designed to work with VR, and is flexible enough to deal with any camera rig, including custom rigs. Stereo controls are incorporated. For high-end VR video projects with the necessary budget and trained VFX experts on the team, this is perhaps the most configurable VR video stitching solution on the market. Nuke pricing starts at ~$4,500, and Cara VR adds another $4,000+ USD to the package.
Other visual effects software packages that can handle 360 video images for manipulation and adding motion graphics include Adobe After Effects which now incorporates a number of specialized functions, and Black Magic Design Fusion 9 VR.
Most VR camera manufacturers bundle basic stitching functionality with their cameras so that footage can be stitched into a usable panoramic 360 video. In some consumer cameras the stitching happens inside the camera during recording, or through a companion application running on a smartphone connected to the camera via wifi.
For higher-quality stitching, dedicated desktop software is included for Mac/Windows. The quality and features of these stitchers vary greatly, which is why many 3rd-party stitching solutions exist that add advanced features, higher quality, more fine-grained control over stitching behavior, or cloud support for vastly improved speed and throughput when stitching lots of footage simultaneously on multiple servers.
Here are some of the bundled software packages that are included with various cameras:
Pixvana’s SPIN Studio includes an advanced, highly automated, highly scalable VR video stitcher. Here are some of the top capabilities:
Our production team shares their production notes from producing VR video at Comic-Con San Diego.
Unless XR creators shoot high-resolution video masters, their media will become obsolete when headset and camera resolution improve.