Jason and Josh Diamond are twin brothers who have worked extensively as filmmakers and DPs. They are co-founders of the VR production company SuperSphere and partnered with us on our ultra HD Sizzle shoot. We sat down with Jason to learn more about their VR production workflow and perspective on the quickly growing industry.

 


 

Why did you decide to work in VR, and what is your vision for SuperSphere?

Josh and I are creative people and want to tell the best story we can in whatever way makes sense. We've gone very deep into traditional filmmaking methodologies, but we're also incredibly technical people, very interested in how technology can drive and push a particular piece of content forward. As we got involved in VR, we quickly realized that, as a medium, it is the apex of our interests in content and technology, so we dove in head first, all the way into the deep end, with no hesitation. SuperSphere's vision is to create the highest-end VR/AR experiences possible, be it Room-Scale, 360° Video, or any possible permutation of the available and future technologies we're developing, discovering, and integrating.

 

Do you ever differentiate between 360 video and VR video? Why or why not?

We tend to differentiate 360° Video and VR as a whole based on the total end experience. 360° video tends to be a viewed experience: the consumer is watching video presented to them, with no real control over the experience other than looking around and, ideally, being directed where to look by the content. True VR is when the user has more control over the actions and direction of the content through a myriad of input devices or tech. That could be a Hand Controller on a Vive in a Room-Scale experience, or it could be Gaze Points or many other tactile and/or programmatic inputs that allow the user to drive the content. True Immersion typically happens in the 'VR' scenario, but that doesn't discount the value of high-quality 360° Video and how it can impact storytelling or the ability to convey information. They both have a solid place in the VR landscape, and we get excited to see what we can do in those spaces as the jobs and needs come in.
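To make the "Gaze Point" idea concrete, here is a minimal sketch of how a gaze-driven trigger typically works; this is our own illustration, not tied to any particular engine, headset SDK, or SuperSphere project, and the angle and dwell values are assumptions. The viewer's look direction is compared against a hotspot's direction, and a dwell timer fires the interaction once they stay aligned long enough:

```python
import math

DWELL_SECONDS = 2.0          # how long the viewer must hold their gaze (assumed)
HOTSPOT_ANGLE_DEG = 10.0     # angular radius of the gaze target (assumed)

def angle_between(a, b):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

class GazeHotspot:
    """Fires its callback once the viewer's gaze dwells on the hotspot long enough."""
    def __init__(self, direction, on_trigger):
        self.direction = direction
        self.on_trigger = on_trigger
        self.dwell = 0.0

    def update(self, gaze_direction, dt):
        if angle_between(gaze_direction, self.direction) <= HOTSPOT_ANGLE_DEG:
            self.dwell += dt
            if self.dwell >= DWELL_SECONDS:
                self.on_trigger()
                self.dwell = 0.0
        else:
            self.dwell = 0.0  # gaze left the hotspot: reset the timer

# Illustrative use: a hotspot straight ahead that branches the story.
hotspot = GazeHotspot((0, 0, 1), lambda: print("branch to next scene"))
for _ in range(120):                 # ~2 s of frames at 60 fps, looking forward
    hotspot.update((0.02, 0.01, 1.0), 1 / 60)
```

The same pattern generalizes: swap the gaze vector for a controller ray or any other input, and the content responds to the user rather than just playing at them.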

 

How do you decide what camera and rig to use on different shoots?

Just like in traditional production, there is a right camera or tool for the job. We decide based on many factors: deliverables, shoot location, any size restrictions on where the camera can fit, etc. It's really a job-by-job basis to design what's best all around. There is no one perfect camera or rig at this point, so we find the rig that ticks the most boxes for the needs and we get moving.

 

What are your favorite consumer and professional cameras for VR video?

We actually have a fair number of cameras at this point, and we use them all for different things, from scouting cams to production cams. We have Ricoh Theta S, Samsung Gear 360s, GoPros, Sony A7 rigs, Blackmagic Micro Camera rigs, and custom RED Weapon and Helium rigs for the highest-possible-resolution stitches currently. We also use Nokia OZO and Jaunt One VR cams regularly. We've been shooting with the Jaunt One since September 2015 and have logged hundreds of hours and miles on it.

 

Do you have any custom camera rigs that you want to tell us about? How did you create them? What challenges did you encounter?

We honestly can make a rig or shot out of any camera, within reason. Certainly the physical size of a camera will dictate its usage in a rig, but we've done a custom rig with a Phantom Flex 4K for a 1000fps VR shot that came out incredible. I can't say exactly how we did it, but it's possibly one of my favorite shots we've ever done. The most custom rigs we currently have are our RED 6K Weapon and 8K Helium rigs, which give us incredibly high-res stitches of 10-16K+. The challenges are always reducing parallax and nodal-point distances between lenses, plus portability where possible, as well as power, cable, and data management. At some point in the near future there will be a myriad of 'integrated' cameras that help solve these issues and just leave us to create content like we can with a traditional camera.
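To give a rough sense of why parallax between lenses is the perennial challenge, here is a back-of-the-envelope sketch; the baseline and output-resolution numbers are our illustrative assumptions, not SuperSphere's rig specs. It estimates how the spacing between two lenses' nodal points turns into stitch misalignment on the finished equirectangular frame:

```python
import math

def stitch_error_px(baseline_m, subject_dist_m, out_width_px):
    """Approximate stitch misalignment, in output pixels, caused by parallax
    between two lenses whose nodal points sit baseline_m apart, for a subject
    at subject_dist_m, on an equirectangular frame out_width_px wide
    (360 degrees across)."""
    # Angular disparity between the two viewpoints.
    disparity_rad = 2.0 * math.atan((baseline_m / 2.0) / subject_dist_m)
    # Map that angular error onto the 360-degree output width.
    return disparity_rad / (2.0 * math.pi) * out_width_px

# Assumed numbers: a 6 cm inter-lens baseline, 7680 px (8K) equirect master.
for d in (0.5, 1.0, 3.0, 10.0):
    print(f"subject at {d:>4} m -> ~{stitch_error_px(0.06, d, 7680):.1f} px of parallax error")
```

The takeaway is that the error grows fast as subjects approach the rig, which is why near-field action is so hard to stitch and why shrinking the nodal-point distance between lenses matters so much.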

 

What is your VR production workflow and editing process? What are your essential tools? Any editing tips for VR newbies?

Each production, again, is different. We tailor the workflows to the jobs and the deliverables. Knowing your deliverable(s) and working backwards as you work forward is crucial to a tight pipeline.

Depending on the camera we're shooting on, we create rough stitches for editorial and cut in Adobe Premiere Pro; on Macs we use Tim Dashwood's 360VR Toolbox to manipulate our images however we need. Once we're done, we pass the cut on to final stitch and go to color, usually in DaVinci Resolve. If we're using a camera like the Jaunt One, we utilize their cloud services for dailies and final stitch in mono and stereo. Spatialized audio is also a huge component of VR, and we almost always capture ambisonic audio on every job. We're getting ready to add new technology to our pipeline, like Teradek's Sphere, which will give us real-time stitches in a reasonably small form factor to view on an iPad, so we and our clients can get proper on-set monitoring from almost any rig, which is super exciting.

My advice for anyone getting into VR editing or content creation: watch as much VR as you can across multiple platforms and see what you like and what works for your aesthetic. Then go shoot something, or get footage from a friend or company, start cutting, and watch it to see what works and what doesn't. In such a new medium the visual language hasn't been codified yet, and it may never really solidify beyond a loose set of ground rules given how many ways there are to use VR. A technique that works on one shot or sequence in a given piece may never work again for something else, but the fun part is exploring how and why these things work and defining our own specific styles.
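As one concrete example of the kind of image manipulation a 360° editorial tool performs (this is our illustrative sketch, not how 360VR Toolbox or Premiere actually implements it), here is a minimal NumPy reframe that pulls a flat, rectilinear view out of an equirectangular frame, the basic operation behind reframing and rotating a 360° shot in the edit:

```python
import numpy as np

def reframe_equirect(equi, yaw_deg, pitch_deg, fov_deg, out_w, out_h):
    """Extract a flat (rectilinear) view from an equirectangular frame.
    equi: HxWx3 array; yaw/pitch in degrees; fov_deg is the horizontal FOV."""
    H, W = equi.shape[:2]
    f = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # pinhole focal length, px
    # Build a ray for each output pixel, camera looking down +z.
    xs = np.arange(out_w) - (out_w - 1) / 2.0
    ys = np.arange(out_h) - (out_h - 1) / 2.0
    x, y = np.meshgrid(xs, ys)
    rays = np.stack([x, y, np.full_like(x, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Rotate the rays by pitch (around x), then yaw (around y).
    p, t = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])
    rays = rays @ (Ry @ Rx).T
    # Ray direction -> longitude/latitude -> equirect pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])       # -pi .. pi
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))      # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return equi[v, u]  # nearest-neighbour sample
```

A real tool would use bilinear or bicubic sampling and run on the GPU, but the projection math is the same, and the same mapping run in reverse is what lets you paint out rigs or re-orient a shot in post.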

 

What are some hardware and software solutions that may not exist yet, but would create better VR?

GPU and cloud rendering will eventually be the de facto workflow for most VR. We need to offload the initial solve work to the computers and use artists to tweak, comp/roto, and design the final shots. Manpower is much more expensive than computing power, especially with Computational Stereo acquisition/delivery, or eventually Lightfield or any other future tech that will be solely dependent on heavy computing. We consider VR post-production to be an art akin to VFX artistry; it's certainly not a 'set it and forget it' world.

We love making rigs and experimenting, and always will, but we're looking forward to the near future when we can choose from any number of integrated cameras, like we can with traditional cameras, and put them into the pipeline and go. These cameras will all have some sort of live stitching built in, be it for rough dailies or actual "broadcast"-quality streams.

 

Motion Study is a new blog series profiling VR industry leaders, filmmakers, producers, and creators.
