A couple weekends ago, I flew down to California to attend Oculus Connect 6, a conference which “gathers the industry’s leading developers, creators and visionaries to celebrate VR’s journey and the road ahead.” 

Oculus Connect 6 (OC6) was exciting in a very different way than OC5. Rather than hyping up new products, game announcements, and exciting experience trailers, Oculus showcased new tech features, new developer features, enterprise-specific versions of their products, business deployment upgrades, and so on. They targeted attendees participating in the growth of the industry more than enthusiasts and “VR-curious” consumers. Here are some of my notes and thoughts from the talks and demos I attended at Oculus Connect 6.

 

Oculus Link

One of the first features I got to try at OC6 was Oculus Link. Oculus Link refers to a USB-C connection between an Oculus Quest and a PC. This connection allows the Oculus Quest to use the PC’s GPU for graphics processing, rather than its onboard chip, which makes for much improved graphics and higher levels of detail in the experience. It also makes the Quest function like a Rift or Rift S, with the Oculus desktop application driving which experiences will run. Oculus will provide a proprietary USB-C cable that’s longer and performs better than a standard USB-C cable would. And this feature will arrive as a simple software update!

Oculus Link also makes developing for the Quest much more accessible. You’re now able to run a Unity scene in-editor, in real time, on the Quest just like you would on a Rift/Vive/etc. No more writing log statements, building an APK, plugging in your Quest and holding it in one hand with a controller in the other, taking off the headset every once in a while to check adb logcat, figuring out the issue, repeat… too real. No more of that!
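For context, the old loop looked roughly like the sketch below: build a development APK from the editor, then install it and tail the logs over adb. The script, menu path, and scene path here are hypothetical, but the adb commands in the comments are the standard ones.

```csharp
// Editor/BuildQuestApk.cs — a hedged sketch of the pre-Link iteration loop.
using UnityEditor;

public static class BuildQuestApk
{
    [MenuItem("Build/Build Quest APK")] // hypothetical menu entry
    public static void Build()
    {
        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" }, // hypothetical scene path
            locationPathName = "Builds/quest.apk",
            target = BuildTarget.Android,
            options = BuildOptions.Development
        };
        BuildPipeline.BuildPlayer(options);

        // Then, from a terminal:
        //   adb install -r Builds/quest.apk
        //   adb logcat -s Unity
        // With Oculus Link, pressing Play in the editor replaces this whole loop.
    }
}
```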

When I tried it out, I was mainly looking at controller tracking stability and latency, headset tracking, compression, and general performance. Controller tracking stability and latency weren’t noticeably different from how they are on the standalone Quest. I went to a very graphically busy part of the scene with fire, particle effects, and what seemed like dynamic lighting, turned my head quickly, and—no juddering! I can’t speak much to the compression since I hadn’t seen the demoed experience on a Rift S before to be able to compare.

 

Passthrough+

Passthrough is the feature we’re familiar with from setting up or leaving the play area on the Quest: a monochromatic view of the real world is displayed in the headset. Passthrough+ steps up this experience by making it accessible on-demand and dewarping the view to make it more comfortable.

 

Hand Tracking (easily the most fascinating subject)

Oculus introduced hand tracking using the monochrome camera sensors already built into the Quest headset!

At Facebook Reality Labs (FRL), they tested several types of sensors, including the existing monochromatic cameras on the Quest since those are the most cost-effective, and recently achieved desirable results with those existing sensors.

While the hardware work was progressing, a UX team at FRL used a setup with high-precision hand tracking to start iterating on design, interaction, and framework ideas for the future of UX design with hand tracking.

Below, I’m going to attempt to capture key points from this talk (it’s totally worth watching).

Since our hands are one of the most functional tools in our lives, a user’s expectation of what they can do with their hands in VR is high. They realized that the more realistic the hand representation in VR was, the higher the expectations of the interactions. Using fictional or morphed hands created lower expectations, as the look of the hands was initially more interesting than their functionality.

They have experimented with several different interactions, including pressing floating buttons for keyboard input, pinch-to-spawn then throw, grabbing/holding and placing, and pushing small objects using the full articulation of your hands. Eventually, they moved into how interactions with existing 2D UI widgets like color pickers, dropdown menus, volume controls, etc. could evolve to work better in VR using your hands.
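As a rough illustration of the pinch gestures mentioned above, pinch detection can be as simple as thresholding the distance between two tracked fingertips. This is only a sketch; the HandJoints type and the 2 cm threshold are my assumptions, not anything Oculus has published.

```csharp
using UnityEngine;

// Hypothetical container for tracked fingertip positions; a real hand-tracking SDK
// would expose many more joints than this.
public struct HandJoints
{
    public Vector3 ThumbTip;
    public Vector3 IndexTip;
}

public static class PinchDetector
{
    // Assumed threshold: treat thumb and index as pinching when the tips are
    // within 2 cm of each other.
    const float PinchThreshold = 0.02f;

    public static bool IsPinching(HandJoints hand)
    {
        return Vector3.Distance(hand.ThumbTip, hand.IndexTip) < PinchThreshold;
    }
}
```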

“Self-haptics” was an important point. Right now, when you interact with a 2D UI you get a visual cue, an audio cue, and a haptic (touch) cue, which is usually a mouse click or a keyboard press. The lack of haptic feedback when pushing a floating button in VR makes the interaction feel particularly incomplete; the haptic feedback gives us a subtle sense of confirmation or completion. Taking that into account, they mentioned the importance of gestures like pinching or tapping your other hand. They showed several menu and widget UIs centered around designing towards these ideas. Super interesting.

They have also iterated on an ideal pointer/raycasting mechanism that makes selecting objects out of reach comfortable. They mentioned how zones of proximity around the hand-tracking-enabled user determine levels of importance from a design standpoint. 
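A minimal sketch of the far-selection idea, assuming a pointer pose derived from the tracked hand (the origin, direction, and 5 m range here are placeholders): cast a ray, pick the first thing it hits, then confirm the selection with a gesture like the pinch above.

```csharp
using UnityEngine;

public static class FarSelect
{
    // Cast a ray from a hand-derived pointer pose and return the first object hit,
    // or null if nothing is within range.
    public static GameObject TrySelect(Vector3 pointerOrigin, Vector3 pointerDirection, float maxDistance = 5f)
    {
        if (Physics.Raycast(pointerOrigin, pointerDirection, out RaycastHit hit, maxDistance))
        {
            return hit.collider.gameObject; // confirm with a pinch before acting on it
        }
        return null;
    }
}
```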

Many interesting topics were covered, and they plan on releasing design guidelines for people interested in experimenting!

Now for the demo.

Farmers Insurance had a hand-tracking experience designed for a field worker in training, which included identifying water damage in a room within a time limit. They mentioned how they had to redesign parts of the experience to meet the expectations of a user with hand tracking. One big pro is that users had no learning overhead getting into the headset; there’s no need to understand how controllers work, just an expected control convention (their hands). A new challenge is that the user now feels like they can do anything. They see a cupboard with a pull knob and they try to pull it, whereas the initial design, made for controllers, used a click or tap to open the cupboard. So they had to change the visual to match the expected hand interaction (push/tap). They emphasized having to make several minor tweaks like this.

I got to try the demo. The hand tracking itself felt good. My finger and hand movement matched the model really well; I did have a few issues with the position tracking of my hands at the beginning, but didn’t notice them again after that. I felt like there was a little more latency with the hand/finger tracking than with the controller tracking. It’s worth noting that they will still be improving and optimizing it quite a bit before rolling it out.

The UX/UI design of the demo experience itself was lacking, mostly because the only interaction they used was collision detection with the hand model. But moving around the whole room, kneeling down, or reaching up to find the water damage was much more compelling and immersive than, say, having to stand at the center of the room, look around, and point and click at the water damage with a laser.
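By “collision detection with the hand model” I mean something like the sketch below: the hand model carries a collider, and hotspots simply react to being touched. The component and the "Hand" tag are hypothetical, not how the demo was necessarily built.

```csharp
using UnityEngine;

// Attach to a hotspot (e.g. a patch of water damage) that has a trigger collider.
// Assumes the hand model has a collider plus a rigidbody and is tagged "Hand".
public class TouchHotspot : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Hand"))
        {
            Debug.Log($"Touched {name}"); // e.g. mark this spot as found
        }
    }
}
```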

Hand tracking will definitely drastically evolve the future of UX for VR, as well as remove a lot of non-gamer user friction. No more having to teach them about controllers, then, when they put on the headset and can’t see the controllers, having to re-teach the controllers… none of that.

 

VR in Education

VR in education has always been something I’m personally interested in, whether it’s teaching concepts that are inherently 3D in a 3D environment, exposing students to diverse scenarios they wouldn’t be able to experience otherwise to grow their soft skills, raising awareness of situations by placing them in those situations, etc. So I was really excited to see, during this session at OC6, that there has been a lot of trial and error with VR in the education space.

One of the panelists working with Seattle Public Schools shared a few use-cases of her own:

A passive (non-interactive) climate change experience took the kids to several affected areas; it shows the impact on flora and fauna, how the landscape is changing, and what is causing it. She felt it promoted genuine emotional responses in the children and gave them a sense of responsibility towards what was shown to them. Watch the session to hear her talk about them.

Language courses had passive experiences that took students to different locations relevant to the language being taught, making cultural immersion much more accessible. Of course it doesn’t replace the real thing, but it’s a big step up from what’s available in classrooms today.

A passive experience from the perspective of another race or sexual orientation built empathy. There is clearly a lot of potential for soft-skills, diversity, and inclusion education for children.

Another panelist was a biology teacher. They worked on an experience that allowed students to navigate a 3D scene at a cellular scale to diagnose biological issues. Students inspected RNA as it floated by a cell to see if it was what they expected, then could navigate into a nearby cell wall to see if what was inside was as expected. It was a partner experience, with one person on a 2D display and the other in the headset.

The concept of one person in the headset and one person out was heavily emphasized by several of the panelists, especially considering students will inevitably make VR a social experience. It also makes it function like a lab, where you might have some students keeping track of the procedure, some driving the experiment, some noting down results, etc. Teamwork! Also, making sure the experiences are interactive and not just passive was important to them.

Another professor was using VR as a collaboration tool for her civil engineering and architecture students. These are inherently 3D concepts that make more sense to learn about in a 3D environment, and using VR as a shared space for that 3D environment makes collaborative learning much more accessible.

 

Oculus Business

After hearing from several companies, like us at Pixvana, that are deploying VR solutions at a larger scale, Oculus took it upon themselves to dive into enterprise VR topics: what they think the superpowers of VR are, what they think the enterprise journey looks like, and how to address some of the deployment issues that exist today.

They spent some time talking about OssoVR’s use-case, a surgery training experience that significantly improved a first-timer’s knowledge of the tools and the procedure for the surgery. They also mentioned Walmart’s use-case, a curriculum of more than 50 training modules that significantly improved their associates’ performance as well.

Oculus’s list of the superpowers of VR:

Muscle Memory; Unlimited Re-Do’s; High-Stakes, No Risks; Time Efficiencies; 3D Objects with Metadata; Instant Collaboration; Big Dollar Savings; Next Level Engagement; Possible Impossible Scenarios; Objective Insights; Varied Perspectives.

Oculus will be releasing enterprise SKUs of the Quest ($999) and Go ($599) headsets. The speaker mentioned a couple of support programs for people using these headsets that would provide hardware, software, and product support.

These headsets will work with their new Oculus Business desktop fleet management app which solves a few issues that exist today with enterprise VR deployment:

  • Can have several headsets in one account
  • Can make the headsets boot up to a menu with a select few apps
  • Can make headsets boot straight into a single app without the ability to go back to Oculus Home (their ‘kiosk’ mode)
  • Can determine which Oculus Dash options these headsets can access
  • Can set up a PIN on these headsets to access an admin mode which allows full Oculus Dash access
  • Can set a controller-free mode (eventually this will be hand tracking mode)

More info and the program application are available on the Oculus Business website.

 

Unity XR Architecture Changes

This will be a little more technical as the talk was geared towards developers that use Unity for building VR experiences.

The Unity representative first talked about how Unity’s existing XR features are moving into packages used through the Package Manager. Unity is taking over hardware input abstraction, so we no longer have to rely on VRTK or build our own abstraction layer. Unity will also natively support many VRTK 4.0- and VRTP-like tools built into its new XR packages (i.e. selection/UI, teleportation, and grab pointers; canvases that work with pointers; the ability to turn anything into a ‘VR tracked pose’/tracked element; built-in component concepts for interactable, interactor, and non-interactable GameObjects; etc.)! All of these features and more will be cross-platform! This will definitely make developing in VR a lot faster and will likely make tools like VRTK and VRTP less necessary.
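To make the interactor/interactable idea concrete, here’s a minimal sketch assuming the preview XR Interaction Toolkit package (com.unity.xr.interaction.toolkit); component names and setup details may differ from what actually ships, and the scene still needs an XRInteractionManager.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

public static class XRSceneSetup
{
    // Make any object grabbable: collider + rigidbody + the interactable component.
    public static void MakeGrabbable(GameObject go)
    {
        if (go.GetComponent<Collider>() == null) go.AddComponent<BoxCollider>();
        if (go.GetComponent<Rigidbody>() == null) go.AddComponent<Rigidbody>();
        go.AddComponent<XRGrabInteractable>();
    }

    // A hand/controller object that points and selects with a ray and a line visual.
    public static GameObject CreateRayInteractor()
    {
        var hand = new GameObject("RightHandRayInteractor");
        hand.AddComponent<XRController>();           // tracked pose + input
        hand.AddComponent<XRRayInteractor>();        // raycast-based selection
        hand.AddComponent<XRInteractorLineVisual>(); // the visible laser line
        return hand;
    }
}
```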

An Oculus representative then talked about a lot of rendering pipeline improvements and guidelines that were a bit over my head. That said, definitely check them out here.

 

 
