Virtual Reality (VR) experiences made in VR are becoming more abundant, more robust, and more popular. Yet there is still no easy way to talk about these experiences. I could say “this experience was made in a VR headset”, but that’s a mouthful. I could get fancy and call it a “VR-native experience”. Or, fancier still, a “VR experience produced in situ”. None of these are ideal.
I propose a new term: ingenic. Ingenic is an adjective describing content that is consumed in the same medium in which it was produced. It is an entirely made-up word, a portmanteau of interior + genesis, meant to evoke creation from within a medium. The way to describe a VR experience made in VR, then, would be to simply call it an “ingenic experience”.
Google Tilt Brush, Oculus Medium, Unity’s Editor XR, Unreal VR Editor, and Pixvana’s Creator tools are a few examples of ingenic tools today. They allow you to create 3D paintings, sculptures, games, and interactive videos without ever having to take off the headset. A creator uses their entire body to gesture the virtual creation into existence.
Pixvana’s Creator tool in action.
Ingenic tools rise above desktop tools in their intuitiveness. One of the top comments from users of Pixvana’s ingenic editor is how easy and natural it feels to pick up the controllers and start placing graphics and hyperports in space. To be fair, the interfaces in today’s ingenic VR tools are heavily text-based. This is a remnant of flatland design (webpages, books), where text still reigns supreme. But the trajectory of ingenic interface design will lean less on text and more on gesture, voice, and context-aware volumetric objects, making the creation process even more intuitive.
There are many benefits to ingenic creation. For one, a piece of ingenic content is less likely to contain mistakes that break the viewer’s immersion in the medium. A great example of this is viewing 360 video in a VR headset. Leveling the horizon is crucial to creating a sense of immersion: in VR, people are sensitive to even a half-degree pitch in the ground. If our eyes tell us one thing and our inner ear says another, then all of a sudden we are nauseous. It’s easy to spot an off-kilter horizon in the ingenic environment and much more difficult to spot it in the exgenic environment. Only once we’re in the headset do we feel the off-angle horizon, and only then is our suspension of disbelief cut.
I mentioned exgenic just now. It means exactly what it sounds like: exgenic describes content that is consumed in a different medium from the one in which it was produced. When you create VR outside of VR, you are operating at a dimensional deficit. You have to translate two-dimensional gestures on the mouse and keyboard into three-dimensional objects displayed on a two-dimensional screen. It’s a mess. This dimensional deficit is a necessary effect of exgenic creation. During the import/export of content across media boundaries, it can cause information to be lost in translation. (It also manifests as bodily fatigue: repetitive strain injury and carpal tunnel syndrome are not uncommon among 3D modelers using 2D interfaces.)
When we consume an exgenic piece of content, we sometimes become aware of the fact that it is exgenic. There is a gap between what the content apparently intended to say and how I am experiencing it right now. For instance, when a 360 video watched in-headset causes such bodily discomfort that you know, in the back of your mind, there is no way its creator even previewed it in VR before delivering it — that is the gap I’m talking about. It’s the subtle chasm between expectation and realization. This exgenic gap appears when we perceive stitch lines, left/right-eye vertical misalignment in stereoscopic video, or misprojected 2D graphics. Something just feels off, and it takes us out.
An exgenic gap is akin to the Uncanny Valley, in that we aren’t conscious of it once we’ve surpassed the threshold. When people say they don’t like CGI, what they are really saying is that they don’t like bad CGI. Similarly, when we experience ingenic content, the medium recedes to the background of our perception. VR is good, but a lot of poorly executed exgenic VR experiences have poisoned the well. Seeing VR on a low-fidelity, early prototype headset can dissuade a person from ever trying VR again.
During the shift toward spatial media (AR, VR, MR), there is going to be a tendency to either reinvent the wheel or entirely disregard the lessons of flatland information visualization. I think the best way to think about this is to imagine spatial media as another organism added to an ecology of existing organisms, where each organism represents a medium. As Marshall McLuhan noted, mediums never truly die off; they become the content of the challenger medium. In the landscape of mediums, flatland creatures have become the prey, and the spatial tools have become the predators. Both prey and predator must coexist and coevolve, adapting to one another, knowing full well the power has irreversibly shifted toward spatial media.
Nothing will ever be purely ingenic. Nonetheless, the idea of ingenic serves as an important compass for design. It represents an ideal framework for producing and consuming content intentionally, thoughtfully, and intuitively. Eventually it will be a compliment to comment on an exgenic piece and say it feels ingenic. We should not discard the lessons of flat software (flatware) design, but neither should we be afraid to leave those mediums and learn the new, useful, intuitive, ingenic tools of the coming age.