Obligatory YouTube link. Script below the jump!
This year at GDC there were a bunch of virtual reality demos for Project Morpheus and the Oculus Rift, and while they varied in subject and tone they all sort of had a theme going on. There were multiple demos where the player was sitting in a spaceship. Sony’s Project Morpheus featured an underwater adventure where the player was stuck inside of a shark cage. And the Oculus had a demo that featured two players sitting on virtual couches in a virtual living room and controlling fighting robot dolls. And what stands out to me is that all of them focused on more or less stationary players. People who can’t move or at least can’t move much. Even in the space sim demos your ship may be flying around, but you are sitting stationary in a cockpit.
And what this shows, I think, is that we still don’t quite know how to handle giving you a body in VR, and whatever inputs you’re given tend to focus on your environment rather than you. It’s sort of the paradox of contemporary virtual reality. When you put those goggles on you feel like you’re somewhere else. Every tilt and bob and turn of your head is reflected in your view, you sense scale and distance, and you feel like you’re in the space you see even if you know you’re not. The parlance of VR enthusiasts for this is a sense of “presence,” a feeling of being inside of a virtual place that you simply can’t get from a monitor. But that sense of presence is so powerful and immersive that it makes other gameplay abstractions like “Press W to walk” or “Aim with right analog stick” stand out as artificial. It feels like your head is there, but sort of propped up on a pike that floats through game space with mathematical precision. Your arms and legs don’t go with you to these places. Instead you leave them behind to remotely control the robot to which your head is attached. So to minimize that in demos we let you experience a shark attack or control a doll from afar or look around the inside of a spaceship while you fly it. But we don’t really try to simulate a real body that moves around in the game space because we’re not sure how to do that yet.
For example, typically in first person shooters, three directions are tied more or less into one vector. Where you’re quote-unquote “looking” is also where you’re “aiming” and where your physical body is facing. When I turn to the left in Borderlands, my body turns to the left, my view turns to the left, and my gun aims to the left. In VR these things are suddenly uncoupled. Or they should be, because playing Borderlands by aiming with your head seems like it would give you some serious neck strain. But once they’re decoupled you end up with this odd situation where aiming is no longer what you’re looking at but where your gun is pointing. It’s like Trespasser – your head might be doing one thing and your gun can be off doing something else. So do we tie your body’s position to your head’s? Or your aim vector to your body’s? Or should all three be entirely decoupled, each with its own input system? No one really knows what the answer is: TF2’s VR mode, for example, tries to circumvent the problem by simply offering a ton of permutations on which vector is tied to which input.
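To make the coupling question concrete, here’s a minimal sketch of the idea in Python. The mode names and the specific permutations are my own illustration, not TF2’s actual settings – the point is just that head, body, and gun can each carry their own direction, and a “mode” decides which ones get chained together:

```python
from dataclasses import dataclass


@dataclass
class AimState:
    head_yaw: float = 0.0  # driven by the HMD's head tracking
    body_yaw: float = 0.0  # driven by movement input (keyboard/stick)
    gun_yaw: float = 0.0   # driven by mouse or controller aim


def resolve(state: AimState, mode: str) -> tuple[float, float, float]:
    """Return the effective (head, body, gun) yaws for a coupling mode."""
    if mode == "coupled":
        # Classic desktop FPS: one vector does everything.
        return (state.head_yaw, state.head_yaw, state.head_yaw)
    if mode == "decoupled_aim":
        # Head looks around freely; the body turns with the gun.
        return (state.head_yaw, state.gun_yaw, state.gun_yaw)
    if mode == "independent":
        # All three vectors tracked and controlled separately.
        return (state.head_yaw, state.body_yaw, state.gun_yaw)
    raise ValueError(f"unknown coupling mode: {mode}")
```

In the “coupled” mode a head turn drags your aim with it (the neck-strain option); in the others the same head turn leaves your gun where it was, which is exactly the Trespasser-style weirdness described above.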
But the question is larger than just how we want to make first person shooters work in VR. It’s a question of in what ways and how much we want to simulate our bodies in virtual spaces. There are lots of companies out there right now trying to propose answers. We’ve all seen the omnidirectional treadmills, there’s the rumored Oculus VR glove that’s supposedly in the works, there are accelerometer-based controllers like the Move, and then there’s a plain old controller or mouse and keyboard. But I think they may be jumping the gun a bit.
The Oculus is barely even nascent when it comes to any interactivity. We have this one thing we know it does incredibly well – a sense of presence. A sense of really being somewhere. How are we going to use that? Nobody knows yet – there isn’t an audience using these things, there haven’t been experimental failures or experimental successes. There are a lot of tech demos and back-porting of old games, but that’s not the same thing as trying to design for that sense of presence, to really take advantage of that technology in ways that benefit the experience. Virtual Reality isn’t going to be “It’s like Team Fortress but now you’re really there.” I mean, it might be able to support that but it’s not going to be what it’s known for. It’s going to be known for something fundamentally different. Something new. I’ve talked about this before, but inputs tend to shape the sort of experiences a platform can offer. And the Oculus Rift itself is an input that tracks head movement and position. That’s going to fundamentally shape the sort of experiences that will be produced for it. And we don’t know what those experiences are even going to be yet.
So trying to tack on omnidirectional treadmills, or Kinect integration, or whatever else just seems… premature. I get the intent; there’s this deep-seated and arguably deeply misguided attempt to recreate the Holodeck in real life. But that goes far beyond a sense of presence and into full-body alternate worlds. We finally got to a point where this new presence thing is possible, and instead of exploring it and expressing ourselves with it and pushing the edges of what it can do, we’re ready to rush on to the next thing before this thing has even seen a consumer audience. And it seems a shame to dump a powerful new sense like presence in order to chase an impossible goal of full body immersion.
So yeah. Developers working with the Oculus still don’t quite know how to give you a body that doesn’t feel weird, alien, robotic and disconnected. But in a weird way that’s liberating. It frees us from the mechanics of old and lets us focus on what we know the hardware can do really well. And whether it’s a flash in the pan technology or an amazing new platform, I’m eager to see what we can do with what we have in front of us. The rest of the future will come soon enough.