“Virtual reality” is here to stay, and the tools for authoring virtual reality content are becoming increasingly easy to access. Whether it’s a basic spherical image or a fully interactive “real” virtual reality simulation, the semantics matter less than the perspective shift it can offer to learners and researchers in the liberal arts.
There’s a new technology on the horizon! Quick, let’s have an argument over semantics!
If you spend any time with someone embedded in the world of virtual reality, at some point they’re likely to comment that such-and-such technology “isn’t actual virtual reality, it’s just spherical video.” (The author, in fact, has been guilty of this on several occasions.)
In this post, we’ll break out the different terminology in the space. But first, let’s be clear: “Virtual Reality” has already won the semantic smackdown. Just like we spent the 90s arguing about the difference between “hackers” and “crackers”, this argument has already been lost.
Spherical imaging involves capturing an image of everything around a single point. Think of it as a panorama photo, except the panorama goes all the way around you. Spherical imaging lets you place your viewer at a fixed point, and then they decide where they want to look. It’s a great way to give someone a sense of a place without actually being there. Spherical imaging can capture either still images or video, and the result can be viewed on either a normal computer screen or some type of VR headset. Here’s an example of a spherical image, as it’s captured by the camera. You’ll notice its raw form is kind of stretched and distorted. This is called an “equirectangular” image:
And here’s an example of how you can interact with it. Go ahead and poke, click and drag it – it won’t bite!
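If you’re curious how a viewer “un-distorts” that stretched image, here’s a minimal sketch of the standard equirectangular mapping: each pixel column corresponds to a longitude, each row to a latitude, and together they give a direction to look in. (The function name and image dimensions here are just illustrative.)

```python
import math

def equirect_to_direction(u, v, width, height):
    """Map a pixel (u, v) in an equirectangular image to a unit view
    direction. u runs left-to-right (longitude), v top-to-bottom
    (latitude)."""
    lon = (u / width) * 2 * math.pi - math.pi      # -pi .. pi across the image
    lat = math.pi / 2 - (v / height) * math.pi     # pi/2 (top) .. -pi/2 (bottom)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center pixel of a 1920x1080 image looks straight ahead (+z):
print(equirect_to_direction(960, 540, 1920, 1080))  # (0.0, 0.0, 1.0)
```

A spherical viewer essentially runs this mapping in reverse for every pixel on your screen, which is why the flat “equirectangular” file looks so warped but the interactive view doesn’t.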
There are a few important distinctions to think about with spherical imaging. First off, it’s not three dimensional. Even though you can look all around, you can’t see different sides of an object. This is particularly noticeable when objects are close to the camera. Additionally, your viewer can’t move around. The perspective is stuck wherever the camera was when the image was captured.
Many folks would argue that these two facts disqualify spherical images from being considered “virtual reality.” We disagree, but we’ll get to that later. If you’re interested in capturing your own spherical images, LATIS currently has two Ricoh Theta360 cameras available to borrow. These are a simple, one-button solution for capturing these types of images. If you’d like to give them a try, get in touch!
At its most basic, 3D is just a matter of putting two cameras side by side, in a position that mimics the distance between human eyes. Then you capture two sets of images or videos from those offset positions. When displaying, you just need to send the correct image to the correct eye, and the viewer will have a 3D experience. However, that’s a pretty limited experience, as the “gaze” remains relatively fixed. The viewer can’t turn their head and look elsewhere, and they certainly can’t move around. The more interesting approach combines 3D with spherical imaging.
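The reason that small offset between cameras creates a sense of depth comes down to one classic stereo relationship: nearby objects shift more between the two images than distant ones. A quick sketch of the standard pinhole-stereo formula (the numbers here are illustrative, not from any particular camera):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic pinhole stereo relation: depth = baseline * focal / disparity.
    baseline_m: distance between the two cameras, in meters;
    focal_px: camera focal length, in pixels;
    disparity_px: how far a feature shifts between the left and right
    images, in pixels. Larger shifts mean closer objects."""
    return baseline_m * focal_px / disparity_px

# With a 6.5 cm eye-like baseline and a 1000 px focal length,
# a 10 px disparity corresponds to an object about 6.5 m away:
print(depth_from_disparity(0.065, 1000, 10))  # 6.5
```

This is also why the 3D effect falls off with distance: far-away objects produce disparities of a pixel or less, at which point both eyes see essentially the same image.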
In order to capture spherical 3D, you need two spherical images, offset just like they’d be in the human head. It’s a lot more complicated than putting two spherical cameras next to each other, though. If you did that, you’d only get a 3D image when looking straight ahead or straight behind. At any other position, the cameras would block each other. This is where things get math-y.
When folks capture spherical 3D today, they often do so by combining many traditional two-dimensional cameras in an array, with lots of overlap between the images. Afterwards, software builds two complete spherical images with the right offsets. This is a very processing-intensive approach. Most of the camera arrays available on the market use inexpensive cameras like the GoPro, but require many cameras to generate the 3D effect.
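To get a feel for why these arrays need so many cameras, here’s a rough back-of-the-envelope estimate (purely illustrative, not a spec for any shipping rig): each camera covers a slice of the horizon, and neighboring slices must overlap substantially for the stitching software to work.

```python
import math

def cameras_per_ring(fov_deg, overlap_frac):
    """Rough estimate of how many cameras with a given horizontal field
    of view are needed in one horizontal ring, if each pair of neighbors
    must overlap by overlap_frac of a frame for stitching."""
    effective_coverage = fov_deg * (1 - overlap_frac)  # unique degrees per camera
    return math.ceil(360 / effective_coverage)

# e.g. 120-degree lenses with 50% overlap between neighbors:
print(cameras_per_ring(120, 0.5))  # 6
```

And that’s just one horizontal ring for one eye’s worth of coverage; real 3D spherical rigs also need vertical coverage and enough overlap to reconstruct both offset views, which is how the camera counts climb so quickly.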
If you’ve got something like a Google Cardboard viewer, you can see an example of a 3D Spherical video on YouTube.
Unfortunately, we don’t currently have any equipment for this type of capture. Later in 2016, we expect a variety of more affordable 3D spherical cameras will begin shipping, and we’re excited to explore this space further.
When purists use the term “virtual reality,” they’re thinking about a very literal interpretation of the term. “Real” virtual reality would be an experience so real, you couldn’t differentiate it from actual reality. We’re obviously not there yet, but there are a few basic features that are important to think about.
The first, most important factor in “real” virtual reality is freedom of movement. Within a given space, the viewer should be able to move wherever they want, and look at whatever they want. In a computer generated environment, like a video game, that’s relatively easy. If you want to provide that sort of experience using a real location, it’s a lot harder – after all, you can’t place a camera at every possible location in a room (though some advanced technology is getting close to that).
Today, creating virtual reality generally means building a simulation, using technology similar to what’s used to make video games or animated films. The creator pieces together different 3D models and images, adds animation and interactivity, and then the viewer “plays” the simulation. While free or inexpensive software like Unity3d makes that feasible, it’s still a pretty complicated process.
Another important part of the “real” virtual reality experience is the ability to manipulate objects in a natural way. Some of the newest virtual reality headsets on the market, like the Oculus Rift and HTC Vive, offer hand controllers which allow you to gesture naturally in space. Some technologies even track your movement within a room, so you can walk around.
We’re just getting started exploring these technologies, and are learning to build simulations with Unity3d. If you’d like to work with us on this, please get in touch!