
VR Build and Run

  • 25 min read
  • 22 Feb 2016


Yeah well, this is cool and everything, but where's my VR? I WANT MY VR!

Hold on kid, we're getting there.

In this article, we are going to set up a project that can be built and run with a virtual reality head-mounted display (HMD) and then talk more in depth about how the VR hardware technology really works. We will be discussing the following topics:

  • The spectrum of the VR device integration software
  • Installing and building a project for your VR device
  • The details and defining terms for how the VR technology really works


VR device integration software

Before jumping in, let's understand the possible ways to integrate our Unity project with virtual reality devices. In general, your Unity project must include a camera object that can render stereographic views, one for each eye on the VR headset.

Software that integrates applications with VR hardware spans a spectrum, from built-in support and device-specific interfaces to device-independent and platform-independent ones.

Unity's built-in VR support

Since Unity 5.1, support for VR headsets is built right into Unity. At the time of writing this article, there is direct support for the Oculus Rift and Samsung Gear VR (which is driven by the Oculus software). Support for other devices has been announced, including the Sony PlayStation Morpheus. You can use a standard camera component, such as the one attached to Main Camera or to the standard character asset prefabs. When your project is built with Virtual Reality Supported enabled in Player Settings, it renders stereographic camera views and runs on an HMD.
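If you want to verify at runtime that this built-in support is active, here is a minimal sketch using the Unity 5.x UnityEngine.VR API (later Unity versions renamed it UnityEngine.XR); attach it to any object in the scene:

using UnityEngine;
using UnityEngine.VR;   // Unity 5.x namespace; later versions renamed it to UnityEngine.XR

// A minimal sketch: log whether built-in VR support is active and
// whether a supported headset is connected.
public class VRStatus : MonoBehaviour
{
    void Start()
    {
        // True when Virtual Reality Supported is enabled in Player Settings
        Debug.Log("VR enabled: " + VRSettings.enabled);

        // True when a supported HMD (for example, a Rift) is detected
        Debug.Log("HMD present: " + VRDevice.isPresent);
    }
}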

The device-specific SDK

If a device is not directly supported in Unity, the device manufacturer will probably publish the Unity plugin package. An advantage of using the device-specific interface is that it can directly take advantage of the features of the underlying hardware.

For example, Valve and Google have device-specific SDKs and Unity packages for the Vive and Cardboard, respectively. If you're using one of these devices, you'll probably want to use the corresponding SDK and Unity package. (At the time of writing this article, these devices are not a part of Unity's built-in VR support.) Even Oculus, supported directly in Unity 5.1, provides SDK utilities to augment that interface (see https://developer.oculus.com/documentation/game-engines/latest/concepts/unity-intro/).

Device-specific software locks each build into the specific device. If that's a problem, you'll either need to do some clever coding, or take one of the following approaches instead.

The OSVR project

In January 2015, Razer Inc. led a group of industry leaders to announce the Open Source Virtual Reality (OSVR) platform (for more information on this, visit http://www.osvr.com/) with plans to develop open source hardware and software, including an SDK that works with multiple devices from multiple vendors. The open source middleware project provides device-independent SDKs (and Unity packages) so that you can write your code to a single interface without having to know which devices your users are using.

With OSVR, you can build your Unity game for a specific operating system (such as Windows, Mac, and Linux) and then let the user configure the app (after they download it) for whatever hardware they're going to use. At the time of writing this article, the project is still in its early stage, is rapidly evolving, and is not ready for this article. However, I encourage you to follow its development.

WebVR

WebVR (for more information, visit http://webvr.info/) is a JavaScript API that is being built directly into major web browsers. It's like WebGL (the 2D and 3D graphics API for the web) with VR rendering and hardware support. Now that Unity 5 has introduced WebGL builds, I expect WebVR support to follow, if not in Unity itself then from a third-party developer.

As we know, browsers run on just about any platform. So, if you target your game to WebVR, you don't even need to know the user's operating system, let alone which VR hardware they're using! That's the idea anyway. New technologies, such as the upcoming WebAssembly, which is a new binary format for the Web, will help to squeeze the best performance out of your hardware and make web-based VR viable.

3D worlds

There are a number of third-party 3D world platforms that provide multiuser social experiences in shared virtual spaces. You can chat with other players, move between rooms through portals, and even build complex interactions and games without having to be an expert.

For example, VRChat lets you develop 3D spaces and avatars in Unity, export them using their SDK, and load them into VRChat for you and others to share over the Internet in a real-time social VR experience.

Creating the MeMyselfEye prefab

To begin, we will create an object that will be a proxy for the user in the virtual environment.

Let's create the object using the following steps:

  1. Open Unity and the project from the last article. Then, open the Diorama scene by navigating to File | Open Scene (or double-click on the scene object in Project panel, under Assets).
  2. From the main menu bar, navigate to GameObject | Create Empty.
  3. Rename the object MeMyselfEye (hey, this is VR!).
  4. Set its position up close into the scene, at Position (0, 1.4, -1.5).
  5. In the Hierarchy panel, drag the Main Camera object into MeMyselfEye so that it's a child object.
  6. With the Main Camera object selected, reset its transform values (in the Transform panel, in the upper right section, click on the gear icon and select Reset).

The Game view should show that we're inside the scene. If you recall the Ethan experiment that we did earlier, I picked a Y-position of 1.4 so that we'll be at about eye level with Ethan.
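If it helps to see what those editor steps amount to, here is a purely illustrative runtime sketch that builds the same hierarchy in code (we'll keep using the editor workflow; the script name is just an example):

using UnityEngine;

// Purely illustrative: build the same MeMyselfEye hierarchy in code,
// mirroring the editor steps above.
public class MeMyselfEyeSetup : MonoBehaviour
{
    void Start()
    {
        // Steps 2-4: an empty parent object positioned up close into the scene
        GameObject rig = new GameObject("MeMyselfEye");
        rig.transform.position = new Vector3(0f, 1.4f, -1.5f);

        // Steps 5-6: make the Main Camera a child and reset its local transform
        Camera mainCamera = Camera.main;
        mainCamera.transform.SetParent(rig.transform);
        mainCamera.transform.localPosition = Vector3.zero;
        mainCamera.transform.localRotation = Quaternion.identity;
    }
}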

Now, let's save this as a reusable prefabricated object, or prefab, in the Project panel, under Assets:

  1. In Project panel, under Assets, select the top-level Assets folder, right-click and navigate to Create | Folder. Rename the folder Prefabs.
  2. Drag the MeMyselfEye object from the Hierarchy panel into the Project panel, under the Assets/Prefabs folder, to create a prefab.

Now, let's configure the project for your specific VR headset.

Build for the Oculus Rift

If you have a Rift, you've probably already downloaded Oculus Runtime, demo apps, and tons of awesome games. To develop for the Rift, you'll want to be sure that the Rift runs fine on the same machine on which you're using Unity.

Unity has built-in support for the Oculus Rift. You just need to configure your Build Settings..., as follows:

  1. From the main menu bar, navigate to File | Build Settings....
  2. If the current scene is not listed under Scenes In Build, click on Add Current.
  3. Choose PC, Mac, & Linux Standalone from the Platform list on the left and click on Switch Platform.
  4. Choose your Target Platform OS from the Select list on the right (for example, Windows).
  5. Then, click on Player Settings... and go to the Inspector panel.
  6. Under Other Settings, check the Virtual Reality Supported checkbox and click on Apply if the Changing editor vr device dialog box pops up.

To test it out, make sure that the Rift is properly connected and turned on. Click on the Play button at the top center of the Unity editor. Put on the headset, and IT SHOULD BE AWESOME! Within the Rift, you can look all around—left, right, up, down, and behind you. You can lean over and lean in. Using the keyboard, you can make Ethan walk, run, and jump just like we did earlier.

Now, you can build your game as a separate executable app using the following steps. Most likely, you've done this before, at least for non-VR apps. It's pretty much the same:

  1. From the main menu bar, navigate to File | Build Settings....
  2. Click on Build and set its name.
  3. I like to keep my builds in a subdirectory named Builds; create one if you want to.
  4. Click on Save.

An executable will be created in your Builds folder. If you're on Windows, there may also be a rift_Data folder containing the built data. Run Diorama as you would any other executable application by double-clicking on it. Choose the Windowed checkbox option so that, when you're ready to quit, you can close the window with the standard Close icon in its upper-right corner.
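If you find yourself rebuilding often, the same build can also be scripted. Here is a minimal editor-script sketch (place it in an Editor folder); the scene path, output path, and menu label are assumptions, so adjust them to match your project:

using UnityEditor;

// A minimal sketch of scripting the standalone build. The scene path and
// output path are assumptions; change them to match your project.
public static class BuildDiorama
{
    [MenuItem("Tools/Build Diorama (Windows)")]
    public static void Build()
    {
        string[] scenes = { "Assets/Diorama.unity" };
        BuildPipeline.BuildPlayer(scenes, "Builds/Diorama.exe",
            BuildTarget.StandaloneWindows64, BuildOptions.None);
    }
}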

Build for Google Cardboard

Read this section if you are targeting Google Cardboard on Android and/or iOS.

A good starting point is the Google Cardboard for Unity, Get Started guide (for more information, visit https://developers.google.com/cardboard/unity/get-started).

The Android setup

If you've never built for Android, you'll first need to download and install the Android SDK. Take a look at the Unity manual's Android SDK Setup page (http://docs.unity3d.com/Manual/android-sdksetup.html). You'll need to install Android Studio (or at least the smaller SDK Tools) and other related tools, such as the Java Development Kit (JDK) and the USB drivers.

It might be a good idea to first build, install, and run another Unity project without the Cardboard SDK to ensure that you have all the pieces in place. (A scene with just a cube would be fine.) Make sure that you know how to install and run it on your Android phone.

The iOS setup

A good starting point is the Unity manual's Getting Started with iOS Development guide (http://docs.unity3d.com/Manual/iphone-GettingStarted.html). You can only perform iOS development from a Mac. You must have an approved Apple Developer account set up (including the standard annual membership fee). Also, you'll need to download and install a copy of the Xcode development tools (via the Mac App Store).

It might be a good idea to first build, install, and run another Unity project without the Cardboard SDK to ensure that you have all the pieces in place. (A scene with just a cube would be fine). Make sure that you know how to install and run it on your iPhone.

Installing the Cardboard Unity package

To set up our project to run on Google Cardboard, download the SDK from https://developers.google.com/cardboard/unity/download.

Within your Unity project, import the CardboardSDKForUnity.unitypackage assets package, as follows:

  1. From the main menu bar, navigate to Assets | Import Package | Custom Package....
  2. Find and select the CardboardSDKForUnity.unitypackage file.
  3. Ensure that all the assets are checked, and click on Import.

Explore the imported assets. In the Project panel, the Assets/Cardboard folder includes a bunch of useful stuff, including the CardboardMain prefab (which, in turn, contains a copy of CardboardHead, which contains the camera). There is also a set of useful scripts in the Cardboard/Scripts/ folder. Go check them out.

Adding the camera

Now, we'll put the Cardboard camera into MeMyselfEye, as follows:

  1. In the Project panel, find CardboardMain in the Assets/Cardboard/Prefabs folder.
  2. Drag it onto the MeMyselfEye object in the Hierarchy panel so that it's a child object.
  3. With CardboardMain selected in Hierarchy, look at the Inspector panel and ensure the Tap is Trigger checkbox is checked.
  4. Select the Main Camera in the Hierarchy panel (inside MeMyselfEye) and disable it by unchecking the checkbox at the top left of its Inspector panel, next to its name.

Finally, apply these changes back onto the prefab, as follows:

  1. In the Hierarchy panel, select the MeMyselfEye object. Then, in its Inspector panel, next to Prefab, click on the Apply button.
  2. Save the scene.

We have now replaced the default Main Camera with the VR one.

The build settings

If you know how to build and install from Unity to your mobile phone, doing it for Cardboard is pretty much the same:

  1. From the main menu bar, navigate to File | Build Settings....
  2. If the current scene is not listed under Scenes In Build, click on Add Current.
  3. Choose Android or iOS from the Platform list on the left and click on Switch Platform.
  4. Then, click on Player Settings… in the Inspector panel.
  5. For Android, ensure that Other Settings | Virtual Reality Supported is unchecked, as that would be for GearVR (via the Oculus drivers), not Cardboard Android!
  6. Navigate to Other Settings | Bundle Identifier and enter a valid string, such as com.YourName.VRisAwesome.
  7. Under Resolution and Presentation, set Default Orientation to Landscape Left.

Play Mode

To test it out, you do not need your phone connected. Just press the Play button at the top center of the Unity editor to enter Play Mode. You will see the split-screen stereographic views in the Game view panel.

While in Play Mode, you can simulate head movement as if you were viewing the scene through a Cardboard headset. Use Alt + mouse-move to pan and tilt forward or backward. Use Ctrl + mouse-move to tilt your head from side to side.

You can also simulate magnetic clicks (we'll talk more about user input in a later article) with mouse clicks.

Note that since this emulates running on a phone, without a keyboard, the keyboard keys that we used to move Ethan do not work now.

Building and running in Android

To build your game as a separate executable app, perform the following steps:

  1. From the main menu bar, navigate to File | Build & Run.
  2. Set the name of the build. I like to keep my builds in a subdirectory named Builds; you can create one if you want.
  3. Click on Save.

This will generate an Android executable .apk file, and then install the app onto your phone. The following screenshot shows the Diorama scene running on an Android phone with Cardboard (and Unity development monitor in the background).

Building and running in iOS

To build your game and run it on the iPhone, perform the following steps:

  1. Plug your phone into the computer via a USB cable/port.
  2. From the main menu bar, navigate to File | Build & Run.

This creates an Xcode project, launches Xcode, builds your app inside Xcode, and then installs it onto your phone.

Antique stereograph (source: https://www.pinterest.com/pin/493073859173951630/)

The device-independent clicker

At the time of writing this article, VR input has not yet been settled across all platforms. Input devices may or may not fit under Unity's own Input Manager and APIs. In fact, input for VR is a huge topic and deserves its own book. So here, we will keep it simple.

As a tribute to the late Steve Jobs and a throwback to the origins of Apple Macintosh, I am going to limit these projects to mostly one-click inputs! Let's write a script for it, which checks for any click on the keyboard, mouse, or other managed device:

  1. In the Project panel, select the top-level Assets folder.
  2. Right-click and navigate to Create | Folder. Name it Scripts.
  3. With the Scripts folder selected, right-click and navigate to Create | C# Script. Name it Clicker.
  4. Double-click on the Clicker.cs file in the Project panel to open it in the MonoDevelop editor.
  5. Now, edit the Script file, as follows:
    using UnityEngine;
    using System.Collections;

    public class Clicker
    {
        public bool clicked()
        {
            return Input.anyKeyDown;
        }
    }
  6. Save the file.

If you are developing for Google Cardboard, you can add a check for the Cardboard's integrated trigger when building for mobile devices, as follows:

using UnityEngine;
using System.Collections;

public class Clicker
{
    public bool clicked()
    {
#if (UNITY_ANDROID || UNITY_IPHONE)
        return Cardboard.SDK.CardboardTriggered;
#else
        return Input.anyKeyDown;
#endif
    }
}

Any scripts that we write that require user clicks will use this Clicker file. The idea is that we've isolated the definition of a user click to a single script, and if we change or refine it, we only need to change this file.
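For instance, a minimal sketch of another script that consumes Clicker might look like the following (this ClickLogger example is purely illustrative and not one of the book's project scripts):

using UnityEngine;

// Illustrative only: log a message whenever the device-independent click fires.
public class ClickLogger : MonoBehaviour
{
    private Clicker clicker = new Clicker();

    void Update()
    {
        if (clicker.clicked())
        {
            Debug.Log("Clicked!");
        }
    }
}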

How virtual reality really works

So, with your headset on, you experienced the diorama! It appeared 3D, it felt 3D, and maybe you even had a sense of actually being there inside the synthetic scene. I suspect that this isn't the first time you've experienced VR, but now that we've done it together, let's take a few minutes to talk about how it works.

The strikingly obvious thing is, VR looks and feels really cool! But why?

Immersion and presence are the two words used to describe the quality of a VR experience. The Holy Grail is to increase both to the point where it seems so real, you forget you're in a virtual world. Immersion is the result of emulating the sensory inputs that your body receives (visual, auditory, motor, and so on). This can be explained technically. Presence is the visceral feeling that you get being transported there—a deep emotional or intuitive feeling. You can say that immersion is the science of VR, and presence is the art. And that, my friend, is cool.

A number of different technologies and techniques come together to make the VR experience work, which can be separated into two basic areas:

  • 3D viewing
  • Head-pose tracking

In other words, displays and sensors, like those built into today's mobile devices, are a big reason why VR is possible and affordable today.

Suppose the VR system knows exactly where your head is positioned at any given moment in time. Suppose that it can immediately render and display the 3D scene for this precise viewpoint stereoscopically. Then, wherever and whenever you moved, you'd see the virtual scene exactly as you should. You would have a nearly perfect visual VR experience. That's basically it. Ta-dah!

Well, not so fast. Literally.

Stereoscopic 3D viewing

Split-screen stereography was discovered not long after the invention of photography, like the popular stereograph viewer from 1876 shown in the following picture (B.W. Kilburn & Co, Littleton, New Hampshire; see http://en.wikipedia.org/wiki/Benjamin_W._Kilburn). A stereo photograph has separate views for the left and right eyes, which are slightly offset to create parallax. This fools the brain into thinking that it's a truly three-dimensional view. The device contains separate lenses for each eye, which let you easily focus on the photo close up.

vr-build-and-run-img-2

Similarly, rendering these side-by-side stereo views is the first job of the VR-enabled camera in Unity.

Let's say that you're wearing a VR headset and you're holding your head very still so that the image looks frozen. It still appears better than a simple stereograph. Why?

The old-fashioned stereograph has two relatively small images mounted side by side in a rectangular frame. When your eye is focused on the center of the view, the 3D effect is convincing, but you will see the boundaries of the view. Move your eyeballs around (even with your head still), and any remaining sense of immersion is totally lost. You're just an observer on the outside peering into a diorama.

Now, consider what an Oculus Rift screen looks like without the headset (see the following screenshot):

vr-build-and-run-img-3

The first thing that you will notice is that each eye has a barrel-shaped view. Why is that? The headset lens is a very wide-angle lens, so when you look through it you get a nice wide field of view. In fact, it is so wide (and tall) that it distorts the image (a pincushion effect). The graphics software (SDK) applies an inverse of that distortion (barrel distortion) so that it looks correct to us through the lenses. This is referred to as an ocular distortion correction. The result is an apparent field of view (FOV) that is wide enough to include much more of your peripheral vision. For example, the Oculus Rift DK2 has an FOV of about 100 degrees.

Also, of course, the view angle from each eye is slightly offset, comparable to the distance between your eyes, or the Inter-Pupillary Distance (IPD). IPD is used to calculate the parallax and can vary from one person to the next. (The Oculus Configuration Utility includes a tool to measure and configure your IPD. Alternatively, you can ask your eye doctor for an accurate measurement.)
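To make the parallax idea concrete, here is a purely illustrative sketch that offsets a pair of cameras by half the IPD each. In practice, Unity's built-in VR support or the device SDK positions the eye cameras for you:

using UnityEngine;

// Purely illustrative: each eye camera is offset by half the IPD, which is
// what produces the parallax between the two rendered views.
public class SimpleStereoRig : MonoBehaviour
{
    public Camera leftEye;
    public Camera rightEye;
    public float ipdInMeters = 0.064f;   // a typical adult IPD of about 64 mm

    void Start()
    {
        leftEye.transform.localPosition = new Vector3(-ipdInMeters / 2f, 0f, 0f);
        rightEye.transform.localPosition = new Vector3(ipdInMeters / 2f, 0f, 0f);
    }
}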

It might be less obvious, but if you look closer at the VR screen, you see color separations, like you'd get from a color printer whose print head is not aligned properly. This is intentional. Light passing through a lens is refracted at different angles based on the wavelength of the light. Again, the rendering software does an inverse of the color separation so that it looks correct to us. This is referred to as a chromatic aberration correction. It helps make the image look really crisp.

Resolution of the screen is also important to get a convincing view. If it's too low-res, you'll see the pixels, or what some refer to as a screen door effect. The pixel width and height of the display is an oft-quoted specification when comparing HMDs, but the pixels per inch (ppi) value may be more important. Other innovations in display technology, such as pixel smearing and foveated rendering (showing higher-resolution detail exactly where the eyeball is looking), will also help reduce the screen door effect.

When experiencing a 3D scene in VR, you must also consider the frames per second (FPS). If FPS is too slow, the animation will look choppy. Things that affect FPS include the graphics processor (GPU) performance and complexity of the Unity scene (number of polygons and lighting calculations), among other factors. This is compounded in VR because you need to draw the scene twice, once for each eye. Technology innovations, such as GPUs optimized for VR, frame interpolation and other techniques, will improve the frame rates. For us developers, performance-tuning techniques in Unity, such as those used by mobile game developers, can be applied in VR. These techniques and optics help make the 3D scene appear realistic.
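As a starting point for keeping an eye on performance, here is a small illustrative sketch that warns when the frame rate drops below a target (the 75 FPS threshold is an assumption, matching the DK2's display; adjust it for your device):

using UnityEngine;

// A small sketch for watching frame rate while tuning a scene.
// Time.deltaTime is the duration of the last frame, so 1/deltaTime approximates FPS.
public class FpsMonitor : MonoBehaviour
{
    public float targetFps = 75f;   // assumed target; adjust for your device

    void Update()
    {
        float fps = 1f / Time.deltaTime;
        if (fps < targetFps)
        {
            Debug.LogWarning("Frame rate dropped to " + fps.ToString("F0") + " FPS");
        }
    }
}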

Sound is also very important—more important than many people realize. VR should be experienced while wearing stereo headphones. In fact, when the audio is done well but the graphics are pretty crappy, you can still have a great experience. We see this a lot in TV and cinema. The same holds true in VR. Binaural audio gives each ear its own stereo view of a sound source in such a way that your brain imagines its location in 3D space. No special listening devices are needed. Regular headphones will work (speakers will not). For example, put on your headphones and visit the Virtual Barber Shop at https://www.youtube.com/watch?v=IUDTlvagjJA. True 3D audio, such as VisiSonics (licensed by Oculus), provides an even more realistic spatial audio rendering, where sounds bounce off nearby walls and can be occluded by obstacles in the scene to enhance the first-person experience and realism.

Lastly, the VR headset should fit your head and face comfortably so that it's easy to forget that you're wearing it and should block out light from the real environment around you.

Head tracking

So, we have a nice 3D picture that is viewable in a comfortable VR headset with a wide field of view. If this was it and you moved your head, it'd feel like you have a diorama box stuck to your face. Move your head and the box moves along with it, and this is much like holding the antique stereograph device or the childhood View Master. Fortunately, VR is so much better.

The VR headset has a motion sensor (IMU) inside that detects spatial acceleration and rotation rate on all three axes, providing what's called the six degrees of freedom. This is the same technology that is commonly found in mobile phones and some console game controllers. With the sensor mounted in your headset, when you move your head, the current viewpoint is recalculated and used when the next frame's image is drawn. This is referred to as motion detection.

Current motion sensors may be good enough for playing mobile games on a phone, but for VR they're not accurate enough. Their inaccuracies (rounding errors) accumulate over time as the sensor is sampled thousands of times per second, so the system may eventually lose track of where you are in the real world. This drift is a major shortfall of phone-based VR headsets such as Google Cardboard. They can sense your head motion, but they lose track of your head position.
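To see why such tiny errors matter, consider this toy calculation (it is not a real IMU model): double-integrating an accelerometer reading that carries a small constant bias produces an apparent position error that grows with the square of time, even if the head never moves:

using UnityEngine;

// A toy illustration of positional drift (not a real IMU model): integrating a
// biased acceleration into velocity, then into position, accumulates a large error.
public class DriftDemo : MonoBehaviour
{
    void Start()
    {
        float accelBias = 0.01f;   // m/s^2, a small assumed measurement bias
        float dt = 0.001f;         // 1000 samples per second
        float velocity = 0f;
        float position = 0f;

        for (int i = 0; i < 60000; i++)   // 60 seconds, head actually stationary
        {
            velocity += accelBias * dt;
            position += velocity * dt;
        }
        // After one minute, the estimated position is off by roughly 18 meters.
        Debug.Log("Apparent position error after 60 s: " + position + " m");
    }
}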

High-end HMDs account for drift with a separate positional tracking mechanism. The Oculus Rift does this with outside-in positional tracking, where an array of (invisible) infrared LEDs on the HMD is read by an external optical sensor (an infrared camera) to determine your position. You need to remain within the camera's view for the head tracking to work.

Alternatively, the Steam VR Vive Lighthouse technology takes the opposite approach: two or more simple laser-emitting base stations are placed in the room (much like the lasers in a barcode reader at the grocery checkout), and optical sensors on the headset read the sweeping beams to determine your position.

Either way, the primary purpose is to accurately find the position of your head (and other similarly equipped devices, such as handheld controllers).

Together, the position, tilt, and the forward direction of your head—or the head pose—is used by the graphics software to redraw the 3D scene from this vantage point. Graphics engines such as Unity are really good at this.
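For the curious, the head pose is also exposed to scripts. Here is a minimal illustrative sketch using the Unity 5.x UnityEngine.VR API that reads the pose each frame and applies it to a transform; with the built-in VR support, this already happens automatically for the VR camera:

using UnityEngine;
using UnityEngine.VR;   // Unity 5.x namespace; later versions renamed it to UnityEngine.XR

// Illustrative only: read the tracked head pose each frame and apply it to this
// transform. Unity's built-in VR support already does this for the VR camera.
public class HeadPoseFollower : MonoBehaviour
{
    void Update()
    {
        transform.localPosition = InputTracking.GetLocalPosition(VRNode.Head);
        transform.localRotation = InputTracking.GetLocalRotation(VRNode.Head);
    }
}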

Now, let's say that the screen is getting updated at 90 FPS, and you're moving your head. The software determines the head pose, renders the 3D view, and draws it on the HMD screen. However, you're still moving your head. So, by the time it's displayed, the image is a little out of date with respect to your then current position. This is called latency, and it can make you feel nauseous.

Motion sickness caused by latency in VR occurs when you're moving your head and your brain expects the world around you to change exactly in sync. Any perceptible delay can make you uncomfortable, to say the least.

Latency can be measured as the time from reading a motion sensor to rendering the corresponding image, or the sensor-to-pixel delay. According to Oculus' John Carmack:

"A total latency of 50 milliseconds will feel responsive, but still noticeable laggy. 20 milliseconds or less will provide the minimum level of latency deemed acceptable."

There are a number of very clever strategies that can be used to implement latency compensation. The details are outside the scope of this article and inevitably will change as device manufacturers improve on the technology. One of these strategies is what Oculus calls the timewarp, which tries to guess where your head will be by the time the rendering is done, and uses that future head pose instead of the actual, detected one. All of this is handled in the SDK, so as a Unity developer, you do not have to deal with it directly.

Meanwhile, as VR developers, we need to be aware of latency as well as the other causes of motion sickness. Latency can be reduced by rendering each frame faster (keeping to the recommended FPS). Discomfort can also be reduced by discouraging users from moving their heads too quickly and by using other techniques to make them feel grounded and comfortable.

Another thing that the Rift does to improve head tracking and realism is that it uses a skeletal representation of the neck, so that all the rotations it receives are mapped more accurately to head rotation. For example, looking down at your lap produces a small forward translation of the viewpoint, since it's impossible to rotate one's head downward on the spot.

Beyond head tracking, stereography, and 3D audio, virtual reality experiences can be enhanced with body tracking, hand tracking (and gesture recognition), locomotion tracking (for example, VR treadmills), and controllers with haptic feedback. The goal of all of this is to increase your sense of immersion and presence in the virtual world.

Summary

In this article, we discussed the different levels of device integration software and then installed the software that is appropriate for your target VR device. We also discussed what happens inside the hardware and software SDK that makes virtual reality work and how it matters to us VR developers.
