Virtual reality technology is in the midst of becoming widely available. But it has been considered far-fetched, a distant holy grail largely confined to fiction, for long enough that it is still all too easy to harbor certain myths:
The Internet is, of course, an astoundingly open and democratic communications medium, operating at an incredible scale. Browsers, considered as general software for delivering media content over the Internet, have steadily gained functionality and rendering power, and have been extraordinarily successful in combining that content with the values of the Internet.
The native rendering mechanism at work in the browser environment consumes descriptions of:
* Structure and content in HTML: simple, static, and declarative
* Styling in CSS (optionally embedded in HTML)
* Behavior in JavaScript (optionally embedded in HTML): powerful but complex and loosely structured; oddly too powerful in some ways, with abuses such as pop-up ads originally among its most noticeable applications
The primary metaphor of the Web, as people first came to know it, focused on traveling from place to place along connected pathways, reinforced by terms like "navigate", "explore", and "site". In this light, the next logical step is apparent: stepping right inside the images, videos, and games that have proliferated around the world thanks to the web platform. Far from being a departure, this is part of the ongoing evolution of that platform.
On social media, we share a lot of content, but interact using low-bandwidth channels (limited media snippets—mostly text). Going forward, these will increasingly blend into shared virtual interactions of unlimited sensory richness and creativity. This has existed to a small extent for some time in the realm of first-person online games, and will see its true potential with the scale and generality of the Web and the immersive power of VR.
A quick aside on compatibility and accessibility: the consequence of such a widely available Web is that your audience has a vast range of different devices, as well as a vast range of different sensorimotor capabilities. This is especially relevant when pushing the limits of the platform, and it is always wise to consider how the experience degrades when certain capabilities are missing, and how best to fall back to an acceptable (not broken) variant of the content in those cases.
I believe that VR and the Web will accelerate each other's growth in this third decade of the Web, just as we saw with earlier media: images in its first decade and video in its second. Hyperlinks, essential to the experience of the Web, will teleport us from one site to another: this is one of many research avenues explored by eleVR, a pioneering research team led by Vi Hart and funded by Y Combinator Research.
WebVR has been adopted enthusiastically by the Chrome and Firefox browsers, and with Microsoft recently announcing support in its Edge browser, it is fair to say that widespread support is imminent. For browsers that are not likely to support the technology natively in the near future (for example, Safari), there is a polyfill you can include so that you can use it anyway; this is one of the things A-Frame takes care of for you.
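In practice, including the A-Frame build, which bundles this polyfill, is a single script tag (the version number below is only an example; use whichever release is current):

<!-- example release version -->
<script src="https://aframe.io/releases/0.5.0/aframe.min.js"></script>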
A-Frame is a framework from Mozilla that lets you build VR scenes declaratively, out of HTML elements.
The only essential structural elements are:
* <a-scene>, the container
* <a-entity>, an empty shell with transform attributes like position, rotation, and scale, but with neither appearance nor behavior
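A minimal scene might then look like this (the transform values are arbitrary, and the entity is invisible until appearance is attached to it):

<a-scene>
  <!-- arbitrary example values for the transform attributes -->
  <a-entity position="0 1 -3" rotation="0 45 0" scale="2 2 2"></a-entity>
</a-scene>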
HTML was designed to make web design accessible to non-programmers, opening up the ability for people without prior technical skills to build a home page. A-Frame brings this power and simplicity to VR, making the creation of virtual environments accessible to all, regardless of experience with 3D graphics and programming. The A-Frame Inspector also provides a powerful UI for VR creation using A-Frame.
Some stars are in fact extremely luminous, while others appear bright only because they are so near (such as the brightest star in the night sky, Sirius). I will describe just one possible use of VR to communicate these differences in star distance effectively.
Setting the scene:
<a-scene>
  <a-sky color="black"></a-sky>
  <a-light type="ambient" color="white"></a-light>
</a-scene>
A default camera is automatically configured for us, but if we want a different perspective we could add and position an <a-camera> element.
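For example, inside the scene (the position here is arbitrary):

<!-- an explicit camera, placed a little above the origin and pulled back -->
<a-camera position="0 1.6 5"></a-camera>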
Using a dataset of stars' coordinates and visual magnitudes, we can plot the brightest stars in the sky to create a simulated sky view (a virtual stellarium), but scaled so that the distances can be perceived with our eyes, immediately and intuitively. The role of VR in this example is to hook into our familiar experience of looking around at the sky and our hardwired depth perception. This approach has the potential to foster a deeper and more lasting appreciation of the data than numbers in a spreadsheet can, or than abstract one- or two-dimensional depictions can, for that matter.
Inside the scene, per star:
<a-sphere position="100 120 -60" radius="1" color="white"></a-sphere>
The position would be derived from the coordinates of the star, and the radius could reflect the absolute visual magnitude of the star. We could also have the color reflect the spectral range of the star.
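One way this could be generated from data is sketched below, assuming a stars array of objects with already-scaled x, y, z coordinates and a magnitude field (the field names and the radius mapping are placeholders, not part of any particular dataset):

const scene = document.querySelector('a-scene');

stars.forEach(function (star) {
  const sphere = document.createElement('a-sphere');
  // Position derived from the star's (scaled) coordinates
  sphere.setAttribute('position', star.x + ' ' + star.y + ' ' + star.z);
  // Brighter stars have lower magnitude values, so invert the scale for the radius
  sphere.setAttribute('radius', Math.max(0.2, 2 - 0.3 * star.magnitude));
  sphere.setAttribute('color', 'white');
  scene.appendChild(sphere);
});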
A-Frame includes basic interaction handlers that work with a variety of rendering modes. We could set up JavaScript handlers to show more information about a star while hovering over it (or, in VR, gazing at it). By clicking on one of these stars (in VR, pressing the button while gazing at it), we could perhaps transform the view by shifting the camera to look at the sky from that star's point of view. Or we could zoom in to a detailed view of that star, visualizing its planetary system, and so on.
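Continuing inside the same forEach loop from the sketch above, and assuming the scene includes a cursor (for instance an <a-cursor> inside the camera) so that gaze-driven mouseenter and click events are fired on each sphere, the handlers might look roughly like this (showStarDetails and viewFromStar are hypothetical helpers):

sphere.addEventListener('mouseenter', function () {
  showStarDetails(star); // hypothetical helper: display this star's data
});

sphere.addEventListener('click', function () {
  viewFromStar(star); // hypothetical helper: shift the camera to this star's point of view
});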
Now, if we were to treat this visualization as a depiction of abstract data points or concepts rather than physical stars, we could represent almost anything this way. For instance, the points in space could be people, and the distance could represent any combination of weighted criteria. You would see, with the full power of your vision, how near or far others are in terms of these criteria. This simple perspective enables immersive data storytelling, allowing you to examine the entities of any given domain space.
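As a purely hypothetical sketch of that idea, such a distance could be computed as a weighted sum of normalized criteria (the shapes of person and weights are invented for illustration):

function weightedDistance(person, weights) {
  // person.criteria and weights are assumed to be parallel arrays of normalized values
  return weights.reduce(function (sum, weight, i) {
    return sum + weight * person.criteria[i];
  }, 0);
}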
My team at TWO-N makes heavy use of D3 and React in our work (among many other open source tools), both of which work seamlessly and automatically with A-Frame due to the nature of the interface that A-Frame provides. Whether you're writing or generating literal HTML, or relying on tools to help you manage the DOM dynamically, it's ultimately all about attaching elements to the document and setting their attributes; this is the browser's native content rendering engine, around which the client-side JavaScript ecosystem is built.
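For instance, D3's data join maps directly onto A-Frame's elements and attributes; a minimal sketch, using the same assumed stars array and fields as above:

d3.select('a-scene')
  .selectAll('a-sphere')
  .data(stars)
  .enter().append('a-sphere')
  // each datum is assumed to have x, y, z and magnitude fields, as in the earlier sketch
  .attr('position', function (d) { return d.x + ' ' + d.y + ' ' + d.z; })
  .attr('radius', function (d) { return Math.max(0.2, 2 - 0.3 * d.magnitude); })
  .attr('color', 'white');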
Paul Dechov is a visualization engineer at TWO-N.