All of a sudden, something called the Metaverse is in the media and being talked about by seemingly everyone. The broad consensus is that it will be revolutionary for business and society. Yet most conversations about the Metaverse revolve around trying to pin down what it actually is, because it does not fully exist today. It helps to start with its origins, then move on to what it is and the reasons behind its anticipation.
Human beings have a desire to communicate with each other in person and, when not in person, in a way that conveys their thoughts and, in many instances, their emotions. Whether it was writing on animal bones, stone, clay, papyrus, or, more recently, paper, or speaking on the telephone, humans have always found some way to communicate. The motivations to communicate are many, from the personal (catching up with close and not-so-close friends and family) to the non-personal and professional (complaining to a co-op board, appraising a work of art, negotiating to buy a house or building, buying clothes and food, applauding or disparaging a certain politician, providing instructions to an employee, and so on).
Apart from the method itself, four major aspects of communication have changed from thousands of years ago to today: speed, ease, precision, and audience reach.
The most obvious change is the increased speed with which communication happens. Even in-person communication has sped up: from coming together on foot, on horseback, or by chariot, carriage, car, plane, or helicopter, the opportunities for meeting in person have multiplied, and with them the speed of getting that communication done. Not-in-person communication has accelerated even more dramatically, from messengers and postmen delivering letters on foot, by horse, and later by car and carrier plane, to telegrams, landline phone calls, email, cellphones, and videoconferencing.
With increased speed comes ease, and that ease has encouraged more people to communicate more often. More frequent communication, in turn, allows for greater precision of communicated intent. When letters were the primary mode of not-in-person communication, a receiver who misconstrued the writer's intent might stop corresponding altogether, or else a return letter had to be sent, and so on, with each round taking days or weeks. Faster communication makes it easy to correct any misconstrued messaging, and this matters when a message can reach as many people as is digitally possible.
More precise, easier, and faster communication, combined with the capability of large audience reach, is where we are today. Social media counts as a mode of communication, but its current formats don't allow for much flexibility or personalization. Beyond social media, the increased speed, ease, precision, and audience reach of modern communications have made people more efficient and productive in both their personal and professional lives. Although it's commonly thought that the original motivation behind the Metaverse was to foster interoperability among different computer games, the current manifestation of the imagined Metaverse grew out of a need for improved and enhanced communication capabilities. And improved and enhanced communication has the benefit of multiplying business opportunities. Use cases that exemplify what can be done in the Metaverse come later in this book, in Part 3.
To better understand how the Metaverse came about and its place in technology, it's helpful to think of it as part of a paradigm shift; in this case, the fourth paradigm shift in computing: spatial computing.
The Metaverse – part of the fourth paradigm
A technological paradigm shift is a change in the underlying principles that shape the development and use of technology in a society. A classic technological paradigm shift is the shift from horse and buggy to the automobile. Four technological paradigm shifts have been recognized in computing.
The first paradigm – the personal computer arrives
The shift from mainframe computers to personal computers (PCs) is considered the first technological paradigm shift. Mainframe computers were large, expensive, and complex machines that were primarily used by businesses, government agencies, and other organizations. They were typically housed in dedicated computer rooms and operated by trained technicians.
The IAS machine (also known as the Institute for Advanced Study computer) was an early computer built between 1946 and 1951 at the Institute for Advanced Study (IAS) in Princeton, New Jersey. One of the first electronic stored-program computers, it was used continually and productively until 1958 for a wide range of scientific research projects, and its von Neumann architecture became the template for many subsequent machines.
In the 1970s, computer rooms were typically large, specialized spaces that housed mainframe computers and their associated equipment, such as the very large chip arrays from IBM's Advanced Computer Systems (ACS) project shown in Figure 1.1. These computers were much larger and more expensive than the PCs that became popular in the 1980s, and they required dedicated space with specialized cooling and electrical systems to operate. The computer room was usually a secure area accessible only to authorized personnel, and it was often monitored by technicians responsible for maintaining the computer equipment:
Figure 1.1 – A section of IBM’s 1968-era very large ACS circuit board with a 10 x 10 array of chip packages that were used to power one computer (source: Robert Scoble)
The Apple I was a PC released in 1976 by Apple Computer, Inc. It was a small, relatively inexpensive machine intended for use by an individual or small group in a home or small office setting. One of the first PCs on the market, it was sold as a fully assembled circuit board to which the buyer added a case, power supply, keyboard, and display. The Apple I was powered by a MOS Technology 6502 microprocessor and had 4 KB of RAM, expandable to 8 KB on the board or 48 KB with expansion cards. It used a cassette tape interface to store data and programs, and it offered a simple command-line interface for users to input commands.
Figure 1.2 – Steve Wozniak, co-founder of Apple Computer, stands with the Apple II that he helped develop and is now in the Computer History Museum (source: Robert Scoble)
The development and widespread adoption of PCs represented a paradigm shift in the way that people used computers. Before the development of PCs, computers were large, expensive machines that were used primarily by large organizations, such as businesses, universities, and government agencies, to support the computing needs of hundreds or thousands of users. These computers were operated by specialized personnel and were typically accessed remotely through terminals or other devices.
In contrast, PCs are smaller, more affordable, and easier to use than mainframe computers. They can be used by individuals and small businesses and do not require specialized training to operate. The development of the microprocessor and the PC revolutionized the way people interacted with computers and made it possible for people to use computers for a wide range of tasks, from word processing and spreadsheet creation to internet browsing and gaming. The development of PCs was a key factor in the growth of the digital economy.
The second paradigm – graphical user interfaces
A graphical user interface (GUI) is a type of UI that allows users to interact with electronic devices through graphical icons and visual indicators, rather than text-based commands. GUIs are designed to make it easier and more intuitive for users to access and use computer programs and other electronic devices. They use visual elements, such as icons, menus, and buttons, to represent different options and functions, which users can access using a pointing device, such as a mouse or a touchpad.
The concept of a GUI was first introduced in the 1970s, but it was not until the 1980s that GUIs became widely adopted. The first full GUI was developed at Xerox Palo Alto Research Center (PARC) in the 1970s and ran on the Xerox Alto, one of the first PCs. Building on Douglas Engelbart's earlier demonstrations of the mouse, the Alto was the first computer designed around mouse-driven navigation of a GUI.
The first widely available PC to use a GUI was the Apple Macintosh, introduced in 1984, which helped to popularize the use of GUIs in PCs (the earlier Apple Lisa of 1983 also offered a GUI but was too expensive to sell widely). In the following years, other companies, such as Microsoft, introduced their own GUI-based operating systems, and the use of GUIs became widespread in the PC market. Today, GUIs are the standard interface for most PCs and are widely used in a variety of electronic devices.
GUIs represented a paradigm shift in the way that people interact with computers because they made it much easier and more intuitive for users to access and use computer programs. Prior to the development of GUIs, computers used command-line interfaces, which required users to input commands using a keyboard. This was a time-consuming and error-prone process, and it was difficult for people who were not familiar with computers to learn how to use them.
The adoption of GUIs had a significant impact on the way that people use computers and has contributed to the widespread adoption of PCs. GUIs made it possible for people with little or no computer experience to use computers with ease, which has had a profound impact on many aspects of society, including education, business, and communication.
The third paradigm – mobile
The first mobile phones were developed in the late 1940s and 1950s, but they were large and expensive and used only by a small number of people, such as wealthy individuals and businesses. The first commercially available handheld mobile phone was the Motorola DynaTAC 8000X, released in 1983. Over time, mobile phones became smaller, less expensive, and more widely available, and their use became widespread.
The LG Prada (also known as the LG KE850) was a mobile phone released by LG Electronics in May 2007. It was one of the first phones to feature a touchscreen display and was widely considered to be a fashionable and high-end device.
The first iPhone, on the other hand, was released by Apple in June 2007. It was a revolutionary device that introduced a new type of UI based on a multi-touch screen and established the smartphone as a new category of device. The iPhone also had a number of features that set it apart from other mobile phones at the time, such as a high-resolution display, a digital camera, and the ability to access the internet and run a wide range of apps.
Overall, the LG Prada was an important early touchscreen phone, but the iPhone was a more significant and influential device that set the stage for the modern smartphone market:
Figure 1.3 – The first iPhone versus the Nokia N97; the first iPhone was released in June 2007 and the Nokia N97 was announced in December 2008 (source: Robert Scoble)
Apple is also widely credited with ending Nokia's dominance in mobile phones. Nokia was considered the mobile phone leader before the iPhone came out. Yet it underestimated the importance of the iPhone's innovations and mistakenly assumed it would not need to do much to stay ahead, a miscalculation that led to its steady decline in the market.
The mobile phone has become a technological paradigm because it has fundamentally changed the way that people communicate and access information. Before the widespread adoption of mobile phones, people had to be physically present in a specific location to make phone calls or access information. With the advent of mobile phones, people are able to communicate and access information from anywhere at any time. This has had a profound impact on society and has led to the development of new industries and business models. Mobile phones have also had a major impact on the way that people interact with each other and with the world around them, and they have become an essential part of daily life for many people.
The fourth paradigm – spatial computing
Spatial computing refers to the use of technology to create an immersive, 3D digital environment that interacts with the physical world. It is a multidisciplinary field that combines computer science, engineering, design, and other areas to create an interactive experience that goes beyond traditional 2D screens. Spatial computing includes any technology that would be used to move about in a virtual or augmented 3D world. This includes virtual reality (VR), augmented reality (AR), mixed reality (MR), artificial intelligence (AI), computer vision (CV), and sensor technology, among others.
Spatial computing is considered the fourth paradigm because it represents a new way of interacting with technology that goes beyond traditional 2D screens and input methods. Applications of spatial computing include gaming, education, design, and industrial training, and it has emerging uses in many other industries such as healthcare, retail, and entertainment.
In 1987, Jaron Lanier coined the term VR. Lanier was a founder of VPL Research, a company that made early commercial VR headsets and wired gloves. There had been earlier attempts at headsets, but they were either purely experimental or commercial failures, such as Morton Heilig's Telesphere Mask:
Figure 1.4 – Morton Heilig’s Telesphere Mask, a head-mounted display device patented in 1960 that commercially failed (source: United States Patent and Trademark Office (USPTO))
Others, such as a patent filed in 2008 by Apple for a VR headset and a remote controller, portrayed products that were never produced. In 2012, the company Oculus VR was founded; Facebook bought it in 2014 and started on the journey of creating more VR headset models, with the Oculus Rift headset becoming commercially available in 2016. HTC and a couple of other players joined Oculus in creating competitive VR headsets:
Figure 1.5 – A patent filed in 2008 by Apple for a VR headset and a remote controller that would use an iPhone’s screen as the headset’s primary display; the headset was never commercially made (source: USPTO)
One of the first functional wearable AR displays was built by Steve Mann starting in 1980; his work evolved into the EyeTap, a device that displays virtual information in front of the wearer's eye. Early AR headsets were not widely adopted due to limitations in technology and high cost. In the 2010s, advancements in technology, such as the development of smartphones and improved displays, led to a resurgence of interest in AR and the introduction of more advanced and affordable AR headsets, such as the Microsoft HoloLens and the Magic Leap One.
Spatial computing has many potential benefits, some of which include the following:
- Immersive experience: Spatial computing allows for a more immersive and engaging experience for users, as it creates a 3D digital environment that interacts with the physical world. This allows for a more natural and intuitive way to interact with information and technology.
- Enhanced productivity: Spatial computing can be used to create more efficient and effective ways of working, such as VR and AR tools for industrial training, design, and education. It can also improve remote collaboration by creating shared virtual spaces.
- Improved accessibility: Spatial computing can be used to create more accessible experiences for users with disabilities, such as those who are visually impaired or have difficulty with fine motor skills.
- New opportunities in various industries: Spatial computing has potential use cases in various industries such as healthcare, retail, and entertainment. For example, in healthcare, it can be used for training and surgeries, in retail for virtual shopping, and in entertainment for games and movies.
- Increased convenience: Spatial computing can make it more convenient for users to access and interact with information, such as overlaying virtual instructions on real-world objects for repair or assembly.
- Data visualization: Spatial computing can be used to create 3D visualizations of complex data, making it easier to understand and analyze.
Spatial computing is a key enabler for the Metaverse, providing technology that allows for the creation of immersive, 3D digital environments that can be used for socializing, entertainment, work, and many other use cases. Now that we have glanced through the history of the technology that led up to the Metaverse, let’s understand what the Metaverse actually is.