Enterprise Augmented Reality Projects

You're reading from Enterprise Augmented Reality Projects: Build real-world, large-scale AR solutions for various industries

Product type: Paperback
Published in: Dec 2019
Publisher: Packt
ISBN-13: 9781789807400
Length: 388 pages
Edition: 1st Edition

Authors (2): Jorge R. López Benito, Enara Artetxe González

Table of Contents (10 chapters)
Preface
1. Introduction to AR and How It Fits the Enterprise
2. Introduction to Unity for AR Development (FREE CHAPTER)
3. AR for Manufacturing with ARCore
4. AR for Training with WebAR and Augmented Class!
5. AR for Marketing with EasyAR
6. AR for Retail with Vuforia
7. AR for Automation with Vuforia and AR Glasses
8. AR for Tourism with ARKit
9. Other Books You May Enjoy

Understanding AR

AR is the term used to describe the technology that allows users to view part of the real world through the camera of a device (a smartphone, tablet, or AR glasses) together with virtual graphical information added by that device. The device overlays this virtual information on the existing physical scene so that tangible physical elements and virtual elements combine, creating augmented reality in real time. The following image shows how AR works:

A user seeing a 3D apple in AR with a tablet

Now, we are going to look at the beginnings of AR and learn how AR can be divided according to its functionality.

Short history – the beginnings of a new reality

AR is not a new technology. Its beginnings date back to the machine invented by Morton Heilig, a philosopher, visionary, and filmmaker, who in 1957 began building a prototype with an appearance similar to the arcade video game machines that were so popular in the 90s. The following image shows a schema of how the prototype worked:

A schema of how the invention worked (image created by Morton Heilig)

Heilig called his invention the Sensorama, an experience that projected 3D images, added surround sound, made the seat vibrate, and blew air at the viewer. The closest experience we can have today is watching a movie in a 4D cinema, yet the Sensorama was created more than 60 years ago.

In 1968, Harvard Electrical Engineering professor Ivan Sutherland created a device that would be key to the future of AR technology: the head-mounted display (HMD). Far from the AR glasses we know today, this HMD, called the Sword of Damocles, was a huge machine that hung from the ceiling of a laboratory and only worked when the user stood in exactly the right place. In the following image, you can see what this invention looked like:

The Sword of Damocles (this image was created by OyundariZorigtbaatar) 

In 1992, Boeing researcher Tom Caudell coined the term AR, and around the same time, the technology was boosted by two other works. The first was an AR system created by L.B. Rosenberg for the United States Air Force: a device that advised the user on how to perform certain tasks as they were presented, something like a virtual guide. This can be seen in the following image:

Virtual Fixtures AR system on the left and its view on the right (this image was created by AR Trends)

The other research in this area was carried out at Columbia University, where a team of scientists built an HMD that interacted with a printer. The device, christened KARMA (Knowledge-based Augmented Reality for Maintenance Assistance), projected a 3D image that showed the user how to refill the printer instead of having to consult the user manual.

The following diagram is a representation of the continuum of advanced computer interfaces, based on Milgram and Kishino (1994), where we can see the different subdivisions of MIXED REALITY (MR), which spans from the REAL ENVIRONMENT to VIRTUAL REALITY. AR, which sits nearer to the REAL ENVIRONMENT, is divided into spatial AR and see-through AR. However, the appearance of mobile devices in the 21st century has enabled a different version of AR, in which the augmentation is displayed through the device's screen and camera:

MIXED REALITY and its subdivisions

Now that we have introduced the beginnings of AR, let's learn how this technology can be classified depending on the trigger that's used to show virtual elements in the real world.

The magic behind AR

AR can be created in many ways; the main challenge is making the combination of the real and virtual worlds as seamless as possible. Based on what is used to trigger the virtual elements so that they appear in the real world, AR can be classified as follows (a short code sketch after the list summarizes these trigger types):

  • GPS coordinates: We use GPS coordinates, compasses, and accelerometers to locate the exact position of the user, including the cardinal point they are looking at. Depending on where the user is pointing, they will see different virtual objects from the same position.
  • Black and white markers: We use very simple images, similar to black and white QR codes, to project virtual objects on them. This was one of the first AR examples, although these markers are now used less often because there are more realistic ways to create the AR experience.
  • Image markers: We use the camera of the mobile device to locate predefined images (also called targets or markers) and then project virtual objects over them. This type of AR has largely replaced black and white markers.
  • Real-time markers: The user creates and defines their own images with the mobile camera and then projects virtual objects onto them.
  • Facial recognition: Through the camera, we capture the movements of the face to execute certain actions, for example, giving facial expressions to a virtual avatar.
  • SLAM: Short for Simultaneous Localization And Mapping, this technology understands the physical world through feature points, making it possible for AR applications to recognize 3D objects and scenes, track the world instantly, and overlay digital interactive augmentations.
  • Beacons: eBeacons, RFID, and NFC are identification systems that use radio frequency or Bluetooth, similar to GPS coordinates, to trigger the AR elements.
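
To make these categories concrete, here is a minimal sketch in plain C# (the scripting language used with Unity later in this book) that models the trigger types above as an enum and maps each one to the main input it relies on. The type and member names are illustrative assumptions for this sketch only; they are not part of any AR SDK.

using System;

// Illustrative enum of the AR trigger types described above (not an SDK type).
public enum ArTrigger
{
    GpsCoordinates,    // location-based AR: GPS, compass, accelerometer
    BlackWhiteMarker,  // simple QR-like black and white markers
    ImageMarker,       // predefined target images (targets/markers)
    RealTimeMarker,    // images captured and defined by the user at runtime
    FacialRecognition, // face movements drive actions, e.g. avatar expressions
    Slam,              // markerless tracking through feature points
    Beacon             // eBeacons, RFID, NFC identifiers
}

public static class ArTriggerInfo
{
    // Returns the main input each trigger type relies on to place virtual content.
    public static string RequiredInput(ArTrigger trigger) => trigger switch
    {
        ArTrigger.GpsCoordinates    => "GPS position plus compass and accelerometer readings",
        ArTrigger.BlackWhiteMarker  => "camera frames scanned for black and white marker patterns",
        ArTrigger.ImageMarker       => "camera frames matched against stored target images",
        ArTrigger.RealTimeMarker    => "camera images captured and registered by the user at runtime",
        ArTrigger.FacialRecognition => "camera frames with face movement tracking",
        ArTrigger.Slam              => "camera frames and motion sensors for feature-point mapping",
        ArTrigger.Beacon            => "radio-frequency or Bluetooth identification signals",
        _                           => "unknown"
    };
}

public static class Demo
{
    public static void Main()
    {
        // Print every trigger type with the input it depends on.
        foreach (ArTrigger t in (ArTrigger[])Enum.GetValues(typeof(ArTrigger)))
        {
            Console.WriteLine($"{t}: {ArTriggerInfo.RequiredInput(t)}");
        }
    }
}

Running the sketch simply lists each trigger type alongside the sensor or input it depends on, which is a useful mental checklist when deciding which SDK (ARCore, Vuforia, EasyAR, ARKit, and so on) fits a given project.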

Now you have a better grasp of what AR is and where it comes from. We have covered the basics of AR by looking at the first prototypes and classified the different types of AR according to the element that triggers the virtual images so that they appear on the screen. The next step is to see what is required to work with it.
