Augmented reality applications are all about enhancing or augmenting the user's reality. To do this, we as AR app developers need a set of tools capable of understanding the user's environment. As we saw in the last chapter, ARCore uses visual-inertial odometry (VIO) to track feature points in the camera image and combine them with inertial sensor data, which lets it estimate the pose of the device and track its motion. The same feature tracking also supports environmental understanding, allowing us to identify surfaces and points of interest in the scene and determine their pose. In this chapter, we will explore how we can use the ARCore API to better understand the user's environment. Here's a quick overview of the main topics we will cover in this chapter:
- Tracking the point cloud
- Meshing and the environment
- Interacting with the environment
- Drawing with OpenGL ES
- Shader programming
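Before we dive in, here is a minimal sketch of the kind of ARCore calls this chapter builds on. It assumes an already-configured and resumed `Session`; the class and variable names (`EnvironmentProbe`, `devicePose`, and so on) are illustrative and not part of the sample project. It simply pulls the current device pose and point cloud from a frame:

```java
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import com.google.ar.core.PointCloud;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.core.exceptions.CameraNotAvailableException;

import java.nio.FloatBuffer;

public class EnvironmentProbe {
    // Called once per rendered frame, after the Session has been resumed.
    public void onDrawFrame(Session session) throws CameraNotAvailableException {
        Frame frame = session.update();          // grab the latest camera frame
        Camera camera = frame.getCamera();

        if (camera.getTrackingState() != TrackingState.TRACKING) {
            return;                              // no reliable pose yet
        }

        // The device pose estimated by VIO (position + rotation in world space).
        Pose devicePose = camera.getPose();

        // The sparse point cloud ARCore has detected in the environment.
        // Each point is stored as x, y, z, confidence in a flat FloatBuffer.
        try (PointCloud pointCloud = frame.acquirePointCloud()) {
            FloatBuffer points = pointCloud.getPoints();
            int numPoints = points.remaining() / 4;
            android.util.Log.d("EnvironmentProbe",
                    "Pose: " + devicePose + ", points tracked: " + numPoints);
        }
    }
}
```

We will unpack each of these pieces, starting with the point cloud, and then move on to rendering them with OpenGL ES and custom shaders.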
If you have not downloaded...