Interacting with the environment
We know that ARCore provides us with the feature points and planes/surfaces it identifies around the user. To those points or planes, we can attach virtual objects. Since ARCore keeps track of these points and planes for us, as the user moves, objects attached to a plane appear to remain fixed in place. But how do we determine where a user is trying to place an object? To do that, we use a technique called ray casting. Ray casting takes the two-dimensional point of touch on the screen and casts a ray from that point into the 3D scene. The ray is then tested against other objects in the scene for collisions. The following diagram shows how this works:
Example of ray casting from device screen to 3D space
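To make the idea concrete, here is a minimal sketch of the two steps in the diagram: turning a 2D touch point into a ray, and testing that ray against a surface. This is illustrative only and assumes a simple pinhole camera with the camera at the origin looking down the negative Z axis and the floor modeled as a horizontal plane; the names and numbers are hypothetical, not ARCore's API (in ARCore, this whole process is wrapped up for you in `Frame.hitTest()`):

```java
public class RayCastDemo {
    // A ray defined by an origin point and a normalized direction (x, y, z).
    static class Ray {
        final float[] origin, dir;
        Ray(float[] origin, float[] dir) { this.origin = origin; this.dir = dir; }
    }

    // Convert a 2D touch (in pixels) into a ray in camera space using a
    // simple pinhole model; screen size and field of view are assumed values.
    static Ray screenPointToRay(float touchX, float touchY,
                                float screenW, float screenH, float fovYDeg) {
        float ndcX = (2f * touchX / screenW) - 1f;   // [-1, 1], right is positive
        float ndcY = 1f - (2f * touchY / screenH);   // [-1, 1], up is positive
        float tanHalfFov = (float) Math.tan(Math.toRadians(fovYDeg) / 2.0);
        float aspect = screenW / screenH;
        // Direction through the touch point on the near plane,
        // with the camera at the origin looking down -Z.
        float[] dir = { ndcX * tanHalfFov * aspect, ndcY * tanHalfFov, -1f };
        float len = (float) Math.sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
        for (int i = 0; i < 3; i++) dir[i] /= len;   // normalize
        return new Ray(new float[]{0f, 0f, 0f}, dir);
    }

    // Test the ray against a horizontal plane y = planeY; returns the hit
    // point, or null if the ray is parallel to the plane or points away from it.
    static float[] intersectHorizontalPlane(Ray ray, float planeY) {
        if (Math.abs(ray.dir[1]) < 1e-6f) return null;   // parallel to the plane
        float t = (planeY - ray.origin[1]) / ray.dir[1];
        if (t < 0f) return null;                          // plane is behind the camera
        return new float[]{ ray.origin[0] + t * ray.dir[0],
                            planeY,
                            ray.origin[2] + t * ray.dir[2] };
    }

    public static void main(String[] args) {
        // A touch in the lower half of a 1080x1920 screen; the floor is
        // modeled as a plane 1.4 m below the camera (y = -1.4).
        Ray ray = screenPointToRay(540f, 1600f, 1080f, 1920f, 60f);
        float[] hit = intersectHorizontalPlane(ray, -1.4f);
        System.out.println(hit == null ? "no hit" : "hit");
    }
}
```

A touch low on the screen produces a downward-pointing ray that strikes the floor plane; a touch near the top of the screen points the ray upward, so the intersection test returns null. The hit point this produces is exactly where a virtual object would be placed and anchored.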
You have, of course, likely seen this at work countless times already. Not only the sample app but virtually every 3D application uses ray casting for object interaction and collision detection. Now that we understand how ray casting works, let's see how this...