
AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

  • 19 min read
  • 06 Jun 2018


An AI character system needs to be aware of its environment: where the obstacles are, where the enemy is, whether the enemy is visible in the player's sight, and so on. The quality of our Non-Player Character's (NPC's) AI completely depends on the information it can get from the environment. Nothing breaks the level of immersion in a game like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data.

If the sensory systems do not provide enough data, or the AI system is unable to properly take action on that data, the agent can begin to glitch, or behave in a way contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it's worth a quick internet search to find some videos of AI glitches for a good laugh.

In this article, we'll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist.

You are reading an extract from Unity 2017 Game AI Programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe.

Basic sensory systems


Our agent's sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position when you turned the lights off because your perspective shifts and you have to rely more and more on your fuzzy memory of the room's layout.

While we rely on a constant stream of sensory data to navigate our environment, our agent's AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent.

The concept of a basic sensory system is that there will be two components, Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch. These senses will look out for specific aspects such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that's looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain.

For our demo, this is basically what we are going to implement—a base interface called Sense that will be implemented by other custom senses. In this article, we'll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch, when an enemy gets too close, we want to be able to sense that, almost as if our AI character can hear that the enemy is nearby. Then we'll write a minimal Aspect class that our senses will be looking for.

Cone of sight


A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle simple visual detection, it doesn't accurately model the way vision works for most entities. An alternative to using a line of sight is using a cone-shaped field of vision. As the following figure illustrates, the field of vision is literally modeled using a cone shape. This can be in 2D or 3D, as appropriate for your type of game:

[Figure: a cone-shaped field of vision extending from the agent's eyes]

The preceding figure illustrates the concept of a cone of sight. In this case, beginning with the source, that is, the agent's eyes, the cone grows, but becomes less accurate with distance, as represented by the fading color of the cone.

The actual implementation of the cone can vary from a basic overlap test to a more complex realistic model, mimicking eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance or periphery. A complex implementation mimics eyesight more closely; as the cone widens away from the source, the field of vision grows, but the chance of getting to see things toward the edges of the cone diminishes compared to those near the center of the source.
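
If you want a concrete starting point, the simple end of that spectrum can be written as an angle check against the cone's half-angle plus a distance check, with an optional confidence value that fades toward the periphery. The following is only a minimal sketch; the class and field names (ConeOfSightSketch, target, and so on) are illustrative assumptions, not part of this article's demo:

using UnityEngine;

// Minimal cone-of-sight sketch: an angle test against the cone's half-angle plus a
// distance test, with a detection confidence that fades toward the cone's edges.
// All names and values here are illustrative assumptions, not the demo's code.
public class ConeOfSightSketch : MonoBehaviour
{
    public Transform target;          // the object we are trying to see
    public float fieldOfView = 45f;   // full cone angle, in degrees
    public float viewDistance = 100f; // how far the cone reaches

    // Returns a 0..1 confidence: 0 outside the cone, 1 when dead ahead and very close.
    public float Visibility()
    {
        Vector3 toTarget = target.position - transform.position;
        float distance = toTarget.magnitude;
        if (distance > viewDistance)
        {
            return 0f;
        }

        float angle = Vector3.Angle(transform.forward, toTarget);
        float halfAngle = fieldOfView * 0.5f;
        if (angle > halfAngle)
        {
            return 0f;
        }

        // Fade the detection toward the periphery and with distance.
        float angularFactor = 1f - (angle / halfAngle);
        float distanceFactor = 1f - (distance / viewDistance);
        return angularFactor * distanceFactor;
    }
}

A basic overlap test would simply treat any non-zero result as "seen", while a more realistic model could compare the returned confidence against a random roll on each detection tick.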

Hearing, feeling, and smelling using spheres


One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source and the loudness dissipating the farther from the center the listener is. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound. The listener's hearing is represented by a sphere, and the sounds closest to the listener are more likely to be "heard." We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling.

The following figure represents our sphere and how our agent fits into the setup:

[Figure: a sphere around the agent representing its hearing/smell range]

As with sight, the probability of an agent registering the sensory event can be modified, based on the distance from the sensor or as a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere.
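
As a rough sketch of the overlap approach (the class name and the "Sound" tag below are assumptions rather than the demo's code), an agent could gather everything inside its hearing sphere with Physics.OverlapSphere and weight each hit by distance:

using UnityEngine;

// Minimal sphere-based hearing sketch. Anything tagged "Sound" inside hearingRadius is
// "heard", with a loudness that falls off linearly toward the edge of the sphere.
public class SphereHearingSketch : MonoBehaviour
{
    public float hearingRadius = 10f;

    void Update()
    {
        Collider[] hits = Physics.OverlapSphere(transform.position, hearingRadius);
        foreach (Collider hit in hits)
        {
            if (!hit.CompareTag("Sound"))
            {
                continue;
            }

            float distance = Vector3.Distance(transform.position, hit.transform.position);
            float loudness = 1f - (distance / hearingRadius); // 1 at the listener, 0 at the edge
            if (loudness > 0f)
            {
                print("Heard something with loudness " + loudness);
            }
        }
    }
}

Reusing the same sphere with a different radius, tag, or falloff curve is enough to model touch or smell instead of hearing.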

Expanding AI through omniscience


In a nutshell, omniscience is really just a way to make your AI cheat. Your agent doesn't necessarily know everything; omniscience simply means that it can know anything. In some ways, this can seem like the antithesis of realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool to provide an extra layer of complexity.

In games, we tend to model abstract concepts using concrete values. For example, we may represent a player's health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force or sense events in your game world without having to physically experience them.
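
To make that concrete, here is a minimal, hypothetical sketch of an omniscient agent; the GameState class and its PlayerHealth value are assumptions made up for this illustration and are not part of the demo:

using UnityEngine;

// A hypothetical holder for the player's concrete health value (0-100). In a real
// project this would live in its own file; it is inlined here only to keep the
// sketch self-contained.
public class GameState
{
    public static int PlayerHealth = 100;
}

// Omniscient agent sketch: it never senses the player's health through the game
// world, it simply reads the value directly and reacts to it.
public class OmniscientAgentSketch : MonoBehaviour
{
    void Update()
    {
        if (GameState.PlayerHealth < 25)
        {
            print("Player is weak - press the attack!");
        }
    }
}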

While omniscience is not necessarily a specific pattern or technique, it's another tool in your toolbox as a game developer to cheat a bit and make your game more interesting by, in essence, bending the rules of AI and giving your agent data that it may not otherwise have had access to through physical means.

Getting creative with sensing


While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive its environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns. Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away!

Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of them until they cast or receive the effect of a magic spell.

Maybe certain NPCs can track the player using smell, and walking through a collider marked as water can clear the scent from the player so that the NPC can no longer track them. As you progress through the book, you'll be given all the tools to pull these and many other mechanics off—sensing, decision-making, pathfinding, and so on. As we cover some of these techniques, start thinking about creative twists for your game.
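
As one hedged sketch of that scent idea (every name, tag, and value here is an assumption for illustration, not code from the book), the player could carry a scent strength that fades over time and is washed away by a trigger tagged "Water", while the NPC only tracks the player while the scent is strong and close enough:

using UnityEngine;

// Hypothetical scent carried by the player: it fades over time and is cleared
// entirely when the player wades through a trigger tagged "Water".
public class ScentSourceSketch : MonoBehaviour
{
    public float scentStrength = 1.0f;  // 1 = fresh trail, 0 = no scent
    public float fadePerSecond = 0.05f;

    void Update()
    {
        scentStrength = Mathf.Max(0f, scentStrength - fadePerSecond * Time.deltaTime);
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Water"))
        {
            scentStrength = 0f; // the water clears the trail
        }
    }
}

// Hypothetical NPC-side check: it can only smell the player while the scent is
// strong enough and within range. (In a real project, each MonoBehaviour would
// live in its own file.)
public class SmellTrackerSketch : MonoBehaviour
{
    public ScentSourceSketch playerScent;
    public float smellRange = 20f;
    public float minimumStrength = 0.2f;

    public bool CanSmellPlayer()
    {
        float distance = Vector3.Distance(transform.position, playerScent.transform.position);
        return distance <= smellRange && playerScent.scentStrength >= minimumStrength;
    }
}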

Setting up the scene


In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself, by following these steps:

  1. Let's create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles.
  2. Add a plane to be used as a floor.
  3. Then, we add a directional light so that we can see what is going on in our scene.


As you can see in the example, there is a tank 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene.

For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view. Our scene hierarchy will look similar to the following screenshot after you've set everything up correctly:

[Screenshot: the scene hierarchy after setup]

Now we will position the tank, the AI character, and walls randomly in our scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we can have a clear view of the following scene:

[Screenshot: the scene view with the tank, AI character, and walls positioned]

With the essential setup out of the way, we can begin tackling the code for driving the various systems.

Setting up the player tank and aspect


Our Target object is a simple sphere game object with the mesh render removed so that we end up with only the Sphere Collider.

Look at the following code in the Target.cs file:

using UnityEngine;

public class Target : MonoBehaviour
{
    public Transform targetMarker;

    void Start() { }

    void Update()
    {
        int button = 0;

        // Get the point of the hit position when the mouse is being clicked
        if (Input.GetMouseButtonDown(button))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hitInfo;

            if (Physics.Raycast(ray.origin, ray.direction, out hitInfo))
            {
                Vector3 targetPosition = hitInfo.point;
                targetMarker.position = targetPosition;
            }
        }
    }
}


You'll notice we left an empty Start method in the code. While there is a small cost to having empty Start, Update, and other MonoBehaviour events that don't do anything, we sometimes choose to leave the Start method in during development so that the component shows an enable/disable toggle in the inspector.

Attach this script to our Target object and assign that same object to the targetMarker variable in the inspector. The script detects the mouse click event and then, using a raycast, finds the point on the plane that was clicked in 3D space. After that, it moves the Target object to that position in world space in the scene.

A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin towards a given direction, and returns data on any colliders hit along the way.

Implementing the player tank


Our player tank is the simple tank model with a kinematic rigid body component attached. The rigid body component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is to assign the tag Player to our tank.

The isKinematic flag in Unity's Rigidbody component makes it so that external forces are ignored, so that you can control the Rigidbody entirely from code or from an animation, while still having access to the Rigidbody API.
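
If you prefer to do that setup from code rather than the inspector, a small optional sketch (not part of the book's demo) could tag the tank and ensure it has a kinematic Rigidbody when the scene starts:

using UnityEngine;

// Optional setup sketch: tags the tank "Player" and guarantees a kinematic Rigidbody.
// This is an illustrative alternative to doing the same thing in the inspector.
public class TankPhysicsSetupSketch : MonoBehaviour
{
    void Awake()
    {
        gameObject.tag = "Player";

        Rigidbody body = GetComponent<Rigidbody>();
        if (body == null)
        {
            body = gameObject.AddComponent<Rigidbody>();
        }
        body.isKinematic = true; // ignore external forces; we drive the tank from script
    }
}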

The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates its destination point and the direction accordingly.

The code in the PlayerTank.cs file is as follows:

using UnityEngine;

public class PlayerTank : MonoBehaviour
{
    public Transform targetTransform;
    public float targetDistanceTolerance = 3.0f;

    private float movementSpeed;
    private float rotationSpeed;

    // Use this for initialization
    void Start()
    {
        movementSpeed = 10.0f;
        rotationSpeed = 2.0f;
    }

    // Update is called once per frame
    void Update()
    {
        if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance)
        {
            return;
        }

        Vector3 targetPosition = targetTransform.position;
        targetPosition.y = transform.position.y;
        Vector3 direction = targetPosition - transform.position;

        Quaternion tarRot = Quaternion.LookRotation(direction);
        transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime);

        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }
}


[Screenshot: the PlayerTank script in the inspector]

The preceding screenshot shows us a snapshot of our script in the inspector once applied to our tank.

This script queries the position of the Target object on the map and updates its destination point and the direction accordingly. After we assign this script to our tank, be sure to assign our Target object to the targetTransform variable.

Implementing the Aspect class


Next, let's take a look at the Aspect.cs class. Aspect is a very simple class that declares a public enum, AspectTypes, and a single public field of that type named aspectType. That's all of the variables we need in this component. Whenever our AI character senses something, we'll check its aspectType to see whether it's the aspect that the AI has been looking for.

The code in the Aspect.cs file looks like this:

using UnityEngine;

public class Aspect : MonoBehaviour
{
    public enum AspectTypes
    {
        PLAYER,
        ENEMY,
    }

    public AspectTypes aspectType;
}


Attach this aspect script to our player tank and set the aspectType to PLAYER, as shown in the following screenshot:

[Screenshot: the Aspect component with aspectType set to PLAYER]

Creating an AI character


Our NPC will be roaming around the scene in a random direction. It'll have the following two senses:

  • The perspective sense will check whether the tank aspect is within a set visible range and distance
  • The touch sense will detect if the enemy aspect has collided with its box collider, which we'll be adding to the tank in a later step


Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own.

The code in the Wander.cs file is as follows:

using UnityEngine;

public class Wander : MonoBehaviour
{
    private Vector3 targetPosition;

    private float movementSpeed = 5.0f;
    private float rotationSpeed = 2.0f;
    private float targetPositionTolerance = 3.0f;
    private float minX;
    private float maxX;
    private float minZ;
    private float maxZ;

    void Start()
    {
        minX = -45.0f;
        maxX = 45.0f;

        minZ = -45.0f;
        maxZ = 45.0f;

        // Get wander position
        GetNextPosition();
    }

    void Update()
    {
        if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance)
        {
            GetNextPosition();
        }

        Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime);

        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }

    void GetNextPosition()
    {
        targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ));
    }
}


The Wander script is rather simplistic: it generates a new random position in a specified range whenever the AI character reaches its current destination point. The Update method then rotates our enemy and moves it toward this new destination. Attach this script to our AI character so that it can move around in the scene.

Using the Sense class


The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be implemented in custom senses, and are executed from the Start and Update methods, respectively.

Virtual methods are methods that can be overridden using the override modifier in derived classes. Unlike abstract methods, virtual methods do not require that you override them.
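
As a quick, generic illustration of that note (separate from the demo code), a derived class may override a virtual method or simply inherit the base behavior unchanged:

// Generic C# illustration of virtual methods; these classes are not part of the demo.
public class BaseSense
{
    // Virtual: derived classes may override this, but are not required to.
    public virtual void UpdateSense()
    {
        // Default behavior (here, nothing).
    }
}

public class SightSense : BaseSense
{
    public override void UpdateSense()
    {
        // Custom sensing logic replaces the base behavior.
    }
}

public class DullSense : BaseSense
{
    // No override: this class simply inherits the base UpdateSense.
}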

The code in the Sense.cs file looks like this:

using UnityEngine;

public class Sense : MonoBehaviour
{
    public bool enableDebug = true;
    public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY;
    public float detectionRate = 1.0f;

    protected float elapsedTime = 0.0f;

    protected virtual void Initialize() { }
    protected virtual void UpdateSense() { }

    // Use this for initialization
    void Start()
    {
        elapsedTime = 0.0f;
        Initialize();
    }

    // Update is called once per frame
    void Update()
    {
        UpdateSense();
    }
}


The base class's properties include the detection rate, which controls how often the sensing operation runs, as well as the name of the aspect it should look for. This script will not be attached to any of our objects, since we'll be deriving from it for our actual senses.

Giving a little perspective


The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console.

The code in the Perspective.cs file looks like this:

using UnityEngine;

public class Perspective : Sense
{
    public int fieldOfView = 45;
    public int viewDistance = 100;

    private Transform playerTransform;
    private Vector3 rayDirection;

    protected override void Initialize()
    {
        playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
    }

    protected override void UpdateSense()
    {
        elapsedTime += Time.deltaTime;

        if (elapsedTime >= detectionRate)
        {
            DetectAspect();
            elapsedTime = 0.0f; // reset the timer so we only sense at the detection rate
        }
    }

    // Detect perspective field of view for the AI character
    void DetectAspect()
    {
        RaycastHit hit;
        rayDirection = playerTransform.position - transform.position;

        if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView)
        {
            // Detect if player is within the field of view
            if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance))
            {
                Aspect aspect = hit.collider.GetComponent<Aspect>();
                if (aspect != null)
                {
                    // Check the aspect
                    if (aspect.aspectType != aspectName)
                    {
                        print("Enemy Detected");
                    }
                }
            }
        }
    }

    // (The OnDrawGizmos method and the class's closing brace follow below.)


We need to implement the Initialize and UpdateSense methods that will be called from the Start and Update methods of the parent Sense class, respectively. In the DetectAspect method, we first check the angle between the player and the AI's current direction. If it's in the field of view range, we shoot a ray in the direction that the player tank is located. The ray length is the value of the visible distance property.

The Raycast method returns as soon as the ray hits its first collider. This way, even if the player is within visible range, the AI character will not be able to see it if it's hidden behind a wall. We then check the object that was hit for an Aspect component, and only register a detection if that component exists and its aspectType is different from the sense's own aspectName.

The OnDrawGizmos method draws lines based on the perspective field of view angle and viewing distance so that we can see the AI character's line of sight in the editor window during play testing. Attach this script to our AI character and be sure that its aspectName is set to ENEMY.

This method can be illustrated as follows:

    void OnDrawGizmos()
    {
        if (playerTransform == null)
        {
            return;
        }

        Debug.DrawLine(transform.position, playerTransform.position, Color.red);

        Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance);

        // Approximate perspective visualization
        Vector3 leftRayPoint = frontRayPoint;
        leftRayPoint.x += fieldOfView * 0.5f;

        Vector3 rightRayPoint = frontRayPoint;
        rightRayPoint.x -= fieldOfView * 0.5f;

        Debug.DrawLine(transform.position, frontRayPoint, Color.green);
        Debug.DrawLine(transform.position, leftRayPoint, Color.green);
        Debug.DrawLine(transform.position, rightRayPoint, Color.green);
    }
}

Touching is believing


The next sense we'll be implementing is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a box collider component and its IsTrigger flag is on.

We need to implement the OnTriggerEnter event, which will be called whenever another collider enters the collision area of this game object's collider. Since our tank entity also has a collider and rigid body components, collision events will be raised as soon as the colliders of the AI character and player tank collide.

Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to run logic repeatedly (once per physics update) while a collider remains inside the trigger, respectively.
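
A minimal sketch of those two callbacks might look like the following; this class is only an illustration and is not used in the demo:

using UnityEngine;

// Illustration of the other two trigger callbacks mentioned above; not part of the demo.
public class TriggerCallbacksSketch : MonoBehaviour
{
    void OnTriggerStay(Collider other)
    {
        // Fires repeatedly (once per physics update) while 'other' stays inside this trigger.
        print(other.name + " is still inside the trigger");
    }

    void OnTriggerExit(Collider other)
    {
        // Fires once when 'other' leaves this trigger.
        print(other.name + " left the trigger");
    }
}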

The code in the Touch.cs file is as follows:

using UnityEngine;

public class Touch : Sense
{
    void OnTriggerEnter(Collider other)
    {
        Aspect aspect = other.GetComponent<Aspect>();
        if (aspect != null)
        {
            // Check the aspect
            if (aspect.aspectType != aspectName)
            {
                print("Enemy Touch Detected");
            }
        }
    }
}


Our sample NPC and tank have BoxCollider components on them already. The NPC has its sensor collider set to IsTrigger = true. If you're setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. Our trigger can be seen in the following screenshot:

[Screenshot: the box collider trigger on the enemy AI]

The previous screenshot shows the box collider on our enemy AI that we'll use to trigger the touch sense event. In the following screenshot, we can see how our AI character is set up:

[Screenshot: the AI character's component setup]

For demo purposes, we just print out that the enemy aspect has been detected by the touch sense, but in your own games, you can implement any events and logic that you want.
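
For example, instead of printing, the touch sense could raise a C# event so that any other system (combat, UI, audio) can subscribe and react. The sketch below is an assumption about how you might wire that up, not code from the book; the event name and signature are made up for illustration:

using System;
using UnityEngine;

// Hedged sketch: a touch sense that raises a C# event instead of printing to the console.
// The event name and signature are illustrative assumptions.
public class EventTouchSketch : Sense
{
    public static event Action<Aspect> EnemyTouched;

    void OnTriggerEnter(Collider other)
    {
        Aspect aspect = other.GetComponent<Aspect>();
        if (aspect != null && aspect.aspectType != aspectName)
        {
            EnemyTouched?.Invoke(aspect); // any subscriber can react to the detection
        }
    }
}

Another script could then subscribe to EventTouchSketch.EnemyTouched in OnEnable and unsubscribe in OnDisable to drive its own logic.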

Testing the results


Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to the clicked location. You should see the Enemy Touch Detected message in the console log window whenever our AI character gets close to our player tank:

[Screenshot: the console log showing the detection messages]

The previous screenshot shows an AI agent with touch and perspective senses looking for another aspect. Move the player tank in front of the NPC, and you'll get the Enemy Detected message. If you go to the editor view while running the game, you should see the debug lines being rendered. This is because of the OnDrawGizmos method implemented in the Perspective sense class.

To summarize, we introduced the concept of using sensors and implemented two distinct senses—perspective and touch—for our AI character.

If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to explore the brand-new features in Unity 2017.
