In this article by Jorge Palacios, the author of the book Unity 5.x Game AI Programming Cookbook, you will learn how to implement agent awareness using a mixed approach that builds on the sensory-level algorithms learnt previously.
This is probably the easiest way to simulate vision. We take a collider, be it a mesh or a Unity primitive, and use it as the tool to determine whether an object is inside the agent's vision range or not.

It's important to have a collider component attached to the same game object that uses the script in this recipe, as well as in the other collider-based algorithms in this chapter. In this case, it's recommended that the collider be pyramid-based in order to simulate a vision cone. The fewer the polygons, the faster it will run in game.
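Collision messages such as OnCollisionStay only fire when a Rigidbody is involved, so the agent needs one alongside its vision-cone collider. The following setup sketch shows one possible arrangement; the VisorSetup class and the visionConeMesh field are illustrative assumptions, not part of the recipe:

using UnityEngine;

// Hypothetical setup helper; visionConeMesh would normally be
// assigned in the Inspector.
public class VisorSetup : MonoBehaviour
{
    public Mesh visionConeMesh;

    void Awake()
    {
        // Use a convex mesh collider shaped like a cone as the vision volume.
        MeshCollider cone = gameObject.AddComponent<MeshCollider>();
        cone.sharedMesh = visionConeMesh;
        cone.convex = true;
        // Collision messages require a rigid body on at least one of the
        // objects involved. Marking it kinematic keeps physics forces from
        // moving the agent; the targets then need non-kinematic rigid
        // bodies for the messages to fire.
        Rigidbody body = gameObject.AddComponent<Rigidbody>();
        body.useGravity = false;
        body.isKinematic = true;
    }
}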
We will create a component that is able to see enemies nearby by performing the following steps:
1. Create the Visor class, declaring its member variables (remember to create the matching Wall and Enemy tags in Unity's editor):

using UnityEngine;
using System.Collections;

public class Visor : MonoBehaviour
{
    public string tagWall = "Wall";
    public string tagTarget = "Enemy";
    public GameObject agent;
}
2. Initialize the agent field, in case the component is attached directly to the agent's game object:

void Start()
{
    if (agent == null)
        agent = gameObject;
}
3. Declare the function that is called on every frame a collision persists:

public void OnCollisionStay(Collision coll)
{
    // next steps here
}
4. Discard the collision if it is not a target:

string tag = coll.gameObject.tag;
if (!tag.Equals(tagTarget))
    return;
5. Compute the direction from the agent to the target and cast a ray along it:

GameObject target = coll.gameObject;
Vector3 agentPos = agent.transform.position;
Vector3 targetPos = target.transform.position;
Vector3 direction = targetPos - agentPos;
float length = direction.magnitude;
direction.Normalize();
Ray ray = new Ray(agentPos, direction);
RaycastHit[] hits;
hits = Physics.RaycastAll(ray, length);
6. Return if any wall blocks the line of sight; otherwise, the target is visible:

int i;
for (i = 0; i < hits.Length; i++)
{
    GameObject hitObj;
    hitObj = hits[i].collider.gameObject;
    tag = hitObj.tag;
    if (tag.Equals(tagWall))
        return;
}
// TODO
// target is visible
// code your behaviour below
The collider component checks every frame whether it is colliding with any game object in the scene. We leverage the optimizations of Unity's scene graph and physics engine, and focus only on handling valid collisions.

After checking whether a target object is inside the vision range represented by the collider, we cast a ray in order to check whether it is really visible or whether there is a wall in between.
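For example, the TODO section could store the detected target so that other components can react to it. The detectedTarget field in this sketch is an assumption, not part of the recipe:

// Hypothetical member variable to add to the Visor class:
public GameObject detectedTarget;

// Replacing the TODO block at the end of OnCollisionStay:
detectedTarget = target;
Debug.Log(agent.name + " sees " + target.name);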
In this recipe, we will emulate the sense of hearing by developing two entities: a sound emitter and a sound receiver. It is based on the principles proposed by Millington for simulating a hearing system, and uses the power of Unity colliders to detect receivers near an emitter.
As with the other recipes based on colliders, we will need collider components attached to every object to be checked and rigid body components attached to either emitters or receivers.
We will create the SoundReceiver class for our agents and the SoundEmitter class for things such as alarms:
1. Create the SoundReceiver class:

using UnityEngine;
using System.Collections;

public class SoundReceiver : MonoBehaviour
{
    public float soundThreshold;
}
2. Define the virtual function for receiving sound, to be overridden with each agent's own behavior:

public virtual void Receive(float intensity, Vector3 position)
{
    // TODO
    // code your own behavior here
}
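For instance, a guard could turn towards the source of a sound it perceives. The following subclass is only an illustrative sketch; the class name and the reaction are assumptions:

using UnityEngine;

// Hypothetical receiver that turns the agent towards the sound source.
public class GuardSoundReceiver : SoundReceiver
{
    public override void Receive(float intensity, Vector3 position)
    {
        // Face the position the sound came from, staying upright.
        Vector3 direction = position - transform.position;
        direction.y = 0f;
        if (direction != Vector3.zero)
            transform.rotation = Quaternion.LookRotation(direction);
    }
}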
3. Create the SoundEmitter class:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class SoundEmitter : MonoBehaviour
{
    public float soundIntensity;
    public float soundAttenuation;
    public GameObject emitterObject;
    private Dictionary<int, SoundReceiver> receiverDic;
}
4. Initialize the receiver dictionary and the emitter object:

void Start()
{
    receiverDic = new Dictionary<int, SoundReceiver>();
    if (emitterObject == null)
        emitterObject = gameObject;
}
5. Register the receivers that enter the emitter's range:

public void OnCollisionEnter(Collision coll)
{
    SoundReceiver receiver;
    receiver = coll.gameObject.GetComponent<SoundReceiver>();
    if (receiver == null)
        return;
    int objId = coll.gameObject.GetInstanceID();
    receiverDic.Add(objId, receiver);
}
6. Unregister the receivers that leave the emitter's range:

public void OnCollisionExit(Collision coll)
{
    SoundReceiver receiver;
    receiver = coll.gameObject.GetComponent<SoundReceiver>();
    if (receiver == null)
        return;
    int objId = coll.gameObject.GetInstanceID();
    receiverDic.Remove(objId);
}
7. Define the function for emitting sound waves:

public void Emit()
{
    GameObject srObj;
    Vector3 srPos;
    float intensity;
    float distance;
    Vector3 emitterPos = emitterObject.transform.position;
    // next step here
}
8. Compute the attenuated intensity for every registered receiver, delivering the sound only when it reaches the receiver's threshold:

foreach (SoundReceiver sr in receiverDic.Values)
{
    srObj = sr.gameObject;
    srPos = srObj.transform.position;
    distance = Vector3.Distance(srPos, emitterPos);
    intensity = soundIntensity;
    intensity -= soundAttenuation * distance;
    if (intensity < sr.soundThreshold)
        continue;
    sr.Receive(intensity, emitterPos);
}
The colliders help register agents in the list of receivers assigned to an emitter. The emission function then decreases the sound intensity linearly with each agent's distance from the emitter, applying the concept of sound attenuation. For example, with soundIntensity = 30 and soundAttenuation = 2, a receiver standing 10 units away perceives an intensity of 30 - 2 × 10 = 10; a receiver whose soundThreshold is greater than 10 ignores that emission.
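As a usage sketch, an alarm could call Emit whenever something bumps into it. The AlarmTrigger class and its trigger condition are assumptions for illustration:

using UnityEngine;

// Hypothetical alarm: emits a sound pulse on any contact.
[RequireComponent(typeof(SoundEmitter))]
public class AlarmTrigger : MonoBehaviour
{
    private SoundEmitter emitter;

    void Start()
    {
        emitter = GetComponent<SoundEmitter>();
    }

    void OnCollisionEnter(Collision coll)
    {
        // Any collision sets the alarm off.
        emitter.Emit();
    }
}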
We can develop a more flexible algorithm by defining different types of walls that affect sound intensity. It works by casting a ray from the emitter to each receiver and adding the attenuation value of every wall the ray passes through:
First, add a member variable to the SoundEmitter class that maps wall tags to attenuation values:

public Dictionary<string, float> wallTypes;
Then, in the Emit function, subtract the wall attenuation right after computing the distance-based intensity:

intensity -= GetWallAttenuation(emitterPos, srPos);
Next, declare the function that computes the attenuation caused by walls:

public float GetWallAttenuation(Vector3 emitterPos, Vector3 receiverPos)
{
    // next steps here
}
Finally, cast a ray from the emitter to the receiver and accumulate the attenuation value of every wall it hits:

float attenuation = 0f;
Vector3 direction = receiverPos - emitterPos;
float distance = direction.magnitude;
direction.Normalize();
Ray ray = new Ray(emitterPos, direction);
RaycastHit[] hits = Physics.RaycastAll(ray, distance);
int i;
for (i = 0; i < hits.Length; i++)
{
    GameObject obj;
    string tag;
    obj = hits[i].collider.gameObject;
    tag = obj.tag;
    if (wallTypes.ContainsKey(tag))
        attenuation += wallTypes[tag];
}
return attenuation;
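Note that Unity does not serialize Dictionary fields, so wallTypes will not appear in the Inspector and must be populated from code. A minimal sketch, assuming the project defines Wood and Concrete wall tags (both the tags and the values are examples):

void Awake()
{
    // Attenuation units lost per wall of each type the sound crosses;
    // these tags and values are illustrative only.
    wallTypes = new Dictionary<string, float>();
    wallTypes.Add("Wood", 2f);
    wallTypes.Add("Concrete", 5f);
}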
Smelling can be simulated by computing collisions between an agent and odor particles scattered throughout the game level.

As this recipe is also based on colliders, we will need collider components attached to every object to be checked, along with the rigid body components required for collision events to fire.
We will develop the scripts needed to represent odor particles and agents able to smell:
1. Create the OdorParticle class:

using UnityEngine;
using System.Collections;

public class OdorParticle : MonoBehaviour
{
    public float timespan;
    private float timer;
}
2. Validate the timespan and initialize the timer:

void Start()
{
    if (timespan < 0f)
        timespan = 0f;
    timer = timespan;
}
3. Destroy the particle once its lifespan is over:

void Update()
{
    timer -= Time.deltaTime;
    if (timer < 0f)
        Destroy(gameObject);
}
4. Create the Smeller class:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class Smeller : MonoBehaviour
{
    private Vector3 target;
    private Dictionary<int, GameObject> particles;
}
5. Initialize the dictionary of odor particles:

void Start()
{
    particles = new Dictionary<int, GameObject>();
}
6. Register the odor particles that collide with the agent:

public void OnCollisionEnter(Collision coll)
{
    GameObject obj = coll.gameObject;
    OdorParticle op;
    op = obj.GetComponent<OdorParticle>();
    if (op == null)
        return;
    int objId = obj.GetInstanceID();
    particles.Add(objId, obj);
    UpdateTarget();
}
7. Unregister the odor particles that move out of the agent's range:

public void OnCollisionExit(Collision coll)
{
    GameObject obj = coll.gameObject;
    int objId = obj.GetInstanceID();
    bool isRemoved;
    isRemoved = particles.Remove(objId);
    if (!isRemoved)
        return;
    UpdateTarget();
}
8. Compute the centroid of the registered particles:

private void UpdateTarget()
{
    if (particles.Count == 0)
        return;
    Vector3 centroid = Vector3.zero;
    foreach (GameObject p in particles.Values)
    {
        Vector3 pos = p.transform.position;
        centroid += pos;
    }
    // Divide the accumulated positions by the particle count
    // to get the actual centroid.
    target = centroid / particles.Count;
}
9. Implement the function for retrieving the target position, if any:

public Vector3? GetTargetPosition()
{
    if (particles.Keys.Count == 0)
        return null;
    return target;
}
Just like the collider-based hearing recipe, we use colliders to register odor particles in an agent's perception (implemented as a dictionary). Whenever a particle is added or removed, the centroid of the registered particles is recomputed. We retrieve it through GetTargetPosition because, when no particle is registered, the internally stored centroid is stale and should not be used.
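The nullable return type lets callers tell "no smell" apart from a valid position. A minimal consumption sketch; the steering part is left as an assumption:

Vector3? smellTarget = GetComponent<Smeller>().GetTargetPosition();
if (smellTarget.HasValue)
{
    // Head towards the centroid of the smelled particles.
    Vector3 destination = smellTarget.Value;
    // ... movement or steering code goes here
}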
The particle-emission logic is left out, to be implemented according to our game's needs; it basically consists of instantiating odor-particle prefabs. Also, it is recommended to attach the rigid body components to the agents rather than to the particles; odor particles are prone to being instantiated massively, and per-particle rigid bodies would reduce the game's performance.
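A minimal emitter sketch, assuming an odor-particle prefab (with OdorParticle and a collider attached) has been assigned in the Inspector; the class name, field names, and interval are illustrative:

using UnityEngine;

// Hypothetical emitter that periodically spawns odor particles.
public class OdorEmitter : MonoBehaviour
{
    public GameObject odorParticlePrefab;
    public float emissionInterval = 1f;
    private float timer;

    void Update()
    {
        timer += Time.deltaTime;
        if (timer < emissionInterval)
            return;
        timer = 0f;
        // OdorParticle destroys the instance after its timespan elapses.
        Instantiate(odorParticlePrefab, transform.position, Quaternion.identity);
    }
}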
We will now start a set of recipes that use graph-based logic in order to simulate the senses. Once again, we will begin by developing the sense of vision.

It is important to grasp the chapter on path finding in order to understand the inner workings of the graph-based recipes.
We only need to implement a new file:
1. Create the VisorGraph class:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class VisorGraph : MonoBehaviour
{
    public int visionReach;
    public GameObject visorObj;
    public Graph visionGraph;
}
2. Validate the visor object:

void Start()
{
    if (visorObj == null)
        visorObj = gameObject;
}
3. Declare the function for detecting whether a given set of nodes is visible, and initialize its data structures:

public bool IsVisible(int[] visibilityNodes)
{
    int vision = visionReach;
    int src = visionGraph.GetNearestVertex(visorObj);
    HashSet<int> visibleNodes = new HashSet<int>();
    Queue<int> queue = new Queue<int>();
    queue.Enqueue(src);
    // the source vertex is visible by definition
    visibleNodes.Add(src);
    // next steps here
}
4. Implement the breadth-first search loop, marking and enqueuing each unvisited neighbour, and spending one unit of vision reach per expanded vertex:

while (queue.Count != 0)
{
    if (vision == 0)
        break;
    vision--;
    int v = queue.Dequeue();
    List<int> neighbours = visionGraph.GetNeighbors(v);
    foreach (int n in neighbours)
    {
        if (visibleNodes.Contains(n))
            continue;
        queue.Enqueue(n);
        visibleNodes.Add(n);
    }
}
5. Compare the set of visible nodes against the set of nodes where the agents reside:

foreach (int vn in visibilityNodes)
{
    if (visibleNodes.Contains(vn))
        return true;
}
return false;
The recipe uses the breadth-first search algorithm to discover the nodes within the agent's vision reach, and then compares this set of nodes with the set of nodes where the agents reside.
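A usage sketch, assuming the Graph class from the path-finding chapter provides GetNearestVertex for arbitrary game objects; the enemy lookup via tags is illustrative:

VisorGraph visor = GetComponent<VisorGraph>();
// Gather the graph nodes occupied by the enemies we care about.
GameObject[] enemies = GameObject.FindGameObjectsWithTag("Enemy");
int[] enemyNodes = new int[enemies.Length];
for (int i = 0; i < enemies.Length; i++)
    enemyNodes[i] = visor.visionGraph.GetNearestVertex(enemies[i]);
if (visor.IsVisible(enemyNodes))
{
    // an enemy stands on a node within the agent's vision reach
}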
In this article, we explained some algorithms for simulating senses and agent awareness.