Placing interactable objects in the world
In MR game design, interactable objects are essential for bridging the virtual and real worlds. They are designed to respond to user input, even something as basic as a hand (or controller) movement, allowing natural and intuitive interactions such as pushing, grabbing, throwing, or even complex multi-hand manipulation (for example, rotating and scaling an object). They help sell the reality of the environment and, as a result, significantly enhance the player’s engagement and overall gameplay experience.
For our game’s purposes, we’ll have examples of a simple grab and placement interaction and, with the gun, a secondary interactable event action for shooting. Note that while many MR games and experiences are built for use with hands (hand tracking), our boss room example game will use controllers.
Let’s start by making the modules grabbable; later, we’ll configure them to be inserted into the slots on the control console (refer to the GDD in the Designing a boss room section).
Making objects XR interactables
The first grabbable object we’ll work with is the crystal module. The player must be able to grab the module and insert it into the control console, so we’ll open up the provided Module Prefab asset in Prefab Mode (double-click on it in the Project window) and add an XR Grab Interactable component to the root.
As seen in the following screenshot, grabbable objects should have a transform positioned and appropriately rotated for grabbing the item with the correct orientation for proper usage – here, we see both the Module and the Gun assets with their Attach object positioned and rotated for a good grab.
Figure 14.11 – Configuring the XR grab attach transforms
Note from the screenshot that the forward direction (Z-axis, blue arrow) of the Attach transform is pointing away from the player holding the object. You may need to experiment a little to attain the desired grab position.
Now, we just need to assign the Attach object to the XR Grab Interactable component’s Attach Transform field to ensure the object gets properly attached to the player’s controller. You can find the Attach Transform field among the many options the interactable component provides.
Figure 14.12 – XR Grab Interactable Attach Transform assignment
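Incidentally, if you ever want to make this assignment from a script rather than in the Inspector, XR Grab Interactable exposes an attachTransform property. The following is a minimal sketch, not part of the chapter’s required setup, assuming the attach object is a child named Attach:

using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Minimal sketch: assigns a child transform named "Attach" (a hypothetical name)
// as the grab attach point. Assumes the XR Grab Interactable is on this GameObject.
public class AssignAttachTransform : MonoBehaviour
{
    private void Awake()
    {
        var grabInteractable = GetComponent<XRGrabInteractable>();
        var attach = transform.Find("Attach");

        if (grabInteractable != null && attach != null)
        {
            grabInteractable.attachTransform = attach;
        }
    }
}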
Additional reading | Affordance system
The XRI affordance system provides visual (color) and audio feedback cues when interacting with objects, which is especially useful when haptics are unavailable, such as when using hands. It relies on an XR Interactable Affordance State Provider component referencing the interactable as its source. Samples are provided in the XRI example project.
Affordance system: https://docs.unity3d.com/Packages/com.unity.xr.interaction.toolkit@2.5/manual/affordance-system.html
Save the module and temporarily add it to your Boss Room scene near the MR Interaction Setup object. Enter Play Mode and test grabbing the module and moving it around with the controller (by using the grip button on the side of the controller, with your middle finger). Notice I said enter Play Mode this time, not build and run. That’s because we want to iterate on changes like grab point attachment positions more quickly. For details, refer to the Quest Link callout in the Creating the Unity project section.
Completed interactable objects
All of the completed XR interactable objects are provided in the completed Unity project files for this chapter in the book’s GitHub repository here: https://github.com/PacktPublishing/Unity-2022-by-Example/tree/main/ch14/XR-Assets.
That’s all there is to making an object interactable in XR. XRI makes it very easy to get the minimum required interactions, such as grabbing, in place for games and experiences. We saw earlier how to dynamically place other digital objects in the world; let’s do the same for the modules, but with a twist.
Placing the modules in the room
In our boss room battle, our primary objective, besides just staying alive, is to collect the crystal modules to restore the functionality of the control console and energize the reactor to expel the evil plant entity. So, we have a sort of collection game here again! However, let’s add to the challenge of collecting and managing the modules.
Collecting objects in an MR game can be made more engaging by having these objects move around the room. The player will need to rely on their spatial awareness and timing skills, which introduces a more dynamic and novel challenge requiring them to explore the room. With the objects reacting not only to the player’s actions but also to the physicality of the space, this deepens the immersion of the MR gameplay experience.
If the collectible objects were to float somehow, the modules could also contribute to the game’s narrative or aesthetic theme. As it happens, crystal modules have a strange other-worldly property – gravity does not affect them, but forces do. ¯\_(ツ)_/¯
With that context set, let’s first create the three required modules, then spawn them into the room when the game starts.
Creating unique module variants
The three Module Prefab variants we’ll create can be seen in Figure 14.7, and each will have a unique identifier – the ID of the module will come into play when we configure the control console slots.
There are multiple ways to create a Prefab variant, but this time, we’ll use the following steps to create each unique module:
- Make a Prefab variant of Module by right-clicking on it in the Project window and selecting Create | Prefab Variant.
- Name it Module Variant A (the subsequent variants will be B and C).
- Double-click on Module Variant A to open it in Prefab Mode.
- In the Module ID field of the Module component, set it to A (followed by B and C). (The Module script is provided as part of the imported base assets; see the sketch a little further below.)
- From the Assets/Materials folder, assign the Module_A material. You can easily do this by dragging the material from the Project window onto the model visible in the Scene view.
- Set Use Gravity = false on the Rigidbody component.
Repeat these steps to create variants for modules B and C, respectively. Remember, any edits you make to a Prefab variant, such as modified property values or added/removed components, become overrides of the base Prefab. Don’t apply these overrides, or they’ll be written back to the base Prefab asset, and we don’t want that!
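The Module script itself ships with the imported base assets, so you don’t need to write it, but as a reference, based on how it’s used in this chapter (a module ID character read later by the console slots), it might look something like this minimal sketch; the actual asset in the book’s repository may differ:

using UnityEngine;

// Minimal sketch of the provided Module component: it simply exposes the
// module's identifier so the console slots can check which module was inserted.
public class Module : MonoBehaviour
{
    [SerializeField] private char _moduleID; // set to 'A', 'B', or 'C' per variant

    public char ModuleID => _moduleID;
}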
XR interactable required component
Adding an XR Grab Interactable component to our objects will automatically add a Rigidbody component with its default values.
Three unique modules, check! We can now add the necessary code to our game manager to spawn the modules when the game starts.
Spawning the modules to get things moving
There is no sense in reinventing the wheel to spawn another Prefab into the scene; we can rely on the work we’ve already coded (as we should generally do). We will, however, change the spawning up just a bit because we don’t want to instantiate the objects relative to a plane object this time. We want a more arbitrary position in the world, but still in relation to the player’s position.
Open up the GameManager script for editing. Let’s first create the serialized private member variable where we can assign, in the Inspector, all the module variants that need to be spawned into the scene:
[SerializeField] private GameObject[] _prefabModules;
Now, we can create another method overload for the SpawnPrefab() method:

private void SpawnPrefab(GameObject[] prefabs, Vector3 playerOffset,
    Vector3 forceDirection, float force)
{
    var playerTransform = Camera.main.transform;
    var spawnPosition = new Vector3(
        playerTransform.position.x + (playerTransform.right * playerOffset.x).x,
        playerTransform.position.y + (playerTransform.up * playerOffset.y).y,
        playerTransform.position.z + (playerTransform.forward * playerOffset.z).z
    );
    foreach (var item in prefabs)
    {
        var module = Instantiate(item, spawnPosition, Quaternion.identity);
    }
}
In the method signature, we’ve made the prefabs parameter an array, GameObject[] prefabs, to accept any number of Prefabs to spawn, then added forceDirection and force parameters, which we’ll use to apply a force to the objects after instantiation.
The primary difference with this Prefab spawning method is that we’re using a foreach statement to iterate the array of Prefabs to ensure each one is instantiated.
Now, we can add the call to SpawnPrefab() to do the module spawning. For simplicity’s sake, we’ll just tag it onto the console spawning. Add the following call to SpawnPrefab() in the switch statement’s floor plane classification case statement:

case PlaneClassification.Floor:
    if (!_hasSpawnedPrefab_Console)
    {
        …
        SpawnPrefab(_prefabModules, new Vector3(0f, 1.5f, 0.8f), Vector3.up, 0.05f);
    }
    break;
A new vector is passed in as the offset from the player’s position (in world space), Vector3.up is the force direction, and 0.05f is the force applied to the modules when they are instantiated. Simple.
Okay, we’ve talked about adding a force to the crystal modules so that they float about the room… now’s the time to implement it! Add the following lines to this iteration of the SpawnPrefab() method:

// Existing line in foreach body.
var module = Instantiate(item, spawnPosition, Quaternion.identity);
// Added lines.
if (forceDirection != Vector3.zero || force != 0)
{
    if (module.TryGetComponent<Rigidbody>(out var rb))
    {
        ApplyForce(rb);
    }
}
If the force direction and amount passed as parameters to the SpawnPrefab() call are not zero, we attempt to get the Rigidbody component of the instantiated Prefab. If the Rigidbody component reference is successfully retrieved, we call ApplyForce() and pass it in.
All that remains is to add the ApplyForce() method as a local function to work its physics magic:

void ApplyForce(Rigidbody rb)
{
    rb.AddForce(forceDirection * force, ForceMode.Impulse);
    var torqueMultiplier = 3f;
    var randomRotation = new Vector3(
        Random.Range(-1f, 1f),
        Random.Range(-1f, 1f),
        Random.Range(-1f, 1f)).normalized * (force * torqueMultiplier);
    rb.AddTorque(randomRotation, ForceMode.Impulse);
}
The physics API methods we’re taking advantage of here are rb.AddForce() and rb.AddTorque(), which apply forces using the Impulse force mode.
Additional reading | Unity documentation
Rigidbody.AddForce: https://docs.unity3d.com/2022.3/Documentation/ScriptReference/Rigidbody.AddForce.html
Rigidbody.AddTorque: https://docs.unity3d.com/2022.3/Documentation/ScriptReference/Rigidbody.AddTorque.html
Save the script and assign all the crystal module Prefab variants to the GameManager’s Prefab Modules field. Playtest and adjust the spawn position of the modules to your liking. Have fun chasing them down!
Applying impact force
The provided Module Prefab comes with an ImpactApplyForce script added to it that will apply an opposite force to the module when it collides with any other object that has a collider. Combined with a very bouncy physics material assigned to the collider, this attempts to keep the modules constantly moving about the room.
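The exact implementation of ImpactApplyForce ships with the base assets and may differ, but the idea it describes could be sketched roughly as follows; the bounce force value here is a hypothetical placeholder:

using UnityEngine;

// Rough sketch of the idea behind ImpactApplyForce: when the module collides
// with anything, push it back along the collision normal so it keeps drifting.
[RequireComponent(typeof(Rigidbody))]
public class ImpactApplyForce : MonoBehaviour
{
    [SerializeField] private float _bounceForce = 0.05f; // hypothetical value

    private Rigidbody _rigidbody;

    private void Awake() => _rigidbody = GetComponent<Rigidbody>();

    private void OnCollisionEnter(Collision collision)
    {
        // Push the module away from the surface it just hit.
        var pushDirection = collision.GetContact(0).normal;
        _rigidbody.AddForce(pushDirection * _bounceForce, ForceMode.Impulse);
    }
}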
In this section, we got the crystal modules floating about the room, adding the first challenge to the boss room battle mechanics. The second half of the challenge with the modules has to do with inserting them correctly into the slots of the control console. In the next section, we’ll perform the XR interactable configuration necessary for this interaction.
Making the module slots interactable
To have objects that work together in an intuitive system that mimics how things work in the real world, we use an XR Grab Interactable object and an XR Socket Interactor object – we have an interactable and an interactor. The grab interactable allows players to pick up and interact with objects, while the socket interactor provides the designated spots to place them. This handshake between the two components makes it easier for users to interact with objects and provides a more seamless and immersive experience in virtual or MR environments.
This means we’ll be configuring each control console slot with a socket interactor. Go ahead and open up the Console Prefab in Prefab Mode from the Assets/Prefabs folder. Add the XR Socket Interactor component to the Slot A, Slot B, and Slot C objects parented to the ConsoleSlots object. The object hierarchy can be seen in the following screenshot:
Figure 14.13 – Console slot configuration
An attach transform object can also be seen in the preceding screenshot; each slot has an object parented to it named Socket Attach. For each socket interactor added to the slot objects, assign the attach object to the interactor’s Attach Transform field (just like we did for the grab interactables).
We also want to ensure that only modules can be inserted into the slots on the control console, and we can do something about that by using the Interaction Layer Mask property of both XR Grab Interactable and XR Socket Interactor.
It doesn’t matter which component you start with, but it’s essential to first add a Module interaction layer. You can do that from any Interaction Layer Mask field by clicking the dropdown and selecting Add layer… (at the bottom), then going back to the component and selecting Nothing followed by Module, doing this for each of the two components.
Setting the interaction layer with the asset
Alternatively, find the interaction layer settings asset at Assets/XRI/Settings/Resources/InteractionLayerSettings, add the Module layer, then return to the components and set the layer.
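If you ever need to set interaction layers from script instead of the Inspector, XRI exposes an interactionLayers property on both interactables and interactors. The following is a minimal sketch, not part of the chapter’s required setup, assuming the Module interaction layer has already been created:

using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Minimal sketch: restrict a grab interactable and a socket interactor to the
// "Module" interaction layer so only modules fit the console slots.
public class ModuleLayerSetup : MonoBehaviour
{
    [SerializeField] private XRGrabInteractable _moduleInteractable;
    [SerializeField] private XRSocketInteractor _slotInteractor;

    private void Awake()
    {
        // The layer must already exist in the InteractionLayerSettings asset.
        var moduleMask = InteractionLayerMask.GetMask("Module");
        _moduleInteractable.interactionLayers = moduleMask;
        _slotInteractor.interactionLayers = moduleMask;
    }
}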
The last part of the slot configuration: the slots already come with a ConsoleSlot component, similar to the module’s Module component we configured earlier; ensure the Slot ID field of each slot has its ID assigned: A, B, and C again.
Speaking of the ConsoleSlot component, let’s take a closer look at the code. It’s more than just a slot ID – it can detect when a module is inserted or removed. This allows it to tell the parent console controller when the specific slot is interacted with, so the controller can respond accordingly:

public class ConsoleSlot : MonoBehaviour
{
    [SerializeField] private char _slotID;

    private ConsoleController _controller;
    private XRSocketInteractor _socketInteractor;

    private void Awake()
    {
        _controller = GetComponentInParent<ConsoleController>();
        _socketInteractor = GetComponent<XRSocketInteractor>();
        _socketInteractor.selectEntered.AddListener(HandleModuleInserted);
        _socketInteractor.selectExited.AddListener(HandleModuleRemoved);
    }
}
We declare our variables and then, in Awake(), once we have the references to the required components, we register the listeners for the socket interactor’s selectEntered and selectExited events for handling inserting and removing modules, respectively.
Here are the handler method declarations:
private char _moduleID;

private void HandleModuleInserted(SelectEnterEventArgs arg)
{
    _moduleID = arg.interactableObject.transform.GetComponent<Module>().ModuleID;
    if (!char.IsWhiteSpace(_moduleID))
    {
        _controller.InsertModule(_slotID, _moduleID);
    }
}

private void HandleModuleRemoved(SelectExitEventArgs arg) => _controller.ResetSlots();
The first thing we do is get the ID of the inserted module (remember, only modules can be inserted, thanks to the interaction layer mask assignment). We then call a method on the ConsoleController instance: InsertModule() when a module is inserted, or simply ResetSlots() when a module is removed.
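The ConsoleController class is part of the provided assets and isn’t reproduced in this chapter, but based on the two methods the slots call, its module-tracking logic might look roughly like the following sketch; the win-condition check and the response to it are assumptions for illustration:

using System.Collections.Generic;
using UnityEngine;

// Rough sketch of the ConsoleController's slot tracking: each slot reports
// its own ID and the inserted module's ID, and the console checks whether
// every slot holds its matching module.
public class ConsoleController : MonoBehaviour
{
    private readonly Dictionary<char, char> _insertedModules = new Dictionary<char, char>();

    public void InsertModule(char slotID, char moduleID)
    {
        _insertedModules[slotID] = moduleID;

        // Hypothetical win condition: all three slots hold their matching module.
        if (_insertedModules.Count == 3 &&
            _insertedModules['A'] == 'A' &&
            _insertedModules['B'] == 'B' &&
            _insertedModules['C'] == 'C')
        {
            Debug.Log("Console energized!"); // placeholder for the real response
        }
    }

    public void ResetSlots() => _insertedModules.Clear();
}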
You might be considering having ConsoleController subscribe to an event exposed by ConsoleSlot instead. Since there are three slots, it is more efficient to have each slot handle its own interactions (objects should be responsible for their own state) and notify the controller (by passing its ID and the module’s ID). This is a simpler approach.
Bonus activity
Feel free to flip the script and experiment with the console controller listening to events on all three slots to compare the required code differences.
You should now be able to playtest the console slot interactions by grabbing a crystal module and placing it in any slot. Fun!
There’s more fun to be had… let’s get that laser pistol configured to provide us some protection against infiltrated hover bots.
Configuring the laser gun
The configuration for the interactable gun object is pretty much the same as for the crystal module; we already saw how to configure an attach transform in Figure 14.11. The difference is that now we’ll add a secondary action for shooting when the trigger is pulled.
Implementing shooting with XR Interactable Events
We only want shooting to be triggered when we’re actually grabbing the gun, so we won’t be relying on the reusable OnButtonPress component this time. Instead, we’ll use the Interactable Events of the XR Grab Interactable component, specifically, Activated. Activated is called when the interactor selecting the interactable sends a command to activate the interactable – precisely what we need.
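We’ll assign the event in the Inspector in the steps that follow, but for reference, the same hookup could be made from code by subscribing to the interactable’s activated event. Here’s a minimal sketch, assuming the Gun class exposes a parameterless public Shoot() method:

using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Minimal sketch: call the gun's Shoot() method whenever the interactor that
// is holding the gun sends an activate command (the controller trigger).
[RequireComponent(typeof(XRGrabInteractable))]
public class GunActivateHookup : MonoBehaviour
{
    [SerializeField] private Gun _gun;

    private XRGrabInteractable _grabInteractable;

    private void OnEnable()
    {
        _grabInteractable = GetComponent<XRGrabInteractable>();
        _grabInteractable.activated.AddListener(OnActivated);
    }

    private void OnDisable() => _grabInteractable.activated.RemoveListener(OnActivated);

    private void OnActivated(ActivateEventArgs args) => _gun.Shoot();
}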
Additional reading | Grab interactables
Both basic and advanced examples of grab interactions are available in the XRI examples: https://github.com/Unity-Technologies/XR-Interaction-Toolkit-Examples/blob/main/Documentation/GrabInteractables.md.
To set up the Gun Prefab, take the following steps:
- Either modify the provided Gun Prefab directly or make a Prefab variant to work with.
- Open the Prefab in Prefab Mode.
- Add an XR Grab Interactable component to the root:
  - Assign the Attach object to the Attach Transform field.
  - For Interactable Events | Activated, assign the Gun.Shoot function.
Figure 14.14 – XR Grab Interactable event Activated assignment
- For the Rigidbody component, use the following property values (the gun will stay floating in the air right where the player releases their grip; Kryk’zylx military tech is truly advanced!):
  - Use Gravity = false
  - Is Kinematic = true
And that’s all that’s required to configure the Gun Prefab to make it an interactable object that players can pick up and shoot. Pew-pew!
Gun sound FX
We also have sound FX added for the shooting, courtesy of AudioManager and the AudioPlayerSFX3D audio-playing component. So, add the audio manager to the boss room scene, create an audio mixer and the required mixer groups, and then assign the mixer groups to the audio manager. For a refresher, visit Chapter 12.
All the code responsible for making the gun shoot a laser beam when the Shoot() method is called is contained entirely within the Gun class. It has a single responsibility for its specific use case in this game, and the code is simple and straightforward, so I didn’t feel the need to overcomplicate the architecture here.
Code architecture philosophy
“When you have a hammer, everything looks like a nail” is a metaphor we can apply to a common pitfall in software development. People may use their favorite approaches to solve every problem they encounter, unintentionally leading to overcomplicated and inefficient code. Choosing the most appropriate solution for each problem or situation is important, rather than relying solely on a software doctrine.
Sometimes, you just need to embrace simplicity. Knowing when to – or not to – is called experience.
When you examine the Gun script, you’ll see that we’re simply using Physics.Raycast() and a LineRenderer, with the two points for drawing the line set to the firing point and either the end of the gun’s firing range or the point at which the ray hits a damageable object (filtered by use of a layer mask).
Tip
Unity provides a specialized XRLineRenderer component for producing XR-optimized line rendering compared to the regular LineRenderer component. It’s also capable of producing very inexpensive glow effects, which is fantastic for laser beams!
XR Line Renderer: https://github.com/Unity-Technologies/XRLineRenderer
If the raycast hits a damageable object, we pass the damage amount specified in _damageAmount in a call to TakeDamage(). This is how we’ll work with our health system from Chapter 8 (yes, reusable systems for the win!) to cause damage to objects that have health (that is, a HealthSystem component added).
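The complete Gun class is in the chapter’s finished project, but the core of the behavior described above, a raycast plus a two-point laser line and a call into the health system, might look roughly like this sketch; the field names, default values, beam duration, and the TakeDamage() signature are assumptions:

using System.Collections;
using UnityEngine;

// Rough sketch of the Gun's shooting logic: raycast from the firing point,
// damage anything with a HealthSystem, and draw a short-lived laser line.
public class Gun : MonoBehaviour
{
    [SerializeField] private Transform _firingPoint;
    [SerializeField] private LineRenderer _lineRenderer;   // two positions, world space
    [SerializeField] private LayerMask _damageableLayers;
    [SerializeField] private float _range = 20f;            // assumed value
    [SerializeField] private int _damageAmount = 10;        // assumed value and type

    public void Shoot()
    {
        // Default end point: the far end of the gun's firing range.
        var endPoint = _firingPoint.position + _firingPoint.forward * _range;

        if (Physics.Raycast(_firingPoint.position, _firingPoint.forward,
                out var hit, _range, _damageableLayers))
        {
            endPoint = hit.point;

            // Reuse the health system from Chapter 8 to apply damage (signature assumed).
            if (hit.collider.TryGetComponent<HealthSystem>(out var health))
            {
                health.TakeDamage(_damageAmount);
            }
        }

        StartCoroutine(DrawBeam(endPoint));
    }

    private IEnumerator DrawBeam(Vector3 endPoint)
    {
        _lineRenderer.SetPosition(0, _firingPoint.position);
        _lineRenderer.SetPosition(1, endPoint);
        _lineRenderer.enabled = true;
        yield return new WaitForSeconds(0.05f); // brief laser flash
        _lineRenderer.enabled = false;
    }
}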
Now that we have a functional self-defense weapon, let’s get it into the player’s hands.
Spawning the gun position
Alright, this will be a piece of cake! We’re already pros at spawning virtual objects into the room. We’ll reuse most of what we already have in place for spawning objects because we’ll spawn the gun near the player, on their right-hand side (sorry, left-handers).
First things first, open up the GameManager script and add a declaration for a serialized private variable, _prefabGun, to hold the reference to the Gun Prefab:
[SerializeField] private GameObject _prefabGun;
We’re already using the Console Prefab spawning section to spawn other objects, so let’s tag the gun instantiation onto it:

case PlaneClassification.Floor:
    if (!_hasSpawnedPrefab_Console)
    {
        …
        SpawnPrefab(_prefabGun, new Vector3(0.5f, 1.2f, 0.15f));
    }
    break;
Notice that this time, when we call SpawnPrefab(), we use yet another method signature. This is very much like the method overload we used to spawn the modules, except we’re going to spawn a single Prefab and won’t apply any physics force to it in a specified direction.

For this version, let’s create a new method overload for spawning a single Prefab. This method will simply pass values to our previous SpawnPrefab() method, which requires an array of Prefabs. So, we just need to put the single Prefab into a single-item array first:
private void SpawnPrefab(GameObject prefab, Vector3 playerOffset) => SpawnPrefab(new GameObject[] { prefab }, playerOffset, Vector3.zero, 0f);
Notice we preset the parameter values for forceDirection and force to zero to ensure no physics forces will be applied to the spawned object.
Save the script, assign Gun to the GameManager’s Prefab Gun field, save the scene, and playtest with all the elements in place for the start of our game.
In this section, we learned how to create interactable Prefab variants for the player to collect and place into slots on the control console, enhancing player engagement within the environment. We also learned how to implement shooting for the gun as a secondary activate action for objects held by the player. Now, with the added ability to shoot, let’s see how we bring everything together with the gameplay mechanics.