
How-To Tutorials - 3D Game Development

Learning NGUI for Unity

Packt
08 May 2015
2 min read
NGUI is a fantastic GUI toolkit for Unity 3D, allowing fast and simple GUI creation with powerful functionality and practically zero code required. It is a robust, optimized UI plugin that gives you the power to create beautiful and complex user interfaces while reducing performance costs.

Compared to Unity's built-in GUI features, NGUI is far more powerful and optimized. GUI development in Unity requires users to create UI features by scripting lines that display labels, textures, and other UI elements on the screen. These lines have to be written inside a special function, OnGUI(), that is called every frame. With NGUI, this is no longer necessary, as the plugin makes the GUI process much simpler.

This book by Charles Pearson, the author of Learning NGUI for Unity, will help you leverage the power of NGUI for Unity to create stunning mobile and PC games and user interfaces. The book covers the following topics:

- Getting started with NGUI
- Creating NGUI widgets
- Enhancing your UI
- C# with NGUI
- Atlas and font customization
- The in-game user interface
- 3D user interface
- Going mobile
- Screen sizes and aspect ratios
- User experience and best practices

This book is a practical tutorial that will guide you through creating a fully functional and localized main menu along with 2D and 3D in-game user interfaces. The book starts by teaching you about NGUI's workflow and creating a basic UI, before gradually moving on to building widgets and enhancing your UI. You will then switch to the Android platform to take care of different issues mobile devices may encounter. By the end of this book, you will have the knowledge to create ergonomic user interfaces for your existing and future PC or mobile games and applications developed with Unity 3D and NGUI. The book also covers user experience and the best practices to follow when using NGUI for Unity.

If you are a Unity 3D developer who wants to create an effective and user-friendly GUI using NGUI for Unity, then this book is for you. Prior knowledge of C# scripting is expected; however, no knowledge of NGUI is required.

Resources for Article:

Further resources on this subject:
- Unity Networking – The Pong Game [article]
- Unit and Functional Tests [article]
- Components in Unity [article]

3D Modeling

Packt
05 Feb 2015
7 min read
In this article by Suryakumar Balakrishnan Nair and Andreas Oehlke, authors of Learning LibGDX Game Development, Second Edition, you will learn how to load a model and create a basic 3D scene. In a game, we need an actual model exported from Blender or any other 3D animation software.

Loading a model

Copy these three files to the assets folder of the Android project:

- car.g3dj: This is the model file to be used in our example
- tiretext.jpg and yellowtaxi.jpg: These are the materials for the model

Replacing the ModelBuilder class in our ModelTest.java file, we add the following code:

```java
assets = new AssetManager();
assets.load("car.g3dj", Model.class);
assets.finishLoading();
model = assets.get("car.g3dj", Model.class);
instance = new ModelInstance(model);
```

Additionally, a camera input controller is added so we can inspect the model from various angles:

```java
camController = new CameraInputController(cam);
Gdx.input.setInputProcessor(camController);
camController.update();
```

This camera input controller is updated on each render() by calling camController.update(). The completed MyModelTest.java is as follows:

```java
public class MyModelTest extends ApplicationAdapter {
    public Environment environment;
    public PerspectiveCamera cam;
    public CameraInputController camController;
    public ModelBatch modelBatch;
    public Model model;
    public ModelInstance instance;
    public AssetManager assets;

    @Override
    public void create() {
        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f));
        environment.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));
        modelBatch = new ModelBatch();

        cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        cam.position.set(1, 1, 1);
        cam.lookAt(0, 0, 0);
        cam.near = 1f;
        cam.far = 300f;
        cam.update();

        assets = new AssetManager();
        assets.load("car.g3dj", Model.class);
        assets.finishLoading();
        model = assets.get("car.g3dj", Model.class);
        instance = new ModelInstance(model);

        camController = new CameraInputController(cam);
        Gdx.input.setInputProcessor(camController);
    }

    @Override
    public void render() {
        camController.update();
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        modelBatch.begin(cam);
        modelBatch.render(instance, environment);
        modelBatch.end();
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        assets.dispose();
    }
}
```

The new additions are the AssetManager and CameraInputController lines described above. Use the W, S, A, D keys and the mouse to navigate through the rendered scene.

Model formats and the FBX converter

LibGDX supports three model formats, namely Wavefront OBJ, G3DJ, and G3DB. Wavefront OBJ models are intended for testing purposes only, because this format does not include enough information for complex models. You can export your 3D model as .obj from any 3D animation or modeling software; however, LibGDX does not fully support .obj, so your own .obj model might not render correctly. G3DJ is a JSON-based textual format supported by LibGDX and can be used for debugging, whereas G3DB is a binary format and is faster to load. One of the most popular model formats supported by almost any modeling software is FBX. LibGDX provides a tool called FBX converter to convert formats such as .obj and .fbx into the LibGDX-supported formats .g3dj and .g3db.
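The converter is typically invoked with the input file and, optionally, an output file. The following invocation is an assumption based on the tool's general usage pattern, not taken from the article; run the executable without arguments to see the options supported by your build:

```
fbx-conv-win32.exe car.fbx car.g3db
```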
To convert car.fbx to a .g3db format, open the command line and call fbx-conv-win32 as sketched above. Make sure that the fbx-conv-win32.exe file is in the same folder as car.fbx; otherwise, you will have to use the full path of the source file. To find out more about the FBX converter, visit https://github.com/libgdx/fbx-conv and https://github.com/libgdx/libgdx/wiki/3D-animations-and-skinning. You can download the FBX converter from http://libgdx.badlogicgames.com/fbx-conv.

Creating a basic 3D scene

Create a simple scene with a ball and ground. Add the following code to MyCollisionTest.java:

```java
package com.packtpub.libgdx.collisiontest;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
...
import com.badlogic.gdx.utils.Array;

public class MyCollisionTest extends ApplicationAdapter {
    PerspectiveCamera cam;
    ModelBatch modelBatch;
    Array<Model> models;
    ModelInstance groundInstance;
    ModelInstance sphereInstance;
    Environment environment;
    ModelBuilder modelbuilder;

    @Override
    public void create() {
        modelBatch = new ModelBatch();
        environment = new Environment();
        environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f));
        environment.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));

        cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        cam.position.set(0, 10, -20);
        cam.lookAt(0, 0, 0);
        cam.update();

        models = new Array<Model>();
        modelbuilder = new ModelBuilder();

        // creating a ground model using a box shape
        float groundWidth = 40;
        modelbuilder.begin();
        MeshPartBuilder mpb = modelbuilder.part("parts", GL20.GL_TRIANGLES,
                Usage.Position | Usage.Normal | Usage.Color,
                new Material(ColorAttribute.createDiffuse(Color.WHITE)));
        mpb.setColor(1f, 1f, 1f, 1f);
        mpb.box(0, 0, 0, groundWidth, 1, groundWidth);
        Model model = modelbuilder.end();
        models.add(model);
        groundInstance = new ModelInstance(model);

        // creating a sphere model
        float radius = 2f;
        final Model sphereModel = modelbuilder.createSphere(radius, radius, radius, 20, 20,
                new Material(ColorAttribute.createDiffuse(Color.RED),
                        ColorAttribute.createSpecular(Color.GRAY),
                        FloatAttribute.createShininess(64f)),
                Usage.Position | Usage.Normal);
        models.add(sphereModel);
        sphereInstance = new ModelInstance(sphereModel);
        sphereInstance.transform.trn(0, 10, 0);
    }

    @Override
    public void render() {
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        modelBatch.begin(cam);
        modelBatch.render(groundInstance, environment);
        modelBatch.render(sphereInstance, environment);
        modelBatch.end();
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        for (Model model : models)
            model.dispose();
    }
}
```

The ground is actually a thin box created using ModelBuilder, just like the sphere. Now that we have created a simple 3D scene, let's add some physics using the following code:

```java
public class MyCollisionTest extends ApplicationAdapter {
    ...
    private btDefaultCollisionConfiguration collisionConfiguration;
    private btCollisionDispatcher dispatcher;
    private btDbvtBroadphase broadphase;
    private btSequentialImpulseConstraintSolver solver;
    private btDiscreteDynamicsWorld world;
    private Array<btCollisionShape> shapes = new Array<btCollisionShape>();
    private Array<btRigidBodyConstructionInfo> bodyInfos = new Array<btRigidBody.btRigidBodyConstructionInfo>();
    private Array<btRigidBody> bodies = new Array<btRigidBody>();
    private btDefaultMotionState sphereMotionState;

    @Override
    public void create() {
        ...
        // Initiating Bullet physics
        Bullet.init();

        // setting up the world
        collisionConfiguration = new btDefaultCollisionConfiguration();
        dispatcher = new btCollisionDispatcher(collisionConfiguration);
        broadphase = new btDbvtBroadphase();
        solver = new btSequentialImpulseConstraintSolver();
        world = new btDiscreteDynamicsWorld(dispatcher, broadphase, solver, collisionConfiguration);
        world.setGravity(new Vector3(0, -9.81f, 1f));

        // creating the ground body
        btCollisionShape groundshape = new btBoxShape(new Vector3(20, 1 / 2f, 20));
        shapes.add(groundshape);
        btRigidBodyConstructionInfo bodyInfo =
                new btRigidBodyConstructionInfo(0, null, groundshape, Vector3.Zero);
        this.bodyInfos.add(bodyInfo);
        btRigidBody body = new btRigidBody(bodyInfo);
        bodies.add(body);
        world.addRigidBody(body);

        // creating the sphere body
        sphereMotionState = new btDefaultMotionState(sphereInstance.transform);
        sphereMotionState.setWorldTransform(sphereInstance.transform);
        final btCollisionShape sphereShape = new btSphereShape(1f);
        shapes.add(sphereShape);
        bodyInfo = new btRigidBodyConstructionInfo(1, sphereMotionState, sphereShape, new Vector3(1, 1, 1));
        this.bodyInfos.add(bodyInfo);
        body = new btRigidBody(bodyInfo);
        bodies.add(body);
        world.addRigidBody(body);
    }

    @Override
    public void render() {
        Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

        world.stepSimulation(Gdx.graphics.getDeltaTime(), 5);
        sphereMotionState.getWorldTransform(sphereInstance.transform);

        modelBatch.begin(cam);
        modelBatch.render(groundInstance, environment);
        modelBatch.render(sphereInstance, environment);
        modelBatch.end();
    }

    @Override
    public void dispose() {
        modelBatch.dispose();
        for (Model model : models)
            model.dispose();
        for (btRigidBody body : bodies) {
            body.dispose();
        }
        sphereMotionState.dispose();
        for (btCollisionShape shape : shapes)
            shape.dispose();
        for (btRigidBodyConstructionInfo info : bodyInfos)
            info.dispose();
        world.dispose();
        collisionConfiguration.dispose();
        dispatcher.dispose();
        broadphase.dispose();
        solver.dispose();
        Gdx.app.log(this.getClass().getName(), "Disposed");
    }
}
```

The Bullet-related lines are the additions to our previous code. After execution, we see the ball falling and colliding with the ground.

Summary

In this article, you learned how to load a 3D model of a car and created a basic 3D scene.

Resources for Article:

Further resources on this subject:
- Getting Started with GameSalad [article]
- Sparrow iOS Game Framework - The Basics of Our Game [article]
- Making Money with Your Game [article]

C# with NGUI

Packt
29 Dec 2014
23 min read
In this article by Charles Pearson, the author of Learning NGUI for Unity, we will talk about C# scripting with NGUI. We will learn how to handle events and interact with them through code. We'll use them to:

- Play tweens with effects through code
- Implement a localized tooltip system
- Localize labels through code
- Assign callback methods to events using both code and the Inspector view

We'll learn many more useful C# tips throughout the book. Right now, let's start with events and their associated methods.

Events

When scripting in C# with the NGUI plugin, some methods will often be used. For example, you will regularly need to know if an object is currently hovered upon, pressed, or clicked. Of course, you could code your own system, but NGUI handles that very well, and it's important to use it at its full potential in order to save development time.

Available methods

When you create and attach a script to an object that has a collider on it (for example, a button or a 3D object), you can add the following methods within the script to catch events:

- OnHover(bool state): This method is called when the object is hovered or unhovered. The state bool gives the hover state; if state is true, the cursor just entered the object's collider. If state is false, the cursor has just left the collider's bounds.
- OnPress(bool state): This method works in the exact same way as the previous OnHover() method, except it is called when the object is pressed. It also works for touch-enabled devices. If you need to know which mouse button was used to press the object, use the UICamera.currentTouchID variable; if this int is equal to -1, it's a left click. If it's equal to -2, it's a right click. Finally, if it's equal to -3, it's a middle click.
- OnClick(): This method is similar to OnPress(), except that it is called only when the click is validated, meaning when an OnPress(true) event occurs followed by an OnPress(false) event. It works with mouse clicks and touches (taps). In order to handle double clicks, you can also use the OnDoubleClick() method, which works in the same way.
- OnDrag(Vector2 delta): This method is called at each frame when the mouse or touch moves between the OnPress(true) and OnPress(false) events. The Vector2 delta argument gives you the object's movement since the last frame.
- OnDrop(GameObject droppedObj): This method is called when an object is dropped on the GameObject this script is attached to. The dropped GameObject is passed as the droppedObj parameter.
- OnSelect(bool state): This method is called when the user clicks on the object. It will not be called again until another object is clicked on or the object is deselected (by clicking on empty space).
- OnTooltip(bool state): This method is called when the cursor is over the object for more than the duration defined by the Tooltip Delay inspector parameter of UICamera. If the Sticky Tooltip option of UICamera is checked, the tooltip remains visible until the cursor moves outside the collider; otherwise, it disappears as soon as the cursor moves.
- OnScroll(float delta): This method is called when the mouse's scroll wheel is moved while the object is hovered; the delta parameter gives you the amount and direction of the scroll.

If you attach your script to a 3D object to catch these events, make sure it is on a layer included in the Event Mask of UICamera. Now that we've seen the available event methods, let's see how they are used in a simple example.
Example

To illustrate when these events occur and how to catch them, create a new EventTester.cs script with the following code:

```csharp
void OnHover(bool state)
{
    Debug.Log(this.name + " Hover: " + state);
}

void OnPress(bool state)
{
    Debug.Log(this.name + " Pressed: " + state);
}

void OnClick()
{
    Debug.Log(this.name + " Clicked");
}

void OnDrag(Vector2 delta)
{
    Debug.Log(this.name + " Drag: " + delta);
}

void OnDrop(GameObject droppedObject)
{
    Debug.Log(droppedObject.name + " dropped on " + this.name);
}

void OnSelect(bool state)
{
    Debug.Log(this.name + " Selected: " + state);
}

void OnTooltip(bool state)
{
    Debug.Log("Show " + this.name + "'s Tooltip: " + state);
}

void OnScroll(float delta)
{
    Debug.Log("Scroll of " + delta + " on " + this.name);
}
```

These are the event methods we discussed, implemented with their respective arguments. Now attach our Event Tester component to any GameObject with a collider, such as our Main | Buttons | Play button, and hit Unity's play button. From now on, events that occur on the object the script is attached to are tracked in the Console output.

I recommend that you keep the EventTester.cs script in a handy directory as a reminder of the available event methods. For each event, you can simply replace the Debug.Log() lines with the instructions you need.

Now we know how to catch events through code. Let's use them to display a tooltip!

Creating tooltips

Let's use the OnTooltip() event to show a tooltip for our buttons and different options. The tooltip object we are going to create is composed of four elements:

- Tooltip: The tooltip container, with the Tooltip component attached
- Background: The background sprite that wraps around Label
- Border: A yellow border that wraps around Background
- Label: The label that displays the tooltip's text

We will also make sure the tooltip is localized using NGUI methods.

The tooltip object

In order to create the tooltip object, we'll first create its visual elements (widgets), and then we'll attach the Tooltip component to it in order to define it as NGUI's tooltip.

Widgets

First, we need to create the tooltip object's visual elements:

1. Select our UI Root GameObject in the Hierarchy view.
2. Hit Alt + Shift + N to create a new empty child GameObject.
3. Rename this new child from GameObject to Tooltip.
4. Add the NGUI Panel (UIPanel) component to it.
5. Set this new UIPanel's Depth to 10.

In the preceding steps, we've created the tooltip container. It has a UIPanel with a Depth value of 10 in order to make sure our tooltip remains on top of other panels. Now, let's create the faintly transparent background sprite:

1. With Tooltip selected, hit Alt + Shift + S to create a new child sprite.
2. Rename this new child from Sprite to Background.
3. Select our new Tooltip | Background GameObject and configure its UISprite as follows:
   - Make sure Atlas is set to Wooden Atlas.
   - Set Sprite to the Window sprite.
   - Make sure Type is set to Sliced.
   - Change Color Tint to {R: 90, G: 70, B: 0, A: 180}.
   - Set Pivot to top-left (left arrow + up arrow).
   - Change Size to 500 x 85.
   - Reset its Transform position to {0, 0, 0}.

Ok, we can now easily add a fully opaque border with the following trick:

1. With Tooltip | Background selected, hit Ctrl + D to duplicate it.
2. Rename this new duplicate to Border.
Select Tooltip | Border and configure its attached UISprite as follows:

- Disable the Fill Center option.
- Change Color Tint to {R: 255, G: 220, B: 0, A: 255}.
- Change the Depth value to 1.
- Set Anchors Type to Unified.
- Make sure the Execute parameter is set to OnUpdate.
- Drag Tooltip | Background into the new Target field.

By not filling the center of the Border sprite, we now have a yellow border around our background. We used anchors to make sure this border always wraps the background, even at runtime, thanks to the Execute parameter set to OnUpdate.

Let's create the tooltip's label. With Tooltip selected, hit Alt + Shift + L to create a new label. For the new Label GameObject, set the following parameters on its UILabel:

- Set Font Type to NGUI, and Font to Arimo20 with a size of 40.
- Change Text to [FFCC00]This[FFFFFF] is a tooltip.
- Change Overflow to ResizeHeight.
- Set Effect to Outline, with an X and Y of 1 and a black color.
- Set Pivot to top-left (left arrow + up arrow).
- Change X Size to 434. The height adjusts to the amount of text.
- Set the Transform position to {33, -22, 0}.

Ok, good. We now have a label that can display our tooltip's text. This label's height will adjust automatically as the text gets longer or shorter. Let's configure anchors to make sure the background always wraps around the label:

1. Select our Tooltip | Background GameObject.
2. Set Anchors Type to Unified.
3. Drag Tooltip | Label into the new Target field.
4. Set the Execute parameter to OnUpdate.

Great! Now, if you edit our tooltip's text label to a very long text, you'll see that the background adjusts automatically.

UITooltip

We can now add the UITooltip component to our tooltip object:

1. Select our UI Root | Tooltip GameObject.
2. Click the Add Component button in the Inspector view.
3. Type tooltip with your keyboard to search for components.
4. Select Tooltip and hit Enter or click on it with your mouse.
5. Configure the newly attached UITooltip component as follows:
   - Drag UI Root | Tooltip | Label into the Text field.
   - Drag UI Root | Tooltip | Background into the Background field.

The tooltip object is ready! It is now defined as a tooltip for NGUI. Now, let's see how we can display it when needed using a few simple lines of code.

Displaying the tooltip

We must now show the tooltip when needed. In order to do that, we can use the OnTooltip() event, in which we request to display the tooltip with localized text:

1. Select our three Main | Buttons | Exit, Options, and Play buttons.
2. Click the Add Component button in the Inspector view.
3. Type ShowTooltip with your keyboard.
4. Hit Enter twice to create and attach the new ShowTooltip.cs script.
5. Open this new ShowTooltip.cs script.

First, we need to add this public key variable to define which text we want to display:

```csharp
// The localization key of the text to display
public string key = "";
```

Ok, now add the following OnTooltip() method that retrieves the localized text and requests to show or hide the tooltip depending on the state bool:

```csharp
// When the OnTooltip event is triggered on this object
void OnTooltip(bool state)
{
    // Get the final localized text
    string finalText = Localization.Get(key);

    // If the tooltip must be removed...
    if(!state)
    {
        // ...Set the finalText to nothing
        finalText = "";
    }

    // Request the tooltip display
    UITooltip.ShowText(finalText);
}
```

Save the script.
As you can see in the preceding code, the Localization.Get(string key) method returns the localized text for the key parameter that is passed in. You can now use it to localize a label through code anytime! In order to hide the tooltip, we simply request UITooltip to show an empty tooltip.

To use Localization.Get(string key), your label must not have a UILocalize component attached to it; otherwise, the value of UILocalize will overwrite anything you assign to UILabel.

Ok, we have added the code to show our tooltip with localized text. Now, open the Localization.txt file and add these localized strings:

```
// Tooltips
Play_Tooltip, "Launch the game!", "Lancer le jeu !"
Options_Tooltip, "Change language, nickname, subtitles...", "Changer la langue, le pseudo, les sous-titres..."
Exit_Tooltip, "Leaving us already?", "Vous nous quittez déjà ?"
```

Now that our localized strings are added, we could manually configure the key parameter of our three buttons' Show Tooltip components to respectively display Play_Tooltip, Options_Tooltip, and Exit_Tooltip. But that would be a repetitive action, and if we want to add localized tooltips easily for future and existing objects, we should implement the following system: if the key parameter is empty, we'll try to get a localized text based on the GameObject's name. Let's do this now! Open our ShowTooltip.cs script and add this Start() method:

```csharp
// At start
void Start()
{
    // If the key parameter isn't defined in the inspector...
    if(string.IsNullOrEmpty(key))
    {
        // ...Set it now based on the GameObject's name
        key = name + "_Tooltip";
    }
}
```

Click on Unity's play button. That's it! When you leave your cursor on any of our three buttons, a localized tooltip appears. The tooltip wraps around the displayed text perfectly, and we didn't have to manually configure the key parameters of their Show Tooltip components!

Actually, I have a feeling that the display delay is too long. Let's correct this:

1. Select our UI Root | Camera GameObject.
2. Set Tooltip Delay of UICamera to 0.3.

That's better; our localized tooltip now appears after 0.3 seconds of hovering.

Adding the remaining tooltips

We can now easily add tooltips for our Options page's elements. The tooltip works on any GameObject with a collider attached to it. Let's use a search by type to find them:

1. In the Hierarchy view's search bar, type t:boxcollider.
2. Select Checkbox, Confirm, Input, List (both), Music, and SFX.
3. Click on the Add Component button in the Inspector view.
4. Type show with your keyboard to search the components.
5. Hit Enter or click on the Show Tooltip component to attach it to them.

For the objects with generic names, such as Input and List, we need to set their key parameter manually, as follows:

- Select the Checkbox GameObject and set Key to Sound_Tooltip.
- Select the Input GameObject and set Key to Nickname_Tooltip.
- For the List for language selection, set Key to Language_Tooltip.
- For the List for subtitles selection, set Key to Subtitles_Tooltip.

To know whether the selected list is the language or the subtitles list, look at the Options of its UIPopup List: if it has the None option, it's the subtitles selection.
Finally, we need to add these localization strings to the Localization.txt file:

```
Sound_Tooltip, "Enable or disable game sound", "Activer ou désactiver le son du jeu"
Nickname_Tooltip, "Name used during the game", "Pseudo utilisé lors du jeu"
Language_Tooltip, "Game and user interface language", "Langue du jeu et de l'interface"
Subtitles_Tooltip, "Subtitles language", "Langue des sous-titres"
Confirm_Tooltip, "Confirm and return to main menu", "Confirmer et retourner au menu principal"
Music_Tooltip, "Game music volume", "Volume de la musique"
SFX_Tooltip, "Sound effects volume", "Volume des effets"
```

Hit Unity's play button. We now have localized tooltips for all our options! We now know how to easily use NGUI's tooltip system. It's time to talk about tween methods.

Tweens

The tweens we have used until now were components we added to GameObjects in the scene. It is also possible to easily add tweens to GameObjects through code. You can see all the available tweens by simply typing Tween inside any method in your favorite IDE; auto-completion will show you the list of Tween classes.

The strong point of these classes is that they work in one line and don't have to be executed at each frame; you just have to call their Begin() method! Here, we will apply tweens to widgets, but keep in mind that this works in exactly the same way with other GameObjects, since NGUI widgets are GameObjects.

Tween Scale

Previously, we used the Tween Scale component to make our main window disappear when the Exit button is pressed. Let's do the same when the Play button is pressed, but this time we'll do it through code to understand how it's done.

DisappearOnClick script

We will first create a new DisappearOnClick.cs script that will tween a target's scale to {0.01, 0.01, 0.01} when the GameObject it's attached to is clicked on:

1. Select our UI Root | Main | Buttons | Play GameObject.
2. Click the Add Component button in the Inspector view.
3. Type DisappearOnClick with your keyboard.
4. Hit Enter twice to create and add the new DisappearOnClick.cs script.
5. Open this new DisappearOnClick.cs script.

First, we must add a public target GameObject to define which object will be affected by the tween, and a duration float to define the speed:

```csharp
// Declare the target we'll tween down to {0.01, 0.01, 0.01}
public GameObject target;
// Declare a float to configure the tween's duration
public float duration = 0.3f;
```

Ok, now let's add the following OnClick() method, which creates a new tween towards {0.01, 0.01, 0.01} on our desired target using the duration variable:

```csharp
// When this object is clicked
private void OnClick()
{
    // Create a tween on the target
    TweenScale.Begin(target, duration, Vector3.one * 0.01f);
}
```

In the preceding code, we scale the target down towards 0.01f over the desired duration. Save the script. Now, we simply have to assign our variables in the Inspector view:

1. Go back to Unity and select our Play button GameObject.
2. Drag our UI Root | Main object into the DisappearOnClick Target field.

Great. Now, hit Unity's play button. When you click the menu's Play button, our main menu is scaled down to {0.01, 0.01, 0.01} with the single TweenScale.Begin() line! Now that we've seen how to make a basic tween, let's see how to add effects.

Tween effects

Right now, our tween is simple and linear. In order to add an effect to the tween, we first need to store it as a UITweener, which is its parent class.
Replace the line of our OnClick() method with these lines to first store the tween and then set an effect:

```csharp
// Retrieve the new target's tween
UITweener tween = TweenScale.Begin(target, duration, Vector3.one * 0.01f);
// Set the new tween's effect method
tween.method = UITweener.Method.EaseInOut;
```

That's it. Our tween now has an EaseInOut effect. You also have the following tween effect methods:

- BounceIn: Bouncing effect at the start of the tween
- BounceOut: Bouncing effect at the end of the tween
- EaseIn: Smooth acceleration effect at the start of the tween
- EaseInOut: Smooth acceleration and deceleration
- EaseOut: Smooth deceleration effect at the end of the tween
- Linear: Simple linear tween without any effects

You can set the tween's ignoreTimeScale to true if you want it to always run at normal speed, even if your Time.timeScale variable is different from 1.

Great. We now know how to add tween effects through code. Now, let's see how we can set event delegates through code.

Event delegates

Many NGUI components broadcast events for which you can set an event delegate, also known as a callback method, executed when the event is triggered. We did this through the Inspector view by assigning the Notify and Method fields when buttons were clicked. For any type of tween, you can set a specific event delegate for when the tween is finished. We'll see how to do this through code. Before we continue, we must create our callback first. Let's create a callback that loads a new scene.

The callback

Open our MenuManager.cs script and add this static LoadGameScene() callback method:

```csharp
public static void LoadGameScene()
{
    // Load the Game scene now
    Application.LoadLevel("Game");
}
```

Save the script. The preceding code requests to load the Game scene. To ensure Unity finds our scenes at runtime, we'll need to create the Game scene and add both the Menu and Game scenes to the build settings:

1. Navigate to File | Build Settings.
2. Click on the Add Current button (don't close the window yet).
3. In Unity, navigate to File | New Scene.
4. Navigate to File | Save Scene as…
5. Save the scene as Game.unity.
6. Click on the Add Current button of the Build Settings window and close it.
7. Navigate to File | Open Scene and re-open our Menu.unity scene.

Ok, now that both scenes have been added to the build settings, we are ready to link our callback to our event.

Linking a callback to an event

Now that our LoadGameScene() callback method is written, we must link it to our event. We have two solutions. First, we'll see how to assign it using code exclusively, and then we'll create a more flexible system using NGUI's Notify and Method fields.

Code

In order to set a callback for a specific event, a generic solution exists for all NGUI events you might encounter: the EventDelegate.Set() method. You can also add multiple callbacks to an event using EventDelegate.Add(). Add this line at the end of the OnClick() method of DisappearOnClick.cs:

```csharp
// Set the tween's onFinished event to our LoadGameScene callback
EventDelegate.Set(tween.onFinished, MenuManager.LoadGameScene);
```

Instead of the preceding line, we can also use the tween-specific SetOnFinished() convenience method. We get the exact same result with fewer words:

```csharp
// Another way to assign our method to the onFinished event
tween.SetOnFinished(MenuManager.LoadGameScene);
```

Great. If you hit Unity's play button and click on our main menu's Play button, you'll see that our Game scene is loaded as soon as the tween has finished!
It is possible to remove the link between an existing event delegate and a callback by calling EventDelegate.Remove(eventDelegate, callback);. Now, let's see how to link an event delegate to a callback using the Inspector view.

Inspector

Now that we have seen how to set event delegates through code, let's see how we can create a variable that lets us choose which method to call within the Inspector view. This way, the method to call when the target disappears can be set at any time without editing the code. The On Disappear variable we are about to create is of the type EventDelegate. We can declare it right now with the following line as a global variable of our DisappearOnClick.cs script:

```csharp
// Declare an event delegate variable to be set in Inspector
public EventDelegate onDisappear;
```

Now, let's change the OnClick() method's last line to make sure the tween's onFinished event calls the defined onDisappear callback:

```csharp
// Set the tween's onFinished event to the selected callback
tween.SetOnFinished(onDisappear);
```

Ok. Great. Save the script and go to Unity. Select our main menu's Play button: a new On Disappear field has appeared. Drag UI Root, which holds our MenuManager.cs script, into the Notify field. Now, try to select our MenuManager | LoadGameScene method. Surprisingly, it doesn't appear, and you can only select the script's Exit method. Why is that? It is simply because our LoadGameScene() method is currently static. If we want it to be available in the Inspector view, we need to remove its static property:

1. Open our MenuManager.cs script.
2. Remove the static keyword from our LoadGameScene() method.
3. Save the script and return to Unity.

You can now select it in the drop-down list. Great! We have set our callback through the Inspector view; the Game scene will be loaded when the menu disappears. Now that we have learned how to assign event delegates to callback methods through code and the Inspector view, let's see how to assign keyboard keys to user interface elements.

Keyboard keys

In this section, we'll see how to add keyboard control to our UI. First, we'll see how to bind keys to buttons, and then we'll add a navigation system using the keyboard arrows.

UIKey binding

The UIKey Binding component assigns a specific key to the widget it's attached to. We'll use it now to assign the keyboard's Escape key to our menu's Exit button:

1. Select our UI Root | Main | Buttons | Exit GameObject.
2. Click the Add Component button in the Inspector view.
3. Type key with your keyboard to search for components.
4. Select Key Binding and hit Enter or click on it with your mouse.

Let's see its available parameters.

Parameters

The newly attached UIKey Binding component has three parameters:

- Key Code: Which key would you like to bind to an action?
- Modifier: If you want a two-button combination, select one of the four available modifiers: Shift, Control, Alt, or None.
- Action: Which action should we bind to this key? You can simulate a button click with PressAndClick, a selection with Select, or both with All.

Ok, now we'll configure it to see how it works.

Configuration

Simply set the Key Code field to Escape. Now, hit Unity's play button. When you hit the Escape key on your keyboard, the UI reacts as if the Exit button was pressed! We can now move on to see how to add keyboard and controller navigation to the UI.
UIKey navigation

The UIKey Navigation component helps us assign objects to be selected using the keyboard arrows or a controller's directional pad. For most widgets the automatic configuration is enough, but in some cases we'll need to use the override parameters to get the behavior we need.

The nickname input field has neither the UIButton nor the UIButton Scale components attached to it. This means there will be no feedback to show the user that it's currently selected with keyboard navigation, which is a problem. We can correct this right now. Select UI Root | Options | Nickname | Input, and then:

1. Add the Button component (UIButton) to it.
2. Add the Button Scale component (UIButton Scale) to it.
3. Center the Pivot of UISprite (middle bar + middle bar).
4. Reset the Center of the Box Collider to {0, 0, 0}.

Ok. We'll now add the Key Navigation component (UIKey Navigation) to most of the buttons in the scene. In order to do that, type t:uibutton in the Hierarchy view's search bar to display only GameObjects with the UIButton component attached to them. With the resulting GameObjects selected, follow these steps:

1. Click the Add Component button in the Inspector view.
2. Type key with your keyboard to search for components.
3. Select Key Navigation and hit Enter or click on it with your mouse.

We've added the UIKey Navigation component to our selection. Let's see its parameters.

Parameters

The newly attached UIKey Navigation component has four parameter groups:

- Starts Selected: Is this widget selected by default at the start?
- Select on Click: Which widget should be selected when this widget is clicked on, or when the Enter key/confirm button has been pressed? This option can be used to select a specific widget when a new page is displayed.
- Constraint: Use this to limit the navigation movement from this widget:
  - None: The movement is free from this widget
  - Vertical: From this widget, you can only go up or down
  - Horizontal: From this widget, you can only move left or right
  - Explicit: Only move to the widgets specified in the Override fields
- Override: Use the Left, Right, Up, and Down fields to force the input to select the specified objects. If the Constraint parameter is set to Explicit, only widgets specified here can be selected. Otherwise, automatic configuration still works for fields left as None.

Summary

This article thus has given an introduction to how C# is used with NGUI in Unity.

Resources for Article:

Further resources on this subject:
- Unity Networking – The Pong Game [article]
- Unit and Functional Tests [article]
- Components in Unity [article]

Enemy and Friendly AIs

Packt
17 Dec 2014
29 min read
In this article by Kyle D'Aoust, author of the book Unity Game Development Scripting, we will see how to create enemy and friendly AIs.

Artificial intelligence, also known as AI, is something that you'll see in every video game that you play. First-person shooters, real-time strategy games, simulations, role-playing games, sports, puzzles, and so on all have various forms of AI, in both large and small systems. In this article, we'll cover several topics involved in creating AI, including techniques, actions, pathfinding, animations, and the AI manager. Then, finally, we'll put it all together to create an AI package of our own. In this article, you will learn:

- What a finite state machine is
- What a behavior tree is
- How to combine two AI techniques for complex AI
- How to deal with internal and external actions
- How to handle outside actions that affect the AI
- How to play character animations
- What pathfinding is
- How to use a waypoint system
- How to use Unity's NavMesh pathfinding system
- How to combine waypoints and NavMesh for complete pathfinding

AI techniques

There are two very common techniques used to create AI: the finite state machine and the behavior tree. Depending on the game that you are making and the complexity of the AI that you want, the technique you use will vary. In this article, we'll utilize both techniques in our AI script to maximize the potential of our AI.

Finite state machines

Finite state machines are one of the most common AI systems used throughout computer programming. To define the term itself, a finite state machine is a system that controls an object that has a limited number of states to exist in. Some real-world examples of a finite state machine are traffic lights, televisions, and computers. Let's look at an example of a computer finite state machine to get a better understanding.

A computer can be in various states. To keep it simple, we will list three main states: On, Off, and Active. The Off state is when the computer does not have power running through it, the On state is when it does, and the Active state is when someone is using it. Each state has its own functions:

| State  | Functions |
|--------|-----------|
| On     | Can be used by anyone; can turn off the computer |
| Off    | Can turn on the computer; computer parts can be operated on |
| Active | Can access the Internet and various programs; can communicate with other devices; can turn off the computer |

Some of the functions of each state affect each other, while some do not. The functions that do affect each other are the ones that control what state the finite state machine is in. If you press the power button on your computer, it turns on, changing the state of your computer to On. While the state of your computer is On, you can use the Internet and other programs, or communicate with other devices such as a router or printer; doing so changes the state of your computer to Active. When you are using the computer, you can also turn it off through its software or by pressing the power button, changing the state to Off.

In video games, you can use a finite state machine to create AI with a simple logic. You can also combine finite state machines with other types of AI systems to create a unique and perhaps more complex AI system.
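As a point of reference, the computer example above maps directly onto a tiny enum-and-switch state machine. The following is a minimal sketch, not from the book; the class and method names are illustrative:

```csharp
using UnityEngine;

// A minimal finite state machine mirroring the computer example above.
public class ComputerFSM : MonoBehaviour
{
    // The limited set of states the machine can exist in
    public enum State { Off, On, Active }

    public State current = State.Off;

    // An external input (the "power button") drives state transitions
    public void PressPowerButton()
    {
        current = (current == State.Off) ? State.On : State.Off;
    }

    // Using the computer is only possible while it is On
    public void UseComputer()
    {
        if (current == State.On)
            current = State.Active;
    }

    void Update()
    {
        // Each state runs its own logic every frame
        switch (current)
        {
            case State.Off:
                // No power; parts could be operated on here
                break;
            case State.On:
                // Waiting to be used or turned off
                break;
            case State.Active:
                // Internet access, programs, device communication
                break;
        }
    }
}
```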
In this article, we will be using finite state machines as well as what is known as a behavior tree.

The behavior tree form of the AI system

A behavior tree is another kind of AI system that works in a very similar way to finite state machines. In fact, behavior trees are made up of finite state machines that work in a hierarchical system. This hierarchy gives us great control over an individual finite state system, and potentially many of them, within the behavior tree, allowing us to build a complex AI system.

Looking back at the table explaining a finite state machine, a behavior tree works the same way. Instead of states you have behaviors, and in place of the state functions you have the various finite state machines that determine what is done while the AI is in a specific behavior. The behavior tree we will use in this article has four behaviors: Idle, Guard, Combat, and Flee. Each behavior is made up of finite state machines: Idle and Flee have only one each, while Guard and Combat have multiple. Within the Combat behavior, two of its finite state machines even have a couple of finite state machines of their own.

As you can see, this hierarchy-based system of finite state machines allows us to use a basic form of logic to create an even more complex AI system. At the same time, we also get a lot of control by separating our AI into various behaviors. Each behavior runs its own silo of code, oblivious to the other behaviors. The only time we want a behavior to notice another behavior is when an internal or external action occurs that forces the behavior of our AI to change.

Combining the techniques

In this article, we will take both of the AI techniques and combine them to create a great AI package. Our behavior tree will utilize finite state machines to run the individual behaviors, creating a unique and complex AI system. This AI package can be used for an enemy AI as well as a friendly AI.

Let's start scripting!

Now, let's begin scripting our AI! To start off, create a new C# file and name it AI_Agent. Upon opening it, delete any functions within the main class, leaving it empty. Just after the using statements, add this enum to the script:

```csharp
public enum Behaviors {Idle, Guard, Combat, Flee};
```

This enum will be used throughout our script to determine what behavior our AI is in. Now let's add it to our class. It is time to declare our first variable:

```csharp
public Behaviors aiBehaviors = Behaviors.Idle;
```

This variable, aiBehaviors, will be the deciding factor of what our AI does. Its main purpose is to have its value checked and changed when needed. Let's create our first function, which will utilize one of this variable's purposes:

```csharp
void RunBehaviors()
{
    switch(aiBehaviors)
    {
        case Behaviors.Idle:
            RunIdleNode();
            break;
        case Behaviors.Guard:
            RunGuardNode();
            break;
        case Behaviors.Combat:
            RunCombatNode();
            break;
        case Behaviors.Flee:
            RunFleeNode();
            break;
    }
}
```

What this function does is check the value of our aiBehaviors variable in a switch statement. Depending on the value, it then calls a function to be used within that behavior. This function is actually a finite state machine, which decides what that behavior does at that point.
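The article doesn't show where RunBehaviors() is driven from. Since behaviors such as patrolling need to run continuously, a reasonable assumption is to tick it from the MonoBehaviour's Update() loop; a minimal sketch:

```csharp
// Hypothetical driver (an assumption, not from the article): tick the
// active behavior once per frame so continuous actions such as Patrol()
// keep running. The article only shows RunBehaviors() being called
// from ChangeBehavior().
void Update()
{
    RunBehaviors();
}
```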
Now, let's add another function to our script, which will allow us to change the behavior of our AI:

```csharp
void ChangeBehavior(Behaviors newBehavior)
{
    aiBehaviors = newBehavior;
    RunBehaviors();
}
```

As you can see, this function works very similarly to the RunBehaviors function. When called, it takes a new Behaviors value and assigns it to aiBehaviors, thereby changing the behavior of our AI. Now let's add the final step to running our behaviors; for now, these will be empty functions that act as placeholders for our internal and external actions. Add these functions to the script:

```csharp
void RunIdleNode()
{
}

void RunGuardNode()
{
}

void RunCombatNode()
{
}

void RunFleeNode()
{
}
```

Each of these functions will run the finite state machines that make up the behaviors. These functions are essentially a middleman between the behavior and the behavior's actions. Using them is the beginning of having more control over our behaviors, something that can't be done with a simple finite state machine.

Internal and external actions

The actions of a finite state machine can be broken up into internal and external actions. Separating the actions into these two categories makes it easier to define what our AI does in any given situation. The separation is helpful in the planning phase of creating AI, but it also helps in the scripting phase, since you will know what will and will not be called by other classes and GameObjects. Another benefit of this separation is that it eases the work of multiple programmers working on the same AI; each programmer can work on separate parts of the AI with fewer conflicts.

External actions

External actions are functions and activities that are activated when objects outside of the AI object act upon it. Some examples of external actions include being hit by a player, having a spell cast upon the AI, falling from heights, losing the game by an external condition, communicating with external objects, and so on. The external actions that we will be using for our AI are:

- Changing its health
- Raising a stat
- Lowering a stat
- Killing the AI

Internal actions

Internal actions are the functions and activities that the AI runs within itself. Examples of these are patrolling a set path, attacking a player, running away from the player, using items, and so on. These are all actions that the AI chooses to do depending on a number of conditions. The internal actions that we will be using for our AI are:

- Patrolling a path
- Attacking a player
- Fleeing from a player
- Searching for a player

Scripting the actions

It's time to add some internal and external actions to the script. First, be sure to add this using statement at the top of your script with the other using statements:

```csharp
using System.Collections.Generic;
```

Now, let's add some variables that will allow us to use the actions:

```csharp
public bool isSuspicious = false;
public bool isInRange = false;
public bool FightsRanged = false;
public List<KeyValuePair<string, int>> Stats = new List<KeyValuePair<string, int>>();
public GameObject Projectile;
```

The first three of our new variables are conditions to be used in finite state machines to determine which function should be called. Next, we have a list of KeyValuePair entries, which will hold the stats of our AI GameObject. The last variable is a GameObject, which we will use as a projectile for ranged attacks.
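The article never shows the Stats list being populated. A reasonable assumption is to seed it once before any behavior reads from it, for example in Awake(); a minimal sketch (the starting values and the "Speed" stat are illustrative, but the "Health" key is the one ChangeHealth() searches for):

```csharp
// Hypothetical setup (an assumption, not from the article): seed the
// stats list so ChangeHealth() and ModifyStat() have entries to find.
void Awake()
{
    Stats.Add(new KeyValuePair<string, int>("Health", 100)); // value assumed
    Stats.Add(new KeyValuePair<string, int>("Speed", 5));    // example stat
}
```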
Remember the empty middleman functions that we created previously? Now, with these new variables, we will add some code to each of them. Add this code so that the empty functions are now filled:

```csharp
void RunIdleNode()
{
    Idle();
}

void RunGuardNode()
{
    Guard();
}

void RunCombatNode()
{
    if(FightsRanged)
        RangedAttack();
    else
        MeleeAttack();
}

void RunFleeNode()
{
    Flee();
}
```

Two of the three boolean variables we just created are used as conditionals to call different functions, effectively creating finite state machines. Next, we will add the rest of our actions; these are what is called by the middleman functions. Some of these functions will be empty placeholders for now, but will be filled in later on in the article:

```csharp
void Idle()
{
}

void Guard()
{
    if(isSuspicious)
    {
        SearchForTarget();
    }
    else
    {
        Patrol();
    }
}

void Combat()
{
    if(isInRange)
    {
        if(FightsRanged)
        {
            RangedAttack();
        }
        else
        {
            MeleeAttack();
        }
    }
    else
    {
        SearchForTarget();
    }
}

void Flee()
{
}

void SearchForTarget()
{
}

void Patrol()
{
}

void RangedAttack()
{
    GameObject newProjectile;
    newProjectile = Instantiate(Projectile, transform.position, Quaternion.identity) as GameObject;
}

void MeleeAttack()
{
}
```

In the Guard function, we check whether the AI has noticed the player. If it has, it proceeds to search for the player; if not, it continues to patrol along its path. In the Combat function, we first check whether the player is within attacking range; if not, the AI searches again. If the player is within attacking range, we check whether the AI prefers attacking up close or from far away. For ranged attacks, we first declare a temporary GameObject variable and set it to an instantiated clone of our Projectile GameObject. From there, the projectile will run its own scripts to determine what it does. This is how we allow our AI to attack the player from a distance.

To finish off our actions, we have two more functions to add. The first one will change the health of the AI:

```csharp
void ChangeHealth(int Amount)
{
    if(Amount < 0)
    {
        if(!isSuspicious)
        {
            isSuspicious = true;
            ChangeBehavior(Behaviors.Guard);
        }
    }
    for(int i = 0; i < Stats.Count; i++)
    {
        if(Stats[i].Key == "Health")
        {
            int tempValue = Stats[i].Value;
            Stats[i] = new KeyValuePair<string, int>(Stats[i].Key, tempValue += Amount);
            if(Stats[i].Value <= 0)
            {
                Destroy(gameObject);
            }
            else if(Stats[i].Value < 25)
            {
                isSuspicious = false;
                ChangeBehavior(Behaviors.Flee);
            }
            break;
        }
    }
}
```

This function takes an int parameter, which is the amount by which we want to change the health of the AI. The first thing we do is check whether the amount is negative; if it is, we make our AI suspicious and change its behavior accordingly. Next, we search for the health stat in our list and set its value to a new value affected by the Amount parameter. (Note that the loop iterates up to Stats.Count rather than Stats.Capacity, since Capacity can exceed the number of elements actually in the list.) We then check whether the AI's health is at or below zero, in which case we kill it; if not, we also check whether its health is below 25. If the health is that low, we make our AI flee from the player.

To finish off our actions, we have one last function to add. It will allow us to affect a specific stat of the AI. These modifications will either add to or subtract from a stat, and they can be permanent or restored at any time.
For the following instance, the modifications will be permanent:

```csharp
void ModifyStat(string Stat, int Amount)
{
    for(int i = 0; i < Stats.Count; i++)
    {
        if(Stats[i].Key == Stat)
        {
            int tempValue = Stats[i].Value;
            Stats[i] = new KeyValuePair<string, int>(Stats[i].Key, tempValue += Amount);
            break;
        }
    }
    if(Amount < 0)
    {
        if(!isSuspicious)
        {
            isSuspicious = true;
            ChangeBehavior(Behaviors.Guard);
        }
    }
}
```

This function takes a string and an integer. The string is used to search for the specific stat that we want to affect, and the integer is how much we want to affect that stat by. It works in a very similar way to the ChangeHealth function, except that we first search for a specific stat. We also check whether the amount is negative. This time, if it is negative, we change our AI's behavior to Guard. This seems an appropriate response for the AI after being hit by something that negated one of its stats!

Pathfinding

Pathfinding is how the AI maneuvers around the level. For our AI package, we will be using two different kinds of pathfinding: NavMesh and waypoints. The waypoint system is a common approach to creating paths for AI to move around the game level, and to allow our AI to move through our level in an intelligent manner, we will use Unity's NavMesh component.

Creating paths using the waypoint system

Using waypoints to create paths is a common practice in game design, and it's simple too. To sum it up, you place objects or set locations around the game world; these are your waypoints. In the code, you place all of the waypoints that you created in a container of some kind, such as a list or an array. Then, starting at the first waypoint, you tell the AI to move to the next waypoint. Once that waypoint has been reached, you send the AI off to another one, ultimately creating a system that iterates through all of the waypoints, allowing the AI to move around the game world along the set paths.

Although the waypoint system grants our AI movement in the world, at this point it doesn't know how to avoid obstacles that it may come across. That is why we need to implement some sort of mesh navigation system, so that the AI won't get stuck anywhere.

Unity's NavMesh system

The next step in creating AI pathfinding is to create a way for our AI to navigate through the game world intelligently, meaning that it does not get stuck anywhere. In just about every game with a 3D-based AI, the world the AI inhabits has all sorts of obstacles: plants, stairs, ramps, boxes, holes, and so on. To get our AI to avoid these obstacles, we will use Unity's built-in NavMesh system.

Setting up the environment

Before we can start creating our pathfinding system, we need to create a level for our AI to move around in. To do this, I am just using Unity primitives such as cubes and capsules. For the floor, create a cube, stretch it out, and flatten it to make a rectangle. From there, clone it several times so that you have a large floor made up of cubes. Next, delete a bunch of the cubes and move some others around. This will create holes in our floor, which will be used and tested when we implement the NavMesh system. To make the floor easy to see, I've created a green material and assigned it to the floor cubes. After this, create a few more cubes: make one really long and one shorter than the previous one but thicker; the last one will be used as a ramp.
I've created an intersection of the really long cube and the thick cube, and placed the ramp towards the end of the thick cube, giving access to the top of the cubes. Our final step in creating our test environment is to add a few waypoints for our AI. For testing purposes, create five waypoints: place one in each corner of the level and one in the middle. For the actual waypoints, use the capsule primitive, and add a Rigidbody component to each one. Name the waypoints Waypoint1, Waypoint2, Waypoint3, and so on. The names are not all that important for our code; they just make it easier to distinguish between waypoints in the Inspector.

Creating the NavMesh

Now, we will create the navigation mesh for our scene. The first thing to do is select all of the floor cubes. In Unity's menu bar, click on the Window option, and then click on the Navigation option at the bottom of the drop-down menu; this will open the Navigation window.

By default, the OffMeshLink Generation option is not checked; be sure to check it. What this does is create links at the edges of the mesh, allowing it to communicate with any other OffMeshLink nearby, creating a singular mesh. This is a handy tool, since game levels typically use more than one mesh as a floor.

The Scene filter shows specific objects in the Hierarchy view list: selecting All will show all of your GameObjects, selecting Mesh Renderers will only show GameObjects that have the Mesh Renderer component, and selecting Terrains will only show terrains. The Navigation Layer drop-down allows you to mark an area as walkable, not walkable, or jump accessible. Walkable areas typically refer to floors, ramps, and so on; non-walkable areas refer to walls, rocks, and other obstacles.

Next, click on the Bake tab next to the Object tab. For this article, I am leaving all the values at their defaults. The Radius property determines how close to the walls the navigation mesh will exist. Height determines how much vertical space is needed for the AI agent to be able to walk on the navigation mesh. Max Slope is the maximum angle that the AI is allowed to travel on, for ramps, hills, and so on. The Step Height property determines how high the AI can step up onto surfaces above the ground level.

For Generated Off Mesh Links, the properties are very similar to each other. The Drop Height value is the maximum distance the AI can intelligently drop down to another part of the navigation mesh. Jump Distance is the opposite of Drop Height; it determines how far the AI can jump up to another part of the navigation mesh. The Advanced options are to be used when you have a better understanding of the NavMesh component and want a little more out of it; here, you can further tweak the accuracy of the NavMesh as well as create a Height Mesh to coincide with the navigation mesh.

Now that you know the basics of the Unity NavMesh, let's go ahead and create our navigation mesh. At the bottom-right corner of the Navigation window, you should see two buttons: Clear and Bake. Click on the Bake button now to create your new navigation mesh. Then select the ramp and the thick cube that we created earlier.
In the Navigation window, make sure that the OffMeshLink Generation option is not checked, and that Navigation Layer is set to Default. Then, reselect the floor cubes as well, so that you have the floors, ramp, and thick wall selected. Bake the navigation mesh again to create a new one. This is what my scene looks like now with the navigation mesh:

You should be able to see the newly generated navigation mesh overlaying the underlying mesh. This is what was created using the default Bake properties. Changing the Bake properties will give you different results, which will come down to what kind of navigation mesh you want the AI to use.

Now that we have a navigation mesh, let's create the code for our AI to utilize. First, we will code the waypoint system, and then we will code what is needed for the NavMesh system.

Adding our variables

To start our navigation system, we will need to add a few variables first. Place these with the rest of our variables:

public Transform[] Waypoints;
public int curWaypoint = 0;
bool ReversePath = false;
NavMeshAgent navAgent;
Vector3 Destination;
float Distance;

The first variable is an array of Transforms; this is what we will use to hold our waypoints. Next, we have an integer that is used to iterate through our Transform array. We have a bool variable, which will decide how we should navigate through the waypoints. The next three variables are more oriented towards the navigation mesh that we created earlier. The NavMeshAgent object is what we will reference when we want to interact with the navigation mesh. The destination will be the location that we want the AI to move towards. The distance is what we will use to check how far away we are from that location.

Scripting the navigation functions

Previously, we created many empty functions; some of these are dependent on pathfinding. Let's start with the Flee function. Add this code to replace the empty function:

void Flee()
{
    for (int fleePoint = 0; fleePoint < Waypoints.Length; fleePoint++)
    {
        Distance = Vector3.Distance(gameObject.transform.position, Waypoints[fleePoint].position);

        if (Distance > 10.00f)
        {
            // Run to the far-away waypoint the loop just found.
            // (Note: this uses fleePoint, the index found by the loop,
            // rather than curWaypoint.)
            Destination = Waypoints[fleePoint].position;
            navAgent.SetDestination(Destination);
            break;
        }
        else if (Distance < 2.00f)
        {
            ChangeBehavior(Behaviors.Idle);
        }
    }
}

What this for loop does is pick a waypoint that is more than 10 units away. When it finds one, it sets Destination to that waypoint and moves the AI accordingly. If the distance from a waypoint is less than 2, we change the behavior to Idle.

The next function that we will adjust is the SearchForTarget function. Add the following code to it, replacing its previous emptiness:

void SearchForTarget()
{
    Destination = GameObject.FindGameObjectWithTag("Player").transform.position;
    navAgent.SetDestination(Destination);
    Distance = Vector3.Distance(gameObject.transform.position, Destination);

    if (Distance < 10)
        ChangeBehavior(Behaviors.Combat);
}

This function will now be able to search for a target, the Player target to be more specific. We set Destination to the player's current position, and then move the AI towards the player. When Distance is less than 10, we set the AI behavior to Combat. Now that our AI can run from the player as well as chase them down, let's utilize the waypoints and create paths for the AI.
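One practical aside before we write the patrol code: GameObject.FindGameObjectWithTag performs a scene-wide search every time SearchForTarget runs. A common refinement, sketched below with an invented field name, is to look the player up once and reuse the cached Transform (Awake is used here so that it doesn't collide with the Start function we add later in this article):

// Hypothetical refinement: cache the player lookup instead of
// searching the scene on every call.
Transform playerTransform;   // invented field name

void Awake()
{
    playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
}

void SearchForTarget()
{
    Destination = playerTransform.position;
    navAgent.SetDestination(Destination);
    Distance = Vector3.Distance(gameObject.transform.position, Destination);

    if (Distance < 10)
        ChangeBehavior(Behaviors.Combat);
}

Now, on to the waypoints.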
Add this code to the empty Patrol function:

void Patrol()
{
    Distance = Vector3.Distance(gameObject.transform.position, Waypoints[curWaypoint].position);

    if (Distance > 2.00f)
    {
        Destination = Waypoints[curWaypoint].position;
        navAgent.SetDestination(Destination);
    }
    else
    {
        if (ReversePath)
        {
            if (curWaypoint <= 0)
            {
                ReversePath = false;
            }
            else
            {
                curWaypoint--;
                Destination = Waypoints[curWaypoint].position;
            }
        }
        else
        {
            if (curWaypoint >= Waypoints.Length - 1)
            {
                ReversePath = true;
            }
            else
            {
                curWaypoint++;
                Destination = Waypoints[curWaypoint].position;
            }
        }
    }
}

What Patrol will now do is check the Distance variable. If the AI is far from the current waypoint, we set that waypoint as the new destination of our AI. If the current waypoint is close to the AI, we check the ReversePath Boolean variable. When ReversePath is true, we tell the AI to go to the previous waypoint, going through the path in the reverse order. When ReversePath is false, the AI will go on to the next waypoint in the list of waypoints.

With all of this completed, you now have an AI with pathfinding abilities. The AI can also patrol a path set by waypoints and reverse the path when the end has been reached. We have also added abilities for the AI to search for the player as well as flee from the player.

Character animations

Animations are what bring the characters to life visually in the game. From basic animations to super realistic movements, all the animations are important and really represent what scripters do to the player. Before we add animations to our AI, we first need to get a model mesh for it!

Importing the model mesh

For this article, I am using a model mesh that I got from the Unity Asset Store. To use the same model mesh that I am using, go to the Unity Asset Store and search for Skeletons Pack. It is a package of four skeleton model meshes that are fully textured, propped, and animated. The asset itself is free and great to use. When you import the package into Unity, it will come with all four models as well as their textures, and an example scene named ShowCase. Open that scene and you should see the four skeletons. If you run the scene, you will see all the skeletons playing their idle animations.

Choose the skeleton you want to use for your AI; I chose skeletonDark for mine. Click on the drop-down list of your skeleton in the Hierarchy window, and then on the Bip01 drop-down list. Then, select the magicParticle object. For our AI, we will not need it, so delete it from the Hierarchy window.

Create a new prefab in the Project window and name it Skeleton. Now select the skeleton that you want to use from the Hierarchy window and drag it onto the newly created prefab. This will now be the model that you will use for this article. In your AI test scene, drag and drop the Skeleton prefab onto the scene. I have placed mine towards the center of the level, near the waypoint in the middle. In the Inspector window, you will be able to see the Animation component full of animations for the model.

Now, we will need to add a few components to our skeleton. Go to the Components menu at the top of the Unity window, select Navigation, and then select NavMesh Agent. Doing this will allow the skeleton to utilize the NavMesh we created earlier. Next, go into the Components menu again and click on Capsule Collider as well as Rigidbody.
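If you ever need to set these components up from a script instead of through the menus (for example, when spawning skeletons at runtime), the same setup looks roughly like the following sketch. The helper name and the agent values are invented, and note that in newer Unity versions NavMeshAgent lives in the UnityEngine.AI namespace:

using UnityEngine;

public static class SkeletonSetup
{
    // Hypothetical helper: gives a spawned skeleton the same components
    // we just added through the menus.
    public static void AddAIComponents(GameObject skeleton)
    {
        NavMeshAgent agent = skeleton.AddComponent<NavMeshAgent>();
        agent.speed = 3.5f;             // illustrative values
        agent.stoppingDistance = 1.0f;

        skeleton.AddComponent<CapsuleCollider>();
        skeleton.AddComponent<Rigidbody>();
    }
}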
Your Inspector window should now look like this after adding the components:

Your model now has all the necessary components to work with our AI script.

Scripting the animations

To script our animations, we will take a simple approach. There won't be a lot of code to deal with, but we will spread it out in the various areas of our script where we need to play the animations.

In the Idle function, add this line of code:

animation.Play("idle");

This simple line of code will play the idle animation. We use animation to access the model's Animation component, and then use the Play function of that component to play the animation. The Play function takes the name of the animation to play; for this one, we call the idle animation.

In the SearchForTarget function, add this line of code to the script:

animation.Play("run");

We access the same function of the Animation component and call the run animation to play here. Add the same line of code to the Patrol function as well, since we will want to use that animation for that function too.

In the RangedAttack and MeleeAttack functions, add this code:

animation.Play("attack");

Here, we call the attack animation. If we had a separate animation for ranged attacks, we would use that instead, but since we don't, we will utilize the same animation for both attack types. With this, we have finished coding the animations into our AI. It will now play those animations when they are called during gameplay.

Putting it all together

To wrap up our AI package, we will now finish up the script and add it to the skeleton.

Final coding touches

At the beginning of our AI script, we created some variables that we have yet to properly assign. We will do that in the Start function. We will also add the Update function to run our AI code. Add these functions to the bottom of the class:

void Start()
{
    navAgent = GetComponent<NavMeshAgent>();

    Stats.Add(new KeyValuePair<string, int>("Health", 100));
    Stats.Add(new KeyValuePair<string, int>("Speed", 10));
    Stats.Add(new KeyValuePair<string, int>("Damage", 25));
    Stats.Add(new KeyValuePair<string, int>("Agility", 25));
    Stats.Add(new KeyValuePair<string, int>("Accuracy", 60));
}

void Update()
{
    RunBehaviors();
}

In the Start function, we first assign the navAgent variable by getting the NavMeshAgent component from the GameObject. Next, we add new KeyValuePair variables to the Stats array. The Stats array is now filled with a few stats that we created. The Update function calls the RunBehaviors function. This is what will keep the AI running; it will run the correct behavior as long as the AI is active.

Filling out the inspector

To complete the AI package, we will need to add the script to the skeleton, so drag the script onto the skeleton in the Hierarchy window. In the Size property of the Waypoints array, type the number 5 and open up the drop-down list. Starting with Element 0, drag each of the waypoints into the empty slots. For the projectile, create a sphere GameObject and make it a prefab. Now, drag it onto the empty slot next to Projectile. Finally, set the AI Behaviors to Guard. This will make it so that when you start the scene, your AI will be patrolling. The Inspector window of the skeleton should look something like this:

Your AI is now ready for gameplay! To make sure everything works, we will need to do some playtesting.

Playtesting

A great way to playtest the AI is to play the scene in every behavior. Start off with Guard, then run it in Idle, Combat, and Flee.
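To make cycling through the behaviors quicker during these tests, you can temporarily map each behavior to a debug key. This is a throwaway sketch (the key choices are arbitrary) that calls the ChangeBehavior function we wrote earlier; call it from the Update function while testing and remove it afterwards:

// Temporary playtesting helper: press a number key to force the AI
// into a specific behavior.
void DebugBehaviorKeys()
{
    if (Input.GetKeyDown(KeyCode.Alpha1)) ChangeBehavior(Behaviors.Guard);
    if (Input.GetKeyDown(KeyCode.Alpha2)) ChangeBehavior(Behaviors.Idle);
    if (Input.GetKeyDown(KeyCode.Alpha3)) ChangeBehavior(Behaviors.Combat);
    if (Input.GetKeyDown(KeyCode.Alpha4)) ChangeBehavior(Behaviors.Flee);
}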
For different outputs, try adjusting some of the variables in the NavMesh Agent component, such as Speed, Angular Speed, and Stopping Distance. Try mixing your waypoints around so that the path is different.

Summary

In this article, you learned how to create an AI package. We explored a couple of techniques to handle AI, such as finite state machines and behavior trees. Then, we dived into AI actions, both internal and external. From there, we figured out how to implement pathfinding with both a waypoint system and Unity's NavMesh system. Finally, we topped the AI package off with animations and put everything together, creating our finalized AI.

Resources for Article:

Further resources on this subject:
Getting Started – An Introduction to GML [article]
Animations in Cocos2d-x [article]
Components in Unity [article]

2D Twin-stick Shooter

Packt
11 Nov 2014
21 min read
This article, written by John P. Doran, the author of Unity Game Development Blueprints, teaches us how to use Unity to prepare a well-formed game. It also gives people experienced in this field a chance to prepare some great stuff. (For more resources related to this topic, see here.)

The shoot 'em up genre is one of the earliest kinds of games. In shoot 'em ups, the player character is a single entity fighting a large number of enemies. They are typically played with a top-down perspective, which is perfect for 2D games. Shoot 'em up games also exist in many categories, based upon their design elements. Elements of a shoot 'em up were first seen in the 1961 Spacewar! game. However, the concept wasn't popularized until 1978 with Space Invaders. The genre was quite popular throughout the 1980s and 1990s and went in many different directions, including bullet hell games, such as the titles of the Touhou Project. The genre has gone through a resurgence in recent years with games such as Bizarre Creations' Geometry Wars: Retro Evolved, which is more famously known as a twin-stick shooter.

Project overview

Over the course of this article, we will be creating a 2D multidirectional shooter game similar to Geometry Wars. In this game, the player controls a ship. This ship can move around the screen using the keyboard and shoot projectiles in the direction that the mouse points at. Enemies and obstacles will spawn towards the player, and the player will avoid/shoot them. This article will also serve as a refresher on a lot of the concepts of working in Unity and give an overview of the recent addition of native 2D tools into Unity.

Your objectives

This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end. Here is the outline of our tasks:

Setting up the project
Creating our scene
Adding in player movement
Adding in shooting functionality
Creating enemies
Adding GameController to spawn enemy waves
Particle systems
Adding in audio
Adding in points, score, and wave numbers
Publishing the game

Prerequisites

Before we start, we will need to get the latest Unity version, which you can always get by going to http://unity3d.com/unity/download/ and downloading it there:

At the time of writing this article, the version is 4.5.3, but this project should work in future versions with minimal changes.

Navigate to the preceding URL, and download the Chapter1.zip package and unzip it. Inside the Chapter1 folder, there are a number of things, including an Assets folder, which will have the art, sound, and font files you'll need for the project, as well as Chapter_1_Completed.unitypackage (this is the complete article package that includes the entire project for you to work with). I've also added in the complete game exported (TwinstickShooter Exported) as well as the entire project zipped up in the TwinstickShooter Project.zip file.

Setting up the project

At this point, I have assumed that you have Unity freshly installed and have started it up. With Unity started, go to File | New Project. Select a Project Location of your choice somewhere on your hard drive, and ensure you have Setup defaults for set to 2D. Once completed, select Create. At this point, we will not need to import any packages, as we'll be making everything from scratch. It should look like the following screenshot:

From there, if you see the Welcome to Unity pop up, feel free to close it out as we won't be using it.
At this point, you will be brought to the general Unity layout, as follows:

Again, I'm assuming you have some familiarity with Unity before reading this article; if you would like more information on the interface, please visit http://docs.unity3d.com/Documentation/Manual/LearningtheInterface.html.

Keeping your Unity project organized is incredibly important. As your project moves from a small prototype to a full game, more and more files will be introduced to your project. If you don't start organizing from the beginning, you'll keep planning to tidy it up later on, but as deadlines keep coming, things may get quite out of hand. This organization becomes even more vital when you're working as part of a team, especially if your team is telecommuting. Differing project structures across different coders/artists/designers is an awful mess to find yourself in. Setting up a project structure at the start and sticking to it will save you countless minutes of time in the long run and only takes a few seconds, which is what we'll be doing now. Perform the following steps:

Click on the Create drop-down menu below the Project tab in the bottom-left side of the screen. From there, click on Folder, and you'll notice that a new folder has been created inside your Assets folder. After the folder is created, you can type in the name for your folder. Once done, press Enter for the folder to be created. We need to create folders for the following directories:

Animations
Prefabs
Scenes
Scripts
Sprites

If you happen to create a folder inside another folder, you can simply drag-and-drop it from the left-hand side toolbar. If you need to rename a folder, simply click on it once and wait, and you'll be able to edit it again. You can also use Ctrl + D to duplicate a folder if it is selected.

Once you're done with the aforementioned steps, your project should look something like this:

Creating our scene

Now that we have our project set up, let's get started with creating our player:

Double-click on the Sprites folder. Once inside, go to your operating system's browser window, open up the Chapter 1/Assets folder that we provided, and drag the playerShip.png file into the folder to move it into our project. Once added, confirm that the image is a Sprite by clicking on it and confirming from the Inspector tab that Texture Type is Sprite. If it isn't, simply change it to that, and then click on the Apply button. Have a look at the following screenshot:

If you do not want to drag-and-drop the files, you can also right-click within the folder in the Project Browser (bottom-left corner) and select Import New Asset to select a file from a folder to bring it in. The art assets used for this tutorial were provided by Kenney. To see more of their work, please check out www.kenney.nl.

Next, drag-and-drop the ship into the scene (the center part that's currently dark gray). Once completed, set the position of the sprite to the center of the screen (0, 0) by right-clicking on the Transform component and then selecting Reset Position. Have a look at the following screenshot:

Now, with the player in the world, let's add in a background. Drag-and-drop the background.png file into your Sprites folder. After that, drag-and-drop a copy into the scene.
If you put the background on top of the ship, you'll notice that currently the background is in front of the player (Unity puts newly added objects on top of previously created ones if their position on the Z axis is the same; this is commonly referred to as the z-order), so let's fix that.

Objects on the same Z axis without a sorting layer are considered to be equal in terms of draw order; so just because a scene looks a certain way this time, when you reload the level it may look different. The way to guarantee that an object is in front of another one in 2D space is by having different Z values or using sorting layers.

Select your background object, and go to the Sprite Renderer component in the Inspector tab. Under Sorting Layer, select Add Sorting Layer. After that, click on the + icon for Sorting Layers, and then give Layer 1 the name Background. Now, create a sorting layer each for Foreground and GUI. Have a look at the following screenshot:

Now, place the player ship on the Foreground layer and the background on the Background layer by selecting each object once again and then setting the Sorting Layer property via the drop-down menu. Now, if you play the game, you'll see that the ship is in front of the background, as follows:

At this point, we could just duplicate our background a number of times in the Hierarchy to create our full background, but that is tedious and time-consuming. Instead, we can create all of the duplicates by either using code or creating a tileable texture. For our purposes, we'll just create a texture.

Delete the background sprite by left-clicking on the background object in the Hierarchy tab on the left-hand side and then pressing the Delete key. Then select the background sprite in the Project tab, change Texture Type in the Inspector tab to Texture, and click on Apply.

Now let's create a 3D cube by selecting Game Object | Create Other | Cube from the top toolbar. Change the object's name from Cube to Background. In the Transform component, change the Position to (0, 0, 1) and the Scale to (100, 100, 1).

If you are using Unity 4.6, you will need to go to Game Object | 3D Object | Cube to create the cube.

Since our camera is at 0, 0, -10 and the player is at 0, 0, 0, putting the object at position 0, 0, 1 will put it behind all of our sprites. By creating a 3D object and scaling it, we are making it really large, much larger than the player's monitor. If we scaled a sprite, it would be one really large image with pixelation, which would look really bad. By using a 3D object, the texture that is applied to the faces of the 3D object is repeated, and since the image is tileable, it looks like one big continuous image.

Remove the Box Collider by right-clicking on it and selecting Remove Component.

Next, we will need to create a material for our background to use. To do so, under the Project tab, select Create | Material, and name the material BackgroundMaterial. Under the Shader property, click on the drop-down menu, and select Unlit | Texture. Click on the Texture box on the right-hand side, and select the background texture. Once completed, set the Tiling property's x and y to 25. Have a look at the following screenshot:

In addition to just selecting from the menu, you can also drag-and-drop the background texture directly onto the Texture box, and it will set the property.

Tiling tells Unity how many times the image should repeat in the x and y directions, respectively.

Finally, go back to the Background object in the Hierarchy.
Under the Mesh Renderer component, open up Materials by left-clicking on the arrow, and change Element 0 to our BackgroundMaterial material. Consider the following screenshot:

Now, when we play the game, you'll see that we have a complete background that tiles properly.

Scripting 101

In Unity, the behavior of game objects is controlled by the different components that are attached to them in a form of association called composition. These components are things that we can add and remove at any time to create much more complex objects. If you want to do anything that isn't already provided by Unity, you'll have to write it on your own through a process we call scripting. Scripting is an essential element in all but the simplest of video games.

Unity allows you to code in C#, Boo, or UnityScript, a language designed specifically for use with Unity and modelled after JavaScript. For this article, we will use C#.

C# is an object-oriented programming language, an industry-standard language similar to Java or C++. The majority of plugins from the Asset Store are written in C#, and code written in C# can port to other platforms, such as mobile, with very minimal code changes. C# is also a strongly typed language, which means that if there is any issue with the code, it will be identified within Unity and will stop you from running the game until it's fixed. This may seem like a hindrance, but when working with code, I very much prefer to write correct code and solve problems before they escalate into something much worse.

Implementing player movement

Now, at this point, we have a great-looking game, but nothing at all happens. Let's change that now using our player. Perform the following steps:

Right-click on the Scripts folder you created earlier, click on Create, and select the C# Script label. Once you click on it, a script will appear in the Scripts folder, and it should already have focus and should be asking you to type a name for the script; call it PlayerBehaviour.

Double-click on the script in Unity, and it will open MonoDevelop, which is an open source integrated development environment (IDE) that is included with your Unity installation.

After MonoDevelop has loaded, you will be presented with the C# stub code that was created automatically for you by Unity when you created the C# script.

Let's break down what's currently there before we replace some of it with new code. At the top, you will see two lines:

using UnityEngine;
using System.Collections;

The engine knows that if we refer to a class that isn't located inside this file, then it has to reference the files within these namespaces for the referenced class before giving an error. We are currently using two namespaces. The UnityEngine namespace contains interfaces and class definitions that let MonoDevelop know about all the addressable objects inside Unity. The System.Collections namespace contains interfaces and classes that define various collections of objects, such as lists, queues, bit arrays, hash tables, and dictionaries. We will be using a list, so we will change the second line to the following:

using System.Collections.Generic;

The next line you'll see is:

public class PlayerBehaviour : MonoBehaviour {

You can think of a class as a kind of blueprint for creating a new component type that can be attached to GameObjects, the objects inside our scenes that start out with just a Transform and then have components added to them.
When Unity created our C# stub code, it took care of that; we can see the result, as our file is called PlayerBehaviour and the class is also called PlayerBehaviour. Make sure that your .cs file and the name of the class match, as they must be the same to enable the script component to be attached to a game object. Next up is the : MonoBehaviour section of the code. The : symbol signifies that we inherit from a particular class; in this case, we'll use MonoBehaviour. All behavior scripts must inherit from MonoBehaviour directly or indirectly by being derived from it.

Inheritance is the idea of having an object be based on another object or class, using the same implementation. With this in mind, all the functions and variables that exist inside the MonoBehaviour class will also exist in the PlayerBehaviour class, because PlayerBehaviour is a MonoBehaviour.

For more information on the MonoBehaviour class and all the functions and properties it has, check out http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

Directly after this line, we will want to add some variables to help us with the project. Variables are pieces of data that we wish to hold on to for one reason or another, typically because they will change over the course of a program, and we will do different things based on their values.

Add the following code under the class definition:

// Movement modifier applied to directional movement.
public float playerSpeed = 2.0f;

// What the current speed of our player is
private float currentSpeed = 0.0f;

/*
 * Allows us to have multiple inputs and supports keyboard,
 * joystick, etc.
 */
public List<KeyCode> upButton;
public List<KeyCode> downButton;
public List<KeyCode> leftButton;
public List<KeyCode> rightButton;

// The last movement that we've made
private Vector3 lastMovement = new Vector3();

Between the variable definitions, you will notice comments that explain what each variable is and how we'll use it. To write a comment, you can simply add // to the beginning of a line, and everything after that is commented out so that the compiler/interpreter won't see it. If you want to write something that is longer than one line, you can use /* to start a comment, and everything inside will be commented until you write */ to close it. It's always a good idea to do this in your own coding endeavors for anything that doesn't make sense at first glance.

For those of you working on your own projects in teams, there is an additional form of commenting that Unity supports, which may make your life much nicer: XML comments. They take up more space than the comments we are using, but also document your code for you. For a nice tutorial about that, check out http://unitypatterns.com/xml-comments/.

In our game, the player may want to move up using either the arrow keys or the W key. You may even want to use something else. Rather than restricting the player to just having one button, we will store all the possible ways to go up, down, left, or right in their own containers. To do this, we are going to use a list, which is a holder for multiple objects that we can add or remove while the game is being played.

For more information on lists, check out http://msdn.microsoft.com/en-us/library/6sh2ey19(v=vs.110).aspx

One of the things you'll notice is the public and private keywords before the variable types. These are access modifiers that dictate who can and cannot use these variables.
The public keyword means that any other class can access that property, while private means that only this class will be able to access this variable. Here, currentSpeed is private because we want our current speed not to be modified or set anywhere else. But you'll notice something interesting with the public variables that we've created.

Go back into the Unity project and drag-and-drop the PlayerBehaviour script onto the playerShip object.

Before going back to the Unity project though, make sure that you save your PlayerBehaviour script. Not saving is a very common mistake made by people working with MonoDevelop.

Have a look at the following screenshot:

You'll notice now that the public variables that we created are located inside the Inspector for the component. This means that we can actually set those variables inside the Inspector without having to modify the code, allowing us to tweak values in our code very easily, which is a godsend for many game designers. You may also notice that the names have changed to be more readable. This is because of the naming convention that we are using, with each word starting with a capital letter. This convention is called CamelCase (more specifically, headlessCamelCase).

Now change the Size of each of the Button variables to 2, and fill in the Element 0 value with the appropriate arrow key and Element 1 with W for up, A for left, S for down, and D for right. When this is done, it should look something like the following screenshot:

Now that we have our variables set, go back to MonoDevelop for us to work on the script some more.

The line after that is a function definition for a method called Start; it isn't a user method but one that belongs to MonoBehaviour. Where variables are data, functions are the things that modify and/or use that data. Functions are self-contained modules of code (enclosed within braces, { and }) that accomplish a certain task. The nice thing about using a function is that once a function is written, it can be used over and over again. Functions can be called from inside other functions:

void Start () {
}

Start is only called once in the lifetime of the behavior, when the game starts, and is typically used to initialize data.

If you're used to other programming languages, you may be surprised that initialization of an object is not done using a constructor function. This is because the construction of objects is handled by the editor and does not take place at the start of gameplay as you might expect. If you attempt to define a constructor for a script component, it will interfere with the normal operation of Unity and can cause major problems with the project.

However, for this behavior, we will not need to use the Start function. Perform the following steps:

Delete the Start function and its contents.

The next function that we see included is the Update function. Also inherited from MonoBehaviour, this function is called for every frame that the component exists in and for each object that it's attached to. We want to update our player ship's rotation and movement every frame.

Inside the Update function (between { and }), put the following lines of code:

// Rotate player to face mouse
Rotation();

// Move the player's body
Movement();

Here, I called two functions, but these functions do not exist, because we haven't created them yet. Let's do that now!
Below the Update function and before the closing } of the class, put the following function:

// Will rotate the ship to face the mouse.
void Rotation()
{
    // We need to tell where the mouse is relative to the player
    Vector3 worldPos = Input.mousePosition;
    worldPos = Camera.main.ScreenToWorldPoint(worldPos);

    /*
     * Get the differences from each axis (stands for
     * deltaX and deltaY)
     */
    float dx = this.transform.position.x - worldPos.x;
    float dy = this.transform.position.y - worldPos.y;

    // Get the angle between the two objects
    float angle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;

    /*
     * The transform's rotation property uses a Quaternion,
     * so we need to convert the angle into a Vector
     * (the Z axis is used for rotation in 2D).
     */
    Quaternion rot = Quaternion.Euler(new Vector3(0, 0, angle + 90));

    // Assign the ship's rotation
    this.transform.rotation = rot;
}

Now if you comment out the Movement line and run the game, you'll notice that the ship will rotate in the direction in which the mouse is. Have a look at the following screenshot:

Below the Rotation function, we now need to add in our Movement function with the following code:

// Will move the player based off of keys pressed
void Movement()
{
    // The movement that needs to occur this frame
    Vector3 movement = new Vector3();

    // Check for input
    movement += MoveIfPressed(upButton, Vector3.up);
    movement += MoveIfPressed(downButton, Vector3.down);
    movement += MoveIfPressed(leftButton, Vector3.left);
    movement += MoveIfPressed(rightButton, Vector3.right);

    /*
     * If we pressed multiple buttons, make sure we're only
     * moving the same length.
     */
    movement.Normalize();

    // Check if we pressed anything
    if (movement.magnitude > 0)
    {
        // If we did, move in that direction
        currentSpeed = playerSpeed;
        this.transform.Translate(movement * Time.deltaTime * playerSpeed, Space.World);
        lastMovement = movement;
    }
    else
    {
        // Otherwise, move in the direction we were going
        this.transform.Translate(lastMovement * Time.deltaTime * currentSpeed, Space.World);
        // Slow down over time
        currentSpeed *= .9f;
    }
}

Now inside this function, I've used another function called MoveIfPressed, so we'll need to add that in as well. Below this function, add in the following function:

/*
 * Will return the movement if any of the keys are pressed,
 * otherwise it will return (0,0,0)
 */
Vector3 MoveIfPressed(List<KeyCode> keyList, Vector3 Movement)
{
    // Check each key in our list
    foreach (KeyCode element in keyList)
    {
        if (Input.GetKey(element))
        {
            /*
             * It was pressed, so we leave the function
             * with the movement applied.
             */
            return Movement;
        }
    }

    // None of the keys were pressed, so we don't need to move
    return Vector3.zero;
}

Now, save your file and move back into Unity. Save your current scene as Chapter_1.unity by going to File | Save Scene. Make sure to save the scene to the Scenes folder we created earlier. Run the game by pressing the play button. Have a look at the following screenshot:

Now you'll see that we can move using the arrow keys or the W A S D keys, and our ship will rotate to face the mouse. Great!

Summary

This article talks about the 2D twin-stick shooter game. It helps you become familiar with the game development features in Unity.

Resources for Article:

Further resources on this subject:
Components in Unity [article]
Customizing skin with GUISkin [article]
What's Your Input? [article]

Scaling friendly font rendering with distance fields

Packt
28 Oct 2014
8 min read
This article by David Saltares Márquez and Alberto Cejas Sánchez, the authors of Libgdx Cross-platform Game Development Cookbook, describes how we can generate a distance field font and render it in Libgdx. As a bitmap font is scaled up, it becomes blurry due to linear interpolation. It is possible to tell the underlying texture to use the nearest filter, but the result will be pixelated. Additionally, until now, if you wanted big and small pieces of text using the same font, you would have had to export it twice at different sizes. The output texture gets bigger rather quickly, and this is a memory problem. (For more resources related to this topic, see here.)

Distance field fonts are a technique that enables us to scale monochromatic textures without losing out on quality, which is pretty amazing. It was first published by Valve (Half Life, Team Fortress…) in 2007. It involves an offline preprocessing step and a very simple fragment shader when rendering, but the results are great and there is very little performance penalty. You also get to use smaller textures!

In this article, we will cover the entire process of how to generate a distance field font and how to render it in Libgdx.

Getting ready

For this, we will load the data/fonts/oswald-distance.fnt and data/fonts/oswald.fnt files. To generate the fonts, Hiero is needed, so download the latest Libgdx package from http://libgdx.badlogicgames.com/releases and unzip it. Make sure the sample projects are in your workspace. Please visit https://github.com/siondream/libgdx-cookbook to download the sample projects that you will need.

How to do it…

First, we need to generate a distance field font with Hiero. Then, a special fragment shader is required to finally render scaling-friendly text in Libgdx.

Generating distance field fonts with Hiero

Open up Hiero from the command line. Linux and Mac users only need to replace semicolons with colons and backslashes with forward slashes:

java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.hiero.Hiero

Select the font using either the System or File options. This time, you don't need a really big size; the point is to generate a small texture and still be able to render text at high resolutions while maintaining quality. We have chosen 32 this time.

Remove the Color effect, and add a white Distance field effect. Set the Spread effect; the thicker the font, the bigger this value should be. For Oswald, 4.0 seems to be a sweet spot. To cater to the spread, you need to set a matching padding. Since this will make the characters render further apart, you need to counterbalance this by setting the X and Y values to twice the negative padding. Finally, set the Scale to be the same as the font size. Hiero will struggle to render the charset, which is why we wait until the end to set this property. Generate the font by going to File | Save BMFont files (text)....

The following is the Hiero UI showing a font texture with a Distance field effect applied to it:

Distance field fonts shader

We cannot use the distance field texture to render text directly, for obvious reasons: it is blurry! A special shader is needed to get the information from the distance field and transform it into the final, smoothed result. The vertex shader, found in data/fonts/font.vert, is simple. The magic takes place in the fragment shader, found in data/fonts/font.frag and explained below.
First, we sample the alpha value from the texture for the current fragment and call it distance. Then, we use the smoothstep() function to obtain the actual fragment alpha. If distance is between 0.5 - smoothing and 0.5 + smoothing, Hermite interpolation will be used. If the distance is greater than 0.5 + smoothing, the function returns 1.0, and if the distance is smaller than 0.5 - smoothing, it will return 0.0. The code is as follows:

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D u_texture;

varying vec4 v_color;
varying vec2 v_texCoord;

const float smoothing = 1.0/128.0;

void main() {
    float distance = texture2D(u_texture, v_texCoord).a;
    float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, distance);
    gl_FragColor = vec4(v_color.rgb, alpha * v_color.a);
}

The smoothing constant determines how hard or soft the edges of the font will be. Feel free to play around with the value and render fonts at different sizes to see the results. You could also make it a uniform and configure it from the code.

Rendering distance field fonts in Libgdx

Let's move on to DistanceFieldFontSample.java, where we have two BitmapFont instances: normalFont (pointing to data/fonts/oswald.fnt) and distanceFont (pointing to data/fonts/oswald-distance.fnt). This will help us illustrate the difference between the two approaches. Additionally, we have a ShaderProgram instance for our previously defined shader.

In the create() method, we instantiate both the fonts and the shader normally:

normalFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald.fnt"));
normalFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
normalFont.setScale(4.5f);

distanceFont = new BitmapFont(Gdx.files.internal("data/fonts/oswald-distance.fnt"));
distanceFont.setColor(0.0f, 0.56f, 1.0f, 1.0f);
distanceFont.setScale(4.5f);

fontShader = new ShaderProgram(Gdx.files.internal("data/fonts/font.vert"), Gdx.files.internal("data/fonts/font.frag"));

if (!fontShader.isCompiled()) {
    Gdx.app.error(DistanceFieldFontSample.class.getSimpleName(), "Shader compilation failed:\n" + fontShader.getLog());
}

We need to make sure that the texture our distanceFont just loaded is using linear filtering:

distanceFont.getRegion().getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);

Remember to free up resources in the dispose() method, and let's get on with render(). First, we render some text with the regular font using the default shader, and right after this, we do the same with the distance field font using our awesome shader:

batch.begin();
batch.setShader(null);
normalFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 50.0f);

batch.setShader(fontShader);
distanceFont.draw(batch, "Distance field fonts!", 20.0f, VIRTUAL_HEIGHT - 250.0f);
batch.end();

The results are pretty obvious; it is a huge win in memory and quality for a very small price in GPU time. Try increasing the font size even more and be amazed at the results! You might have to slightly tweak the smoothing constant in the shader code though:

How it works…

Let's explain the fundamentals behind this technique. However, for a thorough explanation, we recommend that you read the original paper by Chris Green from Valve (http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf).

A distance field is a derived representation of a monochromatic texture. For each pixel in the output, the generator determines whether the corresponding one in the original is colored or not.
Then, it examines its neighborhood to determine the 2D distance in pixels to a pixel with the opposite state. Once the distance is calculated, it is mapped to a [0, 1] range, with 0 being the maximum negative distance and 1 being the maximum positive distance. A value of 0.5 indicates the exact edge of the shape. The following figure illustrates this process:

Within Libgdx, the BitmapFont class uses SpriteBatch to render text normally, only this time, it is using a texture with a Distance field effect applied to it. The fragment shader is responsible for performing a smoothing pass. If the alpha value for this fragment is higher than 0.5, it can be considered as in; it will be out in any other case. This produces a clean result.

There's more…

We have applied distance fields to text, but we have also mentioned that the technique can work with monochromatic images. It is simple; you need to generate a low-resolution distance field transform. Luckily enough, Libgdx comes with a tool that does just this. Open a command-line window, access your Libgdx package folder, and enter the following command:

java -cp gdx.jar;gdx-natives.jar;gdx-backend-lwjgl.jar;gdx-backend-lwjgl-natives.jar;extensions\gdx-tools\gdx-tools.jar com.badlogic.gdx.tools.distancefield.DistanceFieldGenerator

The distance field generator takes the following parameters:

--color: This parameter is in hexadecimal RGB format; the default is ffffff
--downscale: This is the factor by which the original texture will be downscaled
--spread: This is the edge scan distance, expressed in terms of the input

Take a look at this example:

java […] DistanceFieldGenerator --color ff0000 --downscale 32 --spread 128 texture.png texture-distance.png

Alternatively, you can use the gdx-smart-font library to handle scaling. It is a simpler but a bit more limited solution (https://github.com/jrenner/gdx-smart-font).

Summary

In this article, we covered the entire process of how to generate a distance field font and how to render it in Libgdx.

Further resources on this subject:
Cross-platform Development - Build Once, Deploy Anywhere [Article]
Getting into the Store [Article]
Adding Animations [Article]
Animation and Unity3D Physics

Packt
27 Oct 2014
5 min read
In this article, written by K. Aava Rani, author of the book Learning Unity Physics, you will learn to use Physics in animation creation. We will see that there are several animations that can be easily handled by Unity3D's Physics. During development, you will come to know that working with animations and Physics is easy in Unity3D, and you will find the combination of Physics and animation very interesting. We are going to cover the following topics:

Interpolate and Extrapolate
The Cloth component and its uses in animation

(For more resources related to this topic, see here.)

Developing simple and complex animations

As mentioned earlier, you will learn how to handle and create simple and complex animations using Physics, for example, creating a rope animation and a hanging ball. Let's start with the Physics properties of a Rigidbody component, which help in syncing animation.

Interpolate and Extrapolate

Unity3D's Rigidbody component offers a way to help with the syncing of animation. Using the interpolation and extrapolation properties, we sync animations. Interpolation is not only for animation; it also applies to Rigidbody motion in general. Let's see in detail how interpolation and extrapolation work:

Create a new scene and save it. Create a Cube game object and apply a Rigidbody to it. Look at the Inspector panel shown in the following screenshot. On clicking Interpolate, a drop-down list that contains three options will appear: None, Interpolate, and Extrapolate. By selecting any one of them, we can apply the feature.

In interpolation, the position of an object is calculated relative to the current update time, moving it backwards by one Physics update delta time.

Delta time or delta timing is a concept used among programmers in relation to frame rate and time. For more details, check out http://docs.unity3d.com/ScriptReference/Time-deltaTime.html.

If you look closely, you will observe that there are at least two Physics updates, which are as follows:

Ahead of the chosen time
Behind the chosen time

Unity interpolates between these two updates to get a position for the current update. So, we can say that the interpolation is actually lagging behind by one Physics update.

The second option, Extrapolate, is used for extrapolation. In this case, Unity predicts the future position of the object. Although this does not show any lag, an incorrect prediction can sometimes cause a visual jitter.

One more important component that is widely used to animate cloth is the Cloth component. Here, you will learn about its properties and how to use it.

The Cloth component

To make animation easy, Unity provides an interactive component called Cloth. In the GameObject menu, you can directly create a Cloth game object. Have a look at the following screenshot:

Unity also provides Cloth components in its Physics section. To apply this, let's look at an example:

Create a new scene and save it. Create a Plane game object. (We can also create a Cloth game object.) Navigate to Component | Physics and choose InteractiveCloth. As shown in the following screenshot, you will see the following properties in the Inspector panel:

Let's have a look at the properties one by one. Blending Stiffness and Stretching Stiffness define the blending and stretching stiffness of the Cloth, while Damping defines the damped motion of the Cloth. Using the Thickness property, we decide the thickness of the Cloth, which ranges from 0.001 to 10,000. If we enable the Use Gravity property, gravity will affect the Cloth simulation.
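The settings discussed so far can also be driven from code. The sketch below is only illustrative: RigidbodyInterpolation is a standard Unity enum, but the cloth property names follow the Cloth scripting API and may differ slightly across Unity versions (the InteractiveCloth component used in this article is the Unity 4 era interface to it), and the numeric values are invented:

using UnityEngine;

public class PhysicsAnimationSetup : MonoBehaviour
{
    void Start()
    {
        // Pick an interpolation mode for smooth rendered motion.
        Rigidbody body = GetComponent<Rigidbody>();
        if (body != null)
            body.interpolation = RigidbodyInterpolation.Interpolate;

        // Tweak a Cloth component, if one is attached.
        Cloth cloth = GetComponent<Cloth>();
        if (cloth != null)
        {
            cloth.useGravity = true;
            cloth.damping = 0.2f;                                  // illustrative value
            cloth.externalAcceleration = new Vector3(1f, 0f, 0f); // a steady breeze
        }
    }
}

Back to the remaining Inspector properties.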
Enabling the Self Collision property allows the Cloth to collide with itself. For a constant or random acceleration, we apply the External Acceleration and Random Acceleration properties, respectively. World Velocity Scale decides how much the character's movement in the world affects the Cloth vertices; the higher the value, the more the character's movement affects the Cloth. World Acceleration works similarly.

The Interactive Cloth component depends on the Cloth Renderer component. Using lots of Cloth components in a game reduces its performance. To simulate clothing on characters, we use the Skinned Cloth component.

Important points while using the Cloth component

The following are the important points to remember while using the Cloth component:

Cloth simulation will not generate tangents. So, if you are using a tangent-dependent shader, the lighting will look wrong for a Cloth component that has been moved from its initial position.
We cannot directly change the transform of a moving Cloth game object. This is not supported.
Disabling the Cloth before changing the transform is supported.

The SkinnedCloth component works together with SkinnedMeshRenderer to simulate clothing on a character. As shown in the following screenshot, we can apply Skinned Cloth:

As you can see in the following screenshot, there are different properties that we can use to get the desired effect:

We can disable or enable the Skinned Cloth component at any time to turn it on or off.

Summary

In this article, you learned how interpolation and extrapolation work. We also learned about the Cloth component and its uses in animation with an example.

Resources for Article:

Further resources on this subject:
Animations in Cocos2d-x [article]
Unity Networking – The Pong Game [article]
The physics engine [article]

Building a Simple Boat

Packt
25 Aug 2014
15 min read
It's time to get out your hammers, saws, and tape measures, and start building something. In this article, by Gordon Fisher, the author of Blender 3D Basics Beginner's Guide Second Edition, you're going to put your knowledge of building objects, as well as your knowledge of using the 3D View, to practical use by building a boat. It's a simple but good-looking and watertight craft that has three seats, as shown in the next screenshot. You will learn about the following topics:

Using box modeling to convert a cube into a boat
Employing box modeling's power methods: extrusion and subdividing edges
Joining objects together into a single object
Adding materials to an object
Using a texture for greater detail

(For more resources related to this topic, see here.)

Turning a cube into a boat with box modeling

You are going to turn the default Blender cube into an attractive boat, similar to the one shown in the following screenshot. First, you should know a little bit about boats. The front is called the bow, and is pronounced the same as bowing to the Queen. The rear is called the stern or the aft. The main body of the boat is the hull, and the top of the hull is the gunwale, pronounced gunnel.

You will be using a technique called box modeling to make the boat. Box modeling is a very standard method of modeling. As you might expect from the name, you start out with a box and sculpt it like a piece of clay to make whatever you want. There are three methods that you will use in most instances of box modeling: extrusion, subdividing edges, and moving, or translating, vertices, edges, and faces.

Using extrusion, the most powerful tool for box modeling

Extrusion is similar to turning dough into noodles by pushing it through a die. When you extrude an edge, Blender pushes out the edge and connects it to the old edge with a face. When you extrude a face, the face gets pushed out and gets connected to the old edges by new faces.

Time for action – extruding to make the inside of the hull

The first step here is to create an inside for the hull. You will extrude the face without moving it, and shrink it a bit. This will create the basis for the gunwale:

Create a new file and zoom into the default cube. Select Wireframe from the Viewport Shading menu on the header. Press the Tab key to go to Edit Mode. Choose Face Selection mode from the header; it is the orange parallelogram. Select the top face with the RMB.

Press the E key to extrude the face, then immediately press Enter. Move the mouse away from the cube. Press the S key to scale the face with the mouse. While you are scaling it, press Shift + Ctrl, and scale it to 0.9. Watch the scaling readout in the 3D View header.

Press the NumPad 1 key to change to the Front view, and press the 5 key on the NumPad to change to the Ortho view. Move the cursor to a place a little above the top of the cube. Press E, and Blender will create a new face and let you move it up or down. Move it down. When you are close to the bottom, press the Ctrl + Shift buttons, and move it down until the readout on the 3D View header is 1.9. Click the LMB to release the face. It will look like the following screenshot:

What just happened?

You just created a simple hull for your boat. It's going to look better, but at least you got the thickness of the hull established. Pressing the E key extrudes the face, making a new face and sides that connect the new face with the edges used by the old face. You pressed Enter immediately after the E key the first time so that the new face wouldn't get moved.
Then, you scaled it down a little to establish the thickness of the hull. Next, you extruded the face again. As you watched the readout, did you notice that it said D: -1.900 (1.900) normal? When you extrude a face, Blender is automatically set up to move the face along its normal, so that you can move it in or out and keep it parallel with the original location.

For your reference, the 4909_05_making the hull1.blend file, which has been included in the download pack, has the first extrusion. The 4909_05_making the hull2.blend file has the extrusion moved down. The 4909_05_making the hull3.blend file has the bottom and sides evened out.

Using normals in 3D modeling

What is a normal? The normal is an unseen ray that is perpendicular to a face. This is illustrated in the following image by the red line:

Blender has many uses for the normal:

It lets Blender extrude a face and keep the extruded face in the same orientation as the face it was extruded from.
It also keeps the sides straight and tells Blender in which direction a face is pointing.
Blender can also use the normal to calculate how much light a particular face receives from a given lamp, and in which direction lights are pointed.

Modeling tip

If you create a 3D model and it seems perfect except that there is an unexplained hole where a face should have been, you may have a normal that faces backwards. To help you, Blender can display the normals for you.

Time for action – displaying normals

Displaying the normals does not affect the model, but sometimes it can help you in your modeling to see which way your faces are pointing:

Press Ctrl + MMB and use the mouse to zoom out so that you can see the whole cube. In the 3D View, press N to get the Properties Panel. Scroll down in the Properties Panel until you get to the Mesh Display subpanel. Go down to where it says Normals. There are two buttons, like the edge select and face select buttons in the 3D View header. Click on the button with a cube and an orange rhomboid, as outlined in the next screenshot, the Face Select button, to choose selecting the normals of the faces.

Beside the Face Select button, there is a place where you can adjust the displayed size of the normals, as shown in the following screenshot. The displayed normals are the blue lines. Set Normals Size to 0.8. In the following image, I used the cube as it was just before you made the last extrusion so that it displays the normals a little better.

Press the MMB, use the mouse to rotate your view of the cube, and look at the normals. Click on the Face Select button in the Mesh Display subpanel again to turn off the normals display.

What just happened?

To see the normals, you opened up the Properties Panel and instructed Blender to display them. They are displayed as little blue lines, and you can display them at whatever size works best for you. Normals, themselves, have no length, just a direction. So, changing this setting does not affect the model. It's there for your use when you need to analyze problems with the appearance of your model. Once you saw them, you turned them off.

For your reference, the 4909_05_displaying normals.blend file has been included in the download pack. It has the cube with the first extrusion, and the normal display turned on.

Planning what you are going to make

It always helps to have an idea in mind of what you want to build.
You don't have to get out caliper micrometers and measure every last little detail of something you want to model, but you should at least have some pictures as reference, or an idea of the actual dimensions of the object that you are trying to model. There are many ways to get these dimensions, and we are going to use several of these as we build our boats. Choosing which units to model in I went on the Internet and found the dimensions of a small jon boat for fishing. You are not going to copy it exactly, but knowing what size it should be will make the proportions that you choose more convincing. As it happened, it was an American boat, and the size was given in feet and inches. Blender supports three kinds of units for measuring distance: Blender units, Metric units, and Imperial units. Blender units are not tied to any specific measurement in the real world as Metric and Imperial units are. To change the units of measurement, go to the Properties window, to the right of the 3D View window, as shown in the following image, and choose the Scene button. It shows a light, a sphere, and a cylinder. In the following image, it's highlighted in blue. In the second subpanel, the Units subpanel lets you select which units you prefer. However, rather than choosing between Metric or Imperial, I decided to leave the default settings as they were. As the measurements that I found were Imperial measurements, I decided to interpret the Imperial measurements as Blender measurements, equating 1 foot to 1 Blender unit, and each inch as 0.083 Blender units. If I have an Imperial measurement that is expressed in inches, I just divide it by 12 to get the correct number in Blender units. The boat I found on the Internet is 9 feet and 10 inches long, 56 inches wide at the top, 44 inches wide at the bottom, and 18 inches high. I converted them to decimal Blender units or 9.830 long, 4.666 wide at the top, 3.666 wide at the bottom, and 1.500 high. Time for action – making reference objects One of the simplest ways to see what size your boat should be is to have boxes of the proper size to use as guides. So now, you will make some of these boxes: In the 3D View window, press the Tab key to get into Object Mode. Press A to deselect the boat. Press the NumPad 3 key to get the side view. Make sure you are in Ortho view. Press the 5 key on the NumPad if needed. Press Shift + A and choose Mesh and then Cube from the menu. You will use this as a reference block for the size of the boat. In the 3D View window Properties Panel, in the Transform subpanel, at the top, click on the Dimensions button, and change the dimensions for the reference block to 4.666 in the X direction, 9.83 in the Y direction, and 1.5 in the Z direction. You can use the Tab key to go from X to Y to Z, and press Enter when you are done. Move the mouse over the 3D View window, and press Shift + D to duplicate the block. Then press Enter. Press the NumPad 1 key to get the front view. Press G and then Z to move this block down, so its top is in the lower half of the first one. Press S, then X, then the number 0.79, and then Enter. This will scale it to 79 percent along the X axis. Look at the readout. It will show you what is happening. This will represent the width of the boat at the bottom of the hull. Press the MMB and rotate the view to see what it looks like. What just happened? To make accurate models, it helps to have references. 
For this boat that you are building, you don't need to copy another boat exactly, and the basic dimensions are enough. You got out of Edit Mode, and deselected the boat so that you could work on something else, without affecting the boat. Then, you made a cube, and scaled it to the dimensions of the boat, at the top of the hull, to use as a reference block. You then copied the reference block, and scaled the copy down in X for the width of the boat at the bottom of the hull as shown in the following image: Reference objects, like reference blocks and reference spheres, are handy tools. They are easy to make and have a lot of uses. For your reference, the 4909_05_making reference objects.blend file has been included in the download pack. It has the cube and the two reference blocks. Sizing the boat to the reference blocks Now that the reference blocks have been made, you can use them to guide you when making the boat. Time for action – making the boat the proper length Now that you've made the reference blocks the right size, it's time to make the boat the same dimensions as the blocks: Change to the side view by pressing the NumPad 3 key. Press Ctrl + MMB and the mouse to zoom in, until the reference blocks fill almost all of the 3D View. Press Shift + MMB and the mouse to re-center the reference blocks. Select the boat with the RMB. Press the Tab key to go into Edit Mode, and then choose the Vertex Select mode button from the 3D View header. Press A to deselect all vertices. Then, select the boat's vertices on the right-hand side of the 3D View. Press B to use the border select, or press C to use the circle select mode, or press Ctrl + LMB for the lasso select. When the vertices are selected, press G and then Y, and move the vertices to the right with the mouse until they are lined up with the right-hand side of the reference blocks. Press the LMB to drop the vertices in place. Press A to deselect all the vertices, select the boat's vertices on the left-hand side of the 3D View, and move them to the left until they are lined up with the left-hand side of the reference blocks, as shown in the following image: What just happened? You made sure that the screen was properly set up for working by getting into the side view in the Ortho mode. Next, you selected the boat, got into Edit Mode, and got ready to move the vertices. Then, you made the boat the proper length, by moving the vertices so that they lined up with the reference blocks. For your reference, the 4909_05_proper length.blend file has been included in the download pack. It has the bow and stern properly sized. Time for action – making the boat the proper width and height Making the boat the right length was pretty easy. Setting the width and height requires a few more steps, but the method is very similar: Press the NumPad 1 key to change to the front view. Use Ctrl + MMB to zoom into the reference blocks. Use Shift + MMB to re-center the boat so that you can see all of it. Press A to deselect all the vertices, and using any method select all of the vertices on the left of the 3D View. Press G and then X to move the left-side vertices in X, until they line up with the wider reference block, as shown in the following image. Press the LMB to release the vertices. Press A to deselect all the vertices. Select only the right-hand vertices with a method different from the one you used to select the left-hand vertices. Then, press G and then X to move them in X, until they line up with the right side of the wider reference block. 
Press the LMB when they are in place. Deselect all the vertices. Select only the top vertices, and press G and then Z to move them in the Z direction, until they line up with the top of the wider reference block. Deselect all the vertices. Now, select only the bottom vertices, and press G and then Z to move them in the Z direction, until they line up with the bottom of the wider reference block, as shown in the following image: Deselect all the vertices. Next, select only the bottom vertices on the left. Press G and then X to move them in X, until they line up with the narrower reference block. Then, press the LMB. Finally, deselect all the vertices, and select only the bottom vertices on the right. Press G and then X to move them in the X axis, until they line up with the narrower reference block, as shown in the following image. Press the LMB to release them: Press the NumPad 3 key to switch to the Side view again. Use Ctrl + MMB to zoom out if you need to. Press A to deselect all the vertices. Select only the bottom vertices on the right, as in the following illustration. You are going to make this the stern end of the boat. Press G and then Y to move them left in the Y axis just a little bit, so that the stern is not completely straight up and down. Press the LMB to release them. Now, select only the bottom vertices on the left, as highlighted in the following illustration. Make this the bow end of the boat. Move them right in the Y axis just a little bit. Go a bit further than the stern, so that the angle is similar to the right side, as shown here, maybe about 1.3 or 1.4. It's your call. What just happened? You used the reference blocks to guide yourself in moving the vertices into the shape of a boat. You adjusted the width and the height, and angled the hull. Finally, you angled the stern and the bow. It floats, but it's still a bit boxy. For your reference, the 4909_05_proper width and height1.blend file has been included in the download pack. It has both sides aligned with the wider reference block. The 4909_05_proper width and height2.blend file has the bottom vertices aligned to the narrower reference block. The 4909_05_proper width and height3.blend file has the bow and stern finished.
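As a closing note on this recipe, everything you just did by hand, entering Edit Mode, extruding the top face in place, scaling it down to 0.9, and extruding it down by 1.9 units, can also be driven from Blender's Python API. The following is a minimal sketch, not part of the book's download pack; it assumes it is run from Blender's text editor or Python console with the default cube as the active object, and it uses the operator and bmesh calls as they appear in recent Blender releases:

    # Hedged sketch of the hull extrusion via Blender's Python API.
    # Assumes Blender is running with the default cube as the active object.
    import bpy
    import bmesh

    cube = bpy.context.active_object
    bpy.ops.object.mode_set(mode='EDIT')     # enter Edit Mode
    bpy.ops.mesh.select_mode(type='FACE')    # Face Selection mode
    bpy.ops.mesh.select_all(action='DESELECT')

    # In the UI you click the top face; in a script we pick the face
    # whose center has the highest Z coordinate.
    bm = bmesh.from_edit_mesh(cube.data)
    top_face = max(bm.faces, key=lambda f: f.calc_center_median().z)
    top_face.select = True
    bmesh.update_edit_mesh(cube.data)

    # First extrusion: extrude in place (zero translation), then scale
    # to 0.9 to establish the thickness of the gunwale.
    bpy.ops.mesh.extrude_region_move(TRANSFORM_OT_translate={"value": (0, 0, 0)})
    bpy.ops.transform.resize(value=(0.9, 0.9, 0.9))

    # Second extrusion: push the new face down 1.9 units to form the
    # inside of the hull, matching the D: -1.900 readout from the recipe.
    bpy.ops.mesh.extrude_region_move(TRANSFORM_OT_translate={"value": (0, 0, -1.9)})

    bpy.ops.object.mode_set(mode='OBJECT')

Scripting the steps like this can be handy if you want to rebuild the hull repeatedly while experimenting with different dimensions.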

Physics with Bullet

Packt
13 Aug 2014
7 min read
In this article by Rickard Eden author of jMonkeyEngine 3.0 CookBook we will learn how to use physics in games using different physics engine. This article contains the following recipes: Creating a pushable door Building a rocket engine Ballistic projectiles and arrows Handling multiple gravity sources Self-balancing using RotationalLimitMotors (For more resources related to this topic, see here.) Using physics in games has become very common and accessible, thanks to open source physics engines, such as Bullet. jMonkeyEngine supports both the Java-based jBullet and native Bullet in a seamless manner. jBullet is a Java-based library with JNI bindings to the original Bullet based on C++. jMonkeyEngine is supplied with both of these, and they can be used interchangeably by replacing the libraries in the classpath. No coding change is required. Use jme3-libraries-physics for the implementation of jBullet and jme3-libraries-physics-native for Bullet. In general, Bullet is considered to be faster and is full featured. Physics can be used for almost anything in games, from tin cans that can be kicked around to character animation systems. In this article, we'll try to reflect the diversity of these implementations. Creating a pushable door Doors are useful in games. Visually, it is more appealing to not have holes in the walls but doors for the players to pass through. Doors can be used to obscure the view and hide what's behind them for a surprise later. In extension, they can also be used to dynamically hide geometries and increase the performance. There is also a gameplay aspect where doors are used to open new areas to the player and give a sense of progression. In this recipe, we will create a door that can be opened by pushing it, using a HingeJoint class. This door consists of the following three elements: Door object: This is a visible object Attachment: This is the fixed end of the joint around which the hinge swings Hinge: This defines how the door should move Getting ready Simply following the steps in this recipe won't give us anything testable. Since the camera has no physics, the door will just sit there and we will have no way to push it. If you have made any of the recipes that use the BetterCharacterControl class, we will already have a suitable test bed for the door. If not, jMonkeyEngine's TestBetterCharacter example can also be used. How to do it... This recipe consists of two sections. The first will deal with the actual creation of the door and the functionality to open it. This will be made in the following six steps: Create a new RigidBodyControl object called attachment with a small BoxCollisionShape. The CollisionShape should normally be placed inside the wall where the player can't run into it. It should have a mass of 0, to prevent it from being affected by gravity. We move it some distance away and add it to the physicsSpace instance, as shown in the following code snippet: attachment.setPhysicsLocation(new Vector3f(-5f, 1.52f, 0f)); bulletAppState.getPhysicsSpace().add(attachment); Now, create a Geometry class called doorGeometry with a Box shape with dimensions that are suitable for a door, as follows: Geometry doorGeometry = new Geometry("Door", new Box(0.6f, 1.5f, 0.1f)); Similarly, create a RigidBodyControl instance with the same dimensions, that is, 1 in mass; add it as a control to the doorGeometry class first and then add it to physicsSpace of bulletAppState. 
The following code snippet shows you how to do this:

RigidBodyControl doorPhysicsBody = new RigidBodyControl(new BoxCollisionShape(new Vector3f(.6f, 1.5f, .1f)), 1);
bulletAppState.getPhysicsSpace().add(doorPhysicsBody);
doorGeometry.addControl(doorPhysicsBody);

Now, we're going to connect the two with HingeJoint. Create a new HingeJoint instance called joint, as follows:

HingeJoint joint = new HingeJoint(attachment, doorPhysicsBody, new Vector3f(0f, 0f, 0f), new Vector3f(-1f, 0f, 0f), Vector3f.UNIT_Y, Vector3f.UNIT_Y);

Then, we set the limit for the rotation of the door and add it to physicsSpace as follows:

joint.setLimit(-FastMath.HALF_PI - 0.1f, FastMath.HALF_PI + 0.1f);
bulletAppState.getPhysicsSpace().add(joint);

Now, we have a door that can be opened by walking into it. It is primitive but effective. Normally, you want doors in games to close after a while. However, here, once it is opened, it remains opened. In order to implement an automatic closing mechanism, perform the following steps: Create a new class called DoorCloseControl extending AbstractControl. Add a HingeJoint field called joint along with a setter for it and a float variable called timeOpen. In the controlUpdate method, we get hingeAngle from HingeJoint and store it in a float variable called angle, as follows:

float angle = joint.getHingeAngle();

If the angle deviates from zero by more than a small threshold, we should increase timeOpen using tpf. Otherwise, timeOpen should be reset to 0, as shown in the following code snippet:

if(angle > 0.1f || angle < -0.1f) timeOpen += tpf;
else timeOpen = 0f;

If timeOpen is more than 5, we begin by checking whether the door is still open. If it is, we define a speed opposite in sign to the angle and enable the door's motor to make it move in the opposite direction of its angle, as follows:

if(timeOpen > 5) {
  float speed = angle > 0 ? -0.9f : 0.9f;
  joint.enableMotor(true, speed, 0.1f);
  spatial.getControl(RigidBodyControl.class).activate();
}

If timeOpen is less than 5, we should set the speed of the motor to 0:

joint.enableMotor(true, 0, 1);

Now, we can create a new DoorCloseControl instance in the main class, attach it to the doorGeometry class, and give it the same joint we used previously in the recipe, as follows:

DoorCloseControl doorControl = new DoorCloseControl();
doorControl.setHingeJoint(joint);
doorGeometry.addControl(doorControl);

How it works... The attachment RigidBodyControl has no mass and will thus not be affected by external forces such as gravity. This means it will stick to its place in the world. The door, however, has mass and would fall to the ground if the attachment didn't hold it up. The HingeJoint class connects the two and defines how they should move in relation to each other. Using Vector3f.UNIT_Y means the rotation will be around the y axis. We set the limit of the joint to be a little more than half PI in each direction. This means it will open almost 100 degrees to either side, allowing the player to step through. When we try this out, there may be some flickering as the camera passes through the door. To get around this, there are some tweaks that can be applied. We can change the collision shape of the player. Making the collision shape bigger will result in the player hitting the wall before the camera gets close enough to clip through. This has to be done considering other constraints in the physics world. You can consider changing the near clip distance of the camera. Decreasing it will allow things to get closer to the camera before they are clipped through.
This might have implications on the camera's projection. One thing that will not work is making the door thicker, since the triangles on the side closest to the player are the ones that are clipped through. Making the door thicker will move them even closer to the player. In DoorCloseControl, we consider the door to be open if hingeAngle deviates from 0 by more than a small threshold. We don't use exactly 0 because we can't control the exact rotation of the joint. Instead, we use a rotational force to move it. This is what we do with joint.enableMotor. Once the door is open for more than five seconds, we tell it to move in the opposite direction. When it's close to 0, we set the desired movement speed to 0. Simply turning off the motor, in this case, will cause the door to keep moving until it is stopped by an external force. Once we enable the motor, we also need to call activate() on RigidBodyControl or it will not move.

Customizing skin with GUISkin

Packt
22 Jul 2014
8 min read
(For more resources related to this topic, see here.) Prepare for lift off We will begin by creating a new project in Unity. Let's start our project by performing the following steps: First, create a new project and name it MenuInRPG. Click on the Create new Project button, as shown in the following screenshot: Next, import the assets package by going to Assets | Import Package | Custom Package...; choose Chapter2Package.unityPackage, which we just downloaded, and then click on the Import button in the pop-up window, as shown in the following screenshot: Wait until it's done, and you will see the MenuInRPGGame and SimplePlatform folders in the Window view. Next, click on the arrow in front of the SimplePlatform folder to bring up the drop-down options and you will see the Scenes folder and the SimplePlatform_C# and SimplePlatform_JS scenes, as shown in the following screenshot: Next, double-click on the SimplePlatform_C# (for a C# user) or SimplePlatform_JS (for a Unity JavaScript user) scene, as shown in the preceding screenshot, to open the scene that we will work on in this project. When you double-click on either of the SimplePlatform scenes, Unity will display a pop-up window asking whether you want to save the current scene or not. As we want to use the SimplePlatform scene, just click on the Don't Save button to open up the SimplePlatform scene, as shown in the following screenshot: Then, go to the MenuInRPGGame/Resources/UI folder and click on the first file to make sure that the Texture Type and Format fields are selected correctly, as shown in the following screenshot: Why do we set it up in this way? This is because we want the UI graphics to look as close to the source image as possible. However, we set the Format field to Truecolor, which will make the size of the image larger than Compress, but will show the right color of the UI graphics. Next, we need to set up the Layers and Tags configurations; for this, go to Edit | Project Settings | Tags and set them as follows:
Tags:
Element 0: UI
Element 1: Key
Element 2: RestartButton
Element 3: Floor
Element 4: Wall
Element 5: Background
Element 6: Door
Layers:
User Layer: Background
User Layer: Level
User Layer: UI
At last, we will save this scene in the MenuInRPGGame/Scenes folder, and name it MenuInRPG by going to File | Save Scene as... and then save it. Engage thrusters Now we are ready to create a GUI skin; for this, perform the following steps: Let's create a new GUISkin object by going to Assets | Create | GUISkin, and we will see New GUISkin in our Project window. Name the GUISkin object MenuSkin. Then, click on MenuSkin and go to its Inspector window. We will see something similar to the following screenshot: You will see many properties here, but don't be afraid, because this is the main key to creating custom graphics for our UI. Font is the base font for the GUI skin. From Box to Scroll View, each property is a GUIStyle, which is used for creating our custom UI. The Custom Styles property is the array of GUIStyles that we can set up to apply extra styles. Settings are the setups for the entire GUI. Next, we will set up the new font style for our menu UI; go to the Font line in the Inspector view, click the circle icon, and select the Federation Kalin font. Now, you have set up the base font for GUISkin. Next, click on the arrow in front of the Box line to bring up a drop-down list.
We will see all the properties, as shown in the following screenshot: For more information about these properties, visit http://unity3d.com/support/documentation/Components/class-GUISkin.html. Name is basically the name of this style, which by default is box (the default style of GUI.Box). Next, we will be setting our custom UI in this GUISkin; click on the arrow in front of Normal to bring up the drop-down list, and you will see two parameters—Background and Text Color. Click on the circle icon on the right-hand side of the Background line to bring up the Select Texture2D window and choose the boxNormal texture, or you can drag the boxNormal texture from the MenuInRPG/Resources/UI folder and drop it onto the Background field. We can also use the search bar in the Select Texture2D window or the Project view to find our texture by typing boxNormal in the search bar, as shown in the following screenshot: Then, under the Text Color line, we leave the color as the default color—because we don't need any text to be shown in this style—and repeat the previous step with On Normal by using the boxNormal texture. Next, click on the arrow in front of Active under Background. Choose the boxActive texture, and repeat this step for On Active. Then, go to each property in the Box style and set the following parameters:
Border: Left: 14, Right: 14, Top: 14, Bottom: 14
Padding: Left: 6, Right: 6, Top: 6, Bottom: 6
For other properties of this style, we will leave them as default. Next, we go to the following properties in the MenuSkin inspector and set them as follows:
Label:
Normal | Text Color: R: 27, G: 95, B: 104, A: 255
Window:
Normal | Background: myWindow
On Normal | Background: myWindow
Border: Left: 27, Right: 27, Top: 55, Bottom: 96
Padding: Left: 30, Right: 30, Top: 60, Bottom: 30
Horizontal Scrollbar:
Normal | Background: horScrollBar
Border: Left: 4, Right: 4, Top: 4, Bottom: 4
Horizontal Scrollbar Thumb:
Normal | Background: horScrollBarThumbNormal
Hover | Background: horScrollBarThumbHover
Border: Left: 4, Right: 4, Top: 4, Bottom: 4
Horizontal Scrollbar Left Button:
Normal | Background: arrowLNormal
Hover | Background: arrowLHover
Fixed Width: 14
Fixed Height: 15
Horizontal Scrollbar Right Button:
Normal | Background: arrowRNormal
Hover | Background: arrowRHover
Fixed Width: 14
Fixed Height: 15
Vertical Scrollbar:
Normal | Background: verScrollBar
Border: Left: 4, Right: 4, Top: 4, Bottom: 4
Padding: Left: 0, Right: 0, Top: 0, Bottom: 0
Vertical Scrollbar Thumb:
Normal | Background: verScrollBarThumbNormal
Hover | Background: verScrollBarThumbHover
Border: Left: 4, Right: 4, Top: 4, Bottom: 4
Vertical Scrollbar Up Button:
Normal | Background: arrowUNormal
Hover | Background: arrowUHover
Fixed Width: 16
Fixed Height: 14
Vertical Scrollbar Down Button:
Normal | Background: arrowDNormal
Hover | Background: arrowDHover
Fixed Width: 16
Fixed Height: 14
We have finished setting up the default styles. Now we will go to the Custom Styles property and create our custom GUIStyle to use for this menu; go to Custom Styles and under Size, change the value to 6. Then, we will see Element 0 to Element 5. Next, we go to the first element or Element 0; under Name, type Tab Button, and we will see Element 0 change to Tab Button.
Set it as follows: Tab Button (or Element 0) Name Tab Button Normal Background tabButtonNormal Text Color R: 27, G: 62, B: 67, A: 255 Hover Background tabButtonHover Text Color R: 211, G: 166, B: 9, A: 255 Active Background tabButtonActive Text Color R: 27, G: 62, B: 67, A: 255 On Normal Background tabButtonActive Text Color R: 27, G: 62, B: 67, A: 255 Border Left: 12, Right: 12, Top: 12, Bottom: 4 Padding Left: 6, Right: 6, Top: 6, Bottom: 4 Font Size 14 Alignment Middle Center Fixed Height 31 The settings are shown in the following screenshot: For the Text Color value, we can also use the eyedropper tool next to the color box to copy the same color, as we can see in the following screenshot: We have finished our first style, but we still have five styles left, so let's carry on with Element 1 with the following settings: Exit Button (or Element 1) Name Exit Button Normal | Background buttonCloseNormal Hover | Background buttonCloseHover Fixed Width 26 Fixed Height 22 The settings for Exit Button are showed in the following screenshot: The following styles will create a style for Element 2: Text Item (or Element 2) Name Text Item Normal | Text Color R: 27, G: 95, B: 104, A: 255 Alignment Middle Left Word Wrap Check The settings for Text Item are shown in the following screenshot: To set up the style for Element 3, the following settings should be done: Text Amount (or Element 3) Name Text Amount Normal | Text Color R: 27, G: 95, B: 104, A: 255 Alignment Middle Right Word Wrap Check The settings for Text Amount are shown in the following screenshot: The following settings should be done to create Selected Item: Selected Item (or Element 4) Name Selected Item Normal | Text Color R: 27, G: 95, B: 104, A: 255 Hover Background itemSelectHover Text Color R: 27, G: 95, B: 104, A: 255 Active Background itemSelectHover Text Color R: 27, G: 95, B: 104, A: 255 On Normal Background itemSelectActive Text Color R: 27, G: 95, B: 104, A: 255 Border Left: 6, Right: 6, Top: 6, Bottom: 6 Margin Left: 2, Right: 2, Top: 2, Bottom: 2 Padding Left: 4, Right: 4, Top: 4, Bottom: 4 Alignment Middle Center Word Wrap Check The settings are shown in the following screenshot: To create the Disabled Click style, the following settings should be done: Disabled Click (or Element 5) Name Disabled Click Normal Background itemSelectNormal Text Color R: 27, G: 95, B: 104, A: 255 Border Left: 6, Right: 6, Top: 6, Bottom: 6 Margin Left: 2, Right: 2, Top: 2, Bottom: 2 Padding Left: 4, Right: 4, Top: 4, Bottom: 4 Alignment Middle Center Word Wrap Check The settings for Disabled Click are shown in the following screenshot: And now, we have finished this step.

Skinning a character

Packt
21 Apr 2014
6 min read
(For more resources related to this topic, see here.) Our world in 5000 AD is incomplete without our mutated human being Mr. Green. Our Mr. Green is a rigged model, exported from Blender. All famous 3D games from Counter-Strike to World of Warcraft use skinned models to give the most impressive real-world model animations and kinematics. Hence, our learning has to now evolve to load Mr. Green and add the same quality of animation to our game. We will start our study of character animation by discussing the skeleton, which is the base of the character animation, upon which a body and its motion are built. Then, we will learn about skinning, how the bones of the skeleton are attached to the vertices, and then understand its animations. In this article, we will cover the basics of a character's skeleton, the basics of skinning, and some aspects of loading a rigged JSON model.
Understanding the basics of a character's skeleton
A character's skeleton is a posable framework of bones. These bones are connected by articulated joints, arranged in a hierarchical data structure. The skeleton is generally not rendered and is used as an invisible armature to position and orient a character's skin. The joints are used for relative movement within the skeleton. Each joint is represented by a 4 x 4 linear transformation matrix (a combination of rotation, translation, and scale). The character skeleton is set up using only simple rotational joints as they are sufficient to model the joints of real animals. Every joint has limited degrees of freedom (DOFs). DOFs are the possible ranges of motion of an object. For instance, an elbow joint has one rotational DOF and a shoulder joint has three DOFs, as the shoulder can rotate along three perpendicular axes. Individual joints usually have one to six DOFs. Refer to the link http://en.wikipedia.org/wiki/Six_degrees_of_freedom to understand different degrees of freedom. A joint local matrix is constructed for each joint. This matrix defines the position and orientation of each joint and is relative to the joint above it in the hierarchy. The local matrices are used to compute the world space matrices of the joints, using the process of forward kinematics. The world space matrix is used to render the attached geometry and is also used for collision detection. The digital character skeleton is analogous to the real-world skeleton of vertebrates. However, the bones of our digital human character do not have to correspond to the actual bones. It will depend on the level of detail of the character you require. For example, you may or may not require cheek bones to animate facial expressions. Skeletons are not just used to animate vertebrates but also mechanical parts such as doors or wheels.
Comprehending the joint hierarchy
The topology of a skeleton is a tree or an open directed graph. The joints are connected up in a hierarchical fashion to the selected root joint. The root joint has no parent and is represented in the model JSON file with a parent value of -1. All skeletons are kept as open trees without any closed loops. This restriction though does not prevent kinematic loops. Each node of the tree represents a joint, also called a bone. We use both terms interchangeably. For example, the shoulder is a joint, and the upper arm is a bone, but the transformation matrix of both objects is the same. So mathematically, we would represent it as a single component with three DOFs. The amount of rotation of the shoulder joint will be reflected by the upper arm's bone.
The following figure shows a simple robotic bone hierarchy:
Understanding forward kinematics
Kinematics is a mathematical description of a motion without the underlying physical forces. Kinematics describes the position, velocity, and acceleration of an object. We use kinematics to calculate the position of an individual bone of the skeleton structure (the skeleton pose). Hence, we will limit our study to position and orientation. The skeleton is purely a kinematic structure. Forward kinematics is used to compute the world space matrix of each bone from its DOF values. Inverse kinematics is used to calculate the DOF values from the position of the bone in the world. Let's dive a little deeper into forward kinematics and study a simple case of bone hierarchy that starts from the shoulder, moves to the elbow, and finally to the wrist. Each bone/joint has a local transformation matrix, this.modelMatrix. This local matrix is calculated from the bone's position and rotation. Let's say the model matrices of the wrist, elbow, and shoulder are this.modelMatrix_wrist, this.modelMatrix_elbow, and this.modelMatrix_shoulder respectively. The world matrix is the transformation matrix that will be used by shaders as the model matrix, as it denotes the position and rotation in world space. The world matrix for the wrist will be:

this.worldMatrix_wrist = this.worldMatrix_elbow * this.modelMatrix_wrist

The world matrix for the elbow will be:

this.worldMatrix_elbow = this.worldMatrix_shoulder * this.modelMatrix_elbow

If you look at the preceding equations, you will realize that to calculate the exact location of the wrist in world space, we need to calculate the position of the elbow in world space first. To calculate the position of the elbow, we first need to calculate the position of the shoulder. We need to calculate the world space coordinate of the parent first in order to calculate that of its children. Hence, we use a depth-first tree traversal to traverse the complete skeleton tree starting from its root node. A depth-first traversal begins by calculating the modelMatrix of the root node and traverses down through each of its children. A child node is visited and subsequently all of its children are traversed. After all the children are visited, control is transferred back to the parent node. We calculate the world matrix by concatenating the joint parent's world matrix and its local matrix. The computation of calculating a local matrix from the DOFs and then its world matrix from the parent's world matrix is defined as forward kinematics. Let's now define some important terms that we will often use: Joint DOFs: A movable joint's motion can generally be described by six DOFs (three each for position and rotation). DOF is a general term:

this.position = vec3.fromValues(x, y, z);
this.quaternion = quat.fromValues(x, y, z, w);
this.scale = vec3.fromValues(1, 1, 1);

We use quaternion rotations to store rotational transformations to avoid issues such as gimbal lock. The quaternion holds the DOF values for rotation around the x, y, and z axes. Joint offset: Joints have a fixed offset position in the parent node's space. When we skin a joint, we change the position of each joint to match the mesh. This new fixed position acts as a pivot point for the joint movement. The pivot point of an elbow is at a fixed location relative to the shoulder joint. This position is denoted by a vector position in the joint local matrix and is stored in the m31, m32, and m33 indices of the matrix.
The offset matrix also holds initial rotational values.
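To make the traversal concrete, here is a small, self-contained forward kinematics sketch. It is written in plain Python purely for illustration (the book's own implementation is JavaScript, and the joint offsets below are made up); it demonstrates the rule from the preceding equations, that each joint's world matrix is its parent's world matrix multiplied by its own local matrix, computed depth-first from the root:

    # Illustrative forward kinematics: world = parentWorld * local, depth-first.
    def mat_mul(a, b):
        # Multiply two 4 x 4 matrices given as nested lists (row-major).
        return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    def translation(x, y, z):
        # A local matrix holding only a joint offset (no rotation, no scale).
        return [[1, 0, 0, x],
                [0, 1, 0, y],
                [0, 0, 1, z],
                [0, 0, 0, 1]]

    class Joint:
        def __init__(self, name, local_matrix):
            self.name = name
            self.local_matrix = local_matrix
            self.children = []

        def update_world(self, parent_world=None):
            # The root joint's world matrix is simply its local matrix.
            if parent_world is None:
                self.world_matrix = self.local_matrix
            else:
                self.world_matrix = mat_mul(parent_world, self.local_matrix)
            # Depth-first: a parent's world matrix must exist before its
            # children's world matrices can be computed.
            for child in self.children:
                child.update_world(self.world_matrix)

    # A shoulder -> elbow -> wrist chain with invented offsets.
    shoulder = Joint("shoulder", translation(0.0, 1.5, 0.0))
    elbow = Joint("elbow", translation(0.5, 0.0, 0.0))
    wrist = Joint("wrist", translation(0.4, 0.0, 0.0))
    shoulder.children.append(elbow)
    elbow.children.append(wrist)

    shoulder.update_world()
    print(wrist.world_matrix)  # translation column reads x = 0.9, y = 1.5

Replacing the translation-only local matrices with full rotation and translation matrices built from each joint's DOFs gives exactly the skeleton pose computation described above.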

Making an entity multiplayer-ready

Packt
07 Apr 2014
8 min read
(For more resources related to this topic, see here.)
Understanding the dataflow of Lua entities in a multiplayer environment
When using your own Lua entities in a multiplayer environment, you need to make sure everything your entity does on one of the clients is also triggered on all other clients. Let's take a light switch as an example. If one of the players turns on the light switch, the switch should also be flipped on all other clients. Each client connected to the game has an instance of that light switch in their level. The CryENGINE network implementation already handles all the work involved in linking these individual instances together using network entity IDs. Each light switch can contact its own instances on all connected clients and call its functions over the network. All you need to do is use the functionality that is already there. One way of implementing the light switch functionality is to turn on the switch in the entity as soon as the OnUsed() event is triggered and then send a message to all other clients in the network to also turn on their lights. This might work for something as simple as a switch, but can soon get messy when the entity becomes more complex. Ping times and message orders can lead to inconsistencies if two players try to flip the light switch at the same time. The representation of the process would look like the following diagram: Not so good – the light switch entity could trigger its own switch on all network instances of itself. Doing it this way, with the clients notifying each other, can cause many problems. In a more stable solution, these kinds of events are usually run through the server. The server entity—let's call it the master entity—determines the state of the entities across the network at all times and distributes it throughout the network. This could be visualized as shown in the following diagram: Better – the light switch entity calls the server that will distribute the event to all clients. In the light switch scenario mentioned earlier, the light switch entity would send an event to the server light switch entity first. Then, the server entity would call each light switch entity, including the original sender, to turn on their lights. It is important to understand that the entity that received the event originally does nothing else but inform the server about the event. The actual light is not turned on until the server calls back to all entities with the request to do so. The aforementioned dataflow works in single player as well, as CryENGINE will just pretend that the local machine is both the client and the server. This way, you will not have to make adjustments or add extra code to your entity to check whether it is single player or multiplayer. In a multiplayer environment with a server and multiple clients, it is important to set the script up so that it acts properly and the correct functions are called on either the client or the server. The first step to achieve this is to add a Client and a Server table to the entity script using the following code:

Client = {},
Server = {},

With this addition, our script table looks like the following code snippet:

Testy = {
  Properties = {
    fileModel = "",
    Physics = {
      bRigidBody = 1,
      bRigidBodyActive = 1,
      Density = -1,
      Mass = -1,
    },
  },
  Client = {},
  Server = {},
  Editor = {
    Icon = "User.bmp",
  },
}

Now, we can go ahead and modify the functions so that they work properly in multiplayer. We do this by adding the Client and Server subtables to our script.
This way, the network system will be able to identify the Client/Server functions on the entity.
The Client/Server functions
The Client/Server functions are defined within your entity script by using the respective subtables that we previously defined in the entity table. Let's update our script and add a simple function that outputs a debug text into the console on each client. In order for everything to work properly, we first need to update our OnInit() function and make sure it gets called on the server properly. Simply add a Server subtable to the function so that it looks like the following code snippet:

function Testy.Server:OnInit()
  self:OnReset();
end;

This way, our OnReset() function will still be called properly. Now, we can add a new function that outputs a debug text for us. Let's keep it simple and just make it output a console log using the CryENGINE Log function, as shown in the following code snippet:

function Testy.Client:PrintLogOutput(text)
  Log(text);
end;

This function will simply print some text into the CryENGINE console. Of course, you can add more sophisticated code at this point to be executed on the client. Please also note the Client subtable in the function definition that tells the engine that this is a client function. In the next step, we have to add a way to trigger this function so that we can test the behavior properly. There are many ways of doing this, but to keep things simple, we will simply use the OnHit() callback function that will be automatically triggered when the entity is hit by something; for example, a bullet. This way, we can test our script easily by just shooting at our entity. The OnHit() callback function is quite simple. All it needs to do in our case is call our PrintLogOutput function, or rather request the server to call it. For this purpose, we add another function to be called on the server that calls our PrintLogOutput() function. Again, please note that we are using the Client subtable of the entity to catch the hit that happens on the client. Our two new functions should look as shown in the following code snippet:

function Testy.Client:OnHit(user)
  self.server:SvRequestLogOutput("My Text!");
end

function Testy.Server:SvRequestLogOutput(text)
  self.allClients:PrintLogOutput(text);
end

We now have two new functions: one is a client function calling a server function and the other one is a server function calling the actual function on all the clients.
The Remote Method Invocation definitions
As a last step, before we are finished, we need to expose our entity and its functions to the network. We can do this by adding a table within the root of our entity script that defines the necessary Remote Method Invocations (RMIs). The Net.Expose table will expose our entity and its functions to the network so that they can be called remotely, as shown in the following code snippet:

Net.Expose {
  Class = Testy,
  ClientMethods = {
    PrintLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
  },
  ServerMethods = {
    SvRequestLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
  },
  ServerProperties = {
  },
};

Each RMI is defined by providing a function name, a set of RMI flags, and additional parameters. The first RMI flag is an order flag and defines the order of the network packets. You can choose between the following options: UNRELIABLE_ORDERED, RELIABLE_ORDERED, and RELIABLE_UNORDERED. These flags tell the engine whether the order of the packets is important or not.
The attachment flag will define at what time the RMI is attached during the serialization process of the network. This parameter can be either of the following flags:
PREATTACH: This flag attaches the RMI before game data serialization.
POSTATTACH: This flag attaches the RMI after game data serialization.
NOATTACH: This flag is used when it is not important if the RMI is attached before or after the game data serialization.
FAST: This flag performs an immediate transfer of the RMI without waiting for a frame update. This flag is very CPU intensive and should be avoided if possible.
The Net.Expose table we just added defines which functions will be exposed on the client and the server and will give us access to the following three subtables: allClients, otherClients, and server. With these functions, we can now call functions either on the server or the clients. You can use the allClients subtable to call a function on all clients or the otherClients subtable to call it on all clients except your own client. At this point, the entity table of our script should look as follows:

Testy = {
  Properties = {
    fileModel = "",
    Physics = {
      bRigidBody = 1,
      bRigidBodyActive = 1,
      Density = -1,
      Mass = -1,
    },
  },
  Client = {},
  Server = {},
  Editor = {
    Icon = "User.bmp",
    ShowBounds = 1,
  },
}

Net.Expose {
  Class = Testy,
  ClientMethods = {
    PrintLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
  },
  ServerMethods = {
    SvRequestLogOutput = { RELIABLE_UNORDERED, POST_ATTACH, STRING },
  },
  ServerProperties = {
  },
};

This defines our entity and its network exposure. With our latest updates, the rest of our script with all its functions should look as follows:

function Testy.Server:OnInit()
  self:OnReset();
end;

function Testy:OnReset()
  local props = self.Properties;
  if (not EmptyString(props.fileModel)) then
    self:LoadObject(0, props.fileModel);
  end;
  EntityCommon.PhysicalizeRigid(self, 0, props.Physics, 0);
  self:DrawSlot(0, 1);
end;

function Testy:OnPropertyChange()
  self:OnReset();
end;

function Testy.Client:PrintLogOutput(text)
  Log(text);
end;

function Testy.Client:OnHit(user)
  self.server:SvRequestLogOutput("My Text!");
end

function Testy.Server:SvRequestLogOutput(text)
  self.allClients:PrintLogOutput(text);
end

With these functions added to our entity, everything should be ready to go and you can test the behavior in game mode. When the entity is being shot at, the OnHit() function will request the log output to be printed from the server. The server calls the actual function on all clients.
Summary
In this article we learned about making our entity ready for a multiplayer environment by understanding the dataflow of Lua entities, understanding the Client/Server functions, and by exposing our entities to the network using the Remote Method Invocation definitions. Resources for Article: Further resources on this subject: CryENGINE 3: Terrain Sculpting [Article] CryENGINE 3: Breaking Ground with Sandbox [Article] CryENGINE 3: Fun Physics [Article]
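As a final aside to this recipe, the client-to-server-to-all-clients relay is a general networking pattern that is independent of CryENGINE. The following conceptual sketch is written in plain Python with invented class and method names (it uses none of CryENGINE's actual API) and simply traces the same round trip that Testy performs:

    # Conceptual sketch of the master entity dataflow; all names invented.
    class ServerEntity:
        # The master entity: the single authority on the entity's state.
        def __init__(self):
            self.clients = []

        def sv_request_log_output(self, text):
            # The server re-broadcasts the event to every client,
            # including the original sender.
            for client in self.clients:
                client.print_log_output(text)

    class ClientEntity:
        def __init__(self, name, server):
            self.name = name
            self.server = server
            server.clients.append(self)

        def on_hit(self):
            # The hit client changes nothing locally; it only informs
            # the server, exactly as Testy.Client:OnHit does.
            self.server.sv_request_log_output("My Text!")

        def print_log_output(self, text):
            print("[" + self.name + "] " + text)

    server = ServerEntity()
    client_a = ClientEntity("client A", server)
    client_b = ClientEntity("client B", server)
    client_a.on_hit()  # both clients print, driven entirely by the server

The client that was hit never prints on its own; the server drives every instance, which is what keeps the state consistent across the network.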

Grasping Hammer

Packt
17 Feb 2014
14 min read
(For more resources related to this topic, see here.) Terminology In this article, I will be guiding you through many examples using Hammer. There are a handful of terms that will recur many times that you will need to know. It might be a good idea to bookmark this page, so you can flip back and refresh your memory. Brush A brush is a piece of world geometry created with block tool. Brushes make up the basic building blocks of a map and must be convex. A convex object's faces cannot see each other, while a concave object's faces can. Imagine if you're lying on the pitched roof of a generic house. You wouldn't be able to see the other side of the roof because the profile of the house is a convex pentagon. If you moved the point of the roof down inside the house, you would be able to see the other side of the roof because the profile would then be concave. This can be seen in the following screenshot: Got it? Good. Don't think you're limited by this; you can always create convex shapes out of more than one brush. Since we're talking about houses, brushes are used to create the walls, floor, ceiling, and roof. Brushes are usually geometrically very simple; upon creation, common brushes have six faces and eight vertices like a cube. The brushes are outlined in the following screenshot: Not all brushes have six sides; however, the more advanced brushwork techniques can create brushes that have (almost) as many sides as you want. You need to be careful while making complex brushes with the more advanced tools though. This is because it can be quite easy to make an invalid solid if you're not careful. An invalid solid will cause errors during compilation and will make your map unplayable. A concave brush is an example of an invalid solid. World brushes are completely static or unmoving. If you want your brushes to have a special function, they need to be turned into entities. Entity An entity is anything in the game that has a special function. Entities come in two flavors: brush-based and point-based. A brush-based entity, as you can probably guess, is created from a brush. A sliding door or a platform lift are examples of brush-based entities. Point-based entities, on the other hand, are created in a point in space with the entity tool. Lights, models, sounds, and script events are all point-based entities. In the following figure, the models or props are highlighted: World The world is everything inside of a map that you create. Brushes, lights, triggers, sounds, models, and so on are all part of the world. They must all be contained within a sealed map made out of world brushes. Void The void is nothing or everything that isn't the world. The world must be sealed off from the void in order to function correctly when compiled and played in game. Imagine the void as outer space or a vacuum. World brushes seal off the world from the void. If there are any gaps in world brushes (or if there are any entities floating in the void), this will create a leak, and the engine will not be able to discern what the world is and what the void is. If a leak exists in your map, the engine will not know what is supposed to be seen during the compile! The map will compile, but performance-reducing side effects such as bland lighting and excess rendered polygons will plague your map. Settings If at any point in your mapping experience, Hammer doesn't seem to be operating the way you want it to be, go to Tools | Options, and see if there's any preferences you would like to change. 
You can customize general settings or options related to the 2D and 3D views. If you're coming from another editor, perhaps there's a setting that will make your Hammer experience similar to what you're used to. Loading Hammer for the first time You'll be opening Steam\steamapps\common\Half-Life 2\bin often, so you may want to create a desktop shortcut for easier access. Run Hammer.bat from the bin folder to launch Valve Hammer Editor, Valve's map (level) creator, so you can start creating a map. Hammer will prompt you to choose a game configuration, so choose which game you want to map for. I will be using Half-Life 2: Episode Two in the following examples: When you first open up Hammer, you will see a blank gray screen surrounded by a handful of menus, as shown in the following screenshot: Like every other Microsoft Windows application, there is a main menu in the top-left corner of the screen that lets you open, save, or create a document. As far as Hammer is concerned, our documents are maps, and they are saved with the .vmf file extension. So let's open the File menu and load an example map so we can poke around a bit. Load the map titled Chapter2_Example.vmf. The Hammer overview In this section, we will be learning to recognize the different areas of Hammer and what they do. Being familiar with your environment is important! Viewports There are four main windows or viewports in Hammer, as shown in the following screenshot: By default, the top-left window is the 3D view or camera view. The top-right window is the top (x/y) view, the bottom-right window is the side (x/z) view, and the bottom-left window is the front (y/z) view. If you would like to change the layout of the windows, simply click on the top-left corner of any window to change what is displayed. In this article, I will be keeping the default layout but that does not mean you have to! Set up Hammer any way you'd like. For instance, if you would prefer your 3D view to be larger, grab the cross at the middle of the four screens and drag to extend the areas. The 3D window has a few special 3D display types such as Ray-Traced Preview and Lightmap Grid. We will be learning more about these later, but for now, just know that 3D Ray-traced Preview simulates the way light is cast. It does not mimic what you would actually see in-game, but it can be a good first step before compile to see what your lighting may look like. The 3D Lighting Preview will open in a new window and will update every time a camera is moved or a light entity is changed. You cannot navigate directly in the lighting preview window, so you will need to use the 2D cameras to change the viewing perspective. The Map toolbar Located to the left of the screen, the Map toolbar holds all the mapping tools. You will probably use this toolbar the most. The tools will each be covered in depth later on, but here's a basic overview, as shown in the following screenshot, starting from the first tool: The Selection Tool The Selection Tool is pretty self-explanatory; use this tool to select objects. The hot key for this is Shift + S. This is the tool that you will probably use the most. This tool selects objects in the 3D and 2D views and also lets you drag selection boxes in the 2D views. The Magnify Tool The Magnify Tool will zoom in and out in any view. You could also just use the mouse wheel if you have one or the + and – keys on the numerical keypad for the 2D views. The hot key for the magnify tool is Shift + G.
The Camera Tool The Camera Tool enables 3D view navigation and lets you place multiple different cameras into the 2D views. Its hot key is Shift + C. The Entity Tool The Entity Tool places entities into the map. If clicked on the 3D view, an entity is placed on the closest brush to the mouse cursor. If used in the 2D view, a crosshair will appear noting the origin of the entity, and the Enter key will add it to the map at the origin. The entity placed in the map is specified by the object bar. The hot key is Shift + E. The Block Tool The Block Tool creates brushes. Drag a box in any 2D view and hit the Enter key to create a brush within the bounds of the box. The object bar specifies which type of brush will be created. The default is box and the hot key for this is Shift + B. The Texture Tool The Texture Tool allows complete control over how you paint your brushes. For now, just know where it is and what it does; the hot key is Shift + A. The Apply Current Texture Tool Clicking on the Apply Current Texture icon will apply the selected texture to the selected brush or brushes. The Decal Tool The Decal Tool applies decals and little detail textures to brushes and the hot key is Shift + D. The Overlay Tool The Overlay Tool is similar to the decal tool. However, overlays are a bit more powerful than decals. Shift + O will be the hot key. The Clipping Tool The Clipping Tool lets you slice brushes into two or more pieces, and the hot key is Shift + X. The Vertex manipulation Tool The Vertex manipulation Tool, or VM tool, allows you to move the individual vertices and edges of brushes any way you like. This is one of the most powerful tools you have in your toolkit! Using this tool improperly, however, is the easiest way to corrupt your map and ruin your day. Not to worry though, we'll learn about this in great detail later on. The hot key is Shift + V. The selection mode bar The selection mode bar (located at the top-right corner by default) lets you choose what you want to select. If Groups is selected, you will select an entire group of objects (if they were previously grouped) when you click on something. The Objects selection mode will only select individual objects within groups. Solids will only select solid objects. The texture bar Located just beneath the selection mode toolbar, the texture bar, as shown, in the following screenshot, shows a thumbnail preview of your currently selected (active) texture and has two buttons that let you select or replace a texture. What a nifty tool, eh? The filter control bar The filter control bar controls your VisGroups (short for visual groups). VisGroups separate your map objects into different categories, similar to layers in the image editing software. To make your mapping life a bit easier, you can toggle visibility of object groups. If, for example, you're trying to sculpt some terrain but keep getting your view blocked by tree models, you can just uncheck the Props box, as shown in the following screenshot, to hide all the trees! There are multiple automatically generated VisGroups such as entities, displacements, and nodraws that you can easily filter through. Don't think you're limited to this though; you can create your own VisGroup with any selection at any time. The object bar The object bar lets you control what type of brush you are creating with the brush tool. 
This is also where you turn brushes into brush-based entities and create and place prefabs, as shown in the following screenshot: Navigating in 3D You will be spending most of your time in the main four windows, so now let's get comfortable navigating in them, starting with the 3D viewport. Looking around Select the camera tool on the map tools bar. It's the third one down on the list and looks like a red 35 mm camera. Holding the left mouse button while in the 3D view will allow you to look around from a stationary point. Holding the right mouse button will allow you to pan left, right, up, and down in the 3D space. Holding both left and right mouse buttons down together will allow you to move forward and backwards as well as pan left and right. Scrolling the mouse wheel will move the camera forward and backwards. Practice flying down the example map hallway. While looking down the hallway, hold the right mouse button to rise up through the grate and see the top of the map. If you would prefer another method of navigating in 3D, you can use the W, S, A, and D keys to move around while the left mouse button is pressed. Just like your normal FPS game, W moves forward in the direction of the camera, S moves backwards, and A and D move left and right, respectively. You can also move the mouse to look around while moving. Being comfortable with the 3D view is necessary in order to become proficient in creating and scripting 3D environments. As with everything, practice makes perfect, so don't be discouraged if you find yourself hitting the wrong buttons. Having the camera tool selected is not necessary to navigate in 3D. With any tool selected, hold Space bar while the cursor is in the 3D window to activate the 3D navigation mode. While holding Space bar, the navigation functions exactly as it does as if the camera tool was selected. Releasing Space bar will restore normal functionality to the currently selected tool. This can be a huge time saver down the line when you're working on very technical object placements. Multiple cameras If you find yourself bouncing around between different areas of the map, or even changing angles near the same object, you can create multiple cameras and juggle between them. With the camera tool selected, hold Shift and drag a line with the left mouse button in any 2D viewport. The start of the line will be the camera origin, and the end of the line will be the camera's target. Whenever you create a new camera, the newly created camera becomes active and displays its view in the 3D viewport. To cycle between cameras, press the Page Up and Page Down buttons, or click on a camera in any 2D view. Camera locations are stored in the map file when you save, so you don't have to worry about losing them when you exit. Pressing the Delete key with a camera selected will remove the active camera. If you delete the only camera, your view will snap to the origin (0, 0, 0) but you will still be able to look around and create other cameras. In essence, you will always have at least one camera in your map. Selecting objects in the 3D viewport To select an object in the 3D viewport, you must have the selection tool active, as shown in the following screenshot. A quick way to activate the selection tool is to hit the Esc key while any tool is selected, or use the Shift + S hot key. 
A selected brush or group of brushes will be highlighted in red with yellow edges, as shown in the following screenshot:

To deselect anything within the 3D window, click on any other brush, click on the background (the void), or simply hit the Esc key.

If you want to select an object behind another object, press and hold the left mouse button on the front object. This cycles through all the objects that are located behind the cursor. You will be able to see the selected object changing in the 2D and 3D windows about once per second. Simply release the mouse button to complete your selection.

To select multiple brushes or objects in the 3D window, hold Ctrl and left-click on multiple brushes. Clicking on a selected object while Ctrl is held deselects it. If you've made a mistake choosing objects, you can undo your selections with Ctrl + Z, or navigate to Edit | Undo.

That's One Fancy Hammer!

Packt
13 Jan 2014
8 min read
(For more resources related to this topic, see here.)

Introducing Unity 3D

Unity 3D is a new piece of technology that strives to make life better and easier for game developers. Unity is a game engine, or a game authoring tool, that enables creative folks like you to build video games. By using Unity, you can build video games more quickly and easily than ever before. In the past, building games required an enormous stack of punch cards, a computer that filled a whole room, and a burnt sacrificial offering to an ancient god named Fortran. Today, instead of spanking nails into boards with your palm, you have Unity. Consider it your hammer: a new piece of technology for your creative tool belt.

Unity takes over the world

We'll be distilling our game development dreams down to small, bite-sized nuggets instead of launching into any sweepingly epic open-world games. The idea here is to focus on something you can actually finish instead of getting bogged down in an impossibly ambitious opus. When you're finished, you can publish these games on the Web, Mac, or PC.

The team behind Unity 3D is constantly working on packages and export options for other platforms. At the time of this writing, Unity could additionally create games that can be played on the iPhone, iPod, iPad, Android devices, Xbox Live Arcade, PS3, and Nintendo's WiiWare service. Each of these is an add-on to the core Unity package and comes at an additional cost. As we're focusing on what we can do without breaking the bank, we'll stick to the core Unity 3D program for the remainder of this article.

The key is to start with something you can finish, and then, with each new project you build, to add small pieces of functionality that challenge you and expand your knowledge. Any successful plan for world domination begins by drawing a territorial border in your backyard.

Browser-based 3D – welcome to the future

Unity's primary and most astonishing selling point is that it can deliver a full 3D game experience right inside your web browser. It does this with the Unity Web Player, a free plugin that embeds and runs Unity content on the Web.

Time for action – install the Unity Web Player

Before you dive into the world of Unity games, download the Unity Web Player. Much the same way the Flash Player runs Flash-created content, the Unity Web Player is a plugin that runs Unity-created content in your web browser.

1. Go to http://unity3D.com.
2. Click on the install now! button to install the Unity Web Player.
3. Click on Download Now!
4. Follow all of the on-screen prompts until the Web Player has finished installing.

Welcome to Unity 3D!

Now that you've installed the Web Player, you can view content created with the Unity 3D authoring tool in your browser.

What can I build with Unity?

In order to fully appreciate how fancy this new hammer is, let's take a look at some projects that other people have created with Unity. While these games may be completely out of our reach at the moment, let's find out how game developers have pushed this amazing tool to its very limits.

FusionFall

The first stop on our whirlwind Unity tour is FusionFall, a Massively Multiplayer Online Role-Playing Game (MMORPG). You can find it at fusionfall.com. You may need to register to play, but it's definitely worth the extra effort! FusionFall was commissioned by the Cartoon Network television franchise and takes place in a re-imagined, anime-style world where popular Cartoon Network characters are all grown up.
Darker, more sophisticated versions of the Powerpuff Girls, Dexter, Foster and his imaginary friends, and the kids from Codename: Kids Next Door run around battling a slimy green alien menace.

Completely hammered

FusionFall is a very big and very expensive high-profile game that helped draw a lot of attention to the then-unknown Unity game engine when it was released. As a tech demo, it's one of the very best showcases of what your new technological hammer can really do! FusionFall has real-time multiplayer networking, chat, quests, combat, inventory, NPCs (non-player characters), basic AI (artificial intelligence), name generation, avatar creation, and costumes. And that's just a highlight of the game's feature set. This game packs a lot of depth.

Should we try to build FusionFall?

At this point, you might be thinking to yourself, "Heck YES! FusionFall is exactly the kind of game I want to create with Unity, and this article is going to show me how!" Unfortunately, a step-by-step guide to creating a game the size and scope of FusionFall would likely require its own flatbed truck to transport, and you'd need a few friends to help you turn each enormous page. It would take you the rest of your life to read, and on your deathbed, you'd finally realize the grave error you had made in ordering it online in the first place, despite having qualified for free shipping.

Here's why: check out the game credits link on the FusionFall website: http://www.fusionfall.com/game/credits.php. This page lists all of the people involved in bringing the game to life. Cartoon Network enlisted the help of an experienced Korean MMO developer called Grigon Entertainment. There are over 80 names on that credits list! Clearly, only two courses of action are available to you:

1. Build a cloning machine and make 79 copies of yourself. Send each of those copies to school to study various disciplines, including marketing, server programming, and 3D animation. Then spend a year building the game with your clones. Keep track of who's who by using a sophisticated armband system.
2. Give up now, because you'll never make the game of your dreams.

Another option

Before you do something rash and abandon game development for farming, let's take another look at this. FusionFall is very impressive, and it might look a lot like the game that you've always dreamed of making. This article is not about crushing your dreams. It's about dialing down your expectations, putting those dreams in an airtight jar, and taking baby steps. Confucius said: "A journey of a thousand miles begins with a single step." I don't know much about the man's hobbies, but if he was into video games, he might have said something similar about them: creating a game with a thousand awesome features begins by creating a single, less feature-rich game.

So, let's put the FusionFall dream in an airtight jar and come back to it when we're ready. We'll take a look at some smaller Unity 3D game examples and talk about what it took to build them.

Off-Road Velociraptor Safari

No tour of Unity 3D games would be complete without a trip to Blurst.com, the game portal owned and operated by indie game developer Flashbang Studios. In addition to hosting games by other indie game developers, Flashbang has packed Blurst with its own slate of kooky content, including Off-Road Velociraptor Safari. (Note: Flashbang Studios is constantly toying around with ways to distribute and sell its games.
At the time of this writing, Off-Road Velociraptor Safari could be played for free only as a Facebook game. If you don't have a Facebook account, try playing another one of the team's creations, like Minotaur China Shop or Time Donkey.)

In Off-Road Velociraptor Safari, you play a dinosaur in a pith helmet and a monocle, driving a jeep equipped with a deadly spiked ball on a chain (just like in the archaeology textbooks). Your goal is to spin around in your jeep doing tricks and murdering your fellow dinosaurs (obviously).

For many indie game developers and reviewers, Off-Road Velociraptor Safari was their first introduction to Unity. Some reviewers said that they were stunned that a fully 3D game could play in the browser. Other reviewers were a little bummed that the game was sluggish on slower computers. We'll talk about optimization a little later, but it's not too early to keep performance in mind as you start out.

Fewer features, more promise

If you play Off-Road Velociraptor Safari and some of the other games on the Blurst site, you'll get a better sense of what you can do with Unity without a team of experienced Korean MMO developers. The game has 3D models, physics (code that controls how things move around somewhat realistically), collisions (code that detects when things hit each other), music, and sound effects. Just like FusionFall, the game can be played in the browser with the Unity Web Player plugin. Flashbang Studios also sells downloadable versions of its games, demonstrating that Unity can produce standalone executable game files too.

Maybe we should build Off-Road Velociraptor Safari?

Right then! We can't create FusionFall just yet, but we can surely create a tiny game like Off-Road Velociraptor Safari, right? Well... no. Again, this article isn't about crushing your game development dreams. But the fact remains that Off-Road Velociraptor Safari took five supremely talented and experienced guys eight weeks to build on full-time hours, and they've been tweaking and improving it ever since. Even a game like this, which may seem quite small in comparison to a full-blown MMO like FusionFall, is a daunting challenge for a solo developer. Put it in a jar up on the shelf, and let's take a look at something you'll have more success with.

Unity Networking – The Pong Game

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.)

Multiplayer is everywhere. It's a staple of AAA games and small-budget indie offerings alike. Multiplayer games tap into our most basic human desires. Whether it be teaming up with strangers to survive a zombie apocalypse, or showing off your skills in a round of "Capture the Flag" on your favorite map, no artificial intelligence in the world comes close to the feeling of playing with a living, breathing, and thinking human being.

Unity3D has a sizable number of third-party networking middleware options aimed at developing multiplayer games, and is arguably one of the easiest platforms on which to prototype multiplayer games. The first networking system most people encounter in Unity is the built-in Unity Networking API. This API simplifies a great many tasks in writing networked code by providing a framework for networked objects rather than just sending messages. It works by providing a NetworkView component, which can serialize object state and call functions across the network. Additionally, Unity provides a Master Server, which essentially lets players search among all public servers to find a game to join, and can also help players connect to each other from behind private networks.

In this article, we will cover:

- Introducing multiplayer
- Introducing UDP communication
- Setting up your own Master Server for testing
- What a NetworkView is
- Serializing object state
- Calling RPCs
- Starting servers and connecting to them
- Using the Master Server API to register servers and browse available hosts
- Setting up a dedicated server model
- Loading networked levels
- Creating a Pong clone using Unity networking

Introducing multiplayer games

Before we get started on the details of communication over the Internet, what exactly does multiplayer entail in a game? As far as most players are concerned, in a multiplayer game they are sharing the same experience with other players. It looks and feels like they are playing the same game. In reality, they aren't. Each player is playing a separate game, each with its own game state. Trying to ensure that all players are playing the exact same game is prohibitively expensive. Instead, games attempt to synchronize just enough information to give the illusion of a shared experience.

Games are almost ubiquitously built around a client-server architecture, where each client connects to a single server. The server is the main hub of the game, ideally the machine that processes the game state, although at the very least it can serve as a simple "middleman" for messages between clients. Each client represents an instance of the game running on a computer. In some cases the server might also have a client; for instance, some games allow you to host a game without starting up an external server program.

While an MMO (Massively Multiplayer Online game) might directly connect to one of these servers, many games do not have prior knowledge of the server IPs. For example, FPS games often let players host their own servers. In order to show the user a list of servers they can connect to, games usually employ another server, known as the "Master Server" or, alternatively, the "Lobby server". This server's sole purpose is to keep track of game servers which are currently running, and report a list of these to clients. Game servers connect to the Master Server in order to announce their presence publicly, and game clients query the Master Server to get an updated list of game servers currently running.
Alternatively, the Master Server sometimes does not keep track of servers at all. Sometimes games employ "matchmaking", where players connect to the Lobby server and list their criteria for a game. The server places each player in a "bucket" based on their criteria, and whenever a bucket is full enough to start a game, a host is chosen from among these players and that client starts up a server in the background, which the other players connect to. This way, the player does not have to browse servers manually and can instead simply tell the game what they want to play.

Introducing UDP communication

The built-in Unity networking is built upon RakNet. RakNet uses UDP communication for efficiency. UDP (User Datagram Protocol) is a simple way to send messages to another computer. These messages are largely unchecked, beyond a simple checksum to ensure that the message has not been corrupted. Because of this, messages are not guaranteed to arrive, nor are they guaranteed to arrive only once (occasionally a single message can be delivered twice or more), or even in any particular order. TCP, on the other hand, guarantees each message to be received just once, and in the exact order it was sent, although this can result in increased latency (messages must be resent several times if they fail to reach the target, and messages must be buffered when received, in order to be processed in the exact order they were sent).

To solve this, a reliability layer must be built on top of UDP. This is known as rUDP (reliable UDP). Messages can be sent unreliably (they may not arrive, or may arrive more than once) or reliably (they are guaranteed to arrive, only once per message, and in the correct order). If a reliable message was not received or was corrupt, the original sender has to resend the message. Additionally, messages will be stored rather than immediately processed if they arrive out of order. For example, if you receive messages 1, 2, and 4, your program will not be able to handle those messages until message 3 arrives.

Allowing unreliable or reliable switching on a per-message basis affords better overall performance. Messages such as player position are better suited to unreliable delivery (if one fails to arrive, another will arrive soon anyway), whereas damage messages must be reliable (you never want to accidentally drop a damage message, and having them arrive in the same order they were sent reduces race conditions). In Unity, you can serialize the state of an object (for example, you might serialize the position and health of a unit) either reliably or unreliably (unreliable is usually preferred). All other messages are sent reliably.

Setting up the Master Server

Although Unity provides its own default Master Server and Facilitator (which are connected to automatically if you do not specify your own), it is not recommended to use them for production. We'll be using our own Master Server, so you know how to connect to one you've hosted yourself.

Firstly, go to the following page: http://unity3d.com/master-server/

We're going to download two of the listed server components: the Master Server and the Facilitator, as shown in the following screenshot.

The servers are provided in full source, zipped. If you are on Windows using Visual Studio Express, open up the Visual Studio .sln solution and compile in Release mode. Navigate to the Release folder and run the EXE (MasterServer.exe or Facilitator.exe).
If you are on a Mac, you can either use the included Xcode project, or simply run the Makefile (the Makefile works under both Linux and Mac OS X).

The Master Server, as previously mentioned, enables our game to show a server lobby to players. The Facilitator is used to help clients connect to each other by performing an operation known as NAT punch-through. NAT is used when multiple computers are part of the same network and all use the same public IP address. NAT will essentially translate public and private IPs, but in order for one machine to connect to another, NAT punch-through is necessary. You can read more about it here: http://www.raknet.net/raknet/manual/natpunchthrough.html

The default port for the Master Server is 23466, and for the Facilitator it is 50005. You'll need these later in order to configure Unity to connect to the local Master Server and Facilitator instead of the default Unity-hosted servers.

Now that we've set up our own servers, let's take a look at the Unity Networking API itself.

NetworkViews and state serialization

In Unity, game objects that need to be networked have a NetworkView component. The NetworkView component handles communication over the network, and even helps make networked state serialization easier. It can automatically serialize the state of a Transform, Rigidbody, or Animation component, or you can write a custom serialization function in one of your own scripts.

When attached to a game object, NetworkView will generate a NetworkViewID for the NetworkView. This ID serves to uniquely identify a NetworkView across the network. An object can be saved as part of a scene with NetworkView attached (this can be used for game managers, chat boxes, and so on), or it can be saved in the project as a prefab and spawned later via Network.Instantiate (this is used to generate player objects, bullets, and so on). Network.Instantiate is the multiplayer equivalent of GameObject.Instantiate: it sends a message over the network to other clients so that all clients spawn the object. It also assigns a network ID to the object, which is used to identify the object across multiple clients (the same object will have the same network ID on every client).

A prefab is a template for a game object (such as the player object). You can use the Instantiate methods to create a copy of the template in the scene.

Spawned network game objects can also be destroyed via Network.Destroy. It is the multiplayer counterpart of GameObject.Destroy. It sends a message to all clients so that they all destroy the object. It also deletes any RPC messages associated with that object.

NetworkView has a single component that it will serialize. This can be a Transform, a Rigidbody, an Animation, or one of your own components that has an OnSerializeNetworkView function. Serialized values can either be sent with the ReliableDeltaCompressed option, where values are always sent reliably and compressed to include only changes since the last update, or they can be sent with the Unreliable option, where values are not sent reliably and always include the full values (not the change since the last update, since that would be impossible to predict over UDP). Each method has its own advantages and disadvantages. If data is constantly changing, such as player position in a first-person shooter, Unreliable is generally preferred to reduce latency. If data does not change often, use the ReliableDeltaCompressed option to reduce bandwidth (as only changes will be serialized).
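To make the spawning workflow concrete before we move on, here is a minimal sketch of how Network.Instantiate and Network.Destroy might fit together. This is not code from the article; the playerPrefab field, the spawn position, and the class name are illustrative assumptions:

using UnityEngine;

// Hypothetical spawner: creates this client's player object once
// connected, and removes it again when the player disconnects.
public class PlayerSpawner : MonoBehaviour
{
    // Assign a prefab with a NetworkView attached in the Inspector.
    public GameObject playerPrefab;

    private GameObject myPlayer;

    // fires on clients; a host would do the same in OnServerInitialized
    void OnConnectedToServer()
    {
        // Spawns the prefab on every connected machine; all copies
        // share the same network ID. The last argument is the
        // network group (0 is fine for simple games).
        myPlayer = (GameObject)Network.Instantiate(
            playerPrefab, Vector3.zero, Quaternion.identity, 0 );
    }

    void OnDisconnectedFromServer( NetworkDisconnection info )
    {
        // Network.Destroy removes the object on all machines and
        // clears any buffered RPCs attached to it.
        if( myPlayer != null )
            Network.Destroy( myPlayer );
    }
}

Because the same prefab spawns on every connected machine, every copy runs its scripts; the ownership check covered next (NetworkView.isMine) is what keeps each machine from driving objects it doesn't own.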
NetworkView can also call methods across the network via Remote Procedure Calls (RPC). RPCs are always completely reliable in Unity Networking, although some networking libraries, such as uLink or TNet, allow you to send unreliable RPCs.

Writing a custom state serializer

While initially a game might simply serialize Transform or Rigidbody for testing, eventually it is often necessary to write a custom serialization function. This is a surprisingly easy task. Here is a script that sends an object's position over the network:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkSerializePosition : MonoBehaviour
{
    public void OnSerializeNetworkView( BitStream stream, NetworkMessageInfo info )
    {
        // we are currently writing information to the network
        if( stream.isWriting )
        {
            // send the object's position
            Vector3 position = transform.position;
            stream.Serialize( ref position );
        }
        // we are currently reading information from the network
        else
        {
            // read the first Vector3 and store it in 'position'
            Vector3 position = Vector3.zero;
            stream.Serialize( ref position );

            // set the object's position to the value we were sent
            transform.position = position;
        }
    }
}

Most of the work is done with BitStream. This is used to check whether NetworkView is currently writing the state or reading it from the network. Depending on whether it is reading or writing, stream.Serialize behaves differently. If NetworkView is writing, the value will be sent over the network. However, if NetworkView is reading, the value will be read from the network and saved in the referenced variable (thus the ref keyword, which passes Vector3 by reference rather than by value).

Using RPCs

RPCs are useful for single, self-contained messages that need to be sent, such as a character firing a gun, or a player saying something in chat. In Unity, RPCs are methods marked with the [RPC] attribute. They can be called by name via networkView.RPC( "methodName", ... ). For example, the following script prints to the console on all machines when the space key is pressed:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkCallRPC : MonoBehaviour
{
    void Update()
    {
        // important: make sure not to run if this networkView is not ours
        if( !networkView.isMine )
            return;

        // if the space key is pressed, call the RPC for everybody
        if( Input.GetKeyDown( KeyCode.Space ) )
            networkView.RPC( "testRPC", RPCMode.All );
    }

    [RPC]
    void testRPC( NetworkMessageInfo info )
    {
        // log the IP address of the machine that called this RPC
        Debug.Log( "Test RPC called from " + info.sender.ipAddress );
    }
}

Also note the use of NetworkView.isMine to determine ownership of an object. All scripts will run 100 percent of the time regardless of whether your machine owns the object or not, so you have to be careful to avoid letting some logic run on remote machines; for example, player input code should only run on the machine that owns the object.

RPCs can either be sent to a number of players at once, or to a specific player. You can either pass an RPCMode to specify which group of players should receive the message, or a specific NetworkPlayer to send the message to. You can also specify any number of parameters to be passed to the RPC method.
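One detail worth calling out: OnSerializeNetworkView is not limited to a single value. Here is a hedged sketch, assuming an invented health field, that serializes position and health together; the only rule is that reads must occur in exactly the same order as the writes:

using UnityEngine;

// Hypothetical extension of the position serializer: position and
// health travel together in a single state update.
public class ExampleSerializePositionAndHealth : MonoBehaviour
{
    public float health = 100f;

    public void OnSerializeNetworkView( BitStream stream, NetworkMessageInfo info )
    {
        if( stream.isWriting )
        {
            // write position first, then health
            Vector3 position = transform.position;
            stream.Serialize( ref position );
            stream.Serialize( ref health );
        }
        else
        {
            Vector3 position = Vector3.zero;
            float newHealth = 0f;

            // read back in the same order the values were written
            stream.Serialize( ref position );
            stream.Serialize( ref newHealth );

            transform.position = position;
            health = newHealth;
        }
    }
}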
RPCMode includes the following entries:

- All (the RPC is called for everyone)
- AllBuffered (the RPC is called for everyone, and then buffered for when new players connect, until the object is destroyed)
- Others (the RPC is called for everyone except the sender)
- OthersBuffered (the RPC is called for everyone except the sender, and then buffered for when new players connect, until the object is destroyed)
- Server (the RPC is sent to the host machine)

Initializing a server

The first thing you will want to set up is hosting games and joining games. To initialize a server on the local machine, call Network.InitializeServer. This method takes three parameters: the number of allowed incoming connections, the port to listen on, and whether to use NAT punch-through. The following script initializes a server on port 25000 which allows 8 clients to connect:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkInitializeServer : MonoBehaviour
{
    void OnGUI()
    {
        if( GUILayout.Button( "Launch Server" ) )
        {
            LaunchServer();
        }
    }

    // launch the server
    void LaunchServer()
    {
        // Start a server that enables NAT punch-through,
        // listens on port 25000,
        // and allows 8 clients to connect
        Network.InitializeServer( 8, 25000, true );
    }

    // called when the server has been initialized
    void OnServerInitialized()
    {
        Debug.Log( "Server initialized" );
    }
}

You can also optionally enable an incoming password (useful for private games) by setting Network.incomingPassword to a password string of the player's choice, and initialize a general-purpose security layer by calling Network.InitializeSecurity(). Both of these should be set up before actually initializing the server.

Connecting to a server

To connect to a server whose IP address you know, you can call Network.Connect. The following script allows the player to enter an IP, a port, and an optional password, and attempts to connect to the server:

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingConnectToServer : MonoBehaviour
{
    private string ip = "";
    private string port = "";
    private string password = "";

    void OnGUI()
    {
        GUILayout.Label( "IP Address" );
        ip = GUILayout.TextField( ip, GUILayout.Width( 200f ) );

        GUILayout.Label( "Port" );
        port = GUILayout.TextField( port, GUILayout.Width( 50f ) );

        GUILayout.Label( "Password (optional)" );
        password = GUILayout.PasswordField( password, '*', GUILayout.Width( 200f ) );

        if( GUILayout.Button( "Connect" ) )
        {
            int portNum = 25000;

            // failed to parse the port number; a more ideal solution is to
            // limit input to numbers only, and a number of examples can be
            // found on the Unity forums
            if( !int.TryParse( port, out portNum ) )
            {
                Debug.LogWarning( "Given port is not a number" );
            }
            // try to initiate a direct connection to the server
            else
            {
                Network.Connect( ip, portNum, password );
            }
        }
    }

    void OnConnectedToServer()
    {
        Debug.Log( "Connected to server!" );
    }

    void OnFailedToConnect( NetworkConnectionError error )
    {
        Debug.Log( "Failed to connect to server: " + error.ToString() );
    }
}

Connecting to the Master Server

While we could just allow the player to enter IP addresses to connect to servers (and many games do, such as Minecraft), it's much more convenient to allow the player to browse a list of public servers. This is what the Master Server is for. Now that you can start up a server and connect to it, let's take a look at how to connect to the Master Server you downloaded earlier. First, make sure both the Master Server and Facilitator are running.
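As a bridge between the RPC modes above and the connection code, here is a hedged sketch of a buffered chat message. It assumes the script sits on a scene object with a NetworkView attached; the class and method names are invented for illustration. RPCMode.AllBuffered means players who join later still receive every message sent so far (until the object is destroyed):

using UnityEngine;

// Hypothetical chat relay using a buffered RPC.
public class ExampleBufferedChat : MonoBehaviour
{
    private string message = "";

    void OnGUI()
    {
        message = GUILayout.TextField( message, GUILayout.Width( 200f ) );

        if( GUILayout.Button( "Say" ) && message.Length > 0 )
        {
            // buffered, so late joiners replay the chat history
            networkView.RPC( "ReceiveChat", RPCMode.AllBuffered, message );
            message = "";
        }
    }

    // the trailing NetworkMessageInfo parameter is filled in by Unity
    // automatically; the caller does not pass it
    [RPC]
    void ReceiveChat( string text, NetworkMessageInfo info )
    {
        Debug.Log( info.sender.ipAddress + " says: " + text );
    }
}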
I will assume you are running them on your local machine (the IP is 127.0.0.1), but of course you can run them on a different computer and use that machine's IP address. Keep in mind that if you want the Master Server to be publicly accessible, it must be installed on a machine with a public IP address (it cannot be in a private network).

Let's configure Unity to use our Master Server rather than the Unity-hosted test server. The following script configures the Master Server and Facilitator to connect to a given IP (by default, 127.0.0.1):

using UnityEngine;
using System.Collections;

public class ExampleUnityNetworkingConnectToMasterServer : MonoBehaviour
{
    // Assuming the Master Server and Facilitator are on the same machine
    public string MasterServerIP = "127.0.0.1";

    void Awake()
    {
        // set the IP and port of the Master Server to connect to
        MasterServer.ipAddress = MasterServerIP;
        MasterServer.port = 23466;

        // set the IP and port of the Facilitator to connect to
        Network.natFacilitatorIP = MasterServerIP;
        Network.natFacilitatorPort = 50005;
    }
}
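With the Master Server configured, the next step is typically for a game server to announce itself and for clients to browse the list of hosts. As a hedged preview of that step (not the article's own code), it might look roughly like the following; the game type string "MyPongGame" and the class name are invented placeholders:

using UnityEngine;

// Hypothetical lobby helper: the server registers itself with the
// Master Server; clients request and poll the list of hosts.
public class ExampleMasterServerLobby : MonoBehaviour
{
    void OnServerInitialized()
    {
        // announce this server under a game type name so that
        // clients can filter for it
        MasterServer.RegisterHost( "MyPongGame", "My Server", "Open to all" );
    }

    void OnGUI()
    {
        if( GUILayout.Button( "Refresh Host List" ) )
        {
            MasterServer.RequestHostList( "MyPongGame" );
        }

        // PollHostList returns whatever hosts have arrived so far
        foreach( HostData host in MasterServer.PollHostList() )
        {
            if( GUILayout.Button( host.gameName ) )
            {
                // connect using the data the Master Server gave us
                Network.Connect( host );
            }
        }
    }
}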