
How-To Tutorials - 3D Game Development

115 Articles
Packt
21 Oct 2013
7 min read

Adding Finesse to Your Game

Adding a background

There is still a lot of black in the background, and as the game has a space theme, let's add some stars. The way we'll do this is to add a sphere that we can map the stars texture to, so click on Game Object | Create Other | Sphere, position it at X: 0, Y: 0, Z: 0, and set its size to X: 100, Y: 100, Z: 100. Drag the stars texture, located at Textures/stars, onto the new sphere that we created in our scene. That was simple, wasn't it? Unity has added the texture to a material that appears on the outside of our sphere, while we need it to show on the inside. To fix this, we are going to reverse the triangle order, flip the normal map, and flip the UV map with C# code. Right-click on the Scripts folder, click on Create, and select C# Script. A script will appear in the Scripts folder; it should already have focus and be asking you to type a name for the script, so call it SkyDome. Double-click on the script in Unity and it will open in MonoDevelop.
Edit the Start method, as shown in the following code:

    void Start () {
        // Get a reference to the mesh
        MeshFilter BaseMeshFilter = transform.GetComponent("MeshFilter") as MeshFilter;
        Mesh mesh = BaseMeshFilter.mesh;

        // Reverse triangle winding
        int[] triangles = mesh.triangles;
        int numpolies = triangles.Length / 3;
        for(int t = 0; t < numpolies; t++)
        {
            int tribuffer = triangles[t * 3];
            triangles[t * 3] = triangles[(t * 3) + 2];
            triangles[(t * 3) + 2] = tribuffer;
        }

        // Readjust uv map for inner sphere projection
        Vector2[] uvs = mesh.uv;
        for(int uvnum = 0; uvnum < uvs.Length; uvnum++)
        {
            uvs[uvnum] = new Vector2(1 - uvs[uvnum].x, uvs[uvnum].y);
        }

        // Readjust normals for inner sphere projection
        Vector3[] norms = mesh.normals;
        for(int normalsnum = 0; normalsnum < norms.Length; normalsnum++)
        {
            norms[normalsnum] = -norms[normalsnum];
        }

        // Copy local built-in arrays back to the mesh
        mesh.uv = uvs;
        mesh.triangles = triangles;
        mesh.normals = norms;
    }

The breakdown of the code is as follows:

Get the mesh of the sphere.
Reverse the way the triangles are drawn. Each triangle has three indexes in the array; this script just swaps the first and last index of each triangle in the array.
Adjust the X position of the UV map coordinates.
Flip the normals of the sphere.
Apply the new values of the reversed triangles, adjusted UV coordinates, and flipped normals to the sphere.

Click and drag this script onto your sphere GameObject and test your scene. You should now see something like the following screenshot:

Adding extra levels

Now that the game is looking better, we can add some more content to it. Luckily, the jagged array we created earlier easily supports adding more levels. Levels can be any size, even with variable column heights per row. Double-click on the Sokoban script in the Project panel and switch over to MonoDevelop.
Find the levels array and modify it to be as follows:

    // Create the top array, this will store the level arrays
    int[][][] levels =
    {
        // Create the level array, this will store the row array
        new int [][] {
            // Create all row arrays, these will store column data
            new int[] {1,1,1,1,1,1,1,1},
            new int[] {1,0,0,1,0,0,0,1},
            new int[] {1,0,3,3,0,3,0,1},
            new int[] {1,0,0,1,0,1,0,1},
            new int[] {1,0,0,1,3,1,0,1},
            new int[] {1,0,0,2,2,2,2,1},
            new int[] {1,0,0,1,0,4,1,1},
            new int[] {1,1,1,1,1,1,1,1}
        },
        // Create a new level
        new int [][] {
            new int[] {1,1,1,1,0,0,0,0},
            new int[] {1,0,0,1,1,1,1,1},
            new int[] {1,0,2,0,0,3,0,1},
            new int[] {1,0,3,0,0,2,4,1},
            new int[] {1,1,1,0,0,1,1,1},
            new int[] {0,0,1,1,1,1,0,0}
        },
        // Create a new level
        new int [][] {
            new int[] {1,1,1,1,1,1,1,1},
            new int[] {1,4,0,1,2,2,2,1},
            new int[] {1,0,0,3,3,0,0,1},
            new int[] {1,0,3,0,0,0,1,1},
            new int[] {1,0,0,1,1,1,1},
            new int[] {1,0,0,1},
            new int[] {1,1,1,1}
        }
    };

The preceding code gives us two extra levels, bringing the total to three. The layout of the arrays is still very visual, and you can easily see each level's layout just by looking at its rows. Our BuildLevel, CheckIfPlayerIsAttempingToMove, and MovePlayer methods only work on the first level at the moment, so let's update them to always use the user's current level. We'll have to store which level the player is currently on and use that level at all times, incrementing the value when a level is finished. As we'll want this value to persist between plays, we'll be using the PlayerPrefs object that Unity provides for saving player data. Before we get the value, we need to check that it is actually set and exists; otherwise we could see some odd results. Start by declaring our variable at the top of the Sokoban script as follows:

    int currentLevel;

Next, we'll need to get the value of the current level from the PlayerPrefs object and store it in the Awake method.
Add the following code to the top of your Awake method:

    if (PlayerPrefs.HasKey("currentLevel")) {
        currentLevel = PlayerPrefs.GetInt("currentLevel");
    } else {
        currentLevel = 0;
        PlayerPrefs.SetInt("currentLevel", currentLevel);
    }

Here we check whether we already have a value stored in the PlayerPrefs object; if we do, we use it, and if we don't, we set currentLevel to 0 and then save it to the PlayerPrefs object. To fix the methods mentioned earlier, click on Search | Replace. A new window will appear. Type levels[0] in the top box and levels[currentLevel] in the bottom one, and then click on All.

Level complete detection

It's all well and good having three levels, but without a mechanism to move between them they are useless. We are going to add a check to see if the player has finished a level; if they have, we increment the level counter and load the next level in the array. We only need to do the check at the end of every move; doing it every frame would be redundant. We'll write the following method first and then explain it:

    // If this method returns true then we have finished the level
    bool haveFinishedLevel () {
        // Initialise the counter for how many crates are on goal tiles
        int cratesOnGoalTiles = 0;

        // Loop through all the rows in the current level
        for (int i = 0; i < levels[currentLevel].Length; i++) {
            // Get the tile ID for the column and pass it to the switch statement
            for (int j = 0; j < levels[currentLevel][i].Length; j++) {
                switch (levels[currentLevel][i][j]) {
                    case 5:
                        // Do we have a match for a crate on goal tile ID?
                        // If so, increment the counter
                        cratesOnGoalTiles++;
                        break;
                    default:
                        break;
                }
            }
        }

        // Check if the cratesOnGoalTiles variable is the same as the
        // amountOfCrates we set when building the level
        if (amountOfCrates == cratesOnGoalTiles) {
            return true;
        } else {
            return false;
        }
    }

In the BuildLevel method, whenever we instantiate a crate, we increment the amountOfCrates variable.
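As an aside, the counting logic inside haveFinishedLevel is ordinary C# and can be tried outside Unity. In the sketch below, the CountCratesOnGoals helper and the miniature level are illustrative assumptions, not code from the article:

```csharp
using System;

// Standalone sketch of the crate-counting idea from haveFinishedLevel:
// count how many tiles carry ID 5 (a crate sitting on a goal tile).
// The helper name and the tiny level below are invented for illustration.
static int CountCratesOnGoals(int[][] level)
{
    int cratesOnGoalTiles = 0;
    for (int i = 0; i < level.Length; i++)
    {
        for (int j = 0; j < level[i].Length; j++)
        {
            if (level[i][j] == 5)
            {
                cratesOnGoalTiles++;
            }
        }
    }
    return cratesOnGoalTiles;
}

int[][] level = {
    new int[] {1, 1, 1, 1},
    new int[] {1, 5, 0, 1},
    new int[] {1, 0, 5, 1},
    new int[] {1, 1, 1}     // rows may differ in length: it's a jagged array
};

Console.WriteLine(CountCratesOnGoals(level));   // 2
```

If the returned count matches the expected total of crates, the level is complete; in the game, the amountOfCrates variable holds that expected total.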
We can use this variable to check whether the number of crates on goal tiles is the same as the amountOfCrates variable; if it is, we know we have finished the current level. The for loops iterate through the current level's rows and columns, and we know that 5 in the array is a crate on a goal tile. The method returns a Boolean based on whether we have finished the level or not. Now let's add the call to the method. The logical place would be inside the MovePlayer method, so go ahead and add a call to the method just after the pCol += tCol; statement. As the method returns true or false, we're going to use it in an if statement, as shown in the following code:

    // Check if we have finished the level
    if (haveFinishedLevel()) {
        Debug.Log("Finished");
    }

The Debug.Log method will do for now; let's check that it's working. The solution for level one is on YouTube at http://www.youtube.com/watch?v=K5SMwAJrQM8&hd=1. Click on the play icon at the top-middle of the Unity screen and copy the sequence of moves in the video (or solve it yourself). When all the crates are on the goal tiles, you'll see Finished in the Console panel.

Summary

The game now has some structure in the form of levels that you can complete, and it is easily expandable. If you wanted to take a break from the article, now would be a great time to create and add some levels to the game, and maybe add some extra sound effects. All this hard work is for nothing if you can't make any money though, isn't it?

Resources for Article:

Further resources on this subject: Introduction to Game Development Using Unity 3D [Article] Flash Game Development: Making of Astro-PANIC! [Article] Unity Game Development: Interactions (Part 1) [Article]

Packt
18 Oct 2013
5 min read

Mesh animation

Using animated models is not very different from using normal models. There are essentially two types of animation to consider (in addition to manually changing the position of a mesh's geometry in Three.js). If all you need is to smoothly transition properties between different values, for example, changing the rotation of a door in order to animate it opening, you can use the Tween.js library at https://github.com/sole/tween.js to do so instead of animating the mesh itself. Jerome Etienne has a nice tutorial on doing this at http://learningthreejs.com/blog/2011/08/17/tweenjs-for-smooth-animation/.

Morph animation

Morph animation stores animation data as a sequence of positions. For example, if you had a cube with a shrink animation, your model could hold the positions of the vertices of the cube at full size and at the shrunk size. Animation then consists of interpolating between those states during each rendering or keyframe. The data representing each state can hold either vertex targets or face normals. To use morph animation, the easiest approach is to use a THREE.MorphAnimMesh class, which is a subclass of the normal mesh.
In the following example, the highlighted lines should only be included if the model uses normals:

    var loader = new THREE.JSONLoader();
    loader.load('model.js', function(geometry) {
        var material = new THREE.MeshLambertMaterial({
            color: 0x000000,
            morphTargets: true,
            morphNormals: true,
        });
        if (geometry.morphColors && geometry.morphColors.length) {
            var colorMap = geometry.morphColors[0];
            for (var i = 0; i < colorMap.colors.length; i++) {
                geometry.faces[i].color = colorMap.colors[i];
            }
            material.vertexColors = THREE.FaceColors;
        }
        geometry.computeMorphNormals();
        var mesh = new THREE.MorphAnimMesh(geometry, material);
        mesh.duration = 5000; // in milliseconds
        scene.add(mesh);
        morphs.push(mesh);
    });

The first thing we do is make our material aware that the mesh will be animated, with the morphTargets property and optionally the morphNormals property. Next, we check whether colors will change during the animation, and set the mesh faces to their initial color if so (if you know your model doesn't have morphColors, you can leave out that block). Then the normals are computed (if we have them) and our MorphAnimMesh animation is created. We set the duration value of the full animation, and finally store the mesh in the global morphs array so that we can update it during our physics loop:

    for (var i = 0; i < morphs.length; i++) {
        morphs[i].updateAnimation(delta);
    }

Under the hood, the updateAnimation method just changes which set of positions in the animation the mesh should be interpolating between. By default, the animation will start immediately and loop indefinitely. To stop animating, just stop calling updateAnimation.

Skeletal animation

Skeletal animation moves a group of vertices in a mesh together by making them follow the movement of a bone. This is generally easier to design because artists only have to move a few bones instead of potentially thousands of vertices. It's also typically less memory-intensive for the same reason.
To use skeletal animation, use a THREE.SkinnedMesh class, which is a subclass of the normal mesh:

    var loader = new THREE.JSONLoader();
    loader.load('model.js', function(geometry, materials) {
        for (var i = 0; i < materials.length; i++) {
            materials[i].skinning = true;
        }
        var material = new THREE.MeshFaceMaterial(materials);
        THREE.AnimationHandler.add(geometry.animation);
        var mesh = new THREE.SkinnedMesh(geometry, material, false);
        scene.add(mesh);
        var animation = new THREE.Animation(mesh, geometry.animation.name);
        animation.interpolationType = THREE.AnimationHandler.LINEAR;
        // or CATMULLROM for cubic splines (ease-in-out)
        animation.play();
    });

The model we're using in this example already has materials, so unlike in the morph animation example, we have to change the existing materials instead of creating a new one. For skeletal animation we have to enable skinning, which refers to how the materials are wrapped around the mesh as it moves. We use the THREE.AnimationHandler utility to track where we are in the current animation and a THREE.SkinnedMesh to properly handle our model's bones. Then we use the mesh to create a new THREE.Animation and play it. The animation's interpolationType determines how the mesh transitions between states. If you want cubic spline easing (slow, then fast, then slow), use THREE.AnimationHandler.CATMULLROM instead of the LINEAR easing. We also need to update the animation in our physics loop:

    THREE.AnimationHandler.update(delta);

It is possible to use both skeletal and morph animations at the same time. In this case, the best approach is to treat the animation as skeletal and manually update the mesh's morphTargetInfluences array, as demonstrated in examples/webgl_animation_skinning_morph.html in the Three.js project.

Summary

This article explained how to manage animated 3D models in Three.js, covering both morph and skeletal animation.
Resources for Article: Further resources on this subject: Introduction to Game Development Using Unity 3D [Article] Basics of Exception Handling Mechanism in JavaScript Testing [Article] 2D game development with Monkey [Article]

Packt
11 Oct 2013
15 min read

Introducing the Building Blocks for Unity Scripts

Using the term method instead of function

You are constantly going to see the words function and method used everywhere as you learn Unity. In Unity, the two words mean the same thing. Since you are studying C#, and C# is an Object-Oriented Programming (OOP) language, I will use the word "method" throughout this article, just to be consistent with C# guidelines. It makes sense to learn the correct terminology for C#. Also, UnityScript and Boo are OOP languages, so the authors of the Scripting Reference probably should have used the word method instead of function in all documentation. From now on I'm going to use the words method or methods in this article, and when I refer to the functions shown in the Scripting Reference, I'm going to use the word method instead, just to be consistent.

Understanding what a variable does in a script

What is a variable? Technically, it's a tiny section of your computer's memory that will hold any information you put there. While a game runs, it keeps track of where the information is stored, the value kept there, and the type of the value. However, for this article, all you need to know is how a variable works in a script. It's very simple. What's usually in a mailbox, besides air? Usually there's nothing, but occasionally there is something in it. Sometimes there's money (a paycheck), bills, a picture from Aunt Mabel, a spider, and so on. The point is that what's in a mailbox can vary. Therefore, let's call each mailbox a variable instead.

Naming a variable

Using the picture of the country mailboxes, if I asked you to see what is in the mailbox, the first thing you'd ask is: which one? If I said the Smith mailbox, or the brown mailbox, or the round mailbox, you'd know exactly which mailbox to open to retrieve what is inside. Similarly, in scripts, you have to give each of your variables a unique name.
Then I can ask you what's in the variable named myNumber, or whatever cool name you might use.

A variable name is just a substitute for a value

As you write a script and make a variable, you are simply creating a placeholder or a substitute for the actual information you want to use. Look at the following simple math equation: 2 + 9 = 11. Simple enough. Now try the following equation: 11 + myNumber = ??? There is no answer to this yet; you can't add a number and a word. Going back to the mailbox analogy, write the number 9 on a piece of paper and put it in the mailbox named myNumber. Now you can solve the equation. What's the value in myNumber? The value is 9. So now the equation looks normal: 11 + 9 = 20. The myNumber variable is nothing more than a named placeholder to store some data (information). So anywhere you would like the number 9 to appear in your script, just write myNumber, and the number 9 will be substituted. Although this example might seem silly at first, variables can store all kinds of data that is much more complex than a simple number. This is just a simple example to show you how a variable works.

Time for action – creating a variable and seeing how it works

Let's see how this actually works in our script. Don't be concerned about the details of how to write this; just make sure your script is the same as the script shown in the next screenshot.

In the Unity Project panel, double-click on LearningScript.
In MonoDevelop, write the lines 6, 11, and 13 from the next screenshot.
Save the file.

To make this script work, it has to be attached to a GameObject. Currently, in our State Machine project we only have one GameObject, the Main Camera. This will do nicely since this script doesn't affect the Main Camera in any way; the script simply runs by virtue of being attached to a GameObject.

Drag LearningScript onto the Main Camera.
Select Main Camera so that it appears in the Inspector panel.
Verify whether LearningScript is attached.
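The screenshot mentioned in the steps above is not reproduced here. As a rough stand-in, the following plain C# captures the script's substance; the exact line positions are assumptions, and Console.WriteLine stands in for Unity's Debug.Log so the idea can run outside Unity:

```csharp
using System;

// Assumed reconstruction of LearningScript's substance: a myNumber
// variable (roughly the screenshot's line 6) and two logged equations
// (roughly lines 11 and 13).
int myNumber = 9;

Console.WriteLine(2 + 9);            // 11
Console.WriteLine(11 + myNumber);    // 20
```

In the actual LearningScript, these statements live inside a MonoBehaviour and log with Debug.Log, which is why the results appear in Unity's Console panel.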
Open the Unity Console panel to view the output of the script.
Click on Play.

The preceding steps are shown in the following screenshot:

What just happened?

In the following Console panel is the result of our equations. As you can see, the equation on line 13 worked by substituting the number 9 for the myNumber variable:

Time for action – changing the number 9 to a different number

Since myNumber is a variable, the value it stores can vary. If we change what is stored in it, the answer to the equation will change too. Follow the ensuing steps:

Stop the game and change 9 to 19.
Notice that when you restart the game, the answer will be 30.

What just happened?

You learned that a variable works by a simple process of substitution. There's nothing more to it than that. We didn't get into the details of the wording used to create myNumber, or the types of variables you can create, but that wasn't the intent. This was just to show you how a variable works: it just holds data so you can use that data elsewhere in your script.

Have a go hero – changing the value of myNumber

In the Inspector panel, try changing the value of myNumber to some other value, even a negative value. Notice the change in the answer in the Console.

Using a method in a script

Methods are where the action is and where the tasks are performed. Great, that's really nice to know, but what is a method?

What is a method?

When we write a script, we are making lines of code that the computer is going to execute, one line at a time. As we write our code, there will be things we want our game to execute more than once. For example, we can write a code that adds two numbers. Suppose our game needs to add those two numbers together a hundred different times during the game. So you say, "Wow, I have to write the same code a hundred times that adds two numbers together. There has to be a better way." Let a method take away your typing pain.
You just have to write the code to add two numbers once, and then give this chunk of code a name, such as AddTwoNumbers(). Now, every time our game needs to add two numbers, don't write the code over and over; just call the AddTwoNumbers() method.

Time for action – learning how a method works

We're going to edit LearningScript again. In the following screenshot, there are a few lines of code that look strange. We are not going to get into the details of what they mean in this article; that's covered in Getting into the Details of Methods. Right now, I am just showing you a method's basic structure and how it works:

In MonoDevelop, select LearningScript for editing.
Edit the file so that it looks exactly like the following screenshot.
Save the file.

What's in this script file?

In the previous screenshot, lines 6 and 7 will look familiar to you; they are variables, just as you learned in the previous section. There are two of them this time. These variables store the numbers that are going to be added. Line 16 may look very strange to you. Don't concern yourself right now with how this works; just know that it's a line of code that lets the script know when the Return/Enter key is pressed. Press the Return/Enter key when you want to add the two numbers together. Line 17 is where the AddTwoNumbers() method gets called into action. In fact, that's exactly how to describe it: this line of code calls the method. Lines 20, 21, 22, and 23 make up the AddTwoNumbers() method. Don't be concerned about the code details yet; I just want you to understand how calling a method works.

Method names are substitutes too

You learned that a variable is a substitute for the value it actually contains. Well, a method is no different. Take a look at line 20 from the previous screenshot:

    void AddTwoNumbers ()

AddTwoNumbers() is the name of the method. Like a variable, AddTwoNumbers() is nothing more than a named placeholder in the memory, but this time it stores some lines of code instead.
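The screenshot is again not reproduced, so here is a plain C# sketch of the structure just described; the variable names number1 and number2 match the later discussion, while Console.WriteLine stands in for Debug.Log and the key-press check is omitted:

```csharp
using System;

// The two variables that store the numbers to be added
// (lines 6 and 7 in the screenshot).
int number1 = 2;
int number2 = 9;

// The named chunk of code: write it once, then call it by name
// as often as the game needs it.
void AddTwoNumbers()
{
    Console.WriteLine(number1 + number2);
}

AddTwoNumbers();   // 11
```

Calling AddTwoNumbers() substitutes the method's code block at the call site, which is exactly the point this section is making.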
So anywhere we would like to use the code of this method in our script, we just write AddTwoNumbers(), and the code will be substituted. Line 21 has an opening curly-brace and line 23 has a closing curly-brace. Everything between the two curly-braces is the code that is executed when this method is called in our script. Look at line 17 from the previous screenshot:

    AddTwoNumbers();

The method name AddTwoNumbers() is called. This means that the code between the curly-braces is executed. It's like having all of the code of a method right there on line 17. Of course, this AddTwoNumbers() method only has one line of code to execute, but a method could have many lines of code. Line 22 is the action part of this method, the part between the curly-braces. This line of code adds the two variables together and displays the answer in the Unity Console. Then, follow the ensuing steps:

Go back to Unity and have the Console panel showing.
Now click on Play.

What just happened?

Oh no! Nothing happened! Actually, as you sit there looking at the blank Console panel, the script is running perfectly, just as we programmed it. Line 16 in the script is waiting for you to press the Return/Enter key. Press it now. And there you go! The following screenshot shows you the result of adding two variables together that contain the numbers 2 and 9:

Line 16 waited for you to press the Return/Enter key. When you did, line 17 executed, which called the AddTwoNumbers() method. This allowed the code block of the method, line 22, to add the values stored in the variables number1 and number2.

Have a go hero – changing the output of the method

While Unity is in the Play mode, select the Main Camera so its Components show in the Inspector. In the Inspector panel, locate Learning Script and its two variables. Change the values, currently 2 and 9, to different values. Make sure to click your mouse in the Game panel so it has focus, then press the Return/Enter key again.
You will see the result of the new addition in the Console. You just learned how a method works, allowing a specific block of code to be called to perform a task. We didn't get into any of the wording details of methods here; this was just to show you fundamentally how they work.

Introducing the class

The class plays a major role in Unity. In fact, what Unity does with a class is a little piece of magic when Unity creates Components. You just learned about variables and methods. These two items are the building blocks used to build Unity scripts. The term script is used everywhere in discussions and documents. Look it up in the dictionary and it can be generally described as written text. Sure enough, that's what we have. However, since we aren't just writing a screenplay or passing a note to someone, we need to learn the actual terms used in programming. Unity calls the code it creates a C# script. However, people like me have to teach you some basic programming skills and tell you that a script is really a class. In the previous section about methods, we created a class (script) called LearningScript. It contained a couple of variables and a method. The main concept or idea of a class is that it's a container of data, stored in variables, and methods that process that data in some fashion. Because I don't want to constantly write class (script), I will be using the word script most of the time. However, I will also be using class when getting more specific with C#. Just remember that a script is a class that is attached to a GameObject. The State Machine classes will not be attached to any GameObjects, so I won't be calling them scripts.

By using a little Unity magic, a script becomes a Component

While working in Unity, we wear the following two hats:

A Game-Creator hat
A Scripting (programmer) hat

When we wear our Game-Creator hat, we will be developing our Scene, selecting GameObjects, and viewing Components; just about anything except writing our scripts.
When we put our Scripting hat on, our terminology changes as follows:

We're writing code in scripts using MonoDevelop
We're working with variables and methods

The magic happens when you put your Game-Creator hat back on and attach your script to a GameObject. Wave the magic wand, ZAP, and the script file is now called a Component, and the public variables of the script are now the properties you can see and change in the Inspector panel.

A more technical look at the magic

A script is like a blueprint or a written description. In other words, it's just a single file in a folder on our hard drive; we can see it right there in the Projects panel. It can't do anything just sitting there. When we tell Unity to attach it to a GameObject, we haven't created another copy of the file; all we've done is tell Unity we want the behaviors described in our script to be a Component of the GameObject. When we click on the Play button, Unity loads the GameObject into the computer's memory. Since the script is attached to a GameObject, Unity also has to make a place in the computer's memory to store a Component as part of the GameObject. The Component has the capabilities specified in the script (blueprint) we created.

Even more Unity magic

There's some more magic you need to be aware of: scripts inherit from MonoBehaviour. For beginners to Unity, studying C# inheritance isn't a subject you need to learn in any great detail, but you do need to know that each Unity script uses inheritance. We see the code in every script that will be attached to a GameObject. In LearningScript, the code is on line 4:

    public class LearningScript : MonoBehaviour

The colon and the last word of that code mean that the LearningScript class is inheriting behaviors from the MonoBehaviour class. This simply means that the MonoBehaviour class is making a few of its variables and methods available to the LearningScript class.
It's no coincidence that the inherited variables and methods look just like some of the code we saw in the Unity Scripting Reference. The following are the two inherited behaviors in the LearningScript:

Line 9: void Start ()
Line 14: void Update ()

The magic is that you don't have to call these methods; Unity calls them automatically. So the code you place in these methods gets executed automatically.

Have a go hero – finding Start and Update in the Scripting Reference

Try a search on the Scripting Reference for Start and Update to learn when each method is called by Unity and how often. Also search for MonoBehaviour. This will show you that since our script inherits from MonoBehaviour, we are able to use the Start() and Update() methods.

Components communicating using the Dot Syntax

Our script has variables to hold data, and our script has methods to allow tasks to be performed. I now want to introduce the concept of communicating with other GameObjects and the Components they contain. Communication between one GameObject's Components and another GameObject's Components using Dot Syntax is a vital part of scripting; it's what makes interaction possible. We need to communicate with other Components or GameObjects to be able to use the variables and methods in other Components.

What's with the dots?

When you look at code written by others, you'll see words with periods separating them. What the heck is that? It looks complicated, doesn't it? The following is an example from the Unity documentation:

    transform.position.x

Don't concern yourself with what the preceding code means, as that comes later; I just want you to see the dots. That's called the Dot Syntax. The following is another example. It's the fictitious address of my house:

    USA.Vermont.Essex.22MyStreet

Looks funny, doesn't it? That's because I used the syntax (grammar) of C# instead of the post office's. However, I'll bet if you look closely, you can easily figure out how to find my house.
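The Dot Syntax is not Unity-specific; it works the same way in any C# code. The example below uses an ordinary System.DateTime, chosen purely for illustration, where each dot steps one level deeper, just as transform.position.x steps from a Component to a position to a single coordinate:

```csharp
using System;

// Each dot reaches one level further into the object, like an address:
// object -> property -> property.
DateTime releaseDate = new DateTime(2013, 10, 11);

Console.WriteLine(releaseDate.Date.Year);   // 2013
Console.WriteLine(releaseDate.Month);       // 10
```

Reading a dotted expression from left to right tells you exactly where a value lives, which is all the "address" analogy above is saying.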
Summary

This article introduced you to the basic concepts of variables, methods, and the Dot Syntax. These building blocks are used to create scripts and classes. Understanding how these building blocks work is critical so that you don't feel lost later. We discovered that a variable name is a substitute for the value it stores; a method name is a substitute for a block of code; and when a script or class is attached to a GameObject, it becomes a Component. The Dot Syntax is just like an address used to locate GameObjects and Components. With these concepts under your belt, we can proceed to learn the details of the sentence structure, the grammar, and the syntax used to work with variables, methods, and the Dot Syntax.

Resources for Article:

Further resources on this subject: Debugging Multithreaded Applications as Singlethreaded in C# [Article] Simplifying Parallelism Complexity in C# [Article] Unity Game Development: Welcome to the 3D world [Article]

Packt
04 Oct 2013
8 min read

Let There be Light!

Basic light sources

You use lights to give a scene brightness, ambience, and depth. Without light, everything looks flat and dull. Use additional light sources to even out lighting and to set up interior scenes. In Unity, lights are components of GameObjects. The different kinds of light sources are as follows:

Directional lights: These lights are commonly used to mimic the sun. Their position is irrelevant, as only their orientation matters. Every architectural scene should have at least one main Directional light. When you only need to light up an interior room, they are trickier to use, as they tend to brighten the whole scene, but they help to get some light through the windows and inside the project. We'll see a few use cases in the next few sections.

Point lights: These lights are easy to use, as they emit light in every direction. Try to minimize their Range so they don't spill light into other places. In most scenes, you'll need several of them to balance out dark spots and corners and to even out the overall lighting.

Spot lights: These lights only emit light into a cone and are good for simulating interior light fixtures. They cast a distinct, bright circular light spot, so use them to highlight something.

Area lights: These are the most advanced lights, as they allow a light source to be given an actual rectangular size. This results in smoother lights and shadows, but their effect is only visible when baking, and they require a pro-license. They are good for simulating light panels or the effect of light coming in through a window. In the free version, you can approximate them using multiple Spot or Directional lights.

Shadows

Most current games support some form of shadows. They can be pre-calculated or rendered in real-time. Pre-calculation implies that the effect of shadows and lighting is calculated in advance and rendered onto an additional material layer.
It only makes sense for objects that don't move in the scene. Real-time shadows are rendered using the GPU, but can be computationally expensive and should only be used for dynamic lighting. You might be familiar with real-time shadows from applications such as SketchUp and recent versions of ArchiCAD or Revit. Ideally, both techniques are combined: the overall lighting of the scene (for example, buildings, streets, interiors, and so on) is pre-calculated and baked into texture maps, while additional real-time shadows are used on the moving characters. Unity can blend both types of shadows to simulate dynamic lighting in large scenes. Some of these techniques, however, are only supported in the pro-version.

Real-time shadows

Imagine we want to create a sun or shadow study of a building. This is best appreciated in real time and by looking from the outside. We will use the same model as we did in the previous article, but load it in a separate scene. We want a light object acting as a sun, a spherical object acting as a visual clue to where the sun is positioned, and to link them together to control the rotations in an easy way. The steps to achieve this are as follows:

1. Add a Directional light and name it SunLight, then choose the Shadow Type. Hard shadows are more sharply defined and are the best choice in this example, whereas Soft shadows look smoother and are better suited for subtle mood lighting.
2. Add an empty GameObject by navigating to GameObject | Create Empty, position it in the center of the scene, and name it ORIGIN.
3. Create a Sphere GameObject by navigating to GameObject | Create Other | Sphere and name it VisualSun. Make it a child of ORIGIN by dragging the VisualSun name in the Hierarchy tab onto the ORIGIN name.
4. Give it a bright, yellow material, using a Self-Illumin/Diffuse shader. Deactivate Cast Shadows and Receive Shadows on the Mesh Renderer component.
After you have placed the VisualSun as a child of the origin object, reset the position of the Sphere to 0 for X, Y, and Z. It now sits in the same place as its parent. Even if you move the parent, its local position stays at X=0, Y=0, and Z=0, which makes it convenient for placement relative to its parent object. Alter the Z-position to define an offset from the origin, for example, 20 units. The negative Z will facilitate the SunLight orientation in the next step.

The SunLight can be dragged onto the VisualSun and its local position reset to zero as well. When all rotations are also zero, it emits light down the Z-axis and thus straight at the origin. If you want a nice glow effect, you can add a Halo by navigating to Components | Effects | Halo, then to SunLight, and setting a suitable Size.

We now have a hierarchic structure of the origin, the visible sphere, and the Directional light, accentuated by the halo. We can adjust this assembly by rotating the origin around. Rotating around the Y-axis defines the compass direction (azimuth) of the sun, whereas a rotation around the X-axis defines its elevation. With these two rotations, we can position the sun wherever we want.

Lightmapping

Real-time lighting is computationally very expensive. If you don't have the latest hardware, it might not even be supported. Or you might avoid it for a mobile app, where hardware resources are limited. It is possible to pre-calculate the lighting of a scene and bake it onto the geometry as textures. This process is called Lightmapping; for more information on it, visit http://docs.unity3d.com/Documentation/Manual/Lightmapping.html

While the actual calculations are rather complex, the process in Unity is made easy, thanks to the integrated Beast Lightmapping. There are a few things you need to set up properly. These are given as follows: First, ensure that any object that needs to be baked is set to Static. Each GameObject has a static toggle, right next to the Name property.
Activate this for all models and light objects that will not move in the scene. Secondly, ensure that all geometry has a second set of texture coordinates, called UV2 coordinates in Unity. Default GameObjects have those set up, but for imported models, they usually need to be added. Luckily for us, this is automated when Generate Lightmap UVs is activated in the model import settings, as discussed earlier in Quick Walk Around Your Design, in the section entitled Controlling the import settings.

If all lights and meshes are static and UV2 coordinates are calculated, you are ready to go. Open the Lightmapping dialog by navigating to Window | Lightmapping and dock it somewhere convenient. There are several settings, but we start with a basic setup that consists of the following steps:

1. Usually a Single Lightmap suffices. Dual Lightmaps can look better, but require the deferred rendering method that is only supported in Unity Pro.
2. Choose the Quality High mode. Quality Low gives jagged edges and is only used for quick testing.
3. Activate Ambient Occlusion as a quick additional rendering step that darkens corners and occluded areas, such as where objects touch. This adds a real sense of depth and is highly recommended. Set both sliders somewhere in the middle and leave the distance at 0.1, to control how far the system will look to detect occlusions.
4. Start with a fairly low Resolution, such as 5 or 10 texels per world unit. This defines how detailed the calculated Lightmap texture is when compared to the geometry. Look at the Scene view to get a checkered overlay while adjusting Lightmapping settings. For final results, increase this to 40 or 50 to give more detail to the shadows, at the cost of longer rendering times.

There are additional settings for which Unity Pro is required, such as Sky Light and Bounced Lighting. They both add to the realism of the lighting, so they are highly recommended for architectural visualization, if you have the pro-version.
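To get a feel for what the Resolution setting costs, it helps to count texels. The following is a plain-Java back-of-the-envelope calculation (not Unity code; the wall dimensions are made-up example values) showing how quickly the lightmap texel count grows with the texels-per-unit setting:

```java
public class LightmapTexels {
    // Texels along each side = world size (units) * resolution (texels per unit),
    // so the total texel count grows with the square of the resolution.
    static long texelCount(double widthUnits, double heightUnits, double texelsPerUnit) {
        return Math.round(widthUnits * texelsPerUnit) * Math.round(heightUnits * texelsPerUnit);
    }

    public static void main(String[] args) {
        // A 10 x 3 unit wall at the quick-test setting of 10 texels per unit...
        System.out.println(texelCount(10, 3, 10));
        // ...versus the final-quality setting of 50 texels per unit.
        System.out.println(texelCount(10, 3, 50));
    }
}
```

Going from 10 to 50 texels per unit multiplies the work for that wall by twenty-five, which is why high resolutions are reserved for the final bake.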
On the Object sub-tab, you can also tweak the shadow calculation settings for individual lights. By increasing the radius, you get a smoother shadow edge, at the cost of longer rendering times. If you increase the radius, you should also increase the number of samples, which helps reduce the noise that gets added with sampling. This is shown in the following screenshot:

Now you can go on and click Bake Scene. It can take quite some time for large models and fine resolutions. Check the blue time indicator on the right side of the status bar (you can continue working in Unity in the meantime). After the calculations are finished, the model is reloaded with the new texture, and baked shadows are visible in the Scene and Game views, as shown in the following screenshot:

Beware that to bake a Scene, it needs to be saved and given a name, as Unity places the calculated Lightmap textures in a subfolder with the same name as the Scene.

Summary

In this article, we learned about the use of different light sources and shadows. To avoid the heavy burden of real-time shadows, we discussed the use of the Lightmapping technique to bake lights and shadows onto the model, from within Unity.

Resources for Article:

Further resources on this subject:
Unity Game Development: Welcome to the 3D world [Article]
Introduction to Game Development Using Unity 3D [Article]
Unity 3-0 Enter the Third Dimension [Article]


Cross-platform Development - Build Once, Deploy Anywhere

Packt
01 Oct 2013
19 min read
(For more resources related to this topic, see here.)

The demo application – how the projects work together

Take a look at the following diagram to understand and familiarize yourself with the configuration pattern that all of your Libgdx applications will have in common:

What you see here is a compact view of four projects. The demo project to the very left contains the shared code that is referenced (that is, added to the build path) by all the other platform-specific projects. The main class of the demo application is MyDemo.java. From a more technical view, however, each platform also has a main class of its own, where the application gets started by the operating system; these will be referred to as Starter Classes from now on. Notice that Libgdx uses the term "Starter Class" to distinguish between these two types of main classes in order to avoid confusion. We will cover everything related to the topic of Starter Classes in a moment.

While taking a closer look at all these directories in the preceding screenshot, you may have spotted that there are two assets folders: one in the demo-desktop project and another one in demo-android. This brings us to the question, where should you put all the application's assets? The demo-android project plays a special role in this case. In the preceding screenshot, you see a subfolder called data, which contains an image named libgdx.png, and it also appears in the demo-desktop project in the same place. Just remember to always put all of your assets into the assets folder under the demo-android project. The reason behind this is that the Android build process requires direct access to the application's assets folder. During its build process, a Java source file, R.java, will automatically be generated under the gen folder. It contains special information for Android about the available assets. It would be the usual way to access assets through Java code if you were explicitly writing an Android application.
However, in Libgdx, you will want to stay platform-independent as much as possible and access any resource, such as assets, only through methods provided by Libgdx. You may wonder how the other platform-specific projects will be able to access the very same assets without having to maintain several copies per project. Needless to say, this would require you to keep all copies manually synchronized each time the assets change. Luckily, this problem has already been taken care of by the generator, as follows:

The demo-desktop project uses a linked resource, a feature of Eclipse, to add existing files or folders to other places in a workspace. You can check this out by right-clicking on the demo-desktop project, then navigating to Properties | Resource | Linked Resources and clicking on the Linked Resources tab.

The demo-html project requires another approach, since Google Web Toolkit (GWT) has a different build process compared to the other projects. There is a special file, GwtDefinition.gwt.xml, that allows you to set the asset path by setting the configuration property gdx.assetpath to the assets folder of the Android project. Notice that it is good practice to use relative paths, such as ../demo-android/assets, so that the reference does not get broken in case the workspace is moved from its original location.

Take this advice as a precaution to protect you, and maybe your fellow developers too, from wasting precious time on something that can easily be avoided by using the right setup right from the beginning.
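To see why the relative reference survives a workspace move, consider how such a path resolves against the project that uses it. The following is a plain-Java illustration using the standard java.nio.file API (the folder names match the demo workspace; the resolution shown here is an illustration, not something Libgdx or GWT performs through this API):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class AssetPathDemo {
    // Resolves a relative asset reference against the project that declares it.
    static Path resolveAssets(Path referencingProject, String relativeRef) {
        return referencingProject.resolve(relativeRef).normalize();
    }

    public static void main(String[] args) {
        // The HTML project sits next to the Android project inside the workspace,
        // so "../demo-android/assets" climbs out of demo-html and back down again.
        Path htmlProject = Paths.get("workspace", "demo-html");
        System.out.println(resolveAssets(htmlProject, "../demo-android/assets"));
    }
}
```

Because the reference is anchored at the sibling project rather than at an absolute location, renaming or moving the workspace root leaves it intact.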
The following is the code listing for GwtDefinition.gwt.xml from demo-html:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE module PUBLIC "-//Google Inc.//DTD Google Web Toolkit trunk//EN"
  "http://google-web-toolkit.googlecode.com/svn/trunk/distro-source/core/src/gwt-module.dtd">
<module>
  <inherits name='com.badlogic.gdx.backends.gdx_backends_gwt' />
  <inherits name='MyDemo' />
  <entry-point class='com.packtpub.libgdx.demo.client.GwtLauncher' />
  <set-configuration-property name="gdx.assetpath" value="../demo-android/assets" />
</module>

Backends

Libgdx makes use of several other libraries to interface with the specifics of each platform in order to provide cross-platform support for your applications. Generally, a backend is what enables Libgdx to access the corresponding platform functionalities when one of the abstracted (platform-independent) Libgdx methods is called. For example, drawing an image in the upper-left corner of the screen, playing a sound file at a volume of 80 percent, or reading and writing from/to a file. Libgdx currently provides the following three backends:

LWJGL (Lightweight Java Game Library)
Android
JavaScript/WebGL

As already mentioned in Introduction to Libgdx and Project Setup, there will also be an iOS backend in the near future.

LWJGL (Lightweight Java Game Library)

LWJGL (Lightweight Java Game Library) is an open source Java library originally started by Caspian Rychlik-Prince to ease game development in terms of accessing the hardware resources on desktop systems. In Libgdx, it is used for the desktop backend to support all the major desktop operating systems, such as Windows, Linux, and Mac OS X. For more details, check out the official LWJGL website at http://www.lwjgl.org/.

Android

Google frequently releases and updates their official Android SDK. This represents the foundation for Libgdx to support Android in the form of a backend.
There is an API Guide available that explains everything the Android SDK has to offer for Android developers. You can find it at http://developer.android.com/guide/components/index.html.

WebGL

WebGL support is one of the latest additions to the Libgdx framework. This backend uses GWT to translate Java code into JavaScript, and SoundManager2 (SM2), among others, to add combined support for HTML5, WebGL, and audio playback. Note that this backend requires a WebGL-capable web browser to run the application.

You might want to check out the official website of GWT: https://developers.google.com/web-toolkit/.
You might want to check out the official website of SM2: http://www.schillmania.com/projects/soundmanager2/.
You might want to check out the official website of WebGL: http://www.khronos.org/webgl/.
There is also a list of unresolved issues you might want to check out at https://github.com/libgdx/libgdx/blob/master/backends/gdx-backends-gwt/issues.txt.

Modules

Libgdx provides six core modules that allow you to access the various parts of the system your application will run on. What makes these modules so great for you as a developer is that they provide you with a single Application Programming Interface (API) to achieve the same effect on more than just one platform. This is extremely powerful because you can now focus on your own application and you do not have to bother with the specialties that each platform inevitably brings, including the nasty little bugs that may require tricky workarounds. This is all handled transparently in a straightforward API, which is categorized into logical modules and is globally available anywhere in your code, since every module is accessible as a static field in the Gdx class. Naturally, Libgdx does always allow you to create multiple code paths for per-platform decisions.
For example, you could conditionally increase the level of detail in a game when run on the desktop platform, since desktops usually have a lot more computing power than mobile devices.

The application module

The application module can be accessed through Gdx.app. It gives you access to the logging facility, a method to shut down gracefully, persist data, query the Android API version, query the platform type, and query the memory usage.

Logging

Libgdx employs its own logging facility. You can choose a log level to filter what should be printed to the platform's console. The default log level is LOG_INFO. You can use a settings file and/or change the log level dynamically at runtime using the following code line:

Gdx.app.setLogLevel(Application.LOG_DEBUG);

The available log levels are:

LOG_NONE: This prints no logs. The logging is completely disabled.
LOG_ERROR: This prints error logs only.
LOG_INFO: This prints error and info logs.
LOG_DEBUG: This prints error, info, and debug logs.

To write an info, debug, or error log to the console, use the following listings:

Gdx.app.log("MyDemoTag", "This is an info log.");
Gdx.app.debug("MyDemoTag", "This is a debug log.");
Gdx.app.error("MyDemoTag", "This is an error log.");

Shutting down gracefully

You can tell Libgdx to shut down the running application. The framework will then stop the execution in the correct order as soon as possible and completely de-allocate any memory that is still in use, freeing both the Java and the native heap. Use the following listing to initiate a graceful shutdown of your application:

Gdx.app.exit();

You should always do a graceful shutdown when you want to terminate your application. Otherwise, you will risk creating memory leaks, which is a really bad thing. On mobile devices, memory leaks will probably have the biggest negative impact due to their limited resources.

Persisting data

If you want to persist your data, you should use the Preferences class.
It is merely a dictionary, or hash map, data type that stores multiple key-value pairs in a file. Libgdx will create a new preferences file on the fly if it does not exist yet. You can have several preference files using unique names in order to split up data into categories. To get access to a preference file, you need to request a Preferences instance by its filename as follows:

Preferences prefs = Gdx.app.getPreferences("settings.prefs");

To write a (new) value, you have to choose a key under which the value should be stored. If this key already exists in the preferences file, it will be overwritten. Do not forget to call flush() afterwards to persist the data, or else all the changes will be lost:

prefs.putInteger("sound_volume", 100); // volume @ 100%
prefs.flush();

Persisting data takes a lot more time than just modifying values in memory (without flushing). Therefore, it is always better to modify as many values as possible before a final flush() is executed. To read back a certain value from a preferences file, you need to know the corresponding key. If this key does not exist, the default value will be returned. You can optionally pass your own default value as the second argument (for example, in the following listing, 50 is the default sound volume):

int soundVolume = prefs.getInteger("sound_volume", 50);

Querying the Android API Level

On Android, you can query the Android API Level, which allows you to handle things differently for certain versions of the Android OS. Use the following listing to find out the version:

Gdx.app.getVersion();

On platforms other than Android, the version returned is always 0.

Querying the platform type

You may want to write platform-specific code where it is necessary to know the current platform type.
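The batch-then-flush behaviour of the preferences described above is easy to demonstrate with a self-contained stand-in. This is not the Libgdx Preferences class (all names here are invented); in Libgdx the persisted side would be a file on disk, while here it is just a second map:

```java
import java.util.HashMap;
import java.util.Map;

// A toy stand-in for a flush-based preferences store: writes stay
// in memory until flush() copies them to the "persisted" store.
public class PrefsStub {
    private final Map<String, Integer> pending = new HashMap<>();
    private final Map<String, Integer> persisted = new HashMap<>();

    public void putInteger(String key, int value) {
        pending.put(key, value); // cheap: memory only
    }

    public int getInteger(String key, int defaultValue) {
        return pending.getOrDefault(key, defaultValue);
    }

    public void flush() {
        persisted.putAll(pending); // the expensive step, done once per batch
    }

    // What a fresh application start would read back.
    public int persistedValue(String key, int defaultValue) {
        return persisted.getOrDefault(key, defaultValue);
    }
}
```

Without the flush() call, a restart (reading persistedValue here) would still see the old default, which is exactly why you batch several put calls and flush once at the end.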
The following example shows how to query the platform type:

switch (Gdx.app.getType()) {
case Desktop:
    // Code for Desktop application
    break;
case Android:
    // Code for Android application
    break;
case WebGL:
    // Code for WebGL application
    break;
default:
    // Unhandled (new?) platform application
    break;
}

Querying memory usage

You can query the system to find out the current memory footprint of your application. This may help you find excessive memory allocations that could lead to application crashes. The following functions return the amount of memory (in bytes) that is in use by the corresponding heap:

long memUsageJavaHeap = Gdx.app.getJavaHeap();
long memUsageNativeHeap = Gdx.app.getNativeHeap();

Graphics module

The graphics module can be accessed either through Gdx.getGraphics() or by using the shortcut variable Gdx.graphics.

Querying delta time

Query Libgdx for the time span between the current and the last frame, in seconds, by calling Gdx.graphics.getDeltaTime().

Querying display size

Query the device's display size, returned in pixels, by calling Gdx.graphics.getWidth() and Gdx.graphics.getHeight().

Querying the FPS (frames per second) counter

Query a built-in frame counter provided by Libgdx to find the average number of frames per second by calling Gdx.graphics.getFramesPerSecond().

Audio module

The audio module can be accessed either through Gdx.getAudio() or by using the shortcut variable Gdx.audio.

Sound playback

To load sounds for playback, call Gdx.audio.newSound(). The supported file formats are WAV, MP3, and OGG. There is an upper limit of 1 MB for decoded audio data. Consider the sounds to be short effects, like bullets or explosions, so that the size limitation is not really an issue.

Music streaming

To stream music for playback, call Gdx.audio.newMusic(). The supported file formats are WAV, MP3, and OGG.

Input module

The input module can be accessed either through Gdx.getInput() or by using the shortcut variable Gdx.input.
In order to receive and handle input properly, you should always implement the InputProcessor interface and set it as the global handler for input in Libgdx by calling Gdx.input.setInputProcessor().

Reading the keyboard/touch/mouse input

Query the system for the last x or y coordinate, in screen coordinates with the origin at the top-left corner, by calling either Gdx.input.getX() or Gdx.input.getY().

To find out if the screen is touched, either by a finger or by the mouse, call Gdx.input.isTouched().
To find out if a mouse button is pressed, call Gdx.input.isButtonPressed().
To find out if a keyboard key is pressed, call Gdx.input.isKeyPressed().

Reading the accelerometer

Query the accelerometer for its value on the x axis by calling Gdx.input.getAccelerometerX(). Replace the X in the method's name with Y or Z to query the other two axes. Be aware that there will be no accelerometer present on a desktop, so Libgdx always returns 0.

Starting and canceling the vibrator

On Android, you can let the device vibrate by calling Gdx.input.vibrate(). A running vibration can be canceled by calling Gdx.input.cancelVibrate().

Catching Android soft keys

You might want to catch Android's soft keys to add extra handling code for them. If you want to catch the back button, call Gdx.input.setCatchBackKey(true). If you want to catch the menu button, call Gdx.input.setCatchMenuKey(true).

On a desktop, where you have a mouse pointer, you can tell Libgdx to catch it so that you get permanent mouse input without the mouse ever leaving the application window. To catch the mouse cursor, call Gdx.input.setCursorCatched(true).

The files module

The files module can be accessed either through Gdx.getFiles() or by using the shortcut variable Gdx.files.

Getting an internal file handle

You can get a file handle for an internal file by calling Gdx.files.internal(). An internal file is relative to the assets folder on the Android and WebGL platforms.
On a desktop, it is relative to the root folder of the application.

Getting an external file handle

You can get a file handle for an external file by calling Gdx.files.external(). An external file is relative to the SD card on the Android platform. On a desktop, it is relative to the user's home folder. Note that this is not available for WebGL applications.

The network module

The network module can be accessed either through Gdx.getNet() or by using the shortcut variable Gdx.net.

HTTP GET and HTTP POST

You can make HTTP GET and POST requests by calling either Gdx.net.httpGet() or Gdx.net.httpPost().

Client/server sockets

You can create client/server sockets by calling either Gdx.net.newClientSocket() or Gdx.net.newServerSocket().

Opening a URI in a web browser

To open a Uniform Resource Identifier (URI) in the default web browser, call Gdx.net.openURI(URI).

Libgdx's Application Life-Cycle and Interface

The Application Life-Cycle in Libgdx is a well-defined set of distinct system states. The list of these states is pretty short: create, resize, render, pause, resume, and dispose. Libgdx defines an ApplicationListener interface that contains six methods, one for each system state. The following code listing is a copy taken directly from Libgdx's sources. For the sake of readability, all comments have been stripped:

public interface ApplicationListener {
    public void create ();
    public void resize (int width, int height);
    public void render ();
    public void pause ();
    public void resume ();
    public void dispose ();
}

All you need to do is implement these methods in the main class of your shared game code project. Libgdx will then call each of these methods at the right time. The following diagram visualizes Libgdx's Application Life-Cycle:

Note that a full and a dotted line basically have the same meaning in the preceding figure. They both connect two consecutive states and have a direction of flow indicated by a little arrowhead on one end of the line.
A dotted line additionally denotes a system event. When an application starts, it will always begin with create(). This is where the initialization of the application should happen, such as loading assets into memory and creating the initial state of the game world. Subsequently, the next state that follows is resize(). This is the first opportunity for an application to adjust itself to the available display size (width and height), given in pixels.

Next, Libgdx will handle system events. If no event has occurred in the meanwhile, it is assumed that the application is (still) running. The next state would be render(). This is where a game application will mainly do two things:

Update the game world model
Draw the scene on the screen using the updated game world model

Afterwards, a decision is made based on the platform type detected by Libgdx. On a desktop or in a web browser, the application window can be resized virtually at any time. Libgdx compares the last and current sizes on every cycle, so that resize() is only called if the display size has changed. This makes sure that the running application is able to accommodate a changed display size.

Now the cycle starts over by handling (new) system events once again. Another system event that can occur during runtime is the exit event. When it occurs, Libgdx will first change to the pause() state, which is a very good place to save any data that would otherwise be lost after the application has terminated. Subsequently, Libgdx changes to the dispose() state, where an application should do its final clean-up to free all the resources that it is still using.

This is almost true for Android as well, except that pause() is an intermediate state that is not necessarily followed by a dispose() state at first. Be aware that this event may occur at any time during application runtime, for instance when the user presses the Home button or when there is an incoming phone call.
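The ordering described above can be captured in a tiny plain-Java harness. This is not Libgdx code; it is a hypothetical driver that calls a listener the way the framework would for a start-up, a few frames, and a clean exit (the resume() state is omitted for brevity):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LifeCycleDemo {
    // Mirrors the shape of Libgdx's ApplicationListener (local copy for the sketch).
    interface Listener {
        void create();
        void resize(int width, int height);
        void render();
        void pause();
        void dispose();
    }

    // Drives a listener through one full life-cycle and records the order
    // in which the states are entered.
    static List<String> run(int frames) {
        List<String> calls = new ArrayList<>();
        Listener l = new Listener() {
            public void create() { calls.add("create"); }
            public void resize(int w, int h) { calls.add("resize"); }
            public void render() { calls.add("render"); }
            public void pause() { calls.add("pause"); }
            public void dispose() { calls.add("dispose"); }
        };
        l.create();             // always the first state
        l.resize(480, 320);     // first chance to adapt to the display size
        for (int i = 0; i < frames; i++) {
            l.render();         // the main loop while the application runs
        }
        l.pause();              // exit event: save data here
        l.dispose();            // final clean-up
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(run(2));
    }
}
```

Writing the driver yourself makes the contract obvious: your listener never calls these methods, it only reacts when the framework does.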
In fact, as long as the Android operating system does not need the occupied memory of the paused application, the application's state will not be changed to dispose(). Moreover, a paused application might receive a resume system event, which would change its state to resume(), and it would eventually arrive at the system event handler again.

Starter Classes

A Starter Class defines the entry point (starting point) of a Libgdx application. Starter Classes are specifically written for a certain platform. Usually, these kinds of classes are very simple and mostly consist of no more than a few lines of code that set certain parameters applying to the corresponding platform. Think of them as a kind of boot-up sequence for each platform. Once booting has finished, the Libgdx framework hands over control from the Starter Class (for example, the demo-desktop project) to your shared application code (for example, the demo project) by calling the different methods from the ApplicationListener interface that the MyDemo class implements. Remember that the MyDemo class is where the shared application code begins.

We will now take a look at each of the Starter Classes that were generated during the project setup.

Running the demo application on a desktop

The Starter Class for the desktop application is called Main.java. The following listing is Main.java from demo-desktop:

package com.packtpub.libgdx.demo;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class Main {
    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.title = "demo";
        cfg.useGL20 = false;
        cfg.width = 480;
        cfg.height = 320;
        new LwjglApplication(new MyDemo(), cfg);
    }
}

In the preceding code listing, you see the Main class, a plain Java class that does not need to implement an interface or inherit from another class.
Instead, a new instance of the LwjglApplication class is created. This class provides a couple of overloaded constructors to choose from. Here, we pass a new instance of the MyDemo class as the first argument to the constructor. Optionally, an instance of the LwjglApplicationConfiguration class can be passed as the second argument. The configuration class allows you to set every parameter that is configurable for a Libgdx desktop application. In this case, the window title is set to demo, and the window's width and height are set to 480 by 320 pixels. This is all you need to write and configure a Starter Class for a desktop.

Let us try to run the application now. To do this, right-click on the demo-desktop project in the Project Explorer in Eclipse and then navigate to Run As | Java Application. Eclipse may ask you to select the Main class when you do this for the first time. Simply select the Main class and also check that the correct package name (com.packtpub.libgdx.demo) is displayed next to it. The desktop application should now be up and running on your computer. If you are working on Windows, you should see a window that looks as follows:

Summary

In this article, we learned about Libgdx and how all the projects of an application work together. We covered Libgdx's backends, modules, and Starter Classes. Additionally, we covered what the Application Life-Cycle and the corresponding interface are, and how they are meant to work.

Resources for Article:

Further resources on this subject:
Panda3D Game Development: Scene Effects and Shaders [Article]
Microsoft XNA 4.0 Game Development: Receiving Player Input [Article]
Introduction to Game Development Using Unity 3D [Article]


Managing and Displaying Information

Packt
17 Sep 2013
37 min read
(For more resources related to this topic, see here.)

In order to realize these goals, in this article, we'll be doing the following:

Displaying a countdown timer on the screen
Configuring fonts
Creating a game attribute to count lives
Using graphics to display information
Counting collected actors
Keeping track of the levels

Prior to continuing with the development of our game, let's take a little time out to review what we have achieved so far, and also to consider some of the features that our game will need before it can be published.

A review of our progress

The gameplay mechanics are now complete; we have a controllable character in the form of a monkey, and we have some platforms for the monkey to jump on and traverse the scene. We have also introduced some enemy actors, the croc and the snake, and we have Aztec statues falling from the sky to create obstacles for the monkey. Finally, we have the fruit, all of which must be collected by the monkey in order to successfully complete the level.

With regards to the scoring elements of the game, we're currently keeping track of a countdown timer (displayed in the debug console), which causes the scene to completely restart when the monkey runs out of time. When the monkey collides with an enemy actor, the scene is not reloaded, but the monkey is sent back to its starting point in the scene, and the timer continues to count down.

Planning ahead – what else does our game need?

With the gameplay mechanics working, we need to consider what our players will expect to happen when they have completed the task of collecting all the fruits. As mentioned in the introduction to this article, our plan is to create additional, more difficult levels for the player to complete! We also need to consider what will happen when the game is over; either when the player has succeeded in collecting all the fruits, or when the player has failed to collect the fruits in the allocated time.
The solution that we'll be implementing in this game is to display a message to advise the player of their success or failure, and to provide options for the player to either return to the main menu, or, if the task was completed successfully, continue to the next level within the game. We need to implement a structure so that the game can keep track of information, such as how many lives the player has left and which level of the game is currently being played. Let's put some information on the screen so that our players can keep track of the countdown timer.

Displaying a countdown timer on the screen

We created a new scene behavior called Score Management, which contains the Decrement Countdown event, shown as follows:

Currently, as we can see in the previous screenshot, this event decrements the Countdown attribute by a value of 1, every second. We also have a debug print instruction that displays the current value of Countdown in the debug console to help us, as game developers, keep track of the countdown. However, players of the game cannot see the debug console, so we need to provide an alternative means of displaying the amount of time that the player has to complete the level. Let's see how we can display that information on the screen for players of our game.

Time for action – displaying the countdown timer on the screen

Ensure that the Score Management scene behavior is visible: click on the Dashboard tab, select Scene Behaviors, and double-click on the Score Management icon in the main panel. Click + Add Event | Basics | When Drawing. Double-click on the title of the new Drawing event, and rename it to Display Countdown. Click on the Drawing section button in the instruction block palette. Drag a draw text anything at (x: 0 y: 0) block into the orange when drawing event block in the main panel. Enter the number 10 into the x: textbox and also enter 10 into the y: textbox.
Click on the drop-down arrow in the textbox after draw text and select Text | Basics. Then click on the text & text block. In the first textbox of the green … & … block, enter the text COUNTDOWN: (all uppercase, followed by a colon). In the second textbox, after the & symbol, click on the drop-down arrow and select Basics, then click on the anything as text block. Click on the drop-down arrow in the … as text block, and select Number | Attributes | Countdown. Ensure that the new Display Countdown event looks like the following screenshot:

Test the game.

What just happened?

When the game is played, we can now see, in the upper-left corner of the screen, a countdown timer that represents the value of the Countdown attribute as it is decremented each second. First, we created a new Drawing event, which we renamed to Display Countdown, and then we added a draw text anything at (x: 0 y: 0) block, which is used to display the specified text in the required location on the screen. We set both the x: and y: coordinates for displaying the drawn text to 10 pixels, that is, 10 pixels from the left-hand side of the screen, and 10 pixels from the top of the screen. The next task was to add some text blocks that enabled us to display an appropriate message along with the value of the Countdown attribute. The text & text block enables us to concatenate, or join together, two separate pieces of text. The Countdown attribute is a number, so we used the anything as text block to convert the value of the Countdown attribute to text to ensure that it will be displayed correctly when the game is being played. In practice, we could have just located the Countdown attribute block in the Attributes section of the palette, and then dragged it straight into the text & text block. However, it is best practice to correctly convert attributes to the appropriate type, as required by the instruction block.
In our case, the number attribute is being converted to text because it is being used in the text concatenation instruction block. If we needed to use a text value in a calculation, we would convert it to a number using an anything as number block.

Configuring fonts

We can see, when testing the game, that the font we have used is not very interesting; it's a basic font that doesn't really suit the style of the game! Stencyl allows us to specify our own fonts, so our next step is to import a font to use in our game.

Time for action – specifying a font for use in our game

Before proceeding with the following steps, we need to locate the Afritubu.TTF font file. Place the file in a location where it can easily be located, and continue with the following steps:

In the Dashboard tab, click on Fonts. In the main panel, click on the box containing the words This game contains no Fonts. Click here to create one. In the Name textbox of the Create New… dialog box, type HUD Font and click on the Create button. In the left-hand panel, click on the Choose… button next to the Font selector. Locate the file Afritubu.TTF and double-click on it to open it. Note that the main panel shows a sample of the new font. In the left-hand panel, change the Size option to 25. Important: save the game! Return to the Display Countdown event in the Score Management scene behavior. In the instruction block palette, click on the Drawing section button and then the Styles category button. Drag the set current font to Font block above the draw text block in the when drawing event. Click on the Font option in the set current font to Font block, and select Choose Font from the pop-up menu. Double-click on the HUD Font icon in the Choose a Font… dialog box. Test the game. Observe the countdown timer at the upper-left corner of the game.

What just happened?
We can see that the countdown timer is now being displayed using the new font that we have imported into Stencyl, as shown in the following screenshot:

The first step was to create a new blank font in the Stencyl dashboard and to give it a name (we chose HUD Font), and then we imported the font file from a folder on our hard disk. Once we had imported the font file, we could see a sample of the font in the main panel. We then increased the size of the font using the Size option in the left-hand panel. That's all we needed to do in order to import and configure a new font in Stencyl! However, before progressing, we saved the game to ensure that the newly imported font will be available for the next steps. With our new font ready to use, we needed to apply it to our countdown text in the Display Countdown behavior. So, we opened up the behavior and inserted the set current font to Font style block. The final step was to specify which font we wanted to use, by clicking on the Font option in the font style block, and choosing the new font, HUD Font, which we configured in the earlier steps.

Heads-Up Display (HUD) is often used in games to describe text or graphics that is overlaid on the main game graphics to provide the player with useful information.

Using font files in Stencyl

Stencyl can use any TrueType font that we have available on our hard disk (files with the extension TTF); many thousands of fonts are available to download from the Internet free of charge, so it's usually possible to find a font that suits the style of any game that we might be developing. Fonts are often subject to copyright, so be careful to read any licensing agreements that are provided with the font file, and only download font files from reliable sources.

Have a go hero

When we imported the font into Stencyl, we specified a new font size of 25, but it is a straightforward process to modify further aspects of the font style, such as the color and other effects.
Click on the HUD Font tab to view the font settings (or reopen the Font Editor from the Dashboard tab) and experiment with the font size, color, and other effects to find an appropriate style for the game. Take this opportunity to learn more about the different effects that are available, referring to Stencyl's online help if required. Remember to test the game to ensure that any changes are effective and the text is not difficult to read!

Creating a game attribute to count lives

Currently, our game never ends. As soon as the countdown reaches zero, the scene is restarted, or when the monkey collides with an enemy actor, the monkey is repositioned at the starting point in the scene. There is no way for our players to lose the game! In some genres of game, the player will never be completely eliminated; effectively, the same game is played forever. But in a platform game such as ours, the player typically will have a limited number of chances, or lives, to complete the required task. In order to resolve our problem of having a never-ending game, we need to keep track of the number of lives available to our player. So let's start to implement that feature right now by creating a game attribute called Lives!

Time for action – creating a Lives game attribute

Click on the Settings icon on the Stencyl toolbar at the top of the screen. In the left-hand panel of the Game Settings dialog box, click on the Attributes option. Click on the green Create New button. In the Name textbox, type Lives. In the Category textbox, change the word Default to Scoring. In the Type section, ensure that the currently selected option is Number. Change Initial Value to 3. Click on OK to confirm the configuration. We'll leave the Game Settings dialog box open, so that we can take a closer look.

What just happened?

We have created a new game attribute called Lives.
If we look at the rightmost panel of the Game Settings dialog box that we left open on the screen, we can see that we have created a new heading entitled SCORING, and underneath the heading, there is a label icon entitled Lives, as shown in the following screenshot:

The Lives item is a new game attribute that can store a number. The category name of SCORING that we created is not used within the game. We can't access it with the instruction blocks; it is there purely as a memory aid for the game developer when working with game attributes. When many game attributes are used in a game, it can become difficult to remember exactly what they are for, so being able to place them under specific headings can be helpful.

Using game attributes

The attributes we have used so far, such as the Countdown attribute that we created in the Score Management behavior, lose their values as soon as a different scene is loaded, or when the current scene is reloaded. Some game developers may refer to these attributes as local attributes, because they belong to the behavior in which they were created. Losing its value is fine when the attribute is just being used within the current scene; for example, we don't need to keep track of the countdown timer outside of the Jungle scene, because the countdown is reset each time the scene is loaded. However, sometimes we need to keep track of values across several scenes within a game, and this is when game attributes become very useful. Game attributes work in a very similar manner to local attributes. They store values that can be accessed and modified, but the main difference is that game attributes keep their values even when a different scene is loaded. Currently, the issue of losing attribute values when a scene is reloaded is not important to us, because our game only has one scene.
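The distinction between behavior-local attributes and persistent game attributes can be sketched in ordinary code. The following Python sketch is only a model for illustration, not Stencyl's actual implementation; the class and variable names are invented. It shows a scene reload resetting a local attribute while a game attribute survives:

```python
# Hypothetical model of Stencyl's two attribute scopes (names invented).
game_attributes = {"Lives": 3, "Level": 1}  # persist across scene loads

class SceneBehavior:
    """A behavior whose local attributes reset whenever its scene loads."""
    def __init__(self):
        self.countdown = 60  # local attribute: re-initialized on every load

def load_scene():
    # Loading (or reloading) a scene rebuilds its behaviors from scratch,
    # so local attributes start fresh; game_attributes is left untouched.
    return SceneBehavior()

score_management = load_scene()
score_management.countdown -= 10   # the countdown ticks during play
game_attributes["Lives"] -= 1      # the monkey hits an enemy

score_management = load_scene()    # the scene restarts
print(score_management.countdown)  # 60 (the local value was reset)
print(game_attributes["Lives"])    # 2  (the game attribute survived)
```

This mirrors why the Countdown attribute can live in the Score Management behavior, while Lives must be a game attribute.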
However, when our players succeed in collecting all the fruits, we want the next level to be started without resetting the number of lives. So we need the number of lives to be remembered when the next scene is loaded. We've created a game attribute called Lives, so let's put it to good use.

Time for action – decrementing the number of lives

If the Game Settings dialog box is still open, click on OK to close it. Open the Manage Player Collisions actor behavior. Click on the Collides with Enemies event in the left-hand panel. Click on the Attributes section button in the palette. Click on the Game Attributes category button. Locate the purple set Lives to 0 block under the Number Setters subcategory and drag it into the orange when event block so that it appears above the red trigger event RestartLevel in behavior Health for Self block. Click on the drop-down arrow in the set Lives to … block and select 0 - 0 in the Math section. In the left textbox of the … - … block, click on the drop-down arrow and select Game Attributes | Lives. In the right-hand textbox, enter the digit 1. Locate the print anything block in the Flow section of the palette, under the Debug category, and drag it below the set Lives to Lives - 1 block. In the print … block, click on the drop-down arrow and select Text | Basics | text & text. In the first empty textbox, type Lives remaining: (including the colon). Click on the drop-down arrow in the second textbox and select Basics | anything as text. In the … as text block, click on the drop-down arrow and select Number | Game Attributes | Lives. Ensure that the Collides with Enemies event looks like the following screenshot:

Test the game; make the monkey collide with an enemy actor, such as the croc, and watch the debug console!

What just happened?
We have modified the Collides with Enemies event in the Manage Player Collisions behavior so that it decrements the number of lives by one when the monkey collides with an enemy actor, and the new value of Lives is shown in the debug console. This was achieved by using the purple game attribute setter and getter blocks to set the value of the Lives game attribute to its current value minus one. For example, if the value of Lives is 3 when the event occurs, Lives will be set to 3 minus 1, which is 2! The print … block was then used to display a message in the console, advising how many lives the player has remaining. We used the text & text block to join the text Lives remaining: together with the current value of the Lives game attribute. The anything as text block converts the numeric value of Lives to text to ensure that it will display correctly. Currently, the value of the Lives attribute will continue to decrease below 0, and the monkey will always be repositioned at its starting point. So our next task is to make something happen when the value of the Lives game attribute reaches 0!

No more click-by-click steps!

From this point onwards, click-by-click steps to modify behaviors and to locate and place each instruction block will not be specified! Instead, an overview of the steps will be provided, and a screenshot of the completed event will be shown towards the end of each Time for action section. The search facility, at the top of the instruction block palette, can be used to locate the required instruction block; simply click on the search box and type any part of the text that appears in the required block, then press the Enter key on the keyboard to display all the matching blocks in the block palette.

Time for action – detecting when Lives reaches zero

Create a new scene called Game Over: select the Dashboard tab, select Scenes, and then select Click here to create a new Scene. Leave all the settings at their default configuration and click on OK.
Close the tab for the newly created scene. Open the Manage Player Collisions behavior and click on the Collides with Enemies event to display the event's instruction blocks. Insert a new if block under the existing print block. Modify the if block to if Lives > 0. Move the existing block, trigger event RestartLevel in behavior Health for Self, into the if Lives > 0 block. Insert an otherwise block below the if Lives > 0 block. Insert a switch to Scene and Crossfade for 0 secs block inside the otherwise block. Click on the Scene option in the new block, then click on Choose Scene and select the Game Over scene. Change the secs textbox to 0 (zero). Ensure that our modified Collides with Enemies event now looks like the following screenshot:

Test the game; make the monkey run into an enemy actor, such as the croc, three times.

What just happened?

We have modified the Collides with Enemies event so that the value of the Lives game attribute is tested after it has been decremented, and the game will switch to the Game Over scene if the number of lives remaining is zero or less. If the value of Lives is greater than zero, the RestartLevel event in the monkey's Health behavior is triggered. However, if the value of Lives is not greater than zero, the instruction in the otherwise block will be executed, and this switches to the (currently blank) Game Over scene that we have created. If we review all the instructions in the completed Collides with Enemies event, and write them in English, the statement will be: When the monkey collides with an enemy, reduce the value of Lives by one and print the new value to the debug console. Then, if the value of Lives is more than zero, trigger the RestartLevel event in the monkey's Health behavior, otherwise switch to the Game Over scene.
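That English statement maps directly onto a few lines of ordinary code. Here is a minimal Python sketch of the same logic; the function names are invented stand-ins for the Stencyl blocks, not real Stencyl API calls:

```python
def on_enemy_collision(game_attributes, restart_level, switch_to_game_over):
    """Hypothetical handler mirroring the Collides with Enemies event."""
    game_attributes["Lives"] -= 1                       # set Lives to Lives - 1
    print("Lives remaining:", game_attributes["Lives"]) # debug print block
    if game_attributes["Lives"] > 0:
        restart_level()        # trigger RestartLevel in the Health behavior
    else:
        switch_to_game_over()  # otherwise: switch to the Game Over scene

# Starting with 3 lives, the first two collisions restart the level and
# the third sends the player to the Game Over scene.
attrs = {"Lives": 3}
outcomes = []
for _ in range(3):
    on_enemy_collision(attrs,
                       lambda: outcomes.append("restart"),
                       lambda: outcomes.append("game_over"))
print(outcomes)  # ['restart', 'restart', 'game_over']
```

Note how the decrement happens before the test, exactly as in the block arrangement above.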
Before continuing, we should note that the Game Over scene has been created as a temporary measure to ensure that, as we are in the process of developing the game, it's immediately clear to us (the developer) that the monkey has run out of lives.

Have a go hero

Change the Countdown attribute value to 30: open the Jungle scene, click on the Behaviors button, then select the Score Management behavior in the left panel to see the attributes for this behavior. The following tasks in this Have a go hero session are optional; failure to attempt them will not affect future tutorials, but it is a great opportunity to put some of our newly learned skills to practice! In the section Time for action – displaying the countdown timer on the screen, we learned how to display the value of the countdown timer on the screen during gameplay. Using the skills that we have acquired in this article, try to complete the following tasks:

Update the Score Management behavior to display the number of lives at the upper-right corner of the screen, by adding some new instruction blocks to the Display Counter event.
Rename the Display Counter event to Display HUD.
Remove the print Countdown block from the Decrement Countdown event, also found in the Score Management behavior. Right-click on the instruction block and review the options available in the pop-up menu!
Remove the print Lives remaining: & Lives as text instruction block from the Collides with Enemies event in the Manage Player Collisions behavior.

Removing debug instructions

Why did we remove the debug print … blocks in the previous Have a go hero session? Originally, we added the debug blocks to assist us in monitoring the values of the Countdown attribute and the Lives game attribute during the development process. Now that we have updated the game to display the required information on the screen, the debug blocks are redundant!
While it would not necessarily cause a problem to leave the debug blocks where they are, it is best practice to remove any instruction blocks that are no longer in use. Also, during development, excessive use of debug print blocks can have an impact on the performance of the game, so it's a good idea to remove them as soon as is practical.

Using graphics to display information

We are currently displaying two pieces of on-screen information for players of our game: the countdown timer and the number of lives available. However, providing too much textual information can be distracting for players, so we need to find an alternative method of displaying some of the information that the player needs during gameplay. Rather than using text to advise the player how much time they have remaining to complete the level, we're going to display a timer bar on the screen.

Time for action – displaying a timer bar

Open the Score Management scene behavior and click on the Display HUD event. In the when drawing event, right-click on the blue block that draws the text for the countdown timer and select Activate / Deactivate from the pop-up menu. Note that the block becomes faded. Locate the draw rect at (x: 0 y: 0) with (w: 0 h: 0) instruction block in the palette, and insert it at the bottom of the when drawing event. Click on the draw option in the newly inserted block and change it to fill. Set both the x: and y: textboxes to 10. Set the width (w:) to Countdown x 10. Set the height (h:) to 10. Ensure that the draw text … block and the fill rect at … block in the Display HUD event appear as shown in the following screenshot (the draw text LIVES: … block may look different if the earlier Have a go hero section was attempted):

Test the game!

What just happened?

We have created a timer bar that displays the amount of time remaining for the player to collect the fruit, and the timer bar reduces in size with the countdown!
First, in the Display HUD event, we deactivated, or disabled, the block that was drawing the textual countdown message, because we no longer want the text message to be displayed on the screen. The next step was to insert a draw rect … block that was configured to create a filled rectangle at the upper-left corner of the screen, with a width equal to the value of the Countdown timer multiplied by 10. If we had not multiplied the value of the countdown by 10, the timer bar would be very small and difficult to see (try it)! We'll be making some improvements to the timer bar later in this article.

Activating and deactivating instruction blocks

When we deactivate an instruction block, as we did in the Display HUD event, it no longer functions; it's completely ignored! However, the block remains in place, but is shown slightly faded, and if required, it can easily be re-enabled by right-clicking on it and selecting the Activate / Deactivate option. Being able to activate and deactivate instruction blocks without deleting them is a useful feature; it enables us to try out new instructions, such as our timer bar, without having to completely remove blocks that we might want to use in the future. If, for example, we decided that we didn't want to use the timer bar, we could deactivate it and reactivate the draw text … block! Deactivated instruction blocks have no impact on the performance of a game; they are completely ignored during the game compilation process.

Have a go hero

The tasks in this Have a go hero session are optional; failure to attempt them will not affect future tutorials.
Referring to Stencyl's online help if required, at www.stencyl.com/help/, try to make the following improvements to the timer bar:

Specify a more visually appealing color for the rectangle
Make it thicker (height) so that it is easier to see when playing the game
Consider drawing a black border (known as a stroke) around the rectangle
Try to make the timer bar reduce in size smoothly, rather than in big steps

Ask an independent tester for feedback about the changes and then modify the bar based on the feedback. To view suggested modifications together with comments, review the Display HUD event in the downloadable files that accompany this article.

Counting collected actors

With the number of lives being monitored and displayed for the player, and the timer bar in place, we now need to create some instructions that will enable our game to keep track of how many of the fruit actors have been collected, and to carry out the appropriate action when there is no fruit left to collect.

Time for action – counting the fruit

Open the Score Management scene behavior and create a new number attribute (not a game attribute) with the configuration shown in the following screenshot (in the block palette, click on Attributes, and then click on Create an Attribute…). Add a new when created event and rename it to Initialize Fruit Required. Add the required instruction blocks to the new when created event, so the Initialize Fruit Required event appears as shown in the following screenshot, carefully checking the numbers and text in each of the blocks' textboxes:

Note that the red of group block in the set actor value … block cannot be found in the palette; it has been dragged into place from the orange for each … of group Collectibles block. Test the game and look for the Fruit required message in the debug console.

What just happened?

Before we can create the instructions to determine if all the fruit have been collected, we need to know how many fruit actors there are to collect.
So we have created a new event that stores that information for us in a number attribute called Fruit Required, and displays it in the debug console. We have created a for each … of group Collectibles block. This very useful looping block will repeat the instructions placed inside it for each member of the specified group that can be found in the current scene. We have specified the Collectibles group, and the instruction that we have placed inside the new loop is increment Fruit Required by 1. When the loop has completed, the value of the Fruit Required attribute is displayed in the debug console using a print … block. When constructing new events, it's good practice to insert print … blocks so we can be confident that the instructions achieve the results that we are expecting. When we are happy that the results are as expected, perhaps after carrying out further testing, we can remove the debug printing from our event. We have also introduced a new type of block that can set a value for an actor; in this case, we have set actor value Collected for … of group to false. This block ensures that each of the fruit actors has a value of Collected that is set to false each time the scene is loaded; remember that this instruction is inside the for each … loop, so it is being carried out for every Collectible group member in the current scene. Where did the actor's Collected value come from? Well, we just invented it! The set actor value … block allows us to create an arbitrary value for an actor at any time. We can also retrieve that value at any time with a corresponding get actor value … block, and we'll be doing just that when we check to see if a fruit actor has been collected in the next section, Time for action – detecting when all fruits have been collected.
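The initialization logic just described, counting every collectible in the scene and flagging it as uncollected, can be sketched in ordinary code. In this hedged Python model, the scene is simply a list of actor dictionaries; the names are invented for illustration and are not Stencyl's API:

```python
def initialize_fruit_required(scene_actors):
    """Model of the Initialize Fruit Required event: count the collectibles
    in the scene and reset each one's Collected flag."""
    fruit_required = 0
    for actor in scene_actors:                 # for each … of group Collectibles
        if actor["group"] == "Collectibles":
            fruit_required += 1                # increment Fruit Required by 1
            actor["Collected"] = False         # set actor value Collected to false
    print("Fruit required:", fruit_required)   # debug print, after the loop
    return fruit_required

scene = [{"group": "Collectibles"}, {"group": "Enemies"},
         {"group": "Collectibles"}, {"group": "Collectibles"}]
count = initialize_fruit_required(scene)  # prints: Fruit required: 3
```

Because the count is derived from whatever actors are present, adding more fruit to a scene needs no manual reconfiguration, which is the flexibility the next paragraph points out.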
Translating our instructions into English results in the following statement: For each actor in the Collectibles group that can be found in this scene, add the value 1 to the Fruit Required attribute and also set the actor's Collected value to false. Finally, print the result in the debug console. Note that the print … block has been placed after the for each … loop, so the message will not be printed for each fruit actor; it will appear just once, after the loop has completed! If we wish to prove to ourselves that the loop is counting correctly, we can edit the Jungle scene and add as many fruit actors as we wish. When we test the game, we can see that the number of fruit actors in the scene is correctly displayed in the debug console. We have designed a flexible set of instructions that can be used in any scene with any number of fruit actors, and which does not require us (as the game designer) to manually configure the number of fruit actors to be collected in that scene! Once again, we have made life easier for our future selves! Now that we have the attribute containing the number of fruits to be collected at the start of the scene, we can create the instructions that will respond when the player has successfully collected them all.

Time for action – detecting when all fruits have been collected

Create a new scene called Level Completed, with a Background Color of yellow. Leave all the other settings at their default configuration. Close the tab for the newly created scene. Return to the Score Management scene behavior, and create a new custom event by clicking on + Add Event | Advanced | Custom Event. In the left-hand panel, rename the custom event to Fruit Collected. Add the required instruction blocks to the new Fruit Collected event, so it appears as shown in the following screenshot, again carefully checking the parameters in each of the textboxes:

Note that there is no space in the when FruitCollected happens custom event name.
Save the game and open the Manage Player Collisions actor behavior. Modify the Collides with Collectibles event so it appears as shown in the following screenshot. The changes are listed in the subsequent steps:

A new if get actor value Collected for … of group = false block has been inserted.
The existing blocks have been moved into the new if … block.
A set actor value Collected for … of group to true block has been inserted above the grow … block.
A trigger event FruitCollected in behavior Score Management for this scene block has been inserted above the do after 0.5 seconds block.
An if … of group is alive block has been inserted into the do after 0.5 seconds block, and the existing kill … of group block has been moved inside the newly added if … block.

Test the game; collect several pieces of fruit, but not all of them! Examine the contents of the debug console; it may be necessary to scroll the console horizontally to read the messages. Continue to test the game, but this time collect all the fruit actors.

What just happened?

We have created a new Fruit Collected event in the Score Management scene behavior, which switches to a new scene when all the fruit actors have been collected, and we have also modified the Collides with Collectibles event in the Manage Player Collisions actor behavior in order to count how many pieces of fruit remain to be collected. When testing the game, we can see that, each time a piece of fruit is collected, the new value of the Fruit Required attribute is displayed in the debug console, and when all the fruit actors have been collected, the yellow Level Completed scene is displayed. The first step was to create a blank Level Completed scene, which will be switched to when all the fruit actors have been collected. As with the Game Over scene that we created earlier in this article, it is a temporary scene that enables us to easily determine, for testing purposes, when the task of collecting the fruit has been completed successfully.
We then created a new custom event called Fruit Collected in the Score Management scene behavior. This custom event waits for the FruitCollected event trigger to occur, and when that trigger is received, the Fruit Required attribute is decremented by 1 and its new value is displayed in the debug console. A test is then carried out to determine if the value of the Fruit Required attribute is equal to zero, and if it is, the bright yellow, temporary Level Completed scene will be displayed! Our final task was to modify the Collides with Collectibles event in the Manage Player Collisions actor behavior. We inserted an if … block to test the collectible actor's Collected value; remember that we initialized this value to false in the previous section, Time for action – counting the fruit. If the Collected value for the fruit actor is still false, then it hasn't been collected yet, and the instructions contained within the if … block will be carried out. Firstly, the fruit actor's Collected value is set to true, which ensures that this event cannot occur again for the same piece of fruit. Next, the FruitCollected custom event in the Score Management scene behavior is triggered. Following that, the do after 0.5 seconds block is executed, and the fruit actor will be killed. We have also added an if … of group is alive check that is carried out before the collectible actor is killed. Because we are killing the actor after a delay of 0.5 seconds, it's good practice to ensure that the actor still exists before we try to kill it! In some games, it may be possible for the actor to be killed by other means during that very short 0.5 second delay, and if we try to kill an actor that does not exist, a runtime error may occur, that is, an error that happens while the game is being played.
This may result in a technical error message being displayed to the player, and the game cannot continue; this is extremely frustrating for players, and they are unlikely to try to play our game again! Preventing multiple collisions from being detected A very common problem experienced by game designers, who are new to Stencyl, occurs when a collision between two actors is repeatedly detected. When two actors collide, all collision events that have been created with the purpose of responding to that collision will be triggered repeatedly until the collision stops occurring, that is, when the two actors are no longer touching. If, for example, we need to update the value of an attribute when a collision occurs, the attribute might be updated dozens or even hundreds of times in only a few seconds! In our game, we want collisions between the monkey actor and any single fruit actor to cause only a single update to the Fruit Required attribute. This is why we created the actor value Collected for each fruit actor, and this value is initialized to be false, not collected, by the Initialize Fruit Required event in the Score Management scene behavior. When the Collides with Collectibles event in Manage Player Collisions actor behavior is triggered, a test is carried out to determine if the fruit actor has already been collected, and if it has been collected, no further instructions are carried out. If we did not have this test, then the FruitCollected custom event would be triggered numerous times, and therefore the Fruit Required attribute would be decremented numerous times, causing the value of the Fruit Required attribute to reach zero almost instantly; all because the monkey collided with a single fruit actor! Using a Boolean value of True or False to carry out a test in this manner is often referred to by developers as using a flag or Boolean flag. 
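The flag pattern described above is language-independent. A minimal JavaScript sketch (the names are invented for illustration, not Stencyl's API) shows why the guard matters when a collision event fires on every frame that two actors overlap:

```javascript
// Boolean-flag guard: the collision handler may run dozens of times
// while two actors overlap, but the collection work happens only once.
function makeFruit() {
  return { collected: false };
}

function onCollidesWithCollectible(fruit, collectOnce) {
  if (fruit.collected) {
    return;               // already handled; ignore repeated collisions
  }
  fruit.collected = true;  // flip the flag before doing the work
  collectOnce();
}
```

Calling onCollidesWithCollectible a hundred times on the same fruit actor runs collectOnce exactly once.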
Note that, rather than utilizing an actor value to record whether or not a fruit actor has been collected, we could have created a new attribute and initialized and updated the attribute in the same way that we initialized and updated the actor value. However, this would have required more effort to configure, and there is no perceptible impact on performance when using actor values in this manner. Some Stencyl users never use actor values (preferring to always use attributes instead), however, this is purely a matter of preference and it is at the discretion of the game designer which method to use. In order to demonstrate what happens when the actor value Collected is not used to determine whether or not a fruit actor has been collected, we can simply deactivate the set actor value Collected for … of group to true instruction block in the Collides with Collectibles event. After deactivating the block, run the game with the debug console open, and allow the monkey to collide with a single fruit actor. The Fruit Required attribute will instantly be decremented multiple times, causing the level to be completed after colliding with only one fruit actor! Remember to reactivate the set actor value … block before continuing! Keeping track of the levels As discussed in the introduction to this article, we're going to be adding an additional level to our game, so we'll need a method for keeping track of a player's progress through the game's levels. Time for action – adding a game attribute to record the level Create a new number game attribute called Level, with the configuration shown in the following screenshot: Save the game. What just happened? We have created a new game attribute called Level, which will be used to record the current level of the game. 
A game attribute is being used here, because we need to access this value in other scenes within our game; local attributes have their values reset whenever a scene is loaded, whereas game attributes' values are retained regardless of whether or not a different scene has been loaded. Fixing the never-ending game! We've finished designing and creating the gameplay for the Monkey Run game, and the scoring behaviors are almost complete. However, there is an anomaly with the management of the Lives game attribute. The monkey correctly loses a life when it collides with an enemy actor, but currently, when the countdown expires, the monkey is simply repositioned at the start of the level, and the countdown starts again from the beginning! If we leave the game as it is, the player will have an unlimited number of attempts to complete the level — that's not much of a challenge! Have a go hero (Recommended) In the Countdown expired event, which is found in the Health actor behavior, modify the test for the countdown so that it checks for the countdown timer being exactly equal to zero, rather than the current test, which is for the Countdown attribute being less than 1. We only want the ShowAngel event to be triggered once when the countdown equals exactly zero! (Recommended) Update the game so that the Show Angel event manages the complete process of losing a life, that is, either when a collision occurs between the monkey and an enemy, or when the countdown timer expires. A single event should deduct a life and restart the level. (Optional) If we look carefully, we can see that the countdown timer bar starts to grow backwards when the player runs out of time! Update the Display HUD event in the Score Management scene behavior, so that the timer bar is only drawn when the countdown is greater than zero. There are many different ways to implement the above modifications, so take some time and plan the recommended modifications! 
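The first recommended modification can be pictured with a small sketch; tick and showAngel are illustrative names, and the numbers are invented for the example. Testing for countdown < 1 would fire on every update after time runs out, while testing for exactly zero fires on a single update:

```javascript
// Fire ShowAngel on the one tick where the countdown reaches exactly
// zero, rather than on every tick where it is below 1.
function tick(state, showAngel) {
  state.countdown -= 1;         // the countdown keeps decreasing each tick
  if (state.countdown === 0) {  // true on exactly one tick
    showAngel();
  }
}
```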
Test the game thoroughly to ensure that the lives are reduced correctly and the level restarts as expected, when the monkey collides with the enemy, and when the countdown expires. It would certainly be a good idea to review the download file for this session, compare it with your own solutions, and review each event carefully, along with the accompanying comment blocks. There are some useful tips in the example file, so do take the time to have a look! Summary Although our game doesn't look vastly different from when we started this article, we have made some very important changes. First, we implemented a text display to show the countdown timer, so that players of our game can see how much time they have remaining to complete the level. We also imported and configured a font and used the new font to make the countdown display more visually appealing. We then implemented a system of tracking the number of lives that the player has left, and this was our first introduction to learning how game attributes can store information that can be carried across scenes. The most visible change that we implemented in this article was to introduce a timer bar that reduces in size as the countdown decreases. Although very few instruction blocks were required to create the timer bar, the results are very effective, and are less distracting for the player than having to repeatedly look to the top of the screen to read a text display. The main challenge for players of our game is to collect all the fruit actors in the allocated time, so we created an initialization event to count the number of fruit actors in the scene. Again, this event has been designed to be reusable, as it will always correctly count the fruit actors in any scene. We also implemented the instructions to test when there are no more fruit actors to be collected, so the player can be taken to the next level in the game when they have completed the challenge. 
A very important skill that we learned while implementing these instructions was to use actor values as Boolean flags to ensure that collisions are counted only once. Finally, we created a new game attribute to keep track of our players' progress through the different levels in the game. Resources for Article: Further resources on this subject: Introduction to Game Development Using Unity 3D [Article] 2D game development with Monkey [Article] Getting Started with GameSalad [Article]
Bringing Your Game to Life with AI and Animations
Packt
26 Aug 2013
16 min read
(For more resources related to this topic, see here.) After going through these principles, we will be completing the tasks to enhance the maze game and the gameplay. We will apply animations to characters and trigger these in particular situations. We will improve the gameplay by allowing NPCs to follow the player when he/she is nearby (behavior based on distance), and attack the user when he/she is within reach. All material required to complete this article is available for free download on the companion website: http://patrickfelicia.wordpress.com/publications/books/unity-outbreak/. The pack for this article includes some great models and animations that were provided by the company Mixamo to enhance the quality of our final game. The characters were animated using Mixamo's easy online sequences and animation building tools. For more information on Mixamo and its easy-to-use 3D character rigging and animation tools, you can visit http://www.mixamo.com. Before we start creating our level, we will need to rename our scene and download the necessary assets from the companion website as follows: Duplicate the scene we have by saving the current scene (File | Save Scene), and then saving this scene as chapter5 (File | Save Scene As…). Open the link for the companion website: http://patrickfelicia.wordpress.com/publications/books/unity-outbreak/. Click on the link for the chapter5 pack to download this file. In Unity3D, create a new folder, chapter5, inside the Assets folder and select this folder (that is, chapter5). From Unity, select Assets | Import Package | Custom Package, and import the package you have just downloaded. This should create a folder, chapter5_pack, within the folder labeled chapter5. Importing and configuring the 3D character We will start by inserting and configuring the zombie character in the scene as shown in the following steps: Open the Unity Asset Store window (Window | Asset Store).
In the Search field located in the top-right corner, type the text zombie. Click on the search result labeled Zombie Character Pack, and then click on the button labeled Import. In the new window entitled Importing package, uncheck the last box for the low-resolution zombie character and then click on Import. This will import the high-resolution zombie character inside our project and create a corresponding folder labeled ZombieCharacterPack inside the Assets folder. Locate the prefab zombie_hires by navigating to Assets | ZombieCharacterPack. Select this prefab and open the Inspector window, if it is not open yet. Click on the Rig tab, set the Animation Type to Humanoid, and leave the other options as default. Click on the Apply button and then click on the Configure button; a pop-up window will appear: click on Save. In the new window, select: Mapping | Automap, as shown in the following screenshot: After this step, if we check the Hierarchy window, we should see a hierarchy of bones for this character. Select Pose | Enforce T-Pose as shown in the following screenshot: Click on the Muscles tab and then click on Apply in the new pop-up window. The Muscles tab makes it possible to apply constraints on our character. Check whether the mapping is correct by moving some of the sliders and ensuring that the character is represented properly. After this check, click on Done to go back to the previous window. Animating the character for the game Once we have applied these settings to the character, we will now use it for our scene. Drag-and-drop the prefab labeled zombie_hires by navigating to Assets | ZombieCharacterPack to the scene, change its position to (x=0, y=0, z=0), and add a collider to the character. Select: Component | Physics | Capsule Collider.
Set the center position of this collider to (x=0, y=0.7, z=0), the radius to 0.5, the height to 2, and leave the other options as default, as illustrated in the following screenshot: Select: Assets | chapter5 | chapter5_pack; you will see that it includes several animations, including Zombie@idle, Zombie@walkForward, Zombie@attack, Zombie@hit, and Zombie@dead. We will now create the necessary animation for our character. Click once on the object zombie_hires in the Hierarchy window. We should see that it includes a component called Animator. This component is related to the animation of the character through Mecanim. You will also notice an empty slot for an Animator Controller. This controller will be created so that we can animate the character and control its different states, using a state machine. Let's create an Animator Controller that will be used for this character: From the project folder, select the chapter5 folder, then select Create | Animator Controller in the Project window. This should create a new Animator Controller labeled New Animator Controller in the folder chapter5. Rename this controller zombieController. Select the object labeled zombie_hires in the Hierarchy window. Locate the Animator Controller that we have just created by navigating to Assets | chapter5 (zombieController), drag-and-drop it to the empty slot to the right of the attribute controller in the Animator component of the zombie character, and check that the options Apply Root Motion and Animate Physics are selected. Our character is now ready to receive the animations. Open the Animator window (Window | Animator). This window is employed to display and manage the different states of our character. Since no animation is linked to the character, the default state is Any State. Select the object labeled zombie_hires in the Hierarchy window. 
Rearrange the windows in our project so that we can see both the state machine window and the character in the Scene view: we can drag the tab labeled Scene for the Scene view at the bottom of the Animator window, so that both windows can be seen simultaneously. We will now apply our first animation to the character: Locate the prefab Zombie@idle by navigating to Assets | chapter5 | chapter5_pack. Click once on this prefab, and in the Inspector window, click the Rig tab. In the new window, select the option Humanoid for the attribute Animation Type and click on Apply. Click on the Animations tab, and then click on the label idle; this will provide information on the idle clip. Scroll down the window, check the box for the attribute Loop Pose, and click on Apply to apply this change (you will need to scroll down to locate this button). In the Project view, click on the arrow located to the left (or right, depending on how much we have zoomed in within this window) of the prefab Zombie@idle; it will reveal items included in this prefab, including an animation called idle, symbolized by a gray box with a white triangle. Make sure that the Animator window is active and drag this animation (idle) to the Animator window. This will create an idle state, and this state will be colored orange, which means that it is the default state for our character. Rename this state Idle (upper case I) using the Inspector. Play the scene and check that the character is in an idle state. Repeat steps 1-9 for the prefab Zombie@walkForward and create a state called WalkForward. To test the second animation, we can temporarily set the state WalkForward to be the default state by right-clicking on the WalkForward state in the Animator window, and selecting Set As Default. Once we have tested this animation, set the state Idle as the default state. While the zombie is animated properly, you may notice that the camera on the First Person Controller might be too high.
We will address this by changing the height of the camera so that it is at eye level. In the Hierarchy view, select the object Main Camera that is located with the object First Person Controller and change its position to (x=0, y=0.5, z=0). We now have two animations. At present, the character is in the Idle state, and we need to define triggers or conditions for the character to start or stop walking toward the player. In this game, we will have enemies with different degrees of intelligence. This first type will follow the user when it sees the user, is close to the user, or is being attacked by the user. The Animator window will help to create animations and to apply transition conditions and blending between them so that transitions between each animation are smoother. To move around this window, we can hold the Alt key while simultaneously dragging-and-dropping the mouse. We can also select states by clicking on them or defining a selection area (drag-and-drop the mouse to define the area). If needed, it is also possible to maximize this window using the icon located at its top-right corner. Creating parameters and transitions First, let's create transitions. Open the Animator window, right-click on the state labeled Idle, and select the option Make Transition from the contextual menu. This will create an arrow that symbolizes the transition from this state to another state. While this arrow is visible, click on the state labeled WalkForward. This will create a transition between the states Idle and WalkForward as illustrated in the following screenshot: Repeat the last step to create a transition between the states WalkForward and Idle: right-click on the state labeled WalkForward, select the option Make Transition from the contextual menu, and click on the state labeled Idle. Now that these transitions have been defined, we will need to specify how the animations will change from one state to the other. This will be achieved using parameters.
In the Animator window, click on the + button located at the bottom-right corner of the window, as indicated in the following screenshot: Doing so will display a contextual menu, from which we can choose the type of the parameter. Select the option Bool to create a Boolean parameter. A new window should now appear with a default name for our new parameter as illustrated in the following screenshot: change the name of the parameter to walking. Now that the parameter has been defined, we can start defining transitions based on this parameter. Let's start with the first transition from the Idle state to the WalkForward state: Select the transition from the Idle state to the WalkForward state (that is, click on the corresponding transition in the Animator window). If we look at the Inspector window, we can see that this object has several components, including Transitions and Conditions. Let's focus on the Conditions component for the time being. We can see that the condition for the transition is based on a parameter called ExitTime and that the value is 0.98. This means that the transition will occur when the current animation has reached 98 percent completion. However, we would like to use the parameter labeled walking instead. Click on the parameter ExitTime; this should display other parameters that we can use for this transition. Select walking from the contextual menu and make sure that the condition is set to true as shown in the following screenshot: The process will be similar for the other transition (that is, from WalkForward to Idle), except that the condition for the transition for the parameter walking will be false: select the second transition (WalkForward to Idle) and set the transition condition of walking to false. To check that the transitions are working, we can do the following: Play the scene and look at the Scene view (not the Game view).
In the Animator window, change the parameter walking to true by checking the corresponding box, as highlighted in the following screenshot: Check that the zombie character starts walking; click on this box again to set the variable walking to false, check that the zombie stops walking, and stop the Play mode (Ctrl + P). Adding basic AI to enemies We have managed to set transitions for the animations and the state of the zombie from Idle to walking. To add some challenge to the game, we will equip this enemy with some AI and create a script that changes the state of the enemy from Idle to WalkForward whenever it sees the player. First, let's allocate the predefined tag Player to First Person Controller: select First Person Controller from the Hierarchy window, and in the Inspector window, click on the drop-down menu to the right of the label Tag and select the tag Player. Then, we can start creating a script that will set the direction of the zombie toward the player. Create a folder labeled Scripts inside the folder Assets | chapter5, create a new script, rename it controlZombie, and add the following code to the start of the script:

public var walking:boolean = false;
public var anim:Animator;
public var currentBaseState:AnimatorStateInfo;
public var walkForwardState:int = Animator.StringToHash("Base Layer.WalkForward");
public var idleState:int = Animator.StringToHash("Base Layer.Idle");
private var playerTransform:Transform;
private var hit:RaycastHit;

In statement 1 of the previous code, a Boolean value is created. It is linked to the parameter used for the animation in the Animator window. In statement 2 of the previous code, we define an Animator object that will be used to manage the animator component of the zombie character. In statement 3 of the previous code, we create an AnimatorStateInfo variable that will be used to determine the current state of the animation (for example, Idle or WalkForward).
In statement 4 of the previous code, we create a variable, walkForwardState, that will represent the state WalkForward previously defined in the Animator window. We use the method Animator.StringToHash to convert this state initially from a string to an integer that can then be used to monitor the active state. In statement 5 of the previous code, similar to the previous comments, a variable is created for the state Idle. In statement 6 of the previous code, we create a variable that will be used to detect the position of the player. In statement 7 of the previous code, we create a ray that will be employed later on to detect the player. Next, let's add the following function to the script:

function Start ()
{
   anim = GetComponent(Animator);
   playerTransform = GameObject.FindWithTag("Player").transform;
}

In line 3 of the previous code, we initialize the variable anim with the Animator component linked to this GameObject. We can then add the following lines of code:

function Update ()
{
   currentBaseState = anim.GetCurrentAnimatorStateInfo(0);
   gameObject.transform.LookAt(playerTransform);
}

In line 3 of the previous code, we determine the current state for our animation. In line 4 of the previous code, the transform component of the current game object is oriented so that it is looking at the First Person Controller. Therefore, when the zombie is walking, it will follow the player. Save this script, and drag-and-drop it to the character labeled zombie_hires in the Hierarchy window. As we have seen previously, we will need to manage several states through our script, including the states Idle and WalkForward.
Let's add the following code in the Update function:

switch (currentBaseState.nameHash)
{
   case idleState:
   break;

   case walkForwardState:
   break;

   default:
   break;
}

In line 1 of the previous code, depending on the current state, we will switch to a different set of instructions. All code related to the state Idle will be included within lines 3-4 of the previous code. All code related to the state WalkForward will be included within lines 6-7. If we play the scene, we may notice that the zombie rotates around the x and z axes when near the player; its y position also changes over time. To correct this issue, let's add the following code at the end of the function Update:

transform.position.y = -0.5;
transform.rotation.x = 0.0;
transform.rotation.z = 0.0;

We now need to detect whether the zombie can see the player, or detect its presence within a radius of two meters (that is, the zombie would hear the player if he/she is within two meters). This can be achieved using two techniques: by calculating the distance between the zombie and the player, and by casting a ray from the zombie and detecting whether the player is in front of the zombie. If this is the case, the zombie will start walking toward the player. We need to calculate the distance between the player and the zombie by adding the following code to the script controlZombie, at the start of the function Update, before the switch statement:

var distance:float = Vector3.Distance(transform.position, playerTransform.position);

In the previous code, we create a variable labeled distance and initialize it with the distance between the player and the zombie. This is achieved using the built-in function Vector3.Distance. Now that the distance is calculated (and updated in every frame), we can implement the code that will serve to detect whether the player is near or in front of the zombie.
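Vector3.Distance returns the straight-line (Euclidean) distance between two points; an equivalent stand-alone sketch in JavaScript:

```javascript
// Equivalent of Vector3.Distance: the Euclidean distance between two
// 3D points a and b.
function distance(a, b) {
  const dx = a.x - b.x;
  const dy = a.y - b.y;
  const dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}
```

For example, distance({x: 0, y: 0, z: 0}, {x: 3, y: 4, z: 0}) returns 5.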
Open the script entitled controlZombie, and add the following lines to the function Update within the block of instructions for the Idle state, so that it looks as follows:

case idleState:
   if ((Physics.Raycast (Vector3(transform.position.x, transform.position.y + .5, transform.position.z), transform.forward, hit, 40) && hit.collider.gameObject.tag == "Player") || distance < 2.0f)
   {
      anim.SetBool("walking", true);
   }
break;

In the previous lines of code, a ray or ray cast is created. It is cast forward from the zombie, 0.5 meters above the ground and over 40 meters. Thanks to the variable hit, we read the tag of the object that is colliding with our ray and check whether this object is the player. If this is the case, the parameter walking is set to true. Effectively, this should trigger a transition to the state walking, as we have defined previously, so that the zombie starts walking toward the player. Initially, our code was written so that the zombie rotated around to face the player, even in the Idle state (using the built-in function LookAt). However, we need to modify this feature so that the zombie only turns around to face the player while it is following the player; otherwise, the player will always be in sight and the zombie will always see him/her, even in the Idle state. We can achieve this by deleting the code highlighted in the following code snippet (from the start of the function Update), and adding it to the code for the state WalkForward:

case walkForwardState:
   transform.LookAt(playerTransform);
break;

In the previous lines, we checked whether the zombie is walking forward, and if this is the case, the zombie will rotate in order to look at and follow the player. Test our code by playing the scene and either moving within two meters of the zombie or in front of the zombie.
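Stripped of the Unity-specific calls, the decision made in the idleState branch reduces to a single boolean expression, sketched here with illustrative inputs:

```javascript
// The zombie leaves the Idle state if EITHER the forward ray hits an
// object tagged "Player" (sight, up to 40 meters) OR the player is
// within 2 meters (hearing), regardless of direction.
function shouldStartWalking(raySeesPlayer, distanceToPlayer) {
  return raySeesPlayer || distanceToPlayer < 2.0;
}
```

So shouldStartWalking(false, 1.5) is true (heard), shouldStartWalking(true, 30) is true (seen), and shouldStartWalking(false, 5) is false.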

The Unreal Engine
Packt
26 Aug 2013
8 min read
(For more resources related to this topic, see here.) Sound cues versus sound wave data There are two types of sound entries in UDK: sound cues and sound wave data. The simplest difference between the two is that sound wave data is what we would have if we imported a sound file into the editor, whereas a sound cue takes one or more sound wave datas and manipulates or combines them using the fairly robust and powerful toolset that UDK gives us in its Sound Cue Editor. In terms of uses, sound wave datas are primarily only used as parts of sound cues. However, in terms of placing ambient sounds, that is, sounds that are just sort of always playing in the background, sound wave datas and sound cues each have situations where they are used. Regardless, they both get represented in the level as Sound Actors, of which there are several types as shown in the following screenshot: Types of sound actors A key element of any well-designed level is ambient sound effects. This requires placing sound actors into the world. Some of these actors use sound wave data and others use sound cues. There are strengths, weaknesses, and specific use cases for all of them, so we'll touch on those presently. Using sound cues There are two distinct types of sound actors that call for the use of sound cues specifically. The strength of using sound cues for ambient sounds is that the different sounds can be manipulated in a wider variety of ways. Generally, this isn't necessary, as most ambient sounds are some looping sound used to add sound to things like torches, rippling streams, a subtle blowing wind, or other such environmental instances. The two types of sound actors that use sound cues are Ambient Sounds and Ambient Sound Movables as shown in the following screenshot: Ambient sound As the name suggests, this is a standard ambient sound. It stays exactly where you place it and cannot be moved.
These ambient sound actors are generally used for stationary sounds that need some level of randomization or some other form of specific control of multiple sound wave datas. Ambient sound movable Functionally very similar to the regular ambient sound, this variation can, as the name suggests, be moved. That means this sort of ambient sound actor should be used in a situation where an ambient sound would be used, but needs to be mobile. The main weakness of the two ambient sound actors that utilize sound cues is that each one you place in a level is identically set to the exact settings within the sound cue. Conversely, ambient sound actors utilizing sound wave datas can be set up on an instance-by-instance basis. What this means is explained with the help of an example. Let's say we have two fires in our game. One is a small torch, and the other is a roaring bonfire. If we feel that using the same sound for each is what we want to do, then we can place both the ambient sound actors utilizing sound wave datas and adjust some settings within each actor to make sure that the bonfire is louder and/or lower pitched. If we wanted this type of variation using sound cues, we would have to make separate sound cues. Using sound wave data There are four types of ambient sound actors that utilize sound wave datas directly, as opposed to being housed within sound cues. As previously mentioned, the purpose of using ambient sound actors that use sound wave data is to avoid having to create multiple sound cues with only minimally different contents for simple ambient sounds. This is most readily displayed by the fact that the most commonly used ambient sound actors that use sound wave data are called AmbientSoundSimple and AmbientSoundSimpleToggleable as shown in the following screenshot: Ambient sound simple Ambient sound simple is, as the name suggests, the simplest of ambient sound actors.
They are only used when we need one sound wave data or multiple sound wave datas to just repeat on a loop over and over again. Fortunately, most ambient sounds in a level fit this description. In most cases, if we were to go through a level and do an ambient sound pass, all we would need to use are ambient sound simples. Ambient sound non loop Ambient sound non loop actors are pretty much the same, functionally, as ambient sound simples. The only difference is, as the name suggests, they don't loop. They will play whatever sound wave data(s) that are set in the actor, then delay by a number of seconds that is also set within the actor, and then go through it again. This is useful when we want to have a sound play somewhat intermittently, but not be on a regular loop. Ambient sound non looping toggleable Ambient sound non looping toggleable actors are, for all intents and purposes, the same as the regular ambient sound non loop actors, but they are toggleable. This means, put simply, that they can be turned on and off at will using Kismet. This would obviously be useful if we needed one of these intermittent sounds to play only when certain things happened first. Ambient sound simple toggleable Ambient sound simple toggleable actors are basically the same as a plain old, run-of-the-mill ambient sound simple, with the difference being, like the ambient sound non looping toggleable, that they can be turned on and off using Kismet. Playing sounds in Kismet There are several different ways to play different kinds of sounds using Kismet. Firstly, if we are using a toggleable ambient sound actor, then we can simply use a toggle sequence, which can be found under New Action | Toggle. There is also a Play Sound sequence located in New Action | Sound | Play Sound. Both of these are relatively straightforward in terms of where to plug in the sound cue.
Playing sounds in Matinee If we need a sound to play as part of a Matinee sequence, the Matinee tool gives us the ability to trigger the sound in question. If we have a Matinee sequence that contains a Director track, then we need to simply right-click and select Add New Sound Track. From here, we just need to have the sound cue we want to use selected in the Content Browser, and then, with the Sound Track selected in the active Matinee window, we simply place the time marker where we want the sound to play and press Enter. This will place a keyframe that will trigger our sound to play, easy as pie. The Matinee tool dialog will look like the following screenshot: Matinee will only play one sound in a sound track at a time, so if we place multiple sounds and they overlap, they won't play simultaneously. Fortunately, we can have as many separate sound tracks as we need. So if we find ourselves setting up a Matinee and two or more sounds overlap in our sound track, we can just add a second one and move some of our sounds in to it. Now that we've gone over the different ways to directly play and use sound cues, let's look at how to make and manipulate the same sound cues using UDK's surprisingly robust Sound Cue Editor. Summary Now that we have a decent grasp of what kinds of sound control UDK offers us and how to manipulate sounds in the editor, we can set about bringing our game to audible life. A quick tip for placing ambient sounds: if you look at something that visually seems like it should be making a noise like a waterfall, a fire, a flickering light, or whatever else, then it probably should have an ambient sound of some sort placed right on it. And as always, what we've covered in this article is an overview of some of the bare bones basics required to get started exploring sounds and soundscapes in UDK. There are plenty of other actors, settings, and things that can be done. So, again, I recommend playing around with anything you can find. 
Experiment with everything in UDK and you'll learn all sorts of new and interesting things. Resources for Article: Further resources on this subject: Unreal Development Toolkit: Level Design HQ [Article] Getting Started on UDK with iOS [Article] Configuration and Handy Tweaks for UDK [Article]

Using Cameras

Packt
16 Aug 2013
11 min read
(For more resources related to this topic, see here.) Creating a picture-in-picture effect Having more than one viewport displayed can be useful in many situations. For example, you might want to show simultaneous events going on in different locations, or maybe you want to have a separate window for hot-seat multiplayer games. Although you could do it manually by adjusting the Normalized Viewport Rect parameters on your camera, this recipe includes a series of extra preferences to make it more independent from the user's display configuration. Getting ready For this recipe, we have prepared a package named basicLevel containing a scene. The package is in the 0423_02_01_02 folder. How to do it... To create a picture-in-picture display, just follow these steps: Import the basicLevel package into your Unity project. In the Project view, open basicScene, inside the folder 02_01_02. This is a basic scene featuring a directional light, a camera, and some geometry. Add the Camera option to the scene through the Create dropdown menu on top of the Hierarchy view, as shown in the following screenshot: Select the camera you have created and, in the Inspector view, set its Depth to 1: In the Project view, create a new C# script and rename it PictureInPicture. 
Open your script and replace everything with the following code:

using UnityEngine;

public class PictureInPicture : MonoBehaviour
{
    public enum HorizontalAlignment { left, center, right };
    public enum VerticalAlignment { top, middle, bottom };
    public HorizontalAlignment horizontalAlignment = HorizontalAlignment.left;
    public VerticalAlignment verticalAlignment = VerticalAlignment.top;
    public enum ScreenDimensions { pixels, screen_percentage };
    public ScreenDimensions dimensionsIn = ScreenDimensions.pixels;
    public int width = 50;
    public int height = 50;
    public float xOffset = 0f;
    public float yOffset = 0f;
    public bool update = true;
    private int hsize, vsize, hloc, vloc;

    void Start ()
    {
        AdjustCamera();
    }

    void Update ()
    {
        if (update)
            AdjustCamera();
    }

    void AdjustCamera ()
    {
        if (dimensionsIn == ScreenDimensions.screen_percentage)
        {
            hsize = Mathf.RoundToInt(width * 0.01f * Screen.width);
            vsize = Mathf.RoundToInt(height * 0.01f * Screen.height);
        }
        else
        {
            hsize = width;
            vsize = height;
        }

        if (horizontalAlignment == HorizontalAlignment.left)
        {
            hloc = Mathf.RoundToInt(xOffset * 0.01f * Screen.width);
        }
        else if (horizontalAlignment == HorizontalAlignment.right)
        {
            hloc = Mathf.RoundToInt((Screen.width - hsize) - (xOffset * 0.01f * Screen.width));
        }
        else
        {
            hloc = Mathf.RoundToInt(((Screen.width * 0.5f) - (hsize * 0.5f)) - (xOffset * 0.01f * Screen.width));
        }

        if (verticalAlignment == VerticalAlignment.top)
        {
            vloc = Mathf.RoundToInt((Screen.height - vsize) - (yOffset * 0.01f * Screen.height));
        }
        else if (verticalAlignment == VerticalAlignment.bottom)
        {
            vloc = Mathf.RoundToInt(yOffset * 0.01f * Screen.height);
        }
        else
        {
            vloc = Mathf.RoundToInt(((Screen.height * 0.5f) - (vsize * 0.5f)) - (yOffset * 0.01f * Screen.height));
        }

        camera.pixelRect = new Rect(hloc, vloc, hsize, vsize);
    }
}

In case you haven't noticed, we are not achieving percentage by dividing numbers by 100, but rather multiplying them by 0.01. The reason behind that is performance: computer processors are faster at multiplying than dividing.
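Because the viewport placement is plain arithmetic, it can be sanity-checked outside Unity. The sketch below mirrors the alignment math for the pixels mode only; the function name and the 1920x1080 example screen are our own illustrative assumptions, not part of the recipe:

```python
def pixel_rect(screen_w, screen_h, width, height,
               h_align="left", v_align="top",
               x_offset=0.0, y_offset=0.0):
    """Mirror of AdjustCamera for the pixels mode.
    Returns (hloc, vloc, hsize, vsize), the values fed to camera.pixelRect."""
    hsize, vsize = width, height
    if h_align == "left":
        hloc = round(x_offset * 0.01 * screen_w)
    elif h_align == "right":
        hloc = round((screen_w - hsize) - x_offset * 0.01 * screen_w)
    else:  # center
        hloc = round((screen_w * 0.5 - hsize * 0.5) - x_offset * 0.01 * screen_w)
    if v_align == "top":
        vloc = round((screen_h - vsize) - y_offset * 0.01 * screen_h)
    elif v_align == "bottom":
        vloc = round(y_offset * 0.01 * screen_h)
    else:  # middle
        vloc = round((screen_h * 0.5 - vsize * 0.5) - y_offset * 0.01 * screen_h)
    return hloc, vloc, hsize, vsize

# The recipe's settings (Right/Top, 400x200) on an assumed 1920x1080 screen:
print(pixel_rect(1920, 1080, 400, 200, "right", "top"))  # → (1520, 880, 400, 200)
```

Note that Unity's pixelRect origin is the bottom-left of the screen, which is why the top alignment subtracts the viewport height from the screen height.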
Save your script and attach it to the new camera that you created previously. Uncheck the new camera's Audio Listener component and change some of the PictureInPicture parameters: change Horizontal Alignment to Right, Vertical Alignment to Top, and Dimensions In to pixels. Leave XOffset and YOffset as 0, change Width to 400 and Height to 200, as shown below: Play your scene. The new camera's viewport should be visible on the top right of the screen: How it works... Our script changes the camera's Normalized Viewport Rect parameters, thus resizing and positioning the viewport according to the user preferences. There's more... The following are some aspects of your picture-in-picture you could change. Making the picture-in-picture proportional to the screen's size If you change the Dimensions In option to screen_percentage, the viewport size will be based on the actual screen's dimensions instead of pixels. Changing the position of the picture-in-picture Vertical Alignment and Horizontal Alignment can be used to change the viewport's origin. Use them to place it where you wish. Preventing the picture-in-picture from updating on every frame Leave the Update option unchecked if you don't plan to change the viewport position in running mode. Also, it's a good idea to leave it checked when testing and then uncheck it once the position has been decided and set up. See also The Displaying a mini-map recipe. Switching between multiple cameras Choosing from a variety of cameras is a common feature in many genres: race sims, sports sims, tycoon/strategy, and many others. In this recipe, we will learn how to give players the ability of choosing an option from many cameras using their keyboard. Getting ready In order to follow this recipe, we have prepared a package containing a basic level named basicScene. The package is in the folder 0423_02_01_02. How to do it... To implement switchable cameras, follow these steps: Import the basicLevel package into your Unity project. 
In the Project view, open basicScene from the 02_01_02 folder. This is a basic scene featuring a directional light, a camera, and some geometry. Add two more cameras to the scene. You can do it through the Create drop-down menu on top of the Hierarchy view. Rename them cam1 and cam2. Change the cam2 camera's position and rotation so it won't be identical to cam1. Create an Empty game object by navigating to Game Object | Create Empty. Then, rename it Switchboard. In the Inspector view, disable the Camera and Audio Listener components of both cam1 and cam2. In the Project view, create a new C# script. Rename it CameraSwitch and open it in your editor. Open your script and replace everything with the following code:

using UnityEngine;

public class CameraSwitch : MonoBehaviour
{
    public GameObject[] cameras;
    public string[] shortcuts;
    public bool changeAudioListener = true;

    void Update ()
    {
        for (int i = 0; i < cameras.Length; i++)
        {
            if (Input.GetKeyUp(shortcuts[i]))
                SwitchCamera(i);
        }
    }

    public void SwitchCamera (int index)
    {
        for (int i = 0; i < cameras.Length; i++)
        {
            if (i != index)
            {
                if (changeAudioListener)
                {
                    cameras[i].GetComponent<AudioListener>().enabled = false;
                }
                cameras[i].camera.enabled = false;
            }
            else
            {
                if (changeAudioListener)
                {
                    cameras[i].GetComponent<AudioListener>().enabled = true;
                }
                cameras[i].camera.enabled = true;
            }
        }
    }
}

Attach CameraSwitch to the Switchboard game object. In the Inspector view, set both Cameras and Shortcuts size to 3. Then, drag the scene cameras into the Cameras slots, and type 1, 2, and 3 into the Shortcuts text fields, as shown in the next screenshot. Play your scene and test your cameras. How it works... The script is very straightforward. All it does is capture the key pressed and enable its respective camera (and its Audio Listener, in case the Change Audio Listener option is checked). There's more... Here are some ideas on how you could try twisting this recipe a bit.
Using a single-enabled camera A different approach to the problem would be keeping all the secondary cameras disabled and assigning their position and rotation to the main camera via a script (you would need to make a copy of the main camera and add it to the list, in case you wanted to save its transform settings). Triggering the switch from other events Also, you could change your camera from other game objects' scripts by using a line of code, such as the one shown here: GameObject.Find("Switchboard").GetComponent<CameraSwitch>().SwitchCamera(1); See also The Making an inspect camera recipe. Customizing the lens flare effect As anyone who has played a game set in an outdoor environment in the last 15 years can tell you, the lens flare effect is used to simulate the incidence of bright lights over the player's field of view. Although it has become a bit overused, it is still very much present in all kinds of games. In this recipe, we will create and test our own lens flare texture. Getting ready In order to continue with this recipe, it's strongly recommended that you have access to image editor software such as Adobe Photoshop or GIMP. The source for the lens texture created in this recipe can be found in the 0423_02_03 folder. How to do it... To create a new lens flare texture and apply it to the scene, follow these steps: Import Unity's Character Controller package by navigating to Assets | Import Package | Character Controller. Do the same for the Light Flares package. In the Hierarchy view, use the Create button to add a Directional Light effect to your scene. Select your camera and add a Mouse Look component by accessing the Component | Camera Control | Mouse Look menu option. In the Project view, locate the Sun flare (inside Standard Assets | Light Flares), duplicate it and rename it to MySun, as shown in the following screenshot: In the Inspector view, click Flare Texture to reveal the base texture's location in the Project view.
It should be a texture named 50mmflare. Duplicate the texture and rename it My50mmflare. Right-click My50mmflare and choose Open. This should open the file (actually a .psd file) in your image editor. If you're using Adobe Photoshop, you might see the guidelines for the texture, as shown here: To create the light rings, create new Circle shapes and add different Layer Effects such as Gradient Overlay, Stroke, Inner Glow, and Outer Glow. Recreate the star-shaped flares by editing the originals or by drawing lines and blurring them. Save the file and go back to the Unity Editor. In the Inspector view, select MySun, and set Flare Texture to My50mmflare: Select Directional Light and, in the Inspector view, set Flare to MySun. Play the scene and move your mouse around. You will be able to see the lens flare as the camera faces the light. How it works... We have used Unity's built-in lens flare texture as a blueprint for our own. Once applied, the lens flare texture will be displayed when the player looks into the approximate direction of the light. There's more... Flare textures can use different layouts and parameters for each element. In case you want to learn more about the Lens Flare effect, check out Unity's documentation at http://docs.unity3d.com/Documentation/Components/class-LensFlare.html. Making textures from screen content If you want your game or player to take in-game snapshots and apply them as a texture, this recipe will show you how. This can be very useful if you plan to implement an in-game photo gallery or display a snapshot of a past key moment at the end of a level (Race Games and Stunt Sims use this feature a lot). Getting ready In order to follow this recipe, please import the basicTerrain package, available in the 0423_02_04_05 folder, into your project. The package includes a basic terrain and a camera that can be rotated via a mouse. How to do it...
To create textures from screen content, follow these steps: Import the Unity package and open the 02_04_05 scene. We need to create a script. In the Project view, click on the Create drop-down menu and choose C# Script. Rename it ScreenTexture and open it in your editor. Open your script and replace everything with the following code:

using UnityEngine;
using System.Collections;

public class ScreenTexture : MonoBehaviour
{
    public int photoWidth = 50;
    public int photoHeight = 50;
    public int thumbProportion = 25;
    public Color borderColor = Color.white;
    public int borderWidth = 2;
    private Texture2D texture;
    private Texture2D border;
    private int screenWidth;
    private int screenHeight;
    private int frameWidth;
    private int frameHeight;
    private bool shoot = false;

    void Start ()
    {
        screenWidth = Screen.width;
        screenHeight = Screen.height;
        frameWidth = Mathf.RoundToInt(screenWidth * photoWidth * 0.01f);
        frameHeight = Mathf.RoundToInt(screenHeight * photoHeight * 0.01f);
        texture = new Texture2D (frameWidth, frameHeight, TextureFormat.RGB24, false);
        border = new Texture2D (1, 1, TextureFormat.ARGB32, false);
        border.SetPixel(0, 0, borderColor);
        border.Apply();
    }

    void Update ()
    {
        if (Input.GetKeyUp(KeyCode.Mouse0))
            StartCoroutine(CaptureScreen());
    }

    void OnGUI ()
    {
        // Top border
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, ((screenHeight * 0.5f) - (frameHeight * 0.5f)) - borderWidth, frameWidth + borderWidth * 2, borderWidth), border, ScaleMode.StretchToFill);
        // Bottom border
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, (screenHeight * 0.5f) + (frameHeight * 0.5f), frameWidth + borderWidth * 2, borderWidth), border, ScaleMode.StretchToFill);
        // Left border
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f) - borderWidth * 2, (screenHeight * 0.5f) - (frameHeight * 0.5f), borderWidth, frameHeight), border, ScaleMode.StretchToFill);
        // Right border
        GUI.DrawTexture(new Rect((screenWidth * 0.5f) + (frameWidth * 0.5f), (screenHeight * 0.5f) - (frameHeight * 0.5f), borderWidth, frameHeight), border, ScaleMode.StretchToFill);
        if (shoot)
        {
            // Draw the captured texture as a thumbnail in the top-left corner
            GUI.DrawTexture(new Rect(10, 10, frameWidth * thumbProportion * 0.01f, frameHeight * thumbProportion * 0.01f), texture, ScaleMode.StretchToFill);
        }
    }

    IEnumerator CaptureScreen ()
    {
        yield return new WaitForEndOfFrame();
        texture.ReadPixels(new Rect((screenWidth * 0.5f) - (frameWidth * 0.5f), (screenHeight * 0.5f) - (frameHeight * 0.5f), frameWidth, frameHeight), 0, 0);
        texture.Apply();
        shoot = true;
    }
}

Save your script and apply it to the Main Camera game object. In the Inspector view, change the values for the Screen Texture component, setting Photo Width and Photo Height to 25 and Thumb Proportion to 75, as shown here: Play the scene. You will be able to take a snapshot of the screen (and have it displayed on the top-left corner) by clicking the mouse button. How it works... Clicking the mouse triggers a function that reads pixels within the specified rectangle and applies them into a texture that is drawn by the GUI. There's more... Apart from displaying the texture as a GUI element, you could use it in other ways. Applying your texture to a material You can apply your texture to an existing object's material by adding a line similar to GameObject.Find("MyObject").renderer.material.mainTexture = texture; at the end of the CaptureScreen function. Using your texture as a screenshot You can encode your texture as a PNG image file and save it. Check out Unity's documentation on this feature at http://docs.unity3d.com/Documentation/ScriptReference/Texture2D.EncodeToPNG.html.
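To make the centered ReadPixels rectangle concrete, here is the same frame-size and centering arithmetic worked out in Python (the helper name and the 1920x1080 example screen are our own assumptions for illustration):

```python
def capture_rect(screen_w, screen_h, photo_w_pct, photo_h_pct):
    # Frame size as a percentage of the screen, as computed in Start()
    frame_w = round(screen_w * photo_w_pct * 0.01)
    frame_h = round(screen_h * photo_h_pct * 0.01)
    # Top-left corner that centers the frame, as used by CaptureScreen()
    x = round(screen_w * 0.5 - frame_w * 0.5)
    y = round(screen_h * 0.5 - frame_h * 0.5)
    return x, y, frame_w, frame_h

# Photo Width/Height of 25 (percent) on an assumed 1920x1080 screen:
print(capture_rect(1920, 1080, 25, 25))  # → (720, 405, 480, 270)
```

So with the recipe's settings, a 480x270 block of pixels is read from the middle of the screen, matching the bordered frame drawn by OnGUI.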

Detailing Environments

Packt
17 Jul 2013
4 min read
(For more resources related to this topic, see here.) Applying materials As it stands, our current level looks rather... well, bland. I'd say it's missing something in order to really make it realistic... the walls are all the same! Thankfully, we can use textures to make the walls come to life in a very simple way, bringing us one step closer to that AAA quality that we're going for! Applying materials to our walls in Unreal Development Kit (UDK) is actually very simple once we know how to do it, which is what we're going to look at now: First, access the Content Browser window by navigating to View | Browser Windows | Content Browser in the top menu bar. Once in the Content Browser window, make sure that Packages are sorted by folder by clicking on the left-hand side button. Once this is done, click on the UDK Game folder in the Packages window. Then type in floor master in the top search bar menu. Click on the M_LT_Floors_BSP_Master material. Close the Content Browser window and then left-click on the floor of our level; if you look closely, you should see that it is now selected. With the floor selected, right-click and select Apply Material : M_LT_Floors_BSP_Master. Now that we have given the floor a material, let's give it a platform as well. Select each of the faces by holding down Ctrl and left-clicking on them individually. Once selected, right-click and select Apply Material : M_LT_Floors_BSP_Master. Another way to select all of the faces would be to right-click on the floor and navigate to Select Surfaces | Adjacent Floors. Now our floor is placed; but if you play the game, you may notice the texture being repeated over and over again and the texture on the platform being stretched strangely. One of the ways we can rectify this problem is by scaling the texture to fit our needs. With all of the floor and the pieces of the platform selected, navigate to View | Surface Properties.
From there, change the Simple field under Scaling to 2.0 and click on the Apply button to its right; this will double the size of our textures. After that, go to Alignment and select Box; click on the Apply button placed below it to align our textures as if the faces that we selected were like a box. This works very well for objects consisting of box-like objects (our brushes, for instance). Close the Surface Properties window and open up the Content Browser window. Now search for floors organic. Select M_LT_Floors_BSP_Organic15b and close the Content Browser window. Now select one of the floors on the edges with the default texture on them. Then right-click and go to Select Surfaces | Matching Texture. After that, right-click and select Apply Material : M_LT_Floors_BSP_Organic15b. We build our project by navigating to Build | Build All, save our game by going to the Save option within the File menu, and run our game by navigating to Play | In Editor. And with that, we now have a nicely textured world, and it is quite a good start towards getting our levels looking as refined as possible. Summary This article discusses the role of an environment artist doing a texture pass on the environment. After that, we will place meshes to make our level pop with added details. Finally, we will add a few more things to make the experience as nice looking as possible. Resources for Article : Further resources on this subject: Getting Started on UDK with iOS [Article] Configuration and Handy Tweaks for UDK [Article] Creating Virtual Landscapes [Article]

Introduction to HLSL language

Packt
28 Jun 2013
8 min read
(For more resources related to this topic, see here.) Distance/Height-based fog Distance/Height-based fog is an approximation to the fog you would normally see outdoors. Even on the clearest of days, you should be able to see some fog far in the distance. The main benefit of adding the fog effect is that it helps the viewer estimate how far different elements in the scene are based on the amount of fog covering them. In addition to the realism this effect adds, it has the additional benefit of hiding the end of the visible range. Without fog to cover the far plane, it becomes easier to notice when far scene elements are clipped by the camera's far plane. By tuning the height of the fog you can also add a darker atmosphere to your scene as demonstrated by the following image: This recipe will demonstrate how distance/height-based fog can be added to our deferred directional light calculation. See the How it works… section for details about adding the effect to other elements of your rendering code. Getting ready We will be passing additional fog specific parameters to the directional light's pixel shader through a new constant buffer. The reason for separating the fog values into their own constant buffer is to allow the same parameters to be used by any other shader that takes fog into account. To create the new constant buffer use the following buffer descriptor:

Constant buffer descriptor parameter | Value
Usage          | D3D11_USAGE_DYNAMIC
BindFlags      | D3D11_BIND_CONSTANT_BUFFER
CPUAccessFlags | D3D11_CPU_ACCESS_WRITE
ByteWidth      | 48

The rest of the descriptor fields should be set to zero. All the fog calculations will be handled in the deferred directional light pixel shader. How to do it...
Our new fog constant buffer is declared in the pixel shader as follows:

cbuffer cbFog : register( b2 )
{
    float3 FogColor          : packoffset( c0 );
    float  FogStartDistance  : packoffset( c0.w );
    float3 FogHighlightColor : packoffset( c1 );
    float  FogGlobalDensity  : packoffset( c1.w );
    float3 FogSunDir         : packoffset( c2 );
    float  FogHeightFalloff  : packoffset( c2.w );
}

The helper function used for calculating the fog is as follows:

float3 ApplyFog(float3 originalColor, float eyePosY, float3 eyeToPixel)
{
    float pixelDist = length( eyeToPixel );
    float3 eyeToPixelNorm = eyeToPixel / pixelDist;

    // Find the fog starting distance to pixel distance
    float fogDist = max(pixelDist - FogStartDistance, 0.0);

    // Distance based fog intensity
    float fogHeightDensityAtViewer = exp( -FogHeightFalloff * eyePosY );
    float fogDistInt = fogDist * fogHeightDensityAtViewer;

    // Height based fog intensity
    float eyeToPixelY = eyeToPixel.y * ( fogDist / pixelDist );
    float t = FogHeightFalloff * eyeToPixelY;
    const float thresholdT = 0.01;
    float fogHeightInt = abs( t ) > thresholdT ? ( 1.0 - exp( -t ) ) / t : 1.0;

    // Combine both factors to get the final factor
    float fogFinalFactor = exp( -FogGlobalDensity * fogDistInt * fogHeightInt );

    // Find the sun highlight and use it to blend the fog color
    float sunHighlightFactor = saturate(dot(eyeToPixelNorm, FogSunDir));
    sunHighlightFactor = pow(sunHighlightFactor, 8.0);
    float3 fogFinalColor = lerp(FogColor, FogHighlightColor, sunHighlightFactor);

    return lerp(fogFinalColor, originalColor, fogFinalFactor);
}

The ApplyFog function takes the color without fog, along with the camera height and the vector from the camera to the pixel the color belongs to, and returns the pixel color with fog.
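The layout of this cbuffer also explains the ByteWidth of 48 in the descriptor: HLSL packs constants into 16-byte float4 registers, and the six fields above fill exactly three registers (c0 through c2). The following Python sketch of that packing rule is our own illustration, not a D3D API:

```python
def cbuffer_bytewidth(field_sizes_in_floats):
    # HLSL constant packing: each register holds 4 floats, and a field
    # may not straddle a 16-byte register boundary.
    regs, used = 0, 0
    for n in field_sizes_in_floats:
        if used + n > 4:              # would straddle -> start a new register
            regs, used = regs + 1, 0
        used += n
        if used == 4:                 # register is full
            regs, used = regs + 1, 0
    return (regs + (1 if used else 0)) * 16

# float3 + float, three times over (FogColor/FogStartDistance, ...):
print(cbuffer_bytewidth([3, 1, 3, 1, 3, 1]))  # → 48
```

This is also why the fields are ordered float3/float pairs: two adjacent float3 fields would each consume a whole register (32 bytes), wasting the fourth component of each.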
To add fog to the deferred directional light, change the directional entry point to the following code:

float4 DirLightPS(VS_OUTPUT In) : SV_TARGET
{
    // Unpack the GBuffer
    float2 uv = In.Position.xy; // In.UV.xy;
    SURFACE_DATA gbd = UnpackGBuffer_Loc(int3(uv, 0));

    // Convert the data into the material structure
    Material mat;
    MaterialFromGBuffer(gbd, mat);

    // Reconstruct the world position
    float2 cpPos = In.UV.xy * float2(2.0, -2.0) - float2(1.0, -1.0);
    float3 position = CalcWorldPos(cpPos, gbd.LinearDepth);

    // Get the AO value
    float ao = AOTexture.Sample(LinearSampler, In.UV);

    // Calculate the light contribution
    float4 finalColor;
    finalColor.xyz = CalcAmbient(mat.normal, mat.diffuseColor.xyz) * ao;
    finalColor.xyz += CalcDirectional(position, mat);
    finalColor.w = 1.0;

    // Apply the fog to the final color
    float3 eyeToPixel = position - EyePosition;
    finalColor.xyz = ApplyFog(finalColor.xyz, EyePosition.y, eyeToPixel);

    return finalColor;
}

With this change, we apply the fog on top of the lit pixel's color and return it to the light accumulation buffer.
Based on the amount of particles in the air and the distance a ray has to travel, the light reaching our camera may contain more reflection and less of the original ray, which leads to a homogenous color we perceive as fog. The parameters used in the fog calculation are:

FogColor: The fog base color (this color's brightness should match the overall intensity so it won't get blown out by the bloom)
FogStartDistance: The distance from the camera at which the fog starts to blend in
FogHighlightColor: The color used for highlighting pixels whose pixel-to-camera vector is close to parallel with the camera-to-sun vector
FogGlobalDensity: Density factor for the fog (the higher this is, the denser the fog will be)
FogSunDir: Normalized sun direction
FogHeightFalloff: Height falloff value (the higher this value, the lower the height at which the fog disappears will be)

When tuning the fog values, make sure the ambient colors match the fog. This type of fog is designed for outdoor environments, so you should probably disable it when lighting interiors. You may have noticed that the fog requires the sun direction. We already store the inversed sun direction for the directional light calculation. You can remove that value from the directional light constant buffer and use the fog vector instead to avoid the duplicate values. This recipe implements the fog using the exponential function. The reason for using the exponent function is because of its asymptote on the negative side of its graph. Our fog implementation uses that asymptote to blend the fog in from the starting distances. As a reminder, the exponent function graph is as follows:
Both distance values are negated and multiplied by the fog density to be used as the exponent. The reason we negate the distance values is because it's more convenient to use the negative side of the exponential functions graph which is limited to the range 0 to 1. As the function equals 1 when the exponent is 0, we have to invert the results (stored in fogFactors). At this point we have one factor for the height which gets larger the further the ray travels vertically into the fog and a factor that gets larger the further the ray travels in the fog in any direction. By multiplying both factors with each other we get the combined fog effect on the ray: the higher the result is, the more the original ray got absorbed and light got reflected towards the camera in its direction (this is stored in fogFinalFactor). Before we can compute the final color value, we need to find the fog's color based on the camera and sun direction. We assume that the sun intensity is high enough to get more of its light rays reflected towards the camera direction and sun direction are close to parallel. We use the dot product between the two to determine the angle and narrow the result by raising it to the power of 8 (the result is stored in sunHighlightFactor). The result is used to lerp between the fog base color and the fog color highlighted by the sun. Finally, we use the fog factor to linearly interpolate between the input color and the fog color. The resulting color is then returned from the helper function and stored into the light accumulation buffer. As you can see, the changes to the directional light entry point are very minor as most of the work is handled inside the helper function ApplyFog. Adding the fog calculation to the rest of the deferred and forward light sources should be pretty straightforward. One thing to take into consideration is that fog also has to be applied to scene elements that don't get lit, like the sky or emissive elements. 
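The behaviour described above is easy to verify numerically. This Python sketch reproduces the fog-factor part of ApplyFog (not the sun-highlight color blend); the parameter values are arbitrary examples of our own, not values from the recipe:

```python
import math

def fog_final_factor(pixel_dist, eye_y, eye_to_pixel_y,
                     fog_start=10.0, height_falloff=0.02,
                     global_density=0.01):
    # Mirrors ApplyFog's maths: returns the lerp factor
    # (1 = no fog, approaching 0 = fully fogged).
    fog_dist = max(pixel_dist - fog_start, 0.0)
    # Distance-based intensity, attenuated by the viewer's height
    dist_int = fog_dist * math.exp(-height_falloff * eye_y)
    # Height-based intensity, with the small-t guard from the shader
    y_in_fog = eye_to_pixel_y * (fog_dist / pixel_dist)
    t = height_falloff * y_in_fog
    height_int = (1.0 - math.exp(-t)) / t if abs(t) > 0.01 else 1.0
    return math.exp(-global_density * dist_int * height_int)

# The further the ray travels through the fog, the smaller the factor,
# i.e. the less of the original pixel colour survives:
for d in (10.0, 100.0, 1000.0):
    print(d, fog_final_factor(d, eye_y=2.0, eye_to_pixel_y=0.0))
```

Running it shows the factor starting at 1.0 at the fog start distance and decaying exponentially toward 0 as the ray length grows, which is exactly the blend-in behaviour the asymptote provides.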
Again, all you have to do is call ApplyFog to get the final color with the fog effect. Summary In this article, we learned how to apply the fog effect and add atmosphere to our scenes. Resources for Article: Further resources on this subject: Creating and Warping 3D Text with Away3D 3.6 [Article] 3D Vector Drawing and Text with Papervision3D: Part 1 [Article] 3D Vector Drawing and Text with Papervision3D: Part 2 [Article]

Creating a Custom HUD

Packt
29 Apr 2013
5 min read
(For more resources related to this topic, see here.) Mission Briefing In this project we will be creating a HUD that can be used within a Medieval RPG and that will fit nicely into the provided Epic Citadel map, making use of Scaleform and ActionScript 3.0 using Adobe Flash CS6. As usual, we will be following a simple step-by-step process from beginning to end to complete the project. Here is the outline of our tasks: Setting up Flash Creating our HUD Importing Flash files into UDK Setting up Flash Our first step will be setting up Flash in order for us to create our HUD. In order to do this, we must first install the Scaleform Launcher. Prepare for Lift Off At this point, I will assume that you have run Adobe Flash CS6 at least once beforehand. If not, you can skip this section to where we actually import the .swf file into UDK. Alternatively, you can try to use some other way to create a Flash animation, such as FlashDevelop, Flash Builder, or SlickEdit; but that will have to be done on your own. Engage Thrusters The first step will be to install the Scaleform Launcher. The launcher will make it very easy for us to test our Flash content using the GFx hardware-accelerated Flash Player, which is what UDK will use to play it. Let's get started. Open up Adobe Flash CS6 Professional. Once the program starts up, open up Adobe Extension Manager by going to Help | Manage Extensions.... You may see the menu say Performing configuration tasks, please wait.... This is normal; just wait for it to bring up the menu as shown in the following screenshot: Click on the Install option from the top menu on the right-hand side of the screen. In the file browser, locate the path of your UDK installation and then go into the Binaries\GFx\CLIK Tools folder. Once there, select the ScaleformExtensions.mxp file and then select OK. When the agreement comes up, press the Accept button; then select whether you want the program to be installed for just you or everyone on your computer.
If Flash is currently running, you should get a window telling you that the extension will not be ready until you restart the program. Close the manager and restart Flash.

With Flash reopened, start the Scaleform Launcher by clicking on Window | Other Panels | Scaleform Launcher. At this point you should see the Scaleform Launcher panel come up as shown in the following screenshot:

At this point all of the options are grayed out, as the launcher doesn't yet know how to access the GFx player, so let's set that up now. Click on the + button to add a new profile. In the profile name section, type in GFXMediaPlayer. Next, we need to reference the GFx player. Click on the + button in the player EXE section. Go to your UDK directory, then Binaries\GFx, and select GFxMediaPlayerD3d9.exe. It will then ask you to give a name for the Player Name field with the value already filled in; just hit the OK button.

UDK by default uses DirectX 9 for rendering. However, since GDC 2011, it has been possible for users to use DirectX 11. If your project is using 11, feel free to check out http://udn.epicgames.com/Three/DirectX11Rendering.html and use DX11.

In order to test our game, we will need to hit the button that says Test with: GFxMediaPlayerD3d9 as shown in the following screenshot:

If you know the resolution at which you want your final game to run, you can set up multiple profiles to preview how your UI will look at a specific resolution. For example, if you'd like to see something at a resolution of 960 x 720, you can do so by altering the command params field after %SWF PATH% to include the text -res 960:720.

Now that we have the player loaded, we need to install the CLIK library for our usage. Go to the Preferences menu by selecting Edit | Preferences. Click on the ActionScript tab and then click on the ActionScript 3.0 Settings... button. From there, add a new entry to the Source path section by clicking on the + button.
After that, click on the folder icon to browse to the folder we want. Add an additional path to our CLIK directory by first going to your UDK installation directory and then to Development\Flash\AS3\CLIK. Click on the OK button and drag-and-drop the newly created Scaleform Launcher panel to the bottom-right corner of the interface.

Objective Complete - Mini Debriefing

Alright, Flash is now set up for us to work with Scaleform within it, which for all intents and purposes is probably the hardest part of working with Scaleform. Now that we have taken care of it, let's get started on the HUD!

As long as you have administrator access to your computer, these settings will persist whenever you work with Flash. However, if you do not, you will have to run through all of these settings every time you want to work on Scaleform projects.

Packt
06 Mar 2013
9 min read

Creating Virtual Landscapes

(For more resources related to this topic, see here.)

Describing a world in data

Just like modern games, early games like Ant Attack required data that described in some meaningful way how the landscape was to appear. The eerie city landscape of "Antchester" (shown in the following screenshot) was constructed in memory as a 128 x 128 byte grid: the first 128 bytes defined the upper-left wall, the next 128 bytes the row below it, and so on. Each of these bytes described the vertical arrangement of blocks in its lower six bits; the upper two bits were reserved for game sprites used by the game logic.

Heightmaps are common ground

The arrangement of numbers in a grid pattern is still extensively used to represent terrain. We call these grids "maps" and they are popular by virtue of being simple to use and manipulate. A long way from "Antchester", maps can now be measured in megabytes or gigabytes (around 20 GB is needed for the whole earth at 30 meter resolution). Each value in the map represents the height of the terrain at that location. These kinds of maps are known as heightmaps.

However, any information that can be represented in a grid pattern can use maps. Additional maps can be used by 3D engines to tell them how to mix many textures together; this is a common terrain painting technique known as "splatting". Splats describe the amount of blending between texture layers. Another kind of map might be used for lighting, adding light or shadows to an area of the map. In some engines we also find visibility maps, which hide parts of the terrain; for example, we might want to add holes or caves to a landscape. Coverage maps might be used to represent objects such as grasses; different vegetation layers might have some kind of map the engine uses to draw 3D objects onto the terrain surface. GROME allows us to create and edit all of these kinds of maps and export them; with a little bit of manipulation we can port this information into most game engines.
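The Ant Attack byte layout described above can be sketched in a few lines. This is a hypothetical decoder written for illustration (the function and variable names are my own, not from the original game): the lower six bits hold the block arrangement and the upper two bits the sprite flags.

```python
def decode_cell(b):
    """Split one map byte into its height/block bits and sprite bits.

    Lower six bits (0-5): vertical arrangement of blocks (0-63).
    Upper two bits (6-7): reserved for game sprites.
    """
    height = b & 0x3F        # mask off bits 0-5
    sprite = (b >> 6) & 0x3  # shift down bits 6-7
    return height, sprite

def cell_at(grid, row, col, width=128):
    """Look up a cell in the flat 128 x 128 byte grid (row-major order)."""
    return decode_cell(grid[row * width + col])

# A 128 x 128 map stored as one flat byte array, as in Ant Attack
grid = bytearray(128 * 128)
grid[5 * 128 + 7] = 0b10_000011  # sprite flags 2, block bits 3

assert cell_at(grid, 5, 7) == (3, 2)
```

The same masking idea applies to any packed grid format: one byte per cell keeps a 128 x 128 world down to 16 KB, which is why this layout suited 8-bit machines.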
Whatever technique an engine uses to paint the terrain, heightmaps are fairly universal in how they describe topography. The following is an example of a heightmap loaded into an image viewer. It appears as a grayscale image; the intensity of each pixel represents a height value at that location on the map. This map represents a 100 square kilometer area of north-west Afghanistan used in a flight simulation.

GROME, like many other terrain editing tools, uses heightmaps to transport terrain information, typically importing the heightmap as a grayscale image in common file formats such as TIFF, PNG, or BMP. When it's time to export the terrain project you have similar options to save. This commonality is the basis of using GROME as a tool for many different engines. There's nothing to stop you from making changes to an exported heightmap using image editing software. The GROME plugin system and SDK permit you to make your own custom exporter for any unsupported formats. So long as we can deal with the material and texture format requirements of our host 3D engine, we can integrate GROME into the art pipeline.

Texture sizes

Using textures for heightmap information does have limitations. The largest "safe" size for a texture is considered to be 4096 x 4096, although some older 3D cards have problems with anything higher than 2048 x 2048. Also, host 3D engines often require texture dimensions to be a power of 2. The recommended safe texture dimensions are:

64 x 64
128 x 128
256 x 256
512 x 512
1024 x 1024
2048 x 2048
4096 x 4096

512 x 512 provides an efficient trade-off between resolution and performance and is the default value for GROME operations. If you're familiar with this already then great; but you might see questions posted on forums about texture corruption or materials not looking correct, and sometimes these problems are the result of not conforming to this arrangement.
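The power-of-two constraint above is easy to check programmatically before handing a texture to an engine. A small sketch (the function names are mine, purely illustrative):

```python
def is_power_of_two(n):
    """True for 1, 2, 4, 8, ... — the sizes 3D APIs handle most reliably."""
    return n > 0 and (n & (n - 1)) == 0

def check_texture(width, height, max_safe=4096):
    """Return a list of likely problems with the given texture dimensions."""
    problems = []
    if not (is_power_of_two(width) and is_power_of_two(height)):
        problems.append("dimensions are not powers of two")
    if width > max_safe or height > max_safe:
        problems.append("larger than the 4096 x 4096 safe maximum")
    return problems

assert check_texture(512, 512) == []     # GROME's default size: fine
assert check_texture(257, 257)           # UDK-style size: flagged here
assert check_texture(8192, 8192)         # over the safe maximum: flagged
```

Note that a 257 x 257 size is flagged by this check but is deliberately used by UDK, as the article mentions next; the check encodes the general rule, not every engine's exception.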
Also, you'll see these numbers crop up a few times in GROME's drop-down property boxes. To avoid any potential problems it is wise to ensure any textures you use in your projects conform to these specifications. One exception is the Unreal Development Kit (UDK), in which you'll see numbers such as 257 x 257 used.

If you have a huge amount of terrain data to import for a project you can use the texture formats mentioned earlier, but I recommend using RAW formats if possible. If your project is based on real-world topography, then importing DTED or GeoTIFF data will extract geographical information such as latitude, longitude, and the number of arc seconds represented by the terrain.

Digital Terrain Elevation Data (DTED): a file format used by geographers and mappers to record the height of a terrain, often used to import real-world topography into flight simulations. Shuttle Radar Topography Mission (SRTM) data is easily obtained and converted.

The huge world problem

Huge landscapes may require a lot of memory, potentially more than a 3D card can handle. On game consoles memory is a scarce resource; on mobile devices the transfer and storage size of the app is a factor. Even on a cutting-edge PC, large datasets will eat into onboard memory, especially when we get down to designing and building them using high-resolution data. Requesting actions that eat up your system memory may cause the application to fail.

We can use GROME to create vast worlds without worrying too much about memory. This is done by taking advantage of how GROME manages data: it splits terrain into "zones" and swaps them out to disk. This swapping is similar to how operating systems move memory to disk and reload it on demand. By default, whenever you import large DTED files GROME will break the region into multiple zones and hide them. Someone new to GROME might be confused by a lengthy file import operation only to be presented with a seemingly empty project space.
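The memory pressure that motivates zone swapping is easy to quantify. A rough back-of-the-envelope sketch, assuming 32-bit floats per height sample and square zones (the 512-sample zone size here is illustrative, not GROME's actual internal value):

```python
def heightmap_bytes(samples_per_side, bytes_per_sample=4):
    """Raw memory for a square heightmap held fully in RAM."""
    return samples_per_side ** 2 * bytes_per_sample

def zone_count(samples_per_side, zone_size=512):
    """How many square zones a terrain splits into for disk swapping."""
    per_side = -(-samples_per_side // zone_size)  # ceiling division
    return per_side ** 2

# A terrain 16,384 samples per side is 1 GiB of raw height data...
assert heightmap_bytes(16384) == 1024 ** 3
# ...but split into 512 x 512 zones it pages in 1 MiB pieces.
assert zone_count(16384) == 1024
assert heightmap_bytes(512) == 1024 ** 2
```

The point of the arithmetic: doubling the side length quadruples the memory, which is why an editor that swaps zones to disk scales where a keep-everything-resident approach fails.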
When creating terrain for engines such as Unity, UDK, Ogre3D, and others you should keep in mind their own technical limits on what they can reasonably import. Most of these engines are built for small-scale scenes. While GROME doesn't impose any specific unit of measure on your designs, one unit equals one meter is a good rule of thumb, and many third-party models are made to this scale. However, it's up to the artist to pick a unit of scale and, importantly, be consistent. Keep in mind that many 3D engines are limited by two factors:

Floating point math precision
Z-buffer (depth buffer) precision

Floating point precision

As a general rule, anything more than 20,000 units away from the world origin in any direction is going to exhibit precision errors. This manifests as vertex jitter whenever vertices are rotated and transformed by large values. The effects are not something you can easily work around; changing the scale of the object merely shifts the error to another decimal place. Engines that specialize in rendering large worlds normally use either camera-relative rendering or some kind of paging system. Unity and UDK are not inherently capable of camera-relative rendering, but a method of paging is possible to employ.

Depth buffer precision

The other issue associated with large scene rendering is z-fighting. The depth buffer is a normally invisible part of a scene used to determine which part is hidden by another (depth-testing). Whenever a pixel is written to a scene buffer, its z component is saved in the depth buffer. Typically this buffer has 16 bits of precision, giving a depth range of 0 to 65,535. This depth value is based on the 3D camera's view range (the difference between the camera's near and far distances). Z-fighting occurs when co-planar polygons are written into the z-buffer with similar depth values, causing them to "fight" for visibility.
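Both precision limits can be demonstrated in a few lines. The float32 round-trip below mimics what happens to a vertex coordinate stored in single precision on the GPU; the depth function assumes a simple linear 16-bit mapping for illustration (real depth buffers are typically non-linear after projection, so treat the numbers as a sketch of the quantization problem, not of any particular engine):

```python
import struct

def to_float32(x):
    """Round a Python double to 32-bit float precision, as a GPU stores it."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Near the origin, a tiny offset survives...
assert to_float32(2.0 + 0.0001) != 2.0
# ...but 20,000 units out, the same offset vanishes entirely: vertex jitter.
assert to_float32(20000.0 + 0.0001) == 20000.0

def depth_bucket(z, near, far, bits=16):
    """Linear depth value written to a 16-bit buffer (0 .. 65535)."""
    return int((z - near) / (far - near) * (2 ** bits - 1))

# With a 20 km view range, one depth step covers about 30 cm...
step = (20001.0 - 1.0) / 65535
assert 0.30 < step < 0.31
# ...so two surfaces 5 cm apart can land in the same bucket: z-fighting.
assert depth_bucket(9001.0, 1.0, 20001.0) == depth_bucket(9001.05, 1.0, 20001.0)
```

This is also why increasing the near distance helps in practice: shrinking the view range (and, in real non-linear buffers, moving precision away from the near plane) leaves each depth step covering less distance.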
This flickering is an indicator that the scene and camera settings need to be rethought. Often the easy fix is to increase the z-buffer precision by increasing the camera's near distance. The downside is that this can clip very near objects.

GROME will let you create such large worlds; its own Graphite engine handles them well. Most 3D engines are designed for smaller first- and third-person games, which have a practical limit of around 10 to 25 square kilometers (1 meter = 1 unit). GROME can mix levels of detail quite easily; different regions of the terrain can have their own mesh density. If, for example, you have a map of an island, you will want lots of detail for the land and less in the sea region. However, game engines such as Unity, UDK, and Ogre3D are not easily adapted to deal with such variability in the terrain mesh, since they are optimized to render a large triangular grid of uniform size. Instead, we use techniques to fake extra detail and bake it into our terrain textures, dramatically reducing the triangle count in the process. Using a combination of Normal Maps and Mesh Layers in GROME we can create the illusion of more detail than there is at a distance.

Normal map: a normal is a unit vector (a vector with a total length of one) perpendicular to a surface. When a texture is used as a normal map, the red, green, and blue channels represent the vector (x, y, z). These are used to generate the illusion of more detail by creating a bumpy-looking surface. Also known as bump-maps.

Summary

In this article we looked at heightmaps and how they allow us to import and export to other programs and engines. We touched upon world sizes and limitations commonly found in 3D engines.

Resources for Article:

Further resources on this subject:
Photo Manipulation with GIMP 2.6 [Article]
Setting up a BizTalk Server Environment [Article]
Creating and Warping 3D Text with Away3D 3.6 [Article]
Packt
01 Mar 2013
13 min read

Miscellaneous Gameplay Features

(For more resources related to this topic, see here.)

How to have a sprinting player use up energy

Torque 3D's Player class has three main modes of movement over land: sprinting, running, and crouching. Some games are designed to allow a player to sprint as much as they want, but perhaps with other limitations while sprinting; this is the default method of sprinting in the Torque 3D templates. Other game designs allow the player to sprint only in short bursts before the player becomes "tired". In this recipe, we will learn how to set up the Player class such that sprinting uses up a pool of energy that slowly recharges over time; when that energy is depleted, the player is no longer able to sprint.

How to do it...

We are about to modify a PlayerData Datablock instance so that sprinting uses up the player's energy, as follows:

Open your player's Datablock in a text editor, such as Torsion. The Torque 3D templates have the DefaultPlayerData Datablock instance in art/datablocks/player.cs.

Find the sprinting section of the Datablock instance and make the following changes:

   sprintForce = 4320;
   sprintEnergyDrain = 0.6;   // Sprinting now drains energy
   minSprintEnergy = 10;      // Minimum energy to sprint
   maxSprintForwardSpeed = 14;
   maxSprintBackwardSpeed = 8;
   maxSprintSideSpeed = 6;
   sprintStrafeScale = 0.25;
   sprintYawScale = 0.05;
   sprintPitchScale = 0.05;
   sprintCanJump = true;

Start up the game and have the player sprint. Sprinting should now be possible for about 5.5 seconds before the player falls back to a run. If the player stops sprinting for about 7.5 seconds, their energy will be fully recharged and they will be able to sprint again.

How it works...

The maxEnergy property on the PlayerData Datablock instance determines the maximum amount of energy a player has. All of Torque 3D's templates set it to a value of 60. This energy may be used for a number of different activities (such as jet jumping), and even certain weapons may draw from it.
By setting the sprintEnergyDrain property on the PlayerData Datablock instance to a value greater than zero, the player's energy will be drained by that amount every tick (about one thirty-second of a second). When the player's energy reaches zero they may no longer sprint, and revert back to running. Using our previous example, we have a value for the sprintEnergyDrain property of 0.6 units per tick, which works out to 19.2 units per second. Given that our DefaultPlayerData maxEnergy property is 60 units, we should run out of sprint energy in 3.125 seconds. However, we were able to sprint for about 5.5 seconds in our example before running out of energy. Why is this?

A second PlayerData property affects energy use over time: rechargeRate. This property determines how much energy is restored to the player per tick, and is set to 0.256 units in DefaultPlayerData. When we take both the sprintEnergyDrain and rechargeRate properties into account, we end up with an effective drain of (0.6 - 0.256) 0.344 units per tick while sprinting. Assuming the player begins with the maximum amount of energy allowed by DefaultPlayerData, this works out to be (60 units / (0.344 units per tick * 32 ticks per second)) 5.45 seconds.

The final PlayerData property that affects sprinting is minSprintEnergy. This property determines the minimum player energy level required before being able to sprint. When this property is greater than zero, it means that a player may continue to sprint until their energy is zero, but cannot sprint again until they have regained a minSprintEnergy amount of energy.

There's more...

Let's continue our discussion of player sprinting and energy use.

Balance energy drain versus recharge rate

With everything set up as described previously, every tick the player is sprinting, his energy pool will be reduced by the value of the sprintEnergyDrain property and increased by the value of the rechargeRate property.
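The arithmetic above is easy to verify with a quick simulation. This sketch in Python (not TorqueScript; the function name is mine) applies the drain and recharge every tick, using the DefaultPlayerData numbers from the recipe and 32 ticks per second:

```python
def sprint_duration(max_energy=60.0, drain=0.6, recharge=0.256,
                    ticks_per_second=32):
    """Seconds of sprinting before the energy pool empties."""
    energy, ticks = max_energy, 0
    while energy > 0:
        energy -= drain - recharge  # net loss per tick while sprinting
        ticks += 1
    return ticks / ticks_per_second

# Matches the ~5.45 seconds worked out in the text
assert 5.4 < sprint_duration() < 5.5
# With no recharge, the pool drains in roughly 60 / 19.2 = 3.125 seconds
assert 3.1 < sprint_duration(recharge=0.0) <= 3.2
```

Running the same function with candidate drain/recharge pairs is a cheap way to tune sprint length before touching the Datablock; just remember the relationship below — drain must exceed recharge or the loop never terminates.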
This means that in order for the player's energy to actually drain, the sprintEnergyDrain property must be greater than the rechargeRate property. As a player's energy may be used for other gameplay elements (such as jet jumping or weapons fire), we may sometimes forget this relationship while tuning the rechargeRate property, and end up breaking the player's ability to sprint (or making them sprint far too long).

Modifying other sprint limitations

The way the DefaultPlayerData Datablock instance is set up in all of Torque 3D's templates, there are already limitations placed on sprinting without making use of an energy drain. These include not being able to rotate the player as fast as when running, and limited strafing ability. Making sprinting rely on the amount of energy a player has is often enough of a limitation, and the other default limitations may be removed or reduced. In the end it depends on the type of game we are making.

To change how much the player is allowed to rotate while sprinting, we modify the sprintYawScale and sprintPitchScale properties of PlayerData. These two properties represent the fraction of rotation allowed while sprinting compared with running, and each defaults to 0.05.

To change how much the player is allowed to strafe while sprinting, we modify the sprintStrafeScale property of PlayerData. This property is the fraction of the strafing movement allowed while running, and defaults to 0.25.

Disabling sprint

During a game we may want to disable a player's sprinting ability. Perhaps they are too injured, or are carrying too heavy a load. To allow or disallow sprinting for a specific player we call the following Player class method on the server:

   Player.allowSprinting( allow );

In the previous code, the allow parameter is set to true to allow a player the ability to sprint, and to false to not allow a player to sprint at all.
This method is used by the standard weapon mounting system in scripts/server/weapon.cs to disable sprinting. If the ShapeBaseImageData Datablock instance for a weapon has a dynamic property of sprintDisallowed set to true, the player may not sprint while holding that weapon. The DeployableTurretImage Datablock instance makes use of this by not allowing the player to sprint while holding a turret.

Enabling and disabling air control

Air control is a fictitious force used by a number of games that allows a player to control their trajectory while falling or jumping in the air. Instead of just falling or jumping and hoping for the best, this allows the player to change course as necessary, trading realism for playability. We can find this type of control in first-person shooters, platformers, and adventure games. In this recipe we will learn how to enable or disable air control for a player, as well as limit its effect while in use.

How to do it...

We are about to modify a PlayerData Datablock instance to enable complete air control, as follows:

Open your player's Datablock in a text editor, such as Torsion. The Torque 3D templates have the DefaultPlayerData Datablock instance in art/datablocks/player.cs.

Find the section of the Datablock instance that contains the airControl property and make the following change:

   jumpForce = "747";
   jumpEnergyDrain = 0;
   minJumpEnergy = 0;
   jumpDelay = "15";
   // Set to maximum air control
   airControl = 1.0;

Start up the game and jump the player off of a building or a sand dune. While in the air, press one of the standard movement keys: W, A, S, and D. We now have full trajectory control of the player while they are in the air, as if they were running.

How it works...

If the player is not in contact with any surface and is not swimming, the airControl property of PlayerData is multiplied against the player's requested direction of travel. This multiplication only happens along the world's XY plane and does not affect vertical motion.
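The described behavior can be sketched as a small function. This is an illustration in Python, not Torque 3D's actual C++ implementation, and the names are hypothetical; it just captures the rule that airControl scales the requested movement on the XY plane only, and only while airborne:

```python
def apply_air_control(move_request, air_control, on_ground):
    """Scale an (x, y, z) movement request while the player is airborne.

    Only the horizontal plane is affected; vertical motion is left to
    gravity. On the ground the request passes through untouched.
    """
    x, y, z = move_request
    if on_ground:
        return (x, y, z)
    return (x * air_control, y * air_control, z)

# airControl = 0 removes all steering in the air...
assert apply_air_control((1.0, 0.5, 0.0), 0.0, on_ground=False) == (0.0, 0.0, 0.0)
# ...1.0 gives full control, and ground movement is never scaled.
assert apply_air_control((1.0, 0.5, 0.0), 1.0, on_ground=False) == (1.0, 0.5, 0.0)
assert apply_air_control((1.0, 0.5, 0.0), 0.0, on_ground=True) == (1.0, 0.5, 0.0)
```

Seen this way, the note below about values greater than 1 follows directly: the multiplier simply amplifies the horizontal request beyond its run-speed magnitude.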
Setting the airControl property of PlayerData to a value of 0 will disable all air control. Setting the airControl property to a value greater than 1 will cause the player to move faster in the air than they can run.

How to jump jet

In game terms, a jump jet is often a backpack, a helicopter hat, or a similar device that a player wears, that provides a short thrust upwards and often uses up a limited energy source. This allows a player to reach a height they normally could not, jump a canyon, or otherwise get out of danger or reach a reward. In this recipe we will learn how to allow a player to jump jet.

Getting ready

We will be making TorqueScript changes in a project based on the Torque 3D Full template using the Empty Terrain level. If you haven't already, use the Torque Project Manager (Project Manager.exe) to create a new project from the Full template. It will be found under the My Projects directory. Then start up your favorite script editor, such as Torsion, and let's get going!

How to do it...

We are going to modify the player's Datablock instance to allow for jump jetting and adjust how the user triggers the jump jet, as follows:

Open the art/datablocks/player.cs file in your text editor. Find the DefaultPlayerData Datablock instance and, just below the section on jumping and air control, add the following code:

   // Jump jet
   jetJumpForce = 500;
   jetJumpEnergyDrain = 3;
   jetMinJumpEnergy = 10;

Open scripts/main.cs and make the following addition to the onStart() function:

   function onStart()
   {
      // Change the jump jet trigger to match a regular jump
      $player::jumpJetTrigger = 2;

      // The core does initialization which requires some of
      // the preferences to be loaded... so do that first.
      exec( "./client/defaults.cs" );
      exec( "./server/defaults.cs" );

      Parent::onStart();
      echo("\n--------- Initializing Directory: scripts ---------");

      // Load the scripts that start it all...
      exec("./client/init.cs");
      exec("./server/init.cs");

      // Init the physics plugin.
      physicsInit();

      // Start up the audio system.
      sfxStartup();

      // Server gets loaded for all sessions, since clients
      // can host in-game servers.
      initServer();

      // Start up in either client, or dedicated server mode
      if ($Server::Dedicated)
         initDedicated();
      else
         initClient();
   }

Start our Full template game and load the Empty Terrain level. Hold down the Space bar to cause the player to fly straight up for a few seconds. The player will then fall back to the ground. Once the player has regained enough energy it will be possible to jump jet again.

How it works...

The only property that is required for jump jetting to work is the jetJumpForce property of the PlayerData Datablock instance. This property determines the amount of continuous force applied to the player object to send them flying up in the air. It takes some trial and error to determine what force works best.

Other useful Datablock properties are jetJumpEnergyDrain and jetMinJumpEnergy. These two PlayerData properties make jump jetting use up a player's energy. When the energy runs out, the player may no longer jump jet until enough energy has recharged. The jetJumpEnergyDrain property is how much energy per tick is drained from the player's energy pool, and the jetMinJumpEnergy property is the minimum amount of energy the player needs in their energy pool before they can jump jet again. Please see the How to have a sprinting player use up energy recipe for more information on managing a player's energy use.

Another change we made in our previous example is to define which move input trigger number will cause the player to jump jet. This is defined using the global $player::jumpJetTrigger variable. By default, this is set to trigger 1, which is usually the same as the right mouse button. However, all of the Torque 3D templates make use of the right mouse button for view zooming (as defined in scripts/client/default.bind.cs).
In our previous example, we modified the global $player::jumpJetTrigger variable to use trigger 2, which is usually the same trigger as regular jumping, as defined in scripts/client/default.bind.cs:

   function jump(%val)
   {
      // Touch move trigger 2
      $mvTriggerCount2++;
   }

   moveMap.bind( keyboard, space, jump );

This means that we now have jump jetting working off of the same key binding as regular jumping, which is the Space bar. Holding down the Space bar will now cause the player to jump jet, unless they do not have enough energy to do so. Without enough energy, the player will just do a regular jump with their legs.

There's more...

Let's continue our discussion of using a jump jet.

Jump jet animation sequence

If the shape used by the Player object has a Jet animation sequence defined, it will play while the player is jump jetting. This sequence will play instead of all other action sequences. The hierarchy or order of action sequences that the Player class uses to determine which action sequence to play is as follows:

Jump jetting
Falling
Swimming
Running (known internally as the stand pose)
Crouching
Prone
Sprinting

Disabling jump jetting

During a game we may no longer want to allow a player to jump jet. Perhaps they have run out of fuel or they have removed the device that allowed them to jump jet. To allow or disallow jump jetting for a specific player, we call the following Player class method on the server:

   Player.allowJetJumping( allow );

In the previous code, the allow parameter is set to true to allow a player to jump jet, and to false to not allow it at all.

More control over the jump jet

The PlayerData Datablock instance has some additional properties to fine-tune a player's jump jet capability. The first is the jetMaxJumpSpeed property. This property determines the maximum vertical speed at which the player may use their jump jet. If the player is moving upwards faster than this, they may not engage their jump jet.
The second is the jetMinJumpSpeed property. This property is the minimum vertical speed of the player before a speed multiplier is applied. If the player's vertical speed is between jetMinJumpSpeed and jetMaxJumpSpeed, the applied jump jet speed is scaled up by a relative amount. This helps ensure that the jump jet will always make the player move faster than their current speed, even if the player's current vertical speed is the result of some other event (such as being thrown by an explosion).

Summary

These recipes will help you to fully utilize Torque 3D's gameplay features and make your game more interesting and powerful. The tips and tricks mentioned in the recipes will surely help you in making the game more real, more fun to play, and much more intriguing.

Resources for Article:

Further resources on this subject:
Creating and Warping 3D Text with Away3D 3.6 [Article]
Retopology in 3ds Max [Article]
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 1 [Article]

Packt
26 Dec 2012
6 min read

Getting Started

(For more resources related to this topic, see here.)

System requirements

Before we take a look at how to download and install ShiVa3D, it might be a good idea to see if your system will handle it. The minimum requirements for the ShiVa3D editor are as follows:

Microsoft Windows XP and above, or Mac OS with Parallels
Intel Pentium IV 2 GHz or AMD Athlon XP 2600+
512 MB of RAM
3D accelerated graphics card with 64 MB RAM and 1440 x 900 resolution
Network interface

In addition to the minimum requirements, the following suggestions will give you the best experience:

Intel Core Duo 1.8 GHz or AMD Athlon 64 X2 3600+
1024 MB of RAM
Modern 3D accelerated graphics card with 256 MB RAM and 1680 x 1050 resolution
Sound card

Downloading ShiVa3D

Head over to http://www.stonetrip.com and get a copy of ShiVa3D Web Edition. Currently, there is a download link on the home page. Once you get to the Download page, enter your email address and click on the Download button. If everything goes right, you will be prompted for a save location; save it in a place that will be easy to find later. That's it for the download, but you may want to take a second to look around Stonetrip's website. There are links to the documentation, forum, wiki, and news updates. It will be well worth your time to become familiar with the site now, since you will be using it frequently.

Installing ShiVa3D

Assuming your computer meets the minimum requirements, installation should be pretty easy. Simply find the installation file that you downloaded and run it. I recommend sticking with the default settings. If you do have issues getting it installed, it is most likely due to a technical problem, so head on over to the forums, and we will be more than glad to lend a helping hand.

The ShiVa editor

Several different applications were installed, if you accepted the default installation choices.
The only one we are going to worry about for most of this book is the ShiVa Web Edition editor, so go ahead and open it now. By default, ShiVa opens with a project named Samples loaded. You can tell by looking at the lower right-hand quadrant of the screen in the Data Explorer; the root folder is named Samples, as shown in the following screenshot:

This is actually a nice place to start, because there are all sorts of samples that we can play with. We'll come back to those once we have had a chance to make our own game. We will cover the editor in more detail later, but for now it is important to notice that the default layout has four sections: Attributes Editor, Game Editor, Scene Viewer, and Data Explorer. Each of these sections represents a module within the editor. The Data Explorer window, for example, gives us access to all of the resources that can be used in our project, such as materials, models, fonts, and so on.

Creating a project

A project is the way by which we can group games that share the same resources. To create a new project, click on Main | Projects in the upper left-hand corner of the screen. The project window will open, as shown in the following screenshot:

In this window, we can see the Samples project along with its path. The green light next to the name indicates that Samples is the project currently loaded into the editor. If other projects were listed, they would have red lights beside their names. The steps for creating a new project are as follows:

Click on the Add button to create a new project.
Navigate to the location we want for our project, then right-click in the explorer area and select New | Folder.
Name the folder IntroToShiva, highlight the folder, and click on Select. The project window will now show our new project with the green light and the Samples project with a red light.
Click on the Close button to finish.

Notice that the root folder in the Data Explorer window now says IntroToShiva.
Creating a game

Games are exactly what you would think they are, and it's time we created ours. The steps for creating our own game are as follows:

Go to the Game Editor window in the lower left-hand corner and click on Game | Create.
A window will pop up asking for the game name. We will be creating a game in which the player must fly a spaceship through a tunnel or cave and avoid obstacles, so let's call the game CaveRunner.
Click on the OK button and the bottom half of our editor should look like the following screenshot:

Notice that there is now some information displayed in the Game Editor window, and the Data Explorer window shows the CaveRunner game in the Games folder. A game is simply the empty husk of what we are really trying to build. Next, we will begin building out our game by adding a scene.

Making a scene

We can think of a scene as a level in a game; it is the stage upon which we place our objects so that the player can interact with them. We can create a scene by performing the following steps:

Click on Edit | Scene | Create in the Game Editor window.
Name the scene Level1 and click on the OK button.

The new scene is created and opened for immediate use, as shown in the following screenshot:

We can tell Level1 is open because the Game Editor window has switched to the Scenes tab and Level1 now has a green check mark next to it; we can also see a grid in the Scene Viewer window. Additionally, the scene information is displayed in the upper left-hand corner of the Scene Viewer window and the Scene tag says Level1.

So we were able to get a scene created, but it is sadly empty; it's not much of a level in even the worst of games. If we want this game to be worth playing, we had better add something interesting. Let's start by importing a ship.