How-To Tutorials - Game Development

370 Articles

Game Development Using C++

Packt
07 Jun 2016
14 min read
C++ is one of the most popular languages for game development as it supports a variety of coding styles that provide low-level access to the system. In this article by Druhin Mukherjee, author of the book C++ Game Development Cookbook, we will go through the basics of game development using C++.

Creating your first simple game

Creating a simple text-based game is really easy. All we need to do is create some rules and logic and we will have ourselves a game. Of course, as the game gets more complex, we need to add more functions. When the game reaches a point where there are multiple behaviors and states of objects and enemies, we should use classes and inheritance to achieve the desired result.

Getting ready

To work through this recipe, you will need a machine running Windows. You also need to have a working copy of Visual Studio installed on your Windows machine. No other prerequisites are required.

How to do it…

In this recipe, we will learn how to create a simple luck-based lottery game:

Open Visual Studio.
Create a new C++ project.
Select Win32 Console Application.
Add a Source.cpp file.
Add the following lines of code to it:

    #include <iostream>
    #include <cstdlib>
    #include <ctime>

    int main(void)
    {
        srand(time(NULL)); // To not have the same numbers over and over again.

        while (true) { // Main loop.
            // Initialize and allocate.
            int inumber = rand() % 100 + 1; // System number is stored in here.
            int iguess;                     // User guess is stored in here.
            int itries = 0;                 // Number of tries is stored here.
            char canswer;                   // User answer to question is stored here.

            while (true) { // Get user number loop.
                // Get number.
                std::cout << "Enter a number between 1 and 100 (" << 20 - itries << " tries left): ";
                std::cin >> iguess;
                std::cin.ignore();

                // Check if tries are used up.
                if (itries >= 20) {
                    break;
                }

                // Check number.
                if (iguess > inumber) {
                    std::cout << "Too high! Try again.\n";
                } else if (iguess < inumber) {
                    std::cout << "Too low! Try again.\n";
                } else {
                    break;
                }

                // If not the right number, increment tries.
                itries++;
            }

            // Check for tries.
            if (itries >= 20) {
                std::cout << "You ran out of tries!\n\n";
            } else {
                // Or, the user won.
                std::cout << "Congratulations!! " << std::endl;
                std::cout << "You got the right number in " << itries << " tries!\n";
            }

            while (true) { // Loop to ask the user whether he/she would like to play again.
                // Get user response.
                std::cout << "Would you like to play again (Y/N)? ";
                std::cin >> canswer;
                std::cin.ignore();

                // Check if proper response.
                if (canswer == 'n' || canswer == 'N' || canswer == 'y' || canswer == 'Y') {
                    break;
                } else {
                    std::cout << "Please enter 'Y' or 'N'...\n";
                }
            }

            // Check the user's input and run again or exit.
            if (canswer == 'n' || canswer == 'N') {
                std::cout << "Thank you for playing!";
                break;
            } else {
                std::cout << "\n\n\n";
            }
        }

        // Safely exit.
        std::cout << "\n\nEnter anything to exit. . . ";
        std::cin.ignore();
        return 0;
    }

How it works…

The game works by creating a random number from 1 to 100 and asking the user to guess that number. Hints are provided as to whether the guess is higher or lower than the actual number. The user is given just 20 tries to guess the number. We first need to seed the pseudo-random number generator, which is what the call to srand does; we pass it the current time so that each run produces a different sequence of numbers.
We need to execute the program in an infinite loop so that it breaks only when all tries are used up or when the user correctly guesses the number. We can set a variable for the tries and increment it for every guess the user takes. The random number is generated by the rand function. We use rand() % 100 + 1 so that the random number is in the range 1 to 100. We ask the user to input the guessed number and then we check whether that number is less than, greater than, or equal to the randomly generated number. We then display the correct message. If the user has guessed correctly, or all tries have been used, the program should break out of the main loop. At this point, we ask the user whether they want to play the game again. Then, depending on the answer, we go back into the main loop and start the process of selecting a random number.

Creating your first window

Creating a window is the first step in Windows programming. All our sprites and other objects will be drawn on top of this window. There is a standard way of drawing a window, so this part of the code will be repeated in all programs that use Windows programming to draw something.

Getting ready

You need to have a working copy of Visual Studio installed on your Windows machine.

How to do it…

In this recipe, we will find out how easy it is to create a window:

Open Visual Studio.
Create a new C++ project.
Select a Win32 Windows application.
Add a source file called Source.cpp.
Add the following lines of code to it:

    #define WIN32_LEAN_AND_MEAN

    #include <windows.h>   // Include all the Windows headers.
    #include <windowsx.h>  // Include useful macros.
    #include "resource.h"

    #define WINDOW_CLASS_NAME L"WINCLASS1"

    void GameLoop()
    {
        // One frame of game logic occurs here...
    }

    LRESULT CALLBACK WindowProc(HWND _hwnd, UINT _msg, WPARAM _wparam, LPARAM _lparam)
    {
        // This is the main message handler of the system.
        PAINTSTRUCT ps; // Used in WM_PAINT.
        HDC hdc;        // Handle to a device context.

        // What is the message?
        switch (_msg)
        {
        case WM_CREATE:
        {
            // Do initialization stuff here.

            // Return success.
            return (0);
        } break;

        case WM_PAINT:
        {
            // Simply validate the window.
            hdc = BeginPaint(_hwnd, &ps);

            // You would do all your painting here...

            EndPaint(_hwnd, &ps);

            // Return success.
            return (0);
        } break;

        case WM_DESTROY:
        {
            // Kill the application; this sends a WM_QUIT message.
            PostQuitMessage(0);

            // Return success.
            return (0);
        } break;

        default:
            break;
        } // End switch.

        // Process any messages that we did not take care of...
        return (DefWindowProc(_hwnd, _msg, _wparam, _lparam));
    }

    int WINAPI WinMain(HINSTANCE _hInstance, HINSTANCE _hPrevInstance, LPSTR _lpCmdLine, int _nCmdShow)
    {
        WNDCLASSEX winclass; // This will hold the class we create.
        HWND hwnd;           // Generic window handle.
        MSG msg;             // Generic message.

        HCURSOR hCrosshair = LoadCursor(_hInstance, MAKEINTRESOURCE(IDC_CURSOR2));

        // First fill in the window class structure.
        winclass.cbSize = sizeof(WNDCLASSEX);
        winclass.style = CS_DBLCLKS | CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
        winclass.lpfnWndProc = WindowProc;
        winclass.cbClsExtra = 0;
        winclass.cbWndExtra = 0;
        winclass.hInstance = _hInstance;
        winclass.hIcon = LoadIcon(NULL, IDI_APPLICATION);
        winclass.hCursor = LoadCursor(_hInstance, MAKEINTRESOURCE(IDC_CURSOR2));
        winclass.hbrBackground = static_cast<HBRUSH>(GetStockObject(WHITE_BRUSH));
        winclass.lpszMenuName = NULL;
        winclass.lpszClassName = WINDOW_CLASS_NAME;
        winclass.hIconSm = LoadIcon(NULL, IDI_APPLICATION);

        // Register the window class.
        if (!RegisterClassEx(&winclass))
        {
            return (0);
        }

        // Create the window.
        hwnd = CreateWindowEx(NULL,            // Extended style.
            WINDOW_CLASS_NAME,                 // Class.
            L"Packt Publishing",               // Title.
            WS_OVERLAPPEDWINDOW | WS_VISIBLE,
            0, 0,                              // Initial x, y.
            400, 400,                          // Initial width, height.
            NULL,                              // Handle to parent.
            NULL,                              // Handle to menu.
            _hInstance,                        // Instance of this application.
            NULL);                             // Extra creation parameters.

        if (!(hwnd))
        {
            return (0);
        }

        // Enter the main event loop.
        while (true)
        {
            // Test if there is a message in the queue; if so, get it.
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                // Test if this is a quit.
                if (msg.message == WM_QUIT)
                {
                    break;
                }

                // Translate any accelerator keys.
                TranslateMessage(&msg);

                // Send the message to the window proc.
                DispatchMessage(&msg);
            }

            // Main game processing goes here.
            GameLoop(); // One frame of game logic occurs here...
        }

        // Return to Windows like this...
        return (static_cast<int>(msg.wParam));
    }

How it works…

In this example, we have used the standard Windows API callback. We query the message parameter that is passed and, based on that, we intercept it and perform suitable actions. We have used the WM_PAINT message to paint the window for us and the WM_DESTROY message to destroy the current window. To paint the window, we need a handle to the device context and then we can use BeginPaint and EndPaint appropriately. In the main function, we need to fill in the window class structure and specify the current cursor and icons that need to be loaded. Here, we can specify what color brush we are going to use to paint the window. Finally, the size of the window is specified and the window class is registered. After that, we need to continuously peek at messages, translate them, and finally dispatch them to the window procedure.

Adding artificial intelligence to a game

Adding artificial intelligence to a game may be easy or extremely difficult, based on the level of realism or complexity we are trying to achieve. In this recipe, we will start with the basics of adding artificial intelligence.

Getting ready

To work through this recipe, you will need a machine running Windows and a version of Visual Studio. No other prerequisites are required.

How to do it…

In this recipe, we will see how easy it is to add basic artificial intelligence to a game:

Add a source file called Source.cpp.
Add the following code to it:

    // Basic AI: keyword identification

    #include <iostream>
    #include <string>

    std::string arr[] = { "Hello, what is your name ?", "My name is Siri" };

    int main()
    {
        std::string UserResponse;

        std::cout << "Enter your question? ";
        std::cin >> UserResponse;

        if (UserResponse == "Hi")
        {
            std::cout << arr[0] << std::endl;
            std::cout << arr[1];
        }

        int a;
        std::cin >> a;
        return 0;
    }

How it works…

In the previous example, we are using a string array to store responses. The idea of the software is to create an intelligent chat bot that can reply to questions asked by users and interact with them as if it were human. Hence the first task was to create an array of responses. The next thing to do is to ask the user for a question. In this example, we are searching for a basic keyword called Hi and, based on that, we are displaying the appropriate answer. Of course, this is a very basic implementation. Ideally we would have a list of keywords and responses, and a reply would be triggered whenever one of the keywords is detected. We can even personalize this by asking the user for their name and then appending it to the answer every time. The user may also ask to search for something. That is actually quite an easy thing to do. If we have correctly detected the word that the user wants to search for, we just need to enter it into the search engine, and whatever result the page displays, we can report back to the user. We can also use voice commands to enter the questions and give the responses. In this case, we would also need to implement some kind of NLP (Natural Language Processing). After the voice command is correctly identified, all the other processes are exactly the same.
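The keyword-and-response list mentioned above can be sketched with a std::map. The following is only an illustration and not part of the book's recipe; the keywords and replies here are invented:

    #include <iostream>
    #include <map>
    #include <string>

    int main()
    {
        // Hypothetical keyword -> response table (illustrative values only).
        std::map<std::string, std::string> responses = {
            { "Hi",   "Hello, what is your name?" },
            { "name", "My name is Siri" },
            { "Bye",  "Goodbye!" }
        };

        std::string question;
        std::cout << "Enter your question? ";
        std::getline(std::cin, question);

        // Reply to the first known keyword that appears anywhere in the question.
        for (const auto& entry : responses)
        {
            if (question.find(entry.first) != std::string::npos)
            {
                std::cout << entry.second << std::endl;
                break;
            }
        }

        return 0;
    }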
"; std::cin>>UserResponse;   if (UserResponse == "Hi")   { std::cout<<arr[0] <<std::endl; std::cout<<arr[1];   }     int a; std::cin>> a; return 0;   } How it works… In the previous example, we are using a string array to store a response. The idea of the software is to create an intelligent chat bot that can reply to questions asked by users and interact with them as if it were human. Hence the first task was to create an array of responses. The next thing to do is to ask the user for the question. In this example, we are searching for a basic keyword called Hi and, based on that, we are displaying the appropriate answer.  Of course, this is a very basic implementation. Ideally we would have a list of keywords and responses when either of the keywords is triggered. We can even personalize this by asking the user for their name and then appending it to the answer every time. The user may also ask to search for something. That is actually quite an easy thing to do. If we have detected the word that the user is longing to search for correctly, we just need to enter that into the search engine. Whatever result the page displays, we can report it back to the user. We can also use voice commands to enter the questions and give the responses. In this case, we would also need to implement some kind of NLP (Natural Language Processing). After the voice command is correctly identified, all the other processes are exactly the same. Using the const keyword to optimize your code Aconst keyword is used to make data or a pointer constant so that we cannot change the value or address, respectively. There is one more advantage of using the const keyword. This is particularly useful in the object-oriented paradigm. Getting ready For this recipe, you will need a Windows machine and an installed version of Visual Studio. How to do it… In this recipe, we will find out how easy it is to use the const keyword effectively: #include <iostream>   class A { public:   voidCalc()const   { Add(a, b);     //a = 9;       // Not Allowed   } A()   {     a = 10;     b = 10;     } private:   int a, b; void Add(int a, int b)const   {   std::cout<< a + b <<std::endl;   } };   int main() {     A _a;   _a.Calc();   int a; std::cin>> a;   return 0; } How it works… In this example, we are writing a simple application to add two numbers. The first function is a public function. This mean that it is exposed to other classes. Whenever we write public functions, we must ensure that they are not harming any private data of that class. As an example, if the public function was to return the values of the member variables or change the values, then this public function is very risky. Therefore, we must ensure that the function cannot modify any member variables by adding the const keyword at the end of the function. This ensures that the function is not allowed to change any member variables. If we try to assign a different value to the member, we will get a compiler error: errorC3490: 'a' cannot be modified because it is being accessed through a const object. So this makes the code more secure. However, there is another problem. This public function internally calls another private function. What if this private function modifies the values of the member variables? Again, we will be at the same risk. As a result, C++ does not allow us to call that function unless it has the same signature of const at the end of the function. This is to ensure that the function cannot change the values of the member variables. 
Summary

C++ is still used as the game programming language of choice by many as it gives game programmers control of the entire architecture, including memory patterns and usage. For detailed game development using C++ you can refer to:

https://www.packtpub.com/game-development/procedural-content-generation-c-game-development
https://www.packtpub.com/game-development/learning-c-creating-games-ue4

Bang Bang – Let's Make It Explode

Packt
23 May 2016
13 min read
In this article by Justin Plowman, author of the book 3D Game Design with Unreal Engine 4 and Blender, we continue building our game. We started with a basic level that, when it comes right down to it, is simply two rooms connected by a hallway with a simple crate. From humble beginnings our game has grown, as have our skills. Our simple cargo ship leads the player to a larger space station level. This level includes scripted events to move the story along and a game asset that looks great and animates. However, we are not done. How do we end our journey? We blow things up, that's how! In this article we will cover the following topics:

Using class blueprints to bring it all together
Creating an explosion using sound effects
Adding particle effects

Creating a class blueprint to tie it all together

We begin with the first step to any type of digital destruction: creation. We have created a disturbing piece of ancient technology. The Artifact stands as a long forgotten terror weapon of another age, somehow brought forth by an unknown power. But we know the truth. That unknown power is us, and we are about to import all that we need to implement the Artifact on the deck of our space station. Players beware! Take a look at the end result.

To get started we will need to import the Artifact body, the Tentacle, and all of the texture maps from Substance Painter. Let's start with exporting the main body of the Artifact:

In Blender, open our file with the complete Artifact. The FBX file format will allow us to export both the completed 3D model and the animations we created, all in a single file.
Select the Artifact only. Since it is now bound to the skeleton we created, the bones and the geometry should all be one piece.
Now press Alt+S to reset the scale of our game asset. Doing this will make sure that we won't have any weird scaling problems when we import the Artifact into Unreal.
Head to the File menu and select Export. Choose FBX as our file format.
On the first tab of the export menu, select the check box for Selected Objects. This will make sure that we get just the Artifact and not the Tentacle.
On the Geometries tab, change the Smoothing option to Faces.
Name your file and click Export!

Alright, we now have the latest version of the Artifact exported out as an FBX. Time to bring up Unreal. Open the game engine and load our space station level. It's been a while since we've taken a look at it and there is no doubt in my mind that you've probably thought of improvements and new sections you would love to add. Don't forget them! Just set them aside for now. Once we get our game assets in there and make them explode, you will have plenty of time to add on. Time to import into Unreal!

Before we begin importing our pieces, let's create a folder to hold our custom assets. Click on the Content folder in the Content Browser and then right click. At the top of the menu that appears, select New Folder and name it CustomAssets. It's very important not to use spaces or special characters (besides the underscore).
Select our new folder and click Import. Select the Artifact FBX file.
At the top of the Import menu, make sure Import as Skeletal and Import Mesh are selected. Now click the small arrow at the bottom of the section to open the advanced options.
Lastly, turn on the check box to tell Unreal to use T0 As Reference Pose.
A Reference Pose is the starting point for any animations associated with a skeletal mesh. Next, take a look at the Animation section of the menu. Turn on Import Animations to tell Unreal to bring in our open animation for the Artifact. Once all that is done, it's time to click Import!

Unreal will create a Skeletal Mesh, an Animation, a Physics Asset, and a Skeleton asset for the Artifact. Together, these pieces make up a fully functioning skeletal mesh that can be used within our game. Take a moment and repeat the process for the Tentacle, again being careful to export only selected objects from Blender.

Next, we need to import all of our texture maps from Substance Painter. Locate the export folder for Substance Painter. By default it is located in the Documents folder, under Substance Painter, and finally under Export. Head back into Unreal and then bring up the Export folder from your computer's task bar. Click and drag each of the texture maps we need into the Content Browser. Unreal will import them automatically.

Time to set them all up as a usable material! Right click in the Content Browser and select Material from the Create Basic Asset section of the menu. Name the material Artifact_MAT. This will open a Material Editor window. Creating Materials and Shaders for video games is an art form all its own. Here I will talk about creating Materials in basic terms, but I would encourage you to check out the Unreal documentation, open up some of the existing materials in the Starter Content folder, and begin exploring this highly versatile tool.

So we need to add our textures to our new Material. An easy way to add texture maps to any Material is to click and drag them from the Content Browser into the Material Editor. This will create a node called a Texture Sample which can plug into the different sockets on the main Material node. Now to plug in each map. Drag a wire from each of the white connections on the right side of each Texture Sample to its appropriate slot on the main Material node. The Metallic and Roughness texture sample will be plugged into two slots on the main node.

Let's preview the result. Back in the Content Browser, select the Artifact. Then in the Preview Window of the Material Editor, select the small button on the far right that reads Set the Preview Mesh based on the current Content Browser selection. The material has come out just a bit too shiny. The large amount of shine given off by the material is called the specular highlight and is controlled by the Specular connection on the main Material node. If we check the documentation, we can see that this part of the node accepts a value between 0 and 1. How might we do this? Well, the Material Editor has a Constant node that allows us to input a number and then plug that in wherever we may need it. This will work perfectly! Search for a Constant in the search box of the Palette, located on the right side of the Material Editor. Drag it into the area with your other nodes and head over to the Details panel. In the Value field, try different values between 0 and 1 and preview the result. I ended up using 0.1. Save your changes.

Time to try it out on our Artifact! Double click on the Skeletal Mesh to open the Skeletal Mesh editor window. On the left hand side, look for the LOD0 section of the menu. This section has an option to add a material. Head back to the Content Browser and select our Artifact_MAT material.
Now select the small arrow in the LOD0 box to apply the selection to the Artifact. How does it look? Too shiny? Not shiny enough? Feel free to adjust our Constant node in the material until you are able to get the result you want. When you are happy, repeat the process for the Tentacle. You will import it as a static mesh (since it doesn't have any animations) and create a material for it made out of the texture maps you created in Substance Painter.

Now we will use a Class Blueprint for final assembly. Class Blueprints are a form of standalone Blueprint that allows us to combine art assets with programming in an easy and, most importantly, reusable package. For example, the player is a class blueprint as it combines the player character skeletal mesh with blueprint code to help the player move around. So how and when might we use class blueprints versus just putting the code in the level blueprint? The level blueprint is great for anything that is specific to just that level. Such things would include volcanoes on a lava based level or spaceships in the background of our space station level. Class blueprints work great for building objects that are self-contained and repeatable, such as doors, enemies, or power ups. These types of items would be used frequently and would have a place in several levels of a game.

Let's create a class blueprint for the Artifact:

Click on the Blueprints button and select the New Empty Blueprints Tab. This will open the Pick Parent Class menu. Since we are creating a prop and not something that the player needs to control directly, select the Actor Parent Class.
The next screen will ask us to name our new class and for a location to save it. I chose to save it in my CustomAssets folder and named it Artifact_Blueprint.

Welcome to the Class Blueprint Editor. Similar to other editor windows within Unreal, the Class Blueprint editor has both a Details panel and a Palette. However, there is a panel that is new to us. The Components panel contains a list of the art that makes up a class blueprint. These components are various pieces that make up the whole object. For our Artifact, this would include the main piece itself, any number of tentacles, and a collision box. Other components that can be added include particle emitters, audio, and even lights.

Let's add the Artifact:

In the Components section, click the Add Component button and select Skeletal Mesh from the drop down list. You can find it in the Common section. This adds a blank Skeletal Mesh to the Viewport and the Components list.
With it selected, check out the Details panel. In the Mesh section is an area to assign the skeletal mesh you wish it to be. Back in the Content Browser, select the Artifact. Lastly, back in the Details panel of the Blueprint Editor, click the small arrow next to the Skeletal Mesh option to assign the Artifact. It should now appear in the viewport.
Back to the Components list. Let's add a Collision Box. Click Add Component and select Collision Box from the Collision section of the menu. Click it and in the Details panel, increase the Box Extents to a size that would allow the player to enter within its bounds. I used 180 for x, y, and z.
Repeat the last few steps and add the Tentacles to the Artifact using the Add Component menu. We will use the Static Mesh option. The design calls for 3, but add more if you like.

Time to give this class blueprint a bit of programming. We want the player to be able to walk up to the Artifact and press the E key to open it.
We used a Gate to control the flow of information through the blueprint. However, Gates don't function the same within class blueprints, so we require a slightly different approach.

The first step in the process is to use the Enable Input and Disable Input nodes to allow the player to use input keys when they are within our box collision. Using the search box located within our Palette, grab an Enable Input and a Disable Input.
Now we need to add our trigger events. Click on the Box variable within the Variable section of the My Blueprint panel. This changes the Details panel to display a list of all the Events that can be created for this component. Click the + button next to the OnComponentBeginOverlap and the OnComponentEndOverlap events.
Connect the OnComponentBeginOverlap event to the Enable Input node and the OnComponentEndOverlap event to the Disable Input node.
Next, create an event for the player pressing the E key by searching for it and dragging it in from the Palette.
To that, we will add a Do Once node. This node works similarly to a Gate in that it restricts the flow of information through the network, but it does allow the action to happen once before closing. This will make it so the player can press E to open the Artifact but the animation will only play once. Without it, a player can press E as many times as they want, playing the animation over and over again. It's fun for a while since it makes it look like a mouth trying to eat you, but it's not our original intention (I might have spent some time pressing it repeatedly and laughing hysterically). Do Once can be easily found in the Palette.
Lastly, we will need a Play Animation node. There are two versions, so be sure to grab this node from the Skeletal Mesh section of your search so that its target is Skeletal Mesh Component.
Connect the E input event to the Do Once node and the Do Once node to the Play Animation node.

One last thing to complete this sequence. We need to set the target and animation to play on the Play Animation node. The target will be our Skeletal Mesh component. Click on the Artifact component in the Components list, drag it into the Blueprint window, and plug it into the Target on our Play Animation. Lastly, click the drop down under the New Anim to Play option on the Play Animation node and select our animation of the Artifact opening.

We're done! Let's save all of our files and test this out. Drag the Artifact into our Space Station and position it in the Importer/Export Broker's shop. Build the level and then drop in and test. Did it open? Does it need more tentacles? Debug and refine it until it is exactly what you want.

Summary

This article provides an overview of using class blueprints, creating an explosion, and adding particle effects.

First Person Shooter Part 1 – Creating Exterior Environments

Packt
19 May 2016
13 min read
In this article by John P. Doran, the author of the book Unity 5.x Game Development Blueprints, we will be creating a first-person shooter; however, instead of shooting a gun to damage our enemies, we will be shooting a picture in a survival horror environment, similar to the Fatal Frame series of games and the recent indie title DreadOut. To get started on our project, we're first going to look at creating our level or, in this case, our environments, starting with the exterior.

In the game industry, there are two main roles in level creation: an environment artist and a level designer. An environment artist is a person who builds the assets that go into the environment. He/she uses tools such as 3ds Max or Maya to create the model and then uses other tools such as Photoshop to create textures and normal maps. The level designer is responsible for taking the assets that the environment artist created and assembling them in an environment for players to enjoy. He/she designs the gameplay elements, creates the scripted events, and tests the gameplay. Typically, a level designer will create environments through a combination of scripting and using a tool that may or may not be in development as the game is being made. In our case, that tool is Unity.

One important thing to note is that most companies have their own definition for different roles. In some companies, a level designer may need to create assets and an environment artist may need to create a level layout. There are also some places that hire someone to just do lighting or just to place meshes (called a mesher) because they're so good at it.

Project overview

In this article, we take on the role of an environment artist who has been tasked to create an outdoor environment. We will use assets that I've placed in the example code as well as assets already provided to us by Unity for mesh placement. In addition, you will also learn some beginner level design.

Your objectives

This project will be split into a number of tasks. It will be a simple step-by-step process from beginning to end. Here is the outline of our tasks:

Creating the exterior environment—terrain
Beautifying the environment—adding water, trees, and grass
Building the atmosphere
Designing the level layout and background

Project setup

At this point, I assume that you have a fresh installation of Unity and have started it. You can perform the following steps:

With Unity started, navigate to File | New Project.
Select a project location of your choice somewhere on your hard drive and ensure that you have Setup defaults for set to 3D.
Then, put in a Project name (I used First Person Shooter). Once completed, click on Create project.
Here, if you see the Welcome to Unity popup, feel free to close it as we won't be using it.

Level design 101 – planning

Now, just because we are going to be diving straight into Unity, I feel that it's important to talk a little more about how level design is done in the game industry. Although you may think a level designer will just jump into the editor and start playing, the truth is that you normally would need to do a ton of planning ahead of time before you even open up your tool.

In general, a level begins with an idea. This can come from anything; maybe you saw a really cool building, or a photo on the Internet gave you a certain feeling; maybe you want to teach the player a new mechanic. Turning this idea into a level is what a level designer does.
Taking all of these ideas, the level designer will create a level design document, which will outline exactly what you're trying to achieve with the entire level from start to end. A level design document will describe everything inside the level, listing all of the possible encounters, puzzles, and so on that the player will need to complete, as well as any side quests that the player will be able to achieve. To prepare for this, you should include as many references as you can with maps, images, and movies similar to what you're trying to achieve. If you're working with a team, making this document available on a website or wiki will be a great asset so that you know exactly what is being done in the level, what the team can use in their levels, and how difficult their encounters can be. In general, you'll also want a top-down layout of your level done either on a computer or with graph paper, with a line showing a player's general route for the level with encounters and missions planned out.

Of course, you don't want to be too tied down to your design document, and it will change as you playtest and work on the level, but the documentation process will help solidify your ideas and give you a firm basis to work from.

For those of you interested in seeing some level design documents, feel free to check out Adam Reynolds (Level Designer on Homefront and Call of Duty: World at War) at http://wiki.modsrepository.com/index.php?title=Level_Design:_Level_Design_Document_Example.

If you want to learn more about level design, I'm a big fan of Beginning Game Level Design, John Feil (previously my teacher) and Marc Scattergood, Cengage Learning PTR. For more of an introduction to all of game design from scratch, check out Level Up!: The Guide to Great Video Game Design, Scott Rogers, Wiley and The Art of Game Design, Jesse Schell, CRC Press. For some online resources, Scott has a neat GDC talk named Everything I Learned About Level Design I Learned from Disneyland, which can be found at http://mrbossdesign.blogspot.com/2009/03/everything-i-learned-about-game-design.html, and World of Level Design (http://worldofleveldesign.com/) is a good source for learning about level design, though it does not talk about Unity specifically.

Introduction to terrain

Terrain is basically used for non-manmade ground: things such as hills, deserts, and mountains. Unity's way of dealing with terrain is different from what most engines use, in that there are two ways to make terrains: one is using a height map and the other is sculpting from scratch.

Height maps

Height maps are a common way for game engines to support terrains. Rather than creating tools to build a terrain within the level, you use a piece of graphics software to create an image, and then that image is translated into a terrain using its grayscale values, which map to different height levels, hence the name height map. The darker an area is, the lower its height, so in this instance, black represents the terrain's lowest areas, whereas white represents the highest. The Terrain's Terrain Height property sets how high white actually is compared with black. In order to apply a height map to a terrain object, inside an object's Terrain component, click on the Settings button and scroll down to Import Raw….

For more information on Unity's Height tools, check out http://docs.unity3d.com/Manual/terrain-Height.html. If you want to learn more about creating your own height maps using Photoshop, this tutorial is for UDK, but the Photoshop portion is the same: http://worldofleveldesign.com/categories/udk/udk-landscape-heightmaps-photoshop-clouds-filter.php. Others also use software such as Terragen to create height maps. More information on that is at http://planetside.co.uk/products/terragen3.
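To make the grayscale-to-height idea concrete, here is a small, hypothetical C# sketch that applies a readable grayscale Texture2D as a heightmap at runtime through the TerrainData API. The script and field names are assumptions, and this is an alternative to the Import Raw… button described above, not part of the book's steps:

    using UnityEngine;

    public class HeightmapLoader : MonoBehaviour
    {
        public Texture2D heightMap; // Must have Read/Write enabled in its import settings.
        public Terrain terrain;

        void Start()
        {
            TerrainData data = terrain.terrainData;
            int resolution = data.heightmapResolution;
            float[,] heights = new float[resolution, resolution];

            for (int y = 0; y < resolution; y++)
            {
                for (int x = 0; x < resolution; x++)
                {
                    // Sample the texture in UV space; darker pixels produce lower terrain.
                    Color pixel = heightMap.GetPixelBilinear((float)x / resolution, (float)y / resolution);
                    heights[y, x] = pixel.grayscale; // 0..1, scaled by the Terrain Height setting.
                }
            }

            data.SetHeights(0, 0, heights);
        }
    }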
Exterior environment – terrain

When creating exterior environments, we cannot use straight floors for the most part, unless you're creating a highly urbanized area. Our game takes place in a haunted house in the middle of nowhere, so we're going to create a natural landscape. In Unity, the best tool to use to create a natural landscape is the Terrain tool. Unity's Terrain system lets us add landscapes, complete with bushes, trees, and fading materials, to our game.

To show how easy it is to use the terrain tool, let's get started. The first thing that we're going to do is actually create the terrain we'll be placing for the world. Let's first create a Terrain by selecting GameObject | 3D Object | Terrain. At this point, you should see the terrain on the screen. If for some reason you have problems seeing the terrain object, go to the Hierarchy tab and double-click on the Terrain object to focus your camera on it and move in as needed. Right now, it's just a flat plane, but we'll be doing a lot to it to make it shine.

If you look to the right with the Terrain object selected, you'll see the Terrain editing tools, which do the following (from left to right):

Raise/Lower Height—This will allow us to raise or lower the height of our terrain in a certain radius to create hills, rivers, and more.
Paint Height—If you already know exactly the height that a part of your terrain needs to be, this tool will allow you to paint a spot to that height.
Smooth Height—This averages out the area that it is in, attempts to smooth out areas, and reduces the appearance of abrupt changes.
Paint Texture—This allows us to add textures to the surface of our terrain. One of the nice features of this is the ability to lay multiple textures on top of each other.
Place Trees—This allows us to paint objects in our environment that will appear on the surface. Unity attempts to optimize these objects by billboarding distant trees so we can have dense forests without a horrible frame rate. By billboarding, I mean that the object will be simplified and its direction usually changes constantly as the object and camera move, so it always faces the camera.
Paint Details—In addition to trees, you can also have small things like rocks or grass covering the surface of your environment, using 2D images to represent individual clumps with bits of randomization to make it appear more natural.
Terrain Settings—Settings that affect the overall properties of the particular Terrain; options such as the size of the terrain and wind can be found here.

By default, the entire Terrain is set to be at the bottom, but we want to have ground above us and below us so we can add in things like lakes. With the Terrain object selected, click on the second button from the left on the Terrain component (Paint height mode). From there, set the Height value under Settings to 100 and then press the Flatten button. At this point, you should notice the plane moving up, so now everything sits above the terrain's lowest level by default.

Next, we are going to create some interesting shapes in our world with some hills by "painting" on the surface.
With the Terrain object selected, click on the first button on the left of our Terrain component (the Raise/Lower Terrain mode). Once this is completed, you should see a number of different brushes and shapes that you can select from. Our use of terrain is to create hills in the background of our scene, so it does not seem like the world is completely flat. Under the Settings, change the Brush Size and Opacity of your brush to 100 and left-click around the edges of the world to create some hills. You can increase the height of the current hills if you click on top of a previous hill.

When creating hills, it's a good idea to look at them from multiple angles while you're building them, so you can make sure that none are too high or too low. In general, you want to have taller hills as you go further back, or else you cannot see the smaller ones since they're blocked. In the Scene view, to move your camera around, you can use the toolbar at the top-right corner or hold down the right mouse button and drag it in the direction you want the camera to move around in, pressing the W, A, S, and D keys to pan. In addition, you can hold down the middle mouse button and drag it to move the camera around. The mouse wheel can be scrolled to zoom in and out from where the camera is.

Even though you should plan out the level ahead of time on something like a piece of graph paper to plan out encounters, you will want to avoid building the level entirely from this top-down view, as the player will not actually see the game from a bird's-eye view in the game at all (most likely). Referencing the map from the same perspective as your character will help ensure that the map looks great. To see many different angles at one time, you can use a layout with multiple views of the scene, such as the 4 Split.

Once we have our land done, we now want to create some holes in the ground, which we will fill with water later. This will provide a natural barrier to our world that players will know they cannot pass, so we will create a moat by first changing the Brush Size value to 50 and then holding down the Shift key and left-clicking around the middle of our terrain. In this case, it's okay to use the Top view; remember that this will eventually be water to fill in lakes, rivers, and so on. To make this easier to see, you can click on the sun-looking light icon from the Scene tab to disable lighting for the time being.

At this point, we have done what is referred to in the industry as "grayboxing," making the level in the engine in the simplest way possible but without artwork (also known as "whiteboxing" or "orangeboxing" depending on the company you're working for). At this point in a traditional studio, you'd spend time playtesting the level and iterating on it before an artist or you take the time to make it look great. However, for our purposes, we want to create a finished project as soon as possible. When doing your own games, be sure to play your level and have others play your level before you polish it.

For more information on grayboxing, check out http://www.worldofleveldesign.com/categories/level_design_tutorials/art_of_blocking_in_your_map.php. For an example with images of a graybox to the final level, PC Gamer has a nice article available at http://www.pcgamer.com/2014/03/18/building-crown-part-two-layout-design-textures-and-the-hammer-editor/.

Summary

With this, we now have a great-looking exterior level for our game!
In addition, we covered a lot of features that exist in Unity for you to be able to use in your own future projects.

Special Effects

Packt
11 Apr 2016
16 min read
In this article by Maciej Szczesnik, author of Unity 5.x Animation Cookbook, we will cover the following recipes:

Creating camera shakes with the Animation View and Animator Controller
Using the Animation View to animate public script variables
Using additive Mecanim layers to add extra motion to a character
Using Blend Shapes to morph an object into another one

Introduction

This one is all about encouraging you to experiment with Unity's animation system. In the next ten recipes, we will create interesting effects and use animations in new, creative ways.

Using Animation Events to trigger sound and visual effects

This recipe shows a simple, generic way of playing different sound and visual effects with Animation Events.

Getting ready

To start with, you need to have a character with one looped animation—Jump. We also need a sound effect and a particle system. We will need a transparent DustParticle.png texture for the particle system. It should resemble a small dust cloud. In the Rigs directory, you will find all the animations you need, and in the Resources folder, you'll find other required assets. When you play the game, you will see a character using the Jump animation. It will also play a sound effect and a particle effect while landing.

How to do it...

To play sound and visual effects with Animation Events, follow these steps:

Import the character with the Jump animation.
In the Import Settings, Animation tab, select the Jump animation. Make it loop.
Go to the Events section.
Scrub through the timeline in the Preview section, and click on the Add Event button. The Edit Animation Event window will appear.
Type Sound in the Function field and Jump in the String field. This will call a Sound function in a script attached to the character and pass the word Jump as a string parameter to it.
Create another Animation Event. Set the Function field to Effect and the String field to Dust.
Apply the Import Settings.
Create an Animator Controller for the character with just the Jump animation in it.
Place the character in a scene. Attach the controller to the Animator component of the character.
Attach an Audio Source component to the character. Uncheck the Play On Awake option.
Create an empty Game Object and name it Dust. Add a Particle System component to it. This will be our dust effect.
Set the Particle System parameters as follows:
Duration to 1 second.
Start Life Time to 0.5 seconds.
Start Speed to 0.4.
Start Size to random between two constants: 1 and 2.
Start Color to a light brown.
Emission | Rate to 0.
Emission | Bursts to one burst with time set to 0, min and max set to 5.
Shape | Shape to Sphere.
Shape | Radius to 0.2.
For Color Over Lifetime, create a gradient for the alpha channel. At the 0% and 100% marks, it should be set to 0. At the 10% and 90% marks, it should be set to 255.
Create a new Material and set the shader by navigating to Particles | Alpha Blended.
Drag and drop the transparent DustParticle.png texture into the Texture field of the Material.
Drag and drop the Material into the Renderer | Material slot of our Dust Particle System.
Create a Resources folder in the project's structure. Unity can load assets from the Resources folder at runtime without the need to reference them as prefabs.
Drag and drop the Jump.ogg sound and the Dust Game Object into the Resources folder.
Write a new script and name it TriggerEffects.cs.
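The next lines walk through the two public functions this script needs. Pulled together, the whole file might look roughly like the sketch below; the article lists only the two functions, so the class wrapper and the Start() wiring of the source variable are assumptions:

    using UnityEngine;

    public class TriggerEffects : MonoBehaviour
    {
        private AudioSource source;

        void Start()
        {
            // Cache the Audio Source attached to the character.
            source = GetComponent<AudioSource>();
        }

        public void Sound(string soundResourceName)
        {
            AudioClip clip = (AudioClip)Resources.Load(soundResourceName);
            if (clip != null)
            {
                source.pitch = Random.Range(0.9f, 1.2f);
                source.PlayOneShot(clip);
            }
        }

        public void Effect(string effectResourceName)
        {
            GameObject effectResource = (GameObject)Resources.Load(effectResourceName);
            if (effectResource != null)
            {
                GameObject.Instantiate(effectResource, transform.position, Quaternion.identity);
            }
        }
    }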
This script has two public void functions. Both are called from the Jump animation as Animation Events. In the first function, we load an Audio Clip from the Resources folder. We set the Audio Clip name in the Animation Event itself as a string parameter (it was set to Jump). When we successfully load the Audio Clip, we play it using the Audio Source component, a reference to which we store in the source variable. We also randomize the pitch of the Audio Source to have a little variation when playing the Jump.ogg sound.

    public void Sound(string soundResourceName)
    {
        AudioClip clip = (AudioClip)Resources.Load(soundResourceName);
        if (clip != null)
        {
            source.pitch = Random.Range(0.9f, 1.2f);
            source.PlayOneShot(clip);
        }
    }

In the second function, we try to load a prefab with the name specified as the function's parameter. We also set this name in the Animation Event (it was set to Dust). If we manage to load the prefab, we instantiate it, creating the dust effect under our character's feet.

    public void Effect(string effectResourceName)
    {
        GameObject effectResource = (GameObject)Resources.Load(effectResourceName);
        if (effectResource != null)
        {
            GameObject.Instantiate(effectResource, transform.position, Quaternion.identity);
        }
    }

Assign the script to our character and play the game to see the effect.

How it works...

We are using one important feature of Animation Events in this recipe: the possibility of passing a string, int, or float parameter to our script's functions. This way, we can create one function to play all the sound effects associated with our character and pass clip names as string parameters from the Animation Events. The same concept is used to spawn the Dust effect. The Resources folder is needed to get any resource (prefab, texture, audio clip, and so on) with the Resources.Load(string path) function. This method is convenient for loading assets using their names.

There's more...

Our Dust effect has the AutoDestroy.cs script attached to make it disappear after a certain period of time. You can find that script in the Shared Scripts folder in the provided Unity project example.

Creating camera shakes with the Animation View and the Animator Controller

In this recipe, we will use a simple but very effective method to create camera shakes. These effects are often used to emphasize impacts or explosions in our games.

Getting ready...

You don't need anything special for this recipe. We will create everything from scratch in Unity. You can also download the provided example. When you open the Example.scene scene and play the game, you can press Space to see a simple camera shake effect.

How to do it...

To create a camera shake effect, follow these steps:

Create an empty Game Object in the Scene View and name it CameraRig.
Parent the Main Camera to CameraRig.
Select the Main Camera and add an Animator component to it.
Open the Animation View.
Create a new Animation Clip and call it CamNormal. The camera should have no motion in this clip. Add keys for both the camera's position and its rotation.
Create another Animation Clip and call it CameraShake. Animate the camera's rotation and position to create a shake effect. The animation should last about 0.5 seconds.
Open the automatically created Main Camera controller.
Add a Shake Trigger parameter.
Create two transitions:
Navigate to CamNormal | CameraShake with this condition: the Shake Trigger parameter; Has Exit Time is set to false and Transition Duration is set to 0.2 seconds.
Navigate to CameraShake | CamNormal with no conditions; Has Exit Time is set to true and Transition Duration is set to 0.2 seconds.
Write a new script and call it CamShake.cs. In this script's Update() function, we check whether the player pressed the Space key. If so, we trigger the Shake Trigger in our controller.

    if (Input.GetKeyDown(KeyCode.Space))
    {
        anim.SetTrigger("Shake");
    }

As always, the anim variable holds the reference to the Animator component and is set in the Start() function with the GetComponent<Animator>() method.

Assign the script to the Main Camera.
Play the game and press Space to see the effect.

How it works...

In this recipe, we've animated the camera's position and rotation relative to the CameraRig object. This way, we can still move CameraRig (or attach it to a character). Our CameraShake animation affects only the local position and rotation of the camera. In the script, we simply call the Shake Trigger to play the CameraShake animation once.

There's more...

You can create more sophisticated camera shake effects with Blend Trees. To do so, prepare several shake animations of different strengths and blend them in a Blend Tree using a Strength float parameter. This way, you will be able to set the shake's strength depending on different situations in the game (the distance from an explosion, for instance).
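As a rough illustration of that idea (this is not code from the book; the Strength parameter name, the Shake trigger, and the distance falloff are assumptions), a script could drive such a Blend Tree like this:

    using UnityEngine;

    // Hypothetical helper: drives a "Strength" float in a Blend Tree of shake clips.
    public class ExplosionShake : MonoBehaviour
    {
        public Animator cameraAnimator;
        public float maxDistance = 20f;

        public void ShakeFrom(Vector3 explosionPosition)
        {
            float distance = Vector3.Distance(transform.position, explosionPosition);
            float strength = Mathf.Clamp01(1f - distance / maxDistance); // 1 near the blast, 0 far away.

            cameraAnimator.SetFloat("Strength", strength);
            cameraAnimator.SetTrigger("Shake");
        }
    }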
Using the Animation View to animate public script variables

In Unity, we can animate public script variables. The most standard types are supported. We can use this to achieve interesting effects that are not possible to achieve directly. For instance, we can animate the fog's color and density, which is not directly accessible through the Animation View.

Getting ready...

In this recipe, everything will be created from scratch, so you don't need to prepare any special assets. You can find the Example.scene scene there. If you open it and press Space, you can observe the fog changing color and density. This is achieved by animating the public variables of a script (see the Animated fog figure).

How to do it...

To animate public script variables, follow these steps:

Create a new script and call it FogAnimator.cs.
Create two public variables in this script: public float fogDensity and public Color fogColor.
In the script's Update() function, we call the ChangeFog Trigger in the controller when the player presses Space. We also set the RenderSettings.fogColor and RenderSettings.fogDensity parameters using our public variables. We also adjust the main camera's background color to match the fog color.

    if (Input.GetKeyDown(KeyCode.Space))
    {
        anim.SetTrigger("ChangeFog");
    }

    RenderSettings.fogColor = fogColor;
    RenderSettings.fogDensity = fogDensity;
    Camera.main.backgroundColor = fogColor;

Create a new Game Object and name it FogAnimator. Attach the FogAnimator.cs script to it.
Select the FogAnimator game object and add an Animator component to it.
Open the Animation View.
Create a new Animation Clip. Make sure the Record button is pressed.
Create an animation for the public float fogDensity and public Color fogColor parameters by changing their values.
You can create any number of animations and connect them in the automatically created Animator Controller with transitions based on the ChangeFog Trigger (you need to add this parameter to the controller first). An example controller for different fog animations is shown in the book's figure.

Remember that you don't need to create animations of the fog changing its color or density. You can rely on blending between animations in the controller.
All you need to have is one key for the density and one for the color in each animation. In this example, all Transition Durations are set to 1 second, and every transition's Has Exit Time parameter is set to false.

Make sure that the fog is enabled in the Lighting settings.
Play the game and press the Space button to see the effect.

How it works...

Normally, we can't animate the fog's color or density using the Animation View. But we can do this easily with a script that sets the RenderSettings.fogColor and RenderSettings.fogDensity parameters in every frame. We use animations to change the script's public variable values over time. This way, we've created a workaround for animating fog in Unity. We've just scratched the surface of what's possible in terms of animating public script variables. Try experimenting with them to achieve awesome effects.

Using additive Mecanim layers to add extra motion to a character

In previous recipes, we used Mecanim layers in the override mode. We can also set a layer to be additive. This can add additional movement to our base layer animations.

Getting ready...

We will need a character with three animations—Idle, TiredReference, and Tired. The first animation is a normal, stationary idle. The second animation has no motion and is used as a reference pose to calculate the additive motion from the third animation. TiredReference can be the first frame of the Tired animation. In the Tired animation, we can see our character breathing heavily. You will find the same Humanoid character there. If you play the game and press Space, our character will start breathing heavily while still using the Idle animation. You can find all the required animations in the Rigs directory.

How to do it...

To use additive layers, follow these steps:

Import the character into Unity and place it in a scene.
Go to the Animation tab in Import Settings. Find the TiredReference animation and check the Additive Reference Pose option (you can also use the normal Tired animation and specify the frame in the Pose Frame field).
Loop the Idle and Tired animations.
Create a new Animator Controller.
Drag and drop the Idle animation into the controller and make it the default state.
Find the Layers tab in the upper-left corner of the Animator window. Select it and click on the Plus button below to add a new layer.
Name the newly created layer Tired.
Click on the Gear icon and set the Blending to Additive (see the Additive layer settings figure for reference).
Drag and drop the Tired animation into the newly created layer.
Assign the controller to our character.
Create a new script and call it Tired.cs. In this script's Update() function, we set the weight of the Tired layer when the player presses Space. The Tired layer has an index of 1. We use a weightTarget helper variable to set the new weight to 0 or 1, depending on its current value. This allows us to switch the additive layer on and off every time the player presses Space. Finally, we interpolate the weight value in time to make the transition smoother, and we set the weight of our additive layer with the SetLayerWeight() function.

    if (Input.GetKeyDown(KeyCode.Space))
    {
        if (weightTarget < 0.5f)
        {
            weightTarget = 1f;
        }
        else if (weightTarget > 0.5f)
        {
            weightTarget = 0f;
        }
    }

    weight = Mathf.Lerp(weight, weightTarget, Time.deltaTime * tiredLerpSpeed);
    anim.SetLayerWeight(1, weight);

Attach the script to the Humanoid character.
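For reference, the fragment above can be assembled into a complete script. Only the Update() logic is given in the text, so the field declarations, default values, and Start() wiring here are assumptions:

    using UnityEngine;

    public class Tired : MonoBehaviour
    {
        public float tiredLerpSpeed = 2f;

        private Animator anim;
        private float weight;
        private float weightTarget;

        void Start()
        {
            anim = GetComponent<Animator>();
        }

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Space))
            {
                if (weightTarget < 0.5f)
                {
                    weightTarget = 1f;
                }
                else if (weightTarget > 0.5f)
                {
                    weightTarget = 0f;
                }
            }

            weight = Mathf.Lerp(weight, weightTarget, Time.deltaTime * tiredLerpSpeed);
            anim.SetLayerWeight(1, weight); // Layer 1 is the additive Tired layer.
        }
    }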
Play the game and press Space to see the additive animation effect.

How it works...

Additive animations are calculated using the reference pose. Movements relative to this pose are then added to other animations. This way, we can not only override the base layer with other layers but also modify base movements by adding a secondary motion. Try experimenting with different additive animations. You can, for instance, make your character bend, aim, or change its overall body pose.

Using Blend Shapes to morph an object into another one

Previously, we used Blend Shapes to create face expressions. This is also an excellent tool for special effects. In this recipe, we will morph one object into another.

Getting ready...

To follow this recipe, we need to prepare an object with Blend Shapes. We've created a really simple example in Blender—a subdivided cube with one shape key that looks like a sphere (see the figure A cube with a Blend Shape that turns it into a sphere for reference). You will see a number of cubes there. If you hit the Space key in play mode, the cubes will morph into spheres. You can find the Cuboid.fbx asset with the required Blend Shapes in the Model directory.

How to do it...

To use Blend Shapes to morph objects, follow these steps: Import the model with at least one Blend Shape to Unity. You may need to go to the Import Settings | Model tab and choose Import BlendShapes. Place the model in the Scene. Create a new script and call it ChangeShape.cs. This script is similar to the one from the previous recipe. In the Update() function, we change the weight of the first Blend Shape when the player presses Space. Again, we use a helper variable, weightTarget, to set the new weight to 0 or 100, depending on its current value (Blend Shapes have weights from 0 to 100 instead of 0 to 1). Finally, we interpolate the weight value in time to make the transition smoother. We use the SetBlendShapeWeight() function on the skinnedRenderer object. This variable is set in the Start() function with the GetComponent<SkinnedMeshRenderer>() function.

if (Input.GetKeyDown(KeyCode.Space))
{
    if (weightTarget < 50f)
    {
        weightTarget = 100f;
    }
    else if (weightTarget > 50f)
    {
        weightTarget = 0f;
    }
}

weight = Mathf.Lerp(weight, weightTarget, Time.deltaTime * blendShapeLerpSpeed);
skinnedRenderer.SetBlendShapeWeight(0, weight);

Attach the script to the model on the scene. Play the game and press Space to see the model morph.

How it works...

Blend Shapes store the vertex positions of a mesh. We have to create them in a 3D package. Unity imports Blend Shapes, and we can modify their weights at runtime using the SetBlendShapeWeight() function on the Skinned Mesh Renderer component. Blend Shapes have trouble with storing normals. If we import normals from our model, it may look weird after morphing. Sometimes setting the Normals option to Calculate in the Import Settings can help with the problem. If we choose this option, Unity will calculate normals based on the angle between the faces of our model. This is what allowed us to morph a hard-surface cube into a smooth sphere in this example.

Summary

This article covered some basic animation recipes that can be performed in Unity, including the use of Animation Layers and additive Mecanim layers, and the creation of camera shakes.

Resources for Article:

Further resources on this subject: Animation features in Unity 5 [article] Saying Hello to Unity and Android [article] Learning NGUI for Unity [article]
Cardboard is Virtual Reality for Everyone

Packt
11 Apr 2016
22 min read
In this article, Jonathan Linowes and Matt Schoen, authors of the book Cardboard VR Projects for Android, introduce and define Google Cardboard. (For more resources related to this topic, see here.)

Welcome to the exciting new world of virtual reality! We're sure that, as an Android developer, you want to jump right in and start building cool stuff that can be viewed using Google Cardboard. Your users can then just slip their smartphone into a viewer and step into your virtual creations. Let's take an outside-in tour of VR, Google Cardboard, and its Android SDK to see how they all fit together.

Why is it called Cardboard?

It all started in early 2014 when Google employees, David Coz and Damien Henry, in their spare time, built a simple and cheap stereoscopic viewer for their Android smartphone. They designed a device that can be constructed from ordinary cardboard, plus a couple of lenses for your eyes, and a mechanism to trigger a button "click." The viewer is literally made from cardboard. They wrote software that renders a 3D scene with a split screen, one view for the left eye, and another view, with offset, for the right eye. Peering through the device, you get a real sense of 3D immersion into the computer-generated scene. It worked! The project was then proposed and approved as a "20% project" (where employees may dedicate one day a week to innovations), funded, and joined by other employees. In fact, Cardboard worked so well that Google decided to go forward, taking the project to the next level and releasing it to the public a few months later at Google I/O 2014.

Since its inception, Google Cardboard has been accessible to hackers, hobbyists, and professional developers alike. Google open sourced the viewer design for anyone to download the schematics and make their own, from a pizza box or from whatever they had lying around. One can even go into business selling precut kits directly to consumers. An assembled Cardboard viewer is shown in the following image.

The Cardboard project also includes a software development kit (SDK) that makes it easy to build VR apps. Google has released continuous improvements to the software, including both a native Java SDK as well as a plugin for the Unity 3D game engine (https://unity3d.com/). Since the release of Cardboard, a huge number of applications have been developed and made available on the Google Play Store. At Google I/O 2015, Version 2.0 introduced an upgraded design, improved software, and support for Apple iOS.

Google Cardboard has rapidly evolved in the eye of the market from an almost laughable toy into a serious new media device for certain types of 3D content and VR experiences. Google's own Cardboard demo app has been downloaded millions of times from the Google Play store. The New York Times distributed about a million cardboard viewers with its November 8, Sunday issue back in 2015. Cardboard is useful for viewing 360-degree photos and playing low-fidelity 3D VR games. It is universally accessible to almost anyone because it runs on any Android or iOS smartphone. For developers who are integrating 3D VR content directly into Android apps, Google Cardboard is a way of experiencing virtual reality that is here to stay.

A gateway to VR

Even in the very short time it has been available, this generation of consumer virtual reality, whether Cardboard or Oculus Rift, has demonstrated itself to be instantly compelling, immersive, entertaining, and "a game changer" for just about everyone who tries it.
Google Cardboard is especially easy to access with a very low barrier to use. All you need is a smartphone, a low-cost Cardboard viewer (as low as $5 USD), and free apps downloaded from Google Play (or Apple App Store for iOS). Google Cardboard has been called a gateway to VR, perhaps in reference to marijuana as a "gateway drug" to more dangerous illicit drug abuse? We can play with this analogy for a moment, however, decadent. Perhaps Cardboard will give you a small taste of VR's potential. You'll want more. And then more again. This will help you fulfill your desire for better, faster, more intense, and immersive virtual experiences that can only be found in higher end VR devices. At this point, perhaps there'll be no turning back, you're addicted! Yet as a Rift user, I still also enjoy Cardboard. It's quick. It's easy. It's fun. And it really does work, provided I run apps that are appropriately designed for the device. I brought a Cardboard viewer in my backpack when visiting my family for the holidays. Everyone enjoyed it a lot. Many of my relatives didn't even get past the standard Google Cardboard demo app, especially its 360-degree photo viewer. That was engaging enough to entertain them for a while. Others jumped to a game or two or more. They wanted to keep playing and try new experiences. Perhaps it's just the novelty. Or, perhaps it's the nature of this new medium. The point is that Google Cardboard provides an immersive experience that's enjoyable, useful, and very easily accessible. In short, it is amazing. Then, show them an HTC Vive or Oculus Rift. Holy Cow! That's really REALLY amazing! We're not here to talk about the higher end VR devices, except to contrast it with Cardboard and to keep things in perspective. Once you try desktop VR, is it hard to "go back" to mobile VR? Some folks say so. But that's almost silly. The fact is that they're really separate things. As discussed earlier, desktop VR comes with much higher processing power and other high-fidelity features, whereas mobile VR is limited to your smartphone. If one were to try and directly port a desktop VR app to a mobile device, there's a good chance that you'll be disappointed. It's best to think of each as a separate media. Just like a desktop application or a console game is different from, but similar to, a mobile one. The design criteria may be similar but different. The technologies are similar but different. The user expectations are similar but different. Mobile VR may be similar to desktop VR, but it's different. To emphasize how different Cardboard is from desktop VR devices, it's worth pointing out that Google has written their manufacturer's specifications and guidelines. "Do not include a headstrap with your viewer. When the user holds the Cardboard with their hands against the face, their head rotation speed is limited by the torso rotational speed (which is much slower than the neck rotational speed). This reduces the chance of "VR sickness" caused by rendering/IMU latency and increases the immersiveness in VR." The implication is that Cardboard apps should be designed for shorter, simpler, and somewhat stationary experiences. Let's now consider the other ways where Cardboard is a gateway to VR. We predict that Android will continue to grow as a primary platform for virtual reality into the future. More and more technologies will get crammed into smartphones. 
This technology will include features designed specifically with VR in mind:

Faster processors and mobile GPUs
Higher resolution screens
Higher precision motion sensors
Optimized graphics pipelines
Better software
Tons more VR apps

Mobile VR will not give way to desktop VR; it may even eventually replace it. Furthermore, maybe soon we'll see dedicated mobile VR headsets that have the guts of a smartphone built in without the cost of a wireless communications contract. No need to use your own phone. No more getting interrupted while in VR by an incoming call or notification. No more rationing battery life in case you need to receive an important call or otherwise use your phone. These dedicated VR devices will likely be Android-based.

The value of low-end VR

Meanwhile, Android and Google Cardboard are here today, on our phones, in our pockets, in our homes, at the office, and even in our schools. Google Expeditions, for example, is Google's educational program for Cardboard (https://www.google.com/edu/expeditions/), which allows K-12 school children to take virtual field trips to "places a school bus can't," as they say, "around the globe, on the surface of Mars, on a dive to coral reefs, or back in time." The kits include Cardboard viewers and Android phones for each child in a classroom, plus an Android tablet for the teacher. They're connected with a network. The teacher can then guide students on virtual field trips, provide enhanced content, and create learning experiences that go way beyond a textbook or classroom video, as shown in the following image.

The entire Internet can be considered a worldwide publishing and media distribution network. It's a web of hyperlinked pages, text, images, music, video, JSON data, web services, and much more. It's also teeming with 360-degree photos and videos. There's also an ever-growing amount of three-dimensional content and virtual worlds. Would you consider writing an Android app today that doesn't display images? Probably not. There's a good chance that your app also needs to support sound files, videos, or other media. So, pay attention. Three-dimensional Cardboard-enabled content is coming quickly. You might be interested in reading this article now because VR looks fun. But soon enough, it may be a customer-driven requirement for your next app.
Some examples of types of popular Cardboard apps include: 360 degree photo viewing, for example, Google's Cardboard demo (https://play.google.com/store/apps/details?id=com.google.samples.apps.cardboarddemo) and Cardboard Camera (https://play.google.com/store/apps/details?id=com.google.vr.cyclops) Video and cinema viewing, for example, a Cardboard theatre (https://play.google.com/store/apps/details?id=it.couchgames.apps.cardboardcinema) Roller coasters and thrill rides, for example, VR Roller Coaster (https://play.google.com/store/apps/details?id=com.frag.vrrollercoaster) Cartoonish 3D games, for example, Lamber VR (https://play.google.com/store/apps/details?id=com.archiactinteractive.LfGC&hl=en_GB) First person shooter games, for example, Battle 360 VR (https://play.google.com/store/apps/details?id=com.oddknot.battle360vr) Creepy scary stuff, for example, Sisters (https://play.google.com/store/apps/details?id=com.otherworld.Sisters) Educational experiences, for example, Titans of Space (https://play.google.com/store/apps/details?id=com.drashvr.titansofspacecb&hl=en_GB) Marketing experiences, for example, Volvo Reality (https://play.google.com/store/apps/details?id=com.volvo.volvoreality) And much more. Thousands more. The most popular ones have had hundreds of thousands of downloads (the Cardboard demo app itself has millions of downloads). Cardware! Let's take a look at the variety of Cardboard devices that are available. There's a lot of variety. Obviously, the original Google design is actually made from cardboard. And manufacturers have followed suit, offering cardboard Cardboards directly to consumers—brands such as Unofficial Cardboard, DODOCase, and IAmCardboard were among the first. Google provides the specifications and schematics free of charge (refer to https://www.google.com/get/cardboard/manufacturers/). The basic viewer design consists of an enclosure body, two lenses, and an input mechanism. The Works with Google Cardboard certification program indicates that a given viewer product meets the Google standards and works well with Cardboard apps. The viewer enclosure may be constructed from any material: cardboard, plastic, foam, aluminum, and so on. It should be lightweight and do a pretty good job of blocking the ambient light. The lenses (I/O 2015 Edition) are 34 mm diameter aspherical single lenses with an 80 degree circular FOV (field of view) and other specified parameters. The input trigger ("clicker") can be one of the several alternative mechanisms. The simplest is none, where the user must touch the smartphone screen directly with their finger to trigger a click. This may be inconvenient since the phone is sitting inside the viewer enclosure but it works. Plenty of viewers just include a hole to stick your finger inside. Alternatively, the original Cardboard utilized a small ring magnet attached to the outside of the viewer. The user can slide the magnet and this movement is sensed by the phone's magnetometer and recognized by the software as a "click". This design is not always reliable because the location of the magnetometer varies among phones. Also, using this method, it is harder to detect a "press and hold" interaction, which means that there is only one type of user input "event" to use within your application. Lastly, Version 2.0 introduced a button input constructed from a conductive "strip" and "pillow" glued to a Cardboard-based "hammer". 
When the button is pressed, the user's body charge is transferred onto the smartphone screen, as if he'd directly touched the screen with his finger. This clever solution avoids the unreliable magnetometer solution, instead uses the phone's native touchscreen input, albeit indirectly. It is also worth mentioning at this point that since your smartphone supports Bluetooth, it's possible to use a handheld Bluetooth controller with your Cardboard apps. This is not part of the Cardboard specifications and requires some extra configuration; the use of a third-party input handler or controller support built into the app. A mini Bluetooth controller is shown in the following image: Cardboard viewers are not necessarily made out of cardboard. Plastic viewers can get relatively costly. While they are more sturdy than cardboards, they fundamentally have the same design (assembled). Some devices include adjustable lenses, for the distance of the lenses from the screen, and/or the distance between your eyes (IPD or inter-pupillary distance). The Zeiss VR One, Homido, and Sunnypeak devices were among the first to become popular. Some manufacturers have gone out of the box (pun intended) with innovations that are not necessarily compliant with Google's specifications but provide capabilities beyond the Cardboard design. A notable example is the Wearality viewer (http://www.wearality.com/), which includes an exclusive patent 150-degree field of view (FOV) double Fresnel lens. It's so portable that it folds up like a pair of sunglasses. The Wearality viewer is shown in the following image: Configuring your Cardboard viewer With such a variety of Cardboard devices and variations in lens distance, field of view, distortion, and so on, Cardboard apps must be configured to a specific device's attributes. Google provides a solution to this as well. Each Cardboard viewer comes with a unique QR code and/or NFC chip, which you scan to configure the software for that device. If you're interested in calibrating your own device or customizing your parameters, check out the profile generator tools at https://www.google.com/get/cardboard/viewerprofilegenerator/. To configure your phone to a specific Cardboard viewer, open the standard Google Cardboard app, and select the Settings icon in the center bottom section of the screen, as shown in the following image: Then, point the camera to the QR code for your particular Cardboard viewer: Your phone is now configured for the specific Cardboard viewer parameters. Developing apps for Cardboard At the time of writing this article, Google provides two SDKs for Cardboard: Cardboard SDK for Android (https://developers.google.com/cardboard/android) Cardboard SDK for Unity (https://developers.google.com/cardboard/unity) Let's consider the Unity option first. Using Unity Unity (http://unity3d.com/) is a popular fully featured 3D game engine, which supports building your games on a wide gamut of platforms, from Playstation and XBox, to Windows and Mac (and Linux!), to Android and iOS. Unity consists of many separate tools integrated into a powerful engine under a unified visual editor. There are graphics tools, physics, scripting, networking, audio, animations, UI, and many more. It includes advanced computer graphics rendering, shading, textures, particles, and lighting with all kinds of options for optimizing performance and fine tuning the quality of your graphics for both 2D and 3D. 
If that's not enough, Unity hosts a huge Asset Store teeming with models, scripts, tools, and other assets created by its large community of developers. The Cardboard SDK for Unity provides a plugin package that you can import into the Unity Editor, containing prefabs (premade objects), C# scripts, and other assets. The package gives you what you need in order to add a stereo camera to your virtual 3D scene and build your projects to run as Cardboard apps on Android (and iOS). If you're interested in learning more about using Unity to build VR applications for Cardboard, check out another book by Packt Publishing, Unity Virtual Reality Projects by Jonathan Linowes (https://www.packtpub.com/game-development/unity-virtual-reality-projects).

Going native

So, why not just use Unity for Cardboard development? Good question. It depends on what you're trying to do. Certainly, if you need all the power and features of Unity for your project, it's the way to go. But at what cost? With great power comes great responsibility (says Uncle Ben Parker). It is quick to learn but takes a lifetime to master (says the Go Master). Seriously though, Unity is a powerful engine that may be overkill for many applications. To take full advantage, you may require additional expertise in modeling, animation, level design, graphics, and game mechanics.

Cardboard applications built with Unity are also bulky. An empty Unity scene built for Android generates an .apk file of at least 23 megabytes. In contrast, a simple native Cardboard application's .apk is under one megabyte. Along with this large app size comes a long loading time, possibly more than several seconds. It also impacts memory usage and battery life. Unless you've paid for a Unity Android license, your app always starts with the Made With Unity splash screen. These may not be acceptable constraints for you.

In general, the closer you are to the metal, the better performance you'll eke out of your application. When you write directly for Android, you have direct access to the features of the device, more control over memory and other resources, and more opportunities for customization and optimization. This is why native mobile apps tend to trump mobile web apps.

Lastly, one of the best reasons to develop with native Android and Java may be the simplest. You're anxious to build something now! If you're already an Android developer, then just use what you already know and love! Take the straightest path from here to there. If you're familiar with Android development, then Cardboard development will come naturally. Using the Cardboard SDK for Android, you can program in Java, perhaps using an IDE (integrated development environment) such as Android Studio (also known as IntelliJ). As we'll notice throughout this article, your Cardboard Android app is like other Android apps, including a manifest, resources, and Java code. As with any Android app, you will implement a MainActivity class, but yours will extend CardboardActivity and implement CardboardView.StereoRenderer. Your app will utilize OpenGL ES 2.0 graphics, shaders, and 3D matrix math. It will be responsible for updating the display on each frame, that is, rerendering your 3D scene based on the direction the user is looking at that particular slice in time. It is particularly important in VR, but also in any 3D graphics context, to render a new frame as quickly as the display allows, usually at 60 FPS.
Your app will handle the user input via the Cardboard trigger and/or gaze-based control. That's what your app needs to do. However, there are still more nitty-gritty details that must be handled to make VR work. As noted in the Google Cardboard SDK guide (https://developers.google.com/cardboard/android/), the SDK simplifies many of these common VR development tasks, including the following:

Lens distortion correction
Head tracking
3D calibration
Side-by-side rendering
Stereo geometry configuration
User input event handling

Functions are provided in the SDK to handle these tasks for you. Building and deploying your applications for development, debugging, profiling, and eventually publishing on Google Play also follow the same Android workflows you may be familiar with already. That's cool. Of course, there's more to building an app than simply following an example. We'll take a look at techniques, for example, for using data-driven geometric models, abstracting shaders and OpenGL ES API calls, and building user interface elements such as menus and icons. On top of all this, there are important suggested best practices for making your VR experiences work and avoiding common mistakes.

An overview of VR best practices

More and more is being discovered and written each day about the dos and don'ts when designing and developing for VR. Google provides a couple of resources to help developers build great VR experiences, including the following:

Designing for Google Cardboard is a best practice document that helps you focus on the overall usability as well as avoid common VR pitfalls (http://www.google.com/design/spec-vr/designing-for-google-cardboard/a-new-dimension.html).
Cardboard Design Lab is a Cardboard app that directly illustrates the principles of designing for VR, which you can explore in Cardboard itself. At Vision Summit 2016, the Cardboard team announced that they have released the source (Unity) project for developers to examine and extend (https://play.google.com/store/apps/details?id=com.google.vr.cardboard.apps.designlab and https://github.com/googlesamples/cardboard-unity/tree/master/Samples/CardboardDesignLab).

VR motion sickness is a real symptom and concern for virtual reality, caused in part by a lag in screen updates, or latency, when you're moving your head. Your brain expects the world around you to change exactly in sync with your actual motion. Any perceptible delay can make you feel uncomfortable, to say the least, and possibly nauseous. Latency can be reduced by rendering each frame faster and maintaining the recommended frames per second. Desktop VR apps are held to the high standard of 90 FPS, enabled by a custom HMD screen. On mobile devices, the screen hardware often limits the refresh rate to 60 FPS, or in the worst case, 30 FPS.

There are additional causes of VR motion sickness and other user discomforts, which can be mitigated by following these design guidelines:

Always maintain head tracking. If the virtual world seems to freeze or pause, this may cause users to feel ill.
Display user interface elements, such as titles and buttons, in 3D virtual space. If rendered in 2D, they'll seem to be "stuck to your face" and you will feel uncomfortable.
When transitioning between scenes, fade to black; hard cut scenes are very disorienting. Fading to white might be uncomfortably bright for your users.
Users should remain in control of their movement within the app. Something about initiating camera motion yourself helps reduce motion sickness.
Avoid acceleration and deceleration. As humans, we feel acceleration but not constant velocity. If you are moving the camera inside the app, keep a constant velocity. Rollercoasters are fun, but even in real life, they can make you feel sick.
Keep your users grounded. Being a virtual floating point in space can make you feel sick, whereas feeling like you're standing on the ground or sitting in a cockpit provides a sense of stability.
Maintain a reasonable distance from the eye for UI elements, such as buttons and reticle cursors. If too close, the user may have to look cross-eyed and can experience eye strain. Some items that are too close may not converge at all and cause "double vision."

Building applications for virtual reality also differs from building conventional Android ones in other ways, such as the following:

When transitioning from a 2D application into VR, it is recommended that you provide a headset icon for the user to tap, as shown in the following image.
To exit VR, the user can hit the back button in the system bar (if present) or the home button. The Cardboard sample apps use a "tilt-up" gesture to return to the main menu, which is a fine approach if you want to allow a "back" input without forcing the user to remove the phone from the device.
Make sure that you build your app to run in fullscreen mode (and not in Android's Lights Out mode).
Do not perform any API calls that will present the user with a 2D dialog box. The user would be forced to remove the phone from the viewer to respond.
Provide audio and haptic (vibration) feedback to convey information and indicate that the user input is recognized by the app.

So, let's say that you've got your awesome Cardboard app done and it is ready to publish. Now what? There's a line you can put in the AndroidManifest file that marks the app as a Cardboard app. Google's Cardboard app includes a Google Play store browser used to find Cardboard apps. Then, just publish it as you would any normal Android application.

Summary

In this article, we started by defining Google Cardboard and saw how it fits in the spectrum of consumer virtual reality devices. We then contrasted Cardboard with higher-end VR devices, such as Oculus Rift, HTC Vive, and PlayStation VR, making the case for low-end VR as a separate medium in its own right. We talked a bit about developing for Cardboard, and considered why and why not to use the Unity 3D game engine versus writing a native Android app in Java with the Cardboard SDK. And lastly, we took a quick survey of many design considerations for developing for VR, including ways to avoid motion sickness and tips for integrating Cardboard with Android apps in general.

Resources for Article:

Further resources on this subject: VR Build and Run [article] vROps – Introduction and Architecture [article] Designing and Building a vRealize Automation 6.2 Infrastructure [article]
Creating a Coin Material

Packt
10 Mar 2016
7 min read
In this article by Alan Thorn, the author of Unity 5.x By Example, the coin object, as a concept, represents a basic or fundamental unit in our game logic because the player character should be actively searching the level looking for coins to collect before a timer runs out. This means that the coin is more than mere appearance; its purpose in the game is not simply eye candy, but is functional. It makes an immense difference to the game outcome whether the coin is collected by the player or not. Therefore, the coin object, as it stands, is lacking in two important respects. Firstly, it looks dull and grey—it doesn't really stand out and grab the player's attention. Secondly, the coin cannot actually be collected yet. Certainly, the player can walk into the coin, but nothing appropriate happens in response. Figure 2.1: The coin object so far The completed CollectionGame project, as discussed in this article and the next, can be found in the book companion files in the Chapter02/CollectionGame folder. (For more resources related to this topic, see here.) In this section, we'll focus on improving the coin appearance using a material. A material defines an algorithm (or instruction set) specifying how the coin should be rendered. A material doesn't just say what the coin should look like in terms of color; it defines how shiny or smooth a surface is, as opposed to rough and diffuse. This is important to recognize and is why a texture and material refer to different things. A texture is simply an image file loaded in memory, which can be wrapped around a 3D object via its UV mapping. In contrast, a material defines how one or more textures can be combined together and applied to an object to shape its appearance. To create a new material asset in Unity, right-click on an empty area in the Project panel, and from the context menu, choose Create | Material. See Figure 2.2. You can also choose Assets | Create | Material from the application menu. Figure 2.2: Creating a material A material is sometimes called a Shader. If needed, you can create custom materials using a Shader Language or you can use a Unity add-on, such as Shader Forge. After creating a new material, assign it an appropriate name from the Project panel. As I'm aiming for a gold look, I'll name the material mat_GoldCoin. Prefixing the asset name with mat helps me know, just from the asset name, that it's a material asset. Simply type a new name in the text edit field to name the material. You can also click on the material name twice to edit the name at any time later. See Figure 2.3: Figure 2.3: Naming a material asset Next, select the material asset in the Project panel, if it's not already selected, and its properties display immediately in the object Inspector. There are lots of properties listed! In addition, a material preview displays at the bottom of the object Inspector, showing you how the material would look, based on its current settings, if it were applied to a 3D object, such as a sphere. As you change material settings from the Inspector, the preview panel updates automatically to reflect your changes, offering instant feedback on how the material would look. See the following screenshot: Figure 2.4: Material properties are changed from the Object Inspector Let's now create a gold material for the coin. When creating any material, the first setting to choose is the Shader type because this setting affects all other parameters available to you. 
The Shader type determines which algorithm will be used to shade your object. There are many different choices, but most material types can be approximated using either Standard or Standard (Specular setup). For the gold coin, we can leave the Shader as Standard. See the following screenshot: Figure 2.5: Setting the material Shader type Right now, the preview panel displays the material as a dull grey, which is far from what we need. To define a gold color, we must specify the Albedo. To do this, click on the Albedo color slot to display a Color picker, and from the Color picker dialog, select a gold color. The material preview updates in response to reflect the changes. Refer to the following screenshot: Figure 2.6: Selecting a gold color for the Albedo channel The coin material is looking better than it did, but it's still supposed to represent a metallic surface, which tends to be shiny and reflective. To add this quality to our material, click and drag the Metallic slider in the object Inspector to the right-hand side, setting its value to 1. This indicates that the material represents a fully metal surface as opposed to a diffuse surface such as cloth or hair. Again, the preview panel will update to reflect the change. See Figure 2.7: Figure 2.7: Creating a metallic material We now have a gold material created, and it's looking good in the preview panel. If needed, you can change the kind of object used for a preview. By default, Unity assigns the created material to a sphere, but other primitive objects are allowed, including cubes, cylinders, and torus. This helps you preview materials under different conditions. You can change objects by clicking on the geometry button directly above the preview panel to cycle through them. See Figure 2.8: Figure 2.8: Previewing a material on an object When your material is ready, you can assign it directly to meshes in your scene just by dragging and dropping. Let's assign the coin material to the coin. Click and drag the material from the Project panel to the coin object in the scene. On dropping the material, the coin will change appearance. See Figure 2.9: Figure 2.9: Assigning the material to the coin You can confirm that material assignment occurred successfully and can even identify which material was assigned by selecting the coin object in the scene and viewing its Mesh Renderer component from the object Inspector. The Mesh Renderer component is responsible for making sure that a mesh object is actually visible in the scene when the camera is looking. The Mesh Renderer component contains a Materials field. This lists all materials currently assigned to the object. By clicking on the material name from the Materials field, Unity automatically selects the material in the Project panel, making it quick and simple to locate materials. See Figure 2.10, The Mesh Renderer component lists all materials assigned to an object: Mesh objects may have multiple materials with different materials assigned to different faces. For best in-game performance, use as few unique materials on an object as necessary. Make the extra effort to share materials across multiple objects, if possible. Doing so can significantly enhance the performance of your game. For more information on optimizing rendering performance, see the online documentation at http://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html. Figure 2.10: The Mesh Renderer component lists all materials assigned to an object That's it! 
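As a side note, everything we just configured in the Inspector can also be read or tweaked at runtime through the material on the object's Mesh Renderer. The following is a small illustrative sketch, not part of the project itself: the GoldCoinTweaker class name is hypothetical, while _Color and _Metallic are the Standard shader's internal property names.

using UnityEngine;

// Hypothetical helper: adjusts the coin's Standard shader material at runtime.
public class GoldCoinTweaker : MonoBehaviour
{
    void Start()
    {
        // Accessing .material creates a per-object copy, so other coins keep
        // the shared mat_GoldCoin asset untouched.
        var material = GetComponent<MeshRenderer>().material;
        material.SetColor("_Color", new Color(1f, 0.85f, 0.3f)); // gold-ish Albedo
        material.SetFloat("_Metallic", 1f);                      // fully metallic surface
    }
}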
You now have a complete and functional gold material for the collectible coin. It's looking good. However, we're still not finished with the coin. The coin looks right, but it doesn't behave right. Specifically, it doesn't disappear when touched, and we don't yet keep track of how many coins the player has collected overall. To address this, then, we'll need to script. Summary Excellent work! In this article, you've completed the coin collection game as well as your first game in Unity. Resources for Article: Further resources on this subject: Animation features in Unity 5 [article] Saying Hello to Unity and Android [article] Learning NGUI for Unity [article]
The Game World

Packt
23 Feb 2016
39 min read
In this article, we will cover the basics of creating immersive areas where players can walk around and interact, as well as some of the techniques used to manage those areas. This article will give you some practical tips and tricks for the spritesheet system introduced with Unity 4.3 and how to get it to work for you. Lastly, we will also have a cursory look at how shaders work in the 2D world and the considerations you need to keep in mind when using them. However, we won't be implementing shaders, as that could be another book in itself. The following is the list of topics that will be covered in this article:

Working with environments
Looking at sprite layers
Handling multiple resolutions
An overview of parallaxing and effects
Shaders in 2D – an overview

(For more resources related to this topic, see here.)

Backgrounds and layers

Now that we have our hero in play, it would be nice to give him a place to live and walk around, so let's set up the home town and decorate it. Firstly, we are going to need some more assets. So, from the asset pack you downloaded earlier, grab the following assets from the Environments pack, place them in the Assets\Sprites\Environment folder, and name them as follows:

Name the ENVIRONMENTS\STEAMPUNK\background01.png file Assets\Sprites\Environment\background01
Name the ENVIRONMENTS\STEAMPUNK\environmentalAssets.png file Assets\Sprites\Environment\environmentalAssets
Name the ENVIRONMENTS\FANTASY\environmentalAssets.png file Assets\Sprites\Environment\environmentalAssets2

To slice or not to slice

It is always better to pack many of the same images on to a single asset/atlas and then use the Sprite Editor to define the regions on that texture for each sprite, as long as all the sprites on that sheet are going to get used in the same scene. The reason for this is that when Unity tries to draw to the screen, it needs to send the images to draw to the graphics card; if there are many images to send, this can take some time. If, however, it is just one image, it is a lot simpler and more performant with only one file to send. There needs to be a balance; too large an image and the upload to the graphics card can take up too many resources, too many individual images and you have the same problem.

The basic rule of thumb is as follows:

If the background is a full screen background or large image, then keep it separate.
If you have many images and all are for the same scene, then put them into a spritesheet/atlas.
If you have many images but all are for different scenes, then group them as best you can—common items on one sheet and scene-specific items on different sheets. You'll have several spritesheets to use.

You basically want to keep as much stuff together as makes sense and not send unnecessary images that won't get used to the graphics card. Find your balance.

The town background

First, let's add a background for the town using the Assets\Sprites\Environment\background01 texture. It is shown in the following screenshot. With the background asset, we don't need to do anything else other than ensure that it has been imported as a sprite (in case your project is still in 3D mode), as shown in the following screenshot.

The town buildings

For the steampunk environmental assets (Assets\Sprites\Environment\environmentalAssets) that are shown in the following screenshot, we need a bit more work; once these assets are imported, change the Sprite Mode to Multiple and load up the Sprite Editor using the Sprite Editor button.
Next, click on the Slice button, leave the settings at their default options, and then click on the Slice button in the new window as shown in the following screenshot. Click on Apply and close the Sprite Editor. You will have four new sprite textures available, as seen in the following screenshot.

The extra scenery

We saw what happens when you use a grid type split on a spritesheet and when the automatic split works well, so what about when it doesn't go so well? If we look at the Fantasy environment pack (Assets\Sprites\Environment\environmentalAssets2), we will see the following: after you have imported it and run the Slice in the Sprite Editor, you will notice that one of the sprites does not get detected very well; altering the automatic split settings in this case doesn't help, so we need to do some manual manipulation as shown in the following screenshot.

In the previous screenshot, you can see that just two of the rocks in the top-right sprite have been identified by the slicing routine. To fix this, just delete one of the selections and then expand the other manually using the selection points in the corner of the selection box (after clicking on the sprite box). Here's how it will look before the correction, and after the correction, you should see something like the following screenshot. This gives us some nice additional assets to scatter around our towns and give them a more homely feel, as shown in the following screenshot.

Building the scene

So, now that we have some nice assets to build with, we can start building our first town.

Adding the town background

Returning to the scene view, you should see the following: if, however, we add our town background texture (Assets\Sprites\Backgrounds\Background.png) to the scene by dragging it to either the project hierarchy or the scene view, you will end up with the following:

Be sure to set the background texture position appropriately once you add it to the scene; in this case, be sure the position of the transform is centered in the view at X = 0, Y = 0, Z = 0. Unity does have a tendency to set the position relative to where your 3D view is at the time of adding it—almost never where you want it.

Our player has vanished! The reason for this is simple: Unity's sprite system has an ordering system that comes in two parts.

Sprite sorting layers

Sorting Layers (Edit | Project Settings | Tags and Layers) are a collection of sprites, which are bulked together to form a single group. Layers can be configured to be drawn in a specific order on the screen, as shown in the following screenshot.

Sprite sorting order

Sprites within an individual layer can be sorted, allowing you to control the draw order of sprites within that layer. The sprite Inspector is used for this purpose, as shown in the following screenshot. Sprite Sorting Layers should not be confused with Unity's rendering layers. Layers are a separate functionality used to control whether groups of game objects are drawn or managed together, whereas Sorting Layers control the draw order of sprites in a scene.

So the reason our player is no longer seen is that it is behind the background. As they are both in the same layer and have the same sort order, they are simply drawn in the order that they are in the project hierarchy.

Updating the scene Sorting Layers

To resolve this ordering problem, let's organize our sprite rendering by adding some sprite Sorting Layers.
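Before we do that in the editor, it is worth knowing that both of these settings are also exposed through the SpriteRenderer API, so they can be changed from code at runtime if you ever need to. The following is a small illustrative sketch; the class name and values are just examples, and the "Player" layer is one of the layers we are about to create:

using UnityEngine;

// Illustrative example: driving sorting from code instead of the Inspector.
public class SortingExample : MonoBehaviour
{
    void Start()
    {
        var spriteRenderer = GetComponent<SpriteRenderer>();
        spriteRenderer.sortingLayerName = "Player"; // which Sorting Layer to draw on
        spriteRenderer.sortingOrder = 10;           // draw order within that layer
    }
}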
To add the layers, open up the Tags and Layers inspector pane as shown in the following screenshot (by navigating to Edit | Project Settings | Tags and Layers), and add the following Sorting Layers:

Background
Player
Foreground
GUI

You can reorder the layers underneath the default at any time by selecting a row and dragging it up and down the sprite's Sorting Layers list. With the layers set up, we can now configure our game objects accordingly. So, set the Sorting Layer on our background01 sprite to the Background layer as shown in the following screenshot. Then, update the PlayerSprite layer to Player; our character will now be displayed in front of the background.

You can just keep both objects on the same layer and set the Sort Order value appropriately, keeping the background at a Sort Order value of 0 and the player at 10, which will draw the player in front. However, as you add more items to the scene, things will get tricky quickly, so it is better to group them in layers accordingly.

Now when we return to the scene, our hero is happily displayed, but he is seen hovering in the middle of our village. So let's fix that next by simply changing his position transform in the Inspector window. Setting the Y position transform to -2 will place our hero nicely in the middle of the street (provided you have set the pivot for the player sprite to bottom), as shown in the following screenshot. Feel free at this point to also add some more background elements such as trees and buildings to fill out the scene using the environment assets we imported earlier.

Working with the camera

If you try and move the player left and right at the moment, our hero happily bobs along. However, you will quickly notice that we run into a problem: the hero soon disappears from the edge of the screen. To solve this, we need to make the camera follow the hero.

When creating new scripts to implement something, remember that just about every game that has been made with Unity has most likely implemented either the same thing or something similar. Most just get on with it, but others, and the Unity team themselves, are keen to share their scripts to solve these challenges. So in most cases, we will have something to work from. Don't just start a script from scratch (unless it is a very small one to solve a tiny issue) if you can help it; here are some resources to get you started:

Unity sample projects: http://Unity3d.com/learn/tutorials/projects
Unity Patterns: http://unitypatterns.com/
Unity wiki scripts section: http://wiki.Unity3d.com/index.php/Scripts (also check other stuff for detail)

Once you become more experienced, it is better to just use these scripts as a reference and try to create your own and improve on them, unless they are from a maintained library such as https://github.com/nickgravelyn/UnityToolbag.

To make the camera follow the player, we'll take the script from the Unity 2D sample and modify it to fit our game. This script is nice because it also includes a Mario-style buffer zone, which allows the player to move without moving the camera until they reach the edge of the screen. Create a new script called FollowCamera in the Assets\Scripts folder, remove the Start and Update functions, and then add the following properties:

using UnityEngine;

public class FollowCamera : MonoBehaviour
{
    // Distance in the x axis the player can move before the
    // camera follows.
    public float xMargin = 1.5f;

    // Distance in the y axis the player can move before the
    // camera follows.
    public float yMargin = 1.5f;

    // How smoothly the camera catches up with its target
    // movement in the x axis.
    public float xSmooth = 1.5f;

    // How smoothly the camera catches up with its target
    // movement in the y axis.
    public float ySmooth = 1.5f;

    // The maximum x and y coordinates the camera can have.
    public Vector2 maxXAndY;

    // The minimum x and y coordinates the camera can have.
    public Vector2 minXAndY;

    // Reference to the player's transform.
    public Transform player;
}

The variables are all commented to explain their purpose, but we'll cover each as we use them. First off, we need to get the player object's position so that we can track the camera to it by discovering it from the object it is attached to. This is done by adding the following code in the Awake function:

void Awake()
{
    // Setting up the reference.
    player = GameObject.Find("Player").transform;
    if (player == null)
    {
        Debug.LogError("Player object not found");
    }
}

An alternative to discovering the player this way is to make the player property public and then assign it in the editor. There is no right or wrong way—just your preference. It is also a good practice to add some element of debugging to let you know if there is a problem in the scene with a missing reference, else all you will see are errors such as object not initialized or variable was null.

Next, we need a couple of helper methods to check whether the player has moved near the edge of the camera's bounds as defined by the Max X and Y variables. In the following code, we will use the settings defined in the preceding code to control how close you can get to the end result:

bool CheckXMargin()
{
    // Returns true if the distance between the camera and the
    // player in the x axis is greater than the x margin.
    return Mathf.Abs(transform.position.x - player.position.x) > xMargin;
}

bool CheckYMargin()
{
    // Returns true if the distance between the camera and the
    // player in the y axis is greater than the y margin.
    return Mathf.Abs(transform.position.y - player.position.y) > yMargin;
}

To finish this script, we need to check each frame, when the scene is drawn, to see whether the player is close to the edge and update the camera's position accordingly. Also, we need to check if the camera bounds have reached the edge of the screen and not move it beyond.

Comparing Update, FixedUpdate, and LateUpdate

There is usually a lot of debate about which update method should be used within a Unity game. To put it simply, the FixedUpdate method is called on a regular basis throughout the lifetime of the game and is generally used for physics and time-sensitive code. The Update method, however, is only called after the end of each frame that is drawn to the screen, as the time taken to draw the screen can vary (due to the number of objects to be drawn and so on). So, the Update call ends up being fairly irregular. For more detail on the difference between Update and FixedUpdate, see the Unity Learn tutorial video at http://unity3d.com/learn/tutorials/modules/beginner/scripting/update-and-fixedupdate.

As the player is being moved by the physics system, it is better to update the camera in the FixedUpdate method:

void FixedUpdate()
{
    // By default the target x and y coordinates of the camera
    // are its current x and y coordinates.
    float targetX = transform.position.x;
    float targetY = transform.position.y;

    // If the player has moved beyond the x margin...
    if (CheckXMargin())
        // the target x coordinate should be a Lerp between
        // the camera's current x position and the player's
        // current x position.
        targetX = Mathf.Lerp(transform.position.x, player.position.x, xSmooth * Time.fixedDeltaTime);

    // If the player has moved beyond the y margin...
    if (CheckYMargin())
        // the target y coordinate should be a Lerp between
        // the camera's current y position and the player's
        // current y position.
        targetY = Mathf.Lerp(transform.position.y, player.position.y, ySmooth * Time.fixedDeltaTime);

    // The target x and y coordinates should not be larger
    // than the maximum or smaller than the minimum.
    targetX = Mathf.Clamp(targetX, minXAndY.x, maxXAndY.x);
    targetY = Mathf.Clamp(targetY, minXAndY.y, maxXAndY.y);

    // Set the camera's position to the target position with
    // the same z component.
    transform.position = new Vector3(targetX, targetY, transform.position.z);
}

As they say, every game is different, and how the camera acts can be different for every game. In a lot of cases, the camera should be updated in the LateUpdate method after all drawing, updating, and physics are complete. This, however, can be a double-edged sword if you rely on math calculations that are affected in the FixedUpdate method, such as Lerp. It all comes down to tweaking your camera system to work the way you need it to.

Once the script is saved, just attach it to the Main Camera element by dragging the script to it or by adding a script component to the camera and selecting the script. Finally, we just need to configure the script and the camera to fit our game size as follows: set the orthographic Size of the camera to 2.7, and set the Min X and Max X values to -5 and 5, respectively.

The perils of resolution

When dealing with cameras, there is always one thing that will trip us up as soon as we try to build for another platform—resolution. By default, the Unity player in the editor runs in the Free Aspect mode, as shown in the following screenshot. The Aspect mode (from the Aspect drop-down) can be changed to represent the resolutions supported by each platform you can target. The following is what you get when you switch your build target to each platform.

To change the build target, go into your project's Build Settings by navigating to File | Build Settings or by pressing Ctrl + Shift + B, then select a platform and click on the Switch Platform button. This is shown in the following screenshot.

When you change the Aspect drop-down to view in one of these resolutions, you will notice how the aspect ratio for what is drawn to the screen changes by either stretching or compressing the visible area. If you run the editor player in full screen by clicking on the Maximize on Play button and then clicking on the play icon, you will see this change more clearly. Alternatively, you can run your project on a target device to see the proper perspective output. The reason I bring this up here is that if you used fixed bounds settings for your camera or game objects, then these values may not work for every resolution, thereby putting your settings out of range or (in most cases) making them too undersized.
You can handle this by altering the settings for each build or by using compiler directives such as #if UNITY_METRO to force the default depending on the build (in this example, Windows 8). To read more about compiler directives, check the Unity documentation at http://docs.unity3d.com/Manual/PlatformDependentCompilation.html.

A better FollowCamera script

If you are only targeting one device/resolution or your background scrolls indefinitely, then the preceding manual approach works fine. However, if you want it to be a little more dynamic, then we need to know what resolution we are working in and how much space our character has to travel. We will perform the following steps to do this:

We will change the min and max variables to private as we no longer need to configure them in the Inspector window. The code is as follows:

// The maximum x and y coordinates the camera can have.
private Vector2 maxXAndY;

// The minimum x and y coordinates the camera can have.
private Vector2 minXAndY;

To work out how much space is available in our town, we need to interrogate the rendering size of our background sprite. So, in the Awake function, we add the following lines of code:

// Get the bounds for the background texture - world size
var backgroundBounds = GameObject.Find("background").renderer.bounds;

In the Awake function, we then work out our resolution and viewable space by using the camera's ViewportToWorldPoint method and converting the result to the same coordinate space as the sprite. This is done using the following code:

// Get the viewable bounds of the camera in world
// coordinates
var camTopLeft = camera.ViewportToWorldPoint(new Vector3(0, 0, 0));
var camBottomRight = camera.ViewportToWorldPoint(new Vector3(1, 1, 0));

Finally, in the Awake function, we update the min and max values using the texture size and the camera's real-world bounds. This is done using the following lines of code:

// Automatically set the min and max values
minXAndY.x = backgroundBounds.min.x - camTopLeft.x;
maxXAndY.x = backgroundBounds.max.x - camBottomRight.x;

In the end, it is up to your specific implementation for the type of game you are making to decide which pattern works for your game.
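Pulling those fragments together, the dynamic Awake() function might look like the following sketch. It keeps the renderer and camera shortcut properties used in the snippets above (newer Unity versions would use GetComponent<Renderer>() and GetComponent<Camera>() instead), and it only sets the x limits, as the snippets do:

void Awake()
{
    // Setting up the reference.
    player = GameObject.Find("Player").transform;
    if (player == null)
    {
        Debug.LogError("Player object not found");
    }

    // Get the bounds for the background texture - world size.
    // Use the exact name of your background object in the scene here.
    var backgroundBounds = GameObject.Find("background").renderer.bounds;

    // Get the viewable bounds of the camera in world coordinates.
    var camTopLeft = camera.ViewportToWorldPoint(new Vector3(0, 0, 0));
    var camBottomRight = camera.ViewportToWorldPoint(new Vector3(1, 1, 0));

    // Automatically set the min and max values; the y limits could be
    // derived the same way if your level also scrolls vertically.
    minXAndY.x = backgroundBounds.min.x - camTopLeft.x;
    maxXAndY.x = backgroundBounds.max.x - camBottomRight.x;
}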
Next, we will position each of the child game objects to the left- and right-hand side of the screen, as shown in the following screenshot: Next, we will add a Box Collider 2D to each border game object and increase its height to ensure that it covers the entire height of the scene. I've set the Y value to 5 for effect, as shown in the following screenshot: The end result should look like the following screenshot, with the two new colliders highlighted in green: Alternatively, you could have created just one of the children, added the box collider, duplicated it (by navigating to Edit | Duplicate or by pressing Ctrl + D), and moved it. If you have to create multiples of the same thing, this is a handy tip to remember. If you run the project now, our hero can no longer escape this town on his own. However, as we want to let him leave, we can add a script to the new border game objects so that when the hero reaches the edge of the town, he can leave. Journeying onwards Now that we have collision zones on our town's borders, we can hook into them with a script that activates when the hero approaches. Create a new C# script called NavigationPrompt, clear its contents, and populate it with the following code:

using UnityEngine;

public class NavigationPrompt : MonoBehaviour
{
  bool showDialog;

  void OnCollisionEnter2D(Collision2D col)
  {
    showDialog = true;
  }

  void OnCollisionExit2D(Collision2D col)
  {
    showDialog = false;
  }
}

The preceding code gives us the framework of a collision detection script that sets a flag on and off when the character interacts with whatever the script is attached to, provided it has a physics collision component. Without one, this script does nothing, but it won't cause an error either. Next, we will do something with the flag and display some GUI when the flag is set. So, add the following extra function to the preceding script:

void OnGUI()
{
  if (showDialog)
  {
    //layout start
    GUI.BeginGroup(new Rect(Screen.width / 2 - 150, 50, 300, 250));

    //the menu background box
    GUI.Box(new Rect(0, 0, 300, 250), "");

    // Information text
    GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel?");

    //Player wants to leave this location
    if (GUI.Button(new Rect(55, 100, 180, 40), "Travel"))
    {
      showDialog = false;

      // The following line is commented out for now
      // as we have nowhere to go :D
      //Application.LoadLevel(1);
    }

    //Player wants to stay at this location
    if (GUI.Button(new Rect(55, 150, 180, 40), "Stay"))
    {
      showDialog = false;
    }

    //layout end
    GUI.EndGroup();
  }
}

The function itself is very simple and only activates if the showDialog flag has been set to true by the collision detection. It then does the following:
- In the OnGUI method, we set up a dialog window region with some text and two buttons.
- One button asks if the player wants to travel, which would load the next area (commented out for now as we only have one scene), and closes the dialog.
- One button simply closes the dialog if the hero didn't actually want to leave. As we haven't stopped the player from moving, the player can also close it by simply walking away.
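As written, these collision callbacks flip the flag for anything with a physics collider that touches the border, not just our hero. If that becomes a problem in your scene, one possible refinement (not part of the original sample, and assuming the hero carries the built-in Player tag) is to filter the collision before showing the prompt:

void OnCollisionEnter2D(Collision2D col)
{
  // Ignore NPCs, props, and other physics objects; only the hero
  // (tagged "Player") should trigger the travel prompt.
  if (col.gameObject.CompareTag("Player"))
  {
    showDialog = true;
  }
}

The same check can be applied in OnCollisionExit2D so the dialog is only hidden when the player, rather than some other object, leaves the border.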
If you now add the NavigationPrompt script to the two world border (LeftBorder and RightBorder) game objects, this will result in the following simple UI whenever the player collides with the edges of our world: We can further enhance this by tagging or naming our borders to indicate a destination. I prefer tagging, as it does not interfere with how my scene looks in the project hierarchy; also, I can control what tags are available and prevent accidental mistyping. To tag a game object, simply select a Tag using the drop-down list in the Inspector when you select the game object in the scene or project. This is shown in the following screenshot: If you haven't set up your tags yet or just wish to add a new one, select Add Tag in the drop-down menu; this will open up the Tags and Layers window of Inspector. Alternatively, you can call up this window by navigating to Edit | Project Settings | Tags and layers in the menu. It is shown in the following screenshot: You can only edit or change user-defined tags. There are several other tags that are system defined. You can use these as well; you just cannot change, remove, or edit them. These include Player, Respawn, Finish, Editor Only, Main Camera, and GameController. As you can see from the preceding screenshot, I have entered two new tags called The Cave and The World, which are the two main exit points from our town. Unity also adds an extra item to the arrays in the editor. This helps you when you want to add more items; it's annoying when you want a fixed size but it is meant to help. When the project runs, however, the correct count of items will be exposed. Once these are set up, just return to the Inspector for the two borders, and set the right one to The World and the left to The Cave. Now, I was quite specific in how I named these tags, as you can now reuse these tags in the script to both aid navigation and also to notify the player where they are going. To do this, simply update the Do you want to travel to line to the following: //Information text GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel to " +   this.tag + "?"); Here, we have simply appended the dialog as it is presented to the user with the name of the destination we set in the tag. Now, we'll get a more personal message, as shown in the following screenshot: Planning for the larger picture Now for small games, the preceding implementation is fine; however, if you are planning a larger world with a large number of interactions, provide complex decisions to prevent the player continuing unless they are ready. As the following diagram shows, there are several paths the player can take and in some cases, these is only one way. Now, we could just build up the logic for each of these individually as shown in the screenshot, but it is better if we build a separate navigation system so that we have everything in one place; it's just easier to manage that way. This separation is a fundamental part of any good game design. Keeping the logic and game functionality separate makes it easier to maintain in the future, especially when you need to take internationalization into account (but we will learn more about that later). Now, we'll change to using a manager to handle all the world/scene transitions, and simplify the tag names we use as they won't need to be displayed. So, The Cave will be renamed as just Cave, and we will get the text to display from the navigation manager instead of the tag. 
So, by separating out the core decision making functionality out of the prompt script, we can build the core manager for navigation. Its primary job is to maintain where a character can travel and information about that destination. First, we'll update the tags we created earlier to simpler identities that we can use in our navigation manager (update The Cave to Cave01 and The World to World). Next, we'll create a new C# script called NavigationManager in our AssetsScripts folder, and then replace its contents with the following lines of code: public static class NavigationManager {       public static Dictionary<string,string> RouteInformation =     new Dictionary<string,string>()   {     { "World", "The big bad world"},     { "Cave", "The deep dark cave"},   };     public static string GetRouteInfo(string destination)   {     return RouteInformation.ContainsKey(destination) ?     RouteInformation[destination] : null;   }     public static bool CanNavigate(string destination)   {     return true;   }     public static void NavigateTo(string destination)   {     // The following line is commented out for now     // as we have nowhere to go :D     //Application.LoadLevel(destination);   } } Notice the ? and : operators in the following statement: RouteInformation.ContainsKey(destination) ?   RouteInformation[destination] : null; These operators are C# conditional operators. They are effectively the shorthand of the following: if(RouteInformation.ContainsKey(destination)) {   return RouteInformation[destination]; } else {   return null; } Shorter, neater, and much nicer, don't you think? For more information, see the MSDN C# page at http://bit.ly/csharpconditionaloperator. The script is very basic for now, but contains several following key elements that can be expanded to meet the design goals of your game: RouteInformation: This is a list of all the possible destinations in the game in a dictionary. A static list of possible destinations in the game, and it is a core part of the manager as it knows everywhere you can travel in the game in one place. GetRouteInfo: This is a basic information extraction function. A simple controlled function to interrogate the destination list. In this example, we just return the text to be displayed in the prompt, which allows for more detailed descriptions that we could use in tags. You could use this to provide alternate prompts depending on what the player is carrying and whether they have a lit torch, for example. CanNavigate: This is a test to see if navigation is possible. If you are going to limit a player's travel, you need a way to test if they can move, allowing logic in your game to make alternate choices if the player cannot. You could use a different system for this by placing some sort of block in front of a destination to limit choice (as used in the likes of Zelda), such as an NPC or rock. As this is only an example, we can always travel and add logic to control it if you wish. NavigateTo: This is a function to instigate navigation. Once a player can travel, you can control exactly what happens in the game: does navigation cause the next scene to load straight away (as in the script currently), or does the current scene fade out and then a traveling screen is shown before fading the next level in? Granted, this does nothing at present as we have nowhere to travel to. The script you will notice is different to the other scripts used so far, as it is a static class. 
This means it sits in the background, only exists once in the game, and is accessible from anywhere. This pattern is useful for fixed information that isn't attached to anything; it just sits in the background waiting to be queried. Later, we will cover more advanced types and classes to provide more complicated scenarios. With this class in place, we just need to update our previous script (and the tags) to make use of this new manager. Update the NavigationPrompt script as follows: Update the collision function to only show the prompt if we can travel. The code is as follows: void OnCollisionEnter2D(Collision2D col) {   //Only allow the player to travel if allowed   if (NavigationManager.CanNavigate(this.tag))   {     showDialog = true;   } } When the dialog shows, display the more detailed destination text provided by the manager for the intended destination. The code is as follows: //Dialog detail - updated to get better detail GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel   to " + NavigationManager.GetRouteInfo(this.tag) + "?"); If the player wants to travel, let the manager start the travel process. The code is as follows: //Player wants to leave this location if (GUI.Button(new Rect(55, 100, 180, 40), "Travel")) {   showDialog = false;   NavigationManager.NavigateTo(this.tag); } The functionality I've shown here is very basic and it is intended to make you think about how you would need to implement it for your game. With so many possibilities available, I could fill several articles on this kind of subject alone. Backgrounds and active elements A slightly more advanced option when building game worlds is to add a level of immersive depth to the scene. Having a static image to show the village looks good, especially when you start adding houses and NPCs to the mix; but to really make it shine, you should layer the background and add additional active elements to liven it up. We won't add them to the sample project at this time, but it is worth experimenting with in your own projects (or try adding it to this one)—it is a worthwhile effect to look into. Parallaxing If we look at the 2D sample provided by Unity, the background is split into several panes—each layered on top of one another and each moving at a different speed when the player moves around. There are also other elements such as clouds, birds, buses, and taxes driving/flying around, as shown in the following screenshot: Implementing these effects is very easy technically. You just need to have the art assets available. There are several scripts in the wiki I described earlier, but the one in Unity's own 2D sample is the best I've seen. To see the script, just download the Unity Projects: 2D Platformer asset from https://www.assetstore.unity3d.com/en/#!/content/11228, and check out the BackgroundParallax script in the AssetsScripts folder. 
The BackgroundParallax script in the platformer sample implements the following: An array of background images, which is layered correctly in the scene (which is why the script does not just discover the background sprites) A scaling factor to control how much the background moves in relation to the camera target, for example, the camera A reducing factor to offset how much each layer moves so that they all don't move as one (or else what is the point, might as well be a single image) A smoothing factor so that each background moves smoothly with the target and doesn't jump around Implementing this same model in your game would be fairly simple provided you have texture assets that could support it. Just replicate the structure used in the platformer 2D sample and add the script. Remember to update the FollowCamera script to be able to update the base background, however, to ensure that it can still discover the size of the main area. Foreground objects The other thing you can do to liven up your game is to add random foreground objects that float across your scene independently. These don't collide with anything and aren't anything to do with the game itself. They are just eye candy to make your game look awesome. The process to add these is also fairly simple, but it requires some more advanced Unity features such as coroutines, which we are not going to cover here. So, we will come back to these later. In short, if you examine the BackgroundPropSpawner.cs script from the preceding Unity platformer 2D sample, you will have to perform the following steps: Create/instantiate an object to spawn. Set a random position and direction for the object to travel. Update the object over its lifetime. Once it's out of the scene, destroy or hide it. Wait for a time, and then start again. This allows them to run on their own without impacting the gameplay itself and just adds that extra bit of depth. In some cases, I've seen particle effects are also used to add effect, but they are used sparingly. Shaders and 2D Believe it or not, all 2D elements (even in their default state) are drawn using a shader—albeit a specially written shader designed to light and draw the sprite in a very specific way. If you look at the player sprite in the inspector, you will see that it uses a special Material called Sprites-Default, as shown in the following screenshot: This section is purely meant to highlight all the shading options you have in the 2D system. Shaders have not changed much in this update except for the addition of some 2D global lighting found in the default sprite shader. For more detail on shaders in general, I suggest a dedicated Unity shader book such as https://www.packtpub.com/game-development/unity-shaders-and-effects-cookbook. Clicking on the button next to Material field will bring up the material selector, which also shows the two other built-in default materials, as shown in the following screenshot: However, selecting either of these will render your sprite invisible as they require a texture and lighting to work; they won't inherit from the Sprite Renderer texture. You can override this by creating your own material and assigning alternate sprite style shaders. To create a new material, just select the AssetsMaterials folder (this is not crucial, but it means we create the material in a sensible place in our project folder structure) and then right click on and select Create | Material. Alternatively, do the same using the project view's Edit... 
menu option, as shown in the following screenshot: This gives us a basic default Diffuse shader, which is fine for basic 3D objects. However, we also have two default sprite rendering shaders available. Selecting the shader dropdown gives us the screen shown in the following screenshot: Now, these shaders have the following two very specific purposes: Default: This shader inherits its texture from the Sprite Renderer texture to draw the sprite as is. This is a very basic functionality—just enough to draw the sprite. (It contains its own static lighting.) Diffuse: This shader is the same as the Default shader; it inherits the texture of Default, but it requires an external light source as it does not contain any lighting—this has to be applied separately. It is a slightly more advanced shader, which includes offsets and other functions. Creating one of these materials and applying it to the Sprite Renderer texture of a sprite will override its default constrained behavior. This opens up some additional shader options in the Inspector, as shown in the following screenshot: These options include the following: Sprite Texture: Although changing the Tiling and Offset values causes a warning to appear, they still display a function (even though the actual displayed value resets). Tint: This option allows changing the default light tint of the rendered sprite. It is useful to create different colored objects from the same sprite. Pixel snap: This option makes the rendered sprite crisper but narrows the drawn area. It is a trial and error feature (see the following sections for more information). Achieving pixel perfection in your game in Unity can be a challenge due to the number of factors that can affect it, such as the camera view size, whether the image texture is a Power Of Two (POT) size, and the import setting for the image. This is basically a trial and error game until you are happy with the intended result. If you are feeling adventurous, you can extend these default shaders (although this is out of the scope of this article). The full code for these shaders can be found at http://Unity3d.com/unity/download/archive. If you are writing your own shaders though, be sure to add some lighting to the scene; otherwise, they are just going to appear dark and unlit. Only the default sprite shader is automatically lit by Unity. Alternatively, you can use the default sprite shader as a base to create your new custom shader and retain the 2D basic lighting. Another worthy tip is to check out the latest version of the Unity samples (beta) pack. In it, they have added logic to have two sets of shaders in your project: one for mobile and one for desktop, and a script that will swap them out at runtime depending on the platform. This is very cool; check out on the asset store at https://www.assetstore.unity3d.com/#/content/14474 and the full review of the pack at http://darkgenesis.zenithmoon.com/unity3dsamplesbeta-anoverview/. 
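As a quick illustration of the Tint option described above, a sprite's color can also be driven from script rather than by authoring a separate material per color. The following is a minimal sketch (the random-color behavior is my own example, not something from the sample project) that tints whatever SpriteRenderer it is attached to:

using UnityEngine;

public class RandomTint : MonoBehaviour
{
  void Start()
  {
    // SpriteRenderer.color feeds the tint used by the default sprite shader,
    // so every instance can look different while sharing one material.
    var spriteRenderer = GetComponent<SpriteRenderer>();
    spriteRenderer.color = new Color(Random.value, Random.value, Random.value, 1f);
  }
}

Attaching this to a handful of rock sprites, for example, gives each one its own coloration without multiplying materials, and because the tint is applied as a vertex color, the sprites can still share the one material.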
Going further
If you are the adventurous sort, try expanding your project to add the following:
- Add some buildings to the town
- Set up some entry points for a building, such as a shop, and work that into your navigation system
- Add some rocks to the scene and color each one differently using a manual material, or even add a script to randomly set the color (as sketched above) instead of creating several materials
- Add a new scene for the cave using another environment background, and get the player to travel between the two scenes

Summary
This certainly has been a very busy article just to add a background to our scene, but working out how each scene will work is a crucial design element for the entire game; you have to pick a pattern that works for you and your end result early on, as changing it later can be very detrimental (and a lot of work). In this article, we covered the following topics:
- Some more practice with the Sprite Editor and sprite slicer, including some tips and tricks for when it doesn't work (or you want to do it yourself)
- Some camera tips, tricks, and scripts
- An overview of sprite layers and sprite sorting
- Defining boundaries in scenes
- Scene navigation management and planning levels in your game
- Some basics of how shaders work for 2D

To learn Unity 2D development from the basics, you can refer to https://www.packtpub.com/game-development/learning-unity-2d-game-development-example.

Resources for Article:
Further resources on this subject:
- Build a First Person Shooter [article]
- Let's Get Physical – Using GameMaker's Physics System [article]
- Using the Tiled map editor [article]

Dynamic Graphics

Packt
22 Feb 2016
64 min read
There is no question that the rendering system of modern graphics devices is complicated. Even rendering a single triangle to the screen engages many of these components, since GPUs are designed for large amounts of parallelism, as opposed to CPUs, which are designed to handle virtually any computational scenario. Modern graphics rendering is a high-speed dance of processing and memory management that spans software, hardware, multiple memory spaces, multiple languages, multiple processors, multiple processor types, and a large number of special-case features that can be thrown into the mix. To make matters worse, every graphics situation we will come across is different in its own way. Running the same application against a different device, even by the same manufacturer, often results in an apples-versus-oranges comparison due to the different capabilities and functionality they provide. It can be difficult to determine where a bottleneck resides within such a complex chain of devices and systems, and it can take a lifetime of industry work in 3D graphics to have a strong intuition about the source of performance issues in modern graphics systems. Thankfully, Profiling comes to the rescue once again. If we can gather data about each component, use multiple performance metrics for comparison, and tweak our Scenes to see how different graphics features affect their behavior, then we should have sufficient evidence to find the root cause of the issue and make appropriate changes. So in this article, you will learn how to gather the right data, dig just deep enough into the graphics system to find the true source of the problem, and explore various solutions to work around a given problem. There are many more topics to cover when it comes to improving rendering performance, so in this article we will begin with some general techniques on how to determine whether our rendering is limited by the CPU or by the GPU, and what we can do about either case. We will discuss optimization techniques such as Occlusion Culling and Level of Detail (LOD) and provide some useful advice on Shader optimization, as well as large-scale rendering features such as lighting and shadows. Finally, since mobile devices are a common target for Unity projects, we will also cover some techniques that may help improve performance on limited hardware. (For more resources related to this topic, see here.) Profiling rendering issues Poor rendering performance can manifest itself in a number of ways, depending on whether the device is CPU-bound, or GPU-bound; in the latter case, the root cause could originate from a number of places within the graphics pipeline. This can make the investigatory stage rather involved, but once the source of the bottleneck is discovered and the problem is resolved, we can expect significant improvements as small fixes tend to reap big rewards when it comes to the rendering subsystem. The CPU sends rendering instructions through the graphics API, that funnel through the hardware driver to the GPU device, which results in commands entering the GPU's Command Buffer. These commands are processed by the massively parallel GPU system one by one until the buffer is empty. But there are a lot more nuances involved in this process. 
The following shows a (greatly simplified) diagram of a typical GPU pipeline (which can vary based on technology and various optimizations), and the broad rendering steps that take place during each stage: The top row represents the work that takes place on the CPU, the act of calling into the graphics API, through the hardware driver, and pushing commands into the GPU. Ergo, a CPU-bound application will be primarily limited by the complexity, or sheer number, of graphics API calls. Meanwhile, a GPU-bound application will be limited by the GPU's ability to process those calls, and empty the Command Buffer in a reasonable timeframe to allow for the intended frame rate. This is represented in the next two rows, showing the steps taking place in the GPU. But, because of the device's complexity, they are often simplified into two different sections: the front end and the back end. The front end refers to the part of the rendering process where the GPU has received mesh data, a draw call has been issued, and all of the information that was fed into the GPU is used to transform vertices and run through Vertex Shaders. Finally, the rasterizer generates a batch of fragments to be processed in the back end. The back end refers to the remainder of the GPU's processing stages, where fragments have been generated, and now they must be tested, manipulated, and drawn via Fragment Shaders onto the frame buffer in the form of pixels. Note that "Fragment Shader" is the more technically accurate term for Pixel Shaders. Fragments are generated by the rasterization stage, and only technically become pixels once they've been processed by the Shader and drawn to the Frame Buffer. There are a number of different approaches we can use to determine where the root cause of a graphics rendering issue lies: Profiling the GPU with the Profiler Examining individual frames with the Frame Debugger Brute Force Culling GPU profiling Because graphics rendering involves both the CPU and GPU, we must examine the problem using both the CPU Usage and GPU Usage areas of the Profiler as this can tell us which component is working hardest. For example, the following screenshot shows the Profiler data for a CPU-bound application. The test involved creating thousands of simple objects, with no batching techniques taking place. This resulted in an extremely large Draw Call count (around 15,000) for the CPU to process, but giving the GPU relatively little work to do due to the simplicity of the objects being rendered: This example shows that the CPU's "rendering" task is consuming a large amount of cycles (around 30 ms per frame), while the GPU is only processing for less than 16 ms, indicating that the bottleneck resides in the CPU. Meanwhile, Profiling a GPU-bound application via the Profiler is a little trickier. This time, the test involves creating a small number of high polycount objects (for a low Draw Call per vertex ratio), with dozens of real-time point lights and an excessively complex Shader with a texture, normal texture, heightmap, emission map, occlusion map, and so on, (for a high workload per pixel ratio). The following screenshot shows Profiler data for the example Scene when it is run in a standalone application: As we can see, the rendering task of the CPU Usage area matches closely with the total rendering costs of the GPU Usage area. We can also see that the CPU and GPU time costs at the bottom of the image are relatively similar (41.48 ms versus 38.95 ms). 
This is very unintuitive as we would expect the GPU to be working much harder than the CPU. Be aware that the CPU/GPU millisecond cost values are not calculated or revealed unless the appropriate Usage Area has been added to the Profiler window. However, let's see what happens when we test the same exact Scene through the Editor: This is a better representation of what we would expect to see in a GPU-bound application. We can see how the CPU and GPU time costs at the bottom are closer to what we would expect to see (2.74 ms vs 64.82 ms). However, this data is highly polluted. The spikes in the CPU and GPU Usage areas are the result of the Profiler Window UI updating during testing, and the overhead cost of running through the Editor is also artificially increasing the total GPU time cost. It is unclear what causes the data to be treated this way, and this could certainly change in the future if enhancements are made to the Profiler in future versions of Unity, but it is useful to know this drawback. Trying to determine whether our application is truly GPU-bound is perhaps the only good excuse to perform a Profiler test through the Editor. The Frame Debugger A new feature in Unity 5 is the Frame Debugger, a debugging tool that can reveal how the Scene is rendered and pieced together, one Draw Call at a time. We can click through the list of Draw Calls and observe how the Scene is rendered up to that point in time. It also provides a lot of useful details for the selected Draw Call, such as the current render target (for example, the shadow map, the camera depth texture, the main camera, or other custom render targets), what the Draw Call did (drawing a mesh, drawing a static batch, drawing depth shadows, and so on), and what settings were used (texture data, vertex colors, baked lightmaps, directional lighting, and so on). The following screenshot shows a Scene that is only being partially rendered due to the currently selected Draw Call within the Frame Debugger. Note the shadows that are visible from baked lightmaps that were rendered during an earlier pass before the object itself is rendered: If we are bound by Draw Calls, then this tool can be effective in helping us figure out what the Draw Calls are being spent on, and determine whether there are any unnecessary Draw Calls that are not having an effect on the scene. This can help us come up with ways to reduce them, such as removing unnecessary objects or batching them somehow. We can also use this tool to observe how many additional Draw Calls are consumed by rendering features, such as shadows, transparent objects, and many more. This could help us, when we're creating multiple quality levels for our game, to decide what features to enable/disable under the low, medium, and high quality settings. Brute force testing If we're poring over our Profiling data, and we're still not sure we can determine the source of the problem, we can always try the brute force method: cull a specific activity from the Scene and see if it results in greatly increased performance. If a small change results in a big speed improvement, then we have a strong clue about where the bottleneck lies. There's no harm in this approach if we eliminate enough unknown variables to be sure the data is leading us in the right direction. We will cover different ways to brute force test a particular issue in each of the upcoming sections. 
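As a simple example of the kind of brute force test we mean, the following sketch (my own helper, not a built-in Unity feature) toggles the renderers of everything marked with a chosen tag when a key is pressed, so we can watch the Profiler with and without those objects being drawn; the tag name and key binding are arbitrary choices:

using UnityEngine;

public class BruteForceCullTest : MonoBehaviour
{
  // Tag of the objects we suspect are expensive to render
  // (assumed to exist in the Scene).
  public string suspectTag = "Environment";

  void Update()
  {
    if (Input.GetKeyDown(KeyCode.F1))
    {
      foreach (var go in GameObject.FindGameObjectsWithTag(suspectTag))
      {
        foreach (var rend in go.GetComponentsInChildren<Renderer>())
        {
          rend.enabled = !rend.enabled;
        }
      }
    }
  }
}

If frame times barely move when half the scenery disappears, the bottleneck probably isn't those objects; if they improve dramatically, we have a strong lead to follow.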
CPU-bound If our application is CPU-bound, then we will observe a generally poor FPS value within the CPU Usage area of the Profiler window due to the rendering task. However, if VSync is enabled the data will often get muddied up with large spikes representing pauses as the CPU waits for the screen refresh rate to come around before pushing the current frame buffer. So, we should make sure to disable the VSync block in the CPU Usage area before deciding the CPU is the problem. Brute-forcing a test for CPU-bounding can be achieved by reducing Draw Calls. This is a little unintuitive since, presumably, we've already been reducing our Draw Calls to a minimum through techniques such as Static and Dynamic Batching, Atlasing, and so forth. This would mean we have very limited scope for reducing them further. What we can do, however, is disable the Draw-Call-saving features such as batching and observe if the situation gets significantly worse than it already is. If so, then we have evidence that we're either already, or very close to being, CPU-bound. At this point, we should see whether we can re-enable these features and disable rendering for a few choice objects (preferably those with low complexity to reduce Draw Calls without over-simplifying the rendering of our scene). If this results in a significant performance improvement then, unless we can find further opportunities for batching and mesh combining, we may be faced with the unfortunate option of removing objects from our scene as the only means of becoming performant again. There are some additional opportunities for Draw Call reduction, including Occlusion Culling, tweaking our Lighting and Shadowing, and modifying our Shaders. These will be explained in the following sections. However, Unity's rendering system can be multithreaded, depending on the targeted platform, which version of Unity we're running, and various settings, and this can affect how the graphics subsystem is being bottlenecked by the CPU, and slightly changes the definition of what being CPU-bound means. Multithreaded rendering Multithreaded rendering was first introduced in Unity v3.5 in February 2012, and enabled by default on multicore systems that could handle the workload; at the time, this was only PC, Mac, and Xbox 360. Gradually, more devices were added to this list, and since Unity v5.0, all major platforms now enable multithreaded rendering by default (and possibly some builds of Unity 4). Mobile devices were also starting to feature more powerful CPUs that could support this feature. Android multithreaded rendering (introduced in Unity v4.3) can be enabled through a checkbox under Platform Settings | Other Settings | Multithreaded Rendering. Multithreaded rendering on iOS can be enabled by configuring the application to make use of the Apple Metal API (introduced in Unity v4.6.3), under Player Settings | Other Settings | Graphics API. When multithreaded rendering is enabled, tasks that must go through the rendering API (OpenGL, DirectX, or Metal), are handed over from the main thread to a "worker thread". The worker thread's purpose is to undertake the heavy workload that it takes to push rendering commands through the graphics API and driver, to get the rendering instructions into the GPU's Command Buffer. This can save an enormous number of CPU cycles for the main thread, where the overwhelming majority of other CPU tasks take place. This means that we free up extra cycles for the majority of the engine to process physics, script code, and so on. 
Incidentally, the mechanism by which the main thread notifies the worker thread of tasks operates in a very similar way to the Command Buffer that exists on the GPU, except that the commands are much more high-level, with instructions like "render this object, with this Material, using this Shader", or "draw N instances of this piece of procedural geometry", and so on. This feature has been exposed in Unity 5 to allow developers to take direct control of the rendering subsystem from C# code. This customization is not as powerful as having direct API access, but it is a step in the right direction for Unity developers to implement unique graphical effects. Confusingly, the Unity API name for this feature is called "CommandBuffer", so be sure not to confuse it with the GPU's Command Buffer. Check the Unity documentation on CommandBuffer to make use of this feature: http://docs.unity3d.com/ScriptReference/Rendering.CommandBuffer.html. Getting back to the task at hand, when we discuss the topic of being CPU-bound in graphics rendering, we need to keep in mind whether or not the multithreaded renderer is being used, since the actual root cause of the problem will be slightly different depending on whether this feature is enabled or not. In single-threaded rendering, where all graphics API calls are handled by the main thread, and in an ideal world where both components are running at maximum capacity, our application would become bottlenecked on the CPU when 50 percent or more of the time per frame is spent handling graphics API calls. However, resolving these bottlenecks can be accomplished by freeing up work from the main thread. For example, we might find that greatly reducing the amount of work taking place in our AI subsystem will improve our rendering significantly because we've freed up more CPU cycles to handle the graphics API calls. But, when multithreaded rendering is taking place, this task is pushed onto the worker thread, which means the same thread isn't being asked to manage both engine work and graphics API calls at the same time. These processes are mostly independent, and even though additional work must still take place in the main thread to send instructions to the worker thread in the first place (via the internal CommandBuffer system), it is mostly negligible. This means that reducing the workload in the main thread will have little-to-no effect on rendering performance. Note that being GPU-bound is the same regardless of whether multithreaded rendering is taking place. GPU Skinning While we're on the subject of CPU-bounding, one task that can help reduce CPU workload, at the expense of additional GPU workload, is GPU Skinning. Skinning is the process where mesh vertices are transformed based on the current location of their animated bones. The animation system, working on the CPU, only transforms the bones, but another step in the rendering process must take care of the vertex transformations to place the vertices around those bones, performing a weighted average over the bones connected to those vertices. This vertex processing task can either take place on the CPU or within the front end of the GPU, depending on whether the GPU Skinning option is enabled. This feature can be toggled under Edit | Project Settings | Player Settings | Other Settings | GPU Skinning. Front end bottlenecks It is not uncommon to use a mesh that contains a lot of unnecessary UV and Normal vector data, so our meshes should be double-checked for this kind of superfluous fluff. 
We should also let Unity optimize the structure for us, which minimizes cache misses as vertex data is read within the front end. We will also learn some useful Shader optimization techniques shortly, when we begin to discuss back end optimizations, since many optimization techniques apply to both Fragment and Vertex Shaders. The only attack vector left to cover is finding ways to reduce actual vertex counts. The obvious solutions are simplification and culling; either have the art team replace problematic meshes with lower polycount versions, and/or remove some objects from the scene to reduce the overall polygon count. If these approaches have already been explored, then the last approach we can take is to find some kind of middle ground between the two. Level Of Detail Since it can be difficult to tell the difference between a high quality distance object and a low quality one, there is very little reason to render the high quality version. So, why not dynamically replace distant objects with something more simplified? Level Of Detail (LOD), is a broad term referring to the dynamic replacement of features based on their distance or form factor relative to the camera. The most common implementation is mesh-based LOD: dynamically replacing a mesh with lower and lower detailed versions as the camera gets farther and farther away. Another example might be replacing animated characters with versions featuring fewer bones, or less sampling for distant objects, in order to reduce animation workload. The built-in LOD feature is available in the Unity 4 Pro Edition and all editions of Unity 5. However, it is entirely possible to implement it via Script code in Unity 4 Free Edition if desired. Making use of LOD can be achieved by placing multiple objects in the Scene and making them children of a GameObject with an attached LODGroup component. The LODGroup's purpose is to generate a bounding box from these objects, and decide which object should be rendered based on the size of the bounding box within the camera's field of view. If the object's bounding box consumes a large area of the current view, then it will enable the mesh(es) assigned to lower LOD groups, and if the bounding box is very small, it will replace the mesh(es) with those from higher LOD groups. If the mesh is too far away, it can be configured to hide all child objects. So, with the proper setup, we can have Unity replace meshes with simpler alternatives, or cull them entirely, which eases the burden on the rendering process. Check the Unity documentation for more detailed information on the LOD feature: http://docs.unity3d.com/Manual/LevelOfDetail.html. This feature can cost us a large amount of development time to fully implement; artists must generate lower polygon count versions of the same object, and level designers must generate LOD groups, configure them, and test them to ensure they don't cause jarring transitions as the camera moves closer or farther away. It also costs us in memory and runtime CPU; the alternative meshes need to be kept in memory, and the LODGroup component must routinely test whether the camera has moved to a new position that warrants a change in LOD level. In this era of graphics card capabilities, vertex processing is often the least of our concerns. Combined with the additional sacrifices needed for LOD to function, developers should avoid preoptimizing by automatically assuming LOD will help them. 
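For projects where LOD has been proven worthwhile, the LODGroup is normally configured in the Inspector, but it can also be wired up from script, which is handy for procedurally generated content. The following is a minimal sketch, assuming the renderers for each detail level already exist as children of this object; the 0.6/0.3/0.1 transition values are arbitrary placeholders:

using UnityEngine;

public class LodSetup : MonoBehaviour
{
  public Renderer highDetail;
  public Renderer mediumDetail;
  public Renderer lowDetail;

  void Start()
  {
    var lodGroup = gameObject.AddComponent<LODGroup>();

    // Each LOD entry lists the renderers shown while the object's bounding
    // box still fills at least (roughly) this fraction of the screen height;
    // below the last threshold, the object is culled entirely.
    var lods = new LOD[]
    {
      new LOD(0.6f, new Renderer[] { highDetail }),
      new LOD(0.3f, new Renderer[] { mediumDetail }),
      new LOD(0.1f, new Renderer[] { lowDetail })
    };

    lodGroup.SetLODs(lods);
    lodGroup.RecalculateBounds();
  }
}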
Excessive use of the feature will lead to burdening other parts of our application's performance, and chew up precious development time, all for the sake of paranoia. If it hasn't been proven to be a problem, then it's probably not a problem! Scenes that feature large, expansive views of the world, and lots of camera movement, should consider implementing this technique very early, as the added distance and massive number of visible objects will exacerbate the vertex count enormously. Scenes that are always indoors, or feature a camera with a viewpoint looking down at the world (real-time strategy and MOBA games, for example) should probably steer clear of implementing LOD from the beginning. Games somewhere between the two should avoid it until necessary. It all depends on how many vertices are expected to be visible at any given time and how much variability in camera distance there will be. Note that some game development middleware companies offer third-party tools for automated LOD mesh generation. These might be worth investigating to compare their ease of use versus quality loss versus cost effectiveness. Disable GPU Skinning As previously mentioned, we could enable GPU Skinning to reduce the burden on a CPU-bound application, but enabling this feature will push the same workload into the front end of the GPU. Since Skinning is one of those "embarrassingly parallel" processes that fits well with the GPU's parallel architecture, it is often a good idea to perform the task on the GPU. But this task can chew up precious time in the front end preparing the vertices for fragment generation, so disabling it is another option we can explore if we're bottlenecked in this area. Again, this feature can be toggled under Edit | Project Settings | Player Settings | Other Settings | GPU Skinning. GPU Skinning is available in Unity 4 Pro Edition, and all editions of Unity 5. Reduce tessellation There is one last task that takes place in the front end process and that we need to consider: tessellation. Tessellation through Geometry Shaders can be a lot of fun, as it is a relatively underused technique that can really make our graphical effects stand out from the crowd of games that only use the most common effects. But, it can contribute enormously to the amount of processing work taking place in the front end. There are no simple tricks we can exploit to improve tessellation, besides improving our tessellation algorithms, or easing the burden caused by other front end tasks to give our tessellation tasks more room to breathe. Either way, if we have a bottleneck in the front end and are making use of tessellation techniques, we should double-check that they are not consuming the lion's share of the front end's budget. Back end bottlenecks The back end is the more interesting part of the GPU pipeline, as many more graphical effects take place during this stage. Consequently, it is the stage that is significantly more likely to suffer from bottlenecks. There are two brute force tests we can attempt: Reduce resolution Reduce texture quality These changes will ease the workload during two important stages at the back end of the pipeline: fill rate and memory bandwidth, respectively. Fill rate tends to be the most common source of bottlenecks in the modern era of graphics rendering, so we will cover it first. Fill rate By reducing screen resolution, we have asked the rasterization system to generate significantly fewer fragments and transpose them over a smaller canvas of pixels. 
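Both of these brute force tests can also be flipped on at runtime with a couple of lines of script, which makes before-and-after Profiler comparisons easier. Here is a minimal sketch; the key bindings and values are my own arbitrary choices:

using UnityEngine;

public class BackEndBruteForceTest : MonoBehaviour
{
  void Update()
  {
    // Drop every texture by one mip level to test for memory bandwidth bottlenecks.
    if (Input.GetKeyDown(KeyCode.F2))
    {
      QualitySettings.masterTextureLimit = 1;
    }

    // Halve the output resolution to test for fill rate bottlenecks.
    if (Input.GetKeyDown(KeyCode.F3))
    {
      Screen.SetResolution(Screen.width / 2, Screen.height / 2, Screen.fullScreen);
    }
  }
}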
This will reduce the fill rate consumption of the application, giving a key part of the rendering pipeline some additional breathing room. Ergo, if performance suddenly improves with a screen resolution reduction, then fill rate should be our primary concern. Fill rate is a very broad term referring to the speed at which the GPU can draw fragments. But, this only includes fragments that have survived all of the various conditional tests we might have enabled within the given Shader. A fragment is merely a "potential pixel," and if it fails any of the enabled tests, then it is immediately discarded. This can be an enormous performance-saver as the pipeline can skip the costly drawing step and begin work on the next fragment instead. One such example is Z-testing, which checks whether the fragment from a closer object has already been drawn to the same pixel already. If so, then the current fragment is discarded. If not, then the fragment is pushed through the Fragment Shader and drawn over the target pixel, which consumes exactly one draw from our fill rate. Now imagine multiplying this process by thousands of overlapping objects, each generating hundreds or thousands of possible fragments, for high screen resolutions causing millions, or billions, of fragments to be generated each and every frame. It should be fairly obvious that skipping as many of these draws as we can will result in big rendering cost savings. Graphics card manufacturers typically advertise a particular fill rate as a feature of the card, usually in the form of gigapixels per second, but this is a bit of a misnomer, as it would be more accurate to call it gigafragments per second; however this argument is mostly academic. Either way, larger values tell us that the device can potentially push more fragments through the pipeline, so with a budget of 30 GPix/s and a target frame rate of 60 Hz, we can afford to process 30,000,000,000/60 = 500 million fragments per frame before being bottlenecked on fill rate. With a resolution of 2560x1440, and a best-case scenario where each pixel is only drawn over once, then we could theoretically draw the entire scene about 125 times without any noticeable problems. Sadly, this is not a perfect world, and unless we take significant steps to avoid it, we will always end up with some amount of redraw over the same pixels due to the order in which objects are rendered. This is known as overdraw, and it can be very costly if we're not careful. The reason that resolution is a good attack vector to check for fill rate bounding is that it is a multiplier. A reduction from a resolution of 2560x1440 to 800x600 is an improvement factor of about eight, which could reduce fill rate costs enough to make the application perform well again. Overdraw Determining how much overdraw we have can be represented visually by rendering all objects with additive alpha blending and a very transparent flat color. Areas of high overdraw will show up more brightly as the same pixel is drawn over with additive blending multiple times. This is precisely how the Scene view's Overdraw shading mode reveals how much overdraw our scene is suffering. The following screenshot shows a scene with several thousand boxes drawn normally, and drawn using the Scene view's Overdraw shading mode: At the end of the day, fill rate is provided as a means of gauging the best-case behavior. In other words, it's primarily a marketing term and mostly theoretical. 
But, the technical side of the industry has adopted the term as a way of describing the back end of the pipeline: the stage where fragment data is funneled through our Shaders and drawn to the screen. If every fragment required an absolute minimum level of processing (such as a Shader that returned a constant color), then we might get close to that theoretical maximum. The GPU is a complex beast, however, and things are never so simple. The nature of the device means it works best when given many small tasks to perform. But, if the tasks get too large, then fill rate is lost due to the back end not being able to push through enough fragments in time and the rest of the pipeline is left waiting for tasks to do. There are several more features that can potentially consume our theoretical fill rate maximum, including but not limited to alpha testing, alpha blending, texture sampling, the amount of fragment data being pulled through our Shaders, and even the color format of the target render texture (the final Frame Buffer in most cases). The bad news is that this gives us a lot of subsections to cover, and a lot of ways to break the process, but the good news is it gives us a lot of avenues to explore to improve our fill rate usage. Occlusion Culling One of the best ways to reduce overdraw is to make use of Unity's Occlusion Culling system. The system works by partitioning Scene space into a series of cells and flying through the world with a virtual camera making note of which cells are invisible from other cells (are occluded) based on the size and position of the objects present. Note that this is different to the technique of Frustum Culling, which culls objects not visible from the current camera view. This feature is always active in all versions, and objects culled by this process are automatically ignored by the Occlusion Culling system. Occlusion Culling is available in the Unity 4 Pro Edition and all editions of Unity 5. Occlusion Culling data can only be generated for objects properly labeled Occluder Static and Occludee Static under the StaticFlags dropdown. Occluder Static is the general setting for static objects where we want it to hide other objects, and be hidden by large objects in its way. Occludee Static is a special case for transparent objects that allows objects behind them to be rendered, but we want them to be hidden if something large blocks their visibility. Naturally, because one of the static flags must be enabled for Occlusion Culling, this feature will not work for dynamic objects. The following screenshot shows how effective Occlusion Culling can be at reducing the number of visible objects in our Scene: This feature will cost us in both application footprint and incur some runtime costs. It will cost RAM to keep the Occlusion Culling data structure in memory, and there will be a CPU processing cost to determine which objects are being occluded in each frame. The Occlusion Culling data structure must be properly configured to create cells of the appropriate size for our Scene, and the smaller the cells, the longer it takes to generate the data structure. But, if it is configured correctly for the Scene, Occlusion Culling can provide both fill rate savings through reduced overdraw, and Draw Call savings by culling non-visible objects. Shader optimization Shaders can be a significant fill rate consumer, depending on their complexity, how much texture sampling takes place, how many mathematical functions are used, and so on. 
Shaders do not directly consume fill rate, but do so indirectly because the GPU must calculate or fetch data from memory during Shader processing. The GPU's parallel nature means any bottleneck in a thread will limit how many fragments can be pushed into the thread at a later date, but parallelizing the task (sharing small pieces of the job between several agents) provides a net gain over serial processing (one agent handling each task one after another). The classic example is a vehicle assembly line. A complete vehicle requires multiple stages of manufacture to complete. The critical path to completion might involve five steps: stamping, welding, painting, assembly, and inspection, and each step is completed by a single team. For any given vehicle, no stage can begin before the previous one is finished, but whatever team handled the stamping for the last vehicle can begin stamping for the next vehicle as soon as it has finished. This organization allows each team to become masters of their particular domain, rather than trying to spread their knowledge too thin, which would likely result in less consistent quality in the batch of vehicles. We can double the overall output by doubling the number of teams, but if any team gets blocked, then precious time is lost for any given vehicle, as well as all future vehicles that would pass through the same team. If these delays are rare, then they can be negligible in the grand scheme, but if not, and one stage takes several minutes longer than normal each and every time it must complete the task, then it can become a bottleneck that threatens the release of the entire batch. The GPU parallel processors work in a similar way: each processor thread is an assembly line, each processing stage is a team, and each fragment is a vehicle. If the thread spends a long time processing a single stage, then time is lost on each fragment. This delay will multiply such that all future fragments coming through the same thread will be delayed. This is a bit of an oversimplification, but it often helps to paint a picture of how poorly optimized Shader code can chew up our fill rate, and how small improvements in Shader optimization provide big benefits in back end performance. Shader programming and optimization have become a very niche area of game development. Their abstract and highly-specialized nature requires a very different kind of thinking to generate Shader code compared to gameplay and engine code. They often feature mathematical tricks and back-door mechanisms for pulling data into the Shader, such as precomputing values in texture files. Because of this, and the importance of optimization, Shaders tend to be very difficult to read and reverse-engineer. Consequently, many developers rely on prewritten Shaders, or visual Shader creation tools from the Asset Store such as Shader Forge or Shader Sandwich. This simplifies the act of initial Shader code generation, but might not result in the most efficient form of Shaders. If we're relying on pre-written Shaders or tools, we might find it worthwhile to perform some optimization passes over them using some tried-and-true techniques. So, let's focus on some easily reachable ways of optimizing our Shaders. Consider using Shaders intended for mobile platforms The built-in mobile Shaders in Unity do not have any specific restrictions that force them to only be used on mobile devices. They are simply optimized for minimum resource usage (and tend to feature some of the other optimizations listed in this section). 
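If you want to see what one of these built-in mobile Shaders looks like (and costs) in your own Scene, a Material's shader can be swapped from script at runtime. A rough sketch, assuming the built-in Mobile/Diffuse shader is a reasonable stand-in for the material being tested:

using UnityEngine;

public class MobileShaderSwapTest : MonoBehaviour
{
  void Start()
  {
    // Swap this object's material over to the built-in mobile diffuse shader
    // so we can compare cost and quality against the original.
    // Note: Shader.Find only locates shaders that are included in the build.
    var mobileDiffuse = Shader.Find("Mobile/Diffuse");
    if (mobileDiffuse != null)
    {
      GetComponent<Renderer>().material.shader = mobileDiffuse;
    }
  }
}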
Desktop applications are perfectly capable of using these Shaders, but they tend to feature a loss of graphical quality. It only becomes a question of whether the loss of graphical quality is acceptable. So, consider doing some testing with the mobile equivalents of common Shaders to see whether they are a good fit for your game. Use small data types GPUs can calculate with smaller data types more quickly than larger types (particularly on mobile platforms!), so the first tweak we can attempt is replacing our float data types (32-bit, floating point) with smaller versions such as half (16-bit, floating point), or even fixed (12-bit, fixed point). The size of the data types listed above will vary depending on what floating point formats the target platform prefers. The sizes listed are the most common. The importance for optimization is in the relative size between formats. Color values are good candidates for precision reduction, as we can often get away with less precise color values without any noticeable loss in coloration. However, the effects of reducing precision can be very unpredictable for graphical calculations. So, changes such as these can require some testing to verify whether the reduced precision is costing too much graphical fidelity. Note that the effects of these tweaks can vary enormously between one GPU architecture and another (for example, AMD versus Nvidia versus Intel), and even GPU brands from the same manufacturer. In some cases, we can make some decent performance gains for a trivial amount of effort. In other cases, we might see no benefit at all. Avoid changing precision while swizzling Swizzling is the Shader programming technique of creating a new vector (an array of values) from an existing vector by listing the components in the order in which we wish to copy them into the new structure. Here are some examples of swizzling: float4 input = float4(1.0, 2.0, 3.0, 4.0); // initial test value float2 val1 = input.yz; // swizzle two components float3 val2 = input.zyx; // swizzle three components in a different order float4 val3 = input.yyy; // swizzle the same component multiple times float sclr = input.w; float3 val4 = sclr.xxx // swizzle a scalar multiple times We can use both the xyzw and rgba representations to refer to the same components, sequentially. It does not matter whether it is a color or vector; they just make the Shader code easier to read. We can also list components in any order we like to fill in the desired data, repeating them if necessary. Converting from one precision type to another in a Shader can be a costly operation, but converting the precision type while simultaneously swizzling can be particularly painful. If we have mathematical operations that rely on being swizzled into different precision types, it would be wiser if we simply absorbed the high-precision cost from the very beginning, or reduced precision across the board to avoid the need for changes in precision. Use GPU-optimized helper functions The Shader compiler often performs a good job of reducing mathematical calculations down to an optimized version for the GPU, but compiled custom code is unlikely to be as effective as both the Cg library's built-in helper functions and the additional helpers provided by the Unity Cg included files. If we are using Shaders that include custom function code, perhaps we can find an equivalent helper function within the Cg or Unity libraries that can do a better job than our custom code can. 
These extra include files can be added to our Shader within the CGPROGRAM block like so:

CGPROGRAM
// other includes
#include "UnityCG.cginc"
// Shader code here
ENDCG

Example Cg library functions to use are abs() for absolute values, lerp() for linear interpolation, mul() for multiplying matrices, and step() for step functionality. Useful UnityCG.cginc functions include WorldSpaceViewDir() for calculating the direction towards the camera, and Luminance() for converting a color to grayscale. Check the following URL for a full list of Cg standard library functions: http://http.developer.nvidia.com/CgTutorial/cg_tutorial_appendix_e.html. Check the Unity documentation for a complete and up-to-date list of possible include files and their accompanying helper functions: http://docs.unity3d.com/Manual/SL-BuiltinIncludes.html.

Disable unnecessary features
Perhaps we can make savings by simply disabling Shader features that aren't vital. Does the Shader really need multiple passes, transparency, Z-writing, alpha-testing, and/or alpha blending? Will tweaking these settings or removing these features give us a good approximation of our desired effect without losing too much graphical fidelity? Making such changes is a good way of making fill rate cost savings.

Remove unnecessary input data
Sometimes the process of writing a Shader involves a lot of back and forth experimentation in editing code and viewing it in the Scene. The typical result of this is that input data that was needed while the Shader was going through early development becomes surplus fluff once the desired effect has been obtained, and it's easy to forget what changes were made when/if the process drags on for a long time. But, these redundant data values can cost the GPU valuable time, as they must be fetched from memory even if they are not explicitly used by the Shader. So, we should double check our Shaders to ensure that all of their input geometry, vertex, and fragment data is actually being used.

Only expose necessary variables
Exposing unnecessary variables from our Shader to the accompanying Material(s) can be costly, as the GPU can't assume that these values are constant. This means the Shader code cannot be compiled into a more optimized form. These values must be pushed from the CPU with every pass, since they can be modified at any time through the Material's methods, such as SetColor(), SetFloat(), and so on. If we find that, towards the end of the project, we always use the same value for these variables, then they can be replaced with a constant in the Shader to remove such excess runtime workload. The only cost is obfuscating what could be critical graphical effect parameters, so this should be done very late in the process.

Reduce mathematical complexity
Complicated mathematics can severely bottleneck the rendering process, so we should do whatever we can to limit the damage. Complex mathematical functions could be replaced with a texture that is fed into the Shader and provides a pre-generated table for runtime lookup. We may not see any improvement with functions such as sin and cos, since they've been heavily optimized to make use of GPU architecture, but complex methods such as pow, exp, log, and other custom mathematical processes can only be optimized so much, and would be good candidates for simplification. This is assuming we only need one or two input values, which are represented through the X and Y coordinates of the texture, and that mathematical accuracy isn't of paramount importance.
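To illustrate the lookup table idea, the following C# sketch bakes a stand-in for an expensive function into a Texture2D, which a Shader could then sample with a single tex2D() call instead of evaluating the math per fragment. The LookupTextureBaker class name, the example formula, and the _LookupTex property name are all assumptions made for this sketch rather than part of any particular Shader:

using UnityEngine;

public class LookupTextureBaker : MonoBehaviour
{
    public int resolution = 256; // one texture axis per input value

    public Texture2D BakeLookupTexture()
    {
        Texture2D lookup = new Texture2D(resolution, resolution, TextureFormat.RGBA32, false);
        lookup.wrapMode = TextureWrapMode.Clamp;

        for (int y = 0; y < resolution; y++)
        {
            for (int x = 0; x < resolution; x++)
            {
                // Normalize pixel coordinates into the 0..1 range the Shader will use as UVs
                float u = x / (float)(resolution - 1);
                float v = y / (float)(resolution - 1);

                // Stand-in for an expensive calculation we no longer want to run per fragment
                float value = Mathf.Pow(u, 2.2f) * Mathf.Exp(-4f * v);

                lookup.SetPixel(x, y, new Color(value, value, value, 1f));
            }
        }

        lookup.Apply();
        return lookup;
    }
}

The baked result could then be handed to a Material with something like GetComponent<Renderer>().material.SetTexture("_LookupTex", bakedTexture); and sampled in the Shader using the two input values as UV coordinates, trading a little texture memory for fill rate.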
This will cost us additional graphics memory to store the texture at runtime (more on this later), but if the Shader is already receiving a texture (which they are in most cases) and the alpha channel is not being used, then we could sneak the data in through the texture's alpha channel, costing us literally no performance, and the rest of the Shader code and graphics system would be none-the-wiser. This will involve the customization of art assets to include such data in any unused color channel(s), requiring coordination between programmers and artists, but is a very good way of saving Shader processing costs with no runtime sacrifices. In fact, Material properties and textures are both excellent entry points for pushing work from the Shader (the GPU) onto the CPU. If a complex calculation does not need to vary on a per pixel basis, then we could expose the value as a property in the Material, and modify it as needed (accepting the overhead cost of doing so from the previous section Only expose necessary variables). Alternatively, if the result varies per pixel, and does not need to change often, then we could generate a texture file from script code, containing the results of the calculations in the RGBA values, and pulling the texture into the Shader. Lots of opportunities arise when we ignore the conventional application of such systems, and remember to think of them as just raw data being transferred around. Reduce texture lookups While we're on the subject of texture lookups, they are not trivial tasks for the GPU to process and they have their own overhead costs. They are the most common cause of memory access problems within the GPU, especially if a Shader is performing samples across multiple textures, or even multiple samples across a single texture, as they will likely inflict cache misses in memory. Such situations should be simplified as much as possible to avoid severe GPU memory bottlenecking. Even worse, sampling a texture in a random order would likely result in some very costly cache misses for the GPU to suffer through, so if this is being done, then the texture should be reordered so that it can be sampled in a more sequential order. Avoid conditional statements In modern day CPU architecture, conditional statements undergo a lot of clever predictive techniques to make use of instruction-level parallelism. This is a feature where the CPU attempts to predict which direction a conditional statement will go in before it has actually been resolved, and speculatively begins processing the most likely result of the conditional using any free components that aren't being used to resolve the conditional (fetching some data from memory, copying some floats into unused registers, and so on). If it turns out that the decision is wrong, then the current result is discarded and the proper path is taken instead. So long as the cost of speculative processing and discarding false results is less than the time spent waiting to decide the correct path, and it is right more often than it is wrong, then this is a net gain for the CPU's speed. However, this feature is not possible on GPU architecture because of its parallel nature. The GPU's cores are typically managed by some higher-level construct that instructs all cores under its command to perform the same machine-code-level instruction simultaneously. So, if the Fragment Shader requires a float to be multiplied by 2, then the process will begin by having all cores copy data into the appropriate registers in one coordinated step. 
Only when all cores have finished copying to the registers will the cores be instructed to begin the second step: multiplying all registers by 2. Thus, when this system stumbles into a conditional statement, it cannot resolve the two branches independently. It must determine how many of its child cores will go down each path of the conditional, grab the list of required machine code instructions for one path, resolve them for all cores taking that path, and repeat for each path until all possible paths have been processed. So, for an if-else statement (two possibilities), it will tell one group of cores to process the "true" path, and then ask the remaining cores to process the "false" path. Unless every core takes the same path, it must process both paths every time. So, we should avoid branching and conditional statements in our Shader code. Of course, this depends on how essential the conditional is to achieving the graphical effect we desire. But, if the conditional is not dependent on per pixel behavior, then we would often be better off absorbing the cost of unnecessary mathematics than inflicting a branching cost on the GPU. For example, we might be checking whether a value is non-zero before using it in a calculation, or comparing against some global flag in the Material before taking one action or another. Both of these cases would be good candidates for optimization by removing the conditional check.

Reduce data dependencies
The compiler will try its best to optimize our Shader code into the more GPU-friendly low-level language so that it is not waiting on data to be fetched when it could be processing some other task. For example, the following poorly-optimized code could be written in our Shader:

float sum = input.color1.r;
sum = sum + input.color2.g;
sum = sum + input.color3.b;
sum = sum + input.color4.a;
float result = CalculateSomething(sum);

If we were able to force the Shader compiler to compile this code into machine code instructions exactly as written, then this code would have a data dependency such that each calculation could not begin until the last had finished, due to the dependency on the sum variable. However, such situations are often detected by the Shader compiler and optimized into a version that uses instruction-level parallelism (the code shown next is the high-level equivalent of the resulting machine code):

float sum1, sum2, sum3, sum4;
sum1 = input.color1.r;
sum2 = input.color2.g;
sum3 = input.color3.b;
sum4 = input.color4.a;
float sum = sum1 + sum2 + sum3 + sum4;
float result = CalculateSomething(sum);

In this case, the compiler recognizes that it can fetch the four values from memory in parallel and complete the summation once all four have been fetched, thanks to instruction-level parallelism. This can save a lot of time relative to performing the four fetches one after another. However, long chains of data dependency can absolutely murder Shader performance. If we create a strong data dependency in our Shader's source code, then the compiler has been given no freedom to make such optimizations. For example, the following data dependency would be painful for performance, as one step cannot be completed without waiting on another to fetch data and perform the appropriate calculation:

float4 val1 = tex2D(_tex1, input.texcoord.xy);
float4 val2 = tex2D(_tex2, val1.yz);
float4 val3 = tex2D(_tex3, val2.zw);

Strong data dependencies such as these should be avoided whenever possible.
Surface Shaders If we're using Unity's Surface Shaders, which are a way for Unity developers to get to grips with Shader programming in a more simplified fashion, then the Unity Engine takes care of converting our Surface Shader code for us, abstracting away some of the optimization opportunities we have just covered. However, it does provide some miscellaneous values that can be used as replacements, which reduce accuracy but simplify the mathematics in the resulting code. Surface Shaders are designed to handle the general case fairly efficiently, but optimization is best achieved with a personal touch. The approxview attribute will approximate the view direction, saving costly operations. halfasview will reduce the precision of the view vector, but beware of its effect on mathematical operations involving multiple precision types. noforwardadd will limit the Shader to only considering a single directional light, reducing Draw Calls since the Shader will render in only a single pass, but reducing lighting complexity. Finally, noambient will disable ambient lighting in the Shader, removing some extra mathematical operations that we may not need. Use Shader-based LOD We can force Unity to render distant objects using simpler Shaders, which can be an effective way of saving fill rate, particularly if we're deploying our game onto multiple platforms or supporting a wide range of hardware capability. The LOD keyword can be used in the Shader to set the onscreen size factor that the Shader supports. If the current LOD level does not match this value, it will drop to the next fallback Shader and so on until it finds the Shader that supports the given size factor. We can also change a given Shader object's LOD value at runtime using the maximumLOD property. This feature is similar to the mesh-based LOD covered earlier, and uses the same LOD values for determining object form factor, so it should be configured as such. Memory bandwidth Another major component of back end processing and a potential source of bottlenecks is memory bandwidth. Memory bandwidth is consumed whenever a texture must be pulled from a section of the GPU's main video memory (also known as VRAM). The GPU contains multiple cores that each have access to the same area of VRAM, but they also each contain a much smaller, local Texture Cache that stores the current texture(s) the GPU has been most recently working with. This is similar in design to the multitude of CPU cache levels that allow memory transfer up and down the chain, as a workaround for the fact that faster memory will, invariably, be more expensive to produce, and hence smaller in capacity compared to slower memory. Whenever a Fragment Shader requests a sample from a texture that is already within the core's local Texture Cache, then it is lightning fast and barely perceivable. But, if a texture sample request is made, that does not yet exist within the Texture Cache, then it must be pulled in from VRAM before it can be sampled. This fetch request risks cache misses within VRAM as it tries to find the relevant texture. The transfer itself consumes a certain amount of memory bandwidth, specifically an amount equal to the total size of the texture file stored within VRAM (which may not be the exact size of the original file, nor the size in RAM, due to GPU-level compression). It's for this reason that, if we're bottlenecked on memory bandwidth, then performing a brute force test by reducing texture quality would suddenly result in a performance improvement. 
We've shrunk the size of our textures, easing the burden on the GPU's memory bandwidth and allowing it to fetch the necessary textures much more quickly. Globally reducing texture quality can be achieved by going to Edit | Project Settings | Quality | Texture Quality and setting the value to Half Res, Quarter Res, or Eighth Res. In the event that memory bandwidth is bottlenecked, the GPU will keep fetching the necessary texture files, but the entire process will be throttled as the Texture Cache waits for the data to appear before processing the fragment. The GPU won't be able to push data back to the Frame Buffer in time to be rendered onto the screen, blocking the whole process and culminating in a poor frame rate. Ultimately, proper usage of memory bandwidth is a budgeting concern. For example, with a memory bandwidth of 96 GB/sec per core and a target frame rate of 60 frames per second, the GPU can afford to pull 96/60 = 1.6 GB worth of texture data every frame before being bottlenecked on memory bandwidth. Memory bandwidth is often listed on a per core basis, but some GPU manufacturers may try to mislead you by multiplying memory bandwidth by the number of cores in order to list a bigger, but less practical, number. Because of this, research may be necessary to confirm that the memory bandwidth limit we have for the target GPU hardware is given on a per core basis. Note that this value is not the maximum limit on the texture data that our game can contain in the project, nor in CPU RAM, nor even in VRAM. It is a metric that limits how much texture swapping can occur during one frame. The same texture could be pulled back and forth multiple times in a single frame, depending on how many Shaders need to use it, the order in which objects are rendered, and how often texture sampling must occur. Rendering just a few objects could therefore consume whole gigabytes of memory bandwidth if they all require the same high quality, massive textures, require multiple secondary texture maps (normal maps, emission maps, and so on), and are not batched together, because there simply isn't enough Texture Cache space available to keep a single texture file around long enough to exploit it during the next rendering pass. There are several approaches we can take to solve bottlenecks in memory bandwidth.

Use less texture data
This approach is simple, straightforward, and always a good idea to consider. Reducing texture quality, either through resolution or bit rate, is not ideal for graphical quality, but we can sometimes get away with using 16-bit textures without any noticeable degradation. Mip Maps are another excellent way of reducing the amount of texture data being pushed back and forth between VRAM and the Texture Cache. Note that the Scene View has a Mipmaps Shading Mode, which will highlight textures in our scene in blue or red depending on whether the current texture scale is appropriate for the current Scene View's camera position and orientation. This will help identify which textures are good candidates for further optimization. Mip Maps should almost always be used in 3D Scenes, unless the camera moves very little.

Test different GPU Texture Compression formats
Texture Compression techniques help reduce our application's footprint (executable file size) and runtime CPU memory usage, that is, the storage area where all texture resource data is kept until it is needed by the GPU. However, once the data reaches the GPU, it uses a different form of compression to keep texture data small.
The common formats are DXT, PVRTC, ETC, and ASTC. To make matters more confusing, each platform and GPU hardware supports different compression formats, and if the device does not support the given compression format, then it will be handled at the software level. In other words, the CPU will need to stop and recompress the texture to the desired format the GPU wants, as opposed to the GPU taking care of it with a specialized hardware chip. The compression options are only available if a texture resource has its Texture Type field set to Advanced. Using any of the other texture type settings will simplify the choices, and Unity will make a best guess when deciding which format to use for the target platform, which may not be ideal for a given piece of hardware and thus will consume more memory bandwidth than necessary. The best approach to determining the correct format is to simply test a bunch of different devices and Texture Compression techniques and find one that fits. For example, common wisdom says that ETC is the best choice for Android since more devices support it, but some developers have found their game works better with the DXT and PVRTC formats on certain devices. Beware that, if we're at the point where individually tweaking Texture Compression techniques is necessary, then hopefully we have exhausted all other options for reducing memory bandwidth. By going down this road, we could be committing to supporting many different devices each in their own specific way. Many of us would prefer to keep things simple with a general solution instead of personal customization and time-consuming handiwork to work around problems like this. Minimize texture sampling Can we modify our Shaders to remove some texture sampling overhead? Did we add some extra texture lookup files to give ourselves some fill rate savings on mathematical functions? If so, we might want to consider lowering the resolution of such textures or reverting the changes and solving our fill rate problems in other ways. Essentially, the less texture sampling we do, the less often we need to use memory bandwidth and the closer we get to resolving the bottleneck. Organize assets to reduce texture swaps This approach basically comes back to Batching and Atlasing again. Are there opportunities to batch some of our biggest texture files together? If so, then we could save the GPU from having to pull in the same texture files over and over again during the same frame. As a last resort, we could look for ways to remove some textures from the entire project and reuse similar files. For instance, if we have fill rate budget to spare, then we may be able to use some Fragment Shaders to make a handful of textures files appear in our game with different color variations. VRAM limits One last consideration related to textures is how much VRAM we have available. Most texture transfer from CPU to GPU occurs during initialization, but can also occur when a non-existent texture is first required by the current view. This process is asynchronous and will result in a blank texture being used until the full texture is ready for rendering. As such, we should avoid too much texture variation across our Scenes. Texture preloading Even though it doesn't strictly relate to graphics performance, it is worth mentioning that the blank texture that is used during asynchronous texture loading can be jarring when it comes to game quality. 
We would like a way to control and force the texture to be loaded from disk to main memory, and then to VRAM, before it is actually needed. A common workaround is to create a hidden GameObject that features the texture and place it somewhere in the Scene on the route that the player will take towards the area where it is actually needed. As soon as the textured object becomes a candidate for the rendering system (even if it's technically hidden), it will begin the process of copying the data towards VRAM. This is a little clunky, but it is easy to implement and works sufficiently well in most cases. We can also control such behavior via Script code by changing a hidden Material's main texture:

GetComponent<Renderer>().material.mainTexture = textureToPreload;

Texture thrashing
In the rare event that too much texture data is loaded into VRAM, and the required texture is not present, the GPU will need to request it from main memory and overwrite the existing texture data to make room. This is likely to worsen over time as the memory becomes fragmented, and it introduces a risk that the texture just flushed from VRAM needs to be pulled in again within the same frame. This will result in a serious case of memory "thrashing", and should be avoided at all costs. This is less of a concern on modern consoles such as the PS4, Xbox One, and WiiU, since they share a common memory space for both the CPU and GPU. This design is a hardware-level optimization given the fact that the device is always running a single application, and almost always rendering 3D graphics. But, all other platforms must share time and space with multiple applications and be capable of running without a GPU. They therefore feature separate CPU and GPU memory, and we must ensure that the total texture usage at any given moment remains below the available VRAM of the target hardware. Note that this "thrashing" is not precisely the same as hard disk thrashing, where memory is copied back and forth between main memory and virtual memory (the swap file), but it is analogous. In either case, data is being unnecessarily copied back and forth between two regions of memory because too much data is being requested in too short a time period for the smaller of the two memory regions to hold it all. Thrashing such as this can be a common cause of dreadful graphics performance when games are ported from modern consoles to the desktop, and should be treated with care. Avoiding this behavior may require customizing texture quality and file sizes on a per-platform and per-device basis. Be warned that some players are likely to notice these inconsistencies if we're dealing with hardware from the same console or desktop GPU generation. As many of us will know, even small differences in hardware can lead to a lot of apples-versus-oranges comparisons, but hardcore gamers will expect a similar level of quality across the board.

Lighting and Shadowing
Lighting and Shadowing can affect all parts of the graphics pipeline, and so they will be treated separately. This is perhaps one of the most important parts of game art and design to get right. Good Lighting and Shadowing can turn a mundane scene into something spectacular, as there is something magical about professional coloring that makes it visually appealing. Even the low-poly art style (think Monument Valley) relies heavily on a good lighting and shadowing profile in order to allow the player to distinguish one object from another.
But, this isn't an art book, so we will focus on the performance characteristics of various Lighting and Shadowing features. Unity offers two styles of dynamic light rendering, as well as baked lighting effects through lightmaps. It also provides multiple ways of generating shadows with varying levels of complexity and runtime processing cost. Between the two, there are a lot of options to explore, and a lot of things that can trip us up if we're not careful. The Unity documentation covers all of these features in an excellent amount of detail (start with this page and work through them: http://docs.unity3d.com/Manual/Lighting.html), so we'll examine these features from a performance standpoint. Let's tackle the two main light rendering modes first. This setting can be found under Edit | Project Settings | Player | Other Settings | Rendering, and can be configured on a per-platform basis. Forward Rendering Forward Rendering is the classical form of rendering lights in our scene. Each object is likely to be rendered in multiple passes through the same Shader. How many passes are required will be based on the number, distance, and brightness of light sources. Unity will try to prioritize which directional light is affecting the object the most and render the object in a "base pass" as a starting point. It will then take up to four of the most powerful point lights nearby and re-render the same object multiple times through the same Fragment Shader. The next four point lights will then be processed on a per-vertex basis. All remaining lights are treated as a giant blob by means of a technique called spherical harmonics. Some of this behavior can be simplified by setting a light's Render Mode to values such as Not Important, and changing the value of Edit | Project Settings | Quality | Pixel Light Count. This value limits how many lights will be treated on a per pixel basis, but is overridden by any lights with a Render Mode set to Important. It is therefore up to us to use this combination of settings responsibly. As you can imagine, the design of Forward Rendering can utterly explode our Draw Call count very quickly in scenes with a lot of point lights present, due to the number of render states being configured and Shader passes being reprocessed. CPU-bound applications should avoid this rendering mode if possible. More information on Forward Rendering can be found in the Unity documentation: http://docs.unity3d.com/Manual/RenderTech-ForwardRendering.html. Deferred Shading Deferred Shading or Deferred Rendering as it is sometimes known, is only available on GPUs running at least Shader Model 3.0. In other words, any desktop graphics card made after around 2004. The technique has been around for a while, but it has not resulted in a complete replacement of the Forward Rendering method due to the caveats involved and limited support on mobile devices. Anti-aliasing, transparency, and animated characters receiving shadows are all features that cannot be managed through Deferred Shading alone and we must use the Forward Rendering technique as a fallback. Deferred Shading is so named because actual shading does not occur until much later in the process; that is, it is deferred until later. From a performance perspective, the results are quite impressive as it can generate very good per pixel lighting with surprisingly little Draw Call effort. The advantage is that a huge amount of lighting can be accomplished using only a single pass through the lighting Shader. 
The main disadvantages include the additional costs if we wish to pile on advanced lighting features such as Shadowing and any steps that must pass through Forward Rendering in order to complete, such as transparency. The Unity documentation contains an excellent source of information on the Deferred Shading technique, its advantages, and its pitfalls: http://docs.unity3d.com/Manual/RenderTech-DeferredShading.html Vertex Lit Shading (legacy) Technically, there are more than two lighting methods. Unity allows us to use a couple of legacy lighting systems, only one of which may see actual use in the field: Vertex Lit Shading. This is a massive simplification of lighting, as lighting is only considered per vertex, and not per pixel. In other words, entire faces are colored based on the incoming light color, and not individual pixels. It is not expected that many, or really any, 3D games will make use of this legacy technique, as a lack of shadows and proper lighting make visualizations of depth very difficult. It is mostly relegated to 2D games that don't intend to make use of shadows, normal maps, and various other lighting features, but it is there if we need it. Real-time Shadows Soft Shadows are expensive, Hard Shadows are cheap, and No Shadows are free. Shadow Resolution, Shadow Projection, Shadow Distance, and Shadow Cascades are all settings we can find under Edit | Project Settings | Quality | Shadows that we can use to modify the behavior and complexity of our shadowing passes. That summarizes almost everything we need to know about Unity's real-time shadowing techniques from a high-level performance standpoint. We will cover shadows more in the following section on optimizing our lighting effects. Lighting optimization With a cursory glance at all of the relevant lighting techniques, let's run through some techniques we can use to improve lighting costs. Use the appropriate Shading Mode It is worth testing both of the main rendering modes to see which one best suits our game. Deferred Shading is often used as a backup in the event that Forward Rendering is becoming a burden on performance, but it really depends on where else we're finding bottlenecks as it is sometimes difficult to tell the difference between them. Use Culling Masks A Light Component's Culling Mask property is a layer-based mask that can be used to limit which objects will be affected by the given Light. This is an effective way of reducing lighting overhead, assuming that the layer interactions also make sense with how we are using layers for physics optimization. Objects can only be a part of a single layer, and reducing physics overhead probably trumps lighting overhead in most cases; thus, if there is a conflict, then this may not be the ideal approach. Note that there is limited support for Culling Masks when using Deferred Shading. Because of the way it treats lighting in a very global fashion, only four layers can be disabled from the mask, limiting our ability to optimize its behavior through this method. Use Baked Lightmaps Baking Lighting and Shadowing into a Scene is significantly less processor-intensive than generating them at runtime. The downside is the added application footprint, memory consumption, and potential for memory bandwidth abuse. Ultimately, unless a game's lighting effects are being handled exclusively through Legacy Vertex Lighting or a single Directional Light, then it should probably include Lightmapping to make some huge budget savings on lighting calculations. 
Relying entirely on real-time lighting and shadows is a recipe for disaster unless the game is trying to win an award for the smallest application file size of all time. Optimize Shadows Shadowing passes mostly consume our Draw Calls and fill rate, but the amount of vertex position data we feed into the process and our selection for the Shadow Projection setting will affect the front end's ability to generate the required shadow casters and shadow receivers. We should already be attempting to reduce vertex counts to solve front end bottlenecking in the first place, and making this change will be an added multiplier towards that effort. Draw Calls are consumed during shadowing by rendering visible objects into a separate buffer (known as the shadow map) as either a shadow caster, a shadow receiver, or both. Each object that is rendered into this map will consume another Draw Call, which makes shadows a huge performance cost multiplier, so it is often a setting that games will expose to users via quality settings, allowing users with weaker hardware to reduce the effect or even disable it entirely. Shadow Distance is a global multiplier for runtime shadow rendering. The fewer shadows we need to draw, the happier the entire rendering process will be. There is little point in rendering shadows at a great distance from the camera, so this setting should be configured specific to our game and how much shadowing we expect to witness during gameplay. It is also a common setting that is exposed to the user to reduce the burden of rendering shadows. Higher values of Shadow Resolution and Shadow Cascades will increase our memory bandwidth and fill rate consumption. Both of these settings can help curb the effects of artefacts in shadow rendering, but at the cost of a much larger shadow map size that must be moved around and of the canvas size to draw to. The Unity documentation contains an excellent summary on the topic of the aliasing effect of shadow maps and how the Shadow Cascades feature helps to solve the problem: http://docs.unity3d.com/Manual/DirLightShadows.html. It's worth noting that Soft Shadows do not consume any more memory or CPU overhead relative to Hard Shadows, as the only difference is a more complex Shader. This means that applications with enough fill rate to spare can enjoy the improved graphical fidelity of Soft Shadows. Optimizing graphics for mobile Unity's ability to deploy to mobile devices has contributed greatly to its popularity among hobbyist, small, and mid-size development teams. As such, it would be prudent to cover some approaches that are more beneficial for mobile platforms than for desktop and other devices. Note that any, and all, of the following approaches may become obsolete soon, if they aren't already. The mobile device market is moving blazingly fast, and the following techniques as they apply to mobile devices merely reflect conventional wisdom from the last half decade. We should occasionally test the assumptions behind these approaches from time-to-time to see whether the limitations of mobile devices still fit the mobile marketplace. Minimize Draw Calls Mobile applications are more often bottlenecked on Draw Calls than on fill rate. Not that fill rate concerns should be ignored (nothing should, ever!), but this makes it almost necessary for any mobile application of reasonable quality to implement Mesh Combining, Batching, and Atlasing techniques from the very beginning. 
Deferred Rendering is also the preferred technique as it fits well with other mobile-specific concerns, such as avoiding transparency and having too many animated characters. Minimize the Material count This concern goes hand in hand with the concepts of Batching and Atlasing. The fewer Materials we use, the fewer Draw Calls will be necessary. This strategy will also help with concerns relating to VRAM and memory bandwidth, which tend to be very limited on mobile devices. Minimize texture size and Material count Most mobile devices feature a very small Texture Cache relative to desktop GPUs. For instance, the iPhone 3G can only support a total texture size of 1024x1024 due to running OpenGLES1.1 with simple vertex rendering techniques. Meanwhile the iPhone 3GS, iPhone 4, and iPad generation run OpenGLES 2.0, which only supports textures up to 2048x2048. Later generations can support textures up to 4096x4096. Double check the device hardware we are targeting to be sure it supports the texture file sizes we wish to use (there are too many Android devices to list here).
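If we want to double-check these limits at runtime rather than relying on spec sheets, Unity's SystemInfo class can report what the current device supports. This is a minimal sketch; the DeviceCapabilityLogger class name and the largestTextureWeShip threshold are arbitrary choices for illustration, while the SystemInfo properties themselves are standard Unity APIs:

using UnityEngine;

public class DeviceCapabilityLogger : MonoBehaviour
{
    public int largestTextureWeShip = 2048; // the biggest texture dimension our art assets use (assumption)

    void Start()
    {
        Debug.Log("GPU: " + SystemInfo.graphicsDeviceName);
        Debug.Log("Max texture size: " + SystemInfo.maxTextureSize);
        Debug.Log("VRAM (MB): " + SystemInfo.graphicsMemorySize);

        if (SystemInfo.maxTextureSize < largestTextureWeShip)
        {
            // The CPU would have to downscale our largest textures on this device
            Debug.LogWarning("Device cannot natively handle our largest textures.");
        }
    }
}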
However, later-generation devices are rarely the most common devices in the mobile marketplace. If we wish our game to reach a wide audience (increasing its chances of success), then we must be willing to support weaker hardware. Note that textures that are too large for the GPU will be downscaled by the CPU during initialization, wasting valuable loading time and leaving us with lower graphical fidelity than intended. This makes texture reuse of paramount importance for mobile devices, due to the limited VRAM and Texture Cache sizes available.

Make textures square and power-of-2
The GPU will find it difficult, or simply be unable, to compress the texture if it is not in a square format, so make sure you stick to the common development convention and keep things square and sized to a power of 2.

Use the lowest possible precision formats in Shaders
Mobile GPUs are particularly sensitive to precision formats in their Shaders, so the smallest formats should be used. On a related note, format conversion should be avoided for the same reason.

Avoid Alpha Testing
Mobile GPUs haven't quite reached the same levels of chip optimization as desktop GPUs, and Alpha Testing remains a particularly costly task on mobile devices. In most cases, it should simply be avoided in favor of Alpha Blending.

Summary
If you've made it this far without skipping ahead, then congratulations are in order. That was a lot of information to absorb for just one component of the Unity Engine, but then it is clearly the most complicated of them all, requiring a matching depth of explanation. Hopefully, you've learned a lot of approaches to help you improve your rendering performance and enough about the rendering pipeline to know how to use them responsibly!

To learn more about Unity 5, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:
Unity 5 Game Optimization (https://www.packtpub.com/game-development/unity-5-game-optimization)
Unity 5.x By Example (https://www.packtpub.com/game-development/unity-5x-example)
Unity 5.x Cookbook (https://www.packtpub.com/game-development/unity-5x-cookbook)
Unity 5 for Android Essentials (https://www.packtpub.com/game-development/unity-5-android-essentials)

Resources for Article:
Further resources on this subject:
The Vertex Functions [article]
UI elements and their implementation [article]
Routing for Yii Demystified [article]
VR Build and Run
Packt
22 Feb 2016
25 min read
Yeah well, this is cool and everything, but where's my VR? I WANT MY VR! Hold on kid, we're getting there. In this article, we are going to set up a project that can be built and run with a virtual reality head-mounted display (HMD) and then talk more in depth about how the VR hardware technology really works. We will be discussing the following topics: The spectrum of the VR device integration software Installing and building a project for your VR device The details and defining terms for how the VR technology really works (For more resources related to this topic, see here.) VR device integration software Before jumping in, let's understand the possible ways to integrate our Unity project with virtual reality devices. In general, your Unity project must include a camera object that can render stereographic views, one for each eye on the VR headset. Software for the integration of applications with the VR hardware spans a spectrum, from built-in support and device-specific interfaces to the device-independent and platform-independent ones. Unity's built-in VR support Since Unity 5.1, support for the VR headsets is built right into Unity. At the time of writing this article, there is direct support for Oculus Rift and Samsung Gear VR (which is driven by the Oculus software). Support for other devices has been announced, including Sony PlayStation Morpheus. You can use a standard camera component, like the one attached to Main Camera and the standard character asset prefabs. When your project is built with Virtual Reality Supported enabled in Player Settings, it renders stereographic camera views and runs on an HMD. The device-specific SDK If a device is not directly supported in Unity, the device manufacturer will probably publish the Unity plugin package. An advantage of using the device-specific interface is that it can directly take advantage of the features of the underlying hardware. For example, Steam Valve and Google have device-specific SDK and Unity packages for the Vive and Cardboard respectively. If you're using one of these devices, you'll probably want to use such SDK and Unity packages. (At the time of writing this article, these devices are not a part of Unity's built-in VR support.) Even Oculus, supported directly in Unity 5.1, provides SDK utilities to augment that interface (see, https://developer.oculus.com/documentation/game-engines/latest/concepts/unity-intro/). Device-specific software locks each build into the specific device. If that's a problem, you'll either need to do some clever coding, or take one of the following approaches instead. The OSVR project In January 2015, Razer Inc. led a group of industry leaders to announce the Open Source Virtual Reality (OSVR) platform (for more information on this, visit http://www.osvr.com/) with plans to develop open source hardware and software, including an SDK that works with multiple devices from multiple vendors. The open source middleware project provides device-independent SDKs (and Unity packages) so that you can write your code to a single interface without having to know which devices your users are using. With OSVR, you can build your Unity game for a specific operating system (such as Windows, Mac, and Linux) and then let the user configure the app (after they download it) for whatever hardware they're going to use. At the time of writing this article, the project is still in its early stage, is rapidly evolving, and is not ready for this article. However, I encourage you to follow its development. 
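Before moving on to the more device-independent options, it is worth noting that the built-in VR support can also be queried from script at runtime. Here is a minimal sketch, assuming Unity 5.1 or later where the UnityEngine.VR namespace is available; the VRStatusCheck class name is illustrative:

using UnityEngine;
using UnityEngine.VR;

public class VRStatusCheck : MonoBehaviour
{
    void Start()
    {
        // VRSettings.enabled toggles stereo rendering on or off at runtime
        Debug.Log("VR enabled: " + VRSettings.enabled);

        // VRDevice.isPresent reports whether a supported headset is actually connected
        Debug.Log("HMD present: " + VRDevice.isPresent);
    }
}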
WebVR WebVR (for more information, visit http://webvr.info/) is a JavaScript API that is being built directly into major web browsers. It's like WebGL (2D and 3D graphics API for the web) with VR rendering and hardware support. Now that Unity 5 has introduced the WebGL builds, I expect WebVR to surely follow, if not in Unity then from a third-party developer. As we know, browsers run on just about any platform. So, if you target your game to WebVR, you don't even need to know the user's operating system, let alone which VR hardware they're using! That's the idea anyway. New technologies, such as the upcoming WebAssembly, which is a new binary format for the Web, will help to squeeze the best performance out of your hardware and make web-based VR viable. For WebVR libraries, check out the following: WebVR boilerplate: https://github.com/borismus/webvr-boilerplate GLAM: http://tparisi.github.io/glam/ glTF: http://gltf.gl/ MozVR (the Mozilla Firefox Nightly builds with VR): http://mozvr.com/downloads/ WebAssembly: https://github.com/WebAssembly/design/blob/master/FAQ.md 3D worlds There are a number of third-party 3D world platforms that provide multiuser social experiences in shared virtual spaces. You can chat with other players, move between rooms through portals, and even build complex interactions and games without having to be an expert. For examples of 3D virtual worlds, check out the following: VRChat: http://vrchat.net/ JanusVR: http://janusvr.com/ AltspaceVR: http://altvr.com/ High Fidelity: https://highfidelity.com/ For example, VRChat lets you develop 3D spaces and avatars in Unity, export them using their SDK, and load them into VRChat for you and others to share over the Internet in a real-time social VR experience. Creating the MeMyselfEye prefab To begin, we will create an object that will be a proxy for the user in the virtual environment. Let's create the object using the following steps: Open Unity and the project from the last article. Then, open the Diorama scene by navigating to File | Open Scene (or double-click on the scene object in Project panel, under Assets). From the main menu bar, navigate to GameObject | Create Empty. Rename the object MeMyselfEye (hey, this is VR!). Set its position up close into the scene, at Position (0, 1.4, -1.5). In the Hierarchy panel, drag the Main Camera object into MeMyselfEye so that it's a child object. With the Main Camera object selected, reset its transform values (in the Transform panel, in the upper right section, click on the gear icon and select Reset). The Game view should show that we're inside the scene. If you recall the Ethan experiment that we did earlier, I picked a Y-position of 1.4 so that we'll be at about the eye level with Ethan. Now, let's save this as a reusable prefabricated object, or prefab, in the Project panel, under Assets: In Project panel, under Assets, select the top-level Assets folder, right-click and navigate to Create | Folder. Rename the folder Prefabs. Drag the MeMyselfEye prefab into the Project panel, under Assets/Prefabs folder to create a prefab. Now, let's configure the project for your specific VR headset. Build for the Oculus Rift If you have a Rift, you've probably already downloaded Oculus Runtime, demo apps, and tons of awesome games. To develop for the Rift, you'll want to be sure that the Rift runs fine on the same machine on which you're using Unity. Unity has built-in support for the Oculus Rift. 
You just need to configure your Build Settings..., as follows: From main menu bar, navigate to File | Build Settings.... If the current scene is not listed under Scenes In Build, click on Add Current. Choose PC, Mac, & Linux Standalone from the Platform list on the left and click on Switch Platform. Choose your Target Platform OS from the Select list on the right (for example, Windows). Then, click on Player Settings... and go to the Inspector panel. Under Other Settings, check off the Virtual Reality Supported checkbox and click on Apply if the Changing editor vr device dialog box pops up. To test it out, make sure that the Rift is properly connected and turned on. Click on the game Play button at the top of the application in the center. Put on the headset, and IT SHOULD BE AWESOME! Within the Rift, you can look all around—left, right, up, down, and behind you. You can lean over and lean in. Using the keyboard, you can make Ethan walk, run, and jump just like we did earlier. Now, you can build your game as a separate executable app using the following steps. Most likely, you've done this before, at least for non-VR apps. It's pretty much the same: From the main menu bar, navigate to File | Build Settings.... Click on Build and set its name. I like to keep my builds in a subdirectory named Builds; create one if you want to. Click on Save. An executable will be created in your Builds folder. If you're on Windows, there may also be a rift_Data folder with built data. Run Diorama as you would do for any executable application—double-click on it. Choose the Windowed checkbox option so that when you're ready to quit, close the window with the standard Close icon in the upper right of your screen. Build for Google Cardboard Read this section if you are targeting Google Cardboard on Android and/or iOS. A good starting point is the Google Cardboard for Unity, Get Started guide (for more information, visit https://developers.google.com/cardboard/unity/get-started). The Android setup If you've never built for Android, you'll first need to download and install the Android SDK. Take a look at Unity manual for Android SDK Setup (http://docs.unity3d.com/Manual/android-sdksetup.html). You'll need to install the Android Developer Studio (or at least, the smaller SDK Tools) and other related tools, such as Java (JVM) and the USB drivers. It might be a good idea to first build, install, and run another Unity project without the Cardboard SDK to ensure that you have all the pieces in place. (A scene with just a cube would be fine.) Make sure that you know how to install and run it on your Android phone. The iOS setup A good starting point is Unity manual, Getting Started with iOS Development guide (http://docs.unity3d.com/Manual/iphone-GettingStarted.html). You can only perform iOS development from a Mac. You must have an Apple Developer Account approved (and paid for the standard annual membership fee) and set up. Also, you'll need to download and install a copy of the Xcode development tools (via the Apple Store). It might be a good idea to first build, install, and run another Unity project without the Cardboard SDK to ensure that you have all the pieces in place. (A scene with just a cube would be fine). Make sure that you know how to install and run it on your iPhone. Installing the Cardboard Unity package To set up our project to run on Google Cardboard, download the SDK from https://developers.google.com/cardboard/unity/download. 
Within your Unity project, import the CardboardSDKForUnity.unitypackage assets package, as follows: From the Assets main menu bar, navigate to Import Package | Custom Package.... Find and select the CardboardSDKForUnity.unitypackage file. Ensure that all the assets are checked, and click on Import. Explore the imported assets. In the Project panel, the Assets/Cardboard folder includes a bunch of useful stuff, including the CardboardMain prefab (which, in turn, contains a copy of CardboardHead, which contains the camera). There is also a set of useful scripts in the Cardboard/Scripts/ folder. Go check them out. Adding the camera Now, we'll put the Cardboard camera into MeMyselfEye, as follows: In the Project panel, find CardboardMain in the Assets/Cardboard/Prefabs folder. Drag it onto the MeMyselfEye object in the Hierarchy panel so that it's a child object. With CardboardMain selected in Hierarchy, look at the Inspector panel and ensure the Tap is Trigger checkbox is checked. Select the Main Camera in the Hierarchy panel (inside MeMyselfEye) and disable it by unchecking the Enable checkbox on the upper left of its Inspector panel. Finally, apply theses changes back onto the prefab, as follows: In the Hierarchy panel, select the MeMyselfEye object. Then, in its Inspector panel, next to Prefab, click on the Apply button. Save the scene. We now have replaced the default Main Camera with the VR one. The build settings If you know how to build and install from Unity to your mobile phone, doing it for Cardboard is pretty much the same: From the main menu bar, navigate to File | Build Settings.... If the current scene is not listed under Scenes to Build, click on Add Current. Choose Android or iOS from the Platform list on the left and click on Switch Platform. Then, click on Player Settings… in the Inspector panel. For Android, ensure that Other Settings | Virtual Reality Supported is unchecked, as that would be for GearVR (via the Oculus drivers), not Cardboard Android! Navigate to Other Settings | PlayerSettings.bundleIdentifier and enter a valid string, such as com.YourName.VRisAwesome. Under Resolution and Presentation | Default Orientation set Landscape Left. Play Mode To test it out, you do not need your phone connected. Just press the game's Play button at the top of the application in the center to enter Play Mode. You will see the split screen stereographic views in the Game view panel. While in Play Mode, you can simulate the head movement if you were viewing it with the Cardboard headset. Use Alt + mouse-move to pan and tilt forward or backwards. Use Ctrl + mouse-move to tilt your head from side to side. You can also simulate magnetic clicks (we'll talk more about user input in a later article) with mouse clicks. Note that since this emulates running on a phone, without a keyboard, the keyboard keys that we used to move Ethan do not work now. Building and running in Android To build your game as a separate executable app, perform the following steps: From the main menu bar, navigate to File | Build & Run. Set the name of the build. I like to keep my builds in a subdirectory named Build; you can create one if you want. Click on Save. This will generate an Android executable .apk file, and then install the app onto your phone. The following screenshot shows the Diorama scene running on an Android phone with Cardboard (and Unity development monitor in the background). 
Building and running in iOS
To build your game and run it on the iPhone, perform the following steps: Plug your phone into the computer via a USB cable/port. From the main menu bar, navigate to File | Build & Run. This allows you to create an Xcode project, launch Xcode, build your app inside Xcode, and then install the app onto your phone.

Antique Stereograph (source https://www.pinterest.com/pin/493073859173951630/)

The device-independent clicker
At the time of writing this article, VR input has not yet been settled across all platforms. Input devices may or may not fit under Unity's own Input Manager and APIs. In fact, input for VR is a huge topic and deserves its own book. So here, we will keep it simple. As a tribute to the late Steve Jobs and a throwback to the origins of Apple Macintosh, I am going to limit these projects to mostly one-click inputs! Let's write a script for it, which checks for any click on the keyboard, mouse, or other managed device: In the Project panel, select the top-level Assets folder. Right-click and navigate to Create | Folder. Name it Scripts. With the Scripts folder selected, right-click and navigate to Create | C# Script. Name it Clicker. Double-click on the Clicker.cs file in the Projects panel to open it in the MonoDevelop editor. Now, edit the Script file, as follows:

using UnityEngine;
using System.Collections;

public class Clicker
{
    public bool clicked()
    {
        return Input.anyKeyDown;
    }
}

Save the file. If you are developing for Google Cardboard, you can add a check for the Cardboard's integrated trigger when building for mobile devices, as follows:

using UnityEngine;
using System.Collections;

public class Clicker
{
    public bool clicked()
    {
#if (UNITY_ANDROID || UNITY_IPHONE)
        return Cardboard.SDK.CardboardTriggered;
#else
        return Input.anyKeyDown;
#endif
    }
}

Any scripts that we write that require user clicks will use this Clicker file. The idea is that we've isolated the definition of a user click to a single script, and if we change or refine it, we only need to change this file.

How virtual reality really works
So, with your headset on, you experienced the diorama! It appeared 3D, it felt 3D, and maybe you even had a sense of actually being there inside the synthetic scene. I suspect that this isn't the first time you've experienced VR, but now that we've done it together, let's take a few minutes to talk about how it works. The strikingly obvious thing is, VR looks and feels really cool! But why? Immersion and presence are the two words used to describe the quality of a VR experience. The Holy Grail is to increase both to the point where it seems so real, you forget you're in a virtual world. Immersion is the result of emulating the sensory inputs that your body receives (visual, auditory, motor, and so on). This can be explained technically. Presence is the visceral feeling that you get being transported there, a deep emotional or intuitive feeling. You can say that immersion is the science of VR, and presence is the art. And that, my friend, is cool. A number of different technologies and techniques come together to make the VR experience work, which can be separated into two basic areas: 3D viewing and head-pose tracking. In other words, displays and sensors, like those built into today's mobile devices, are a big reason why VR is possible and affordable today. Suppose the VR system knows exactly where your head is positioned at any given moment in time.
Suppose that it can immediately render and display the 3D scene for this precise viewpoint stereoscopically. Then, wherever and whenever you moved, you'd see the virtual scene exactly as you should. You would have a nearly perfect visual VR experience. That's basically it. Ta-dah! Well, not so fast. Literally. Stereoscopic 3D viewing Split-screen stereography was discovered not long after the invention of photography, like the popular stereograph viewer from 1876 shown in the following picture (B.W. Kilborn & Co, Littleton, New Hampshire, see http://en.wikipedia.org/wiki/Benjamin_W._Kilburn). A stereo photograph has separate views for the left and right eyes, which are slightly offset to create parallax. This fools the brain into thinking that it's a truly three-dimensional view. The device contains separate lenses for each eye, which let you easily focus on the photo close up. Similarly, rendering these side-by-side stereo views is the first job of the VR-enabled camera in Unity. Let's say that you're wearing a VR headset and you're holding your head very still so that the image looks frozen. It still appears better than a simple stereograph. Why? The old-fashioned stereograph has twin relatively small images rectangularly bound. When your eye is focused on the center of the view, the 3D effect is convincing, but you will see the boundaries of the view. Move your eyeballs around (even with the head still), and any remaining sense of immersion is totally lost. You're just an observer on the outside peering into a diorama. Now, consider what an Oculus Rift screen looks like without the headset (see the following screenshot): The first thing that you will notice is that each eye has a barrel shaped view. Why is that? The headset lens is a very wide-angle lens. So, when you look through it you have a nice wide field of view. In fact, it is so wide (and tall), it distorts the image (pincushion effect). The graphics software (SDK) does an inverse of that distortion (barrel distortion) so that it looks correct to us through the lenses. This is referred to as an ocular distortion correction. The result is an apparent field of view (FOV), that is wide enough to include a lot more of your peripheral vision. For example, the Oculus Rift DK2 has a FOV of about 100 degrees. Also of course, the view angle from each eye is slightly offset, comparable to the distance between your eyes, or the Inter Pupillary Distance (IPD). IPD is used to calculate the parallax and can vary from one person to the next. (Oculus Configuration Utility comes with a utility to measure and configure your IPD. Alternatively, you can ask your eye doctor for an accurate measurement.) It might be less obvious, but if you look closer at the VR screen, you see color separations, like you'd get from a color printer whose print head is not aligned properly. This is intentional. Light passing through a lens is refracted at different angles based on the wavelength of the light. Again, the rendering software does an inverse of the color separation so that it looks correct to us. This is referred to as a chromatic aberration correction. It helps make the image look really crisp. Resolution of the screen is also important to get a convincing view. If it's too low-res, you'll see the pixels, or what some refer to as a screen door effect. The pixel width and height of the display is an oft-quoted specification when comparing the HMD's, but the pixels per inch (ppi) value may be more important. 
Other innovations in display technology such as pixel smearing and foveated rendering (showing a higher-resolution detail exactly where the eyeball is looking) will also help reduce the screen door effect. When experiencing a 3D scene in VR, you must also consider the frames per second (FPS). If FPS is too slow, the animation will look choppy. Things that affect FPS include the graphics processor (GPU) performance and complexity of the Unity scene (number of polygons and lighting calculations), among other factors. This is compounded in VR because you need to draw the scene twice, once for each eye. Technology innovations, such as GPUs optimized for VR, frame interpolation and other techniques, will improve the frame rates. For us developers, performance-tuning techniques in Unity, such as those used by mobile game developers, can be applied in VR. These techniques and optics help make the 3D scene appear realistic. Sound is also very important—more important than many people realize. VR should be experienced while wearing stereo headphones. In fact, when the audio is done well but the graphics are pretty crappy, you can still have a great experience. We see this a lot in TV and cinema. The same holds true in VR. Binaural audio gives each ear its own stereo view of a sound source in such a way that your brain imagines its location in 3D space. No special listening devices are needed. Regular headphones will work (speakers will not). For example, put on your headphones and visit the Virtual Barber Shop at https://www.youtube.com/watch?v=IUDTlvagjJA. True 3D audio, such as VisiSonics (licensed by Oculus), provides an even more realistic spatial audio rendering, where sounds bounce off nearby walls and can be occluded by obstacles in the scene to enhance the first-person experience and realism. Lastly, the VR headset should fit your head and face comfortably so that it's easy to forget that you're wearing it and should block out light from the real environment around you. Head tracking So, we have a nice 3D picture that is viewable in a comfortable VR headset with a wide field of view. If this was it and you moved your head, it'd feel like you have a diorama box stuck to your face. Move your head and the box moves along with it, and this is much like holding the antique stereograph device or the childhood View Master. Fortunately, VR is so much better. The VR headset has a motion sensor (IMU) inside that detects spatial acceleration and rotation rate on all three axes, providing what's called the six degrees of freedom. This is the same technology that is commonly found in mobile phones and some console game controllers. Mounted on your headset, when you move your head, the current viewpoint is calculated and used when the next frame's image is drawn. This is referred to as motion detection. Current motion sensors may be good if you wish to play mobile games on a phone, but for VR, it's not accurate enough. These inaccuracies (rounding errors) accumulate over time, as the sensor is sampled thousands of times per second, one may eventually lose track of where you are in the real world. This drift is a major shortfall of phone-based VR headsets such as Google Cardboard. It can sense your head motion, but it loses track of your head position. High-end HMDs account for drift with a separate positional tracking mechanism. 
The Oculus Rift does this with an inside-out positional tracking, where an array of (invisible) infrared LEDs on the HMD are read by an external optical sensor (infrared camera) to determine your position. You need to remain within the view of the camera for the head tracking to work. Alternatively, the Steam VR Vive Lighthouse technology does an outside-in positional tracking, where two or more dumb laser emitters are placed in the room (much like the lasers in a barcode reader at the grocery checkout), and an optical sensor on the headset reads the rays to determine your position. Either way, the primary purpose is to accurately find the position of your head (and other similarly equipped devices, such as handheld controllers). Together, the position, tilt, and the forward direction of your head—or the head pose—is used by the graphics software to redraw the 3D scene from this vantage point. Graphics engines such as Unity are really good at this. Now, let's say that the screen is getting updated at 90 FPS, and you're moving your head. The software determines the head pose, renders the 3D view, and draws it on the HMD screen. However, you're still moving your head. So, by the time it's displayed, the image is a little out of date with respect to your then current position. This is called latency, and it can make you feel nauseous. Motion sickness caused by latency in VR occurs when you're moving your head and your brain expects the world around you to change exactly in sync. Any perceptible delay can make you uncomfortable, to say the least. Latency can be measured as the time from reading a motion sensor to rendering the corresponding image, or the sensor-to-pixel delay. According to Oculus' John Carmack: "A total latency of 50 milliseconds will feel responsive, but still noticeable laggy. 20 milliseconds or less will provide the minimum level of latency deemed acceptable." There are a number of very clever strategies that can be used to implement latency compensation. The details are outside the scope of this article and inevitably will change as device manufacturers improve on the technology. One of these strategies is what Oculus calls the timewarp, which tries to guess where your head will be by the time the rendering is done, and uses that future head pose instead of the actual, detected one. All of this is handled in the SDK, so as a Unity developer, you do not have to deal with it directly. Meanwhile, as VR developers, we need to be aware of latency as well as the other causes of motion sickness. Latency can be reduced by faster rendering of each frame (keeping the recommended FPS). This can be achieved by discouraging the moving of your head too quickly and using other techniques to make the user feel grounded and comfortable. Another thing that the Rift does to improve head tracking and realism is that it uses a skeletal representation of the neck so that all the rotations that it receives are mapped more accurately to the head rotation. An example of this is looking down at your lap makes a small forward translation since it knows it's impossible to rotate one's head downwards on the spot. Other than head tracking, stereography and 3D audio, virtual reality experiences can be enhanced with body tracking, hand tracking (and gesture recognition), locomotion tracking (for example, VR treadmills), and controllers with haptic feedback. The goal of all of this is to increase your sense of immersion and presence in the virtual world. 
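To make the latency budget concrete: at 90 FPS each frame has roughly 11.1 milliseconds of render time, and the 20 milliseconds from the Carmack quote above is the outer bound. The following Unity script is a minimal, illustrative frame-budget monitor, not part of any SDK; Time.deltaTime measures only the frame time, so treat it as a rough proxy for the rendering portion of the sensor-to-pixel delay, and the class and field names are my own.

using UnityEngine;

// Illustrative sketch: warns when a frame exceeds a configurable time budget.
// Frame time is only one contributor to the sensor-to-pixel latency described above.
public class FrameBudgetMonitor : MonoBehaviour
{
    public float budgetMilliseconds = 20f; // threshold taken from the Carmack quote

    void Update()
    {
        float frameMs = Time.deltaTime * 1000f;
        if (frameMs > budgetMilliseconds)
        {
            Debug.LogWarning("Frame took " + frameMs.ToString("F1") +
                             " ms, over the " + budgetMilliseconds + " ms budget");
        }
    }
}

Attach it to any object in the scene during development and watch the console while you move your head around the diorama.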
Summary In this article, we discussed the different levels of device integration software and then installed the software that is appropriate for your target VR device. We also discussed what happens inside the hardware and software SDK that makes virtual reality work and how it matters to us VR developers. For more information on VR development and Unity refer to the following Packt books: Unity UI Cookbook, by Francesco Sapio: https://www.packtpub.com/game-development/unity-ui-cookbook Building a Game with Unity and Blender, by Lee Zhi Eng: https://www.packtpub.com/game-development/building-game-unity-and-blender Augmented Reality with Kinect, by Rui Wang: https://www.packtpub.com/application-development/augmented-reality-kinect Resources for Article: Further resources on this subject: Virtually Everything for Everyone [article] Unity Networking – The Pong Game [article] Getting Started with Mudbox 2013 [article]
Ragdoll Physics

Packt
19 Feb 2016
5 min read
In this article we will learn how to apply Ragdoll physics to a character. (For more resources related to this topic, see here.)

Applying Ragdoll physics to a character

Action games often make use of Ragdoll physics to simulate the character's body reaction to being unconsciously under the effect of a hit or explosion. In this recipe, we will learn how to set up and activate Ragdoll physics on our character whenever she steps on a landmine object. We will also use the opportunity to reset the character's position and animations a number of seconds after that event has occurred.

Getting ready

For this recipe, we have prepared a Unity Package named Ragdoll, containing a basic scene that features an animated character and two prefabs, already placed into the scene, named Landmine and Spawnpoint. The package can be found inside the 1362_07_08 folder.

How to do it...

To apply ragdoll physics to your character, follow these steps:

Create a new project and import the Ragdoll Unity Package. Then, from the Project view, open the mecanimPlayground level. You will see the animated book character and two discs: Landmine and Spawnpoint.

First, let's set up our Ragdoll. Access the GameObject | 3D Object | Ragdoll... menu and the Ragdoll wizard will pop up. Assign the transforms as follows:

    Root: mixamorig:Hips
    Left Hips: mixamorig:LeftUpLeg
    Left Knee: mixamorig:LeftLeg
    Left Foot: mixamorig:LeftFoot
    Right Hips: mixamorig:RightUpLeg
    Right Knee: mixamorig:RightLeg
    Right Foot: mixamorig:RightFoot
    Left Arm: mixamorig:LeftArm
    Left Elbow: mixamorig:LeftForeArm
    Right Arm: mixamorig:RightArm
    Right Elbow: mixamorig:RightForeArm
    Middle Spine: mixamorig:Spine1
    Head: mixamorig:Head
    Total Mass: 20
    Strength: 50

From the Project view, create a new C# Script named RagdollCharacter.cs. Open the script and add the following code:

using UnityEngine;
using System.Collections;

public class RagdollCharacter : MonoBehaviour {

  void Start () {
    DeactivateRagdoll();
  }

  public void ActivateRagdoll(){
    gameObject.GetComponent<CharacterController>().enabled = false;
    gameObject.GetComponent<BasicController>().enabled = false;
    gameObject.GetComponent<Animator>().enabled = false;

    foreach (Rigidbody bone in GetComponentsInChildren<Rigidbody>()) {
      bone.isKinematic = false;
      bone.detectCollisions = true;
    }
    foreach (Collider col in GetComponentsInChildren<Collider>()) {
      col.enabled = true;
    }
    StartCoroutine (Restore ());
  }

  public void DeactivateRagdoll(){
    gameObject.GetComponent<BasicController>().enabled = true;
    gameObject.GetComponent<Animator>().enabled = true;
    transform.position = GameObject.Find("Spawnpoint").transform.position;
    transform.rotation = GameObject.Find("Spawnpoint").transform.rotation;

    foreach(Rigidbody bone in GetComponentsInChildren<Rigidbody>()){
      bone.isKinematic = true;
      bone.detectCollisions = false;
    }
    foreach (CharacterJoint joint in GetComponentsInChildren<CharacterJoint>()) {
      joint.enableProjection = true;
    }
    foreach(Collider col in GetComponentsInChildren<Collider>()){
      col.enabled = false;
    }
    gameObject.GetComponent<CharacterController>().enabled = true;
  }

  IEnumerator Restore(){
    yield return new WaitForSeconds(5);
    DeactivateRagdoll();
  }
}

Save and close the script. Attach the RagdollCharacter.cs script to the book Game Object. Then, select the book character and, from the top of the Inspector view, change its tag to Player.

From the Project view, create a new C# Script named Landmine.cs.
Open the script and add the following code:

using UnityEngine;
using System.Collections;

public class Landmine : MonoBehaviour {

  public float range = 2f;
  public float force = 2f;
  public float up = 4f;
  private bool active = true;

  void OnTriggerEnter (Collider collision){
    if (collision.gameObject.tag == "Player" && active) {
      active = false;
      StartCoroutine(Reactivate());
      collision.gameObject.GetComponent<RagdollCharacter>().ActivateRagdoll();
      Vector3 explosionPos = transform.position;
      Collider[] colliders = Physics.OverlapSphere(explosionPos, range);
      foreach (Collider hit in colliders) {
        if (hit.GetComponent<Rigidbody>())
          hit.GetComponent<Rigidbody>().AddExplosionForce(force, explosionPos, range, up);
      }
    }
  }

  IEnumerator Reactivate(){
    yield return new WaitForSeconds(2);
    active = true;
  }
}

Save and close the script. Attach the script to the Landmine Game Object.

Play the scene. Using the WASD keyboard control scheme, direct the character to the Landmine Game Object. Colliding with it will activate the character's Ragdoll physics and apply an explosion force to it. As a result, the character will be thrown a considerable distance and will no longer be in control of its body movements, akin to a ragdoll.

How it works...

Unity's Ragdoll Wizard assigns, to selected transforms, the components Collider, Rigidbody, and Character Joint. In conjunction, those components make ragdoll physics possible. However, those components must be disabled whenever we want our character to be animated and controlled by the player. In our case, we switch those components on and off using the RagdollCharacter script and its two functions: ActivateRagdoll() and DeactivateRagdoll(), the latter including instructions to re-spawn our character in the appropriate place.

For testing purposes, we have also created the Landmine script, which calls the RagdollCharacter script's function named ActivateRagdoll(). It also applies an explosion force to our ragdoll character, throwing it outside the explosion site.

There's more...

Instead of resetting the character's transform settings, you could have destroyed its gameObject and instantiated a new one over the respawn point using Tags; a minimal sketch of that approach follows at the end of this article. For more information on this subject, check Unity's documentation at: http://docs.unity3d.com/ScriptReference/GameObject.FindGameObjectsWithTag.html.

Summary

In this article we learned how to apply Ragdoll physics to a character. We also learned how to set up Ragdoll for the character of the game. To learn more, please refer to the following books:

Learning Unity 2D Game Development by Example: https://www.packtpub.com/game-development/learning-unity-2d-game-development-example
Unity Game Development Blueprints: https://www.packtpub.com/game-development/unity-game-development-blueprints
Getting Started with Unity: https://www.packtpub.com/game-development/getting-started-unity

Resources for Article:

Further resources on this subject:
Using a collider-based system [article]
Looking Back, Looking Forward [article]
The Vertex Functions [article]
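As a follow-up to the There's more... note above, here is a minimal, hypothetical sketch of the respawn-by-instantiation alternative. It assumes you have saved the character as a prefab and assigned it to the characterPrefab field, and it reuses the Spawnpoint object and Player tag from this recipe; it is not part of the original package.

using UnityEngine;

// Hypothetical alternative to DeactivateRagdoll(): destroy every tagged Player
// object and spawn a fresh copy of a prefab at the Spawnpoint, instead of
// resetting transforms and re-enabling components.
public class RespawnByInstantiation : MonoBehaviour
{
    public GameObject characterPrefab; // assumed prefab of the book character

    public void RespawnAll()
    {
        Transform spawn = GameObject.Find("Spawnpoint").transform;
        foreach (GameObject player in GameObject.FindGameObjectsWithTag("Player"))
        {
            Destroy(player);
        }
        Instantiate(characterPrefab, spawn.position, spawn.rotation);
    }
}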
Lighting basics

Packt
19 Feb 2016
5 min read
In this article by Satheesh PV, author of the book Unreal Engine 4 Game Development Essentials, we will learn that lighting is an important factor in your game, which can be easily overlooked, and wrong usage can severely impact on performance. But with proper settings, combined with post process, you can create very beautiful and realistic scenes. We will see how to place lights and how to adjust some important values. (For more resources related to this topic, see here.) Placing lights In Unreal Engine 4, lights can be placed in two different ways. Through the modes tab or by right-clicking in the level: Modes tab: In the Modes tab, go to place tab (Shift + 1) and go to the Lights section. From there you can drag and drop various lights. Right-clicking: Right-click in viewport and in Place Actor you can select your light. Once a light is added to the level, you can use transform tool (W to move, E to rotate) to change the position and rotation of your selected light. Since Directional Light casts light from an infinite source, updating their location has no effect. Various lights Unreal Engine 4 features four different types of light Actors. They are: Directional Light: Simulates light from a source that is infinitely far away. Since all shadows casted by this light will be parallel, this is the ideal choice for simulating sunlight. Spot Light: Emits light from a single point in a cone shape. There are two cones (inner cone and outer cone). Within inner cone, light achieves full brightness and between inner and outer cone a falloff takes place, which softens the illumination. That means after inner cone, light slowly loses its illumination as it goes to outer cone. Point Light: Emits light from a single point to all directions, much like a real-world light bulb. Sky Light: Does not really emit light, but instead captures the distant parts of your scene (for example, Actors that are placed beyond Sky Distance Threshold) and applies them as light. That means you can have lights coming from your atmosphere, distant mountains, and so on. Note that Sky Light will only update when you rebuild your lighting or by pressing Recapture Scene (in the Details panel with Sky Light selected). Common light settings Now that we know how to place lights into a scene, let's take a look at some of the common settings of a light. Select your light in a scene and in the Details panel you will see these settings: Intensity: Determines the intensity (energy) of the light. This is in lumens units so, for example, 1700 lm (lumen units) corresponds to a 100 W bulb. Light Color: Determines the color of light. Attenuation Radius: Sets the limit of light. It also calculates the falloff of light. This setting is only available in Point Lights and Spot Lights. Attenuation Radius from left to right: 100, 200, 500. Source Radius: Defines the size of specular highlights on surfaces. This effect can be subdued by adjusting the Min Roughness setting. This also affects building light using Lightmass. Larger Source Radius will cast softer shadows. Since this is processed by Lightmass, it will only work on Lights with mobility set to Static Source Radius 0. Notice the sharp edges of the shadow. Source Radius 0. Notice the sharp edges of the shadow. Source Length: Same as Source Radius. Light mobility Light mobility is an important setting to keep in mind when placing lights in your level because this changes the way light works and impacts on performance. There are three settings that you can choose. 
They are as follows: Static: A completely static light that has no impact on performance. This type of light will not cast shadows or specular on dynamic objects (for example, characters, movable objects, and so on). Example usage: Use this light where the player will never reach, such as distant cityscapes, ceilings, and so on. You can literally have millions of lights with static mobility. Stationary: This is a mix of static and dynamic light and can change its color and brightness while running the game, but cannot move or rotate. Stationary lights can interact with dynamic objects and is used where the player can go. Movable: This is a completely dynamic light and all properties can be changed at runtime. Movable lights are heavier on performance so use them sparingly. Only four or fewer stationary lights are allowed to overlap each other. If you have more than four stationary lights overlapping each other the light icon will change to red X, which indicates that the light is using dynamic shadows at a severe performance cost! In the following screenshot, you can easily see the overlapping light. Under View Mode, you can change to Stationary Light Overlap to see which light is causing an issue. Summary We will look into different light mobilities and learn more about Lightmass Global Illumination, which is the static Global Illumination solver created by Epic games. We will also learn how to prepare assets to be used with it. Resources for Article:   Further resources on this subject: Understanding Material Design [article] Build a First Person Shooter [article] Machine Learning With R [article]
Build a First Person Shooter

Packt
18 Feb 2016
29 min read
We will be creating a first person shooter; however, instead of shooting a gun to damage our enemies, we will be shooting a picture in a survival horror environment; similar to the Fatal Frame series of games and the recent indie title DreadOut. To get started on our project, we're first going to look at creating our level or, in this case, our environments starting with the exterior. In the game industry, there are two main roles in level creation: the environment artist and level designer. An environment artist is a person who builds the assets that go into the environment. He/she uses tools such as 3ds Max or Maya to create the model, and then uses other tools such as Photoshop to create textures and normal maps. The level designer is responsible for taking the assets that the environment artist has created and assembling them into an environment for players to enjoy. He/she designs the gameplay elements, creates the scripted events, and tests the gameplay. Typically, a level designer will create environments through a combination of scripting and using a tool that may or may not be in development as the game is being made. In our case, that tool is Unity. One important thing to note is that most companies have their own definition for different roles. In some companies, a level designer may need to create assets and an environment artist may need to create a level layout. There are also some places that hire someone to just do lighting, or just to place meshes (called a mesher) because they're so good at it. (For more resources related to this topic, see here.) Project overview In this article, we take on the role of an environment artist whose been tasked with creating an outdoor environment. We will use assets that I've placed in the example code as well as assets already provided to us by Unity for mesh placement. In addition to this, you will also learn some beginner-level design. Your objectives This project will be split into a number of tasks. It will be a simple step-by-step process from the beginning to end. Here is an outline of our tasks: Creating the exterior environment – Terrain Beautifying the environment – adding water, trees, and grass Building the atmosphere Designing the level layout and background The project setup At this point, I assume you have a fresh installation of Unity and have started it. You can perform the following steps: With Unity started, navigate to File | New Project. Select a project location of your choice somewhere on your hard drive and ensure that you have Setup defaults for set to 3D. Once completed, click on Create. Here, if you see the Welcome to Unity pop up, feel free to close it as we won't be using it. Level design 101 – planning Now just because we are going to be diving straight into Unity, I feel it's important to talk a little more about how level design is done in the gaming industry. While you may think a level designer will just jump into the editor and start playing, the truth is you normally would need to do a ton of planning ahead of time before you even open up your tool. Generally, a level design begins with an idea. This can come from anything; maybe you saw a really cool building, or a photo on the Internet gave you a certain feeling; maybe you want to teach the player a new mechanic. Turning this idea into a level is what a level designer does. Taking all of these ideas, the level designer will create a level design document, which will outline exactly what you're trying to achieve with the entire level from start to end. 
A level design document will describe everything inside the level; listing all of the possible encounters, puzzles, so on and so forth, which the player will need to complete as well as any side quests that the player will be able to achieve. To prepare for this, you should include as many references as you can with maps, images, and movies similar to what you're trying to achieve. If you're working with a team, making this document available on a website or wiki will be a great asset so that you know exactly what is being done in the level, what the team can use in their levels, and how difficult their encounters can be. Generally, you'll also want a top-down layout of your level done either on a computer or with a graph paper, with a line showing a player's general route for the level with encounters and missions planned out. Of course, you don't want to be too tied down to your design document and it will change as you playtest and work on the level, but the documentation process will help solidify your ideas and give you a firm basis to work from. For those of you interested in seeing some level design documents, feel free to check out Adam Reynolds (Level Designer on Homefront and Call of Duty: World at War) at http://wiki.modsrepository.com/index.php?title=Level_Design:_Level_Design_Document_Example. If you want to learn more about level design, I'm a big fan of Beginning Game Level Design, John Feil (previously my teacher) and Marc Scattergood, Cengage Learning PTR. For more of an introduction to all of game design from scratch, check out Level Up!: The Guide to Great Video Game Design, Scott Rogers, Wiley and The Art of Game Design, Jesse Schell, CRC Press. For some online resources, Scott has a neat GDC talk called Everything I Learned About Level Design I Learned from Disneyland, which can be found at http://mrbossdesign.blogspot.com/2009/03/everything-i-learned-about-game-design.html, and World of Level Design (http://worldofleveldesign.com/) is a good source for learning about level design, though it does not talk about Unity specifically. Exterior environment – terrain When creating exterior environments, we cannot use straight floors for the most part, unless you're creating a highly urbanized area. Our game takes place in a haunted house in the middle of nowhere, so we're going to create a natural landscape. In Unity, the best tool to use to create a natural landscape is the Terrain tool. Unity's terrain system lets us add landscapes, complete with bushes, trees, and fading materials to our game. To show how easy it is to use the terrain tool, let's get started. The first thing that we're going to want to do is actually create the terrain we'll be placing for the world. Let's first create a terrain by navigating to GameObject | Create Other | Terrain: If you are using Unity 4.6 or later, navigate to GameObject | Create  General | Terrain to create the Terrain. At this point, you should see the terrain. Right now, it's just a flat plane, but we'll be adding a lot to it to make it shine. If you look to the right with the Terrain object selected, you'll see the Terrain Editing tools, which can do the following (from left to right): Raise/Lower Height: This option will allow us to raise or lower the height of our terrain up to a certain radius to create hills, rivers, and more. Paint Height: If you already know the exact height that a part of your terrain needs to be, this option will allow you to paint a spot on that location. 
Smooth Height: This option averages out the area that it is in, and then attempts to smooth out areas and reduce the appearance of abrupt changes. Paint Texture: This option allows us to add textures to the surface of our terrain. One of the nice features of this is the ability to lay multiple textures on top of each other. Place Trees: This option allows us to paint objects in our environment, which will appear on the surface. Unity attempts to optimize these objects by billboarding distant trees so that we can have dense forests without a horrible frame rate. Paint Details: In addition to trees, we can also have small things such as rocks or grass covering the surface of our environment. We can use 2D images to represent individual clumps using bits of randomization to make it appear more natural. Terrain Settings: These are settings that will affect the overall properties of a particular terrain; options such as the size of the terrain and wind can be found here. By default, the entire terrain is set to be at the bottom, but we want to have some ground above and below us; so first, with the terrain object selected, click on the second button to the left of the terrain component (the Paint Height mode). From here, set the Height value under Settings to 100 and then click on the Flatten button. At this point, you should notice the plane moving up, so now everything is above by default. Next, we are going to add some interesting shapes to our world with some hills by painting on the surface. With the Terrain object selected, click on the first button to the left of our Terrain component (the Raise/Lower Terrain mode). Once this is completed, you should see a number of different brushes and shapes that you can select from. Our use of terrain is to create hills in the background of our scene so that it does not seem like the world is completely flat. Under the Settings area, change the Brush Size and Opacity values of your brush to 100 and left-click around the edges of the world to create some hills. You can increase the height of the current hills if you click on top of the previous hill. This is shown in the following screenshot: When creating hills, it's a good idea to look at multiple angles while you're building them, so you can make sure that none are too high or too short. Generally, you want to have taller hills as you go further back, otherwise you cannot see the smaller ones since they would be blocked. In the Scene view, to move your camera around, you can use the toolbar in the top right corner or hold down the right mouse button and drag it in the direction you want the camera to move around in, pressing the W, A, S, and D keys to pan. In addition, you can hold down the middle mouse button and drag it to move the camera around. The mouse wheel can be scrolled to zoom in and out from where the camera is. Even though you should plan out the level ahead of time on something like a piece of graph paper to plan out encounters, you will want to avoid making the level entirely from the preceding section, as the player will not actually see the game with a bird's eye view in the game at all (most likely). Referencing the map from the same perspective of your character will help ensure that the map looks great. To see many different angles at one time, you can use a layout with multiple views of the scene, such as the 4 Split. Once we have our land done, we now want to create some holes in the ground, which we will fill in with water later. 
This will provide a natural barrier to our world that players will know they cannot pass, so we will create a moat by first changing the Brush Size value to 50 and then holding down the Shift key, and left-clicking around the middle of our texture. In this case, it's okay to use the Top view; remember this will eventually be water to fill in lakes, rivers, and so on, as shown in the following screenshot: At this point, we have done what is referred to in the industry as "greyboxing", making the level in the engine in the simplest way possible but without artwork (also known as "whiteboxing" or "orangeboxing" depending on the company you're working for). At this point in a traditional studio, you'd spend time playtesting the level and iterating on it before an artist or you takes the time to make it look great. However, for our purposes, we want to create a finished project as soon as possible. When doing your own games, be sure to play your level and have others play your level before you polish it. For more information on greyboxing, check out http://www.worldofleveldesign.com/categories/level_design_tutorials/art_of_blocking_in_your_map.php. For an example with images of a greybox to the final level, PC Gamer has a nice article available at http://www.pcgamer.com/2014/03/18/building-crown-part-two-layout-design-textures-and-the-hammer-editor/. This is interesting enough, but being in an all-white world would be quite boring. Thankfully, it's very easy to add textures to everything. However, first we need to have some textures to paint onto the world and for this instance, we will make use of some of the free assets that Unity provides us with. So, with that in mind, navigate to Window | Asset Store. The Asset Store option is home to a number of free and commercial assets that can be used with Unity to help you create your own projects created by both Unity and the community. While we will not be using any unofficial assets, the Asset Store option may help you in the future to save your time in programming or art asset creation. An the top right corner, you'll see a search bar; type terrain assets and press Enter. Once there, the first asset you'll see is Terrain Assets, which is released by Unity Technologies for free. Left-click on it and then once at the menu, click on the Download button: Once it finishes downloading, you should see the Importing Package dialog box pop up. If it doesn't pop up, click on the Import button where the Download button used to be: Generally, you'll only want to select the assets that you want to use and uncheck the others. However, since you're exploring the tools, we'll just click on the Import button to place them all. Close the Asset Store screen; if it's still open, go back into our game view. You should notice the new Terrain Assets folder placed in our Assets folder. Double-click on it and then enter the Textures folder: These will be the textures we will be placing in our environment: Select the Terrain object and then click on the fourth button from the left to select the Paint Texture button. Here, you'll notice that it looks quite similar to the previous sections we've seen. However, there is a Textures section as well, but as of now, there is the information No terrain textures defined in this section. So let's fix that. Click on the Edit Textures button and then select Add Texture. You'll see an Add Terrain Texture dialog pop up. 
Under the Texture variable, place the Grass (Hill) texture and then click on the Add button: At this point, you should see the entire world change to green if you're far away. If you zoom in, you'll see that the entire terrain uses the Grass (Hill) texture now: Now, we don't want the entire world to have grass. Next, we will add cliffs around the edges where the water is. To do this, add an additional texture by navigating to Edit Textures... | Add Texture. Select Cliff (Layered Rock) as the texture and then select Add. Now if you select the terrain, you should see two textures. With the Cliff (Layered Rock) texture selected, paint the edges of the water by clicking and holding the mouse, and modifying the Brush Size value as needed: We now want to create a path for our player to follow, so we're going to create yet another texture this time using the GoodDirt material. Since this is a path the player may take, I'm going to change the Brush Size value to 8 and the Opacity value to 30, and use the second brush from the left, which is slightly less faded. Once finished, I'm going to paint in some trails that the player can follow. One thing that you will want to try to do is make sure that the player shouldn't go too far before having to backtrack and reward the player for exploration. The following screenshot shows the path: However, you'll notice that there are two problems with it currently. Firstly, it is too big to fit in with the world, and you can tell that it repeats. To reduce the appearance of texture duplication, we can introduce new materials with a very soft opacity, which we place in patches in areas where there is just plain ground. For example, let's create a new texture with the Grass (Meadow) texture. Change the Brush Size value to 16 and the Opacity value to something really low, such as 6, and then start painting the areas that look too static. Feel free to select the first brush again to have a smoother touch up. Now, if we zoom into the world as if we were a character there, I can tell that the first grass texture is way too big for the environment but we can actually change that very easily. Double-click on the texture to change the Size value to (8,8). This will make the texture smaller before it duplicates. It's a good idea to have different textures with different sizes so that the seams of each texture aren't visible to others. The following screenshot shows the size options: Do the same changes as in the preceding step for our Dirt texture as well, changing the Size option to (8,8): With this, we already have a level that looks pretty nice! However, that being said, it's just some hills. To really have a quality-looking title, we are going to need to do some additional work to beautify the environment. Beautifying the environment – adding water, trees, and grass We now have a base for our environment with the terrain, but we're still missing a lot of polish that can make the area stand out and look like a quality environment. Let's add some of those details now: First, let's add water. This time we will use another asset from Unity, but we will not have to go to the Asset Store. Navigate to Assets | Import Package | Water (Basic) and import all of the files included in the package. We will be creating a level for the night time, so navigate to Standard Assets | Water Basic and drag-and-drop the Nighttime Simple Water prefab onto the scene. 
Once there, set the Position values to (1000,50, 1000) and the Scale values to (1000,1,1000): At this point, you want to repaint your cliff materials to reflect being next to the water better. Next, let's add some trees to make this forest level come to life. Navigate to Terrain Assets | Trees Ambient-Occlusion and drag-and-drop a tree into your world (I'm using ScotsPineTree). By default, these trees do not contain collision information, so our player could just walk through it. This is actually great for areas that the player will not reach as we can add more trees without having to do meaningless calculations, but we need to stop the player from walking into these. To do that, we're going to need to add a collider. To do so, navigate to Component | Physics | Capsule Collider and then change the Radius value to 2. You have to use a Capsule Collider in order to have the collision carried over to the terrain. After this, move our newly created tree into the Assets folder under the Project tab and change its name to CollidingTree. Then, delete the object from the Hierarchy view. With this finished, go back to our Terrain object and then click on the Place Trees mode button. Just like working with painting textures, there are no trees here by default, so navigate to Edit Trees… | Add Tree, add our CollidingTree object created earlier in this step, and then select Add. Next, under the Settings section, change the Tree Density value to 15 and then with our new tree selected, paint the areas on the main island that do not have paths on them. Once you've finished with placing those trees, up the Tree Density value to 50 and then paint the areas that are far away from paths to make it less likely that players go that way. You should also enable Create Tree Collider in the terrain's Terrain Collider component: In our last step to create an environment, let's add some details. The mode next to the Plant Trees mode is Paint Details. Next, click on the Edit Details… button and select Add Grass Texture. Select the Grass texture for the Detail Texture option and then click on Add. In the terrain's Settings mode (the one on the far right), change the Detail Distance value to 250, and then paint the grass where there isn't any dirt along the route in the Paint Details mode: You may not see the results unless you zoom your camera in, which you can do by using the mouse wheel. Don't go too far in though, or the results may not show as well. This aspect of level creation isn't very difficult, just time consuming. However, it's taking time to enter these details that really sets a game apart from the other games. Generally, you'll want to playtest and make sure your level is fun before performing these actions; but I feel it's important to have an idea of how to do it for your future projects. Lastly, our current island is very flat, and while that's okay for cities, nature is random. Go back into the Raise/Lower Height tool and gently raise and lower some areas of the level to give the illusion of depth. Do note that your trees and grass will raise and fall with the changes that you make, as shown in the following screenshot: With this done, let's now add some details to the areas that the player will not be visiting, such as the outer hills. Go into the Place Trees mode and create another tree, but this time select the one without collision and then place it around the edges of the mountains, as shown in the following screenshot: At this point, we have a nice exterior shape created with the terrain tools! 
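If you prefer to set up the colliding tree from code rather than through the Component menu, the following is a rough sketch of the same idea. The treePrefab field and the CreateCollidingTree method are my own names, and the radius of 2 simply mirrors the value used in the manual steps above.

using UnityEngine;

// Illustrative sketch: instantiate a tree prefab and give it a capsule collider,
// mirroring the manual CollidingTree setup described above.
public class TreeColliderSetup : MonoBehaviour
{
    public GameObject treePrefab; // for example, the ScotsPineTree prefab

    public GameObject CreateCollidingTree(Vector3 position)
    {
        GameObject tree = (GameObject)Instantiate(treePrefab, position, Quaternion.identity);
        CapsuleCollider capsule = tree.AddComponent<CapsuleCollider>();
        capsule.radius = 2f; // same radius as in the manual steps
        return tree;
    }
}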
If you want to add even more detail to your levels, you can add additional trees and/or materials to the level area as long as it makes sense for them to be there. For more information on the terrain engine that Unity has, please visit http://docs.unity3d.com/Manual/script-Terrain.html. Creating our player Now that we have the terrain and its details, it's hard to get a good picture of what the game looks like without being able to see what it looks like down on the surface, so next we will do just that. However, instead of creating our player from scratch as we've done previously, we will make use of the code that Unity has provided us. We will perform the following steps: Start off by navigating to Assets | Import Package | Character Controller. When the Importing Package dialog comes up, we only need to import the files shown in the following screenshot: Now drag-and-drop the First Person Controller prefab under the Prefabs folder in our Project tab into your world, where you want the player to spawn, setting the Y Position value to above 100. If you see yourself fall through the world instead of hitting the ground when you spawn, then increase the Y Position value until you get there. If you open up the First Person Controller object in the Hierarchy tab, you'll notice that it has a Main Camera object already, so delete the Main Camera object that already exists in the world. Right now, if we played the game, you'd see that everything is dark because we don't have any light. For the purposes of demonstration, let's add a directional light by navigating to GameObject | Create Other | Directional Light. If you are using Unity 4.6 or later, navigate to GameObject | Create  General | Terrain to create the Terrain. Save your scene and hit the Play button to drop into your level: At this point, we have a playable level that we can explore and move around in! Building the atmosphere Now, the base of our world has been created; let's add some effects to make the game even more visually appealing and so it will start to fit in with the survival horror feel that we're going to be giving the game. The first part of creating the atmosphere is to add something for the sky aside from the light blue color that we currently use by default. To fix this, we will be using a skybox. A skybox is a method to create backgrounds to make the area seem bigger than it really is, by putting an image in the areas that are currently being filled with the light blue color, not moving in the same way that the sky doesn't move to us because it's so far away. The reason why we call a skybox a skybox is because it is made up of six textures that will be the inside of the box (one for each side of a cube). Game engines such as Unreal have skydomes, which are the same thing; but they are done with a hemisphere instead of a cube. We will perform the following steps to build the atmosphere: To add in our skybox, we are going to navigate to Assets | Import Package | Skyboxes. We want our level to display the night, so we'll be using the Starry Night Skybox. Just select the StarryNight Skybox.mat file and textures inside the Standard Assets/Skyboxes/Textures/StarryNight/ location, and then click on Import: With this file imported, we need to navigate to Edit | Render Settings next. 
Once there, we need to set the Skybox Material option to the Starry Night skybox: If you go into the game, you'll notice the level starting to look nicer already with the addition of the skybox, except for the fact that the sky says night while the world says it's daytime. Let's fix that now. Switch to the Game tab so that you can see the changes we'll be making next. While still at the RenderSettings menu, let's turn on the Fog property by clicking on the checkbox with its name and changing the Fog Color value to a black color. You should notice that the surroundings are already turning very dark. Play around with the Fog Density value until you're comfortable with how much the player can see ahead of them; I used 0.005. Fog obscures far away objects, which adds to the atmosphere and saves the rendering power. The denser the fog, the more the game will feel like a horror game. The first game of the Silent Hill franchise used fog to make the game run at an acceptable frame rate due to a large 3D environment it had on early PlayStation hardware. Due to how well it spooked players, it continued to be used in later games even though they could render larger areas with the newer technology. Let's add some lighting tweaks to make the environment that the player is walking in seem more like night. Go into the DirectionalLight properties section and change the Intensity value to 0.05. You'll see the value get darker, as shown in the following screenshot: If for some reason, you'd like to make the world pitch black, you'll need to modify the Ambient Light property to black inside the RenderSettings section. By default, it is dark grey, which will show up even if there are no lights placed in the world. In the preceding example, I increased the Intensity value to make it easier to see the world to make it easier for readers to follow, but in your project, you probably don't want the player to see so far out with such clarity. With this, we now have a believable exterior area at night! Now that we have this basic knowledge, let's add a flashlight so the players can see where they are going. Creating a flashlight Now that our level looks like a dark night, we still want to give our players the ability to see what's in front of them with a flashlight. We will customize the First Person Controller object to fit our needs: Create a spotlight by navigating to GameObject | Create Other | Spotlight. Once created, we are going to make the spotlight a child of the First Person Controller object's Main Camera object by dragging-and-dropping it on top of it. Once a child, change the Transform Position value to (0, -.95, 0). Since positions are relative to your parent's position, this places the light slightly lower than the camera's center, just like a hand holding a flashlight. Now change the Rotation value to (0,0,0) or give it a slight diagonal effect across the scene if you don't want it to look like it's coming straight out: Now, we want the flashlight to reach out into the distance. So we will change the Range value to 1000, and to make the light wider, we will change the Spot Angle value to 45. The effects are shown in the following screenshot: If you have Unity Pro, you can also give shadows to the world based on your lights by setting the Shadow Type property. We now have a flashlight, so the player can focus on a particular area and not worry. 
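As a small optional extra that is not part of the original steps, you could let the player switch the flashlight on and off. The following sketch toggles the spotlight's Light component with the F key; attach it to the Spotlight object (the class name and key choice are assumptions for illustration).

using UnityEngine;

// Optional sketch: toggles the attached Light component on and off with a key press.
public class FlashlightToggle : MonoBehaviour
{
    public KeyCode toggleKey = KeyCode.F;

    void Update()
    {
        if (Input.GetKeyDown(toggleKey))
        {
            Light spot = GetComponent<Light>();
            spot.enabled = !spot.enabled;
        }
    }
}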
Walking / flashlight bobbing animation Now the flashlight looks fine in a screenshot, but if you walk throughout the world, it will feel very static and unnatural. If a person is actually walking through the forest, there will be a slight bob as you walk, and if someone is actually holding a flash light, it won't be stable the entire time because your hand would move. We can solve both of these problems by writing yet another script. We perform the following steps: Create a new folder called Scripts. Inside this folder, create a new C# script called BobbingAnimation. Open the newly created script and use the following code inside it: using UnityEngine; using System.Collections;   /// <summary> /// Allows the attached object to bob up and down through /// movement or /// by default. /// </summary> public class BobbingAnimation : MonoBehaviour {   /// <summary>   /// The elapsed time.   /// </summary>   private float elapsedTime;     /// <summary>   /// The starting y offset from the parent.   /// </summary>   private float startingY;     /// <summary>   /// The controller.   /// </summary>   private CharacterController controller;     /// <summary>   /// How far up and down the object will travel   /// </summary>   public float magnitude = .2f;     /// <summary>   /// How often the object will move up and down   /// </summary>   public float frequency = 10;     /// <summary>   /// Do you always want the object to bob up and down or   /// with movement?   /// </summary>   public bool alwaysBob = false;     /// <summary>   /// Start this instance.   /// </summary>   void Start ()   {     startingY = transform.localPosition.y;     controller = GetComponent<CharacterController> ();   }     /// <summary>   /// Update this instance.   /// </summary>   void Update ()   {          // Only increment elapsedTime if you want the player to     // bob, keeping it the same will keep it still     if(alwaysBob)     {       elapsedTime += Time.deltaTime;     }     else     {       if((Input.GetAxis("Horizontal") != 0.0f) || (Input.GetAxis("Vertical")  != 0.0f) )         elapsedTime += Time.deltaTime;     }         float yOffset = Mathf.Sin(elapsedTime * frequency) * magnitude;       //If we can't find the player controller or we're     // jumping, we shouldn't be bobbing     if(controller && !controller.isGrounded)     {       return;     }       //Set our position     Vector3 pos = transform.position;         pos.y = transform.parent.transform.position.y +             startingY + yOffset;         transform.position = pos;       } } The preceding code will tweak the object it's attached to so that it will bob up and down whenever the player is moving along the x or y axis. I've also added a variable called alwaysBob, which, when true, will make the object always bob. Math is a game developer's best friend, and here we are using sin (pronounced sine). Taking the sin of an angle number gives you the ratio of the length of the opposite side of the angle to the length of the hypotenuse of a right-angled triangle. If that didn't make any sense to you, don't worry. The neat feature of sin is that as the number it takes gets larger, it will continuously give us a value between 0 and 1 that will go up and down forever, giving us a smooth repetitive oscillation. For more information on sine waves, visit http://en.wikipedia.org/wiki/Sine_wave. 
While we're using the sin just for the player's movement and the flashlight, this could be used in a lot of effects, such as having save points/portals bob up and down, or any kind of object you would want to have slight movement or some special FX. Next, attach the BobbingAnimation component to the Main Camera object, leaving all the values with the defaults. After this, attach the BobbingAnimation component to the spotlight as well. With the spotlight selected, turn the Always Bob option on and change the Magnitude value to .05 and the Frequency value to 3. The effects are shown in the following screenshot: Summary To learn more about FPS game, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Building an FPS Game with Unity (https://www.packtpub.com/game-development/building-fps-game-unity) Resources for Article:   Further resources on this subject: Mobile Game Design Best Practices [article] Putting the Fun in Functional Python [article] Using a collider-based system [article]
Mobile Game Design Best Practices

Packt
18 Feb 2016
13 min read
With so many different screen sizes, so little space to work with, and no real standard in the videogame industry, interface design is one of the toughest parts of creating a successful game. We will provide you with what you need to know to address the task properly and come up with optimal solutions for your mobile games. (For more resources related to this topic, see here.)

Best practices in UI design

In this article we will provide a list of useful hints for approaching the creation of a well-designed UI for your next game. The first golden rule is that the better the game interface is designed, the less it will be noticed by players, as it allows users to navigate through the game in a way that feels natural and easy to grasp. To achieve that, always opt for the simplest solution possible, as simplicity means that controls are clean and easy to learn and that the necessary info is displayed clearly.

A little bit of psychology helps when deciding the positioning of buttons in the interface. Human beings have typical cognitive biases and they easily develop habits. Learning how to exploit such psychological aspects can really make a difference in the ergonomics of your game interface. We mentioned The Theory of Affordances by Gibson, but there are other theories of visual perception that should be taken into consideration. A good starting point is the Gestalt Principles, which you will find at the following link:

http://graphicdesign.spokanefalls.edu/tutorials/process/gestaltprinciples/gestaltprinc.htm

Finally, it may seem trivial, but never assume anything. Design your game interface so that the most prominent options available on the main game screen lead your players to the game.

Choose the target platform

When you begin designing the interface for a game, spend some time thinking about the main target platform for your game, in order to have a reference resolution to begin working with. The following list describes the screen resolutions of the most popular devices you are likely to work with:

    iPhone 3GS and equivalent: 480x320
    iPhone 4(S) and equivalent: 960x640 at 326 ppi
    iPhone 5: 1136x640 at 326 ppi
    iPad 1 and 2: 1024x768
    iPad 3 retina: 2048x1536 at 264 ppi
    iPad Mini: 1024x768 at 163 ppi
    Android devices: variable
    Tablets: variable

As you may notice, when working with iOS, things are almost straightforward: with the exception of the latest iPhone 5, there are only two main aspect ratios, and retina displays simply double the number of pixels. By designing your interface separately for the iPhone/iPod touch and the iPad at retina resolution, and scaling it down for older models, you basically cover almost all Apple-equipped customers.

For Android-based devices, on the other hand, things are more complicated, as there are tons of devices and they can widely differ from each other in screen size and resolution. The best thing to do in this case is to choose a reference, high-end model with an HD display, such as the HTC One X+ or the Samsung Galaxy S4 (as we write), and design the interface to match its resolution. Scale it as required to adapt to others: though this way the graphics won't be perfect on every device, 90 percent of your gamers won't notice any difference.
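If you are building the game in Unity, a few lines of code can report the actual device resolution at runtime along with a scale factor relative to your chosen reference resolution. This is an illustrative sketch only; the 1280x720 reference and the class name are assumptions, not values from this article.

using UnityEngine;

// Illustrative sketch: logs the device resolution, aspect ratio, and a UI scale
// factor relative to an assumed reference resolution.
public class ReferenceResolution : MonoBehaviour
{
    public int referenceWidth = 1280;  // assumed reference resolution
    public int referenceHeight = 720;

    void Start()
    {
        float widthScale = (float)Screen.width / referenceWidth;
        float heightScale = (float)Screen.height / referenceHeight;
        float aspect = (float)Screen.width / Screen.height;

        Debug.Log("Screen: " + Screen.width + "x" + Screen.height +
                  ", aspect " + aspect.ToString("F2") +
                  ", suggested UI scale " + Mathf.Min(widthScale, heightScale).ToString("F2"));
    }
}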
The following is a list of sites where you can find useful information to deal with the Android screens variety dilemma: http://developer.android.com/about/dashboards/index.html http://anidea.com/technology/designer%E2%80%99s-guide-to-supporting-multiple-android-device-screens/ http://unitid.nl/2012/07/10-free-tips-to-create-perfect-visual-assets-for-ios-and-android-tablet-and-mobile/ Search for references There is no need to reinvent the wheel every time you design a new game interface. Games can be easily classified by genre and different genres tend to adopt general solutions for the interface that are shared among different titles in the same genre. Whenever you are planning the interface for your new game, look at others' work first. Play games and study their UI, especially from a functional perspective. When studying others' game interfaces, always ask yourself: What info is necessary to the player to play this title? What kind of functionality is needed to achieve the game goals? Which are the important components that need to stand out from the rest? What's the purpose and context of each window? By answering such questions, you will be able to make a deep analysis of the interface of other games, compare them, and then choose the solutions to better fit your specific needs. The screen flow The options available to the players of your game will need to be located in a game screen of some sort. So the questions you should ask yourself are: How many screens does my game need? Which options will be available to players? Where will these options be located? Once you come up with a list of the required options and game screens, create a flow chart to describe how the player navigates through the different screens and which options are available in each one. The resulting visual map will help you understand if the screen flow is clear and intuitive, if game options are located where the players expect to find them, and if there are doubles, which should be avoided. Be sure that each game screen is provided with a BACK button to go back to a previous game screen. It can be useful to add hyperlinks to your screen mockups so that you can try navigating through them early on. Functionality It is now time to define how the interface you are designing will help users to play your game. At this point, you should already have a clear idea of what the player will be doing in your game and the mechanics of your game. With that information in mind, think about what actions are required and what info must be displayed to deal with them. For every piece of information that you can deliver to the player, ask yourself if it is really necessary and where it should be displayed for optimal fruition. Try to be as conservative as you can when doing that, it is much too easy to lose the grip on the interface of your game if new options, buttons, and functions keep proliferating. The following is a list of useful hints to keep in mind when defining the functionality of your game interface: Keep the number of buttons as low as possible Stick to one primary purpose for each game screen Refer to the screen flow to check the context for each game screen Split complex info into small chunks and/or multiple screens to avoid overburdening your players Wireframes Now that the flow and basic contents of the game interface is set, it is time to start drawing with a graphic editor, such as Photoshop. 
Create a template for your game interface which can support the different resolutions you expect to develop your game for, and start defining the size and positioning of each button and any pieces of information that must be available on screen. Try not to use colors yet, or just use them to highlight very important buttons available in each game screen. This operation should involve at least three members of the team: the game designer, the artist, and the programmer. If you are a game designer, never plan the interface without conferring with your artist and programmer: the first is responsible for creating the right assets for the job, so it is important that he/she understands the ideas behind your design choices. The programmer is responsible for implementing the solutions you designed, so it is good practice to ask for his/her opinion too, to avoid designing solutions which in the end cannot be implemented. There are also many tools that can be used by web and app developers to quickly create wireframes and prototypes for user interfaces. A good selection of the most appreciated tools can be found at the following link: http://www.dezinerfolio.com/2011/02/21/14-prototyping-and-wireframing-tools-for-designers The button size We suggest you put an extra amount of attention to defining the proper size for your game buttons. There's no point in having buttons on the screen if the player can't press them. This is especially true with games that use virtual pads. As virtual buttons tend to shadow a remarkable portion of a mobile device, there is a tendency to make them as small as possible. If they are too small, the consequences can be catastrophic, as the players won't be able to even play your game, let alone enjoy it. Street Fighter IV for the iPhone, for example, implements the biggest virtual buttons available on the Apple Store. Check them in the following figure: When designing buttons for your game interface, take your time to make tests and find an optimal balance between the opposing necessities of displaying buttons and saving as much screen space as possible for gameplay. The main screen The main goal of the first interactive game screen of a title should be to make it easy to play. It is thus very important that the PLAY button is large and distinctive enough for players to easily find it on the main screen. The other options that should be available on the main screen may vary depending on the characteristics of each specific game, although some are expected despite the game genre, such as OPTIONS, LEADERBOARDS, ACHIEVEMENTS, BUY, and SUPPORT. The following image represents the main screen of Angry Birds, which is a perfect example of a well-designed main screen. Notice, for example, that optional buttons on the bottom part of the screen are displayed as symbols that make it clear what is the purpose of each one of them. This is a smart way to reduce issues related with translating your game text into different languages. Test and iterate Once the former steps are completed, start testing the game interface. Try every option available to check that the game interface actually provides users with everything they need to correctly play and enjoy your game. Then ask other people to try it and get feedback from them. As you collect feedback, list the most requested modifications, implement them, and repeat the cycle until you are happy with the actual configuration you came up with for your game interface. 
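As a practical footnote to the button size discussion above, one simple check you can fold into your testing is whether every button meets a minimum touch-target size. On Android, 48 dp is a commonly recommended minimum, and dp values are converted to pixels using the screen density (160 dpi is the baseline density). The helper below is my own illustrative sketch, not part of any particular engine:

// Converts a size in dp to pixels for a given screen density (dpi).
static float dpToPixels(float dp, float screenDpi) {
    return dp * (screenDpi / 160f); // 160 dpi is the Android baseline density
}

// Returns true if a button is at least 48 dp on both sides.
static boolean isBigEnoughToTap(float buttonWidthPx, float buttonHeightPx, float screenDpi) {
    float minSizePx = dpToPixels(48f, screenDpi);
    return buttonWidthPx >= minSizePx && buttonHeightPx >= minSizePx;
}

A check like this won't tell you whether a layout feels good, but it will catch buttons that are physically too small to press before your testers do.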
Evergreen options In the last section of this article, we will provide some considerations about game options that should always be implemented in a well-designed mobile game UI, regardless of its genre or distinctive features. Multiple save slots Though extremely fit for gaming, today's smartphones are first of all phones and multi-purpose devices in general, so it is very common to be forced to suddenly quit a match due to an incoming call or other common activities. All apps quit when there is an incoming call or when the player presses the home button and mobile games offer an auto-saving feature in case of such events. What not all games do is to keep separate save states for every mode the game offers or for multiple users. Plants vs. Zombies, for example, offers such a feature: both the adventure and the quick play modes, in all their variations, are stored in separate save slots, so that the player never loses his/her progresses, regardless of the game mode he/she last played or the game level he/she would like to challenge. The following is a screenshot taken from the main screen of the game: A multiple save option is also much appreciated because it makes it safe for your friends to try your newly downloaded game without destroying your previous progresses. Screen rotation The accelerometer included in a large number of smartphones detects the rotation of the device in the 3D space and most software running on those devices rotate their interface as well, according to the portrait or landscape mode in which the smartphone is held. With games, it is not as easy to deal with such a feature as it would be for an image viewer or a text editor. Some games are designed to exploit the vertical or horizontal dimension of the screen with a purpose, and rotating the phone is an action that might not be accommodated by the game altogether. Should the game allow rotating the device, it is then necessary to adapt the game interface to the orientation of the phone as well, and this generally means designing an alternate version of the interface altogether. It is also an interesting (and not much exploited) feature to have the action of rotating the device as part of the actual gameplay and a core mechanic for a game. Calibrations and reconfigurations It is always a good idea to let players have the opportunity to calibrate and/or reconfigure the game controls in the options screen. For example, left-handed players would appreciate the possibility of switching the game controls orientation. When the accelerometer is involved, it can make a lot of difference for a player to be able to set the sensibility of the device to rotation. Different models with different hardware and software detect the rotation in the space differently and there's no single configuration which is good for all smartphones. So let players calibrate their phones according to their personal tastes and the capabilities of the device they are handling. Several games offer this option. Challenges As games become more and more social, several options have been introduced to allow players to display their score on public leaderboards and compete against friends. One game which does that pretty well is Super 7, an iPhone title that displays, on the very top of the screen, a rainbow bar which increases with the player's score. When the bar reaches its end on the right half of the screen, it means some other player's score has been beaten. 
It is a nice example of a game feature which continually rewards the player and motivates him as he plays the title. Experiment The touch screen is a relatively new control scheme for games. Feel free to try out new ideas. For example, would it be possible to design a first person shooter that uses the gestures, instead of traditional virtual buttons and D-pad? The trick is to keep the playfield as open as possible since the majority of smart devices have relatively small screens. Summary To learn more about mobile game desgin, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Mobile First Design with HTML5 and CSS3 (https://www.packtpub.com/web-development/mobile-first-design-html5-and-css3) Gideros Mobile Game Development (https://www.packtpub.com/game-development/gideros-mobile-game-development) Resources for Article: Further resources on this subject: Elucidating the Game-changing Phenomenon of the Docker-inspired Containerization Paradigm [article] Getting Started with Multiplayer Game Programming [article] Introduction to Mobile Web ArcGIS Development [article]
Speaking Java – Your First Game

Packt
17 Feb 2016
72 min read
In this article, we will start writing our very own Java code at the same time as we begin understanding Java syntax. We will learn how to store, retrieve, and manipulate different types of values stored in the memory. We will also look at making decisions and branching the flow of our code based on the values of this data. In this order, we will: Learn some Java syntax and see how it is turned into a running app by the compiler Store data and use it with variables Express yourself in Java with expressions Continue with the math game by asking a question Learn about decisions in Java Continue with the math game by getting and checking the answer (For more resources related to this topic, see here.) Acquiring the preceding Java skills will enable us to build the next two phases of our math game. This game will be able to ask the player a question on multiplication, check the answer and give feedback based on the answer given, as shown in the following screenshot: Java syntax Throughout this book, we will use plain English to discuss some fairly technical things. You will never be asked to read a technical explanation of a Java or Android concept that has not been previously explained in a non-technical way. Occasionally, I might ask or imply that you accept a simplified explanation in order to offer a fuller explanation at a more appropriate time, like the Java class as a black box; however, never will you need to scurry to Google in order to get your head around a big word or a jargon-filled sentence. Having said that, the Java and Android communities are full of people who speak in technical terms and to join in and learn from these communities, you need to understand the terms they use. So the approach this book takes is to learn a concept or appreciate an idea using an entirely plain speaking language, but at the same time, it introduces the jargon as part of the learning. Then, much of the jargon will begin to reveal its usefulness, usually as a way of clarification or keeping the explanation/discussion from becoming longer than it needs to be. The very term, "Java syntax," could be considered technical or jargon. So what is it? The Java syntax is the way we put together the language elements of Java in order to produce code that works in the Java/Dalvik virtual machine. Syntax should also be as clear as possible to a human reader, not least ourselves when we revisit our programs in the future. The Java syntax is a combination of the words we use and the formation of those words into sentence like structures. These Java elements or words are many in number, but when taken in small chunks are almost certainly easier to learn than any human-spoken language. The reason for this is that the Java language and its syntax were specifically designed to be as straightforward as possible. We also have Android Studio on our side which will often let us know if we make a mistake and will even sometimes think ahead and prompt us. I am confident that if you can read, you can learn Java; because learning Java is much easy. What then separates someone who has finished an elementary Java course from an expert programmer? The same things that separate a student of language from a master poet. Mastery of the language comes through practice and further study. The compiler The compiler is what turns our human-readable Java code into another piece of code that can be run in a virtual machine. This is called compiling. 
The Dalvik virtual machine will run this compiled code when our players tap on our app icon. Besides compiling Java code, the compiler will also check for mistakes. Although we might still have mistakes in our released app, many are discovered when our code is compiled.
Making code clear with comments
As you become more advanced in writing Java programs, the solutions you use to create your programs will become longer and more complicated. Java was designed to manage complexity by having us divide our code into separate chunks, very often across multiple files.
Comments are a part of the Java program that do not have any function in the program itself. The compiler ignores them. They serve to help the programmer to document, explain, and clarify their code to make it more understandable to themselves at a later date or to other programmers who might need to use or modify the code. So, a good piece of code will be liberally sprinkled with lines like this:

//this is a comment explaining what is going on

The preceding comment begins with the two forward slash characters, //. The comment ends at the end of the line. It is known as a single-line comment. So anything on that line is for humans only, whereas anything on the next line (unless it's another comment) needs to be syntactically correct Java code:

//I can write anything I like here
but this line will cause an error

We can use multiple single-line comments:

//Below is an important note
//I am an important note
//We can have as many single line comments like this as we like

Single-line comments are also useful if we want to temporarily disable a line of code. We can put // in front of the code and it will not be included in the program. Recall this code, which tells Android to load our menu UI:

//setContentView(R.layout.activity_main);

In the preceding situation, the menu will not be loaded and the app will have a blank screen when run, as the entire line of code is ignored by the compiler.
There is another type of comment in Java—the multiline comment. This is useful for longer comments and also to add things such as copyright information at the top of a code file. Also like the single-line comment, it can be used to temporarily disable code, in this case usually multiple lines. Everything in between the leading /* signs and the ending */ signs is ignored by the compiler. Here are some examples:

/*
This program was written by a Java expert
You can tell I am good at this because
my code has so many helpful comments in it.
*/

There is no limit to the number of lines in a multiline comment. Which type of comment is best to use will depend upon the situation. In this book, I will always explain every line of code explicitly but you will often find liberally sprinkled comments within the code itself that add further explanation, insight or clarification. So it's always a good idea to read all of the code:

/*
The winning lottery numbers for next Saturday are
9,7,12,34,29,22
But you still want to learn Java? Right?
*/

All the best Java programmers liberally sprinkle their code with comments.
Storing data and using it with variables
We can think of a variable as a labeled storage box. They are also like a programmer's window to the memory of the Android device, or whatever device we are programming. Variables can store data in memory (the storage box), ready to be recalled or altered when necessary by using the appropriate label.
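We will cover the exact syntax for creating and using variables shortly; for now, here is a tiny preview sketch, just to make the storage-box idea concrete (the variable name score is simply an example):

int score;          // reserve a labeled storage box called score
score = 0;          // put the value 0 into the box
score = score + 10; // recall the value, add 10, and store the result back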
Computer memory has a highly complex system of addressing that we, fortunately, do not need to interact with in Java. Java variables allow us to make up convenient names for all the data that we want our program to work with; the JVM will handle all the technicalities that interact with the operating system, which in turn, probably through several layers of buck passing, will interact with the hardware.
So we can think of our Android device's memory as a huge warehouse. When we assign names to our variables, they are stored in the warehouse, ready when we need them. When we use our variable's name, the device knows exactly what we are referring to. We can then tell it to do things such as "get box A and add it to box C, delete box B," and so on.
In a game, we will likely have a variable named something along the lines of score. It is with this score variable that we would manage anything related to the user's score, such as adding to it, subtracting from it, or perhaps just showing it to the player. Here are some situations that might arise:
The player gets a question right, so add 10 to their existing score
The player views their stats screen, so print score on the screen
The player gets the best score ever, so make hiScore the same as their current score
These are fairly arbitrary examples of names for variables and as long as you don't use any of the characters or keywords that Java restricts, you can actually call your variables whatever you like. However, in practice, it is best to adopt a naming convention so that your variable names will be consistent. In this book, we will use a loose convention of variable names starting with a lowercase letter. When there is more than one word in the variable's name, the second word will begin with an uppercase letter. This is called "camel casing." Here are some examples of camel casing:
score
hiScore
playersPersonalBest
Before we look at some real Java code with some variables, we need to first look at the types of variables we can create and use.
Types of variables
It is not hard to imagine that even a simple game will probably have quite a few variables. What if the game has a high score table that remembers the names of the top 10 players? Then we might need variables for each player. And what about the case when a game needs to know if a playable character is dead or alive, or perhaps has any lives/retries left? We might need code that tests for life and then ends the game with a nice blood spurt animation if the playable character is dead. Another common requirement in a computer program, including games, is the right or wrong calculation: true or false.
To cover these and many other types of information you might want to keep track of, Java has types. There are many types of variables, and we can also invent our own types or use other people's types. But for now, we will look at the built-in Java types. To be fair, they cover just about every situation we are likely to run into for a while. Some examples are the best way to explain this type of stuff.
We have already discussed the hypothetical but highly likely score variable. Well, the score is likely to be a number, so we have to convey this (that the score is a number) to the Java compiler by giving the score an appropriate type. The hypothetical but equally likely playerName will, of course, hold the characters that make up the player's name.
Jumping ahead a couple of paragraphs, the type that holds a regular number is called an int, and the type that holds name-like data is called a string. And if we try and store a player name, perhaps "Ada Lovelace", in score, which is meant for numbers, we will certainly run into trouble. The compiler says no! Actually, the error would say this:
As we can see, Java was designed to make it impossible for such errors to make it to a running program. Did you also spot in the previous screenshot that I had forgotten the semicolon at the end of the line? With this compiler identifying our errors, what could possibly go wrong?
Here are the main types in Java. Later, we will see how to start using them:
int: This type is used to store integers. It uses 32 pieces (bits) of memory and can therefore store values with a magnitude a little in excess of 2 billion, including negative values.
long: As the name hints at, this data type can be used when even larger numbers are required. A long data type uses 64 bits of memory and 2 to the power of 63 is what we can store in this type. If you want to see what that looks like, try this: 9,223,372,036,854,775,807. Perhaps surprisingly, there are uses for long variables but if a smaller variable will do, we should use it so that our program uses less memory. You might be wondering when you might use numbers of this magnitude. The obvious examples would be math or science applications that do complex calculations but another use might be for timing. When you time how long something takes, the Java Date class uses the number of milliseconds since January 1, 1970. The long data type could be useful to subtract a start time from an end time to determine an elapsed time (there is a short timing sketch at the end of this section).
float: This is for floating-point numbers, that is, numbers where there is precision beyond the decimal point. As the fractional part of a number takes memory space just as the whole number portion, the range of numbers possible in a float is therefore decreased compared to non-floating-point numbers. So, unless our variable will definitely use the extra precision, float would not be our data type of choice.
double: When the precision in float is not enough, we have double.
short: When even an int data type is overkill, the super-skinny short fits into the tiniest of storage boxes, but we can only store around 64,000 values, from -32,768 to 32,767.
byte: This is an even smaller storage box than a short type. There is plenty of room for these in memory but a byte can only store values from -128 to 127.
boolean: We will be using plenty of Booleans throughout the book. A Boolean variable can be either true or false—nothing else. A Boolean can answer questions such as: Is the player alive? Has a new high score been reached? These are just two examples of the many true-or-false facts a Boolean variable can keep track of.
char: This stores a single alphanumeric character. It's not going to change anything on its own but it could be useful if we put lots of them together.
I have kept this discussion of data types to a practical level that is useful in the context of this book. If you are interested in how a data type's value is stored and why the limits are what they are, visit the Oracle Java tutorials site at http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html. Note that you do not need any more information than we have already discussed to continue with this book.
As we just learned, each type of data that we might want to store will require a specific amount of memory.
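As promised, here is a minimal sketch of the timing use of long described in the list above. System.currentTimeMillis() is a standard Java method that returns the number of milliseconds since January 1, 1970; the variable names are my own, chosen only for illustration:

long startTime = System.currentTimeMillis(); // time before the work starts

// ... do something that takes a while, for example loading a level ...

long endTime = System.currentTimeMillis();   // time after the work finishes
long elapsed = endTime - startTime;          // elapsed time in milliseconds

Because these values are far too big for an int, long is the natural choice here.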
So we must let the Java compiler know the type of the variable before we begin to use it. The preceding variables are known as the primitive types. They use predefined amounts of memory and so, using our storage analogy, fit into predefined sizes of the storage box. As the "primitive" label suggests, they are not as sophisticated as the reference types. Reference types You might have noticed that we didn't cover the string variable type that we previously used to introduce the concept of variables. Strings are a special type of variable known as a reference type. They quite simply refer to a place in memory where the storage of the variable begins, but the reference type itself does not define a specific amount of memory. The reason for this is fairly straightforward: we don't always know how much data will need to be stored until the program is actually run. We can think of strings and other reference types as continually expanding and contracting storage boxes. So won't one of these string reference types bump into another variable eventually? If you think about the devices memory as a huge warehouse full of racks of labeled storage boxes, then you can think of the Dalvik virtual machine as a super-efficient forklift truck driver that puts the different types of storage boxes in the most appropriate place. And if it becomes necessary, the virtual machine will quickly move stuff around in a fraction of a second to avoid collisions. It will even incinerate unwanted storage boxes when appropriate. This happens at the same time as constantly unloading new storage boxes of all types and placing them in the best place, for that type of variable. Dalvik tends to keep reference variables in a part of the warehouse that is different from the part for the primitive variables. So strings can be used to store any keyboard character, like a char data type but of almost any length. Anything from a player's name to an entire book can be stored in a single string. There are a couple more reference types we will explore. Arrays are a way to store lots of variables of the same type, ready for quick and efficient access. Think of an array as an aisle in our warehouse with all the variables of a certain type lined up in a precise order. Arrays are reference types, so Dalvik keeps these in the same part of the warehouse as strings. So we know that each type of data that we might want to store will require an amount of memory. Hence, we must let the Java compiler know the type of the variable before we begin to use it. Declaration That's enough of theory. Let's see how we would actually use our variables and types. Remember that each primitive type requires a specific amount of real device memory. This is one of the reasons that the compiler needs to know what type a variable will be of. So we must first declare a variable and its type before we attempt to do anything with it. To declare a variable of type int with the name score, we would type: int score; That's it! Simply state the type, in this case int, then leave a space, and type the name you want to use for this variable. Also note the semicolon on the end of the line as usual to show the compiler that we are done with this line and what follows, if anything, is not part of the declaration. For almost all the other variable types, declaration would occur in the same way. Here are some examples. The variable names are arbitrary. 
This is like reserving a labeled storage box in the warehouse: long millisecondsElapsed; float gravity; double accurrateGravity; boolean isAlive; char playerInitial; String playerName; Initialization Here, for each type, we initialize a value to the variable. Think about placing a value inside the storage box, as shown in the following code: score = 0; millisecondsElapsed = 1406879842000;//1st Aug 2014 08:57:22 gravity = 1.256; double accurrateGravity =1.256098; isAlive = true; playerInitial = 'C'; playerName = "Charles Babbage"; Notice that the char variable uses single quotes (') around the initialized value while the string uses double quotes ("). We can also combine the declaration and initialization steps. In the following snippet of code, we declare and initialize the same variables as we did previously, but in one step each: int score = 0; long millisecondsElapsed = 1406879842000;//1st Aug 2014 08:57:22 float gravity = 1.256; double accurrateGravity =1.256098; boolean isAlive = true; char playerInitial = 'C'; String playerName = "Charles Babbage"; Whether we declare and initialize separately or together is probably dependent upon the specific situation. The important thing is that we must do both: int a; //The line below attempts to output a to the console Log.i("info", "int a = " + a); The preceding code would cause the following result: Compiler Error: Variable a might not have been initialized There is a significant exception to this rule. Under certain circumstances variables can have default values. Changing variables with operators Of course, in almost any program, we are going to need to do something with these values. Here is a list of perhaps the most common Java operators that allow us to manipulate variables. You do not need to memorize them as we will look at every line of code when we use them for the first time: The assignment operator (=): This makesthe variable to the left of the operator the same as the value to the right. For example, hiScore = score; or score = 100;. The addition operator (+): This adds the values on either side of the operator. It is usually used in conjunction with the assignment operator, such as score = aliensShot + wavesCleared; or score = score + 100;. Notice that it is perfectly acceptable to use the same variable simultaneously on both sides of an operator. The subtraction operator (-): This subtracts the value on the right side of the operator from the value on the left. It is usually used in conjunction with the assignment operator, such as lives = lives - 1; or balance = income - outgoings;. The division operator (/): This divides the number on the left by the number on the right. Again, it is usually used in conjunction with the assignment operator, as shown in fairShare = numSweets / numChildren; or recycledValueOfBlock = originalValue / .9;. The multiplication operator (*): This multiplies variables and numbers, such as answer = 10 * 10; or biggerAnswer = 10 * 10 * 10;. The increment operator (++): This is a really neat way to add 1 to the value of a variable. The myVariable = myVariable + 1; statement is the same as myVariable++;. The decrement operator (--): You guessed it: a really neat way to subtract 1 from something. The myVariable = myVariable -1; statement is the same as myVariable--;. The formal names for these operators are slightly different from the names used here for explanation. For example, the division operator is actually one of the multiplicative operators. 
But the preceding names are far more useful for the purpose of learning Java and if you used the term "division operator", while conversing with someone from the Java community, they would know exactly what you mean. There are actually many more operators than these in Java. If you are curious about operators there is a complete list of them on the Java website at http://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html. All the operators required to complete the projects in this book will be fully explained in this book. The link is provided for the curious among us. Expressing yourself in Java Let's try using some declarations, assignments and operators. When we bundle these elements together into some meaningful syntax, we call it an expression. So let's write a quick app to try some out. Here we will make a little side project so we can play with everything we have learned so far. Instead, we will simply write some Java code and examine its effects by outputting the values of variables to the Android console, called LogCat. We will see exactly how this works by building the simple project and examining the code and the console output: The following is a quick reminder of how to create a new project. Close any currently open projects by navigating to File | Close Project. Click on Start a new Android Studio project. The Create New Project configuration window will appear. Fill in the Application name field and Company Domain with packtpub.com or you could use your own company website name here instead. Now click on the Next button. On the next screen, make sure the Phone and Tablet checkbox has a tick in it. Now we have to choose the earliest version of Android we want to build our app for. Go ahead and play with a few options in the drop-down selector. You will see that the earlier the version we select, the greater is the percentage of devices our app can support. However, the trade-off here is that the earlier the version we select, the less are cutting-edge Android features available in our apps. A good balance is to select API 8: Android 2.2 (Froyo). Go ahead and do that now as shown in the next screenshot. Click on Next. Now select Blank Activity as shown in the next screenshot and click on Next again. On the next screen, simply change Activity Name to MainActivity and click on Finish. To keep our code clear and simple, you can delete the two unneeded methods (onCreateOptionsMenu and onOptionsItemSelected) and their associated @override and @import statements. However, this is not necessary for the example to work. As with all the examples and projects in this book, you can copy or review the code from the download bundle. Just create the project as described previously and paste the code from MainActivity.java file from the download bundle to the MainActivity.java file that was generated when you created the project in Android Studio. Just ensure that the package name is the same as the one you chose when the project was created. However, I strongly recommend going along with the tutorial so that we can learn how to do everything for ourselves. As this app uses the LogCat console to show its output, you should run this app on the emulator only and not on a real Android device. The app will not harm a real device, but you just won't be able to see anything happening. Create a new blank project called Expressions In Java. 
Now, in the onCreate method just after the line where we use the setContentView method, add this code to declare and initialize some variables:

//first we declare and initialize a few variables
int a = 10;
String b = "Alan Turing";
boolean c = true;

Now add the following code. This code simply outputs the value of our variables in a form where we can closely examine them in a minute:

//Let's look at how Android 'sees' these variables
//by outputting them, one at a time to the console
Log.i("info", "a = " + a);
Log.i("info", "b = " + b);
Log.i("info", "c = " + c);

Now let's change our variables using the addition operator and another new operator. See if you can work out the output values for variables a, b, and c before looking at the output and the code explanation:

//Now let's make some changes
a++;
a = a + 10;
b = b + " was smarter than the average bear Booboo";
b = b + a;
c = (1 + 1 == 3);//1 + 1 is definitely 2! So false.

Let's output the values once more in the same way we did in step 3, but this time, the output should be different:

//Now to output them all again
Log.i("info", "a = " + a);
Log.i("info", "b = " + b);
Log.i("info", "c = " + c);

Run the program on an emulator in the usual way. You can see the output by clicking on the Android tab from our "useful tabs" area below the Project Explorer. Here is the output, with some of the unnecessary formatting stripped off:

info? a = 10
info? b = Alan Turing
info? c = true
info? a = 21
info? b = Alan Turing was smarter than the average bear Booboo21
info? c = false

Now let's discuss what happened. In step 2, we declared and initialized three variables:
a: This is an int that holds the value 10
b: This is a string that holds the name of an eminent computer scientist
c: This is a Boolean that holds the value true
So when we output the values in step 3, it should be no surprise that we get the following:

info? a = 10
info? b = Alan Turing
info? c = true

In step 4, all the fun stuff happens. We add 1 to the value of our int a using the increment operator like this: a++;. Remember that a++ is the same as a = a + 1. We then add 10 to a. Note we are adding 10 to a after having already added 1. So we get this output for a 10 + 1 + 10 operation:

info? a = 21

Now let's examine our string, b. We appear to be using the addition operator on our eminent scientist. What is happening is what you could probably guess. We are adding together two strings "Alan Turing" and "was smarter than the average bear Booboo." When you add two strings together it is called concatenating and the + symbol doubles as the concatenation operator. Finally, for our string, we appear to be adding int a to it. This is allowed and the value of a is concatenated to the end of b.

info? b = Alan Turing was smarter than the average bear Booboo21

This does not work the other way round; you cannot add a string to an int. This makes sense as there is no logical answer.

a = a + b

Finally, let's look at the code that changes our Boolean, c, from true to false: c = (1 + 1 == 3);. Here, we are assigning to c the value of the expression contained within the brackets. This would be straightforward, but why the double equals (==)? We have jumped ahead of ourselves a little. The double equals sign is another operator called the comparison operator. So we are really asking, does 1 + 1 equal 3? Clearly the answer is false. You might ask, "why use == instead of =?" Simply to make it clear to the compiler when we mean to assign and when we mean to compare. Inadvertently using = instead of == is a very common error.
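To make that last point concrete, here is a tiny sketch of the two operators side by side (isAlive and gameOver are illustrative variable names of my own, not part of the math game):

boolean isAlive = true;

// The comparison operator (==) asks a question and produces true or false
boolean gameOver = (isAlive == false); // gameOver is now false

// The assignment operator (=) stores the value on the right
// in the variable on the left
isAlive = false;                       // isAlive is now false

Reading = aloud as "becomes" and == as "is equal to" is a simple habit that helps keep the two apart.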
Inadvertently using = instead of == is a very common error. The assignment operator (=) assigns the value on the right to the value on the left, while the comparison operator (==) compares the values on either side. The compiler will warn us with an error when we do this but at first glance you might swear the compiler is wrong. Now let's use everything we know and a bit more to make our Math game project. Math game – asking a question Now that we have all that knowledge under our belts, we can use it to improve our math game. First, we will create a new Android activity to be the actual game screen as opposed to the start menu screen. We will then use the UI designer to lay out a simple game screen so that we can use our Java skills with variables, types, declaration, initialization, operators, and expressions to make our math game generate a question for the player. We can then link the start menu and game screens together with a push button. If you want to save typing and just review the finished project, you can use the code downloaded from the Packt website. If you have any trouble getting any of the code to work, you can review, compare, or copy and paste the code from the already completed code provided in the download bundle. The completed code is in the following files that correspond to the filenames we will be using in this tutorial: Chapter 3/MathGameChapter3a/java/MainActivity.java Chapter 3/MathGameChapter3a/java/GameActivity.java Chapter 3/MathGameChapter3a/layout/activity_main.xml Chapter 3/MathGameChapter3a/layout/activity_game.xml As usual, I recommend following this tutorial to see how we can create all of the code for ourselves. Creating the new game activity We will first need to create a new Java file for the game activity code and a related layout file to hold the game Activity UI. Run Android Studio and select your Math Game Chapter 2 project. It might have been opened by default. Now we will create the new Android Activity that will contain the actual game screen, which will run when the player taps the Play button on our main menu screen. To create a new activity, we know need another layout file and another Java file. Fortunately Android Studio will help us do this. To get started with creating all the files we need for a new activity, right-click on the src folder in the Project Explorer and then go to New | Activity. Now click on Blank Activity and then on Next. We now need to tell Android Studio a little bit about our new activity by entering information in the above dialog box. Change the Activity Name field to GameActivity. Notice how the Layout Name field is automatically changed for us to activity_game and the Title field is automatically changed to GameActivity. Click on Finish. Android Studio has created two files for us and has also registered our new activity in a manifest file, so we don't need to concern ourselves with it. If you look at the tabs at the top of the editor window, you will see that GameActivity.java has been opened up ready for us to edit, as shown in the following screenshot: Ensure that GameActivity.java is active in the editor window by clicking on the GameActivity.java tab shown previously. Android overrides some methods for us by default, and that most of them were not necessary. Here again, we can see the code that is unnecessary. If we remove it, then it will make our working environment simpler and cleaner. To avoid this here, we will simply use the code from MainActivity.java as a template for GameActivity.java. 
We can then make some minor changes. Click on the MainActivity.java tab in the editor window. Highlight all of the code in the editor window using Ctrl + A on the keyboard. Now copy all of the code in the editor window using the Ctrl + C on the keyboard. Now click on the GameActivity.java tab. Highlight all of the code in the editor window using Ctrl + A on the keyboard. Now paste the copied code and overwrite the currently highlighted code using Ctrl + V on the keyboard. Notice that there is an error in our code denoted by the red underlining as shown in the following screenshot. This is because we pasted the code referring to MainActivity in our file which is called GameActivity. Simply change the text MainActivity to GameActivity and the error will disappear. Take a moment to see if you can work out what the other minor change is necessary, before I tell you. Remember that setContentView loads our UI design. Well what we need to do is change setContentView to load the new design (that we will build next) instead of the home screen design. Change setContentView(R.layout.activity_main); to setContentView(R.layout.activity_game);. Save your work and we are ready to move on. Note the Project Explorer where Android Studio puts the two new files it created for us. I have highlighted two folders in the next screenshot. In future, I will simply refer to them as our java code folder or layout files folder. You might wonder why we didn't simply copy and paste the MainActivity.java file to begin with and saved going through the process of creating a new activity? The reason is that Android Studio does things behind the scenes. Firstly, it makes the layout template for us. It also registers the new Activity for use through a file we will see later, called AndroidManifest.xml. This is necessary for the new activity to be able to work in the first place. All things considered, the way we did it is probably the quickest. The code at this stage is exactly the same as the code for the home menu screen. We state the package name and import some useful classes provided by Android: package com.packtpub.mathgamechapter3a.mathgamechapter3a; import android.app.Activity; import android.os.Bundle; We create a new activity, this time called GameActivity: public class GameActivity extends Activity { Then we override the onCreate method and use the setContentView method to set our UI design as the contents of the player's screen. Currently, however, this UI is empty: super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); We can now think about the layout of our actual game screen. Laying out the game screen UI As we know, our math game will ask questions and offer the player some multiple choices to choose answers from. There are lots of extra features we could add, such as difficulty levels, high scores, and much more. But for now, let's just stick to asking a simple, predefined question and offering a choice of three predefined possible answers. Keeping the UI design to the bare minimum suggests a layout. Our target UI will look somewhat like this: The layout is hopefully self-explanatory, but let's ensure that we are really clear; when we come to building this layout in Android Studio, the section in the mock-up that displays 2 x 2 is the question and will be made up of three text views (both numbers, and the = sign is also a separate view). Finally, the three options for the answer are made up of Button layout elements. 
We used all of these UI elements, but this time, as we are going to be controlling them using our Java code, there are a few extra things we need to do to them. So let's go through it step by step: Open the file that will hold our game UI in the editor window. Do this by double-clicking on activity_game.xml. This is located in our UI layout folder which can be found in the project explorer. Delete the Hello World TextView, as it is not required. Find the Large Text element on the palette. It can be found under the Widgets section. Drag three elements onto the UI design area and arrange them near the top of the design as shown in the next screenshot. It does not have to be exact; just ensure that they are in a row and not overlapping, as shown in the following screenshot: Notice in the Component Tree window that each of the three TextViews has been assigned a name automatically by Android Studio. They are textView , textView2 and textView3: Android Studio refers to these element names as an id. This is an important concept that we will be making use of. So to confirm this, select any one of the textViews by clicking on its name (id), either in the component tree as shown in the preceding screenshot or directly on it in the UI designer shown previously. Now look at the Properties window and find the id property. You might need to scroll a little to do this: Notice that the value for the id property is textView. It is this id that we will use to interact with our UI from our Java code. So we want to change all the IDs of our TextViews to something useful and easy to remember. If you look back at our design, you will see that the UI element with the textView id is going to hold the number for the first part of our math question. So change the id to textPartA. Notice the lowercase t in text, the uppercase P in Part, and the uppercase A. You can use any combination of cases and you can actually name the IDs anything you like. But just as with naming conventions with Java variables, sticking to conventions here will make things less error-prone as our program gets more complicated. Now select textView2 and change id to textOperator. Select the element currently with id textView3 and change it to textPartB. This TextView will hold the later part of our question. Now add another Large Text from the palette. Place it after the row of the three TextViews that we have just been editing. This Large Text will simply hold our equals to sign and there is no plan to ever change it. So we don't need to interact with it in our Java code. We don't even need to concern ourselves with changing the ID or knowing what it is. If this situation changed, we could always come back at a later time and edit its ID. However, this new TextView currently displays Large Text and we want it to display an equals to sign. So in the Properties window, find the text property and enter the value =. We have changed the text property, and you might also like to change the text property for textPartA, textPartB, and textOperator. This is not absolutely essential because we will soon see how we can change it via our Java code; however, if we change the text property to something more appropriate, then our UI designer will look more like it will when the game runs on a real device. So change the text property of textPartA to 2, textPartB to 2, and textOperator to x. Your UI design and Component tree should now look like this: For the buttons to contain our multiple choice answers, drag three buttons in a row, below the = sign. 
Line them up neatly like our target design. Now, just as we did for the TextViews, find the id properties of each button, and from left to right, change the id properties to buttonChoice1, buttonChoice2, and buttonChoice3. Why not enter some arbitrary numbers for the text property of each button so that the designer more accurately reflects what our game will look like, just as we did for our other TextViews? Again, this is not absolutely essential as our Java code will control the button appearance. We are now actually ready to move on. But you probably agree that the UI elements look a little lost. It would look better if the buttons and text were bigger. All we need to do is adjust the textSize property for each TextView and for each Button. Then, we just need to find the textSize property for each element and enter a number with the sp syntax. If you want your design to look just like our target design from earlier, enter 70sp for each of the TextView textSize properties and 40sp for each of the Buttons textSize properties. When you run the game on your real device, you might want to come back and adjust the sizes up or down a bit. But we have a bit more to do before we can actually try out our game. Save the project and then we can move on. As before, we have built our UI. This time, however, we have given all the important parts of our UI a unique, useful, and easy to identify ID. As we will see we are now able to communicate with our UI through our Java code. Coding a question in Java With our current knowledge of Java, we are not yet able to complete our math game but we can make a significant start. We will look at how we can ask the player a question and offer them some multiple choice answers (one correct and two incorrect). At this stage, we know enough of Java to declare and initialize some variables that will hold the parts of our question. For example, if we want to ask the times tables question 2 x 2, we could have the following variable initializations to hold the values for each part of the question: int partA = 2; int partB = 2; The preceding code declares and initializes two variables of the int type, each to the value of 2. We use int because we will not be dealing with any decimal fractions. Remember that the variable names are arbitrary and were just chosen because they seemed appropriate. Clearly, any math game worth downloading is going to need to ask more varied and advanced questions than 2 x 2, but it is a start. Now we know that our math game will offer multiple choices as answers. So, we need a variable for the correct answer and two variables for two incorrect answers. Take a look at these combined declarations and initializations: int correctAnswer = partA * partB; int wrongAnswer1 = correctAnswer - 1; int wrongAnswer2 = correctAnswer + 1; Note that the initialization of the variables for the wrong answers depends on the value of the correct answer, and the variables for the wrong answers are initialized after initializing the correctAnswer variable. Now we need to put these values, held in our variables, into the appropriate elements on our UI. The question variables (partA and partB) need to be displayed in our UI elements, textPartA and textPartB, and the answer variables (correctAnswer, wrongAnswer1, and wrongAnswer2) need to be displayed in our UI elements with the following IDs: buttonChoice1, buttonChoice2, and buttonChoice3. We will see how we do this in the next step-by-step tutorial. 
We will also implement the variable declaration and initialization code that we discussed a moment ago: First, open GameActivity.java in the editor window. Remember that you can do this by double-clicking on GameActivity in our java folder or clicking on its tab above the editor window if GameActivity.java is already open. All of our code will go into the onCreate method. It will go after the setContentView(R.layout.activity_game); line but before the closing curly brace } of the onCreate method. Perhaps, it's a good idea to leave a blank line for clarity and a nice explanatory comment as shown in the following code. We can see the entire onCreate method as it stands after the latest amendments. The parts in bold are what you need to add. Feel free to add helpful comments like mine if you wish: @Override     protected void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         //The next line loads our UI design to the screen         setContentView(R.layout.activity_game);           //Here we initialize all our variables         int partA = 9;         int partB = 9;         int correctAnswer = partA * partB;         int wrongAnswer1 = correctAnswer - 1;         int wrongAnswer2 = correctAnswer + 1;       }//onCreate ends here Now we need to add the values contained within the variables to the TextView and Button of our UI. But first, we need to get access to the UI elements we created. We do that by creating a variable of the appropriate class and linking it via the ID property of the appropriate UI element. We already know the class of our UI elements: TextView and Button. Here is the code that creates our special class variables for each of the necessary UI elements. Take a close look at the code, but don't worry if you don't understand all of it now. We will dissect the code in detail once everything is working. Enter the code immediately after the code entered in the previous step. You can leave a blank line for clarity if you wish. Just before you proceed, note that at two points while typing in this code, you will be prompted to import another class. Go ahead and do so on both occasions: /*Here we get a working object based on either the button or TextView class and base as well as link our new   objects directly to the appropriate UI elements that we created previously*/   TextView textObjectPartA =   (TextView)findViewById(R.id.textPartA);   TextView textObjectPartB =   (TextView)findViewById(R.id.textPartB);   Button buttonObjectChoice1 =   (Button)findViewById(R.id.buttonChoice1);   Button buttonObjectChoice2 =   (Button)findViewById(R.id.buttonChoice2);   Button buttonObjectChoice3 =   (Button)findViewById(R.id.buttonChoice3); In the preceding code, if you read the multiline comment, you will see that I used the term object. When we create a variable type based on a class, we call it an object. Once we have an object of a class, we can do anything that that class was designed to do. Now we have five new objects linked to the elements of our UI that we need to manipulate. What precisely are we going to do with them? We need to display the values of our variables in the text of the UI elements. We can use the objects we just created combined with a method provided by the class, and use our variables as values for that text. As usual, we will dissect this code further at the end of this tutorial. Here is the code to enter directly after the code in the previous step. 
Try and work out what is going on before we look at it together:

//Now we use the setText method of the class on our objects
//to show our variable values on the UI elements.
//Just like when we output to the console in the exercise -
//Expressions in Java, only now we use setText method
//to put the values in our variables onto the actual UI.
textObjectPartA.setText("" + partA);
textObjectPartB.setText("" + partB);

//which button receives which answer, at this stage is arbitrary.
buttonObjectChoice1.setText("" + correctAnswer);
buttonObjectChoice2.setText("" + wrongAnswer1);
buttonObjectChoice3.setText("" + wrongAnswer2);

Save your work. If you play with the assignment values for partA and partB, you can make them whatever you like and the game adjusts the answers accordingly. Obviously, we shouldn't need to reprogram our game each time we want a new question and we will solve that problem soon. All we need to do now is link the game section we have just made to the start screen menu. We will do that in the next tutorial. Now let's explore the trickier and newer parts of our code in more detail.
In step 2, we declared and initialized the variables required so far:

//Here we initialize all our variables
int partA = 9;
int partB = 9;
int correctAnswer = partA * partB;
int wrongAnswer1 = correctAnswer - 1;
int wrongAnswer2 = correctAnswer + 1;

Then in step 3, we got a reference to our UI design through our Java code. For the TextViews, it was done like this:

TextView textObjectPartA = (TextView)findViewById(R.id.textPartA);

For each of the buttons, a reference to our UI design was obtained like this:

Button buttonObjectChoice1 = (Button)findViewById(R.id.buttonChoice1);

In step 4, we did something new. We used the setText method to show the values of our variables on our UI elements (TextView and Button) to the player. Let's break down one line completely to see how it works. Here is the code that shows the correctAnswer variable being displayed on buttonObjectChoice1:

buttonObjectChoice1.setText("" + correctAnswer);

By typing buttonObjectChoice1 and adding a period, as shown in the following line of code, we have access to all the preprogrammed methods of that object's class type that are provided by Android:
The power of Button and the Android API
There are actually lots of methods that we can perform on an object of the Button type. If you are feeling brave, try this to get a feeling of just how much functionality there is in Android. Type the following code:

buttonObjectChoice1.

Be sure to type the period on the end. Android Studio will pop up a list of possible methods to use on this object. Scroll through the list and get a feel of the number and variety of options:
If a mere button can do all of this, think of the possibilities for our games once we have mastered all the classes contained in Android. A collection of classes designed to be used by others is collectively known as an Application Programmers Interface (API). Welcome to the Android API!
In this case, we just want to set the button's text. So, we use setText and concatenate the value stored in our correctAnswer variable to the end of an empty string, like this: setText("" + correctAnswer); We do this for each of the UI elements we require to show our variables.
Playing with autocomplete
If you tried the previous tip, The power of Button and the Android API, and explored the methods available for objects of the Button type, you will already have some insight into autocomplete.
Note that as you type, Android Studio is constantly making suggestions for what you might like to type next. If you pay attention to this, you can save a lot of time. Simply select the correct code completion statement that is suggested and press Enter. You can even see how much time you saved by selecting Help | Productivity Guide from the menu bar. Here you will see statistics for every aspect of code completion and more. Here are a few entries from mine: As you can see, if you get used to using shortcuts early on, you can save a lot of time in the long run. Linking our game from the main menu At the moment, if we run the app, we have no way for the player to actually arrive at our new game activity. We want the game activity to run when the player clicks on the Play button on the main MainActivity UI. Here is what we need to do to make that happen: Open the file activity_main.xml, either by double-clicking on it in the project explorer or by clicking its tab in the editor window. Now, just like we did when building the game UI, assign an ID to the Play button. As a reminder, click on the Play button either on the UI design or in the component tree. Find the id property in the Properties window. Assign the buttonPlay value to it. We can now make this button do stuff by referring to it in our Java code. Open the file MainActivity.java, either by double-clicking on it in the project explorer or clicking on its tab in the editor window. In our onCreate method, just after the line where we setContentView, add the following highlighted line of code: setContentView(R.layout.activity_main); Button buttonPlay = (Button)findViewById(R.id.buttonPlay); We will dissect this code in detail once we have got this working. Basically we are making a connection to the Play button by creating a reference variable to a Button object. Notice that both words are highlighted in red indicating an error. Just as before, we need to import the Button class to make this code work. Use the Alt + Enter keyboard combination. Now click on Import class from the popped-up list of options. This will automatically add the required import directive at the top of our MainActivity.java file. Now for something new. We will give the button the ability to listen to the user clicking on it. Type this immediately after the last line of code we entered: buttonPlay.setOnClickListener(this); Notice how the this keyword is highlighted in red indicating an error. Setting that aside, we need to make a modification to our code now in order to allow the use of an interface that is a special code element that allows us to add a functionality, such as listening for button clicks. Edit the line as follows. When prompted to import another class, click on OK: public class MainActivity extends Activity { to public class MainActivity extends Activity implements View.OnClickListener{ Now we have the entire line underlined in red. This indicates an error but it's where we should be at this point. We mentioned that by adding implements View.OnClickListener, we have implemented an interface. We can think of this like a class that we can use but with extra rules. The rules of the OnClickListener interface state that we must implement/use one of its methods. Notice that until now, we have optionally overridden/used methods as and when they have suited us. If we wish to use the functionality this interface provides, namely listening for button presses, then we have to add/implement the onClick method. This is how we do it. 
Notice the opening curly brace, {, and the closing curly brace, }. These denote the start and end of the method. Notice that the method is empty and it doesn't do anything, but an empty method is enough to comply with the rules of the OnClickListener interface, and the red line indicating that our code has an error has gone. Make sure that you type the following code, outside the closing curly brace (}) of the onCreate method but inside the closing curly brace of our MainActivity class: @Override public void onClick(View view) { } Notice that we have an empty line between { and } of the onClick method. We can now add code in here to make the button actually do something. Type the following highlighted code between { and } of onClick: @Override public void onClick(View view) { Intent i; i = new Intent(this, GameActivity.class); startActivity(i); } OK, so that code is a bit of a mouthful to comprehend all at once. See if you can guess what is happening. The clue is in the method named startActivity and the hopefully familiar term, GameActivity. Notice that we are assigning something to i. We will quickly get our app working and then diagnose the code in full. Notice that we have an error: all instances of the word Intent are red. We can solve this by importing the classes required to make Intent work. As before, press Alt + Enter. Run the game in the emulator or on your device. Our app will now work. This is what the new game screen looks like after pressing Play on the menu screen: Almost every part of our code has changed a little and we have added a lot to it as well. Let's go over the contents of MainActivity.java and look at it line by line. For context, here it is in full: package com.packtpub.mathgamechapter3a.mathgamechapter3a; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.Button; public class MainActivity extends Activity implements View.OnClickListener{ @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); final Button buttonPlay = (Button)findViewById(R.id.buttonPlay); buttonPlay.setOnClickListener(this); } @Override public void onClick(View view) { Intent i; i = new Intent(this, GameActivity.class); startActivity(i); } } We have seen much of this code before, but let's just go over it a chunk at a time before moving on so that it is absolutely clear. The code works like this: package com.packtpub.mathgamechapter3a.mathgamechapter3a; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.Button; You will probably remember that this first block of code defines what our package is called and makes available all the Android API stuff we need for Button, TextView, and Activity. From our MainActivity.java file, we have this: public class MainActivity extends Activity implements View.OnClickListener{ Our MainActivity declaration, with our new bit of code, now implements View.OnClickListener, which gives us the ability to detect button clicks.
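If the idea of an interface still feels abstract, here is a tiny, self-contained sketch that is not part of our game; the names Shouter, Dog, and makeNoise are invented purely for illustration. An interface only lists method signatures, and any class that implements it promises to supply those methods, which is exactly the deal MainActivity makes with View.OnClickListener:

//A hypothetical interface: it only declares what must be done
interface Shouter {
    void makeNoise();
}

//Any class that implements Shouter must supply a makeNoise method,
//just as MainActivity must supply an onClick method
class Dog implements Shouter {
    @Override
    public void makeNoise() {
        System.out.println("Woof!");
    }
}

class InterfaceDemo {
    public static void main(String[] args) {
        Shouter s = new Dog(); //we can refer to a Dog through its interface
        s.makeNoise();         //prints Woof!
    }
}

Android does the same thing: because MainActivity implements View.OnClickListener, the button can safely call our onClick method whenever it is clicked.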
Next in our code is this: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); It is at the start of our onCreate method where we first ask the hidden code of onCreate to do its stuff using super.onCreate(savedInstanceState);. Then we set our UI to the screen with setContentView(R.layout.activity_main);. Next, we get a reference to our button with an ID of buttonPlay: Button buttonPlay = (Button)findViewById(R.id.buttonPlay); buttonPlay.setOnClickListener(this); Finally, our onClick method uses the Intent class to send the player to our GameActivity class and the related UI when the user clicks on the Play button: @Override public void onClick(View view) { Intent i; i = new Intent(this, GameActivity.class); startActivity(i); } If you run the app, you will notice that we can now click on the Play button and our math game will ask us a question. Of course, we can't answer it yet. Although we have very briefly looked at how to deal with button presses, we need to learn more Java in order to intelligently react to them. We will also reveal how to write code to handle presses from several buttons. This will be necessary to receive input from our multiple-choice-centric game_activity UI. Decisions in Java We can now summon enough Java prowess to ask a question, but a real math game must obviously do much more than this. We need to capture the player's answer, and we are nearly there with that: we can detect button presses. From there, we need to be able to decide whether their answer is right or wrong. Then, based on this decision, we have to choose an appropriate course of action. Let's leave the math game aside for now and look at how Java might help us by learning some more fundamentals and syntax of the Java language. More operators Let's look at some more operators: we can already add (+), take away (-), multiply (*), divide (/), assign (=), increment (++), compare (==), and decrement (--) with operators. Let's introduce some more super-useful operators, and then we will go straight to actually understanding how to use them in Java. Don't worry about memorizing every operator given here. Glance at them and their explanations. Later, we will put some operators to use and they will become much clearer as we see a few examples of what they allow us to do. They are presented here in a list just to make the variety and scope of operators plain from the start. The list will also be more convenient to refer back to when not intermingled with the discussion about implementation that follows it. ==: This is a comparison operator we saw very briefly before. It tests for equality and is either true or false. An expression like (10 == 9);, for example, is false. !: The logical NOT operator. The expression, !(2+2==5);, is true because 2+2 is NOT 5. !=: This is another comparison operator which tests if something is NOT equal. For example, the expression, (10 != 9);, is true, that is, 10 is not equal to 9. >: This is another comparison operator which tests if something is greater than something else. The expression, (10 > 9);, is true. There are a few more comparison operators as well. <: You guessed it. This tests whether the value to the left is less than the value to the right or not. The expression, (10 < 9);, is false. >=: This operator tests whether one value is greater than or equal to the other, and if either is true, the result is true.
For example, the expression, (10 >= 9);, is true. The expression, (10 >= 10);, is also true. <=: Like the preceding operator, this operator tests for two conditions but this time, less than and equal to. The expression, (10 <= 9);, is false. The expression, (10 <= 10);, is true. &&: This operator is known as logical AND. It tests two or more separate parts of an expression and all parts must be true in order for the result to be true. Logical AND is usually used in conjunction with the other operators to build more complex tests. The expression, ((10 > 9) && (10 < 11));, is true because both parts are true. The expression, ((10 > 9) && (10 < 9));, is false because only one part of the expression is true and the other is false. ||: This operator is called logical OR. It is just like logical AND except that only one of two or more parts of an expression need to be true for the expression to be true. Let's look at the last example we used but replace the && sign with ||. The expression, ((10 > 9) || (10 < 9));, is now true because one part of the expression is true. All of these operators are virtually useless without a way of properly using them to make real decisions that affect real variables and code. Let's look at how to make decisions in Java. Decision 1 – If they come over the bridge, shoot them As we saw, operators serve hardly any purpose on their own but it was probably useful to see just a part of the wide and varied range available to us. Now, when we look at putting the most common operator, ==, to use, we can start to see the powerful yet fine control that operators offer us. Let's make the previous examples less abstract by putting the Java if keyword and a few conditional operators to use in some code, with a made-up military situation as our small story. The captain is dying and, knowing that his remaining subordinates are not very experienced, he decides to write a Java program to convey his last orders after he has died. The troops must hold one side of a bridge while awaiting reinforcements. The first command the captain wants to make sure his troops understand is this: If they come over the bridge, shoot them. So how do we simulate this situation in Java? We need a Boolean variable isComingOverBridge. The next bit of code assumes that the isComingOverBridge variable has been declared and initialized. We can then use it like this: if(isComingOverBridge){ //Shoot them } If the isComingOverBridge Boolean is true, the code inside the opening and closing curly braces will run. If not, the program continues after the if block without running it. Decision 2 – Else, do this The captain also wants to tell his troops what to do (stay put) if the enemy is not coming over the bridge. Now we introduce another Java keyword, else. When we want to explicitly do something and the if block does not evaluate to true, we can use else. For example, to tell the troops to stay put if the enemy is not coming over the bridge, we use else: if(isComingOverBridge){ //Shoot them }else{ //Hold position } The captain then realized that the problem wasn't as simple as he first thought. What if the enemy comes over the bridge and has more troops? His squad will be overrun.
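Before we look at the captain's full solution in the next paragraph, here is a small, self-contained sketch you can run to watch some of the operators from the preceding list evaluate to true or false. The values of x and y are invented purely for illustration and have nothing to do with our game:

public class OperatorDemo {
    public static void main(String[] args) {
        int x = 10;
        int y = 9;
        System.out.println(x == y);              //false, 10 is not equal to 9
        System.out.println(x != y);              //true
        System.out.println(x >= 10);             //true, x is greater than or equal to 10
        System.out.println((x > y) && (x < 11)); //true, both parts are true
        System.out.println((x > y) || (x < 9));  //true, one part is true
        System.out.println(!(x == y));           //true, NOT false is true
    }
}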
So, he came up with this code (we'll use some variables as well this time): boolean isComingOverTheBridge; int enemyTroops; int friendlyTroops; //Code that initializes the above variables one way or another //Now the if if(isComingOverTheBridge && friendlyTroops > enemyTroops){ //shoot them }else if(isComingOverTheBridge && friendlyTroops < enemyTroops) { //blow the bridge }else{ //Hold position } Finally, the captain's last concern was that if the enemy came over the bridge waving the white flag of surrender and were promptly slaughtered, then his men would end up as war criminals. The Java code needed was obvious. Using the wavingWhiteFlag Boolean variable he wrote this test: if (wavingWhiteFlag){ //Take prisoners } But where to put this code was less clear. In the end, the captain opted for the following nested solution, changing the test for wavingWhiteFlag to logical NOT, like this: if (!wavingWhiteFlag){//not surrendering so check everything else if(isComingOverTheBridge && friendlyTroops > enemyTroops){ //shoot them }else if(isComingOverTheBridge && friendlyTroops < enemyTroops) { //blow the bridge } }else{//this is the else for our first if //Take prisoners } This demonstrates that we can nest if and else statements inside of one another to create even deeper decisions. We could go on making more and more complicated decisions but what we have seen is more than sufficient as an introduction. Take the time to reread this if anything is unclear. It is also important to point out that very often, there are two or more ways to arrive at the solution. The right way will usually be the way that solves the problem in the clearest and simplest manner. Switching to make decisions We have seen the vast and virtually limitless possibilities of combining the Java operators with if and else statements. But sometimes a decision in Java can be better made in other ways. When we have to make a decision based on a clear list of possibilities that doesn't involve complex combinations, then switch is usually the way to go. We start a switch decision like this: switch(argument){ } In the previous example, the argument could be an expression or a variable. Then within the curly braces, we can make decisions based on the argument with case and break elements: case x: //code for x break; case y: //code for y break; You can see that in the previous example, each case states a possible result and each break denotes the end of that case as well as the point at which no further case statements should be evaluated. The first break encountered takes us out of the switch block to proceed with the next line of code. We can also use default without a value to run some code if none of the case statements evaluate to true, like this: default://Look no value //Do something here if no other case statements are true break; Supposing we are writing an old-fashioned text adventure game, the kind of game where the player types commands such as "Go East", "Go West", "Take Sword", and so on.
In this case, switch could handle that situation like this example code and we could use default to handle the case of the player typing a command that is not specifically handled: //get input from user in a String variable called command switch(command){ case "Go East": //code to go east break; case "Go West": //code to go west break; case "Take sword": //code to take the sword break; //more possible cases default: //Sorry I don't understand your command break; } We will use switch so that our onClick method can handle the different multiple-choice buttons of our math game. Java has even more operators than we have covered here. We have looked at all the operators we are going to need in this book and probably the most used in general. If you want the complete lowdown on operators, take a look at the official Java documentation at http://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html. Math game – getting and checking the answer Here we will detect the right or wrong answer and provide a pop-up message to the player. Our Java is getting quite good now, so let's dive in and add these features. I will explain things as we go and then, as usual, dissect the code thoroughly at the end. The already completed code is in the download bundle, in the following files that correspond to the filenames we will create/autogenerate in Android Studio in a moment: Chapter 3/MathGameChapter3b/java/MainActivity.java Chapter 3/MathGameChapter3b/java/GameActivity.java Chapter 3/MathGameChapter3b/layout/activity_main.xml Chapter 3/MathGameChapter3b/layout/activity_game.xml As usual, I recommend following this tutorial step by step to see how we can create all of the code for ourselves. Open the GameActivity.java file in the editor window. Now we need to add the click detection functionality to our GameActivity, just as we did for our MainActivity. However, we will go a little further than the last time. So let's do it step by step as if it is totally new. Once again, we will give the buttons the ability to listen to the user clicking on them. Type this immediately after the last line of code we entered in the onCreate method but before the closing }. This time, of course, we need to add some code to listen to three buttons: buttonObjectChoice1.setOnClickListener(this); buttonObjectChoice2.setOnClickListener(this); buttonObjectChoice3.setOnClickListener(this); Notice how the this keyword is highlighted in red indicating an error. Again, we need to make a modification to our code in order to allow the use of an interface, the special code element that allows us to add functionalities such as listening to button clicks. Edit the line as follows. When prompted to import another class, click on OK. Consider this line of code: public class GameActivity extends Activity { Change it to the following line: public class GameActivity extends Activity implements View.OnClickListener{ Now we have the entire preceding line underlined in red. This indicates an error but it is where we should be at this point. We mentioned that by adding implements View.OnClickListener, we have implemented an interface. We can think of this like a class that we can use, but with extra rules. One of the rules of the OnClickListener interface is that we must implement one of its methods, as you might remember. If we wish to use the useful functionality this interface provides (listening to button presses), we must now add the onClick method. Type the following code.
Notice the opening curly brace, {, and the closing curly brace, }. These denote the start and end of the method. Notice that the method is empty; it doesn't do anything, but an empty method is enough to comply with the rules of the OnClickListener interface and the red line that indicated an error has gone. Make sure that you type the following code outside the closing curly brace (}) of the onCreate method but inside the closing curly brace of our GameActivity class: @Override public void onClick(View view) { } Notice that we have an empty line between the { and } braces of our onClick method. We can now put some code in here to make the buttons actually do something. Type the following in between { and } of onClick. This is where things get different from our code in MainActivity. We need to differentiate between the three possible buttons that could be pressed. We will do this with the switch statement that we discussed earlier. Look at the case criteria; they should look familiar. Here is the code that uses the switch statement: switch (view.getId()) { case R.id.buttonChoice1: //button 1 stuff goes here break; case R.id.buttonChoice2: //button 2 stuff goes here break; case R.id.buttonChoice3: //button 3 stuff goes here break; } Each case element handles a different button. For each button case, we need to get the value stored in the button that was just pressed and see if it matches our correctAnswer variable. If it does, we must tell the player they got it right, and if not, we must tell them they got it wrong. However, there is still one problem we have to solve. The onClick method is separate from the onCreate method and the Button objects. In fact, all the variables are declared in the onCreate method. If you try typing the code from step 9 now, you will get lots of errors. We need to make the variables we declared in onCreate available in onClick as well. To do this, we will move their declarations from inside the onCreate method to just below the opening { of the GameActivity class. This means that these variables become variables of the GameActivity class and can be seen anywhere within GameActivity. Declare the following variables like this: int correctAnswer; Button buttonObjectChoice1; Button buttonObjectChoice2; Button buttonObjectChoice3; Now change the initialization of these variables within onCreate as follows. The actual parts of code that need to be changed are highlighted.
The rest is shown for context: //Here we initialize all our variables int partA = 9; int partB = 9; correctAnswer = partA * partB; int wrongAnswer1 = correctAnswer - 1; int wrongAnswer2 = correctAnswer + 1; and TextView textObjectPartA =   (TextView)findViewById(R.id.textPartA);   TextView textObjectPartB =   (TextView)findViewById(R.id.textPartB);   buttonObjectChoice1 = (Button)findViewById(R.id.buttonChoice1);         buttonObjectChoice2 = (Button)findViewById(R.id.buttonChoice2);         buttonObjectChoice3 = (Button)findViewById(R.id.buttonChoice3);   Here is the top of our onClick method as well as the first case statement for our onClick method: @Override     public void onClick(View view) {         //declare a new int to be used in all the cases         int answerGiven=0;         switch (view.getId()) {               case R.id.buttonChoice1:             //initialize a new int with the value contained in buttonObjectChoice1             //Remember we put it there ourselves previously                 answerGiven = Integer.parseInt("" +                     buttonObjectChoice1.getText());                   //is it the right answer?                 if(answerGiven==correctAnswer) {//yay it's the right answer                     Toast.makeText(getApplicationContext(),                       "Well done!",                       Toast.LENGTH_LONG).show();                 }else{//uh oh!                     Toast.makeText(getApplicationContext(),"Sorry that's     wrong", Toast.LENGTH_LONG).show();                 }                 break;   Here are the rest of the case statements that do the same steps as the code in the previous step except handling the last two buttons. Enter the following code after the code entered in the previous step:   case R.id.buttonChoice2:                 //same as previous case but using the next button                 answerGiven = Integer.parseInt("" +                   buttonObjectChoice2.getText());                 if(answerGiven==correctAnswer) {                     Toast.makeText(getApplicationContext(), "Well done!", Toast.LENGTH_LONG).show();                 }else{                     Toast.makeText(getApplicationContext(),"Sorry that's wrong", Toast.LENGTH_LONG).show();                 }                 break;               case R.id.buttonChoice3:                 //same as previous case but using the next button                 answerGiven = Integer.parseInt("" +                     buttonObjectChoice3.getText());                 if(answerGiven==correctAnswer) {                     Toast.makeText(getApplicationContext(), "Well done!", Toast.LENGTH_LONG).show();                 }else{                     Toast.makeText(getApplicationContext(),"Sorry that's wrong", Toast.LENGTH_LONG).show();                 }                 break;           } Run the program, and then we will look at the code carefully, especially that odd-looking Toast thing. Here is what happens when we click on the leftmost button: This is how we did it: In steps 1 through 6, we set up handling for our multi-choice buttons, including adding the ability to listen to clicks using the onClick method and a switch block to handle decisions depending on the button pressed. In steps 7 and 8, we had to alter our code to make our variables available in the onClick method. We did this by making them member variables of our GameActivity class. When we make a variable a member of a class, we call it a field. 
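To see why that move matters, here is a stripped-down sketch; the class and variable names (ScopeDemo, score, bonus) are invented for illustration and it is not our actual GameActivity. A variable declared inside one method is invisible to the other methods, while a field declared at class level can be read and changed by every method of the class:

public class ScopeDemo {

    //A field (member variable): visible to every method in ScopeDemo
    int score;

    void setup() {
        int bonus = 5; //a local variable: only visible inside setup
        score = 100 + bonus;
    }

    void show() {
        System.out.println(score);    //fine, score is a field
        //System.out.println(bonus);  //would not compile, bonus is local to setup
    }

    public static void main(String[] args) {
        ScopeDemo demo = new ScopeDemo();
        demo.setup();
        demo.show(); //prints 105
    }
}

This is exactly what we did with correctAnswer and the three Button objects: by turning them into fields, the code in onClick can use the values that onCreate put into them.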
In steps 9 and 10, we implemented the code that actually does the work in our switch statement in onClick. Let's take a line-by-line look at the code that runs when button1 is pressed. case R.id.buttonChoice1: First, the case statement is true when the button with an id of buttonChoice1 is pressed. Then the next line of code to execute is this: answerGiven = Integer.parseInt(""+ buttonObjectChoice1.getText()); The preceding line gets the value on the button using two methods. First, getText gets the number as a string and then Integer.parseInt converts it to an int. The value is stored in our answerGiven variable. The following code executes next: if(answerGiven==correctAnswer) {//yay it's the right answer Toast.makeText(getApplicationContext(), "Well done!", Toast.LENGTH_LONG).show(); }else{//uh oh! Toast.makeText(getApplicationContext(),"Sorry that's wrong", Toast.LENGTH_LONG).show(); } The if statement tests to see if the answerGiven variable is the same as correctAnswer using the == operator. If so, the makeText method of the Toast class is used to display a congratulatory message. If the values of the two variables are not the same, the message displayed is a more negative one. The Toast line of code is possibly the most evil thing we have seen thus far. It looks exceptionally complicated and it does need a greater knowledge of Java than we have at the moment to understand. All we need to know for now is that we can use the code as it is and just change the message, and it is a great tool to announce something to the player. If you really want an explanation now, you can think of it like this: when we made button objects, we got to use all the button methods. But with Toast, we used the class directly to access its makeText method without creating an object first. We can do this process when the class and its methods are designed to allow it. Finally, we break out of the whole switch statement as follows: break; Self-test questions Q1) What does this code do? // setContentView(R.layout.activity_main); Q2) Which of these lines causes an error? String a = "Hello"; String b = " Vinton Cerf"; int c = 55; a = a + b; c = c + c + 10; a = a + c; c = c + a; Q3) We talked a lot about operators and how different operators can be used together to build complicated expressions. Expressions, at a glance, can sometimes make the code look complicated. However, when looked at closely, they are not as tough as they seem. Usually, it is just a case of splitting the expressions into smaller pieces to work out what is going on. Here is an expression that is more convoluted than anything else you will ever see in this book. As a challenge, can you work out what isTrueOrFalse will be? int x = 10; int y = 9; boolean isTrueOrFalse = false; isTrueOrFalse = (((x <=y)||(x == 10))&&((!isTrueOrFalse) || (isTrueOrFalse))); Summary We went from knowing nothing about Java syntax to learning about comments, variables, operators, and decision making. As with any language, mastery of Java can be achieved by simply practicing, learning, and increasing our vocabulary. At this point, the temptation might be to hold back until mastery of the current Java syntax has been achieved, but the best way is to move on to new syntax at the same time as revisiting what we have already begun to learn. We will finally finish our math game by adding random questions of multiple difficulties as well as using more appropriate and random wrong answers for the multiple choice buttons.
To enable us to do this, we will first learn some more new Java syntax. Resources for Article: Further resources on this subject: Asking Permission: Getting your head around Marshmallow's Runtime Permissions [article] Android and iOS Apps Testing at a Glance [article] Practical How-To Recipes for Android [article]

article-image-using-collider-based-system
Packt
17 Feb 2016
10 min read
Save for later

Using a collider-based system

Packt
17 Feb 2016
10 min read
In this article by Jorge Palacios, the author of the book Unity 5.x Game AI Programming Cookbook, you will learn how to implement agent awareness using a mixed approach that considers the previously learned sensory-level algorithms. (For more resources related to this topic, see here.) Seeing using a collider-based system This is probably the easiest way to simulate vision. We take a collider, be it a mesh or a Unity primitive, and use it as the tool to determine whether an object is inside the agent's vision range or not. Getting ready It's important to have a collider component attached to the same game object that uses the script in this recipe, as well as the other collider-based algorithms in this chapter. In this case, it's recommended that the collider be a pyramid-based one in order to simulate a vision cone. The fewer the polygons, the faster it will be in the game. How to do it… We will create a component that is able to see enemies nearby by performing the following steps: Create the Visor component, declaring its member variables. It is important to add the corresponding tags into Unity's configuration: using UnityEngine; using System.Collections; public class Visor : MonoBehaviour { public string tagWall = "Wall"; public string tagTarget = "Enemy"; public GameObject agent; } Implement the function for initializing the game object in case the component is already assigned to it: void Start() { if (agent == null) agent = gameObject; } Declare the function for checking collisions for every frame and build it in the following steps: public void OnCollisionStay(Collision coll) { // next steps here } Discard the collision if it is not a target: string tag = coll.gameObject.tag; if (!tag.Equals(tagTarget)) return; Get the game object's position and compute its direction from the Visor: GameObject target = coll.gameObject; Vector3 agentPos = agent.transform.position; Vector3 targetPos = target.transform.position; Vector3 direction = targetPos - agentPos; Compute its length and create a new ray to be shot soon: float length = direction.magnitude; direction.Normalize(); Ray ray = new Ray(agentPos, direction); Cast the created ray and retrieve all the hits: RaycastHit[] hits; hits = Physics.RaycastAll(ray, length); Check for any wall between the visor and target. If none, we can proceed to call our functions or develop our behaviors to be triggered: int i; for (i = 0; i < hits.Length; i++) { GameObject hitObj; hitObj = hits[i].collider.gameObject; tag = hitObj.tag; if (tag.Equals(tagWall)) return; } // TODO // target is visible // code your behaviour below How it works… The collider component checks every frame to know whether it is colliding with any game object in the scene. We leverage the optimizations to Unity's scene graph and engine, and focus only on how to handle valid collisions. After checking whether a target object is inside the vision range represented by the collider, we cast a ray in order to check whether it is really visible or there is a wall in between. Hearing using a collider-based system In this recipe, we will emulate the sense of hearing by developing two entities: a sound emitter and a sound receiver. It is based on the principles proposed by Millington for simulating a hearing system, and uses the power of Unity colliders to detect receivers near an emitter. Getting ready As with the other recipes based on colliders, we will need collider components attached to every object to be checked and rigid body components attached to either emitters or receivers.
How to do it… We will create the SoundReceiver class for our agents and SoundEmitter for things such as alarms: Create the class for the SoundReceiver object: using UnityEngine; using System.Collections; public class SoundReceiver : MonoBehaviour { public float soundThreshold; } We define the function for our own behavior to handle the reception of sound: public virtual void Receive(float intensity, Vector3 position) { // TODO // code your own behavior here } Now, let's create the class for the SoundEmitter object: using UnityEngine; using System.Collections; using System.Collections.Generic; public class SoundEmitter : MonoBehaviour { public float soundIntensity; public float soundAttenuation; public GameObject emitterObject; private Dictionary<int, SoundReceiver> receiverDic; } Initialize the list of receivers nearby and emitterObject in case the component is attached directly: void Start() { receiverDic = new Dictionary<int, SoundReceiver>(); if (emitterObject == null) emitterObject = gameObject; } Implement the function for adding new receivers to the list when they enter the emitter bounds: public void OnCollisionEnter(Collision coll) { SoundReceiver receiver; receiver = coll.gameObject.GetComponent<SoundReceiver>(); if (receiver == null) return; int objId = coll.gameObject.GetInstanceID(); receiverDic.Add(objId, receiver); } Also, implement the function for removing receivers from the list when they are out of reach: public void OnCollisionExit(Collision coll) { SoundReceiver receiver; receiver = coll.gameObject.GetComponent<SoundReceiver>(); if (receiver == null) return; int objId = coll.gameObject.GetInstanceID(); receiverDic.Remove(objId); } Define the function for emitting sound waves to nearby agents: public void Emit() { GameObject srObj; Vector3 srPos; float intensity; float distance; Vector3 emitterPos = emitterObject.transform.position; // next step here } Compute sound attenuation for every receiver: foreach (SoundReceiver sr in receiverDic.Values) { srObj = sr.gameObject; srPos = srObj.transform.position; distance = Vector3.Distance(srPos, emitterPos); intensity = soundIntensity; intensity -= soundAttenuation * distance; if (intensity < sr.soundThreshold) continue; sr.Receive(intensity, emitterPos); } How it works… The collider triggers help register agents in the list of agents assigned to an emitter. The sound emission function then takes into account the agent's distance from the emitter in order to decrease its intensity using the concept of sound attenuation. There is more… We can develop a more flexible algorithm by defining different types of walls that affect sound intensity. 
It works by casting rays and adding up their values to the sound attenuation: Create a dictionary to store wall types as strings (using tags) and their corresponding attenuation: public Dictionary<string, float> wallTypes; Reduce sound intensity this way: intensity -= GetWallAttenuation(emitterPos, srPos); Define the function called in the previous step: public float GetWallAttenuation(Vector3 emitterPos, Vector3 receiverPos) { // next steps here } Compute the necessary values for ray casting: float attenuation = 0f; Vector3 direction = receiverPos - emitterPos; float distance = direction.magnitude; direction.Normalize(); Cast the ray and retrieve the hits: Ray ray = new Ray(emitterPos, direction); RaycastHit[] hits = Physics.RaycastAll(ray, distance); For every wall type found via tags, add up its value (stored in the dictionary): int i; for (i = 0; i < hits.Length; i++) { GameObject obj; string tag; obj = hits[i].collider.gameObject; tag = obj.tag; if (wallTypes.ContainsKey(tag)) attenuation += wallTypes[tag]; } return attenuation; Smelling using a collider-based system Smelling can be simulated by computing collision between an agent and odor particles, scattered throughout the game level. Getting ready In this recipe based on colliders, we will need collider components attached to every object to be checked, which can be simulated by computing a collision between an agent and odor particles. How to do it… We will develop the scripts needed to represent odor particles and agents able to smell: Create the particle's script and define its member variables for computing its lifespan: using UnityEngine; using System.Collections; public class OdorParticle : MonoBehaviour { public float timespan; private float timer; } Implement the Start function for proper validations: void Start() { if (timespan < 0f) timespan = 0f; timer = timespan; } Implement the timer and destroy the object after its life cycle: void Update() { timer -= Time.deltaTime; if (timer < 0f) Destroy(gameObject); } Create the class for representing the sniffer agent: using UnityEngine; using System.Collections; using System.Collections.Generic; public class Smeller : MonoBehaviour { private Vector3 target; private Dictionary<int, GameObject> particles; } Initialize the dictionary for storing odor particles: void Start() { particles = new Dictionary<int, GameObject>(); } Add to the dictionary the colliding objects that have the odor-particle component attached: public void OnCollisionEnter(Collision coll) { GameObject obj = coll.gameObject; OdorParticle op; op = obj.GetComponent<OdorParticle>(); if (op == null) return; int objId = obj.GetInstanceID(); particles.Add(objId, obj); UpdateTarget(); } Release the odor particles from the local dictionary when they are out of the agent's range or are destroyed: public void OnCollisionExit(Collision coll) { GameObject obj = coll.gameObject; int objId = obj.GetInstanceID(); bool isRemoved; isRemoved = particles.Remove(objId); if (!isRemoved) return; UpdateTarget(); } Create the function for computing the odor centroid according to the current elements in the dictionary: private void UpdateTarget() { Vector3 centroid = Vector3.zero; foreach (GameObject p in particles.Values) { Vector3 pos = p.transform.position; centroid += pos; } target = centroid; } Implement the function for retrieving the odor centroid, if any: public Vector3? 
GetTargetPosition() { if (particles.Keys.Count == 0) return null; return target; } How it works… Just like the hearing recipe based on colliders, we use the trigger colliders to register odor particles to an agent's perception (implemented using a dictionary). When a particle is included or removed, the odor centroid is computed. However, we implement a function to retrieve that centroid because when no odor particle is registered, the internal centroid position is not updated. There is more… The particle emission logic is left to be implemented according to our game's needs, and it basically instantiates odor-particle prefabs. Also, it is recommended to attach the rigid body components to the agents. Odor particles are prone to be massively instantiated, reducing the game's performance. Seeing using a graph-based system We will now start a recipe that uses graph-based logic in order to simulate the senses. Again, we will start by developing the sense of vision. Getting ready It is important to grasp the chapter regarding path finding in order to understand the inner workings of the graph-based recipes. How to do it… We will just implement a new file: Create the class for handling vision: using UnityEngine; using System.Collections; using System.Collections.Generic; public class VisorGraph : MonoBehaviour { public int visionReach; public GameObject visorObj; public Graph visionGraph; } Validate the visor object: void Start() { if (visorObj == null) visorObj = gameObject; } Define and start building the function needed to detect visibility of a given set of nodes: public bool IsVisible(int[] visibilityNodes) { int vision = visionReach; int src = visionGraph.GetNearestVertex(visorObj); HashSet<int> visibleNodes = new HashSet<int>(); Queue<int> queue = new Queue<int>(); queue.Enqueue(src); } Implement a breadth-first search algorithm: while (queue.Count != 0) { if (vision == 0) break; int v = queue.Dequeue(); List<int> neighbours = visionGraph.GetNeighbors(v); foreach (int n in neighbours) { if (visibleNodes.Contains(n)) continue; queue.Enqueue(n); visibleNodes.Add(n); } } Compare the set of nodes reached by the vision system with the set of nodes we want to check: foreach (int vn in visibilityNodes) { if (visibleNodes.Contains(vn)) return true; } Return false if there is no match between the two sets of nodes: return false; How it works… The recipe uses the breadth-first search algorithm in order to discover nodes within its vision reach, and then compares this set of nodes with the set of nodes where the agents reside. Summary In this article, we explained some algorithms for simulating senses and agent awareness. Resources for Article: Further resources on this subject: Animation and Unity3D Physics [article] Unity 3-0 Enter the Third Dimension [article] Animation features in Unity 5 [article]