How-To Tutorials - Game Development


Scripting Strategies

Packt
24 Sep 2015
9 min read
In this article by Chris Dickinson, the author of Unity 5 Game Optimization, you will learn how scripting consumes a great deal of our development time and how enormously beneficial it is to learn some best practices for optimizing scripts. Scripting is a very broad term, so we will limit our scope in this article to situations that are Unity-specific, focusing on problems arising from within the Unity APIs and Engine design. Whether you have specific problems in mind that you wish to solve, or you just want to learn some techniques for future reference, this article will introduce you to methods that you can use to improve your scripting effort now and in the future. In each case, we will explore how and why the performance issue arises, an example situation in which the problem occurs, and one or more solutions to combat the issue.

Cache Component references

A common mistake when scripting in Unity is to overuse the GetComponent() method. For example, the following script code checks a creature's health value and, if its health drops below 0, disables a series of components to prepare it for a death animation:

```csharp
void TakeDamage() {
    if (GetComponent<HealthComponent>().health < 0) {
        GetComponent<Rigidbody>().detectCollisions = false;
        GetComponent<Collider>().enabled = false;
        GetComponent<AIControllerComponent>().enabled = false;
        GetComponent<Animator>().SetTrigger("death");
    }
}
```

Each time this method executes, it reacquires five different Component references. This is good in terms of heap memory consumption (it doesn't cost any), but it is not very friendly on CPU usage. It is particularly problematic if this method is called during Update(). Even if it is not, it might still coincide with other important events, such as creating particle effects or replacing an object with a ragdoll (thus invoking various activity in the physics engine). This coding style can seem harmless, but it can cause a lot of long-term problems and runtime work for very little benefit.

It costs us very little memory space (only 32 or 64 bits each, depending on the Unity version and target platform) to cache these references for future use. So, unless we're extremely bottlenecked on memory, a better approach is to acquire the references during initialization and keep them until they are needed:

```csharp
private HealthComponent _healthComponent;
private Rigidbody _rigidbody;
private Collider _collider;
private AIControllerComponent _aiController;
private Animator _animator;

void Awake() {
    _healthComponent = GetComponent<HealthComponent>();
    _rigidbody = GetComponent<Rigidbody>();
    _collider = GetComponent<Collider>();
    _aiController = GetComponent<AIControllerComponent>();
    _animator = GetComponent<Animator>();
}

void TakeDamage() {
    if (_healthComponent.health < 0) {
        _rigidbody.detectCollisions = false;
        _collider.enabled = false;
        _aiController.enabled = false;
        _animator.SetTrigger("death");
    }
}
```

Caching the Component references in this way spares us from reacquiring them each time they're needed, saving us some CPU overhead on every call at the expense of a little additional memory consumption.

Obtain Components using the fastest method

There are several variations of the GetComponent() method, and it is prudent to call the fastest version possible. The three overloads available are GetComponent(string), GetComponent<T>(), and GetComponent(typeof(T)).
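The timing comparisons in the next section rely on a CustomTimer helper class that the excerpt uses but never defines. The following is a minimal sketch of what such a disposable timer might look like; the constructor signature (a label and a test count) matches its usage below, but the implementation details are an assumption rather than the book's actual class:

```csharp
using System;
using System.Diagnostics;

// Hypothetical stand-in for the book's CustomTimer: wraps a block of code in a
// using-statement and logs the elapsed time for all tests when disposed.
public class CustomTimer : IDisposable {
    private readonly string _label;
    private readonly int _numTests;
    private readonly Stopwatch _watch;

    public CustomTimer(string label, int numTests) {
        _label = label;
        _numTests = numTests;
        _watch = Stopwatch.StartNew();
    }

    public void Dispose() {
        _watch.Stop();
        UnityEngine.Debug.Log(string.Format("{0}: {1} ms for {2} tests",
            _label, _watch.ElapsedMilliseconds, _numTests));
    }
}
```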
It turns out that the fastest version depends on which version of Unity we are running. In Unity 4, the GetComponent(typeof(T)) method is the fastest of the available options by a reasonable margin. Let's prove this with some simple testing:

```csharp
int numTests = 1000000;
TestComponent test;

using (new CustomTimer("GetComponent(string)", numTests)) {
    for (var i = 0; i < numTests; ++i) {
        test = (TestComponent)GetComponent("TestComponent");
    }
}

using (new CustomTimer("GetComponent<ComponentName>", numTests)) {
    for (var i = 0; i < numTests; ++i) {
        test = GetComponent<TestComponent>();
    }
}

using (new CustomTimer("GetComponent(typeof(ComponentName))", numTests)) {
    for (var i = 0; i < numTests; ++i) {
        test = (TestComponent)GetComponent(typeof(TestComponent));
    }
}
```

This code tests each of the GetComponent() overloads one million times. This is far more tests than would be sensible for a typical project, but it is enough to prove the point. As the results show when the test completes, GetComponent(typeof(T)) is significantly faster than GetComponent<T>(), which in turn is around five times faster than GetComponent(string). This test was performed against Unity 4.5.5, but the behavior should be equivalent all the way back to Unity 3.x. The GetComponent(string) method should not be used, since it is notoriously slow and is only included for completeness.

These results change when we run the exact same test in Unity 5. Unity Technologies made some performance enhancements to how System.Type references are passed around in Unity 5.0, and as a result, GetComponent<T>() and GetComponent(typeof(T)) became essentially equivalent: GetComponent<T>() is only a tiny fraction faster than GetComponent(typeof(T)), while GetComponent(string) is now around 30 times slower than the alternatives (interestingly, it became even slower than it was in Unity 4). Multiple runs will probably yield small variations in these results, but ultimately we can favor either of the type-based versions of GetComponent() when we're working in Unity 5, and the outcome will be about the same.

However, there is one caveat. If we're running Unity 4, then we still have access to a variety of quick accessor properties such as collider, rigidbody, camera, and so on. These properties behave like precached Component member variables, which are significantly faster than all of the traditional GetComponent() methods:

```csharp
int numTests = 1000000;
Rigidbody test;

using (new CustomTimer("Cached reference", numTests)) {
    for (var i = 0; i < numTests; ++i) {
        test = gameObject.rigidbody;
    }
}
```

Note that this code is intended for Unity 4 and cannot be compiled in Unity 5 due to the removal of the rigidbody property.

In an effort to reduce dependencies and improve code modularization in the Engine's backend, Unity Technologies deprecated all of these quick accessor properties in Unity 5. Only the transform property remains. Unity 4 users considering an upgrade should know that upgrading to Unity 5 will automatically modify any use of these properties to use the GetComponent<T>() method. However, this will result in un-cached GetComponent<T>() calls scattered throughout our code, possibly requiring us to revisit the techniques introduced in the earlier section titled Cache Component references.

The moral of the story is that if we are running Unity 4, and the required Component is one of GameObject's built-in accessor properties, then we should use that version.
If not, then we should favor GetComponent(typeof(T)). Meanwhile, if we're running Unity 5, then we can favor either of the type-based versions: GetComponent<T>() or GetComponent(typeof(T)).

Remove empty callback declarations

When we create new MonoBehaviour script files in Unity, regardless of whether we're using Unity 4 or Unity 5, it creates two boilerplate methods for us:

```csharp
// Use this for initialization
void Start () {

}

// Update is called once per frame
void Update () {

}
```

The Unity Engine hooks into these methods during initialization and adds them to a list of methods to call back at key moments. However, if we leave these as empty declarations in our codebase, they will cost us a small overhead whenever the Engine invokes them.

The Start() method is only called when the GameObject is instantiated for the first time, which could be whenever the Scene is loaded or whenever a new GameObject is instantiated from a Prefab. Therefore, leaving an empty Start() declaration may not be particularly noticeable unless there are a lot of GameObjects in the Scene invoking it at startup time. However, it also adds unnecessary overhead to any GameObject.Instantiate() call, which typically happens during key events, so it can contribute to, and exacerbate, an already poor performance situation when lots of events are happening simultaneously.

Meanwhile, the Update() method is called every time the Scene is rendered. If our Scene contains thousands of GameObjects owning components with these empty Update() declarations, we can waste a lot of CPU cycles and wreak havoc on our frame rate.

Let's prove this with a simple test. Our test Scene has GameObjects with two types of components: one with an empty Update() declaration and another with no methods defined:

```csharp
public class CallbackTestComponent : MonoBehaviour {
    void Update () {}
}

public class EmptyTestComponent : MonoBehaviour {
}
```

Testing with 32,768 components of each type shows the difference clearly. If we enable all the objects with no stub methods during runtime, nothing interesting happens to CPU usage in the Profiler. We may notice some changes in memory consumption and a slight difference in VSync activity, but nothing very concerning. However, as soon as we enable all the objects with empty Unity callback declarations, we observe a huge increase in CPU usage.

The fix for this is simple: delete the empty declarations. Unity will have nothing to hook into, and nothing will be called. Sometimes, finding such empty declarations in an expansive codebase can be difficult, but using some basic regular expressions (regex), we should be able to find what we're looking for relatively easily. All common code-editing tools for Unity, such as MonoDevelop, Visual Studio, and even Notepad++, provide a way to perform a regex-based search on the entire codebase; check the tool's documentation for more information, since the method can vary greatly depending on the tool and its version.

The following regex search should find any empty Update() declarations in our code:

```
void\s*Update\s*?\(\s*?\)\s*?\n*?{\n*?\s*?}
```

This regex checks for a standard method definition of the Update() method, while allowing for any surplus whitespace and newline characters distributed throughout the declaration. Naturally, all of the above is also true for non-boilerplate Unity callbacks, such as OnGUI(), OnEnable(), OnDestroy(), FixedUpdate(), and so on.
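If you would rather automate this sweep than run it by hand in an editor's search dialog, the same pattern can be applied across every script in the project from a small editor utility. The following is a hedged sketch, not part of the original article: the menu path and the list of callback names are illustrative assumptions, and the pattern is a slightly simplified variant of the regex above.

```csharp
using System.IO;
using System.Text.RegularExpressions;
using UnityEditor;
using UnityEngine;

public static class EmptyCallbackFinder {
    // Builds a pattern matching "void <Name> ( ) { }" with arbitrary
    // whitespace and newlines between the tokens.
    static Regex PatternFor(string methodName) {
        return new Regex(@"void\s*" + methodName + @"\s*\(\s*\)\s*\{\s*\}");
    }

    [MenuItem("Tools/Find Empty Unity Callbacks")]
    static void FindEmptyCallbacks() {
        string[] callbacks = { "Start", "Update", "FixedUpdate", "OnGUI" };
        string[] files = Directory.GetFiles(Application.dataPath, "*.cs",
                                            SearchOption.AllDirectories);
        foreach (string file in files) {
            string source = File.ReadAllText(file);
            foreach (string callback in callbacks) {
                if (PatternFor(callback).IsMatch(source)) {
                    Debug.Log("Empty " + callback + "() declaration in " + file);
                }
            }
        }
    }
}
```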
Check the MonoBehaviour Unity documentation page for a complete list of these callbacks at http://docs.unity3d.com/ScriptReference/MonoBehaviour.html.

It might seem unlikely that someone would generate empty versions of these callbacks in our codebase, but never say never. For example, if we use a common MonoBehaviour base class throughout all of our custom components, then a single empty callback declaration in that base class will permeate the entire game, which could cost us dearly. Be particularly careful of the OnGUI() method, as it can be invoked multiple times within the same frame or user interface (UI) event.

Summary

In this article, you learned how to optimize scripts to create less CPU- and memory-intensive applications and games. You learned about caching Component references and how to obtain Components using the fastest method. For more information on code optimization, you can visit:

http://www.paladinstudios.com/2012/07/30/4-ways-to-increase-performance-of-your-unity-game/
http://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html

Resources for Article:

Further resources on this subject:
Components in Unity [article]
Saying Hello to Unity and Android [article]
Unity 3.0: Enter the Third Dimension [article]

The Zombie Attacks!

Packt
24 Sep 2015
9 min read
In this article by Jamie Dean, author of the book Unity Character Animation with Mecanim: RAW, we will demonstrate the process of importing and animating a rigged character in Unity. In this article, we will cover:

Starting a blank Unity project and importing the necessary packages
Importing a rigged character model in the FBX format and adjusting import settings

Typically, an enemy character such as this will have a series of different animation sequences, which will be imported separately or together from a 3D package. In this case, our animation sequences are included in separate files. We will begin by creating the Unity project.

Setting up the project

Before we start exploring the animation workflow with Mecanim's tools, we need to set up the Unity project:

1. Create a new project within Unity by navigating to File | New Project....
2. When prompted, choose an appropriate name and location for the project.
3. In the Unity - Project Wizard dialog that appears, check the relevant boxes for the Character Controller.unityPackage and Scripts.unityPackage packages.
4. Click on the Create button. It may take a few minutes for Unity to initialize.
5. When the Unity interface appears, import the PACKT_cawm package by navigating to Assets | Import Package | Custom Package.... The Import package... window will appear.
6. Navigate to the location where you unzipped the project files, select the unity package, and click on Open. The assets package will take a little time to decompress.
7. When the Importing Package checklist appears, click on the Import button in the bottom-right of the window.

Once the assets have finished importing, you will start with a default blank scene.

Importing our enemy

Now, it is time to import our character model:

1. Minimize Unity.
2. Navigate to the location where you unzipped the project files. Double-click on the Models folder to view its contents.
3. Double-click on the zombie_m subfolder to view its contents. The folder contains an FBX file containing the rigged male zombie model and a separate subfolder containing the associated textures.
4. Open Unity and resize the window so that both Unity and the zombie_m folder contents are visible.
5. In Unity, click on the Assets folder in the Project panel.
6. Drag the zombie_m FBX asset into the Assets panel to import it. Because the FBX file contains a normal map, a window will pop up asking if you want to set this file's import settings to read it correctly.
7. Click on the Fix Now button.

FBX files can contain embedded bitmap textures, which can be imported with the model. This will create subfolders containing the materials and textures within the folder where the model has been imported. Leaving the materials and textures as subfolders of the model will make them difficult to find within the project. The zombie model and two folders should now be visible in the FBX_Imports folder in the Assets panel.

In the next step, we will move the imported material and texture assets into the appropriate folders in the Unity project.

Organizing the material and textures

The material and textures associated with the zombie_m model are currently located within the FBX_Imports folder. We will move these into different folders to organize them within the hierarchy of our project:

1. Double-click on the Materials folder and drag the material asset contained within it into the PACKT_Materials folder in the Project panel.
2. Return to the FBX_Imports folder by clicking on its title at the top of the Assets panel interface.
3. Double-click on the textures folder. This will be named to be consistent with the model. Drag the two bitmap textures into the PACKT_Textures folder in the Project panel.
4. Return to the FBX_Imports folder and delete the two empty subfolders.
5. The moved material and textures will still be linked to the model. We will make sure of this by instancing it in the current empty scene: drag the zombie_m asset into the Hierarchy panel. It may not be immediately visible within the Scene view due to the default import scale settings. We will take care of this in the next step.

Adjusting the import scale

Unity's import settings can be adjusted to account for the different tools commonly used to create 2D and 3D assets. Import settings are adjusted in the Inspector panel, which appears on the right of the Unity interface by default:

1. Click on the zombie_m game object within the Hierarchy panel. This will bring up the file's import settings in the Inspector panel.
2. Click on the Model tab.
3. In the Scale Factor field, highlight the current number and type 1. The character model has been modeled to scale in meters to make it compatible with Unity's units. All 3D software applications have their own native scale. Unity does a pretty good job of accommodating all of them, but it often helps to know which software was used to create them.
4. Scroll down until the Materials settings are visible.
5. Uncheck the Import Materials checkbox. Now that we have our textures and materials organized within the project, we want to make sure they are not continuously imported into the same folder as the model.
6. Leave the remaining Model import settings at their default values. We will be discussing these later in the article, when we demonstrate the animation import.
7. Click on the Apply button. You may need to scroll down within the Inspector panel to see it.

The zombie_m character should now be visible in the Scene view. This character model is a medium-resolution model (4,410 triangles) and has a single 1024 x 1024 albedo texture and separate 1024 x 1024 specular and normal maps. The character has been rigged with a basic skeleton; the rigging process is essential if the model is to be animated.

We need to save our progress before we go any further:

1. Save the scene by navigating to File | Save Scene as....
2. Choose an appropriate filename for the scene.
3. Click on the Save button.

Despite the fact that we have only added a single game object to the default scene, there are more steps we will need to take to set up the character, and it will be convenient to have the current setup saved in case anything goes wrong.

In character animation, there are looping and single-shot animation sequences. Some animation sequences, such as walk, run, and idle, are usually seamless loops designed to play back-to-back without the player being aware of where they start and end. Other sequences, typically shooting, hitting, being injured, or dying, are often single-shot animations, which do not need to loop. We will start with this kind, and discuss looping animation sequences later in the article.

In order to use Mecanim's animation tools, we need to set up the character's Avatar so that the character's hierarchy of bones is recognized and can be used correctly within Unity.

Adjusting the rig import settings and creating the Avatar

Now that we have imported the model, we will need to adjust the import settings so that the character functions correctly within our scene:

1. Select zombie_m in the Assets panel. The asset's import settings should become visible within the Inspector panel. This settings rollout contains three tabs: Model, Rig, and Animations.

Since we have already adjusted the Scale Factor within the Model import settings, we will move on to the Rig import settings, where we can define what kind of skeleton our character has.

Choosing the appropriate rig import settings

Mecanim has three options for importing rigged models: Legacy, Generic, and Humanoid. It also has a None option, which should be applied to models that are not intended to be animated.

Legacy was previously the only option for importing skeletal animation in Unity. It is not possible to retarget animation sequences between models using Legacy, and setting up functioning state machines requires quite a bit of scripting. It is still a useful tool for importing models with fewer animation sequences and for simple mechanical animations. Legacy animations are not compatible with Mecanim.

Generic is one of the new animation formats compatible with Mecanim's animator controllers. It does not have the full functionality of Mecanim's character animation tools. Animation sequences imported with the Generic format cannot be retargeted, and the format is best used for quadrupeds, mechanical devices, and pretty much anything except a character with two arms and two legs.

The Humanoid animation type allows the full use of Mecanim's powerful toolset. It requires a minimum of 15 bones and assumes that your rig is roughly human-shaped, with a pair of arms and legs. It can accommodate many more intermediary joints and some basic facial animation. One of the greatest benefits of using the Humanoid type is that it allows animation sequences to be retargeted, or adapted to work with different rigs. For instance, you may have a detailed player character model with a full skeletal rig (including finger and toe joints), and you may want to reuse this character's idle sequence with a background character that is much less detailed and has a simpler arrangement of bones. Mecanim makes it possible to reuse purpose-built motion sequences and even create usable sequences from motion capture data.

Now that we have introduced these three rig types, we need to choose the appropriate setting for our imported zombie character, which in this case is Humanoid:

1. In the Inspector panel, click on the Rig tab.
2. Set the Animation Type field to Humanoid to suit our character skeleton type.
3. Leave Avatar Definition set to Create From This Model.
4. Optimize Game Objects can be left checked.
5. Click on the Apply button to save the settings and transfer all of the changes that you have made to the instance in the scene.

The Humanoid animation type is the only one that supports retargeting. So, if you are importing animations that are not unique and will be used for multiple characters, it is a good idea to use this setting. (If you find yourself applying the same import settings to many models, this can also be scripted; see the sketch after the summary below.)

Summary

In this article, we covered the major steps involved in animating a premade character using the Mecanim system in Unity. We started with the FBX import settings for the model and the rig. We covered the creation of the Avatar by defining the bones in the Avatar Definition settings.

Resources for Article:

Further resources on this subject:
Adding Animations [article]
2D Twin-stick Shooter [article]
Skinning a character [article]
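As promised above, here is a hedged sketch of how the manual Scale Factor, Import Materials, and Humanoid rig adjustments from this article could be applied automatically to every model imported under a particular folder. The AssetPostprocessor hooks and ModelImporter properties are standard Unity editor APIs of this era, but the folder convention and the exact values are assumptions for illustration:

```csharp
using UnityEditor;

// Hypothetical postprocessor: applies this article's import settings to any
// model asset imported under a folder whose path contains "/Models/".
public class CharacterModelPostprocessor : AssetPostprocessor {
    void OnPreprocessModel() {
        if (!assetPath.Contains("/Models/"))
            return;

        ModelImporter importer = (ModelImporter)assetImporter;
        importer.globalScale = 1.0f;                               // Scale Factor = 1
        importer.importMaterials = false;                          // uncheck Import Materials
        importer.animationType = ModelImporterAnimationType.Human; // Humanoid rig
    }
}
```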

User Interface

Packt
23 Sep 2015
10 min read
This article, written by John Doran, the author of the Unreal Engine Game Development Cookbook, covers the following recipes:

Creating a main menu
Animating a menu

In order to create a good game project, you need to be able to communicate information to the player. To do this, we need to create a user interface (UI), which will allow us to display information such as the player's health, inventory, and so on. Inside Unreal 4, we use the Slate UI framework to create user interfaces; however, it's a very complex system. To make things easier for end users, Unreal also released the Unreal Motion Graphics (UMG) UI Designer, which is a visual UI authoring tool with a much easier workflow. This is what we will be using in this article. For more information on Slate, refer to https://docs.unrealengine.com/latest/INT/Programming/Slate/index.html.

Creating a main menu

A main menu can serve as an introduction to your game and is a great place for us to discuss some additional things that UMG has, such as Texts and Buttons. We'll also learn how we can make buttons do things. Let's spend some time to see just how easy it is to create one!

How to do it…

1. Create a new level by going to File | New Level and selecting Empty Level.
2. Next, inside the Content Browser tab, go to our UI folder, navigate to Add New | User Interface | Widget Blueprint, and give it a name of MainMenu. Double-click on it to open the editor.
3. In this menu, we are going to have the title of the game and then a series of buttons the player can press. From the Palette tab, open up the Common section and drag and drop a Button onto the middle of the screen.
4. Select the button and change its Size X to 400 and Size Y to 80. We will also rename the button to Play Game.
5. Drag and drop a Text object onto the Play Game button and you should see it snap onto the button as a child. Under Content, change Text to Play Game. From here, under Appearance, change the color to black and change the Font size to 32.
6. From the Hierarchy tab, select the Play Game button and copy and paste it to create a duplicate. Move the button down, rename it to Quit Game, and change its Text content as well.
7. Move both of the objects so that they're on the bottom part of the HUD, slightly above the bottom edge and side by side.
8. Lastly, we'll want to set our pivots and anchors accordingly. When you select either the Quit Game or Play Game button, you may notice a sun-like widget that displays the Anchors of the object (known as the Anchor Medallion). In our case, open Anchors from the Details panel and click on the bottom-center option.

Now that we have the buttons created, we want them to actually do something when we click on them:

1. Select the Play Game button and, from the Details tab, scroll down until you see the Events component. There should be a series of big green + buttons. Click on the green button beside OnClicked.
2. Next, it will take us to the Event Graph with the appropriate event created for us. To the right of the event, right-click and create an Open Level action. Under Level Name, put in whatever level you like (for example, StarterMap), and then connect the output of the OnClicked event to the input of the Open Level action.
3. To the right of that, create a Remove from Parent action to make sure the menu doesn't stay on screen after we leave it. Finally, create a Get Player Controller action and, to the right of it, a Set Show Mouse Cursor action, which should be set to disabled so that the mouse is no longer visible once we leave the menu. (Drag Return Value from the Get Player Controller action to create a new node and search for the mouse cursor action.)
4. Now, go back to the Designer view and select the Quit Game button. Click on its OnClicked button as well and, to the right of this one, create a Quit Game action, connecting the output of the OnClicked event to the input of the Quit Game action.
5. Lastly, as a bit of polish, let's add our game's title to the screen. Drag and drop another Text object onto the scene, this time with its Anchor at the top-center. From here, change Position X to 0 and Position Y to 176. Change Alignment in the X axis to 0.5 and check the Size to Content option for it to automatically resize.
6. Set the Content component's Text property to the game's name (in my case, Game Name). Under the Appearance component, set the Font size to 93 and set Justification to Center.

There are a number of other styling options that you may wish to use when developing your HUDs. For more information about them, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Styling/index.html.

7. Compile the menu and save it.

Now we need to actually have the widget show up. To do so, we'll take the same steps as we did earlier:

1. Open up the Level Blueprint by going to Blueprints | Open Level Blueprint and create an Event BeginPlay event.
2. To the right of this, right-click and create a Create Widget action. From the dropdown under Class, select MainMenu and connect the arrow from Event BeginPlay to the input of Create MainMenu_C Widget.
3. After this, click and drag the output arrow and create an Add to Viewport action. Then, connect the Return Value of our Create Widget action to the Target of the Add to Viewport action.
4. Lastly, we also want to display the player's cursor on the screen so that the buttons can be clicked. To do this, right-click and select Get Player Controller. Then, from its Return Value, create a Set Show Mouse Cursor action, set to enabled. Connect the output of the Add to Viewport action to the input of the Set Show Mouse Cursor action.
5. Compile, save, and run the project!

With this, our menu is completed! We can quit the game without any problem, and pressing the Play Game button will start our level!

Animating a menu

You may have created a menu or UI element at some point, but rather than having it static and non-moving, let's spend some time looking at how we can animate menus by having them fly in and out. This will help add to the polish of the title as well as enable players to notice things more easily as they move in.

Getting ready

Before we start working on this, we need to have a project created and set up. Do the previous recipe all the way to completion.

How to do it…

1. Open up the MainMenu blueprint once more and, from the bottom-left in the Animations tab, click on the +Animation button and give the new animation a name of MenuFlyIn.
2. Select the newly created animation and you should see the window on the right-hand side brighten up. Next, click on the Auto Key toggle to have the animation editor automatically set keys that are appropriate for our implementation.
3. If it's not there already, move the timeline bar (the white line with two orange ends on the top and bottom) to the 0.00 mark on the animation timeline. Next, select the Game Name object and, under Color and Opacity, open it and change the A (alpha) value to 0.
4. Now move the timeline bar to the 1.00 mark and then open the color again and set the A value to 1. You'll notice a transition: the text goes from completely transparent to fully shown. This is a good start. Let's have the buttons fly in after the text appears.
5. Next, move the timeline bar to the 2.00 mark and select the Play Game button. From the Details tab, you'll notice that there are new + icons to the left of the variables. Clicking one of these icons saves that variable's value for use in the animation. Click on the + icon by the Position Y value.

If you use your scroll wheel while inside the dark grey portion of the timeline bar (where the keyframe numbers are displayed), it zooms in and out. This can be quite useful when you create more complex animations.

6. Now move the timeline bar to the 1.00 mark and move the Play Game button off the screen. By animating in this way, we first save where we want the button to end up, and then go back in time to set where it flies in from.
7. Do the same animation for the Quit Game button.

Now that our animation is created, let's make it play when the menu appears:

1. Click on the Graph button and, from the MyBlueprint tab under the Graphs section, double-click on the Event Construct event, which is called as soon as we add the menu to the scene. Grab the pin on the end of it and create a Play Animation action.
2. Drag and drop a MenuFlyIn variable into the graph and select Get. Connect its output pin to the In Animation property of the Play Animation action.

Now that we have the animation playing when we create the menu, let's have it play in reverse when we leave the menu:

1. Select the Play Animation and MenuFlyIn nodes and copy them. Then move to the OnClicked (Play Game) event.
2. Drag the OnClicked event over to the left and remove its original connection to the Open Level action by holding down Alt and clicking. Now paste (Ctrl + V) the copied nodes and connect the output pin of OnClicked (Play Game) to the input of Play Animation. Then change Play Mode to Reverse.
3. To the right of this, create a Delay action. For the Duration value, we want it to wait as long as the animation lasts, so from the MenuFlyIn variable, create another pin and create a Get End Time action. Connect the Return Value of Get End Time to the Duration input of the Delay action.
4. Connect the output of the Play Animation action to the input of the Delay action, and the Completed output of the Delay action to the input of the Open Level action.
5. Now do the same for the OnClicked (Quit Game) event.
6. Compile, save, and run the game!

Our menu is now completed, and we've learned how animation works inside UMG! For more examples of using UMG for animation, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Animation/index.html.

Summary

This article gave you some insight into Slate and the UMG Editor, which we used to create a number of UI elements and an animated main menu to tie the whole game together. We created a main menu and learned how to make buttons do things. We also spent some time looking at how we can animate menus by having them fly in and out.
Resources for Article:

Further resources on this subject:
The Blueprint Class [article]
Adding Fog to Your Games [article]
Overview of Unreal Engine 4 [article]

Prototyping Levels with Prototype

Packt
22 Sep 2015
13 min read
This article is by John Doran, author of Building FPS Games with Unity.

Level design 101 – planning

Now, just because we are going to be diving straight into Unity, I feel it's important to talk a little more about how level design is done in the game industry. While you may think a level designer will just jump into the editor and start building, the truth is that you would normally need to do a ton of planning ahead of time before you even open up your tool.

Generally, a level begins with an idea. This can come from anything; maybe you saw a really cool building, or a photo on the Internet gave you a certain feeling; maybe you want to teach the player a new mechanic. Turning this idea into a level is what a level designer does. Taking all of these ideas, the level designer will create a level design document, which will outline exactly what you're trying to achieve with the entire level from start to end.

A level design document will describe everything inside the level, listing all of the possible encounters, puzzles, and so on, which the player will need to complete, as well as any side quests that the player will be able to achieve. To prepare for this, you should include as many references as you can, with maps, images, and movies similar to what you're trying to achieve. If you're working with a team, making this document available on a website or wiki will be a great asset, so that you know exactly what is being done in the level, what the team can use in their levels, and how difficult their encounters can be. Generally, you'll also want a top-down layout of your level, done either on a computer or with graph paper, with a line showing the player's general route through the level and the encounters and missions planned out.

Of course, you don't want to be too tied down to your design document. It will change as you playtest and work on the level, but the documentation process will help solidify your ideas and give you a firm basis to work from.

For those of you interested in seeing some level design documents, feel free to check out the example by Adam Reynolds (Level Designer on Homefront and Call of Duty: World at War) at http://wiki.modsrepository.com/index.php?title=Level_Design:_Level_Design_Document_Example.

If you want to learn more about level design, I'm a big fan of Beginning Game Level Design by John Feil (previously my teacher) and Marc Scattergood, Cengage Learning PTR. For more of an introduction to game design from scratch, check out Level Up!: The Guide to Great Video Game Design by Scott Rogers (Wiley) and The Art of Game Design by Jesse Schell. For some online resources, Scott has a neat GDC talk called Everything I Learned About Level Design I Learned from Disneyland, which can be found at http://mrbossdesign.blogspot.com/2009/03/everything-i-learned-about-game-design.html, and World of Level Design (http://worldofleveldesign.com/) is a good source to learn about level design, though it does not talk about Unity specifically.

In addition to a level design document, you can also create a game design document (GDD) that goes beyond the scope of just the level and includes story, characters, objectives, dialogue, concept art, level layouts, and notes about the game's content. However, that is something to do on your own.

Creating architecture overview

As a level designer, one of the most time-consuming parts of your job will be creating environments. There are many different ways out there to create levels. By default, Unity gives us some default meshes such as a Box, Sphere, and Cylinder. While it's technically possible to build a level this way, it would get really tedious very quickly. Next, I'm going to quickly go through the most popular options for building levels for games made in Unity before we jump into building a level of our own.

3D modeling software

A lot of the time, opening up a 3D modeling software package and building architecture that way is what professional game studios will do. This gives you maximum freedom to create your environment and allows you to do exactly what you'd like to do, but it requires you to be proficient in that tool, be it Maya, 3ds Max, Blender (which can be downloaded for free at blender.org), or some other tool. Then, you just need to export your models and import them into Unity. Unity supports a lot of different formats for 3D models (the most commonly used are .obj and .fbx), but there are a lot of issues to consider. For some best practices when it comes to creating art assets, please visit http://blogs.unity3d.com/2011/09/02/art-assets-best-practice-guide/.

Constructing geometry with brushes

Constructive Solid Geometry (CSG), commonly referred to as brushes, is a tool artists/designers use to quickly block out pieces of a level from scratch. Using brushes inside the in-game level editor has been a common approach for artists/designers to create levels. Unreal Engine 4, Hammer, Radiant, and other professional game engines make use of this building structure, making it quite easy for people to create and iterate through levels quickly through a process called white-boxing, as it's very easy to make changes to the simple shapes. However, just like learning a modeling software tool, there can be a higher barrier to entry in creating complex geometry with a 3D application, whereas CSG brushes provide a quick way to create shapes with ease.

Unity does not support building things like this by default, but there are several tools in the Unity Asset Store that add this functionality. For example, sixbyseven studio has an extension called ProBuilder that can add this functionality to Unity, making it very easy to build out levels. The only possible downside is the fact that it does cost money, though it is worth every penny. However, sixbyseven has kindly released a free version of their tools called Prototype, which we installed earlier. It contains everything we will need for this chapter, but it does not allow us to add custom textures or use some of the more advanced tools. We will be using ProBuilder later on in the book to polish the entire product. You can find out more information about ProBuilder at http://www.protoolsforunity3d.com/probuilder/.

Modular tilesets

Another way to generate architecture is through the use of "tiles" created by an artist. Similar to using Lego pieces, we can use these tiles to snap together walls and other objects to create a building. With creative uses of the tiles, you can create a large amount of content with just a minimal number of assets. This is probably the easiest way to create a level, at the expense of not being able to create unique-looking buildings, since you only have a few pieces to work with. Titles such as Skyrim use this to a great extent to create their large world environments.

Mix and match

Of course, it's also possible to use a mixture of the preceding tools in order to combine the advantages of each. For example, you could use brushes to block out an area and then use a group of tiles called a tileset to replace the boxes with highly detailed models, which is what a lot of AAA studios do. In addition, we could initially place brushes to test our gameplay and then add in props to break up the repetitiveness of the levels, which is what we are going to be doing.

Creating geometry

The first thing we are going to do is learn how we can create geometry, as described in the following steps:

1. From the top menu, go to File | New Scene. This will give us a fresh start to build our project.
2. Next, because we already have Prototype installed, create a cube by hitting Ctrl + K.
3. Right now, our cube (with a name of pb-Cube-1562 or something similar) is placed at a Position of 2, -7, -2. However, for simplicity's sake, I'm going to place it in the middle of the world. We can do this by left-clicking in the X position field, typing 0, and then pressing Tab. Notice the cursor is now automatically in the Y field. Type in 0, press Tab again, and then, in the Z field, type 0 as well. Alternatively, you can right-click on the Transform component and select Reset Position.
4. Next, we have to center the camera back onto our Cube object. We can do this by going over to the Hierarchy tab and double-clicking on the Cube object (or selecting it and then pressing F).
5. Now, to actually modify this cube, we are going to open up Prototype. We can do this by first selecting our Cube object, going to the Pb_Object component, and then clicking on the green Open Prototype button. Alternatively, you can go to Tools | Prototype | Prototype Window. This new Prototype tab can be detached from the main Unity window or, if you drag the tab over into Unity, it can be "hooked" into place elsewhere, for example, to the right of the Hierarchy tab.
6. Next, select the Scene tab in the middle of the screen and press the G key to toggle into the Object/Geometry mode. Alternatively, you can click on the Element button in the Scene tab. Unlike the default Object/Top Level mode, this will allow us to modify the cube directly to build upon it. For more information on the different modes, check out the Modes & Elements section at http://www.protoolsforunity3d.com/docs/probuilder/#buildingAndEditingGeometry.
7. You'll notice that the top of the Prototype tab has three buttons. These stand for the selection type you currently want to use. The default is Vertex, or the Point mode, which allows us to select individual points to modify. The next is Edge and the last is Face. Face is a good standard to use at this stage, because we only want to extend things out. Select the Face mode by either clicking on the button or pressing the H key twice until it says Editing Faces on the screen. Afterwards, select the box's right side. For a list of keyboard shortcuts included with Prototype/ProBuilder, check out http://www.protoolsforunity3d.com/docs/probuilder/#keyboardShortcuts.
8. Now, pull on the red handle to extend our brush outward. Easy enough.

Note that, by default, pulling things out moves them in increments of 1. This is nice when we are polishing our levels and trying to place things exactly where we want them, but right now, we are just prototyping, so getting the level out as quickly as possible is paramount to testing whether it's enjoyable. To help with this, we can use a feature of Unity called unit snapping.

9. Undo the previous change by pressing Ctrl + Z. Then, move the camera over to the other side and select our longer face. Drag it 9 units out by holding down the Ctrl key (Command on Mac). ProCore3D also has another tool called ProGrids, which has some advanced unit-snapping functionality, but we are not going to be using it here; for more information on it, check out http://www.protoolsforunity3d.com/progrids/. If you'd like to change the distance traveled while using unit snapping, set it via the Edit | Snap Settings… menu.
10. Next, drag both of the sides out until they are 9 x 9 wide.
11. To make things easier to see, select the Directional Light object in our scene via the Hierarchy tab and reduce the Light component's Intensity to 0.5.
12. So, at this point, we have a nice-looking floor. However, to create our room, we first need to create our ceiling. Select the floor we have created and press Ctrl + D to duplicate the brush. Once completed, change back into the Object/Top Level editing mode and move the brush so that its Position is at 0, 4, 0. Alternatively, you can click on the duplicated object and, from the Inspector tab, change the Position's Y value to 4.
13. Go back into the sub-selection mode by hitting H to return to the Faces mode. Then, hold down Ctrl and select all of the edges of our floor. Click on the Extrude button in the Prototype panel. This creates a new part on each of the four edges, which is by default 0.5 wide (changeable by clicking on the + button beside the option). This adds additional edges and/or faces to our object.
14. Next, we are going to extrude again; but, rather than doing it from the menu, let's do it manually by selecting the tops of our newly created edges, holding down the Shift button, and dragging up along the Y (green) axis. Hold down Ctrl after starting the extrusion to have it snap appropriately to fit around our ceiling. Note that the box may not look right as soon as you let go, as Prototype needs time to compute lighting and materials, which it will mention in the bottom-right part of Unity.
15. Next, select Main Camera in the Hierarchy, hit W to switch to the Translate mode, and F to center the selection. Then, move our camera into the room. You'll notice it's completely dark due to the ceiling, but we can add a light to the world to fix that! Add a point light by going to GameObject | Light | Point Light and position it in the center of the room, towards the ceiling (in my case, at 4.5, 2.5, 3.5). Then, increase its Range to 25 so that it lights the entire room.
16. Finally, add a player to see how he interacts. First, delete the Main Camera object from the Hierarchy, as we won't need it. Then, go into the Project tab and open up the Assets/UFPS/Base/Content/Prefabs/Players folder. Drag and drop the AdvancedPlayer prefab into the scene, moving it so that it doesn't collide with the walls, floor, or ceiling, a little above the ground.
17. Next, save our level (Chapter 3_1_CreatingGeometry) and hit the Play button. It may be a good idea to save your levels in such a way that you are able to go back and see what was covered in each section of each chapter, making things easier to find in the future.

Again, remember that we can pull a weapon out by pressing the 1-5 keys. With this, we now have a simple room that we can interact with!
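Most of the steps above are manual editor actions, but the repetitive ones can be scripted. As a hedged illustration (the menu path and object name below are assumptions, not part of the original walkthrough), a small editor script can create a cube already placed at the world origin, replacing the manual reset-position step from earlier:

```csharp
using UnityEditor;
using UnityEngine;

public static class PrototypingShortcuts {
    // Creates a standard Unity cube at 0, 0, 0 and selects it,
    // so no manual Transform reset is needed.
    [MenuItem("Tools/Prototyping/Cube At Origin")]
    static void CreateCubeAtOrigin() {
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.position = Vector3.zero;
        cube.name = "Prototype Cube";
        Selection.activeGameObject = cube;
    }
}
```

Note that this creates one of Unity's default meshes rather than a Prototype brush; brushes carry their own components (such as the Pb_Object seen above) and should be created through the tool itself.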
Summary

In this article, we took on the role of a level designer who has been asked to create a level prototype to prove that our gameplay is solid. We used the free Prototype tool to help in this endeavor and also learned some beginning level design concepts.

Resources for Article:

Further resources on this subject:
Unity Networking – The Pong Game [article]
Unity 3.x Scripting – Character Controller versus Rigidbody [article]
Animations in Cocos2d-x [article]

Editor Tool, Prefabs, and Main Menu

Packt
22 Sep 2015
19 min read
In this article by Edward Kyle Langley, author of the book Learning Unity iOS Game Development, we will learn that the player has the ability to send input to the device, and we will handle this by manipulating the player character GameObject. We also set up some game logic so that the player character can interact with positive and negative world objects, such as Coins and Obstacles. To further develop the sense of a complete game, we need to create the pieces of the game world that represent a floor that the player will run on. (For more resources related to this topic, see here.) To create these pieces, we will create a Unity EditorWindow class that will help us create grids that will represent the ground the player runs on and the dirt below it. Traditionally, you would have to place each sprite one at a time. With this editor tool, we will be able to crate bigger boxes in a grid based on our settings. After we have our editor tool running, we will begin to create the prefabs that will hold multiple GameObjects and their components in a single file. Finally, we will write the code needed to move the floor and ground pieces below the player character, simulating the character as running forward. To summarize, in this article, we will cover the following topics: Writing a Unity C# class that extends EditorWindow, which allows you to input settings and sprite files that will give you a box grid and simplify the level pieces creation Creating the game-related prefabs so that you have grouped files in an easy-to-use file Building the main menu user interface with Unity's UI tools, including buttons for achievements, leaderboards, and store purchases Use the prefabs we made in the C# script. This will move the level pieces of prefabs under the player character, simulating movement. We will also go through the steps to get the final aspects of the iOS integration function and set up the main menu UI so that the player can navigate between playing the game, view at leaderboards /achievements, and have the option to purchase "remove iAds" for the cost of ten thousand coins or 99 cents. Making the Sprite Tile Editor Tool The Unity engine is incredibly flexible for all the aspects of game development, including creating custom editor tools to help fast track the more tedious aspects of development. In our case, it will be beneficial to have a tool that creates a root GameObject that will then create children GameObjects in a grid. This will be spaced out by the size of the sprite component they have attached. For example, if you were to place say 24 GameObjects one at a time, it could take some time to make sure that all are snapped correctly together. With our tool, we will be able to select the X value and the Y value for the grid, the sprite that represents the ground, and the sprite that represents the dirt below the ground. Perform the following steps: To begin with, navigate to the Assets folder. Right-click on this folder and select Create and then New Folder. Name this folder Level. Right-click on the new Level folder and select Import New Asset. Right-click on the Script folder, select Create and then C# Script. Name the script SpriteTiler. The SpriteTiler C# class Double-click on the SpriteTiler C# file to open it. 
Change the file so that it looks similar to the following code: using UnityEngine; using UnityEditor; using System.Collections; public class SpriteTiler : EditorWindow { } The big changes from the normally generated code file is the addition to using UnityEditor, changing the inherited class to EditorWindow, and removing the Start() and Update() functions. Global variables We now want to add the global variables for this class. Add the following code in the class block:   // Grid settings to make tiled by public float GridXSlider = 1; public float GridYSlider = 1; // Sprites for both the ground and dirt public Sprite TileGroundSprite; public Sprite TileDirtSprite; // Name of the GameObject that holds our tiled Objects public string TileSpriteRootGameObjectName = "Tiled Object"; The GridXSlider and GridYSlider class will be used to generate our grid, X being left to right and Y being top down. For example, if you had X set to five and Y set to three, the grid would generate columns of five elements and rows of three elements or five sprites long and three sprites down. The TileGroundSprite and TileDirtSprite sprite files will make up the ground and dirt levels. TileSpriteRootGameObjectName is the GameObject name that will hold the GameObjects children that have the sprite components. This is editable by you so that you can choose the name of the GameObject that gets created to avoid having the default new GameObject for each one made. The MenuItem creation Next, we need to create the MenuItem function. This will represent the Editor selection drop-down list so that we can use our tool. Add the following function to the SpriteTiler class under the global variables:    // Menu option to bring up Sprite Tiler window [MenuItem("RushRunner/Sprite Tile")] public static void OpenSpriteTileWindow() { EditorWindow.GetWindow< SpriteTiler > ( true, "Sprite Tiler" ); } As this class extends EditorWindow, and the preceding function is declared as MenuItem, it will create a dropdown in the Editor named RushRunner. This will hold a selection called Sprite Tile: You can name the dropdown and selection anything you like by changing the string that is passed into MenuItem, such as MyEditorTool or Editor Tool Name. If you save the SpiteTiler.cs file and go back to Unity and allow the engine to compile, you will be able to click on the SpriteTile button under RushRunner. This will create a editor window named Sprite Tiler. The OnGUI function Next, we need to add the function that will be used to draw all the windows GUI elements or the fields that we will use to get the settings to make the grid. Under our OpenSpriteTileWindow function, add the following code: // Called to render GUI frames and elements void OnGUI() { } OnGUI is the function that will draw our GUI elements to the window. This allows you to manipulate these GUI elements so that we have values to use when we create the GameObject grid and its GameObjects children with sprite components. The GUILayout and OnGUI setup To begin with the OnGUI function, we want to add the GUI elements to the window. In the OnGUI function, add the following code:   // Setting for GameObject name that holds our tiled Objects GUILayout.Label("Tile Level Object Name", EditorStyles .boldLabel); TileSpriteRootGameObjectName = GUILayout.TextField( TileSpriteRootGameObjectName, 25 ); // Slider for X grid value (left to right) GUILayout.Label("X: " + GridXSlider, EditorStyles. 
The OnGUI function

Next, we need to add the function that will draw all of the window's GUI elements, that is, the fields that we will use to get the settings to make the grid. Under our OpenSpriteTileWindow function, add the following code:

// Called to render GUI frames and elements
void OnGUI()
{
}

OnGUI is the function that will draw our GUI elements to the window. It lets us manipulate these GUI elements so that we have values to use when we create the GameObject grid and its children GameObjects with sprite components.

The GUILayout and OnGUI setup

To begin with the OnGUI function, we want to add the GUI elements to the window. In the OnGUI function, add the following code:

// Setting for GameObject name that holds our tiled Objects
GUILayout.Label("Tile Level Object Name", EditorStyles.boldLabel);
TileSpriteRootGameObjectName = GUILayout.TextField(TileSpriteRootGameObjectName, 25);

// Slider for X grid value (left to right)
GUILayout.Label("X: " + GridXSlider, EditorStyles.boldLabel);
GridXSlider = GUILayout.HorizontalScrollbar(GridXSlider, 1.0f, 0.0f, 30.0f);
GridXSlider = (int)GridXSlider;

// Slider for Y grid value (up to down)
GUILayout.Label("Y: " + GridYSlider, EditorStyles.boldLabel);
GridYSlider = GUILayout.HorizontalScrollbar(GridYSlider, 1.0f, 0.0f, 30.0f);
GridYSlider = (int)GridYSlider;

// File chooser for our Ground Sprite
GUILayout.Label("Sprite Ground File", EditorStyles.boldLabel);
TileGroundSprite = EditorGUILayout.ObjectField(TileGroundSprite, typeof(Sprite), true) as Sprite;

// File chooser for our Dirt Sprite
GUILayout.Label("Sprite Dirt File", EditorStyles.boldLabel);
TileDirtSprite = EditorGUILayout.ObjectField(TileDirtSprite, typeof(Sprite), true) as Sprite;

GUILayout.Label is a function that creates a text label in the window we are using. Its first use is to let the user know that the next setting is Tile Level Object Name: the name of the root GameObject that will hold the children GameObjects with Sprite components. By default, this is set to Tiled Object, although we allow the user to change it. In order to allow the user to change it, we need to give them a TextField to input a new string. We do this by assigning the result of GUILayout.TextField back to TileSpriteRootGameObjectName. As this runs in OnGUI, anything the user inputs will change the value of TileSpriteRootGameObjectName. We will use this later when the user wants to create the GameObject.

We then need to create two horizontal scrollbar GUI elements so that we can get values from them that represent the X and Y values of the grid. Similar to the TextField, we start each of the scrollbar elements with a GUILayout.Label that describes what the slider is for. We then assign GridXSlider and GridYSlider the value the scrollbar is set to, which is one by default. As the user adjusts the sliders, the GridXSlider and GridYSlider values will change, so that when the user clicks on the button to create the GameObject, we have a reference to the values that they want to use for the grid.

After the sliders, we want to have ObjectFields so that the user can search for and assign the sprite files that will represent the ground and dirt of the grid. EditorGUILayout.ObjectField takes a reference to the object you want to assign when the user selects one, the type of object that the ObjectField accepts, and whether the ObjectField allows scene objects. As we want this ObjectField to be for sprites, we set the type of object to typeof( Sprite ) and then cast the result that is assigned to TileGroundSprite or TileDirtSprite to a sprite by using as Sprite.

The OnGUI create tiled button

In order to know when the user wants to create the root GameObject and its grid of children GameObjects, we will need a button. Add the following code under the last GUI elements:

// If the button "Create Tiled" is clicked
if (GUILayout.Button("Create Tiled"))
{
    // If the grid settings are both zero,
    // send a notification to the user
    if (GridXSlider == 0 && GridYSlider == 0)
    {
        ShowNotification(new GUIContent("Must have either X or Y grid set to a value greater than 0"));
        return;
    }

    // If both the Dirt and Ground Sprites exist
    if (TileDirtSprite != null && TileGroundSprite != null)
    {
        // If the Sprite sizes don't match,
        // send a notification to the user
        if (TileDirtSprite.bounds.size.x != TileGroundSprite.bounds.size.x ||
            TileDirtSprite.bounds.size.y != TileGroundSprite.bounds.size.y)
        {
            ShowNotification(new GUIContent("Both Sprites must be of matching size."));
            return;
        }

        // Create GameObject and tiled
        // Objects with user settings
        CreateSpriteTiledGameObject(GridXSlider, GridYSlider, TileGroundSprite, TileDirtSprite, TileSpriteRootGameObjectName);
    }
    else
    {
        // If either the Dirt or Ground Sprite doesn't exist,
        // send a notification to the user
        ShowNotification(new GUIContent("Must have Dirt and Ground Sprite selected."));
        return;
    }
}

The first condition we have set is the GUILayout.Button( "Create Tiled" ) call. The Button function returns true as soon as it is clicked, but it is still rendered to the window when it returns false. This means that although the button is not active, it will still be seen by the user. As some settings would create a scenario that is not ideal for the concept of our SpriteTiler, we first want to make sure that the settings are in line with what we have designed the tool to perform.

We first check whether GridXSlider and GridYSlider are both set to zero. If both of these values are zero, no grid would be created, and as the concept of the tool is to create a grid of children sprites, we tell the user that they must set either GridXSlider or GridYSlider to a value greater than zero.

We then check whether TileDirtSprite and TileGroundSprite have a value. If either of these values is null, the settings are not complete, so we tell the user that both the Dirt and Ground sprites need a selection. If the user has set the Dirt and Ground sprites but their sizes do not match, such as one being 32 x 32 and the other being 64 x 64, we tell the user that both sprites need to be of the same size. If we didn't check for this, the grid wouldn't align correctly, creating negative results and making the tool not function as we want it to.

If the user settings are in order, we call the CreateSpriteTiledGameObject function and pass GridXSlider, GridYSlider, TileGroundSprite, TileDirtSprite, and TileSpriteRootGameObjectName.
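ShowNotification displays a transient overlay inside the editor window. If you would rather force the user to acknowledge the problem, a modal dialog is one alternative. The following is only a sketch of that variation, reusing the first validation check from above:

// Hypothetical variation: report invalid settings with a modal
// dialog instead of the window notification used above.
if (GridXSlider == 0 && GridYSlider == 0)
{
    EditorUtility.DisplayDialog(
        "Sprite Tiler",
        "Must have either X or Y grid set to a value greater than 0.",
        "OK");
    return;
}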
The CreateSpriteTiledGameObject function

This function is designed to take the user settings and create the grid from them. Add the following function under the OnGUI function:

// Create GameObject and tiled children based on user settings
public static void CreateSpriteTiledGameObject(float GridXSlider, float GridYSlider, Sprite SpriteGroundFile, Sprite SpriteDirtFile, string RootObjectName)
{
    // Store size of Sprite
    float spriteX = SpriteGroundFile.bounds.size.x;
    float spriteY = SpriteGroundFile.bounds.size.y;

    // Create the root GameObject which will hold children that tile
    GameObject rootObject = new GameObject();
    // Set position in world to 0,0,0
    rootObject.transform.position = new Vector3(0.0f, 0.0f, 0.0f);
    // Name it based on user settings
    rootObject.name = RootObjectName;

    // Create starting values for while loop
    int currentObjectCount = 0;
    int currentColumn = 0;
    int currentRow = 0;
    Vector3 currentLocation = new Vector3(0.0f, 0.0f, 0.0f);

    // Continue loop until all rows
    // and columns have been filled
    while (currentRow < GridYSlider)
    {
        // Create a child GameObject, set its parent to root,
        // name it, and offset its location based on current location
        GameObject gridObject = new GameObject();
        gridObject.transform.SetParent(rootObject.transform);
        gridObject.name = RootObjectName + "_" + currentObjectCount;
        gridObject.transform.position = currentLocation;

        // Give child gridObject a SpriteRenderer and set its sprite based on currentRow
        SpriteRenderer gridRenderer = gridObject.AddComponent<SpriteRenderer>();
        gridRenderer.sprite = (currentRow == 0) ? SpriteGroundFile : SpriteDirtFile;

        // Give the gridObject a BoxCollider
        gridObject.AddComponent<BoxCollider2D>();

        // Offset currentLocation for next gridObject to use
        currentLocation.x += spriteX;
        // Increment current column by one
        currentColumn++;

        // If the current column is greater than the X slider
        if (currentColumn >= GridXSlider)
        {
            // Reset column, increment row, reset x location
            // and offset y location downwards
            currentColumn = 0;
            currentRow++;
            currentLocation.x = 0;
            currentLocation.y -= spriteY;
        }

        // Add to currentObjectCount for naming of
        // gridObject children.
        currentObjectCount++;
    }
}

To start with, we must first have the X and Y sizes of the sprite we want to tile so that we can offset the location of the children GameObjects that are created. As we checked earlier that both sprites are of the same size, it doesn't matter which sprite object we get the size from; in our case, we use SpriteGroundFile.

We then move the rootObject position to 0X, 0Y, and 0Z so that it is in the center of our scene. This can be set to anything you like, although when rootObject and its children get created, it is easier to find them at the center of the scene world. After it has been moved, we set its name to the setting that the user entered, or the default, Tiled Object.

Once we have rootObject set up, we can create its children GameObjects. To start this cycle, we need a few variables to reference and change:

currentObjectCount: This tracks the total number of children that have been created. It increments for each one created.
currentColumn: This denotes the current column we are on in the row.
currentRow: This specifies the current row we are on.
currentLocation: This denotes the position that the next child GameObject will be placed at. It changes after each new child is created, based on the X or Y size of the sprite.

Now that we have our rootObject and the variables we need to create the children, we can use a while loop.
A while loop is a loop that will continue until its condition fails. In our case, we check whether currentRow is less than the GridYSlider value. As soon as currentRow is equal to or greater than GridYSlider, the loop stops because the condition fails. The reason we look at currentRow is that every time a row's columns have been filled, we reset currentColumn to zero and increment currentRow by one. This means that each row will hold as many columns as were set by the GridXSlider value, and we know that the grid is complete when currentRow is equal to or greater than GridYSlider. For example, with a grid setting of 3X and 3Y, the first row is filled with three columns. When the first row is done, the row counter increments and three more columns are added. After the last row completes its three columns, the while condition fails because the row value is equal to GridYSlider.

On each iteration of the while loop, we start by creating gridObject. We set this grid object's parent to rootObject, set its name to RootObjectName concatenated with an underscore followed by currentObjectCount, and then set the gridObject position to the currentLocation value, which changes based on the size of the sprite and the current column/row.

We then add a SpriteRenderer component to gridObject and assign a sprite to it. We choose the sprite based on whether currentRow is equal to zero or not. If it is (that is, the first row), we set the sprite to SpriteGroundFile. If currentRow is not equal to zero, we set the sprite to SpriteDirtFile. The ternary operator is a sort of shorthand for if and else. If the condition is true, the value after the question mark is used. If the condition is false, the value after the colon is used. The question mark represents if, whereas the colon represents else. The ternary operator looks as follows:

Value = ( condition == true ) ? ifTrue : elseNotTrue;

Once we have the sprite assigned to the SpriteRenderer component of gridObject, we can add a BoxCollider2D component, which will size itself to match the sprite. If we were to add the BoxCollider2D component before the sprite was assigned, it would keep the default size of 1, 1, 1, which would be too big.

We then offset currentLocation.x by the spriteX size, so the next gridObject is placed one sprite width further along. The currentColumn value is incremented by one, and we then check whether currentColumn is greater than or equal to the GridXSlider value. If it is, we know that we need to start the next row. To do this, we reset currentColumn to zero, increment currentRow by one, set the currentLocation.x value to zero, and offset currentLocation.y by negative spriteY. This moves the placement position down one sprite height and back to the left, making it possible for the next row's columns to be created. Finally, we increment currentObjectCount by one.
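Since this is an editor tool, one refinement worth considering (shown here only as a sketch; it is not part of the tool above) is registering the created hierarchy with Unity's undo system, so that a single undo step removes the whole grid again:

// Hypothetical helper: call this at the end of
// CreateSpriteTiledGameObject, passing the rootObject it built.
static void RegisterTiledObjectUndo(GameObject rootObject)
{
    // One undo step now removes the root and all of its children.
    Undo.RegisterCreatedObjectUndo(rootObject, "Create Sprite Tiled Object");
}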
Building the main menu UI

The main menu UI will be its own Canvas GameObject. We will handle both the main menu and the game UI via the GameInfo class. We will also use the GameInfo class to manage button presses and the iOS integration. In Hierarchy, right-click and select UI, and then click on Canvas. Name this new Canvas GameObject MenuUI. Let's start by adding five buttons for achievements, playing, leaderboards, removing iAds, and restoring purchases. Right-click on the new MenuUI GameObject, navigate to UI, and left-click on Button. Do this four more times, so there are a total of five buttons that are children of the MenuUI GameObject. Name the buttons and their text children as follows:

PlayButton, PlayText
LeaderboardButton, LeaderboardText
AchievementButton, AchievementText
RemoveAdsButton, RemoveAdsText
RestorePurchaseButton, RestorePurchaseText

Adding button images

Next, we need to import the art that will be used for the main menu UI. In the Assets | UI folder, right-click and select Import New Asset. Select all the new images in the Assets | UI folder and change their settings as follows:

Filter Mode: Trilinear
Max Size: 256
Format: Truecolor

PlayButton

Select PlayButton in Hierarchy and look at the Inspector. Change its settings as follows:

Anchor: Bottom Center
Pos X: 0
Pos Y: 115
Pos Z: 0
Width: 128
Height: 128
Source Image: MenuButton

Now, select PlayText. In the Inspector window, change its settings as follows:

Text: Play
Font: Arial
Font Style: Bold
Font Size: 36
Alignment: Center

LeaderboardButton

Select LeaderboardButton in the Hierarchy tab and look at the Inspector. Change its settings as follows:

Anchor: Bottom Center
Pos X: 135
Pos Y: 115
Pos Z: 0
Width: 128
Height: 128
Source Image: MenuButton

Select LeaderboardText. In the Inspector window, change its settings to:

Text: Leaderboards
Font: Arial
Font Style: Bold
Font Size: 17
Alignment: Center

AchievementButton

Select AchievementButton in the Hierarchy tab and look at the Inspector. Change its settings as follows:

Anchor: Bottom Center
Pos X: -135
Pos Y: 115
Pos Z: 0
Width: 128
Height: 128
Source Image: MenuButton

Now, select AchievementText and then, in the Inspector, change its settings to:

Text: Achievements
Font: Arial
Font Style: Bold
Font Size: 17
Alignment: Center

RemoveAdsButton

Select RemoveAdsButton in the Hierarchy tab and navigate to the Inspector. Change its settings as follows:

Anchor: Bottom Center
Pos X: -64
Pos Y: 55
Pos Z: 0
Width: 96
Height: 42
Source Image: RestartButton

Now, select RemoveAdsText and then, in the Inspector window, change its settings as shown here:

Text: Remove iAds
Font: Arial
Font Style: Bold
Font Size: 12
Alignment: Center

RestorePurchaseButton

Select RestorePurchaseButton in the Hierarchy tab and look at the Inspector. Change its settings as follows:

Anchor: Bottom Center
Pos X: 64
Pos Y: 55
Pos Z: 0
Width: 96
Height: 42
Source Image: RestartButton

Now, select RestorePurchaseText and then, in the Inspector window, change its settings as follows:

Text: Restore Purchase
Font: Arial
Font Style: Bold
Font Size: 14
Alignment: Center

You should now have a button layout that looks similar to the following image:

Summary

In this article, we discussed how to create a Unity editor tool and a grid of GameObjects. These were laid out by the size of the sprites you chose and were flexible enough to use with your own settings. We also created prefabs for all of our bigger GameObjects, which could hold all of their components in a neat package. We also covered the basics of how to create a game for iOS and utilize its GameCenter features. Feel free to explore these features and add to them. Adding more store purchases, achievements, and leaderboards is simply a matter of repeating the steps that we have already covered.

Resources for Article: Further resources on this subject: Components in Unity [article] Saying Hello to Unity and Android [article] Unity Networking – The Pong Game [article]
Adding Fog to Your Games

Packt
21 Sep 2015
8 min read
In this article by Muhammad A. Moniem, author of the book Unreal Engine Lighting and Rendering Essentials, we cannot speak about rendering without mentioning one of the oldest (but most important) rendering features since the rise of 3D rendering. Fog effects have always been an essential part of any rendering engine, regardless of the main goal of that engine. In games, however, this feature is a must, not only because of the ambiance and feel it gives to the game, but because it allows you to reduce the draw distance while rendering large, open areas, which is great performance-wise! Fog effects can be used for many purposes, from adding ambiance to the world, to setting a global mood (perhaps scary), to simulating a real environment, or even to distracting the players. By the end of this little article, you'll be able to:

Understand both fog types in Unreal Engine
Understand the difference between the two fog types
Master all the parameters that control the fog types

Having said this, let's get started!

(For more resources related to this topic, see here.)

The fog types

Unreal Engine provides the user with two varieties of fog; each has its own set of parameters to modify and produces different results. The two supported fog types are as follows:

The Atmospheric Fog
The Exponential Height Fog

The Atmospheric Fog

The Atmospheric Fog gives an approximation of light scattering through a planetary atmosphere. It is the best fog method to use with a natural environment scene, such as a landscape scene. One of the core features of this fog is that it gives your directional light a sun disc effect.

Adding it to your game

By adding an actor from the Visual Effects section of the Modes panel, or from the actor's context menu by right-clicking on the scene view, you can place the Atmospheric Fog in your level directly. In the Visual Effects submenu of the Modes panel, you can find both fog types listed. In order to control the quality of the final visual look of the newly inserted fog, you will have to tweak the properties attached to the actor:

Sun Multiplier: This is an overall multiplier for the directional light's brightness. Increasing this value will not only brighten the fog color, but will brighten the sky color as well.
Fog Multiplier: This is a multiplier that affects only the fog color (it does not affect the directional light).
Density Multiplier: This is a fog density multiplier (it does not affect the directional light).
Density Offset: This is a fog opacity controller.
Distance Scale: This is a distance factor that is compared to the Unreal unit scale. This value is most effective for a very small world. As the world size increases, you will need to increase this value too, as larger values cause changes in the fog attenuation to take place faster.
Altitude Scale: This is the scale along the z axis.
Distance Offset: This is the distance offset, calculated in km, used to manage large distances.
Ground Offset: This is an offset for the sea level. (Normally, the sea level is 0, and as the fog system does not work for regions below the sea level, you need to make sure that all the terrain remains above this value in order to guarantee that the fog works.)
Start Distance: This is the distance from the camera lens at which the fog will start.
Sun Disk Scale: This is the size of the sun disk. Keep in mind that this can't be 0; there used to be an option to disable the sun disk, but in order to keep things realistic, Epic decided to remove that option and keep the sun disk, while giving you the chance to make it as small as possible.
Precompute Params: The properties included in this group require recomputation of the precomputed texture data:
Density Height: This is the fog density decay height controller. The lower the value, the denser the fog will be, while the higher the value, the less scattered the fog will be.
Max Scattering Num: This sets a limit on the number of scattering calculations.
Inscatter Altitude Sample Number: This is the number of different altitudes at which the inscatter color can be sampled.

The Exponential Height Fog

This type of fog has its own unique requirement. While the Atmospheric Fog can be added anytime or anywhere and it works, the Exponential Height Fog requires a special type of map with low and high bounds, as its mechanic creates more density in the low places of a map and less density in the high places, with a smooth transition between both areas. One of the most interesting features of the Exponential Height Fog is that it has two fog colors: one for the hemisphere facing the dominant directional light and another for the opposite hemisphere.

Adding it to your game

As mentioned earlier, adding this fog type from the same Visual Effects section of the Modes panel is very simple. You can select the Exponential Height Fog actor and drag and drop it into the scene. As you can see, even the icon implies the high and low places relative to the sea level. In order to control the final visual look of the newly inserted fog, you will have to tweak the properties attached to the actor:

Fog Density: This is the global density controller of the fog.
Fog Inscattering Color: This is the inscattering color for the fog (the primary color). In the following image, you can see how different values work:
Fog Height Falloff: This is the height density controller that determines how the density increases as the height decreases.
Fog Max Opacity: This controls the maximum opacity of the fog. A value of 0 means the fog will be invisible.
Start Distance: This is the distance from the camera at which the fog will start.
Directional Inscattering Exponent: This controls the size of the directional inscattering cone. The higher the value, the clearer the vision you get, while the lower the value, the denser the fog you get.
Directional Inscattering Start Distance: This controls the start distance from the viewer of the directional inscattering.
Directional Inscattering Color: This sets the color for directional inscattering that is used to approximate inscattering from a directional light.
Visible: This controls the fog's visibility.
Actor Hidden in Game: This enables or disables the fog in the game (it will not affect the editing mode).
Editor Billboard Scale: This is the scale of the billboard components in the editor.
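If you are working in code rather than in the editor, most of these parameters are also exposed on the fog actor's component. The following is only a rough C++ sketch under assumptions (the header paths follow UE4's module layout, and the 0.05f target density and the interpolation speed of 1.0f are arbitrary values); it shows one way to fade the density of every Exponential Height Fog actor in a level:

#include "EngineUtils.h"
#include "Engine/ExponentialHeightFog.h"
#include "Components/ExponentialHeightFogComponent.h"

// A minimal sketch: gradually interpolate the fog density of every
// Exponential Height Fog actor in the world towards a target value.
void FadeFogDensity(UWorld* World, float DeltaSeconds)
{
    for (TActorIterator<AExponentialHeightFog> It(World); It; ++It)
    {
        UExponentialHeightFogComponent* Fog =
            It->FindComponentByClass<UExponentialHeightFogComponent>();
        if (Fog)
        {
            const float NewDensity =
                FMath::FInterpTo(Fog->FogDensity, 0.05f, DeltaSeconds, 1.0f);
            // The setter also marks the render state dirty for us.
            Fog->SetFogDensity(NewDensity);
        }
    }
}

The same effect can also be achieved without any C++ by using the animation options discussed next.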
The animated fog

Almost like anything else in Unreal Engine, the fog can be animated. Some parts of the engine are very responsive to the animation system, while other parts have only limited access; the fog falls into the latter category, so only some of its values can be animated. You can use different methods to animate values at runtime or even during edit mode.

The color

The height fog color can be changed at runtime using the LinearColor Property Track in the Matinee Editor. By performing the following steps, you can change the height fog color in the game:

Create a new Matinee Actor.
Open the newly created actor in the Matinee Editor.
Create a Height Fog Actor.
Create a group in Matinee.
Attach the Height Fog Actor from the scene to the group created in the previous step.
Create a LinearColor Property Track in the group.
Choose either FogInscatteringColor or DirectionalInscatteringColor to control its value (using two colors is an advantage of this fog type, remember!).
Add keyframes to the track, and set the color for them.

Animating the Exponential Height Fog

In order to animate the Exponential Height Fog, you can use one of the following two ways:

Use Matinee to animate the Exponential Height Fog Actor values
Use a Timeline node in the Level Blueprint and control the Exponential Height Fog Actor values

Summary

In this article, you learned about fog effects and the types supported in the Unreal Editor, their different parameters, and how to use either of the fog types. Now, it is recommended that you go directly to your editor, start adding some fog, and play with its values. Even better if you start animating the parameters as mentioned earlier. Don't just try it in edit mode; sometimes the results are different when you hit play, and even more different when you cook a build, so feel free to build any level you make into an executable and check the results.

Resources for Article: Further resources on this subject: Exploring and Interacting with Materials using Blueprints [article] Creating a Brick Breaking Game [article] The Unreal Engine [article]
Finding Your Way

Packt
21 Sep 2015
19 min read
This article by Ray Barrera, the author of Unity AI Game Programming Second Edition, covers the following topics:

The A* Pathfinding algorithm
A custom A* Pathfinding implementation

(For more resources related to this topic, see here.)

A* Pathfinding

We'll implement the A* algorithm in a Unity environment using C#. The A* Pathfinding algorithm is widely used in games and interactive applications, even though other algorithms exist, such as Dijkstra's algorithm, because of its simplicity and effectiveness.

Revisiting the A* algorithm

Let's review the A* algorithm again before we proceed to implement it in the next section. First, we'll need to represent the map in a traversable data structure. While many structures are possible, for this example, we will use a 2D grid array. We'll implement the GridManager class later to handle this map information. Our GridManager class will keep a list of the Node objects, which are basically the tiles in the 2D grid. So, we need to implement the Node class to handle things such as the node type (whether it's a traversable node or an obstacle), the cost to pass through it, the cost to reach the goal node, and so on. We'll have two variables to store the nodes that have been processed and the nodes that we have yet to process. We'll call them the closed list and the open list, respectively. We'll implement that list type in the PriorityQueue class. Finally, the following A* algorithm will be implemented in the AStar class. Let's take a look at it:

We begin at the starting node and put it in the open list.
As long as the open list has some nodes in it, we'll perform the following processes:
Pick the first node from the open list and keep it as the current node. (This is assuming that we've sorted the open list and the first node has the least cost value, which will be mentioned at the end of the code.)
Get the neighboring nodes of this current node that are not obstacle types, such as a wall or canyon, that can't be passed through.
For each neighbor node, check if this neighbor node is already in the closed list. If not, we'll calculate the total cost (F) for this neighbor node using the following formula:

F = G + H

In the preceding formula, G is the total cost from the starting node to this node and H is the estimated cost from this node to the final target node. For example, if reaching a neighbor costs G = 4 from the start and the straight-line estimate from that neighbor to the goal is H = 2.5, that neighbor's total cost is F = 6.5. Store this cost data in the neighbor node object. Also, store the current node as the parent node. Later, we'll use this parent node data to trace back the actual path.
Put this neighbor node in the open list.
Sort the open list in ascending order, ordered by the total cost to reach the target node.
If there are no more neighbor nodes to process, put the current node in the closed list and remove it from the open list.
Go back to step 2.

Once you have completed this process, your current node should be in the target goal node position, but only if there's an obstacle-free path to reach the goal node from the start node. If it is not at the goal node, there's no available path to the target node from the current node position. If there's a valid path, all we have to do now is trace back from the current node's parent node until we reach the start node again. This gives us a path list of all the nodes that we chose during our pathfinding process, ordered from the target node to the start node. We then just reverse this path list, since we want to know the path from the start node to the target goal node. This is a general overview of the algorithm we're going to implement in Unity using C#. So let's get started.
Implementation We'll implement the preliminary classes that were mentioned before, such as the Node, GridManager, and PriorityQueue classes. Then, we'll use them in our main AStar class. Implementing the Node class The Node class will handle each tile object in our 2D grid, representing the maps shown in the Node.cs file: using UnityEngine; using System.Collections; using System; public class Node : IComparable { public float nodeTotalCost; public float estimatedCost; public bool bObstacle; public Node parent; public Vector3 position; public Node() { this.estimatedCost = 0.0f; this.nodeTotalCost = 1.0f; this.bObstacle = false; this.parent = null; } public Node(Vector3 pos) { this.estimatedCost = 0.0f; this.nodeTotalCost = 1.0f; this.bObstacle = false; this.parent = null; this.position = pos; } public void MarkAsObstacle() { this.bObstacle = true; } The Node class has properties, such as the cost values (G and H), flags to mark whether it is an obstacle, its positions, and parent node. The nodeTotalCost is G, which is the movement cost value from starting node to this node so far and the estimatedCost is H, which is total estimated cost from this node to the target goal node. We also have two simple constructor methods and a wrapper method to set whether this node is an obstacle. Then, we implement the CompareTo method as shown in the following code: public int CompareTo(object obj) { Node node = (Node)obj; //Negative value means object comes before this in the sort //order. if (this.estimatedCost < node.estimatedCost) return -1; //Positive value means object comes after this in the sort //order. if (this.estimatedCost > node.estimatedCost) return 1; return 0; } } This method is important. Our Node class inherits from IComparable because we want to override this CompareTo method. If you can recall what we discussed in the previous algorithm section, you'll notice that we need to sort our list of node arrays based on the total estimated cost. The ArrayList type has a method called Sort. This method basically looks for this CompareTo method, implemented inside the object (in this case, our Node objects) from the list. So, we implement this method to sort the node objects based on our estimatedCost value. The IComparable.CompareTo method, which is a .NET framework feature, can be found at http://msdn.microsoft.com/en-us/library/system.icomparable.compareto.aspx. Establishing the priority queue The PriorityQueue class is a short and simple class to make the handling of the nodes' ArrayList easier, as shown in the following PriorityQueue.cs class: using UnityEngine; using System.Collections; public class PriorityQueue { private ArrayList nodes = new ArrayList(); public int Length { get { return this.nodes.Count; } } public bool Contains(object node) { return this.nodes.Contains(node); } public Node First() { if (this.nodes.Count > 0) { return (Node)this.nodes[0]; } return null; } public void Push(Node node) { this.nodes.Add(node); this.nodes.Sort(); } public void Remove(Node node) { this.nodes.Remove(node); //Ensure the list is sorted this.nodes.Sort(); } } The preceding code listing should be easy to understand. One thing to notice is that after adding or removing node from the nodes' ArrayList, we call the Sort method. This will call the Node object's CompareTo method and will sort the nodes accordingly by the estimatedCost value. Setting up our grid manager The GridManager class handles all the properties of the grid, representing the map. 
We'll keep a singleton instance of the GridManager class as we need only one object to represent the map, as shown in the following GridManager.cs file: using UnityEngine; using System.Collections; public class GridManager : MonoBehaviour { private static GridManager s_Instance = null; public static GridManager instance { get { if (s_Instance == null) { s_Instance = FindObjectOfType(typeof(GridManager)) as GridManager; if (s_Instance == null) Debug.Log("Could not locate a GridManager " + "object. n You have to have exactly " + "one GridManager in the scene."); } return s_Instance; } } We look for the GridManager object in our scene and if found, we keep it in our s_Instance static variable: public int numOfRows; public int numOfColumns; public float gridCellSize; public bool showGrid = true; public bool showObstacleBlocks = true; private Vector3 origin = new Vector3(); private GameObject[] obstacleList; public Node[,] nodes { get; set; } public Vector3 Origin { get { return origin; } } Next, we declare all the variables; we'll need to represent our map, such as number of rows and columns, the size of each grid tile, and some Boolean variables to visualize the grid and obstacles as well as to store all the nodes present in the grid, as shown in the following code: void Awake() { obstacleList = GameObject.FindGameObjectsWithTag("Obstacle"); CalculateObstacles(); } // Find all the obstacles on the map void CalculateObstacles() { nodes = new Node[numOfColumns, numOfRows]; int index = 0; for (int i = 0; i < numOfColumns; i++) { for (int j = 0; j < numOfRows; j++) { Vector3 cellPos = GetGridCellCenter(index); Node node = new Node(cellPos); nodes[i, j] = node; index++; } } if (obstacleList != null && obstacleList.Length > 0) { //For each obstacle found on the map, record it in our list foreach (GameObject data in obstacleList) { int indexCell = GetGridIndex(data.transform.position); int col = GetColumn(indexCell); int row = GetRow(indexCell); nodes[row, col].MarkAsObstacle(); } } } We look for all the game objects with an Obstacle tag and put them in our obstacleList property. Then we set up our nodes' 2D array in the CalculateObstacles method. First, we just create the normal node objects with default properties. Just after that, we examine our obstacleList. Convert their position into row-column data and update the nodes at that index to be obstacles. The GridManager class has a couple of helper methods to traverse the grid and get the grid cell data. The following are some of them with a brief description of what they do. The implementation is simple, so we won't go into the details. 
The GetGridCellCenter method returns the position of the grid cell in world coordinates from the cell index, as shown in the following code:

public Vector3 GetGridCellCenter(int index)
{
    Vector3 cellPosition = GetGridCellPosition(index);
    cellPosition.x += (gridCellSize / 2.0f);
    cellPosition.z += (gridCellSize / 2.0f);
    return cellPosition;
}

public Vector3 GetGridCellPosition(int index)
{
    int row = GetRow(index);
    int col = GetColumn(index);
    float xPosInGrid = col * gridCellSize;
    float zPosInGrid = row * gridCellSize;
    return Origin + new Vector3(xPosInGrid, 0.0f, zPosInGrid);
}

The GetGridIndex method returns the grid cell index in the grid from the given position (note that the bounds check compares the position's x and z components against the grid's width and height, respectively):

public int GetGridIndex(Vector3 pos)
{
    if (!IsInBounds(pos))
    {
        return -1;
    }
    pos -= Origin;
    int col = (int)(pos.x / gridCellSize);
    int row = (int)(pos.z / gridCellSize);
    return (row * numOfColumns + col);
}

public bool IsInBounds(Vector3 pos)
{
    float width = numOfColumns * gridCellSize;
    float height = numOfRows * gridCellSize;
    return (pos.x >= Origin.x && pos.x <= Origin.x + width &&
            pos.z <= Origin.z + height && pos.z >= Origin.z);
}

The GetRow and GetColumn methods return the row and column data of the grid cell from the given index:

public int GetRow(int index)
{
    int row = index / numOfColumns;
    return row;
}

public int GetColumn(int index)
{
    int col = index % numOfColumns;
    return col;
}

Another important method is GetNeighbours, which is used by the AStar class to retrieve the neighboring nodes of a particular node:

public void GetNeighbours(Node node, ArrayList neighbors)
{
    Vector3 neighborPos = node.position;
    int neighborIndex = GetGridIndex(neighborPos);
    int row = GetRow(neighborIndex);
    int column = GetColumn(neighborIndex);

    //Bottom
    int leftNodeRow = row - 1;
    int leftNodeColumn = column;
    AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);

    //Top
    leftNodeRow = row + 1;
    leftNodeColumn = column;
    AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);

    //Right
    leftNodeRow = row;
    leftNodeColumn = column + 1;
    AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);

    //Left
    leftNodeRow = row;
    leftNodeColumn = column - 1;
    AssignNeighbour(leftNodeRow, leftNodeColumn, neighbors);
}

void AssignNeighbour(int row, int column, ArrayList neighbors)
{
    if (row != -1 && column != -1 &&
        row < numOfRows && column < numOfColumns)
    {
        Node nodeToAdd = nodes[row, column];
        if (!nodeToAdd.bObstacle)
        {
            neighbors.Add(nodeToAdd);
        }
    }
}

First, we retrieve the neighboring nodes of the current node in all four directions: left, right, top, and bottom. Then, inside the AssignNeighbour method, we check the node to see whether it's an obstacle. If it's not, we push that neighbor node into the referenced array list, neighbors.
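If you want units to move diagonally as well, GetNeighbours could be extended with the four diagonal cells. This is only a sketch of such an extension, not part of the original listing; note that the straight-line heuristic used later remains admissible for 8-directional movement:

// A possible extension: also gather the four diagonal neighbours,
// turning the 4-connected grid into an 8-connected one. AssignNeighbour
// already performs the bounds and obstacle checks for us.
void AssignDiagonalNeighbours(int row, int column, ArrayList neighbors)
{
    AssignNeighbour(row - 1, column - 1, neighbors);
    AssignNeighbour(row - 1, column + 1, neighbors);
    AssignNeighbour(row + 1, column - 1, neighbors);
    AssignNeighbour(row + 1, column + 1, neighbors);
}

You would then call this method at the end of GetNeighbours, right after the four straight directions have been assigned.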
The next method is a debug aid to visualize the grid and obstacle blocks:

void OnDrawGizmos()
{
    if (showGrid)
    {
        DebugDrawGrid(transform.position, numOfRows, numOfColumns, gridCellSize, Color.blue);
    }
    Gizmos.DrawSphere(transform.position, 0.5f);
    if (showObstacleBlocks)
    {
        Vector3 cellSize = new Vector3(gridCellSize, 1.0f, gridCellSize);
        if (obstacleList != null && obstacleList.Length > 0)
        {
            foreach (GameObject data in obstacleList)
            {
                Gizmos.DrawCube(GetGridCellCenter(GetGridIndex(data.transform.position)), cellSize);
            }
        }
    }
}

public void DebugDrawGrid(Vector3 origin, int numRows, int numCols, float cellSize, Color color)
{
    float width = (numCols * cellSize);
    float height = (numRows * cellSize);

    // Draw the horizontal grid lines
    for (int i = 0; i < numRows + 1; i++)
    {
        Vector3 startPos = origin + i * cellSize * new Vector3(0.0f, 0.0f, 1.0f);
        Vector3 endPos = startPos + width * new Vector3(1.0f, 0.0f, 0.0f);
        Debug.DrawLine(startPos, endPos, color);
    }

    // Draw the vertical grid lines
    for (int i = 0; i < numCols + 1; i++)
    {
        Vector3 startPos = origin + i * cellSize * new Vector3(1.0f, 0.0f, 0.0f);
        Vector3 endPos = startPos + height * new Vector3(0.0f, 0.0f, 1.0f);
        Debug.DrawLine(startPos, endPos, color);
    }
}

Gizmos can be used to draw visual debugging and setup aids inside the editor's Scene view. The OnDrawGizmos method is called every frame by the engine. So, if the debug flags, showGrid and showObstacleBlocks, are checked, we just draw the grid with lines and the obstacles with cube gizmos. We won't go through the DebugDrawGrid method, which is quite simple. You can learn more about gizmos in the Unity reference documentation at http://docs.unity3d.com/Documentation/ScriptReference/Gizmos.html.

Diving into our A* implementation

The AStar class is the main class that will utilize the classes we have implemented so far. You can go back to the algorithm section if you want to review it. We start with our openList and closedList declarations, which are of the PriorityQueue type, as shown in the AStar.cs file:

using UnityEngine;
using System.Collections;

public class AStar
{
    public static PriorityQueue closedList, openList;

Next, we implement a method called HeuristicEstimateCost to calculate the cost between two nodes. The calculation is simple: we find the direction vector between the two by subtracting one position vector from the other. The magnitude of this resultant vector gives the direct distance from the current node to the goal node:

private static float HeuristicEstimateCost(Node curNode, Node goalNode)
{
    Vector3 vecCost = curNode.position - goalNode.position;
    return vecCost.magnitude;
}
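The straight-line distance is not the only possible heuristic. As a sketch (this is not part of the article's code), a Manhattan distance heuristic would look as follows; it can be a better match for grids that only allow 4-directional movement:

// An alternative heuristic: Manhattan distance, which sums the
// offsets along the X and Z axes instead of measuring the
// straight line between the two nodes.
private static float ManhattanEstimateCost(Node curNode, Node goalNode)
{
    Vector3 delta = curNode.position - goalNode.position;
    return Mathf.Abs(delta.x) + Mathf.Abs(delta.z);
}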
Next, we have our main FindPath method:

public static ArrayList FindPath(Node start, Node goal)
{
    openList = new PriorityQueue();
    openList.Push(start);
    start.nodeTotalCost = 0.0f;
    start.estimatedCost = HeuristicEstimateCost(start, goal);
    closedList = new PriorityQueue();
    Node node = null;

We initialize our open and closed lists. Starting with the start node, we put it in our open list. Then we start processing our open list:

    while (openList.Length != 0)
    {
        node = openList.First();
        //Check if the current node is the goal node
        if (node.position == goal.position)
        {
            return CalculatePath(node);
        }

        //Create an ArrayList to store the neighboring nodes
        ArrayList neighbours = new ArrayList();
        GridManager.instance.GetNeighbours(node, neighbours);

        for (int i = 0; i < neighbours.Count; i++)
        {
            Node neighbourNode = (Node)neighbours[i];
            if (!closedList.Contains(neighbourNode))
            {
                float cost = HeuristicEstimateCost(node, neighbourNode);
                float totalCost = node.nodeTotalCost + cost;
                float neighbourNodeEstCost = HeuristicEstimateCost(neighbourNode, goal);
                neighbourNode.nodeTotalCost = totalCost;
                neighbourNode.parent = node;
                neighbourNode.estimatedCost = totalCost + neighbourNodeEstCost;
                if (!openList.Contains(neighbourNode))
                {
                    openList.Push(neighbourNode);
                }
            }
        }

        //Push the current node to the closed list
        closedList.Push(node);
        //and remove it from openList
        openList.Remove(node);
    }

    if (node.position != goal.position)
    {
        Debug.LogError("Goal Not Found");
        return null;
    }
    return CalculatePath(node);
}

This code implementation resembles the algorithm that we discussed previously, so you can refer back to it if you are unclear about certain things:

Get the first node of our openList. Remember, our openList of nodes is sorted every time a new node is added, so the first node is always the node with the least estimated cost to the goal node.
Check whether the current node is already at the goal node. If so, exit the while loop and build the path array.
Create an array list to store the neighboring nodes of the current node being processed. Use the GetNeighbours method to retrieve the neighbors from the grid.
For every node in the neighbors array, check whether it's already in closedList. If not, calculate the cost values, update the node properties with the new cost values as well as the parent node data, and put it in openList.
Push the current node to closedList and remove it from openList. Go back to step 1.

If there are no more nodes in openList, our current node should be at the target node if there's a valid path available. Then, we just call the CalculatePath method with the current node parameter:

private static ArrayList CalculatePath(Node node)
{
    ArrayList list = new ArrayList();
    while (node != null)
    {
        list.Add(node);
        node = node.parent;
    }
    list.Reverse();
    return list;
}
}

The CalculatePath method traces through each node's parent node object and builds an array list. This gives us an array list of nodes ordered from the target node to the start node. Since we want a path array from the start node to the target node, we just call the Reverse method. So, this is our AStar class. We'll write a test script in the following code to test all this and then set up a scene to use them in.

Implementing a TestCode class

This class will use the AStar class to find the path from the start node to the goal node, as shown in the following TestCode.cs file:

using UnityEngine;
using System.Collections;

public class TestCode : MonoBehaviour
{
    private Transform startPos, endPos;
    public Node startNode { get; set; }
    public Node goalNode { get; set; }
    public ArrayList pathArray;
    GameObject objStartCube, objEndCube;
    private float elapsedTime = 0.0f;
    //Interval time between pathfinding
    public float intervalTime = 1.0f;

First, we set up the variables that we'll need to reference.
The pathArray is to store the nodes array returned from the AStar FindPath method: void Start () { objStartCube = GameObject.FindGameObjectWithTag("Start"); objEndCube = GameObject.FindGameObjectWithTag("End"); pathArray = new ArrayList(); FindPath(); } void Update () { elapsedTime += Time.deltaTime; if (elapsedTime >= intervalTime) { elapsedTime = 0.0f; FindPath(); } } In the Start method, we look for objects with the Start and End tags and initialize our pathArray. We'll be trying to find our new path at every interval that we set to our intervalTime property in case the positions of the start and end nodes have changed. Then, we call the FindPath method: void FindPath() { startPos = objStartCube.transform; endPos = objEndCube.transform; startNode = new Node(GridManager.instance.GetGridCellCenter( GridManager.instance.GetGridIndex(startPos.position))); goalNode = new Node(GridManager.instance.GetGridCellCenter( GridManager.instance.GetGridIndex(endPos.position))); pathArray = AStar.FindPath(startNode, goalNode); } Since we implemented our pathfinding algorithm in the AStar class, finding a path has now become a lot simpler. First, we take the positions of our start and end game objects. Then, we create new Node objects using the helper methods of GridManager and GetGridIndex to calculate their respective row and column index positions inside the grid. Once we get this, we just call the AStar.FindPath method with the start node and goal node and store the returned array list in the local pathArray property. Next, we implement the OnDrawGizmos method to draw and visualize the path found: void OnDrawGizmos() { if (pathArray == null) return; if (pathArray.Count > 0) { int index = 1; foreach (Node node in pathArray) { if (index < pathArray.Count) { Node nextNode = (Node)pathArray[index]; Debug.DrawLine(node.position, nextNode.position, Color.green); index++; } } } } } We look through our pathArray and use the Debug.DrawLine method to draw the lines connecting the nodes from the pathArray. With this, we'll be able to see a green line connecting the nodes from start to end, forming a path, when we run and test our program. Setting up our sample scene We are going to set up a scene that looks something similar to the following screenshot: A sample test scene We'll have a directional light, the start and end game objects, a few obstacle objects, a plane entity to be used as ground, and two empty game objects in which we put our GridManager and TestAStar scripts. This is our scene hierarchy: The scene Hierarchy Create a bunch of cube entities and tag them as Obstacle. We'll be looking for objects with this tag when running our pathfinding algorithm. The Obstacle node Create a cube entity and tag it as Start. The Start node Then, create another cube entity and tag it as End. The End node Now, create an empty game object and attach the GridManager script. Set the name as GridManager because we use this name to look for the GridManager object from our script. Here, we can set up the number of rows and columns for our grid as well as the size of each tile. The GridManager script Testing all the components Let's hit the play button and see our A* Pathfinding algorithm in action. By default, once you play the scene, Unity will switch to the Game view. Since our pathfinding visualization code is written for the debug drawn in the editor view, you'll need to switch back to the Scene view or enable Gizmos to see the path found. 
Found path one Now, try to move the start or end node around in the scene using the editor's movement gizmo (not in the Game view, but the Scene view). Found path two You should see the path updated accordingly if there's a valid path from the start node to the target goal node, dynamically in real time. You'll get an error message in the console window if there's no path available. Summary In this article, we learned how to implement our own simple A* Pathfinding system. To attain this, we firstly implemented the Node class and established the priority queue. Then, we move on to setting up the grid manager. After that, we dived in deeper by implementing a TestCode class and setting up our sample scene. Finally, we tested all the components. Resources for Article: Further resources on this subject: Saying Hello to Unity and Android[article] Enemy and Friendly AIs[article] Customizing skin with GUISkin [article]
Networking in Qt

Packt
21 Sep 2015
21 min read
In this article from the book Game Programming using Qt by authors Witold Wysota and Lorenz Haas, you will be taught how to communicate with the Internet servers and with sockets in general. First, we will have a look at QNetworkAccessManager, which makes sending network requests and receiving replies really easy. Building on this basic knowledge, we will then use Google's Distance API to get information about the distance between two locations and the time it would take to get from one location to the other. (For more resources related to this topic, see here.) QNetworkAccessManager The easiest way to access files on the Internet is to use Qt's Network Access API. This API is centered on QNetworkAccessManager, which handles the complete communication between your game and the Internet. When we develop and test a network-enabled application, it is recommended that you use a private, local network if feasible. This way, it is possible to debug both ends of the connection and the errors will not expose sensitive data. If you are not familiar with setting up a web server locally on your machine, there are luckily a number of all-in-one installers that are freely available. These will automatically configure Apache2, MySQL, PHP, and much more on your system. On Windows, for example, you could use XAMPP (http://www.apachefriends.org/en) or the Uniform Server (http://www.uniformserver.com); on Apple computers there is MAMP (http://www.mamp.info/en); and on Linux, you normally don't have to do anything since there is already localhost. If not, open your preferred package manager, search for a package called apache2 or similar, and install it. Alternatively, have a look at your distribution's documentation. Before you go and install Apache on your machine, think about using a virtual machine like VirtualBox (http://www.virtualbox.org) for this task. This way, you keep your machine clean and you can easily try different settings of your test server. With multiple virtual machines, you can even test the interaction between different instances of your game. If you are on UNIX, Docker (http://www.docker.com) might be worth to have a look at too. Downloading files over HTTP For downloading files over HTTP, first set up a local server and create a file called version.txt in the root directory of the installed server. The file should contain a small text like "I am a file on localhost" or something similar. To test whether the server and the file are correctly set up, start a web browser and open http://localhost/version.txt. You then should see the file's content. Of course, if you have access to a domain, you can also use that. Just alter the URL used in the example correspondingly. If you fail, it may be the case that your server does not allow to display text files. Instead of getting lost in the server's configuration, just rename the file to version .html. This should do the trick! Result of requesting http://localhost/version.txt on a browser As you might have guessed, because of the filename, the real-life scenario could be to check whether there is an updated version of your game or application on the server. To get the content of a file, only five lines of code are needed. Time for action – downloading a file First, create an instance of QNetworkAccessManager: QNetworkAccessManager *m_nam = new QNetworkAccessManager(this); Since QNetworkAccessManager inherits QObject, it takes a pointer to QObject, which is used as a parent. Thus, you do not have to take care of deleting the manager later on. 
Furthermore, one single instance of QNetworkAccessManager is enough for an entire application. So, either pass a pointer to the network access manager in your game around or, for ease of use, create a singleton pattern and access the manager through that. A singleton pattern ensures that a class is instantiated exactly once. The pattern is useful for accessing application-wide configurations or—in our case—an instance of QNetworkAccessManager. On the wiki pages for qtcentre.org and qt-project.org, you will find examples for different singleton patterns. A simple template-based approach would look like this (as a header file): template <class T> class Singleton { public: static T& Instance() { static T _instance; return _instance; } private: Singleton(); ~Singleton(); Singleton(const Singleton &); Singleton& operator=(const Singleton &); }; In the source code, you would include this header file and acquire a singleton of a class called MyClass with: MyClass *singleton = &Singleton<MyClass>::Instance(); If you are using Qt Quick, you can directly use the view instance of QNetworkAccessManager: QQuickView *view = new QQuickView; QNetworkAccessManager *m_nam = view->engine()->networkAccessManager(); Secondly, we connect the manager's finished() signal to a slot of our choice. For example, in our class, we have a slot called downloadFinished(): connect(m_nam, SIGNAL(finished(QNetworkReply*)), this, SLOT(downloadFinished(QNetworkReply*))); Then, it actually request's the version.txt file from localhost: m_nam->get(QNetworkRequest(QUrl("http://localhost/version.txt"))); With get(), a request to get the contents of the file, specified by the URL, is posted. The function expects QNetworkRequest, which defines all the information needed to send a request over the network. The main information of such a request is naturally the URL of the file. This is the reason why QNetworkRequest takes a QUrl as an argument in its constructor. You can also set the URL with setUrl() to a request. If you like to define some additional headers, you can either use setHeader() for the most common header or use setRawHeader() to be fully flexible. If you want to set, for example, a custom user agent to the request, the call would look like: QNetworkRequest request; request.setUrl(QUrl("http://localhost/version.txt")); request.setHeader(QNetworkRequest::UserAgentHeader, "MyGame"); m_nam->get(request); The setHeader() function takes two arguments, the first is a value of the enumeration QNetworkRequest::KnownHeaders, which holds the most common—self-explanatory—headers such as LastModifiedHeader or ContentTypeHeader, and the second is the actual value. You could also have written the header by using of setRawHeader(): request.setRawHeader("User-Agent", "MyGame"); When you use setRawHeader(), you have to write the header field names yourself. Beside that, it behaves like setHeader(). A list of all available headers for the HTTP protocol Version 1.1 can be found in section 14 at http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14. With the get() function we requested the version.txt file from localhost. All we have to do from now on is to wait for the server to reply. As soon as the server's reply is finished, the slot downloadFinished() will be called. That was defined by the previous connection statement. 
As an argument the reply of type QNetworkReply is transferred to the slot and we can read the reply's data and set it to m_edit, an instance of QPlainTextEdit, using the following code: void FileDownload::downloadFinished(QNetworkReply *reply) { const QByteArray content = reply->readAll(); m_edit->setPlainText(content); reply->deleteLater(); } Since QNetworkReply inherits QIODevice, there are also other possibilities to read the contents of the reply including QDataStream or QTextStream to either read and interpret binary data or textual data. Here, as fourth command, QIODevice::readAll() is used to get the complete content of the requested file in a QByteArray. The responsibility for the transferred pointer to the corresponding QNetworkReply lies with us, so we need to delete it at the end of the slot. This would be the fifth line of code needed to download a file with Qt. However, be careful and do not call delete on the reply directly. Always use deleteLater() as the documentation suggests! Have a go hero – extending the basic file downloader If you haven't set up a localhost, just alter the URL in the source code to download another file. Of course, having to alter the source code in order to download another file is far from an ideal approach. So try to extend the dialog, by adding a line edit where you can specify the URL you want to download. Also, you can offer a file dialog to choose the location to where the downloaded file should be saved. Error handling If you do not see the content of the file, something went wrong. Just as in real life, this can always happen so we better make sure, that there is good error handling in such cases to inform the user what is going on. Time for action – displaying a proper error message Fortunately QNetworkReply offers several possibilities to do this. In the slot called downloadFinished() we first want to check if an error occurred: if (reply->error() != QNetworkReply::NoError) {/* error occurred */} The function QNetworkReply::error() returns the error that occurred while handling the request. The error is encoded as a value of type QNetworkReply::NetworkError. The two most common errors are probably these: Error code Meaning ContentNotFoundError This error indicates that the URL of the request could not be found. It is similar to the HTTP error code 404. ContentAccessDenied This error indicates that you do not have the permission to access the requested file. It is similar to the HTTP error 401. You can look up the other 23 error codes in the documentation. But normally you do not need to know exactly what went wrong. You only need to know if everything worked out—QNetworkReply::NoError would be the return value in this case—or if something went wrong. Since QNetworkReply::NoError has the value 0, you can shorten the test phrase to check if an error occurred to: if (reply->error()) { // an error occurred } To provide the user with a meaningful error description you can use QIODevice::errorString(). 
The text is already set up with the corresponding error message and we only have to display it:

if (reply->error()) {
    const QString error = reply->errorString();
    m_edit->setPlainText(error);
    return;
}

In our example, assuming we had an error in the URL and wrote versions.txt by mistake, the application would look like this:

If the request was an HTTP request and the status code is of interest, it can be retrieved by QNetworkReply::attribute():

reply->attribute(QNetworkRequest::HttpStatusCodeAttribute)

Since it returns QVariant, you can either use QVariant::toInt() to get the code as an integer or QVariant::toString() to get the number as a QString. Besides the HTTP status code, you can query a lot of other information through attribute(). Have a look at the description of the enumeration QNetworkRequest::Attribute in the documentation. There you will also find QNetworkRequest::HttpReasonPhraseAttribute, which holds a human-readable reason phrase for the HTTP status code, for example "Not Found" if an HTTP error 404 occurred. The value of this attribute is used to set the error text for QIODevice::errorString(). So you can either use the default error description provided by errorString() or compose your own by interpreting the reply's attributes.

If a download failed and you want to resume it, or if you only want to download a specific part of a file, you can use the Range header:

QNetworkRequest req(QUrl("..."));
req.setRawHeader("Range", "bytes=300-500");
QNetworkReply *reply = m_nam->get(req);

In this example, only the bytes 300 to 500 would be downloaded. However, the server must support this.

Downloading files over FTP

Downloading a file over FTP is just as simple as downloading one over HTTP. If it is an anonymous FTP server, for which you do not need authentication, just use the URL like we did earlier. Assuming there is again a file called version.txt on the FTP server on localhost, type:

m_nam->get(QNetworkRequest(QUrl("ftp://localhost/version.txt")));

That is all; everything else stays the same. If the FTP server requires authentication, you'll get an error, for example:

Setting the user name and the user password to access an FTP server is likewise easy. Either write them in the URL or use the QUrl functions setUserName() and setPassword(). If the server does not use a standard port, you can set the port explicitly with QUrl::setPort().

To upload a file to an FTP server, use QNetworkAccessManager::put(), which takes as its first argument a QNetworkRequest, calling a URL that defines the name of the new file on the server, and as its second argument the actual data that should be uploaded. For small uploads, you can pass the content as a QByteArray. For larger contents, better use a pointer to a QIODevice. Make sure the device is open and stays available until the upload is done.
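Put together, an upload could look like the following sketch; it assumes the code lives in a class that owns m_nam, and the file and server paths are made up for illustration:

// A small sketch: upload a local file to an FTP server that
// requires authentication.
QUrl url("ftp://localhost/upload/savegame.dat");
url.setUserName("user");
url.setPassword("secret");

QFile *file = new QFile("savegame.dat");
if (file->open(QIODevice::ReadOnly)) {
    QNetworkReply *reply = m_nam->put(QNetworkRequest(url), file);
    // Parenting the file to the reply keeps the device alive for the
    // whole upload and deletes it together with the reply.
    file->setParent(reply);
    connect(reply, SIGNAL(finished()), reply, SLOT(deleteLater()));
}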
Really, don't move QNetworkAccessManager to a thread—unless you know exactly what you are doing.

If you send multiple requests, the slot connected to the manager's finished() signal is called in an arbitrary order, depending on how quickly a request gets a reply from the server. This is why you need to know which request a reply belongs to, and it is one reason why every QNetworkReply carries its related QNetworkRequest. It can be accessed through QNetworkReply::request().

Even if determining the replies and their purpose inside a single slot may work for a small application, it will quickly become large and confusing if you send a lot of requests. This problem is aggravated by the fact that all replies are delivered to only one slot. Since there will most probably be different types of replies that need different treatment, it would be better to bundle them into specific slots, each specialized for a particular task. Fortunately, this can be achieved very easily. QNetworkAccessManager::get() returns a pointer to the QNetworkReply that will receive all information about the request you post with get(). By using this pointer, you can then connect specific slots to the reply's signals.

For example, if you have several URLs and you want to save all linked images from these sites to the hard drive, you would request all web pages via QNetworkAccessManager::get() and connect their replies to a slot specialized in parsing the received HTML. If links to images are found, this slot would request them again with get(). However, this time the replies to these requests would be connected to a second slot, which is designed to save the images to the disk. Thus, you can separate the two tasks: parsing HTML and saving data to a local drive.

The most important signals of QNetworkReply are discussed next.

The finished signal

The finished() signal is equivalent to the QNetworkAccessManager::finished() signal we used earlier. It is triggered as soon as a reply has been returned—successfully or not. After this signal has been emitted, neither the reply's data nor its metadata will be altered anymore. With this signal, you are now able to connect a reply to a specific slot. This way, you can realize the scenario outlined previously. However, one problem remains: if you post simultaneous requests, you do not know which one has finished and thus called the connected slot. Unlike QNetworkAccessManager::finished(), QNetworkReply::finished() does not pass a pointer to the QNetworkReply; this would actually be a pointer to itself in this case. A quick solution to this problem is to use sender(). It returns a pointer to the QObject instance that called the slot. Since we know that it was a QNetworkReply, we can write:

QNetworkReply *reply = qobject_cast<QNetworkReply*>(sender());
if (!reply)
    return;

Here, sender() is cast to a pointer of type QNetworkReply. Whenever casting classes that inherit QObject, use qobject_cast. Unlike dynamic_cast, it does not use RTTI and works across dynamic library boundaries. Although we can be pretty confident that the cast will work, do not forget to check whether the pointer is valid. If it is a null pointer, exit the slot.

Time for action – writing OOP-conformant code by using QSignalMapper

A more elegant way that does not rely on sender() would be to use QSignalMapper and a local hash in which all replies that are connected to that slot are stored. So, whenever you call QNetworkAccessManager::get(), store the returned pointer in a member variable of type QHash<int, QNetworkReply*> and set up the mapper.
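As an aside: if you can rely on a C++11 compiler, Qt 5's newer connect syntax offers an even shorter route that needs neither sender() nor a mapper, because a lambda can capture the reply pointer directly. A minimal sketch, assuming the same m_nam and m_edit members used earlier in this article:

QNetworkReply *reply = m_nam->get(QNetworkRequest(QUrl("http://localhost/version.txt")));
connect(reply, &QNetworkReply::finished, [this, reply]() {
    // reply is captured by the lambda, so no cast of sender() is needed
    m_edit->setPlainText(reply->readAll());
    reply->deleteLater();
});

The mapper-based approach shown next is still worth knowing, as it keeps the logic in ordinary member slots and also works without C++11.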
Let's assume that we have the following member variables, and that they are set up properly:

QNetworkAccessManager *m_nam;
QSignalMapper *m_mapper;
QHash<int, QNetworkReply*> m_replies;

Then you would connect the finished() signal of a reply this way:

QNetworkReply *reply = m_nam->get(QNetworkRequest(QUrl(/*...*/)));
connect(reply, SIGNAL(finished()), m_mapper, SLOT(map()));
int id = /* unique id, not already used in m_replies */;
m_replies.insert(id, reply);
m_mapper->setMapping(reply, id);

What just happened?

First, we post the request and fetch the pointer to the QNetworkReply with reply. Then, we connect the reply's finished() signal to the mapper's map() slot. Next, we have to find a unique ID that is not already in use in the m_replies variable. One could use random numbers generated with qrand() and draw new numbers until an unused one is found. To determine whether a key is already in use, call QHash::contains(); it takes the key that should be checked as an argument. Or, even simpler, count up another private member variable. Once we have a unique ID, we insert the pointer to the QNetworkReply into the hash, using the ID as the key. Lastly, with setMapping(), we set up the mapper's mapping: the ID corresponds to the actual reply.

At a prominent place, most likely the constructor of the class, we have already connected the mapper's mapped() signal to a custom slot. For example:

connect(m_mapper, SIGNAL(mapped(int)), this, SLOT(downloadFinished(int)));

When the slot downloadFinished() is called, we can get the corresponding reply with:

void SomeClass::downloadFinished(int id)
{
    QNetworkReply *reply = m_replies.take(id);
    // do some stuff with reply here
    reply->deleteLater();
}

QSignalMapper also allows mapping with a QString as the identifier instead of an integer, as used above. So you could rewrite the example and use the URL to identify the corresponding QNetworkReply, at least as long as the URLs are unique.

The error signal

If you download files sequentially, you can swap the error handling out. Instead of dealing with errors in the slot connected to the finished() signal, you can use the reply's error() signal, which passes the error of type QNetworkReply::NetworkError to the slot. After the error() signal has been emitted, the finished() signal will most likely also be emitted shortly afterward.

The readyRead signal

Until now, we have used the slot connected to the finished() signal to get the reply's content. That works perfectly if you are dealing with small files. However, this approach is unsuitable when dealing with large files, since they would unnecessarily tie up too many resources. For larger files, it is better to read and save the transferred data as soon as it is available. We are informed by QIODevice::readyRead() whenever new data is available to be read. So, for large files, you should type in the following:

connect(reply, SIGNAL(readyRead()), this, SLOT(readContent()));
file.open(QIODevice::WriteOnly);

This will help you connect the reply's readyRead() signal to a slot, set up a QFile, and open it. In the connected slot, type in the following snippet:

const QByteArray ba = reply->readAll();
file.write(ba);
file.flush();

Now you can fetch the content that has been transferred so far and save it to the (already opened) file. This way, the needed resources are minimized. Don't forget to close the file after the finished() signal has been emitted.

In this context, it would be helpful if one could know up front the size of the file one wants to download. Therefore, we can use QNetworkAccessManager::head().
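A minimal sketch of how this could look, with headFinished() being a hypothetical slot of our own:

QNetworkReply *reply = m_nam->head(QNetworkRequest(QUrl("http://localhost/version.txt")));
connect(reply, SIGNAL(finished()), this, SLOT(headFinished()));

void SomeClass::headFinished()
{
    QNetworkReply *reply = qobject_cast<QNetworkReply*>(sender());
    if (!reply)
        return;
    // toLongLong() yields 0 if the server did not send a Content-Length header
    const qint64 size = reply->header(QNetworkRequest::ContentLengthHeader).toLongLong();
    // size could now be compared against the free disk space, for example
    reply->deleteLater();
}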
The head() function behaves like the get() function, but it does not transfer the content of the file; only the headers are transferred. If we are lucky, the server sends the "Content-Length" header, which holds the file size in bytes. To get that information, we type:

reply->header(QNetworkRequest::ContentLengthHeader).toInt();

With this information, we could also check up front whether there is enough space left on the disk.

The downloadProgress signal

Especially when a big file is being downloaded, the user usually wants to know how much data has already been downloaded and approximately how long it will take for the download to finish.

Time for action – showing the download progress

In order to achieve this, we can use the reply's downloadProgress() signal. As its first argument, it passes the information on how many bytes have already been received, and as its second argument, how many there are in total. This gives us the possibility to indicate the progress of the download with QProgressBar. As the passed arguments are of type qint64, we can't use them directly with QProgressBar, since it only accepts int. So, in the connected slot, we first calculate the progress as a fraction between 0 and 1:

void SomeClass::downloadProgress(qint64 bytesReceived, qint64 bytesTotal)
{
    qreal progress = (bytesTotal < 1) ? 1.0 : static_cast<qreal>(bytesReceived) / bytesTotal;
    progressBar->setValue(progress * progressBar->maximum());
}

What just happened?

With this fraction, we set the new value for the progress bar, where progressBar is the pointer to the bar. However, what value will progressBar->maximum() have, and where do we set the range for the progress bar? What is nice is that you do not have to set it for every new download. It is only done once, for example, in the constructor of the class containing the bar. As range values, I would recommend:

progressBar->setRange(0, 2048);

The reason is that if you take, for example, a range of 0 to 100 and the progress bar is 500 pixels wide, the bar would jump 5 pixels forward for every value change. This will look ugly. To get a smooth progression where the bar expands by 1 pixel at a time, a range of 0 to 99,999,999 would surely work but would be highly inefficient, because the current value of the bar would change very often without any visible difference in the bar. So the best value for the range would be 0 to the actual bar's width in pixels. Unfortunately, the width of the bar can change depending on the actual widget width, and querying the actual size of the bar on every value change is not a good solution either. Why 2048, then? The idea behind this value is the resolution of the screen. Full HD monitors normally have a width of 1920 pixels, so taking 2^11, aka 2048, ensures that the progress bar runs smoothly, even if it is fully expanded. So 2048 isn't the perfect number, but it is a fairly good compromise. If you are targeting smaller devices, choose a smaller, more appropriate number.

To be able to calculate the remaining time for the download to finish, you have to start a timer. In this case, use QElapsedTimer. After posting the request with QNetworkAccessManager::get(), start the timer by calling QElapsedTimer::start(). Assuming the timer is called m_timer, the calculation would be:

qint64 total = m_timer.elapsed() / progress;
qint64 remaining = (total - m_timer.elapsed()) / 1000;

QElapsedTimer::elapsed() returns the milliseconds counting from the moment the timer was started. This value divided by the progress fraction equals the estimated total download time. If you subtract the elapsed time and divide the result by 1000, you get the remaining time in seconds.
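Putting these pieces together, a sketch of a complete slot might look like the following; etaLabel is a hypothetical QLabel of our own, and m_timer is the QElapsedTimer that was started when the request was posted:

void SomeClass::downloadProgress(qint64 bytesReceived, qint64 bytesTotal)
{
    const qreal progress = (bytesTotal < 1) ? 1.0
        : static_cast<qreal>(bytesReceived) / bytesTotal;
    progressBar->setValue(progress * progressBar->maximum());

    if (progress > 0.0) {
        // estimated total duration = elapsed time divided by the fraction done
        const qint64 total = m_timer.elapsed() / progress;
        const qint64 remaining = (total - m_timer.elapsed()) / 1000;
        etaLabel->setText(QString("%1 s remaining").arg(remaining));
    }
}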
Using a proxy

If you would like to use a proxy, you first have to set up a QNetworkProxy. You have to define the type of the proxy with setType(); as arguments, you most likely want to pass QNetworkProxy::Socks5Proxy or QNetworkProxy::HttpProxy. Then, set up the host name with setHostName(), the user name with setUserName(), and the password with setPassword(). The last two properties are, of course, only needed if the proxy requires authentication. Once the proxy is set up, you can assign it to the access manager via QNetworkAccessManager::setProxy(). All new requests will then use that proxy.

Summary

In this article, you familiarized yourself with QNetworkAccessManager. This class is at the heart of your code whenever you want to download files from or upload files to the Internet. After having gone through the different signals that you can use to fetch errors, to get notified about new data, or to show the progress, you should now know everything you need on that topic.

Resources for Article: Further resources on this subject: GUI Components in Qt 5 [article] Code interlude – signals and slots [article] Configuring Your Operating System [article]

Replacing 2D Sprites with 3D Models

Packt
21 Sep 2015
21 min read
In this article by Maya Posch, author of the book Mastering AndEngine Game Development, we look at the following question: when using a game engine that limits itself to handling scenes in two dimensions, it seems obvious that you would use two-dimensional images, better known as sprites. After all, you won't need that third dimension, right? It is when you get into more advanced games and scenes that you notice that, with animations and also with the usage of existing assets, there are many advantages to using a three-dimensional model in a two-dimensional scene. In this article we will cover these topics:

Using 3D models directly with AndEngine
Loading of 3D models with an AndEngine game

(For more resources related to this topic, see here.)

Why 3D in a 2D game makes sense

The reasons we want to use 3D models in our 2D scene include the following:

Recycling of assets: You can use the same models as used for a 3D engine project, as well as countless others.
Broader base of talent: You'll be able to use a 3D modeler for your 2D game, since good sprite artists are rare.
Ease of animation: Good animation with sprites is hard. With 3D models, you can use various existing utilities to get smooth animations with ease.

As for the final impact this has on the game's looks, it's no silver bullet, but it should ease development somewhat. The quality of the models used, the animations produced, and the way they are integrated into a scene will determine the final look.

2D and 3D compared

In short:

2D sprite: defined using a 2D grid of pixels; offers only a single front view; resource-efficient.
3D model: defined using vertices in a 3D grid; rotatable to observe any desired side; resource-intensive.

A sprite is an image, or—if it's animated—a series of images. Within the boundaries of its resolution (for example, 64 x 64 pixels), the individual pixels make up the resulting image. This is a proven low-tech method, and it has been in use since the earliest video games. Even the first 3D games, such as Wolfenstein 3D and Doom, used sprites instead of models, as the former are easy to implement and require very few resources to render. With the available memory and processing capabilities of video consoles and personal computers until the later part of the 1990s, sprites were everywhere. It wasn't until the appearance of dedicated vertex graphics processors for consumer systems, from companies such as 3dfx, Nvidia, and ATI, that sprites were largely replaced by vertex (3D) models. This is not to say that 3D models were totally new by then, of course. The technology had been in commercial use since the 1970s, when it was used for movie CGI and engineering in particular.

In essence, both sprites and models are representations of the same object; it's just that one contains more information than the other. Once rendered on the screen, the resulting image contains roughly the same amount of data. The biggest difference between sprites and models is the total amount of information that they can contain. For a sprite, there is no side or back. A model, on the other hand, has information about every part of its surface. It can be rotated in front of a camera to obtain a rendering of each of those orientations. A sprite is thus equivalent to a single orientation of a model.

Dealing with the third dimension

The first question that is likely to come to mind, when it is suggested that 3D models be used in what is advertised as a 2D engine, is whether or not this will turn the game engine into a 3D engine. The brief answer here is "No."
The longer answer is that despite the presence of these models, the engine's camera and other features are not aware of this third dimension, and so they will not be able to deal with it. It's not unlike the ray-casting engine employed by titles such as Wolfenstein 3D, which always operated in a horizontal plane and, by default, was not capable of tilting the camera to look up or down. This does imply that AndEngine can be turned into a 3D engine if all of its classes are adapted to deal with another dimension. We're not going that far here, however. All that we are interested in right now is integrating 3D model support into the existing framework. For this, we need a number of things. The most important one is to be able to load these models. The second is to render them in such a way that we can use them within the AndEngine framework. As we explored earlier, the way of integrating 3D models into a 2D scene is by realizing that a model is just a very large collection of possible sprites. What we need is a camera so that we can orient it relatively to the model, similar to how the camera in a 3D engine works. We can then display the model from the orientation. Any further manipulations, such as scaling and scene-wide transformations, are performed on the model's camera configuration. The model is only manipulated to obtain a new orientation or frame of an animation. Setting up the environment We first need to load the model from our resources into the memory. For this, we require logic that fetches the file, parses it, and produces the output, which we can use in the following step of rendering an orientation of the model. To load the model, we can either write the logic for it ourselves or use an existing library. The latter approach is generally preferred, unless you have special needs that are not yet covered by an existing library. As we have no such special needs, we will use an existing library. Our choice here is the open Asset Import Library, or assimp for short. It can import numerous 3D model files in addition to other kinds of resource files, which we'll find useful later on. Assimp is written in C++, which means that we will be using it as a native library (.a or .so). To accomplish this, we first need to obtain its source code and compile it for Android. The main Assimp site can be found at http://assimp.sf.net/, and the Git repository is at https://github.com/assimp/assimp. From the latter, we obtain the current source for Assimp and put it into a folder called assimp. We can easily obtain the Assimp source by either downloading an archive file containing the full repository or by using the Git client (from http://git-scm.com/) and cloning the repository using the following command in an empty folder (the assimp folder mentioned): git clone https://github.com/assimp/assimp.git This will create a local copy of the remote Git repository. An advantage of this method is that we can easily keep our local copy up to date with the Assimp project's version simply by pulling any changes. As Assimp uses CMake for its build system, we will also need to obtain the CMake version for Android from http://code.google.com/p/android-cmake/. Android-Cmake contains the toolchain file that we will need to set up the cross-compilation from our host system to Android/ARM. Assuming that we put Android-cmake into the android-cmake folder, we can then find this toolchain file under android-cmake/toolchain/android.toolchain.cmake. 
We now need to either set the following environmental variable or make sure we have properly set it: ANDROID_NDK: This points to the root folder where the Android NDK is placed At this point, we can use either the command-line-based CMake tool or the cross-platform CMake GUI. We choose the latter for sheer convenience. Unless you are quite familiar with the working of CMake, the use of the GUI tool can make the experience significantly more intuitive, not to mention faster and more automated. Any commands we use in the GUI tool will, however, easily translate to the command-line tool. The first thing we do after opening the CMake GUI utility is specify the location of the source—the assimp source folder—and the output for the CMake-generated files. For this path to the latter, we will create a new folder called buildandroid inside the Assimp source folder and specify it as the build folder. We now need to set a variable inside the CMake GUI: CMAKE_MAKE_PROGRAM: This variable specifies the path to the Make executable. For Linux/BSD, use GNU Make or similar; for Windows, use MinGW Make. Next, we will want to click on the Configure button where we can set the type of Make files generated as well as specify the location of the toolchain file. For the Make file type, you will generally want to pick Unix makefiles on Linux or similar and MinGW makefiles on Windows. Next, pick the option that allows you to specify the cross-compile toolchain file and select this file inside the Android-cmake folder as detailed earlier. After this, the CMake GUI should output Configuring done. What has happened now is that the toolchain file that we linked to has configured CMake to use the NDK's compiler, which targets ARM as well as sets other configuration options. If we want, we can change some options here, such as the following: CMAKE_BUILD_TYPE: We can specify the type of build we want here, which includes the Debug and Release strings. ASSIMP_BUILD_STATIC_LIB: This is a boolean value. Setting it to true (or checking the box in the GUI) will generate only a library file for static linking and no .so file. Whether we want to build statically or not depends on our ultimate goals and distribution details. As static linking of external libraries is quite convenient and also reduces the total file size on the platform, which is generally already strapped for space, it seems obvious to link statically. The resulting .a library for a release build should be in the order of 16 megabytes, while a debug build is about 68 megabytes. When linking the final application, only those parts of the library that we'll use will be included in our application, shrinking the total file size once more. We are now ready to click on the Generate button, which should generate a Generating done output. If you get an error along the lines of Could not uniquely determine machine name for compiler, you should look at the paths used by CMake and check whether they exist. For the NDK toolchain on Windows, for example, the path may contain the windows part, whereas the NDK only has a folder called windows-x86_64. If we look into the buildandroid folder after this, we can see that CMake has generated a makefile and additional relevant files. We only need the central Make file in the buildandroid folder, however. In a terminal window, we navigate to this folder and execute the following command: make This should start the execution of the Make files that CMake generated and result in a proper build. 
At the end of this compilation sequence, we should have a library file in assimp/libs/armeabi-v7a/ called libassimp.a. For our project, we need this library and the Assimp include files. We can find them under assimp/include/assimp. We copy the folder with the include files to our project's /jni folder. The .a library is placed in the /jni folder as well. As this is a relatively simple NDK project, a simple file structure is fine. For a more complex project, we would want to have a separate /jni/libs folder, or something similar.

Importing a model

The Assimp library provides conversion tools for reading resource files, such as those for 3D mesh models, and provides a generic format on the application's side. For a 3D mesh file, Assimp provides us with an aiScene object that contains all the meshes and related data as described by the imported file. After importing a model, we need to read the sets of data that we require for rendering. These are the types of data:

Vertices (positions)
Normals
Texture mapping (UV)
Indices

Vertices might be obvious; they are the positions of points between which lines of basic geometric shapes are drawn. Usually, three vertices are used to form a triangular face, which forms the basic shape unit for a model. Normals indicate the orientation of the vertex. We have one normal per vertex. Texture mapping is provided using so-called UV coordinates. Each vertex has a UV coordinate if texture mapping information is provided with the model. Finally, indices are values provided per face, indicating which vertices should be used. This is essentially a compression technique, allowing the faces to define the vertices that they will use so that shared vertices have to be defined only once. During the drawing process, these indices are used by OpenGL to find the vertices to draw.

We start off our importer code by first creating a new file called assimpImporter.cpp in the /jni folder. We require the following includes:

#include "assimp/Importer.hpp"   // C++ importer interface
#include "assimp/scene.h"        // output data structure
#include "assimp/postprocess.h"  // post processing flags

// for native asset manager
#include <sys/types.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>

The Assimp includes give us access to the central Importer object, which we'll use for the actual import process, and the scene object for its output. The postprocess include contains various flags and presets for post-processing information to be used with Importer, such as triangulation. The remaining includes are meant to give us access to the Android Asset Manager API. The model file is stored inside the /assets folder, which, once packaged as an APK, is only accessible at runtime via this API, whether in Java or in native code.

Moving on, we will be using a single function in our native code to perform the importing and processing. As usual, we first have to declare a C-style interface so that when our native library gets compiled, our Java code can find the function in the library:

extern "C" {
    JNIEXPORT jboolean JNICALL Java_com_nyanko_andengineontour_MainActivity_getModelData(JNIEnv* env,
        jobject obj,
        jobject model,
        jobject assetManager,
        jstring filename);
};

The JNIEnv* parameter and the first jobject parameter are standard in an NDK/JNI function, with the former being a handy pointer to the current JVM environment, offering a variety of utility functions.
Our own parameters are the following:

model
assetManager
filename

The model parameter is a basic Java class with getters/setters for the arrays of vertex, normal, UV, and index data, of which we create an instance and pass a reference via the JNI. The next parameter is the Asset Manager instance that we created in the Java code. Finally, we obtain the name of the file containing our mesh, which we are supposed to load from the assets.

One possible gotcha in the naming of the function we're exporting is that of underscores. Within the function name, no underscores are allowed, as underscores are used to indicate to the NDK what the package and class names are. Our getModelData function gets parsed as being in the MainActivity class of the package com.nyanko.andengineontour. If we had tried to use, for example, get_model_data as the function name, it would have tried to find the function data in the model class of the com.nyanko.andengineontour.get package.

Next, we can begin the actual importing process. First, we define the aiScene instance that will contain the imported scene, the arrays for the imported data, and the Assimp Importer instance:

const aiScene* scene = 0;
jfloat* vertexArray;
jfloat* normalArray;
jfloat* uvArray;
jshort* indexArray;
Assimp::Importer importer;

In order to use a Java string in native code, we have to use the provided method to obtain a reference via the env parameter:

const char* utf8 = env->GetStringUTFChars(filename, 0);
if (!utf8) {
    return JNI_FALSE;
}

We then create a reference to the Asset Manager instance that we created in Java:

AAssetManager* mgr = AAssetManager_fromJava(env, assetManager);
if (!mgr) {
    return JNI_FALSE;
}

We use this to obtain a reference to the asset we're looking for, being the model file:

AAsset* asset = AAssetManager_open(mgr, utf8, AASSET_MODE_UNKNOWN);
if (!asset) {
    return JNI_FALSE;
}

Finally, we release our reference to the filename string before moving on to the next stage:

env->ReleaseStringUTFChars(filename, utf8);

With access to the asset, we can now read it into memory. While it is, in theory, possible to directly read a file from the assets, you would have to write a new I/O manager to allow Assimp to do this. This is because asset files, unfortunately, cannot be passed as a standard file handle reference on Android. For smaller models, however, we can read the entire file into memory and pass this data to the Assimp importer. First, we get the size of the asset, create an array to store its contents, and read the file into it:

int count = (int) AAsset_getLength(asset);
char buf[count + 1];
if (AAsset_read(asset, buf, count) != count) {
    return JNI_FALSE;
}

Finally, we close the asset reference:

AAsset_close(asset);

We are now done with the asset manager and can move on to the importing of this model data, assigning the result to the scene pointer we declared earlier:

scene = importer.ReadFileFromMemory(buf, count, aiProcessPreset_TargetRealtime_Fast);
if (!scene) {
    return JNI_FALSE;
}

The importer has a number of possible ways to read in the file data, as mentioned earlier. Here, we read from a memory buffer (buf) that we filled earlier, with the count parameter indicating the size in bytes. The last parameter of the import function holds the post-processing options. Here, we use the aiProcessPreset_TargetRealtime_Fast preset, which performs triangulation (converting non-triangle faces to triangles), among other sensible post-processing steps. The resulting aiScene object can contain multiple meshes. In a complete importer, you'd want to import all of them in a loop.
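A rough sketch of such a loop, where processMesh() is a hypothetical helper of our own that performs the per-mesh extraction shown in the rest of this section:

for (unsigned int i = 0; i < scene->mNumMeshes; i++) {
    // each aiMesh holds its own vertices, normals, UVs and faces
    aiMesh* mesh = scene->mMeshes[i];
    processMesh(mesh);
}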
We'll just look at importing the first mesh into the scene here. First, we get the mesh:

aiMesh* mesh = scene->mMeshes[0];

This aiMesh object contains all of the information on the data we're interested in. First, however, we need to create our arrays:

int vertexArraySize = mesh->mNumVertices * 3;
int normalArraySize = mesh->mNumVertices * 3;
int uvArraySize = mesh->mNumVertices * 2;
int indexArraySize = mesh->mNumFaces * 3;
vertexArray = new float[vertexArraySize];
normalArray = new float[normalArraySize];
uvArray = new float[uvArraySize];
indexArray = new jshort[indexArraySize];

For the vertex, normal, and texture mapping (UV) arrays, we use the number of vertices as defined in the aiMesh object, as normals and UVs are defined per vertex. The former two have three components (x, y, z), and the UVs have two (x, y). Finally, indices are defined per vertex of each face, so we use the face count from the mesh multiplied by three, the number of vertices per triangulated face. All arrays but the indices use floats for their components. The jshort type is a short integer type defined by the NDK. It's generally a good idea to use the NDK types for values that are sent to and from the Java side.

Reading the data from the aiMesh object into the arrays is fairly straightforward:

for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
    aiVector3D pos = mesh->mVertices[i];
    vertexArray[3 * i + 0] = pos.x;
    vertexArray[3 * i + 1] = pos.y;
    vertexArray[3 * i + 2] = pos.z;
    aiVector3D normal = mesh->mNormals[i];
    normalArray[3 * i + 0] = normal.x;
    normalArray[3 * i + 1] = normal.y;
    normalArray[3 * i + 2] = normal.z;
    aiVector3D uv = mesh->mTextureCoords[0][i];
    uvArray[2 * i + 0] = uv.x;
    uvArray[2 * i + 1] = uv.y;
}

for (unsigned int i = 0; i < mesh->mNumFaces; i++) {
    const aiFace& face = mesh->mFaces[i];
    indexArray[3 * i + 0] = face.mIndices[0];
    indexArray[3 * i + 1] = face.mIndices[1];
    indexArray[3 * i + 2] = face.mIndices[2];
}

To access the correct part of the array to write to, we use an index made up of the number of elements (floats or shorts) times the current iteration, plus an offset, to ensure that we always reach the next available slot. Doing things this way, instead of pointer incrementation, has the benefit that we do not have to reset the array pointer after we're done writing.

There! We have now read in all of the data that we want from the model. Next is arguably the hardest part of using the NDK—passing data via the JNI. This involves quite a lot of reference magic and type-matching, which can be rather annoying and can lead to confusing errors. To make things as easy as possible, we used the generic Java class instance so that we already had an object to put our data into from the native side. We still have to find the methods in this class instance, however, using what is essentially Java reflection:

jclass cls = env->GetObjectClass(model);
if (!cls) {
    return JNI_FALSE;
}

The first goal is to get a jclass reference. For this, we use the jobject model variable, as it already contains our instantiated class instance:

jmethodID setVA = env->GetMethodID(cls, "setVertexArray", "([F)V");
jmethodID setNA = env->GetMethodID(cls, "setNormalArray", "([F)V");
jmethodID setUA = env->GetMethodID(cls, "setUvArray", "([F)V");
jmethodID setIA = env->GetMethodID(cls, "setIndexArray", "([S)V");

We then obtain the method references for the setters in the class as jmethodID variables. The parameters of GetMethodID() are the class reference we created, the name of the method, and its signature: a float array ([F) parameter and a void (V) return type for the first three setters, and a short array ([S) parameter in the case of setIndexArray().
Finally, we create our native Java arrays to pass back via the JNI:

jfloatArray jvertexArray = env->NewFloatArray(vertexArraySize);
env->SetFloatArrayRegion(jvertexArray, 0, vertexArraySize, vertexArray);
jfloatArray jnormalArray = env->NewFloatArray(normalArraySize);
env->SetFloatArrayRegion(jnormalArray, 0, normalArraySize, normalArray);
jfloatArray juvArray = env->NewFloatArray(uvArraySize);
env->SetFloatArrayRegion(juvArray, 0, uvArraySize, uvArray);
jshortArray jindexArray = env->NewShortArray(indexArraySize);
env->SetShortArrayRegion(jindexArray, 0, indexArraySize, indexArray);

This code uses the env JNIEnv* reference to create the Java arrays and allocate memory for them in the JVM. Finally, we call the setter functions in the class to set our data. This essentially calls the methods on the Java class inside the JVM, providing the parameter data as Java types:

env->CallVoidMethod(model, setVA, jvertexArray);
env->CallVoidMethod(model, setNA, jnormalArray);
env->CallVoidMethod(model, setUA, juvArray);
env->CallVoidMethod(model, setIA, jindexArray);

We only have to return JNI_TRUE now, and we're done.

Building our library

To build our code, we write the Android.mk and Application.mk files. Next, we go to the top level of our project in a terminal window and execute the ndk-build command. This will compile the code and place a library in the /libs folder of our project, inside a folder that indicates the CPU architecture it was compiled for. For further details on the ndk-build tool, you can refer to the official documentation at https://developer.android.com/ndk/guides/ndk-build.html.

Our Android.mk file looks as follows:

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libassimp
LOCAL_SRC_FILES := libassimp.a
include $(PREBUILT_STATIC_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE := assimpImporter
#LOCAL_MODULE_FILENAME := assimpImporter
LOCAL_SRC_FILES := assimpImporter.cpp
LOCAL_LDLIBS := -landroid -lz -llog
LOCAL_STATIC_LIBRARIES := libassimp libgnustl_static
include $(BUILD_SHARED_LIBRARY)

The only things worthy of notice here are the inclusion of the Assimp library we compiled earlier and the use of the gnustl_static library. Since we only have a single native library in the project, we don't have to use the shared STL library, so we link the static one with our own library. Finally, we have the Application.mk file:

APP_PLATFORM := android-9
APP_STL := gnustl_static

There's not much to see here beyond the required specification of the STL runtime that we wish to use and the Android revision we are aiming for. After executing the build command, we are ready to build the actual application that performs the rendering of our model data.

Summary

With our code added, we can now load 3D models from a variety of formats, import them into our application, and create objects out of them that we can use together with AndEngine. As implemented now, we essentially have an embedded rendering pipeline for 3D assets that extends the basic AndEngine 2D rendering pipeline. This provides a solid platform for the next stages: extending these basics even further to provide the texturing, lighting, and physics effects that we need to create an actual game.

Resources for Article: Further resources on this subject: Cross-platform Building [article] Getting to Know LibGDX [article] Nodes [article]

Overview of Unreal Engine 4

Packt
18 Sep 2015
2 min read
In this article by Katax Emperor and Devin Sherry, authors of the book Unreal Engine Physics Essentials, we will discuss and evaluate basic 3D physics and mathematics concepts in an effort to gain a basic understanding of Unreal Engine 4 physics and real-world physics. To start with, we will discuss the units of measurement, what they are, and how they are used in Unreal Engine 4. In addition, we will cover the following topics:

The scientific notation
2D and 3D coordinate systems
Scalars and vectors
Newton's laws or Newtonian physics concepts
Forces and energy

For the purpose of this article, we will want to open Unreal Engine 4 and create a simple project using the First Person template by following these steps. (For more resources related to this topic, see here.)

Launching Unreal Engine 4

When we first open Unreal Engine 4, we will see the Unreal Engine Launcher, which contains a News tab, a Learn tab, a Marketplace tab, and a Library tab. As the first title suggests, the News tab provides you with the latest news from Epic Games, ranging from Marketplace Content releases to Unreal Dev Grant winners, Twitch Stream Recaps, and so on. The Learn tab provides you with numerous resources to learn more about Unreal Engine 4, such as Written Documentation, Video Tutorials, Community Wikis, Sample Game Projects, and Community Contributions. The Marketplace tab allows you to purchase content, such as FX, Weapons Packs, Blueprint Scripts, Environmental Assets, and so on, from the community and Epic Games. Lastly, the Library tab is where you can download the newest versions of Unreal Engine 4, open previously created projects, and manage your project files.

Let's start by first launching the Unreal Engine Launcher and choosing Launch from the Library tab. For the sake of consistency, we will use the latest version of the editor. At the time of writing this book, the version is 4.7.6. Next, we will select the New Project tab that appears at the top of the window, select the First Person project template with Starter Content, and name the project Unreal_PhyProject.

Summary

In this article, we had an overview of Unreal Engine 4 and how to launch it.

Resources for Article: Further resources on this subject: Exploring and Interacting with Materials using Blueprints [article] Unreal Development Toolkit: Level Design HQ [article] Configuration and Handy Tweaks for UDK [article]

Straight into Blender!

Packt
16 Sep 2015
18 min read
In this article by Romain Caudron and Pierre-Armand Nicq, the authors of Blender 3D By Example, you will start getting familiar with Blender. (For more resources related to this topic, see here.) Here, navigation within the interface will be presented. Its approach is atypical in comparison to other 3D software, such as Autodesk Maya® or Autodesk 3DS Max®, but once you get used to it, it will be extremely effective. If you have had the opportunity to use Blender before, it is important to note that the interface has gone through changes during the evolution of the software (especially since version 2.5). We will give you an idea of the possibilities that this wonderful free and open source software offers by presenting different workflows. You will learn some vocabulary and key concepts of 3D creation so that you do not get lost during your learning. Finally, you will have a brief introduction to the projects that we will carry out throughout this book. Let's dive into the third dimension! The following topics will be covered in this article:

Learning some theory and vocabulary
Navigating the 3D viewport
How to set up preferences
Using keyboard shortcuts to save time

An overview of the 3D workflow

Before learning how to navigate the Blender interface, we will give you a short introduction to the 3D workflow.

An anatomy of a 3D scene

To start learning about Blender, you need to understand some basic concepts. Don't worry, there is no need to have special knowledge of mathematics or programming to create beautiful 3D objects; it only requires curiosity. Some artistic notions are a plus. All the 3D elements that you will handle will evolve in a scene. This is a three-dimensional space with a coordinate system composed of three axes. In Blender, the x axis shows the width, the y axis shows the depth, and the z axis shows the height. Some 3D software packages use a different approach and reverse the y and z axes. These axes are color-coded; we advise you to remember them: the x axis in red, the y axis in green, and the z axis in blue. A scene may have the scale you want, and you can adjust it according to your needs. It looks like a film set for a movie. A scene can be populated by one or more cameras, lights, models, rigs, and many other elements. You will have control over their placement and their setup. A 3D scene looks like a film set.

A mesh is made of vertices, edges, and faces. The vertices are points in the scene space that are placed at the ends of the edges. They can be thought of as 3D points in space, with the edges connecting them. Connected together, the edges and the vertices form a face, also called a polygon. It is a geometric plane that has several sides, as its name suggests. In 3D software, a polygon consists of at least three sides. It is often essential to favor four-sided polygons during modeling for a better result. You will have an opportunity to see this in more detail later. Your actors and environments will be made of polygonal objects, more commonly called meshes. If you have played old 3D games, you've probably noticed the very angular outlines of the characters; this was, in fact, due to a low polygon count. We must clarify that the orientation of the faces is important for your polygonal object to be illuminated. Each face has a normal. This is a perpendicular vector that indicates the direction of the polygon. In order for the surface to be seen, it is necessary that the normals point to the outside of the model.
Except in special cases where the interior of a polygonal object is empty and invisible. You will be able to create your actors and environment as if you were handling virtual clay to give them the desired shape. Anatomy of a 3D Mesh To make your characters presentable, you will have to create their textures, which are 2D images that will be mapped to the 3D object. UV coordinates will be necessary in order to project the texture onto the mesh. Imagine an origami paper cube that you are going to unfold. This is roughly the same. These details are contained in a square space with the representation of the mesh laid flat. You can paint the texture of your model in your favorite software, even in Blender. This is the representation of the UV mapping process. The texture on the left is projected to the 3D model on the right. After this, you can give the illusion of life to your virtual actors by animating them. For this, you will need to place animation keys spaced on the timeline. If you change the state of the object between two keyframes, you will get the illusion of movement—animation. To move the characters, there is a very interesting process that uses a bone system, mimicking the mechanism of a real skeleton. Your polygon object will be then attached to the skeleton with a weight assigned to the vertices on each bone, so if you animate the bones, the mesh components will follow them. Once your characters, props, or environment are ready, you will be able to choose a focal length and an adequate framework for your camera. In order to light your scene, the choice of the render engine will be important for the kind of lamps to use, but usually there are three types of lamps as used in cinema productions. You will have to place them carefully. There are directional lights, which behave like the sun and produce hard shadows. There are omnidirectional lights, which will allow you to simulate diffuse light, illuminating everything around it and casting soft shadows. There are also spots that will simulate a conical shape. As in the film industry or other imaging creation fields, good lighting is a must-have in order to sell the final picture. Lighting is an expressive and narrative element that can magnify your models, or make them irrelevant. Once everything is in place, you are going to make a render. You will have a choice between a still image and an animated sequence. All the given parameters with the lights and materials will be calculated by the render engine. Some render engines offer an approach based on physics with rays that are launched from the camera. Cycles is a good example of this kind of engine and succeeds in producing very realistic renders. Others will have a much simpler approach, but none less technically based on visible elements from the camera. All of this is an overview of what you will be able to achieve while reading this book and following along with Blender. What can you do with Blender? In addition to being completely free and open source, Blender is a powerful tool that is stable and with an integral workflow that will allow you to understand your learning of 3D creation with ease. Software updates are very frequent; they fix bugs and, more importantly, add new features. You will not feel alone as Blender has an active and passionate community around it. There are many sites providing tutorials, and an official documentation detailing the features of Blender. 
You will be able to carry out everything you need in Blender, including things that are unusual for a 3D package, such as concept art creation, sculpting, or digital postproduction (which we have not yet discussed), including compositing and video editing. This is particularly interesting in order to push the aesthetics of your future images and movies to another level. It is also possible to make video games. Also, note that the Blender game engine is still largely unknown and underestimated. Although this aspect of the software is not as developed as in other, specialized game engines, it is possible to make good quality games without switching to another software package. You will realize that the possibilities are enormous, and you will be able to adjust your workflow to suit your needs and desires. Software of this type could intimidate you with its unusual handling and its complexity, but you'll realize that once you have learned the basics, it is really intuitive in many ways.

Getting used to the navigation in Blender

Now that you have been introduced to the 3D workflow, you will learn how to navigate the Blender interface, starting with the 3D viewport.

An introduction to the navigation of the 3D Viewport

It is time to learn how to navigate in the Blender viewport. The viewport represents the 3D space in which you will spend most of your time. As we previously said, it is defined by three axes (x, y, and z). Its main goal is to display the 3D scene from a certain point of view while you're working on it.

The Blender 3D Viewport

When you are navigating through this, it will be as if you were a movie director, but with special powers that allow you to film from any point of view. The navigation is defined by three main actions: pan, orbit, and zoom. The pan action means that you will move horizontally or vertically according to your current point of view. If we connect this to our cameraman metaphor, it's as if you were moving laterally to the left or to the right, or moving up or down with a camera crane. By default, in Blender, the shortcut to pan around is to press the Shift button and the Middle Mouse Button (MMB), and drag the mouse. The orbit action means that you will rotate around the point that you are focusing on. For instance, imagine that you are filming a romantic scene of two actors and you rotate around them in a circular manner. In this case, the couple will be the main focus. In a 3D scene, your main focus would be a 3D character, a light, or any other 3D object. To orbit around in the Blender viewport, the default shortcut is to press the MMB and then drag the mouse. The last action that we mentioned is zoom. The zoom action is straightforward. It is the action of moving our point of view closer to an element or further away from an element. In Blender, you can zoom in by scrolling your mouse wheel up and zoom out by scrolling your mouse wheel down. To save time and gain precision, Blender proposes some predefined points of view. For instance, you can quickly switch to a top view by pressing numpad 7, you can go to a front view by pressing numpad 1, you can go to a side view by pressing numpad 3, and last but not least, numpad 0 allows you to go to the Camera view, which represents the final render point of view of your scene. You can also press numpad 5 in order to activate or deactivate the orthographic mode. The orthographic mode removes perspective. It is very useful if you want to be precise. It feels as if you were manipulating a blueprint of the 3D scene.
The difference between Perspective (left) and Orthographic (right) If you are lost, you can always look at the top left corner of the viewport in order to see in which view you are, and whether the orthographic mode is on or off. Try to learn by heart all these shortcuts; you will use them a lot. With repetition, this will become a habit. What are editors? In Blender, the interface is divided into subpanels that we call editors; even the menu bar where you save your file is an editor. Each editor gives you access to tools categorized by their functionality. You have already used an editor, the 3D view. Now it's time to learn more about the editor's anatomy. In this picture, you can see how Blender is divided into editors The anatomy of an editor There are 17 different editors in Blender and they all have the same base. An editor is composed of a Header, which is a menu that groups different options related to the editor. The first button of the header is to switch between other editors. For instance, you can replace the 3D view by the UV Image Editor by clicking on it. You can easily change its place by right-clicking on it in an empty space and by choosing the Flip to Top/Bottom option. The header can be hidden by selecting its top edge and by pulling it down. If you want to bring it back, press the little plus sign at the far right. The header of the 3D viewport. The first button is for switching between editors, and also, we can choose between different options in the menu In some editors, you can get access to hidden panels that give you other options. For instance, in the 3D view you can press the T key or the N key to toggle them on or off. As in the header, if a sub panel of an editor is hidden, you can click on the little plus sign to display it again. Split, Join, and Detach Blender offers you the possibility of creating editors where you want. To do this, you need to right-click on the border of an editor and select Split Area in order to choose where to separate them. Right-click on the border of an editor to split it into two editors The current editor will then be split in two editors. Now you can switch to any other editor that you desire by clicking on the first button of the header bar. If you want to merge two editors into one, you can right-click on the border that separates them and select the Join Area button. You will then have to click on the editor that you want to erase by pointing the arrow on it. Use the Join Area option to join two editors together You then have to choose which editor you want to remove by pointing and clicking on it. We are going to see another method of splitting editors that is nice. You can drag the top right corner of an editor and another editor will magically appear! If you want to join back two editors together, you will have to drag the top right corner in the direction of the editor that you want to remove. The last manipulation can be tricky at first, but with a little bit of practice, you will also be able to do it with closed eyes! The top right corner of an editor If you have multiple monitors, it could be a great idea to detach some editors in a separated window. With this, you could gain space and won't be overwhelmed by a condensed interface. In order to do this, you will need to press the Shift key and drag the top right corner of the editor with the Left Mouse Button (LMB). Some useful layout presets Blender offers you many predefined layouts that depend on the context of your creation. 
For instance, you can select the Animation preset in order to have all the major animation tools, or you can use the UV Editing preset in order to prepare your texturing. To switch between the presets, go to the top of the interface (in the Info editor, near the Help menu) and click on the drop-down menu. If you want, you can add new presets by clicking on the plus sign or delete presets by clicking on the X button. If you want to rename a preset, simply enter a new name in the corresponding text field. The following screenshot shows the Layout presets drop-down menu: The layout presets drop-down menu Setting up your preferences When we start learning new software, it's good to know how to set up your preferences. Blender has a large number of options, but we will show you just the basic ones in order to change the default navigation style or to add new tools that we call add-ons in Blender. An introduction to the Preferences window The preferences window can be opened by navigating to the File menu and selecting the User Preferences option. If you want, you can use the Ctrl + Alt + U shortcut or the Cmd key and comma key on a Mac system. There are seven tabs in this window as shown here: The different tabs that compose the Preferences window A nice thing that Blender offers is the ability to change its default theme. For this, you can go to the Themes tab and choose between different presets or even change the aspect of each interface elements. Another useful setting to change is the number of undo that is 32 steps, by default. To change this number, go to the Editing tab and under the Undo label, slide the Steps to the desired value. Customizing the default navigation style We will now show you how to use a different style of navigation in the viewport. In many other 3D programs, such as Autodesk Maya®, you can use the Alt key in order to navigate in the 3D view. In order to activate this in Blender, navigate to the Input tab, and under the Mouse section, check the Emulate 3 Button Mouse option. Now if you want to use this navigation style in the viewport, you can press Alt and LMB to orbit around, Ctrl + Alt and the LMB to zoom, and Alt + Shift and the LMB to pan. Remember these shortcuts as they will be very useful when we enter the sculpting mode while using a pen tablet. The Emulate 3 Button Mouse checkbox is shown as follows: The Emulate 3 Button Mouse will be very useful when sculpting using a pen tablet Another useful setting is the Emulate Numpad. It allows you to use the numeric keys that are above the QWERTY keys in addition to the numpad keys. This is very useful for changing the views if you have a laptop without a numpad, or if you want to improve your workflow speed. The Emulate Numpad allows you to use the numeric keys above the QWERTY keys in order to switch views or toggle the perspective on or off Improving Blender with add-ons If you want even more tools, you can install what is called as add-ons on your copy of Blender. Add-ons, also called Plugins or Scripts, are Python files with the .py extension. By default, Blender comes with many disabled add-ons ordered by category. We will now activate two very useful add-ons that will improve our speed while modeling. First, go to the Add-ons tab, and click on the Mesh button in the category list at the left. Here, you will see all the default mesh add-ons available. Click on the check-boxes at the left of the Mesh: F2 and Mesh: LoopTools subpanels in order to activate these add-ons. 
If you know the name of the add-on you want to activate, you can try to find it by typing its name in the search bar. There are many websites where you can download free add-ons, starting with the official Blender website. If you want to install a script, you can click on the Install from File button, and you will be asked to select the corresponding Python file.

The official Blender Add-ons Catalog: You can find it at http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts.

The following screenshot shows the steps for activating the add-ons:

Steps for Add-ons activation

Where are the add-ons on the hard disk? All the scripts are placed in the addons folder that is located wherever you have installed Blender on your hard disk. This folder will usually be at Your Installation Path\Blender Foundation\Blender\2.VersionNumber\scripts\addons. If you find it easier, you can drop the Python files here instead of installing them through the standard method. Don't forget to click on the Save User Settings button in order to save all your changes!

Summary

In this article, you have learned the steps behind 3D creation. You know what a mesh is and what it is composed of. Then, you were introduced to navigation in Blender by manipulating the 3D viewport and going through the user preference menu. In the later sections, you configured some preferences and extended Blender by activating some add-ons.

Resources for Article: Further resources on this subject: Editing the UV islands [article] Working with Blender [article] Designing Objects for 3D Printing [article]
Read more

article-image-using-3d-objects
Packt
15 Sep 2015
11 min read
Save for later

Using 3D Objects

Packt
15 Sep 2015
11 min read
In this article by Liz Staley, author of the book Manga Studio EX 5 Cookbook, you will learn the following topics:

Adding existing 3D objects to a page
Importing a 3D object from another program
Manipulating 3D objects
Adjusting the 3D camera

(For more resources related to this topic, see here.)

One of the features of Manga Studio 5 that people ask me about all the time is 3D objects. Manga Studio 5 comes with a set of 3D assets: characters, poses, and a few backgrounds and small objects. These can be added directly to your page, posed and positioned, and used in your artwork. While I usually use these 3D poses as a reference (much like the wooden drawing dolls that you can find in your local craft store), you can conceivably use 3D characters and imported 3D assets from programs such as Poser to create entire comics. Let's get into the third dimension now, and you will learn how to use these assets in Manga Studio 5.

Adding existing 3D objects to a page

Manga Studio 5 comes with many 3D objects present in the materials library. This is the fastest way to get started with the 3D features.

Getting ready

You must have a page open in order to add a 3D object. Open a page of any size to start the recipes covered here.

How to do it…

The following steps will show us how to add an existing 3D material to a page:

Open the materials library. This can be done by going to Window | Material | Material [3D]. Select a category of 3D material from the list on the left-hand side of the library, or scroll down the Material library preview window to browse all the available materials. Select a material to add to the page by clicking on it to highlight it. In this recipe, we are choosing the School girl B 02 character material. It is highlighted in the following screenshot:

Hold the left mouse button down on the selected material and drag it onto the page, releasing the mouse button once the cursor is over the page, to display the material. Alternatively, you can click on the Paste selected material to canvas icon at the bottom of the Material library menu. The selected 3D material will be added to the page. The School girl B 02 material is shown in this default character pose:

Importing a 3D object from another program

You don't have to use only the default 3D models included in Manga Studio 5. The process of importing a model is very easy. The types of files that can be imported into Manga Studio 5 are c2fc, c2fr, fbx, lwo, lws, obj, 6kt, and 6kh.

Getting ready

You must have a page open in order to add a 3D object. Open a page of any size to start this recipe. For this recipe, you will also need a model to import into the program. These can be found on numerous websites, including my.smithmicro.com, under the Poser tab.

How to do it…

The following steps will walk us through the simple process of importing a 3D model into Manga Studio 5:

Open the location where the 3D model you wish to import has been saved. If you have downloaded the 3D model from the Internet, it may be in the Downloads folder on your PC. Arrange the windows on your computer screen so that the location of the 3D model and Manga Studio 5 are both visible, as shown in the following screenshot:

Click on the 3D model file and hold down the mouse button. While still holding down the mouse button, drag the 3D model file into the Manga Studio 5 window. Release the mouse button. The 3D model will be imported into the open page, as shown in this screenshot:

Manipulating 3D objects

We've learned how to add a 3D object to our project.
But how can you pose it the way you want it to look for your scene? With a little time and patience, you'll be posing characters like a pro in no time!

Getting ready

Follow the directions in the Adding existing 3D objects to a page recipe before following the steps in this recipe.

How to do it…

This recipe will walk us through moving a character into a custom pose:

Be sure that the Object tool under Operation is selected. Click on the 3D object to manipulate, if it is not already selected. To move the entire object up, down, left, or right, hover the mouse cursor over the fourth icon in the top-left corner of the box around the selected object. Click and hold the left mouse button; then, drag to move the object in the desired direction. The following screenshot shows the location of the icon used to move the object up, down, left, or right. It is highlighted in pink and also shown over the 3D character.

If your models are moving very slowly, you may need to allocate more memory to Manga Studio EX 5. This can be done by going to File | Preferences | Performance.

To rotate the object along the y axis (or the horizon line), hover the mouse cursor over the fifth icon in the top-left corner of the box around the selected object. Click on it, hold the left mouse button, and drag. The object will rotate along the y axis, as shown in this screenshot:

To rotate the object along the x axis (straight up and down vertically), hover the mouse cursor over the sixth icon in the top-left corner of the box around the selected object. Click and drag. The object will rotate vertically around its center, as shown in the following screenshot:

To move the object back and forth in 3D space, hover the mouse cursor over the seventh icon in the top-left corner of the box around the selected object. Click and hold the left mouse button; then drag it. The icon is shown as follows, highlighted in pink, and the character has been moved back—away from the camera:

To move one part of a character, click on the part to be moved. For this recipe, we'll move the character's arm down. To do this, we'll click on the upper arm portion of the character to select it. When a portion of the character is selected, a sphere with three lines circling it will appear. Each of these three lines represents one axis (x, y, and z) and controls the rotation of that portion of the character. This set of lines is shown here:

Use the lines of the sphere to rotate the part of the character to the desired position. For a more precise movement, the scroll wheel on the mouse can be used as well. In the following screenshot, the arm has been rotated so that it is down at the character's side:

Do you keep accidentally moving a part of the model that you don't want to move? Put the cursor over the part of the model that you'd like to keep in place, and then right-click. A blue box will appear on that part of the model, and the piece will be locked in place. Right-click again to unlock the part.

How it works…

In this recipe, we covered how to move and rotate a 3D object and portions of 3D characters. This is the start of being able to create your own custom poses and save them for reuse. It's also the way to pose the drawing doll models in Manga Studio to make pose references for your comic artwork. In the 3D-Body Type folder of the materials library, you will find Female and Male drawing dolls that can be posed just as the premade characters can. These generic dolls are great for getting that difficult pose down.
Then use the next recipe, Adjusting the 3D camera, to get the angle you need, and draw away! The following screenshot shows a drawing doll 3D object that has been posed in a custom stance.

The preceding pose was relatively easy to achieve. The figure was rotated along the x axis, and then the head and neck joints were both rotated individually so that the doll looked toward the camera. Both its arms were rotated down and then inward. The hands were posed. The ankle joints were selected and the feet were rotated so that the toes were pointed. Then the knee of the near leg was rotated to bend it. The hip of the near leg was also rotated so that the leg was lifted slightly, giving a "cutesy" look to the pose.

Having trouble posing a character's hands exactly the way you want them? Then open the Sub Tool Detail palette and click on Pose in the left-hand-side menu. In this area, you will find a menu with a picture of a hand. This is a quick controller for the fingers. Select the hand that you wish to pose. Along the bottom of the menu are some preset hand poses for things such as closed fists. At the top of each finger on this menu is an icon that looks like chain links. Click on one of them to lock the finger that it is over and prevent it from moving. The triangle area over the large blue hand symbol controls how open and closed the fingers are. You will find this menu much easier than rotating each joint individually—I'm sure!

Adjusting the 3D camera

In addition to manipulating 3D objects or characters, you can also change the position of the 3D camera to get the composition that you desire for your work. Think of the 3D camera just like a camera on a movie set. It can be rotated or moved around to frame the actors (3D characters) and scenery just the way the director wants!

Not sure whether you moved the character or the camera? Take a look at the ground plane, which is the "checkerboard" floor area underneath the characters and objects. If the character is standing straight up and down on the ground plane, it means that the camera was moved. If the character is floating above or below the ground plane, or partway through it, it means that the character or object was moved.

Getting ready

Follow the directions given in the Adding existing 3D objects to a page recipe before following the steps in this recipe.

How to do it…

To rotate the camera around an object (the object will remain stationary), hover the mouse cursor over the first icon in the top-left corner of the box around the selected object. Click and hold the left mouse button, and then drag. The icon and the camera rotation are shown in the following screenshot:

To move the camera up, down, left, or right, hover the mouse cursor over the second icon in the top-left corner of the box around the selected object. Click and hold the left mouse button, and then drag. The icon and camera movement are shown in this screenshot:

To move the camera back and forth in the 3D space, hover the mouse cursor over the third icon in the top-left corner of the box around the selected object. Again, click and hold the left mouse button, and then drag. The next screenshot shows the zoom icon in pink at the top and the overlay on top of the character. Note how the hand of the character and the top of the head are now out of the page, since the camera is closer to her and she appears larger on the canvas.

Summary

In this article, we have studied how to add existing 3D objects to a page in Manga Studio 5.
We then saw how to import a 3D object from another program, and how to manipulate these 3D objects along the coordinate axes using the tools available in Manga Studio 5. Finally, we learned how to position the 3D camera by rotating it around an object.

Resources for Article: Further resources on this subject: Ink Slingers [article] Getting Familiar with the Story Features [article] Animating capabilities of Cinema 4D [article]
Read more

article-image-hello-pong
Packt
15 Sep 2015
19 min read
Save for later

Hello, Pong!

Packt
15 Sep 2015
19 min read
In this article written by Alejandro Rodas de Paz and Joseph Howse, authors of the book Python Game Programming By Example, we learn how game development is a highly evolving software development process, and how it has improved continuously since the appearance of the first video games in the 1950s. Nowadays, there is a wide variety of platforms and engines, and this process has been facilitated by the arrival of open source tools.

Python is a free high-level programming language with a design intended for writing readable and concise programs. Thanks to its philosophy, we can create our own games from scratch with just a few lines of code. There are plenty of game frameworks for Python, but for our first game, we will see how we can develop it without any third-party dependency.

We will be covering the following topics:

Installation of the required software
Overview of Tkinter, a GUI library included in the Python standard library
Applying object-oriented programming to encapsulate the logic of our game
Basic collision and input detection
Drawing game objects without external assets

(For more resources related to this topic, see here.)

Installing Python

You will need Python 3.4 with Tcl/Tk 8.6 installed on your computer. The latest branch of this version is Python 3.4.3, which can be downloaded from https://www.python.org/downloads/. Here, you can find the official binaries for the most popular platforms, such as Windows and Mac OS. During the installation process, make sure that you check the Tcl/Tk option to include the library.

The code examples included in the book have been tested against Windows 8 and Mac, but can be run on Linux without any modification. Note that some distributions may require you to install the appropriate package for Python 3. For instance, on Ubuntu, you need to install the python3-tk package.

Once you have Python installed, you can verify the version by opening Command Prompt or a terminal and executing these lines:

$ python --version
Python 3.4.3

After this check, you should be able to start a simple GUI program:

$ python
>>> from tkinter import Tk
>>> root = Tk()
>>> root.title('Hello, world!')
>>> root.mainloop()

These statements create a window, change its title, and run indefinitely until the window is closed. Do not close the new window that is displayed when the second statement is executed. Otherwise, it will raise an error because the application has been destroyed. We will use this library in our first game, and the complete documentation of the module can be found at https://docs.python.org/3/library/tkinter.html.

Tkinter and Python 2

The Tkinter module was renamed to tkinter in Python 3. If you have Python 2 installed, simply change the import statement with Tkinter in uppercase, and the program should run as expected.

Overview of Breakout

The Breakout game starts with a paddle and a ball at the bottom of the screen and some rows of bricks at the top. The player must eliminate all the bricks by hitting them with the ball, which rebounds against the borders of the screen, the bricks, and the bottom paddle. As in Pong, the player controls the horizontal movement of the paddle. The player starts the game with three lives, and if she or he misses the ball's rebound and it reaches the bottom border of the screen, one life is lost. The game is over when all the bricks are destroyed, or when the player loses all their lives.
This is a screenshot of the final version of our game:

Basic GUI layout

We will start our game by creating a top-level window as in the simple program we ran previously. However, this time, we will use two nested widgets: a container frame and the canvas where the game objects will be drawn, as shown here:

With Tkinter, this can easily be achieved using the following code:

import tkinter as tk

lives = 3

root = tk.Tk()
frame = tk.Frame(root)
canvas = tk.Canvas(frame, width=600, height=400, bg='#aaaaff')
frame.pack()
canvas.pack()
root.title('Hello, Pong!')
root.mainloop()

Through the tk alias, we access the classes defined in the tkinter module, such as Tk, Frame, and Canvas. Notice the first argument of each constructor call, which indicates the widget's parent container, and the required pack() calls for displaying the widgets on their parent container. This is not necessary for the Tk instance, since it is the root window.

However, this approach is not exactly object-oriented, since we use global variables and do not define any new class to represent our new data structures. If the code base grows, this can lead to poorly organized projects and highly coupled code. We can start encapsulating the pieces of our game in this way:

import tkinter as tk

class Game(tk.Frame):
    def __init__(self, master):
        super(Game, self).__init__(master)
        self.lives = 3
        self.width = 610
        self.height = 400
        self.canvas = tk.Canvas(self, bg='#aaaaff',
                                width=self.width,
                                height=self.height)
        self.canvas.pack()
        self.pack()

if __name__ == '__main__':
    root = tk.Tk()
    root.title('Hello, Pong!')
    game = Game(root)
    game.mainloop()

Our new type, called Game, inherits from the Frame Tkinter class. The class Game(tk.Frame): definition specifies the name of the class and the superclass between parentheses. If you are new to object-oriented programming with Python, this syntax may not sound familiar. In our first look at classes, the most important concepts are the __init__ method and the self variable:

The __init__ method is a special method that is invoked when a new class instance is created. Here, we set the object attributes, such as the width, the height, and the canvas widget. We also call the parent class initialization with the super(Game, self).__init__(master) statement, so the initial state of the Frame is properly initialized.

The self variable refers to the object, and it should be the first argument of a method if you want to access the object instance. It is not strictly a language keyword, but the Python convention is to call it self so that other Python programmers won't be confused about the meaning of the variable.

In the preceding snippet, we introduced the if __name__ == '__main__' condition, which is present in many Python scripts. This snippet checks the name of the current module that is being executed, and will prevent starting the main loop when this module is imported from another script. This block is placed at the end of the script, since it requires that the Game class be defined.

New- and old-style classes

You may see the MySuperClass.__init__(self, arguments) syntax in some Python 2 examples, instead of the super call. This is the old-style syntax, the only flavor available up to Python 2.1, and is maintained in Python 2 for backward compatibility. The super(MyClass, self).__init__(arguments) syntax is the new style introduced in Python 2.2. It is the preferred approach, and we will use it throughout this book.
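To make the difference between the two syntaxes concrete, here is a brief side-by-side sketch (the class names are invented for illustration); both forms perform the same initialization of the parent Frame:

import tkinter as tk

# Old-style syntax, kept in Python 2 for backward compatibility:
class OldStyleGame(tk.Frame):
    def __init__(self, master):
        tk.Frame.__init__(self, master)  # explicit superclass call

# New-style syntax, introduced in Python 2.2 and used in this book:
class NewStyleGame(tk.Frame):
    def __init__(self, master):
        super(NewStyleGame, self).__init__(master)

The new style avoids repeating the superclass name inside the method body, which makes later refactoring of the class hierarchy less error-prone.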
Since no external assets are needed, you can place the set of code files given along with the book (Chapter1_01.py) in any directory and execute it from the python command line by running the file. The main loop will run indefinitely until you click on the close button of the window, or if you kill the process from the command line. This is the starting point of our game, so let's start diving into the Canvas widget and see how we can draw and animate items in it.

Diving into the Canvas widget

So far, we have the window set up and now we can start drawing items on the canvas. The canvas widget is two-dimensional and uses the Cartesian coordinate system. The origin—the (0, 0) ordered pair—is placed at the top-left corner, and the axis can be represented as shown in the following screenshot:

Keeping this layout in mind, we can use two methods of the Canvas widget to draw the paddle, the bricks, and the ball:

canvas.create_rectangle(x0, y0, x1, y1, **options)
canvas.create_oval(x0, y0, x1, y1, **options)

Each of these calls returns an integer, which identifies the item handle. This reference will be used later to manipulate the position of the item and its options. The **options syntax represents a key/value pair of additional arguments that can be passed to the method call. In our case, we will use the fill and the tags options. The x0 and y0 coordinates indicate the top-left corner of the item, and x1 and y1 indicate the bottom-right corner.

For instance, we can call canvas.create_rectangle(250, 300, 330, 320, fill='blue', tags='paddle') to create a player's paddle, where:

The top-left corner is at the coordinates (250, 300).
The bottom-right corner is at the coordinates (330, 320).
The fill='blue' means that the background color of the item is blue.
The tags='paddle' means that the item is tagged as a paddle. This string will be useful later to find items in the canvas with specific tags.

We will invoke other Canvas methods to manipulate the items and retrieve widget information. This table gives the references to the Canvas widget methods that will be used here:

canvas.coords(item): Returns the coordinates of the bounding box of an item.
canvas.move(item, x, y): Moves an item by a horizontal and a vertical offset.
canvas.delete(item): Deletes an item from the canvas.
canvas.winfo_width(): Retrieves the canvas width.
canvas.itemconfig(item, **options): Changes the options of an item, such as the fill color or its tags.
canvas.bind(event, callback): Binds an input event to the execution of a function. The callback handler receives one parameter of the type Tkinter event.
canvas.unbind(event): Unbinds the input event so that there is no callback function executed when the event occurs.
canvas.create_text(*position, **opts): Draws text on the canvas. The position and the options arguments are similar to the ones passed in canvas.create_rectangle and canvas.create_oval.
canvas.find_withtag(tag): Returns the items with a specific tag.
canvas.find_overlapping(*position): Returns the items that overlap or are completely enclosed by a given rectangle.

You can check out a complete reference of the event syntax as well as some practical examples at http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm#events.
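As a quick, self-contained illustration of the binding methods from the preceding list (this sketch is not part of the book's game code), the following program prints the keyboard and mouse events it receives; note the focus_set() call, without which the canvas would not receive key presses:

import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=600, height=400, bg='#aaaaff')
canvas.pack()
canvas.focus_set()  # give the canvas keyboard focus

def on_key(event):
    # event.keysym holds the symbolic name of the key, e.g. 'Left'
    print('Key pressed:', event.keysym)

canvas.bind('<Key>', on_key)  # fires on any key press
canvas.bind('<Button-1>', lambda e: print('Click at', e.x, e.y))  # left click
root.mainloop()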
Basic game objects

Before we start drawing all our game items, let's define a base class with the functionality that they will have in common—storing a reference to the canvas and its underlying canvas item, getting information about its position, and deleting the item from the canvas:

class GameObject(object):
    def __init__(self, canvas, item):
        self.canvas = canvas
        self.item = item

    def get_position(self):
        return self.canvas.coords(self.item)

    def move(self, x, y):
        self.canvas.move(self.item, x, y)

    def delete(self):
        self.canvas.delete(self.item)

Assuming that we have created a canvas widget as shown in our previous code samples, a basic usage of this class and its attributes would be like this:

item = canvas.create_rectangle(10, 10, 100, 80, fill='green')
game_object = GameObject(canvas, item)  # create a new instance
print(game_object.get_position())       # [10, 10, 100, 80]
game_object.move(20, -10)
print(game_object.get_position())       # [30, 0, 120, 70]
game_object.delete()

In this example, we created a green rectangle and a GameObject instance with the resulting item. Then we retrieved the position of the item within the canvas, moved it, and calculated the position again. Finally, we deleted the underlying item. The methods that the GameObject class offers will be reused in the subclasses that we will see later, so this abstraction avoids unnecessary code duplication. Now that you have learned how to work with this basic class, we can define separate child classes for the ball, the paddle, and the bricks.

The Ball class

The Ball class will store information about the speed, direction, and radius of the ball. We will simplify the ball's movement, since the direction vector will always be one of the following:

[1, 1] if the ball is moving towards the bottom-right corner
[-1, -1] if the ball is moving towards the top-left corner
[1, -1] if the ball is moving towards the top-right corner
[-1, 1] if the ball is moving towards the bottom-left corner

Representation of the possible direction vectors

Therefore, by changing the sign of one of the vector components, we will change the ball's direction by 90 degrees. This will happen when the ball bounces off the canvas border, or when it hits a brick or the player's paddle:

class Ball(GameObject):
    def __init__(self, canvas, x, y):
        self.radius = 10
        self.direction = [1, -1]
        self.speed = 10
        item = canvas.create_oval(x - self.radius, y - self.radius,
                                  x + self.radius, y + self.radius,
                                  fill='white')
        super(Ball, self).__init__(canvas, item)

For now, the object initialization is enough to understand the attributes that the class has. We will cover the ball rebound logic later, when the other game objects are defined and placed in the game canvas.

The Paddle class

The Paddle class represents the player's paddle and has two attributes to store the width and height of the paddle.
A set_ball method will be used to store a reference to the ball, which is moved along with the paddle before the game starts:

class Paddle(GameObject):
    def __init__(self, canvas, x, y):
        self.width = 80
        self.height = 10
        self.ball = None
        item = canvas.create_rectangle(x - self.width / 2,
                                       y - self.height / 2,
                                       x + self.width / 2,
                                       y + self.height / 2,
                                       fill='blue')
        super(Paddle, self).__init__(canvas, item)

    def set_ball(self, ball):
        self.ball = ball

    def move(self, offset):
        coords = self.get_position()
        width = self.canvas.winfo_width()
        if coords[0] + offset >= 0 and coords[2] + offset <= width:
            super(Paddle, self).move(offset, 0)
            if self.ball is not None:
                self.ball.move(offset, 0)

The move method is responsible for the horizontal movement of the paddle. Step by step, the following is the logic behind this method:

The self.get_position() call calculates the current coordinates of the paddle.
The self.canvas.winfo_width() call retrieves the canvas width.
If both the minimum and maximum x-axis coordinates plus the offset produced by the movement are inside the boundaries of the canvas, this is what happens: the super(Paddle, self).move(offset, 0) statement calls the method with the same name in the Paddle class's parent class, which moves the underlying canvas item; and if the paddle still has a reference to the ball (this happens when the game has not been started), the ball is moved as well.

This method will be bound to the input keys so that the player can use them to control the paddle's movement. We will see later how we can use Tkinter to process the input key events. For now, let's move on to the implementation of the last one of our game's components.

The Brick class

Each brick in our game will be an instance of the Brick class. This class contains the logic that is executed when the bricks are hit and destroyed:

class Brick(GameObject):
    COLORS = {1: '#999999', 2: '#555555', 3: '#222222'}

    def __init__(self, canvas, x, y, hits):
        self.width = 75
        self.height = 20
        self.hits = hits
        color = Brick.COLORS[hits]
        item = canvas.create_rectangle(x - self.width / 2,
                                       y - self.height / 2,
                                       x + self.width / 2,
                                       y + self.height / 2,
                                       fill=color, tags='brick')
        super(Brick, self).__init__(canvas, item)

    def hit(self):
        self.hits -= 1
        if self.hits == 0:
            self.delete()
        else:
            self.canvas.itemconfig(self.item,
                                   fill=Brick.COLORS[self.hits])

As you may have noticed, the __init__ method is very similar to the one in the Paddle class, since it draws a rectangle and stores the width and the height of the shape. In this case, the value of the tags option passed as a keyword argument is 'brick'. With this tag, we can check whether the game is over when the number of remaining items with this tag is zero.

Another difference from the Paddle class is the hit method and the attributes it uses. The class variable called COLORS is a dictionary—a data structure that contains key/value pairs—mapping the number of hits that the brick has left to the corresponding color. When a brick is hit, the method execution occurs as follows: the number of hits of the brick instance is decreased by 1; if the number of hits remaining is 0, self.delete() deletes the brick from the canvas; otherwise, self.canvas.itemconfig() changes the color of the brick.

For instance, if we call this method for a brick with two hits left, we will decrease the counter by 1 and the new color will be #999999, which is the value of Brick.COLORS[1]. If the same brick is hit again, the number of remaining hits will become zero and the item will be deleted.
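To see this flow in action, here is a tiny usage sketch (assuming a canvas and the classes defined above are already available; the coordinates are arbitrary):

# Create a brick that takes three hits, centered at (100, 50)
brick = Brick(canvas, 100, 50, 3)   # drawn with fill '#222222'
brick.hit()                         # 2 hits left, recolored to '#555555'
brick.hit()                         # 1 hit left, recolored to '#999999'
brick.hit()                         # 0 hits left, item deleted from canvas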
Adding the Breakout items

Now that the organization of our items is separated into these top-level classes, we can extend the __init__ method of our Game class:

class Game(tk.Frame):
    def __init__(self, master):
        super(Game, self).__init__(master)
        self.lives = 3
        self.width = 610
        self.height = 400
        self.canvas = tk.Canvas(self, bg='#aaaaff',
                                width=self.width,
                                height=self.height)
        self.canvas.pack()
        self.pack()

        self.items = {}
        self.ball = None
        self.paddle = Paddle(self.canvas, self.width/2, 326)
        self.items[self.paddle.item] = self.paddle
        for x in range(5, self.width - 5, 75):
            self.add_brick(x + 37.5, 50, 2)
            self.add_brick(x + 37.5, 70, 1)
            self.add_brick(x + 37.5, 90, 1)

        self.hud = None
        self.setup_game()
        self.canvas.focus_set()
        self.canvas.bind('<Left>', lambda _: self.paddle.move(-10))
        self.canvas.bind('<Right>', lambda _: self.paddle.move(10))

    def setup_game(self):
        self.add_ball()
        self.update_lives_text()
        self.text = self.draw_text(300, 200, 'Press Space to start')
        self.canvas.bind('<space>', lambda _: self.start_game())

This initialization is more complex than what we had at the beginning of the article. We can divide it into two sections:

Game object instantiation, and their insertion into the self.items dictionary. This attribute contains all the canvas items that can collide with the ball, so we add only the bricks and the player's paddle to it. The keys are the references to the canvas items, and the values are the corresponding game objects. We will use this attribute later in the collision check, when we will have the colliding items and will need to fetch the game object.

Key input binding, via the Canvas widget. The canvas.focus_set() call sets the focus on the canvas, so the input events are directly bound to this widget. Then we bind the left and right keys to the paddle's move() method and the spacebar to trigger the game start. Thanks to the lambda construct, we can define anonymous functions as event handlers. Since the callback argument of the bind method is a function that receives a Tkinter event as an argument, we define a lambda that ignores the first parameter—lambda _: <expression>.

Our new add_ball and add_brick methods are used to create game objects and perform a basic initialization. While the first one creates a new ball on top of the player's paddle, the second one is a shorthand way of adding a Brick instance:

def add_ball(self):
    if self.ball is not None:
        self.ball.delete()
    paddle_coords = self.paddle.get_position()
    x = (paddle_coords[0] + paddle_coords[2]) * 0.5
    self.ball = Ball(self.canvas, x, 310)
    self.paddle.set_ball(self.ball)

def add_brick(self, x, y, hits):
    brick = Brick(self.canvas, x, y, hits)
    self.items[brick.item] = brick

The draw_text method will be used to display text messages in the canvas. The underlying item created with canvas.create_text() is returned, and it can be used to modify the information:

def draw_text(self, x, y, text, size='40'):
    font = ('Helvetica', size)
    return self.canvas.create_text(x, y, text=text, font=font)

The update_lives_text method displays the number of lives left and changes its text if the message is already displayed.
It is called when the game is initialized—this is when the text is drawn for the first time—and it is also invoked when the player misses a ball rebound:

def update_lives_text(self):
    text = 'Lives: %s' % self.lives
    if self.hud is None:
        self.hud = self.draw_text(50, 20, text, 15)
    else:
        self.canvas.itemconfig(self.hud, text=text)

We leave start_game unimplemented for now, since it triggers the game loop, and this logic will be added in the next section. Since Python requires a code block for each method, we use the pass statement. This does not execute any operation, and it can be used as a placeholder when a statement is required syntactically:

def start_game(self):
    pass

If you execute this script, it will display a Tkinter window like the one shown in the following figure. At this point, we can move the paddle horizontally, so we are ready to start the game and hit some bricks!

Summary

We covered the basics of the control flow and the class syntax. We used Tkinter widgets, especially the Canvas widget and its methods, to achieve the functionality needed to develop a game based on collisions and simple input detection. Our Breakout game can be customized as we want. Feel free to change the color defaults, the speed of the ball, or the number of rows of bricks. However, GUI libraries are very limited, and more complex frameworks are required to achieve a wider range of capabilities.

Resources for Article: Further resources on this subject: Introspecting Maya, Python, and PyMEL [article] Understanding the Python regex engine [article] Ten IPython essentials [article]
Read more
article-image-using-mannequin-editor
Packt
07 Sep 2015
14 min read
Save for later

Using the Mannequin editor

Packt
07 Sep 2015
14 min read
In this article, Richard Marcoux, Chris Goodswen, Riham Toulan, and Sam Howels, the authors of the book CRYENGINE Game Development Blueprints, will take us through animation in CRYENGINE. In the past, animation states were handled by a tool called Animation Graph. This was akin to Flow Graph, but it handled animations and transitions for all animated entities, and unfortunately reduced any transitions or variation in the animations to a spaghetti graph. Thankfully, we now have Mannequin! This is an animation system where the methods by which animation states are handled are all dealt with behind the scenes—all we need to take care of are the animations themselves. In Mannequin, an animation and its associated data is known as a fragment. Any extra detail that we might want to add (such as animation variation, styles, or effects) can be very simply layered on top of the fragment in the Mannequin editor. While complex and detailed results can be achieved with all manner of first and third person animation in Mannequin, for level design we're only really interested in the basic fragments we want our NPCs to play as part of flavor and readability within level scripting. Before we look at generating some new fragments, we'll start off with looking at how we can add detail to an existing fragment—triggering a flare particle as part of our flare firing animation.

(For more resources related to this topic, see here.)

Getting familiar with the interface

First things first, let's open Mannequin! Go to View | Open View Pane | Mannequin Editor. This is initially quite a busy view pane, so let's get our bearings on what's important to our work. You may want to drag and adjust the sizes of the windows to better see the information displayed. In the top left, we have the Fragments window. This lists all the fragments in the game that pertain to the currently loaded preview. Let's look at what this means for us when editing fragment entries.

The preview workflow

A preview is a complete list of fragments that pertains to a certain type of animation. For example, the default preview loaded is sdk_playerpreview1p.xml, which contains all the first person fragments used in the SDK. You can browse the list of fragments in this window to get an idea of what this means—everything from climbing ladders to sprinting is defined as a fragment. However, we're interested in the NPC animations. To change the currently loaded preview, go to File | Load Preview Setup and pick sdk_humanpreview.xml. This is the XML file that contains all the third person animations for human characters in the SDK. Once this is loaded, your fragment list should update to display a larger list of available fragments usable by AI. This is shown in the following screenshot:

If you don't want to perform this step every time you load Mannequin, you are able to change the default preview setup for the editor in the preferences. Go to Tools | Preferences | Mannequin | General and change the Default Preview File setting to the XML of your choice.

Working with fragments

Now that we have the correct preview populating our fragment list, let's find our flare fragment. In the box with <FragmentID Filter> in it, type flare and press Enter. This will filter down the list, leaving you with the fireFlare fragment we used earlier. You'll see that the fragment is composed of a tree. Expanding this tree one level brings us to the tag. A tag in Mannequin is a method of choosing animations within a fragment based on a game condition.
For example, in the player preview we were in earlier, the begin_reload fragment has two tags: one for SDKRifle and one for SDKShotgun. Depending on the weapon selected by the player, it applies a different tag and consequently picks a different animation. This allows animators to group together animations of the same type that are required in different situations. For our fireFlare fragment, as there are no differing scenarios of this type, it simply has a <default> tag. This is shown in the following screenshot:

Inside this tag, we can see there's one fragment entry: Option 1. These are the possible variations that Mannequin will choose from when the fragment is chosen and the required tags are applied. We only have one variation within fireFlare, but other fragments in the human preview (for example, IA_talkFunny) offer extra entries to add variety to AI actions. To load this entry for further editing, double-click Option 1. Let's get to adding that flare!

Adding effects to fragments

After loading the fragment entry, the Fragment Editor window has now updated. This is the main window in the center of Mannequin and comprises a preview window to view the animation and a list of all the available layers and details we can add. The main piece of information currently visible here is the animation itself, shown in AnimLayer under FullBody3P:

At the bottom of the Fragment Editor window, some buttons are available that are useful for editing and previewing the fragment. These include a play/pause toggle (along with a playspeed dropdown) and a jump to start button. You are also able to zoom in and out of the timeline with the mouse wheel, and scrub the timeline by click-dragging the red timeline marker around the fragment. These controls are similar to the Track View cinematics tool and should be familiar if you've utilized this in the past.

Procedural layers

Here, we are able to add our particle effect to the animation fragment. To do this, we need to add ProcLayer (procedural layer) to the FullBody3P section. The ProcLayer runs parallel to AnimLayer and is where any extra layers of detail that fragments can contain are specified, from removing character collision to attaching props. For our purposes, we need to add a particle effect clip. To do this, double-click on the timeline within ProcLayer. This will spawn a blank proc clip for us to categorize. Select this clip, and Procedural Clip Properties on the right-hand side of the Fragment Editor window will be populated with a list of parameters. All we need to do now is change the type of this clip from None to ParticleEffect. This is editable in the dropdown Type list. This should present us with a ParticleEffect proc clip visible in the ProcLayer alongside our animation, as shown in the following screenshot:

Now that we have our proc clip loaded with the correct type, we need to specify the effect. The SDK has a couple of flare effects in the particle libraries (searchable by going to RollupBar | Objects Tab | Particle Entity); I'm going to pick explosions.flare.a. To apply this, select the proc clip and paste your chosen effect name into the Effect parameter. If you now scrub through the fragment, you should see the particle effect trigger! However, currently the effect fires from the base of the character in the wrong direction. We need to align the effect to the weapon of the enemy. Thankfully, the ParticleEffect proc clip already has support for this in its properties. In the Reference Bone parameter, enter weapon_bone and hit Enter.
The weapon_bone is the generic bone name that characters' weapons are attached to, and as such it is a good bet for any cases where we require effects or objects to be placed in a character's weapon position. Scrubbing through the fragment again, the effect will now fire from the weapon hand of the character. If we ever need to find out bone names, there are a few ways to access this information within the editor. Hovering over the character in the Mannequin previewer will display the bone name. Alternatively, in the Character Editor (we'll go into the details later), you can scroll down in the Rollup window on the right-hand side, expand Debug Options, and tick ShowJointNames. This will display the names of all bones over the character in the previewer.

With the particle attached, we can now ensure that the timing of the particle effect matches the animation. To do this, you can click-and-drag the proc clip around the timeline—around 1.5 seconds seems to match the timings for this animation. With the effect timed correctly, we now have a fully functioning fireFlare fragment! Try testing out the setup we made earlier with this change. We should now have a far more polished looking event.

The previewer in Mannequin shares the same viewport controls as the perspective view in Sandbox. You can use this to zoom in and look around to gain a better view of the animation preview.

The final thing we need to do is save our changes to the Mannequin databases! To do this, go to File | Save Changes. When the list of changed files is displayed, press Save. Mannequin will then tell you that you're editing data from the .pak files. Click Yes to this prompt and your data will be saved to your project. The resulting changed database files will appear in GameSDK\Animations\Mannequin\ADB, and they should be distributed with your project if you package it for release.

Adding a new fragment

Now that we know how to add some effects feedback to existing fragments, let's look at making a new fragment to use as part of our scripting. This is useful to know if you have animators on your project and you want to get their assets in game quickly to hook up to your content. In our humble SDK project, we can effectively simulate this, as there are a few animations that ship with the SDK that have no corresponding fragment. Now, we'll see how to browse the raw animation assets themselves, before adding them to a brand new Mannequin fragment.

The Character Editor window

Let's open the Character Editor. Apart from being used for editing characters and their attachments in the engine, this is a really handy way to browse the library of animation assets available and preview them in a viewport. To open the Character Editor, go to View | Open View Pane | Character Editor.

On some machines, the expense of rendering two scenes at once (that is, the main viewport and the viewports in the Character Editor or Mannequin Editor) can cause both to drop to a fairly sluggish frame rate. If you experience this, either close one of the other view panes you have on the screen, or if you have it tabbed to other panes, simply select another tab. You can also open the Mannequin Editor or the Character Editor without a level loaded, which allows for better performance and minimal load times to edit content.

Similar to Mannequin, the Character Editor will initially look quite overwhelming. The primary aspects to focus on are the Animations window in the top-left corner and the Preview viewport in the middle.
In the Filter option in the Animations window, we can enter search terms to narrow down the list of animations. An example of an animation that hasn't yet been turned into a Mannequin fragment is the stand_tac_callreinforcements_nw_3p_01 animation. You can find this by entering reinforcements into the search filter:

Selecting this animation will update the debug character in the Character Editor viewport so that they start to play the chosen animation. You can see this specific animation is a one-shot wave and might be useful as another trigger for enemy reinforcements further in our scripting. Let's turn this into a fragment! We need to make sure we don't forget this animation though; right-click on the animation and click Copy. This will copy the name to the clipboard for future reference in Mannequin. The animation can also be dragged and dropped into Mannequin manually to achieve the same result.

Creating fragment entries

With our animation located, let's get back to Mannequin and set up our fragment. Ensuring that we're still in the sdk_humanpreview.xml preview setup, take another look at the Fragments window in the top left of Mannequin. You'll see there are two rows of buttons: the top row controls creation and editing of fragment entries (the animation options we looked at earlier). The second row covers adding and editing of fragment IDs themselves: the top-level fragment name. This is where we need to start.

Press the New ID button on the second row of buttons to bring up the New FragmentID Name dialog. Here, we need to add a name that conforms to the prefixes we discussed earlier. As this is an action, make sure you add IA_ (interest action) as the prefix for the name you choose; otherwise, it won't appear in the fragment browser in the Flow Graph. Once our fragment is named, we'll be presented with the Mannequin FragmentID Editor. For the most part, we won't need to worry about these options, but it's useful to be aware of what they control (and don't worry, these can be edited after creation).

The main parameters to note are the Scope options. These control which elements of the character are controlled by the fragment. By default, all these boxes are ticked, which means that our fragment will take control of each ticked aspect of the character. An example of where we might want to change this would be the character LookAt control. If we want to get an NPC to look at another entity in the world as part of a scripted sequence (using the AI:LookAt Flow Graph node), it would not be possible with the current settings. This is because the LookPose and Looking scopes are controlled by the fragment. If we wanted to control this via Flow Graph, these would need to be unticked, freeing up the look scopes for scripted control.

With scopes covered, press OK at the bottom of the dialog box to continue adding our callReinforcements animation! We now have a fragment ID created in our Fragments window, but it has no entries! With our new fragment selected, press the New button on the first row of buttons to add an entry. This will automatically add itself under the <default> tag, which is the desired behavior, as our fragment will be tag-agnostic for the moment. This has now created a blank fragment in the Fragment Editor.

Adding the AnimLayer

This is where our animation from earlier comes in. Right-click on the FullBody3P track in the editor and go to Add Track | AnimLayer. As we did previously with our effect on ProcLayer, double-click on AnimLayer to add a new clip.
This will create our new Anim Clip, with some red None markup to signify the lack of animation. Now, all we need to do is select the clip, go to the Anim Clip Properties, and paste in our animation name by double-clicking the Animation parameter.

The Animation parameter has a helpful browser that will allow you to search for animations—simply click on the browse icon in the parameter entry section. It lacks the previewer found in the Character Editor, but it can be a quick way to find animation candidates by name within Mannequin.

With our animation finally loaded into a fragment, we should now have a fragment setup that displays a valid animation name on the AnimLayer. Clicking on Play will now play our reinforcements wave animation!

Once we save our changes, we simply load our fragment in an AISequence:Animation node in Flow Graph. This can be done by repeating the steps outlined earlier. This time, our new fragment should appear in the fragment dialog.

Summary

Mannequin is a very powerful tool to help with animations in CRYENGINE. We have looked at how to get started with it.

Resources for Article: Further resources on this subject: Making an entity multiplayer-ready [article] Creating and Utilizing Custom Entities [article] CryENGINE 3: Breaking Ground with Sandbox [article]
Read more

article-image-getting-know-libgdx
Packt
25 Aug 2015
15 min read
Save for later

Getting to Know LibGDX

Packt
25 Aug 2015
15 min read
In this article written by James Cook, author of the book LibGDX Game Development By Example, the author states, "Creating games is fun, and that is why I like to do it". The process of taking an idea for a game and actually delivering it has changed over the years. Back in the 1980s, it was quite common that the top games around were created by either a single person or a very small team. However, anyone who was lucky enough (in my opinion) to see games grow from being quite a simplistic affair to the complex beasts that today's AAA titles are, must have also seen the resources needed for these grow with them. The advent of mobile gaming reduced the barrier for entry; once again, smaller teams could produce a game that could be a worldwide hit!

Now, there are games of all genres and complexities available across major gaming platforms. Due to this explosion in the number of games being made, new general-purpose game-making tools appeared in the community. Previously, in-house teams built and maintained very specific game engines for their games; however, this led to a lot of reinventing the wheel. I hate to think how much time I would have lost if, for each of my games, I had to start from scratch. Now, instead of worrying about how to display a 2D image on the screen, I can focus on creating that fun player experience I have in my head. My tool of choice? LibGDX.

(For more resources related to this topic, see here.)

Before I dive into what LibGDX is, here is how LibGDX describes itself. From the LibGDX wiki—https://github.com/libgdx/libgdx/wiki/Introduction:

LibGDX is a cross-platform game and visualization development framework.

So what does that actually mean? What can LibGDX do for us game-makers that allows us to focus purely on the gameplay? To begin with, LibGDX is Java-based. This means you can reuse a lot, and I mean a lot, of tools that already exist in the Java world. I can imagine a few of you right now must be thinking, "But Java? For a game? I thought Java is supposed to be slow". To a certain extent, this can be true; after all, Java is still an interpreted language that runs in a virtual machine. However, to combat the need for the best possible performance, LibGDX takes advantage of the Java Native Interface (JNI) to implement native platform code and negate the performance disadvantage.

One of the beauties of LibGDX is that it allows you to go as low-level as you would like. Direct access to filesystems, input devices, audio devices, and OpenGL (via OpenGL ES 2.0/3.0) is provided. However, the added edge LibGDX gives is that with the APIs that are built on top of these low-level facilities, displaying an image on the screen nowadays takes only a few lines of code. A full list of the available features for LibGDX can be found here: http://libgdx.badlogicgames.com/features.html

I am happy to wait here while you go and check it out. Impressive list of features, no? So, how cross-platform is this gaming platform? This is probably what you are thinking now. Well, as mentioned before, games are being delivered on many different platforms, be it consoles, PCs, or mobiles. LibGDX currently supports the following platforms:

Windows
Linux
Mac OS X
Android
BlackBerry
iOS
HTML/WebGL

That is a pretty comprehensive list. Being able to write your game once and have it delivered to all the preceding platforms is pretty powerful. At this point, I would like to mention that LibGDX is completely free and open source.
You can go to https://github.com/libGDX/libGDX and check out all the code in all its glory. If you would like to understand how some part of the code works, it is all there to read; or, if you find a bug, you can make a fix and offer it back to the community. Along with the source code, there are plenty of tests and demos showcasing what LibGDX can do, and more importantly, how to do it. Check out the wiki for more information:

https://github.com/libgdx/libgdx/wiki/Running-Demos
https://github.com/libgdx/libgdx/wiki/Running-Tests

"Who else uses LibGDX?" is quite a common query that comes up during a LibGDX discussion. Well, it turns out just about everyone has used it. Google released a game called "Ingress" (https://play.google.com/store/apps/details?id=com.nianticproject.ingress&hl=en) on the Play Store in 2013, which uses LibGDX. Even Intel (https://software.intel.com/en-us/articles/getting-started-with-libgdx-a-cross-platform-game-development-framework) has shown an interest in LibGDX. Finally, I would like to end this section with another quote from the LibGDX website:

LibGDX aims to be a framework rather than an engine, acknowledging that there is no one-size-fits-all solution. Instead we give you powerful abstractions that let you chose how you want to write your game or application.

libGDX wiki—https://github.com/libgdx/libgdx/wiki/Introduction

This means that you can use the available tools if you want to; if not, you can dive deeper into the framework and create your own!

Setting up LibGDX

We know by now that LibGDX is this awesome tool for creating games across many platforms with the ability to iterate on our code at superfast speeds. But how do we start using it? Thankfully, some helpful people have made the setup process quite easy. However, before we get to that part, we need to ensure that we have the prerequisites installed, which are as follows:

Java Development Kit 7+ (at the time of writing, version 8 is available)
Android SDK

Not that big a list! Follow the given steps:

First things first. Go to http://www.oracle.com/technetwork/java/javase/downloads/index.html. Download and install the latest JDK if you haven't already done so. Oracle developers are wonderful people and have provided a useful installation guide, which you can refer to if you are unsure on how to install the JDK, at http://docs.oracle.com/javase/8/docs/technotes/guides/install/install_overview.html. Once you have installed the JDK, open up the command line and run the following command:

java -version

If it is installed correctly, you should get an output similar to this:

If you get an error while doing this, consult the Oracle installation documentation and try again.

One final touch would be to ensure that we have JAVA_HOME configured. On the command line, perform the following:

For Windows: set JAVA_HOME = C:\Path\To\JDK
For Linux and Mac OS X: export JAVA_HOME = /Path/To/JDK/

Next, on to the Android SDK. At the time of writing, Android Studio has just been released. Android Studio is an IDE offered by Google that is built upon the JetBrains IntelliJ IDEA Java IDE. If you feel comfortable using Android Studio as your IDE, and as a developer who has used IntelliJ for the last 5 years, I suggest that you at least give it a go.
You can download Android Studio + Android SDK in a bundle from here:

http://developer.android.com/sdk/index.html

Alternatively, if you plan to use a different IDE (Eclipse or NetBeans, for example), you can just install the tools from the following URL:

http://developer.android.com/sdk/index.html#Other

You can find the installation instructions here:

https://developer.android.com/sdk/installing/index.html?pkg=tools

However, I would like to point out that the official IDE for Android is now Android Studio and no longer Eclipse with ADT. For the sake of simplicity, we will only focus on making games for desktops for the greater part of this article. We will look at exporting to Android and iOS later on.

Once the Android SDK is installed, it would be well worth running the SDK Manager application to finalize the setup. If you opt to use Android Studio, you can access this from the SDK Manager icon in the toolbar. Alternatively, you can also access it as follows:

On Windows: Double-click on the SDK Manager.exe file at the root of the Android SDK directory.
On Mac/Linux: Open a terminal, navigate to the tools/ directory in the location where the Android SDK is installed, and then execute android sdk.

The following screen might appear:

As a minimum configuration, select:

Android SDK Tools
Android SDK Platform-tools
Android SDK Build-tools (latest available version)
Latest version of SDK Platform

Let them download and install the selected configuration. Then that's it! Well, not really. We just need to set the ANDROID_HOME environment variable. To do this, we can open up a command line and run the following command:

On Windows: set ANDROID_HOME=C:/Path/To/Your/Android/Sdk
On Linux and Mac OS X: export ANDROID_HOME=/Path/To/Your/Android/Sdk

Phew! With that done, we can now move on to the best part—creating our first ever LibGDX game!

Creating a project

Follow the given steps to create your own project:

As mentioned earlier, LibGDX comes with a really useful project setup tool. Download the application from here:

http://libgdx.badlogicgames.com/download.html

At the time of writing, it is the big red "Download Setup App" button in the middle of your screen. Once downloaded, open the command line and navigate to the location of the application. You will notice that it is a JAR file type. This means we need to use Java to run it. Running this will open the setup UI:

Before we hit the Generate button, let's just take a look at what we are creating here:

Name: This is the name of our game.
Package: This is the Java package our game code will be developed in.
Game class: This parameter sets the name of our game class, where the magic happens!
Destination: This is the project's directory. You can change this to any location of your choice.
Android SDK: This is the location of the SDK. If this isn't set correctly, we can change it here. Going forward, it might be worth setting the ANDROID_HOME environment variable.

Next is the version of LibGDX we want to use. At the time of writing, the version is 1.5.4.

Now, let's move on to the subprojects. As we are only interested in desktops at the moment, let's deselect the others. Finally, we come to extensions. Feel free to uncheck any that are checked. We won't be needing any of them at this point in time. For more information on available extensions, check out the LibGDX wiki (https://github.com/libgdx/libgdx/wiki).

Once all is set, let's hit the Generate button! There is a little window at the bottom of the UI that will now spring to life.
Phew! With that done, we can now move on to the best part: creating our first ever LibGDX game!

Creating a project

Follow the given steps to create your own project:

1. As mentioned earlier, LibGDX comes with a really useful project setup tool. Download the application from http://libgdx.badlogicgames.com/download.html. At the time of writing, it is the big red "Download Setup App" button in the middle of the screen.

2. Once downloaded, open the command line and navigate to the location of the application. You will notice that it is a JAR file, which means we need to use Java to run it, for example:

java -jar gdx-setup.jar

(The file is typically named gdx-setup.jar; adjust the name to match your download.) Running this will open the setup UI.

3. Before we hit the Generate button, let's take a look at what we are creating here:

- Name: the name of our game.
- Package: the Java package our game code will be developed in.
- Game class: the name of our game class, where the magic happens!
- Destination: the project's directory. You can change this to any location of your choice.
- Android SDK: the location of the SDK. If this isn't set correctly, we can change it here. Going forward, it might be worth setting the ANDROID_HOME environment variable instead.

4. Next is the version of LibGDX we want to use. At the time of writing, the version is 1.5.4.

5. Now, let's move on to the subprojects. As we are only interested in desktops at the moment, deselect the others.

6. Finally, we come to extensions. Feel free to uncheck any that are checked; we won't be needing any of them at this point in time. For more information on the available extensions, check out the LibGDX wiki (https://github.com/libgdx/libgdx/wiki).

7. Once all is set, hit the Generate button! A little window at the bottom of the UI will spring to life and show the setup progress as it downloads the necessary setup files.

8. Once complete, open the command line, navigate to the project directory, and run your preferred tree command (on Windows, it is just tree) to inspect the generated directory layout.

The astute among you will now ask, "What is this Gradle?", and quite rightly so; I haven't mentioned it yet, although it appears twice in our project directory.

What is Gradle?

Gradle is an excellent build tool, and LibGDX leverages its abilities to look after the dependencies, the build process, and IDE integration. This is especially useful if you are going to be working in a team with a shared code base; even if you are not, the dependency management aspect alone is worth it. Anyone who isn't familiar with dependency management may be used to downloading Java JARs manually and placing them in a libs folder, only to run into problems later when the JAR they just downloaded needs another JAR, and so on. Dependency management takes care of this for you, and, even better, the LibGDX setup application has already described the dependencies you need to run.

Within LibGDX, there is something called the Gradle wrapper. This is essentially the Gradle application embedded into the project, which makes our project portable: if we want someone else to run it, they can, without installing Gradle themselves.

I guess this leads us to the question: how do we use Gradle to run our project? In the LibGDX wiki (https://github.com/libgdx/libgdx/wiki/Gradle-on-the-Commandline), you will find a comprehensive list of commands that can be used while developing your game. For now, we will only cover the desktop project.

What you may not have noticed is that the setup application actually generates a very simple "Hello World" game for us, so we have something we can run from the command line right away. Let's go for it! On the command line, run the following:

On Windows: gradlew desktop:run
On Linux and Mac OS X: ./gradlew desktop:run

Don't worry if it suddenly starts downloading dependencies; this is our dependency management in action! All those JARs and native binaries are being downloaded and put on to classpaths. We don't have to care, though: we are here to create games! After the command prompt has finished downloading the files, it should launch the "Hello World" game. Awesome! You have just launched your very first LibGDX game! Before we get too excited, though, you will notice that not much actually happens here: it is just a red screen with the Bad Logic Games logo. I think now is the time to look at the code!
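Before we open the code, a couple of other Gradle tasks are worth knowing about. This is a hedged list based on the wiki page linked above; the exact set of tasks depends on which subprojects you generated and on your LibGDX version (prefix the commands with ./ on Linux and Mac OS X):

gradlew desktop:dist   (packages the desktop game as a runnable JAR, by default under desktop/build/libs)
gradlew clean          (removes previous build output)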
Importing a project

So far, we have launched the "Hello World" game via the command line without seeing a single line of code. Let's change that. To do this, I will use IntelliJ IDEA. If you are using Android Studio, the screens will look familiar; if you are using Eclipse, I am sure you will be able to see the common concepts. To begin with, we need to generate the appropriate IDE project files. Again, Gradle does the heavy lifting for us. On the command line, run the following (pick the one that applies):

On Windows: gradlew idea or gradlew eclipse
On Linux and Mac OS X: ./gradlew idea or ./gradlew eclipse

Gradle will now have generated the project files. Open your IDE of choice and open the project. If you require more help, check out the following wiki pages:

https://github.com/libgdx/libgdx/wiki/Gradle-and-Eclipse
https://github.com/libgdx/libgdx/wiki/Gradle-and-Intellij-IDEA
https://github.com/libgdx/libgdx/wiki/Gradle-and-NetBeans

Once the project is open, have a poke around and look at some of the files. Our first port of call should be the build.gradle file in the root of the project. Here, you will see that the layout of our project is defined and the dependencies we require are listed. It is a good time to mention that, going forward, there will be new releases of LibGDX, and to update our project to the latest version, all we need to do is update the following property:

gdxVersion = '1.6.4'

Now run your game, and Gradle will kick in and download everything for you!

Next, we should look for our game class, the one we specified in the setup application: MyGdxGame.java. Find it, open it, and be in awe of how simple it is to display that red screen and the Bad Logic Games logo. In fact, here is the code in full:

public class MyGdxGame extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;

    @Override
    public void create () {
        batch = new SpriteBatch();
        img = new Texture("badlogic.jpg");
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        batch.draw(img, 0, 0);
        batch.end();
    }
}

Essentially, when the create() method is called, it sets up a SpriteBatch and creates a texture from the given JPEG file. Then, the render() method, which is called on every iteration of the game loop, clears the screen to red and draws the texture at the (0, 0) coordinate location.

Finally, we will look at the DesktopLauncher class, which is responsible for running the game in the desktop environment. Let's take a look at the following code snippet:

public class DesktopLauncher {
    public static void main (String[] arg) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        new LwjglApplication(new MyGdxGame(), config);
    }
}

The preceding code shows how simple it is. We have a configuration object that defines how our desktop application runs, setting things like the screen resolution and framerate, among others. In fact, this is an excellent time to utilize the open source aspect of LibGDX: in your IDE, click through to the LwjglApplicationConfiguration class, and you will see all the properties that can be tweaked, along with notes on what they mean. The instance of the LwjglApplicationConfiguration class is then passed to the constructor of another class, LwjglApplication, along with an instance of our MyGdxGame class. Finally, those who have worked with Java a lot in the past will recognize that it is all wrapped in a main method, the traditional entry point for a Java application.

That is all that is needed to create and launch a desktop-only LibGDX game.
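To make that configuration step concrete, here is a hedged sketch of a tweaked DesktopLauncher. The fields used (title, width, height, resizable) are the ones I would expect to find on LwjglApplicationConfiguration in LibGDX 1.x; treat the exact names as assumptions and verify them by clicking through to the class in your IDE, as suggested above:

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class DesktopLauncher {
    public static void main (String[] arg) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.title = "My GDX Game";   // window title (assumed field name)
        config.width = 800;             // window width in pixels
        config.height = 480;            // window height in pixels
        config.resizable = false;       // lock the window size (assumed field name)
        new LwjglApplication(new MyGdxGame(), config);
    }
}

Changing these values and re-running gradlew desktop:run is a quick way to confirm that the configuration object really does drive the desktop window.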
Summary

In this article, we looked at what LibGDX is about, how to create a standard project, how to run it from the command line, and how to import it into your preferred IDE, ready for development.