
How-To Tutorials - 3D Game Development

115 Articles

Harrison Ferrone explains why C# is the preferred programming language for building games in Unity

Sugandha Lahoti
16 Dec 2019
6 min read
C# is one of the most popular programming languages used to create games in the Unity game engine. Experiences (games, AR/VR apps, etc.) built with Unity have reached nearly 3 billion devices worldwide and were installed 24 billion times in the last 12 months. We spoke to Harrison Ferrone, software engineer, game developer, creative technologist, and author of the book "Learning C# by Developing Games with Unity 2019". We talked about why C# is used for game design, the recent Unity 2019.2 release, and some tips and tricks for those developing games with Unity.

On C# and game development

Why is C# widely used to create games? How does it compare to C++? How is C# being used in other areas such as mobile and web development?

I think Unity chose to move forward with C# instead of JavaScript or Boo because of its learning curve and its history with Microsoft. [Boo was one of the three scripting languages for the Unity game engine until it was dropped in 2014.] In my experience, C# is easier to learn than languages like C++, and that accessibility is a huge draw for game designers and programmers in general. With Xamarin mobile development and ASP.NET web applications in the mix, there's really no stopping the C# language any time soon.

What are C# scripts? How are they useful for creating games with Unity?

C# scripts are the code files that store behaviors in Unity, powering everything the engine does. While there are a lot of new tools that will allow a developer to make a game without them, scripts are still the best way to create custom actions and interactions within a game space.

Editor's Tip: To get started with how to create a C# script in Unity, you can go through Chapter 1 of Harrison Ferrone's book Learning C# by Developing Games with Unity 2019.

On why Harrison wrote his book, Learning C# by Developing Games with Unity 2019

Tell us the motivation behind writing your book Learning C# by Developing Games with Unity 2019. Why is developing Unity games a good way to learn the C# programming language? Why do you prefer Unity over other game engines?

My main motivation for writing the book was two-fold. First, I always wanted to be a writer, so marrying my love for technology with a lifelong dream was a no-brainer. Second, I wanted to write a beginner's book that would stay true to a beginner audience, always keeping them in mind. In terms of choosing games as a medium for learning, I've found that making something interesting and novel while learning a new skill set leads to greater absorption of the material and more overall enjoyment. Unity has always been my go-to engine because its interface is highly intuitive and easy to get started with.

You have three years of experience building iOS applications in Swift, and a number of articles and tutorials on the topic on the Ray Wenderlich website. Recently, you started branching out into C++ and Unreal Engine 4. How did you get into game design and Unity development? What made you interested in building games?

I actually got into game design and Unity development first, before all the iOS and Swift experience. It was my major in university, and even though I couldn't find a job in the game industry right after I graduated, I still held onto it as a passion.

On developing games

The latest release of Unity, Unity 2019.2, has a number of interesting features such as ProBuilder, Shader Graph and effects, 2D Animation, Burst Compiler, etc. What are some of your favorite features in this release? What are your expectations from Unity 2019.3?

I'm really excited about ProBuilder in this release, as it's a huge time saver for someone as artistically challenged as I am. I think tools like this will level the playing field for independent developers who may not have access to environment or level builders.

What are some essential tips and tricks that a game developer must keep in mind when working in Unity? What are the do's and don'ts?

I'd say the biggest thing to keep in mind when working with Unity is the component architecture that it's built on. When you're writing your own scripts, think about how they can be separated into their individual functions and structure them like that - with purpose. There's nothing worse than having a huge, bloated C# script that does everything under the sun attached to a single game object in your project, then realizing it really needs to be separated into its component parts.

What are the biggest challenges today in the field of game development? What is your advice for those developing games using C#?

Reaching the right audience is always challenge number one in any industry, and game development is no different. This is especially true for indie game developers, who always have to be mindful of who they are making their game for and purposefully design and program their games accordingly. As far as advice goes, I always say the same thing - learn design patterns and agile development methodologies; they will open up new avenues for professional programming and project management.

Rust has been touted as one of the successors of the C family of languages. The present state of game development in Rust is also quite encouraging. What are your thoughts on Rust for game dev? Do you think major game engines like Unity and Unreal will support Rust for game development in the future?

I don't have any experience with Rust, but major engines like Unity and Unreal are unlikely to adopt a new language because of the huge cost associated with a changeover of that magnitude. However, that also leaves the possibility open for another engine to be developed around Rust in the future that targets games, mobile, and/or web development.

About the Author

Harrison Ferrone was born in Chicago, IL, and raised all over. Most days, you can find him creating instructional content for LinkedIn Learning and Pluralsight, or tech editing for the Ray Wenderlich website. After a few years as an iOS developer at small start-ups, and one Fortune 500 company, he fell into a teaching career and never looked back. Throughout all this, he's bought many books, acquired a few cats, worked abroad, and continually wondered why Neuromancer isn't on more course syllabi. You can follow him on LinkedIn and GitHub.


Unreal Engine 4.23 releases with major new features like Chaos, Virtual Production, improvements in real-time ray tracing and more

Vincy Davis
09 Sep 2019
5 min read
Last week, Epic released the stable version of Unreal Engine 4.23 with a whopping 192 improvements. The major features include beta offerings like Chaos - Destruction, Multi-Bounce Reflection fallback in Real-Time Ray Tracing, Virtual Texturing, Unreal Insights, HoloLens 2 native support, Niagara improvements, and many more. Unreal Engine 4.23 will no longer support iOS 10, as iOS 11 is now the minimum required version.

What's new in Unreal Engine 4.23?

Chaos - Destruction

Labelled as "Unreal Engine's new high-performance physics and destruction system", Chaos is available in beta for users to attain cinematic-quality visuals in real-time scenes. It also gives artists high-level control over content creation and destruction. https://youtu.be/fnuWG2I2QCY

Chaos supports several distinct features:
- Geometry Collections: A new type of asset in Unreal for short-lived objects. Geometry Collection assets can be built using one or more Static Meshes, giving the artist flexibility in choosing what to simulate and how to organize and author the destruction.
- Fracturing: A Geometry Collection can be broken into pieces either individually, or by applying one pattern across multiple pieces, using the Fracturing tools.
- Clustering: Artists use sub-fracturing to increase optimization. Every sub-fracture adds an extra level to the Geometry Collection. The Chaos system keeps track of the extra levels and stores the information in a Cluster, to be controlled by the artist.
- Fields: Fields can be used to control simulation and other attributes of the Geometry Collection. They let users vary the mass, make something static, make a corner more breakable than the middle, and so on.

Unreal Insights

Currently in beta, Unreal Insights enables developers to collect and analyze data about Unreal Engine's behavior in a uniform way. The Trace System API is one of its components and is used to collect information from runtime systems consistently. Another component, the Unreal Insights Tool, supplies interactive visualization of data through the Analysis API. For in-depth details about Unreal Insights and other features, you can also check out the first preview release of Unreal Engine 4.23.

Virtual Production Pipeline Improvements

Unreal Engine 4.23 advances the virtual production pipeline, improving the ability to virtually scout environments and compose shots by connecting live broadcast elements with digital representations, and more.
- In-Camera VFX: With improvements to in-camera VFX, users can achieve final shots live on set by combining real-world actors and props with Unreal Engine environment backgrounds.
- VR Scouting for Filmmakers: The new VR Scouting tools can be used by filmmakers to navigate and interact with the virtual world in VR. Controllers and settings can also be customized in Blueprints, rather than rebuilding the engine in C++.
- Live Link Datatypes and UX Improvements: The Live Link Plugin can be used to drive character animation, cameras, lights, and basic 3D transforms dynamically from other applications and data sources in the production pipeline. Other improvements include saving and loading presets for Live Link setups, better status indicators to show the current Live Link sources, and more.
- Remote Control over HTTP: Unreal Engine 4.23 users can send commands to Unreal Engine and Unreal Editor remotely over HTTP. This makes it possible for users to create customized web user interfaces to trigger changes in the project's content.

Read Also: Epic releases Unreal Engine 4.22, focuses on adding "photorealism in real-time environments"

Real-Time Ray Tracing Improvements
- Performance and Stability
  - Expanded DirectX 12 support
  - Improved Denoiser quality
  - Increased Ray Traced Global Illumination (RTGI) quality
- Additional Geometry and Material Support
  - Landscape Terrain
  - Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM)
  - Procedural Meshes
  - Transmission with SubSurface Materials
  - World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries
- Multi-Bounce Reflection Fallback: Unreal Engine 4.23 provides improved support for multi-bounce Ray Traced Reflections (RTR) by using Reflection Captures, which increases the performance of all types of intra-reflections.

Virtual Texturing

The beta version of Virtual Texturing in Unreal Engine 4.23 enables users to create and use large textures for a lower and more constant memory footprint at runtime.
- Streaming Virtual Texturing: Streaming Virtual Texturing uses Virtual Texture assets to offer an option to stream textures from disk, rather than the existing mip-based streaming. It minimizes texture memory overhead and increases performance when using very large textures.
- Runtime Virtual Texturing: Runtime Virtual Texturing introduces a Runtime Virtual Texture asset, which can be used to supply shading data over large areas, making it suitable for Landscape shading.

Unreal Engine 4.23 also presents new features like Skin Weight Profiles, Animation Streaming, Dynamic Animation Graphs, Open Sound Control, Sequencer Curve Editor improvements, and more. As expected, users love the new features in Unreal Engine 4.23, especially Chaos.

https://twitter.com/rista__m/status/1170608746692673537
https://twitter.com/jayakri59101140/status/1169553133518782464
https://twitter.com/NoisestormMusic/status/1169303013149806595

To know about the full updates in Unreal Engine 4.23, users can head over to the Unreal Engine blog.

Other news in Game Development
- Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
- Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
- Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects


Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Natasha Mathur
25 Jan 2019
5 min read
Two days ago, the Blizzard team announced an update on the progress made by Google DeepMind's AI at StarCraft II, a real-time strategy video game. The demo was presented yesterday over a live stream, where it showed AlphaStar, DeepMind's StarCraft II AI program, beating two top professional StarCraft II players, TLO and MaNa.

The demo presented two series of five test matches each, held earlier on 19 December, against Team Liquid's Grzegorz "MaNa" Komincz and Dario "TLO" Wünsch. AlphaStar beat the two professional players 10-0 in total (5-0 against each). After the 10 straight wins, AlphaStar was finally beaten by MaNa in a live match streamed by Blizzard and DeepMind.

https://twitter.com/LiquidTLO/status/1088524496246657030
https://twitter.com/Liquid_MaNa/status/1088534975044087808

How does AlphaStar learn?

AlphaStar learns by imitating the basic micro and macro strategies used by players on the StarCraft ladder. A neural network was trained initially using supervised learning on anonymised human games released by Blizzard. This initial AI agent managed to defeat the "Elite" level AI in 95% of games.

Once the agents are trained from human game replays, they are then trained against other competitors in the "AlphaStar league". This is where a multi-agent reinforcement learning process starts. New competitors are added to the league (branched from existing competitors), and each of these agents then learns from games against other competitors. This ensures that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones.

As the league progresses, new counter-strategies emerge that can defeat the earlier strategies. Each agent also has its own learning objective, which gets adapted during training: one agent might have an objective to beat one specific competitor, while another might aim to beat a whole distribution of competitors. The neural network weights of each agent are updated using reinforcement learning from its games against competitors, optimising its personal learning objective.

How does AlphaStar play the game?

TLO and MaNa, professional StarCraft players, can issue hundreds of actions per minute (APM) on average. AlphaStar had an average APM of around 280 in its games against TLO and MaNa, significantly lower than the professional players. This is because AlphaStar starts its learning from replays and thereby mimics the way humans play the game. AlphaStar also showed a delay between observation and action of 350 ms on average.

AlphaStar might have had a slight advantage over the human players in that it interacted with the StarCraft game engine directly via its raw interface. This means it could directly observe the attributes of its own and its opponent's visible units on the map, essentially getting a zoomed-out view of the game. Human players, by contrast, have to split their time and attention and decide where to focus the camera on the map. However, analysis of the games showed that the AI agents "switched context" about 30 times per minute, similar to MaNa or TLO. This suggests that AlphaStar's success against MaNa and TLO was due to superior macro and micro-strategic decision-making, not a superior click-rate, faster reaction times, or the raw interface.
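Before moving on, the league mechanism described under "How does AlphaStar learn?" can be made concrete with a rough C++ sketch of the training loop, assuming a league seeded with supervised-learning agents. Every name in it is hypothetical (DeepMind has not released AlphaStar's implementation), so treat it purely as an illustration of branching competitors and per-agent objectives:

#include <vector>
#include <random>

// Hypothetical stand-in for a trained policy network.
struct Agent
{
    int id = 0;
    // Each agent carries its own learning objective, e.g. a set of
    // specific competitors it is rewarded for beating.
    std::vector<int> targetOpponents;

    void UpdateFromGame(const Agent& opponent, bool won)
    {
        // Reinforcement-learning weight update against this opponent
        // (network details omitted).
        (void)opponent; (void)won;
    }
};

void RunLeague(std::vector<Agent>& league, int iterations)
{
    std::mt19937 rng{42};
    for (int step = 0; step < iterations; ++step)
    {
        // Periodically branch a new competitor from an existing one, so
        // earlier strategies stay in the league as opponents and are
        // never "forgotten".
        if (step % 100 == 0)
        {
            Agent branched = league[rng() % league.size()];
            branched.id = static_cast<int>(league.size());
            league.push_back(branched);
        }
        // Every agent plays against sampled competitors and updates
        // toward its personal objective.
        for (Agent& agent : league)
        {
            const Agent& opponent = league[rng() % league.size()];
            bool agentWon = false; // outcome of a simulated match (omitted)
            agent.UpdateFromGame(opponent, agentWon);
        }
    }
}

The key design idea this sketch captures is that old competitors remain frozen in the league, so a newly branched agent is always tested against the full history of strategies, not just the current strongest one.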
MaNa managed to beat AlphaStar in one match

DeepMind also developed a second version of AlphaStar that played more like human players, meaning it had to choose when and where to move the camera. Two new agents were trained against the AlphaStar league: one that used the raw interface and one that learned to control the camera.

"The version of AlphaStar using the camera interface was almost as strong as the raw interface, exceeding 7000 MMR on our internal leaderboard", states the DeepMind team. But the team didn't get the chance to test the AI against a human pro prior to the live stream. In a live exhibition match, MaNa managed to defeat the new version of AlphaStar using the camera interface, which had been trained for only 7 days. "We hope to evaluate a fully trained instance of the camera interface in the near future", says the team.

The DeepMind team states that AlphaStar's performance was initially tested against TLO, where it won the match. "I was surprised by how strong the agent was..(it) takes well-known strategies..I hadn't thought of before, which means there may still be new ways of playing the game that we haven't fully explored yet," said TLO. The agents were then trained for an extra week, after which they played against MaNa. AlphaStar again won. "I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn't have expected..this has put the game in a whole new light for me. We're all excited to see what comes next," said MaNa.

Public reaction to the news is very positive, with people congratulating the DeepMind team for AlphaStar's win:

https://twitter.com/SebastienBubeck/status/1088524371285557248
https://twitter.com/KaiLashArul/status/1088534443718045696
https://twitter.com/fhuszar/status/1088534423786668042
https://twitter.com/panicsw1tched/status/1088524675540549635
https://twitter.com/Denver_sc2/status/1088525423229759489

To learn about the strategies developed by AlphaStar, check out the complete set of replays of AlphaStar's matches against TLO and MaNa on DeepMind's website.

Best game engines for Artificial Intelligence game development
DeepMind's AlphaZero shows unprecedented growth in AI, masters 3 different games
DeepMind's AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare


Unity 2018.2: Unity's second release of the year!

Sugandha Lahoti
12 Jul 2018
4 min read
It has only been two months since the release of Unity 2018.1, and Unity is back with its second release of the year. Unity 2018.2 builds on features of Unity 2018.1 such as the Scriptable Render Pipeline (SRP), Shader Graph, and the Entity Component System. It also adds support for managed code debugging on iOS and Android, along with the final release of 64-bit (ARM64) support for Android devices. Let us look at the features in detail.

Scriptable Render Pipeline improvements

As mentioned above, Unity 2018.2 builds on the Scriptable Render Pipeline introduced in 2018.1. Version 2 comes with two additional features:
- The SRP batcher: a new Unity engine inner loop for speeding up CPU rendering without compromising GPU performance. It works with the High Definition Render Pipeline (HDRP) and Lightweight Render Pipeline (LWRP), with PC DirectX 11, Metal, and PlayStation 4 currently supported.
- Scriptable shader variant stripping: it can manage the number of shader variants generated, without affecting iteration time or maintenance complexity. This leads to a dramatic reduction in player build time and data size.

Performance optimizations in Lightweight Render Pipeline and High Definition Render Pipeline

Unity 2018.2 improves the performance of the Lightweight Render Pipeline (LWRP) with optimized tile utilization. This feature adjusts the number of load-and-stores to tiles in order to optimize the memory of mobile GPUs. It also shades lights in batches, which reduces overdraw and draw calls. Unity 2018.2 also brings better high-end visual quality to the High Definition Render Pipeline (HDRP). Improvements include volumetrics, glossy planar reflection, geometric specular AA, Proxy Screen Space Reflection & Refraction, mesh decals, and Shadow Mask.

Improvements in C# Job System, Entity Component System and Burst Compiler

Unity 2018.2 introduces new reactive system samples in the Entity Component System (ECS) to let developers respond when there are changes to component state and emulate event-driven behavior. Burst compiling for ECS is now available on all editor platforms (Windows, Mac, Linux), and game developers will be able to build AOT for standalone players (desktop, PS4, Xbox, iOS and Android). The C# Job System allows developers to take full advantage of the multicore processors currently available and write parallel code without having to worry about the usual pitfalls of multithreaded programming.

Updates to Shader Graph

Shader Graph, introduced as a preview package in Unity 2018.1, allows developers to build shaders visually. Unity 2018.2 adds further improvements, including High Definition Render Pipeline (HDRP) support, manual modification of vertex position, editing of the reference name for a property, editable paths for graphs, Texture 2D and 3D arrays, and more.

Texture Mipmap Streaming

Game developers can now stream texture mipmaps into memory on demand to reduce the texture memory requirements of a Unity application. This feature speeds up initial load time, gives developers more control, and is simple to enable and manage.

Particle System improvements

Unity 2018.2 brings seven major improvements to the Particle System:
- Support for eight UVs, to use more custom data.
- MinMaxCurve and MinMaxGradient types in custom scripts to match the style used by the Particle System UI.
- Particle Systems now convert colors into linear space, when appropriate, before uploading them to the GPU.
- Two new modes in the Shape module to emit from a sprite or SpriteRenderer component.
- Two new APIs for baking the geometry of a Particle System into a mesh.
- Show Only Selected (aka Solo Mode) with the Play/Restart/Stop, etc. controls.
- Shaders that use separate alpha textures can now be used with particles, while using sprites in the Texture Sheet Animation module.

Unity Hub

Unity Hub (v1.0) is a new tool, to be released soon, designed to streamline onboarding and setup processes for all users. It is a centralized location to manage all Unity projects, simplifying how developers find, download, and manage Unity Editor licenses and add-on components. Hub 1.0 will ship with:
- Project templates
- Custom install location
- Adding Asset Store packages to new projects
- Modifying the project build target
- Editor: adding components post-installation

There are additional features like Vulkan support for the Editor on Windows and Linux, and improvements to the Progressive Lightmapper, 2D games, the SVG importer, etc. The release also supports .java and .cpp source files as plugins in a Unity project, along with updates to Cinematics and the Unity core engine. In total, there are 183 improvements and 1426 fixes in the Unity 2018.2 release. Refer to the release notes to view the full list of new features, improvements, and fixes.

Put your game face on! Unity 2018.1 is now available
Unity plugins for augmented reality application development
Unity 2D & 3D game kits simplify Unity game development for beginners


Working with shaders in C++ to create 3D games

Amarabha Banerjee
15 Jun 2018
28 min read
A shader is a computer program that is used to do image processing such as special effects, color effects, lighting, and, well, shading. The position, brightness, contrast, hue, and other effects on all pixels, vertices, or textures used to produce the final image on the screen can be altered during runtime, using algorithms constructed in the shader program(s). These days, most shader programs are built to run directly on the Graphical Processing Unit (GPU). In this article, we are going to get acquainted with shaders and implement our own shader infrastructure for the example engine.

Shader programs are executed in parallel. This means, for example, that a shader might be executed once per pixel, with each of the executions running simultaneously on different threads on the GPU. The number of simultaneous threads depends on the specific GPU on the graphics card, with modern cards sporting processors in the thousands. This all means that shader programs can be very performant and provide developers with lots of creative flexibility.

This article is an excerpt from the book Mastering C++ Game Development written by Mickey Macdonald. With this book, you can create advanced games with C++.

Shader languages

With advances in graphics card technology, more flexibility has been added to the rendering pipeline. Where at one time developers had little control over concepts such as fixed-function pipeline rendering, new advancements have allowed programmers to take deeper control of graphics hardware for rendering their creations. Originally, this deeper control was achieved by writing shaders in assembly language, which was a complex and cumbersome task. It wasn't long before developers yearned for a better solution. Enter the shader programming languages. Let's take a brief look at a few of the more common languages in use.

C for graphics (Cg) is a shading language originally developed by the Nvidia graphics company. Cg is based on the C programming language and, although they share the same syntax, some features of C were modified and new data types were added to make Cg more suitable for programming GPUs. Cg compilers can output shader programs supported by both DirectX and OpenGL. While Cg was mostly deprecated, it has seen a resurgence in a new form with its use in the Unity game engine.

High-Level Shading Language (HLSL) is a shading language developed by the Microsoft Corporation for use with the DirectX graphics API. HLSL is again modeled after the C programming language and shares many similarities with the Cg shading language. HLSL is still in development and continues to be the shading language of choice for DirectX. Since the release of DirectX 12, the HLSL language supports even lower-level hardware control and has seen dramatic performance improvements.

OpenGL Shading Language (GLSL) is a shading language that is also based on the C programming language. It was created by the OpenGL Architecture Review Board (OpenGL ARB) to give developers more direct control of the graphics pipeline without having to use ARB assembly language or other hardware-specific languages. The language is still in open development and will be the language we will focus on in our examples.

Building a shader program infrastructure

Most modern shader programs are composed of up to five different types of shader files: fragment or pixel shaders, vertex shaders, geometry shaders, compute shaders, and tessellation shaders.
When building a shader program, each of these shader files must be compiled and linked together for use, much like how a C++ program is compiled and linked. Next, we are going to walk you through how this process works and see how we can build an infrastructure to allow for easier interaction with our shader programs.

To get started, let's look at how we compile a GLSL shader. The GLSL compiler is part of the OpenGL library itself, and our shaders can be compiled within an OpenGL program. We are going to build an architecture to support this internal compilation. The whole process of compiling a shader can be broken down into some simple steps. First, we have to create a shader object, then provide the source code to the shader object. We can then ask the shader object to be compiled. These steps can be represented in the following three basic calls to the OpenGL API.

First, we create the shader object:

GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);

We create the shader object using the glCreateShader() function. The argument we pass in is the type of shader we are trying to create. The types of shaders can be GL_VERTEX_SHADER, GL_FRAGMENT_SHADER, GL_GEOMETRY_SHADER, GL_TESS_EVALUATION_SHADER, GL_TESS_CONTROL_SHADER, or GL_COMPUTE_SHADER. In our example case, we are trying to compile a vertex shader, so we use the GL_VERTEX_SHADER type.

Next, we copy the shader source code into the shader object:

GLchar* shaderCode = LoadShader("shaders/simple.vert");
glShaderSource(vertexShader, 1, &shaderCode, NULL);

Here we are using the glShaderSource() function to load our shader source to memory. This function accepts an array of strings, so before we call glShaderSource(), we create a pointer to the start of the shaderCode array object using a still-to-be-created method, and pass in its address. The first argument to glShaderSource() is the handle to the shader object. The second is the number of source code strings that are contained in the array. The third argument is a pointer to an array of source code strings. The final argument is an array of GLint values that contains the length of each source code string in the previous argument; passing NULL here means each string is null-terminated.

Finally, we compile the shader:

glCompileShader(vertexShader);

The last step is to compile the shader. We do this by calling the OpenGL API method, glCompileShader(), and passing the handle to the shader that we want compiled. Of course, because we are using memory to store the shaders, we should know how to clean up when we are done. To delete a shader object, we can call the glDeleteShader() function.

Deleting a Shader Object

Shader objects can be deleted when no longer needed by calling glDeleteShader(). This frees the memory used by the shader object. It should be noted that if a shader object is already attached to a program object, as in linked to a shader program, it will not be immediately deleted, but rather flagged for deletion. If the object is flagged for deletion, it will be deleted when it is detached from the linked shader program object.

Once we have compiled our shaders, the next step we need to take before we can use them in our program is to link them together into a complete shader program. One of the core aspects of the linking step involves making the connections between input variables from one shader and the output variables of another, and making the connections between the input/output variables of a shader and appropriate locations in the OpenGL program itself. Linking is much like compiling the shader.
We create a new shader program and attach each shader object to it. We then tell the shader program object to link everything together. The steps to accomplish this in the OpenGL environment can be broken down into a few calls to the API, as follows.

First, we create the shader program object:

GLuint shaderProgram = glCreateProgram();

To start, we call the glCreateProgram() method to create an empty program object. This function returns a handle to the shader program object which, in this example, we are storing in a variable named shaderProgram.

Next, we attach the shaders to the program object:

glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);

To load each of the shaders into the shader program, we use the glAttachShader() method. This method takes two arguments. The first argument is the handle to the shader program object, and the second is the handle to the shader object to be attached to the shader program.

Finally, we link the program:

glLinkProgram(shaderProgram);

When we are ready to link the shaders together, we call the glLinkProgram() method. This method has only one argument: the handle to the shader program we want to link.

It's important that we remember to clean up any shader programs that we are not using anymore. To remove a shader program from the OpenGL memory, we call the glDeleteProgram() method. The glDeleteProgram() method takes one argument: the handle to the shader program that is to be deleted. This method call invalidates the handle and frees the memory used by the shader program. It is important to note that if the shader program object is currently in use, it will not be immediately deleted, but rather flagged for deletion. It is also important to note that the deletion of a shader program will detach any shader objects that were attached to the shader program at linking time. This, however, does not mean the shader objects will be deleted immediately; they will only be deleted at that point if they have already been flagged for deletion by a previous call to the glDeleteShader() method.

So those are the simplified OpenGL API calls required to create, compile, and link shader programs. Now we are going to move on to implementing some structure to make the whole process much easier to work with. To do this, we are going to create a new class called ShaderManager. This class will act as the interface for compiling, linking, and managing the cleanup of shader programs. To start with, let's look at the implementation of the CompileShaders() method in the ShaderManager.cpp file. I should note that I will be focusing on the important aspects of the code that pertain to the implementation of the architecture. The full source code for this chapter can be found in the Chapter07 folder in the GitHub repository.

void ShaderManager::CompileShaders(const std::string& vertexShaderFilePath, const std::string& fragmentShaderFilepath)
{
    m_programID = glCreateProgram();

    m_vertexShaderID = glCreateShader(GL_VERTEX_SHADER);
    if (m_vertexShaderID == 0) {
        Exception("Vertex shader failed to be created!");
    }

    m_fragmentShaderID = glCreateShader(GL_FRAGMENT_SHADER);
    if (m_fragmentShaderID == 0) {
        Exception("Fragment shader failed to be created!");
    }

    CompileShader(vertexShaderFilePath, m_vertexShaderID);
    CompileShader(fragmentShaderFilepath, m_fragmentShaderID);
}

To begin, for this example we are focusing on two of the shader types, so our ShaderManager::CompileShaders() method accepts two arguments.
The first argument is the file path location of the vertex shader file, and the second is the file path location of the fragment shader file. Both are strings. Inside the method body, we first create the shader program handle using the glCreateProgram() method and store it in the m_programID variable. Next, we create the handles for the vertex and fragment shaders using the glCreateShader() command. We check for any errors when creating the shader handles, and if we find any we throw an exception with the shader name that failed. Once the handles have been created, we then call the CompileShader() method, which we will look at next. The CompileShader() function takes two arguments: the first is the path to the shader file, and the second is the handle in which the compiled shader will be stored.

The following is the full CompileShader() function. It handles the lookup and loading of the shader file from storage, as well as calling the OpenGL compile command on the shader file. We will break it down chunk by chunk:

void ShaderManager::CompileShader(const std::string& filePath, GLuint id)
{
    std::ifstream shaderFile(filePath);
    if (shaderFile.fail()) {
        perror(filePath.c_str());
        Exception("Failed to open " + filePath);
    }

    //fileContents stores all the text in the file
    std::string fileContents = "";
    //line is used to grab each line of the file
    std::string line;

    //Get all the lines in the file and add them to the contents
    while (std::getline(shaderFile, line)) {
        fileContents += line + "\n";
    }
    shaderFile.close();

    //get a pointer to our file contents c string
    const char* contentsPtr = fileContents.c_str();
    //tell opengl that we want to use fileContents as the contents of the shader file
    glShaderSource(id, 1, &contentsPtr, nullptr);
    //compile the shader
    glCompileShader(id);

    //check for errors
    GLint success = 0;
    glGetShaderiv(id, GL_COMPILE_STATUS, &success);
    if (success == GL_FALSE) {
        GLint maxLength = 0;
        glGetShaderiv(id, GL_INFO_LOG_LENGTH, &maxLength);

        //The maxLength includes the NULL character
        std::vector<char> errorLog(maxLength);
        glGetShaderInfoLog(id, maxLength, &maxLength, &errorLog[0]);

        //Provide the infolog in whatever manner you deem best.
        //Exit with failure.
        glDeleteShader(id); //Don't leak the shader.

        //Print error log and quit
        std::printf("%s\n", &(errorLog[0]));
        Exception("Shader " + filePath + " failed to compile");
    }
}

To start the function, we first use an ifstream object to open the file with the shader code in it. We also check to see if there were any issues loading the file and, if there were, we throw an exception notifying us that the file failed to open:

std::ifstream shaderFile(filePath);
if (shaderFile.fail()) {
    perror(filePath.c_str());
    Exception("Failed to open " + filePath);
}

Next, we need to parse the shader. To do this, we create a string variable called fileContents that will hold the text in the shader file. We then create another string variable named line; this will be a temporary holder for each line of the shader file we are trying to parse. Next, we use a while loop to step through the shader file, parsing the contents line by line and saving each line into the fileContents string.
Once all the lines have been read into the holder variable, we call the close method on the shaderFile ifstream object to free up the memory used to read the file:

std::string fileContents = "";
std::string line;
while (std::getline(shaderFile, line)) {
    fileContents += line + "\n";
}
shaderFile.close();

You might remember from earlier in the chapter that I mentioned that when we are using the glShaderSource() function, we have to pass the shader file text as a pointer to the start of a character array. In order to meet this requirement, we are going to use a neat trick where we use the C string conversion method built into the string class to allow us to pass back a pointer to the start of our shader character array. This, in case you are unfamiliar, is essentially what a string is:

const char* contentsPtr = fileContents.c_str();

Now that we have a pointer to the shader text, we can call the glShaderSource() method to tell OpenGL that we want to use the contents of the file to compile our shader. Then, finally, we call the glCompileShader() method with the handle to the shader as the argument:

glShaderSource(id, 1, &contentsPtr, nullptr);
glCompileShader(id);

That handles the compilation, but it is a good idea to provide ourselves with some debug support. We implement this compilation debug support by closing out the CompileShader() function with a check for any errors during the compilation process. We do this by requesting information from the shader compiler through the glGetShaderiv() function, which, among its arguments, takes an enumerated value that specifies what information we would like returned. In this call, we are requesting the compile status:

GLint success = 0;
glGetShaderiv(id, GL_COMPILE_STATUS, &success);

Next, we check to see if the returned value is GL_FALSE, and if it is, that means we have had an error and should ask the compiler for more information about the compile issues. We do this by first asking the compiler what the max length of the error log is. We use this max length value to then create a vector of character values called errorLog. Then we can request the shader compile log by using the glGetShaderInfoLog() method, passing in the handle to the shader file, the number of characters we are pulling, and where we want to save the log:

if (success == GL_FALSE) {
    GLint maxLength = 0;
    glGetShaderiv(id, GL_INFO_LOG_LENGTH, &maxLength);
    std::vector<char> errorLog(maxLength);
    glGetShaderInfoLog(id, maxLength, &maxLength, &errorLog[0]);

Once we have the log file saved, we go ahead and delete the shader using the glDeleteShader() method. This ensures we don't have any memory leaks from our shader:

    glDeleteShader(id);

Finally, we print the error log to the console window, which is great for runtime debugging, and throw an exception with the shader name/file path and the message that it failed to compile:

    std::printf("%s\n", &(errorLog[0]));
    Exception("Shader " + filePath + " failed to compile");
}

That really simplifies the process of compiling our shaders by providing a simple interface to the underlying API calls. Now, in our example program, to load and compile our shaders we use a simple line of code similar to the following:

shaderManager.CompileShaders("Shaders/SimpleShader.vert", "Shaders/SimpleShader.frag");

Having now compiled the shaders, we are halfway to a usable shader program. We still need to add one more piece: linking.
To abstract away some of the processes of linking the shaders and to provide us with some debugging capabilities, we are going to create the LinkShaders() method for our ShaderManager class. Let's take a look and then break it down:

void ShaderManager::LinkShaders()
{
    //Attach our shaders to our program
    glAttachShader(m_programID, m_vertexShaderID);
    glAttachShader(m_programID, m_fragmentShaderID);

    //Link our program
    glLinkProgram(m_programID);

    //Note the different functions here: glGetProgram* instead of glGetShader*.
    GLint isLinked = 0;
    glGetProgramiv(m_programID, GL_LINK_STATUS, (int *)&isLinked);
    if (isLinked == GL_FALSE) {
        GLint maxLength = 0;
        glGetProgramiv(m_programID, GL_INFO_LOG_LENGTH, &maxLength);

        //The maxLength includes the NULL character
        std::vector<char> errorLog(maxLength);
        glGetProgramInfoLog(m_programID, maxLength, &maxLength, &errorLog[0]);

        //We don't need the program anymore.
        glDeleteProgram(m_programID);
        //Don't leak shaders either.
        glDeleteShader(m_vertexShaderID);
        glDeleteShader(m_fragmentShaderID);

        //print the error log and quit
        std::printf("%s\n", &(errorLog[0]));
        Exception("Shaders failed to link!");
    }

    //Always detach shaders after a successful link.
    glDetachShader(m_programID, m_vertexShaderID);
    glDetachShader(m_programID, m_fragmentShaderID);
    glDeleteShader(m_vertexShaderID);
    glDeleteShader(m_fragmentShaderID);
}

To start our LinkShaders() function, we call the glAttachShader() method twice, using the handle to the previously created shader program object, and the handle to each shader we wish to link, respectively:

glAttachShader(m_programID, m_vertexShaderID);
glAttachShader(m_programID, m_fragmentShaderID);

Next, we perform the actual linking of the shaders into a usable shader program by calling the glLinkProgram() method, using the handle to the program object as its argument:

glLinkProgram(m_programID);

We can then check to see if the linking process has completed without any errors and provide ourselves with any debug information that we might need if there were any errors. I am not going to go through this code chunk line by line since it is nearly identical to what we did with the CompileShader() function. Do note, however, that the function to return the information from the linker is slightly different and uses glGetProgram* instead of the glGetShader* functions from before:

GLint isLinked = 0;
glGetProgramiv(m_programID, GL_LINK_STATUS, (int *)&isLinked);
if (isLinked == GL_FALSE) {
    GLint maxLength = 0;
    glGetProgramiv(m_programID, GL_INFO_LOG_LENGTH, &maxLength);
    std::vector<char> errorLog(maxLength);
    glGetProgramInfoLog(m_programID, maxLength, &maxLength, &errorLog[0]);
    glDeleteProgram(m_programID);
    glDeleteShader(m_vertexShaderID);
    glDeleteShader(m_fragmentShaderID);
    std::printf("%s\n", &(errorLog[0]));
    Exception("Shaders failed to link!");
}

Lastly, if we are successful in the linking process, we need to clean up a bit. First, we detach the shaders from the linker using the glDetachShader() method. Next, since we have a completed shader program, we no longer need to keep the shaders in memory, so we delete each shader with a call to the glDeleteShader() method.
Again, this will ensure we do not leak any memory in our shader program creation process:

glDetachShader(m_programID, m_vertexShaderID);
glDetachShader(m_programID, m_fragmentShaderID);
glDeleteShader(m_vertexShaderID);
glDeleteShader(m_fragmentShaderID);

We now have a simplified way of linking our shaders into a working shader program. We can call this interface to the underlying API calls by simply using one line of code, similar to the following one:

shaderManager.LinkShaders();

So that handles the process of compiling and linking our shaders, but there is another key aspect to working with shaders, which is the passing of data between the running program/the game and the shader programs running on the GPU. We will look at this process and how we can abstract it into an easy-to-use interface for our engine next.

Working with shader data

One of the most important aspects of working with shaders is the ability to pass data to and from the shader programs running on the GPU. This can be a deep topic, and much like other topics in this book, has had its own dedicated books. We are going to stay at a higher level when discussing this topic and again will focus on the two shader types needed for basic rendering: the vertex and fragment shaders.

To begin with, let's take a look at how we send data to a shader using vertex attributes and Vertex Buffer Objects (VBOs). A vertex shader has the job of processing the data that is connected to the vertex, doing any modifications, and then passing it to the next stage of the rendering pipeline. This occurs once per vertex. In order for the shader to do its thing, we need to be able to pass it data. To do this, we use what are called vertex attributes, and they usually work hand in hand with what are referred to as VBOs. For the vertex shader, all per-vertex input attributes are defined using the keyword in. So, for example, if we wanted to define a vector 3 input attribute named VertexColour, we could write something like the following:

in vec3 VertexColour;

Now, the data for the VertexColour attribute has to be supplied by the program/game. This is where VBOs come in. In our main game or program, we make the connection between the input attribute and the vertex buffer object, and we also have to define how to parse or step through the data. That way, when we render, OpenGL can pull data for the attribute from the buffer for each call of the vertex shader. Let's take a look at a very simple vertex shader:

#version 410

in vec3 VertexPosition;
in vec3 VertexColour;

out vec3 Colour;

void main() {
    Colour = VertexColour;
    gl_Position = vec4(VertexPosition, 1.0);
}

In this example, there are just two input variables for this vertex shader, VertexPosition and VertexColour. Our main OpenGL program needs to supply the data for these two attributes for each vertex. We will do so by mapping our polygon/mesh data to these variables. We also have one output variable named Colour, which will be sent to the next stage of the rendering pipeline, the fragment shader. In this example, Colour is just an untouched copy of VertexColour. The VertexPosition attribute is simply expanded and passed along to the OpenGL API output variable gl_Position for more processing.

Next, let's take a look at a very simple fragment shader:

#version 410

in vec3 Colour;

out vec4 FragColour;

void main() {
    FragColour = vec4(Colour, 1.0);
}

In this fragment shader example, there is only one input attribute, Colour.
This input corresponds to the output of the previous rendering stage, the vertex shader's Colour output. For simplicity's sake, we are just expanding Colour and outputting it as the variable FragColour for the next rendering stage. That sums up the shader side of the connection, so how do we compose and send the data from inside our engine? We can accomplish this in basically four steps.

First, we create a Vertex Array Object (VAO) instance to hold our data:

GLuint vao;

Next, we create and populate the VBOs for each of the shader's input attributes. We do this by first creating a VBO variable, then, using the glGenBuffers() method, we generate the memory for the buffer objects. We then create handles to the different attributes we need buffers for, assigning them to elements in the VBO array. Finally, we populate the buffers for each attribute by first calling the glBindBuffer() method, specifying the type of object being stored. In this case, it is a GL_ARRAY_BUFFER for both attributes. Then we call the glBufferData() method, passing the type, size, and handle to bind. The last argument for the glBufferData() method is one that gives OpenGL a hint about how the data will be used so that it can determine how best to manage the buffer internally. For full details about this argument, take a look at the OpenGL documentation:

GLuint vbo[2];
glGenBuffers(2, vbo);
GLuint positionBufferHandle = vbo[0];
GLuint colorBufferHandle = vbo[1];

glBindBuffer(GL_ARRAY_BUFFER, positionBufferHandle);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), positionData, GL_STATIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), colorData, GL_STATIC_DRAW);

The third step is to create and define the VAO. This is how we will define the relationship between the input attributes of the shader and the buffers we just created. The VAO contains this information about the connections. To create a VAO, we use the glGenVertexArrays() method. This gives us a handle to our new object, which we store in our previously created vao variable. Then, we enable the generic vertex attribute indexes 0 and 1 by calling the glEnableVertexAttribArray() method. By making the call to enable the attributes, we are specifying that they will be accessed and used for rendering. The last step makes the connection between the buffer objects we have created and the generic vertex attribute indexes they match to:

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glBindBuffer(GL_ARRAY_BUFFER, positionBufferHandle);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);

glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, NULL);

Finally, in our Draw() function call, we bind to the VAO and call glDrawArrays() to perform the actual render:

glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);

Before we move on to another way to pass data to the shader, there is one more piece of this attribute connection structure we need to discuss. As mentioned, the input variables in a shader are linked to the generic vertex attributes we just saw at the time of linking. When we need to specify the relationship structure, we have a few different choices. We can use what are known as layout qualifiers within the shader code itself.
The following is an example:

layout (location=0) in vec3 VertexPosition;

Another choice is to just let the linker create the mapping when linking, and then query for the locations afterward. The third, and the one I personally prefer, is to specify the relationship prior to the linking process by making a call to the glBindAttribLocation() method. We will see how this is implemented shortly when we discuss how to abstract these processes.

We have described how we can pass data to a shader using attributes, but there is another option: uniform variables. Uniform variables are specifically used for data that changes infrequently. For example, matrices are great candidates for uniform variables. Within a shader, a uniform variable is read-only. That means the value can only be changed from outside the shader. They can also appear in multiple shaders within the same shader program. They can be declared in one or more shaders within a program, but if a variable with a given name is declared in more than one shader, its type must be the same in all shaders. This gives us insight into the fact that the uniform variables are actually held in a shared namespace for the whole of the shader program.

To use a uniform variable in your shader, you first have to declare it in the shader file using the uniform identifier keyword. The following is what this might look like:

uniform mat4 ViewMatrix;

We then need to provide the data for the uniform variable from inside our game/program. We do this by first finding the location of the variable using the glGetUniformLocation() method. Then we assign a value to the found location using one of the glUniform() methods. The code for this process could look something like the following:

GLint location = glGetUniformLocation(programHandle, "ViewMatrix");
if (location >= 0) {
    glUniformMatrix4fv(location, 1, GL_FALSE, &viewMatrix[0][0]);
}

We assign a value to the uniform variable's location using the glUniformMatrix4fv() method. The first argument is the uniform variable's location. The second argument is the number of matrices that are being assigned. The third is a GL bool type specifying whether or not the matrix should be transposed. Since we are using the GLM library for our matrices, a transpose is not required. If you were implementing the matrix using data that was in row-major order, instead of column-major order, you might need to use the GL_TRUE type for this argument. The last argument is a pointer to the data for the uniform variable.

Uniform variables can be any GLSL type, and this includes complex types such as structures and arrays. The OpenGL API provides a glUniform() function with a different suffix to match each type. For example, to assign to a variable of type vec3, we would use the glUniform3f() or glUniform3fv() methods (the v denotes multiple values in the array).

So, those are the concepts and techniques for passing data to and from our shader programs. However, as we did for the compiling and linking of our shaders, we can abstract these processes into functions housed in our ShaderManager class. We are going to focus on working with attributes and uniform variables. First, we will look at the abstraction of adding attribute bindings using the AddAttribute() function of the ShaderManager class. This function takes one argument: the attribute's name, to be bound as a string.
We then call the glBindAttribLocation() function, passing the program's handle, the current index or number of attributes (which we increment on each call), and finally the C string conversion of the attributeName string, which provides a pointer to the first character in the string array. This function must be called after compilation, but before the linking of the shader program:

void ShaderManager::AddAttribute(const std::string& attributeName)
{
    glBindAttribLocation(m_programID, m_numAttributes++, attributeName.c_str());
}

For the uniform variables, we create a function that abstracts looking up the location of the uniform in the shader program: the GetUniformLocation() function. This function again takes only one variable, which is the uniform's name in the form of a string. We then create a temporary holder for the location and assign it the returned value of the glGetUniformLocation() method call. We check to make sure the location is valid, and if not we throw an exception letting us know about the error. Finally, we return the valid location if found:

GLint ShaderManager::GetUniformLocation(const std::string& uniformName)
{
    GLint location = glGetUniformLocation(m_programID, uniformName.c_str());
    if (location == GL_INVALID_INDEX) {
        Exception("Uniform " + uniformName + " not found in shader!");
    }
    return location;
}

This gives us the abstraction for binding our data, but we still need to assign which shader should be used for a certain draw call, and to activate any attributes we need. To accomplish this, we create a function in the ShaderManager called Use(). This function will first set the current shader program as the active one using the glUseProgram() API method call. We then use a for loop to step through the list of attributes for the shader program, activating each one:

void ShaderManager::Use()
{
    glUseProgram(m_programID);
    for (int i = 0; i < m_numAttributes; i++) {
        glEnableVertexAttribArray(i);
    }
}

Of course, since we have an abstracted way to enable the shader program, it only makes sense that we should have a function to disable the shader program. This function is very similar to the Use() function, but in this case, we are setting the program in use to 0, effectively making it NULL, and we use the glDisableVertexAttribArray() method to disable the attributes in the for loop:

void ShaderManager::UnUse()
{
    glUseProgram(0);
    for (int i = 0; i < m_numAttributes; i++) {
        glDisableVertexAttribArray(i);
    }
}

The net effect of this abstraction is that we can now set up our entire shader program structure with a few simple calls. Code similar to the following would create and compile the shaders, add the necessary attributes, link the shaders into a program, locate a uniform variable, and create the VAO and VBO for a mesh:

shaderManager.CompileShaders("Shaders/SimpleShader.vert", "Shaders/SimpleShader.frag");
shaderManager.AddAttribute("vertexPosition_modelspace");
shaderManager.AddAttribute("vertexColor");
shaderManager.LinkShaders();
MatrixID = shaderManager.GetUniformLocation("ModelViewProjection");
m_model.Init("Meshes/Dwarf_2_Low.obj", "Textures/dwarf_2_1K_color.png");

Then, in our Draw loop, if we want to use this shader program to draw, we can simply use the abstracted functions to activate and deactivate our shader, similar to the following code:

shaderManager.Use();
m_model.Draw();
shaderManager.UnUse();

This makes it much easier for us to work with and test out advanced rendering techniques using shaders.
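One detail the snippet above leaves implicit is feeding the located uniform each frame. The following is a minimal sketch of that per-frame update, assuming MatrixID came from GetUniformLocation() as shown, and that mvp is a hypothetical glm::mat4 computed elsewhere from the model, view, and projection matrices:

shaderManager.Use();
// Upload the current matrix to the located uniform. GL_FALSE: GLM stores
// matrices in column-major order, so no transpose is needed.
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &mvp[0][0]);
m_model.Draw();
shaderManager.UnUse();

Since uniform values are stored in the program object, the upload only has to happen when the matrix actually changes, although re-uploading once per frame, as here, is the common pattern.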
In this article, we discussed how advanced rendering techniques and hands-on, practical knowledge of game physics, shaders, and lighting can help you create advanced games with C++. If you liked this article, check out the complete book, Mastering C++ Game Development.

How to use arrays, lists, and dictionaries in Unity for 3D game development
Unity 2D & 3D game kits simplify Unity game development for beginners
How AI is changing game development

article-image-building-c-game-play-engines-in-finite-state-machine-pattern
Amarabha Banerjee
14 Jun 2018
20 min read
Save for later

Building C++ game play engines in finite state machine pattern [Tutorial]

One of the most important aspects of game development is creating game states, which help with different tasks like controlling game flow, managing different game windows, and so on. Here in this article, we are going to show you how you can create gameplay systems with C++ that will help you manage game states and empower you to control different game functionalities efficiently.

We use game states in many different ways: for example, to control the game flow, to handle the different ways characters can act and react, and even for simple menu navigation. Needless to say, states are an important requirement for a strong and manageable code base. There are many different types of state machines; the one we will focus on in this section is the Finite State Machine (FSM) pattern. We will be creating an FSM pattern that will help you build a more generic and flexible state machine.

This article is taken from the book Mastering C++ Game Development, written by Mickey Macdonald. This book shows you how you can create interesting and fun-filled games with C++.

There are a few ways we can implement a simple state machine in our game. One way would be to simply use a switch case setup to control the states and an enum structure for the state types. An example of this would be as follows:

enum PlayerState
{
    Idle,
    Walking
};
...
PlayerState currentState = PlayerState::Idle; //A holder variable for the state currently in
...
// A simple function to change states
void ChangeState(PlayerState nextState)
{
    currentState = nextState;
}

void Update(float deltaTime)
{
    ...
    switch (currentState)
    {
        case PlayerState::Idle:
            ... //Do idle stuff
            //Change to next state
            ChangeState(PlayerState::Walking);
            break;
        case PlayerState::Walking:
            ... //Do walking stuff
            //Change to next state
            ChangeState(PlayerState::Idle);
            break;
    }
    ...
}

Using a switch/case like this can be effective for a lot of situations, but it does have some strong drawbacks. What if we decide to add a few more states? What if we decide to add branching and more if conditionals? The simple switch/case we started out with has suddenly become very large and undoubtedly unwieldy. Every time we want to make a change or add some functionality, we multiply the complexity and introduce more chances for bugs to creep in.

We can help mitigate some of these issues and provide more flexibility by taking a slightly different approach and using classes to represent our states. Through the use of inheritance and polymorphism, we can build a structure that will allow us to chain together states and provide the flexibility to reuse them in many situations. Let's walk through how we can implement this in our demo examples, starting with the base class we will inherit from in the future, IState:

...
namespace BookEngine
{
    class IState
    {
    public:
        IState() {}
        virtual ~IState() {}
        // Called when a state enters and exits
        virtual void OnEntry() = 0;
        virtual void OnExit() = 0;
        // Called in the main game loop
        virtual void Update(float deltaTime) = 0;
    };
}

As you can see, this is just a very simple class that has a constructor, a virtual destructor, and three pure virtual functions that each inherited state must override. OnEntry, which will be called as the state is first entered, will only execute once per state change. OnExit, like OnEntry, will only be executed once per state change and is called when the state is about to be exited. The last function is the Update function; this will be called once per game loop and will contain much of the state's logic.
Although this seems very simple, it gives us a great starting point to build more complex states. Now let's implement this basic IState class in our examples and see how we can use it for one of the common needs of a state machine: creating game states.

First, we will create a new class called GameState that will inherit from IState. This will be the new base class for all the states our game will need. The GameState.h file consists of the following (note the public inheritance, which is required for the states to be usable polymorphically):

#pragma once
#include <BookEngine/IState.h>

class GameState : public BookEngine::IState
{
public:
    GameState();
    ~GameState();
    //Our overrides
    virtual void OnEntry() = 0;
    virtual void OnExit() = 0;
    virtual void Update(float deltaTime) = 0;
    //Added specialty function
    virtual void Draw() = 0;
};

The GameState class is very much like the IState class it inherits from, except for one key difference: in this class, we add a new virtual method, Draw(), that all classes inheriting from GameState must now implement. Each time we use IState and create a new specialized base class (player state, menu state, and so on), we can add these new functions to customize it to the requirements of the state machine. This is how we use inheritance and polymorphism to create more complex states and state machines.

Continuing with our example, let's now create a new GameState. We start by creating a new class called GameWaiting that inherits from GameState. To make it a little easier to follow, I have grouped all of the new GameState inherited classes into one set of files, GameStates.h and GameStates.cpp. The GameStates.h file will look like the following:

#pragma once
#include "GameState.h"

class GameWaiting : public GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

class GameRunning : public GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

class GameOver : public GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

Nothing new here; we are just declaring the functions for each of our GameState classes. Now, in our GameStates.cpp file, we can implement each individual state's functions as described in the preceding code:

#include "GameStates.h"

void GameWaiting::OnEntry()
{
    ... //Called when entering the GameWaiting state
}

void GameWaiting::OnExit()
{
    ... //Called when exiting the GameWaiting state
}

void GameWaiting::Update(float deltaTime)
{
    ... //Called every game loop while in the GameWaiting state
}

void GameWaiting::Draw()
{
    ... //Called to draw the GameWaiting state
}

... //Other GameState implementations
...

For the sake of page space, I am only showing the GameWaiting implementation, but the same goes for the other states. Each one will have its own unique implementation of these functions, which allows you to control the code flow and implement more states as necessary without creating a hard-to-follow maze of code paths. A minimal sketch of driving these states at runtime follows.
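The following helper is my own sketch rather than a class from the book, but it shows how little glue is needed to run the states we just defined; the list-based approach shown next is more complete:

// A minimal, hypothetical driver for the GameState classes above.
class SimpleStateMachine
{
public:
    void ChangeState(GameState* nextState)
    {
        if (m_current != nullptr) { m_current->OnExit(); }
        m_current = nextState;
        if (m_current != nullptr) { m_current->OnEntry(); }
    }

    void Update(float deltaTime)
    {
        if (m_current != nullptr)
        {
            m_current->Update(deltaTime);
            m_current->Draw();
        }
    }

private:
    GameState* m_current = nullptr; // the state currently running
};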
Now that we have our states defined, we can implement them in our game. Of course, we could go about this in many different ways. We could follow the same pattern that we did with our screen system and implement a GameState list class, a definition of which could look like the following:

class GameState;

class GameStateList
{
public:
    GameStateList(IGame* game);
    ~GameStateList();

    GameState* GoToNext();
    GameState* GoToPrevious();

    void SetCurrentState(int nextState);
    void AddState(GameState* newState);
    void Destroy();

    GameState* GetCurrent();

protected:
    IGame* m_game = nullptr;
    std::vector<GameState*> m_states;
    int m_currentStateIndex = -1;
};

Or we could simply use the GameState classes we created with a simple enum and a switch case. The use of the state pattern allows for this flexibility.

Working with cameras

At this point, we have discussed a good amount about the structure of systems and can now move on to designing ways of interacting with our game and 3D environment. This brings us to an important topic: the design of virtual camera systems. A camera is what provides us with a visual representation of our 3D world. It is how we immerse ourselves, and it provides us with feedback on our chosen interactions. In this section, we are going to cover the concept of a virtual camera in computer graphics.

Before we dive into writing the code for our camera, it is important to have a strong understanding of how, exactly, it all works. Let's start with the idea of being able to navigate around the 3D world. In order to do this, we need to use what is referred to as a transformation pipeline: the steps that are taken to transform all objects and points relative to the position and orientation of a camera viewpoint. The pipeline flows from local space, to world space, to view (eye) space, to clip space, and finally to viewport space.

Beginning with the first step in the pipeline, local space: when a mesh is created, it has a local origin of 0 x, 0 y, 0 z. This local origin is typically located either in the center of the object or, in the case of some player characters, at the center of the feet. All points that make up that mesh are then based on that local origin. When talking about a mesh that has not been transformed, we refer to it as being in local space; picture a gnome mesh sitting in a model editor, and that is local space.

In the next step, we want to bring a mesh into our environment, the world space. In order to do this, we have to multiply our mesh points by what is referred to as a model matrix. This places the mesh in world space, which sets all the mesh points to be relative to a single world origin. It's easiest to think of world space as the description of the layout of all the objects that make up your game's environment. Once meshes have been placed in world space, we can start to do things such as compare distances and angles. A great example of this step is placing game objects in a world/level editor; this creates a description of the model's mesh in relation to other objects and a single world origin (0,0,0). We will discuss editors in more detail in the next chapter.

Next, in order to navigate this world space, we have to rearrange the points so that they are relative to the camera's position and orientation. To accomplish this, we perform a few simple operations. The first is to translate the objects to the origin: we move the camera from its current world coordinates (in this example, 20 on the x axis, 2 on the y axis, and -15 on the z axis) to the world origin, 0,0,0.
We can then map the objects by subtracting the camera's position, that is, applying the values used to translate the camera object, which in this case would be -20, -2, 15. So, if our game object started out at 10.5 on the x axis, 1 on the y axis, and -20 on the z axis, the newly translated coordinates would be -9.5, -1, -5. The last operation is to rotate the camera to face the desired direction; in our current case, that would be pointing down the -z axis. For this example, that would mean rotating the object points by -90 degrees, making the example game object's new position 5, -1, -9.5. These operations combine into what is referred to as the view matrix.

Before we go any further, I want to briefly cover some important details when it comes to working with matrices, in particular, handling matrix multiplication and the order of operations. When working with OpenGL, all matrices are defined in a column-major layout. The opposite is the row-major layout, found in other graphics libraries such as Microsoft's DirectX. In the column-major view matrix, U is the unit vector pointing up, F is the vector pointing forward, R is the right vector, and P is the position of the camera.

When constructing a matrix with a combination of translations and rotations, such as the preceding view matrix, you cannot, generally, just stick the rotation and translation values into a single matrix. In order to create a proper view matrix, we need to use matrix multiplication to combine two or more matrices into a single final matrix. Remembering that we are working with column-major notation, the order of operations is right to left. This is important: using the orientation (R) and translation (T) matrices, V = T x R would produce an undesired effect, because it would first rotate the points around the world origin and then move them to align to the camera position as the origin. What we want is V = R x T, where the points first align to the camera as the origin and then have the rotation applied. In a row-major layout, this is the other way around, of course.

The good news is that we do not necessarily need to handle the creation of the view matrix manually. Older versions of OpenGL and most modern math libraries, including GLM, have an implementation of a lookAt() function. Most take a version of the camera position, a target or look position, and the up direction as parameters, and return a fully created view matrix. We will be looking at how to use the GLM implementation of the lookAt() function shortly, but if you want to see the full code implementation of the ideas just described, check out the source code of GLM, which is included in the project source repository. The short sketch below illustrates the difference between the two multiplication orders.
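As an illustration, here is a sketch using GLM and the example values above; this is my own snippet, not code from the book's repository:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Orientation (R) and translation (T) built from the example values above.
glm::mat4 R = glm::rotate(glm::mat4(1.0f), glm::radians(-90.0f),
                          glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(-20.0f, -2.0f, 15.0f));

// Column-major: transforms apply right to left.
glm::mat4 view = R * T;   // translate points to the camera origin first, then rotate
// glm::mat4 bad = T * R; // would rotate about the world origin first - undesired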
Continuing through the transformation pipeline, the next step is to convert from eye space to homogeneous clip space. This stage constructs a projection matrix, which is responsible for a few things. First, it defines the near and far clipping planes. This is the visible range along the defined forward axis (usually z); anything that falls in front of the near distance or beyond the far distance is considered out of range, and any geometry outside of this range will be clipped (removed) from the pipeline in a later step. Second, it defines the Field of View (FOV). Despite the name, the field of view is not a field but an angle. For the FOV, we actually only specify the vertical range; most modern games use 66 or 67 degrees for this. The horizontal range will be calculated for us by the matrix once we provide the aspect ratio (how wide compared to how high). To demonstrate, a 67-degree vertical angle on a display with a 4:3 aspect ratio gives a horizontal FOV of roughly 89.33 degrees (67 * 4/3 = 89.33); note that this linear scaling is only a rough estimate, since the exact horizontal angle, computed from the tangents of the half-angles, comes out closer to 83 degrees.

These two steps combine to create a volume that takes the shape of a pyramid with the top chopped off. This created volume is referred to as the view frustum, and any geometry that falls outside of it is considered to be out of view. You might note that there is more visible space available at the far end of the frustum than at the front. In order to properly display this on a 2D screen, we need to tell the hardware how to calculate the perspective, which is the next step in the pipeline. The larger, far end of the frustum is pushed together, creating a box shape, and the collection of objects visible at this wide end is squeezed together as well; this provides us with a perspective view. To understand this, imagine the phenomenon of looking along a straight stretch of railway tracks: as the tracks continue into the distance, they appear to get smaller and closer together.

The next step in the pipeline, after defining the clipping space, is to use what is called the perspective division to normalize the points into a box shape with the dimensions of (-1 to 1, -1 to 1, -1 to 1). This is referred to as normalized device space. By normalizing the dimensions into unit size, we allow the points to be scaled up or down to any viewport dimensions.

The last major step in the transformation pipeline is to create the 2D representation of the 3D scene that will be displayed. To do this, we flatten the normalized device space, with the objects further away being drawn behind the objects that are closer to the camera (draw depth). The X and Y dimensions are scaled from their normalized values into actual pixel values of the viewport. After this step, we have a 2D space referred to as viewport space. That completes the transformation pipeline stages; the short sketch that follows traces a single point through them.
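Purely as an illustration (the camera and object values here are made up, and this is my sketch rather than code from the book), pushing one point through the stages with GLM looks like this:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical camera and object values, for illustration only.
glm::vec3 camPos(0.0f, 2.0f, 10.0f);
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(10.5f, 1.0f, -20.0f));
glm::mat4 view  = glm::lookAt(camPos, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 proj  = glm::perspective(glm::radians(67.0f), 4.0f / 3.0f, 0.1f, 100.0f);

glm::vec4 localPoint(0.0f, 0.0f, 0.0f, 1.0f);        // local space
glm::vec4 clip = proj * view * model * localPoint;   // world -> eye -> clip space
glm::vec3 ndc  = glm::vec3(clip) / clip.w;           // perspective division -> NDC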
With that theory covered, we can now shift to implementation and write some code. We are going to start by looking at the creation of a basic, first-person 3D camera, which means we are looking through the eyes of the player's character. Let's start with the camera's header file, Camera3D.h, in the source code repository:

...
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
...

We start with the necessary includes. As I just mentioned, GLM includes support for working with matrices, so we include both glm.hpp and matrix_transform.hpp to gain access to GLM's lookAt() function:

...
public:
    Camera3D();
    ~Camera3D();
    void Init(glm::vec3 cameraPosition = glm::vec3(4, 10, 10),
              float horizontalAngle = -2.0f,
              float verticalAngle = 0.0f,
              float initialFoV = 45.0f);
    void Update();

Next, we have the publicly accessible functions for our Camera3D class. The first two are just the standard constructor and destructor. We then have the Init() function. We declare this function with a few defaults provided for the parameters; this way, if no values are passed in, we will still have values to calculate our matrices in the first update call. That brings us to the next function declared, the Update() function. This is the function that the game engine will call each loop to keep the camera updated:

glm::mat4 GetView() { return m_view; };
glm::mat4 GetProjection() { return m_projection; };
glm::vec3 GetForward() { return m_forward; };
glm::vec3 GetRight() { return m_right; };
glm::vec3 GetUp() { return m_up; };

After the Update() function, there is a set of five getter functions to return both the view and projection matrices, as well as the camera's forward, up, and right vectors. To keep the implementation clean and tidy, we can simply declare and implement these getter functions right in the header file:

void SetHorizontalAngle(float angle) { m_horizontalAngle = angle; };
void SetVerticalAngle(float angle) { m_verticalAngle = angle; };

Directly after the set of getter functions, we have two setter functions. The first will set the horizontal angle, and the second will set the vertical angle; these are useful for adjusting the camera's orientation from outside the class:

void MoveCamera(glm::vec3 movementVector) { m_position += movementVector; };

The last public function in the Camera3D class is the MoveCamera() function. This simple function takes in a vector 3, then cumulatively adds that vector to the m_position variable, which is the current camera position:

...
private:
    glm::mat4 m_projection;
    glm::mat4 m_view; // Camera matrix

For the private declarations of the class, we start with two glm::mat4 variables. A glm::mat4 is the datatype for a 4x4 matrix. We create one for the view or camera matrix and one for the projection matrix:

    glm::vec3 m_position;
    float m_horizontalAngle;
    float m_verticalAngle;
    float m_initialFoV;

Next, we have a single vector 3 variable to hold the position of the camera, followed by three float values: one for the horizontal angle, one for the vertical angle, and a variable to hold the field of view:

    glm::vec3 m_right;
    glm::vec3 m_up;
    glm::vec3 m_forward;

We then have three more vector 3 variables that will hold the right, up, and forward values for the camera object.

Now that we have the declarations for our 3D camera class, the next step is to implement any of the functions that have not already been implemented in the header file. There are only two functions that we need to provide, the Init() and Update() functions. Let's begin with the Init() function, found in the Camera3D.cpp file:

void Camera3D::Init(glm::vec3 cameraPosition,
                    float horizontalAngle,
                    float verticalAngle,
                    float initialFoV)
{
    m_position = cameraPosition;
    m_horizontalAngle = horizontalAngle;
    m_verticalAngle = verticalAngle;
    m_initialFoV = initialFoV;
    Update();
}
...

Our Init() function is straightforward; all we are doing is taking in the provided values and setting them to the corresponding variables we declared. Once we have set these values, we simply call the Update() function to handle the calculations for the newly created camera object:

...
void Camera3D::Update()
{
    m_forward = glm::vec3(
        glm::cos(m_verticalAngle) * glm::sin(m_horizontalAngle),
        glm::sin(m_verticalAngle),
        glm::cos(m_verticalAngle) * glm::cos(m_horizontalAngle)
    );

The Update() function is where all of the heavy lifting of the class is done. It starts out by calculating the new forward vector for our camera. This is done with a simple formula leveraging GLM's cosine and sine functions: we are converting from spherical coordinates to Cartesian coordinates so that we can use the value in the creation of our view matrix.
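For reference, writing h for the horizontal angle and v for the vertical angle, the code above is the standard unit-sphere mapping:

x = cos(v) * sin(h)
y = sin(v)
z = cos(v) * cos(h)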
m_right = glm::vec3(
    glm::sin(m_horizontalAngle - 3.14f / 2.0f),
    0,
    glm::cos(m_horizontalAngle - 3.14f / 2.0f)
);

After we calculate the new forward vector, we then calculate the new right vector for our camera, again using a simple formula that leverages GLM's sine and cosine functions:

m_up = glm::cross(m_right, m_forward);

Now that we have the forward and right vectors calculated, we can use GLM's cross-product function to calculate the new up vector for our camera. It is important that these three steps happen every time the camera changes position or rotation, and before the creation of the camera's view matrix:

float FoV = m_initialFoV;

Next, we specify the FOV. Currently, I am just setting it back to the initial FOV specified when initializing the camera object. This would be the place to recalculate the FOV if the camera was, say, zoomed in or out (hint: mouse scroll could be useful here):

m_projection = glm::perspective(glm::radians(FoV), 4.0f / 3.0f, 0.1f, 100.0f);

Once we have the field of view specified, we can then calculate the projection matrix for our camera. Luckily for us, GLM has a very handy function called glm::perspective(), which takes in a field of view in radians, an aspect ratio, the near clipping distance, and a far clipping distance, and returns a created projection matrix. Since this is an example, I have specified a 4:3 aspect ratio (4.0f/3.0f) and a clipping space of 0.1 units to 100 units directly. In production, you would ideally move these values to variables that could be changed during runtime:

m_view = glm::lookAt(
    m_position,
    m_position + m_forward,
    m_up
);
}

Finally, the last thing we do in the Update() function is create the view matrix. As I mentioned before, we are fortunate that the GLM library supplies a lookAt() function to abstract all the steps we discussed earlier in the section. This lookAt() function takes three parameters. The first is the position of the camera. The second is a vector value of where the camera is pointed, or looking at, which we provide by doing a simple addition of the camera's current position and its calculated forward vector. The last parameter is the camera's current up vector, which, again, we calculated previously. Once finished, this function will return the newly updated view matrix to use in our graphics pipeline.

In this article, we learned to create a simple FSM and camera system for a game engine with C++. Check out the book Mastering C++ Game Development to learn high-end game development with advanced C++ 17 programming techniques.

How AI is changing game development
How to use arrays, lists, and dictionaries in Unity for 3D game development
Unity 2D & 3D game kits simplify Unity game development for beginners
article-image-ai-unity-game-developers-emulate-real-world-senses
Kunal Chaudhari
06 Jun 2018
19 min read
Save for later

AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

An AI character system needs to be aware of its environment: where the obstacles are, where the enemy is, whether the enemy is visible in the player's sight, and so on. The quality of our Non-Player Character's (NPC's) AI completely depends on the information it can get from the environment. Nothing breaks the level of immersion in a game quite like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data. If the sensory systems do not provide enough data, or the AI system is unable to properly act on that data, the agent can begin to glitch, or behave in ways contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it's worth a quick internet search to find some videos of AI glitches for a good laugh.

In this article, we'll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist. You are reading an extract from Unity 2017 Game AI Programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe.

Basic sensory systems

Our agent's sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position, because your perspective shifts and you have to rely more and more on your fuzzy memory of the room's layout. While our senses rely on and take in a constant stream of data to navigate the environment, our agent's AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent.

The concept of a basic sensory system is that there will be two components: Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch, and these senses will look out for specific aspects such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that's looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain.

For our demo, this is basically what we are going to implement: a base interface called Sense that will be implemented by other custom senses. In this article, we'll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch: when an enemy gets too close, we want to be able to sense that, almost as if our AI character could feel the enemy's presence nearby. Then we'll write a minimal Aspect class that our senses will be looking for.

Cone of sight

A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle visual detection in a simple way, it doesn't accurately model the way vision works for most entities. Still, it is worth seeing how little code a single-ray check takes; a sketch follows.
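This is my own sketch rather than code from the book; the field names (target, sightRange) are illustrative:

using UnityEngine;

// Simple single-ray visibility test between this object and a target.
public class LineOfSight : MonoBehaviour
{
    public Transform target;
    public float sightRange = 50f;

    bool CanSeeTarget()
    {
        Vector3 direction = target.position - transform.position;
        RaycastHit hit;
        if (Physics.Raycast(transform.position, direction, out hit, sightRange))
        {
            // Visible only if the first thing the ray hits is the target itself.
            return hit.transform == target;
        }
        return false;
    }
}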
An alternative to using the line of sight is using a cone-shaped field of vision, which can be modeled in 2D or 3D, as appropriate for your type of game. Beginning with the source, that is, the agent's eyes, the cone grows but becomes less accurate with distance. The actual implementation of the cone can vary from a basic overlap test to a more complex, realistic model that mimics eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance and periphery. A complex implementation mimics eyesight more closely: as the cone widens away from the source, the field of vision grows, but the chance of seeing things toward the edges of the cone diminishes compared to those near the center.

Hearing, feeling, and smelling using spheres

One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source, with the loudness dissipating the farther the listener is from the center. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound: the listener's hearing is represented by a sphere, and the sounds closest to the listener are more likely to be "heard." We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling. As with sight, the probability of an agent registering the sensory event can be modified based on the distance from the sensor, or treated as a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere. A simple distance-based falloff could look like the sketch below.
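This helper is my own sketch rather than code from the book; it returns 1 when the source sits on the listener and 0 at or beyond the edge of the hearing sphere:

using UnityEngine;

public static class HearingUtil
{
    // Linear falloff inside a sphere of radius hearingRadius.
    public static float Intensity(Vector3 listener, Vector3 source, float hearingRadius)
    {
        float distance = Vector3.Distance(listener, source);
        return Mathf.Clamp01(1f - distance / hearingRadius);
    }
}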
Expanding AI through omniscience

In a nutshell, omniscience is really just a way to make your AI cheat. While your agent doesn't necessarily know everything, it simply means that it can know anything. In some ways, this can seem like the antithesis of realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool for providing an extra layer of complexity.

In games, we tend to model abstract concepts using concrete values. For example, we may represent a player's health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force, or sense events in your game world without having to physically experience them. While omniscience is not necessarily a specific pattern or technique, it's another tool in your toolbox as a game developer: a way to cheat a bit and make your game more interesting by, in essence, bending the rules of AI and giving your agent data it may not otherwise have had access to through physical means.

Getting creative with sensing

While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive its environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns. Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away! Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of it until that character casts or receives the effect of a magic spell. Maybe certain NPCs can track the player using smell, and walking through a collider marked water clears the scent from the player so that the NPC can no longer track him. As you progress through the book, you'll be given all the tools to pull these and many other mechanics off: sensing, decision-making, pathfinding, and so on. As we cover some of these techniques, start thinking about creative twists for your game.

Setting up the scene

In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself by following these steps:

1. Create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles.
2. Add a plane to be used as a floor.
3. Add a directional light so that we can see what is going on in the scene.

As you can see in the example, there is a target 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene. For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view.

Now we will position the tank, the AI character, and the walls randomly in our scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we have a clear view of the scene. With the essential setup out of the way, we can begin tackling the code for driving the various systems.

Setting up the player tank and aspect

Our Target object is a simple sphere game object with the mesh renderer removed, so that we end up with only the Sphere Collider. Look at the following code in the Target.cs file:

using UnityEngine;

public class Target : MonoBehaviour
{
    public Transform targetMarker;

    void Start() {}

    void Update()
    {
        int button = 0;
        //Get the point of the hit position when the mouse is being clicked
        if (Input.GetMouseButtonDown(button))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hitInfo;
            if (Physics.Raycast(ray.origin, ray.direction, out hitInfo))
            {
                Vector3 targetPosition = hitInfo.point;
                targetMarker.position = targetPosition;
            }
        }
    }
}

You'll notice we left an empty Start method in the code. While there is a small cost to having empty Start, Update, and other MonoBehaviour event methods that don't do anything, we can sometimes choose to leave the Start method in during development so that the component shows an enable/disable toggle in the inspector. Attach this script to our Target object, which is what we assigned in the inspector to the targetMarker variable.
The script detects the mouse click event and then, using a raycast, finds the clicked point on the plane in 3D space. After that, it moves the Target object to that position in world space. A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin in a given direction, and returns data on any colliders hit along the way.

Implementing the player tank

Our player tank is the simple tank model with a kinematic rigid body component attached. The rigid body component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is assign the tag Player to our tank. The isKinematic flag in Unity's Rigidbody component makes it so that external forces are ignored, so that you can control the Rigidbody entirely from code or from an animation, while still having access to the Rigidbody API.

The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates the tank's destination point and direction accordingly. The code in the PlayerTank.cs file is as follows:

using UnityEngine;

public class PlayerTank : MonoBehaviour
{
    public Transform targetTransform;
    public float targetDistanceTolerance = 3.0f;

    private float movementSpeed;
    private float rotationSpeed;

    // Use this for initialization
    void Start()
    {
        movementSpeed = 10.0f;
        rotationSpeed = 2.0f;
    }

    // Update is called once per frame
    void Update()
    {
        if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance)
        {
            return;
        }

        Vector3 targetPosition = targetTransform.position;
        targetPosition.y = transform.position.y;
        Vector3 direction = targetPosition - transform.position;

        Quaternion tarRot = Quaternion.LookRotation(direction);
        transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }
}

After we assign this script to our tank, be sure to assign the Target object to the targetTransform variable in the inspector.

Implementing the Aspect class

Next, let's take a look at the Aspect.cs class. Aspect is a very simple class with just one public enum of type AspectTypes, called aspectType. That's the only variable we need in this component. Whenever our AI character senses something, we'll check the aspectType to see whether it's the aspect that the AI has been looking for. The code in the Aspect.cs file looks like this:

using UnityEngine;

public class Aspect : MonoBehaviour
{
    public enum AspectTypes
    {
        PLAYER,
        ENEMY,
    }

    public AspectTypes aspectType;
}

Attach this aspect script to our player tank and set the aspectType to PLAYER.

Creating an AI character

Our NPC will be roaming around the scene in a random direction. It'll have the following two senses:

- The perspective sense will check whether the tank aspect is within a set visible range and distance
- The touch sense will detect whether the enemy aspect has collided with its box collider, which we'll be adding to the tank in a later step

Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own.
The code in the Wander.cs file is as follows:

using UnityEngine;

public class Wander : MonoBehaviour
{
    private Vector3 targetPosition;
    private float movementSpeed = 5.0f;
    private float rotationSpeed = 2.0f;
    private float targetPositionTolerance = 3.0f;
    private float minX;
    private float maxX;
    private float minZ;
    private float maxZ;

    void Start()
    {
        minX = -45.0f;
        maxX = 45.0f;
        minZ = -45.0f;
        maxZ = 45.0f;
        //Get Wander Position
        GetNextPosition();
    }

    void Update()
    {
        if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance)
        {
            GetNextPosition();
        }

        Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }

    void GetNextPosition()
    {
        targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ));
    }
}

The Wander script is rather simplistic: it generates a new random position within a specified range whenever the AI character reaches its current destination point, and the Update method then rotates our enemy and moves it toward the new destination. Attach this script to our AI character so that it can move around the scene.

Using the Sense class

The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be overridden in the custom senses and executed from the Start and Update methods, respectively. Virtual methods can be overridden using the override modifier in derived classes; unlike abstract methods, virtual methods do not require you to override them. The code in the Sense.cs file looks like this:

using UnityEngine;

public class Sense : MonoBehaviour
{
    public bool enableDebug = true;
    public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY;
    public float detectionRate = 1.0f;

    protected float elapsedTime = 0.0f;

    protected virtual void Initialize() { }
    protected virtual void UpdateSense() { }

    // Use this for initialization
    void Start()
    {
        elapsedTime = 0.0f;
        Initialize();
    }

    // Update is called once per frame
    void Update()
    {
        UpdateSense();
    }
}

The basic properties include the detection rate at which the sensing operation executes, as well as the name of the aspect it should look for. This script will not be attached to any of our objects, since we'll be deriving from it for our actual senses.

Giving a little perspective

The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console.
The code in the Perspective.cs file looks like this:

using UnityEngine;

public class Perspective : Sense
{
    public int fieldOfView = 45;
    public int viewDistance = 100;

    private Transform playerTransform;
    private Vector3 rayDirection;

    protected override void Initialize()
    {
        playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
    }

    protected override void UpdateSense()
    {
        elapsedTime += Time.deltaTime;
        if (elapsedTime >= detectionRate)
        {
            elapsedTime = 0.0f; // reset so we only test once per detection interval
            DetectAspect();
        }
    }

    //Detect perspective field of view for the AI Character
    void DetectAspect()
    {
        RaycastHit hit;
        rayDirection = playerTransform.position - transform.position;
        if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView)
        {
            // Detect if player is within the field of view
            if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance))
            {
                Aspect aspect = hit.collider.GetComponent<Aspect>();
                if (aspect != null)
                {
                    //Check the aspect
                    if (aspect.aspectType != aspectName)
                    {
                        print("Enemy Detected");
                    }
                }
            }
        }
    }

We need to implement the Initialize and UpdateSense methods, which are called from the Start and Update methods of the parent Sense class, respectively. (Note that the elapsedTime reset in UpdateSense is a small fix over the original listing; without it, the sense would fire every frame once the first interval had passed.) In the DetectAspect method, we first check the angle between the player and the AI's current facing direction. If it's within the field of view, we shoot a ray in the direction of the player tank, with the ray length set to the visible distance property. The Raycast method returns as soon as it hits another object; this way, even if the player is in the visible range, the AI character will not see the player if it's hidden behind a wall. We then check for an Aspect component, and report a detection only if the object that was hit has an Aspect component whose aspectType differs from the sense's own.

The OnDrawGizmos method draws lines based on the perspective field of view angle and viewing distance, so that we can see the AI character's line of sight in the editor window during play testing. Attach this script to our AI character and be sure that the aspect type is set to ENEMY. The method looks as follows:

    void OnDrawGizmos()
    {
        if (playerTransform == null)
        {
            return;
        }

        Debug.DrawLine(transform.position, playerTransform.position, Color.red);

        Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance);

        //Approximate perspective visualization
        Vector3 leftRayPoint = frontRayPoint;
        leftRayPoint.x += fieldOfView * 0.5f;

        Vector3 rightRayPoint = frontRayPoint;
        rightRayPoint.x -= fieldOfView * 0.5f;

        Debug.DrawLine(transform.position, frontRayPoint, Color.green);
        Debug.DrawLine(transform.position, leftRayPoint, Color.green);
        Debug.DrawLine(transform.position, rightRayPoint, Color.green);
    }
}

Touching is believing

The next sense we'll implement is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a box collider component with its IsTrigger flag turned on. We need to implement the OnTriggerEnter event, which is called whenever another collider enters the collision area of this game object's collider. Since our tank entity also has a collider and rigid body components, collision events will be raised as soon as the colliders of the AI character and the player tank intersect. Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to fire off every frame that a collider is inside the trigger, respectively; a quick sketch of the two follows.
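The following is a minimal sketch of those two companion events; it is my own illustration, not code from the book:

using UnityEngine;

public class TriggerEventsExample : MonoBehaviour
{
    void OnTriggerExit(Collider other)
    {
        // Called once, when 'other' leaves this object's trigger volume.
        print("Collider left the trigger");
    }

    void OnTriggerStay(Collider other)
    {
        // Called every physics step while 'other' remains inside the trigger.
        print("Collider still inside the trigger");
    }
}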
The code in the Touch.cs file is as follows:

using UnityEngine;

public class Touch : Sense
{
    void OnTriggerEnter(Collider other)
    {
        Aspect aspect = other.GetComponent<Aspect>();
        if (aspect != null)
        {
            //Check the aspect
            if (aspect.aspectType != aspectName)
            {
                print("Enemy Touch Detected");
            }
        }
    }
}

Our sample NPC and tank have BoxCollider components on them already, and the NPC has its sensor collider set to IsTrigger = true. If you're setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. It is this box collider on our enemy AI that triggers the touch sense event. For demo purposes, we just print out that the enemy aspect has been detected by the touch sense, but in your own games, you can implement any events and logic that you want.

Testing the results

Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to the clicked location. You should see the Enemy Touch Detected message in the console log window whenever our AI character gets close to the player tank. Move the player tank in front of the NPC, and you'll get the Enemy Detected message. If you switch to the editor view while running the game, you should see the debug lines being rendered; this is thanks to the OnDrawGizmos method implemented in the Perspective sense class.

To summarize, we introduced the concept of using sensors and implemented two distinct senses, perspective and touch, for our AI character. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition to explore the brand-new features in Unity 2017.

How to use arrays, lists, and dictionaries in Unity for 3D game development
How to create non-player Characters (NPC) with Unity 2018

article-image-fuzzy-logic-ai-characters-unity-3d-games
Kunal Chaudhari
01 Jun 2018
16 min read
Save for later

Implementing fuzzy logic to bring AI characters alive in Unity based 3D games

Fuzzy logic is a fantastic way to represent the rules of your game in a more nuanced way. Perhaps more so than other concepts, fuzzy logic is a very math-heavy topic, and most of the information can be represented purely by mathematical functions. For the sake of teaching the important concepts as they apply to Unity, most of the math has been simplified and implemented using Unity's built-in features. In this tutorial, we will take a look at the concepts behind fuzzy logic systems and implement them in your AI system. Implementing fuzzy logic will make your game characters more believable and better reflect real-world attributes.

This article is an excerpt from a book written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe titled Unity 2017 Game AI Programming - Third Edition. This book will help you leverage the power of artificial intelligence to program smart entities for your games.

Defining fuzzy logic

The simplest way to define fuzzy logic is by comparison to binary logic. Generally, transition rules are looked at as true or false, 0 or 1 values: Is something visible? Is it at least a certain distance away? Even in instances where multiple values were being evaluated, all of the values had exactly two outcomes; thus, they were binary. In contrast, fuzzy values represent a much richer range of possibilities, where each value is represented as a float rather than an integer. We stop looking at values as 0 or 1, and we start looking at them as 0 to 1.

A common example used to describe fuzzy logic is temperature. Fuzzy logic allows us to make decisions based on non-specific data. I can step outside on a sunny Californian summer's day and ascertain that it is warm, without knowing the temperature precisely. Conversely, if I were to find myself in Alaska during the winter, I would know that it is cold, again without knowing the exact temperature. These concepts of cold, cool, warm, and hot are fuzzy ones. There is a good amount of ambiguity as to the point at which we go from warm to hot. Fuzzy logic allows us to model these concepts as sets and determine their validity, or truth, by using a set of rules.

When people make decisions, they have some gray areas. That is to say, it's not always black and white. The same concept applies to agents that rely on fuzzy logic. Say you hadn't eaten in a few hours, and you were starting to feel a little hungry. At which point are you hungry enough to go grab a snack? You could look at the time right after a meal as 0, and 1 would be the point where you approach starvation.

When making decisions, there are many factors that determine the ultimate choice. This leads into another aspect of fuzzy logic controllers: they can take into account as much data as necessary. Let's continue to look at our "should I eat?" example. We've only considered one value for making that decision: the time since you last ate. However, there are other factors that can affect this decision, such as how much energy you're expending and how lazy you are at that particular moment. Or am I the only one to use that as a deciding factor? Either way, you can see how multiple input values can affect the output, which we can think of as the "likeliness to have another meal."

Fuzzy logic systems can be very flexible due to their generic nature. You provide input, the fuzzy logic provides an output. What that output means to your game is entirely up to you.
We've primarily looked at how the inputs affect a decision, which, in reality, means taking the output and using it in a way the computer, our agent, can understand. However, the output can also be used to determine how much of something to do, how fast something happens, or for how long something happens. For example, imagine your agent is a car in a sci-fi racing game that has a "nitro-boost" ability that lets it expend a resource to go faster. Our 0 to 1 value can represent a normalized amount of time for it to use that boost, or perhaps a normalized amount of fuel to use.

Picking fuzzy systems over binary systems

With most things in game programming, we must evaluate the requirements of our game and the technology and hardware limitations when deciding on the best way to tackle a problem. As you might imagine, there is a performance cost associated with going from a simple yes/no system to a more nuanced fuzzy logic one, which is one of the reasons we may opt out of using it. Of course, being a more complex system doesn't necessarily mean it's a better one. There will be times when you just want the simplicity and predictability of a binary system, because it may fit your game better.

While there is some truth to the old adage "the simpler, the better", one should also take into account the saying "everything should be made as simple as possible, but not simpler". Though the quote is widely attributed to Albert Einstein, the father of relativity, it's not entirely clear who said it. The important thing to consider is the meaning of the quote itself: you should make your AI as simple as your game needs it to be, but not simpler. Pac-Man's AI works perfectly for the game; it's simple enough. However, rules that simple would be out of place in a modern shooter or strategy game.

Using fuzzy logic

Once you understand the simple concepts behind fuzzy logic, it's easy to start thinking of the many ways in which it can be useful. In reality, it's just another tool in our belt, and each job requires different tools. Fuzzy logic is great at taking some data, evaluating it in a similar way to how a human would (albeit in a much simpler way), and then translating the data back into information that is usable by the system.

Fuzzy logic controllers have several real-world use cases. Some are more obvious than others, and while these are by no means one-to-one comparisons to our usage in game AI, they serve to illustrate a point:

- Heating, ventilation, and air conditioning (HVAC) systems: The temperature example is not only a good theoretical approach to explaining fuzzy logic, but also a very common real-world application of fuzzy logic controllers.
- Automobiles: Modern automobiles come equipped with very sophisticated computerized systems, from the air conditioning system (again), to fuel delivery, to automated braking systems. In fact, putting computers in automobiles has resulted in far more efficient systems than the old binary systems that were sometimes used.
- Your smartphone: Ever notice how your screen dims and brightens depending on how much ambient light there is? Modern smartphone operating systems look at ambient light, the color of the data being displayed, and the current battery life to optimize screen brightness.
- Washing machines: Not my washing machine necessarily, as it's quite old, but most modern washers (from the last 20 years) make some use of fuzzy logic. Load size, water dirtiness, temperature, and other factors are taken into account from cycle to cycle to optimize water use, energy consumption, and time.
If you take a look around your house, there is a good chance you'll find a few interesting uses of fuzzy logic, and I mean besides your computer, of course. While these are neat uses of the concept, they're not particularly exciting or game-related. I'm partial to games involving wizards, magic, and monsters, so let's look at a more relevant example.

Implementing a simple fuzzy logic system

For this example, we're going to use my good friend Bob, the wizard. Bob lives in an RPG world, and he has some very powerful healing magic at his disposal. Bob has to decide when to cast this magic on himself based on his remaining health points (HPs). In a binary system, Bob's decision-making process might look like this:

if (healthPoints <= 50)
{
    CastHealingSpell(me);
}

We see that Bob's health can be in one of two states: above 50, or not. Nothing wrong with that, but let's have a look at what the fuzzy version of this same scenario might look like, starting with determining Bob's health status.

Before the panic sets in upon seeing charts and values that may not quite mean anything to you right away, let's dissect what we're looking at. Our first impulse might be to try to map the probability that Bob will cast a healing spell to how much health he is missing. That would, in simple terms, just be a linear function, and there is nothing really fuzzy about that; it's a linear relationship, and while it is a step above a binary decision in terms of complexity, it's still not truly fuzzy.

Enter the concept of a membership function. It's key to our system, as it allows us to determine how true a statement is. In this example, we're not simply looking at raw values to determine whether or not Bob should cast his spell; instead, we're breaking the values up into logical chunks of information for Bob to use in order to determine his course of action. In this example, we're comparing three statements and evaluating not only how true each one is, but which is the most true:

- Bob is in a critical condition
- Bob is hurt
- Bob is healthy

If you're into official terminology, we call this determining the degree of membership to a set. Once we have this information, our agent can determine what to do with it next. At a glance, you'll notice it's possible for two statements to be true at a time. Bob can be in a critical condition and hurt. He can also be somewhat hurt and a little bit healthy. You're free to pick the thresholds for each, but, in this example, let's evaluate these statements as per the membership graph, where the vertical value represents the degree of truth of a statement as a normalized float (0 to 1):

- At 0 percent health, the critical statement evaluates to 1. It is absolutely true that Bob is critical when his health is gone.
- At 40 percent health, Bob is hurt, and that is the truest statement.
- At 100 percent health, the truest statement is that Bob is healthy.

Anything outside of these absolutely true statements is squarely in fuzzy territory. For example, let's say Bob's health is at 65 percent. A vertical line drawn through the chart at 65 represents Bob's health; it intersects both the Hurt and Healthy sets, which means that Bob is a little bit hurt, but he's also kind of healthy. At a glance, however, we can tell that the line intercepts the Hurt set at a higher point in the graph.
Let's take a look at this in code; open up our FuzzySample scene in Unity. The hierarchy will look like this: the important game object to look at is Fuzzy Example. This contains the logic that we'll be looking at. In addition to that, we have our Canvas containing all of the labels, plus the input field and button that make this example work. Lastly, there's the Unity-generated EventSystem and Main Camera, which we can disregard. There isn't anything special going on with the setup for the scene, but it's a good idea to become familiar with it, and you are encouraged to poke around and tweak it to your heart's content after we've looked at why everything is there and what it all does. With the Fuzzy Example game object selected, the inspector will look similar to the following image.

Our sample implementation is not necessarily something you'll take and implement in your game as it is; it is meant to illustrate the previous points in a clear manner. We use Unity's AnimationCurve for each different set. It's a quick and easy way to visualize the very same lines in our earlier graph. Unfortunately, there is no straightforward way to plot all the lines in the same graph, so we use a separate AnimationCurve for each set. In the preceding screenshot, they are labeled Critical, Hurt, and Healthy. The neat thing about these curves is that they come with a built-in method to evaluate them at a given point (t). For us, t does not represent time, but rather the amount of health Bob has. As in the preceding graph, the Unity example looks at an HP range of 0 to 100. These curves also provide a simple user interface for editing the values. You can simply click on the curve in the inspector to open the curve editing window, where you can add points, move points, change tangents, and so on, as shown in the following screenshot:

Unity's curve editor window

Our example focuses on triangle-shaped sets; that is, linear graphs for each set. You are by no means restricted to this shape, though it is the most common. You could use a bell curve or a trapezoid, for that matter. To keep things simple, we'll stick to the triangle. You can learn more about Unity's AnimationCurve editor at http://docs.unity3d.com/ScriptReference/AnimationCurve.html.

The rest of the fields are just references to the different UI elements used in the code that we'll be looking at later in this chapter. The names of these variables are fairly self-explanatory, however, so there isn't much guesswork to be done here. Next, we can take a look at how the scene is set up. If you play the scene, the game view will look something similar to the following screenshot:

A simple UI to demonstrate fuzzy values

We can see that we have three distinct groups, representing each question from the "Bob, the wizard" example: how healthy is Bob, how hurt is Bob, and how critical is Bob? For each set, upon evaluation, the value that starts off as "0 true" will dynamically adjust to represent the actual degree of membership. There is an input box in which you can type a percentage of health to use for the test. No fancy controls are in place for this, so be sure to enter a value from 0 to 100. For the sake of consistency, let's enter a value of 65 into the box and then press the Evaluate! button. This will run some code, look at the curves, and yield the exact same results we saw in our graph earlier.
While this shouldn't come as a surprise (the math is what it is, after all), few things are more important in game programming than testing your assumptions, and sure enough, we've tested and verified our earlier statement. After running the test by hitting the Evaluate! button, the game scene will look similar to the following screenshot:

This is how Bob is doing at 65 percent health

Again, the values turn out to be 0.125 (or 12.5 percent) healthy and 0.375 (or 37.5 percent) hurt. At this point, we're still not doing anything with this data, but let's take a look at the code that's handling everything:

using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class FuzzySample1 : MonoBehaviour
{
    private const string labelText = "{0} true";

    public AnimationCurve critical;
    public AnimationCurve hurt;
    public AnimationCurve healthy;

    public InputField healthInput;

    public Text healthyLabel;
    public Text hurtLabel;
    public Text criticalLabel;

    private float criticalValue = 0f;
    private float hurtValue = 0f;
    private float healthyValue = 0f;

We start off by declaring some variables. The labelText is simply a constant we use to plug into our label; we replace {0} with the real value. Next, we declare the three AnimationCurve variables that we mentioned earlier. Making these public or otherwise accessible from the inspector is key to being able to edit them visually (though it is possible to construct curves in code), which is the whole point of using them. The following four variables are just references to the UI elements that we saw earlier in the screenshot of our inspector, and the last three variables are the actual float values that our curves will evaluate into:

    private void Start()
    {
        SetLabels();
    }

    /*
     * Evaluates all the curves and returns float values
     */
    public void EvaluateStatements()
    {
        if (string.IsNullOrEmpty(healthInput.text))
        {
            return;
        }
        float inputValue = float.Parse(healthInput.text);

        healthyValue = healthy.Evaluate(inputValue);
        hurtValue = hurt.Evaluate(inputValue);
        criticalValue = critical.Evaluate(inputValue);

        SetLabels();
    }

The Start() method doesn't require much explanation. We simply update our labels here so that they initialize to something other than the default text. The EvaluateStatements() method is much more interesting. We first do some simple null checking for our input string. We don't want to try to parse an empty string, so we return out of the function if it is empty. As mentioned earlier, there is no check in place to validate that you've input a numerical value, so be sure not to accidentally input a non-numerical value or you'll get an error. For each of the AnimationCurve variables, we call the Evaluate(float t) method, where we replace t with the parsed value we get from the input field. In the example we ran, that value would be 65. Then, we update our labels once again to display the values we got. The code looks similar to this:

    /*
     * Updates the GUI with the evaluated values based
     * on the health percentage entered by the
     * user.
     */
    private void SetLabels()
    {
        healthyLabel.text = string.Format(labelText, healthyValue);
        hurtLabel.text = string.Format(labelText, hurtValue);
        criticalLabel.text = string.Format(labelText, criticalValue);
    }
}

We simply take each label and replace the text with a formatted version of our labelText constant that replaces the {0} with the real value.
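The sample stops at displaying the values, but a natural next step is to let Bob act on them. The following is a hedged sketch of one simple approach, picking the set with the highest degree of membership; it could be added to FuzzySample1, but the CastHealingSpell method and the 0.5 threshold are illustrative assumptions rather than part of the sample:

// Sketch: acting on the evaluated memberships inside FuzzySample1.
private void DecideAction()
{
    if (criticalValue >= hurtValue && criticalValue >= healthyValue)
    {
        // "Critical" is the most true statement: heal immediately.
        CastHealingSpell(); // assumed to exist elsewhere in the project
    }
    else if (hurtValue > healthyValue && hurtValue > 0.5f)
    {
        // "Hurt" wins and is true enough to be worth a spell.
        CastHealingSpell();
    }
    // Otherwise "Healthy" wins: do nothing.
}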
To summarize, we learned how fuzzy logic is used in the real world, and how it can help illustrate vague concepts in a way binary systems cannot. We also learned to implement our own fuzzy logic controllers using the concepts of membership functions, degrees of membership, and fuzzy sets. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to build exciting and richer games by mastering advanced Artificial Intelligence concepts in Unity.

Read More
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence
Put your game face on! Unity 2018.1 is now available
How to create non-player Characters (NPC) with Unity 2018

Working with Unity Variables to script powerful Unity 2017 games

Amarabha Banerjee
23 May 2018
12 min read
In this tutorial, you will learn how to work with the different variables available in the Unity 2017 platform. We will show you how to use these variables through use cases in order to script powerful Unity games. This article is an excerpt from the book Learning C# by Developing Games with Unity 2017, written by Micael DaGraca and Greg Lukosek. Unity, the most popular game engine of our generation, is a preferred choice among game developers due to the flexibility it provides for coding and scripting games in C#. To understand and leverage the power of C# in your games, it is necessary to get a proper understanding of how C# coding works. We are going to show you exactly that in the sections below.

Writing C# statements properly

When you do normal writing, it's in the form of a sentence, with a period used to end the sentence. When you write a line of code, it's called a statement, with a semicolon used to end the statement. This is necessary because the compiler reads the code one statement at a time, and the semicolon tells it that the statement is over and that it can jump to the next one. (This happens so fast that it looks like the computer is reading all of them at the same time, but it isn't.) When we start learning how to code, forgetting this detail is very common, so don't forget to check for this error if the code isn't working. The code for a C# statement does not have to be on a single line, as shown in the following example:

public int number1 = 2;

The statement can be on several lines. Whitespace and carriage returns are ignored, so, if you really want to, you can write it as follows:

public
int
number1 =
2;

However, I do not recommend writing your code like this because it's terrible to read code that is formatted like the preceding code. Nevertheless, there will be times when you'll have to write long statements that are longer than one line. Unity won't care. It just needs to see the semicolon at the end.

Understanding component properties in Unity's Inspector

GameObjects have components that make them behave in a certain way. For instance, select Main Camera and look at the Inspector panel. One of the components is the camera. Without that component, it will cease being a camera. It would still be a GameObject in your scene, just no longer a functioning camera.

Variables become component properties

When we refer to components, we are basically referring to the available functions of a GameObject. For example, the human body has many functions, such as talking, moving, and observing. Now let's say that we want the human body to move faster. What is the function linked to that action? Movement. So in order to make our body move faster, we would need to create a script that had access to the movement component, and then we would use that to make the body move faster. Just like in real life, different GameObjects can also have different components; for example, the camera component can only be accessed from a camera. There are plenty of components that already exist, created by Unity's programmers, but we can also write our own components. This means that all the properties that we see in the Inspector are just variables of some type. They simply store data that will be used by some method.
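As a quick illustration, a component like the following sketch exposes its public variables as editable properties in the Inspector once it is attached to a GameObject. The class and field names here are my own examples, not the book's LearningScript:

using UnityEngine;

public class EnemyStats : MonoBehaviour
{
    public int health = 100;       // appears in the Inspector as "Health"
    public float moveSpeed = 3.5f; // appears as "Move Speed"
    public string displayName = "Goblin";
}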
Unity changes script and variable names slightly

When we create a script, one of the first things we need to do is give the script a name, and it's always good practice to use a name that identifies the content of the script. For example, if we are creating a script that is used to control the player movement, ideally that would be the name of the script. The best practice is to write playerMovement, where the first word is uncapitalized and the second one is capitalized. This is the standard way Unity developers name scripts and variables. Now let's say that we created a script named playerMovement. After assigning that script to a GameObject, we'll see in the Inspector panel that Unity adds a space to separate the words of the name: Player Movement. Unity makes this modification to variable names too, where, for example, a variable named number1 is shown as Number 1 and number2 as Number 2. Unity capitalizes the first letter as well. These changes improve readability in the Inspector.

Changing a property's value in the Inspector panel

There are two situations where you can modify a property value:

- During the Play mode
- During the development stage (not in the Play mode)

When you are in the Play mode, you will see that your changes take effect immediately, in real time. This is great when you're experimenting and want to see the results. Write down any changes that you want to keep, because when you stop the Play mode, any changes you made will be lost. When you are in the Development mode, changes that you make to the property values will be saved by Unity. This means that if you quit Unity and start it again, the changes will be retained. Of course, you won't see the effect of your changes until you click Play. The changes that you make to the property values in the Inspector panel do not modify your script. The only way your script can be changed is by you editing it in the script editor (MonoDevelop). The values shown in the Inspector panel override any values you might have assigned in your script. If you want to undo the changes you've made in the Inspector panel, you can reset the values to the default values assigned in your script. Click on the cog icon (the gear) on the far right of the component script, and then select Reset, as shown in the following screenshot.

Displaying public variables in the Inspector panel

You might still be wondering what the word public at the beginning of a variable statement means:

public int number1 = 2;

We mentioned it before. It means that the variable will be visible and accessible. It will be visible as a property in the Inspector panel so that you can manipulate the value stored in the variable. The word also means that it can be accessed from other scripts using the dot syntax.

Private variables

Not all variables need to be public. If there's no need for a variable to be changed in the Inspector panel or accessed from other scripts, it doesn't make sense to clutter the Inspector panel with needless properties. In the LearningScript, perform the following steps:

1. Change line 6 to this: private int number1 = 2;
2. Then change line 7 to the following: int number2 = 9;
3. Save the file.
4. In Unity, select Main Camera.

You will notice in the Inspector panel that both properties, Number 1 and Number 2, are gone:

Line 6: private int number1 = 2;

The preceding line explicitly states that the number1 variable has to be private. Therefore, the variable is no longer a property in the Inspector panel. It is now a private variable for storing data:

Line 7: int number2 = 9;

The number2 variable is no longer visible as a property either, but you didn't specify it as private. If you don't explicitly state whether a variable will be public or private, the variable will implicitly be private in C#. It is good coding practice to explicitly state whether a variable will be public or private. So now, when you click Play, the script works exactly as it did before. You just can't manipulate the values manually in the Inspector panel anymore.
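To summarize the visibility rules in one place, here is a short sketch of my own, mirroring the LearningScript changes described above:

using UnityEngine;

public class VisibilityDemo : MonoBehaviour
{
    public int number0 = 5;  // visible in the Inspector, accessible from other scripts
    private int number1 = 2; // explicitly private: hidden from the Inspector
    int number2 = 9;         // no modifier: implicitly private in C#

    private void Start()
    {
        // The private variables still work in code exactly as before.
        Debug.Log(number1 + number2); // prints 11
    }
}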
Naming Unity variables properly

As we explored previously, naming a script or variable is a very important step. It won't change the way the code runs, but it will help us stay organized, and, by using best practices, we avoid errors and save time trying to find the piece of code that isn't working. Always use meaningful names to store your variables. If you don't do that, six months down the line, you will be lost. I'm going to exaggerate here a bit to make a point. Let's say you name a variable as shown in this code:

public bool areRoadConditionsPerfect = true;

That's a descriptive name. In other words, you know what it means just by reading the variable. So 10 years from now, when you look at that name, you'll know exactly what it means. Now suppose that instead of areRoadConditionsPerfect, you had named this variable as shown in the following code:

public bool perfect = true;

Sure, you know what perfect is, but would you know that it refers to perfect road conditions? I know that right now you'll understand it because you just wrote it, but six months down the line, after writing hundreds of other scripts for all sorts of different projects, you'll look at this word and wonder what you meant. You'll have to read several lines of code you wrote to try to figure it out. You may look at the code and wonder who in their right mind would write such terrible code. So, take your time to write descriptive code that even a stranger can look at and understand. Believe me, in six months, or probably less, you will be that stranger. Using meaningful names for variables and methods is helpful not only for you but also for any other game developer who will be reading your code. Whether or not you work in a team, you should always write easy-to-read code.

Beginning variable names with lowercase

You should begin a variable name with a lowercase letter because it helps distinguish between a class name and a variable name in your code. There are some other guides in the C# documentation as well, but we don't need to worry about them at this stage. Component names (class names) begin with an uppercase letter. For example, it's easy to know that Transform is a class and transform is a variable. There are, of course, exceptions to this general rule, and every programmer has a preferred way of using lowercase, uppercase, and perhaps an underscore to begin a variable name. In the end, you will have to decide upon a naming convention that you like. If you read the Unity forums, you will notice that there are some heated debates on naming variables. In this book, I will show you my preferred way, but you can use whatever is more comfortable for you.

Using multiword variable names

Let's use the same example again, as follows:

public bool areRoadConditionsPerfect = true;

You can see that the variable name is actually four words squeezed together. Since variable names can be only one word, begin the first word with a lowercase letter and then capitalize the first letter of every additional word. This greatly helps create descriptive names that the viewer is still able to read. There's a term for this: it's called camel casing. I have already mentioned that, for public variables, Unity's Inspector will separate each word and capitalize the first one. Go ahead! Add the previous statement to the LearningScript and see what Unity does with it in the Inspector panel.
Declaring a variable and its type

Every variable that we want to use in a script must be declared in a statement. What does that mean? Well, before Unity can use a variable, we have to tell Unity about it first. Okay then, what are we supposed to tell Unity about the variable? There are only three absolute requirements for declaring a variable, and they are as follows:

- We have to specify the type of data that the variable can store
- We have to provide a name for the variable
- We have to end the declaration statement with a semicolon

The following is the syntax we use to declare a variable:

typeOfData nameOfTheVariable;

Let's use one of the LearningScript variables as an example; the following is how we declare a variable with the bare minimum requirements:

int number1;

This is what we have:

- Requirement #1 is the type of data that number1 can store, which in this case is an int, meaning an integer
- Requirement #2 is a name, which is number1
- Requirement #3 is the semicolon at the end

The second requirement, naming the variable, has already been discussed. The third requirement, ending a statement with a semicolon, has also been discussed. The first requirement, specifying the type of data, will be covered next. The following is what we know about this bare-minimum declaration as far as Unity is concerned:

- There's no public modifier, which means it's private by default
- It won't appear in the Inspector panel or be accessible from other scripts
- The value stored in number1 defaults to zero

A few example declarations are sketched below.
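Putting the three requirements together, some bare-minimum declarations might look like this; the variable names are my own examples:

int score;         // integer; private by default; defaults to 0
float moveSpeed;   // floating-point number
bool isGameOver;   // true/false value
string playerName; // text
public int lives;  // public: visible in the Inspector and to other scripts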
We discussed working with Unity 2017 variables and how you can start using them to create fun-filled games effectively. If you liked this article, be sure to go through the book Learning C# by Developing Games with Unity 2017 to create exciting games with C# and Unity 2017.

Read More
Unity 2D & 3D game kits simplify Unity game development for beginners
Build a Virtual Reality Solar System in Unity for Google Cardboard
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence

How to use arrays, lists, and dictionaries in Unity for 3D game development

Amarabha Banerjee
16 May 2018
14 min read
A key ingredient in scripting 3D games with Unity is the ability to work with C# to create arrays, lists, objects, and dictionaries within the Unity platform. In this tutorial, we help you get started with creating arrays, lists, and dictionaries effectively. This article is an excerpt from Learning C# by Developing Games with Unity 2017. An array, in the simplest terms, stores a sequential collection of values of the same type. We can use arrays to store lists of values in a single variable. Imagine we want to store a number of student names. Simple! Just create a few variables and name them student1, student2, and so on:

public string student1 = "Greg";
public string student2 = "Kate";
public string student3 = "Adam";
public string student4 = "Mia";

There's nothing wrong with this. We can print them and assign new values to them. The problem starts when you don't know how many names you will be storing. The word variable itself suggests a changing element. There is a much cleaner way of storing lists of data. Let's store the same names using a C# array variable type:

public string[] familyMembers = new string[] { "Greg", "Kate", "Adam", "Mia" };

As you can see, all the preceding values are stored in a single variable called familyMembers.

Declaring an array

To declare a C# array, you must first say what type of data will be stored in the array. As you can see in the preceding example, we are storing strings of characters. After the type, we have an open square bracket and then immediately a closed square bracket, []. This makes the variable an actual array. We also need to declare the size of the array. It simply means how many places there are in our variable to be accessed. The minimum code required to declare a variable looks similar to this:

public string[] myArrayName = new string[4];

The array size is set during assignment. As you have learned before, all code after the variable declaration and the equals sign is an assignment. To assign empty values to all places in the array, simply write the new keyword followed by the type, an open square bracket, a number describing the size of the array, and then a closed square bracket. If you feel confused, give yourself a bit more time. Then you will fully understand why arrays are helpful. Take a look at the following examples of arrays; don't worry about testing how they work yet:

string[] familyMembers = new string[] { "John", "Amanda", "Chris", "Amber" };
string[] carsInTheGarage = new string[] { "VWPassat", "BMW" };
int[] doorNumbersOnMyStreet = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
GameObject[] carsInTheScene = GameObject.FindGameObjectsWithTag("car");

As you can see, we can store different types of data as long as the elements in the array are of the same type. You are probably wondering why the last example, shown here, looks different:

GameObject[] carsInTheScene = GameObject.FindGameObjectsWithTag("car");

In fact, we are just declaring a new array variable to store a collection of GameObjects in the scene that use the "car" tag. Jump into the Unity scripting documentation and search for GameObject.FindGameObjectsWithTag: as you can see, GameObject.FindGameObjectsWithTag is a special built-in Unity function that takes a string parameter (tag) and returns an array of GameObjects using this tag.
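Here is a short sketch pulling these pieces together: declaring, filling, and reading an array. The class name and the assumption that a "car" tag exists in the scene are mine:

using UnityEngine;

public class ArrayExamples : MonoBehaviour
{
    // Declared with an initializer: the size (4) is inferred.
    public string[] familyMembers = new string[] { "Greg", "Kate", "Adam", "Mia" };

    // Declared with an explicit size: four empty slots.
    public string[] studentNames = new string[4];

    private void Start()
    {
        // Fill the empty array one slot at a time.
        studentNames[0] = "Greg";
        studentNames[1] = "Kate";
        studentNames[2] = "Adam";
        studentNames[3] = "Mia";

        // Arrays returned by Unity functions work the same way.
        GameObject[] cars = GameObject.FindGameObjectsWithTag("car");
        Debug.Log("Cars found: " + cars.Length);
    }
}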
Storing items in the List

Using a List instead of an array can be much easier to work with in a script. Look at some forum sites related to C# and Unity, and you'll discover that plenty of programmers simply don't use an array unless they have to; they prefer to use a List. It comes down to the developer's preference and the task at hand. Let's stick to Lists for now. Here are the basics of why a List is better and easier to use than an array:

- An array is of fixed size and unchangeable
- The size of a List is adjustable
- You can easily add and remove elements from a List
- To mimic adding a new element to an array, we would need to create a whole new array with the desired number of elements and then copy the old elements over

The first thing to understand is that a List has the ability to store any type of object, just like an array. Also, like an array, we must specify which type of object we want a particular List to store. This means that if you want a List of integers of the int type, then you can create a List that will store only the int type. Let's go back to the first array example and store the same data in a List. To use a List in C#, you need to add the following line at the beginning of your script:

using System.Collections.Generic;

As you can see, using Lists is slightly different from using arrays. Line 9 is a declaration and assignment of the familyMembers List. When declaring the List, you must state the type of objects you will be storing in it. Simply write the type between the < > characters. In this case, we are using string. As we are adding the actual elements later, in lines 14 to 17, instead of assigning elements in the declaration line, we need to assign an empty List to be stored temporarily in the familyMembers variable. Confused? If so, just take a look at the right-hand side of the equals sign on line 9. This is how you create a new instance of the List for a given type, string for this example:

new List<string>();

Lines 14 to 17 are very simple to understand. Each line adds an object at the end of the List, passing the string value in the parentheses. In various documentation, Lists of a type look like this: List<T>. Here, T stands for the type of data. This simply means that you can insert any type in place of T and the List will become a list of that specific type. From now on, we will be using it.

Common operations with Lists

List<T> is very easy to use. There is a huge list of different operations that you can perform with it. We have already spoken about adding an element at the end of a List. Very briefly, let's look at the common ones that we will possibly be using at later stages:

- Add: This adds an object at the end of List<T>.
- Remove: This removes the first occurrence of a specific object from List<T>.
- Clear: This removes all elements from List<T>.
- Contains: This determines whether an element is in List<T> or not. It is very useful for checking whether an element is stored in the list.
- Insert: This inserts an element into List<T> at the specified index.
- ToArray: This copies the elements of List<T> to a new array.

You don't need to understand all of these at this stage. All I want you to know is that there are many out-of-the-box operations that you can use. If you want to see them all, I encourage you to dive into the C# documentation and search for the List<T> class.
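The code screenshot referenced above (line 9 and lines 14 to 17) is not reproduced here; the following is a plausible reconstruction based on the description, so treat the exact line numbers as approximate:

using UnityEngine;
using System.Collections.Generic;

public class ListExample : MonoBehaviour
{
    // Line 9: declare the List and assign an empty List<string> to it.
    public List<string> familyMembers = new List<string>();

    private void Start()
    {
        // Lines 14 to 17: add each object at the end of the List.
        familyMembers.Add("Greg");
        familyMembers.Add("Kate");
        familyMembers.Add("Adam");
        familyMembers.Add("Mia");
    }
}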
List<T> versus arrays

Now you are probably thinking, "Okay, which one should I use?" There isn't a general rule for this. Arrays and List<T> can serve the same purpose. You can find a lot of additional information online to convince you to use one or the other. Arrays are generally faster. For what we are doing at this stage, we don't need to worry about processing speeds. Some time from now, however, you might need a bit more speed if your game slows down, so this is good to remember. List<T> offers great flexibility. You don't need to know the size of the list during declaration. There is a massive list of out-of-the-box operations that you can use with List, so it is my recommendation. Array is faster; List<T> is more flexible.

Retrieving the data from the Array or List<T>

Declaring and storing data in an array or list is very clear to us now. The next thing to learn is how to get stored elements back. To get a stored element from an array, write the array variable name followed by square brackets. You must write an int value within the brackets. That value is called an index. The index is simply a position in the array. So, to get the first element stored in the array, we will write the following code:

myArray[0];

Unity will return the data stored in the first place in myArray. It works exactly the same way as return-type methods. So, if myArray stores a string value at index 0, that string will be returned to the place where you are calling it. Complex? It's not. Let's show you by example. The index value starts at 0, not 1, so the first element in an array containing 10 elements will be accessible through an index value of 0, and the last one through a value of 9. Let's extend the familyMembers example: I want to talk about line 20. The rest of it is pretty obvious to you, isn't it? Line 20 creates a new variable called thirdFamilyMember and assigns the third value stored in the familyMembers list. We are using an index value of 2 instead of 3 because in programming, counting starts at 0. Try to memorize this; it is a common mistake made by beginners. Go ahead and click Play. You will see the name Adam being printed in the Unity Console. While accessing objects stored in an array, make sure you use an index value between zero and the size of the array minus one. In simpler words, we cannot access data at index 10 in an array that contains only four objects. Makes sense?

Checking the size

This is very common; we need to check the size of the array or list. There is a slight difference between a C# array and List<T>. To get the size as an integer value, we write the name of the variable, then a dot, and then Length for an array or Count for List<T>:

- arrayName.Length: This returns an integer value with the size of the array
- listName.Count: This returns an integer value with the size of the list

As we need to focus on one of the choices here and move on, from now on we will be using List<T>.
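A short sketch of safe retrieval, combining indexing with a size check; the example data is my own:

using UnityEngine;
using System.Collections.Generic;

public class SafeAccess : MonoBehaviour
{
    public List<string> familyMembers =
        new List<string> { "John", "Amanda", "Chris", "Amber" };

    private void Start()
    {
        int index = 2;

        // Valid index values run from 0 to Count - 1, so guard first.
        if (index >= 0 && index < familyMembers.Count)
        {
            Debug.Log(familyMembers[index]); // prints "Chris"
        }

        // For an array, the equivalent size property is Length, not Count.
    }
}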
ArrayList

We definitely know how to use Lists now. We also know how to declare a new List and add, remove, and retrieve elements. Moreover, you have learned that the data stored in List<T> must be of the same type across all elements. Let's throw a little curveball. ArrayList is basically List<T> without a specified type of data. This means that we can store whatever objects we want, and storing elements of different types is also possible. ArrayList is very flexible. Take a look at the following example to understand what ArrayList can look like. You have probably noticed that ArrayList also supports all the common operations, such as .Add(). Lines 12 to 15 add different elements to the ArrayList. The first two are of the integer type, the third is a string type, and the last one is a GameObject. All mixed types of elements in one variable! When using ArrayList, you might need to check what type of element is under a specific index to know how to treat it in code. Unity provides a very useful function that you can use on virtually any type of object. Its GetType() method returns the type of the object, not the value. We are using it in lines 18 and 19 to print the types of the second and third elements. Go ahead, write the preceding code, and click Play. You should get the following output in the Console window:

Dictionaries

When we talk about collection data, we need to mention Dictionaries. A Dictionary is similar to a List. However, instead of accessing a certain element by index value, we use a string called a key. The Dictionary that you will probably be using most often is called Hashtable. Feel free to dive into the C# documentation after reading this chapter to discover all the bits of this powerful class. Here are a few key properties of Hashtable:

- Hashtable can be resized dynamically, like List<T> and ArrayList
- Hashtable can store multiple data types at the same time, like ArrayList
- A public member Hashtable isn't visible in the Unity Inspector panel due to default inspector limitations

I want to make sure that you won't feel confused, so I will go straight to a simple example.

Accessing values

To access a specific key in the Hashtable, you must know the string key the value is stored under. Remember, the key is the first value in the brackets when adding an element to a Hashtable. Ideally, you should also know the type of data you are trying to access. In most cases, that would not be an issue. Take a look at this line:

Debug.Log((string)personalDetails["firstName"]);

We already know that Debug.Log serves to display a message on the Unity console, so what are we trying to display? A string value (one that can contain letters and numbers); then we specify where that value is stored. In this case, the information is stored under the Hashtable personalDetails, and the content that we want to display is firstName. Now take a look at the script once again and see if you can display the age; remember that the value we are trying to access here is a number, so we should use int instead of string. Similar to ArrayList, we can store mixed-type data in a Hashtable. Unity requires the developer to specify how an accessed element should be treated. To do this, we need to cast the element into a specific data type. The syntax is very simple. There are brackets with the data type inside, followed by the Hashtable variable name. Then, in square brackets, we enter the key string the value is stored under. Ufff, confusing! As you can see in the preceding line, we are casting to string (inside brackets). If we were to access another type of data, for example, an integer number, the syntax would look like this:

(int)personalDetails["age"];

I hope that this is clear now. If it isn't, why not search for more examples on the Unity forums?

How do I know what's inside my Hashtable?

A Hashtable, by default, isn't displayed in the Unity Inspector panel. You cannot simply look at the Inspector tab and preview all keys and values in your public member Hashtable. We can do this in code, however. You know how to access a value and cast it. What if you are trying to access a value under a key that isn't stored in the Hashtable? Unity will spit out a null reference error and your program is likely to crash. To check whether an element exists in the Hashtable, we can use the .Contains(object) method, passing the key parameter. This determines whether the Hashtable contains the key; if so, the code will continue, and otherwise it will stop there, preventing any error.
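The personalDetails example from the screenshots, reconstructed as a runnable sketch; the exact keys and values are assumptions based on the description:

using UnityEngine;
using System.Collections;

public class HashtableExample : MonoBehaviour
{
    private Hashtable personalDetails = new Hashtable();

    private void Start()
    {
        personalDetails.Add("firstName", "Greg");
        personalDetails.Add("age", 40);

        // Cast each element back to the type it was stored as.
        Debug.Log((string)personalDetails["firstName"]);
        Debug.Log((int)personalDetails["age"]);

        // Guard against missing keys to avoid null reference errors.
        if (personalDetails.Contains("lastName"))
        {
            Debug.Log((string)personalDetails["lastName"]);
        }
    }
}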
We discussed how to use C# to create arrays, lists, dictionaries, and objects in Unity. The code samples and examples will help you implement these in your projects. Do check out the book Learning C# by Developing Games with Unity 2017 to develop your first interactive 2D and 3D platform game.

Read More
Unity 2D & 3D game kits simplify Unity game development for beginners
How to create non-player Characters (NPC) with Unity 2018
Game Engine Wars: Unity vs Unreal Engine

Your first Unity project

Packt
15 Jun 2017
11 min read
In this article by Tommaso Lintrami, the author of the book Unity 2017 Game Development Essentials - Third Edition, we will see that when starting out in game development, one of the best ways to learn the various parts of the discipline is to prototype your idea. Unity excels in assisting you with this, with its visual scene editor and public member variables that form settings in the Inspector, and it will help you get to grips with working in the Unity editor. In this article, you will learn about:

- Creating a new project in Unity
- Working with GameObjects in the Scene View and Hierarchy

(For more resources related to this topic, see here.)

Unity comes in two main forms: a standard, free download and a paid Pro developer license. We'll stick to using features that users of the standard free edition will have access to. If you're launching Unity for the very first time, you'll be presented with a Unity demonstration project. While this is useful to look into the best practices for the development of high-end projects, if you're starting out, looking over some of the assets and scripting may feel daunting, so we'll leave this behind and start from scratch! Take a look at the following steps for setting up your Unity project:

1. In Unity, go to File | New Project and you will be presented with the Project Wizard (the Mac version is shown in the screenshot). From here, select the NEW tab and the 3D type of project. Be aware that if at any time you wish to launch Unity and be taken directly to the Project Wizard, simply launch the Unity Editor application and immediately hold the Alt key (Mac and Windows). This can be set as the default behavior for launch in the Unity preferences.
2. Click the Set button and choose where you would like to save your new Unity project folder on your hard drive. The new project has been named UGDE after this book, and stored on my desktop for easy access.
3. The Project Wizard also offers the ability to import many asset packages into your new project, which are provided by Unity Technologies free for use in your game development. Comprising scripts, ready-made objects, and other artwork, these packages are a useful way to get started with various types of new project. You can also import these packages at any time from the Assets menu within Unity, by selecting Import Package and choosing from the list of available packages. You can also import a package from anywhere on your hard drive by choosing the Custom Package option here. This import method is also used to share assets with others, and when receiving assets you have downloaded through the Asset Store; see Window | Asset Store to view this part of Unity later. From the list of packages to be imported, select the following (as shown in the previous image):

- Characters
- Cameras
- Effects
- Terrain Assets
- Environment

4. When you are happy with your selection, simply choose Create Project at the bottom of this dialog window. Unity will then create your new project and you will see progress bars representing the import of the selected packages.

A basic prototyping environment

To create a simple environment to prototype some game mechanics, we'll begin with a basic series of objects with which we can introduce gameplay that allows the player to aim and shoot at a wall of primitive cubes. When complete, your prototyping environment will feature a floor comprised of a cube primitive, a main camera through which to view the 3D world, and a point light set up to highlight the area where our gameplay will be introduced. It will look something like the following screenshot:
Setting the scene

As all new scenes come with a Main Camera object by default, we'll begin by adding a floor for our prototyping environment. On the Hierarchy panel, click the Create button, and from the drop-down menu, choose Cube. The items listed in this drop-down menu can also be found in the GameObject | Create Other top menu. You will now see an object in the Hierarchy panel called Cube. Select it and press Return (Mac)/F2 (Windows), or double-click the object name slowly (both platforms), to rename this object; type in Floor and press Return (both platforms) to confirm the change. For consistency's sake, we will begin our creation at world zero, the center of the 3D environment we are working in. To ensure that the floor cube you just added is at this position, make sure it is still selected in the Hierarchy panel and then check the Transform component on the Inspector panel, ensuring that the Position values for X, Y, and Z are all 0. If not, change them all to zero, either by typing them in or by clicking the cog icon to the right of the component and selecting Reset Position from the pop-out menu. Next, we'll turn the cube into a floor by stretching it out in the X and Z axes. Enter a value of 100 into the X and Z values under Scale in the Transform component, leaving Y at a value of 1.

Adding simple lighting

Now we will highlight part of our prototyping floor by adding a point light. Select the Create button on the Hierarchy panel (or go to GameObject | Create Other) and choose Point Light. Position the new point light at (0, 20, 0) using the Position values in the Transform component, so that it is 20 units above the floor. You will notice that this means the floor is out of range of the light, so expand the range by dragging the yellow dot handles that intersect the outline of the point light in the Scene View, until the value for Range shown in the Light component in the Inspector reaches something around 40 and the light is creating a lit part of the floor object. Bear in mind that most components and visual editing tools in the Scene View are inextricably linked, so altering values such as Range in the Inspector Light component will update the visual display in the Scene View as you type, and stay constant as soon as you press Return to confirm the values entered.
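For reference, the same floor-and-light setup can also be reproduced from code. This is an optional sketch of the editor steps above, not part of the book's project:

using UnityEngine;

public class PrototypeSetup : MonoBehaviour
{
    private void Start()
    {
        // The floor: a cube at world zero, stretched in X and Z.
        GameObject floor = GameObject.CreatePrimitive(PrimitiveType.Cube);
        floor.name = "Floor";
        floor.transform.position = Vector3.zero;
        floor.transform.localScale = new Vector3(100f, 1f, 100f);

        // The point light, 20 units above the floor, with a range of 40.
        GameObject lightGO = new GameObject("Point Light");
        lightGO.transform.position = new Vector3(0f, 20f, 0f);
        Light pointLight = lightGO.AddComponent<Light>();
        pointLight.type = LightType.Point;
        pointLight.range = 40f;
    }
}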
Another brick in the wall

Now let's make a wall of cubes that we can launch a projectile at. We'll do this by creating a single master brick, adding components as necessary, and then duplicating it until our wall is complete.

Building the master brick

In order to create a template for all of our bricks, we'll start by creating a master object, something to create clones of. This is done as follows:

1. Click the Create button at the top of the Hierarchy, and select Cube. Position this at (0, 1, 0) using the Position values in the Transform component on the Inspector. Then, focus your view on this object by ensuring it is still selected in the Hierarchy, hovering your cursor over the Scene View, and pressing F.
2. Add physics to your Cube object by choosing Component | Physics | Rigidbody from the top menu. This means that your object is now a Rigidbody: it has mass and gravity, and is affected by other objects using the physics engine, for realistic reactions in the 3D world.
3. Finally, we'll color this object by creating a material. Materials are a way of applying color and imagery to our 3D geometry. To make a new one, go to the Create button on the Project panel and choose Material from the drop-down menu. Press Return (Mac) or F2 (Windows) to rename this asset to Red instead of the default name, New Material. You can also right-click in the Materials project folder and choose Create | Material, or use the editor main menu: Assets | Create | Material.
4. With this material selected, the Inspector shows its properties. Click on the color block to the right of Main Color [see image label 1] to open the Color Picker [see image label 2]. This will differ in appearance depending upon whether you are using Mac or Windows. Simply choose a shade of red, and then close the window. The Main Color block should now have been updated.
5. To apply this material, drag it from the Project panel and drop it onto either the cube as seen in the Scene View, or onto the name of the object in the Hierarchy. The material is then applied to the Mesh Renderer component of this object and immediately seen following the other components of the object in the Inspector. Most importantly, your cube should now be red! Adjusting settings using the preview of this material on any object will edit the original asset, as this preview is simply a link to the asset itself, not a newly editable instance.

Now that our cube has a color and physics applied through the Rigidbody component, it is ready to be duplicated and act as one brick in a wall of many. However, before we do that, let's have a quick look at the physics in action. With the cube still selected, set the Y Position value to 15 and the X Rotation value to 40 in the Transform component in the Inspector. Press Play at the top of the Unity interface and you should see the cube fall and then settle, having fallen at an angle. The shortcut for Play is Ctrl + P for Windows and Command + P for Mac. Press Play again to stop testing. Do not press Pause, as this will only temporarily halt the test, and changes made thereafter to the scene will not be saved. Set the Y Position value for the cube back to 1, and set the X Rotation back to 0. Now that we know our brick behaves correctly, let's start creating a row of bricks to form our wall.

And snap! It's a row

To help you position objects, Unity allows you to snap to specific increments when dragging; these increments can be redefined by going to Edit | Snap Settings. To use snapping, hold down Command (Mac) or Ctrl (Windows) when using the Translate tool (W) to move objects in the Scene View. So, in order to start building the wall, duplicate the cube brick we already have using the shortcut Command + D (Mac) or Ctrl + D (PC), then drag the red axis handle while holding the snapping key. This will snap one unit at a time by default, so snap-move your cube one unit in the X axis so that it sits next to the original cube, shown as follows. Repeat this procedure of duplication and snap-dragging until you have a row of 10 cubes in a line. This is the first row of bricks, and to simplify building the rest of the bricks, we will now group this row under an empty object and then duplicate the parent empty object.

Vertex snapping

The basic snapping technique used here works well as our cubes are a generic scale of 1, but when positioning more detailed shaped objects, you should use vertex snapping instead. To do this, ensure that the Translate tool is selected and hold down V on the keyboard. Now hover your cursor over a vertex point on your selected object and drag to any other vertex of another object to snap to it.
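The master brick can likewise be sketched in code. This mirrors the steps above, with one difference: the material is created at runtime instead of as a saved asset, and Shader.Find("Standard") assumes the built-in render pipeline:

using UnityEngine;

public class MasterBrick : MonoBehaviour
{
    private void Start()
    {
        GameObject brick = GameObject.CreatePrimitive(PrimitiveType.Cube);
        brick.transform.position = new Vector3(0f, 1f, 0f);

        // Component | Physics | Rigidbody
        brick.AddComponent<Rigidbody>();

        // A runtime stand-in for the "Red" material asset.
        Material red = new Material(Shader.Find("Standard"));
        red.color = Color.red;
        brick.GetComponent<Renderer>().material = red;
    }
}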
Grouping and duplicating with empty objects

Create an empty object by choosing GameObject | Create Empty from the top menu, then position it at (4.5, 0.5, -1) using the Transform component in the Inspector. Rename it from the default name GameObject to CubeHolder. Now select all of the cube objects in the Hierarchy by selecting the top one, holding the Shift key, and then selecting the last. Drag this list of cubes in the Hierarchy onto the empty object named CubeHolder in order to make it their parent object. The Hierarchy should now look like this: you'll notice that the parent empty object now has an arrow to the left of its object title, meaning you can expand and collapse it. To save space in the Hierarchy, click the arrow now to hide all of the child objects, and then re-select the CubeHolder. Now that we have a complete row made and parented, we can simply duplicate the parent object and use snap-dragging to lift a whole new row up in the Y axis. Use the duplicate shortcut (Command/Ctrl + D) as before, then select the Translate tool (W) and use the snap-drag technique (hold Command on Mac, Ctrl on PC) outlined earlier to lift by 1 unit in the Y axis by pulling the green axis handle. Repeat this procedure to create eight rows of bricks in all, one on top of the other. It should look something like the following screenshot. Note that in the image all CubeHolder row objects are selected in the Hierarchy.
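Grouping can also be done from code with Transform.SetParent. A minimal sketch, with object names following the example above:

using UnityEngine;

public class RowBuilder : MonoBehaviour
{
    private void Start()
    {
        // The empty parent object.
        GameObject cubeHolder = new GameObject("CubeHolder");
        cubeHolder.transform.position = new Vector3(4.5f, 0.5f, -1f);

        // Build one row of ten bricks, parented under CubeHolder.
        for (int i = 0; i < 10; i++)
        {
            GameObject brick = GameObject.CreatePrimitive(PrimitiveType.Cube);
            brick.transform.position = new Vector3(i, 1f, 0f);
            brick.transform.SetParent(cubeHolder.transform);
        }
    }
}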
Summary

In this article, you should have become familiar with the basics of using the Unity interface and working with GameObjects.

Further resources on this subject:
Components in Unity
Component-based approach of Unity
Using Specular in Unity

Game objective

Packt
04 Jan 2017
5 min read
In this article by Alan Thorn, author of the book Mastering Unity 5.x, we will see what a game objective is, and look at asset preparation. Every game (except for experimental and experiential games) needs an objective for the player; something they must strive to do, not just within specific levels, but across the game overall. This objective is important not just for the player (to make the game fun), but also for the developer, for deciding how challenge, diversity, and interest can be added to the mix. Before starting development, have a clearly stated and identified objective in mind. Challenges are introduced primarily as obstacles to the objective, and bonuses are 'things' that facilitate the objective; they make it possible and easier to achieve. For Dead Keys, the primary objective is to survive and reach the level end. Zombies threaten that objective by attacking and damaging the player, and bonuses exist along the way to make things more interesting. I highly recommend using project management and team collaboration tools to chart, document, and time-track tasks within your project. And you can do this for free, too. Some online tools for this include Trello (https://trello.com), Bitrix24 (https://www.bitrix24.com), Basecamp (https://basecamp.com), Freedcamp (https://freedcamp.com), Unfuddle (https://unfuddle.com), Bitbucket (https://bitbucket.org), Microsoft Visual Studio Team Services (https://www.visualstudio.com/en-us/products/visual-studio-team-services-vs.aspx), and Concord Contract Management (http://www.concordnow.com).

Asset preparation

When you've reached a clear decision on the initial concept and design, you're ready to prototype! This means building a Unity project demonstrating the core mechanic and game rules in action, as a playable sample. After this, you typically refine the design more, and repeat prototyping until arriving at an artefact you want to pursue. From here, the art team must produce assets (meshes and textures) based on concept art, the game design, and photographic references. When producing meshes and textures for Unity, some important guidelines should be followed to achieve optimal graphical performance in-game. This is about structuring and building assets in a smart way, so that they export cleanly and easily from their originating software, and can then be imported with minimal fuss, performing as well as they can at run-time. Let's see some of these guidelines for meshes and textures.

Meshes - work only with good topology

Good mesh topology consists of all polygons having only three or four sides in the model (not more). Additionally, edge loops should flow in an ordered, regular way along the contours of the model, defining its shape and form.

Clean Topology

Unity automatically converts any NGons (polygons with more than four sides) into triangles on import, if the mesh has any. But it's better to build meshes without NGons than to rely on Unity's automated methods. Not only does this cultivate good habits at the modelling phase, but it also avoids any automatic and unpredictable retopology of the mesh, which affects how it's shaded and animated.

Meshes - minimize polygon count

Every polygon in a mesh entails a rendering performance hit, insofar as a GPU needs time to process and render each polygon. Consequently, it's sensible to minimize the number of polygons in a mesh, even though modern graphics hardware is adept at working with many polygons.
It's good practice to minimize polygons where possible, to the degree that it doesn't detract from your central artistic vision and style.

High-Poly Meshes! (Try reducing polygons where possible)

There are many techniques available for reducing polygon counts. Most 3D applications (such as 3ds Max, Maya, and Blender) offer automated tools that decimate polygons in a mesh while retaining its basic shape and outline. However, these methods frequently make a mess of the topology, leaving you with faces and edge loops leading in all directions. Even so, this can still be useful for reducing polygons in static meshes (meshes that never animate), such as statues, houses, or chairs. However, it's typically bad for animated meshes, where topology is especially important.

Reducing Mesh Polygons with Automated Methods can produce messy topology!

If you want to know the total vertex and face count of a mesh, you can use your 3D software's statistics. Blender, Maya, 3ds Max, and most 3D software let you see the vertex and face counts of selected meshes directly from the viewport. However, this information should only be considered a rough guide! This is because, after importing a mesh into Unity, the vertex count frequently turns out higher than expected. There are many reasons for this, explained in more depth online here: http://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html. In short, use the Unity vertex count as the final word on the actual vertex count of your mesh. To view the vertex count for an imported mesh in Unity, click the right-arrow on the mesh thumbnail in the Project panel. This shows the internal mesh asset. Select this asset, and then view the vertex count from the Preview pane in the Object Inspector.

Viewing the Vertex and Face Count for meshes in Unity
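The same count can also be read from a script; this small logging component is my own sketch, not from the book:

using UnityEngine;

// Logs the vertex count Unity actually uses for this object's mesh.
public class VertexCountLogger : MonoBehaviour
{
    private void Start()
    {
        MeshFilter filter = GetComponent<MeshFilter>();
        if (filter != null && filter.sharedMesh != null)
        {
            Debug.Log(name + " vertex count: " + filter.sharedMesh.vertexCount);
        }
    }
}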
Summary

In this article, we've learned what game objectives are and about asset preparation.

Customizing the Player Character

Packt
13 Oct 2016
18 min read
One of the key features of an RPG is the ability to customize your player character. In this article by Vahe Karamian, author of the book Building an RPG with Unity 5.x, we will take a look at how we can provide a means to achieve this. (For more resources related to this topic, see here.) Once again, the approach and concept are universal, but the actual implementation might be a little different based on your model structure. Create a new scene and name it Character Customization. Create a Cube prefab and set it to the origin. Change the Scale of the cube to <5, 0.1, 5>; you can also change the name of the GameObject to Base. This will be the platform that our character model stands on while the player customizes his or her character before gameplay. Drag and drop the fbx file representing your character model into the Scene View. The next few steps will depend entirely on your model hierarchy and structure as designed by the modeller. To illustrate the point, I have placed the same model in the scene twice. The one on the left is the model that has been configured to display only the basics; the model on the right is the model in its original state, as shown in the figure below. Notice that this particular model has everything attached: the different types of weapons, shoes, helmets, and armour. The instantiated prefab on the left-hand side has all of the extras turned off in the GameObject's hierarchy. Here is how it looks in the Hierarchy View: the model has a very extensive hierarchy in its structure; the figure above is a small snippet to demonstrate that you will need to navigate the structure and manually identify and enable or disable the mesh representing a particular part of the model.

Customizable Parts

Based on my model, I can customize a few things on my 3D model: the shoulder pads, the body type, the weapons and armor it has, the helmet and shoes, and finally the skin texture, to give it different looks. Let's get a listing of all the different customizable items we have for our character:

- Shoulder Shields: there are four types
- Body Type: there are three body types; skinny, buff, and chubby
- Armor: knee pad, leg plate
- Shields: there are two types of shields
- Boots: there are two types of boots
- Helmet: there are four types of helmets
- Weapons: there are 13 different types of weapons
- Skins: there are 13 different types of skins

User Interface

Now that we know what our options are for customizing our player character, we can start thinking about the User Interface (UI) that will be used to enable the customization of the character. To design our UI, we will need to create a Canvas GameObject; this is done by right-clicking in the Hierarchy View and selecting Create | UI | Canvas. This will place a Canvas GameObject and an EventSystem GameObject in the Hierarchy View. It is assumed that you already know how to create a UI in Unity. I am going to use a Panel to group the customizable items. For the moment, I will be using checkboxes for some items and scroll bars for the weapons and skin texture. The following figure illustrates how my UI for customization looks. These UI elements will need to be integrated with event handlers that will perform the necessary actions for enabling or disabling certain parts of the character model. For instance, using the UI I can select Shoulder Pad 4 and the Buff Body Type, move the scroll bar until the Hammer weapon shows up, select the second Helmet checkbox, and select Shield 1 and Boot 2; my character will then look like the figure below.
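Each UI control is wired to an event handler, either through the Inspector (for example, a Toggle's On Value Changed event) or from code. Here is a minimal sketch of the code route, with illustrative names of my own:

using UnityEngine;
using UnityEngine.UI;

public class CustomizationUI : MonoBehaviour
{
    public Toggle shoulderPadToggle;   // assigned in the Inspector
    public GameObject shoulderPadMesh; // the mesh to enable or disable

    private void Start()
    {
        // Enable or disable the mesh whenever the toggle changes.
        shoulderPadToggle.onValueChanged.AddListener(
            isOn => shoulderPadMesh.SetActive(isOn));
    }
}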
For instance, using the UI I can select Shoulder Pad 4 and the Buff body type, move the scroll bar until the Hammer weapon shows up, select the second Helmet checkbox, and select Shield 1 and Boot 2; my character will then look like the figure below.

We need a way to refer to each one of the meshes representing the different types of customizable objects on the model. This will be done through a C# script. The script will need to keep track of all the parts we are going to be managing for customization.

Some models will not have the extra meshes attached. You can always create empty GameObjects at a particular location on the model and dynamically instantiate the prefab representing your custom object at the given point. This can also be done for our current model; for instance, if we have a special space weapon that somehow gets dropped by the aliens in the game world, we can attach the weapon to our model through C# code. The important thing is to understand the concept, and the rest is up to you!

The Code for Character Customization

Things don't happen automatically, so we need to create some C# code that handles the customization of our character model. The script we create here will handle the UI events that drive the enabling and disabling of different parts of the model mesh. Create a new C# script and call it CharacterCustomization.cs. This script will be attached to the Base GameObject in the scene. Here is a listing of the script:

using UnityEngine;
using UnityEngine.UI;
using System.Collections;
using UnityEngine.SceneManagement;

public class CharacterCustomization : MonoBehaviour
{
    public GameObject PLAYER_CHARACTER;
    public Material[] PLAYER_SKIN;

    // Clothing and body meshes
    public GameObject CLOTH_01LOD0, CLOTH_01LOD0_SKIN;
    public GameObject CLOTH_02LOD0, CLOTH_02LOD0_SKIN;
    public GameObject CLOTH_03LOD0, CLOTH_03LOD0_SKIN, CLOTH_03LOD0_FAT;
    public GameObject BELT_LOD0, SKN_LOD0, FAT_LOD0, RGL_LOD0, HAIR_LOD0, BOW_LOD0;

    // Head equipment
    public GameObject GLADIATOR_01LOD0;
    public GameObject HELMET_01LOD0, HELMET_02LOD0, HELMET_03LOD0, HELMET_04LOD0;

    // Shoulder pads - right arm / left arm
    public GameObject SHOULDER_PAD_R_01LOD0, SHOULDER_PAD_R_02LOD0, SHOULDER_PAD_R_03LOD0, SHOULDER_PAD_R_04LOD0;
    public GameObject SHOULDER_PAD_L_01LOD0, SHOULDER_PAD_L_02LOD0, SHOULDER_PAD_L_03LOD0, SHOULDER_PAD_L_04LOD0;

    // Forearm plates - right / left
    public GameObject ARM_PLATE_R_1LOD0, ARM_PLATE_R_2LOD0, ARM_PLATE_L_1LOD0, ARM_PLATE_L_2LOD0;

    // Player character weapons
    public GameObject AXE_01LOD0, AXE_02LOD0, CLUB_01LOD0, CLUB_02LOD0, FALCHION_LOD0,
        GLADIUS_LOD0, MACE_LOD0, MAUL_LOD0, SCIMITAR_LOD0, SPEAR_LOD0,
        SWORD_BASTARD_LOD0, SWORD_BOARD_01LOD0, SWORD_SHORT_LOD0;

    // Player character defense weapons
    public GameObject SHIELD_01LOD0, SHIELD_02LOD0, QUIVER_LOD0, BOW_01_LOD0;

    // Player character calf - right / left
    public GameObject KNEE_PAD_R_LOD0, LEG_PLATE_R_LOD0, KNEE_PAD_L_LOD0, LEG_PLATE_L_LOD0;
    public GameObject BOOT_01LOD0, BOOT_02LOD0;

    public bool ROTATE_MODEL = false;

    void Update()
    {
        if (Input.GetKeyUp(KeyCode.R)) { ROTATE_MODEL = !ROTATE_MODEL; }
        if (ROTATE_MODEL)
        {
            PLAYER_CHARACTER.transform.Rotate(new Vector3(0, 1, 0), 33.0f * Time.deltaTime);
        }
        if (Input.GetKeyUp(KeyCode.L)) { Debug.Log(PlayerPrefs.GetString("NAME")); }
    }

    // Enables the selected shoulder pads on both arms, disables the rest,
    // and records the choice in PlayerPrefs.
    public void SetShoulderPad(Toggle id)
    {
        GameObject[] right = { SHOULDER_PAD_R_01LOD0, SHOULDER_PAD_R_02LOD0, SHOULDER_PAD_R_03LOD0, SHOULDER_PAD_R_04LOD0 };
        GameObject[] left = { SHOULDER_PAD_L_01LOD0, SHOULDER_PAD_L_02LOD0, SHOULDER_PAD_L_03LOD0, SHOULDER_PAD_L_04LOD0 };
        string[] keys = { "SP-01", "SP-02", "SP-03", "SP-04" };
        for (int i = 0; i < keys.Length; i++)
        {
            bool selected = keys[i] == id.name;
            right[i].SetActive(selected && id.isOn);
            left[i].SetActive(selected && id.isOn);
            PlayerPrefs.SetInt(keys[i], selected ? 1 : 0);
        }
    }

    public void SetBodyType(Toggle id)
    {
        switch (id.name)
        {
            case "BT-01": RGL_LOD0.SetActive(id.isOn); FAT_LOD0.SetActive(false); break;
            case "BT-02": RGL_LOD0.SetActive(false); FAT_LOD0.SetActive(id.isOn); break;
        }
    }

    public void SetKneePad(Toggle id)
    {
        KNEE_PAD_R_LOD0.SetActive(id.isOn);
        KNEE_PAD_L_LOD0.SetActive(id.isOn);
    }

    public void SetLegPlate(Toggle id)
    {
        LEG_PLATE_R_LOD0.SetActive(id.isOn);
        LEG_PLATE_L_LOD0.SetActive(id.isOn);
    }

    // The slider value selects a weapon: 0 means unarmed, and 1 to 13 map to
    // the weapons below in order. Exactly one weapon (or none) is active at a time.
    public void SetWeaponType(Slider id)
    {
        GameObject[] weapons = { AXE_01LOD0, AXE_02LOD0, CLUB_01LOD0, CLUB_02LOD0,
            FALCHION_LOD0, GLADIUS_LOD0, MACE_LOD0, MAUL_LOD0, SCIMITAR_LOD0,
            SPEAR_LOD0, SWORD_BASTARD_LOD0, SWORD_BOARD_01LOD0, SWORD_SHORT_LOD0 };
        int selection = System.Convert.ToInt32(id.value);
        for (int i = 0; i < weapons.Length; i++)
        {
            weapons[i].SetActive(i == selection - 1);
        }
    }

    public void SetHelmetType(Toggle id)
    {
        GameObject[] helmets = { HELMET_01LOD0, HELMET_02LOD0, HELMET_03LOD0, HELMET_04LOD0 };
        string[] names = { "HL-01", "HL-02", "HL-03", "HL-04" };
        for (int i = 0; i < names.Length; i++)
        {
            helmets[i].SetActive(names[i] == id.name && id.isOn);
        }
    }

    public void SetShieldType(Toggle id)
    {
        switch (id.name)
        {
            case "SL-01": SHIELD_01LOD0.SetActive(id.isOn); SHIELD_02LOD0.SetActive(false); break;
            case "SL-02": SHIELD_01LOD0.SetActive(false); SHIELD_02LOD0.SetActive(id.isOn); break;
        }
    }

    // Applies the selected skin material to all body meshes.
    public void SetSkinType(Slider id)
    {
        Material skin = PLAYER_SKIN[System.Convert.ToInt32(id.value)];
        SKN_LOD0.GetComponent<Renderer>().material = skin;
        FAT_LOD0.GetComponent<Renderer>().material = skin;
        RGL_LOD0.GetComponent<Renderer>().material = skin;
    }

    public void SetBootType(Toggle id)
    {
        switch (id.name)
        {
            case "BT-01": BOOT_01LOD0.SetActive(id.isOn); BOOT_02LOD0.SetActive(false); break;
            case "BT-02": BOOT_01LOD0.SetActive(false); BOOT_02LOD0.SetActive(id.isOn); break;
        }
    }
}

The script is long but straightforward. At the top we have defined all of the variables that reference the different meshes in our character model. All variables are of the GameObject type, with the exception of the PLAYER_SKIN variable, which is an array of the Material data type. The array is used to store the different types of textures created for the character model.

There are a few functions defined that are called by the UI event handlers:

SetShoulderPad(Toggle id)
SetBodyType(Toggle id)
SetKneePad(Toggle id)
SetLegPlate(Toggle id)
SetWeaponType(Slider id)
SetHelmetType(Toggle id)
SetShieldType(Toggle id)
SetSkinType(Slider id)

All of the functions take a parameter that identifies which specific mesh they should enable or disable.

A big note here! You can also use the system we just built to create all of the different variations of your Non-Player Character (NPC) models and store them as prefabs! This will save you a great deal of time and effort in creating the characters representing different barbarians!

Preserving Our Character State

Now that we have spent the time to customize our character, we need to preserve it and use it in our game. In Unity, there is a function called DontDestroyOnLoad(). This is a great function that can be utilized at this time. What does it do? It keeps the specified GameObject in memory going from one scene to the next. We can use this mechanism for now; eventually, though, you will want to create a system that can save and load your user data (a small sketch of reading the saved PlayerPrefs values back follows the listing below).

Go ahead and create a new C# script and call it DoNotDestroy.cs. This script is going to be very simple. Here is the listing:

using UnityEngine;
using System.Collections;

public class DoNotDestroy : MonoBehaviour
{
    // Keep this GameObject alive across scene loads.
    void Start()
    {
        DontDestroyOnLoad(this);
    }
}

After you create the script, go ahead and attach it to your character model prefab in the scene.
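Since the customization functions above already record the player's choices with PlayerPrefs.SetInt(), restoring a selection when the next scene loads is mostly a matter of reading those keys back. The following is a minimal sketch, assuming the same "SP-01" to "SP-04" keys written by SetShoulderPad(); the helper name RestoreShoulderPads is hypothetical, it could be called from Start() of the CharacterCustomization script, and the remaining keys follow the same pattern:

// Hypothetical helper: re-apply the saved shoulder pad choice on load.
void RestoreShoulderPads()
{
    GameObject[] right = { SHOULDER_PAD_R_01LOD0, SHOULDER_PAD_R_02LOD0, SHOULDER_PAD_R_03LOD0, SHOULDER_PAD_R_04LOD0 };
    GameObject[] left = { SHOULDER_PAD_L_01LOD0, SHOULDER_PAD_L_02LOD0, SHOULDER_PAD_L_03LOD0, SHOULDER_PAD_L_04LOD0 };
    string[] keys = { "SP-01", "SP-02", "SP-03", "SP-04" };
    for (int i = 0; i < keys.Length; i++)
    {
        bool saved = PlayerPrefs.GetInt(keys[i], 0) == 1; // 0 is the default if the key was never written
        right[i].SetActive(saved);
        left[i].SetActive(saved);
    }
}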
Not bad! Let's do a quick recap of what we have done so far.

Recap

By now you should have three scenes that are functional: the scene that represents the main menu, the scene that represents our initial level, and the scene we just created for character customization. Here is the flow of our game thus far: we start the game, see the main menu, select the Start Game button to enter the character customization scene, do our customization, and when we click the Save button we load level 1. For this to work, we have created the following C# scripts:

GameMaster.cs: used as the main script to keep track of our game state
CharacterCustomization.cs: used exclusively for customizing our character
DoNotDestroy.cs: used to save the state of a given object
CharacterController.cs: used to control the motion of our character
IKHandle.cs: used to implement inverse kinematics for the foot

When you combine all of this together, you have a good framework and flow that can be extended and improved as we go along.

Summary

We covered some very important topics and concepts in this article that can be used and enhanced for your games. We started the article by looking into how to customize your player character; the concepts you take away from the article can be applied to a wide variety of scenarios. We looked at how to understand the structure of your character model so that you can better determine the customization methods for the different types of weapons, clothing, armor, shields, and so on. We then looked at how to create a user interface to help us customize our player character during gameplay. We also learned that the tool we developed can be used to quickly create several different customized character models and store them as prefabs for later use, which is a great time saver! Finally, we learned how to preserve the state of our player character after customization for gameplay. You should now have an idea of how to approach your project.

Resources for Article: Further resources on this subject: Animations Sprites [article] Development Tricks with Unreal Engine 4 [article] The Game World [article]
article-image-introduction-aws-lumberyard-game-development

An Introduction to AWS Lumberyard Game Development

Packt
03 Oct 2016
15 min read
In this article by Dr. Edward Lavieri, author of the book Learning AWS Lumberyard Game Development, you will learn what the Lumberyard game engine means for game developers and the game development industry. (For more resources related to this topic, see here.)

What is Lumberyard?

Lumberyard is a free 3D game engine that has, in addition to typical 3D game engine capabilities, an impressive set of unique qualities. Most impressively, Lumberyard integrates with Amazon Web Services (AWS) for cloud computing and storage. Lumberyard, also referred to as Amazon Lumberyard, integrates with Twitch to facilitate in-game engagement with fans. Another component that makes Lumberyard unique among game engines is its tremendous support for multiplayer games. The use of Amazon GameLift empowers developers to instantiate multiplayer game sessions with relative ease.

Lumberyard is presented as a game engine intended for creating cross-platform AAA games. There are two important components to that statement. First, cross-platform refers, in the case of Lumberyard, to the ability to develop games for PC/Windows, PlayStation 4, and Xbox One, with additional support for Mac OS, iOS, and Android devices. The second component is AAA games. A triple-A (AAA) game is like a top-grossing movie: one that had a tremendous budget, was extensively advertised, and was wildly successful. If a console game (for Xbox One and/or PlayStation 4) is advertised on national television, it is a sign the title is an AAA game. Now that this AAA game engine is available for free, it is likely that more than just AAA games will be developed using Lumberyard. This is an exciting time to be a game developer.

More specifically, Amazon hopes that Lumberyard will be used to develop multiplayer online games that use AWS for cloud computing and storage, and that integrate with Twitch for user engagement. The engine is free, but AWS usage is not. Don't worry, you can create single player games with Lumberyard as well.

System requirements

Amazon recommends a system with the following specifications for developing games with Lumberyard:

PC running a 64-bit version of Windows 7 or Windows 10
At least 8 GB RAM
Minimum of 60 GB hard disk storage
A 3 GHz or greater quad-core processor
A DirectX 11 (DX11) compatible video card with at least 2 GB of video RAM (VRAM)

As mentioned earlier, there is no support for running Lumberyard on a Mac OS or Linux computer. The game engine is a very large and complex software suite; you should take the system requirements seriously and, if at all possible, exceed the minimum requirements.

Beta software

As you likely know, the Lumberyard game engine is, at the time of this book's publication, in beta. What does that mean? It means a couple of things that are worth exploring. First, developers (that's you!) get early access to amazing software. Other than the cool factor of being able to experiment with a new game engine, early access can accelerate game projects. There are several detractors to this as well. Here are the primary detractors from using beta software:

Not all functions and features will be implemented. Depending on the engine's specific limitations, this can be a showstopper for your game project.

Some functions and features might be partially implemented, not function correctly, or be unreliable. If the features with these characteristics are not the ones you plan to use, then this is not an issue for you.
This, of course, can be a tremendous problem. For example, let's say that the engine's gravity system is buggy. That would make testing your game very difficult, as you would not be able to rely on the gravity system and would not know whether your code has issues or not.

Things can change from release to release. Anything done in one beta version is not guaranteed to work in subsequent beta releases. Things that tend to change between beta versions, other than bug fixes and improvements, are interface changes. This can slow a project down considerably, as development workflows you have adopted may no longer work.

In the next section, you will see what changes were ushered in with each sequential beta release.

Release notes

Amazon initially launched the Lumberyard game engine in February 2016. Since then, there have been several new versions. At the time of this book's publication, there were five releases: 1.0, 1.1, 1.2, 1.3, and 1.4. The following graphic shows the timeline of the five releases:

Let's look at the major offerings of each beta release.

Beta 1.0

The initial beta of the Lumberyard game engine was released on February 9, 2016. This was an innovative offering from Amazon. Lumberyard was released as a triple-A cross-platform game engine at no cost to developers. Developers had full access to the game engine along with the underlying source code. This permits developers to release games without a revenue share and even to create their own game engines using the Lumberyard source code as a base.

Beta 1.1

Beta 1.1 was released just a few short weeks after Beta 1.0. According to Amazon, there were 208 feature upgrades, bug fixes, and improvements with this release. Here are the highlights:

Autoscaling features
Component Entity System
FBX Importer
New Game Gems
Cloud Canvas Resource Manager
New Twitch ChatPlay features

Beta 1.2

Just a few short weeks after Beta 1.1 was released, Beta 1.2 was made available. The rapid release of sequential beta versions is indicative of tremendous development work by the Lumberyard team. This also gives some indication as to the amount of support the game engine is likely to have once it is no longer in beta. With this beta, Amazon announced 218 enhancements and bug fixes to nearly two dozen core Lumberyard components. Here are the largest Lumberyard game engine components to be upgraded in Beta 1.2:

Particle editor
Mannequin
Geppetto
FBX Importer
Multiplayer Gem
Cloud Canvas Resource Manager

Beta 1.3

The three previous beta versions were released in subsequent months. This was an impressive pace, but not likely sustainable due to the tremendous complexities of game engine modifications and the fact that the Lumberyard game engine continues to mature. Released in June 2016, the Beta 1.3 release of Lumberyard introduced support for Virtual Reality (VR) and High Dynamic Range (HDR). Adding support for VR and HDR is enough reason to release a new beta version. Impressively, this release also contained over 130 enhancements and bug fixes to the game engine. Here is a partial list of game engine components that were updated in this release:

Volumetric fog
Motion blur
Height Mapped Ambient Occlusion
Depth of field
Emittance
Integrated Graphics Profiler
FBX Importer
UI Editor
FlowGraph Nodes
Cloud Canvas Resource Manager

Beta 1.4

At the time of this book's publication, the current beta version of Lumberyard was 1.4, which was released in August 2016. This release contained over 230 enhancements and bug fixes as well as some new features.
The primary focus of this release seemed to be multiplayer games and making them more efficient. The result of the changes provided in this release is greater cost-efficiency for multiplayer games when using Amazon GameLift.

Sample game content

Creating triple-A games is a complex process that typically involves a large number of people in a variety of roles, including developers, designers, artists, and more. There is no industry average for how long it takes to create a triple-A game because there are too many variables, including budget, team size, game specifications, and individual and team experience. This being said, it is likely to take up to 2 years to create a triple-A game from scratch. Triple-A, or AAA, games typically have very large budgets, large design and development teams, and large advertising efforts, and they are largely successful. In a nutshell, triple-A games are large!

Around 2 years is a long time, so we have shortened things for you in this book using the available game samples that come bundled with the game engine. As illustrated in the following section, Lumberyard comes with a great set of starter content and sample games.

Starter content

When you first launch Lumberyard, you are able to create a new level or open an existing one. In the Open a Level dialog window, you will find nine levels listed under Levels | GettingStartedFiles. Each of these levels presents a unique opportunity to explore the game engine and learn how things are done. Let's look at each of these next.

getting-started-completed-level

As the level name suggests, this is a complete game level featuring a small game grid and a player-controlled robot character. The character is moved with the standard WASD keyboard keys, rotated with the mouse, and uses the spacebar to jump. The level does a great job of demonstrating physics. As indicated in the following screenshot, the level contains a wall of blocks that have natural physics applied. The robot can run into the wall and fallen blocks as well as use the ramp to launch into the wall. More than just playing the game, you can examine how it was created.

This level is fully playable. Simply use the Ctrl + G keyboard combination to enter Game Mode. When you are through playing the game, press the Esc key to exit. This completed level can be further examined by loading the remaining levels featured in this section. These subsequent levels make it easier to examine specific components of the level.

start-section03-terrain

This section simply contains the terrain. The complete terrain is provided, as well as additional space to explore and practice creating, duplicating, and placing objects.

start-section04-lighting

This level is presented with an expanded terrain to help you explore lighting options. There are environmental lighting effects as well as street lamp objects that can be used to emit light and generate shadows in the game. This level is not playable and is provided to aid your learning of Lumberyard's lighting system.

start-section05-camera-playerstart

This non-playable level is convenient for examining camera placement and discovering how that impacts the player's starting position on game launch.

start-section06-designer-objects

This level is playable, but only to the extent that you can control the robot character and explore the game's environment. With this level, you can focus your exploration on editing the objects.
start-section07-materials

This level includes the full game environment along with two natively created objects: a block and a sphere. You can freely edit these objects and see how they look in Game Mode. This is a great way to learn, as it is a no-risk situation: you do not have to save your changes, as you are essentially working in a sandbox with no impact on a real game project. This is a playable level that allows you to explore the game environment and preview any changes you make to the level.

start-section08-physics

This starter level has the same two 3D objects (block and sphere) as the previous starter level. In this level, the objects have textures. No physics are applied to this level yet, so it is a good level to use to practice creating objects with physics. One option is to attempt to replicate the wall of stacked objects that is present in the completed level.

start-section09-flowgraph-scripting

This playable level contains the wall of 3D blocks that can be knocked over. The game's gameplay is instantiated with FlowGraphs, which can be viewed and edited using this starter level.

start-section10-audio

This final starter level contains the full playable game and serves as a testing ground for implementing audio in the game.

Sample games

There are six unrelated sample game levels accessible through the Open a Level dialog window, listed under Levels | Samples. Each of these levels is a single-level game that demonstrates specific functionality and gameplay. Let's look at each of these next.

Animation_Basic_Sample

This game level contains an animated character in an empty game environment. There is a camera and a light. You can play the game to watch the animated character's idle animation. When you create 3D characters, you will use Geppetto, Lumberyard's animation tool.

Camera_Sample

You can use this sample game to help learn how to create gameplay. The sample game includes a Heads Up Display (HUD) that presents the player with three game modes, each with a different camera. This game can also be used to further explore FlowGraphs and the FlowGraph Editor.

Dont_Die

The Don't Die game level provides a colorful example of full gameplay with an interactive menu system. The game starts with a Press any key to start message followed by a color selection menu, depicted here. The selected color is applied to the spacecraft used in the game.

Movers_Sample

With this game, you can learn how to instantiate animations triggered by user input. Using this game, you will also gain exposure to FlowGraphs and the FlowGraph Editor.

Trigger_Sample

This sample game provides several examples of Proximity and Area Triggers. Here is the list of triggers instantiated in this sample game:

Proximity triggers:
Player only
Any entity
One entity at a time
Only selected entity
Three entities required to trigger

Area triggers:
Two volumes use the same trigger
Stand in all three volumes at the same time
Step inside each trigger in any order
Step inside each trigger in the correct sequence

UIEditor_Sample

This sample game is not playable but provides a commercial-quality User Interface (UI) example. If you run the level in Game Mode, you will not have a game to play, but the stunning visuals of the UI give you a glimpse of what is possible with the Lumberyard game engine.

Amazon Web Services

AWS is a family of cloud-based scalable services to support, in the context of Lumberyard, your game. AWS includes several technologies that can support your Lumberyard game.
These services include:

Cloud Canvas
Cloud Computing
GameLift
Simple Notification Service (SNS)
Simple Queue Service (SQS)
Simple Storage Service (S3)

Asset creation

Game assets include graphic files such as materials, textures, color palettes, 2D objects, and 3D objects. These assets are used to bring a game to life. A terrain, for example, is nothing without grass and dirt textures applied to it. Much of this content is likely to be created with external tools. One internal tool used to implement the externally created graphical assets is the Material Editor.

Audio system

Lumberyard has an audio system that controls how in-game audio is instantiated. No audio sounds are created directly in Lumberyard. Instead, they are created using Wwise (Wave Works Interactive Sound Engine) by Audiokinetic. Because audio is created externally to Lumberyard, a game project's audio team will likely consist of content creators and developers who implement the content in the Lumberyard game.

Cinematics system

Lumberyard has a cinematics system that can be used to create cut-scenes and promotional videos. With this system, you can also make your cinematics interactive.

Flow graph system

Lumberyard's flow graph system is a visual scripting system for creating gameplay. This tool is likely to be used by many of your smaller teams. It can be beneficial to have someone oversee all Flow Graphs to ensure compatibility and standardization.

Geppetto

Geppetto is Lumberyard's character tool. A character team will likely create the game's characters using external tools such as Maya or 3D Studio Max. Using those systems, they can export the necessary files to support importing the character assets into your Lumberyard game. Lumberyard has an FBX Importer tool that is used to import characters created in external programs.

Mannequin editor

Animating objects, especially 3D objects, is a complex process that takes artistic talent and technical expertise. Some projects incorporate separate teams for object creation and animation. For example, you might have a small team that creates robot characters and another team that generates their animations.

Production team

The production team is responsible for creating builds and distributing releases. They will also handle testing coordination. One of their primary tools will be the Waf Build System.

Terrain editor

A game's environment consists of terrain and objects. The terrain is the foundation for the entire game experience and is the focus of exacting efforts. The creation of a terrain starts when a new level is created. The Height Map resolution is the first decision that a level editor, or the person responsible for creating terrain, is faced with.

Twitch ChatPlay system

Twitch integration represents exciting game possibilities, allowing you to engage your game's users in unique ways.

UI editor

Creating a user interface is often, at least on very large projects, the responsibility of a specialized team. This team, or individual, will create the user interface components on each game level to ensure consistency. Artwork required for the user interfaces is likely to be produced by the asset team.

Summary

In this article, you learned about AWS Lumberyard and what it is capable of. You gained an appreciation for Lumberyard's significance to the game development industry. You also learned about the beta history of Lumberyard and how quickly it is maturing into a game engine of choice.
Resources for Article: Further resources on this subject: What Makes a Game a Game? [article] Integrating Accumulo into Various Cloud Platforms [article] CryENGINE 3: Breaking Ground with Sandbox [article]

article-image-third-dimension

The Third Dimension

Packt
10 Aug 2016
13 min read
In this article by Sebastián Di Giuseppe, author of the book Building a 3D Game with LibGDX, we describe how to work in three dimensions, which requires new camera techniques, a new axis on top of the x and y grid, a slightly different workflow, and new render methods to draw our game. We'll learn the very basics of this workflow in this article so that you get a sense of what's coming (moving, scaling, materials, environment, and more), and we are going to move systematically between these topics one step at a time. (For more resources related to this topic, see here.)

The following topics will be covered in this article:

Camera techniques
Workflow
LibGDX's 3D rendering API
Math

Camera techniques

The goal of this article is to successfully learn about working with 3D, as stated. In order to achieve this we will start at the basics, making a simple first person camera. We will use the functions and math that LibGDX contains. Since you have probably used LibGDX more than once, you should be familiar with the concepts of the camera in 2D. The way 3D works is more or less the same, except that there is now a z axis for depth. However, instead of an OrthographicCamera class, a PerspectiveCamera class is used to set up the 3D environment.

Creating a 3D camera is just as easy as creating a 2D camera. The constructor of the PerspectiveCamera class requires three arguments: the field of vision, the camera width, and the camera height. The camera width and height are known from 2D cameras; the field of vision is new. Initialization of a PerspectiveCamera looks like this:

float FoV = 67;
PerspectiveCamera camera = new PerspectiveCamera(FoV, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

The first argument, the field of vision, describes the angle the first person camera can see. The image above gives a good idea of what the field of view is. For first person shooters, values up to 100 are used; higher than 100 confuses the player, and with a lower field of vision the player is bound to see less.

Displaying a texture

We will start by doing something exciting: drawing a cube on the screen!

Drawing a cube

First things first! Let's create a camera. Earlier, we showed the difference between the 2D camera and the 3D camera, so let's put this to use. Start by creating a new class in your main package (ours is com.deeep.spaceglad) and name it as you like. The following imports are used in our test (Input and Vector3 will be needed by the movement and rotation code later on):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.VertexAttributes;
import com.badlogic.gdx.graphics.g3d.*;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.environment.DirectionalLight;
import com.badlogic.gdx.graphics.g3d.utils.ModelBuilder;
import com.badlogic.gdx.math.Vector3;

Create a class member called cam of type PerspectiveCamera:

public PerspectiveCamera cam;

Now this camera needs to be initialized and configured. This will be done in the create method, as shown below:

public void create() {
    cam = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    cam.position.set(10f, 10f, 10f);
    cam.lookAt(0, 0, 0);
    cam.near = 1f;
    cam.far = 300f;
    cam.update();
}

In the above code snippet we set the position of the camera and look towards a point at (0, 0, 0).
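A quick aside: while our camera stays fixed in this example, it can be handy during development to fly around the scene. LibGDX ships a ready-made CameraInputController in com.badlogic.gdx.graphics.g3d.utils for this. The snippet below is an optional sketch, not part of this article's listing:

import com.badlogic.gdx.graphics.g3d.utils.CameraInputController;

// In create(), after the camera is configured:
// orbit and pan the camera with the mouse while developing.
CameraInputController camController = new CameraInputController(cam);
Gdx.input.setInputProcessor(camController);

// In render(), let the controller update the camera each frame:
camController.update();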
Next up is getting a cube ready to draw. In 2D it was possible to draw textures, but textures are flat. In 3D, models are used. Later on we will import those models, but we will start with generated models. LibGDX offers a convenient class to build simple models such as spheres, cubes, cylinders, and many more.

Let's add two more class members, a Model and a ModelInstance. The Model class contains all the information on what to draw and the resources that go along with it. The ModelInstance class has information on the whereabouts of the model, such as its location, rotation, and scale:

public Model model;
public ModelInstance instance;

Add those class members. We use the overridden create function to initialize them:

public void create() {
    ...
    ModelBuilder modelBuilder = new ModelBuilder();
    Material mat = new Material(ColorAttribute.createDiffuse(Color.BLUE));
    model = modelBuilder.createBox(5, 5, 5, mat, VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal);
    instance = new ModelInstance(model);
}

We use a ModelBuilder class to create a box. The box needs a material, a color. A material is an object that holds different attributes; you can add as many as you would like. The attributes passed to the material change the way models are perceived and shown on the screen. We could, for example, add FloatAttribute.createShininess(8f) after the ColorAttribute class; that will make the box shine when lights are around. More complex configurations are possible, but we will leave those out of scope for now.

With the ModelBuilder class, we create a box of (5, 5, 5). We pass the material in the constructor, and the fifth argument holds the attributes for the specific box we are creating. We use a bitwise operator to combine a position attribute and a normal attribute. We tell the model that it has a position, because every cube needs a position, and the normal is to make sure the lighting works and the cube is drawn as we want it to be drawn. These attributes are passed down to OpenGL, on which LibGDX is built.

Now we are almost ready to draw our first cube. Two things are missing. First, a batch to draw to: when designing 2D games in LibGDX a SpriteBatch class is used, but since we are not using sprites anymore, but rather models, we will use a ModelBatch class, which is the equivalent for models. Second, we will have to create an environment and add lights to it. For that we will need two more class members:

public ModelBatch modelBatch;
public Environment environment;

And they are to be initialized, just like the other class members:

public void create() {
    ...
    modelBatch = new ModelBatch();
    environment = new Environment();
    environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.4f, 0.4f, 0.4f, 1f));
    environment.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));
}

Here we add two lights: an ambient light, which lights up everything that is being drawn (a general light source for the whole environment), and a directional light, which has a direction (most similar to a "sun" type of source). In general, for lights, you can experiment with directions, colors, and different types. Another type of light would be PointLight, which can be compared to a flashlight. Both lights here start with three arguments for the color, which won't make a visible difference yet as we don't have any textures, and the directional light's constructor is followed by a direction. This direction can be seen as a vector.
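As a small illustrative sketch (the color, position, and intensity values here are arbitrary, not taken from the article), a point light can be added to the same environment by giving it a color, a position, and an intensity:

import com.badlogic.gdx.graphics.g3d.environment.PointLight;

// Color (r, g, b), position (x, y, z), and intensity.
environment.add(new PointLight().set(1f, 1f, 1f, 0f, 6f, 0f, 30f));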
Now we are all set to draw our environment and the model in it:

@Override
public void render() {
    Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    modelBatch.begin(cam);
    modelBatch.render(instance, environment);
    modelBatch.end();
}

This directly renders our cube. The ModelBatch class behaves just like a SpriteBatch, as can be seen if we run it: it has to be started (begin), then asked to render the given parameters (models and environment in our case), and then stopped.

We should not forget to release any resources that our game allocated. The model we created allocates memory that should be disposed of:

@Override
public void dispose() {
    model.dispose();
}

Now we can look at our beautiful cube! It's only very static and empty. We will add some movement to it in the next subsection!

Translation

Translating, rotating, and scaling are a bit different from 2D; it's slightly more mathematical. The easy part is vectors: instead of a Vector2, we can now use a Vector3, which is essentially the same, just with another dimension added. Let's look at some basic operations on 3D models. We will use the cube that we previously created.

With translation we are able to move the model along all three axes. Let's create a function that moves our cube along the x axis. We add a member variable of type Vector3 to our class to store the position in for now:

Vector3 position = new Vector3();

private void movement() {
    instance.transform.getTranslation(position);
    position.x += Gdx.graphics.getDeltaTime();
    instance.transform.setTranslation(position);
}

The above code snippet retrieves the translation, adds the delta time to the x attribute of the translation, and then sets the translation of the ModelInstance. The 3D library returns the translation a little differently than usual: we pass a vector, and that vector gets adjusted to the current state of the object.

We have to call this function every time the game updates, so we put it in our render loop before we start drawing:

@Override
public void render() {
    movement();
    ...
}

It might seem like the cube is moving diagonally, but that's because of the angle of our camera; in fact it's moving towards one face of the cube. That was easy! It's only slightly annoying that it moves out of bounds after a short while. Therefore we will change the movement function to contain some user input handling:

private void movement() {
    instance.transform.getTranslation(position);
    if (Gdx.input.isKeyPressed(Input.Keys.W)) {
        position.x += Gdx.graphics.getDeltaTime();
    }
    if (Gdx.input.isKeyPressed(Input.Keys.D)) {
        position.z += Gdx.graphics.getDeltaTime();
    }
    if (Gdx.input.isKeyPressed(Input.Keys.A)) {
        position.z -= Gdx.graphics.getDeltaTime();
    }
    if (Gdx.input.isKeyPressed(Input.Keys.S)) {
        position.x -= Gdx.graphics.getDeltaTime();
    }
    instance.transform.setTranslation(position);
}

The rewritten movement function retrieves our position, updates it based on the keys that are pressed, and sets the translation of our model instance.

Rotation

Rotation is slightly different from 2D, since there are multiple axes on which we can rotate: namely the x, y, and z axes. We will now create a function to showcase the rotation of the model.
First off, let us create a function with which we can rotate the object on all axes:

private void rotate() {
    if (Gdx.input.isKeyPressed(Input.Keys.NUM_1))
        instance.transform.rotate(Vector3.X, Gdx.graphics.getDeltaTime() * 100);
    if (Gdx.input.isKeyPressed(Input.Keys.NUM_2))
        instance.transform.rotate(Vector3.Y, Gdx.graphics.getDeltaTime() * 100);
    if (Gdx.input.isKeyPressed(Input.Keys.NUM_3))
        instance.transform.rotate(Vector3.Z, Gdx.graphics.getDeltaTime() * 100);
}

And let's not forget to call this function from the render loop, after we call the movement function:

@Override
public void render() {
    ...
    rotate();
}

If we press the number keys 1, 2, or 3, we can rotate our model. The first argument of the rotate function is the axis to rotate on; the second argument is the amount to rotate. These calls add a rotation. We can also set the value of an axis, instead of adding a rotation, with the following function:

instance.transform.setToRotation(Vector3.Z, Gdx.graphics.getDeltaTime() * 100);

However, say we want to set all three axis rotations at the same time: we can't simply call the setToRotation function three times in a row, once for each axis, as each call eliminates any rotation done before it. Luckily, LibGDX has us covered with a function that is able to take all three axes:

float rotation;

private void rotate() {
    rotation = (rotation + Gdx.graphics.getDeltaTime() * 100) % 360;
    instance.transform.setFromEulerAngles(0, 0, rotation);
}

The above function will continuously rotate our cube. We face one last problem: we can't seem to move the cube! The setFromEulerAngles function clears all the translation and rotation properties. Lucky for us, setFromEulerAngles returns a Matrix4 type, so we can chain another function call from it, one which translates the matrix. For that we use the trn(x, y, z) function, short for translate. Now we can update our rotation function so that it also translates:

instance.transform.setFromEulerAngles(0, 0, rotation).trn(position.x, position.y, position.z);

Now we can set our cube to a rotation and translate it! These are the most basic operations, which we will use a lot throughout the book. As you can see, this call does both the rotation and the translation, so we can remove the last line in our movement function:

instance.transform.setTranslation(position);

Our latest rotate function looks like the following:

private void rotate() {
    rotation = (rotation + Gdx.graphics.getDeltaTime() * 100) % 360;
    instance.transform.setFromEulerAngles(0, 0, rotation).trn(position.x, position.y, position.z);
}

The setFromEulerAngles call will be extracted to a function of its own, as it serves multiple purposes now and is not solely bound to our rotate function:

private void updateTransformation() {
    instance.transform.setFromEulerAngles(0, 0, rotation).trn(position.x, position.y, position.z);
}

This function should be called after we've calculated our rotation and translation:

public void render() {
    rotate();
    movement();
    updateTransformation();
    ...
}
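As a side note, and as an alternative to the chained calls above rather than what this article uses, the Matrix4 class also offers a set overload that builds the whole transformation in one step from a position and a rotation Quaternion (this needs an import of com.badlogic.gdx.math.Quaternion):

// Equivalent transformation built in a single call; the Z-axis quaternion
// matches our setFromEulerAngles(0, 0, rotation) usage above.
instance.transform.set(position, new Quaternion(Vector3.Z, rotation));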
Scaling

We've covered almost all of the transformations we can apply to models; the last one described in this article is the scaling of a model. LibGDX luckily contains all the required functions and methods for this. Let's extend our previous example and make our box grow and shrink over time. We first create a function that increments and decrements a scale variable:

boolean increment;
float scale = 1;

void scale() {
    if (increment) {
        scale = scale + Gdx.graphics.getDeltaTime() / 5;
        if (scale >= 1.5f) {
            increment = false;
        }
    } else {
        scale = scale - Gdx.graphics.getDeltaTime() / 5;
        if (scale <= 0.5f) {
            increment = true;
        }
    }
}

Now, to apply this scaling, we can adjust our updateTransformation function to include it:

private void updateTransformation() {
    instance.transform.setFromEulerAngles(0, 0, rotation).trn(position.x, position.y, position.z).scale(scale, scale, scale);
}

Our render method should now include the scaling function as well:

public void render() {
    rotate();
    movement();
    scale();
    updateTransformation();
    ...
}

And there you go, we can now successfully move, rotate, and scale our cube!

Summary

In this article we learned about the workflow of the LibGDX 3D API. We are now able to apply multiple kinds of transformations to a model, and we understand the differences between 2D and 3D. We also learned how to apply materials to models, which change the appearance of a model and let us create cool effects. Note that there's plenty more to learn about 3D, and a lot of practice to go with it to fully understand it. There are also subjects not covered here, like how to create your own materials, and how to make use of shaders. There's plenty of room for learning and experimenting. In the next article we will start applying the theory learned here and work towards an actual game! We will also go more in depth on the environment and lights, as well as collision detection. So there is plenty to look forward to.

Resources for Article: Further resources on this subject: 3D Websites [Article] Your 3D World [Article] Using 3D Objects [Article]