
Tech Guides - Game Development

8 Articles

Uses of Machine Learning in Gaming

Natasha Mathur
22 Oct 2018
5 min read
All around us, our perception of learning and intellect is being challenged daily by new and emerging technologies. From self-driving cars, to machines playing Go and chess, to computers beating humans at classic Atari games, the group of technologies we colloquially call Machine Learning has come to dominate a new era of technological growth – an era that has been compared in importance to the discovery of electricity and has already been described as the next human technological age.

Games and simulations are no strangers to AI technologies, and there are numerous assets available to the Unity developer for providing simulated machine intelligence. These include techniques such as Behavior Trees, Finite State Machines, navigation meshes, A*, and other heuristics game developers use to simulate intelligence. So, why Machine Learning and why now? The reason is due in large part to the OpenAI initiative, which encourages research across academia and industry to share ideas and research on AI and ML. This has resulted in an explosion of growth in new ideas, methods, and areas for research. For games and simulations, this means we no longer have to fake or simulate intelligence. Now, we can build agents that learn from their environment and even learn to beat their human builders.

This article is an excerpt taken from the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning' by Micheal Lanham. In this article, we look at the role that machine learning plays in game development.

Machine Learning is an implementation of Artificial Intelligence. It is a way for a computer to assimilate data or state and provide a learned solution or response. We often think of AI as a broader term that reflects a "smart" system. A full game AI system, for instance, may incorporate ML tools combined with more classic AI techniques like Behavior Trees in order to simulate a richer, more unpredictable AI. We will use AI to describe a system and ML to describe the implementation.
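The claim that agents can "learn from their environment" is easier to picture with a toy example. The following self-contained Python sketch is generic tabular Q-learning on a made-up corridor world – a simplification for illustration only, not the ML-Agents toolkit the book covers, which trains neural-network policies against a live Unity scene through a similar observe–act–reward loop. All states, rewards, and hyperparameters here are invented for the example.

```python
# Tabular Q-learning on a toy corridor: a minimal sketch of an agent
# learning from its environment. Not the ML-Agents API.
import random

N_STATES = 6          # positions 0..5 in a corridor; the goal is state 5
ACTIONS = [-1, +1]    # step left or step right
EPISODES, ALPHA, GAMMA, EPSILON = 500, 0.5, 0.9, 0.1

q = [[0.0, 0.0] for _ in range(N_STATES)]   # one Q-value per (state, action)

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = random.randrange(2) if random.random() < EPSILON else q[state].index(max(q[state]))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Standard Q-learning update toward reward plus discounted future value.
        q[state][a] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][a])
        state = next_state

# After training, the greedy policy walks straight toward the goal.
print([q[s].index(max(q[s])) for s in range(N_STATES - 1)])  # expect all 1s (move right)
```

The same loop – observe state, pick an action, receive a reward, update the policy – underlies the deep reinforcement learning the toolkit performs, only with a Unity scene as the environment and a neural network in place of the hand-rolled table.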
How Machine Learning is useful in gaming

Game engines have embraced the idea of incorporating ML into all aspects of their products, not just game AI. While most developers may think of ML primarily as a gameplay tool, it can help game development in the following areas:

- Map/Level Generation: There are already plenty of examples where developers have used ML to auto-generate everything from dungeons to realistic terrain. Getting this right can give a game endless replayability, but it can be some of the most challenging ML to develop.
- Texture/Shader Generation: Another area that is getting attention is texture and shader generation. These technologies are getting a boost from advances in generative adversarial networks (GANs). There are plenty of great and fun examples of this tech in action; just search for deep fakes in your favorite search engine.
- Model Generation: There are a few projects coming to fruition in this area that could greatly simplify 3D object construction through enhanced scanning and/or auto-generation. Imagine being able to textually describe a simple model and have ML build it for you, in real time, in a game or other AR/VR/MR app.
- Audio Generation: Being able to generate audio sound effects or music on the fly is already being worked on in other areas, not just games. Just imagine being able to have a custom-designed soundtrack for your game composed by ML.
- Artificial Players: This encompasses many uses, from gamers using ML to play a game on their behalf to developers using artificial players as enhanced test agents or as a way to engage players during periods of low activity. If your game is simple enough, this could also be a way of auto-testing levels.
- NPCs or Game AI: Currently, there are better patterns out there to model basic behavioral intelligence, such as Behavior Trees. While it's unlikely that BTs or other similar patterns will go away any time soon, imagine being able to model an NPC that may actually do something unpredictable but rather cool. This opens all sorts of possibilities that excite not only developers but players as well.

So, we learned about different areas of the gaming world where machine learning can be used extensively, such as model generation, artificial players, NPCs, and level generation. If you found this post useful, be sure to check out the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning' to learn more machine learning concepts in gaming.

5 Ways Artificial Intelligence is Transforming the Gaming Industry
How should web developers learn machine learning?
Deep Learning in games – Neural Networks set to design virtual worlds


Rust as a Game Programming Language: Is it any good?

Amarabha Banerjee
22 Sep 2018
4 min read
We have moved light years away from the handheld gaming days. The good old Tetris and Mario games were easy to play and low on graphics, yet surprisingly difficult to program despite their apparent simplicity. Although it's difficult to trace back the language in which all of these games were written, many of them were written in the C family of languages, which contributed to the difficulty of programming them. Rust has been touted as one of the successors of C, which in turn brings the question back: if C was difficult to code in, then how exactly is Rust going to be different?

The answer to this question lies in Rust's approach. Rust was designed primarily as a systems programming language by the Mozilla Foundation. The primary game development languages over the past 20 years have been C and C++. Rust brings a fresh change in approach – from object oriented to data oriented.

The problem with object-oriented programming was summarized nicely by Catherine West from Chucklefish. According to her, treating game elements such as NPCs and game worlds as objects might work well at a small scale. But when you are trying to create your own game engine, treating game elements as objects implies creating a lot of oversized objects with complex layers of dependencies. The Rust approach, on the other hand, is data oriented: every element is treated as data. This considerably simplifies the process of creating mid-sized game engines. Chucklefish being a significant name in 2D game development, this statement from Catherine West comes as a major boost for developers who want to use Rust for developing 2D games. She has, however, expressed doubts about using Rust for 3D game development.
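West's object-versus-data contrast is easier to see in a small sketch. The following is a rough, language-neutral illustration written in Python purely for brevity (in Rust, the same layout maps onto plain structs and Vec fields); every name and field in it is hypothetical.

```python
# Object-oriented style: each entity is a self-contained object that owns all
# of its data and behaviour, and tends to grow references to other systems.
class Npc:
    def __init__(self, x, y, health, dialogue, inventory):
        self.x, self.y = x, y
        self.health = health
        self.dialogue = dialogue
        self.inventory = inventory

    def update(self):
        # Update logic ends up reaching into many unrelated fields.
        self.x += 1

# Data-oriented style: each property lives in its own flat array, and
# "systems" are plain functions that iterate over just the data they need.
positions = [(0, 0), (5, 2), (9, 9)]   # one entry per entity
healths   = [100, 80, 45]

def movement_system(positions, dx, dy):
    return [(x + dx, y + dy) for x, y in positions]

def damage_system(healths, amount):
    return [h - amount for h in healths]

positions = movement_system(positions, 1, 0)
healths = damage_system(healths, 10)
print(positions, healths)
```

The second layout keeps related data packed together and keeps behaviour out of the entities themselves, which is the core of the data-oriented argument for engine-scale code.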
Another important personality who has recently come out in support of Rust is Andrea Pessino, CTO of Ready at Dawn. Ready at Dawn is a well-established game studio known for games such as The Order: 1886, Daxter, and various God of War titles. His public endorsement on Twitter is another feather in Rust's cap for game development.

The present state of game development in Rust is quite encouraging. There are quite a few low-level graphics libraries, such as GFX. GFX is a low-level abstraction layer over platform-specific graphics interfaces (OpenGL, Metal, Vulkan). It offers handy wrappers over windowing backends such as glutin (a pure-Rust option), GLFW, and more. GFX is still at a very early stage of development, with the present version being 0.17.

Although major game engines like Unity and Unreal are yet to support Rust for game development, there are a few complete game engines which allow you to create full games with Rust using their frameworks.

The first one is Piston. It is the oldest game engine for Rust. It is also the most stable and has great documentation. However, many people find Piston confusing and hard to use, as it is super-modular by design. Sometimes it is even hard to understand which module to load to achieve a certain goal or build a certain component of a game.

Amethyst is a more recent game engine/framework inspired by commercial monolithic game engines. It comes with all the necessary dependencies in its package. However, it is evolving quickly, and hence the present documentation is already outdated. That said, there is a vibrant community looking to bring more and more developers into its fold, which gives new developers an opportunity to get into game development with Rust and get involved with a game engine as well.

GGEZ is a simple 2D game engine inspired by the LÖVE engine. This library is better suited to creating simple 2D games for hobbyists. GGEZ is also very new and changes quickly. Its design simplicity is an incentive for indie developers and hobbyists to start creating games with it.

Some other popular libraries include:

- noise-rs: a noise generator
- rlua: high-level bindings between Rust and Lua
- sfxr: a reimplementation of DrPetter's "sfxr" sound effect generator as a Rust library

The conclusion that we can draw from here is that Rust has a lot of promise when it comes to game development. With its data-oriented approach, easy memory management, and access to low-level performance enhancement techniques, Rust can become a full-fledged game development language in the near future.

Best game engines for Artificial Intelligence game development
Implementing Unity game engine and assets for 2D game development
How to use arrays, lists, and dictionaries in Unity for 3D game development


How Artificial Intelligence and Machine Learning can turbocharge a Game Developer's career

Guest Contributor
06 Sep 2018
7 min read
Gaming – whether board games or games set in the virtual realm – has been a massively popular form of entertainment since time immemorial. In the pursuit of creating more sophisticated, thrilling, and intelligent games, game developers have delved into ML and AI technologies to fuel innovation in the gaming sphere. The gaming domain is the ideal experimentation bed for evolving technologies because not only does it put up complex and challenging problems for ML and AI to solve, it also serves as a ground for creativity – a meeting ground for machine learning and the art of interaction.

Machine Learning and Artificial Intelligence in Gaming

The reliance on AI for gaming is not a recent development. In fact, it dates back to 1949, when the famous cryptographer and mathematician Claude Shannon made his musings public about how a supercomputer could be made to master Chess. Then again, in 1952, a graduate student in the UK developed an AI that could play tic-tac-toe with ultimate perfection.

However, it isn't just ML and AI that are progressing through experimentation on games. Game development, too, has benefited a great deal from these pioneering technologies. AI and ML have helped enhance the gaming experience on many grounds, such as game design, the interactive quotient, and the inner workings of games. These AI use cases focus on two primary things: one is to impart enhanced realism to virtual gaming environments, and the second is to create a more naturalistic interface between the gaming environment and the players.

As of now, the focus of game developers, data scientists, and ML researchers lies in two specific categories of the gaming domain – games of perfect information and games of imperfect information. In games of perfect information, a player is aware of all the aspects of the game throughout the playing session, whereas in games of imperfect information, players are oblivious to specific aspects of the game.

When it comes to games of perfect information such as Chess and Go, AI has shown various instances of overpowering human intelligence. Back in 1997, IBM's Deep Blue successfully defeated world Chess champion Garry Kasparov in a six-game match. In 2016, Google's AlphaGo emerged as the victor in a Go match, scoring 4-1 after defeating South Korean Go champion Lee Sedol. One of the most advanced chess AIs developed yet, Stockfish, uses a combination of advanced heuristics and brute force to compute numeric values for each and every move in a specific position in Chess. It also effectively eliminates bad moves using the alpha-beta pruning search algorithm.

While the progress and contribution of AI and ML to the field of games of perfect information is laudable, researchers are now intrigued by games of imperfect information. Games of imperfect information offer much more challenging situations that are essentially difficult for machines to learn and master. Thus, the next evolution in the world of gaming will be to create spontaneous gaming environments using AI technology, in which developers will build only the gaming environment and its mechanics instead of creating a game with pre-programmed/scripted plots. In such a scenario, the AI will have to confront and solve spontaneous challenges with personalized scenarios generated on the spot. Games like StarCraft and StarCraft II have stirred up massive interest among game researchers and developers.
In these games, the players are only partially aware of the gaming aspects, and the game is largely determined not just by the AI moves and the previous state of the game, but also by the moves of other players. Since in these games you have little knowledge about your rival's moves, you have to take decisions on the go, and your moves have to be spontaneous. The recent win of OpenAI Five over amateur human players in Dota 2 is a good case in point. OpenAI Five is a team of five neural networks that leverages an advanced version of Proximal Policy Optimization and uses a separate LSTM to learn identifiable strategies. The progress of OpenAI Five shows that even without human data, reinforcement learning can facilitate long-term planning, thus allowing us to make further progress in games of imperfect information.

Career in Game Development with ML and AI

As ML and AI continue to penetrate the gaming industry, they are creating a huge demand for talented and skilled game developers who are well versed in these technologies. Today, game development is at a place where it's no longer necessary to build games using time-consuming manual techniques. ML and AI have made the task of game developers easier: by leveraging these technologies, they can design and build innovative gaming environments and test them automatically.

The integration of AI and ML in the gaming domain is giving birth to new job positions like Gameplay Software Engineer (AI), Gameplay Programmer (AI), and Game Security Data Scientist, to name a few. The salaries of traditional game developers are in stark contrast with those of developers who have AI/ML skills. While the average salary of game developers is usually around $44,000, it can scale up to and over $120,000 if one possesses AI/ML skills.

Gameplay Engineer

Average salary: $73,000 – $116,000

Gameplay engineers are usually part of the core game dev team and are entrusted with the responsibility of enhancing the existing gameplay systems to enrich the player experience. Companies today demand gameplay engineers who are proficient in C/C++ and well versed in AI/ML technologies.

Gameplay Programmer

Average salary: $98,000 – $149,000

Gameplay programmers work in close collaboration with the production and design teams to develop cutting-edge features in existing and upcoming gameplay systems. Programming skills are a must, and knowledge of AI/ML technologies is an added bonus.

Game Security Data Scientist

Average salary: $73,000 – $106,000

The role of a game security data scientist is to combine both security and data science approaches to detect anomalies and fraudulent behavior in games. This calls for a high degree of expertise in AI, ML, and other statistical methods.

With impressive salaries and exciting job opportunities cropping up fast in the game development sphere, the industry is attracting some major talent. Game developers and software developers around the world are choosing the field due to the promise of rapid career growth. If you wish to bag better and more challenging roles in the domain of game development, you should definitely try to upskill your talent and knowledge base by mastering the fields of ML and AI.

Packt Publishing is the leading UK provider of Technology eBooks, Coding eBooks, Videos and Blogs, helping IT professionals put software to work. It offers several books and videos on game development with AI and machine learning. It's never too late to learn new disciplines and expand your knowledge base.
There are numerous online platforms that offer great artificial intelligence courses. The perk of learning from a registered online platform is that you can learn and grow at your own pace and according to your convenience. So, enroll yourself in one and spice up your career in game development!

About the author: Abhinav Rai is the Data Analyst at UpGrad, an online education platform providing industry-oriented programs in collaboration with world-class institutes, some of which are MICA, IIIT Bangalore, BITS, and various industry leaders including MakeMyTrip, Ola, Flipkart, etc.

Best game engines for AI game development
Implementing Unity game engine and assets for 2D game development [Tutorial]
How to use arrays, lists, and dictionaries in Unity for 3D game development


Best game engines for Artificial Intelligence game development

Natasha Mathur
24 Aug 2018
8 min read

"A computer would deserve to be called intelligent if it could deceive a human into believing that it was human." — Alan Turing

It is quite common to find games which are initially exciting but eventually take a boring turn, making you want to quit. Then there are games which are too difficult to hold your interest, and you end up quitting in the beginning phase itself. These are two of the most common problems that game developers face when building games. This is where AI comes to your rescue, to spice things up.

Why use Artificial Intelligence in games?

The major reason for using AI in games is to provide a challenging opponent that makes the game more fun to play. But AI in the gaming industry is not recent news. The gaming world has been leveraging the wonders of AI for a long time now. One of the first examples of AI in games is the computerized game Nim, created back in 1951. Other games such as Façade, Black & White, The Sims, Versu, and F.E.A.R. are all great AI games that hit the market a long time ago. Even modern-day games like Need for Speed, Civilization, or Counter-Strike use AI.

AI controls a lot of elements in games and is usually behind characters such as enemy creeps, neutral merchants, or even animals. AI in games is used to give non-player characters (NPCs) responsive, adaptive, and intelligent behaviors similar to human-like intelligence. AI helps make NPCs seem intelligent, as they are able to actively change their level of skill based on the person playing the game. This makes the game seem more personalized to the gamer.

Playing video games is fun, and developing these games is equally fun. There are different game engines on the market to help with the development of games. A game engine is software that provides game creators with the necessary set of features to build games quickly and efficiently. Let's have a look at the top game engines for Artificial Intelligence game development.

Unity3D

Developer: Unity Technologies
Release Date: June 8, 2005

Unity is a cross-platform game engine which provides users with the ability to create games in both 2D and 3D. It is extremely popular and loved by game designers from large and small studios alike. Apart from 3D and 2D games, it also helps with simulations for desktops, laptops, home consoles, smart TVs, and mobile devices.

Key AI features: Unity offers a machine learning agents toolkit to game developers, which helps them include AI agents within games. As per the Unity team, the "Machine Learning Agents Toolkit (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents".

The ML-Agents SDK transforms games and simulations created using the Unity Editor into environments for training intelligent agents. These ML agents are trained using deep reinforcement learning, imitation learning, neuroevolution, or other machine learning methods via Python APIs. There's also a TensorFlow-based algorithm provided by Unity to allow game developers to easily train intelligent agents for 2D, 3D, and VR/AR games. These trained agents are then used for controlling NPC behavior within games. The ML-Agents toolkit is beneficial for both game developers and AI researchers. Apart from this, Unity3D is easy to use and learn, compatible with every game platform, and provides great community support.
Learning resources:

- Unity AI Programming Essentials
- Unity 2017 Game AI Programming - Third Edition
- Unity 5.x Game AI Programming Cookbook

Unreal Engine 4

Developer: Epic Games
Release Date: May 1998

Unreal Engine is widely used among developers all around the world. It is a collection of integrated tools for game developers which helps them build games, simulations, and visualizations. It is also among the top game engines used to develop high-end AAA titles. Gears of War, Batman: Arkham Asylum, and Mass Effect are some of the popular games developed using Unreal Engine.

Key AI features: Unreal Engine uses a set of tools which help add AI capabilities to a game. These include the Behavior Tree, Navigation Component, Blackboard Asset, Enumeration, Target Point, AI Controller, and Navigation Volumes.

- The Behavior Tree creates different states and the logic behind the AI.
- The Navigation Component helps with handling movement for the AI.
- The Blackboard Asset stores information and acts as the local variable store for the AI.
- Enumeration creates states and allows alternating between them.
- The Target Point creates a basic path node form.
- The AI Controller and Character tools are responsible for handling the communication between the world and the controlled pawn for the AI.
- Finally, the Navigation Volumes feature creates a Navigation Mesh in the environment to allow easy pathfinding for the AI.

There are also features such as Blueprint Visual Scripting, which can be converted into performant C++ code, AIComponents, and the Environment Query System (EQS), which gives agents the ability to perceive their environment. Apart from its AI capabilities, Unreal Engine offers the largest community support, with countless hours of video tutorials and assets. It is also compatible with a variety of operating platforms such as iOS, Android, Linux, Mac, Windows, and most game consoles. However, certain built-in tools in Unreal Engine can be hard for beginners to learn.

Learning resources:

- Unreal Engine 4 AI Programming Essentials

CryEngine 3

Developer: Crytek
Release Date: May 2, 2002

CryEngine is a powerful game development platform that comes packed with a set of tools and features to create world-class gaming experiences. It is the game engine behind games such as Sniper: Ghost Warrior 2, SNOW, and others.

Key AI features: CryEngine comes with an AI system designed for the easy creation of custom AI actors, flexible enough to handle a large set of complex and different worlds. The core of CryEngine's AI system is based on lots of scripting. There are different AI elements within this system that add AI capabilities to the NPCs within the game:

- AI Actions allow developers to script AI behaviors without creating new code.
- AI Actors Logger can log AI events and signals to files.
- AI Control Objects use AI objects to control AI entities/actors.
- AI Debug Draw is the primary tool offered by CryEngine for information on the current state of the AI system and AI actors.
- AI Debugger registers the inputs that AI agents receive and the decisions that they make in real time during a game session.
- The AI Sequence system works in parallel to the flow graph and AI systems, helping to simplify and group AI control.

CryEngine offers the easiest AI coding of any tech currently on the market. Since CryEngine is relatively new compared to other game engines, it does not have a very flourishing community yet, and despite the easy AI coding, its overall learning curve is high.
Panda3D

Developer: Disney Interactive (until 2010), Walt Disney Imagineering, Carnegie Mellon University
Release Date: 2002

Panda3D is a game engine and framework for 3D rendering and game development for Python and C++ programs. It includes graphics, audio, I/O, collision detection, and other abilities for the creation of 3D games.

Key AI features: Panda3D comes packed with an AI library named PandAI v1.0. PandAI is an AI library which provides "artificially intelligent" behavior for NPCs (non-player characters) in games. The PandAI library offers functionality for steering behaviors (Seek, Flee, Pursue, Evade, Wander, Flock, Obstacle Avoidance, Path Following) and pathfinding (helping NPCs intelligently avoid obstacles via the shortest path).

This AI library is composed of several different entities. For instance, there's a main AIWorld class to update any AICharacters added to it. Each AICharacter has its own AIBehavior object for tracking all the position and rotation updates. Each AIBehavior object has the functionality to implement all the steering and pathfinding behaviors. These features within Panda3D give you the ability to call the respective functions. Panda3D is a relatively simple game engine which lets you add AI capabilities within your games. Its community is not as robust as those of the other engines, but the engine itself has a low learning curve.

AI is a fantastic tool which makes the entities in games seem more organic, alive, and real. The main goal here is not to copy the entire human thought process but just to sell the illusion of life. These game engines provide developers with the entire framework needed to add AI capabilities to their games. The game development process is more fun as there is no need to create all the systems, including physics, graphics, and AI, from scratch. Now, if you're wondering which of the four engines mentioned in this article is the best for AI, there is no specific answer, as selecting the best AI game engine depends on the requirements of your project.

Game Engine Wars: Unity vs Unreal Engine
Unity switches to WebAssembly as the output format for the Unity WebGL build target
Developing Games Using AI


What you should know about Unity 2018 Interface

Amarabha Banerjee
23 Jul 2018
8 min read
In this article, we will show Unity 2018's primary views and windows; we will also cover layouts and the toolbar. The interface components covered in this post are the most commonly used ones. This article is taken from the book Getting Started with Unity 2018 written by Dr. Edward Lavieri.

Unity 2018 User Interface Components at a glance

When we first launch Unity, we might be intimidated by all the areas, tabs, menus, and buttons on the interface. Unity is a complex game engine with a lot of functionality, so we should expect many components to interact with. If we break the interface down into separate components, we can examine each one independently to gain a thorough understanding of the entire interface. As you can see here, we have identified six primary areas of the interface. We will examine each of these in subsequent sections. As you will quickly learn, this interface is customizable. The following screenshot shows the default configuration of the Unity user interface.

Menu

The Unity editor's main menu bar consists of eight pull-down options. We will briefly review each menu option in this section. Additional details will be provided in subsequent chapters, as we start developing our Cucumber Beetle game.

Unity's menus are contextual. This means that only menu items pertinent to the currently selected object will be enabled. Other, non-applicable menu items will appear gray instead of black and will not be selectable.

Unity

The Unity menu item gives us access to information about Unity, our software license, display options, module information, and preferences. Accessing the Unity | About Unity... menu option shows you the version of the engine you are running. There is additional information as well, but you would probably only use this menu option to check your Unity version.

The Unity | Preferences... option brings up the Unity Preferences dialog window. That interface has seven side tabs: General, External Tools, Colors, Keys, GI Cache, 2D, and Cache Server. You are encouraged to become familiar with them as you gain experience in Unity. The Unity | Modules option provides you with a list of playback engines that are running, as well as any Unity extensions. You can quit the Unity game engine by selecting the Unity | Quit menu option.

File

Unity's File menu includes access to your game's scenes and projects. We will use these features throughout our game development process. As you can see in the following screenshot, we also have access to the Build Settings.

Edit

The Edit menu has similar functionality to standard editors, not just game engines. For example, the standard Cut, Copy, Paste, Delete, Undo, and Redo options are there, and the shortcut keys are aligned with the software industry standard. As you can see from the following screenshot, there is additional functionality accessible here: there are play, pause, and step commands, and we can also sign in and out of our Unity account.

The Edit | Project Settings option gives us access to Input, Tags and Layers, Audio, Time, Player, Physics, Physics 2D, Quality, Graphics, Network, Editor, and Script Execution Order. In most cases, selecting one of these options opens or focuses keyboard control on the specific functionality.

Assets

Assets are representations of things that we can use in our game. Examples include audio files, art files, and 3D models. There are several types of assets that can be used in Unity.
As you can see from the following screenshot, we are able to create, import, and export assets. You will become increasingly familiar with this collection of functionality as you progress through the book and start developing your game.

GameObject

The GameObject menu provides us with the ability to create and manipulate GameObjects. In Unity, GameObjects are things we use in our game, such as lights, cameras, 3D objects, trees, characters, cars, and so much more. As you can see here, we can create an empty GameObject as well as an empty child GameObject. We will have extensive hands-on dealings with the GameObject menu items throughout this book. At this point, it is important that you know this is where you go to create GameObjects as well as perform some manipulations on them.

Component

We know that GameObjects are just things. They only become meaningful when we add components to them. Components are an important concept in Unity, and we will be working with them a lot as we progress with our game's development. It is the components that implement functionality for our GameObjects. The following screenshot shows the various categories of components. This is one method for creating components in Unity.

Window

The Window menu option provides access to a lot of extra features. As you can see here, there is a Minimize option that will minimize the main Unity editor window, and a Zoom option that toggles between full screen and zoomed view. The Layouts option provides access to various editor layouts, and lets you save or delete a layout.

The following list gives a brief description of the remaining options available via the Window menu item. You will gain hands-on experience with these windows as you progress through this book:

- Services: Access to integrated services: Ads, Analytics, Cloud Build, Collaborate, Performance Reporting, In-App Purchasing, and Multiplayer.
- Scene: Brings focus to the Scene view. Opens the window if not already open. Additional details are provided later in this chapter.
- Game: Brings focus to the Game view. Opens the window if not already open. Additional details are provided later in this chapter.
- Inspector: Brings focus to the Inspector window. Opens the window if not already open. Additional details are provided later in this chapter.
- Hierarchy: Brings focus to the Hierarchy window. Opens the window if not already open. Additional details are provided later in this chapter.
- Project: Brings focus to the Project window. Opens the window if not already open. Additional details are provided later in this chapter.
- Animation: Brings focus to the Animation window. Opens the window if not already open.
- Profiler: Brings focus to the Profiler window. Opens the window if not already open.
- Audio Mixer: Brings focus to the Audio Mixer window. Opens the window if not already open.
- Asset Store: Brings focus to the Asset Store window. Opens the window if not already open.
- Version Control: Unity provides functionality for most popular version control systems.
- Collab History: If you are using an integrated collaboration tool, you can access the history of changes to your project here.
- Animator: Brings focus to the Animator window. Opens the window if not already open.
- Animator Parameter: Brings focus to the Animator Parameter window. Opens the window if not already open.
- Sprite Packer: Brings focus to the Sprite Packer window. Opens the window if not already open. In order to use this feature, you will need to enable Legacy Sprite Packing in Project Settings.
- Experimental: Brings focus to the Experimental window.
Opens the window if not already open. By default, the Look Dev experimental feature is available. Additional experimental features can be found in the Unity Asset Store.
- Test Runner: Brings focus to the Test Runner window. Opens the window if not already open. This is a tool that runs tests on your code in both edit and play modes. Builds can also be tested.
- Timeline Editor: Brings focus to the Timeline Editor window. Opens the window if not already open. This is a contextual menu item.
- Lighting: Access to the Lighting window and the Light Explorer window.
- Occlusion Culling: This feature allows you to select and edit how objects are drawn. With occlusion culling, only the objects within the current camera's visual range, and not obscured by other objects, are rendered.
- Frame Debugger: This feature allows you to step through a game, one frame at a time, so you can see the draw calls on a given frame.
- Navigation: Unity's navigation system allows us to implement artificial intelligence with regards to non-player character movement.
- Physics Debugger: Brings focus to the Physics Debugger window. Opens the window if not already open. Here we can toggle several physics-related components to help debug physics in our games.
- Console: Brings focus to the Console window. Opens the window if not already open. The Console window shows warnings and errors. You can also output data here during gameplay, which is a common internal testing approach.

To summarize, we have discussed the Unity 2018 interface. If you are interested in knowing more about using Unity 2018 and want to leverage its powerful features, you may refer to the book Getting Started with Unity 2018.

What's got game developers excited about Unity 2018.2?
Put your game face on! Unity 2018.1 is now available
Implementing lighting & camera effects in Unity 2018


Designing UIs in Unity: What you should know

Amarabha Banerjee
15 Jul 2018
9 min read
Imagine a watch without a watch face to indicate the time. An interface provides important information to us, such as the time, so that we can make informed decisions – for example, whether we have enough time to get ice cream before the movie starts. When it comes to games, the User Interface (UI) plays a vital role in how information is conveyed to a player during gameplay. The implementation of a UI is one of the main ways to exchange information with the player about moment-to-moment interactions and their consequences (for example, taking damage). However, UIs are not just about the exchange of information; they are also about how information is provided to a player and when. This can range from the subtle glow of a health bar as it depletes, to dirt covering the screen as you drive a high-speed vehicle through the desert. There are four main ways that UIs are provided to players within a game, which we will discuss shortly.

The purpose of this article is to prime you with the fundamentals of UIs so that you not only know how to implement them within Unity, but also how they relate to a player's experience. Toward the end, we will see how Unity handles UIs, and we will implement a UI for our first game. In fact, we will insert a scoring system as well as a Game Over screen. There will be some additional considerations, in terms of adding additional UI elements, that you can try implementing on your own. This article is a part of the book titled "Unity 2017 2D Game Development Projects" written by Lauren S. Ferro & Francesco Sapio.

Designing the user interface

Think about reading a book: is the text or images in the center of the page, where is the page number located, and are the pages numbered consecutively? Typically, such things are pretty straightforward and follow conventions. Therefore, to some extent, we begin to expect things to be the same, especially if they are located in the same place, such as page numbers or even the next button. In the context of games, players also expect the same kinds of interactions, not only with gameplay but also with other on-screen elements, such as the UI. For example, if most games show health in a rectangular bar or with hearts, then that's something that players will be looking for when they want to know whether or not they are in danger.

The design of a UI needs to consider a number of things. For example, the limitations of the platform that you are designing for, such as screen size, and the types of interaction that it can afford (does it use touch input or a mouse pointer?). Physiological reactions that the interface might give to the player need to be considered, since the player will be the final consumer. In fact, another thing to keep in mind is that some people read from right to left in their native languages, and the UI should reflect this as well.

Players or users of applications are used to certain conventions and formats. For example, a house icon usually indicates home or the main screen, an email icon usually indicates contact, and an arrow pointing to the right usually indicates that it will continue to the next item in the list or the next question, and so on. Therefore, to improve ease of use and navigation, it is ideal to stick to these, or at least to keep them in mind during the design process. In addition to this, how the user navigates through the application is important.
If there is only one way to get from the home screen to an option, and it's via a lot of screens, the whole experience is going to be tiresome. Therefore, make sure that you create navigation maps early on to determine the route for each part of the experience. If a user has to navigate through six screens before they can reach a certain page, then they won't be doing it for very long!

In saying all of this, don't let the design overtake the practicality of the user's experience. For example, you may have a beautiful UI, but if it makes it really hard to play the game or causes too much confusion, then it is pretty much useless. Particularly during fast-paced gameplay, you don't want the player to have to sift through 20 different on-screen elements to find what they are looking for. You want mastery of the level to be focused on the gameplay rather than on understanding the UI.

Another way to limit the number of UI elements presented to the player (at any one time) is to have sliding windows or pop-up windows that hold other UI elements. For example, if your player has the option to unlock many different types of ability but can only use one or two of them at any single moment during gameplay, there is no point in displaying them all. Therefore, having a UI element for them to click that then displays all of the other abilities, which they can swap for the existing ones, is one way to minimize the UI design. Of course, you don't want to have multiple pop-up windows, otherwise it becomes a quest in itself to change in-game settings.

Programming the user interface

As we have seen in the previous section, designing the UI can be tough and requires experience, especially if you take into consideration all the elements you should, such as the psychology of your audience. However, this is just halfway through. In fact, designing is one thing; making it work is another. Usually, in large teams, there are artists who design the UI and programmers who implement it, based on the artists' graphics. Is UI programming that different? Well, the answer is no – programming is still programming; however, it's quite an interesting branch of the field.

If you are building your game engine from scratch, implementing an entire system that handles input is not something you can create with just a couple of hours of work. Catching all the events that the player performs in the game and in the UI is not easy to implement, and requires a lot of practice. Luckily, in the context of Unity, most of this backend for UIs is already done. Unity also provides a solid framework on the frontend for UIs. This framework includes different components that can be easily used without knowing anything about programming. But if we are really interested in unlocking the potential of the Unity framework for UIs, we need to both understand it and program within it.

Even with a solid framework such as the one in Unity, UI programming still needs to take many factors into consideration – enough to justify a specific role for it in large teams. Achieving exactly what designers have in mind, in a way that doesn't impact the performance of the game too much, is most of the job of a UI programmer (at least when using Unity).

Four types of UI

Before moving on, I just want to point out a technical term about UIs, since it also appears in the official documentation of Unity. Some UIs are not fixed on the screen but actually have a physical space within the game environment.
In saying this, the four types of interfaces are diegetic, non-diegetic, meta, and spatial. Each of these has its own specific use and effect when it comes to the player's experience, and some are implicit (for example, static graphics) while others are explicit (blood and dirt on the screen). However, these types can be intermixed to create some interesting interfaces and player experiences. For Angel Cakes, we will implement a simple non-diegetic UI, which will show all of the information the player needs to play the game.

Diegetic

Diegetic UIs differ from non-diegetic UIs because they exist in the game world instead of being on top of it and/or completely removed from the game's fiction. Diegetic UIs within the game world can be seen and heard by other players. Some examples of diegetic UI include the screens on computers, ticking clocks, remaining ammunition, and countdowns. To illustrate this, if you have a look at the following image from the Tribes Ascend game, you can see the amount of ammunition remaining.

Non-diegetic

Non-diegetic interfaces are ones that are rendered outside of the game world and are only visible to the player. They are your typical game UIs that overlay the game and are completely removed from the game's fiction. Some common uses of non-diegetic UIs are representing health and mana via a colored bar. Non-diegetic UIs are normally represented in 2D, as in the following screenshot of Star Trek Online.

Spatial

Spatial UI elements are physically presented in the game's space. These types of UIs may or may not be visible to the other players within the game space. This is something that is particularly featured in Virtual Reality (VR) experiences. Spatial UIs are effective when you want to guide players through a level or to indicate points of interest. The following example is from Army of Two. On the ground, you can see arrows directing the player where to go next. You can find out more about implementing spatial UIs in Unity in the official documentation.

Meta

Lastly, meta UIs can exist in the game world but aren't necessarily visualized the way spatial UIs would be. This means that they may not be represented within the 3D space. In most cases, meta UIs represent an effect of various actions, such as getting shot or requiring more oxygen. As you can see in the following image of Metro 2033, when the player is in an area that is not suitable for them, the view through the mask begins to get hazy. When they get shot or engage in combat, their mask also receives damage. You can see this through the cracks that appear on the edges of the mask.

To summarize, we saw the importance of UI in game development and the different types of UI available. To know more, check out the book Unity 2017 2D Game Development Projects written by Lauren S. Ferro & Francesco Sapio.

Google Cloud collaborates with Unity 3D; a connected gaming experience is here!
Working with Unity Variables to script powerful Unity 2017 games
How to use arrays, lists, and dictionaries in Unity for 3D game development

AI for game developers: 7 ways AI can take your game to the next level

Kunal Chaudhari
25 May 2018
21 min read
Artificial Intelligence (AI) is a rich and complex topic. At first glance, it can seem intimidating. Its uses are diverse, ranging from robotics to statistics and to (more relevantly for us) entertainment – more specifically, video games. In this article, we will get to know the fundamentals of Artificial Intelligence and why it is important for game developers. We also cover a quick background on AI in academia, traditional domains, and game-specific applications. We will look at the following:

- How the application and implementation of AI in games differs from other domains
- Special requirements for AI in games
- Basic AI patterns used in games

This article is an excerpt from a book written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe titled Unity 2017 Game AI Programming - Third Edition. This book will help you create fun and unbelievable AI entities in your games with A*, fuzzy logic, and NavMesh in Unity 2017.

Leveling up your game with AI

AI in games dates back all the way to the earliest games, even as far back as Namco's arcade hit Pac-Man. The AI was rudimentary at best, but even in Pac-Man, each of the enemies – Blinky, Pinky, Inky, and Clyde – had unique behaviors that challenged the player in different ways. Learning those behaviors and reacting to them adds a huge amount of depth to the game and keeps players coming back, even over 30 years after its release.

It's the job of a good game designer to make the game challenging enough to be engaging, but not so difficult that a player can never win. To this end, AI is a fantastic tool that can help abstract the patterns that entities in games follow to make them seem more organic, alive, and real. Much like an animator through each frame or an artist through their brush, a designer or programmer can breathe life into their creations via clever use of the AI techniques covered in this article.

The role of AI in games is to make games fun by providing challenging entities to compete with, and interesting non-player characters (NPCs) that behave realistically inside the game world. The objective here is not to replicate the whole thought process of humans or animals, but merely to sell the illusion of life and make NPCs seem intelligent by having them react to the changing situations inside the game world in a way that makes sense to the player.

Technology allows us to design and create intricate patterns and behaviors, but we're not yet at the point where AI in games even begins to resemble true human behavior. While smaller, more powerful chips, buckets of memory, and even distributed computing have given programmers a much higher computational ceiling to dedicate to AI, at the end of the day, resources are still shared between other operations such as graphics rendering, physics simulation, audio processing, animation, and others, all in real time. All these systems have to play nice with each other to achieve a steady frame rate throughout the game. Like all the other disciplines in game development, optimizing AI calculations remains a huge challenge for AI developers.

Using AI in Unity

In this section, we'll walk you through the AI techniques being used in different types of games. Unity is a flexible engine that provides a number of approaches to implement AI patterns. Some are ready to go out of the box, so to speak, while others we'll have to build from scratch. We'll focus on implementing the most essential AI patterns within Unity so that you can get your game's AI entities up and running quickly.
Learning and implementing the techniques within this article will serve as a fundamental first step into the vast world of AI. Many of the concepts we will cover, such as pathfinding and Navigation Meshes, are interconnected and build on top of one another. For this reason, it's important to get the fundamentals right before digging into the high-level APIs that Unity provides.

Defining the agent

Before jumping into our first technique, we should be clear on a key term – the agent. An agent, as it relates to AI, is our artificially intelligent entity. When we talk about our AI, we're not specifically referring to a character, but to an entity that displays complex behavior patterns, which we can refer to as non-random or, in other words, intelligent. This entity can be a character, creature, vehicle, or anything else. The agent is the autonomous entity, executing the patterns and behaviors we'll be covering. With that out of the way, let's jump in.

Finite State Machines

Finite State Machines (FSMs) can be considered one of the simplest AI models, and they are commonly used in games. A state machine basically consists of a set number of states that are connected in a graph by the transitions between them. A game entity starts with an initial state and then looks out for the events and rules that will trigger a transition to another state. A game entity can only be in exactly one state at any given time. For example, let's take a look at an AI guard character in a typical shooting game. Its states could be as simple as patrolling, chasing, and shooting.

There are basically four components in a simple FSM:

- States: This component defines a set of distinct states that a game entity or an NPC can choose from (patrol, chase, and shoot)
- Transitions: This component defines relations between different states
- Rules: This component is used to trigger a state transition (player in sight, close enough to attack, and lost/killed player)
- Events: This is the component that will trigger the rules to be checked (the guard's visible area, distance to the player, and so on)

FSMs are the commonly used, go-to AI pattern in game development because they are relatively easy to implement, visualize, and understand. Using simple if/else statements or switch statements, we can easily implement an FSM, though it can get messy as we start to have more states and more transitions.
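As a rough illustration of the guard example just described – written in Python for brevity rather than the C# the book uses inside Unity, and with the Guard class, thresholds, and update loop all hypothetical – an FSM can be as small as this:

```python
# Minimal sketch of the patrol/chase/shoot guard FSM described above.
PATROL, CHASE, SHOOT = "patrol", "chase", "shoot"

class Guard:
    def __init__(self, sight_range=10.0, attack_range=2.0):
        self.state = PATROL                 # the guard is in exactly one state
        self.sight_range = sight_range
        self.attack_range = attack_range

    def update(self, distance_to_player, player_visible):
        # Events (distance, visibility) are checked against rules to
        # trigger transitions between states.
        if self.state == PATROL:
            if player_visible and distance_to_player <= self.sight_range:
                self.state = CHASE
        elif self.state == CHASE:
            if distance_to_player <= self.attack_range:
                self.state = SHOOT
            elif not player_visible:
                self.state = PATROL         # lost the player
        elif self.state == SHOOT:
            if distance_to_player > self.attack_range:
                self.state = CHASE
        return self.state

guard = Guard()
for dist, visible in [(20, False), (8, True), (1.5, True), (6, True), (15, False)]:
    print(guard.update(dist, visible))      # patrol, chase, shoot, chase, patrol
```

Even this tiny version shows why FSMs stay readable with a handful of states and why the chain of conditionals grows unwieldy as states and transitions multiply.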
Seeing the world through our agent's eyes

In order to make our AI convincing, our agent needs to be able to respond to the events around it: the environment, the player, and even other agents. Much like real living organisms, our agent can rely on sight, sound, and other "physical" stimuli. However, we have the advantage of being able to access much more data within our game than a real organism can from its surroundings, such as the player's location (regardless of whether or not they are in the vicinity), their inventory, the location of items around the world, and any variable you chose to expose to that agent in your code. In the preceding diagram, our agent's field of vision is represented by the cone in front of it, and its hearing range is represented by the grey circle surrounding it.

Vision, sound, and other senses can be thought of, at their most essential level, as data. Vision is just light particles, sound is just vibrations, and so on. While we don't need to replicate the complexity of a constant stream of light particles bouncing around and entering our agent's eyes, we can still model the data in a way that produces believable results. As you might imagine, we can similarly model other sensory systems – not just the ones used by biological beings, such as sight, sound, or smell, but even digital and mechanical systems that can be used by enemy robots or towers, for example sonar and radar.

If you've ever played Metal Gear Solid, then you've definitely seen these concepts in action: an enemy's field of vision is denoted on the player's mini map as a cone-shaped field of view. Enter the cone and an exclamation mark appears over the enemy's head, followed by an unmistakable chime, letting the player know that they've been spotted.

Path following and steering

Sometimes, we want our AI characters to roam around in the game world, following a roughly-guided or thoroughly-defined path. For example, in a racing game, the AI opponents need to navigate the road. In an RTS game, your units need to be able to get from wherever they are to the location you tell them, navigating through the terrain and around each other. To appear intelligent, our agents need to be able to determine where they are going and, if they can reach that point, they should be able to route the most efficient path and modify that path if an obstacle appears as they navigate. Even path following and steering can be represented via a finite state machine. You will then see how these systems begin to tie in.

In this article, we will cover the primary methods of pathfinding and navigation, starting with our own implementation of an A* Pathfinding System, followed by an overview of Unity's built-in Navigation Mesh (NavMesh) feature.

Dijkstra's algorithm

While perhaps not quite as popular as A* Pathfinding (which we will cover next), it's crucial to understand Dijkstra's algorithm, as it lays the foundation for other similar approaches to finding the shortest path between two nodes in a graph. The algorithm was published by Edsger W. Dijkstra in 1959. Dijkstra was a computer scientist, and though he may be best known for his namesake algorithm, he also had a hand in developing other important computing concepts, such as the semaphore. It might be fair to say Dijkstra probably didn't have StarCraft in mind when developing his algorithm, but the concepts translate beautifully to game AI programming and remain relevant to this day.

So what does the algorithm actually do? In a nutshell, it computes the shortest path between two nodes along a graph by assigning a value to each connected node based on distance. The starting node is given a value of zero. As the algorithm traverses through a list of connected nodes that have not been visited, it calculates the distance to each and assigns that value to the node. If the node had already been assigned a value in a prior iteration of the loop, it keeps the smallest value. The algorithm then selects the connected node with the smallest distance value and marks the previously selected node as visited, so it will no longer be considered. The process repeats until all nodes have been visited. With this information, you can then calculate the shortest path.

Need help wrapping your head around Dijkstra's algorithm? The University of San Francisco has created a handy visualization tool: https://www.cs.usfca.edu/~galles/visualization/Dijkstra.html.
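As a concrete, minimal illustration of the procedure just described (this is not the book's own implementation, and the example graph of waypoints and edge weights is made up for the demonstration), here is a short Python version using a priority queue:

```python
# Dijkstra's algorithm: assign each node the smallest known distance from
# the start, always expanding the closest unvisited node next.
import heapq

def dijkstra(graph, start):
    """Return the shortest distance from start to every node in the graph."""
    dist = {node: float("inf") for node in graph}
    dist[start] = 0                      # the starting node gets a value of zero
    queue = [(0, start)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)   # closest unvisited node so far
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbor]:    # keep only the smallest value seen
                dist[neighbor] = new_dist
                heapq.heappush(queue, (new_dist, neighbor))
    return dist

level = {
    "A": [("B", 4), ("C", 1)],
    "B": [("A", 4), ("D", 1)],
    "C": [("A", 1), ("B", 2), ("D", 5)],
    "D": [("B", 1), ("C", 5)],
}
print(dijkstra(level, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Recording which neighbor produced each node's smallest value would let you walk the result backwards to recover the actual path, which is how a game agent would use it.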
While Dijkstra's algorithm is perfectly capable, variants of it have been developed that can solve the problem more efficiently. A* is one such variant, and it's one of the most widely used pathfinding algorithms in games due to its speed advantage over Dijkstra's original version.

Using A* Pathfinding

There are many games in which you can find monsters or enemies that follow the player, or move to a particular point while avoiding obstacles. For example, let's take a typical RTS game. You can select a group of units and click on a location you want them to move to, or click on the enemy units to attack them. Your units then need to find a way to reach the goal without colliding with the obstacles, or avoiding them as intelligently as possible. The enemy units also need to be able to do the same. Obstacles can differ from unit to unit; for example, an air force unit might be able to pass over a mountain, while ground or artillery units need to find a way around it.

A* (pronounced "A star") is a pathfinding algorithm that is widely used in games because of its performance and accuracy. Let's take a look at an example to see how it works. Let's say we want our unit to move from point A to point B, but there's a wall in the way and it can't go straight towards the target. So, it needs to find a way to get to point B while avoiding the wall. The following figure illustrates this scenario:

In order to find a path from point A to point B, we need to know more about the map, such as the position of the obstacles. To do this, we can split our whole map into small tiles, representing the whole map in a grid format. The tiles can also be other shapes, such as hexagons or triangles. Representing the map as a grid simplifies the search area, which is an important step in pathfinding. We can now reference our map in a small 2D array:

Once our map is represented by a set of tiles, we can start searching for the best path to reach the target. We do this by calculating the movement score of each tile adjacent to the starting tile (considering only tiles that are not occupied by an obstacle) and then choosing the tile with the lowest cost. A compact code sketch of this search appears after the IDA* note below.

A* Pathfinding calculates the cost to move across the tiles

A* is an important pattern to know when it comes to pathfinding, but Unity also gives us a couple of features right out of the box, such as automatic Navigation Mesh generation and the NavMesh agent. These features make implementing pathfinding in your games a walk in the park (no pun intended). Whether you choose to implement your own A* solution or simply go with Unity's built-in NavMesh feature will depend on your project's needs. Each option has its own pros and cons, but ultimately, knowing about both will allow you to make the best possible choice. With that said, let's have a quick look at NavMesh.

IDA* Pathfinding

IDA* stands for Iterative Deepening A*. It is a depth-first variant of A* with a lower overall memory cost, but it is generally considered costlier in terms of time. Whereas A* keeps multiple nodes in memory at a time, IDA* does not, since it is a depth-first search. For this reason, IDA* may visit the same node multiple times, leading to a higher time cost. Either solution will give you the shortest path between two nodes. In instances where the graph is too big for A* in terms of memory, IDA* is preferable, but it is generally accepted that A* is good enough for most use cases in games.
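The following is an illustrative, self-contained grid A* sketch in C#, not the book's implementation. It assumes a bool[,] walkable array in which true marks tiles free of obstacles, a uniform movement cost of 1 per step on a 4-connected grid, and the Manhattan distance as the heuristic; a production version would use a proper priority queue for the open list.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative grid-based A*: f(tile) = g (cost so far) + h (heuristic to goal).
public static class AStarGrid
{
    public static List<Vector2Int> FindPath(bool[,] walkable, Vector2Int start, Vector2Int goal)
    {
        var open = new List<Vector2Int> { start };
        var cameFrom = new Dictionary<Vector2Int, Vector2Int>();
        var gScore = new Dictionary<Vector2Int, int> { { start, 0 } };

        while (open.Count > 0)
        {
            // Pick the open tile with the lowest f = g + h (a priority queue would be faster).
            Vector2Int current = open[0];
            foreach (var n in open)
                if (gScore[n] + Heuristic(n, goal) < gScore[current] + Heuristic(current, goal))
                    current = n;

            if (current == goal) return Reconstruct(cameFrom, current);
            open.Remove(current);

            foreach (var next in Neighbours(current, walkable))
            {
                int tentative = gScore[current] + 1; // uniform movement cost per tile
                if (!gScore.ContainsKey(next) || tentative < gScore[next])
                {
                    gScore[next] = tentative;
                    cameFrom[next] = current;
                    if (!open.Contains(next)) open.Add(next);
                }
            }
        }
        return null; // no path exists
    }

    // Manhattan distance works well as a heuristic on a 4-connected grid.
    static int Heuristic(Vector2Int a, Vector2Int b)
    {
        return Mathf.Abs(a.x - b.x) + Mathf.Abs(a.y - b.y);
    }

    static IEnumerable<Vector2Int> Neighbours(Vector2Int tile, bool[,] walkable)
    {
        var offsets = new[] { Vector2Int.up, Vector2Int.down, Vector2Int.left, Vector2Int.right };
        foreach (var offset in offsets)
        {
            Vector2Int n = tile + offset;
            if (n.x >= 0 && n.y >= 0 &&
                n.x < walkable.GetLength(0) && n.y < walkable.GetLength(1) &&
                walkable[n.x, n.y])
                yield return n;
        }
    }

    static List<Vector2Int> Reconstruct(Dictionary<Vector2Int, Vector2Int> cameFrom, Vector2Int current)
    {
        var path = new List<Vector2Int> { current };
        Vector2Int prev;
        while (cameFrom.TryGetValue(current, out prev))
        {
            current = prev;
            path.Insert(0, current);
        }
        return path;
    }
}
```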
Using Navigation Mesh

Now that we've taken a brief look at A*, let's look at some scenarios where a NavMesh is a more fitting approach than a grid. One thing you might notice is that using a simple grid with A* requires quite a number of computations to get a path that is the shortest to the target while avoiding the obstacles. So, to make pathfinding cheaper and easier for AI characters, people came up with the idea of using waypoints as a guide to move AI characters from the start point to the target point. Let's say we want to move our AI character from point A to point B and we've set up three waypoints, as shown in the following figure:

All we have to do now is pick the nearest waypoint and then follow its connected nodes leading to the target waypoint. Most games use waypoints for pathfinding because they are simple and quite effective in terms of computational resources. However, they do have some issues. What if we want to update the obstacles in our map? We'll also have to place the waypoints for the updated map again, as shown in the following figure:

Having to manually alter waypoints every time the layout of your level changes can be cumbersome and very time-consuming. In addition, following each node to the target can mean that the AI character moves in a series of straight lines from node to node. Look at the preceding figures; it's quite likely that the AI character will collide with the wall where the path runs close to it. If that happens, our AI will keep trying to go through the wall to reach the next target, but it won't be able to and will get stuck there. Even though we can smooth out the path by transforming it into a spline and making some adjustments to avoid such obstacles, the problem is that the waypoints don't give us any information about the environment, other than the fact that two nodes are connected by the spline. What if our smoothed and adjusted path passes the edge of a cliff or a bridge? The new path might not be a safe path anymore. So, for our AI entities to be able to effectively traverse the whole level, we're going to need a tremendous number of waypoints, which will be really hard to implement and manage.

This is a situation where a NavMesh makes the most sense. A NavMesh is another graph structure that can be used to represent our world, similar to the way we did with our square tile-based grid or waypoint graph, as shown in the following diagram:

A Navigation Mesh uses convex polygons to represent the areas of the map that an AI entity can travel to. The most important benefit of using a Navigation Mesh is that it gives a lot more information about the environment than a waypoint system. Now we can adjust our path safely because we know the safe regions in which our AI entities can travel. Another advantage of using a Navigation Mesh is that we can use the same mesh for different types of AI entities. Different AI entities can have different properties, such as size, speed, and movement abilities. A set of waypoints tailored for a human character may not work nicely for flying creatures or AI-controlled vehicles; these might need different sets of waypoints. Using a Navigation Mesh can save a lot of time in such cases.

Generating a Navigation Mesh programmatically based on a scene can be a somewhat complicated process. Fortunately, Unity 3.5 introduced a built-in Navigation Mesh generator as a Pro-only feature, and it has been included for free since the Unity 5 Personal edition.
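Once a NavMesh has been baked for a scene, getting an agent to navigate it takes very little code. The following is a minimal example using Unity's NavMeshAgent component; it assumes the scene has a baked NavMesh, that this GameObject has a NavMeshAgent attached, and that the target Transform is assigned in the Inspector.

```csharp
using UnityEngine;
using UnityEngine.AI;

// A minimal example of driving a character with Unity's built-in NavMesh system.
[RequireComponent(typeof(NavMeshAgent))]
public class MoveToTarget : MonoBehaviour
{
    public Transform target; // assign in the Inspector

    private NavMeshAgent agent;

    private void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    private void Update()
    {
        if (target != null)
        {
            // The agent plans a path across the baked NavMesh and steers along it.
            agent.SetDestination(target.position);
        }
    }
}
```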
Unity's implementation provides a lot of additional functionality out of the box: not just the generation of the NavMesh itself, but also agent collision and pathfinding on the generated graph (via A*, of course).

Flocking and crowd dynamics

In nature, we can observe what we refer to as flocking behavior in several species. Flocking simply refers to a group moving in unison. Schools of fish, flocks of sheep, and cicada swarms are fantastic examples of this behavior. Modeling this behavior by manual means, such as animation, can be very time-consuming and is not very dynamic. Similarly, crowds of humans, be it on foot or in vehicles, can be modeled by representing the entire crowd as an entity rather than trying to model each individual as its own agent. Each individual in the group only really needs to know where the group is heading and what their nearest neighbor is up to in order to function as part of the system.

Behavior trees

The behavior tree is another pattern used to represent and control the logic behind AI agents. Behavior trees have become popular through their use in AAA games such as Halo and Spore. Previously, we briefly covered FSMs, which provide a very simple yet efficient way to define the possible behaviors of an agent, based on the different states and the transitions between them. However, FSMs are considered difficult to scale, as they can get unwieldy fairly quickly and require a fair amount of manual setup. We need to add many states and hardwire many transitions in order to support all the scenarios we want our agent to consider. So, we need a more scalable approach when dealing with larger problems. This is where behavior trees come in.

Behavior trees are a collection of nodes organized in a hierarchical order, in which nodes are connected to parents rather than states being connected to each other, resembling the branches of a tree, hence the name. The basic elements of behavior trees are task nodes, whereas states are the main elements of FSMs. There are a few different task types, such as Sequence, Selector, Parallel, and Decorator, and it can be a bit daunting to track what they all do. The best way to understand them is to look at an example. Let's break the following transitions and states down into tasks, as shown in the following figure:

Let's look at a Selector task for this behavior tree. Selector tasks are represented by a circle with a question mark inside. The selector will evaluate each child in order, from left to right. First, it'll choose to attack the player; if the Attack task returns a success, the Selector task is done and will go back to the parent node, if there is one. If the Attack task fails, it'll try the Chase task. If the Chase task fails, it'll try the Patrol task. The following figure shows the basic structure of this tree concept:

Test is another kind of task found in behavior trees. The following diagram shows the use of Sequence tasks, denoted by a rectangle with an arrow inside it. The root selector may choose the first Sequence action. This Sequence action's first task is to check whether the player character is close enough to attack. If this task succeeds, it'll proceed with the next task, which is to attack the player. If the Attack task also returns successfully, the whole sequence will return a success, and the selector will be done with this behavior and will not continue with other Sequence tasks.

If the proximity check task fails, the Sequence action will not proceed to the Attack task and will return a failed status to the parent selector task. The selector will then choose the next task in the sequence, Lost or Killed Player? The following figure demonstrates this sequence:

The other two common components are Parallel tasks and Decorators. A Parallel task executes all of its child tasks at the same time, while Sequence and Selector tasks only execute their child tasks one by one. A Decorator is a type of task that has only one child. It can change the behavior of its child task, including whether to run the child task at all, how many times it should run, and so on.
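To ground these ideas, here is a minimal, illustrative behavior tree core in C#. It is not a full framework and not taken from the book; the class and method names are ours, and it only covers Selector and Sequence composites plus a generic leaf task.

```csharp
using System.Collections.Generic;

// An illustrative behavior tree core, not a complete framework.
public enum NodeStatus { Success, Failure, Running }

public abstract class BTNode
{
    public abstract NodeStatus Tick();
}

// A Selector succeeds as soon as one child succeeds (children evaluated left to right).
public class Selector : BTNode
{
    private readonly List<BTNode> children;
    public Selector(params BTNode[] nodes) { children = new List<BTNode>(nodes); }

    public override NodeStatus Tick()
    {
        foreach (var child in children)
        {
            NodeStatus status = child.Tick();
            if (status != NodeStatus.Failure) return status; // Success or Running stops evaluation
        }
        return NodeStatus.Failure; // every child failed
    }
}

// A Sequence fails as soon as one child fails; it succeeds only if all children succeed.
public class Sequence : BTNode
{
    private readonly List<BTNode> children;
    public Sequence(params BTNode[] nodes) { children = new List<BTNode>(nodes); }

    public override NodeStatus Tick()
    {
        foreach (var child in children)
        {
            NodeStatus status = child.Tick();
            if (status != NodeStatus.Success) return status; // Failure or Running stops evaluation
        }
        return NodeStatus.Success;
    }
}

// A leaf task wrapping an arbitrary piece of game logic.
public class ActionNode : BTNode
{
    private readonly System.Func<NodeStatus> action;
    public ActionNode(System.Func<NodeStatus> action) { this.action = action; }
    public override NodeStatus Tick() { return action(); }
}
```

The guard behavior discussed above could then be composed as a Selector whose first child is a Sequence of a proximity-check leaf and an Attack leaf, followed by Chase and Patrol leaves.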
Thinking with fuzzy logic

Finally, we arrive at fuzzy logic. Put simply, fuzzy logic refers to approximating outcomes as opposed to arriving at binary conclusions. We can use fuzzy logic and reasoning to add yet another layer of authenticity to our AI. Let's use a generic bad-guy soldier in a first-person shooter as our agent to illustrate this basic concept. Whether we are using a finite state machine or a behavior tree, our agent needs to make decisions. Should I move to state x, y, or z? Will this task return true or false? Without fuzzy logic, we'd look at a binary value (true or false, or 0 or 1) to determine the answers to those questions. For example, can our soldier see the player? That's a yes/no binary condition. However, if we abstract the decision-making process even further, we can make our soldier behave in much more interesting ways. Once we've determined that our soldier can see the player, the soldier can then "ask" itself whether it has enough ammo to kill the player, enough health to survive being shot at, or whether there are other allies around to assist in taking the player down. Suddenly, our AI becomes much more interesting, unpredictable, and believable. This added layer of decision making is achieved by using fuzzy logic, which, in the simplest terms, boils down to taking the seemingly arbitrary or vague terminology that our wonderfully complex brains can easily assign meaning to, such as "hot" versus "warm" or "cool" versus "cold," and converting it into a set of values that a computer can easily understand.
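As a small, hedged illustration of the idea (not the book's implementation), the sketch below grades a soldier's health into the fuzzy sets "hurt" and "healthy" instead of using a single low-health flag. The class name, the thresholds, and the retreat rule are all invented for the example.

```csharp
using UnityEngine;

// An illustrative sketch of fuzzy membership, not a full fuzzy inference system.
// Instead of a binary "low health? yes/no", we grade how strongly the current
// health belongs to the fuzzy sets "hurt" and "healthy" (values between 0 and 1).
public static class FuzzyHealth
{
    // Degree of membership in "hurt": 1 at 0 HP, falling to 0 at 50 HP.
    public static float Hurt(float health)
    {
        return Mathf.Clamp01(1f - health / 50f);
    }

    // Degree of membership in "healthy": 0 at 30 HP, rising to 1 at 80 HP.
    public static float Healthy(float health)
    {
        return Mathf.Clamp01((health - 30f) / 50f);
    }

    // A soldier at 40 HP is 0.2 "hurt" and 0.2 "healthy": the agent can weigh
    // both memberships (along with ammo, nearby allies, and so on) instead of
    // flipping on a single true/false condition.
    public static bool ShouldRetreat(float health, float ammoFraction)
    {
        float desireToRetreat = Mathf.Max(Hurt(health), 1f - ammoFraction);
        return desireToRetreat > 0.7f; // defuzzify with a simple threshold
    }
}
```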
Game AI and academic AI have different objectives. Academic AI researchers try to solve real-world problems and prove a theory without much limitation in terms of resources. Game AI focuses on building NPCs, within limited resources, that seem intelligent to the player. The objective of AI in games is to provide a challenging opponent that makes the game more fun to play.

To summarize, we learned briefly about the different AI techniques that are widely used in games, such as FSMs, sensor and input systems, flocking and crowd behaviors, and path following and steering behaviors.

If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition to explore brand-new features in Unity 2017 for easier Artificial Intelligence implementation in your games.

How to create non-player Characters (NPC) with Unity 2018
Put your game face on! Unity 2018.1 is now available
Implementing lighting & camera effects in Unity 2018


Deep Learning in games - Neural Networks set to design virtual worlds

Amey Varangaonkar
28 Mar 2018
4 min read
Games these days are closer to reality than ever. Life-like graphics, smart gameplay, and realistic machine-human interactions have led major game studios to up the ante when it comes to adopting the latest tech for developing games. In fact, not so long ago, we shared with you a few interesting ways in which Artificial Intelligence is transforming the gaming industry. The inclusion of deep learning in games has emerged as one popular way to make games smarter. Deep learning can be used to enhance the realism and excitement in games by teaching the game agents how to behave more accurately, and in a more life-like manner.

We recently came across an interesting implementation of deep learning to play FIFA 18, and we were quite impressed! Using just two neural networks and a limited amount of training, the bot that was developed managed to learn the basic rules of football (soccer). Not just that, it was also able to perform the basic movements and tasks in the game correctly. To achieve this, two neural networks were developed: a Convolutional Neural Network (CNN) to detect objects within the game, and an LSTM (Long Short-Term Memory) network to decide the movements accordingly.

The same user also managed to leverage deep learning to improve the in-game graphics of FIFA 18. Using the deepfakes algorithm, he managed to swap the in-game face of one of the players with the player's real-life face. The reason? The in-game faces, although quite realistic, could be even better and more realistic. The experiment ended up being near perfect, as the resulting face was all but perfect. How did he do it? After gathering some training data, basically images of the player scraped off Google, the user trained two autoencoders that learnt the distinction between the in-game face and the real-world face. Then, using the deepfakes algorithm, the inputs were reversed, recreating the real-world face in the game itself. The difference is quite astonishing:

Apart from improving the gameplay and the in-game character graphics, deep learning can also be used to enhance the way opponents/adversaries interact with the player in the game. If we take the example of the FIFA game mentioned before, deep learning can be used to enhance the behaviour and appearance of the in-game crowd, who can react to or cheer for their team better as per their team's performance.

How can Deep Learning benefit video games?

The following are some of the clear advantages of implementing deep learning techniques in games:

Highly accurate results can be achieved with more and more training data
Manual intervention is minimal
Game developers can focus on effective storytelling rather than on the in-game graphics

Another obvious question comes to mind at this stage, however: what are the drawbacks of implementing deep learning for games? A few come to mind immediately:

The complexity of the training models can be quite high
Images in games need to be generated in real time, which is quite a challenge
The computation time can be quite significant
The training dataset needed for achieving accurate results can be quite humongous

With advancements in technology and better, faster hardware, many of the current limitations in developing smarter games can be overcome. Fast generative models can look into the real-time generation of images, while faster graphics cards can take care of the model computation issue.
All in all, dabbling with deep learning in games seems like a punt worth taking, and one that game studios should definitely consider. What do you think? Is incorporating deep learning techniques in games a scalable idea?