
How-To Tutorials - Game Development

370 Articles

Virtually Everything for Everyone

Packt
19 Aug 2015
21 min read
This virtual reality thing calls into question, what does it mean to "be somewhere"? Before cell phones, you would call someone and it would make no sense to say, "Hey, where are you?" You know where they are, you called their house, that's where they are. So then cell phones come around and you start to hear people say, "Hello. Oh, I'm at Starbucks," because the person on the other end wouldn't necessarily know where you are, because you became un-tethered from your house for voice communications. So when I saw a VR demo, I had this vision of coming home and my wife has got the kids settled down, she has a couple minutes to herself, and she's on the couch wearing goggles on her face. I come over and tap her on the shoulder, and I'm like, "Hey, where are you?" It's super weird. The person's sitting right in front of you, but you don't know where they are.

- Jonathan Stark, mobile expert and podcaster

In this article by Jonathan Linowes, author of the book Unity Virtual Reality Projects, we will define virtual reality and illustrate how it can be applied not only to games but also to many other areas of interest and productivity.

Welcome to virtual reality! In this book, we will explore what it takes to create virtual reality experiences on our own. We will take a walk through a series of hands-on projects, step-by-step tutorials, and in-depth discussions using the Unity 5 3D game engine and other free or open source software. Though virtual reality technology is rapidly advancing, we'll try to capture the basic principles and techniques that you can use to make your VR games and applications feel immersive and comfortable.

This article discusses the following topics:

- What is virtual reality?
- Differences between virtual reality (VR) and augmented reality (AR)
- How VR applications may differ from VR games
- Types of VR experiences
- Technical skills that are necessary for the development of VR

What is virtual reality to you?

Today, we are witnesses to burgeoning consumer virtual reality, an exciting technology that promises to transform in a fundamental way how we interact with information, our friends, and the world at large.

What is virtual reality? In general, VR is the computer-generated simulation of a 3D environment, which seems very real to the person experiencing it, using special electronic equipment. The objective is to achieve a strong sense of being present in the virtual environment. Today's consumer tech VR involves wearing a head-mounted display (such as goggles) to view stereoscopic 3D scenes. You can look around by moving your head, and walk around by using hand controls or motion sensors. You are engaged in a fully immersive experience. It's as if you're really there in some other virtual world.

Virtual reality is not new. It's been here for decades, albeit hidden away in academic research labs and high-end industrial and military facilities. It was big, clunky, and expensive. Ivan Sutherland invented the first head-mounted display in 1966; it was tethered to the ceiling! In the past, several failed attempts have been made to bring consumer-level virtual reality products to the market. In 2012, Palmer Luckey, the founder of Oculus VR LLC, gave a demonstration of a makeshift head-mounted VR display to John Carmack, the famed developer of the Doom, Wolfenstein 3D, and Quake classic video games.
Together, they ran a successful Kickstarter campaign and released a developer kit called Oculus Rift Development Kit 1 (DK1) to an enthusiastic community. This caught the attention of investors as well as Mark Zuckerberg, and in March 2014, Facebook bought the company for $2 billion. With no product, no customers, and an infinite promise, the money and attention that it attracted has helped fuel a new category of consumer products. Others have followed suit, including Google, Sony, Samsung, and Steam. New innovations and devices that enhance the VR experience continue to be introduced.

Most of the basic research has already been done and the technology is now affordable, thanks in large part to the mass adoption of devices that work on mobile technology. There is a huge community of developers with experience in building 3D games and mobile apps. Creative content producers are joining in and the media is talking it up. At last, virtual reality is real!

Say what? Virtual reality is real? Ha! If it's virtual, how can it be... Oh, never mind.

Eventually, we will get past the focus on the emerging hardware devices and recognize that content is king. The current generation of 3D development software (commercial, free, and open source) that has spawned a plethora of indie, or independent, game developers can also be used to build non-game VR applications. Though VR finds most of its enthusiasts in the gaming community, the potential applications reach well beyond that. Any business that presently uses 3D modeling and computer graphics will be more effective if it uses VR technology. The sense of immersive presence that is afforded by VR can enhance all common online experiences today, including engineering, social networking, shopping, marketing, entertainment, and business development. In the near future, viewing 3D websites with a VR headset may be as common as visiting ordinary flat websites today.

Types of head-mounted displays

Presently, there are two basic categories of head-mounted displays for virtual reality: desktop VR and mobile VR.

Desktop VR

With desktop VR (and console VR), your headset is a peripheral to a more powerful computer that processes the heavy graphics. The computer may be a Windows PC, Mac, Linux, or a game console. Most likely, the headset is connected to the computer with wires. The game runs on the remote machine and the head-mounted display (HMD) is a peripheral display device with a motion sensing input. The term desktop is an unfortunate misnomer, since it's just as likely to be stationed in a living room or a den.

The Oculus Rift (https://www.oculus.com/) is an example of a device where the goggles have an integrated display and sensors. The games run on a separate PC. Other desktop headsets include the HTC/Valve Vive and Sony's Project Morpheus for PlayStation. The Oculus Rift is tethered to a desktop computer via video and USB cables, and generally, the more graphics processing unit (GPU) power, the better. However, for the purpose of this book, we won't have any heavy rendering in our projects, and you can get by even with a laptop (provided it has two USB ports and one HDMI port available).

Mobile VR

Mobile VR, exemplified by Google Cardboard (http://www.google.com/get/cardboard/), is a simple housing for two lenses and a slot for your mobile phone. The phone's display is used to show the twin stereographic views. It has rotational head tracking, but no positional tracking.
Cardboard also provides the user with the ability to click or tap its side to make selections in a game. The complexity of the imagery is limited because it uses your phone's processor for rendering the views on the phone's display screen. Other mobile VR headsets include the Samsung Gear VR and the Zeiss VR One, among others.

Google provides the open source specifications, and other manufacturers have developed ready-made models for purchase, with prices as low as $15. If you want to find one, just Google it! There are versions of Cardboard-compatible headsets available for all sizes of phones, both Android and iOS. Although the quality of the VR experience with a Cardboard device is limited (some even say that it is inadequate) and it's probably a "starter" device that will just be quaint in a couple of years, Cardboard is fine for the small projects in this book, and we'll revisit its limitations from time to time.

The difference between virtual reality and augmented reality

It's probably worthwhile clarifying what virtual reality is not. A sister technology to VR is augmented reality (AR), which superimposes computer-generated imagery (CGI) over views of the real world. Limited uses of AR can be found on smartphones, tablets, handheld gaming systems such as the Nintendo 3DS, and even in some science museum exhibits, which overlay the CGI on top of live video from a camera.

The latest innovations in AR are the AR headsets, such as Microsoft HoloLens and Magic Leap, which show the computer graphics directly in your field of view; the graphics are not mixed into a video image. If VR headsets are like closed goggles, AR headsets are like translucent sunglasses that employ a technology called light fields to combine real-world light rays with CGI. A challenge for AR is ensuring that the CGI is consistently aligned with and mapped onto the objects in the real-world space, and eliminating latency while moving about so that the CGI and the real-world objects stay aligned.

AR holds as much promise as VR for future applications, but it's different. Though AR intends to engage the user within their current surroundings, virtual reality is fully immersive. In AR, you may open your hand and see a log cabin resting in your palm, but in VR, you're transported directly inside the log cabin and you can walk around inside it. We can also expect to see hybrid devices that somehow either combine VR and AR, or let you switch between modes.

Applications versus games

Consumer-level virtual reality starts with gaming. Video gamers are already accustomed to being engaged in highly interactive, hyper-realistic 3D environments. VR just ups the ante. Gamers are early adopters of high-end graphics technology. Mass production of gaming consoles and PC-based components in the tens of millions, and competition between vendors, leads to lower prices and higher performance. Game developers follow suit, often pushing the state of the art, squeezing every ounce of performance out of hardware and software. Gamers are a very demanding bunch, and the market has consistently stepped up to keep them satisfied. It's no surprise that many, if not most, of the current wave of VR hardware and software companies are first targeting the video gaming industry. A majority of the demos and downloads that are available on Oculus Share (https://share.oculus.com/) and on Google Play for the Cardboard app (https://play.google.com/store/search?q=cardboard&c=apps) are games.
Gamers are the most enthusiastic VR advocates and seriously appreciate its potential. Game developers know that the core of a game is the game mechanics, or the rules, which are largely independent of the skin, or the thematic topic of the game. Gameplay mechanics can include puzzles, chance, strategy, timing, or muscle memory (twitch). VR games can have the same mechanic elements but might need to be adjusted for the virtual environment. For example, a first-person character walking in a console video game typically moves about 1.5 times faster than an actual walking pace in real life. If this weren't the case, the player would feel that the game is too slow and boring. Put the same character in a VR scene and the player will feel that it is too fast; it could likely make the player feel nauseous. In VR, you will want your characters to walk at a normal, earthly pace. Not all video games will map well to VR; it may not be fun to be in the middle of a war zone when you're actually there.

That said, virtual reality is also being applied in areas other than gaming. Though games will remain important, non-gaming apps will eventually overshadow them. These applications may differ from games in a number of ways, the most significant being much less emphasis on game mechanics and more emphasis on either the experience itself or application-specific goals. Of course, this doesn't preclude some game mechanics. For example, an application may be specifically designed to train the user in a specific skill. Sometimes, the gamification of a business or personal application makes it more fun and effective in driving the desired behavior through competition. In general, non-gaming VR applications are less about winning and more about the experience itself. Here are a few examples of the kinds of non-gaming applications that people are working on:

Travel and tourism: Visit faraway places without leaving your home. Visit art museums in Paris, New York, and Tokyo in one afternoon. Take a walk on Mars. You can even enjoy Holi, the spring festival of colors, in India while sitting in your wintery cabin in Vermont.

Mechanical engineering and industrial design: Computer-aided design software such as AutoCAD and SOLIDWORKS pioneered three-dimensional modeling, simulation, and visualization. With VR, engineers and designers can directly experience the end product hands-on before it's actually built and play with what-if scenarios at a very low cost. Consider iterating a new automobile design. How does it look? How does it perform? How does it appear from the driver's seat?

Architecture and civil engineering: Architects and engineers have always constructed scale models of their designs, if only to pitch the ideas to clients and investors or, more importantly, to validate the many assumptions about the design. Presently, modeling and rendering software is commonly used to build virtual models from architectural plans. With VR, the conversation with stakeholders can be so much more confident. Other personnel, such as interior designers and HVAC and electrical engineers, can be brought into the process sooner.

Real estate: Real estate agents have been quick adopters of the Internet and visualization technology to attract buyers and close sales. Real estate search websites were some of the first successful uses of the Web. Online panoramic video walk-throughs of for-sale properties are commonplace today. With VR, I can be in New York and find a place to live in Los Angeles.
This will become even easier with mobile 3D-sensing technologies such as Google's Project Tango (https://www.google.com/atap/projecttango), which performs a 3D scan of a room using a smartphone and automatically builds a model of the space.

Medicine: The potential of VR for health and medicine may literally be a matter of life and death. Every day, hospitals use MRI and other scanning devices to produce models of our bones and organs that are used for medical diagnosis and possibly pre-operative planning. Using VR to enhance visualization and measurement will provide a more intuitive analysis. Virtual reality is also being used for the simulation of surgery to train medical students.

Mental health: Virtual reality experiences have been shown to be effective in a therapeutic context for the treatment of post-traumatic stress disorder (PTSD) in what's called exposure therapy, where the patient, guided by a trained therapist, confronts their traumatic memories through the retelling of the experience. Similarly, VR is being used to treat arachnophobia (fear of spiders) and the fear of flying.

Education: The educational opportunities for VR are almost too obvious to mention. One of the first successful VR experiences is Titans of Space, which lets you explore the solar system first hand. Science, history, arts, and mathematics: VR will help students of all ages because, as they say, field trips are much more effective than textbooks.

Training: Toyota has demonstrated a VR simulation of drivers' education to teach teenagers about the risks of distracted driving. In another project, vocational students got to experience operating cranes and other heavy construction equipment. Training for first responders, police, and fire and rescue workers can be enhanced with VR by presenting highly risky situations and alternative virtual scenarios. The NFL is looking to VR for athletic training.

Entertainment and journalism: Virtually attend rock concerts and sporting events. Watch music videos. Erotica. Re-experience news events as if you were personally present. Enjoy 360-degree cinematic experiences. The art of storytelling will be transformed by virtual reality.

Wow, that's quite a list! And this is just the low-hanging fruit. The purpose of this book is not to dive too deeply into any of these applications. Rather, I hope that this survey helps stimulate your thinking and provides a perspective on how virtual reality has the potential to be virtually anything for everyone.

What this book covers

This book takes a practical, project-based approach to teach the specifics of virtual reality development using the Unity 3D game development engine. You'll learn how to use Unity 5 to develop VR applications, which can be experienced with devices such as the Oculus Rift or Google Cardboard. However, we have a slight problem here: the technology is advancing very rapidly. Of course, this is a good problem to have. Actually, it's an awesome problem to have, unless you're a developer in the middle of a project or an author of a book on this technology! How does one write a book that does not have obsolete content the day it's published?
Throughout the book, I have tried to distill some universal principles that should outlive any near-term advances in virtual reality technology, including the following:

- Categorization of different types of VR experiences, with example projects
- Important technical ideas and skills, especially the ones relevant to the building of VR applications
- General explanations of how VR devices and software work
- Strategies to ensure user comfort and avoid VR motion sickness
- Instructions on using the Unity game engine to build VR experiences

Once VR becomes mainstream, many of these lessons will perhaps be obvious rather than obsolete, just as explanations from the 1980s on how to use a mouse would be silly today.

Who are you?

If you are interested in virtual reality, want to learn how it works, or want to create VR experiences yourself, this book is for you. We will walk you through a series of hands-on projects, step-by-step tutorials, and in-depth discussions using the Unity 3D game engine. Whether you're a non-programmer who is unfamiliar with 3D computer graphics, or a person with experience in both but new to virtual reality, you will benefit from this book. It is not a cold start with Unity, but you do not need to be an expert either. Still, if you're new to Unity, you can pick up this book as long as you realize that you'll need to adapt to its pace.

Game developers may already be familiar with the concepts in the book, which are reapplied to the VR projects, while learning many other ideas that are specific to VR. Engineers and 3D designers may understand many of the 3D concepts, but they may wish to learn to use the game engine for VR. Application developers may appreciate the potential non-gaming uses of VR and want to learn the tools that can make this happen. Whoever you are, we're going to turn you into a 3D Software VR Ninja. Well, OK, this may be a stretch goal for this little book, but we'll try to set you on the way.

Types of VR experiences

There is not just one kind of virtual reality experience. In fact, there are many. Consider the following types of virtual reality experiences:

Diorama: In the simplest case, we build a 3D scene. You're observing from a third-person perspective. Your eye is the camera. Actually, each eye is a separate camera that gives you a stereographic view. You can look around.

First-person experience: This time, you're immersed in the scene as a freely moving avatar. Using an input controller (keyboard, game controller, or some other technique), you can walk around and explore the virtual scene.

Interactive virtual environment: This is like the first-person experience, but it has an additional feature: while you are in the scene, you can interact with the objects in it. Physics is at play. Objects may respond to you. You may be given specific goals to achieve and challenges with the game mechanics. You might even earn points and keep score.

Riding on rails: In this kind of experience, you're seated and being transported through the environment (or the environment changes around you). For example, you can ride a roller coaster via this virtual reality experience. However, it need not be an extreme thrill ride. It can be a simple real estate walk-through, or even a slow, easy, and meditative experience.

360-degree media: Think panoramic images taken with a GoPro on steroids, projected on the inside of a sphere. You're positioned at the center of the sphere and can look all around. Some purists don't consider this "real" virtual reality, because you're seeing a projection and not a model rendering. However, it can provide an effective sense of presence. (A sketch of the inside-a-sphere trick follows this list.)

Social VR: When multiple players enter the same VR space and can see and speak with each other's avatars, it becomes a remarkable social experience.

In this book, we will implement a number of projects that demonstrate how to build each of these types of VR experience. For brevity, we'll need to keep it pure and simple, with suggestions for areas for further investigation.
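The inside-of-a-sphere trick is easy to prototype in Unity. The following is a minimal sketch of one common approach (my own illustration, not from the book): it assumes a sphere with an equirectangular panorama on an unlit material, and it turns the mesh inside out so a camera placed at the sphere's center sees the image all around. The script name is hypothetical.

```csharp
using UnityEngine;

// Minimal sketch (assumption, not from the book): turn a sphere inside out
// so a 360-degree photo applied to it can be viewed from the center.
[RequireComponent(typeof(MeshFilter))]
public class InsideOutSphere : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Flip the normals so lighting treats the inside as the front
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse each triangle's winding order so the inward-facing
        // triangles survive backface culling and get rendered
        int[] triangles = mesh.triangles;
        for (int i = 0; i < triangles.Length; i += 3)
        {
            int temp = triangles[i];
            triangles[i] = triangles[i + 1];
            triangles[i + 1] = temp;
        }
        mesh.triangles = triangles;
    }
}
```

Attach this to the sphere, place the camera at the sphere's center, and you can look around the panorama in any direction.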
Technical skills that are important to VR

You will learn about the following in this book:

World scale: When building for a VR experience, attention to the 3D space and scale is important. One unit in Unity is usually equal to one meter in the virtual world.

First-person controls: There are various techniques that can be used to control the movement of your avatar (first-person camera), such as keyboard keys, game controllers, and head movements. (A minimal movement sketch follows this list.)

User interface controls: Unlike conventional video (and mobile) games, all user interface components in VR are in world coordinates, not screen coordinates. We'll explore ways to present notices, buttons, selectors, and other user interface (UI) controls to the users so that they can interact and make selections.

Physics and gravity: Critical to the sense of presence and immersion in VR are the physics and gravity of the world. We'll use the Unity physics engine to our advantage.

Animations: Moving objects within the scene is called animation (duh!). It can either be along predefined paths, or it may use AI (artificial intelligence) scripting that follows a logical algorithm in response to events in the environment.

Multiuser services: Real-time networking and multiuser games are not easy to implement, but online services make it easy without you having to be a computer engineer.

Build and run: Different HMDs use different developer kits (SDKs) and assets to build applications that target a specific device. We'll consider techniques that let you use a single interface for multiple devices.

We will write scripts in the C# language and use features of Unity as and when they are needed to get things done. However, there are technical areas that we will not cover, such as realistic rendering, shaders, materials, and lighting. We will not go into modeling techniques, terrains, or humanoid animations. Effective use of advanced input devices and hand and body tracking is proving to be critical to VR, but we won't have a chance to get into it here either. We also won't discuss game mechanics, dynamics, and strategies. We will talk about rendering performance optimization, but not in depth. All of these are very important topics that may be necessary for you (or someone in your team) to learn, in addition to this book, to build complete, successful, and immersive VR applications.
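To make the world-scale and walking-pace points concrete, here is a minimal movement sketch (my own illustration, not from the book). Because one Unity unit is treated as one meter, a comfortable VR walking speed can be expressed directly in meters per second; the script name and the 1.4 m/s default are assumptions for illustration.

```csharp
using UnityEngine;

// Minimal sketch (assumption, not from the book): first-person movement
// at a realistic walking pace. With one unit equal to one meter, ~1.4
// units/second matches a normal human walk, which is comfortable in VR,
// even though flat-screen games often move noticeably faster.
public class ComfortableWalker : MonoBehaviour
{
    public float walkSpeedMetersPerSecond = 1.4f; // tune to taste

    void Update()
    {
        float forward = Input.GetAxis("Vertical");   // W/S keys or stick
        float strafe  = Input.GetAxis("Horizontal"); // A/D keys or stick

        // Move relative to where the player (or headset-driven camera rig)
        // is currently facing
        Vector3 move = transform.forward * forward + transform.right * strafe;
        transform.position += move * walkSpeedMetersPerSecond * Time.deltaTime;
    }
}
```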
Summary

In this article, we looked at virtual reality and realized that it can mean a lot of things to different people and can have different applications. There's no single definition, and it's a moving target. We are not alone, as everyone's still trying to figure it out. The fact is that virtual reality is a new medium that will take years, if not decades, to reach its potential.

VR is not just for games; it can be a game changer for many different applications. We identified over a dozen. There are different kinds of VR experiences, which we'll explore in the projects in this book. VR headsets can be divided into those that require a separate processing unit (such as a desktop PC or a console) running a powerful GPU, and those that use your mobile phone for processing. In this book, we will use an Oculus Rift DK2 as the example of desktop VR and Google Cardboard as the example of mobile VR, although there are many alternative and new devices available.

We're all pioneers living at an exciting time. Because you're reading this book, you're one, too. Whatever happens next is literally up to you. As the personal computing pioneer Alan Kay said, "The best way to predict the future is to invent it." So, let's get to it!

Improving the Inspector with Property and Decorator Drawers

Packt
19 Aug 2015
10 min read
In this article by Angelo Tadres, author of the book Extending Unity with Editor Scripting, we will explore a way to create a custom GUI for our properties using Property Drawers. If you've worked on a Unity project for a long time, you know that the bigger your scripts get, the more unwieldy they become. All your public variables take up space in the Inspector window, and as they accumulate, they begin to turn into one giant and scary monster. Sometimes, organization is the clue. So, in this article, you will learn how to improve your inspectors using Property and Decorator Drawers.

A Property Drawer is an attribute that allows you to control how the GUI of a Serializable class or property is displayed in the Inspector window. An attribute is a C# way of defining declarative tags, which you can place on certain entities in your source code to specify additional information. The information that attributes contain is retrieved at runtime through reflection. This approach significantly reduces the amount of work you have to do for GUI customization, because you don't need to write an entire Custom Inspector; instead, you can just apply appropriate attributes to variables in your scripts to tell the editor how you want those properties to be drawn.

Unity has several Property Drawers implemented by default. Let's take a look at one of them, called Range:

```csharp
using UnityEngine;

public class RangeDrawerDemo : MonoBehaviour
{
    [Range (0, 100)]
    public int IntValue = 50;
}
```

If you attach this script to a game object and then select it, you will see a slider in the Inspector. Using the Range attribute, we rendered a slider that moves between 0 and 100 instead of the common int field. This is valuable in terms of validating the input for this field, avoiding possible mistakes such as using a negative value to define the radius of a sphere collider, and so on. Let's take a look at the rest of the built-in Property Drawers.

Built-in Property Drawers

The Unity documentation has information about the built-in Property Drawers, but there is no single place where all the available ones are listed. In this section, we will resolve this.

Range

The Range attribute restricts a float or int variable in a script to a specific range. When this attribute is used, the float or int will be shown as a slider in the Inspector instead of the default number field:

```csharp
public RangeAttribute(float min, float max);

[Range (0, 1)]
public float FloatRange = 0.5f;

[Range (0, 100)]
public int IntRange = 50;
```

Multiline

The Multiline attribute is used to show a string value in a multiline text area. You can decide the number of lines of text to make room for. The default is 3, and the text doesn't wrap:

```csharp
public MultilineAttribute();
public MultilineAttribute(int lines);

[Multiline (2)]
public string StringMultiline = "This text is using a multiline property drawer";
```

TextArea

The TextArea attribute allows a string to be edited with a height-flexible and scrollable text area. You can specify the minimum and maximum number of lines; a scrollbar will appear if the text is bigger than the area available. Its behavior is better compared to Multiline.
The following is an example of the TextArea attribute:

```csharp
public TextAreaAttribute();
public TextAreaAttribute(int minLines, int maxLines);

[TextArea (2, 4)]
public string StringTextArea = "This text is using a textarea property drawer";
```

ContextMenu

The ContextMenu attribute adds a method to the context menu of the component. When the user selects the context menu item, the method is executed. The method has to be nonstatic. In the following example, we call the method DoSomething, printing a log in the console:

```csharp
public ContextMenu(string name);

[ContextMenu ("Do Something")]
public void DoSomething()
{
    Debug.Log ("DoSomething called...");
}
```

ContextMenuItem

The ContextMenuItem attribute is used to add a context menu to a field that calls a named method. In the following example, we call a method to reset the value of the IntReset variable to 0:

```csharp
public ContextMenuItemAttribute(string name, string function);

[ContextMenuItem("Reset this value", "Reset")]
public int IntReset = 100;

public void Reset()
{
    IntReset = 0;
}
```

Built-in Decorator Drawers

There is another kind of drawer called a Decorator Drawer. These drawers are similar in composition to Property Drawers, but the main difference is that Decorator Drawers are designed to draw decoration in the Inspector and are unassociated with a specific field. This means that, while you can only declare one Property Drawer per variable, you can stack multiple Decorator Drawers on the same field (a combined example appears after the Tooltip section). Let's take a look at the following built-in Decorator Drawers.

Header

This attribute adds a header above some fields in the Inspector:

```csharp
public HeaderAttribute(string header);

[Header("This is a group of variables")]
public int VarA = 10;
public int VarB = 20;
```

Space

The Space attribute adds some spacing in the Inspector:

```csharp
public SpaceAttribute(float height);

public int VarC = 10;
[Space(40)]
public int VarD = 20;
```

Tooltip

The Tooltip attribute specifies a tooltip for a field:

```csharp
public TooltipAttribute(string tooltip);

[Tooltip("This is a tooltip")]
public int VarE = 30;
```
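Since Decorator Drawers are unassociated with a field's value, they stack freely. Here is a quick combined illustration (my own example, not from the article; the class and field names are hypothetical), mixing built-in decorators with a Property Drawer on a single field:

```csharp
using UnityEngine;

public class StackedDrawersDemo : MonoBehaviour
{
    // Header and Space are Decorator Drawers, so several can decorate
    // the same field; Range is a Property Drawer, and only one of those
    // applies per field.
    [Header ("Player stats")]
    [Space (10)]
    [Range (0, 100)]
    public int Health = 100;
}
```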
Creating your own Property Drawers

If you have a serializable parameter or structure that repeats constantly in your video game and you would like to improve how it renders in the Inspector, you can try writing your own Property Drawer. We will create a Property Drawer for an integer meant to store time in seconds. This Property Drawer will draw a normal int field, but also a label with the number of seconds converted to the m:s or h:m:s time format. To implement a Property Drawer, you must create two scripts:

- The attribute: This declares the attribute and makes it usable in your MonoBehaviour scripts. This will be part of your video game scripts.
- The drawer: This is responsible for rendering the custom GUI and handling the input of the user. This is placed inside a folder called Editor. The Editor folder is one of several special folders Unity has. All scripts inside this folder will be treated as editor scripts rather than runtime scripts.

For the first script, create a file inside your Unity project called TimeAttribute.cs and then add the following code:

```csharp
using UnityEngine;

public class TimeAttribute : PropertyAttribute
{
    public readonly bool DisplayHours;

    public TimeAttribute (bool displayHours = false)
    {
        DisplayHours = displayHours;
    }
}
```

Here, we defined the name of the attribute and its parameters. You must create your attribute class extending from the PropertyAttribute class. The name of the class carries the suffix "Attribute"; however, when you apply the attribute to a property, the suffix is not needed. In this case, we will use Time, not TimeAttribute, to use the Property Drawer. The TimeAttribute has an optional parameter called DisplayHours. The idea is to display a label under the int field with the time in m:s format by default; if the DisplayHours parameter is true, it will be displayed in h:m:s format.

Now is the moment to implement the drawer. To do this, create a new script called TimeDrawer.cs inside an Editor folder:

```csharp
using UnityEngine;
using UnityEditor;

[CustomPropertyDrawer (typeof(TimeAttribute))]
public class TimeDrawer : PropertyDrawer
{
    public override float GetPropertyHeight (SerializedProperty property, GUIContent label)
    {
        return EditorGUI.GetPropertyHeight (property) * 2;
    }

    public override void OnGUI (Rect position, SerializedProperty property, GUIContent label)
    {
        if (property.propertyType == SerializedPropertyType.Integer)
        {
            property.intValue = EditorGUI.IntField (
                new Rect (position.x, position.y, position.width, position.height / 2),
                label,
                Mathf.Abs (property.intValue));
            EditorGUI.LabelField (
                new Rect (position.x, position.y + position.height / 2, position.width, position.height / 2),
                " ",
                TimeFormat (property.intValue));
        }
        else
        {
            EditorGUI.LabelField (position, label.text, "Use Time with an int.");
        }
    }

    private string TimeFormat (int seconds)
    {
        TimeAttribute time = attribute as TimeAttribute;
        if (time.DisplayHours)
        {
            return string.Format ("{0}:{1}:{2} (h:m:s)",
                seconds / (60 * 60),
                ((seconds % (60 * 60)) / 60).ToString ().PadLeft (2, '0'),
                (seconds % 60).ToString ().PadLeft (2, '0'));
        }
        else
        {
            return string.Format ("{0}:{1} (m:s)",
                (seconds / 60).ToString (),
                (seconds % 60).ToString ().PadLeft (2, '0'));
        }
    }
}
```

Property Drawers don't support layouts to create a GUI; for this reason, the class you must use here is EditorGUI instead of EditorGUILayout. Using this class requires a little extra effort; you need to define the Rect that will contain each GUI element every time you want to use one. The CustomPropertyDrawer attribute is part of the UnityEditor namespace, and this is what Unity uses to bind a drawer to a property attribute; in this case, we passed the TimeAttribute. Your drawer must extend from the PropertyDrawer class, and in this way, you will have access to the core methods for creating Property Drawers:

- GetPropertyHeight: This method is responsible for handling the height of the drawer. You need to override this method in order to use it. In our case, we force the height of the drawer to be doubled.
- OnGUI: This is where you place all the code related to rendering the GUI.

You can create Decorator Drawers too. You just need to follow the same steps we performed to create a Property Drawer, but instead of extending your drawer from PropertyDrawer, you extend it from DecoratorDrawer. You will also have access to the variable attribute, which holds a reference to the attribute class we created; with this, we can access its variables.
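To illustrate the Decorator Drawer variant, here is a minimal sketch of a custom one (my own example, not from the book): it draws a simple divider line above a field. The DividerAttribute name and the pixel values are illustrative.

```csharp
// DividerAttribute.cs -- lives with your game scripts
using UnityEngine;

public class DividerAttribute : PropertyAttribute { }
```

```csharp
// DividerDrawer.cs -- must be placed inside an Editor folder
using UnityEngine;
using UnityEditor;

[CustomPropertyDrawer (typeof(DividerAttribute))]
public class DividerDrawer : DecoratorDrawer
{
    // Total vertical space reserved for the decoration
    public override float GetHeight ()
    {
        return 10f;
    }

    // Draw a thin gray line centered in the reserved space
    public override void OnGUI (Rect position)
    {
        Rect line = new Rect (position.x, position.y + position.height / 2, position.width, 1);
        EditorGUI.DrawRect (line, Color.gray);
    }
}
```

Tagging a field with [Divider] would then draw the line above it, and because it's a decorator, it can be stacked with other attributes on the same field.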
To test our code, create a new script called TimeDrawerDemo.cs and add fields that use the Time attribute. (The excerpt repeats the drawer code at this point; a minimal demo script covering the cases described next would presumably look like this:)

```csharp
using UnityEngine;

public class TimeDrawerDemo : MonoBehaviour
{
    [Time]
    public float NotAnInt = 1.5f;       // not an int: the drawer shows the fallback message

    [Time]
    public int Seconds = 90;            // displayed as m:s

    [Time (true)]
    public int SecondsWithHours = 4500; // displayed as h:m:s
}
```

After compiling, attach this script to a game object and select it in the Inspector. The time properties use the Time attribute, and we can check the three possible scenarios here:

- The attribute is used on a property that is not an int
- The attribute has the variable DisplayHours = false
- The attribute has the variable DisplayHours = true

A little change like this makes it easier to set up this data in our game objects.

Summary

In this article, we created a custom Property Drawer to be used on properties meant to store time in seconds, and in the process, you learned how these are implemented. We also explored the available built-in Property Drawers and Decorator Drawers in Unity. Applying this knowledge to your projects will enable you to add validation to sensitive data in your properties and make your scripts more developer friendly. This will also give them a professional look.

Nodes

Packt
19 Aug 2015
18 min read
In this article by Samanyu Chopra, author of the book iOS Game Development By Example, we will study nodes, which play an important role in understanding the tree structure of a game. Further, we will discuss the types of nodes in Sprite Kit and their uses in detail.

All you need to know about nodes

We have discussed many things about nodes so far. Almost everything you make in a game with Sprite Kit is a node. Scenes that we present to the view are instances of the SKScene class, which is a subclass of the SKEffectNode class, which is itself a subclass of the SKNode class. Indirectly, SKScene is a subclass of the SKNode class. As a game follows the node tree formation, a scene acts like a root node and the other nodes are used as its children. It should be remembered that although SKNode is a base class for the nodes you see in a scene, it does not draw anything itself. It only provides some basic features to its subclass nodes. All the visual content we see in a Sprite Kit game is drawn using the appropriate SKNode subclasses.

The following are some subclasses of the SKNode class, which are used for different behaviors in a Sprite Kit-based game:

- SKSpriteNode: This class is used to instantiate a textured sprite in the game.
- SKVideoNode: This class is used to play video content in a scene.
- SKLabelNode: This class is used to draw labels in a game, with many customizing options, such as font type, font size, font color, and so on.
- SKShapeNode: This class is used to make a shape based on a path at runtime. For example, drawing a line or making a drawing game.
- SKEmitterNode: This class is used for emitting particle effects in a scene, with many options, such as position, number of particles, color, and so on.
- SKCropNode: This class is basically used for cropping its child nodes using a mask. Using this, you can selectively block areas of a layer.
- SKEffectNode: SKEffectNode is the parent of the SKScene class and a subclass of the SKNode class. It is used for applying image filters to its children.
- SKLightNode: The SKLightNode class is used to make light and shadow effects in a scene.
- SKFieldNode: This is a useful feature of Sprite Kit. You can define a portion of a scene with some physical properties; for example, in a space game, a gravity effect on a black hole that attracts the things nearby.

So, these are the basic subclasses of SKNode that are used frequently in Sprite Kit. SKNode provides some basic properties to its subclasses, which are used to view a node inside a scene, such as:

- position: This sets up the position of a node in a scene
- xScale: This scales the width of a node
- yScale: This scales the height of a node
- zRotation: This facilitates the rotation of a node in a clockwise or anticlockwise direction
- frame: The node content's bounding rectangle, without accounting for its children

We know that the SKNode class does not draw anything by itself. So, what is the use of it? Well, we can use SKNode instances to manage our other nodes in different layers separately, or we can use them to manage different nodes in the same layer. Let's take a look at how we can do this.

Using the SKNode object in the game

Now, we will discover what the various aspects of SKNode are used for. Say you have to make a body from different sprite parts, like a car. You can make it from sprites of the wheels and the body.
The wheels and body of a car run in synchronization with each other, so one should control their actions together rather than manage each part separately. This can be done by adding them as children of an SKNode class object and updating this node to control the activity of the car.

The SKNode class object can be used for layering purposes in a game. Suppose we have three layers in our game: the foreground layer, which represents foreground sprites; the middle layer, which represents middle sprites; and the background layer, which represents background sprites. If we want a parallax effect in our game, we would have to update each sprite's position separately; or, we can make three SKNode objects, one for each layer, and add the sprites to their respective nodes. Now we have to update only these three nodes' positions, and the sprites will update their positions automatically.

The SKNode class can also be used to make checkpoints in a game, which are hidden but perform or trigger some event when a player crosses them, such as a level end, a bonus, or a death trap. We can remove or add a whole subtree inside a node and perform the necessary functions, such as rotating, scaling, positioning, and so on.

Well, as we described, we can use SKNode objects as checkpoints in a game, so it is important to recognize them in your scene. How do we do that? The SKNode class provides a property for this. Let's find out more about it.

Recognition of a node

The SKNode class provides a name property to recognize the correct node. It takes a string as a parameter. You can either search for a node by its name or use one of the two methods provided by SKNode, which are as follows:

- func childNodeWithName(name:String) -> SKNode: This function takes the name string as a parameter, and if it finds a node with the specified name, it returns that node; otherwise, it returns nil. If there is more than one node sharing the same name, it returns the first node found in the search.
- func enumerateChildNodesWithName(name:String, usingBlock:((SKNode!, UnsafeMutablePointer<ObjCBool>) -> Void)!): When you need all the nodes sharing the same name, use this function. It takes the name and a block as parameters. In usingBlock, you are given two parameters: one matching node, and a pointer of type Boolean.

In our game, if you remember, we used the name property inside PlayButton to recognize the node when a user taps on it. It's a very useful property for finding a desired node. So, let's have a quick look at the other properties and methods of the SKNode class.

Initializing a node

There are two initializers to make an instance of SKNode. Both are available in iOS 8.0 or later:

- convenience init(fileNamed filename: String): This initializer is used for making a node by loading an archive file from the main bundle. For this, you have to pass the name of a file with an sks extension in the main bundle.
- init(): This is used to make a simple node without any parameters. It is useful for layering purposes in a game.

As we have already discussed the positioning of a node, let's discuss some functions and properties that are used to build a node tree.

Building a node tree

SKNode provides some functions and properties to work with a node tree. The following are some of the functions:

- addChild(node:SKNode): This is a very common function and is mostly used to build a node tree structure. We have already used it to add nodes to scenes.
- insertChild(node:SKNode, atIndex index: Int): This is used when you have to insert a child at a specific position in the array.
- removeFromParent(): This simply removes a node from its parent.
- removeAllChildren(): This is used when you have to clear all the children of a node.
- removeChildrenInArray(nodes:[AnyObject]!): This takes an array of SKNode objects and removes them from the receiving node.
- inParentHierarchy(parent:SKNode) -> Bool: This takes an SKNode object to check whether it is a parent of the receiving node, and returns a Boolean value accordingly.

There are some useful properties used in a node tree, as follows:

- children: This is a read-only property. It contains the receiving node's children in an array.
- parent: This is also a read-only property. It contains a reference to the parent of the receiving node; if there is none, it returns nil.
- scene: This too is a read-only property. If the node is embedded in a scene, it contains a reference to the scene; otherwise, it is nil.

In a game, we need to perform specific tasks on a node, such as changing its position from one point to another, changing sprites in a sequence, and so on. These tasks are done using actions on the node. Let's talk about them now.

Actions on a node tree

Actions are required for some specific tasks in a game. For this, the SKNode class provides some basic functions, which are as follows:

- runAction(action:SKAction!): This function takes an SKAction class object as a parameter and performs the action on the receiving node.
- runAction(action:SKAction!, completion block: (() -> Void)!): This function takes an SKAction class object and a completion block as parameters. When the action completes, it calls the block.
- runAction(action:SKAction, withKey key:String!): This function takes an SKAction class object and a unique key to identify the action, and performs it on the receiving node.
- actionForKey(key:String) -> SKAction?: This takes a string key as a parameter and returns the associated SKAction object for that key identifier if it exists; otherwise, it returns nil.
- hasActions() -> Bool: If the node has any executing actions, this returns true; otherwise, false.
- removeAllActions(): This function removes all actions from the receiving node.
- removeActionForKey(key:String): This takes a string name as a key and removes the action associated with that key, if it exists.

Some useful properties to control these actions are as follows:

- speed: This is used to speed up or slow down the action's motion. The default value is 1.0, which runs at normal speed; with increasing values, the speed increases.
- paused: This Boolean value determines whether actions on the node should be paused or resumed.

Sometimes, we need to convert a point's coordinates with respect to a node inside a scene. The SKNode class provides two functions to interchange a point's coordinate system with respect to a node in a scene. Let's talk about them.

Coordinate system of a node

We can convert a point with respect to the coordinate system of any node tree. The functions to do that are as follows:

- convertPoint(point:CGPoint, fromNode node: SKNode) -> CGPoint: This takes a point in another node's coordinate system and that node as its parameters, and returns the point converted into the receiving node's coordinate system.
- convertPoint(point:CGPoint, toNode node:SKNode) -> CGPoint: This takes a point in the receiving node's coordinate system and another node in the node tree as its parameters, and returns the same point converted into the other node's coordinate system.

We can also determine whether a point is inside a node's area or not:

- containsPoint(p:CGPoint) -> Bool: This returns a Boolean value according to whether the point lies inside or outside of the receiving node's bounding box.
- nodeAtPoint(p:CGPoint) -> SKNode: This returns the deepest descendant node that intersects the point. If there is none, it returns the receiver node.
- nodesAtPoint(p:CGPoint) -> [AnyObject]: This returns an array of all the SKNode objects in the subtree that intersect the point. If no nodes intersect the point, an empty array is returned.

Apart from these, the SKNode class provides some other functions and properties too. Let's talk about them.

Other functions and properties

Some other functions and properties of the SKNode class are as follows:

- intersectsNode(node:SKNode) -> Bool: As the name suggests, this returns a Boolean value according to whether the receiving node intersects the node passed as a parameter.
- physicsBody: This is a property of the SKNode class. The default value is nil, which means that the node will not take part in any physics simulation in the scene. If it contains a physics body, the node will change its position and rotation in accordance with the physics simulation in the scene.
- userData: NSMutableDictionary?: The userData property is used to store data for a node in dictionary form. We can store position, rotation, and many custom data sets about the node inside it.
- constraints: [AnyObject]?: This contains an array of SKConstraint objects applied to the receiving node. Constraints are used to limit the position or rotation of a node inside a scene.
- reachConstraints: SKReachConstraints?: This is basically used to set restricted values for the receiving node by making an SKReachConstraints object; for example, to make joints move in a human body.
- Node blending modes: The SKNode class declares an enum SKBlendMode of the int type to blend the receiving node's color using source and destination pixel colors. The constants used for this are as follows:
  - Alpha: Blends the source and destination colors by multiplying the source alpha value
  - Add: Adds the source and destination colors
  - Subtract: Subtracts the source color from the destination color
  - Multiply: Multiplies the source color by the destination color
  - MultiplyX2: Multiplies the source color by the destination color, and then doubles the resulting color
  - Screen: Multiplies the inverted source and destination colors respectively, and then inverts the final result color
  - Replace: Replaces the destination color with the source color
- calculateAccumulatedFrame() -> CGRect: We know that a node does not draw anything by itself, but if a node has descendants that draw content, we may need to know the overall frame size of that node. This function calculates the frame that contains the content of the receiver node and all of its descendants.

Now, we are ready to see some basic SKNode subclasses in action.
The classes we are going to discuss are as follows:

- SKLabelNode
- SKCropNode
- SKShapeNode
- SKEmitterNode
- SKLightNode
- SKVideoNode

To study these classes, we are going to create six different SKScene subclasses in our project, so that we can learn about them separately. Now, having learned about nodes in detail, we can proceed further to utilize the concept of nodes in a game.

Creating subclasses for our Platformer game

With a theoretical understanding of nodes, one wonders how this concept is helpful in developing a game. To understand the development of a game using the concept of nodes, we now go ahead with writing and executing code for our Platformer game. Create the subclasses of the different nodes in Xcode by following the given steps:

1. From the main menu, select New File | Swift | Save As | NodeMenuScene.swift. Make sure Platformer is ticked as the target. Click Create, open the file, and make the NodeMenuScene class by subclassing SKScene.
2. Following the same steps, make the CropScene, ShapeScene, ParticleScene, LightScene, and VideoNodeScene files, respectively.
3. Open the GameViewController.swift file and replace the viewDidLoad function by typing out the following code:

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    let menuscene = NodeMenuScene()
    let skview = view as SKView
    skview.showsFPS = true
    skview.showsNodeCount = true
    skview.ignoresSiblingOrder = true
    menuscene.scaleMode = .ResizeFill
    menuscene.anchorPoint = CGPoint(x: 0.5, y: 0.5)
    menuscene.size = view.bounds.size
    skview.presentScene(menuscene)
}
```

In this code, we just called our NodeMenuScene class from the GameViewController class. Now, it's time to add some code to the NodeMenuScene class.

NodeMenuScene

Open the NodeMenuScene.swift file and type in the code shown next. Do not worry about the length of the code; as this code is for creating the node menu screen, most of the functions are similar to those used for creating buttons:

```swift
import Foundation
import SpriteKit

let BackgroundImage = "BG"
let FontFile = "Mackinaw1"
let sKCropNode = "SKCropNode"
let sKEmitterNode = "SKEmitterNode"
let sKLightNode = "SKLightNode"
let sKShapeNode = "SKShapeNode"
let sKVideoNode = "SKVideoNode"

class NodeMenuScene: SKScene {

    let transitionEffect = SKTransition.flipHorizontalWithDuration(1.0)
    var labelNode: SKNode?
    var backgroundNode: SKNode?

    override func didMoveToView(view: SKView) {
        backgroundNode = getBackgroundNode()
        backgroundNode!.zPosition = 0
        self.addChild(backgroundNode!)
        labelNode = getLabelNode()
        labelNode?.zPosition = 1
        self.addChild(labelNode!)
    }

    func getBackgroundNode() -> SKNode {
        var bgnode = SKNode()
        var bgSprite = SKSpriteNode(imageNamed: "BG")
        bgSprite.xScale = self.size.width / bgSprite.size.width
        bgSprite.yScale = self.size.height / bgSprite.size.height
        bgnode.addChild(bgSprite)
        return bgnode
    }

    func getLabelNode() -> SKNode {
        var labelNode = SKNode()

        var cropnode = SKLabelNode(fontNamed: FontFile)
        cropnode.fontColor = UIColor.whiteColor()
        cropnode.name = sKCropNode
        cropnode.text = sKCropNode
        cropnode.position = CGPointMake(CGRectGetMinX(self.frame) + cropnode.frame.width/2, CGRectGetMaxY(self.frame) - cropnode.frame.height)
        labelNode.addChild(cropnode)

        var emitternode = SKLabelNode(fontNamed: FontFile)
        emitternode.fontColor = UIColor.blueColor()
        emitternode.name = sKEmitterNode
        emitternode.text = sKEmitterNode
        emitternode.position = CGPointMake(CGRectGetMinX(self.frame) + emitternode.frame.width/2, CGRectGetMidY(self.frame) - emitternode.frame.height/2)
        labelNode.addChild(emitternode)

        var lightnode = SKLabelNode(fontNamed: FontFile)
        lightnode.fontColor = UIColor.whiteColor()
        lightnode.name = sKLightNode
        lightnode.text = sKLightNode
        lightnode.position = CGPointMake(CGRectGetMaxX(self.frame) - lightnode.frame.width/2, CGRectGetMaxY(self.frame) - lightnode.frame.height)
        labelNode.addChild(lightnode)

        var shapetnode = SKLabelNode(fontNamed: FontFile)
        shapetnode.fontColor = UIColor.greenColor()
        shapetnode.name = sKShapeNode
        shapetnode.text = sKShapeNode
        shapetnode.position = CGPointMake(CGRectGetMaxX(self.frame) - shapetnode.frame.width/2, CGRectGetMidY(self.frame) - shapetnode.frame.height/2)
        labelNode.addChild(shapetnode)

        var videonode = SKLabelNode(fontNamed: FontFile)
        videonode.fontColor = UIColor.blueColor()
        videonode.name = sKVideoNode
        videonode.text = sKVideoNode
        videonode.position = CGPointMake(CGRectGetMaxX(self.frame) - videonode.frame.width/2, CGRectGetMinY(self.frame))
        labelNode.addChild(videonode)

        return labelNode
    }

    var once: Bool = true

    override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
        if !once {
            return
        }
        for touch: AnyObject in touches {
            let location = touch.locationInNode(self)
            let node = self.nodeAtPoint(location)
            if node.name == sKCropNode {
                once = false
                var scene = CropScene()
                scene.anchorPoint = CGPointMake(0.5, 0.5)
                scene.scaleMode = .ResizeFill
                scene.size = self.size
                self.view?.presentScene(scene, transition: transitionEffect)
            } else if node.name == sKEmitterNode {
                once = false
                var scene = ParticleScene()
                scene.anchorPoint = CGPointMake(0.5, 0.5)
                scene.scaleMode = .ResizeFill
                scene.size = self.size
                self.view?.presentScene(scene, transition: transitionEffect)
            } else if node.name == sKLightNode {
                once = false
                var scene = LightScene()
                scene.scaleMode = .ResizeFill
                scene.size = self.size
                scene.anchorPoint = CGPointMake(0.5, 0.5)
                self.view?.presentScene(scene, transition: transitionEffect)
            } else if node.name == sKShapeNode {
                once = false
                var scene = ShapeScene()
                scene.scaleMode = .ResizeFill
                scene.size = self.size
                scene.anchorPoint = CGPointMake(0.5, 0.5)
                self.view?.presentScene(scene, transition: transitionEffect)
            } else if node.name == sKVideoNode {
                once = false
                var scene = VideoNodeScene()
                scene.scaleMode = .ResizeFill
                scene.size = self.size
                scene.anchorPoint = CGPointMake(0.5, 0.5)
                self.view?.presentScene(scene, transition: transitionEffect)
            }
        }
    }
}
```

Running this code produces the node menu screen, with a label for each node type. In the preceding code, after the import statements, we defined some string variables.
Returning to the menu code: we use these string constants as the label names in the scene, and we also store our font name in a string constant. Inside the class, we keep two node references: one for the background and one for the labels used in this scene. We use these two nodes to create layers in our game; it is good practice to categorize the nodes in a scene this way, as it keeps the code organized and easier to optimize. We also create an SKTransition object for the flip horizontal effect; you can use other transition effects too.

Inside the didMoveToView() function, we simply fetch the two nodes, set their z positions, and add them to the scene. Now, if we look at the getBackgroundNode() function, we can see that we create a node from an SKNode instance and a background from an SKSpriteNode instance, add the sprite to the node, and return the node. If you look at the syntax of this function, you will see -> SKNode; it means that this function returns an SKNode object. The same goes for the getLabelNode() function: it also returns a node containing all the SKLabelNode objects. We give each of these labels a font and a name, and set its position on the screen. The SKLabelNode class is used to make labels in Sprite Kit, with many customizable options.

In the touchesBegan() function, we find out which label was touched, and we then present the appropriate scene with a transition. With this, we have created a menu scene; by tapping on each button, you can see the transition effect.

Summary

In this article, we learned about nodes in detail. We discussed many properties and functions of the SKNode class of Sprite Kit, along with their usage. We also discussed building a node tree and running actions on a node tree. We are now familiar with the major subclasses of SKNode, namely SKLabelNode, SKCropNode, SKShapeNode, SKEmitterNode, SKLightNode, and SKVideoNode, along with their implementation in our game.

Resources for Article:

Further resources on this subject:
Sprites, Camera, Actions! [article]
Cross-platform Building [article]
Creating Games with Cocos2d-x is Easy and 100 percent Free [article]

Tappy Defender – Building the home screen

Packt
12 Aug 2015
11 min read
In this article by John Horton, the author of Android Game Programming by Example, we will look at developing the home screen UI for our game.

(For more resources related to this topic, see here.)

Creating the project

Fire up Android Studio and create a new project by following these steps:

1. On the welcome page of Android Studio, click on Start a new Android Studio project.
2. In the Create New Project window shown next, we need to enter some basic information about our app. These bits of information will be used by Android Studio to determine the package name. In the following image, you can see the Edit link where you can customize the package name if required. If you will be copy/pasting the supplied code into your project, then use C1 Tappy Defender for the Application name field and gamecodeschool.com in the Company Domain field, as shown in the following screenshot:
3. Click on the Next button when you're ready.
4. When asked to select the form factors your app will run on, we can accept the default settings (Phone and Tablet), so click on Next again.
5. On the Add an activity to mobile dialog, just click on Blank Activity followed by the Next button.
6. On the Choose options for your new file dialog, we can again accept the default settings, because MainActivity seems like a good name for our main Activity. So click on the Finish button.

What we did

Android Studio has built the project and created a number of files, most of which you will see and edit during the course of building this game. As mentioned earlier, even if you are just copying and pasting the code, you need to go through this step because Android Studio is doing things behind the scenes to make your project work.

Building the home screen UI

The first and simplest part of your Tappy Defender game is the home screen. All you need is a neat picture with a scene about the game, a high score, and a button to start the game. The finished home screen will look a bit like this:

When you built the project, Android Studio opened two files ready for you to edit. You can see them as tabs in the following Android Studio UI designer. The files (and tabs) are MainActivity.java and activity_main.xml:

The MainActivity.java file is the entry point to your game, and you will see it in more detail soon. The activity_main.xml file is the UI layout that your home screen will use. Now, you can go ahead and edit the activity_main.xml file so that it actually looks the way your home screen should.

First of all, your game will be played with the Android device in landscape mode. If you change your UI preview window to landscape, you will see your progress more accurately. Look for the button shown in the next image; it is located just before the UI preview:

Click on the button shown in the preceding screenshot, and your UI preview will switch to landscape like this:

Make sure activity_main.xml is open by clicking on its tab. Now, you will set a background image. You can use your own. Add your chosen image to the drawable folder of the project in Android Studio. Then, in the Properties window of the UI designer, find and click on the background property, as shown in the next image:

Also outlined in the previous image is the button labelled ..., just to the right of the background property. Click on that ... button, then browse to and select the background image file that you will be using.

Next, you need a TextView widget that you will use to display the high score. Note that there is already a TextView widget on the layout; it says Hello World.
You will modify this and use it for your high score:

1. Left click on and drag the TextView to where you want it. You can copy my layout if you intend to use the supplied background, or put it where it looks best with your own background.
2. Next, in the Properties window, find and click on the id property, and enter textHighScore. You can also edit the text property to say High Score: 99999 or similar so that the TextView looks the part. However, this isn't strictly necessary, because your Java code will take care of it later.
3. Now, drag a button from the widget palette, as shown in the following screenshot:
4. Drag it to where it looks good on your background, and set its id property to buttonPlay in the same way, so that we can refer to it from our Java code. Again, you can copy my layout if you are using the supplied background.

What we did

You now have a cool background with neatly arranged widgets (a TextView and a Button) for your home screen. Next, you will add functionality to the Button widget via Java code; we will revisit the TextView for the player's high score later. The important point is that both widgets have been assigned a unique ID that you can use to reference and manipulate them in your Java code.

Coding the functionality

Now that you have a simple layout for your game's home screen, you need to add the functionality that will allow the player to click on the Play button to start the game.

Click on the tab for the MainActivity.java file. The code that was automatically generated for us is not exactly what we need; therefore, we will start again, as that is simpler and quicker than tinkering with what is already there.

Delete the entire contents of the MainActivity.java file except the package name, and enter the following code in it. Of course, your package name may be different.

package com.gamecodeschool.c1tappydefender;

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {

    // This is the entry point to our game
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Here we set our UI layout as the view
        setContentView(R.layout.activity_main);
    }
}

This is the current contents of your MainActivity class and the entry point of your game, the onCreate method. The line of code that begins with setContentView... is the line that loads our UI layout from activity_main.xml to the player's screen. We can run the game now and see our home screen.

Now, let's handle the Play button on our home screen. Add the two highlighted lines of the following code into the onCreate method, just after the call to setContentView(). The first new line creates a new Button object and gets a reference to the Button in our UI layout. The second line is the code to listen for clicks on the button.

// Here we set our UI layout as the view
setContentView(R.layout.activity_main);

// Get a reference to the button in our layout
final Button buttonPlay =
    (Button) findViewById(R.id.buttonPlay);

// Listen for clicks
buttonPlay.setOnClickListener(this);

Note that you now have a few errors in your code. You can resolve them by holding down the Alt key and then pressing Enter; this will add an import directive for the Button class.

You still have one error: you need to implement an interface so that your code can listen for button clicks. Modify the MainActivity class declaration as highlighted:

public class MainActivity extends Activity
        implements View.OnClickListener {

When you implement the OnClickListener interface, you must also implement the onClick method.
This is where you will handle what happens when a button is clicked. You can automatically generate the onClick method by right-clicking somewhere after the onCreate method, but within the MainActivity class, and navigating to Generate | Implement methods | onClick(v:View):void.

You also need to have Android Studio add another import directive, this time for android.view.View; use the Alt + Enter keyboard combination again.

You can now scroll to near the bottom of the MainActivity class and see that Android Studio has implemented an empty onClick method for you. You should have no errors in your code at this point. Here is the onClick method:

@Override
public void onClick(View v) {
    // Our code goes here
}

As you only have one Button object and one listener, you can safely assume that any click on your home screen is the player pressing your Play button.

Android uses the Intent class to switch between activities. As you need to go to a new activity when the Play button is clicked, you will create a new Intent object and pass the name of your future Activity class, GameActivity, to its constructor. You can then use the Intent object to switch activities. Add the following code to the body of the onClick method:

// It must be the Play button.
// Create a new Intent object
Intent i = new Intent(this, GameActivity.class);

// Start our GameActivity class via the Intent
startActivity(i);

// Now shut this activity down
finish();

Once again, you have errors in your code because you need to generate a new import directive, this time for the Intent class, so use the Alt + Enter keyboard combination once more. You still have one error in your code: this is because your GameActivity class does not exist yet. You will now solve this problem.

Creating GameActivity

You have seen that when the player clicks on the Play button, the main activity will close and the game activity will begin. Therefore, you need to create a new activity called GameActivity, which is where your game will actually execute.

1. From the main menu, navigate to File | New | Activity | Blank Activity.
2. In the Choose options for your new file dialog, change the Activity name field to GameActivity.
3. You can accept all the other default settings from this dialog, so click on Finish.

As you did with your MainActivity class, you will code this class from scratch; therefore, delete the entire code content from GameActivity.java.

What we did

Android Studio has generated two more files for you and done some work behind the scenes that you will investigate soon. The new files are GameActivity.java and activity_game.xml. They are both automatically opened for you in two new tabs, in the same place as the other tabs above the UI designer.

You will never need activity_game.xml, because you will build a dynamically generated game view, not a static UI. Feel free to close it now or just ignore it. You will come back to the GameActivity.java file when you start to code your game for real.

Configuring the AndroidManifest.xml file

We briefly mentioned that when we create a new project or a new activity, Android Studio does more than just create two files for us. This is why we create new projects/activities the way we do. One of the things going on behind the scenes is the creation and modification of the AndroidManifest.xml file in the manifests directory. This file is required for your app to work, and it needs to be edited to make your app work the way you want it to.
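To give you a picture of where we're headed, here is roughly what the MainActivity entry in AndroidManifest.xml will look like once the two edits described next are in place. This is an illustrative fragment: the attribute order and any extra attributes that Android Studio generated (such as android:label) may differ in your file, and the GameActivity entry gets the same two attributes:

<activity
    android:name=".MainActivity"
    android:theme="@android:style/Theme.NoTitleBar.Fullscreen"
    android:screenOrientation="landscape" >
    ...
</activity>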
Android Studio has automatically configured the basics for you, but you will now do two more things to this file. By editing the AndroidManifest.xml file, you will force both of your activities to run full screen, and you will also lock them to a landscape layout. Let's make these changes here:

1. Open the manifests folder now, and double click on the AndroidManifest.xml file to open it in the code editor.
2. In the AndroidManifest.xml file, find the following line of code:
android:name=".MainActivity"
3. Immediately following it, type or copy and paste these two lines to make MainActivity run full screen and lock it in the landscape orientation:
android:theme="@android:style/Theme.NoTitleBar.Fullscreen"
android:screenOrientation="landscape"
4. In the AndroidManifest.xml file, find the following line of code:
android:name=".GameActivity"
5. Immediately following it, type or copy and paste these two lines to make GameActivity run full screen and lock it in the landscape orientation:
android:theme="@android:style/Theme.NoTitleBar.Fullscreen"
android:screenOrientation="landscape"

What you did

You have now configured both of your game's activities to run full screen. This gives a much more pleasing appearance for your player. In addition, you have disabled the player's ability to affect your game by rotating their Android device.

Continue building on what you've learnt so far with Android Game Programming by Example! Learn to implement the game rules, game mechanics, and game objects such as guns, life, and money; and of course, the enemy. You even get to control a spaceship!

Cross-platform Building

Packt
10 Aug 2015
11 min read
In this article by Karan Sequeira, author of the book Cocos2d-x Game Development Blueprints, we'll leverage the awesome cross-platform nature of Cocos2d-x to build one of our games on Android and Windows Phone 8!

(For more resources related to this topic, see here.)

Setting up the environment for Android

At this point in the timeline of technological evolution, Android needs no introduction. This mobile operating system was acquired by Google, and it has reached far and wide across the globe. It is now one of the top choices for application developers and game developers. With octa-core CPUs and ever more powerful GPUs, the sheer power offered by Android devices is a motivating factor!

While setting up the environment for Android, you have more choices than on any other mobile development platform. Your workstation could be running any of the three major operating systems (Windows, Mac OS, or Linux) and you would be able to build for Android just fine. Since Android is not fussy about its build environment, developers mostly choose their work environment based on which other platforms they will be developing for. As such, you might choose to build for Android on a machine running Mac OS, since you would be able to build for iOS and Android on the same machine. The same applies to a machine running Windows: you would be able to build for both Android and Windows Phone, although building for Windows Phone 8 requires you to have at least Windows 8 installed. We will discuss more on that later.

Let's begin listing the various software required to set up the environment for Android.

Java Development Kit 7+

Since you already know that Java is the programming language used with the Android SDK, you must ensure that you have the environment set up to compile and run Java files. So go ahead and download the Java Development Kit (JDK) version 7 or later. You can download and install a Standard Edition (SE) version from the page available at the following link:

http://www.oracle.com/technetwork/java/javase/downloads/index.html

Mac OS comes with the JDK installed, so you won't have to follow this step if you're setting up your development environment on a Mac.

The Android SDK

Once you've downloaded the JDK, it's time to download the Android SDK from the following URL:

http://developer.android.com/sdk/index.html

If you're installing the Android SDK on Windows, a custom installer is provided that will take care of downloading and setting up the required parts of the Android SDK for you. For other operating systems, you can choose to download the respective archive files and extract them at a location of your choice.

Eclipse or the ADT bundle

Eclipse is the most commonly used IDE when it comes to Android application development. You can choose to download a standard Eclipse IDE for Java developers and then install the ADT plugin into Eclipse, or you can download the ADT bundle, which is a specialized version of Eclipse with the ADT plugin preinstalled.

At the time of writing this article, the Android developer site had already deprecated ADT in favor of Android Studio. As such, we will choose the former approach for setting up our environment in Eclipse. You can download and install the standard Eclipse IDE for Java Developers for your specific machine from the following URL:

http://www.eclipse.org/downloads/

ADT plugin for Eclipse

Once you've downloaded Eclipse, you must now install a custom plugin for it: Android Development Tools (ADT).
Visit the following URL and follow the detailed instructions that will help you install the ADT plugin into Eclipse:

http://developer.android.com/sdk/installing/installing-adt.html

Once you've followed the instructions on the preceding page, you will need to inform Eclipse about the location of the Android SDK that you downloaded earlier. So, open the Preferences page in Eclipse, and in the Android section, set the location where you've placed the Android SDK.

With that done, we can now fire up the SDK Manager to install a few more necessary pieces of software. To launch the Android SDK Manager, select Android SDK Manager from the Window menu in Eclipse. The resultant window should look something like this:

By default, you will see a whole lot of packages selected, of which Android SDK Platform-tools and Android SDK Build-tools are necessary. From the rest, you must select at least one target Android platform. An additional package, Google USB Driver, will be required if your target environment is Windows; it is located under the Extras list.

I would suggest skipping the documentation and samples. If you already have an Android device, I would go one step further and suggest you skip downloading the system images as well. However, if you don't have an Android device, you will need at least one system image so that you can at least test on an emulator. Once you've chosen from the various platforms needed, proceed to install the packages and you will get a window like this:

Now, you must select Accept License and click on the Install button to install the respective packages. Once these packages have been installed, you have to add their locations to the path variable on your respective machine.

For Windows, modify your path variable (go to Properties | Advanced Settings | Environment Variables) to include the following:

;E:\Android\android-sdk\platform-tools

For Mac OS, you can add the following line to the .bash_profile file found in your home directory:

export PATH=$PATH:/Android/android-sdk/platform-tools/

The same line can be added to the .bashrc file found in your home directory on a Linux machine.

At this point, you can use Eclipse for Android development.

Installing Cygwin for Windows

Developers working on Linux can skip this step, as most Linux distributions come with the make utility. Developers working on Mac OS may download Xcode from the Mac App Store, which will install the make utility on their Macs.

We need to install Cygwin on Windows specifically for the GNU make utility. So, go to the following URL and download the installer for Cygwin:

http://www.cygwin.com/install.html

Once you've run the .exe file that you downloaded and get a window like this, click on the Next button:

The next window will ask how you would like to install the required packages. Here, select the Install from Internet option and click on Next:

The next window will ask where you would like to install Cygwin. I'd recommend leaving it at the default value unless you have a reason to change it. Proceed by clicking on Next.

In the next window, you will be asked to specify a path where the installation can download the files it requires. Fill in a suitable path of your choice and click on Next.

In the next window, you will be asked to specify your Internet connection. Leave it at the Direct Connection option and click on Next.

In the next window, you will be asked to select a mirror location from where to download the installation files.
Here, select the site that is geographically closest to you and click on Next.

In the window that follows, expand the Devel section and search for make: The GNU version of the 'make' utility. Click on the Skip option to select this package; the version of the make utility that will be installed is then displayed in place of Skip. Your window should look something like this:

You can now go ahead and click the Next button to begin the download and installation of the required packages. The window should look something like this:

Once all the packages have been downloaded, click on Finish to close the installation.

Now that we have the make utility installed, we can go ahead and download the Android NDK, which will actually build our entire C++ code base.

The Android NDK

To download the Android NDK for your respective development machine, navigate to the following URL:

https://developer.android.com/tools/sdk/ndk/index.html

Unzip the downloaded archive and place it in the same location as the Android SDK. We must now add an environment variable named NDK_ROOT that points to the root of the Android NDK.

For Windows, add a new user variable NDK_ROOT with the location of the Android NDK on your filesystem as its value. You can do this by going to Properties | Advanced Settings | Environment Variables. Once you've done that, the Environment Variables window should look something like this:

I'm sure you noticed the value of the NDK_ROOT variable in the previous screenshot. The value of this variable is given in Unix style and depends on the Cygwin environment, since it will be accessed within a Cygwin bash shell while executing the build script for each Android project. Mac OS and Linux users can add the following line to their .bash_profile and .bashrc files, respectively:

export NDK_ROOT=/Android/android-ndk-r10

We have now successfully completed setting up the environment to build our Cocos2d-x games on Android. To test this, open up a Cygwin bash terminal (on Windows) or a standard terminal (on Mac OS or Linux) and navigate to the Cocos2d-x test bed located inside the samples folder of your Cocos2d-x source. Now, navigate to the proj.android folder and run the build_native.sh file. This is what my Cygwin bash terminal looks like on a Windows 7 machine:

If you've followed the aforementioned instructions correctly, the build_native.sh script will go on to compile the C++ source files required by the TestCpp project, resulting in a single shared object (.so) file in the libs folder within the proj.android folder.
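For reference, the whole test boils down to just a couple of terminal commands. The path below is illustrative; it assumes the Cocos2d-x 2.2.5 source was extracted to the root of the E: drive on Windows, so adjust it to your own setup (on Mac OS or Linux, use the equivalent Unix path from a standard terminal):

# From a Cygwin bash shell on Windows
cd /cygdrive/e/cocos2d-x-2.2.5/samples/Cpp/TestCpp/proj.android
./build_native.sh
# On success, the generated .so file appears under the libs folder:
ls libs/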
Creating an Android Virtual Device

We're close to running the game, but we need to create an Android Virtual Device (AVD) before we proceed. Open the Android Virtual Device Manager from the Window menu and click on Create. In the next window, fill in the required details as per your requirements and configuration and click OK. This is what my window looks like with everything filled in:

From the Android Virtual Device Manager window, select the newly created AVD and click on Start to boot it.

Building the tests on Android

With an Android device that is ready to run our project, let's begin by first importing the project into Eclipse:

Within Eclipse, select File | Import.... In the following window, select Existing Projects into Workspace under the General setting and click on Next:

In the next window, browse to the proj.android folder under the cocos2d-x-2.2.5\samples\Cpp\TestCpp path and click on Finish:

Once imported, you can find the TestCpp project under Package Explorer. It should look something like this:

As you can see, there are a few errors with the project. If you look at the Problems view (Window | Show View | Problems) located in the bottom half of Eclipse, you might see something like this:

All these errors are due to the fact that the Android project for our game depends on Cocos2d-x's Android project for Android-specific functionality: things such as the actual OpenGL surface where everything is rendered, the music player, accelerometer functionality, and much more. So let's import the Android project for Cocos2d-x located at the following path inside your Cocos2d-x source bundle:

cocos2d-x-2.2.5\cocos2dx\platform\android

You can import it the same way you imported TestCpp. Once the project has been imported, it will be titled libcocos2dx in Package Explorer. Now, select Clean... from the Project menu. You will notice that when the clean operation has finished, the TestCpp dependency on libcocos2dx is taken care of and the project builds error-free.

Running the tests on Android

Running the tests is as simple as right-clicking on the TestCpp project in Package Explorer and selecting Run As | Android Application. It might take a bit more time on an emulator than on an actual device, but ultimately you will have something like this:

Summary

In this article, you learned which software components are needed to set up your workstation to build and run an Android native application. You also set up an Android Virtual Device and ran the Cocos2d-x test bed application on it.

Resources for Article:

Further resources on this subject:
Run Xcode Run [article]
Creating Games with Cocos2d-x is Easy and 100 percent Free [article]
Creating Cool Content [article]

Editing the UV islands

Packt
10 Aug 2015
10 min read
In this article by Enrico Valenza, the author of Blender 3D Cookbook, we are going to join the two halves of the UV islands together in order to improve the final look of the texturing; we are also going to modify, where possible, the island proportions a little in order to obtain a more regular flow of the UV vertices and to fix distortions. We are going to use the pin tool, which is normally used in conjunction with the Live Unwrap tool.

(For more resources related to this topic, see here.)

Getting ready

First, we'll try to recalculate the unwrap of some of the islands by modifying the seams of the mesh. Before we start, though, let's see if we can improve the visibility of the UV islands in the UV/Image Editor:

1. Put the mouse cursor in the UV/Image Editor window and press the N key.
2. In the Properties sidepanel that appears on the right-hand side of the window, go to the Display subpanel and click on the Black or White button (depending on your preference) under the UV item. Also check the Smooth item box.
3. Check the Stretch item too; even though it was made for a different purpose, it can increase the visibility of the islands a lot.
4. Press N again to get rid of the Properties sidepanel.

With all these options enabled, the islands should be more easily readable in the UV/Image Editor window:

The UV islands made more easily readable by the enabled items

How to do it…

Now we can start with the editing. Initially, we are going to freeze the islands that we don't want to modify, either because their unwrap is already satisfactory or because we will deal with them later. So, perform the following steps:

1. Press A to select all the islands; then, putting the mouse pointer on the two pelvis island halves and pressing Shift + L, multi-deselect them. Press the P key to pin the remaining selected UV islands and then A to deselect everything:

To the right-hand side, the pinned UV islands

2. Zoom in on the islands of the pelvis, select both the left and right outer edge-loops, as shown in the following left image, and press P to pin them.
3. Go to the 3D view and clear only the front part of the median seam on the pelvis. To do this, start clearing the seam from the front edges, go down and stop where it crosses the horizontal seam that passes along the bottom part of the groin and legs, and leave the back part of the vertical median seam still marked:

Pinning the extreme vertices in the UV/Image Editor, and editing the seam on the mesh

4. Go into Face selection mode and select all the faces of the pelvis; put the mouse pointer in the 3D view and press U | Unwrap (alternatively, go into the UV/Image Editor and press E):

Unwrapping again with the pinning and a different seam

The island keeps its previous position because of the pinned edges, and it is now unwrapped as one single piece (with the obvious exception of the seam on the back).

5. We won't modify the pelvis island any further, so select all its vertices, press P to pin all of them, and then deselect them.
6. Press A in the 3D view to select all the faces of the mesh and make all the islands visible in the UV/Image Editor. Note that they are all pinned at the moment, so just select the vertices you want to unpin (Alt + P) in the islands of the tongue and inner mouth.
7. Then, clear the median seam in the corresponding pieces on the mesh, and press E again:

Re-unwrapping the tongue and inner mouth areas

8. Select the UV vertices of the resulting islands and unpin them all; next, pin just one vertex at the top of the islands and one at the bottom, and unwrap again. This results in a more organically distributed unwrap of these parts:

Re-unwrapping again with a different pinning

9. Select all the faces of the mesh, and then all the islands in the UV/Image Editor window. Press Ctrl + A to average their relative sizes and adjust their positions in the default tile space:

The rearranged UV islands

Now, let's work on the head piece, which, as with every character, should be the most important and well-finished piece. At the moment, the face is made of two separate islands; although this won't be visible in the final textured rendering of our character, it's always better, if possible, to join them in order to have a single piece, especially for the front faces of the mesh. Due to the elongated snout of the character, simply unwrapping the head as a single piece without the median seam would not give a nice, evenly mapped result, so we must divide the whole head into more pieces.

Actually, we can take advantage of the fact that the Gidiosaurus is wearing a helmet and that most of the head will be covered by it; this allows us to easily split the face from the rest of the mesh, hiding the seams under the helmet.

10. Go into Edge selection mode and mark the seams, dividing the face from the cranium and neck as shown in the following screenshots. Select the crossing edge-loops, and then clear the unnecessary parts:

New seams for the character's head part 1

11. Also clear the median seam in the upper face part and under the seam on the bottom jaw, leaving it only on the front mandible and on the back of the cranium and neck:

New seams for the character's head part 2

12. Go into Face selection mode and select only the face section of the mesh, then press E to unwrap. The new unwrap comes out upside down, so select all its UV vertices and rotate the island by 180 degrees:

The character's face unwrapped

13. Select the cranium/neck section on the mesh and repeat the process:

The rest of the head mesh unwrapped as a whole piece

14. Now, select all the faces of the mesh and all the islands in the UV/Image Editor, and press Ctrl + A to average their reciprocal sizes. Once again, adjust the positions of the islands inside the UV tile (Ctrl + P to automatically pack them inside the available space, and then tweak their position, rotation, and scale):

The character's UV islands packed inside the default U0/V0 tile space

How it works…

Starting from the UV unwrap, we improved some of the islands by joining together the halves representing common mesh parts. When doing this, we tried to retain the already good parts of the unwrap by pinning the UV vertices that we didn't want to modify; this way, the new unwrap process was forced to calculate the positions of the unpinned vertices within the constraints of the pinned ones (pelvis, tongue, and inner mouth). In other cases, we totally cleared the old seams on the model and marked new ones in order to obtain a completely new unwrap of the mesh part (the head). We also used the character's furniture (such as the armor) to hide the seams (which, in any case, won't be visible at all).
There's more…

At this point, looking at the UV/Image Editor window containing the islands, it's evident that if we want to keep several parts in proportion to each other, some of the islands are a little too small to give a good amount of detail when texturing; for example, the Gidiosaurus's face.

A technique for a good unwrap that is the current standard in the industry is UDIM UV mapping, which stands for U-Dimension; basically, after the usual unwrap, the islands are scaled bigger and placed outside the default U0/V0 tile space. Look at the following screenshots, showing the Blender UV/Image Editor window:

The default U0/V0 tile space and the possible consecutive other tile spaces

On the left-hand side, highlighted with red lines, you can see the single UV tile that at present is the standard for Blender, identified by the UV coordinates 0 and 0: that is, U (horizontal) = 0 and V (vertical) = 0. Although not visible in the UV/Image Editor window, all the other possible consecutive tiles can be identified by their corresponding UV coordinates, as shown on the right-hand side of the preceding screenshot (again, highlighted with red lines). So, adjacent to the U0/V0 tile, we can have the row with the U1/V0, U2/V0, and subsequent tiles, but we can also go upwards: U0/V1, U1/V1, U2/V1, and so on.

To help you identify the tiles, Blender shows the number of pixels and also the number of tiles you are moving the islands by in the toolbar of the UV/Image Editor window. In the following screenshot, the arm islands have been moved horizontally (on the negative x axis) by -3072.000 pixels; this is correct because that's exactly the X size of the grid image. In fact, in the toolbar of the UV/Image Editor window, while moving the islands, we can read D: -3072.000 (pixels) and, inside brackets, 1.0000 (tile) along X; effectively, 3072 pixels = 1 tile.

Moving the arm islands to the U1/V0 tile space

When moving UV islands from tile to tile, remember to check that the Constrain to Image Bounds item in the UVs menu on the toolbar of the UV/Image Editor window is disabled; also, enabling the Normalized item inside the Display subpanel of the N key Properties sidepanel of the same editor window will display the UV coordinates from 0.0 to 1.0 rather than in pixels. Moreover, pressing the Ctrl key while moving the islands will constrain the movement to intervals, making it easy to translate them by exactly one tile space.

Because at the moment Blender doesn't support the UDIM UV mapping standard, simply moving an island outside the default U0/V0 tile, for example to U1/V0, will repeat the image you loaded in the U0/V0 tile on the faces associated with the moved islands. To solve this, after moving the islands it's necessary to assign a different material, if necessary with its own different image textures, to each group of vertices/faces associated with each tile space. So, if you spread your islands over 4 tiles, you need to assign 4 different materials to your object, and each material must load the proper image texture.

The goal of this process is obviously to obtain bigger islands mapped with bigger texture images, by selecting all the islands, scaling them bigger together using the largest ones as a guide, and then tweaking their position and distribution.
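Incidentally, these whole-tile moves can also be scripted. The following is a hedged sketch for Blender 2.7x's Python API (run it from the Text Editor in Object Mode; it assumes the active object is a mesh with an active UV map, and that the islands to move are the currently selected UVs):

import bpy

# Shift the selected UVs of the active mesh one tile to the right (U + 1)
obj = bpy.context.active_object
uv_layer = obj.data.uv_layers.active

for uv_loop in uv_layer.data:
    if uv_loop.select:       # only the UVs selected in the UV/Image Editor
        uv_loop.uv.x += 1.0  # 1.0 in UV space is exactly one tile

Since a full tile is exactly 1.0 in normalized UV coordinates, no pixel math is needed, whatever the image resolution.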
One last thing: it is also better to unwrap the corneas and eyes (which are separate objects from the Gidiosaurus body mesh) and add their islands to the tiles where you put the face, mouth, teeth, and so on (use the Draw Other Objects tool in the View menu of the UV/Image Editor window to also show the UV islands of the other non-joined unwrapped objects):

UV islands unwrapped, following the UDIM UV mapping standard

In our case, we assigned the Gidiosaurus body islands to 5 different tiles, U0/V0, U1/V0, U2/V0, U0/V1, and U1/V1, so we'll have to assign 5 different materials. Note that, for exposition purposes only, in the preceding screenshot you can see the cornea and eye islands together with the Gidiosaurus body islands because I temporarily joined the objects; however, it's usually better to keep the eyes and corneas as objects separate from the main body.

Summary

In this article, we saw how to work with UV islands.

Resources for Article:

Further resources on this subject:
Working with Blender [article]
Blender Engine : Characters [article]
Blender 2.5: Rigging the Torso [article]

Animation features in Unity 5

Packt
05 Aug 2015
16 min read
In this article by Valera Cogut, author of the book Unity 5 for Android Essentials, you will learn about the new Mecanim animation features and the awesome new audio features in Unity 5.

(For more resources related to this topic, see here.)

New Mecanim animation features in Unity 5

Unity 5 adds some awesome new possibilities to the Mecanim animation system. Let's look at the shiny new features in Unity 5.

State machine behavior

You can now inherit your classes from StateMachineBehaviour in order to be able to attach them to your Mecanim animation states. This class has the following very important callbacks:

OnStateEnter
OnStateUpdate
OnStateExit
OnStateMove
OnStateIK

StateMachineBehaviour scripts behave like MonoBehaviour scripts: just as you can attach MonoBehaviour scripts to as many objects as you wish, you can attach StateMachineBehaviour scripts to as many states as you wish. You can also use this solution with or without any animation at all.

State machine transitions

Unity 5 introduces another awesome feature for the Mecanim animation system, known as state machine transitions, which lets you construct a higher level of abstraction. In addition, entry and exit nodes have been added. With these two additional nodes on a StateMachine, you can now branch your start or finish state depending on your special conditions and requirements. The following mixes of transitions are possible: StateMachine | StateMachine, State | StateMachine, and State | State. You can also reorder your layers or parameters; the new UI allows it with a very simple and useful drag-and-drop method.

Asset creation API

Another awesome possibility introduced in Unity 5 is using scripts in the Unity Editor to programmatically create assets such as layers, controllers, states, state machines, and blend trees. You can choose between a high-level API, where the Unity engine handles asset maintenance for you, and a low-level API, where you manage all your assets manually. You can find out more about both API versions on the Unity documentation pages.

Direct blend tree

Another new feature is the new BlendTree type known as direct. It provides direct mapping from animator parameters to the weights of the BlendTree children.

Unity 5 also enhances the Mecanim animation system with two more useful features:

The preview camera can scale, orbit, and pan
You can access your parameters at runtime
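To make the first of these features concrete, here is a minimal sketch of a StateMachineBehaviour; the class name and log messages are hypothetical choices made for this illustration. You attach it to a state in the Animator window, and its callbacks fire as that state is entered, updated, and left:

using UnityEngine;

public class StateLogger : StateMachineBehaviour
{
    // Called when a transition into the state begins
    public override void OnStateEnter(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Entering state on layer " + layerIndex);
    }

    // Called on each frame while the state is playing
    public override void OnStateUpdate(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        // Per-frame logic for this state goes here
    }

    // Called when a transition out of the state ends
    public override void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex)
    {
        Debug.Log("Leaving state on layer " + layerIndex);
    }
}

OnStateMove and OnStateIK follow the same signature and are called at root-motion and IK evaluation time, respectively.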
Programmatically creating assets with the Unity 5 API

The following code snippets are self-explanatory, pretty simple, and straightforward. I list them here as a very useful reminder.

Creating the controller

To create a controller, you can use the following code:

var animatorController = UnityEditor.Animations.AnimatorController.CreateAnimatorControllerAtPath("Assets/Your/Folder/Name/state_machine_transitions.controller");

Adding parameters

To add parameters to the controller, you can use this code:

animatorController.AddParameter("Parameter1", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
animatorController.AddParameter("Parameter2", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);
animatorController.AddParameter("Parameter3", UnityEditor.Animations.AnimatorControllerParameterType.Trigger);

Adding state machines

To add state machines, you can use the following code:

var sm1 = animatorController.layers[0].stateMachine;
var sm2 = sm1.AddStateMachine("sm2");
var sm3 = sm1.AddStateMachine("sm3");

Adding states

To add states, you can use the code given here:

var s1 = sm2.AddState("s1");
var s2 = sm3.AddState("s2");
var s3 = sm3.AddState("s3");

Adding transitions

To add transitions, you can use the following code:

var exitTransition1 = s1.AddExitTransition();
exitTransition1.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter1");
exitTransition1.duration = 0;

var transition1 = sm2.AddAnyStateTransition(s1);
transition1.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter2");
transition1.duration = 0;

var transition2 = sm3.AddEntryTransition(s2);
transition2.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter3");
sm3.AddEntryTransition(s3);
sm3.defaultState = s2;

var exitTransition2 = s3.AddExitTransition();
exitTransition2.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter3");
exitTransition2.duration = 0;

var smt = sm1.AddStateMachineTransition(sm2, sm3);
smt.AddCondition(UnityEditor.Animations.AnimatorConditionMode.If, 0, "Parameter2");
sm2.AddStateMachineTransition(sm1, sm3);

Going deeper into the new audio features

Let's start with the amazing new Audio Mixer possibilities. You can now do true submixing of audio in Unity 5. In the following figure, you can see a very simple example with the different sound categories required in a game:

Now, in Unity 5, you can mix different sound collections within categories and tune volume control and effects in a single place, saving a lot of time and effort. This awesome new audio feature in Unity 5 allows you to create a fantastic mood and atmosphere for your game. Each Audio Mixer can have a hierarchy of AudioGroups:

The Audio Mixer not only does a lot of useful things on its own, but also mixes different sound groups in one place. Different audio effects are applied sequentially in each AudioGroup.

Now you're getting even closer to the amazing, awesome, and shiny new audio features in Unity 5! Previously, custom processing of audio samples was handled exclusively in code, through the OnAudioFilterRead script callback, which makes it possible to process samples directly in your scripts. Unity now also supports custom plugins for creating different effects. With these innovations, building synthesizer-style audio processing in Unity 5 has become much easier and more flexible than before.

Mood transitions

As mentioned earlier, the mood of the game can be controlled with the sound mix. This can be achieved by bringing in new stems of music or ambient sound. Another common way to accomplish this is to transition the state of the mix itself.
A very effective way of taking the mood where you want it to go is to change the volume levels of sections of the mix and transition to different states of the effect parameters. This is made possible by the Audio Mixer's snapshot feature. Snapshots capture the state of all the parameters in an Audio Mixer: everything from effect wet levels to AudioGroup attenuation levels can be captured and interpolated between. You can even blend between a whole set of snapshots in your game, creating all kinds of possibilities and moods. And you can set all of this up without having to write a single line of script code.
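That said, if you do want to drive a snapshot change from a script (for example, when combat starts), it only takes a couple of lines. In this hedged sketch, the class name, the snapshot fields, and the two-second transition time are all illustrative placeholders; the snapshots themselves are created in the Audio Mixer window and assigned in the Inspector:

using UnityEngine;
using UnityEngine.Audio;

public class MoodController : MonoBehaviour
{
    public AudioMixerSnapshot calmSnapshot;    // assigned in the Inspector
    public AudioMixerSnapshot combatSnapshot;  // assigned in the Inspector

    public void EnterCombat()
    {
        // Interpolate every mixer parameter to the combat snapshot over 2 seconds
        combatSnapshot.TransitionTo(2.0f);
    }

    public void LeaveCombat()
    {
        calmSnapshot.TransitionTo(2.0f);
    }
}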
Physics and particle system effects in Unity 5

Physics for 2D and 3D in Unity are very similar, because they use the same concepts, such as rigidbodies, joints, and colliders. However, Box2D has more features than Unity's 2D physics engine. It is not a problem to mix 2D and 3D physics engines (built-in, custom, or third-party) in Unity, so Unity provides an easy development path for your innovative games and applications. If you need to develop some real-life physics in your project, you should not write your own library, framework, or engine unless you have specific requirements; instead, try existing physics engines, libraries, or frameworks, which come with many ready-made features.

Let's start our introduction to Unity's built-in physics engine. If you need to place an object under the management of Unity's built-in physics, you just need to attach the Rigidbody component to it. After that, your object can collide with other entities in its world, and gravity will have an effect on it. In other words, the Rigidbody will be simulated physically.

In your scripts, you can move any of your Rigidbodies by applying vector forces to them. It is not recommended to move the Transform component of a non-kinematic Rigidbody, because it will not collide correctly with other items; instead, you can apply forces and torque to your Rigidbody. A Rigidbody can also be used to develop cars, with wheel colliders and some scripts to apply forces. Furthermore, a Rigidbody is not only for vehicles; you can use it for any other physics task, such as airplanes, or robots with various scripts for applying forces and with joints.

The most useful way to utilize a Rigidbody is to use it in collaboration with some of the primitive colliders built into Unity, such as BoxCollider and SphereCollider. Next are two things to remember about a Rigidbody:

In your object's hierarchy, you must never have a child and its parent with Rigidbody components at the same time
It is not recommended to scale a Rigidbody's parent object

One of the most important and fundamental components of physics in Unity is the Rigidbody component. This component activates physics calculations on the attached object. If you need your object to react to collisions (for example, when playing billiards, balls collide with each other and scatter in different directions), then you must also attach a Collider component to your GameObject. If you have attached a Rigidbody component to your object, then your object will be moved by the physics engine, and I recommend that you do not move it by changing its position or rotation in the Transform component. If you need some way to move your object, you should instead apply various forces to it, so that the Unity physics engine takes on all the work of calculating collisions and moving dynamic objects.

Also, in some situations, a Rigidbody component is needed, but the object must be moved only by changing the position or rotation properties of its Transform component; that is, you need collisions to be calculated, but you want the object's motion to be driven by your script or by an animation. To solve this problem, you should activate its IsKinematic property. Sometimes, a combination of these two modes is required, with IsKinematic turned on at some times and off at others; you can create such a symbiosis by changing the IsKinematic parameter directly in your code or in your animation.

Changing the IsKinematic property very often from your code or your animation can be a cause of performance overhead, so you should use it carefully and only when you really need it.

A kinematic Rigidbody object is defined by the IsKinematic toggle option. If a Rigidbody is kinematic, the object will not be affected by collisions, gravity, or forces. There is a Rigidbody component for the 3D physics engine and an analogous Rigidbody2D component for the 2D physics engine. A kinematic Rigidbody can interact with other non-kinematic Rigidbodies. When using kinematic Rigidbodies, you move them by translating the position and rotation values of their Transform components from your scripts or animations. In a collision between a kinematic and a non-kinematic Rigidbody, the kinematic object will properly wake up the non-kinematic Rigidbody, and it will apply friction to the non-kinematic Rigidbody if that object sits on top of it.

Let's list some possible usage examples for kinematic Rigidbodies:

There are situations when you need your objects to be under physics management but sometimes controlled explicitly from your scripts or animations. As an example, you can attach Rigidbodies to the bones of your animated character and connect them with joints in order to use the entity as a ragdoll. While you are controlling your character through Unity's animation system, you should enable the IsKinematic checkbox; when you need your hero to be affected by Unity's built-in physics engine, for example when he is hit, you disable the IsKinematic checkbox.
You may need a moving object that can push other items but cannot be pushed itself. For example, if you have a moving platform and you need to place some Rigidbody objects on top of it, you ought to enable the IsKinematic checkbox rather than simply attaching a collider without a Rigidbody.
You may need to enable the IsKinematic property of an animated Rigidbody object that has a real Rigidbody follower attached through one of the available joints.
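The ragdoll case in the first point is easy to sketch. The following is an illustrative example, not a complete ragdoll setup: the class name is hypothetical, and a real character would also need its bones wired up with joints:

using UnityEngine;

public class RagdollSwitch : MonoBehaviour
{
    private Rigidbody[] bodies;

    void Awake()
    {
        // Collect the Rigidbodies attached to the character's bones
        bodies = GetComponentsInChildren<Rigidbody>();
        SetKinematic(true); // let the animation system drive the bones
    }

    // Call SetKinematic(false) when the character is hit,
    // handing the bones over to the physics engine
    public void SetKinematic(bool value)
    {
        foreach (Rigidbody rb in bodies)
        {
            rb.isKinematic = value;
        }
    }
}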
Earlier, I mentioned colliders; now it's time to discuss this component in more detail. For the Unity physics engine to calculate collisions, you must specify the geometric shape of your object by attaching a Collider component. In most cases, the collider does not have to be the same shape as your mesh with its many polygons; therefore, it is desirable to use simple colliders, which will significantly improve your performance. With more complex geometric shapes, you risk significantly increasing the computation time for physics collisions.

Simple colliders in Unity are known as primitive colliders: BoxCollider, BoxCollider2D, SphereCollider, CircleCollider2D, and CapsuleCollider. Nothing forbids you from combining different primitive colliders to create a more realistic geometric shape that the physics engine can handle very quickly compared to a MeshCollider; therefore, to improve your performance, you should use primitive colliders wherever possible. You can also attach different primitive colliders to child objects, and each will follow the position and rotation of the parent's Transform component. The Rigidbody component, however, must be attached only to the root GameObject in the hierarchy of your entity.

Unity provides a MeshCollider component for 3D physics and a PolygonCollider2D component for 2D physics. The MeshCollider component will use your object's mesh for its geometric shape. A PolygonCollider2D can be edited directly in Unity, letting you create any 2D geometry for your 2D physics computations. In order for collisions between different mesh colliders to be registered, you must enable the Convex property. You will certainly sacrifice performance for more accurate physics calculations, but if you strike the right balance between quality and performance, you can achieve good results through the proper approach.

Objects are static when they have a Collider component without a Rigidbody component; therefore, you should not move or rotate them by changing properties in their Transform component, because that leaves a heavy imprint on your performance, as the physics engine must recalculate many polygons of various objects for correct collisions and raycasts. Dynamic objects are those that have a Rigidbody component. Static objects (with a Collider component and without a Rigidbody component) can interact with dynamic objects (with both Collider and Rigidbody components); furthermore, static objects will not be moved by collisions the way dynamic objects are.

Also, Rigidbodies can sleep in order to increase performance. Unity provides the ability to control a Rigidbody's sleeping directly in code using the following functions:

Rigidbody.IsSleeping()
Rigidbody.Sleep()
Rigidbody.WakeUp()

There are two related variables defined in the physics manager. You can open the physics manager from the Unity menu here: Edit | Project Settings | Physics:

Rigidbody.sleepVelocity: The default value is 0.14. This is the lower limit for linear velocity (from zero to infinity) below which objects will sleep.
Rigidbody.sleepAngularVelocity: The default value is 0.14. This is the lower limit for angular velocity (from zero to infinity) below which objects will sleep.

Rigidbodies awaken when:

Another Rigidbody collides with the sleeping Rigidbody
Another Rigidbody is connected through a joint
A property of the Rigidbody is modified
Force vectors are added

A kinematic Rigidbody can wake other sleeping Rigidbodies, while static objects (with a Collider component and without a Rigidbody component) can't wake your sleeping Rigidbodies.

The PhysX physics engine integrated into Unity works well on mobile devices, but mobile devices certainly have far fewer resources than powerful desktops.
Let's look at a few points for optimizing the physics engine in Unity:

First of all, note that you can adjust the Fixed Timestep parameter in the time manager to control the cost of the physics time updates. Decreasing the value (that is, running physics updates more often) increases the quality and accuracy of physics in your game or application, but at the cost of processing time: it increases CPU overhead. The Maximum Allowed Timestep parameter indicates how much time may be spent on physics processing in the worst case. The total processing time for physics depends on the number of awake rigidbodies and colliders in the scene, as well as on the complexity of the colliders.

Unity provides the ability to use physic materials for setting various properties, such as friction and bounciness. For example, a piece of ice in your game may have very low friction, or zero (the minimum value), while a bouncing ball may have a very high friction force, or one (the maximum value), and also very high bounciness. You should play with the settings of the physic materials on your different objects and choose the solution that is most suitable for you and best for your performance.

Triggers do not require a lot of processing by the physics engine and can greatly help in improving your performance. Triggers are useful in situations where, for example, your game needs to define zones near all the lights that automatically switch on in the evening or at night if the player is inside the trigger zone, or in other words, within the geometric shape of its collider, which you can design as you wish. Unity triggers allow you to write three callbacks, which will be called when your object enters the trigger, while your object stays in the trigger, and when the object leaves the trigger. Thus, you can fill any of these functions with the necessary instructions; for example, turn on the light when entering the trigger zone and turn it off when exiting the trigger zone.

It is important to know that, in Unity, static objects (objects without a Rigidbody component) will not cause your callbacks to fire on entering the trigger zone if your trigger does not contain a Rigidbody component; in other words, at least one of the two objects must have a Rigidbody component so that your callbacks are not ignored. In the case of two triggers, at least one of the objects must have a Rigidbody component attached for your callbacks not to be ignored. Remember that when two objects both have Rigidbody and Collider components, and at least one of them is a trigger, then the trigger callbacks will be called, and not the collision callbacks. I would also like to point out that your callbacks will be called for each object involved in the collision or trigger zone. Also, you can directly control whether your collider is a trigger or not by setting the isTrigger flag to true or false in your code. Of course, you can mix both options in order to obtain the best performance. All collision callbacks will be called only if at least one of the two interacting rigidbodies is non-kinematic.
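Here is a brief sketch of the street-light idea described above. The class name, the Player tag check, and the public Light field are illustrative assumptions; the script would sit on a GameObject whose collider has isTrigger enabled:

using UnityEngine;

public class LampTrigger : MonoBehaviour
{
    public Light lamp; // assigned in the Inspector

    void OnTriggerEnter(Collider other)
    {
        // Switch the lamp on when the player walks into the zone
        if (other.CompareTag("Player"))
        {
            lamp.enabled = true;
        }
    }

    void OnTriggerExit(Collider other)
    {
        // Switch it off again when the player leaves
        if (other.CompareTag("Player"))
        {
            lamp.enabled = false;
        }
    }
}

Remember from the notes above that at least one of the two objects involved (typically the player) needs a Rigidbody for these callbacks to fire.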
Summary

This article covered the new Mecanim animation features in Unity 5. You were also introduced to the new audio features in Unity 5, and we covered many useful details for improving your performance with Unity's built-in physics and particle systems.

Resources for Article: Further resources on this subject: Speeding up Gradle builds for Android [article] Saying Hello to Unity and Android [article] Learning NGUI for Unity [article]
A Simple Pathfinding Algorithm for a Maze

Packt
23 Jul 2015
10 min read
In this article, Mário Kašuba, author of the book Lua Game Development Cookbook, explains how maze pathfinding can be used effectively in many types of games, such as side-scrolling platform games or top-down, gauntlet-like games. The point is to find the shortest viable path from one point on the map to another. This can be used for moving NPCs and players as well.

Getting ready

This article will use a simple maze environment to find a path starting at the start point and ending at the exit point. You can either prepare one by yourself or let the computer create one for you. A map will be represented by a 2D map structure where each cell consists of a cell type and cell connections. The cell type values are as follows:

0 means a wall
1 means an empty cell
2 means the start point
3 means the exit point

Cell connections use a bitmask value to get information about which cells are connected to the current cell. The following diagram contains cell connection bitmask values with their respective positions:

Now, a quite common problem in programming is how to implement an efficient data structure for 2D maps. Usually, this is done either with a relatively large one-dimensional array or with an array of arrays. All these arrays have a specified static size, so map dimensions are fixed. The problem arises when you use a simple 1D array and you need to change the map size during gameplay, or when the map size should be unlimited. This is where map cell indexing comes into place. Often you can use this formula to compute the cell index from 2D map coordinates:

local index = x + y * map_width
map[index] = value

There's nothing wrong with this approach when the map size is definite. However, changing the map size would invalidate the whole data structure, as the map_width variable would change its value. A solution to this is to use indexing that's independent of the map size. This way you can ensure consistent access to all elements even if you resize the 2D map. You can use some kind of hashing algorithm that packs map cell coordinates into one value that can be used as a unique key. Another way to accomplish this is to use the Cantor pairing function, which is defined for two input coordinates as:

cantor(k1, k2) = (1/2) * (k1 + k2) * (k1 + k2 + 1) + k2

Index value distribution is shown in the following diagram:

The Cantor pairing function ensures that there are no key collisions no matter what coordinates you use. What's more, it can be trivially extended to support three or more input coordinates. To illustrate the usage of the Cantor pairing function for more dimensions, its primitive form will be defined as a function cantor(k1, k2), where k1 and k2 are input coordinates. The pairing function for three dimensions will look like this:

local function cantor3D(k1, k2, k3)
  return cantor(cantor(k1, k2), k3)
end

Keep in mind that the Cantor pairing function always returns one integer value. With a higher number of dimensions, you'll soon get very large values in the results. This may pose a problem because the Lua language can offer 52 bits for integer values. For example, for the 2D coordinates (83114015, 11792250) you'll get the value 0x000FFFFFFFFFFFFF, which still fits into a 52-bit integer without rounding errors. Larger coordinates will return inaccurate values and, subsequently, you'd get key collisions. Value overflow can be avoided by dividing large maps into smaller ones, where each one uses the full address space that Lua numbers can offer.
You can use another coordinate to identify submaps. This article will use specialized data structures for a 2D map with the Cantor pairing function for internal cell indexing. You can use the following code to prepare this type of data structure:

function map2D(defaultValue)
  local t = {}
  -- Cantor pair function
  local function cantorPair(k1, k2)
    return 0.5 * (k1 + k2) * ((k1 + k2) + 1) + k2
  end
  setmetatable(t, {
    __index = function(_, k)
      if type(k)=="table" then
        local i = rawget(t, cantorPair(k[1] or 1, k[2] or 1))
        return i or defaultValue
      end
    end,
    __newindex = function(_, k, v)
      if type(k)=="table" then
        rawset(t, cantorPair(k[1] or 1, k[2] or 1), v)
      else
        rawset(t, k, v)
      end
    end,
  })
  return t
end

The maze generator as well as the pathfinding algorithm will need a stack data structure (a minimal implementation is sketched after the How it works… section).

How to do it…

This section is divided into two parts, where each one solves very similar problems from the perspective of the maze generator and the maze solver.

Maze generation

You can either load a maze from a file or generate a random one. The following steps will show you how to generate a unique maze. First, you'll need to grab a maze generator library from the GitHub repository with the following command:

git clone https://github.com/soulik/maze_generator

This maze generator uses the depth-first approach with backtracking. You can use this maze generator in the following steps. First, you'll need to set up maze parameters such as the maze size and the entry and exit points:

local mazeGenerator = require 'maze'
local maze = mazeGenerator {
  width = 50,
  height = 25,
  entry = {x = 2, y = 2},
  exit = {x = 30, y = 4},
  finishOnExit = false,
}

The final step is to iteratively generate the maze map until it's finished or a certain step count is reached. The number of steps should always be one order of magnitude greater than the total number of maze cells, mainly due to backtracking. Note that it's not necessary for each maze to connect the entry and exit points in this case.

for i=1,12500 do
  local result = maze.generate()
  if result == 1 then
    break
  end
end

Now you can access each maze cell with the maze.map variable in the following manner:

local cell = maze.map[{x, y}]
local cellType = cell.type
local cellConnections = cell.connections

Maze solving

This article will show you how to use a modified Trémaux's algorithm, which is based on depth-first search and path marking. This method guarantees finding the path to the exit point if there is one. It relies on using two keys in each step: the current position and the neighbours. This algorithm will use three state variables: the current position, a set of visited cells, and the current path from the starting point:

local currentPosition = {maze.entry.x, maze.entry.y}
local visitedCells = map2D(false)
local path = stack()

The whole maze-solving process will be placed into one loop. This algorithm is always finite, so you can use the infinite while loop.

-- A placeholder for the neighbours function that will be defined later
local neighbours

-- testing function for passable cells
local cellTestFn = function(cell, position)
  return (cell.type >= 1) and (not visitedCells[position])
end

-- include starting point into path
visitedCells[currentPosition] = true
path.push(currentPosition)

while true do
  local currentCell = maze.map[currentPosition]
  -- is current cell an exit point?
  if currentCell and
      (currentCell.type == 3 or currentCell.type == 4) then
    break
  else
    -- have a look around and find viable cells
    local possibleCells = neighbours(currentPosition, cellTestFn)
    if #possibleCells > 0 then
      -- let's try the first available cell
      currentPosition = possibleCells[1]
      visitedCells[currentPosition] = true
      path.push(currentPosition)
    elseif not path.empty() then
      -- get back one step
      currentPosition = path.pop()
    else
      -- there's no solution
      break
    end
  end
end

This fairly simple algorithm uses the neighbours function to obtain a list of cells that haven't been visited yet:

-- A shorthand for direction coordinates
local neighbourLocations = {
  [0] = {0, 1},
  [1] = {1, 0},
  [2] = {0, -1},
  [3] = {-1, 0},
}

local function neighbours(position0, fn)
  local neighbours = {}
  local currentCell = map[position0]
  if type(currentCell)=='table' then
    local connections = currentCell.connections
    for i=0,3 do
      -- is this cell connected?
      if bit.band(connections, 2^i) >= 1 then
        local neighbourLocation = neighbourLocations[i]
        local position1 = {position0[1] + neighbourLocation[1],
          position0[2] + neighbourLocation[2]}
        if (position1[1]>=1 and position1[1] <= maze.width and
            position1[2]>=1 and position1[2] <= maze.height) then
          if type(fn)=="function" then
            if fn(map[position1], position1) then
              table.insert(neighbours, position1)
            end
          else
            table.insert(neighbours, position1)
          end
        end
      end
    end
  end
  return neighbours
end

When this algorithm finishes, a valid path between the entry and exit points is stored in the path variable, represented by the stack data structure. The path variable will contain an empty stack if there's no solution for the maze.

How it works…

This pathfinding algorithm uses two main steps. First, it looks around the current maze cell to find cells that are connected to the current maze cell with a passage. This results in a list of possible cells that haven't been visited yet. In this case, the algorithm will always use the first available cell from this list. Each step is recorded in the stack structure, so in the end, you can reconstruct the whole path from the exit point to the entry point. If there are no maze cells to go to, it will head back to the previous cell from the stack. The most important part is the neighbours function, which determines where to go from the current point. It uses two input parameters: the current position and a cell-testing function. It looks around the current cell in four directions in clockwise order: up, right, down, and left. There must be a passage from the current cell to each surrounding cell; otherwise, it'll just skip to the next cell. Another step determines whether the cell is within the rectangular maze region. Finally, the cell is passed into the user-defined testing function, which determines whether to include the current cell in the list of usable cells. The maze cell testing function consists of a simple Boolean expression. It returns true if the cell has a correct cell type (not a wall) and hasn't been visited yet. A positive result leads to inclusion of this cell in the list of usable cells. Note that even if this pathfinding algorithm finds a path to the exit point, it doesn't guarantee that this path is the shortest possible.
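The listings above push to and pop from a path variable created by stack(), a helper this excerpt never defines. A minimal Lua implementation matching the push/pop/empty calls used here might look like the following (an assumption on my part, not the book's actual code):

local function stack()
  local t = {}          -- storage for stack elements
  local s = {}          -- public interface

  function s.push(v)    -- add an element on top
    t[#t + 1] = v
  end

  function s.pop()      -- remove and return the top element
    local v = t[#t]
    t[#t] = nil
    return v
  end

  function s.empty()    -- true when there is nothing left
    return #t == 0
  end

  return s
end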
Summary

We have learned how pathfinding works in games with a simple maze. With a pathfinding algorithm, you can create intelligent game opponents that won't jump into a lava lake at the first opportunity.

Resources for Article: Further resources on this subject: Mesh animation [article] Getting into the Store [article] Creating a Direct2D game window class [article]
Exploring and Interacting with Materials using Blueprints

Packt
23 Jul 2015
16 min read
In this article by Brenden Sewell, author of the book Blueprints Visual Scripting for Unreal Engine, we will cover the following topics:

Exploring materials
Creating our first Blueprint

When setting out to develop a game, one of the first steps toward exploring your idea is to build a prototype. Fortunately, Unreal Engine 4 and Blueprints make it easier than ever to quickly get the essential gameplay functionality working so that you can start testing your ideas sooner. To develop some familiarity with the Unreal editor and Blueprints, we will begin by prototyping simple gameplay mechanics using some default assets and a couple of Blueprints.

Exploring materials

Earlier, we set for ourselves the goal of changing the color of the cylinder when it is hit by a projectile. To do so, we will need to change the actor's material. A material is an asset that can be added to an actor's mesh (which defines the physical shape of the actor) to create its look. You can think of a material as paint applied on top of an actor's mesh or shape. Since an actor's material determines its color, one method of changing the color of an actor is to replace its material with a material of a different color. To do this, we will first create a material of our own. It will make the actor appear red.

Creating materials

We can start by creating a new folder inside the FirstPersonBP directory and calling it Materials. Navigate to the newly created folder and right-click inside the empty space in the content browser, which will open a new asset creation popup. From here, select Material to create a new material asset. You will be prompted to give the new material a name; I have chosen to call it TargetRed.

Material Properties and Blueprint Nodes

Double-click on TargetRed to open a new editor tab for editing the material, like this:

You are now looking at the Material Editor, which shares many features and conventions with Blueprints. The center of this screen is called the grid, and this is where we will place all the objects that will define the logic of our Blueprints. The initial object you see in the center of the grid, labeled with the name of the material, is called a node. This node, as seen in the previous screenshot, has a series of input pins that other material nodes can attach to in order to define its properties. To give the material a color, we will need to create a node that will provide color information to the input labeled Base Color on this node. To do so, right-click on empty space near the node. A popup will appear, with a search box and a long list of expandable options. This shows all the available Blueprint node options that we can add to this Material Blueprint. The search box is context sensitive, so if you start typing the first few letters of a valid node name, the list below will shrink to include only those nodes whose names contain those letters. The node we are looking for is called VectorParameter, so start typing this name in the search box and click on the VectorParameter result to add that node to our grid:

A vector parameter in the Material Editor allows us to define a color, which we can then attach to the Base Color input on the tall material definition node. We first need to give the node a color selection. Double-click on the black square in the middle of the node to open Color Picker.
We want to give our target a bright red color when it is hit, so either drag the center point in the color wheel to the red section of the wheel, or fill in the RGB or Hex values manually. When you have selected the shade of red you want to use, click on OK. You will notice that the black box in your vector parameter node has now turned red. To help ourselves remember what parameter or property of the material our vector parameter will be defining, we should name it color. You can do this by ensuring that you have the vector parameter node selected (indicated by a thin yellow highlight around the node), and then navigating to the Details panel in the editor. Enter a value for Parameter Name, and the node label will change automatically:

The final step is to link our color vector parameter node to the base material node. With Blueprints, you can connect two nodes by clicking and dragging one output pin to another node's input pin. Input pins are located on the left-hand side of a node, while output pins are always located to the right. The thin line that connects two nodes in this way is called a wire. For our material, we need to click and drag a wire from the top output pin of the color node to the Base Color input pin of the material node, as shown in the following screenshot:

Adding substance to our material

We can optionally add some polish to our material by taking advantage of some of the other input pins on the material definition node. 3D objects look unrealistic with flat, single-color materials applied, but we can add reflectiveness and depth by setting values for the material's Metallic and Roughness inputs. To do so, right-click in empty grid space and type scalar into the search box. The node we are looking for is called ScalarParameter. Once you have a scalar parameter node, select it and go to the Details panel. A scalar parameter takes a single float value (a number with decimal values). Set Default Value to 0.1, as we want any additive effects to our material to be subtle. We should also change Parameter Name to Metallic. Finally, we click and drag the output pin from our Metallic node to the Metallic input pin of the material definition node. We want to make an additional connection to the Roughness parameter, so right-click on the Metallic node we just created and select Duplicate. This will generate a copy of that node, without the wire connection. Select this duplicate node and change the Parameter Name field in the Details panel to Roughness. We will keep the same default value of 0.1 for this node. Now click and drag the output pin from the Roughness node to the Roughness input pin of the material definition node. The final result of our Material Blueprint should look like what is shown in the following screenshot:

We have now made a shiny red material. It will ensure that our targets stand out when they are hit. Click on the Save button in the top-left corner of the editor to save the asset, and click again on the tab labeled FirstPersonExampleMap to return to your level.

Creating our first Blueprint

We now have a cylinder in the world, and the material we would like to apply to the cylinder when shot. The final piece of the interaction will be the game logic that evaluates that the cylinder has been hit, and then changes the material on the cylinder to our new red material. In order to create this behavior and add it to our cylinder, we will have to create a Blueprint.
There are multiple ways of creating a Blueprint, but to save a couple of steps, we can create the Blueprint and directly attach it to the cylinder we created in a single click. To do so, make sure you have the CylinderTarget object selected in the Scene Outliner panel, and click on the blue Blueprint/Add Script button at the top of the Details panel. You will then see a path select window. For this project, we will be storing all our Blueprints in the Blueprints folder, inside the FirstPersonBP folder. Since this is the Blueprint for our CylinderTarget actor, leaving the name of the Blueprint as the default, CylinderTarget_Blueprint, is appropriate. CylinderTarget_Blueprint should now appear in your content browser, inside the Blueprints folder. Double-click on this Blueprint to open a new editor tab for the Blueprint. We will now be looking at the Viewport view of our cylinder. From here, we can manipulate some of the default properties of our actor, or add more components, each of which can contain its own logic to make the actor more complex. However, none of that is necessary for creating a simple Blueprint attached directly to the actor. To do so, click on the tab labeled Event Graph above the Viewport window.

Exploring the Event Graph panel

The Event Graph panel should look very familiar, as it shares most of the same visual and functional elements as the Material Editor we used earlier. By default, the event graph opens with three unlinked event nodes that are currently unused. An event refers to some action in the game that acts as a trigger for a Blueprint to do something. Most of the Blueprints you will create follow this structure: Event (when) | Conditionals (if) | Actions (do). This can be worded as follows: when something happens, check whether X, Y, and Z are true, and if so, do this sequence of actions. A real-world example of this might be a Blueprint that determines whether or not I have fired a gun. The flow is like this: WHEN the trigger is pulled, IF there is ammo left in the gun, DO fire the gun. The three event nodes that are present in our graph by default are three of the most commonly used event triggers. Event Begin Play triggers actions when the player first begins playing the game. Event Actor Begin Overlap triggers actions when another actor begins touching or overlapping the space containing the existing actor controlled by the Blueprint. Event Tick triggers attached actions every time a new frame of visual content is displayed on the screen during gameplay. The number of frames shown on the screen within a second will vary depending on the power of the computer running the game, and this will in turn affect how often Event Tick triggers its actions. We want to trigger a "change material" action on our target every time a projectile hits it. While we could do this by utilizing the Event Actor Begin Overlap node to detect when a projectile object was overlapping with the cylinder mesh of our target, we will simplify things by detecting only when another actor has hit our target actor. Let's start with a clean slate by clicking and dragging a selection box over all the default events and hitting the Delete key on the keyboard to delete them.

Detecting a hit

To create our hit detection event, right-click on empty graph space and type hit in the search box. The Event Hit node is what we are looking for, so select it when it appears in the search results. Event Hit triggers actions every time another actor hits the actor controlled by this Blueprint.
Once you have the Event Hit node on the graph, you will notice that Event Hit has a number of multicolored output pins originating from it. The first thing to notice is the white triangle pin that is in the top-right corner of the node. This is the execution pin, which determines the next action to be taken in a sequence. Linking the execution pins of different nodes together enables the basic functionality of all Blueprints. Now that we have the trigger, we need to find an action that will enable us to change the material of an actor. Click and drag a wire from the execution pin to the empty space to the right of the node. Dropping a wire into empty space like this will generate a search window, allowing you to create a node and attach it to the pin you are dragging from in a single operation. In the search window that appears, make sure that the Context Sensitive box is checked. This will limit the results in the search window to only those nodes that can actually be attached to the pin you dragged to generate the search window. With Context Sensitive checked, type set material in the search box. The node we want to select is called Set Material (StaticMeshComponent). If you cannot find the node you are searching for in the context-sensitive search, try unchecking Context Sensitive to find it from the complete list of node options. Even if the node is not found in the context-sensitive search, there is still a possibility that the node can be used in conjunction with the node you are attempting to attach it to. The actions in the Event Hit node can be set like this: Swapping a material Once you have placed the Set Material node, you will notice that it is already connected via its input execution pin to the Event Hit node's output execution pin. This Blueprint will now fire the Set Material action whenever the Blueprint's actor hits another actor. However, we haven't yet set up the material that will be called when the Set Material action is called. Without setting the material, the action will fire but not produce any observable effect on the cylinder target. To set the material that will be called, click on the drop-down field labeled Select Asset underneath Material inside the Set Material node. In the asset finder window that appears, type red in the search box to find the TargetRed material we created earlier. Clicking on this asset will attach it to the Material field inside the Set Material node. We have now done everything we need with this Blueprint to turn the target cylinder red, but before the Blueprint can be saved, it must be compiled. Compiling is the process used to convert the developer-friendly Blueprint language into machine instructions that tell the computer what operations to perform. This is a hands-off process, so we don't need to concern ourselves with it, except to ensure that we always compile our Blueprint scripts after we assemble them. To do so, hit the Compile button in the top-left corner of the editor toolbar, and then click on Save. Now that we have set up a basic gameplay interaction, it is wise to test the game to ensure that everything is happening the way we expect. Click on the Play button, and a game window will appear directly above the Blueprint Editor. Try both shooting and running into the CylinderTarget actor you created. Improving the Blueprint When we run the game, we see that the cylinder target does change colors upon being hit by a projectile fired from the player's gun. 
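If you are curious how the same graph reads as text, the following is a rough C++ analogue of what we have wired so far (Event Hit leading into Set Material). This is only an illustrative sketch, not code from this article; the Blueprint workflow here requires no C++, and the class and property names below are invented for the example:

#include "GameFramework/Actor.h"

// Illustrative only: a cylinder actor that swaps its material when hit.
// UCLASS()/GENERATED_BODY() reflection boilerplate omitted for brevity.
class ATargetCylinder : public AActor
{
public:
    // Assumed to be assigned in the editor, playing the role of the
    // Set Material node's Material field.
    UStaticMeshComponent* MeshComponent;
    UMaterialInterface* RedMaterial;

    // NotifyHit is UE4's C++ counterpart of the Event Hit node.
    virtual void NotifyHit(UPrimitiveComponent* MyComp, AActor* Other,
                           UPrimitiveComponent* OtherComp, bool bSelfMoved,
                           FVector HitLocation, FVector HitNormal,
                           FVector NormalImpulse, const FHitResult& Hit) override
    {
        // Equivalent of wiring Event Hit into Set Material (StaticMeshComponent).
        MeshComponent->SetMaterial(0, RedMaterial);
    }
};

The Other parameter corresponds to the Event Hit node's Other output pin, which the next section uses to restrict the effect to player projectiles.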
This is the beginning of a framework of gameplay that can be used to get enemies to respond to the player's actions. However, you also might have noticed that the target cylinder changes color even when the player runs into it directly. Remember that we wanted the cylinder target to become red only when hit by a player projectile, and not because of any other object colliding with it. Unforeseen results like this are common whenever scripting is involved, and the best way to avoid them is to check your work by playing the game as you construct it as often as possible. To fix our Blueprint so that the cylinder target changes color only in response to a player projectile, return to the CylinderTarget_Blueprint tab and look at the Event Hit node again. The remaining output pins on the Event Hit node are variables that store data about the event that can be passed to other nodes. The color of the pins represents the kind of data variable it passes. Blue pins pass objects, such as actors, whereas red pins contain a boolean (true or false) variable. You will learn more about these pin types as we get into more complicated Blueprints; for now, we only need to concern ourselves with the blue output pin labeled Other, which contains the data about which other actor performed the hitting to fire this event. This will be useful in order for us to ensure that the cylinder target changes color only when hit by a projectile fired from the player, rather than changing color because of any other actors that might bump into it. To ensure that we are only triggering in response to a player projectile hit, click and drag a wire from the Other output pin to empty space. In this search window, type projectile. You should see some results that look similar to the following screenshot. The node we are looking for is called Cast To FirstPersonProjectile: FirstPersonProjectile is a Blueprint included in Unreal Engine 4's First Person template that controls the behavior of the projectiles that are fired from your gun. This node uses casting to ensure that the action attached to the execution pin of this node occurs only if the actor hitting the cylinder target matches the object referenced by the casting node. When the node appears, you should already see a blue wire between the Other output pin of the Event Hit node and the Object pin of the casting node. If not, you can generate it manually by clicking and dragging from one pin to the other. You should also remove the connections between the Event Hit and Set Material node execution pins so that the casting node can be linked between them. Removing a wire between two pins can be done by holding down the Alt key and clicking on a pin. Once you have linked the three nodes, the event graph should look like what is shown in the following screenshot: Now compile, save, and click on the play button again to test. This time, you should notice that the cylinder target retains its default color when you walk up and touch it, but does turn red when fired upon by your gun. Summary In this article, the skills you have learned will serve as a strong foundation for building more complex interactive behavior. You may wish to spend some additional time modifying your prototype to include a more appealing layout, or feature faster moving targets. 
One of the greatest benefits of Blueprint's visual scripting is the speed at which you can test new ideas, and each additional skill that you learn will unlock exponentially more possibilities for game experiences that you can explore and prototype. Resources for Article: Further resources on this subject: Creating a Brick Breaking Game [article] Configuration and Handy Tweaks for UDK [article] Unreal Development Toolkit: Level Design HQ [article]
Sprites, Camera, Actions!

Packt
20 Jul 2015
20 min read
In this article by Stephen Haney, author of the book Game Development with Swift, we will focus on building great gameplay experiences while SpriteKit performs the mechanical work of the game loop. To draw an item to the screen, we create a new instance of a SpriteKit node. These nodes are simple; we attach a child node to our scene, or to existing nodes, for each item we want to draw. Sprites, particle emitters, and text labels are all considered nodes in SpriteKit. The topics in this article include:

Drawing your first sprite
Animation: movement, scaling, and rotation
Working with textures
Organizing art into texture atlases

For this article, you need to first install Xcode, and then create a project. The project automatically creates the GameScene.swift file as the default file to store the scene of your new game.

Drawing your first sprite

It is time to write some game code – fantastic! Open your GameScene.swift file and find the didMoveToView function. Recall that this function fires every time the game switches to this scene. We will use this function to get familiar with the SKSpriteNode class. You will use SKSpriteNode extensively in your game, whenever you want to add a new 2D graphic entity. The term sprite refers to a 2D graphic or animation that moves around the screen independently from the background. Over time, the term has developed to refer to any game object on the screen in a 2D game. We will create and draw your first sprite in this article: a happy little bee.

Building a SKSpriteNode class

Let's begin by drawing a blue square to the screen. The SKSpriteNode class can draw both texture graphics and solid blocks of color. It is often helpful to prototype your new game ideas with blocks of color before you spend time on artwork. To draw the blue square, add an instance of SKSpriteNode to the game:

override func didMoveToView(view: SKView) {
    // Instantiate a constant, mySprite, instance of SKSpriteNode
    // The SKSpriteNode constructor can set color and size
    // Note: UIColor is a UIKit class with built-in color presets
    // Note: CGSize is a type we use to set node sizes
    let mySprite = SKSpriteNode(color: UIColor.blueColor(),
        size: CGSize(width: 50, height: 50))
    // Assign our sprite a position in points, relative to its
    // parent node (in this case, the scene)
    mySprite.position = CGPoint(x: 300, y: 300)
    // Finally, we need to add our sprite node into the node tree.
    // Call the SKScene's addChild function to add the node
    // Note: In Swift, 'self' is an automatic property
    // on any type instance, exactly equal to the instance itself
    // So in this instance, it refers to the GameScene instance
    self.addChild(mySprite)
}

Go ahead and run the project. You should see a similar small blue square appear in your simulator:

Swift allows you to define variables as constants, which can be assigned a value only once. For best performance, use let to declare constants whenever possible. Declare your variables with var when you need to alter the value later in your code.

Adding animation to your Toolkit

Before we dive back into sprite theory, we should have some fun with our blue square. SpriteKit uses action objects to move sprites around the screen. Consider this example: if our goal is to move the square across the screen, we must first create a new action object to describe the animation. Then, we instruct our sprite node to execute the action. I will illustrate this concept with many examples in the article.
For now, add this code in the didMoveToView function, below the self.addChild(mySprite) line:

// Create a new constant for our action instance
// Use the moveTo action to provide a goal position for a node
// SpriteKit will tween to the new position over the course of the
// duration, in this case 5 seconds
let demoAction = SKAction.moveTo(CGPoint(x: 100, y: 100),
    duration: 5)
// Tell our square node to execute the action!
mySprite.runAction(demoAction)

Run the project. You will see our blue square slide across the screen towards the (100,100) position. This action is re-usable; any node in your scene can execute this action to move to the (100,100) position. As you can see, SpriteKit does a lot of the heavy lifting for us when we need to animate node properties. Inbetweening, or tweening, uses the engine to animate smoothly between a start frame and an end frame. Our moveTo animation is a tween; we provide the start frame (the sprite's original position) and the end frame (the new destination position). SpriteKit generates the smooth transition between our values. Let's try some other actions. The SKAction.moveTo function is only one of many options. Try replacing the demoAction line with this code:

let demoAction = SKAction.scaleTo(4, duration: 5)

Run the project. You will see our blue square grow to four times its original size.

Sequencing multiple animations

We can execute actions together simultaneously or one after the other with action groups and sequences. For instance, we can easily scale our sprite larger and spin it at the same time. Delete all of our action code so far and replace it with this code:

// Scale up to 4x initial scale
let demoAction1 = SKAction.scaleTo(4, duration: 5)
// Rotate 5 radians
let demoAction2 = SKAction.rotateByAngle(5, duration: 5)
// Group the actions
let actionGroup = SKAction.group([demoAction1, demoAction2])
// Execute the group!
mySprite.runAction(actionGroup)

When you run the project, you will see a spinning, growing square. Terrific! If you want to run these actions in sequence (rather than at the same time), change SKAction.group to SKAction.sequence:

// Group the actions into a sequence
let actionSequence = SKAction.sequence([demoAction1, demoAction2])
// Execute the sequence!
mySprite.runAction(actionSequence)

Run the code and watch as your square first grows and then spins. Good. You are not limited to two actions; we can group or sequence as many actions together as we need. We have only used a few actions so far; feel free to explore the SKAction class and try out different action combinations before moving on.

Recapping your first sprite

Congratulations, you have learned to draw a non-textured sprite and animate it with SpriteKit actions. Next, we will explore some important positioning concepts, and then add game art to our sprites. Before you move on, make sure your didMoveToView function matches mine, and that your sequenced animation is firing properly.
Here is my code up to this point:

override func didMoveToView(view: SKView) {
    // Instantiate a constant, mySprite, instance of SKSpriteNode
    let mySprite = SKSpriteNode(color: UIColor.blueColor(),
        size: CGSize(width: 50, height: 50))
    // Assign our sprite a position
    mySprite.position = CGPoint(x: 300, y: 300)
    // Add our sprite node into the node tree
    self.addChild(mySprite)
    // Scale up to 4x initial scale
    let demoAction1 = SKAction.scaleTo(CGFloat(4), duration: 2)
    // Rotate 5 radians
    let demoAction2 = SKAction.rotateByAngle(5, duration: 2)
    // Group the actions into a sequence
    let actionSequence = SKAction.sequence([demoAction1,
        demoAction2])
    // Execute the sequence!
    mySprite.runAction(actionSequence)
}

The story on positioning

SpriteKit uses a grid of points to position nodes. In this grid, the bottom left corner of the scene is (0,0), with a positive X-axis to the right and a positive Y-axis to the top. Similarly, on the individual sprite level, (0,0) refers to the bottom left corner of the sprite, while (1,1) refers to the top right corner.

Alignment with anchor points

Each sprite has an anchorPoint property, or an origin. The anchorPoint property allows you to choose which part of the sprite aligns to the sprite's overall position. The default anchor point is (0.5,0.5), so a new SKSpriteNode centers perfectly on its position. To illustrate this, let us examine the blue square sprite we just drew on the screen. Our sprite is 50 pixels wide and 50 pixels tall, and its position is (300,300). Since we have not modified the anchorPoint property, its anchor point is (0.5,0.5). This means the sprite will be perfectly centered over the (300,300) position on the scene's grid. Our sprite's left edge begins at 275 and the right edge terminates at 325. Likewise, the bottom starts at 275 and the top ends at 325. The following diagram illustrates our block's position on the grid:

Why do we prefer centered sprites by default? You may think it simpler to position elements by their bottom left corner with an anchorPoint property setting of (0,0). However, the centered behavior benefits us when we scale or rotate sprites:

When we scale a sprite with an anchorPoint property of (0,0), it will only expand up the y-axis and out the x-axis. Rotation actions will swing the sprite in wide circles around its bottom left corner.
A centered sprite, with the default anchorPoint property of (0.5, 0.5), will expand or contract equally in all directions when scaled and will spin in place when rotated, which is usually the desired effect.

There are some cases when you will want to change an anchor point. For instance, if you are drawing a rocket ship, you may want the ship to rotate around the front nose of its cone, rather than its center.

Adding textures and game art

You may want to take a screenshot of your blue box for your own enjoyment later. I absolutely love reminiscing over old screenshots of my finished games when they were nothing more than simple colored blocks sliding around the screen. Now it is time to move past that stage and attach some fun artwork to our sprite.

Downloading the free assets

I am providing a downloadable pack for all of the art assets. I recommend you use these assets so you will have everything you need for our demo game. Alternatively, you are certainly free to create your own art for your game if you prefer. These assets come from an outstanding public domain asset pack from Kenney Game Studio. I am providing a small subset of the asset pack that we will use in our game.
Download the game art from this URL: http://www.thinkingswiftly.com/game-development-with-swift/assets

More exceptional art

If you like the art, you can download over 16,000 game assets in the same style for a small donation at http://kenney.itch.io/kenney-donation. I do not have an affiliation with Kenney; I just find it admirable that he has released so much public domain artwork for indie game developers. As CC0 assets, you can copy, modify, and distribute the art, even for commercial purposes, all without asking permission. You can read the full license here: https://creativecommons.org/publicdomain/zero/1.0/

Drawing your first textured sprite

Let us use some of the graphics you just downloaded. We will start by creating a bee sprite. We will add the bee texture to our project, load the image onto a SKSpriteNode class, and then size the node for optimum sharpness on retina screens.

Add the bee image to your project

We need to add the image files to our Xcode project before we can use them in the game. Once we add the images, we can reference them by name in our code; SpriteKit is smart enough to find and implement the graphics. Follow these steps to add the bee image to the project:

Right-click on your project in the project navigator and click on Add Files to "Pierre Penguin Escapes the Antarctic" (or the name of your game). Refer to this screenshot to find the correct menu item:
Browse to the asset pack you downloaded and locate the bee.png image inside the Enemies folder.
Check Copy items if needed, then click Add.

You should now see bee.png in your project navigator.

Loading images with SKSpriteNode

It is quite easy to draw images to the screen with SKSpriteNode. Start by clearing out all of the code we wrote for the blue square inside the didMoveToView function in GameScene.swift. Replace didMoveToView with this code:

override func didMoveToView(view: SKView) {
    // set the scene's background to a nice sky blue
    // Note: UIColor uses a scale from 0 to 1 for its colors
    self.backgroundColor = UIColor(red: 0.4, green: 0.6, blue: 0.95, alpha: 1.0);
    // create our bee sprite node
    let bee = SKSpriteNode(imageNamed: "bee.png")
    // size our bee node
    bee.size = CGSize(width: 100, height: 100)
    // position our bee node
    bee.position = CGPoint(x: 250, y: 250)
    // attach our bee to the scene's node tree
    self.addChild(bee)
}

Run the project and witness our glorious bee – great work!

Designing for retina

You may notice that our bee image is quite blurry. To take advantage of retina screens, assets need to be twice the pixel dimensions of their node's size property (for most retina screens), or three times the node size for the iPhone 6 Plus. Ignore the height for a moment; our bee node is 100 points wide, but the PNG file is only 56 pixels wide. The PNG file needs to be 300 pixels wide to look sharp on the iPhone 6 Plus, or 200 pixels wide to look sharp on 2x retina devices. SpriteKit will automatically resize textures to fit their nodes, so one approach is to create a giant texture at the highest retina resolution (three times the node size) and let SpriteKit resize the texture down for lower density screens. However, there is a considerable performance penalty, and older devices can even run out of memory and crash from the huge textures.

The ideal asset approach

These double- and triple-sized retina assets can be confusing to new iOS developers. To solve this issue, Xcode normally lets you provide three image files for each texture. For example, our bee node is currently 100 points wide and 100 points tall.
In a perfect world, you would provide the following images to Xcode:

Bee.png (100 pixels by 100 pixels)
Bee@2x.png (200 pixels by 200 pixels)
Bee@3x.png (300 pixels by 300 pixels)

However, there is currently an issue that prevents 3x textures from working correctly with texture atlases. Texture atlases group textures together and increase rendering performance dramatically (we will implement our first texture atlas in the next section). I hope that Apple will upgrade texture atlases to support 3x textures in Swift 2. For now, we need to choose between texture atlases and 3x assets for the iPhone 6 Plus.

My solution for now

In my opinion, texture atlases and their performance benefits are key features of SpriteKit. I will continue using texture atlases, thus serving 2x images to the iPhone 6 Plus (which still looks fairly sharp). This means that we will not be using any 3x assets. Further simplifying matters, Swift only runs on iOS7 and higher. The only non-retina devices that run iOS7 are the aging iPad 2 and iPad mini 1st generation. If these older devices are important for your finished games, you should create both standard and 2x images for your games. Otherwise, you can safely ignore non-retina assets with Swift. This means that we will only use double-sized images. The images in the downloadable asset bundle forgo the 2x suffix, since we are only using this size. Once Apple updates texture atlases to use 3x assets, I recommend that you switch to the methodology outlined in The ideal asset approach section for your games.

Hands-on with retina in SpriteKit

Our bee image illustrates how this all works: Because we set an explicit node size, SpriteKit automatically resizes the bee texture to fit our 100-point wide, 100-point tall node. This automatic size-to-fit is very handy, but notice that we have actually slightly distorted the aspect ratio of the image. If we do not set an explicit size, SpriteKit sizes the node (in points) to match the texture's dimensions (in pixels). Go ahead and delete the line that sets the size for our bee node and re-run the project. SpriteKit maintains the aspect ratio automatically, but the smaller bee is still fuzzy. That is because our new node is 56 points by 48 points, matching our PNG file's pixel dimensions of 56 pixels by 48 pixels . . . yet our PNG file needs to be 112 pixels by 96 pixels for a sharp image at this node size on 2x retina screens. We want a smaller bee anyway, so we will resize the node rather than generate larger artwork in this case. Set the size property of your bee node, in points, to half the size of the texture's pixel resolution:

// size our bee in points:
bee.size = CGSize(width: 28, height: 24)

Run the project and you will see a smaller, crystal sharp bee, as in this screenshot:

Great! The important concept here is to design your art files at twice the pixel resolution of your node point sizes to take advantage of 2x retina screens, or three times the point sizes to take full advantage of the iPhone 6 Plus. Now we will look at organizing and animating multiple sprite frames.

Organizing your assets

We will quickly overrun our project navigator with image files if we add all our textures as we did with our bee. Luckily, Xcode provides several solutions.

Exploring Images.xcassets

We can store images in an .xcassets file and refer to them easily from our code. This is a good place for our background images: Open Images.xcassets from your project navigator.
We do not need to add any images here now but, in the future, you can drag image files directly into the image list, or right-click, then Import. Notice that the SpriteKit demo's spaceship image is stored here. We do not need it anymore, so we can right-click on it and choose Remove Selected Items to delete it.

Collecting art into texture atlases

We will use texture atlases for most of our in-game art. Texture atlases organize assets by collecting related artwork together. They also increase performance by optimizing all of the images inside each atlas as if they were one texture. SpriteKit only needs one draw call to render multiple images out of the same texture atlas. Plus, they are very easy to use! Follow these steps to build your bee texture atlas:

We need to remove our old bee texture. Right-click on bee.png in the project navigator and choose Delete, then Move to Trash.
Using Finder, browse to the asset pack you downloaded and locate the Enemies folder.
Create a new folder inside Enemies and name it bee.atlas.
Locate the bee.png and bee_fly.png images inside Enemies and copy them into your new bee.atlas folder. You should now have a folder named bee.atlas containing the two bee PNG files. This is all you need to do to create a new texture atlas – simply place your related images into a new folder with the .atlas suffix.
Add the atlas to your project. In Xcode, right-click on the project folder in the project navigator and click Add Files…, as we did earlier for our single bee texture.
Find the bee.atlas folder and select the folder itself. Check Copy items if needed, then click Add. The texture atlas will appear in the project navigator.

Good work; we organized our bee assets into one collection, and Xcode will automatically create the performance optimizations mentioned earlier.

Updating our bee node to use the texture atlas

We can actually run our project right now and see the same bee as before. Our old bee texture was bee.png, and a new bee.png exists in the texture atlas. Though we deleted the standalone bee.png, SpriteKit is smart enough to find the new bee.png in the texture atlas. We should make sure our texture atlas is working, and that we successfully deleted the old individual bee.png. In GameScene.swift, change our SKSpriteNode instantiation line to use the new bee_fly.png graphic in the texture atlas:

// create our bee sprite
// notice the new image name: bee_fly.png
let bee = SKSpriteNode(imageNamed: "bee_fly.png")

Run the project again. You should see a different bee image, its wings held lower than before. This is the second frame of the bee animation. Next, we will learn to animate between the two frames to create an animated sprite.

Iterating through texture atlas frames

We need to study one more texture atlas technique: we can quickly flip through multiple sprite frames to make our bee come alive with motion. We now have two frames of our bee in flight; it should appear to hover in place if we switch back and forth between these frames. Our node will run a new SKAction to animate between the two frames.
Update your didMoveToView function to match mine (I removed some older comments to save space):

override func didMoveToView(view: SKView) {
    self.backgroundColor = UIColor(red: 0.4, green: 0.6, blue: 0.95, alpha: 1.0)
    // create our bee sprite
    // Note: Remove all prior arguments from this line:
    let bee = SKSpriteNode()
    bee.position = CGPoint(x: 250, y: 250)
    bee.size = CGSize(width: 28, height: 24)
    self.addChild(bee)
    // Find our new bee texture atlas
    let beeAtlas = SKTextureAtlas(named: "bee.atlas")
    // Grab the two bee frames from the texture atlas in an array
    // Note: Check out the syntax explicitly declaring beeFrames
    // as an array of SKTextures. This is not strictly necessary,
    // but it makes the intent of the code more readable, so I
    // chose to include the explicit type declaration here:
    let beeFrames:[SKTexture] = [beeAtlas.textureNamed("bee.png"),
        beeAtlas.textureNamed("bee_fly.png")]
    // Create a new SKAction to animate between the frames once
    let flyAction = SKAction.animateWithTextures(beeFrames,
        timePerFrame: 0.14)
    // Create an SKAction to run the flyAction repeatedly
    let beeAction = SKAction.repeatActionForever(flyAction)
    // Instruct our bee to run the final repeat action:
    bee.runAction(beeAction)
}

Run the project. You will see our bee flap its wings back and forth – cool! You have learned the basics of sprite animation with texture atlases. We will create increasingly complicated animations using this same technique later as well. For now, pat yourself on the back. The result may seem simple, but you have unlocked a major building block towards your first SpriteKit game!

Putting it all together

First, we learned how to use actions to move, scale, and rotate our sprites. Then, we explored animating through multiple frames, bringing our sprite to life. Let us now combine these techniques to fly our bee back and forth across the screen, flipping the texture at each turn. Add this code at the bottom of the didMoveToView function, beneath the bee.runAction(beeAction) line:

// Set up new actions to move our bee back and forth:
let pathLeft = SKAction.moveByX(-200, y: -10, duration: 2)
let pathRight = SKAction.moveByX(200, y: 10, duration: 2)
// These two scaleXTo actions flip the texture back and forth
// We will use these to turn the bee to face left and right
let flipTextureNegative = SKAction.scaleXTo(-1, duration: 0)
let flipTexturePositive = SKAction.scaleXTo(1, duration: 0)
// Combine actions into a cohesive flight sequence for our bee
let flightOfTheBee = SKAction.sequence([pathLeft,
    flipTextureNegative, pathRight, flipTexturePositive])
// Last, create a looping action that will repeat forever
let neverEndingFlight =
    SKAction.repeatActionForever(flightOfTheBee)
// Tell our bee to run the flight path, and away it goes!
bee.runAction(neverEndingFlight)

Run the project. You will see the bee flying back and forth, flapping its wings. You have officially learned the fundamentals of animation in SpriteKit! We will build on this knowledge to create a rich, animated game world for our players.

Summary

You have gained foundational knowledge of sprites, nodes, and actions in SpriteKit and already taken huge strides towards your first game with Swift. You configured your project for landscape orientation, drew your first sprite, and then made it move, spin, and scale. You added a bee texture to your sprite, created an image atlas, and animated through the frames of flight. Terrific work!
Resources for Article: Further resources on this subject: Network Development with Swift [Article] Installing OpenStack Swift [Article] Flappy Swift [Article]
The Blueprint Class

Packt
08 Jul 2015
26 min read
In this article, Nitish Misra, author of the book Learning Unreal Engine Android Game Development, discusses the Blueprint class. With a Blueprint class, you need to do all the scripting, and everything else, only once. A Blueprint class is an entity that contains actors (static meshes, volumes, camera classes, trigger boxes, and so on) and the functionalities scripted into it. Looking once again at our example of the lamp turning on/off, say you want to place 10 such lamps. With a Blueprint class, you would just have to create and script it once, save it, and duplicate it. This is really an amazing feature offered by UE4.

Creating a Blueprint class

To create a Blueprint class, click on the Blueprints button in the Viewport toolbar, and in the dropdown menu, select New Empty Blueprint Class. A window will then open, asking you to pick your parent class, indicating the kind of Blueprint class you wish to create. At the top, you will see the most common classes. These are as follows:

Actor: An Actor, as already discussed, is an object that can be placed in the world (static meshes, triggers, cameras, volumes, and so on, all count as actors)
Pawn: A Pawn is an actor that can be controlled by the player or the computer
Character: This is similar to a Pawn, but has the ability to walk around
Player Controller: This is responsible for giving the Pawn or Character inputs in the game, or controlling it
Game Mode: This is responsible for all of the rules of gameplay
Actor Component: You can create a component using this and add it to any actor
Scene Component: You can create components that you can attach to other scene components

Apart from these, there are other classes that you can choose from. To see them, click on All Classes, which will open a menu listing all the classes you can create a Blueprint from. For our key cube, we will need to create an Actor Blueprint class. Select Actor, which will open another window, asking you where you wish to save it and what to name it. Name it Key_Cube, and save it in the Blueprint folder. When you are satisfied, click on OK and the Actor Blueprint class window will open. The Blueprint class user interface is similar to that of Level Blueprint, but with a few differences. It has some extra windows and panels, which are described as follows:

Components panel: The Components panel is where you can view, and add components to, the Blueprint class. The default component in an empty Blueprint class is DefaultSceneRoot. It cannot be renamed, copied, or removed. However, as soon as you add a component, it will be replaced. Similarly, if you delete all of the components, it will come back. To add a component, click on the Add Component button, which will open a menu from where you can choose which component to add. Alternatively, you can drag an asset from the Content Browser and drop it in either the Graph Editor or the Components panel, and it will be added to the Blueprint class as a component. Components include actors such as static or skeletal meshes, light actors, cameras, audio actors, trigger boxes, volumes, and particle systems, to name a few. When you place a component, it can be seen in the Graph Editor, where you can set its properties, such as size, position, mobility, material (if it is a static mesh or a skeletal mesh), and so on, in the Details panel.
Graph Editor: The Graph Editor is also slightly different from that of Level Blueprint, in that there are additional windows and editors in a Blueprint class. The first window is the Viewport, which is the same as that in the editor. It is mainly used to place actors and set their positions, properties, and so on. Most of the tools you will find in the main Viewport (the editor's Viewport) toolbar are present here as well.

Event Graph: The next window is the Event Graph window, which works the same as a Level Blueprint window. Here, you can script the components that you added in the Viewport and their functionalities (for example, scripting the toggling of the lamp on/off when the player is in proximity and moves away, respectively). Keep in mind that you can only script the functionalities of the components present within the Blueprint class. You cannot use it directly to script the functionalities of any actor that is not a component of the class.

Construction Script: Lastly, there is the Construction Script window. This is also similar to the Event Graph, in that you can set up and connect nodes, just as in the Event Graph. The difference here is that these nodes are activated when you are constructing the Blueprint class. They do not work during runtime, which is when the Event Graph scripts work. You can use the Construction Script to set properties, or to create and add your own property for any of the components you wish to alter during construction.

Let's begin creating the Blueprint class for our key cubes.

Viewport

The first things we need are the components. We require three components: a cube, a trigger box, and a PostProcessVolume. In the Viewport, click on the Add Components button, and under Rendering, select Static Mesh. It will add a Static Mesh component to the class. You now need to specify which Static Mesh you want to add to the class. With the Static Mesh actor selected in the Components panel, in the actor's Details panel, under the Static Mesh section, click on the None button and select TemplateCube_Rounded. As soon as you set the mesh, it will appear in the Viewport. With the cube selected, decrease its scale (located in the Details panel) from 1 to 0.2 along all three axes. The next thing we need is a trigger box. Click on the Add Component button and select Box Collision in the Collision section. Once added, increase its scale from 1 to 9 along all three axes, and place it in such a way that its bottom is in line with the bottom of the cube.

The Construction Script

You could set the cube's material in the Details panel itself by clicking on the Override Materials button in the Rendering section and selecting the key cube material. However, we are going to assign its material using Construction Script. Switch to the Construction Script tab. You will see a node called Construction Script, which is present by default. You cannot delete this node; this is where the script starts. However, before we can script it in, we will need to create a variable of the type Material. In the My Blueprint section, click on Add New and select Variable in the dropdown menu. Name this variable Key Cube Material, and change its type from Bool (which is the default variable type) to Material in the Details panel. Also, be sure to check the Editable box so that we can edit it from outside the Blueprint class. Next, drag the Key Cube Material variable from the My Blueprint panel, drop it in the Graph Editor, and select Set when the window opens up.
Connect this to the output pin of the Construction Script node. Repeat this process, only this time select Get, and connect it to the input pin of Key Cube Material. Right-click in the Graph Editor window and type Set Material into the search bar. You should see Set Material (Static Mesh). Click on it to add it to the graph. This node already has a reference to the Static Mesh actor (TemplateCube_Rounded), so we will not have to create a reference node. Connect this to the Set node. Finally, drag Key Cube Material from My Blueprint, drop it in the Graph Editor, select Get, and connect it to the Material input pin. When you are done, hit Compile. We will now be able to set the cube's material from outside the Blueprint class.

Let's test it out. Add the Blueprint class to the level. You will see a TemplateCube_Rounded actor added to the scene. In its Details panel, under the Default section, you will see a Key Cube Material option. This is the variable we created inside our Construction Script. Any material we set here will be applied to the cube. So, click on None and select KeyCube_Material. As soon as you select it, you will see the material on the cube. This is one of the many things you can do using the Construction Script. For now, this will do.

The Event Graph

We now need to script the key cube's functionality. This is more or less the same as what we did in the Level Blueprint with our first key cube, with some small differences. In the Event Graph panel, the first thing we are going to script is enabling and disabling input when the player overlaps the trigger box and stops overlapping it, respectively. In the Components section, right-click on Box. This will open a menu. Mouse over Add Event and select Add OnComponentBeginOverlap. This will add a Begin Overlap node to the Graph Editor. Next, we are going to need a Cast node. A Cast node is used to specify which actor you want to use. Right-click in the Graph Editor and add a Cast to Character node. Connect this to the OnComponentBeginOverlap node, and connect the Other Actor pin to the Object pin of the Cast to Character node. Finally, add an Enable Input node and a Get Player Controller node and connect them as we did in the Level Blueprint. Next, we are going to add an event for when the player stops overlapping the box. Again, right-click on Box and add an OnComponentEndOverlap node. Do exactly what you did with the OnComponentBeginOverlap node; only here, instead of adding an Enable Input node, add a Disable Input node. The setup should look something like this:

You can move the key cube we had placed earlier on top of the pedestal, set it to hidden, and put the key cube Blueprint class in its place. Also, make sure that you set the collision response of the trigger actor to Ignore. The next step is scripting the destruction of the key cube when the player touches the screen. This, too, is similar to what we did in the Level Blueprint, with a few differences. First, add a Touch node and a Sequence node, and connect them to each other. Next, we need a Destroy Component node, which you can find under Components | Destroy Component (Static Mesh). This node already has a reference to the key cube (Static Mesh) inside it, so you do not have to create an external reference and connect it to the node. Connect this to the Then 0 pin. We also need to activate the trigger after the player has picked up the key cube.
Now, since we cannot call functions on actors outside the Blueprint class directly (as we could in the Level Blueprint), we need to create a variable. This variable will be of the type Trigger Box. The way this works is that once you have created a Trigger Box variable, you can assign it to any trigger in the level, and the function will be called on that particular trigger. With that in mind, in the My Blueprint panel, click on Add New and create a variable. Name this variable Activated Trigger Box, and set its type to Trigger Box. Finally, make sure you tick the Editable box; otherwise, you will not be able to assign any trigger to it. After doing that, create a Set Collision Response to All Channels node (uncheck the Context Sensitive box), and set the New Response option to Overlap. For the target, drag the Activated Trigger Box variable, drop it in the Graph Editor, select Get, and connect it to the Target input.

Finally, for the Post Process Volume, we will need to create another variable, of the type PostProcessVolume. You can name this variable Visual Indicator, again ensuring that the Editable box is checked. Add this variable to the Graph Editor as well. Next, click on its pin, drag it out, and release it, which will open the actions menu. Here, type in Enabled, select Set Enabled, and check Enabled. Finally, add a Delay node and a Destroy Actor node and connect them to the Set Enabled node, in that order. Your setup should look something like this:

Back in the Viewport, you will find that under the Default section of the Blueprint class actor, two more options have appeared: Activated Trigger Box and Visual Indicator (the variables we created). Using these, you can assign which particular trigger box's collision response you want to change, and which post process volume you want to activate and destroy. In front of both variables, you will see a small icon in the shape of an eye dropper. You can use this to choose which external actor you wish to assign to the corresponding variable. Anything you scripted using those variables will take effect on the actor you assigned in the scene. This is one of the many amazing features offered by the Blueprint class. All we need to do now for the remaining key cubes is:

Place them in the level
Using the eye dropper icon located next to the name of each variable, pick the trigger to activate once the player has picked up the key cube, and the post process volume to activate and destroy

In the second room, we have two key cubes: one to activate the large door and the other to activate the door leading to the third room. The first key cube will be placed on the pedestal near the big door. So, with the first key cube selected, using the eye dropper, select the trigger box on the pedestal near the big door for the Activated Trigger Box variable. Then, pick the post process volume inside which the key cube is placed for the Visual Indicator variable. The next thing we need to do is to open the Level Blueprint and script what happens when the player places the key cube on the pedestal near the big door. Doing what we did in the previous room, we set up nodes that will unhide the hidden key cube on the pedestal, and change the collision response of the trigger box around the big door to Overlap, ensuring that it was set to Ignore initially. Test it out! You will find that everything is working as expected. Now, do the same with the remaining key cubes.
Pick which trigger box and which post process volume to activate when the player touches the screen. Then, in the Level Blueprint, script which key cube to unhide, and so on (place the key cubes we had placed earlier on the pedestals and set them to Hidden), and place the Blueprint class key cubes in their place. This is one of the many ways you can use a Blueprint class; as you can see, it takes away a lot of work and hassle. Let us now move on to artificial intelligence.

Scripting basic AI

Coming back to the third room, we are now going to implement AI in our game. We have an AI character in the third room which, when activated, moves. The main objective is to make a path for it with the help of switches and prevent it from falling. When the AI character reaches its destination, it will unlock the key cube, which the player can then pick up and place on the pedestal. We first need to create another Blueprint class, of the type Character, and name it AI_Character. When it has been created, double-click on it to open it. You will see a few components already set up in the Viewport. These are the CapsuleComponent (which is mainly used for collision), the ArrowComponent (to specify which side is the front of the character and which side is the back), Mesh (used for character animation), and CharacterMovement. All four are there by default and cannot be removed. The only thing we need to do here is add a Static Mesh for our character, which will be TemplateCube_Rounded. Click on Add Component, add a Static Mesh, and assign it TemplateCube_Rounded (in its Details panel). Next, scale this cube to 0.2 along all three axes and move it towards the bottom of the CapsuleComponent, so that it does not float in midair. This is all we require for our AI character. The rest we will handle in the Level Blueprint.

Next, place AI_Character into the scene on the player's side of the pit, with all of the switches. Place it directly over the Target Point actor. Next, open up the Level Blueprint, and let's begin scripting. The left-most switch will be used to activate the AI character, and the remaining three will be used to draw the parts of a path on which it will walk to reach the other side. To move the AI character, we will need an AI Move To node. The first thing we need is an overlapping event for the trigger over the first switch, which will enable the input; otherwise, the AI character would start moving whenever the player touches the screen, which we do not want. Set up an Overlap event, an Enable Input node, and a Gate node. Connect the Overlap event to the Enable Input node, and then to the Gate node's Open input. The next thing is to create a Touch node. To this, we will attach an AI Move To node. You can either type it in or find it under the AI section. Once created, attach it to the Gate node's Exit pin. We now need to specify to the node which character we want to move, and where it should move to. To specify which character we want to move, select the AI character in the Viewport, and in the Level Blueprint's Graph Editor, right-click and create a reference to it. Connect it to the Pawn input pin. Next, for the location, we want the AI character to move towards the second Target Point actor, located on the other side of the pit. But first, we need to get its location in the world. With it selected, right-click in the Graph Editor, and type in Get Actor Location. This node returns an actor's location (coordinates) in the world (for the actor connected to it).
This will create a Get Actor Location node, with the Target Point actor connected to its input pin. Finally, connect its Return Value to the Destination input of the AI Move To node. If you were to test it out, you would find that it works fine, except for one thing: the AI character stops when it reaches the edge of the pit. We want it to fall off the pit if there is no path. For that, we will need a Nav Link Proxy actor. A Nav Link Proxy actor is used when an AI character has to step outside the Nav Mesh temporarily (for example, jumping between ledges). We will need this if we want our AI character to fall off the ledge. You can find it in the All Classes section in the Modes panel. Place it in the level. The actor is depicted by two cylinders with a curved arrow connecting them. We want the first cylinder to be on one side of the pit and the other cylinder on the other side. Using the Scale tool, increase the size of the Nav Link Proxy actor. When placing the Nav Link Proxy actor, keep two things in mind:

Make sure that both cylinders intersect the green area; otherwise, the actor will not work
Ensure that both cylinders are in line with the AI character; otherwise, it will not move in a straight line but instead towards wherever the cylinder is located

Once it is placed, you will see that the AI character falls off when it reaches the edge of the pit. We are not done yet. We need to bring the AI character back to its starting position so that the player can start over (or else the player will not be able to progress). For that, we need to first place a trigger at the bottom of the pit, making sure that if the AI character does fall in, it overlaps the trigger. This trigger will perform two actions: first, it will teleport the AI character to its initial location (with the help of the first Target Point); second, it will stop the AI Move To node, or the character will keep moving even after it has been teleported. After placing the trigger, open the Level Blueprint and create an Overlap event for the trigger box. To this, we will add a Sequence node, since we are calling two separate functions when the AI character overlaps the trigger. The first node we are going to create is a Teleport node. Here, we can specify which actor to teleport, and where to. The actor we want to teleport is the AI character, so create a reference to it and connect it to the Target input pin. As for the destination, first use the Get Actor Location function to get the location of the first Target Point actor (upon which the AI character is initially placed), and connect it to the Dest Location input. To stop the AI character's movement, right-click anywhere in the Graph Editor, and first uncheck the Context Sensitive box, since we cannot use this function directly on our AI character. What we need is a Stop Active Movement node. Type it into the search bar and create it. Connect this to the Then 1 output pin, and attach a reference to the AI character to it. It will automatically be converted from a Character reference into a CharacterMovement component reference. This is all that we need to script for our AI in the third room. There is one more thing left: how to unlock the key cube. In the fourth room, we are going to use the same principle. Here, we are going to make a chain of AI Move To nodes, each connected to the previous one's On Success output pin. This means that when the AI character has successfully reached one destination (Target Point actor), it will move on to the next, and so on.
Using this, and what we have just discussed about AI, script the path that the AI will follow.

Packaging the project

Another way of testing the game on your device is to first package it, import it to the device, install it, and then play it. But first, we should discuss some settings regarding packaging in general, and packaging for Android in particular.

The Maps & Modes settings

These settings deal with the maps (scenes) and the game mode of the final game. In the Editor, click on Edit and select Project Settings. In the Project Settings window, under the Project category, select Maps & Modes. Let's go over the various sections:

Default Maps: Here, you can set which map the Editor should open when you open the project. You can also set which map the game should open when it is run. The first thing you need to change is the main menu map we had created. To do this, click on the downward arrow next to Game Default Map and select Main_Menu.
Local Multiplayer: If your game has local multiplayer, you can alter a few settings regarding whether the game should have a split screen. If so, you can set what the layout should be for two and three players.
Default Modes: In this section, you can set the default game mode the game should run with. The game mode includes things such as the Default Pawn class, the HUD class, the Controller class, and the Game State class. For our game, we will stick to MyGame.
Game Instance: Here, you can set the default Game Instance class.

The Packaging settings

There are settings you can tweak when packaging your game. To access those settings, first go to Edit and open the Project Settings window. Once it is open, under the Project section, click on Packaging. Here, you can view and tweak the general settings related to packaging the project. There are two sections: Project and Packaging. Under the Project section, you can set options such as the directory of the packaged project; the build configuration (debug, development, or shipping); and whether you want UE4 to build the whole project from scratch every time you build, or only build the modified files and assets. Under the Packaging section, you can set things such as whether you want all files to be under one .pak file instead of many individual files, whether you want those .pak files in chunks, and so on. Clicking on the downward arrow will open the advanced settings. Here, since we are packaging our game for distribution, check the For Distribution checkbox.

The Android app settings

In the preceding section, we talked about the general packaging settings. We will now talk about settings specific to Android apps. These can be found in Project Settings, under the Platforms section. In this section, click on Android to open the Android app settings. Here you will find all the settings and properties you need to package your game. At the top, the first thing you should do is configure your project for Android. If your project is not configured, the Editor will prompt you to do so (since version 4.7, UE4 automatically creates the AndroidManifest.xml file for you). Do this before you do anything else. There are various sections here. These are:

APKPackaging: In this section, you can find options such as opening the folder where all of the build files are located, setting the package's name, setting the version number, deciding what the default orientation of the game should be, and so on.
Advanced APKPackaging: This section contains more advanced packaging options, such as one to add extra settings to the .apk files.
Build: To tweak the settings in the Build section, you first need the source code, which is available from GitHub. Here, you can set things such as whether you want the build to support x86, OpenGL ES2, and so on.
Distribution Signing: This section deals with signing your app. It is a requirement on Android that all apps have a digital signature, so that Android can identify the developers of the app. You can learn more about digital signatures by clicking on the hyperlink at the top of the section. When you generate the key for your app, be sure to keep it in a safe and secure place, since if you lose it, you will not be able to modify or update your app on Google Play.
Google Play Services: Android apps are downloaded via the Google Play store. This section deals with things such as enabling/disabling Google Play support, setting your app's ID, the Google Play license key, and so on.
Icons: In this section, you can set your game's icons. You can set various sizes of icons depending upon the screen density of the devices you are aiming to develop for. You can get more information about icons by clicking on the hyperlink at the top of the section.
Data Cooker: Finally, in this section, you can set how you want the audio in the game to be encoded.

For our game, the first thing you need to set is the Android Package Name, which is found in the APKPackaging section. The format of the name is com.YourCompany.[PROJECT]. Here, replace YourCompany with the name of your company and [PROJECT] with the name of your project.

Building a package

To package your project, in the Editor go to File | Package Project | Android. You will see different texture formats to package the project with. These are as follows:

ATC: Use this format if you have a device with a Qualcomm Snapdragon processor.
DXT: Use this format if your device has a Tegra graphical processing unit (GPU).
ETC1: You can use this for any device. However, this format does not accept textures with alpha channels; those textures will be left uncompressed, making your game require more space.
ETC2: Use this format if you have a MALI-based device.
PVRTC: Use this format if you have a device with a PowerVR GPU.

Once you have decided which format to use, click on it to begin the packaging process. A window will open asking you to specify which folder you want the package to be stored in. Once you have decided where to store the package, click OK and the build process will commence. When it starts, just as with launching the project, a small window will pop up at the bottom-right corner of the screen notifying you that the build process has begun. You can open the output log and cancel the build process from there. Once the build process is complete, go to the folder you chose. You will find a .bat file of the game. Provided you checked the Package game data inside .apk? option (which is located in the Project Settings window, in the Android category under the APKPackaging section), you will also find an .apk file of the game. The .bat file directly installs the game from the system onto your device. To do so, first connect your device to the system, then double-click on the .bat file. This will open a command prompt window. Once it has opened, you do not need to do anything; just wait until the installation process finishes. Once the installation is done, the game will be on your device, ready to be executed. To use the .apk file, you have to do things a bit differently. An .apk file installs the game when it is on the device.
For that, you need to perform the following steps:

1. Connect the device.
2. Create a copy of the .apk file.
3. Paste it in the device's storage.
4. Execute the .apk file from the device.

The installation process will then begin. Once it has completed, you can play the game.

Summary

In this article, we worked with Blueprints and discussed how they work. We also discussed Level Blueprints and the Blueprint class, and covered how to script basic AI. Finally, we discussed how to package the final product so that it can be uploaded to the Google Play Store for people to download.

Resources for Article:

Further resources on this subject:
Flash Game Development: Creation of a Complete Tetris Game [article]
Adding Finesse to Your Game [article]
Saying Hello to Unity and Android [article]

Integrating Google Play Services

Packt
08 Jul 2015
41 min read
In this article, based on Mastering Android Game Development by Raul Portales, we will cover the tools that Google Play Services offers for game developers. We'll see the integration of achievements and leaderboards in detail, and take an overview of events and quests, saved games, and turn-based and real-time multiplayer. Google provides Google Play Services as a way to use special features in apps, and the game services subset is the one that interests us the most. Note that Google Play Services is updated as an app that is independent from the operating system. This allows us to assume that most players will have the latest version of Google Play Services installed. More and more features are being moved from the Android SDK to Play Services because of this. Play Services offers much more than just services for games, but there is a whole section dedicated exclusively to games: Google Play Game Services (GPGS). These features include achievements, leaderboards, quests, saved games, gifts, and even multiplayer support. GPGS also comes with a standalone app called "Play Games" that shows users the games they have been playing, their latest achievements, and the games their friends play. It is a very interesting way to get exposure for your game. Even as standalone features, achievements and leaderboards are two concepts that most games use nowadays, so why make your own custom ones when you can rely on the ones made by Google? GPGS can be used on many platforms: Android, iOS, and the web, among others. It is most used on Android, since it is included as a part of Google apps. There is extensive step-by-step documentation online, but the details are scattered across different places. We will put them together here and link you to the official documentation for more detailed information. For this article, you are expected to have a developer account and access to the Google Play Developer Console. It is also advisable to know the process of signing and releasing an app. If you are not familiar with it, there is very detailed official documentation at http://developer.android.com/distribute/googleplay/start.html. There are two sides to GPGS: the developer console and the code. We will alternate between one and the other while talking about the different features.

Setting up the developer console

Now that we are approaching the release state, we have to start working with the developer console. The first thing we need to do is to get into the Game services section of the console to create and configure a new game. In the left menu, we have an option labeled Game services. This is where you have to click. Once in the Game services section, click on Add new game:

This brings us to the setup dialog. If you are using other Google services, such as Google Maps or Google Cloud Messaging (GCM), in your game, you should select the second option and move forward. Otherwise, you can just fill in the fields under "I don't use any Google APIs in my game yet" and continue. If you don't know whether you are already using them, you probably aren't. Now, it is time to link a game to it. I recommend that you publish your game beforehand as an alpha release. This will let you select it from the list when you start typing the package name. Publishing the game to the alpha channel before adding it to Game services makes it much easier to configure.
If you are not familiar with signing and releasing your app, check out the official documentation at http://developer.android.com/tools/publishing/app-signing.html.

Finally, there are only two steps we have to take when we link the first app: we need to authorize it and provide branding information. The authorization will generate an OAuth key — which we don't need to use, since it is required for other platforms — and also a game ID. This ID is unique to all the linked apps and we will need it to log in. There is no need to write it down now, though; it can be found easily in the console at any time. Authorizing the app will generate the game ID, which is unique to all linked apps. Note that the app we have added is configured with the release key. If you try the login integration as it is, you will get an error telling you that the app was signed with the wrong certificate. You have two ways to work around this limitation:

Always make a release build to test GPGS integration
Add your debug-signed game as a linked app

I recommend that you add the debug-signed app as a linked app. To do this, we just need to link another app and configure it with the SHA1 fingerprint of the debug key. To obtain it, we have to open a terminal and run the keytool utility:

keytool -exportcert -alias androiddebugkey -keystore <path-to-debug-keystore> -list -v

Note that on Windows, the debug keystore can be found at C:\Users\<USERNAME>\.android\debug.keystore. On Mac and Linux, the debug keystore is typically located at ~/.android/debug.keystore.

Dialog to link the debug application on the Game Services console

Now, we have the game configured. We could continue by creating achievements and leaderboards in the console, but we will put that aside and first make sure that we can sign in and connect with GPGS. The only users who can sign in to GPGS while a game is not published are the testers. You can make the alpha and/or beta testers of a linked app become testers of the game services, and you can also add e-mail addresses by hand. You can modify this in the Testing tab. Only test accounts can access a game that is not published. The e-mail of the owner of the developer console is prefilled as a tester. Just in case you have problems logging in, double-check the list of testers. A game service that is not published will not appear in the feed of the Play Games app, but it will be possible to test and modify it. This is why it is a good idea to keep it in draft mode until the game itself is ready, and publish both the game and the game services at the same time.

Setting up the code

The first thing we need to do is to add the Google Play Services library to our project. This should already have been done by the wizard when we created the project, but I recommend that you double-check it now. The library needs to be added to the build.gradle file of the main module. Note that Android Studio projects contain a top-level build.gradle and a module-level build.gradle for each module. We will modify the one that is under the mobile module. Make sure that the Play Services library is listed under the dependencies:

apply plugin: 'com.android.application'

dependencies {
    compile 'com.android.support:appcompat-v7:22.1.1'
    compile 'com.google.android.gms:play-services:7.3.0'
}

At the time of writing, the latest version is 7.3.0. The basic features have not changed much and they are unlikely to change. You could force Gradle to use a specific version of the library, but in general I recommend that you use the latest version.
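If you want to make that choice explicit, Gradle's dynamic version syntax makes the trade-off visible in the dependencies block itself. This is just a sketch of the two options; the version numbers are examples and will be outdated by the time you read this:

dependencies {
    // Pinned version: builds are reproducible and only change when you update the number
    compile 'com.google.android.gms:play-services:7.3.0'

    // Dynamic version: resolves to the newest 7.x release at build time (less predictable)
    // compile 'com.google.android.gms:play-services:7.+'
}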
Once you have done this, save the changes and click on Sync Project with Gradle Files. To be able to connect with GPGS, we need to let the game know what the game ID is. This is done through the <meta-data> tag in AndroidManifest.xml. You could hardcode the value here, but it is highly recommended that you set it as a resource in your Android project. We are going to create a new file for this under res/values, which we will name play_services.xml. In this file we will put the game ID, but later we will also put the achievement and leaderboard IDs in it. Using a separate file for these values is recommended because they are constants that do not need to be translated:

<application>
    <meta-data android:name="com.google.android.gms.games.APP_ID"
        android:value="@string/app_id" />
    <meta-data android:name="com.google.android.gms.version"
        android:value="@integer/google_play_services_version"/>
    [...]
</application>

Adding this metadata is extremely important. If you forget to update AndroidManifest.xml, the app will crash when you try to sign in to Google Play Services. Note that the integer for the gms version is defined in the library, so we do not need to add it to our file. If you forget to add the game ID to the strings, the app will crash.

Now, it is time to proceed to the sign-in. The process is quite tedious and requires many checks, so Google has released an open source project named BaseGameUtils, which makes it easier. Unfortunately, this project is not a part of the Play Services library and it is not even available as a standalone library, so we have to get it from GitHub (either check it out or download the source as a ZIP file). BaseGameUtils abstracts us from the complexity of handling the connection with Play Services. Even more cumbersome, BaseGameUtils is not available as a standalone download and has to be downloaded together with another project. The fact that this significant piece of code is not a part of the official library makes it quite tedious to set up. Why it has been done like this is something that I do not comprehend myself. The project that contains BaseGameUtils is called android-basic-samples and it can be downloaded from https://github.com/playgameservices/android-basic-samples.

Adding BaseGameUtils is not as straightforward as we would like it to be. Once android-basic-samples is downloaded, open your game project in Android Studio. Click on File > Import Module and navigate to the directory where you downloaded android-basic-samples. Select the BaseGameUtils module in the BasicSamples/libraries directory and click on OK. Finally, update the dependencies in the build.gradle file for the mobile module and sync Gradle again:

dependencies {
    compile project(':BaseGameUtils')
    [...]
}

After all these steps to set up the project, we are finally ready to begin the sign-in. We will make our main Activity extend from BaseGameActivity, which takes care of all the handling of the connections and signing in with Google Play Services. One more detail: until now, we were using Activity and not FragmentActivity as the base class for YassActivity (BaseGameActivity extends from FragmentActivity), and this change will mess with the behavior of our dialogs when calling navigateBack. We can either change the base class of BaseGameActivity or modify navigateBack to perform a pop on the fragment navigation back stack.
I recommend the second approach:

public void navigateBack() {
    // Do a pop on the navigation history
    getFragmentManager().popBackStack();
}

This util class has been designed to work with single-activity games. It can be used with multiple activities, but it is not straightforward. This is another good reason to keep the game in a single activity. BaseGameUtils is designed to be used in single-activity games.

The default behavior of BaseGameActivity is to try to log in each time the Activity is started. If the user agrees to sign in, the sign-in will happen automatically. But if the user declines, he or she will be asked again several times. I personally find this intrusive and annoying, and I recommend that you only prompt the user to log in to Google Play Services once (and again if the user logs out). We can always provide a login entry point in the app. This is very easy to change. The default number of attempts is set to 3 and it is a part of the code of GameHelper:

// Should we start the flow to sign the user in automatically on startup? If so, up to
// how many times in the life of the application?
static final int DEFAULT_MAX_SIGN_IN_ATTEMPTS = 3;
int mMaxAutoSignInAttempts = DEFAULT_MAX_SIGN_IN_ATTEMPTS;

So, we just have to configure it for our activity, adding one line of code during onCreate to replace the default behavior with the one we want — just try once:

getGameHelper().setMaxAutoSignInAttempts(1);

Finally, there are two methods that we can override to act when the user successfully logs in and when there is a problem: onSignInSucceeded and onSignInFailed. We will use them when we update the main menu at the end of the article.
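As a minimal sketch of what these two overrides tend to look like in our Activity (the method names come from BaseGameActivity; the bodies here are illustrative placeholders):

@Override
public void onSignInSucceeded() {
    // Called by BaseGameActivity once the connection to GPGS is established.
    // A good place to refresh any UI that depends on the signed-in state.
}

@Override
public void onSignInFailed() {
    // Called when sign-in fails or the user cancels the sign-in flow.
    // We should not insist; the main menu will offer a sign-in button instead.
}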
Further use of GPGS is made via the GameHelper and/or the GoogleApiClient, which is a part of the GameHelper. We can obtain a reference to the GameHelper using the getGameHelper method of BaseGameActivity. Now that the user can sign in to Google Play Services, we can continue with achievements and leaderboards. Let's go back to the developer console.

Achievements

We will first define a few achievements in the developer console and then see how to unlock them in the game. Note that to publish any game with GPGS, you need to define at least five achievements. No other feature is mandatory, but achievements are. We need to define at least five achievements to publish a game with Google Play Game Services. If you want to use GPGS with a game that has no achievements, I recommend that you add five dummy secret achievements and let them be. To add an achievement, we just need to navigate to the Achievements tab on the left and click on Add achievement:

The menu to add a new achievement has a few fields that are mostly self-explanatory. They are as follows:

Name: the name that will be shown (can be localized to different languages).
Description: the description of the achievement to be shown (can also be localized to different languages).
Icon: the icon of the achievement as a 512x512 px PNG image. This will be used to show the achievement in the list, and also to generate the locked image and the in-game popup when it is unlocked.
Incremental achievements: if the achievement requires a set of steps to be completed, it is called an incremental achievement and can be shown with a progress bar. We will have an incremental achievement to illustrate this.
Initial state: Revealed/Hidden, depending on whether we want the achievement to be shown or not. When an achievement is revealed, the name and description are visible, so players know what they have to do to unlock it. A hidden achievement, on the other hand, is a secret and can be a funny surprise when unlocked. We will have two secret achievements.
Points: GPGS allows each game to give 1,000 points for unlocking achievements. This gets converted to XP in the player profile on Google Play Games. This can be used to highlight that some achievements are harder than others, and therefore grant a bigger reward. You cannot change these once they are published, so if you plan to have more achievements in the future, plan ahead with the points.
List order: the order in which the achievements are shown. It is not followed all the time, since in the Play Games app the unlocked achievements are shown before the locked ones. It is still handy to be able to rearrange them.

Dialog to add an achievement on the developer console

As we already decided, we will have five achievements in our game, and they will be as follows:

Big Score: score over 100,000 points in one game. This is to be granted while playing.
Asteroid killer: destroy 100 asteroids. This counts asteroids across different games and is an incremental achievement.
Survivor: survive for 60 seconds.
Target acquired: a hidden achievement. Hit 20 asteroids in a row without missing a shot. This is meant to reward players who only shoot when they should.
Target lost: this is supposed to be a funny achievement, granted when you miss with 10 bullets in a row. It is also hidden, because otherwise it would be too easy to unlock.

So, we created some images for them and added them to the console.

The developer console with all the configured achievements

Each achievement has a string ID. We will need these IDs to unlock the achievements in our game, but Google has made it easy for us. We have a link at the bottom named Get resources that pops up a dialog with the string resources we need. We can just copy them from there and paste them into the play_services.xml file we have already created in our project.

Architecture

For our game, given that we only have five achievements, we are going to add the code for achievements directly into the ScoreGameObject. This will mean less code for you to read, so we can focus on how it is done. However, for real production code, I recommend that you define a dedicated architecture for achievements. The recommended architecture is to have an AchievementsManager class that loads all the achievements when the game starts and stores them in three lists:

All achievements
Locked achievements
Unlocked achievements

Then, we have an Achievement base class with an abstract check method that we implement for each one of them:

public boolean check(GameEngine gameEngine, GameEvent gameEvent) {
}

This base class takes care of loading the achievement state from local storage (I recommend using SharedPreferences for this) and modifying it based on the result of check. The achievements check is done at the AchievementManager level using a checkLockedAchievements method that iterates over the list of achievements that can still be unlocked. This method should be called as a part of onEventReceived of GameEngine. This architecture allows you to check only the achievements that are yet to be unlocked, and it also keeps all the achievements included in the game in one dedicated place. In our case, since we are keeping the score inside the ScoreGameObject, we are going to add all the achievements code there; a rough sketch of the manager-based approach follows anyway.
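A minimal sketch of that architecture, assuming the GameEngine and GameEvent types from our project; the class and member names beyond check are illustrative, not from any library:

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public abstract class Achievement {
    // Persisted via SharedPreferences in a full implementation
    protected boolean mUnlocked;

    // Returns true when this event unlocks the achievement
    public abstract boolean check(GameEngine gameEngine, GameEvent gameEvent);
}

class AchievementsManager {
    private final List<Achievement> mLockedAchievements = new ArrayList<Achievement>();
    private final List<Achievement> mUnlockedAchievements = new ArrayList<Achievement>();

    public void checkLockedAchievements(GameEngine gameEngine, GameEvent gameEvent) {
        // Only the achievements that can still be unlocked are checked
        Iterator<Achievement> iterator = mLockedAchievements.iterator();
        while (iterator.hasNext()) {
            Achievement achievement = iterator.next();
            if (achievement.check(gameEngine, gameEvent)) {
                iterator.remove();
                mUnlockedAchievements.add(achievement);
            }
        }
    }
}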
Note that making the GameEngine keep the score, and exposing it as a variable that other objects can read, would also be a recommended design, but it was simpler to do this as a part of ScoreGameObject.

Unlocking achievements

To handle achievements, we need access to an object of the class GoogleApiClient. We can get a reference to it in the constructor of ScoreGameObject:

private final GoogleApiClient mApiClient;

public ScoreGameObject(YassBaseFragment parent, View view, int viewResId) {
    [...]
    mApiClient = parent.getYassActivity().getGameHelper().getApiClient();
}

The parent Fragment has a reference to the Activity, which has a reference to the GameHelper, which has a reference to the GoogleApiClient. Unlocking an achievement requires just a single line of code, but we also need to check whether the user is connected to Google Play Services before trying to unlock an achievement. This is necessary because if the user has not signed in, an exception is thrown and the game crashes. Unlocking an achievement requires just a single line of code.

But this check is not enough. In the edge case when the user logs out manually from Google Play Services (which can be done from the achievements screen), the connection will not be closed and there is no way to know whether he or she has logged out. We are going to create a utility method to unlock achievements that does all the checks, wraps the unlock method in a try/catch block, and makes the API client disconnect if an exception is raised:

private void unlockSafe(int resId) {
    if (mApiClient.isConnecting() || mApiClient.isConnected()) {
        try {
            Games.Achievements.unlock(mApiClient, getString(resId));
        } catch (Exception e) {
            mApiClient.disconnect();
        }
    }
}

Even with all the checks, the code is still very simple. Let's work on the particular achievements we have defined for the game. Even though they are very specific, the methodology of tracking game events and variables and then checking which achievements to unlock is in itself generic, and serves as a real-life example of how to deal with achievements. The achievements we have designed require us to count some game events and also the running time. For the last two achievements, we need to make a new GameEvent for the case when a bullet misses, which we have not created until now. The code in the Bullet object to trigger this new GameEvent is as follows:

@Override
public void onUpdate(long elapsedMillis, GameEngine gameEngine) {
    mY += mSpeedFactor * elapsedMillis;
    if (mY < -mHeight) {
        removeFromGameEngine(gameEngine);
        gameEngine.onGameEvent(GameEvent.BulletMissed);
    }
}

Now, let's work inside ScoreGameObject. We are going to have a method that checks achievements each time an asteroid is hit.
There are three achievements that can be unlocked when that event happens:

Big score, because hitting an asteroid gives us points
Target acquired, because it requires consecutive asteroid hits
Asteroid killer, because it counts the total number of asteroids that have been destroyed

The code is like this:

private void checkAsteroidHitRelatedAchievements() {
    if (mPoints > 100000) {
        // Unlock achievement
        unlockSafe(R.string.achievement_big_score);
    }
    if (mConsecutiveHits >= 20) {
        unlockSafe(R.string.achievement_target_acquired);
    }
    // Increment achievement of asteroids hit
    if (mApiClient.isConnecting() || mApiClient.isConnected()) {
        try {
            Games.Achievements.increment(mApiClient,
                getString(R.string.achievement_asteroid_killer), 1);
        } catch (Exception e) {
            mApiClient.disconnect();
        }
    }
}

We check the total points and the number of consecutive hits to unlock the corresponding achievements. The "Asteroid killer" achievement is a bit of a different case, because it is an incremental achievement. These types of achievements do not have an unlock method, but rather an increment method. Each time we increment the value, progress on the achievement is updated. Once the progress reaches 100 percent, the achievement is unlocked automatically. Incremental achievements are unlocked automatically; we just have to increment their value. This makes incremental achievements much easier to use than tracking the progress locally. But we still need to do all the checks we did for unlockSafe.

We are using a variable named mConsecutiveHits, which we have not initialized yet. This is done inside onGameEvent, which is also where the other hidden achievement, Target lost, is checked. Some initialization for the "Survivor" achievement happens here too:

public void onGameEvent(GameEvent gameEvent) {
    if (gameEvent == GameEvent.AsteroidHit) {
        mPoints += POINTS_GAINED_PER_ASTEROID_HIT;
        mPointsHaveChanged = true;
        mConsecutiveMisses = 0;
        mConsecutiveHits++;
        checkAsteroidHitRelatedAchievements();
    } else if (gameEvent == GameEvent.BulletMissed) {
        mConsecutiveMisses++;
        mConsecutiveHits = 0;
        if (mConsecutiveMisses >= 20) {
            unlockSafe(R.string.achievement_target_lost);
        }
    } else if (gameEvent == GameEvent.SpaceshipHit) {
        mTimeWithoutDie = 0;
    }
    [...]
}

Each time we hit an asteroid, we increment the number of consecutive asteroid hits and reset the number of consecutive misses. Similarly, each time we miss a bullet, we increment the number of consecutive misses and reset the number of consecutive hits. As a side note, each time the spaceship is destroyed, we reset the time without dying, which is used for "Survivor"; but this is not the only time when the time without dying should be updated. We have to reset it when the game starts, and update it inside onUpdate by adding the elapsed milliseconds that have passed:

@Override
public void startGame(GameEngine gameEngine) {
    mTimeWithoutDie = 0;
    [...]
}

@Override
public void onUpdate(long elapsedMillis, GameEngine gameEngine) {
    mTimeWithoutDie += elapsedMillis;
    if (mTimeWithoutDie > 60000) {
        unlockSafe(R.string.achievement_survivor);
    }
}

So, once the game has been running for 60,000 milliseconds since it started or since a spaceship was destroyed, we unlock the "Survivor" achievement. With this, we have all the code we need to unlock the achievements we have created for the game; a small helper for increments, mirroring unlockSafe, is sketched below.
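Since the increment call needs exactly the same connection checks as unlockSafe, it can be wrapped in an equivalent helper. This is a sketch under the same assumptions as unlockSafe; the helper name incrementSafe is ours, not from the library:

private void incrementSafe(int resId, int amount) {
    if (mApiClient.isConnecting() || mApiClient.isConnected()) {
        try {
            // Progress is tracked server-side; the achievement unlocks itself at 100%
            Games.Achievements.increment(mApiClient, getString(resId), amount);
        } catch (Exception e) {
            mApiClient.disconnect();
        }
    }
}

With it, the last block of checkAsteroidHitRelatedAchievements would reduce to a single call: incrementSafe(R.string.achievement_asteroid_killer, 1).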
Let's finish this section with some comments on the system and the developer console:

As a rule of thumb, you can edit most of the details of an achievement until you publish it to production.
Once your achievement has been published, it cannot be deleted. You can only delete an achievement in its prepublished state. There is a button labeled Delete at the bottom of the achievement screen for this.
You can also reset the progress of achievements while they are in draft. This reset happens for all players at once. There is a button labeled Reset achievement progress at the bottom of the achievement screen for this.
Also note that BaseGameActivity does a lot of logging. So, if your device is connected to your computer and you run a debug build, you may see that the game lags sometimes. This does not happen in a release build, for which the logging is removed.

Leaderboards

Since YASS has only one game mode and one score in the game, it makes sense to have only one leaderboard on Google Play Game Services. Leaderboards are managed from their own tab inside the Game services area of the developer console. Unlike achievements, it is not mandatory to have any leaderboards to be able to publish your game. If your game has different levels of difficulty, you can have a leaderboard for each of them. The same applies if the game has several values that measure player progress: you can have a leaderboard for each of them.

Managing leaderboards on the Play Games console

Leaderboards can be created and managed in the Leaderboards tab. When we click on Add leaderboard, we are presented with a form that has several fields to be filled in. They are as follows:

Name: the display name of the leaderboard, which can be localized. We will simply call it High Scores.
Score formatting: this can be Numeric, Currency, or Time. We will use Numeric for YASS.
Icon: a 512x512 px icon to identify the leaderboard.
Ordering: Larger is better / Smaller is better. We are going to use Larger is better, but other score types may be Smaller is better, as in a racing game.
Enable tamper protection: this automatically filters out suspicious scores. You should keep this on.
Limits: if you want to limit the score range that is shown on the leaderboard, you can do it here. We are not going to use this.
List order: the order of the leaderboards. Since we only have one, it is not really important for us.

Setting up a leaderboard on the Play Games console

Now that we have defined the leaderboard, it is time to use it in the game. As with achievements, we have a link where we can get all the resources for the game in XML. So, we proceed to get the ID of the leaderboard and add it to the strings defined in the play_services.xml file. We have to submit the scores at the end of the game (that is, on a GameOver event), but also when the user exits a game via the pause button. To unify this, we will create a new GameEvent called GameFinished that is triggered after a GameOver event and after the user exits the game. We will update the stopGame method of GameEngine, which is called in both cases, to trigger the event:

public void stopGame() {
    if (mUpdateThread != null) {
        synchronized (mLayers) {
            onGameEvent(GameEvent.GameFinished);
        }
        mUpdateThread.stopGame();
        mUpdateThread = null;
    }
    [...]
}

We have to set mUpdateThread to null after sending the event to prevent this code from being run twice. Otherwise, we could send each score more than once. As with achievements, submitting a score is very simple — just a single line of code.
But we also need to check that the GoogleApiClient is connected, and we still have the same edge case when an Exception is thrown, so we need to wrap it in a try/catch block. To keep everything in the same place, we will put this code inside ScoreGameObject:

@Override
public void onGameEvent(GameEvent gameEvent) {
    [...]
    else if (gameEvent == GameEvent.GameFinished) {
        // Submit the score
        if (mApiClient.isConnecting() || mApiClient.isConnected()) {
            try {
                Games.Leaderboards.submitScore(mApiClient,
                    getLeaderboardId(), mPoints);
            } catch (Exception e) {
                mApiClient.disconnect();
            }
        }
    }
}

private String getLeaderboardId() {
    return mParent.getString(R.string.leaderboard_high_scores);
}

This is really straightforward. GPGS is now receiving our scores, and it takes care of the timestamp of each score to create daily, weekly, and all-time leaderboards. It also uses your Google+ circles to show the social scores of your friends. All this is done automatically for you. The final missing piece is to let the player open the leaderboards and achievements UI from the main menu, as well as trigger a sign-in if they are signed out.

Opening the Play Games UI

To complete the integration of achievements and leaderboards, we are going to add buttons to our main menu that open the native UI provided by GPGS. For this, we are going to place two buttons in the bottom-left corner of the screen, opposite the music and sound buttons. We will also check whether we are connected or not; if not, we will show a single sign-in button. For these buttons we will use the official images of GPGS, which are available for developers to use. Note that you must follow the brand guidelines while using the icons; they must be displayed as they are and not modified. This also provides a consistent look and feel across all the games that support Play Games. Since we have seen a lot of layouts already, we are not going to include another one that is almost the same as something we already have.

The main menu with the buttons to view achievements and leaderboards

To handle these new buttons we will, as usual, set the MainMenuFragment as the OnClickListener for the views. We do this in the same place as for the other buttons, inside onViewCreated:

@Override
public void onViewCreated(View view, Bundle savedInstanceState) {
    super.onViewCreated(view, savedInstanceState);
    [...]
    view.findViewById(R.id.btn_achievements).setOnClickListener(this);
    view.findViewById(R.id.btn_leaderboards).setOnClickListener(this);
    view.findViewById(R.id.btn_sign_in).setOnClickListener(this);
}

As with achievements and leaderboards, the work is done using static methods that receive a GoogleApiClient object. We can get this object from the GameHelper that is a part of BaseGameActivity, like this:

GoogleApiClient apiClient = getYassActivity().getGameHelper().getApiClient();

To open the native UI, we have to obtain an Intent and then start an Activity with it. It is important that you use startActivityForResult, since some data is passed back and forth. To open the achievements UI, the code is like this:

Intent achievementsIntent = Games.Achievements.getAchievementsIntent(apiClient);
startActivityForResult(achievementsIntent, REQUEST_ACHIEVEMENTS);

This works out of the box. It automatically grays out the icons of the locked achievements, adds a counter and progress bar to the one that is in progress, and a padlock to the hidden ones.
Similarly, to open the leaderboards UI, we obtain an intent from the Games.Leaderboards class instead:

Intent leaderboardsIntent = Games.Leaderboards.getLeaderboardIntent(
    apiClient,
    getString(R.string.leaderboard_high_scores));
startActivityForResult(leaderboardsIntent, REQUEST_LEADERBOARDS);

In this case, we are asking for a specific leaderboard, since we only have one. We could use getLeaderboardsIntent instead, which would open the Play Games UI with the list of all the leaderboards. We can have an intent to open either the list of leaderboards or a specific one. What remains to be done is to replace the two buttons with the login one when the user is not connected. For this, we will create a method that reads the state and shows and hides the views accordingly:

private void updatePlayButtons() {
    GameHelper gameHelper = getYassActivity().getGameHelper();
    if (gameHelper.isConnecting() || gameHelper.isSignedIn()) {
        getView().findViewById(R.id.btn_achievements).setVisibility(View.VISIBLE);
        getView().findViewById(R.id.btn_leaderboards).setVisibility(View.VISIBLE);
        getView().findViewById(R.id.btn_sign_in).setVisibility(View.GONE);
    } else {
        getView().findViewById(R.id.btn_achievements).setVisibility(View.GONE);
        getView().findViewById(R.id.btn_leaderboards).setVisibility(View.GONE);
        getView().findViewById(R.id.btn_sign_in).setVisibility(View.VISIBLE);
    }
}

This method decides whether to hide the views or make them visible based on the state. We will call it inside the important state-changing methods:

onLayoutCompleted: the first time we open the game, to initialize the UI.
onSignInSucceeded: when the user successfully signs in to GPGS.
onSignInFailed: this can be triggered when we auto sign in and there is no connection. It is important to handle it.
onActivityResult: when we come back from the Play Games UI, in case the user has logged out.

But nothing is as easy as it looks. In fact, when the user signs out and does not exit the game, GoogleApiClient keeps the connection open. Therefore, the value of isSignedIn from GameHelper still returns true. This is the edge case we have been talking about all through the article. As a result of this edge case, there is an inconsistency in the UI: it shows the achievements and leaderboards buttons when it should show the login one. When the user logs out from Play Games, GoogleApiClient keeps the connection open. This can lead to confusion. Unfortunately, this has been marked as "working as expected" by Google. The reason is that the connection is still active, and it is our responsibility to parse the result in the onActivityResult method to determine the new state. But this is not very convenient. Since it is a rare case, we will just go for the easiest solution, which is to wrap the code in a try/catch block and make the user sign in if he or she taps on leaderboards or achievements while not logged in.
This is the code we use to handle a click on the achievements button; the one for leaderboards is equivalent:

else if (v.getId() == R.id.btn_achievements) {
    try {
        GoogleApiClient apiClient =
            getYassActivity().getGameHelper().getApiClient();
        Intent achievementsIntent =
            Games.Achievements.getAchievementsIntent(apiClient);
        startActivityForResult(achievementsIntent, REQUEST_ACHIEVEMENTS);
    } catch (Exception e) {
        GameHelper gameHelper = getYassActivity().getGameHelper();
        gameHelper.disconnect();
        gameHelper.beginUserInitiatedSignIn();
    }
}

Basically, we have the old code to open the achievements activity, but we wrap it in a try/catch block. If an exception is raised, we disconnect the GameHelper and begin a new login using the beginUserInitiatedSignIn method. It is very important to disconnect the gameHelper before we try to log in again; otherwise, the login will not work. We must disconnect from GPGS before we can log in using the method from the GameHelper. Finally, there is the case when the user clicks on the login button, which just triggers the login using the beginUserInitiatedSignIn method of the GameHelper:

if (v.getId() == R.id.btn_sign_in) {
    getYassActivity().getGameHelper().beginUserInitiatedSignIn();
}

Once you have published your game and the game services, achievements and leaderboards will not appear in the game description on Google Play straight away. It is required that "a fair amount of users" have used them. You have done nothing wrong; you just have to wait.

Other features of Google Play services

Google Play Game Services provides more features for game developers than achievements and leaderboards. None of them really fits the game we are building, but it is useful to know they exist in case your game needs them. You can save yourself lots of time and effort by using them instead of reinventing the wheel. The other features of Google Play Game Services are:

Events and quests: these allow you to monitor game usage and progression. They also add the possibility of creating time-limited events with rewards for the players.
Gifts: as simple as it sounds, you can send a gift to other players or request one to be sent to you. Yes, this is the mechanic popularized by Facebook games a while ago.
Saved games: the standard concept of a saved game. If your game has progression or can unlock content based on user actions, you may want to use this feature. Since they are saved in the cloud, saved games can be accessed across multiple devices.
Turn-based and real-time multiplayer: Google Play Game Services provides an API to implement turn-based and real-time multiplayer features without you needing to write any server code.

If your game is multiplayer and has an online economy, it may be worth making your own server and granting virtual currency only on the server to prevent cheating. Otherwise, it is fairly easy to crack the gifts/reward system, and a single person can ruin the complete game economy. However, if there is no online game economy, the benefits of gifts and quests may be more important than the fact that someone can hack them. Let's take a look at each of these features.

Events

The events API provides us with a way to define and collect gameplay metrics and upload them to Google Play Game Services. This is very similar to the GameEvents we are already using in our game. Events should be a subset of the game events of our game.
Many of the game events we have are used internally, as signals between objects or as a synchronization mechanism. These events are not really relevant outside the engine, but others could be. Those are the events we should send to GPGS.

To be able to send an event from the game to GPGS, we have to create it in the developer console first. To create an event, we go to the Events tab in the developer console, click on Add new event, and fill in the following fields:

Name: a short name of the event. The name can be up to 100 characters. This value can be localized.
Description: a longer description of the event. The description can be up to 500 characters. This value can also be localized.
Icon: the icon for the event, in the standard 512x512 px size.
Visibility: as for achievements, this can be revealed or hidden.
Format: as for leaderboards, this can be Numeric, Currency, or Time.
Event type: this is used to mark events that create or spend premium currency. It can be Premium currency sink, Premium currency source, or None.

In the game, events work pretty much like incremental achievements. You can increment an event counter using the following line of code:

Games.Events.increment(mGoogleApiClient, myEventId, 1);

You can delete events that are in the draft state or that have been published, as long as the event is not in use by a quest. You can also reset the player progress data of your events for testers, as you can for achievements.

While events can be used as an analytics system, their real usefulness appears when they are combined with quests.
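Before moving on to quests, here is a minimal sketch of how the increment call could be wired to the game events we already have. This is not code from the book: the onGameEvent hook, the GameEvent value, and the string resource holding the event ID are assumptions made purely for illustration:

// Hypothetical bridge between internal game events and GPGS events.
// Only events that are meaningful outside the engine are forwarded.
public void onGameEvent(GameEvent gameEvent) {
    if (gameEvent == GameEvent.AsteroidDestroyed) {
        // Assumed resource holding the event ID from the developer console.
        Games.Events.increment(mGoogleApiClient,
                getString(R.string.event_asteroids_destroyed), 1);
    }
}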
Quests

A quest is a challenge that asks players to complete an event a number of times during a specific time frame to receive a reward. Because a quest is linked to an event, you need to have created at least one event before you can use quests.

You can create a quest from the Quests tab in the developer console. A quest has the following fields to be filled in:

Name: the short name of the quest. This can be up to 100 characters and can be localized.
Description: a longer description of the quest. Your quest description should let players know what they need to do to complete the quest. The description can be up to 500 characters. The first 150 characters will be visible to players on cards such as those shown in the Google Play Games app.
Icon: a square icon that will be associated with the quest.
Banner: a rectangular image that will be used to promote the quest.
Completion Criteria: the configuration of the quest itself. It consists of an event and the number of times the event must occur.
Schedule: the start and end date and time for the quest. GPGS uses your local time zone, but stores the values as UTC. Players will see these values in their local time zone. You can mark a checkbox to notify users when the quest is about to end.
Reward Data: this is specific to each game. It can be a JSON object specifying the reward, and it is sent to the client when the quest is completed (see the parsing sketch further below).

Once configured in the developer console, you can do two things with quests:

Display the list of quests
Process a quest completion

To get the list of quests, we start an activity with an intent that, as usual, is provided to us via a static method:

Intent questsIntent = Games.Quests.getQuestsIntent(mGoogleApiClient,
    Quests.SELECT_ALL_QUESTS);
startActivityForResult(questsIntent, QUESTS_INTENT);

To be notified when a quest is completed, all we have to do is register a listener:

Games.Quests.registerQuestUpdateListener(mGoogleApiClient, this);

Once we have set the listener, the onQuestCompleted method will be called when a quest is completed. After processing the reward, the game should call claim to inform Play Game services that the player has claimed it. The following code snippet shows how you might override the onQuestCompleted callback:

@Override
public void onQuestCompleted(Quest quest) {
  // Claim the quest reward.
  Games.Quests.claim(mGoogleApiClient, quest.getQuestId(),
      quest.getCurrentMilestone().getMilestoneId());
  // Process the RewardData to provision a specific reward.
  String reward = new String(
      quest.getCurrentMilestone().getCompletionRewardData(),
      Charset.forName("UTF-8"));
}

The rewards themselves are defined by the client. As we mentioned before, this makes the game quite easy to crack to obtain rewards. But usually, avoiding the hassle of writing your own server is worth it.
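As an illustration of what provisioning the reward could look like, the following sketch assumes the Reward Data was configured in the console as a JSON object such as {"coins":100}. The "coins" key and the grantCoins method are made up for this example; the org.json classes ship with Android:

import org.json.JSONException;
import org.json.JSONObject;

// 'reward' is the string extracted in onQuestCompleted above.
try {
    JSONObject rewardJson = new JSONObject(reward);
    int coins = rewardJson.optInt("coins", 0); // assumed key
    grantCoins(coins); // hypothetical method crediting the player
} catch (JSONException e) {
    // Malformed or empty reward data; fail gracefully.
}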
Gifts

The gifts feature of GPGS allows us to send gifts to other players and to request gifts from them in return. It is intended to make gameplay more collaborative and to improve the social aspect of the game.

As with other GPGS features, the library provides a built-in UI, in this case to send and request gifts of in-game items and resources to and from friends in the players' Google+ circles. The request system can make use of notifications.

There are two types of requests that players can send using the game gifts feature of Google Play Game Services:

A wish request, to ask for in-game items or some other form of assistance from their friends
A gift request, to send in-game items or some other form of assistance to their friends

A player can specify one or more target recipients from the default request-sending UI. A gift or wish can be consumed (accepted) or dismissed by a recipient. To see the gifts API in detail, you can visit https://developers.google.com/games/services/android/giftRequests.

Again, as with quest rewards, this is handled entirely by the client, which makes the game susceptible to piracy.

Saved games

The saved games service offers cloud game-saving slots. Your game can retrieve the saved game data so that returning players can continue a game at their last save point from any device.

This service makes it possible to synchronize a player's game data across multiple devices. For example, if you have a game that runs on Android, you can use the saved games service to allow a player to start a game on their Android phone and then continue playing on a tablet without losing any progress. This service can also be used to ensure that a player's game play continues from where it was left off, even if their device is lost, destroyed, or traded in for a newer model, or if the game was reinstalled.

The saved games service does not know about the game internals, so it provides a field that is an unstructured binary blob where you can read and write the game data. A game can write an arbitrary number of saved games for a single player, subject to a user quota, so there is no hard requirement to restrict players to a single save file.

Saved games are stored as an unstructured binary blob.

The saved games API also receives some metadata that is used by Google Play Games to populate the UI and to present useful information in the Google Play Games app (for example, the last updated timestamp).

The saved games service has several entry points and actions, including ways to deal with conflicts between saved games. To know more about these, check out the official documentation at https://developers.google.com/games/services/android/savedgames.

Multiplayer games

If you are going to implement multiplayer, GPGS can save you a lot of work. You may or may not use it for the final product, but it removes the need to think about the server side until the game concept is validated. You can use GPGS for both turn-based and real-time multiplayer games. Each one is completely different and uses a different API, but there is always an initial step where the game is set up and the opponents are selected or invited.

In a turn-based multiplayer game, a single shared state is passed among the players, and only the player that owns the turn has permission to modify it. Players take turns asynchronously according to an order of play determined by the game. A turn is finished explicitly by the player using an API call. Then the game state is passed to the other players, together with the turn.

There are many cases to handle: selecting opponents, creating a match, leaving a match, canceling one, and so on. The official documentation at https://developers.google.com/games/services/android/turnbasedMultiplayer is quite exhaustive, and you should read through it if you plan to use this feature.

In a real-time multiplayer game there is no concept of turns. Instead, the server uses the concept of a room: a virtual construct that enables network communication between multiple players in the same game session and lets players send data directly to one another, a common concept for game servers.

The real-time multiplayer service is based on the concept of a room.

The real-time multiplayer API allows us to easily:

Manage network connections to create and maintain a real-time multiplayer room
Provide a player-selection user interface to invite players to join a room, look for random players for auto-matching, or a combination of both
Store participant and room-state information on the Play Game services' servers while the game is running
Send room invitations and updates to players

To check the complete documentation for real-time games, please visit the official web page at https://developers.google.com/games/services/android/realtimeMultiplayer.

Summary

We have added Google Play services to YASS, including setting up the game in the developer console and adding the required libraries to the project. Then, we defined a set of achievements and added the code to unlock them. We used normal, incremental, and hidden achievement types to showcase the different options available. We also configured a leaderboard and submitted the scores, both when the game is finished and when it is exited via the pause dialog. Finally, we added links to the native UI for leaderboards and achievements to the main menu.

We also introduced the concepts of events, quests, and gifts, and the saved games and multiplayer features that Google Play Game services offers.

The game is ready to publish now.


Project Setup and Modeling a Residential Project

Packt
08 Jul 2015
20 min read
In this article by Scott H. MacKenzie and Adam Rendek, authors of the book ArchiCAD 19 – The Definitive Guide, our journey into ArchiCAD 19 begins with an introduction to the graphic user interface, also known as the GUI. As with any software program, there is a menu bar along the top that gives access to all the tools and features. There are also toolbars and tool palettes that can be docked anywhere you like. In addition, there are some special palettes that pop up only when you need them.

After your introduction to ArchiCAD's user interface, you can jump right in and start creating the walls and floors for your new house. Then you will learn how to create ceilings and stairs. Before too long you will have a 3D model to orbit around. It is really fun and probably easier than you would expect.

(For more resources related to this topic, see here.)

The ArchiCAD GUI

The first time you open ArchiCAD, you will find the toolbars along the top, just under the menu bar, and palettes docked to the left and right of the drawing area. We will focus on the following three palettes to get started:

The Toolbox palette: This contains all of your selection, modeling, and drafting tools. It is located on the left-hand side by default.
The Info Box palette: This is your context menu, which changes according to whatever tool is currently in use. By default, it is located directly under the toolbars at the top. It has a scrolling function; hover your cursor over the palette and spin the scroll wheel on your mouse to reveal everything on the palette.
The Navigator palette: This is your project navigation window. It gives you access to all your views, sheets, and lists. It is located on the right-hand side by default.

These three palettes can be seen in the following screenshot:

All of the mentioned palettes are dockable and can be arranged however you like on your screen. They can also be dragged away from the main ArchiCAD interface. For instance, you could have palettes on a second monitor.

Panning and Zooming

ArchiCAD has the same panning and zooming interface as most other CAD (computer-aided design) and BIM (Building Information Modeling) programs. Rolling the scroll wheel on your mouse zooms in and out. Pressing down on the scroll wheel (or middle button) and moving your cursor executes a pan. Each drawing view window has a row of zoom commands along the bottom. You should try each one to get familiar with its function.

View toggling

When you have multiple views open, you can toggle through them by pressing the Ctrl key and tapping the Tab key. Or, you can pick any of the open views from the bottom of the Window pull-down menu. Pressing the F2 key opens a 2D floor plan view and pressing the F3 key opens the default 3D view. Pressing the F5 key opens a 3D view of selected items. In other words, if you want to isolate specific items in a 3D view, select those items and press F5.

The function keys are second nature to those who have been using ArchiCAD for a long time. If a feature has a function key shortcut, you should use it.

Project setup

ArchiCAD is available in multiple language versions. The exercises in this book use the USA version of ArchiCAD, which is in English. There is another English version, referred to as the International (INT) version.
You can use the International version to do the exercises in the book; just be aware that there may be some subtle differences in the way something is named or designed.

When you create a new project in ArchiCAD, you start by opening a project template. The template has all the basic things you need to get started, including layers, line types, wall types, doors, windows, and more. The following lesson takes you through the first steps in creating a new ArchiCAD project:

Open ArchiCAD. The Start ArchiCAD dialog box will appear.
Select the Create a New Project radio button at the top.
Select the Use a Template radio button under Set up Project Settings.
Select ArchiCAD 19 Residential Template.tpl from the drop-down list. If you have the International version of ArchiCAD, the residential template may not be available; in that case, use ArchiCAD 19 Template.tpl.
Click on New. This will open a blank project file.

Project Settings

Now that you have opened your new project, we are going to create a house with 4 stories (which includes a story for the roof). We create a story for the roof in order to provide a workspace to model the elements on that level. The template we just opened only has 2 stories, so we will need to add 2 more. Then we need to look at some other settings.

Stories

The settings for the stories are as follows:

On the Navigator palette, select the Project Map icon.
Double click on 1st FLOOR.
Right click on Stories and select Create New Story.
You will be prompted to give the new story a name. Enter the name BASEMENT.
Click on the button next to Below.
Enter 9' into the Height box and click on the Create button.
Then double click on 2. 2nd FLOOR.
Right click on Stories and then select Create New Story.
You will be prompted to give the new story a name. Enter the name ROOF.
Click on the button next to Above.
Enter 9' into the Height box and click on the Create button.

Your list of stories should now look like this:

3. ROOF
2. 2nd FLOOR
1. 1st FLOOR
-1. BASEMENT

The International version of ArchiCAD (INT) gives the first floor the index number 0. The second floor's index number will be 1, and the roof's will be 2.

Now we need to adjust the heights of the other stories:

Right click on Stories (on the Navigator palette) and select Story Settings.
Change the number in the Height to Next box for 1st FLOOR to 9'.
Do the same for 2nd FLOOR.

Units

On the menu bar, go to Options | Project Preferences | Working Units and perform the following steps:

Ensure that Model Units is set to feet & fractional inches.
Ensure that Fractions is set to 1/64.
Ensure that Layout Units is set to feet & fractional inches.
Ensure that Angle Unit is set to Decimal degrees.
Ensure that Decimals is set to 2.

You are now ready to begin modeling your house, but first let's save the project. To save the project, perform the following steps:

Navigate to the File menu and click on Save. If by chance you have saved it already, click on Save As.
Name your file Colonial House.
Click on Save.

Renovation filters

The Renovation Filter feature allows you to differentiate how your drawing elements appear in different construction phases. For renovation projects that have demolition and new work phases, you need to show the items to be demolished differently from the existing items that are to remain, or that are new. The projects we will work on in this book do not require this feature to manage phases, because we will only be creating new construction.
However, it is essential that your renovation filter is set to New Construction. We will do this in the first modeling exercise.

Selection methods

Before you can do much in ArchiCAD, you need to be familiar with selecting elements. There are several ways to select something in ArchiCAD, which are as follows:

Single cursor click

Pick the Arrow tool from the toolbox, or hold the Shift key down on the keyboard, and click on what you want to select. As you click on the elements, hold the Shift key down to add them to your selection set. To remove elements from the selection set, just click on them again with the Shift key pressed.

There is a mode within this mode called Quick Selection. It is toggled on and off from the Info Box palette; its icon looks like a magnet. When it is on, it works like a magnet because it sticks to faces or surfaces, such as slabs or fill patterns. If this mode is not on, you are required to find an edge, endpoint, or hotspot node to select an element with a single click. Hold the Space key down to temporarily toggle the mode while selecting elements.

Window

Pick the Arrow tool from the toolbox, or hold the Shift key down, and draw your selection window. Click once for the window's starting corner and a second time for the end corner. This works just as windowing does in AutoCAD, not as in Revit, where you need to hold the mouse button down while you draw your window. There are three different windowing methods, each set from the Info Box palette:

Partial Elements: Anything that is inside of or touching the window will be selected. AutoCAD users will know this as a crossing window.
Entire Elements: Anything completely encapsulated by the window will be selected. If something is not completely inside the window, it will not be selected.
Direction Dependent: Click and window to the left, and the Partial Elements window will be used. Click and window to the right, and the Entire Elements window will be used.

Marquee

A marquee is a selection window that stays on the screen after you create it. If you are a MicroStation user, this will be similar to a selection window. It can be used for printing a specific area in a drawing view and for performing what AutoCAD users would call a Stretch command. There are two types of marquees: single story (skinny) and multi story (fat). The single story marquee is used when you want to select elements on your current story view only. The multi-story marquee selects everything on your current story as well as on the stories above and below your selections.

The Find & Select tool

This lets ArchiCAD select elements for you, based on the attribute criteria that you define, such as element type, layer, and pen number. When you have the criteria defined, click on the plus sign button on the palette, and all the elements matching those criteria inside your current view or marquee will be selected.

The quickest way to open the Find & Select tool is with the Ctrl + F key combination.

Modification commands

As you draw, you will inevitably need to move, copy, stretch, or trim something. Select your items first, and then execute the modification command.
Here are the basic commands you will need to get things moving:

Adjust (Extend): Press Ctrl + - or navigate to Edit | Reshape | Adjust
Drag (Move): Press Ctrl + D or navigate to Edit | Move | Drag
Drag a Copy (Copy): Press Ctrl + Shift + D or navigate to Edit | Move | Drag a Copy
Intersect (Fillet): Click on the Intersect button on the Standard toolbar or navigate to Edit | Reshape | Intersect
Resize (Scale): Press Ctrl + K or navigate to Edit | Reshape | Resize
Rotate: Press Ctrl + E or navigate to Edit | Move | Rotate
Stretch: Press Ctrl + H or navigate to Edit | Reshape | Stretch
Trim: Hold the Ctrl key down and click on the portion of wall or line that you want trimmed off, or click on the Trim button on the Standard toolbar (Edit | Reshape | Trim). Holding Ctrl and clicking is the fastest way to trim anything!

Memorizing the keyboard combinations above is a sure way to increase your productivity.

Modeling – part I

We will start with the Wall tool to create the main exterior walls on the 1st floor of our house, and then create the floor with the Slab tool. Before we begin, however, let's make sure your renovation filter is set to New Construction.

Setting the Renovation Filter

The Renovation Filter is an active setting that controls how the elements you create are displayed. Everything we create in this project is new construction, so we need the New Construction filter to be active. To set it, go to the Document menu, click on Renovation, and then click on 04 New Construction.

Using the Wall tool

The Wall tool has settings for height, width, composite, layer, pen weight, and more. We will learn about these settings as we go along, a little more each time we progress in the project.

Double click on 1. 1st Story in the Navigator palette to ensure we are working on story 1.
Select the Wall tool from the Toolbox palette, or from the menu bar under Design | Design Tools | Wall. Notice that this automatically changes the contents of the Info Box palette.
Click on the wall icon inside Info Box. This brings up the active properties of the Wall tool in the form of the Wall Default Settings window. (This can also be achieved by double clicking on the Wall tool button in the Toolbox.)
Change the composite type to Siding 2x6 Wd. Stud. Click on the wall composite button to do this.

Creating the exterior walls of the 1st Story

To create the exterior walls of the 1st story, perform the following steps:

Double click on 1. 1st Story in the Navigator palette to ensure that we are working on story 1.
Select the Wall tool from the Toolbox palette, or from the menu bar under Design | Design Tools | Wall.
Change the composite type to Siding 2x6 Wd. Stud by clicking on the wall composite button. Notice that at the bottom of the Wall Default Settings window is the layer currently assigned to the Wall tool; it should be set to A-WALL-EXTR.
Click on OK to start your first wall.
Click near the center of the drawing screen and move your cursor to the left; notice the orange dashed line that appears. That is your guide line. Keep your cursor over the guide line so that it keeps you locked in the orthogonal direction. You should also immediately see the Tracker palette pop up after your first click, displaying the distance drawn and the angle.
Before you make your second click, enter the number 24 from your keyboard and press Enter. You should now have a 24'-0" long wall.
If your Tracker palette does not appear, it may be toggled off. Go up to the Standard toolbar and click on the Tracker button to turn it on.

Select the Wall tool again and make your first click on the upper-left end corner of your first wall.
Move your cursor down so that it snaps to the guide line, enter the number 28, and press the Enter key.
Draw your third wall by clicking on the bottom-left endpoint of your second wall, moving your cursor to the right, snapped over the guide line, typing the number 24, and pressing Enter.
Draw your fourth wall by clicking on the bottom-right endpoint of your third wall and then on the starting point of your first wall.

You should now have four walls that measure 24'-0" x 28'-0", outside edge to outside edge. Move your four walls to the center of the drawing view by performing the following steps:

Click on the Arrow tool at the top of the Toolbox.
Click outside one of the corners of the walls, and then click on the opposite side. All four walls should now be selected.
Use the Drag command to move the walls. The quickest way to activate the Drag command is by pressing Ctrl + D. The long way is from the menu bar, by navigating to Edit | Move | Drag.
Drag (move) the walls to the center of your drawing window.
Press the Esc key, or click on a blank space in your drawing window, to deselect the walls.

You can select all the walls in a view by activating the Wall tool and pressing Ctrl + A.

You are now ready to create a floor with the Slab tool. But first, let's have a little fun and see how it looks in 3D (press the F3 key):

From the Navigator palette, double click on Generic Axonometry under the 3D folder icon. This will open a 3D view window.
Hold your Shift key down, press down on your scroll wheel button, and slowly move your mouse around. You are now orbiting! Play around with it a little, then get back to work and go to the next step to create your first floor slab.
Press the F2 key to get back to a 2D view.

You can also perform a 3D orbit via the Orbit button at the bottom of any 3D view window.

Creating the first story's floor with the Slab tool

The Slab tool is used to create floors. It is also used to create ceilings. We will use it now to create the first floor of our house. Like the Wall tool, it has settings for layer, pen weight, and composite. To create the first story's floor using the Slab tool, perform the following steps:

Select the Slab tool from the Toolbox palette, or from the menu bar under Design | Design Tools | Slab. This changes the contents of the Info Box palette.
Click on the Slab icon in Info Box. This brings up the Slab Default Settings (active properties) window for the Slab tool.
As with the Wall tool, there is a composite setting for the Slab tool. Set the composite type to FLR Wd Flr + 2x10. The layer should be set to A-FLOR. Click on OK.
You could draw the shape of the slab by tracing over the outside lines of your walls, but we are going to use the Magic Wand feature. Hover your cursor over the space inside your four walls and press the space bar on your keyboard. This automatically creates the slab using the boundary formed by the walls.
Then, open a 3D view and look at your floor.

Instead of using the tool icon inside the Info Box palette, you can double click on any tool icon inside the Toolbox palette to bring up the default settings window for that tool.
Creating the exterior walls and floor slabs for the basement and the second story

We could repeat all of the previous steps to create the floor and walls for the second story and the basement, but in this case it is quicker to copy what we have already drawn on the first story with the Edit Elements by Stories tool. Perform the following steps to create the exterior walls and floor slabs for the basement and the second story:

Go to the Navigator palette, right click over Stories, and select Edit Elements by Stories. The Edit Elements by Stories window will open.
Under Select Action, set the action to Copy.
Under From Story, select 1. 1st FLOOR.
In the To Story section, check the boxes for 2nd FLOOR and -1. BASEMENT.
Click on OK.
You should see a dialog box appear, stating that as a result of the last operation, elements have been created and/or have changed their position on currently unseen stories. Whenever you get this message, you should confirm that you have not created any unwanted elements. Click on the Continue button.

Now you should have walls and a floor on three stories: BASEMENT, 1st FLOOR, and 2nd FLOOR.

The quickest way to jump one story up or down is with the Ctrl + Arrow Up or Ctrl + Arrow Down key combination.

Basement element modification

The floor and the walls on the BASEMENT story need to be changed to different composite types. Do this by performing the following steps:

Open the BASEMENT view and select the four walls by clicking on them one at a time while holding down the Shift key.
Right click over your selection and click on Wall Selection Settings.
Change the walls to the EIFS on 8" CMU composite type. Then, click on OK.
Move your cursor over the floor slab. The quick selection cursor should appear. This selection mode allows you to click on an object without needing to find an edge or endpoint. Click on the slab.
Open the Slab Selection Settings window, but this time do it by pressing the Ctrl + T key combination.
Change the floor slab composite to Conc. Slab: 4" on gravel. Click on OK.

The Ctrl + T key combination is the quickest way to bring up an element's selection settings window when an element is selected.

Open a 3D view (by pressing the F3 key) and orbit around your house. It should look similar to the following screenshot:

Adding the garage

We need to add the garage and the laundry room, which connects the garage to the house. Do this by performing the following steps:

Open the 1st FLOOR story from the Project Map.
Start the Wall tool. From the Info Box palette, set the wall composite setting to Siding 2x6 Wd. Stud.
Click on the upper-left corner of your house for your wall's starting point. Move your cursor to the left, snap to the guide line, type 6'-10", and press Enter.
Change the Geometry Method setting on Info Box to Chained. Refer to the following screenshot:

Start your next wall by clicking on the endpoint of your last wall, move your cursor up, snap to the guide line, type 5', and press Enter.
Move your cursor to the left, snap to the guide line, type in 12'-6", and press Enter.
Move your cursor down, snap to the guide line, type in 22'-4", and press Enter.
Move your cursor to the right, snap to the guide line, and double click on the perpendicular west wall (pressing your Enter key twice works the same as a double click).

Now we want to create the floor for this new set of walls. To do that, perform the following steps:

Start the Slab tool.
Change the composite to Conc. Slab: 4" on gravel.
Hover your cursor inside the new set of walls and press the Space key to use the Magic Wand. This will create the floor slab for the garage and laundry room.

There is still one more wall to create, but this time we will use the Adjust command to, in effect, create a new wall:

Select the 5'-0" wall drawn in the previous exercise.
Go to the Edit menu, click on Reshape, and then click on Adjust.
Click on the bottom edge of the perpendicular wall down below. The wall should extend down. Refer to the following screenshot:

Then change to a 3D view (by pressing F3) and examine your work.

The 3D view

If you switch to a 3D view and your new modeling does not show, zoom in or out to refresh the view, or double click your scroll wheel (middle button). Your new work will appear.

Summary

In this article you were introduced to the ArchiCAD graphic user interface (GUI) and project settings, and you learned how to select elements. You created all the major modeling for your house and got a primer on layers. You should now have a good understanding of the ArchiCAD way of creating architectural elements and how to control their parameters.

Building the Untangle Game with Canvas and the Drawing API

Packt
06 Jul 2015
25 min read
In this article by Makzan, the author of HTML5 Game Development by Example: Beginner's Guide - Second Edition, we discuss one of the highlighted new features of HTML5: the canvas element. We can treat it as a dynamic area where we can draw graphics and shapes with scripts.

(For more resources related to this topic, see here.)

Images on websites have been static for years. There are animated GIFs, but they cannot interact with visitors. Canvas is dynamic: we draw and modify its context dynamically through the JavaScript drawing API, and we can add interaction to the Canvas and thus make games.

In this article, we will focus on using new HTML5 features to create games. Also, we will take a look at a core feature, Canvas, and some basic drawing techniques. We will cover the following topics:

Introducing the HTML5 canvas element
Drawing a circle in Canvas
Drawing lines in the canvas element
Interacting with drawn objects in Canvas with mouse events

The Untangle puzzle game is a game where players are given circles with some lines connecting them. The lines may intersect each other, and the players need to drag the circles so that no lines intersect anymore. The following screenshot previews the game that we are going to achieve through this article:

You can also try the game at the following URL: http://makzan.net/html5-games/untangle-wip-dragging/

So let's start making our Canvas game from scratch.

Drawing a circle in the Canvas

Let's start our drawing in the Canvas with the basic shape: the circle.

Time for action – drawing color circles in the Canvas

First, let's set up the new environment for the example. That is, an HTML file that will contain the canvas element, the jQuery library to help us in JavaScript, JavaScript files containing the actual drawing logic, and a style sheet:

index.html
js/
js/jquery-2.1.3.min.js
js/untangle.js
js/untangle.drawing.js
js/untangle.data.js
js/untangle.input.js
css/
css/untangle.css
images/

Put the following HTML code into the index.html file. It is a basic HTML document containing the canvas element:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Drawing Circles in Canvas</title>
  <link rel="stylesheet" href="css/untangle.css">
</head>
<body>
  <header>
    <h1>Drawing in Canvas</h1>
  </header>
  <canvas id="game" width="768" height="400">
    This is an interactive game with circles and lines connecting them.
  </canvas>
  <script src="js/jquery-2.1.3.min.js"></script>
  <script src="js/untangle.data.js"></script>
  <script src="js/untangle.drawing.js"></script>
  <script src="js/untangle.input.js"></script>
  <script src="js/untangle.js"></script>
</body>
</html>

Use CSS to set the background color of the Canvas inside untangle.css:

canvas {
  background: grey;
}

In the untangle.js JavaScript file, we put a jQuery document ready function and draw a color circle inside it:

$(document).ready(function(){
  var canvas = document.getElementById("game");
  var ctx = canvas.getContext("2d");
  ctx.fillStyle = "GOLD";
  ctx.beginPath();
  ctx.arc(100, 100, 50, 0, Math.PI*2, true);
  ctx.closePath();
  ctx.fill();
});

Open the index.html file in a web browser and we will get the following screenshot:

What just happened?

We have just created a simple Canvas context with a circle on it. There are not many settings for the canvas element itself. We set the width and height of the Canvas, just as we would fix the dimensions of real drawing paper.
Also, we assign an ID attribute to the Canvas for easier reference in JavaScript:

<canvas id="game" width="768" height="400">
  This is an interactive game with circles and lines connecting them.
</canvas>

Putting in fallback content when the web browser does not support the Canvas

Not every web browser supports the canvas element. The canvas element provides an easy way to supply fallback content if it is not supported; this content also provides meaningful information for any screen reader. Anything inside the open and close tags of the canvas element is the fallback content. This content is hidden if the web browser supports the element; browsers that don't support canvas display it instead. It is good practice to provide useful information in the fallback content. For instance, if the canvas tag's purpose is a dynamic picture, we may consider placing an <img> alternative there. Or we may provide links to modern web browsers so the visitor can upgrade their browser easily.

The Canvas context

When we draw in the Canvas, we actually call the drawing API of the canvas rendering context. You can think of the relationship between the Canvas and the context this way: the Canvas is the frame, and the context is the real drawing surface. Currently, we have 2d, webgl, and webgl2 as the context options. In our example, we'll use the 2D drawing API by calling getContext("2d"):

var canvas = document.getElementById("game");
var ctx = canvas.getContext("2d");

Drawing circles and shapes with the Canvas arc function

There is no dedicated circle function to draw a circle. Instead, the Canvas drawing API provides a function to draw arcs, including full circles. The arc function accepts the following arguments:

x: The center point of the arc on the x axis.
y: The center point of the arc on the y axis.
radius: The distance between the center point and the arc's perimeter. When drawing a circle, a larger radius means a larger circle.
startAngle: The starting point, an angle in radians. It defines where to start drawing the arc on the perimeter.
endAngle: The ending point, an angle in radians. The arc is drawn from the position of the starting angle to this end angle.
counter-clockwise: A Boolean indicating whether the arc from startAngle to endAngle is drawn in a clockwise or counter-clockwise direction. This is an optional argument with the default value false.

Converting degrees to radians

The angle arguments used in the arc function are in radians instead of degrees. If you are used to thinking in degrees, you will need to convert degrees into radians before putting the value into the arc function. We can convert the angle unit using the following formula:

radians = π / 180 × degrees
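As a quick sketch (our own helper function, not part of the game's code), the conversion can be wrapped like this:

function degreesToRadians(degrees) {
  return Math.PI / 180 * degrees;
}

// For example, a quarter circle from 0 to 90 degrees:
// ctx.arc(100, 100, 50, 0, degreesToRadians(90), false);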
Executing the path drawing in the Canvas

When we call the arc function or other path drawing functions, we are not drawing the path immediately in the Canvas. Instead, we are adding it to a list of paths. These paths will not be drawn until we execute a drawing command. There are two drawing execution commands: one to fill the paths and one to draw the stroke. We fill the paths by calling the fill function and draw the stroke of the paths by calling the stroke function, which we will use later when drawing lines:

ctx.fill();

Beginning a path for each style

The fill and stroke functions fill and draw the paths in the Canvas, but they do not clear the list of paths. Take the following code snippet as an example. After filling our circle with red, we add another circle and fill it with green. What happens is that both circles end up filled with green, instead of only the new circle:

var canvas = document.getElementById('game');
var ctx = canvas.getContext('2d');
ctx.fillStyle = "red";
ctx.arc(100, 100, 50, 0, Math.PI*2, true);
ctx.fill();

ctx.arc(210, 100, 50, 0, Math.PI*2, true);
ctx.fillStyle = "green";
ctx.fill();

This is because, when we call the second fill command, the list of paths in the Canvas contains both circles. Therefore, the fill command fills both circles with green and overrides the red circle. In order to fix this issue, we want to ensure that we call beginPath before drawing each new shape. The beginPath function empties the list of paths, so the next time we call the fill and stroke commands, they will apply only to the paths added after the last beginPath.

Have a go hero

We have just discussed a code snippet where we intended to draw two circles: one in red and the other in green. The code ends up drawing both circles in green. How can we add a beginPath command to the code so that it draws one red circle and one green circle correctly?

Closing a path

The closePath function draws a straight line from the last point of the latest path to the first point of the path. This is called closing the path. If we are only going to fill the path and are not going to draw the stroke outline, the closePath function does not affect the result. The following screenshot compares the results of drawing a half circle with and without calling closePath:

Pop quiz

Q1. Do we need to use the closePath function on the shape we are drawing if we just want to fill the color and not draw the outline stroke?

Yes, we need to use the closePath function.
No, it does not matter whether we use the closePath function.

Wrapping the circle drawing in a function

Drawing a circle is a common task that we will perform a lot. It is better to create a function to draw a circle now, instead of repeating several lines of code each time.

Time for action – putting the circle drawing code into a function

Let's make a function to draw the circle and then draw some circles in the Canvas. We are going to put the code in different files to keep it organized:

Open the untangle.drawing.js file in our code editor and put in the following code:

if (untangleGame === undefined) {
  var untangleGame = {};
}

untangleGame.drawCircle = function(x, y, radius) {
  var ctx = untangleGame.ctx;
  ctx.fillStyle = "GOLD";
  ctx.beginPath();
  ctx.arc(x, y, radius, 0, Math.PI*2, true);
  ctx.closePath();
  ctx.fill();
};

Open the untangle.data.js file and put the following code into it:

if (untangleGame === undefined) {
  var untangleGame = {};
}

untangleGame.createRandomCircles = function(width, height) {
  // randomly draw 5 circles
  var circlesCount = 5;
  var circleRadius = 10;
  for (var i=0;i<circlesCount;i++) {
    var x = Math.random()*width;
    var y = Math.random()*height;
    untangleGame.drawCircle(x, y, circleRadius);
  }
};

Then open the untangle.js file.
Replace the original code in the JavaScript file with the following code:

if (untangleGame === undefined) {
  var untangleGame = {};
}

// Entry point
$(document).ready(function(){
  var canvas = document.getElementById("game");
  untangleGame.ctx = canvas.getContext("2d");

  var width = canvas.width;
  var height = canvas.height;

  untangleGame.createRandomCircles(width, height);
});

Open the HTML file in the web browser to see the result:

What just happened?

The circle-drawing code is executed after the page is loaded and ready. We used a loop to draw several circles in random places in the Canvas.

Dividing code into files

We are putting the code into different files. Currently, there are the untangle.js, untangle.drawing.js, and untangle.data.js files. The untangle.js file is the entry point of the game. We put logic related to context drawing into untangle.drawing.js and logic related to data manipulation into untangle.data.js.

We use the untangleGame object as a global object that is accessed across all the files. At the beginning of each JavaScript file, we have the following code to create this object if it does not exist:

if (untangleGame === undefined) {
  var untangleGame = {};
}

Generating random numbers in JavaScript

In game development, we often use random functions. We may want to randomly summon a monster for the player to fight, we may want to randomly drop a reward when the player makes progress, and we may want a random number as the result of a dice roll. In this code, we place the circles randomly in the Canvas.

To generate a random number in JavaScript, we use the Math.random() function. The random function takes no arguments. It always returns a floating-point number between 0 and 1: the number is equal to or bigger than 0 and smaller than 1.

There are two common ways to use the random function. One is to generate random numbers within a given range. The other is to generate a true or false value:

Getting a random integer in a range of B consecutive values starting at A: Math.floor(Math.random()*B)+A. The Math.floor() function cuts off the decimal part of the given number. Take Math.floor(Math.random()*10)+5 as an example: Math.random() returns a decimal number from 0 to 0.9999…, so Math.random()*10 is a decimal number from 0 to 9.9999…, Math.floor(Math.random()*10) is an integer from 0 to 9, and finally Math.floor(Math.random()*10)+5 is an integer from 5 to 14.
Getting a random Boolean: (Math.random() > 0.495) gives roughly 50 percent false and 50 percent true. We can further adjust the true/false ratio: (Math.random() > 0.7) gives almost 70 percent false and 30 percent true.

Saving the circle position

When we develop a DOM-based game, we often put the game objects into DIV elements and access them later in the code logic. It is a different story in Canvas-based game development. In order to access our game objects after they are drawn in the Canvas, we need to remember their states ourselves. Let's say we want to know how many circles are drawn and where they are: we need an array to store their positions.

Time for action – saving the circle position

Open the untangle.data.js file in the text editor.
Add the following circle object definition code to the JavaScript file:

untangleGame.Circle = function(x,y,radius){
  this.x = x;
  this.y = y;
  this.radius = radius;
};

Now we need an array to store the circles' positions.
Add a new array to the untangleGame object:

untangleGame.circles = [];

While drawing every circle in the Canvas, we save its position in the circles array. Add the following line before the call to the drawCircle function, inside the createRandomCircles function:

untangleGame.circles.push(new untangleGame.Circle(x,y,circleRadius));

After these steps, we should have the following code in the untangle.data.js file:

if (untangleGame === undefined) {
  var untangleGame = {};
}

untangleGame.circles = [];

untangleGame.Circle = function(x,y,radius){
  this.x = x;
  this.y = y;
  this.radius = radius;
};

untangleGame.createRandomCircles = function(width, height) {
  // randomly draw 5 circles
  var circlesCount = 5;
  var circleRadius = 10;
  for (var i=0;i<circlesCount;i++) {
    var x = Math.random()*width;
    var y = Math.random()*height;
    untangleGame.circles.push(new untangleGame.Circle(x,y,circleRadius));
    untangleGame.drawCircle(x, y, circleRadius);
  }
};

Now we can test the code in the web browser. There is no visual difference between this code and the last example when drawing random circles in the Canvas. This is because we are saving the circles but have not changed any code that affects the appearance. We just make sure it looks the same and there are no new errors.

What just happened?

We saved the position and radius of each circle. This is necessary because Canvas drawing is an immediate mode: we cannot directly access the objects drawn in the Canvas because there is no such information. All lines and shapes are drawn on the Canvas as pixels, and we cannot access the lines or shapes as individual objects. Imagine that we are drawing on a real canvas. We cannot just move a house in an oil painting, and in the same way we cannot directly manipulate any drawn items in the canvas element.

Defining a basic class definition in JavaScript

We can use object-oriented programming in JavaScript and define our own object structures. The Circle object provides a data structure for us to easily store a collection of x and y positions and radii. After defining the Circle object, we can create a new Circle instance with x, y, and radius values using the following code:

var circle1 = new Circle(100, 200, 10);

For more detailed usage of object-oriented programming in JavaScript, please check out the Mozilla Developer Center at the following link:

https://developer.mozilla.org/en/Introduction_to_Object-Oriented_JavaScript

Have a go hero

We have drawn several circles randomly on the Canvas. They are in the same style and of the same size. How about randomizing the size of the circles? And filling the circles with different colors? Try modifying the code and playing with the drawing API.

Drawing lines in the Canvas

Now we have several circles here, so how about connecting them with lines? Let's draw a straight line between each pair of circles.

Time for action – drawing straight lines between each circle

Open the index.html file we just used in the circle-drawing example. Change the wording in h1 from "Drawing Circles in Canvas" to "Drawing Lines in Canvas".
Open the untangle.data.js JavaScript file. We define a Line class to store the information that we need for each line:

untangleGame.Line = function(startPoint, endPoint, thickness) {
  this.startPoint = startPoint;
  this.endPoint = endPoint;
  this.thickness = thickness;
};

Save the file and switch to the untangle.drawing.js file. We need two more variables.
Add the following lines to the JavaScript file:

untangleGame.thinLineThickness = 1;
untangleGame.lines = [];

We add the following drawLine function to our code, after the existing drawCircle function in the untangle.drawing.js file. (Note that, like drawCircle, it reads the context from untangleGame.ctx, so it matches the way we call it below.)

untangleGame.drawLine = function(x1, y1, x2, y2, thickness) {
  var ctx = untangleGame.ctx;
  ctx.beginPath();
  ctx.moveTo(x1,y1);
  ctx.lineTo(x2,y2);
  ctx.lineWidth = thickness;
  ctx.strokeStyle = "#cfc";
  ctx.stroke();
};

Then we define a new function that iterates over the circle list and draws a line between each pair of circles. Append the following code to the JavaScript file:

untangleGame.connectCircles = function() {
  // connect the circles to each other with lines
  untangleGame.lines.length = 0;
  for (var i=0;i<untangleGame.circles.length;i++) {
    var startPoint = untangleGame.circles[i];
    for(var j=0;j<i;j++) {
      var endPoint = untangleGame.circles[j];
      untangleGame.drawLine(startPoint.x, startPoint.y,
        endPoint.x, endPoint.y, 1);
      untangleGame.lines.push(new untangleGame.Line(startPoint,
        endPoint, untangleGame.thinLineThickness));
    }
  }
};

Finally, open the untangle.js file and add the following code before the end of the jQuery document ready function, after the call to the untangleGame.createRandomCircles function:

untangleGame.connectCircles();

Test the code in the web browser. We should see lines connecting each pair of randomly placed circles:

What just happened?

We have enhanced our code with lines connecting each generated circle. You may find a working example at the following URL:

http://makzan.net/html5-games/untangle-wip-connect-lines/

Similar to the way we saved the circle positions, we have an array to save every line segment we draw. We declare a Line class definition to store the essential information of a line segment: the start and end points and the thickness of the line.

Introducing the line drawing API

These are the drawing APIs we use to draw and style line strokes:

moveTo: The moveTo function is like holding a pen in our hand and moving it on top of the paper without touching it with the pen.
lineTo: This function is like putting the pen down on the paper and drawing a straight line to the destination point.
lineWidth: The lineWidth property sets the thickness of the strokes we draw afterwards.
stroke: The stroke function is used to execute the drawing. We set up a collection of moveTo, lineTo, and styling calls and finally call the stroke function to execute them on the Canvas.

We usually draw lines by using moveTo and lineTo pairs. Just like in the real world, we move our pen on top of the paper to the starting point of a line and put the pen down to draw a line. Then we keep drawing, or move to another position before drawing the next line. This is exactly the flow in which we draw lines on the Canvas.

We just demonstrated how to draw a simple line. We can set different line styles on lines in the Canvas. For more details on line styling, please read the styling guide in the W3C spec at http://www.w3.org/TR/2dcontext/#line-styles and the Mozilla Developer Center at https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors.

Using mouse events to interact with objects drawn in the Canvas

So far, we have shown that we can draw shapes in the Canvas dynamically based on our logic. One part is still missing in the game development: the input.
Now, imagine that we can drag the circles around on the Canvas, with the connected lines following the circles. In this section, we will add mouse events to the Canvas to make our circles draggable.

Time for action – dragging the circles in the Canvas

Let's continue with our previous code. Open the untangle.drawing.js file.
We need a function to clear all the drawings in the Canvas. Add the following function to the end of the untangle.drawing.js file:

untangleGame.clear = function() {
  var ctx = untangleGame.ctx;
  ctx.clearRect(0,0,ctx.canvas.width,ctx.canvas.height);
};

We also need two more functions that draw all known circles and lines. Append the following code to the untangle.drawing.js file:

untangleGame.drawAllLines = function(){
  // draw all remembered lines
  for(var i=0;i<untangleGame.lines.length;i++) {
    var line = untangleGame.lines[i];
    var startPoint = line.startPoint;
    var endPoint = line.endPoint;
    var thickness = line.thickness;
    untangleGame.drawLine(startPoint.x, startPoint.y,
      endPoint.x, endPoint.y, thickness);
  }
};

untangleGame.drawAllCircles = function() {
  // draw all remembered circles
  for(var i=0;i<untangleGame.circles.length;i++) {
    var circle = untangleGame.circles[i];
    untangleGame.drawCircle(circle.x, circle.y, circle.radius);
  }
};

We are done with the untangle.drawing.js file. Let's switch to the untangle.js file. Inside the jQuery document ready function, before the end of the function, add the following code, which creates a game loop to keep drawing the circles and lines:

// set up an interval to loop the game loop
setInterval(gameloop, 30);

function gameloop() {
  // clear the Canvas before re-drawing.
  untangleGame.clear();
  untangleGame.drawAllLines();
  untangleGame.drawAllCircles();
}

Before moving on to the input handling implementation, add the following line to the jQuery document ready function in the untangle.js file, which calls the handleInput function that we will define next:

untangleGame.handleInput();

It's time to implement our input handling logic. Switch to the untangle.input.js file and add the following code to the file:

if (untangleGame === undefined) {
  var untangleGame = {};
}

untangleGame.handleInput = function(){
  // Add Mouse Event Listener to canvas
  // we find if the mouse down position is on any circle
  // and set that circle as the target dragging circle.
  $("#game").bind("mousedown", function(e) {
    var canvasPosition = $(this).offset();
    var mouseX = e.pageX - canvasPosition.left;
    var mouseY = e.pageY - canvasPosition.top;

    for(var i=0;i<untangleGame.circles.length;i++) {
      var circleX = untangleGame.circles[i].x;
      var circleY = untangleGame.circles[i].y;
      var radius = untangleGame.circles[i].radius;
      if (Math.pow(mouseX-circleX,2) + Math.pow(mouseY-circleY,2)
          < Math.pow(radius,2)) {
        untangleGame.targetCircleIndex = i;
        break;
      }
    }
  });

  // we move the target dragging circle
  // when the mouse is moving
  $("#game").bind("mousemove", function(e) {
    if (untangleGame.targetCircleIndex !== undefined) {
      var canvasPosition = $(this).offset();
      var mouseX = e.pageX - canvasPosition.left;
      var mouseY = e.pageY - canvasPosition.top;
      var circle = untangleGame.circles[untangleGame.targetCircleIndex];
      circle.x = mouseX;
      circle.y = mouseY;
    }
    untangleGame.connectCircles();
  });

  // We clear the dragging circle data when mouse is up
  $("#game").bind("mouseup", function(e) {
    untangleGame.targetCircleIndex = undefined;
  });
};

Open index.html in a web browser. There should be five circles with lines connecting them. Try dragging the circles. The dragged circle will follow the mouse cursor, and the connected lines will follow too.

What just happened?

We have set up three mouse event listeners: the mouse down, move, and up events. We also created the game loop, which updates the Canvas drawing based on the new positions of the circles. You can view the example's current progress at http://makzan.net/html5-games/untangle-wip-dragging-basic/.

Detecting mouse events on circles drawn in the Canvas

Recalling the difference between DOM-based development and Canvas-based development: we cannot directly listen to mouse events on any shape drawn in the Canvas. There is no such thing. We can only get the mouse event on the canvas element itself and calculate the position relative to the Canvas. Then we change the states of the game objects according to the mouse's position and finally redraw the Canvas.

How do we know we are clicking on a circle? We can use the point-in-circle formula, which checks the distance between the center point of the circle and the mouse position. The mouse click lands on the circle when this distance is less than the circle's radius. We use this formula to get the distance between two points:

distance = √((x2-x1)² + (y2-y1)²)

The following graph shows that when the distance between the center point and the mouse cursor is smaller than the radius, the cursor is inside the circle:

The following code we used shows how we apply the distance check in the mouse down event handler to know whether the mouse cursor is inside a circle (note that comparing the squared distance against the squared radius lets us avoid taking the square root):

if (Math.pow(mouseX-circleX,2) + Math.pow(mouseY-circleY,2)
    < Math.pow(radius,2)) {
  untangleGame.targetCircleIndex = i;
  break;
}

Please note that Math.pow is an expensive function that may hurt performance in some scenarios. If performance is a concern, we may use a bounding-box collision check instead (a sketch follows below).

When we know that the mouse cursor is pressing on a circle in the Canvas, we mark it as the target circle to be dragged on the mouse move event. During the mouse move event handler, we update the target circle's position to the latest cursor position. When the mouse is up, we clear the target circle reference.
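Here is a minimal sketch of that cheaper bounding-box test. The helper function is our own and not part of the game's code; it treats each circle as a square of side 2 × radius, which is faster but slightly less precise near the corners:

// Hypothetical cheaper hit test: treat the circle as its bounding square.
function isInsideBoundingBox(mouseX, mouseY, circle) {
  return mouseX >= circle.x - circle.radius &&
         mouseX <= circle.x + circle.radius &&
         mouseY >= circle.y - circle.radius &&
         mouseY <= circle.y + circle.radius;
}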
Pop quiz

Q1. Can we directly access an already drawn shape in the Canvas?

1. Yes
2. No

Q2. Which method can we use to check whether a point is inside a circle?

1. Check whether the coordinates of the point are smaller than the coordinates of the center of the circle.
2. Check whether the distance between the point and the center of the circle is smaller than the circle's radius.
3. Check whether the x coordinate of the point is smaller than the circle's radius.
4. Check whether the distance between the point and the center of the circle is bigger than the circle's radius.

Game loop

The game loop is used to redraw the Canvas to present the latest game state. If we do not redraw the Canvas after changing the states, say the positions of the circles, we will not see the changes.

Clearing the Canvas

When we drag a circle, we redraw the Canvas. The problem is that shapes already drawn on the Canvas do not disappear automatically. We would keep adding new paths to the Canvas and finally mess up everything on it. The following screenshot shows what happens if we keep dragging the circles without clearing the Canvas on every redraw:

Since we have saved the entire game status in JavaScript, we can safely clear the whole Canvas and draw the updated lines and circles with the latest game status. To clear the Canvas, we use the clearRect function provided by the Canvas drawing API. The clearRect function clears a rectangular area defined by a clipping region. It accepts the following arguments:

context.clearRect(x, y, width, height)

x: the top-left point of the rectangular clipping region, on the x axis
y: the top-left point of the rectangular clipping region, on the y axis
width: the width of the rectangular region
height: the height of the rectangular region

The x and y values set the top-left position of the region to be cleared, and the width and height values define the size of the area. To clear the entire Canvas, we provide (0, 0) as the top-left position and the width and height of the Canvas to the clearRect function. The following code clears everything drawn on the Canvas:

ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

Pop quiz

Q1. Can we clear a portion of the Canvas by using the clearRect function?

1. Yes
2. No

Q2. Does the following code clear anything on the drawn Canvas?

ctx.clearRect(0, 0, ctx.canvas.width, 0);

1. Yes
2. No

Summary

You learned a lot in this article about drawing shapes and creating interaction with the new HTML5 canvas element and the drawing API. Specifically, you learned to draw circles and lines in the Canvas, and we added mouse events and dragging interaction with the paths drawn in the Canvas. Finally, we succeeded in developing the Untangle puzzle game.

Resources for Article:
Further resources on this subject:
Improving the Snake Game [article]
Playing with Particles [article]
Making Money with Your Game [article]
Subtitles – tracking the video progression

Packt
06 Jul 2015
10 min read
In this article by Roberto Ulloa, author of the book Kivy – Interactive Applications and Games in Python, Second Edition, we will learn how to use the progression of a video to display subtitles at the right moment.

Let's add subtitles to our application. We will do this in four simple steps:

1. Create a Subtitle widget (subtitle.kv), derived from the Label class, that will display the subtitles.
2. Place a Subtitle instance (video.kv) on top of the video widget.
3. Create a Subtitles class (subtitles.py) that will read and parse a subtitle file.
4. Track the Video progression (video.py) to display the corresponding subtitle.

Step 1 involves the creation of a new widget in the subtitle.kv file:

1. # File name: subtitle.kv
2. <Subtitle@Label>:
3.     halign: 'center'
4.     font_size: '20px'
5.     size: self.texture_size[0] + 20, self.texture_size[1] + 20
6.     y: 50
7.     bcolor: .1, .1, .1, 0
8.     canvas.before:
9.         Color:
10.            rgba: self.bcolor
11.        Rectangle:
12.            pos: self.pos
13.            size: self.size

There are two interesting elements in this code. The first one is the definition of the size property (line 5). We define it as 20 pixels bigger than the texture_size width and height. The texture_size property indicates the size of the text, determined by the font size and the text itself, and we use it to adjust the Subtitle widget's size to its content. The texture_size is a read-only property because its value is calculated from other parameters, such as the font size and the height available for text display. This means that we will read from this property but not write to it.

The second element is the creation of the bcolor property (line 7) to store a background color, and the binding of the rectangle's rgba color to it (line 10). The Label widget (like many other widgets) doesn't have a background color, and drawing a rectangle is the usual way to create such a feature. We add the bcolor property in order to change the color of the rectangle from outside the instance. We cannot directly modify the parameters of vertex instructions; however, we can create properties that control the parameters inside the vertex instructions.

Let's move on to Step 2. We need to add a Subtitle instance to our current Video widget in the video.kv file:

14. # File name: video.kv
15. ...
16. #:set _default_surl "http://www.ted.com/talks/subtitles/id/97/lang/en"
18. <Video>:
19.     surl: _default_surl
20.     slabel: _slabel
21.     ...
23.     Subtitle:
24.         id: _slabel
25.         x: (root.width - self.width)/2

We added another constant variable called _default_surl (line 16), which contains the URL of the subtitle file for the corresponding TED video. We set this value to the surl property (line 19), which we just created to store the subtitles' URL. We added the slabel property (line 20), which references the Subtitle instance through its ID (line 24). Then we made sure that the subtitle is horizontally centered (line 25).

In order to start Step 3 (parsing the subtitle file), we need to take a look at the format of the TED subtitles:

26. {
27.     "captions": [{
28.         "duration": 1976,
29.         "content": "When you have 21 minutes to speak,",
30.         "startOfParagraph": true,
31.         "startTime": 0
32.     }, ...

TED uses a very simple JSON format (https://en.wikipedia.org/wiki/JSON) with a list of captions. Each caption contains four keys, but we will only use duration, content, and startTime.
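Before writing the parser, the following minimal standalone sketch shows how these three keys decide whether a caption should be on screen at a given moment. The caption dictionary is copied from the JSON above, and is_active is a hypothetical helper used only for illustration:

# A minimal sketch, not part of the application code.
caption = {
    "duration": 1976,
    "content": "When you have 21 minutes to speak,",
    "startOfParagraph": True,
    "startTime": 0,
}

def is_active(caption, ms):
    # A caption is visible while the elapsed milliseconds fall inside
    # the window [startTime, startTime + duration].
    return caption["startTime"] <= ms <= caption["startTime"] + caption["duration"]

print(is_active(caption, 1000))  # True: 1 second into the talk
print(is_active(caption, 5000))  # False: this caption has already ended

This is exactly the window check that the next method of the Subtitles class below performs for every caption in the list.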
We need to parse this file, and luckily, Kivy provides a UrlRequest class (line 34) that will do most of the work for us. Here is the code for subtitles.py, which creates the Subtitles class:

33. # File name: subtitles.py
34. from kivy.network.urlrequest import UrlRequest

36. class Subtitles:

38.     def __init__(self, url):
39.         self.subtitles = []
40.         req = UrlRequest(url, self.got_subtitles)

42.     def got_subtitles(self, req, results):
43.         self.subtitles = results['captions']

45.     def next(self, secs):
46.         for sub in self.subtitles:
47.             ms = secs*1000 - 12000
48.             st = 'startTime'
49.             d = 'duration'
50.             if ms >= sub[st] and ms <= sub[st] + sub[d]:
51.                 return sub
52.         return None

The constructor of the Subtitles class receives a URL (line 38) as a parameter. It then instantiates the UrlRequest class (line 40), which sends the request. The first parameter of the class instantiation is the URL of the request, and the second is the method that is called when the result of the request is returned (downloaded). Once the request returns the result, the got_subtitles method is called (line 42). The UrlRequest extracts the JSON and places it in the second parameter of got_subtitles. All we had to do was put the captions in a class attribute, which we called subtitles (line 43).

The next method (line 45) receives the seconds (secs) as a parameter and traverses the loaded JSON dictionary in order to find the subtitle that belongs to that time. As soon as it finds one, the method returns it. We subtracted 12000 milliseconds (line 47, ms = secs*1000 - 12000) because the TED videos have an introduction of approximately 12 seconds before the talk starts.

Everything is ready for Step 4, in which we put the pieces together in order to see the subtitles working. Here are the modifications to the header of the video.py file:

53. # File name: video.py
54. ...
55. from kivy.properties import StringProperty
56. ...
57. from kivy.lang import Builder

59. Builder.load_file('subtitle.kv')

61. class Video(KivyVideo):
62.     image = ObjectProperty(None)
63.     surl = StringProperty(None)

We imported StringProperty (line 55) and added the corresponding surl property (line 63). We will use this property at the end of this chapter, when we can switch TED talks from the GUI. For now, we will just use the _default_surl defined in video.kv (line 16). We also loaded the subtitle.kv file (line 59).

Now, let's analyze the rest of the changes to the video.py file:

64.     ...
65.     def on_source(self, instance, value):
66.         self.color = (0,0,0,0)
67.         self.subs = Subtitles(self.surl)
68.         self.sub = None

70.     def on_position(self, instance, value):
71.         next = self.subs.next(value)
72.         if next is None:
73.             self.clear_subtitle()
74.         else:
75.             sub = self.sub
76.             st = 'startTime'
77.             if sub is None or sub[st] != next[st]:
78.                 self.display_subtitle(next)

80.     def clear_subtitle(self):
81.         if self.slabel.text != "":
82.             self.sub = None
83.             self.slabel.text = ""
84.             self.slabel.bcolor = (0.1, 0.1, 0.1, 0)

86.     def display_subtitle(self, sub):
87.         self.sub = sub
88.         self.slabel.text = sub['content']
89.         self.slabel.bcolor = (0.1, 0.1, 0.1, .8)
90. (...)
We introduced a few lines to the on_source method in order to initialize the subs attribute with a Subtitles instance (line 67), using the surl property, and to initialize the sub attribute that contains the currently displayed subtitle (line 68), if any.

Now, let's study how we keep track of the progression to display the corresponding subtitle. When the video plays inside the Video widget, the on_position event is triggered every second. Therefore, we implemented the logic to display the subtitles in the on_position method (lines 70 to 78). Each time the on_position method is called (each second), we ask the Subtitles instance for the next subtitle (line 71). If nothing is returned, we clear the subtitle with the clear_subtitle method (line 73). If there is a subtitle for the current second (line 74), we make sure that either no subtitle is currently displayed or that the returned subtitle is not the one already on screen (line 77). If the conditions are met, we display the subtitle using the display_subtitle method (line 78).

Notice that the clear_subtitle (lines 80 to 84) and display_subtitle (lines 86 to 89) methods use the bcolor property to hide and show the subtitle. This is another trick to make a widget invisible without removing it from its parent.

Let's take a look at the current result of our videos and subtitles in the following screenshot:

Summary

In this article, we discussed how to control a video and how to associate the subtitles element of the screen with it. We saw how the Video widget synchronizes the subtitles, which we receive in a JSON format file, with the progression of the video and its responsive control bar. We learned how to track the video's progression and display each subtitle at the right time.

Resources for Article:
Further resources on this subject:
Moving Further with NumPy Modules [article]
Learning Selenium Testing Tools with Python [article]
Python functions – Avoid repeating code [article]