How-To Tutorials - Game Development


Global Illumination

Packt
17 Jun 2015
16 min read
In this article by Volodymyr Gerasimov, the author of the book Building Levels in Unity, you will see the two types of lighting that you need to take into account if you want to create well-lit levels: direct and indirect. Direct light is light that comes straight from the source. Indirect light is created by light bouncing off the affected area at a certain angle with variable intensity. In the real world, the number of bounces is infinite, which is why we can still see areas that don't have light shining directly at them. In computer software, we don't have infinite computing power at our disposal, so we have to rely on various tricks to simulate realistic lighting at runtime. The process that simulates indirect lighting, light bouncing, reflections, and color bleeding is known as Global Illumination (GI).

Unity 5 is powered by one of the gaming industry's leading technologies for handling indirect lighting (radiosity), called Enlighten by Geomerics. Games such as Battlefield 3 and 4, Medal of Honor: Warfighter, Need for Speed: The Run, and Dragon Age: Inquisition are excellent examples of what this technology is capable of, and now all of that power is at your fingertips completely for free! Now, it's only appropriate to learn how to tame this new beast.

(For more resources related to this topic, see here.)

Preparing the environment

Realistic real-time lighting is just not feasible at our level of computing power, which forces us to invent tricks that simulate it as closely as possible; but just like with any trick, certain conditions need to be met for it to work properly and keep the viewer's eyes from exposing our clever deception. To demonstrate how to work with these limitations, we are going to construct a simple light setup for a small interior scene and talk about solutions to the problems as we go.

For this example, we will use the LightmappingInterior scene that can be found in the Chapter 7 folder in the Project window. It's a very simple interior and should take us no time to set up. The first step is to place the lights. We need to create two: a Directional light to imitate the moonlight coming through the crack in the dome, and a Point light for the fire burning in the goblet on the ceiling. Tune each light's Intensity, Range (in the Point light's case), and Color to your liking.

So far so good! We can see the direct lighting coming from the moonlight, but there is no trace of indirect lighting. Why is this happening? Does GI need to be enabled somehow for it to work? As a matter of fact, it does, and here comes the first limitation of Global Illumination: it only works on GameObjects that are marked as Static.

Static versus dynamic objects

Unity objects fall into one of two categories: static or dynamic. The differentiation is very simple: static objects don't move; they stay where they are at all times, play no animations, and engage in no interactions. Everything else is dynamic. By default, all objects in Unity are dynamic and can only be converted into static objects by checking the Static checkbox in the Inspector window.

See it for yourself: try to mark an object as static in Unity and attempt to move it around in Play mode. Does it work?
Global Illumination only works with static objects; this means that, before we go into Play mode, we need to be 100 percent sure that the objects that will cast and receive indirect light will keep doing so from their designated positions. But why is that, you may ask—isn't the whole purpose of Realtime GI to calculate indirect lighting at runtime? The answer is yes, but only to an extent. The technology behind this is called Precomputed Realtime GI; according to Unity's developers, it precomputes all possible bounces that the light can make and encodes them to be used at runtime. Essentially, it takes a static object and a light and answers the question: "If this light travels around, how will it bounce off the affected surfaces of the static object from every possible angle?"

During runtime, lights use this encoded data as instructions on how the light should bounce, instead of calculating it every frame. Having static objects can be beneficial in many other ways, such as pathfinding, but that's a story for another time.

To test this theory, let's mark the objects in the scene as Static, meaning they will not move (and can't be forced to move) by physics, code, or even the transformation tools (the latter is only true during Play mode). To do that, simply select the Pillar, Dome, WaterProNighttime, and Goblet GameObjects in the Hierarchy window and check the Static checkbox at the top-right corner of the Inspector window. Doing that will cause Unity to recalculate the light and encode the bouncing information. Once the process has finished (it should take no time at all), you can hit the Play button and move the light around. Notice that the bounce lighting changes as well, without any performance overhead.

Fixing the light coming from the crack

The moonlight inside the dome should be coming from the crack in its surface; however, if you rotate the directional light around, you'll notice that it simply ignores the concrete walls and freely shines through them. Naturally, that is incorrect behavior and we can't let it stay. We can clearly see through the dome from the outside as a result of one-sided normals. Earlier, the solution was to duplicate the faces and invert the normals; in this case, however, we actually don't mind seeing through the walls and only want to fix the lighting issue. To fix it, we go to the Mesh Renderer component of the Dome GameObject and select the Two Sided option from the drop-down menu of the Cast Shadows parameter. This ignores backface culling and allows us to cast shadows from both sides of the mesh, thus fixing the problem. In order to cast shadows, make sure that your directional light has its Shadow Type parameter set to either Hard Shadows or Soft Shadows.

Emission materials

Another way to light up the level is to utilize materials with Emission maps. Pillar_EmissionMaterial, applied to the Pillar GameObject, already has an Emission map assigned to it; all that is left is to crank up the parameter next to it to a number that gives a noticeable effect (let's say 3). Unfortunately, emissive materials are not lights, and precomputed GI cannot update indirect light bounces created by an emissive material; as a result, changing the material in Play mode will not trigger an update. Note that changes made to materials in Play mode are preserved in the Editor.

Shadows

An important byproduct of lighting is the shadows cast by affected objects.
No surprises here! Unity allows both dynamic and static objects to cast shadows, with different results based on the render settings. By default, all lights in Unity have shadows disabled. In order to enable shadows for a particular light, we need to set the Shadow Type parameter to either Hard Shadows or Soft Shadows in the Inspector window. Enabling shadows grants you access to three parameters:

Strength: This is the darkness of the shadows, from 0 to 1.

Resolution: This controls the resolution of the shadows. This parameter can either use the value set in Use Quality Settings or be selected individually from the drop-down menu.

Bias and Normal Bias: These are the shadow offsets. They are used to prevent an artifact known as shadow acne (pixelated shadows in lit areas); however, setting them too high can cause another artifact known as peter panning (disconnected shadows). The default values usually help us avoid both issues.

Unity uses a technique known as shadow mapping, which determines the objects that will be lit by assuming the light's perspective: every object the light sees directly is lit; every object it can't see should be in shadow. After rendering the light's perspective, Unity stores the depth of each surface in a shadow map. In cases where the shadow map resolution is low, some pixels can appear shaded when they shouldn't be (shadow acne), or a shadow can be missing where it's supposed to be (peter panning), if the offset is too high. Unity allows you to control which objects should receive or cast shadows by changing the Cast Shadows and Receive Shadows parameters in the Mesh Renderer component of a GameObject.

Lightmapping

Every year, more and more games are released with real-time rendering solutions that allow for more realistic-looking environments, at the price of the ever-growing computing power of modern PCs and consoles. However, due to the limited hardware capabilities of mobile platforms, it will still be a long time before we are ready to part ways with cheap and affordable techniques such as lightmapping.

Lightmapping is a technique for precomputing the brightness of surfaces, also known as baking, and storing it in a separate texture—a lightmap. Normally, to see lighting in an area, we need to calculate it at least 30 times per second (or more, based on fps requirements), which is not very cheap; with lightmapping, however, we can calculate lighting once and then apply it as a texture. This technique is suitable for static objects that artists know will never be moved; in a nutshell, the process involves creating a scene, setting up the lighting rig, and clicking Bake to get great lighting with minimal performance cost at runtime. To demonstrate the lightmapping process, we will take the scene and try to bake it using lightmapping.

Static versus dynamic lights

We've just talked about a way to guarantee that GameObjects will not move. But what about lights? Checking the Static checkbox on a light will not achieve much (unless you simply want to avoid the possibility of accidentally moving it). The problem at hand is that a light, being a component of an object, has a separate set of controls that allow it to be manipulated even if its holder is set to static. For that purpose, each light has a parameter that allows us to specify the role of the individual light and its contribution to the baking process; this parameter is called Baking.
There are three options available for it:

Realtime: This option excludes the light from the baking process. It is totally fine to use real-time lighting; precomputed GI will make sure that modern computers and consoles can handle it quite smoothly. However, it might cause issues if you are developing for mobile platforms, which require every bit of optimization to run at a stable frame rate. There are ways to fake real-time lighting with much cheaper options. The main thing to consider is that the number of Realtime lights should be kept to a minimum if you are going for maximum optimization. Realtime lights affect both static and dynamic objects.

Baked: This option includes the light in the baking process. However, there is a catch: only static objects will receive light from it. This is self-explanatory—if we want dynamic objects to receive lighting, we need to recalculate it every time the position of an object changes, which is what Realtime lighting does. Baked lights are cheap: the lighting is calculated once, all of the lighting information is stored on disk, and it is simply read from there, so no further recalculation is required during runtime. Baked lights are mostly used as small situational lights that won't have a significant effect on dynamic objects.

Mixed: This one is a combination of the previous two options. It bakes the light into the static objects and affects the dynamic objects as they pass by. Think of street lights: you want passing cars to be affected, but you have no need to calculate the lighting for the static environment in realtime. Naturally, we can't have dynamic objects moving around the level unlit, no matter how much we'd like to save on computing power. Mixed gives us the benefit of baked lighting on the static objects while still affecting the dynamic objects at runtime.

The first step we are going to take is changing the Baking parameter of our lights from Realtime to Baked and enabling Soft Shadows. You shouldn't notice any significant difference, except for the extra shadows appearing. The final result isn't too different from the real-time lighting: performance is much better, but it lacks support for dynamic objects.

Dynamic shadows versus static shadows

One of the things that confuses people starting to work with shadows in Unity is how shadows are cast by static and dynamic objects under different Baking settings on the light source. This is one of those things that you simply need to memorize and keep in mind when planning the lighting in a scene. Let's explore how the different Baking options affect shadow casting between different combinations of static and dynamic objects.

As you can see, real-time lighting handles everything pretty well; all the objects cast shadows onto each other and everything works as intended. There is even color bleeding happening between the two static objects on the right.

With Baked lighting, the result isn't that inspiring. Let's break it down:

Dynamic objects are not lit: If an object is subject to change at runtime, we can't preemptively bake it into the lightmap; therefore, lights that are set to Baked will simply ignore it.

Shadows are only cast by static objects onto static objects: This follows from the previous statement—if we can't be sure that an object won't change, we can't safely bake its shadows into the shadow map.
With Mixed, we get a similar result to real-time lighting, with one exception: dynamic objects do not cast shadows onto static objects. The reverse does work—static objects cast shadows onto dynamic objects just fine—so what's the catch? Each object gets individual treatment from the Mixed light: objects that are static are treated as if they were lit by a Baked light, while dynamic objects are lit in realtime. In other words, a shadow cast onto a dynamic object is calculated in realtime, while a shadow cast onto a static object is baked, and we can't bake a shadow that is cast by an object that is subject to change. This was never an issue with real-time lighting, since we were calculating the shadows in realtime regardless of what they were cast by or onto. Again, this is simply a scenario that you need to memorize.

Lighting options

The Lighting window has three tabs: Object, Scene, and Lightmap. For now, we will focus on the first one. The main content of the Object tab is information about the objects that are currently selected. This gives us quick access to a list of controls for tweaking the selected objects for lightmapping and GI. You can switch between object types with the help of the Scene Filter at the top; this is a shortcut for filtering objects in the Hierarchy window (it will not filter the selected GameObjects, but everything in the Hierarchy window).

All GameObjects need to be set to Static in order to be affected by the lightmapping process; this is why the Lightmap Static checkbox is the first in the list for Mesh Renderers. If you haven't set the object to static in the Inspector window, checking the Lightmap Static box will do just that.

The Scale in Lightmap parameter controls the lightmap resolution. The greater the value, the bigger the resolution given to the object's lightmap, resulting in better lighting effects and shadows. Setting the parameter to 0 excludes the object from lightmapping. Unless you are trying to fix lighting artifacts on the object, or are going for maximum optimization, you shouldn't touch this parameter; there is a better way to adjust the lightmap resolution for all objects in the scene, since Scale in Lightmap scales relative to a global value.

The rest of the parameters are very situational and quite advanced; they deal with UVs, extend the effect of GI on the GameObject, and give detailed information about the lightmap.

For lights, we have the Baking parameter with three options: Realtime, Baked, or Mixed. Naturally, if you want the light to be used in lightmapping, Realtime is not an option, so you should pick Baked or Mixed. Color and Intensity are referenced from the Inspector window and can be adjusted in either place. Baked Shadows allows us to choose the shadow type that will be baked (Hard, Soft, Off).

Summary

Lighting is a difficult process that is deceptively easy to learn but hard to master. In Unity, lighting isn't without its issues. Attempting to apply real-world logic to 3D rendering will result in a direct confrontation with the limitations posed by an imperfect simulation. In order to solve the issues that may arise, one must first understand what might be causing them, isolate the problem, and then attempt to find a solution. Alas, there are still a lot of topics left uncovered here that are outside the realm of an introduction.
If you wish to learn more about lighting, I would point you again to the official documentation and developer blogs, where you'll find a lot of useful information, tons of theory, practical recommendations, and an in-depth look at all the light elements discussed.

Resources for Article:

Further resources on this subject:

Learning NGUI for Unity [article]
Saying Hello to Unity and Android [article]
Components in Unity [article]

Getting Started with Multiplayer Game Programming

Packt
04 Jun 2015
33 min read
In this article by Rodrigo Silveira, author of the book Multiplayer Gaming with HTML5 Game Development, if you're reading this, chances are pretty good that you are already a game developer. That being the case, you already know just how exciting it is to program your own games, either professionally or as a highly gratifying, if time-consuming, hobby. Now you're ready to take your game programming skills to the next level—that is, you're ready to implement multiplayer functionality in your JavaScript-based games.

(For more resources related to this topic, see here.)

In case you have already set out to create multiplayer games for the Open Web Platform using HTML5 and JavaScript, you may have come to realize that a personal desktop computer, laptop, or mobile device is not particularly the most appropriate device to share with another human player for games in which two or more players share the same game world at the same time. Therefore, what is needed in order to create exciting multiplayer games with JavaScript is some form of networking technology. We will be discussing the following principles and concepts:

The basics of networking and network programming paradigms
Socket programming with HTML5
Programming a game server and game clients
Turn-based multiplayer games

Understanding the basics of networking

It is said that one cannot program games that make use of networking without first understanding all about the discipline of computer networking and network programming. Although a deep understanding of any topic can only benefit the person working on it, I don't believe you must know everything there is to know about game networking in order to program some pretty fun and engaging multiplayer games. Saying that is the case is like saying that one needs to be a scholar of the Spanish language in order to cook a simple burrito. Thus, let us take a look at the most basic and fundamental concepts of networking. After you finish reading this article, you will know enough about computer networking to get started, and you will feel comfortable adding multiplayer aspects to your games.

One thing to keep in mind is that, even though networked games are not nearly as old as single-player games, computer networking is actually a very old and well-studied subject. Some of the earliest computer network systems date back to the 1950s. Though some of the techniques have improved over the years, the basic idea remains the same: two or more computers are connected together to establish communication between the machines. By communication, I mean data exchange, such as sending messages back and forth between the machines, or one machine only sending data and the other only receiving it.

With this brief introduction to the concept of networking, you are now grounded in the subject—enough to know what is required to network your games: two or more computers that talk to each other as close to real time as possible. By now, it should be clear how this simple concept makes it possible to connect multiple players to the same game world. In essence, we need a way to share the global game data among all the players who are connected to the game session, and then continue to update each player about every other player. There are several techniques commonly used to achieve this, but the two most common approaches are peer-to-peer and client-server.
Both techniques present different opportunities, with their own advantages and disadvantages. In general, neither is strictly better than the other, but different situations and use cases may be better suited to one or the other.

Peer-to-peer networking

A simple way to connect players to the same virtual game world is through the peer-to-peer architecture. Although the name might suggest that only two peers ("nodes") are involved, by definition a peer-to-peer network system is one in which two or more nodes are connected directly to each other, without a centralized system orchestrating the connection or information exchange. In a typical peer-to-peer setup, each peer serves the same function as every other one—that is, they all consume the same data and share whatever data they produce so that the others can stay synchronized.

In the case of a peer-to-peer game, we can illustrate this architecture with a simple game of Tic-tac-toe. Once both players have established a connection between themselves, whoever is starting the game makes a move by marking a cell on the game board. This information is relayed across the wire to the other peer, who is now aware of the decision made by his or her opponent and can thus update their own game world. Once the second player receives the game's latest state resulting from the first player's move, the second player can make a move of their own by marking an available space on the board. This information is then copied over to the first player, who can update their own world and continue the process by making the next desired move. The process goes on until one of the peers disconnects or the game ends because some condition based on the game's own business logic is met. In the case of Tic-tac-toe, the game would end once one of the players has marked three spaces on the board forming a straight line, or once all nine cells are filled without either player managing to connect three cells in a straight path. A minimal sketch of this move exchange is shown below.
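As a concrete and deliberately simplified illustration, the following sketch relays Tic-tac-toe moves between two peers. It assumes that channel is an already-established two-way connection between the peers (for example, a WebRTC DataChannel) and that applyMove updates the local board; both of those names, and the turn-tracking logic, are hypothetical stand-ins rather than code from the book.

    var myTurn = true; // true for the peer who starts the game

    function makeMove(row, col) {
        if (!myTurn) return;                        // not our turn yet
        applyMove(row, col, 'X');                   // update our own game world
        channel.send(JSON.stringify({ row: row, col: col })); // relay to the peer
        myTurn = false;                             // wait for the opponent
    }

    channel.onmessage = function(event) {
        var move = JSON.parse(event.data);          // the opponent's decision
        applyMove(move.row, move.col, 'O');         // update our copy of the world
        myTurn = true;                              // now we may move again
    };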
Some of the benefits of peer-to-peer networked games are as follows:

Fast data transmission: The data goes directly to its intended target. In other architectures, the data might go to some centralized node first, and then the central node (or "server") contacts the other peer, sending the necessary updates.

Simpler setup: You only need to think about one instance of your game that, generally speaking, handles its own input, sends its input to the other connected peers, and handles their output as input for its own system. This can be especially handy in turn-based games—for example, most board games such as Tic-tac-toe.

More reliability: One peer going offline typically won't affect any of the other peers. Admittedly, in the simple case of a two-player game, if one of the players is unable to continue, the game will likely cease to be playable. Imagine, though, that the game in question has dozens or hundreds of connected peers. If a handful of them suddenly lose their Internet connection, the others can continue to play. By contrast, if a server is connecting all the nodes and the server goes down, none of the players will know how to talk to each other, and nobody will know what is going on.

On the other hand, some of the more obvious drawbacks of the peer-to-peer architecture are as follows:

Incoming data cannot be trusted: You don't know for sure whether or not the sender modified the data. Data that is input into a game server suffers from the same challenge, but once that data is validated and broadcast to all the other peers, you can be more confident that the data each peer receives from the server has at least been sanitized and verified, and is more credible.

Fault tolerance can be very low: As noted above, if enough players share the game world, one or more crashes won't make the game unplayable for the rest of the peers. However, in the many cases where a player suddenly crashing out of the game does negatively affect the rest of the players, a server-based setup can recover from the crash far more easily.

Data duplication when broadcasting to other peers: Imagine that your game is a simple 2D side scroller, and many other players share that game world with you. Every time one of the players moves to the right, you receive the new (x, y) coordinates from that player, and you're able to update your own game world. Now, imagine that you move your own player to the right by a few pixels; you would have to send that data out to every other node in the system.

Overall, peer-to-peer is a very powerful networking architecture and is still widely used by many games in the industry. Since current peer-to-peer web technologies are still in their infancy, however, most JavaScript-powered games today do not make use of peer-to-peer networking. For this reason, and for others that should become apparent soon, we will focus almost exclusively on the other popular networking paradigm, namely the client-server architecture.

Client-server networking

The idea behind the client-server networking architecture is very simple. If you squint your eyes hard enough, you can almost see a peer-to-peer graph. The most obvious difference is that, instead of every node being an equal peer, one of the nodes is special: instead of every node connecting to every other node, every node (client) connects to a main centralized node called the server.

While the concept of a client-server network seems clear enough, perhaps a simple metaphor will make it easier to understand the role of each type of node, as well as differentiate it from peer-to-peer. In a peer-to-peer network, you can think of a group of friends (peers) having a conversation at a party. They all have access to all the other peers involved in the conversation and can talk to them directly. A client-server network, on the other hand, can be viewed as a group of friends having dinner at a restaurant. If a client of the restaurant wishes to order a certain item from the menu, he or she must talk to the waiter, who is the only person in that group with access to the desired products and the ability to serve them to the clients.

In short, the server is in charge of providing data and services to one or more clients. In the context of game development, the most common scenario is two or more clients connecting to the same server; the server keeps track of the game as well as the distributed players. Thus, if two players are to exchange information that is only pertinent to the two of them, the communication goes from the first player, through the server, and ends up at the other end with the second player.

Following the example of the two players involved in a game of Tic-tac-toe, we can see how similar the flow of events is in a client-server model.
Again, the main difference is that the players are unaware of each other and only know what the server tells them. While you can very easily mimic a peer-to-peer model by using the server merely to connect the two players, most often the server is used much more actively than that. There are two ways to engage the server in a networked game, namely in an authoritative or a non-authoritative way. That is to say, you can have the game's logic enforced strictly on the server, or you can have the clients handle the game logic, input validation, and so on. Today, most games using the client-server architecture actually use a hybrid of the two (authoritative and non-authoritative servers). For all intents and purposes, however, the server's purpose in life is to receive input from each of the clients and distribute that input throughout the pool of connected clients.

Now, regardless of whether you decide to go with an authoritative server or a non-authoritative one, you will notice that one of the challenges of a client-server game is that you need to program both ends of the stack. You will have to do this even if your clients do nothing more than take input from the user, forward it to the server, and render whatever data they receive from the server; even if your game server does nothing more than forward the input it receives from each client to every other client, you will still need to write a game client and a game server. We will discuss game clients and servers later. For now, all we really need to know is that these two components are what set this networking model apart from peer-to-peer.

Some of the benefits of client-server networked games are as follows:

Separation of concerns: If you know anything about software development, you know that this is something you should always aim for. That is, good, maintainable software is written as discrete components, each of which does one "thing" and does it well. Writing individual specialized components lets you focus on performing one task at a time, making your game easier to design, code, test, reason about, and maintain.

Centralization: While this can be argued both for and against, having one central place through which all communication must flow makes it easier to manage that communication, enforce any required rules, control access, and so forth.

Less work for the client: Instead of having a client (peer) in charge of taking input from the user as well as from other peers, validating all the input, sharing data among the other peers, rendering the game, and so on, the client can focus on only a few of these things, allowing the server to offload some of the work. This is particularly handy when we talk about mobile gaming, where subtle divisions of labor can impact the overall player experience. For example, imagine a game where 10 players are engaged in the same game world. In a peer-to-peer setup, every time one player takes an action, he or she needs to send that action to nine other players (in other words, nine network calls, boiling down to more mobile data usage). In a client-server configuration, on the other hand, a player only needs to send his or her action to one node—the server—which is then responsible for sending that data to the remaining nine players, as sketched below.
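The relay at the heart of that last point takes only a few lines of server-side code. In this hypothetical sketch, clients is an array of the server's connected sockets; the name and the surrounding server framework are assumptions for illustration, not something the article prescribes.

    // Forward one player's action to every other connected player.
    function broadcast(sender, message) {
        clients.forEach(function(client) {
            if (client !== sender) {
                client.send(message); // one incoming call fans out to the other nine
            }
        });
    }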
Common drawbacks of client-server architectures, whether or not the server is authoritative, are as follows:

Communication takes longer to propagate: In the very best possible scenario imaginable, every message sent from the first player to the second player takes twice as long to be delivered as compared to a peer-to-peer connection: the message is first sent from the first player to the server, and then from the server to the second player. There are many techniques used today to mitigate the latency problem in this scenario, some of which we will discuss in much more depth later, but the underlying dilemma will always be there.

More complexity due to more moving parts: It doesn't really matter how you slice the pizza; the more code you need to write (and trust me, when you build two separate modules for a game, you will write more code), the bigger your mental model will have to be. While much of your code can be reused between the client and the server (especially if you use well-established programming techniques, such as object-oriented programming), at the end of the day you need to manage a greater level of complexity.

Single point of failure and network congestion: Until now, we have mostly discussed the case where only a handful of players participate in the same game. However, the more common case is that many groups of players play different games at the same time. Using the same example of the two-player game of Tic-tac-toe, imagine that there are thousands of players facing each other in individual games. In a peer-to-peer setup, once a couple of players have directly paired off, it is as though there are no other players enjoying that game; the only thing that can keep these two players from continuing their game is their own connection with each other. On the other hand, if those same thousands of players are connected to each other through a server sitting between each pair, then two paired-off players might notice severe delays between messages simply because the server is busy handling all of the messages from and to all of the other people playing isolated games. Worse yet, these two players now need to worry not only about maintaining their own connection to the server, but also about the server's connection between them and their opponent remaining active.

All in all, many of the challenges involved in client-server networking are well studied and understood, and many of the problems you're likely to face during your multiplayer game development will already have been solved by someone else. Client-server is a very popular and powerful game networking model, and the required technology, which is available to us through HTML5 and JavaScript, is well developed and widely supported.

Networking protocols – UDP and TCP

In discussing some of the ways in which your players can talk to each other across a network, we have so far only skimmed over how that communication is actually done. Let us then describe what protocols are and how they apply to networking and, more importantly, to multiplayer game development. The word protocol can be defined as a set of conventions or a detailed plan of a procedure [Def. 3, 4. (n.d.). In Merriam-Webster Online, retrieved February 12, 2015, from http://www.merriam-webster.com/dictionary/protocol]. In computer networking, a protocol describes to the receiver of a message how the data is organized so that it can be decoded.

For example, imagine that you have a multiplayer beat 'em up game, and you want to tell the game server that your player just issued a kick command and moved 3 units to the left. What exactly do you send to the server? Do you send a string with the value "kick", followed by the number 3? Or do you send the number first, followed by a capitalized letter "K", indicating that the action taken was a kick? The point is that, without a well-understood and agreed-upon protocol, it is impossible to communicate with another computer successfully and predictably.
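One simple convention—shown purely as a hypothetical illustration, not as a format the article prescribes—is for both sides to agree that every message is a JSON object with an action field plus whatever parameters that action needs. The bare send call below stands in for whatever transmission mechanism is actually in use.

    // Sender: encode the agreed-upon message structure.
    var message = {
        action: 'kick',
        moveX: -3               // 3 units to the left
    };
    send(JSON.stringify(message));

    // Receiver: decode it by the same convention.
    function onData(data) {
        var msg = JSON.parse(data);
        if (msg.action === 'kick') {
            // play the kick animation, shift the fighter by msg.moveX
        }
    }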
The two networking protocols that we'll discuss in this section, which are also the two most widely used protocols in multiplayer networked games, are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols provide communication services between clients in a network system. In simple terms, they are protocols that allow us to send and receive packets of data in such a way that the data can be identified and interpreted in a predictable way.

When data is sent through TCP, the application running on the source machine first establishes a connection with the destination machine. Once a connection has been established, data is transmitted in packets in such a way that the receiving application can put the data back together in the appropriate order. TCP also provides built-in error checking, so that if a packet is lost, the target application can notify the sender, and any missing packets are sent again until the entire message has been received. In short, TCP is a connection-based protocol that guarantees the delivery of the full data in the correct order. Use cases where this behavior is desirable are all around us. When you download a game from a web server, for example, you want to make sure that the data comes in correctly; you want to be sure that your game assets will be properly and completely downloaded before your users start playing your game. While this guarantee of delivery may sound very reassuring, it also makes TCP a slower process—and, as we'll see shortly, speed may sometimes matter more than knowing that the data will arrive in full.

In contrast, UDP transmits packets of data (called datagrams) without using a pre-established connection. The main goal of the protocol is to be a very fast and frictionless way of sending data towards some target application. In essence, you can think of UDP as the brave employees who dress up as their company's mascot and stand outside the store waving a large banner, in the hope that at least some of the people driving by will see them and give them their business.

While UDP may at first seem like a reckless protocol, the use cases that make it so desirable and effective include the many situations where you care more about speed than about occasionally missing packets, getting duplicate packets, or getting them out of order. You may also want to choose UDP over TCP when you don't care about a reply from the receiver. With TCP, whether or not you need some form of confirmation or reply from the receiver of your message, it will still take the time to reply back to you, at least acknowledging that the message was received. Sometimes, you may not care whether or not the server received the data. A more concrete example of a scenario where UDP is a far better choice than TCP is a heartbeat from the client letting the server know whether the player is still there.
If you need to let your server know that the session is still active every so often, and you don't care if one of the heartbeats gets lost every now and again, then it would be wise to use UDP (a sketch of such a heartbeat appears at the end of this section). In short, for any data that is not mission-critical and that you can afford to lose, UDP might be the best option.

In closing, keep in mind that, just as peer-to-peer and client-server models can be built side by side, and just as your game server can be a hybrid of authoritative and non-authoritative, there is absolutely no reason why your multiplayer games should use only TCP or only UDP. Use whichever protocol a particular situation calls for.
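Here is the heartbeat idea in code. Browsers don't expose raw UDP sockets to JavaScript, so this sketch uses Node.js's built-in dgram module instead; the host, port, and five-second interval are made-up values for illustration.

    var dgram = require('dgram');
    var socket = dgram.createSocket('udp4');

    setInterval(function() {
        var beat = Buffer.from(JSON.stringify({ type: 'heartbeat', sent: Date.now() }));
        // Fire and forget: if this datagram is lost, another follows in 5 seconds.
        socket.send(beat, 0, beat.length, 7777, 'game-server.example.com');
    }, 5000);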
Network sockets

There is one other protocol that we'll cover very briefly, but only so that you can see the need for network sockets in game development. As a JavaScript programmer, you are doubtless familiar with the Hypertext Transfer Protocol (HTTP). This is the application-layer protocol that web browsers use to fetch your games from a web server. While HTTP is a great protocol for reliably retrieving documents from web servers, it was not designed to be used in real-time games, and it is therefore not ideal for this purpose.

The way HTTP works is very simple: a client sends a request to a server, which then returns a response to the client. The response includes a completion status code indicating to the client that the request is either in process, needs to be forwarded to another address, or has finished successfully or erroneously. There are a handful of things to note about HTTP that make it clear that a better protocol is needed for real-time communication between a client and server.

Firstly, after each response is received by the requester, the connection is closed. Thus, before making each and every request, a new connection must be established with the server. Most of the time, an HTTP request is sent through TCP, which, as we've seen, can be slow, relatively speaking.

Secondly, HTTP is by design a stateless protocol. This means that, every time you request a resource from a server, the server has no idea who you are or what the context of the request is. (It doesn't know whether this is your first request ever or whether you're a frequent requester.) A common solution to this problem is to include a unique string with every HTTP request that the server keeps track of, which lets it maintain information about each individual client on an ongoing basis. You may recognize this as a standard session. The major downside of this solution, at least with regard to real-time gaming, is that mapping a session cookie to the user's session takes additional time.

Finally, the major factor that makes HTTP unsuitable for multiplayer game programming is that communication is one-way: only the client can connect to the server, and the server replies back through the same connection. In other words, the game client can tell the game server that a punch command has been entered by the user, but the game server cannot push that information along to the other clients. Think of it like a vending machine. As clients of the machine, we can request specific items that we wish to buy. We formalize this request by inserting money into the vending machine, and then we press the appropriate button. Under no circumstance will a vending machine issue commands to a person standing nearby. That would be like a vending machine dispensing food first and expecting people to deposit the money afterwards.

The answer to this lack of functionality in HTTP is pretty straightforward. A network socket is an endpoint in a connection that allows for two-way communication between the client and the server. Think of it more like a telephone call than a vending machine: during a telephone call, either party can say whatever they want at any given time. Most importantly, the connection between both parties remains open throughout the duration of the conversation, making the communication process highly efficient.

WebSocket is a protocol built on top of TCP that allows web-based applications to have two-way communication with a server. The way a WebSocket is created consists of several steps, including a protocol upgrade from HTTP to WebSocket. Thankfully, all of the heavy lifting is done behind the scenes by the browser and JavaScript. For now, the key takeaway is that with a TCP socket (yes, there are other types of socket, including UDP sockets), we can reliably communicate with a server, and the server can talk back to us as needed.

Socket programming in JavaScript

Let's now bring the conversation about network connections, protocols, and sockets to a close by talking about the tools—JavaScript and WebSockets—that bring everything together and allow us to program awesome multiplayer games in the language of the open Web.

The WebSocket protocol

Modern browsers and other JavaScript runtime environments have implemented the WebSocket protocol in JavaScript. Don't make the mistake of thinking that just because we can create WebSocket objects in JavaScript, WebSockets are part of JavaScript. The standard that defines the WebSocket protocol is language-agnostic and can be implemented in any programming language. Thus, before you deploy JavaScript games that make use of WebSockets, ensure that the environment that will run your game uses an implementation of the ECMA standard that also implements WebSockets. In other words, not all browsers will know what to do when you ask for a WebSocket connection.

For the most part, though, the latest versions (as of this writing) of today's most popular browsers—namely Google Chrome, Safari, Mozilla Firefox, Opera, and Internet Explorer—implement the current revision of RFC 6455. Previous versions of WebSockets (such as protocol versions 76, 7, or 10) are slowly being deprecated and have been removed from some of the previously mentioned browsers.

Probably the most confusing thing about the WebSocket protocol is the way each version of it is named. The very first draft (which dates back to 2010) was named draft-hixie-thewebsocketprotocol-75. The next version was named draft-hixie-thewebsocketprotocol-76. Some people refer to these versions as 75 and 76, which can be quite confusing, especially since the fourth version of the protocol is named draft-ietf-hybi-thewebsocketprotocol-07, which is referred to in the draft as WebSocket Version 7. The current version of the protocol (RFC 6455) is 13.

Let us take a quick look at the programming interface (API) that we'll use within our JavaScript code to interact with a WebSocket server. Keep in mind that we'll need to write both the JavaScript clients that use WebSockets to consume data and the WebSocket server, which uses WebSockets but plays the role of the server. The difference between the two will become apparent as we go over some examples.
Creating a client-side WebSocket

The following code snippet creates a new object of type WebSocket, connecting the client to some backend server. The constructor takes two parameters: the first is required and represents the URL where the WebSocket server is running and expecting connections; the second is an optional list of sub-protocols that the server may implement.

    var socket = new WebSocket('ws://www.game-domain.com');

Although this one line of code may seem simple and harmless enough, here are a few things to keep in mind:

We are no longer in HTTP territory. The address of your WebSocket server now starts with ws:// instead of http://. Similarly, when we work with secure (encrypted) sockets, we specify the server's URL as wss://, just like https://.

It may seem obvious, but a common pitfall for those getting started with WebSockets is that, before you can establish a connection with the previous code, you need a WebSocket server running at that domain.

WebSockets implement the same-origin security model. As you may have already seen with other HTML5 features, the same-origin policy states that you can only access a resource through JavaScript if both the client and the server are in the same domain. For those not familiar with the same-domain (also known as same-origin) policy, the three things that constitute a domain in this context are the protocol, host, and port of the resource being accessed. In the previous example, the protocol, host, and port number are, respectively, ws (and not wss, http, or ssh), www.game-domain.com (any other sub-domain, such as game-domain.com or beta.game-domain.com, would violate the same-origin policy), and 80 (by default, WebSocket connects to port 80, or to port 443 when it uses wss). Since the server in the previous example binds to port 80, we don't need to specify the port number explicitly. However, had the server been configured to run on a different port, say 2667, then the URL string would need to include a colon followed by the port number at the end of the host name, as in ws://www.game-domain.com:2667.

As with everything else in JavaScript, WebSocket instances attempt to connect to the backend server asynchronously. Thus, you should not attempt to issue commands on your newly created socket until you're sure that the server has connected; otherwise, JavaScript will throw an error that may crash your entire game. This can be done by registering a callback function on the socket's onopen event, as follows:

    var socket = new WebSocket('ws://www.game-domain.com');
    socket.onopen = function(event) {
        // socket ready to send and receive data
    };

Once the socket is ready to send and receive data, you can send messages to the server by calling the socket object's send method, which takes a string as the message to be sent.

    // Assuming a connection was previously established
    socket.send('Hello, WebSocket world!');

Most often, however, you will want to send more meaningful data to the server, such as objects, arrays, and other data structures that carry more meaning on their own. In these cases, we can simply serialize our data as JSON strings.

    var player = {
        nickname: 'Juju',
        team: 'Blue'
    };

    socket.send(JSON.stringify(player));

Now, the server can receive that message and work with it as the same object structure that the client sent, by running it through the parse method of the JSON object.
    var player = JSON.parse(event.data);
    player.nickname === 'Juju'; // true
    player.team === 'Blue';     // true
    player.id === undefined;    // true

If you look at the previous example closely, you will notice that we extract the message sent through the socket from the data attribute of some event object. Where did that event object come from, you ask? Good question! The way we receive messages from the socket is the same on both the client and server sides. We simply register a callback function on the socket's onmessage event, and the callback is invoked whenever a new message is received. The argument passed into the callback function contains an attribute named data, which holds the raw string with the message that was sent.

    socket.onmessage = function(event) {
        event instanceof MessageEvent; // true

        var msg = JSON.parse(event.data);
    };

Other events on the socket object on which you can register callbacks include onerror, which is triggered whenever an error related to the socket occurs, and onclose, which is triggered whenever the state of the socket changes to CLOSED—in other words, whenever the server closes the connection with the client for any reason, or the connected client closes its connection.

As mentioned previously, the socket object also has a property called readyState, which behaves in a similar manner to the equally named attribute in AJAX objects (or, more appropriately, XMLHttpRequest objects). This attribute represents the current state of the connection and can have one of four values at any point in time. This value is an unsigned integer between 0 and 3, inclusive. For clarity, there are four accompanying constants on the WebSocket class that map to the four numerical values of the instance's readyState attribute:

WebSocket.CONNECTING: This has a value of 0 and means that the connection between the client and the server has not yet been established.

WebSocket.OPEN: This has a value of 1 and means that the connection between the client and the server is open and ready for use. Whenever the object's readyState attribute changes from CONNECTING to OPEN, which will only happen once in the object's life cycle, the onopen callback is invoked.

WebSocket.CLOSING: This has a value of 2 and means that the connection is being closed.

WebSocket.CLOSED: This has a value of 3 and means that the connection is now closed (or could not be opened to begin with).

Once readyState has changed to a new value, it will never return to a previous state in the same instance of the socket object. Thus, if a socket object is CLOSING or has already become CLOSED, it will never become OPEN again. In this case, you would need a new instance of WebSocket if you wanted to continue to communicate with the server.

To summarize, let us bring together the simple WebSocket API features discussed previously and create a convenient function that simplifies data serialization and error checking when communicating with the game server:

    function sendMsg(socket, data) {
        if (socket.readyState === WebSocket.OPEN) {
            socket.send(JSON.stringify(data));
            return true;
        }

        return false;
    }
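The snippets so far cover only the browser side of the socket. For completeness, here is a minimal sketch of the other end: a game server that relays each incoming message to every other connected client. It uses the third-party ws package for Node.js; that library choice, and the port (borrowed from the earlier example), are assumptions for illustration rather than something the article prescribes.

    // Run with: npm install ws && node server.js
    var WebSocketServer = require('ws').Server;
    var wss = new WebSocketServer({ port: 2667 });

    wss.on('connection', function(socket) {
        socket.on('message', function(data) {
            var msg = JSON.parse(data);   // same JSON convention as the client
            // Relay the message to every other connected client.
            wss.clients.forEach(function(client) {
                if (client !== socket) {
                    client.send(JSON.stringify(msg));
                }
            });
        });
    });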
Game clients

Earlier, we talked about the architecture of a multiplayer game based on the client-server pattern. Since this is the approach we will take for the games we'll be developing, let us define some of the main roles that the game client will fulfill. At a high level, a game client is the interface between the human player and the rest of the game universe (which includes the game server and the other human players connected to it). Thus, the game client is in charge of taking input from the player, communicating it to the server, receiving any further instructions and information from the server, and then rendering the final output to the human player. Depending on the type of game server used, the client can be more sophisticated than a simple input application that renders static data received from the server. For example, the client could very well simulate what the game server will do and present the result of this simulation to the user while the server performs the real calculations and reports the results back to the client. The biggest selling point of this technique is that the game seems a lot more dynamic and real-time to the user, since the client responds to input almost instantly.

Game servers

The game server is primarily responsible for connecting all the players to the same game world and keeping the communication going between them. However, as you will soon realize, there may be cases where you will want the server to be more sophisticated than a simple routing application. For example, just because one of the players tells the server to inform the other participants that the game is over and that the player sending the message is the winner, we may still want to confirm the information before deciding that the game is in fact over.

With this idea in mind, we can label a game server as one of two kinds: authoritative or non-authoritative. In an authoritative game server, the game's logic is actually running in memory at all times (although it normally doesn't render any graphical output the way the game clients certainly will). As each client reports information to the server by sending messages through its corresponding socket, the server updates the current game state and sends the updates back to all of the players, including the original sender. This way, we can be more certain that any data coming from the server has been verified and is accurate.

In a non-authoritative server, the clients take on a much more involved role in enforcing the game logic, which places a lot more trust in each client. As suggested previously, we can take the best of both worlds and mix the two techniques: have a strictly authoritative server, but clients that are smart and can do some of the work on their own. Since the server has the ultimate say in the game, however, any messages the clients receive from the server are considered the ultimate truth and supersede any conclusions a client came to on its own.

Summary

Overall, we discussed the basics of networking and network programming paradigms. We saw how WebSockets make it possible to develop real-time multiplayer games in HTML5. Finally, we implemented a simple game client and game server using widely supported web technologies and built a fun game of Tic-tac-toe.

Resources for Article:

Further resources on this subject:

HTML5 Game Development – A Ball-shooting Machine with Physics Engine [article]
Creating different font files and using web fonts [article]
HTML5 Canvas [article]

Getting Started with OpenGL ES 3.0 Using GLSL 3.0

Packt
04 Jun 2015
27 min read
In this article by Parminder Singh, author of OpenGL ES 3.0 Cookbook, we will program shaders in OpenGL ES Shading Language 3.0, load and compile a shader program, link a shader program, check for errors in OpenGL ES 3.0, use per-vertex attributes to send data to a shader, use uniform variables to send data to a shader, and program the OpenGL ES 3.0 Hello World Triangle.

(For more resources related to this topic, see here.)

OpenGL ES 3.0 stands for Open Graphics Library for Embedded Systems version 3.0. It is a set of standard API specifications established by the Khronos Group, an association of members and organizations focused on producing open standards for royalty-free APIs. The OpenGL ES 3.0 specifications were publicly released in August 2012. These specifications are backward compatible with OpenGL ES 2.0, a well-known de facto standard for rendering 2D and 3D graphics on embedded systems. Embedded operating systems such as Android, iOS, BlackBerry, Bada, Windows, and many others support OpenGL ES.

OpenGL ES 3.0 is a programmable pipeline. A pipeline is a set of events that occur in a predefined fixed sequence, from the moment input data is given to the graphics engine to the output generated for rendering the frame. A frame refers to an image produced as output on the screen by the graphics engine.

This article covers OpenGL ES 3.0 development using C/C++; you can refer to the book OpenGL ES 3.0 Cookbook for more information on building OpenGL ES 3.0 applications on Android and iOS platforms. We will begin by understanding the basic programming of OpenGL ES 3.0 with the help of a simple example that renders a triangle on the screen. You will learn how to set up and create your first application on both platforms, step by step.

Understanding EGL

The OpenGL ES APIs require EGL as a prerequisite before they can effectively be used on hardware devices. EGL provides an interface between the OpenGL ES APIs and the underlying native windowing system. Different OS vendors have their own ways to manage the creation of drawing surfaces, communication with hardware devices, and other configurations to manage the rendering context. EGL provides an abstraction over how the underlying system implements these tasks, in a platform-independent way.

EGL provides two important things to the OpenGL ES APIs:

Rendering context: This stores the data structures and important OpenGL ES states that are essential for rendering.
Drawing surface: This provides the surface on which primitives are rendered.

The following screenshot shows the OpenGL ES 3.0 programmable pipeline architecture.

EGL has the following responsibilities:

Checking the available configurations to create a rendering context for the device's windowing system
Creating the OpenGL rendering surface for drawing
Providing compatibility and interfacing with other graphics APIs, such as OpenVG, OpenAL, and so on
Managing resources such as texture mapping

Programming shaders in OpenGL ES Shading Language 3.0

OpenGL ES Shading Language 3.0 (also called GLSL) is a C-like language that allows us to write shaders for the programmable processors in the OpenGL ES processing pipeline. Shaders are small programs that run on the GPU in parallel. OpenGL ES 3.0 supports two types of shaders: the vertex shader and the fragment shader. Each shader has specific responsibilities.
Programming shaders in OpenGL ES shading language 3.0

OpenGL ES shading language 3.0 (also known as GLSL) is a C-like language that allows us to write shaders for the programmable processors in the OpenGL ES processing pipeline. Shaders are small programs that run on the GPU in parallel. OpenGL ES 3.0 supports two types of shaders: the vertex shader and the fragment shader. Each shader has specific responsibilities. For example, the vertex shader processes geometric vertices, whereas the fragment shader processes the pixels or fragment color information. More specifically, the vertex shader processes vertex information by applying 2D/3D transformations. The output of the vertex shader goes to the rasterizer, where the fragments are produced. The fragments are processed by the fragment shader, which is responsible for coloring them. The order of execution of the shaders is fixed: the vertex shader is always executed first, followed by the fragment shader. Each shader can share its processed data with the next stage in the pipeline.

Getting ready

There are two types of processors in the OpenGL ES 3.0 processing pipeline that execute the vertex shader and fragment shader executables; they are called programmable processing units:

- Vertex processor: The vertex processor is a programmable unit that operates on the incoming vertices and related data. It runs the vertex shader executable. The vertex shader needs to be programmed, compiled, and linked first in order to generate an executable, which can then be run on the vertex processor.
- Fragment processor: The fragment processor uses the fragment shader executable to process fragment or pixel data. The fragment processor is responsible for calculating the colors of the fragments. Fragment shaders cannot change the position of the fragments, nor can they access neighboring fragments. However, they can discard pixels. The computed color values from this shader are used to update the framebuffer memory and texture memory.

How to do it...

Here are the sample codes for the vertex and fragment shaders:

1. Program the following vertex shader and store it in the vertexShader character array variable:

   #version 300 es
   in vec4 VertexPosition, VertexColor;
   uniform float RadianAngle;
   out vec4 TriangleColor;
   mat2 rotation = mat2(cos(RadianAngle),sin(RadianAngle),
                       -sin(RadianAngle),cos(RadianAngle));
   void main() {
       gl_Position = mat4(rotation)*VertexPosition;
       TriangleColor = VertexColor;
   }

2. Program the following fragment shader and store it in another character array variable called fragmentShader:

   #version 300 es
   precision mediump float;
   in vec4 TriangleColor;
   out vec4 FragColor;
   void main() {
       FragColor = TriangleColor;
   }

How it works...

Like most languages, a shader program starts its control flow from the main() function. In both shader programs, the first line, #version 300 es, specifies the GLES shading language version number, which is 3.0 in the present case. The vertex shader receives a per-vertex input variable, VertexPosition. The data type of this variable is vec4, which is one of the built-in data types provided by the OpenGL ES Shading Language. The in keyword at the beginning of the variable declaration specifies that it is an incoming variable that receives data from outside the scope of the current shader program. Similarly, the out keyword specifies that the variable is used to send data to the next stage of the shader pipeline. The color information is received in VertexColor and passed to TriangleColor, which sends this information to the fragment shader, the next stage of the processing pipeline. RadianAngle is a uniform variable that contains the rotation angle. This angle is used to calculate the rotation matrix that makes the rendered triangle revolve.
The input values received by VertexPosition are multiplied by the rotation matrix, which rotates the geometry of our triangle. The result is assigned to gl_Position. gl_Position is a built-in variable of the vertex shader that is meant to hold the vertex position in homogeneous form. This value can be used by any of the fixed-functionality stages, such as primitive assembly, rasterization, culling, and so on.

In the fragment shader, the precision keyword specifies the default precision of all floating-point types (and aggregates, such as mat4 and vec4) to be mediump. The acceptable values of such declared types need to fall within the range specified by the declared precision. The OpenGL ES Shading Language supports three precisions: lowp, mediump, and highp. Specifying the precision in the fragment shader is compulsory. However, in the vertex shader, if the precision is not specified, it defaults to the highest (highp). FragColor is an out variable that sends the calculated color value of each fragment to the next stage. It accepts the value in the RGBA color format.

There's more…

As mentioned, there are three types of precision qualifiers: lowp, mediump, and highp. Each qualifier guarantees a minimum range and precision for the types declared with it; the exact minimum values are listed in the OpenGL ES Shading Language specification.

Loading and compiling a shader program

The shader program created needs to be loaded and compiled into a binary form. This section describes the procedure for loading and compiling a shader program.

Getting ready

Compiling and linking a shader is necessary so that these programs are understandable and executable by the underlying graphics hardware/platform (that is, the vertex and fragment processors).

How to do it...

In order to load and compile the shader source, use the following steps:

1. Create NativeTemplate.h/NativeTemplate.cpp and define a function named loadAndCompileShader in it. Use the following code, and proceed to the next step for detailed information about this function:

   GLuint loadAndCompileShader(GLenum shaderType, const char* sourceCode) {
       GLuint shader = glCreateShader(shaderType); // Create the shader
       if ( shader ) {
           // Pass the shader source code
           glShaderSource(shader, 1, &sourceCode, NULL);
           glCompileShader(shader); // Compile the shader source code

           // Check the status of compilation
           GLint compiled = 0;
           glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
           if (!compiled) {
               GLint infoLen = 0;
               glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
               if (infoLen) {
                   char* buf = (char*) malloc(infoLen);
                   if (buf) {
                       glGetShaderInfoLog(shader, infoLen, NULL, buf);
                       printf("Could not compile shader: %s", buf);
                       free(buf);
                   }
                   glDeleteShader(shader); // Delete the shader object
                   shader = 0;
               }
           }
       }
       return shader;
   }

   This function is responsible for loading and compiling a shader source. The argument shaderType accepts the type of shader that needs to be loaded and compiled; it can be GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. The sourceCode argument specifies the source of the corresponding shader.

2. Create an empty shader object using the glCreateShader OpenGL ES 3.0 API. This API returns a non-zero value if the object is successfully created. This value is used as a handle to reference the object. On failure, this function returns 0. The shaderType argument specifies the type of the shader to be created.
   It must be either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER:

   GLuint shader = glCreateShader(shaderType);

   Unlike in C++, where object creation is transparent, in OpenGL ES the objects are created behind the curtains. You can access, use, and delete the objects as and when required. All the objects are identified by a unique identifier, which can be used for programming purposes.

3. The created empty shader object (shader) needs to be bound to the shader source in order to compile it. This binding is performed using the glShaderSource API:

   // Load the shader source code
   glShaderSource(shader, 1, &sourceCode, NULL);

   The API sets the shader code string in the shader object, shader. The source string is simply copied into the shader object; it is not parsed or scanned.

4. Compile the shader using the glCompileShader API. It accepts a shader object handle, shader:

   glCompileShader(shader); // Compile the shader

5. The compilation status of the shader is stored as a state of the shader object. This state can be retrieved using the glGetShaderiv OpenGL ES API:

   GLint compiled = 0; // Check compilation status
   glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);

   The glGetShaderiv API accepts the handle of the shader and GL_COMPILE_STATUS as arguments to check the status of the compilation. It retrieves the status in the compiled variable. compiled returns GL_TRUE if the last compilation was successful; otherwise, it returns GL_FALSE.

6. Use glGetShaderInfoLog to get the error report. The shader is deleted if the shader source cannot be compiled. Delete the shader object using the glDeleteShader API.

7. Return the shader object ID if the shader is compiled successfully:

   return shader; // Return the shader object ID

How it works...

The loadAndCompileShader function first creates an empty shader object. This empty object is referenced by the shader variable. This object is bound to the source code of the corresponding shader. The source code is compiled through the shader object using the glCompileShader API. If the compilation is successful, the shader object handle is returned. Otherwise, the function returns 0 and the shader object needs to be deleted explicitly using glDeleteShader. The status of the compilation can be checked using glGetShaderiv with GL_COMPILE_STATUS.

There's more...

In order to differentiate among various versions of OpenGL ES and the GL shading language, it is useful to get this information from the current driver of your device. This will help to make the program robust and manageable by avoiding errors caused by version upgrades or by the application being installed on older versions of OpenGL ES and GLSL. Other vital information can also be queried from the current driver, such as the vendor, the renderer, and the available extensions supported by the device driver. This information can be queried using the glGetString API. This API accepts a symbolic constant and returns the queried system metric as a string. The printGLString wrapper function in our program helps in printing device metrics:

   static void printGLString(const char *name, GLenum s) {
       printf("GL %s = %s\n", name, (const char *) glGetString(s));
   }
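As a usage illustration (a hedged sketch; the symbolic constants are standard OpenGL ES 3.0 enums), the wrapper can be called at startup to dump the driver metrics:

   printGLString("Version",    GL_VERSION);
   printGLString("Vendor",     GL_VENDOR);
   printGLString("Renderer",   GL_RENDERER);
   printGLString("GLSL",       GL_SHADING_LANGUAGE_VERSION);
   printGLString("Extensions", GL_EXTENSIONS);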
Linking a shader program

Linking is the process of aggregating a set of shaders (vertex and fragment) into one program that maps to the entirety of the programmable phases of the OpenGL ES 3.0 graphics pipeline. The shaders are compiled using shader objects. These objects are used to create special objects called program objects, which link them into the OpenGL ES 3.0 pipeline.

How to do it...

The following instructions provide a step-by-step procedure to link a shader:

1. Create a new function, linkShader, in NativeTemplate.cpp. This will be the wrapper function that links a shader program to the OpenGL ES 3.0 pipeline. Follow these steps to understand this program in detail:

   GLuint linkShader(GLuint vertShaderID, GLuint fragShaderID) {
       if (!vertShaderID || !fragShaderID) { // Fails! return
           return 0;
       }
       // Create an empty program object
       GLuint program = glCreateProgram();
       if (program) {
           // Attach the vertex and fragment shaders to it
           glAttachShader(program, vertShaderID);
           glAttachShader(program, fragShaderID);

           // Link the program
           glLinkProgram(program);
           GLint linkStatus = GL_FALSE;
           glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);

           if (linkStatus != GL_TRUE) {
               GLint bufLength = 0;
               glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
               if (bufLength) {
                   char* buf = (char*) malloc(bufLength);
                   if (buf) {
                       glGetProgramInfoLog(program, bufLength, NULL, buf);
                       printf("Could not link program:\n%s\n", buf);
                       free(buf);
                   }
               }
               glDeleteProgram(program);
               program = 0;
           }
       }
       return program;
   }

2. Create a program object with glCreateProgram. This API creates an empty program object, to which the shader objects will be attached and linked:

   GLuint program = glCreateProgram(); // Create shader program

3. Attach the shader objects to the program object using the glAttachShader API. It is necessary to attach the shaders to the program object in order to create the program executable:

   glAttachShader(program, vertShaderID);
   glAttachShader(program, fragShaderID);

How it works...

The linkShader wrapper function links the shaders. It accepts two parameters, vertShaderID and fragShaderID, which are the identifiers of the compiled shader objects. glCreateProgram creates a program object, another OpenGL ES object to which shader objects are attached using glAttachShader. The shader objects can be detached from the program object if they are no longer needed. The program object is responsible for creating the executable program that runs on the programmable processors. A program in OpenGL ES is an executable in the OpenGL ES 3.0 pipeline that runs on the vertex and fragment processors.

The program object is linked using glLinkProgram. If the linking fails, the program object must be deleted using glDeleteProgram. When a program object is deleted, the attached shader objects are automatically detached from it; the shader objects themselves still need to be deleted explicitly. If a program object is flagged for deletion while it is being used by some rendering context, it is not deleted until it is no longer in use. If the program object links successfully, one or more executables are created, depending on the shaders attached to the program. The executable can be used at runtime with the help of the glUseProgram API, which makes it a part of the current OpenGL ES state.
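Putting the two recipes together, a typical call sequence looks like the following sketch (vertexShader and fragmentShader are the source strings from the first recipe):

   GLuint vsID = loadAndCompileShader(GL_VERTEX_SHADER, vertexShader);
   GLuint fsID = loadAndCompileShader(GL_FRAGMENT_SHADER, fragmentShader);
   GLuint programID = linkShader(vsID, fsID);
   if (programID) {
       glUseProgram(programID); // make the executable part of the current state
   }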
Checking errors in OpenGL ES 3.0

While programming, it is very common to get unexpected results or errors in the programmed source code. It's important to make sure that the program does not generate any errors, and to handle any error gracefully. OpenGL ES 3.0 allows us to check for errors using a simple routine called glGetError. The following wrapper function prints all the error messages that occurred in the program:

   static void checkGlError(const char* op) {
       for (GLint error = glGetError(); error; error = glGetError()) {
           printf("after %s() glError (0x%x)\n", op, error);
       }
   }

Here are a few examples of code that produce OpenGL ES errors:

   glEnable(GL_TRIANGLES); // Gives a GL_INVALID_ENUM error

   // Gives a GL_INVALID_VALUE when attribID >= GL_MAX_VERTEX_ATTRIBS
   glEnableVertexAttribArray(attribID);

How it works...

When OpenGL ES detects an error, it records it in an error flag. Each error has a unique numeric code and symbolic name. OpenGL ES does not track each occurrence of an error: checking for errors on every call would degrade the rendering performance, so errors are only reported when the glGetError routine is called. If no error has been detected, this routine returns GL_NO_ERROR. In a distributed environment, there may be several error flags; therefore, it is advisable to call the glGetError routine in a loop, as multiple error flags may have been recorded.
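As a usage sketch (names such as programID come from the recipes that follow), the wrapper is typically called right after the operation you want to verify:

   glUseProgram(programID);
   checkGlError("glUseProgram"); // prints nothing if no error was recorded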
Using the per-vertex attribute to send data to a shader

A per-vertex attribute in shader programming helps the vertex shader receive data from the OpenGL ES program for each unique vertex. The received data values are not shared among the vertices. Vertex coordinates, normal coordinates, texture coordinates, color information, and so on are examples of per-vertex attributes. Per-vertex attributes are meant for vertex shaders only; they cannot be made directly available to the fragment shader. Instead, they are shared via the vertex shader through out variables.

Typically, the shaders are executed on the GPU, which allows the parallel processing of several vertices at the same time using multicore processors. In order to process the vertex information in the vertex shader, we need some mechanism that sends the data residing on the client side (CPU) to the shader on the server side (GPU). This section will be helpful in understanding the use of per-vertex attributes to communicate with shaders.

Getting ready

The vertex shader contains two per-vertex attributes, named VertexPosition and VertexColor:

   // Incoming vertex info from program to vertex shader
   in vec4 VertexPosition;
   in vec4 VertexColor;

VertexPosition contains the 3D coordinates of the triangle that defines the shape of the object that we intend to draw on the screen. VertexColor contains the color information for each vertex of this geometry.

In the vertex shader, a non-negative attribute location ID uniquely identifies each vertex attribute. This attribute location is assigned at compile time if it is not specified in the vertex shader program. Basically, the logic of sending data to a shader is very simple. It's a two-step process:

- Query the attribute: Query the vertex attribute location ID from the shader.
- Attach data to the attribute: Attach this ID to the data. This creates a bridge between the data and the per-vertex attribute specified by the ID. The OpenGL ES processing pipeline takes care of sending the data.

How to do it...

Follow this procedure to send data to a shader using per-vertex attributes:

1. Declare two global variables in NativeTemplate.cpp to store the queried attribute location IDs of VertexPosition and VertexColor:

   GLuint positionAttribHandle;
   GLuint colorAttribHandle;

2. Query the vertex attribute locations using the glGetAttribLocation API:

   positionAttribHandle = glGetAttribLocation(programID, "VertexPosition");
   colorAttribHandle    = glGetAttribLocation(programID, "VertexColor");

   This API provides a convenient way to query an attribute location in a shader. The return value must be greater than or equal to 0 in order to ensure that an attribute with the given name exists.

3. Send the data to the shader using the glVertexAttribPointer OpenGL ES API:

   // Send data to shader using queried attrib location
   glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
   glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors);

   The data associated with the geometry is passed in the form of an array using the generic vertex attribute with the help of the glVertexAttribPointer API. It's important to enable the attribute location; this allows us to access the data on the shader side. By default, vertex attributes are disabled. Similarly, an attribute can be disabled using glDisableVertexAttribArray, which has the same syntax as glEnableVertexAttribArray.

4. Store the incoming per-vertex color attribute VertexColor in the outgoing variable TriangleColor in order to send it to the next stage (the fragment shader):

   in vec4 VertexColor;    // Incoming data from CPU
   out vec4 TriangleColor; // Outgoing to next stage
   void main() {
       . . .
       TriangleColor = VertexColor;
   }

5. Receive the color information from the vertex shader and set the fragment color:

   in vec4 TriangleColor; // Incoming from vertex shader
   out vec4 FragColor;    // The fragment color
   void main() {
       FragColor = TriangleColor;
   }

How it works...

The per-vertex attribute variables VertexPosition and VertexColor defined in the vertex shader are the lifelines of the vertex shader. These lifelines constantly provide data from the client side (the OpenGL ES program on the CPU) to the server side (the GPU). Each per-vertex attribute has a unique attribute location available in the shader that can be queried using glGetAttribLocation. The queried attribute locations stored in positionAttribHandle and colorAttribHandle must be bound to the data using glVertexAttribPointer. This API establishes a logical connection between the client and server side. Now, the data is ready to flow from our data structures to the shader. The last important thing is enabling the attribute on the shader side for optimization purposes. By default, all attributes are disabled; even if the data is supplied from the client side, it is not visible on the server side. The glEnableVertexAttribArray API allows us to enable the per-vertex attributes on the shader side.
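Since glGetAttribLocation returns -1 when the name cannot be found (for example, if the attribute was optimized away), a defensive sketch of the query step could look like this (illustrative only, not the recipe's code):

   GLint loc = glGetAttribLocation(programID, "VertexPosition");
   if (loc < 0) {
       printf("VertexPosition is not an active attribute\n");
   } else {
       positionAttribHandle = (GLuint) loc; // safe to use from here on
   }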
Using uniform variables to send data to a shader

Uniform variables contain data values that are global. They are shared by all the vertices and fragments in the vertex and fragment shaders. Generally, information that is not per-vertex is passed in the form of uniform variables. A uniform variable can exist in both the vertex and fragment shaders.

Getting ready

The vertex shader we programmed in the "Programming shaders in OpenGL ES shading language 3.0" section contains a uniform variable, RadianAngle. This variable is used to rotate the rendered triangle:

   // Uniform variable for rotating triangle
   uniform float RadianAngle;

This variable will be updated on the client side (CPU) and sent to the shader on the server side (GPU) using special OpenGL ES 3.0 APIs. As with per-vertex attributes, for uniform variables we need to query and bind the data in order to make it available in the shader.

How to do it...

Follow these steps to send data to a shader using uniform variables:

1. Declare a global variable in NativeTemplate.cpp to store the queried uniform location ID of RadianAngle:

   GLuint radianAngle;

2. Query the uniform variable location using the glGetUniformLocation API:

   radianAngle = glGetUniformLocation(programID, "RadianAngle");

3. Send the updated radian value to the shader using the glUniform1f API:

   float degree = 0; // Global degree variable
   float radian;     // Global radian variable
   // Update angle and convert it into radians
   radian = degree++/57.2957795;
   // Send updated data to the vertex shader uniform
   glUniform1f(radianAngle, radian);

4. Use the general form of a 2D rotation and apply it to the entire set of incoming vertex coordinates:

   . . . .
   uniform float RadianAngle;
   mat2 rotation = mat2(cos(RadianAngle),sin(RadianAngle),
                       -sin(RadianAngle),cos(RadianAngle));
   void main() {
       gl_Position = mat4(rotation)*VertexPosition;
       . . . . .
   }

How it works...

The uniform variable RadianAngle defined in the vertex shader is used to apply a rotation transformation to the incoming per-vertex attribute VertexPosition. On the client side, this uniform variable is queried using glGetUniformLocation. This API returns the index of the uniform variable and stores it in radianAngle. This index is used to bind the updated data stored in radian with the glUniform1f OpenGL ES 3.0 API. Finally, the updated data reaches the vertex shader executable, where the general form of the Euler rotation is calculated:

   mat2 rotation = mat2(cos(RadianAngle),sin(RadianAngle),
                       -sin(RadianAngle),cos(RadianAngle));

The rotation transformation is calculated in the form of a 2 x 2 rotation matrix, which is later promoted to a 4 x 4 matrix when multiplied by VertexPosition. The resultant vertices cause the triangle to rotate in 2D space.
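One detail worth making explicit (it matches the call order used in GraphicsRender later): glUniform* calls affect the currently active program, so the program must be bound first. A minimal sketch:

   glUseProgram(programID);          // uniforms are set on the active program
   radian = degree++ / 57.2957795f;
   glUniform1f(radianAngle, radian); // update RadianAngle for this frame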
Programming the OpenGL ES 3.0 Hello World Triangle

The NativeTemplate.h/cpp files contain the OpenGL ES 3.0 code, which demonstrates a rotating colored triangle. The output of these files is not an executable on its own. It needs a host application that provides the necessary OpenGL ES 3.0 prerequisites to render this program on a device screen; see the following recipes in the book:

- Developing an Android OpenGL ES 3.0 application
- Developing an iOS OpenGL ES 3.0 application

These provide all the prerequisites required to set up OpenGL ES, render, and query the necessary attributes from the shaders in order to render our OpenGL ES 3.0 "Hello World Triangle" program. In this program, we will render a simple colored triangle on the screen.

Getting ready

OpenGL ES requires a physical size (in pixels) to define a 2D rendering surface called a viewport. This is used to define the OpenGL ES framebuffer size. A buffer in OpenGL ES is a 2D array in memory that represents the pixels in the viewport region. OpenGL ES has three types of buffers: the color buffer, depth buffer, and stencil buffer. These buffers are collectively known as the framebuffer. All drawing commands affect the information in the framebuffer.

The life cycle of this program is broadly divided into three states:

- Initialization: Shaders are compiled and linked to create program objects
- Resizing: This state defines the viewport size of the rendering surface
- Rendering: This state uses the shader program object to render the geometry on screen

How to do it...

Follow these steps to program this:

1. In the NativeTemplate.cpp file, create a createProgramExec function. This is a high-level function that loads, compiles, and links a shader program. This function returns the program object ID after successful execution:

   GLuint createProgramExec(const char* VS, const char* FS) {
       GLuint vsID = loadAndCompileShader(GL_VERTEX_SHADER, VS);
       GLuint fsID = loadAndCompileShader(GL_FRAGMENT_SHADER, FS);
       return linkShader(vsID, fsID);
   }

   Visit the "Loading and compiling a shader program" and "Linking a shader program" sections for more information on the workings of loadAndCompileShader and linkShader.

2. In NativeTemplate.cpp, create a function GraphicsInit and create the shader program object by calling createProgramExec:

   GLuint programID; // Global shader program handler
   bool GraphicsInit() {
       printOpenGLESInfo(); // Print GLES3.0 system metrics
       // Create program object and cache the ID
       programID = createProgramExec(vertexShader, fragmentShader);
       if (!programID) { // Failure !!!
           printf("Could not create program.");
           return false;
       }
       checkGlError("GraphicsInit"); // Check for errors
       return true;
   }

3. Create a new function, GraphicsResize. This will set the viewport region:

   bool GraphicsResize(int width, int height) {
       glViewport(0, 0, width, height);
       return true;
   }

   The viewport determines the portion of the OpenGL ES surface window on which the rendering of the primitives is performed. The viewport in OpenGL ES is set using the glViewport API.

4. Create the gTriangleVertices global variable that contains the vertices of the triangle:

   GLfloat gTriangleVertices[] = {
        0.0f,  0.5f,  // apex
       -0.5f, -0.5f,  // bottom-left
        0.5f, -0.5f   // bottom-right
   };

5. Create the GraphicsRender renderer function. This function is responsible for rendering the scene. Add the following code in it and perform the following steps to understand this function:

   bool GraphicsRender() {
       // Which buffer to clear? – the color buffer
       glClear(GL_COLOR_BUFFER_BIT);
       // Clear color with black color
       glClearColor(0.0f, 0.0f, 0.0f, 1.0f);

       glUseProgram(programID); // Use the shader program
       radian = degree++/57.2957795; // Update angle in radians
       // Query and send the uniform variable
       radianAngle = glGetUniformLocation(programID, "RadianAngle");
       glUniform1f(radianAngle, radian);
       // Query 'VertexPosition' and 'VertexColor' from the vertex shader
       positionAttribHandle = glGetAttribLocation(programID, "VertexPosition");
       colorAttribHandle = glGetAttribLocation(programID, "VertexColor");
       // Send data to the shader using the queried attributes
       glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
       glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors);
       // Enable the vertex position and color attributes
       glEnableVertexAttribArray(positionAttribHandle);
       glEnableVertexAttribArray(colorAttribHandle);
       glDrawArrays(GL_TRIANGLES, 0, 3); // Draw 3 triangle vertices from 0th index
       return true;
   }

6. Choose which buffers from the framebuffer (color, depth, and stencil) we want to clear each time a frame is rendered using the glClear API. In this case, we want to clear the color buffer. The glClear API can be used to select the buffers that need to be cleared; it accepts a bitwise OR argument mask that can be used to set any combination of buffers.
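   For instance, a sketch of clearing two buffers in one call (this assumes the surface was actually created with a depth buffer):

   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);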
7. Query the VertexPosition generic vertex attribute location ID from the vertex shader into positionAttribHandle using glGetAttribLocation. This location will be used to send the triangle vertex data stored in gTriangleVertices to the shader using glVertexAttribPointer. Follow the same instructions in order to get the handle of VertexColor into colorAttribHandle:

   positionAttribHandle = glGetAttribLocation(programID, "VertexPosition");
   colorAttribHandle = glGetAttribLocation(programID, "VertexColor");
   glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
   glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors);

8. Enable the generic vertex attribute location using positionAttribHandle before the rendering call and render the triangle geometry. Similarly, for the per-vertex color information, use colorAttribHandle:

   glEnableVertexAttribArray(positionAttribHandle);
   glEnableVertexAttribArray(colorAttribHandle);
   glDrawArrays(GL_TRIANGLES, 0, 3);

How it works...

When the application starts, the control begins with GraphicsInit, where the system metrics are printed out to make sure that the device supports OpenGL ES 3.0. The OpenGL ES programmable pipeline requires the vertex shader and fragment shader program executables in the rendering pipeline. The program object contains one or more executables after the compiled shader objects are attached to it and linked. In the createProgramExec function, the vertex and fragment shaders are compiled and linked in order to generate the program object.

The GraphicsResize function generates the viewport of the given dimensions. This is used internally by OpenGL ES 3.0 to maintain the framebuffer. In our current application, it is used to manage the color buffer.

Finally, the rendering of the scene is performed by GraphicsRender. This function clears the color buffer with a black background and renders the triangle on the screen. It uses the shader program object and sets it as the current rendering state using the glUseProgram API. Each time a frame is rendered, data is sent from the client side (CPU) to the shader executable on the server side (GPU) using glVertexAttribPointer. This function uses the queried generic vertex attribute locations to bind the data to the OpenGL ES pipeline.

There's more...

There are other buffers also available in OpenGL ES 3.0:

- Depth buffer: This is used to prevent background pixels from being rendered if a closer pixel is available. The rules for discarding pixels can be controlled using special depth rules provided by OpenGL ES 3.0.
- Stencil buffer: The stencil buffer stores per-pixel information and is used to limit the area of rendering.

The OpenGL ES API allows us to control each buffer separately. These buffers can be enabled and disabled as per the requirements of the rendering. OpenGL ES can use any of these buffers (including the color buffer) directly to act differently. These buffers can be set to preset values using OpenGL ES APIs such as glClearColor, glClearDepthf, and glClearStencil.

Summary

This article covered different aspects of OpenGL ES 3.0.

Resources for Article:

Further resources on this subject:

- OpenGL 4.0: Using Uniform Blocks and Uniform Buffer Objects [article]
- OpenGL 4.0: Building a C++ Shader Program Class [article]
- Introduction to Modern OpenGL [article]
Adding a Graphical User Interface

Packt
03 Jun 2015
12 min read
In this article by Dr. Edward Lavieri, the author of Getting Started with Unity 5, you will learn how to use Unity 5's new User Interface (UI) system. (For more resources related to this topic, see here.)

An overview of graphical user interfaces

A graphical user interface, or GUI (pronounced gooey), is a collection of visual components, such as text, buttons, and images, that facilitates a user's interaction with software. GUIs are also used to provide feedback to players. In the case of our game, the GUI allows players to interact with it. Without a GUI, the user would have no visual indication of how to use the game. Imagine software without any on-screen indicators of how to use it. The following image shows how early user interfaces were anything but intuitive:

We use GUIs all the time and might not pay too close attention to them, unless they are poorly designed. If you've ever tried to figure out how to use an app on your smartphone or could not figure out how to perform a specific action with desktop software, you've most likely encountered a poorly designed GUI.

Functions of a GUI

Our goal is to create a GUI for our game that both informs the user and allows for interaction between the game and the user. To that end, GUIs have two primary purposes: feedback and control. Feedback is generated by the game for the user, and control is given to the user and managed by user input. Let's look at each of these more closely.

Feedback

Feedback can come in many forms. The most common forms of game feedback are visual and audio. Visual feedback can be something as simple as text on the game screen. An example would be a game player's current score, ever-present on the game screen. Games that include dialog systems, where the player interacts with non-player characters (NPCs), usually show text feedback on the screen that reports the NPCs' responses. Visual feedback can also be non-textual, such as smoke, fire, explosions, or other graphic effects.

Audio feedback can be as simple as a click sound when the user clicks or taps on a button, or as complex as a radar ping when an enemy submarine is detected on a long-distance sonar scan. You can probably think of all the audio feedback your favorite game provides. When you run your cart over a coin, a sound effect is played, so there is no question that you earned the coin. If you take a moment to consider all of the audio feedback you are exposed to in games, you'll begin to appreciate its significance.

Less typical feedback includes device vibration, which is sometimes used with smartphone applications and console games. Some attractions have taken feedback to another level through seat movement and vibration, dispensing liquid and vapor, and introducing chemicals that provide olfactory input.

Control

Giving players control of the game is the second function of GUIs. There is a wide gamut of types of control. The simplest is using buttons or menus in a game. A game might have a graphical icon of a backpack that, when clicked, gives the user access to the game's inventory management system.

Control seems like an easy concept, and it is. Interestingly, most popular console games lack good GUI interfaces, especially when it comes to control. If you play console games, think about how many times you have to refer to the printed or in-game manual. Do you intuitively know all of the controller key mappings? How do you jump, switch weapons, crouch, throw a grenade, or go into stealth mode?
In defense of the game studios that publish these games, there is a lot to control, and it can be difficult to make it all intuitive. By extension, control is often physical in addition to graphical. Physical components of control include keyboards, mice, trackballs, console controllers, microphones, and other devices.

Feedback and control

Feedback and control GUI elements are often paired. When you click or tap a button, it usually triggers both visual and audio effects as well as executing the user's action. When you click (control) on a treasure chest, it opens (visual feedback) and you hear the creak of the old wooden hinges (audio feedback). This example shows the power of adding feedback to control actions.

Game layers

At a primitive level, there are three layers to every game. The core or base level is the Game Layer. The top layer is the User Layer; this is the actual person playing your game. So it is the layer in between, the GUI Layer, that serves as an intermediary between a game and its player. It becomes clear that designing and developing intuitive and well-functioning GUIs is important to a game's functionality, the user experience, and a game's success.

Unity 5's UI system

Unity's UI system has recently been re-engineered and is now more powerful than ever. Perhaps the most important concept to grasp is the Canvas object. All UI elements are contained in a canvas. Projects and scenes can have more than one canvas. You can think of a canvas as a container for UI elements.

Canvas

To create a canvas, you simply navigate and select GameObject | UI | Canvas from the drop-down menu. You can see from the GameObject | UI menu pop-up that there are 11 different UI elements. Alternatively, you can create your first UI element, such as a button, and Unity will automatically create a canvas for you and add it to your Hierarchy view. When you create subsequent UI elements, simply highlight the canvas in the Hierarchy view and then navigate to the GameObject | UI menu to select a new UI element.

Here is a brief description of each of the UI elements:

- Panel: A frame object
- Button: A standard button that can be clicked
- Text: Text with standard text formatting
- Image: Images can be simple, sliced, tiled, and filled
- Raw Image: A texture file
- Slider: A slider with min and max values
- Scrollbar: A scrollbar with values between 0 and 1
- Toggle: A standard checkbox; can also be grouped
- Input Field: A text input field
- Canvas: The game object container for UI elements
- Event System: Allows us to trigger scripts from UI elements. An Event System is automatically created when you create a canvas.

You can have multiple canvases in your game. As you start building larger games, you'll likely find a use for more than one canvas.

Render mode

There are a few settings in the Inspector view that you should be aware of regarding your canvas game object. The first setting is the render mode. There are three options: Screen Space – Overlay, Screen Space – Camera, and World Space:

- In the first render mode, Screen Space – Overlay, the canvas is automatically resized when the user changes the size or resolution of the game screen.
- The second render mode, Screen Space – Camera, has a plane distance property that determines how far the canvas is rendered from the camera.
- The third render mode is World Space. This mode gives you the most control and can be manipulated much like any other game object.

I recommend experimenting with different render modes so you know which one you like best and when to use each one.
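As a hedged illustration (not from the article itself, but using Unity's public Canvas API), the render mode can also be set from a script; the class name here is an assumption:

   using UnityEngine;

   public class CanvasSetup : MonoBehaviour
   {
       void Start()
       {
           // Switch the canvas this script is attached to into World Space
           Canvas canvas = GetComponent<Canvas>();
           canvas.renderMode = RenderMode.WorldSpace;
       }
   }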
Creating a GUI

Creating a GUI in Unity is a relatively easy task. We first create a canvas, or have Unity create one for us when we create our first UI element. Next, we simply add the desired UI elements to our canvas. Once all the necessary elements are in our canvas, you can arrange and format them. It is often best to switch to 2D mode in the Scene view when placing the UI elements on the canvas. This simply makes the task a bit easier.

If you have used earlier versions of Unity, you'll note that several things have changed regarding creating and referencing GUI elements. For example, you'll need to include the using UnityEngine.UI; statement before referencing UI components. Also, instead of referencing GUI text as public GUIText waterHeld;, you now use public Text waterHeld;.

Heads-up displays

A game's heads-up display (HUD) is graphical and textual information available to the user at all times. No action should be required of the user other than to look at a specific region of the screen to read the displays. For example, if you are playing a car-racing game, you might have an odometer, speedometer, compass, fuel tank level, air pressure, and other visual indicators always on the screen.

Creating a HUD

Here are the basic steps to create a HUD (a short scripting sketch for updating a HUD element follows these steps):

1. Open the game project and load the scene.
2. Navigate and select the GameObject | UI | Text option from the drop-down menu. This will result in a Canvas game object being added to the Hierarchy view, along with a text child item.
3. Select the child item in the Hierarchy view. Then, in the Inspector view, change the text to what you want displayed on the screen.
4. In the Inspector view, you can change the font size. If necessary, you can change the Horizontal Overflow option from Wrap to Overflow.
5. Zoom out in the Scene view until you can see the GUI canvas. Use the transform tools to place your new GUI element in the top-left corner of the screen. Depending on how you are viewing the scene in the Scene view, you might need to use the hand tool to rotate the scene. So, if your GUI text appears backwards, just rotate the scene until it is correct.
6. Repeat steps 2 through 5 until you've created all the HUD elements you need for your game.
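To connect a HUD element to game logic, a minimal hedged C# sketch might look like the following; the HUDController class, scoreText field, and AddPoints method are illustrative assumptions, not code from the book:

   using UnityEngine;
   using UnityEngine.UI; // required for the Text component

   public class HUDController : MonoBehaviour
   {
       public Text scoreText;  // drag the HUD Text element here in the Inspector
       private int score = 0;  // hypothetical score value

       public void AddPoints(int points)
       {
           score += points;
           scoreText.text = "Score: " + score; // refresh the on-screen display
       }
   }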
Mini-maps

Miniature maps, or mini-maps, provide game players with a small visual aid that helps them maintain perspective and direction in a game. These mini-maps can be used for many different purposes, depending on the game. Some examples include the ability to view a mini-map that overlooks an enemy encampment; a zoomed-out view of the game map with friendly and enemy force indicators; and a mini-map that shows the overall tunnel map while the main game screen views the current section of tunnel.

Creating a mini-map

Here are the steps used to create a mini-map for our game:

1. Navigate and select GameObject | Camera from the top menu.
2. In the Hierarchy view, change the name from Camera to Mini-Map.
3. With the mini-map camera selected, go to the Inspector view and click on the Layer button, then Add Layer in the pop-up menu. In the next available User Layer, add the name Mini-Map.
4. Select the Mini-Map option in the Hierarchy view, and then select Layer | Mini-Map. Now the new mini-map camera is assigned to the Mini-Map layer.
5. Next, we'll ensure the main camera is not rendering the Mini-Map layer. Select the Main Camera option in the Hierarchy view. In the Inspector view, select Culling Mask, and then deselect Mini-Map from the pop-up menu.
6. Now we are ready to finish the configuration of our mini-map camera. Select the Mini-Map in the Hierarchy view. Using the transform tools in the Scene view, adjust the camera object so that it shows the area of the game environment you want visible via the mini-map.
7. In the Inspector view, under Camera, make the settings match the following values:

   Clear Flags: Depth only
   Culling Mask: Everything
   Projection: Orthographic
   Size: 25
   Clipping Planes: Near 0.3; Far 1000
   Viewport Rect: X 0.75; Y 0.75; W 1; H 1
   Depth: 1
   Rendering Path: Use Player Settings
   Target Texture: None
   Occlusion Culling: Selected
   HDR: Not Selected

8. With the Mini-Map camera still selected, right-click on each of the Flare Layer, GUI Layer, and Audio Listener components in the Inspector view and select Remove Component.
9. Save your scene and your project. You are ready to test your mini-map.

Mini-maps can be very powerful game components. There are a couple of things to keep in mind if you are going to use mini-maps in your games:

- Make sure the mini-map size does not obstruct too much of the game environment. There is nothing worse than getting shot by an enemy that you could not see because a mini-map was in the way.
- The mini-map should have a purpose; we do not include them in games because they are cool. They take up screen real estate and should only be used if needed, such as to help the player make informed decisions. In our game, the player is able to keep an eye on Colt's farm animals while he is out gathering water and corn.
- Items should be clearly visible on the mini-map. Many games use red dots for enemies, yellow for neutral forces, and blue for friendlies. This type of color-coding provides users with a lot of information at a very quick glance.
- Ideally, the user should have the flexibility to move the mini-map to a corner of their choosing and toggle it on and off. In our game, we placed the mini-map in the top-right corner of the game screen so that the HUD objects would not be in the way.

Summary

In this article, you learned about the UI system in Unity 5. You gained an appreciation for the importance of GUIs in the games we create.

Resources for Article:

Further resources on this subject:

- Bringing Your Game to Life with AI and Animations [article]
- Looking Back, Looking Forward [article]
- Introducing the Building Blocks for Unity Scripts [article]
Playing with Physics

Packt
03 Jun 2015
26 min read
In this article by Maxime Barbier, author of the book SFML Blueprints, we will add physics into this game and turn it into a new one. By doing this, we will learn:

- What a physics engine is
- How to install and use the Box2D library
- How to pair the physics engine with SFML for the display
- How to add physics to the game

In this article, we will learn the magic of physics. We will also do some mathematics, but relax, it's for conversion only. Now, let's go! (For more resources related to this topic, see here.)

A physics engine – késako?

We will speak about physics engines, but the first question is "what is a physics engine?", so let's explain it. A physics engine is a piece of software or a library that is able to simulate physics, for example, the Newton-Euler equations that describe the movement of a rigid body. A physics engine is also able to manage collisions, and some of them can deal with soft bodies and even fluids.

There are different kinds of physics engines, mainly categorized into real-time and non-real-time engines. The first kind is mostly used in video games or simulators, and the second in high-performance scientific simulation, in the conception of special effects in cinema, and in animation. As our goal is to use the engine in a video game, let's focus on real-time engines. Here again, there are two important types of engines: one for 2D and the other for 3D. Of course, you can use a 3D engine in a 2D world, but it's preferable to use a 2D engine for optimization purposes. There are plenty of engines, but not all of them are open source.

3D physics engines

For 3D games, I advise you to use the Bullet physics library. It was integrated into the Blender software, and was used in the creation of some commercial games and also in the making of films. It is a really good engine written in C/C++ that can deal with rigid and soft bodies, fluids, collisions, forces, and everything you will need.

2D physics engines

As previously said, in a 2D environment you can use a 3D physics engine; you just have to ignore the depth (the Z axis). However, it is more interesting to use an engine optimized for a 2D environment. There are several engines like this, and the most famous ones are Box2D and Chipmunk. Both of them are really good and neither is better than the other, but I had to make a choice, which was Box2D. I've made this choice not only because of its C++ API that allows you to use overloading, but also because of the big community involved in the project.

A physics engine compared to a game engine

Do not mistake a physics engine for a game engine. A physics engine only simulates a physical world, without anything else. There are no graphics, no logic, only physics simulation. On the contrary, a game engine, most of the time, includes a physics engine paired with a rendering technology (such as OpenGL or DirectX), some predefined logic depending on the purpose of the engine (RPG, FPS, and so on), and sometimes artificial intelligence. So as you can see, a game engine is more complete than a physics engine. The two best known engines are Unity and Unreal Engine, which are both very complete. Moreover, they are free for non-commercial usage. So why don't we directly use a game engine? This is a good question. Sometimes it's better to use something that is already made instead of reinventing it. However, do we really need all the functionalities of a game engine for this project? More importantly, what do we need it for?
Let's see what we need:

- A graphical output
- A physics engine that can manage collisions

Nothing else is required. So as you can see, using a game engine for this project would be like killing a fly with a bazooka. I hope that you have understood the aim of a physics engine, the differences between a game engine and a physics engine, and the reason for the choices made for the project.

Using Box2D

As previously said, Box2D is a physics engine. It has a lot of features, but the most important ones for the project are the following (taken from the Box2D documentation):

- Collision: This functionality is very interesting as it allows our tetriminos to interact with each other
  - Continuous collision detection
  - Rigid bodies (convex polygons and circles)
  - Multiple shapes per body
- Physics: This functionality will allow a piece to fall down, and more
  - Continuous physics with the time of impact solver
  - Joint limits, motors, and friction
  - Fairly accurate reaction forces/impulses

As you can see, Box2D provides all that we need in order to build our game. There are a lot of other features usable with this engine, but they don't interest us right now, so I will not describe them in detail. However, if you are interested, you can take a look at the official website for more details on the Box2D features (http://box2d.org/about/). It's important to note that Box2D uses meters, kilograms, seconds, and radians for angles as units, while SFML uses pixels, seconds, and degrees. So we will need to make some conversions. I will come back to this later.

Preparing Box2D

Now that Box2D is introduced, let's install it. You will find the list of available versions on the Google Code project page at https://code.google.com/p/box2d/downloads/list. Currently, the latest stable version is 2.3. Once you have downloaded the source code (from the compressed file or using SVN), you will need to build it.

Install

Once you have successfully built your Box2D library, you will need to configure your system or IDE to find the Box2D library and headers. The newly built library can be found in the /path/to/Box2D/build/Box2D/ directory and is named libBox2D.a. The headers, on the other hand, are located in the /path/to/Box2D/Box2D/ directory. If everything is okay, you will find a Box2D.h file in this folder. On Linux, the following command adds Box2D to your system without requiring any configuration:

   sudo make install
Pairing Box2D and SFML

Now that Box2D is installed and your system is configured to find it, let's build the physics "hello world": a falling square. As just mentioned, Box2D uses meters, kilograms, seconds, and radians, while SFML uses pixels, seconds, and degrees, so we will need to make some conversions. Converting radians to degrees or vice versa is not difficult, but pixels to meters… this is another story. In fact, there is no way to convert a pixel to a meter, unless the number of pixels per meter is fixed. This is the technique that we will use.

So let's start by creating some utility functions. We should be able to convert radians to degrees, degrees to radians, meters to pixels, and finally pixels to meters. We will also need to fix the pixels-per-meter value. As we don't need any class for these functions, we will define them in a converter namespace. This results in the following code snippet:

   namespace converter
   {
       constexpr double PIXELS_PER_METERS = 32.0;
       constexpr double PI = 3.14159265358979323846;

       template<typename T>
       constexpr T pixelsToMeters(const T& x){return x/PIXELS_PER_METERS;}

       template<typename T>
       constexpr T metersToPixels(const T& x){return x*PIXELS_PER_METERS;}

       template<typename T>
       constexpr T degToRad(const T& x){return PI*x/180.0;}

       template<typename T>
       constexpr T radToDeg(const T& x){return 180.0*x/PI;}
   }

As you can see, there is no difficulty here. We start by defining some constants and then the conversion functions. I've chosen to make the functions templates to allow the use of any numeric type. In practice, it will mostly be double or int. The conversion functions are also declared constexpr to allow the compiler to calculate the value at compile time when possible (for example, with a constant as a parameter). This is interesting because we will use this primitive a lot.
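As a quick usage sketch (the values here are illustrative, not from the article), converting between the two unit systems looks like this:

   // With PIXELS_PER_METERS fixed at 32, 64 pixels become 2 meters:
   double meters = converter::pixelsToMeters<double>(64.0);

   // And a Box2D angle in radians becomes SFML degrees:
   double degrees = converter::radToDeg<double>(converter::PI); // 180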
Box2D, how does it work?

Now that we can convert SFML units to Box2D units and vice versa, we can pair Box2D with SFML. But first, how exactly does Box2D work? Box2D works like most physics engines:

1. You start by creating an empty world with some gravity.
2. Then, you create some object patterns. Each pattern contains the shape of the object, its position, its type (static or dynamic), and some other characteristics such as its density, friction, and energy restitution.
3. You ask the world to create a new object defined by the pattern.
4. In each game loop, you have to update the physical world with a small step, such as our world in the games we've already made.

Because the physics engine does not display anything on the screen, we will need to loop over all the objects and display them ourselves.

Let's start by creating a simple scene with two kinds of objects: a ground and squares. The ground will be fixed and the squares will not. The squares will be generated by a user event: a mouse click. This project is very simple, but the goal is to show you how to use Box2D and SFML together with a simple case study. A more complex one will come later.

We will need three functionalities for this small project:

- Create a shape
- Display the world
- Update/fill the world

Of course, there is also the initialization of the world and window. Let's start with the main function:

1. As always, we create a window for the display and we limit the FPS number to 60. I will come back to this point with the displayWorld function.
2. We create the physical world from Box2D, with gravity as a parameter.
3. We create a container that will store all the physical objects, for memory cleaning purposes.
4. We create the ground by calling the createBox function (explained just after).
5. Now it is time for the minimalist game loop:
   - Close event management
   - Creating a box when the left button of the mouse is pressed
6. Finally, we clean the memory before exiting the program:

   int main(int argc,char* argv[])
   {
       sf::RenderWindow window(sf::VideoMode(800, 600, 32), "04_Basic");
       window.setFramerateLimit(60);
       b2Vec2 gravity(0.f, 9.8f);
       b2World world(gravity);
       std::list<b2Body*> bodies;
       bodies.emplace_back(book::createBox(world,400,590,800,20,b2_staticBody));

       while(window.isOpen()) {
           sf::Event event;
           while(window.pollEvent(event)) {
               if (event.type == sf::Event::Closed)
                   window.close();
           }
           if (sf::Mouse::isButtonPressed(sf::Mouse::Left)) {
               int x = sf::Mouse::getPosition(window).x;
               int y = sf::Mouse::getPosition(window).y;
               bodies.emplace_back(book::createBox(world,x,y,32,32));
           }
           displayWorld(world,window);
       }

       for(b2Body* body : bodies) {
           delete static_cast<sf::RectangleShape*>(body->GetUserData());
           world.DestroyBody(body);
       }
       return 0;
   }

For the moment, except for the Box2D world, nothing should surprise you, so let's continue with the box creation. This function lives in the book namespace:

   b2Body* createBox(b2World& world,int pos_x,int pos_y, int size_x,int size_y,b2BodyType type = b2_dynamicBody)
   {
       b2BodyDef bodyDef;
       bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                            converter::pixelsToMeters<double>(pos_y));
       bodyDef.type = type;

       b2PolygonShape b2shape;
       b2shape.SetAsBox(converter::pixelsToMeters<double>(size_x/2.0),
                        converter::pixelsToMeters<double>(size_y/2.0));

       b2FixtureDef fixtureDef;
       fixtureDef.density = 1.0;
       fixtureDef.friction = 0.4;
       fixtureDef.restitution = 0.5;
       fixtureDef.shape = &b2shape;

       b2Body* res = world.CreateBody(&bodyDef);
       res->CreateFixture(&fixtureDef);

       sf::Shape* shape = new sf::RectangleShape(sf::Vector2f(size_x,size_y));
       shape->setOrigin(size_x/2.0,size_y/2.0);
       shape->setPosition(sf::Vector2f(pos_x,pos_y));
       if(type == b2_dynamicBody)
           shape->setFillColor(sf::Color::Blue);
       else
           shape->setFillColor(sf::Color::White);

       res->SetUserData(shape);

       return res;
   }

This function contains a lot of new functionality. Its goal is to create a rectangle of a specific size at a predefined position. The type of this rectangle is also set by the user (dynamic or static). Here again, let's explain the function step by step:

1. We create a b2BodyDef. This object contains the definition of the body to create, so we set its position and type. The position will be relative to the gravity center of the object.
2. Then, we create a b2Shape. This is the physical shape of the object; in our case, a box. Note that the SetAsBox() method doesn't take the same parameters as sf::RectangleShape: the parameters are half the size of the box. This is why we need to divide the values by two.
3. We create a b2FixtureDef and initialize it. This object holds all the physical characteristics of the object: its density, friction, restitution, and shape.
4. Then, we properly create the object in the physical world.
5. Now, we create the display of the object. This will be more familiar because we only use SFML here. We create a rectangle and set its position, origin, and color.
As we need to associate an SFML shape with each physical object for display, we use a functionality of Box2D: the SetUserData() function. This function takes a void* as a parameter and holds it internally, so we use it to keep track of our SFML shape. Finally, the body is returned by the function. This pointer has to be stored to clean the memory later. This is the reason for the body container in main(). We now have the capability to simply create a box and add it to the world. Next, let's render it to the screen. This is the goal of the displayWorld function:

void displayWorld(b2World& world,sf::RenderWindow& render)
{
    world.Step(1.0/60,int32(8),int32(3));
    render.clear();
    for (b2Body* body=world.GetBodyList(); body!=nullptr; body=body->GetNext())
    {
        sf::Shape* shape = static_cast<sf::Shape*>(body->GetUserData());
        shape->setPosition(converter::metersToPixels(body->GetPosition().x),
                           converter::metersToPixels(body->GetPosition().y));
        shape->setRotation(converter::radToDeg<double>(body->GetAngle()));
        render.draw(*shape);
    }
    render.display();
}

This function takes the physical world and the window as parameters. Here again, let's explain this function step by step:

We update the physical world. If you remember, we have set the frame rate to 60. This is why we use 1.0/60 as the time step here. The two other parameters are for precision only. In good code, the time step should not be hardcoded like this; we should use a clock to be sure that the value is always correct. It has been hardcoded here only to keep the focus on the important part: the physics. A clock-driven sketch follows below.
We clear the screen, as usual.
Here is the new part: we loop over the bodies stored by the world and get back each SFML shape. We update the SFML shape with the information taken from the physical body and then draw it.
Finally, we display the result on the screen.

As you can see, it's not really difficult to pair SFML with Box2D, and adding it is not painful. However, we have to take care of the data conversions; this is the real trap. Pay attention to the precision required (int, float, double) and everything should be fine.
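As mentioned in the first step above, a hardcoded time step is fragile. Here is a minimal sketch of a clock-driven alternative using the common fixed-timestep accumulator pattern (this helper is not part of the book's code; the 8 and 3 iteration counts are the same as before):

void updateWorld(b2World& world, sf::Clock& clock, sf::Time& accumulator)
{
    const sf::Time step = sf::seconds(1.f/60.f); // fixed physics step
    accumulator += clock.restart();              // real time elapsed since the last call
    while(accumulator >= step)                   // catch up in fixed increments
    {
        world.Step(step.asSeconds(),8,3);
        accumulator -= step;
    }
}

Calling this once per frame in place of the world.Step() line keeps the simulation stable even when the real frame rate drifts away from 60 FPS.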
Now that you have all the keys in hand, let's build a real game with physics.

Adding physics to a game

Now that Box2D has been introduced with a basic project, let's focus on the real one. We will modify our basic Tetris to get Gravity-Tetris, alias Gravitris. The game controls will be the same as in Tetris, but the game engine will not be: we will replace the board with a real physics engine. With this project, we will reuse a lot of the work previously done. As already said, the goal of some of our classes is to be reusable in any game using SFML. Here, this will be done without any difficulty, as you will see. The classes concerned are those dealing with user events (Action, ActionMap, ActionTarget), but also Configuration and ResourceManager. There are still some changes that will occur in the Configuration class, more precisely in its enums and initialization methods, because we don't use the exact same sounds and events that were used in the Asteroid game. So we need to adjust them to our needs. Enough with explanations, let's do it with the following code:

class Configuration
{
    public:
        Configuration() = delete;
        Configuration(const Configuration&) = delete;
        Configuration& operator=(const Configuration&) = delete;

        enum Fonts : int {Gui};
        static ResourceManager<sf::Font,int> fonts;

        enum PlayerInputs : int {TurnLeft, TurnRight, MoveLeft, MoveRight, HardDrop};
        static ActionMap<int> playerInputs;

        enum Sounds : int {Spawn, Explosion, LevelUp};
        static ResourceManager<sf::SoundBuffer,int> sounds;

        enum Musics : int {Theme};
        static ResourceManager<sf::Music,int> musics;

        static void initialize();

    private:
        static void initTextures();
        static void initFonts();
        static void initSounds();
        static void initMusics();
        static void initPlayerInputs();
};

As you can see, the changes are in the enums, more precisely in Sounds and PlayerInputs. We change the values into ones better adapted to this project. We still have the font and the music theme. Now, take a look at the initialization methods that have changed:

void Configuration::initSounds()
{
    sounds.load(Sounds::Spawn,"media/sounds/spawn.flac");
    sounds.load(Sounds::Explosion,"media/sounds/explosion.flac");
    sounds.load(Sounds::LevelUp,"media/sounds/levelup.flac");
}

void Configuration::initPlayerInputs()
{
    playerInputs.map(PlayerInputs::TurnRight,Action(sf::Keyboard::Up));
    playerInputs.map(PlayerInputs::TurnLeft,Action(sf::Keyboard::Down));
    playerInputs.map(PlayerInputs::MoveLeft,Action(sf::Keyboard::Left));
    playerInputs.map(PlayerInputs::MoveRight,Action(sf::Keyboard::Right));
    playerInputs.map(PlayerInputs::HardDrop,Action(sf::Keyboard::Space, Action::Type::Released));
}

No real surprises here. We simply adjust the resources to the needs of the project. As you can see, the changes are minimal and easily made. This is the aim of all reusable modules or classes. Here is a piece of advice, however: keep your code as modular as possible. This will allow you to change a part very easily and also to import any generic part of a project into another one.

The Piece class

Now that we have the Configuration class done, the next step is the Piece class. This class will be the most modified one. Actually, as there is too much change involved, let's build it from scratch. A piece has to be considered as an ensemble of four squares that are independent of one another. This will allow us to split a piece at runtime. Each of these squares will be a different fixture attached to the same body, the piece. We will also need to apply forces to a piece, especially to the current piece, which is controlled by the player. These forces can move the piece horizontally or rotate it. Finally, we will need to draw the piece on the screen.
The result is the following code snippet:

constexpr int BOOK_BOX_SIZE = 32;
constexpr int BOOK_BOX_SIZE_2 = BOOK_BOX_SIZE / 2;

class Piece : public sf::Drawable
{
    public:
        Piece(const Piece&) = delete;
        Piece& operator=(const Piece&) = delete;

        enum TetriminoTypes {O=0,I,S,Z,L,J,T,SIZE};
        static const sf::Color TetriminoColors[TetriminoTypes::SIZE];

        Piece(b2World& world,int pos_x,int pos_y,TetriminoTypes type,float rotation);
        ~Piece();

        void update();
        void rotate(float angle);
        void moveX(int direction);
        b2Body* getBody()const;

    private:
        virtual void draw(sf::RenderTarget& target, sf::RenderStates states) const override;
        b2Fixture* createPart(int pos_x,int pos_y,TetriminoTypes type); ///< position is relative to the piece in the matrix coordinates (0 to 3)
        b2Body* _body;
        b2World& _world;
};

Some parts of the class don't change, such as the TetriminoTypes and TetriminoColors enums. This is normal because we don't change any piece's shape or colors, and the rest of the interface stays the same too. The implementation of the class, on the other hand, is very different from the previous version. Let's see it:

Piece::Piece(b2World& world,int pos_x,int pos_y,TetriminoTypes type,float rotation) : _world(world)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = b2_dynamicBody;
    bodyDef.angle = converter::degToRad(rotation);
    _body = world.CreateBody(&bodyDef);

    switch(type)
    {
        case TetriminoTypes::O :
        {
            createPart(0,0,type); createPart(0,1,type);
            createPart(1,0,type); createPart(1,1,type);
        }break;
        case TetriminoTypes::I :
        {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(3,0,type);
        }break;
        case TetriminoTypes::S :
        {
            createPart(0,1,type); createPart(1,1,type);
            createPart(1,0,type); createPart(2,0,type);
        }break;
        case TetriminoTypes::Z :
        {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,1,type);
        }break;
        case TetriminoTypes::L :
        {
            createPart(0,1,type); createPart(0,0,type);
            createPart(1,0,type); createPart(2,0,type);
        }break;
        case TetriminoTypes::J :
        {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(2,1,type);
        }break;
        case TetriminoTypes::T :
        {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,0,type);
        }break;
        default:break;
    }
    _body->SetUserData(this);
    update();
}

The constructor is the most important method of this class. It initializes the physical body and adds each square to it by calling createPart(). Then, we set the user data to the piece itself. This will allow us to navigate from the physics to SFML and vice versa.
Finally, we synchronize the physical object with the drawable one by calling the update() function.

Piece::~Piece()
{
    for(b2Fixture* fixture=_body->GetFixtureList(); fixture!=nullptr; fixture=fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        fixture->SetUserData(nullptr);
        delete shape;
    }
    _world.DestroyBody(_body);
}

The destructor loops over all the fixtures attached to the body, destroys all the SFML shapes, and then removes the body from the world.

b2Fixture* Piece::createPart(int pos_x,int pos_y,TetriminoTypes type)
{
    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
                     converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
                     b2Vec2(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2+(pos_x*BOOK_BOX_SIZE)),
                            converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2+(pos_y*BOOK_BOX_SIZE))),
                     0);

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.5;
    fixtureDef.restitution= 0.4;
    fixtureDef.shape = &b2shape;

    b2Fixture* fixture = _body->CreateFixture(&fixtureDef);

    sf::ConvexShape* shape = new sf::ConvexShape((unsigned int)b2shape.GetVertexCount());
    shape->setFillColor(TetriminoColors[type]);
    shape->setOutlineThickness(1.0f);
    shape->setOutlineColor(sf::Color(128,128,128));
    fixture->SetUserData(shape);

    return fixture;
}

This method adds a square to the body at a specific place. It starts by creating a physical shape as the desired box and then adds it to the body. It also creates the SFML square that will be used for the display and attaches it as user data to the fixture. We don't set the initial position here because the constructor will do it.

void Piece::update()
{
    const b2Transform& xf = _body->GetTransform();

    for(b2Fixture* fixture = _body->GetFixtureList(); fixture != nullptr; fixture=fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        const b2PolygonShape* b2shape = static_cast<b2PolygonShape*>(fixture->GetShape());
        const uint32 count = b2shape->GetVertexCount();
        for(uint32 i=0;i<count;++i)
        {
            b2Vec2 vertex = b2Mul(xf,b2shape->m_vertices[i]);
            shape->setPoint(i,sf::Vector2f(converter::metersToPixels(vertex.x),
                                           converter::metersToPixels(vertex.y)));
        }
    }
}

This method synchronizes the position and rotation of all the SFML shapes with the physical position and rotation calculated by Box2D. Because each piece is composed of several parts (fixtures), we need to iterate through them and update them one by one.

void Piece::rotate(float angle)
{
    _body->ApplyTorque((float32)converter::degToRad(angle),true);
}

void Piece::moveX(int direction)
{
    _body->ApplyForceToCenter(b2Vec2(converter::pixelsToMeters(direction),0),true);
}

These two methods apply some force to the object to move or rotate it. We forward the job to the Box2D library.

b2Body* Piece::getBody()const {return _body;}

void Piece::draw(sf::RenderTarget& target, sf::RenderStates states) const
{
    for(const b2Fixture* fixture=_body->GetFixtureList(); fixture!=nullptr; fixture=fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        if(shape)
            target.draw(*shape,states);
    }
}

This function draws the entire piece. However, because the piece is composed of several parts, we need to iterate over them and draw them one by one in order to display the entire piece. This is done by using the user data saved in the fixtures.
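To see how a Piece could be driven from the game loop, here is a rough usage sketch. It is not the book's final game class; the spawn position and the force values passed to moveX() and rotate() are illustrative:

sf::RenderWindow window(sf::VideoMode(800,600),"Gravitris sketch");
window.setFramerateLimit(60);
b2World world(b2Vec2(0.f, 9.8f));
Piece piece(world, 400, 0, Piece::TetriminoTypes::T, 0.f);

// inside the game loop, after event handling:
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Left))
    piece.moveX(-100);            // apply a leftward force each frame
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Up))
    piece.rotate(45.f);           // apply a torque, expressed in degrees

world.Step(1.f/60.f, 8, 3);       // update the physics
piece.update();                   // sync the SFML shapes to the body
window.clear();
window.draw(piece);               // Piece is an sf::Drawable
window.display();

Note that because moveX() and rotate() only apply forces, the physics world stays in charge of the actual motion, which is exactly what gives Gravitris its feel.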
Summary

A physics engine has its own particularities, such as its units and its place in the game loop, and we have learned how to deal with them. Finally, we learned how to pair Box2D with SFML, integrated our fresh knowledge into our existing Tetris project, and built a new, fun game.

Resources for Article: Further resources on this subject: Skinning a character [article] Audio Playback [article] Sprites in Action [article]
SceneKit

Packt
03 Jun 2015
25 min read
So, this is it! Finally, we move from the 2D world to 3D. With SceneKit, we can make 3D games quite easily, especially since the syntax for SceneKit is quite similar to SpriteKit. When we say 3D games, we don't mean that you get to put on your 3D glasses to make the game. In 2D games, we mostly work in the x and y coordinates. In 3D games, we deal with all three axes: x, y, and z. Additionally, in 3D games, we have different types of lights that we can use. SceneKit also has an inbuilt physics engine that will take care of forces such as gravity and will aid collision detection. We can also use SpriteKit within SceneKit for GUI and buttons so that we can add scores and interactivity to the game. So, there is a lot to cover in this article. Let's get started. The topics covered in this article by Siddharth Shekar, the author of Learning iOS 8 Game Development Using Swift, are as follows:

Creating a scene with SCNScene
Adding objects to a scene
Importing scenes from external 3D applications
Adding physics to the scene
Adding an enemy

(For more resources related to this topic, see here.)

Creating a scene with SCNScene

First, we create a new SceneKit project. It is very similar to creating other projects. Only this time, make sure you select SceneKit from the Game Technology drop-down list. Don't forget to select Swift for the language field. Choose iPad as the device and click on Next to create the project in the selected directory, as shown in the following screenshot: Once the project is created, open it. Click on the GameViewController class, delete all the contents of the viewDidLoad function, and delete the handleTap function, as we will be creating a separate class and adding the touch behavior there. Create a new class called GameSCNScene and import the following headers. Inherit from the SCNScene class and add an init function that takes in a parameter called view of type SCNView:

import Foundation
import UIKit
import SceneKit

class GameSCNScene: SCNScene{

    var scnView: SCNView!
    var _size: CGSize!
    var scene: SCNScene!

    required init(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    init(currentview view: SCNView) {
        super.init()
    }
}

Also, create two new properties, scnView and _size, of type SCNView and CGSize, respectively, along with a variable called scene of type SCNScene. They are declared as implicitly unwrapped optional variables so that they can be assigned after super.init has been called. Since we are making a SceneKit game, we have to get the current view, which is of type SCNView, similar to how we typecast the current view to SKView in SpriteKit. We use _size to store the current size of the view. We then create a variable called scene of type SCNScene; SCNScene is the class used to make scenes in SceneKit, similar to how we would use SKScene to create scenes in SpriteKit. Swift will automatically ask us to create the required init function, so we might as well include it in the class. Now, move to the GameViewController class and create a global variable called gameSCNScene of type GameSCNScene and assign it in the viewDidLoad function, as follows:

class GameViewController: UIViewController {

    var gameSCNScene:GameSCNScene!

    override func viewDidLoad() {
        super.viewDidLoad()
        let scnView = view as SCNView
        gameSCNScene = GameSCNScene(currentview: scnView)
    }
}// UIViewController Class

Great! Now we can add objects in the GameSCNScene class. It is better to move all the code to a single class so that we can keep the GameViewController class clean.
In the init function of GameSCNScene, add the following after the super.init call:

scnView = view
_size = scnView.bounds.size

// retrieve the SCNView
scene = SCNScene()
scnView.scene = scene
scnView.allowsCameraControl = true
scnView.showsStatistics = true
scnView.backgroundColor = UIColor.yellowColor()

Here, we first assign the current view to the scnView property. Next, we set _size to the dimensions of the current view. We then initialize the scene variable and assign it to the scene property of scnView. Next, we enable allowsCameraControl and showsStatistics. This will let us control the camera and move it around to have a better look at the scene. Also, with statistics enabled, we will see the performance of the game to make sure that the FPS is maintained. The backgroundColor property of scnView enables us to set the color of the view. I have set it to yellow so that objects are easily visible in the scene, as shown in the following screenshot. With all this set, we can run the scene. Well, it is not all that awesome yet. One thing to notice is that we have still not added a camera or a light, but we still see the yellow scene. This is because, although we have not added anything to the scene yet, SceneKit automatically provides a default light and camera for the scene it created.

Adding objects to a scene

Let us next add geometry to the scene. We can create some basic geometry such as spheres, boxes, cones, and tori in SceneKit with ease. Let us create a sphere first and add it to the scene.

Adding a sphere to the scene

Create a function called addGeometryNode in the class and add the following code in it:

func addGeometryNode(){

    let sphereGeometry = SCNSphere(radius: 1.0)
    sphereGeometry.firstMaterial?.diffuse.contents = UIColor.orangeColor()

    let sphereNode = SCNNode(geometry: sphereGeometry)
    sphereNode.position = SCNVector3Make(0.0, 0.0, 0.0)
    scene.rootNode.addChildNode(sphereNode)
}

For creating geometry, we use the SCNSphere class to create a sphere shape. We can also call SCNBox, SCNCone, SCNTorus, and so on to create box, cone, or torus shapes, respectively. While creating the sphere, we have to provide the radius as a parameter, which determines the size of the sphere. To place the shape, though, we have to attach it to a node so that we can position it and add it to the scene. So, we create a new constant called sphereNode of type SCNNode and pass in the sphere geometry as a parameter. For positioning the node, we use the SCNVector3Make function to place our object in 3D space by providing the values for x, y, and z. Finally, we add sphereNode to scene.rootNode, unlike SpriteKit, where we would simply call addChild on the scene itself. The other primitive shapes follow the same pattern, as the short sketch below shows.
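For example, a box could be added like this (an aside, not part of the tutorial's scene; the dimensions and position are illustrative, and chamferRadius rounds the box's edges):

let boxGeometry = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0.1)
boxGeometry.firstMaterial?.diffuse.contents = UIColor.greenColor()
let boxNode = SCNNode(geometry: boxGeometry)
boxNode.position = SCNVector3Make(2.0, 0.0, 0.0)   // next to the sphere
scene.rootNode.addChildNode(boxNode)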
With the sphere added, let us run the scene. Don't forget to add self.addGeometryNode() in the init function. We did add a sphere, so why are we getting a circle (shown in the following screenshot)? Well, the basic light source used by SceneKit just enables us to see objects in the scene. If we want to see the actual sphere, we have to improve the lighting of the scene.

Adding light sources

Let us create a new function called addLightSourceNode, as follows, so that we can add custom lights to our scene:

func addLightSourceNode(){

    let lightNode = SCNNode()
    lightNode.light = SCNLight()
    lightNode.light!.type = SCNLightTypeOmni
    lightNode.position = SCNVector3(x: 10, y: 10, z: 10)
    scene.rootNode.addChildNode(lightNode)

    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light!.type = SCNLightTypeAmbient
    ambientLightNode.light!.color = UIColor.darkGrayColor()
    scene.rootNode.addChildNode(ambientLightNode)
}

We add these light sources to give some depth to our sphere object. Here we add two types of light source. The first is an omni light. Omni lights start at a point and scatter equally in all directions. We also add an ambient light source. Ambient light is light that has been reflected by other objects, such as moonlight. There are two more types of light source: directional and spot. A spotlight is easy to understand; we usually use it when a certain object needs to be brought to attention, like a singer on a stage. Directional lights are used if you want light to travel in a single direction, such as sunlight: the Sun is so far from the Earth that its rays are almost parallel to each other when we see them. For creating a light source, we create a node called lightNode of type SCNNode and assign SCNLight to its light property. We set the light's type to the omni type and position the light source at 10 on all three x, y, and z coordinates. Then, we add it to the root node of the scene. Next, we add an ambient light to the scene. The first two steps are the same as for creating any light source; then we assign SCNLightTypeAmbient as the type to get an ambient light source. Since we don't want this light source to be very strong, as it represents reflected light, we assign darkGrayColor to its color. Finally, we add the light source to the scene. The ambient light source is not strictly necessary, but it will give the scene softer shading. You can remove the ambient light source to see the difference. Call the addLightSourceNode function in the init function. Now, build and run the scene to see an actual sphere with proper lighting, as shown in the following screenshot: You can place a finger on the screen and move it to rotate the camera, as we have enabled camera control. You can use two fingers to pan the camera, and you can double-tap to reset the camera to its original position and direction.
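The directional and spot types mentioned above are created the same way. Purely as an illustration (the angles, position, and orientation are made-up values, not from the book), a spotlight could look like this:

func addSpotLightNode(){

    let spotNode = SCNNode()
    spotNode.light = SCNLight()
    spotNode.light!.type = SCNLightTypeSpot
    spotNode.light!.spotInnerAngle = 30.0       // fully lit inner cone, in degrees
    spotNode.light!.spotOuterAngle = 60.0       // light fades to nothing at this angle
    spotNode.position = SCNVector3(x: 0, y: 10, z: 0)
    spotNode.eulerAngles = SCNVector3(x: -Float(M_PI)/2, y: 0, z: 0) // aim straight down
    scene.rootNode.addChildNode(spotNode)
}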
Adding a camera to the scene

Next, let us add a camera to the scene, as the default camera is very close. Create a new function called addCameraNode in the class and add the following code in it:

func addCameraNode(){

    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
    scene.rootNode.addChildNode(cameraNode)
}

Here, again, we create an empty node called cameraNode and assign SCNCamera to its camera property. Next, we position the camera, keeping the x and y values at zero and moving the camera back in the z direction by 15 units. Then we add the camera to the root node of the scene. Call addCameraNode at the bottom of the init function. In this scene, the origin is at the center of the scene, unlike SpriteKit, where the origin of a scene is always at the bottom left. Here, positive x and y are to the right and up from the center, and the positive z direction is toward you. We didn't move the sphere back or reduce its size here; this is purely because we moved the camera backward in the scene. Let us next create a floor so that we can have a better understanding of the depth in the scene. Also, in this way, we will learn how to create floors in the scene.

Adding a floor

In the class, add a new function called addFloorNode and add the following code:

func addFloorNode(){

    var floorNode = SCNNode()
    floorNode.geometry = SCNFloor()
    floorNode.position.y = -1.0
    scene.rootNode.addChildNode(floorNode)
}

For creating a floor, we create a variable called floorNode of type SCNNode. We then assign SCNFloor to the geometry property of floorNode. For the position, we set the y value to -1.0, as we want the sphere to appear above the floor. At the end, as usual, we add floorNode to the root node of the scene. In the following screenshot, I have rotated the camera to show the scene in full action. Here we can see that the floor is gray in color, the sphere is casting its reflection on the floor, and we can also see the bright omni light at the top left of the sphere.
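The reflection seen on the floor comes from SCNFloor itself and can be toned down if desired. As a small aside (the values here are illustrative, not from the book), inside addFloorNode we could configure it like this:

let floor = SCNFloor()
floor.reflectivity = 0.25            // 0 is matte, 1 is a perfect mirror
floor.reflectionFalloffEnd = 10.0    // reflections fade out beyond this distance
floorNode.geometry = floor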
Importing scenes from external 3D applications

Although we can add objects, cameras, and lights through code, it becomes very tedious and confusing when we have a lot of objects in the scene. In SceneKit, this problem can be easily overcome by importing scenes prebuilt in other 3D applications. All major 3D applications, such as 3D StudioMax, Maya, Cheetah 3D, and Blender, have the ability to export scenes in the Collada (.dae) and Alembic (.abc) formats. We can import these scenes, with lighting, camera, and textured objects, into SceneKit directly, without the need to set up the scene. In this section, we will import a Collada file into the scene. Drag this file into the current project. Along with the DAE file, also add the monster.png file to the project; otherwise, you will see only the untextured monster mesh in the scene. Click on the monsterScene.DAE file. If the textured monster is not automatically loaded, drag the monster.png file from the project onto the monster mesh in the preview window. Release the mouse button once you see a (+) sign while over the monster mesh. Now you will be able to see the monster properly textured. The panel on the left shows the entities in the scene. Below the entities, the scene graph is shown, and the view on the right is the preview pane. Entities show all the objects in the scene, and the scene graph shows the relations between these entities. If you have objects that are children of other objects, the scene graph will show them as a tree. For example, if you open the triangle next to CATRigHub001, you will see all the child objects under it. You can use the scene graph to move and rotate objects and thus fine-tune your scene. You can also add nodes, which can be accessed by code. You can see that we already have a camera and a spotlight in the scene. You can select each object and move it around using the arrows at the pivot point of the object. You can also rotate the view to get a better look by clicking and dragging the left mouse button on the preview scene. For zooming, scroll your mouse wheel up and down. To pan, hold the Alt button on the keyboard and left-click and drag on the preview pane. One thing to note is that rotating, zooming, and panning in the preview pane won't actually move your camera. The camera is still at the same position and angle. To view from the camera again, select the Camera001 option from the drop-down list in the preview pane, and the view will reset to the camera view. If you have more than one camera in your scene, you will have Camera002, Camera003, and so on in the drop-down list. Below the view-selection dropdown in the preview panel, you also have a play button. If you click on the play button, you can watch the monster's default animation being played in the preview window. The preview panel is just that: it is only there to aid you in gaining a better understanding of the objects in the scene. In no way is it a replacement for a regular 3D package such as 3DSMax, Maya, or Blender. You can create cameras, lights, and empty nodes in the scene graph, but you can't add geometry such as boxes and spheres. You can, however, add an empty node, position it in the scene graph, and then add geometry in code and attach it to the node. Now that we have an understanding of the scene graph, let us see how we can run this scene in SceneKit. In the init function, delete the line where we initialized the scene and add the following line instead. Also delete the objects, light, and camera we added earlier.

init(currentview view:SCNView){

    super.init()
    scnView = view
    _size = scnView.bounds.size

    //retrieve the SCNView
    //scene = SCNScene()
    scene = SCNScene(named: "monsterScene.DAE")

    scnView.scene = scene
    scnView.allowsCameraControl = true
    scnView.showsStatistics = true
    scnView.backgroundColor = UIColor.yellowColor()

//   self.addGeometryNode()
//   self.addLightSourceNode()
//   self.addCameraNode()
//   self.addFloorNode()
}

Build and run the game to see the following screenshot: You will see the monster running and the yellow background that we initially assigned to the scene. While exporting the scene, if you export the animations as well, the animation starts playing automatically once the scene loads in SceneKit. Also, you will notice that we have deleted the camera and light we added to the scene. So, how come a default camera and light aren't loaded in the scene? What is happening here is that when I exported the file, I inserted a camera in the scene and also added a spotlight. So, when we imported the file into the scene, SceneKit automatically understood that there is a camera already present and used it as the default camera. Similarly, a spotlight is already added in the scene, which is taken as the default light source, and the lighting is calculated accordingly.

Adding objects and physics to the scene

Let us now see how we can access each of the objects in the scene graph and add gravity to the monster.

Accessing the hero object and adding a physics body

So, create a new function called addColladaObjects and call an addHero function in it. Create a global variable called heroNode of type SCNNode. We will use this node to access the hero object in the scene.
Modify the init function as shown (the new addColladaObjects call is the highlighted addition) and then add the addHero function with the following code:

init(currentview view:SCNView){

    super.init()
    scnView = view
    _size = scnView.bounds.size

    //retrieve the SCNView
    //scene = SCNScene()
    scene = SCNScene(named: "monsterScene.DAE")

    scnView.scene = scene
    scnView.allowsCameraControl = true
    scnView.showsStatistics = true
    scnView.backgroundColor = UIColor.yellowColor()

    self.addColladaObjects()

//   self.addGeometryNode()
//   self.addLightSourceNode()
//   self.addCameraNode()
//   self.addFloorNode()
}

func addHero(){

    heroNode = SCNNode()

    var monsterNode = scene.rootNode.childNodeWithName("CATRigHub001", recursively: false)
    heroNode.addChildNode(monsterNode!)
    heroNode.position = SCNVector3Make(0, 0, 0)

    heroNode.physicsBody = SCNPhysicsBody.dynamicBody()

    let collisionBox = SCNBox(width: 10.0, height: 10.0, length: 10.0, chamferRadius: 0)
    heroNode.physicsBody?.physicsShape = SCNPhysicsShape(geometry: collisionBox, options: nil)

    heroNode.physicsBody?.mass = 20
    heroNode.physicsBody?.angularVelocityFactor = SCNVector3Zero
    heroNode.name = "hero"

    scene.rootNode.addChildNode(heroNode)
}

First, we call the addColladaObjects function in the init function, as highlighted. Then we create the addHero function. In it, we initialize heroNode. To actually move the monster, we need access to the CATRigHub001 node, which we gain through the childNodeWithName method of scene.rootNode. For each object that we wish to access through code, we call childNodeWithName on the root node of the scene and pass in the name of the object. If recursively is set to true, SceneKit will go through all the child nodes to find the requested node; since the node we are looking for is right at the top, we pass false to save processing time. We store the result in a temporary variable called monsterNode. In the next step, we add the monsterNode variable to heroNode and set the position of the hero node to the origin. For heroNode to interact with other physics bodies in the scene, it needs a physics body with a shape. Since we want the body to be affected by gravity, we first assign a dynamic body to heroNode; note that the body has to exist before a shape can be assigned to it, which is why this line comes first. For the shape, we could use the mesh of the monster, but the shape might not be calculated properly, and a box is a much simpler shape than the monster's mesh. So, for creating a box collider, we create a new box geometry roughly the width, height, and depth of the monster and assign it through the physicsBody.physicsShape property of heroNode. Since we want the body to fall convincingly under gravity, we assign a mass of 20 to it. In the next step, we set the angularVelocityFactor to 0 in all three directions, as we want the body to move straight up and down when a vertical force is applied; if we don't do this, the body will flip-flop around. We also assign the name hero to the monster so that we can later check whether a collided object is the hero or not. This will come in handy when we check for collisions with other objects. Finally, we add heroNode to the scene. Add the addColladaObjects call to the init function and comment out or delete the self.addGeometryNode, self.addLightSourceNode, self.addCameraNode, and self.addFloorNode calls if you haven't already.
Then, run the game to see the monster slowly falling through. We will create a small patch of ground right underneath the monster so that it doesn't fall down.

Adding ground

Create a new function called addGround and add the following:

func addGround(){

    let groundBox = SCNBox(width: 10, height: 2, length: 10, chamferRadius: 0)
    let groundNode = SCNNode(geometry: groundBox)

    groundNode.position = SCNVector3Make(0, -1.01, 0)
    groundNode.physicsBody = SCNPhysicsBody.staticBody()
    groundNode.physicsBody?.restitution = 0.0

    scene.rootNode.addChildNode(groundNode)
}

We create a new constant called groundBox of type SCNBox, with a width and length of 10 and a height of 2. Chamfer is the rounding of the edges of the box; since we don't want any rounding of the corners, it is set to 0. Next, we create an SCNNode called groundNode and assign groundBox to it. We place it slightly below the origin: since the height of the box is 2, we place it at -1.01 so that heroNode will be at (0, 0, 0) when the monster rests on the ground. Next, we assign a physics body of the static type. Also, since we don't want the hero to bounce off the ground when he falls on it, we set the restitution to 0. Finally, we add the ground to the scene's root node. The reason we made this body static instead of dynamic is that a dynamic body is affected by gravity and other forces, but a static one is not. So, in this scene, even though gravity is acting downward, the hero will fall but groundBox won't, as it is a static body. You will see that the physics syntax is very similar to SpriteKit, with static bodies, dynamic bodies, gravity, and so on. And once again, similar to SpriteKit, the physics simulation is automatically turned on when we run the scene. Add the addGround function to the addColladaObjects function and run the game to see the monster being affected by gravity and stopping after coming in touch with the ground.

Adding an enemy node

To try out collisions in SceneKit, we could simply check for contact between the hero and the ground. But let us make it a little more interesting and also learn a new kind of body type: the kinematic body. For this, we will create a new box called enemy and make it move and collide with the hero. Create a new global SCNNode called enemyNode, as follows:

var scnView: SCNView!
var _size: CGSize!
var scene: SCNScene!
var heroNode: SCNNode!
var enemyNode: SCNNode!

Also, create a new function called addEnemy in the class and add the following in it:

func addEnemy(){

    let geo = SCNBox(width: 4.0, height: 4.0, length: 4.0, chamferRadius: 0.0)
    geo.firstMaterial?.diffuse.contents = UIColor.yellowColor()

    enemyNode = SCNNode(geometry: geo)
    enemyNode.position = SCNVector3Make(0, 20.0, 60.0)
    enemyNode.physicsBody = SCNPhysicsBody.kinematicBody()
    scene.rootNode.addChildNode(enemyNode)

    enemyNode.name = "enemy"
}

Nothing too fancy here! Just as when adding groundNode, we create a cube, this time with all its sides four units long, and give it a yellow material. We then initialize enemyNode in the function, position the node along the x, y, and z axes, and assign the kinematic body type instead of static or dynamic. Then we add the body to the scene and finally name enemyNode "enemy", which we will need while checking for collisions. Before we forget, call the addEnemy function in the addColladaObjects function, after the addHero call.
The difference between a kinematic body and the other body types is that, like a static body, it is not affected by external forces such as gravity; unlike a static body, though, it is meant to be moved, by setting its position directly from code. In the case of a static body, we saw that it is not affected by gravity, and even if we try to push it through the physics simulation, the body just won't move. Here we won't be applying any force to move the enemy block; we will simply move the object like we moved the enemy in the SpriteKit game. So, it is like making the same game, but in 3D instead of 2D, so that you can see that, although we have a third dimension, the same principles of game development apply to both. For moving the enemy, we need an update function for it. So, let us add one by creating an updateEnemy function with the following in it:

func updateEnemy(){

    enemyNode.position.z -= 0.9

    if((enemyNode.position.z - 5.0) < -40){

        var factor = arc4random_uniform(2) + 1

        if( factor == 1 ){
            enemyNode.position = SCNVector3Make(0, 2.0 , 60.0)
        }else{
            enemyNode.position = SCNVector3Make(0, 15.0 , 60.0)
        }
    }
}

In this function, similar to how we moved the enemy in the SpriteKit game, we decrease the Z position of the enemy node by 0.9 every frame; the difference is that we are moving along the z axis. Once the enemy has gone beyond -40 in the z direction, we reset the position of the enemy. To create an additional challenge for the player, when the enemy resets, a random number is chosen between 1 and 2. If it is 1, the enemy is placed closer to the ground; otherwise, it is placed 15 units above the ground. Later, we will add a jump mechanic to the hero. So, when the enemy is closer to the ground, the hero has to jump over the enemy box, but when the enemy is spawned at a height, the hero shouldn't jump. If he jumps and hits the enemy box, then it is game over. Later, we will also add a scoring mechanism to keep score. For updating the enemy, we also need a general update function to call updateEnemy from, so that the enemy moves and its position resets. So, create a function called update in the class and call the updateEnemy function in it as follows:

func update(){

    updateEnemy()
}

A rough sketch of how the jump and the collision check could later hook into this class follows.
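The jump and the actual collision handling are wired up later; purely as a sketch of where they could go (the contactTestBitMask value, the force of 500, and the jumpHero name are illustrative assumptions, not the book's final code), the scene class could adopt SCNPhysicsContactDelegate and apply an upward impulse for the jump:

// In init, after the nodes exist, assuming GameSCNScene adopts SCNPhysicsContactDelegate:
// scene.physicsWorld.contactDelegate = self
// heroNode.physicsBody?.contactTestBitMask = 1   // the default is 0, so no contacts are reported

func jumpHero(){
    // An upward impulse; 500 is a guessed value for a body with a mass of 20
    heroNode.physicsBody?.applyForce(SCNVector3Make(0, 500, 0), impulse: true)
}

func physicsWorld(world: SCNPhysicsWorld, didBeginContact contact: SCNPhysicsContact){
    if (contact.nodeA.name == "hero" && contact.nodeB.name == "enemy") ||
       (contact.nodeA.name == "enemy" && contact.nodeB.name == "hero") {
        println("hero hit the enemy")  // game-over handling would go here
    }
}

The node names assigned in addHero and addEnemy are exactly what makes this kind of check possible.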
Summary

This article has given insight into how to create a scene with SCNScene, how to add objects to a scene, how to import scenes from external 3D applications, how to add physics to the scene, and how to add an enemy.

Resources for Article: Further resources on this subject: Creating a Brick Breaking Game [article] iOS Security Overview [article] Code Sharing Between iOS and Android [article]
Learning NGUI for Unity

Packt
08 May 2015
2 min read
NGUI is a fantastic GUI toolkit for Unity 3D, allowing fast and simple GUI creation and powerful functionality, with practically zero code required. NGUI is a robust UI system, both powerful and optimized. It is an effective plugin for Unity that gives you the power to create beautiful and complex user interfaces while reducing performance costs. Compared to Unity's built-in GUI features, NGUI is much more powerful and optimized. GUI development in Unity requires users to create UI features by scripting lines that display labels, textures, and other UI elements on the screen. These lines have to be written inside a special function, OnGUI(), that is called for every frame. However, this is no longer necessary with NGUI, as it makes the GUI process much simpler. This book by Charles Pearson, the author of Learning NGUI for Unity, will help you leverage the power of NGUI for Unity to create stunning mobile and PC games and user interfaces. Based on this, the book covers the following topics:

Getting started with NGUI
Creating NGUI widgets
Enhancing your UI
C# with NGUI
Atlas and font customization
The in-game user interface
3D user interface
Going mobile
Screen sizes and aspect ratios
User experience and best practices

This book is a practical tutorial that will guide you through creating a fully functional and localized main menu along with 2D and 3D in-game user interfaces. The book starts by teaching you about NGUI's workflow and creating a basic UI, before gradually moving on to building widgets and enhancing your UI. You will then switch to the Android platform to take care of the different issues mobile devices may encounter. By the end of this book, you will have the knowledge to create ergonomic user interfaces for your existing and future PC or mobile games and applications developed with Unity 3D and NGUI. The best part of this book is that it covers the user experience and also talks about the best practices to follow when using NGUI for Unity. If you are a Unity 3D developer who wants to create an effective and user-friendly GUI using NGUI for Unity, then this book is for you. Prior knowledge of C# scripting is expected; however, no knowledge of NGUI is required.

Resources for Article: Further resources on this subject: Unity Networking – The Pong Game [article] Unit and Functional Tests [article] Components in Unity [article]
Saying Hello to Unity and Android

Packt
04 May 2015
51 min read
Welcome to the wonderful world of mobile game development. Whether you are still looking for the right development kit or have already chosen one, this article will be very important. In this article by Tom Finnegan, the author of Learning Unity Android Game Development, we explore the various features that come with choosing Unity as your development environment and Android as the target platform. Through comparison with major competitors, we will discover why Unity and Android stand at the top of the pile. Following this, we will examine how Unity and Android work together. Finally, the development environment will be set up and we will create a simple Hello World application to test whether everything is set up correctly. (For more resources related to this topic, see here.) In this article, we will cover the following topics:

Major Unity features
Major Android features
Unity licensing options
Installing the JDK
Installing the Android software development kit (SDK)
Installing Unity 3D
Installing Unity Remote

Understanding what makes Unity great

Perhaps the greatest feature of Unity is how open-ended it is. Nearly all game engines currently on the market are limited in what one can build with them. This specialization makes perfect sense, but it can limit the capabilities of a team. The average game engine has been highly optimized for creating a specific game type. This is great if all you plan on making is the same game again and again. It can be quite frustrating when one is struck with inspiration for the next great hit, only to find that the game engine can't handle it and everyone has to retrain in a new engine or double the development time to make the game engine capable. Unity does not suffer from this problem. The developers of Unity have worked very hard to optimize every aspect of the engine without limiting what types of games can be made using it. Everything ranging from simple 2D platformers to massive online role-playing games is possible in Unity. A development team that just finished an ultrarealistic first-person shooter can turn right around and make a 2D fighting game without having to learn an entirely new system. Being so open-ended does, however, bring a drawback: there are no default tools optimized for building the perfect game. To combat this, Unity grants the ability to create any tool one can imagine, using the same scripting that creates the game. On top of that, there is a strong community of users that has supplied a wide selection of tools and pieces, both free and paid, that can be quickly plugged in and used. This results in a large selection of available content that is ready to jump-start you on your way to the next great game. When many prospective users look at Unity, they think that, because it is so cheap, it is not as good as an expensive AAA game engine. This is simply not true. Throwing more money at the game engine is not going to make a game any better. Unity supports all of the fancy shaders, normal maps, and particle effects that you could want. The best part is that nearly all of the fancy features you could want are included in the free version of Unity, and 90 percent of the time you do not even need to use the Pro-only features. One of the greatest concerns when selecting a game engine, especially for the mobile market, is how much girth it will add to the final build size. Most game engines are quite hefty. With Unity's code stripping, the final build size of the project becomes quite small.
Code stripping is the process by which Unity removes every extra little bit of code from the compiled libraries. A blank project compiled for Android that utilizes full code stripping ends up being around 7 megabytes. Perhaps one of the coolest features of Unity is its multi-platform compatibility. With a single project, one can build for several different platforms. This includes the ability to simultaneously target mobiles, PCs, and consoles. This allows you to focus on real issues, such as handling inputs, resolution, and performance. In the past, if a company desired to deploy their product on more than one platform, they had to nearly double the development costs in order to essentially reprogram the game. Every platform did, and still does, run on its own logic and language. Thanks to Unity, game development has never been simpler. We can develop games using simple and fast scripting, letting Unity handle the complex translation to each platform.

Unity – the best among the rest

There are, of course, several other options for game engines. Two major ones that come to mind are cocos2d and Unreal Engine. While both are excellent choices, you will find them to be a little lacking in certain respects. The engine of Angry Birds, cocos2d, could be a great choice for your next mobile hit. However, as the name suggests, it is pretty much limited to 2D games. A game can look great in it, but if you ever want that third dimension, it can be tricky to add to cocos2d; you may need to select a new game engine. A second major problem with cocos2d is how bare-bones it is. Any tool for building or importing assets needs to be created from scratch or found. Unless you have the time and experience, this can seriously slow down development. Then there is the staple of major game development, Unreal Engine. This game engine has been used successfully by developers for many years, bringing great games to the world, Unreal Tournament and Gears of War not the least among them. These are both, however, console and computer games, which points to the fundamental problem with the engine. Unreal is a very large and powerful engine, and only so much optimization can be done on it for mobile platforms. It has always had the same problem: it adds a lot of girth to a project and its final build. The other major issue with Unreal is its rigidity as a first-person shooter engine. While it is technically possible to create other types of games in it, such tasks are long and complex. A strong working knowledge of the underlying system is a must before achieving such a feat. All in all, Unity definitely stands strong amidst game engines, and there are still more great reasons for choosing it for game development. Unity projects can look just as great as AAA titles. The overhead and girth in the final build are small, and this is very important when working on mobile platforms. The system's potential is open enough to allow you to create any type of game you might want, where other engines tend to be limited to a single type of game. In addition, should your needs change at any point in the project's life cycle, it is very easy to add, remove, or change your choice of target platforms.

Understanding what makes Android great

With over 30 million devices in the hands of users, why would you not choose the Android platform for your next mobile hit? Apple may have been the first out of the gate with their iPhone sensation, but Android is definitely a step ahead when it comes to smartphone technology.
One of its best features is how open the hardware is: you can take a look at how the phone works, both physically and technically. One can swap out the battery and upgrade the micro SD card on nearly all Android devices, should the need arise. Plugging the phone into a computer does not have to be a huge ordeal; it can simply function as a removable storage medium. From the point of view of development costs as well, the Android market is superior. Other mobile app stores require an annual registration fee of about 100 dollars. Some also have a limit on the number of devices that can be registered for development at one time. The Google Play market has a one-time registration fee of 25 dollars, and there is no concern about how many Android devices, or what type, you are using for development. One of the drawbacks of some of the other mobile development kits is that you have to pay an annual registration fee before you have access to the SDK. With some, registration and payment are required before you can even view their documentation. Android is much more open and accessible. Anybody can download the Android SDK for free. The documentation and forums are completely viewable without having to pay any fee. This means development for Android can start earlier, with device testing being a part of it from the very beginning.

Understanding how Unity and Android work together

As Unity handles projects and assets in a generic way, there is no need to create multiple projects for multiple target platforms. This means that you could easily start development with the free version of Unity and target personal computers. Then, at a later date, you can switch targets to the Android platform with the click of a button. Perhaps, shortly after your game is launched, it takes the market by storm and there is a great call to bring it to other mobile platforms. With just another click of the button, you can easily target iOS without changing anything in your project. Most systems require a long and complex set of steps to get your project running on a device. However, once your device is set up and recognized by the Android SDK, a single button click will allow Unity to build your application, push it to the device, and start running it. There is nothing that has caused more headaches for some developers than trying to get an application onto a device. Unity makes this simple. With the addition of a free Android application, Unity Remote, it is simple and easy to test mobile inputs without going through the whole build process. While developing, there is nothing more annoying than waiting 5 minutes for a build every time you need to test a minor tweak, especially in the controls and interface. After the first dozen little tweaks, the build time starts to add up. Unity Remote makes it simple and easy to test everything without ever having to hit the Build button. These are the big three reasons why Unity works well with Android:

Generic projects
A one-click build process
Unity Remote

We could, of course, come up with several more great ways in which Unity and Android work together. However, these three are the major time and money savers. You could have the greatest game in the world, but if it takes 10 times longer to build and test, what is the point?

Differences between the Pro and Basic versions of Unity

Unity comes with two licensing options, Pro and Basic, which can be found at https://store.unity3d.com.
If you are not quite ready to spend the 3,000 dollars required to purchase a full Unity Pro license with the Android add-on, there are other options. Unity Basic is free and comes with a 30-day free trial of Unity Pro. This trial is full and complete, just as if you had purchased Unity Pro, the only downside being a watermark in the bottom-right corner of your game stating Demo Use Only. It is also possible to upgrade your license at a later date. Where Unity Basic comes with mobile options for free, Unity Pro requires the purchase of Pro add-ons for each of the mobile platforms.

An overview of license comparison

License comparisons can be found at http://unity3d.com/unity/licenses. This section will cover the specific differences between Unity Android Pro and Unity Android Basic. We will explore what the features are and how useful each one is in the following points:

NavMeshes, pathfinding, and crowd simulation

This feature is Unity's built-in pathfinding system. It allows characters to find their way from point to point around your game. Just bake your navigation data in the editor and let Unity take over at runtime. Until recently, this was a Unity Pro only feature. Now the only part of it that is limited in Unity Basic is the use of off-mesh links. The only time you are going to need them is when you want your AI characters to be able to jump across gaps and otherwise navigate around them.

LOD support

LOD (short for level of detail) lets you control how complex a mesh is based on its distance from the camera. When the camera is close to an object, you can render a complex mesh with a bunch of detail in it. When the camera is far from that object, you can render a simple mesh, because all that detail is not going to be seen anyway. Unity Pro provides a built-in system to manage this. However, this is another system that could be created in Unity Basic. Whether or not you are using the Pro version, this is an important feature for game efficiency. By rendering less complex meshes at a distance, everything can be rendered faster, leaving more room for awesome gameplay.

The audio filter

Audio filters allow you to add effects to audio clips at runtime. Perhaps you created gravel footstep sounds for your character. Your character is running, and we can hear the footsteps just fine, when suddenly they enter a tunnel and a solar flare hits, causing a time warp and slowing everything down. Audio filters would allow us to warp the gravel footstep sounds to sound as if they were coming from within a tunnel and were slowed by a time warp. Of course, you could also just have the audio guy create a new set of tunnel gravel footsteps with the time warp sounds, but this might double the amount of audio in your game and limit how dynamic we can be with it at runtime: we either are or are not playing the time warp footsteps. Audio filters would allow us to control how much the time warp is affecting our sounds.

Video playback and streaming

When dealing with complex or high-definition cut scenes, being able to play videos becomes very important. Including them in a build, especially with a mobile target, can require a lot of space. This is where the streaming part of this feature comes in. This feature not only lets us play videos but also lets us stream a video from the Internet. There is, however, a drawback to this feature. On mobile platforms, the video has to go through the device's built-in video-playing system.
LOD support
LOD (short for level of detail) lets you control how complex a mesh is, based on its distance from the camera. When the camera is close to an object, you can render a complex mesh with a bunch of detail in it. When the camera is far from that object, you can render a simple mesh, because all that detail is not going to be seen anyway. Unity Pro provides a built-in system to manage this. However, this is another system that could be recreated in Unity Basic. Whether or not you are using the Pro version, this is an important feature for game efficiency. By rendering less complex meshes at a distance, everything can be rendered faster, leaving more room for awesome gameplay.
The audio filter
Audio filters allow you to add effects to audio clips at runtime. Perhaps you created gravel footstep sounds for your character. Your character is running and we can hear the footsteps just fine, when suddenly they enter a tunnel and a solar flare hits, causing a time warp and slowing everything down. Audio filters would allow us to warp the gravel footstep sounds to sound as if they were coming from within a tunnel and were slowed by a time warp. Of course, you could also just have the audio guy create a new set of tunnel gravel footsteps in the time warp sounds, although this might double the amount of audio in your game and limit how dynamic we can be with it at runtime: we either are or are not playing the time warp footsteps. Audio filters would allow us to control how much the time warp is affecting our sounds.
Video playback and streaming
When dealing with complex or high-definition cut scenes, being able to play videos becomes very important. Including them in a build, especially with a mobile target, can require a lot of space. This is where the streaming part of this feature comes in. This feature not only lets us play videos but also lets us stream a video from the Internet. There is, however, a drawback to this feature. On mobile platforms, the video has to go through the device's built-in video-playing system. This means that the video can only be played in fullscreen and cannot be used as a texture for effects such as moving pictures on a TV model. Theoretically, you could break your video into individual pictures for each frame and flip through them at runtime, but this is not recommended for build size and video quality reasons.
Fully-fledged streaming with asset bundles
Asset bundles are a great feature provided by Unity Pro. They allow you to create extra content and stream it to users without ever requiring an update to the game. You could add new characters, levels, or just about any other content you can think of. Their only drawback is that you cannot add more code. The functionality cannot change, but the content can. This is one of the best features of Unity Pro.
The 100,000 dollar turnover
This one isn't so much a feature as it is a guideline. According to Unity's End User License Agreement, the Basic version of Unity cannot be licensed by any group or individual who made $100,000 in the previous fiscal year. This basically means that if you make a bunch of money, you have to buy Unity Pro. Of course, if you are making that much money, you can probably afford it without an issue. At least, that is Unity's view, and the reason why the 100,000 dollar turnover clause exists.
Mecanim – IK Rigs
Unity's new animation system, Mecanim, supports many exciting new features, one of which is IK (short for Inverse Kinematics). If you are unfamiliar with the term, IK allows one to define the target point of an animation and let the system figure out how to get there. Imagine you have a cup sitting on a table and a character that wants to pick it up. You could animate the character to bend over and pick it up; but what if the character is slightly to the side? Or any number of other slight offsets that a player could cause, completely throwing off your animation? It is simply impractical to animate for every possibility. With IK, it hardly matters that the character is slightly off. We just define the goal point for the hand and leave the animation of the arm to the IK system. It calculates how the arm needs to move in order to get the hand to the cup. Another fun use is making characters look at interesting things as they walk around a room: a guard could track the nearest person, the player's character could look at things that they can interact with, or a tentacle monster could lash out at the player without all the complex animation. This will be an exciting one to play with.
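In script, the IK goals are driven from the OnAnimatorIK callback. A minimal sketch of the cup example might look like the following; it assumes a humanoid rig with the IK Pass option enabled on the Animator layer, and the cup transform is a made-up stand-in for whatever the hand should reach:

using UnityEngine;

public class ReachForCup : MonoBehaviour
{
    public Transform cup;        // hypothetical reach target, assigned in the Inspector
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // Called by Mecanim during the IK pass, once per layer that has IK Pass enabled.
    void OnAnimatorIK(int layerIndex)
    {
        // A weight of 1 means the IK goal fully overrides the keyed animation.
        animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1.0f);
        animator.SetIKPosition(AvatarIKGoal.RightHand, cup.position);

        // Make the head track the cup as well.
        animator.SetLookAtWeight(1.0f);
        animator.SetLookAtPosition(cup.position);
    }
}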
Mecanim – sync layers and additional curves
Sync layers, inside Mecanim, allow us to keep multiple sets of animation states in time with each other. Say you have a soldier that you want to animate differently based on how much health he has. When he is at full health, he walks around briskly. After a little damage to his health, the walk becomes more of a trudge. If his health is below half, a limp is introduced into his walk, and when he is almost dead, he crawls along the ground. With sync layers, we can create one animation state machine and duplicate it to multiple layers. By changing the animations and syncing the layers, we can easily transition between the different animations while maintaining the state machine.
The additional curves feature is simply the ability to add curves to your animation. This means we can control various values with the animation. For example, in the game world, when a character picks up its feet for a jump, gravity will pull them down almost immediately. By adding an extra curve to that animation, in Unity, we can control how much gravity is affecting the character, allowing them to actually be in the air when jumping. This is a useful feature for controlling such values alongside the animations, but you could just as easily create a script that holds and controls the curves.
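A curve added to a clip drives an Animator float parameter of the same name, which a script can then read each frame (the controller needs a matching float parameter for the binding to take effect). Here is a sketch of the jump example under that assumption; the curve name GravityWeight and the public Rigidbody reference are both invented for illustration:

using UnityEngine;

public class JumpGravityScaler : MonoBehaviour
{
    public Rigidbody body;       // the character's rigidbody, assigned in the Inspector
    private Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
        body.useGravity = false; // we apply our own, curve-scaled gravity instead
    }

    void FixedUpdate()
    {
        // 0 while the jump clip holds the character aloft, easing back to 1 to land.
        float weight = animator.GetFloat("GravityWeight");
        body.AddForce(Physics.gravity * weight, ForceMode.Acceleration);
    }
}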
The custom splash screen
Though pretty self-explanatory, it is perhaps not immediately evident why this feature is specified, unless you have worked with Unity before. When an application that is built in Unity initializes on any platform, it displays a splash screen. In Unity Basic, this will always be the Unity logo. By purchasing Unity Pro, you can substitute any image you want for the Unity logo.
Real-time spot/point and soft shadows
Lights and shadows add a lot to the mood of a scene. This feature allows us to go beyond blob shadows and use realistic-looking shadows. This is all well and good if you have the processing power for it. However, most mobile devices do not. This feature should also never be used for static scenery; instead, use static lightmaps, which is what they are for. However, if you can find a good balance between simple needs and quality, this could be the feature that creates the difference between an alright and an awesome game. If you absolutely must have real-time shadows, the directional light supports them and is the fastest of the lights to calculate. It is also the only type of light available to Unity Basic that supports real-time shadows.
HDR and tone mapping
HDR (short for high dynamic range) and tone mapping allow us to create more realistic lighting effects. Standard rendering uses values from zero to one to represent how much of each color in a pixel is on. This does not allow for a full spectrum of lighting options to be explored. HDR lets the system use values beyond this range and processes them using tone mapping to create better effects, such as a bright morning room or the bloom from a car window reflecting the sun. The downside of this feature is the processing cost. The screen can still only display values between zero and one, so converting them takes time. Additionally, the more complex the effect, the more time it takes to render it. It would be surprising to see this used well on handheld devices, even in a simple game. Maybe the modern tablets could handle it.
Light probes
Light probes are an interesting little feature. When placed in the world, light probes figure out how an object should be lit. Then, as a character walks around, they tell it how to be shaded. The character is, of course, lit by the lights in the scene, but there are limits on how many lights can shade an object at once. Light probes do all the complex calculations beforehand, allowing for better shading at runtime. Again, however, there are concerns about processing power. Too little power and you won't get a good effect; too much and there will be no processing power left for playing the game.
Lightmapping with global illumination and area lights
All versions of Unity support lightmaps, allowing for the baking of complex static shadows and lighting effects. With the addition of global illumination and area lights, you can add another touch of realism to your scenes. However, every version of Unity also lets you import your own lightmaps. This means that you could use some other program to render the lightmaps and import them separately.
Static batching
This feature speeds up the rendering process. Instead of spending time grouping objects for faster rendering on each frame, it allows the system to save the groups generated beforehand. Reducing the number of draw calls is a powerful step towards making a game run faster, and that is exactly what this feature does.
Render-to-texture effects
This is a fun feature, but of limited use. It allows you to use the output from a camera in your game as a texture. This texture could then, in its most simple form, be put onto a mesh and act as a surveillance camera. You could also do some custom post-processing, such as removing the color from the world as the player loses their health. However, this option could become very processor-intensive.
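The surveillance camera case is only a few lines of script. Here is a minimal sketch, assuming a second camera and a monitor mesh already exist in the scene; both references are invented for this example and would be assigned in the Inspector:

using UnityEngine;

public class SurveillanceFeed : MonoBehaviour
{
    public Camera securityCamera; // the camera whose view we want on the screen
    public Renderer monitor;      // the TV/monitor mesh that displays the feed

    void Start()
    {
        // 256x256 pixels with a 16-bit depth buffer; keeping it small keeps it cheap.
        RenderTexture feed = new RenderTexture(256, 256, 16);
        securityCamera.targetTexture = feed;  // the camera now renders into the texture
        monitor.material.mainTexture = feed;  // and the monitor material displays it
    }
}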
Fullscreen post-processing effects
This is another processor-intensive feature that probably will not make it into your mobile game. However, you can add some very cool effects to your scene, such as adding motion blur when the player is moving really fast, or a vortex effect to warp the scene as the ship passes through a warped section of space. One of the best effects is using the bloom effect to give things a neon-like glow.
Occlusion culling
This is another great optimization feature. The standard camera system renders everything that is within the camera's view frustum, the view space. Occlusion culling lets us set up volumes in the space our camera can enter. These volumes are used to calculate what the camera can actually see from those locations. If there is a wall in the way, what is the point of rendering everything behind it? Occlusion culling calculates this and stops the camera from rendering anything behind that wall.
Deferred rendering
If you desire the best-looking game possible, with highly detailed lighting and shadows, this is a feature of interest for you. Deferred rendering is a multi-pass process for calculating your game's light and shadow detail. This is, however, an expensive process and requires a decent graphics card to fully maximize its use. Unfortunately, this makes it a little outside of our use for mobile games.
Stencil buffer access
Custom shaders can use the stencil buffer to create special effects by selectively rendering over specific pixels. It is similar to how one might use an alpha channel to selectively render parts of a texture.
GPU skinning
This is a processing and rendering method by which the calculations for how a character or object appears, when using a skeleton rig, are given to the graphics card rather than the central processor. It is significantly faster to render objects in this way. However, this is only supported on DirectX 11 and OpenGL ES 3.0, leaving it a bit out of reach for our mobile games.
Navmesh – dynamic obstacles and priority
This feature works in conjunction with the pathfinding system. In scripts, we can dynamically set obstacles, and characters will find their way around them. Being able to set priorities means that different types of characters can take different types of objects into consideration when finding their way around. For example, a soldier must go around the barricades to reach his target. The tank, however, could just crash through, should the player desire.
Native code plugins' support
If you have a custom set of code in the form of a Dynamic Link Library (DLL), this is the Unity Pro feature you need access to. Otherwise, the native plugins cannot be accessed by Unity for use with your game.
Profiler and GPU profiling
This is a very useful feature. The profiler provides tons of information about how much load your game puts on the processor. With this information, we can get right down into the nitty-gritty and determine exactly how long a script takes to process.
Script access to the asset pipeline
This is an alright feature. With full access to the pipeline, there is a lot of custom processing that can be done on assets and builds. The full range of possibilities is beyond the scope of this article. However, you can think of it as something that could tint all of the imported textures slightly blue.
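For the curious, that blue-tint example maps onto Unity's AssetPostprocessor hooks. Here is a sketch, assuming the script lives in a folder named Editor; the class name and the 0.1 tint amount are both arbitrary choices for illustration:

using UnityEngine;
using UnityEditor;

// Runs automatically every time a texture asset is imported.
public class BlueTintPostprocessor : AssetPostprocessor
{
    void OnPostprocessTexture(Texture2D texture)
    {
        Color[] pixels = texture.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            // Nudge every pixel slightly toward blue.
            pixels[i].b = Mathf.Clamp01(pixels[i].b + 0.1f);
        }
        texture.SetPixels(pixels);
        texture.Apply();
    }
}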
Dark skin
This is entirely a cosmetic feature. Its point and purpose are questionable. However, if a smooth, dark-skinned look is what you desire, this is the feature that you want. There is an option in the editor to change it to the color scheme used in Unity Basic. For this feature, whatever floats your boat goes.
Setting up the development environment
Before we can create the next great game for Android, we need to install a few programs. In order to make the Android SDK work, we will first install the Java Development Kit (JDK). Then we will install the Android SDK. After that, we will install Unity. We can then install an optional code editor. To make sure everything is set up correctly, we will connect to our devices and take a look at some special strategies if the device is a tricky one. Finally, we will install Unity Remote, a program that will become invaluable in your mobile development.
Installing the JDK
Android's development language of choice is Java; so, to develop for it, we need a copy of the Java SE Development Kit on our computer. The process of installing the JDK is given in the following steps: The latest version of the JDK can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/index.html, so open the site in a web browser, and you will be able to see the screen shown in the following screenshot: Select Java Platform (JDK) from the available versions and you will be brought to a page that contains the license agreement and allows you to select the type of file you wish to download. Accept the license agreement and select your appropriate Windows version from the list at the bottom. If you are unsure about which version to choose, then Windows x86 is usually a safe choice. Once the download is completed, run the new installer. After a system scan, click on Next two times, the JDK will initialize, and then click on the Next button one more time to install the JDK to the default location. It is as good there as anywhere else, so once it is installed, hit the Close button. We have just finished installing the JDK. We need this so that our Android development kit will work. Luckily, the installation process for this keystone is short and sweet.
Installing the Android SDK
In order to actually develop and connect to our devices, we need to have the Android SDK installed. Having the SDK installed fulfills two primary requirements. First, it makes sure that we have the bulk of the latest drivers for recognizing devices. Second, we are able to use the Android Debug Bridge (ADB). ADB is the system used for actually connecting to and interacting with a device. The process of installing the Android SDK is given in the following steps: The latest version of the Android SDK can be found at http://developer.android.com/sdk/index.html, so open a web browser and go to the given site. Once there, scroll to the bottom and find the SDK Tools Only section. This is where we can get just the SDK, which we need to make Android games with Unity, without dealing with the fancy fluff of Android Studio. We need to select the .exe package with (Recommended) underneath it (as shown in the following screenshot): You will then be sent to a Terms and Conditions page. Read it if you prefer, but agree to it to continue. Then hit the Download button to start downloading the installer. Once it has finished downloading, start it up. Hit the first Next button and the installer will try to find an appropriate version of the JDK. You will come to a page that notifies you if the JDK cannot be found. If you skipped ahead and do not have the JDK installed, hit the Visit java.oracle.com button in the middle of the page and go back to the previous section for guidance on installing it. If you do have it, continue with the process. Hitting Next again will bring you to a page that will ask you about the person for whom you are installing the SDK. Select Install for anyone using this computer, because the default install location is easier to get to for later purposes. Hit Next twice, followed by Install, to install the SDK to the default location. Once this is done, hit Next and Finish to complete the installation of the Android SDK Manager. If the Android SDK Manager does not start right away, start it up. Either way, give it a moment to initialize. The SDK Manager makes sure that we have the latest drivers, systems, and tools for developing with the Android platform. However, we have to actually install them first (which can be done from the following screen): By default, the SDK Manager should select a number of options to install. If not, select the latest Android API (Android L (API 20) as of the time of writing this article), Android Support Library, and Google USB Driver found in Extras. Be absolutely sure that Android SDK Platform-tools is selected. This will be very important later; it actually includes the tools that we need to connect to our device. Once everything is selected, hit Install packages at the bottom-right corner. The next screen is another set of license agreements. Every time a component is installed or updated through the SDK Manager, you have to agree to the license terms before it gets installed. Accept all of the licenses and hit Install to start the process. You can now sit back and relax. It takes a while for the components to be downloaded and installed. Once this is all done, you can close it out. We have completed the process, but you should occasionally come back to it. Periodically checking the SDK Manager for updates will make sure that you are using the latest tools and APIs. The installation of the Android SDK is now finished. Without it, we would be completely unable to do anything on the Android platform. Aside from the long wait to download and install components, this was a pretty easy installation.
Installing Unity 3D
Perform the following steps to install Unity: The latest version of Unity can be found at http://www.unity3d.com/unity/download. As of the time of writing this article, the current version is 5.0. Once it is downloaded, launch the installer and click on Next until you reach the Choose Components page, as shown in the following screenshot: Here, we are able to select the features of the Unity installation:
     Example Project: This is the current project built by Unity to show off some of its latest features.
If you want to jump in early and take a look at what a complete Unity game can look like, leave this checked.
     Unity Development Web Player: This is required if you plan on developing browser applications with Unity. As this article is focused on Android development, it is entirely optional. It is, however, a good one to check. You never know when you may need a web demo, and since it is entirely free to develop for the web using Unity, there is no harm in having it.
     MonoDevelop: It is a wise choice to leave this option unchecked. There is more detail in the next section, but it will suffice for now to say that it just adds an extra program for script editing that is not nearly as useful as it should be.
Once you have selected or deselected your desired options, hit Next. If you wish to follow this article, note that we will uncheck MonoDevelop and leave the rest checked. Next is the location of installation. The default location works well, so hit Install and wait. This will take a couple of minutes, so sit back, relax, and enjoy your favorite beverage. Once the installation is complete, the option to run Unity will be displayed. Leave it checked and hit Finish. If you have never installed Unity before, you will be presented with a license activation page (as shown in the following screenshot): While Unity does provide a feature-rich, free version, in order to follow the entirety of this article, one is required to make use of some of the Unity Pro features. At https://store.unity3d.com, you have the ability to buy a variety of licenses. Once they are purchased, you will receive an e-mail containing your new license key. Enter that in the provided text field. If you are not ready to make a purchase, you have two alternatives. We will go over how to reset your license in the Building a simple application section later in the article. The alternatives are as follows:
     The first alternative is that you can check the Activate the free version of Unity checkbox. This will allow you to use the free version of Unity. As discussed earlier, there are many reasons to choose this option. The most notable at the moment is cost.
     Alternatively, you can select the Activate a free 30-day trial of Unity Pro option. Unity offers a fully functional, one-time, free 30-day trial of Unity Pro. This trial also includes the Android Pro add-on. Anything produced during the 30 days is completely yours, as if you had purchased a full Unity Pro license. They want you to see how great it is, so you will come back and make a purchase. The downside is that the Trial Version watermark will be constantly displayed at the corner of the game. After the 30 days, Unity will revert to the free version. This is a great option, should you choose to wait before making a purchase.
Whatever your choice is, hit OK once you have made it. The next page simply asks you to log in with your Unity account. This will be the same account that you used to make your purchase. Just fill out the fields and hit OK. If you have not yet made a purchase, you can hit Create Account and have it ready for when you do make a purchase. The next page is a short survey on your development interests. Fill it out and hit OK, or scroll straight to the bottom and hit Not right now. Finally, there is a thank you page. Hit Start using Unity. After a short initialization, the project wizard will open, and we can start creating the next great game. However, there is still a bunch of work to do to connect the development device.
So for now, hit the X button in the top-right corner to close the project wizard. We will cover how to create a new project in the Building a simple application section later on. We just completed installing Unity 3D. We also had to make a choice about licenses. The alternatives, though, have a few shortcomings: you will either not have full access to all of the features, or be limited to the length of the trial period while making do with a watermark in your games.
The optional code editor
Now a choice has to be made about code editors. Unity comes with a system called MonoDevelop. It is similar in many respects to Visual Studio. And like Visual Studio, it adds many extra files and much girth to a project, all of which it needs to operate. All this extra girth makes it take an annoying amount of time to start up, before one can actually get to the code. Technically, you can get away with a plain text editor, as Unity doesn't really care. This article recommends using Notepad++, which can be found at http://notepad-plus-plus.org/download. It is free to use, and it is essentially Notepad with code highlighting. There are several fancy widgets and add-ons for Notepad++ that add even greater functionality to it, but they are not necessary for following this article. If you choose this alternative, installing Notepad++ to the default location will work just fine.
Connecting to a device
Perhaps the most annoying step in working with Android devices is setting up the connection to your computer. Since there are so many different kinds of devices, it can get a little tricky at times just to have the device recognized by your computer.
A simple device connection
The simple device connection method involves changing a few settings and a little work in the command prompt. It may seem a little scary, but if all goes well, you will be connected to your device shortly: The first thing you need to do is turn on the phone's Developer options. In the latest version of Android, these have been hidden. Go to your phone's settings page and find the About phone page. Next, you need to find the Build number information slot and tap it several times. At first, it will appear to do nothing, but it will shortly display that you need to press the button a few more times to activate the Developer options. The Android team did this so that the average user does not accidentally make changes. Now go back to your settings page and there should be a new Developer options page; select it now. This page controls all of the settings you might need to change while developing your applications. The only checkbox we are really concerned with checking right now is USB debugging. This allows us to actually detect our device from the development environment. If you are using a Kindle, be sure to go into Security and turn on Enable ADB as well. There are several warning pop-ups associated with turning on these various options. They essentially amount to the same malicious software warnings associated with your computer. Applications with immoral intentions can mess with your system and get to your private information. These settings only need to be turned on if your device is going to be used for development. However, as the warnings suggest, if malicious applications are a concern, turn them off when you are not developing. Next, open a command prompt on your computer. This can be done most easily by hitting your Windows key, typing cmd.exe, and then hitting Enter. We now need to navigate to the ADB commands.
If you did not install the SDK to the default location, replace the path in the following commands with the path where you installed it. If you are running a 32-bit version of Windows and installed the SDK to the default location, type the following in the command prompt:
cd c:\program files\android\android-sdk\platform-tools
If you are running a 64-bit version, type the following in the command prompt:
cd c:\program files (x86)\android\android-sdk\platform-tools
Now, connect your device to your computer, preferably using the USB cable that came with it. Wait for your computer to finish recognizing the device. There should be a Device drivers installed type of message pop-up when it is done. The following command lets us see which devices are currently connected and recognized by the ADB system. Emulated devices will show up as well. Type the following in the command prompt:
adb devices
After a short pause for processing, the command prompt will display a list of attached devices, along with their unique IDs. If this list now contains your device, congratulations! You have a developer-friendly device. If it is not completely developer-friendly, there is one more thing that you can try before things get tricky. Go to the top of your device and open your system notifications. There should be one that looks like the USB symbol. Selecting it will open the connection settings. There are a few options here, and by default Android connects the device as a Media Device. We need to connect our device as a Camera. The reason is the connection method used; usually, this will allow your computer to connect. We have completed our first attempt at connecting to our Android devices. For most, this should be all that you need to connect to your device. For some, this process is not quite enough. The next little section covers solutions for connecting trickier devices. For trickier devices, there are a few general things that we can try; if these steps fail to connect your device, you may need to do some special research. Start by typing the following commands. These will restart the connection system and display the list of devices again:
adb kill-server
adb start-server
adb devices
If you are still not having any luck, try the following commands. These force an update and restart the connection system:
cd ../tools
android update adb
cd ../platform-tools
adb kill-server
adb start-server
adb devices
If your device is still not showing up, you have one of the most annoying and tricky devices. Check the manufacturer's website for data syncing and management programs. If you have had your device for quite some time, you have probably been prompted to install this more than once. If you have not already done so, install the latest version even if you never plan on using it. The point is to obtain the latest drivers for your device, and this is the easiest way. Restart the connection system again using the first set of commands, and cross your fingers! If you are still unable to connect, the best, professional recommendation that can be made is to google for the solution to your problem. Conducting a search for your device's brand with adb at the end should turn up a step-by-step tutorial specific to your device in the first couple of results. Another excellent resource for finding out all about the nitty-gritty of Android devices can be found at http://www.xda-developers.com/.
Some of the devices that you will encounter while developing will not connect easily. We just covered some quick steps to connect these devices. If we could have covered the processes for every device, we would have. However, the variety of devices is just too large, and the manufacturers keep making more.
Unity Remote
Unity Remote is a great application created by the Unity team. It allows developers to connect their Android-powered devices to the Unity Editor and provide mobile inputs for testing. This is a definite must for any aspiring Unity and Android developer. If you are using a non-Amazon device, acquiring Unity Remote is quite easy. At the time of writing this article, it could be found on Google Play at https://play.google.com/store/apps/details?id=com.unity3d.genericremote. It is free and does nothing but connect your Android device to the Unity Editor, so the app permissions are negligible. In fact, there are currently two versions of Unity Remote. To connect to Unity 4.5 and later versions, we must use Unity Remote 4. If, however, you like the ever-growing Amazon market or seek to target Amazon's line of Android devices, adding Unity Remote becomes a little trickier. First, you need to download a special Unity Package from the Unity Asset Store. It can be found at https://www.assetstore.unity3d.com/en/#!/content/18106. You will need to import the package into a fresh project and build it from there. Import the package by going to the top of Unity, navigating to Assets | Import Package | Custom Package, and then browsing to where you saved it. In the next section, we will build a simple application and put it on our device. After you have imported the package, follow along from the step where we open the Build Settings window, replacing the simple application with the created APK.
Building a simple application
We are now going to create a simple Hello World application. This will familiarize you with the Unity interface and how to actually put an application on your device.
Hello World
To make sure everything is set up properly, we need a simple application to test with, and what better way to do that than with a Hello World application? To build the application, perform the following steps: The first step is pretty straightforward and simple: start Unity. If you have been following along so far, once this is done you should see a screen resembling the next screenshot. As the tab might suggest, this is the screen through which we open our various projects. Right now, though, we are interested in creating one; so, select New Project from the top-right corner and we will do just that: Use the Project name* field to give your project a name; Ch1_HelloWorld fits well for a project name. Then use the three dots to the right of the Location* field to choose a place on your computer to put the new project. Unity will create a new folder in this location, based on the project name, to store your project and all of its related files: For now, we can ignore the 3D and 2D buttons. These let us determine the defaults that Unity will use when creating a new scene and importing new assets. We can also ignore the Asset packages button. This lets you select from the bits of assets and functionality that are provided by Unity. They are free for you to use in your projects. Hit the Create Project button, and Unity will create a brand-new project for us.
The following screenshot shows the windows of the Unity Editor: The default layout of Unity contains a decent spread of windows that are needed to create a game:      Starting from the left-hand side, Hierarchy contains a list of all the objects that currently exist in our scene. They are organized alphabetically and are grouped under parent objects.      Next to this is the Scene view. This window allows us to edit and arrange objects in the 3D space. In the top left-hand side, there are two groups of buttons. These affect how you can interact with the Scene view.      The button on the far left that looks like a hand lets you pan around when you click and drag with the mouse.      The next button, the crossed arrows, lets you move objects around. Its behavior and the gizmo it provides will be familiar if you have made use of any modeling programs.      The third button changes the gizmo to rotation. It allows you to rotate objects.      The fourth button is for scale. It changes the gizmo as well.      The fifth button lets you adjust the position and the scale based on the bounding box of the object and its orientation relative to how you are viewing it.      The second to last button toggles between Pivot and Center. This will change the position of the gizmo used by the last three buttons to be either at the pivot point of the selected object, or at the average position point of all the selected objects.      The last button toggles between Local and Global. This changes whether the gizmo is orientated parallel with the world origin or rotated with the selected object.      Underneath the Scene view is the Game view. This is what is currently being rendered by any cameras in the scene. This is what the player will see when playing the game and is used for testing your game. There are three buttons that control the playback of the Game view in the upper-middle section of the window.      The first is the Play button. It toggles the running of the game. If you want to test your game, press this button.      The second is the Pause button. While playing, pressing this button will pause the whole game, allowing you to take a look at the game's current state.      The third is the Step button. When paused, this button will let you progress through your game one frame at a time.      On the right-hand side is the Inspector window. This displays information about any object that is currently selected.      In the bottom left-hand side is the Project window. This displays all of the assets that are currently stored in the project.      Behind this is Console. It will display debug messages, compile errors, warnings, and runtime errors. At the top, underneath Help, is an option called Manage License.... By selecting this, we are given options to control the license. The button descriptions cover what they do pretty well, so we will not cover them in more detail at this point. The next thing we need to do is connect our optional code editor. At the top, go to Edit and then click on Preferences..., which will open the following window: By selecting External Tools on the left-hand side, we can select other software to manage asset editing. If you do not want to use MonoDevelop, select the drop-down list to the right of External Script Editor and navigate to the executable of Notepad++, or any other code editor of your choice. Your Image application option can also be changed here to Adobe Photoshop or any other image-editing program that you prefer, in the same way as the script editor. 
If you installed the Android SDK to the default location, do not worry about it. Otherwise, click on Browse... and find the android-sdk folder. Now, for the actual creation of this application, right-click inside your Project window. From the new window that pops up, select Create and C# Script from the menu. Type in a name for the new script (HelloWorld will work well) and hit Enter twice: once to confirm the name and once to open it. For this article, it will be a simple Hello World application. Unity supports C#, JavaScript, and Boo as scripting languages. For consistency, this article will be using C#. If you, instead, wish to use JavaScript for your scripts, copies of all of the projects can be found with the other resources for this article, under a _JS suffix for JavaScript. Every script that is going to be attached to an object extends the functionality of the MonoBehaviour class. JavaScript does this automatically, but C# scripts must define it explicitly. However, as you can see from the default code in the script, we do not have to worry about setting this up initially; it is done automatically. Extending the MonoBehaviour class lets our scripts access various values of the game object, such as the position, and lets the system automatically call certain functions during specific events in the game, such as the Update cycle and the GUI rendering. For now, we will delete the Start and Update functions that Unity insists on including in every new script. Replace them with a bit of code that simply renders the words Hello World in the top-left corner of the screen; you can now close the script and return to Unity:

public void OnGUI() {
    GUILayout.Label("Hello World");
}
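OnGUI is called on every frame in which the GUI is drawn, so the label can just as easily show live data. Purely as an optional variation (not needed for the build that follows), the same script could count rendered frames like this:

using UnityEngine;

public class HelloWorld : MonoBehaviour {
    private int frames = 0; // counts updates, just to show changing data on screen

    void Update() {
        frames++;
    }

    public void OnGUI() {
        GUILayout.Label("Hello World");
        GUILayout.Label("Frames so far: " + frames);
    }
}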
Drag the HelloWorld script from the Project window and drop it on the Main Camera object in the Hierarchy window. Congratulations! You have just added your first bit of functionality to an object in Unity. If you select Main Camera in Hierarchy, the Inspector will display all of the components attached to it. At the bottom of the list is your brand-new HelloWorld script. Before we can test it, we need to save the scene. To do this, go to File at the top and select Save Scene. Give it the name HelloWorld and hit Save. A new icon will appear in your Project window, indicating that you have saved the scene. You are now free to hit the Play button in the upper-middle section of the editor and witness the magic of Hello World. We now get to build the application. At the top, select File and then click on Build Settings.... By default, the target platform is PC. Under Platform, select Android and hit Switch Platform in the bottom-left corner of the Build Settings window. Underneath the Scenes In Build box, there is a button labeled Add Current. Click on it to add our currently opened scene to the build. Only scenes that are in this list and checked will be added to the final build of your game. The scene with the number zero next to it will be the first scene that is loaded when the game starts. There is one last group of things to change before we can hit the Build button. Select Player Settings... at the bottom of the Build Settings window. The Inspector window will open Player Settings (shown in the following screenshot) for the application. From here, we can change the splash screen, icon, screen orientation, and a handful of other technical options: At the moment, there are only a few options that we care about. At the top, Company Name is the name that will appear under the information about the application. Product Name is the name that will appear underneath the icon on your Android device. You can largely set these to anything you want, but they do need to be set immediately. The important setting is Bundle Identifier, underneath Other Settings and Identification. This is the unique identifier that singles out your application from all other applications on the device. The format is com.CompanyName.ProductName, and it is a good practice to use the same company name across all of your products. For this article, we will be using com.TomPacktAndBegin.Ch1.HelloWorld for Bundle Identifier, opting to use an extra dot (period) for the organization. Go to File and then click on Save again. Now you can hit the Build button in the Build Settings window. Pick a location to save the file, and a file name (Ch1_HelloWorld.apk works well). Be sure to remember where it is, and hit Save. If, during the build process, Unity complains about where the Android SDK is, select the android-sdk folder inside the location where it was installed. The default would be C:\Program Files\Android\android-sdk for a 32-bit Windows system and C:\Program Files (x86)\Android\android-sdk for a 64-bit Windows system. Once loading is done, which should not be very long, your APK will have been made and we are ready to continue. We are through with Unity for this article. You can close it down and open a command prompt. Just as we did when we were connecting our devices, we need to navigate to the platform-tools folder in order to connect to our device. If you installed the SDK to the default location, use:
     For a 32-bit Windows system:
cd c:\program files\android\android-sdk\platform-tools
     For a 64-bit Windows system:
cd c:\program files (x86)\android\android-sdk\platform-tools
Double-check to make sure that the device is connected and recognized by using the following command:
adb devices
Now we will install the application. This command tells the system to install an application on the connected device. The -r indicates that it should replace an application if one is found with the same Bundle Identifier as the application we are trying to install. This way, you can just update your game as you develop, rather than uninstalling before installing the new version each time you need to make an update. The path to the .apk file that you wish to install is given in quotes, as follows:
adb install -r "c:\users\tom\desktop\packt\book\ch1_helloworld.apk"
Replace it with the path to your APK file; capital letters do not matter, but be sure to have all the correct spacing and punctuation. If all goes well, the console will display an upload speed when it has finished pushing your application to the device, and a success message when it has finished the installation. The most common causes of errors at this stage are not being in the platform-tools folder when issuing commands and not having the correct path to the .apk file, surrounded by quotes. Once you have received your success message, find the application on your phone and start it up. Now, gaze in wonder at your ability to create Android applications with the power of Unity. We have created our very first Unity and Android application. Admittedly, it was just a simple Hello World application, but that is how it always starts. This served very well for double-checking the device connection and for learning about the build process without all the clutter from a game. If you are looking for a further challenge, try changing the icon for the application.
It is a fairly simple procedure that you will undoubtedly want to perform as your game develops. How to do this was mentioned earlier in this section but, as a reminder, take a look at Player Settings. Also, you will need to import an image; take a look under Assets, in the menu bar, to see how to do this.
Summary
There were a lot of technical things in this article. First, we discussed the benefits and possibilities when using Unity and Android. That was followed by a whole lot of installation: the JDK, the Android SDK, Unity 3D, and Unity Remote. We then figured out how to connect to our devices through the command prompt. Our first application was quick and simple to make. We built it and put it on a device.
Resources for Article:
Further resources on this subject:
What's Your Input? [article]
That's One Fancy Hammer! [article]
Saying Hello to Unity and Android [article]
Writing Simple Behaviors
Packt
27 Apr 2015
18 min read
In this article by Richard Sneyd, the author of Stencyl Essentials, we will learn about Stencyl's signature visual programming interface to create logic and interaction in our game. We create this logic using a WYSIWYG (What You See Is What You Get) block-snapping interface. By the end of this article, you will have the Player Character whizzing down the screen, in pursuit of a zigzagging air balloon! Some of the things we will learn to do in this article are as follows: Create Actor Behaviors, and attach them to Actor Types. Add Events to our Behaviors. Use If blocks to create branching, conditional logic to handle various states within our game. Accept and react to input from the player. Apply physical forces to Actors in real-time. One of the great things about this visual approach to programming is that it largely removes the unpleasantness of dealing with syntax (the rules of the programming language), and the inevitable errors that come with it, when we're creating logic for our game. That frees us to focus on the things that matter most in our games: smooth, well-wrought game mechanics and enjoyable, well-crafted gameplay. (For more resources related to this topic, see here.)
The Player Handler
The first behavior we are going to create is the Player Handler. This behavior will be attached to the Player Character (PC), which exists in the form of the Cowboy Actor Type. This behavior will be used to handle much of the game logic, and will process the lion's share of player input.
Creating a new Actor Behavior
It's time to create our very first behavior! Go to the Dashboard, and under the LOGIC heading, select Actor Behaviors: Click on This game contains no Logic. Click here to create one. to add your first behavior. You should see the Create New... window appear: Enter the Name Player Handler, as shown in the previous screenshot, then click Create. You will be taken to the Behavior Designer: Let's take a moment to examine the various areas within the Behavior Designer. From left to right, as demonstrated in the previous screenshot, we have: The Events Pane: Here we can add, remove, and move between events in our Behavior. The Canvas: At the center of the screen, the Canvas is where we drag blocks around to click our game logic together. The blocks Palette: This is where we can find any and all of the various logic blocks that Stencyl has on offer. Simply browse to your category of choice, then click and drag the block onto the Canvas to configure it. Follow these steps: Click the Add Event button, which can be found at the very top of the Events Pane. In the menu that ensues, browse to Basics and click When Updating: You will notice that we now have an Event in our Events Pane, called Updated, along with a block called always on our Canvas. In Stencyl events lingo, always is synonymous with When Updating: Since this is the only event in our Behavior at this time, it will be selected by default. The always block (yellow with a red flag) is where we put the game logic that needs to be checked on a constant basis, for every update of the game loop (this will be commensurate with the framerate at runtime, around 60 fps, depending on the game's performance and system specs). Before we proceed with the creation of our conditional logic, we must first create a few attributes. If you have a programming background, it is easiest to understand attributes as being analogous to local variables.
Just like variables, they have a set data type, and you can retrieve or change the value of an attribute in real-time.
Creating Attributes
Switch to the Attributes context in the blocks palette: There are currently no attributes associated with this behavior. Let's add some, as we'll need them to store important information of various types, which we'll be using later on to craft the game mechanics. Click on the Create an Attribute button: In the Create an Attribute… window that appears, enter the Name Target Actor, set Type to Actor, check Hidden?, and press OK: Congratulations! If you look at the lower half of the blocks palette, you will see that you have added your first attribute, Target Actor, of type Actor, and it is now available for use in our code. Next, let's add five Boolean attributes. A Boolean is a special kind of attribute that can be set to either true or false; those are the only two values it can accept. First, let's create the Can Be Hurt Boolean: Click Create an Attribute…. Enter the Name Can Be Hurt. Change the Type to Boolean. Check Hidden?. Press OK to commit and add the attribute to the behavior. Repeat steps 1 through 5 for the remaining four Boolean attributes to be added, each time substituting the appropriate name:     Can Switch Anim     Draw Lasso     Lasso Back     Mouse Over If you have done this correctly, you should now see six attributes in your attributes list - one under Actor, and five under Boolean - as shown in the following screenshot: Now let's follow the same process to further create seven attributes; only this time, we'll set the Type for all of them to Number. The Name for each one will be: Health (Set to Hidden?). Impact Force (Set to Hidden?). Lasso Distance (Set to Hidden?). Max Health (Don't set to Hidden?). Turn Force (Don't set to Hidden?). X Point (Set to Hidden?). Y Point (Set to Hidden?). If all goes well, you should see your list of attributes update accordingly: We will add just one additional attribute. Click the Create an Attribute… button again: Name it Mouse State. Change its Type to Text. Do not hide this attribute. Click OK to commit and add the attribute to your behavior. Excellent work; at this point, you have created all of the attributes you will need for the Player Handler behavior!
Custom events
We need to create a few custom events in order to complete the code for this game prototype. For programmers, custom events are like functions that don't accept parameters. You simply trigger them at will to execute a reusable bunch of code. To accept parameters, you must create a custom block.
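If a code analogy helps, here is roughly what that pattern looks like in a conventional language such as C#. This is purely illustrative: Stencyl actually generates Haxe from its blocks, and these names simply mirror the events we are about to build, not anything Stencyl itself produces:

// A custom event behaves like a named, parameterless method
// that other blocks can trigger at any time.
public class PlayerHandler {
    // The custom event: a reusable bundle of logic.
    public void ObjectClickCheck() {
        // ... click-checking logic would go here ...
    }

    // The When Updating (always) event, run once per game loop.
    public void Update() {
        ObjectClickCheck(); // "trigger event ObjectClickCheck in behavior"
    }
}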
To create our first custom event: Click Add Event. Browse to Advanced. Select Custom Event: You will see that a second event, simply called Custom Event, has been added to our list of events: Now, double-click on the Custom Event in the events stack to change its label to Obj Click Check (this is for readability purposes only; it does not affect the event's name in code, and is completely ignored by Stencyl): Now, let's set the name as it will be used in code. Click between When and happens, and insert the name ObjectClickCheck: From now on, whenever we want to call this custom event in our code, we will refer to it as ObjectClickCheck. Go back to the When Updating event by selecting it from the events stack on the left. We are going to add a special block to this event, which calls the custom event we created just a moment ago: In the blocks palette, go to Behavior | Triggers | For Actor, then click and drag the highlighted block onto the canvas: Drop the selected block inside of the Always block, and fill in the fields as shown (please note that I have deliberately excluded the space between Player and Handler in the behavior name, so as to demonstrate the debugging workflow; this will be corrected in a later part of the article): Now, ObjectClickCheck will be executed for every iteration of the game loop! It is usually a good practice to split up your code like this, rather than having it all in one really long event. That would be confusing, and terribly hard to sift through when behaviors become more complex! Here is a chance to assess what you have learnt from this article thus far. We will create a second custom event; see if you can achieve this goal using only the skeleton guide mentioned next. If you struggle, simply refer back to the detailed steps we followed for the ObjectClickCheck event: Click Add Event | Advanced | Custom Event. A new event will appear at the bottom of the events pane. Double-click on the event in the events pane to change its label to Handle Dir Clicks for readability purposes. Between When and happens, enter the name HandleDirectionClicks. This is the handle we will use to refer to this event in code. Go back to the Updated event, right-click on the Trigger event in behavior for self block that is already in the always block, and select copy from the menu. Right-click anywhere on the canvas and select paste from the menu to create an exact duplicate of the block. Change the event being triggered from ObjectClickCheck to HandleDirectionClicks. Keep the value PlayerHandler for the behavior field. Drag and drop the new block so that it sits immediately under the original. Holding Alt on the keyboard, and clicking and dragging on a block, creates a duplicate of that block. Were you successful? If so, you should see these changes and additions in your behavior (note that the order of the events in the events pane does not affect the game logic, or the order in which code is executed). Learning to create and utilize custom events in Stencyl is a huge step towards mastering the tool, so congratulations on having come this far!
Testing and debugging
As with all fields of programming and software development, it is important to periodically and iteratively test your code. It's much easier to catch and repair mistakes this way. On that note, let's test the code we've written so far, using print blocks. Browse to and select Flow | Debug | print from the blocks palette: Now, drag a copy of this block into both of your custom events, snapping it neatly into the when happens blocks as you do so. For the ObjectClickCheck event, type Object Click Detected into the print block. For the HandleDirectionClicks event, type Directional Click Detected into the print block. We are almost ready to test our code. Since this is an Actor Behavior, however, and we have not yet attached it to our Cowboy actor, nothing would happen yet if we ran the code. We must also add an instance of the Cowboy actor to our scene: Click the Attach to Actor Type button to the top right of the blocks palette: Choose the Cowboy Actor from the ensuing list, and click OK to commit. Go back to the Dashboard, and open up the Level 1 scene.
In the Palette to the right, switch from Tiles to Actors, and select the Cowboy actor: Ensure Layer 0 is selected (as actors cannot be placed on background layers). Click on the canvas to place an instance of the actor in the scene, then click on the Inspector, and change the x and y Scale of the actor to 0.8: Well done! You've just added your first behavior to an Actor Type, and added your first Actor Instance to a scene! We are now ready to test our code. First, click the Log Viewer button on the toolbar: This will launch the Log Viewer. The Log Viewer will open up, at which point we need only set Platform to Flash (Player), and click the Test Game button to compile and execute our code: After a few moments, if you have followed all of the steps correctly, you will see that the game window opens on the screen and a number of events appear in the Log Viewer. However, none of these events have anything to do with the print blocks we added to our custom events. Hence, something has gone wrong, and must be debugged. What could it be? Well, since the blocks simply are not executing, it's likely a typo of some kind. Let's look at the Player Handler again, and you'll see that within the Updated event, we've referred to the behavior name as PlayerHandler in both trigger event blocks, with no space inserted between the words Player and Handler: Update both of these fields to Player Handler, and be sure to include the space this time, so that it looks like the following (to avoid a recurrence of this error, you may wish to use the dropdown menu by clicking the downwards-pointing grey arrow, then selecting Behavior Names to choose your behavior from a comprehensive list): Great work! You have successfully completed your first bit of debugging in Stencyl. Click the Test Game button again. After the game window has opened, if you scroll down to the bottom of the Log Viewer, you should see the following events piling up: These INFO events are being triggered by the print blocks we inserted into our custom events, and prove that our code is now working. Excellent job! Let's move on to a new Actor; prepare to meet Dastardly Dan!
Adding the Balloon
Let's add the balloon actor to our game, and insert it into Level 1: Go to the Dashboard, and select Actor Types from the RESOURCES menu. Press Click here to create a new Actor Type. Name it Balloon, and click Create. Click on This Actor Type contains no animations. Click here to add an animation. Change the text in the Name field to Default. Un-check looping?. Press the Click here to add a frame. button. The Import Frame from Image Strip window appears. Change the Scale to 4x. Click Choose Image..., then browse to Game Assets\Graphics\Actor Animations and select Balloon.png. Keep Columns and Rows set to 1, and click Add to commit this frame to the animation. All animations are created with a box collision shape by default. In actuality, the Balloon actor requires no collisions at all, so let's remove it. Go to the Collision context, select the Default box, and press Delete on the keyboard: The Balloon Actor Type is now free of collision shapes, and hence will not interact physically with other elements of our game levels. Next, switch to the Physics context: Set the following attributes: Set What Kind of Actor Type? to Normal. Set Can Rotate? to No. This will disable all rotational physical forces and interactions. We can still rotate the actor by setting its rotation directly in the code, however. Set Affected by Gravity? to No.
We will be handling the downward trajectory of this actor ourselves, without using the gravity implemented by the physics engine. Just before we add this new actor to Level 1, let's add a behavior or two. Switch to the Behaviors context: Then, follow these steps: This Actor Type currently has no attached behaviors. Click Add Behavior, at the bottom left-hand corner of the screen: Under FROM YOUR LIBRARY, go to the Motion category, and select Always Simulate. The Always Simulate behavior will make this actor operational even if it is not on the screen, which is a desirable result in this case. It also prevents Stencyl from deleting the actor when it leaves the scene, which it would automatically do in an effort to conserve memory if we did not explicitly dictate otherwise. Click Choose to add it to the behaviors list for this Actor Type. You should see it appear in the list: Click Add Behavior again. This time, under FROM YOUR LIBRARY, go to the Motion category once more, and select Wave Motion (you'll have to scroll down the list to see it). Click Choose to add it to the behavior stack. You should see it sitting under the Always Simulate behavior:

Configuring prefab behaviors

Prefab behaviors (also called shipped behaviors) enable us to implement some common functionality without reinventing the wheel, so to speak. The great thing about these prefab behaviors, which can be found in the behavior library, is that they can be used as templates and modified at will. Let's learn how to add and modify a couple of these prefab behaviors now. Some prefab behaviors have exposed attributes that can be configured to suit the needs of the project. The Wave Motion behavior is one such example. Select it from the stack, and configure the attributes as follows: Set Direction to Horizontal from the dropdown menu. Set Start Speed to 5. Set Amplitude to 64. Set Wavelength to 128. Fantastic! Now let's add an instance of the Balloon actor to Level 1: Click the Add to Scene button at the top right corner of your view. Select the Level 1 scene. Select the Balloon. Click on the canvas, below the Cowboy actor, to place an instance of the Balloon in the scene:

Modifying prefab behaviors

Before we test the game one last time, we must quickly add a prefab behavior to the Cowboy Actor Type, modifying it slightly to suit the needs of this game (for instance, we will need to create an offset value for the y axis, so that the PC is not always at the center of the screen): Go to the Dashboard, and double-click on the Cowboy in the Actor Types list. Switch to the Behavior context. Click Add Behavior, as you did previously when adding prefab behaviors to the Balloon Actor Type. This time, under FROM YOUR LIBRARY, go to the Game category, and select Camera Follow. As the name suggests, this is a simple behavior that makes the camera follow the actor it is attached to. Click Choose to commit this behavior to the stack, and you should see this: Click the Edit Behavior button, and it will open up in the Behavior Designer: In the Behavior Designer, towards the bottom right corner of the screen, click on the Attributes tab: Once clicked, you will see a list of all the attributes in this behavior appear in the previous window. Click the Add Attribute button: Perform the following steps: Set the Name to Y Offset. Change the Type to Number. Leave the attribute unhidden.
Click OK to commit the new attribute to the attribute stack: We must now modify the set intendedCameraY block in both the Created and the Updated events: Holding Shift, click and drag the set intendedCameraY block out onto the canvas by itself: Drag the y-center of Self block out like the following: Click the little downward-pointing grey arrow at the right of the empty field in the set intendedCameraY block, and browse to Math | Arithmetic | Addition block: Drag the y-center of Self block into the left-hand field of the Add block: Next, click the small downward-pointing grey arrow to the right of the right-hand field of the Addition block to bring up the same menu as before. This time, browse to Attributes, and select Y Offset: Now, right-click on the whole block, and select Copy (this will copy it to the clipboard), then simply drag it back into its original position, just underneath set intendedCameraX: Switch to the Updated event in the events pane on the left, hold Shift, then click and drag set intendedCameraY out of the Always block and drop it in the trash can, as you won't need it anymore. Right-click and select Paste to place a copy of the new block configuration you copied to the clipboard earlier: Click and drag the pasted block so that it appears just underneath the set intendedCameraX block, and save your changes:

Testing the changes

Go back to the Cowboy Actor Type, and open the Behavior context; click File | Reload Document (Ctrl-R or Cmd-R) to update all the changes. You should see a new configurable attribute for the Camera Follow behavior, called Y Offset. Set its value to 70: Excellent! Now go back to the Dashboard and perform the following: Open up Level 1 again. Under Physics, set Gravity (Vertical) to 8.0. Click Test Game, and after a few moments, a new game window should appear. At this stage, you should see the Cowboy shooting down the hill with the camera following him, and the Balloon floating around above him.

Summary

In this article, we learned the basics of creating behaviors, adding and setting attributes of various types, adding and modifying prefab behaviors, and even some rudimentary testing and debugging. Give yourself a pat on the back; you've learned a lot so far!

Resources for Article: Further resources on this subject: Form Handling [article] Managing and Displaying Information [article] Background Animation [article]

Using Basic Projectiles

Packt
27 Apr 2015
22 min read
"Flying is learning how to throw yourself at the ground and miss."                                                                                              – Douglas Adams In this article by Michael Haungs, author of the book Creative Greenfoot, we will create a simple game using basic movements in Greenfoot. Actors in creative Greenfoot applications, such as games and animations, often have movement that can best be described as being launched. For example, a soccer ball, bullet, laser, light ray, baseball, and firework are examples of this type of object. One common method of implementing this type of movement is to create a set of classes that model real-world physical properties (mass, velocity, acceleration, friction, and so on) and have game or simulation actors inherit from these classes. Some refer to this as creating a physics engine for your game or simulation. However, this course of action is complex and often overkill. There are often simple heuristics we can use to approximate realistic motion. This is the approach we will take here. In this article, you will learn about the basics of projectiles, how to make an object bounce, and a little about particle effects. We will apply what you learn to a small platform game that we will build up over the course of this article. Creating realistic flying objects is not simple, but we will cover this topic in a methodical, step-by-step approach, and when we are done, you will be able to populate your creative scenarios with a wide variety of flying, jumping, and launched objects. It's not as simple as Douglas Adams makes it sound in his quote, but nothing worth learning ever is. (For more resources related to this topic, see here.) Cupcake Counter It is beneficial to the learning process to discuss topics in the context of complete scenarios. Doing this forces us to handle issues that might be elided in smaller, one-off examples. In this article, we will build a simple platform game called Cupcake Counter (shown in Figure 1). We will first look at a majority of the code for the World and Actor classes in this game without showing the code implementing the topic of this article, that is, the different forms of projectile-based movement. Figure 1: This is a screenshot of Cupcake Counter How to play The goal of Cupcake Counter is to collect as many cupcakes as you can before being hit by either a ball or a fountain. The left and right arrow keys move your character left and right and the up arrow key makes your character jump. You can also use the space bar key to jump. After touching a cupcake, it will disappear and reappear randomly on another platform. Balls will be fired from the turret at the top of the screen and fountains will appear periodically. The game will increase in difficulty as your cupcake count goes up. The game requires good jumping and avoiding skills. Implementing Cupcake Counter Create a scenario called Cupcake Counter and add each class to it as they are discussed. The CupcakeWorld class This subclass of World sets up all the actors associated with the scenario, including a score. It is also responsible for generating periodic enemies, generating rewards, and increasing the difficulty of the game over time. 
The following is the code for this class: import greenfoot.*; import java.util.List;   public class CupcakeWorld extends World { private Counter score; private Turret turret; public int BCOUNT = 200; private int ballCounter = BCOUNT; public int FCOUNT = 400; private int fountainCounter = FCOUNT; private int level = 0; public CupcakeWorld() {    super(600, 400, 1, false);    setPaintOrder(Counter.class, Turret.class, Fountain.class,    Jumper.class, Enemy.class, Reward.class, Platform.class);    prepare(); } public void act() {    checkLevel(); } private void checkLevel() {    if( level > 1 ) generateBalls();    if( level > 4 ) generateFountains();    if( level % 3 == 0 ) {      FCOUNT--;      BCOUNT--;      level++;    } } private void generateFountains() {    fountainCounter--;    if( fountainCounter < 0 ) {      List<Brick> bricks = getObjects(Brick.class);      int idx = Greenfoot.getRandomNumber(bricks.size());      Fountain f = new Fountain();      int top = f.getImage().getHeight()/2 + bricks.get(idx).getImage().getHeight()/2;      addObject(f, bricks.get(idx).getX(),      bricks.get(idx).getY()-top);      fountainCounter = FCOUNT;    } } private void generateBalls() {    ballCounter--;    if( ballCounter < 0 ) {      Ball b = new Ball();      turret.setRotation(15 * -b.getXVelocity());      addObject(b, getWidth()/2, 0);      ballCounter = BCOUNT;    } } public void addCupcakeCount(int num) {    score.setValue(score.getValue() + num);    generateNewCupcake(); } private void generateNewCupcake() {    List<Brick> bricks = getObjects(Brick.class);    int idx = Greenfoot.getRandomNumber(bricks.size());    Cupcake cake = new Cupcake();    int top = cake.getImage().getHeight()/2 +    bricks.get(idx).getImage().getHeight()/2;    addObject(cake, bricks.get(idx).getX(),    bricks.get(idx).getY()-top); } public void addObjectNudge(Actor a, int x, int y) {    int nudge = Greenfoot.getRandomNumber(8) - 4;    super.addObject(a, x + nudge, y + nudge); } private void prepare(){    // Add Bob    Bob bob = new Bob();    addObject(bob, 43, 340);    // Add floor    BrickWall brickwall = new BrickWall();    addObject(brickwall, 184, 400);    BrickWall brickwall2 = new BrickWall();    addObject(brickwall2, 567, 400);    // Add Score    score = new Counter();    addObject(score, 62, 27);    // Add turret    turret = new Turret();    addObject(turret, getWidth()/2, 0);    // Add cupcake    Cupcake cupcake = new Cupcake();    addObject(cupcake, 450, 30);    // Add platforms    for(int i=0; i<5; i++) {      for(int j=0; j<6; j++) {        int stagger = (i % 2 == 0 ) ? 24 : -24;        Brick brick = new Brick();        addObjectNudge(brick, stagger + (j+1)*85, (i+1)*62);      }    } } } Let's discuss the methods in this class in order. First, we have the class constructor CupcakeWorld(). After calling the constructor of the superclass, it calls setPaintOrder() to set the actors that will appear in front of other actors when displayed on the screen. The main reason why we use it here, is so that no actor will cover up the Counter class, which is used to display the score. Next, the constructor method calls prepare() to add and place the initial actors into the scenario. Inside the act() method, we will only call the function checkLevel(). As the player scores points in the game, the level variable of the game will also increase. The checkLevel() function will change the game a bit according to its level variable. 
When our game first starts, no enemies are generated and the player can easily get the cupcake (the reward). This gives the player a chance to get accustomed to jumping on platforms. As the cupcake count goes up, balls and fountains will be added. As the level continues to rise, checkLevel() reduces the delay between creating balls (BCOUNT) and fountains (FCOUNT). The level variable of the game is increased in the addCupcakeCount() method. The generateFountains() method adds a Fountain actor to the scenario. The rate at which we create fountains is controlled by the delay variable fountainCounter. After the delay, we create a fountain on a randomly chosen Brick (the platforms in our game). The getObjects() method returns all of the actors of a given class presently in the scenario. We then use getRandomNumber() to choose a random index between zero and one less than the number of Brick actors. Next, we use addObject() to place the new Fountain object on the randomly chosen Brick object. Generating balls using the generateBalls() method is a little easier than generating fountains. All balls are created in the same location as the turret at the top of the screen and sent from there with a randomly chosen trajectory. The rate at which we generate new Ball actors is defined by the delay variable ballCounter. Once we create a Ball actor, we rotate the turret based on its x velocity. By doing this, we create the illusion that the turret is aiming and then firing the Ball actor. Last, we place the newly created Ball actor into the scenario using the addObject() method. The addCupcakeCount() method is called by the actor representing the player (Bob) every time the player collides with a Cupcake. In this method, we increase the score and then call generateNewCupcake() to add a new Cupcake actor to the scenario. The generateNewCupcake() method is very similar to generateFountains(), except for the lack of a delay variable, and it randomly places a Cupcake on one of the bricks instead of a Fountain actor. In all of our previous scenarios, we used a prepare() method to add actors to the scenario. The major difference between this prepare() method and the previous ones is that we use the addObjectNudge() method instead of addObject() to place our platforms. The addObjectNudge() method simply adds a little randomness to the placement of the platforms, so that every new game is a little different. The random variation in the platforms will cause the Ball actors to have different bounce patterns and require the player to jump and move a bit more carefully. In the call to addObjectNudge(), you will notice that we used the numbers 85 and 62. These are simply numbers that spread the platforms out appropriately, and they were discovered through trial and error. I created a blue gradient background to use for the image of CupcakeWorld.

Enemies

In Cupcake Counter, all of the actors that can end the game if collided with are subclasses of the Enemy class. Using inheritance is a great way to share code and reduce redundancy for a group of similar actors. However, we will often create class hierarchies in Greenfoot solely for polymorphism. Polymorphism refers to the ability of a class in an object-oriented language to take on many forms. We are going to use it so that our player actor only has to check for collision with an Enemy class, and not every specific type of Enemy, such as Ball or RedBall.
Also, by coding this way, we are making it very easy to add code for additional enemies, and if we find that our enemies have redundant code, we can easily move that code into our Enemy class. In other words, we are making our code extensible and maintainable. Here is the code for our Enemy class: import greenfoot.*;   public abstract class Enemy extends Actor { } The Ball class extends the Enemy class. Since Enemy is solely used for polymorphism, the Ball class contains all of the code necessary to implement bouncing and an initial trajectory. Here is the code for this class: import greenfoot.*;   public class Ball extends Enemy { protected int actorHeight; private int speedX = 0; public Ball() {    actorHeight = getImage().getHeight();    speedX = Greenfoot.getRandomNumber(8) - 4;    if( speedX == 0 ) {      speedX = Greenfoot.getRandomNumber(100) < 50 ? -1 : 1;    } } public void act() {    checkOffScreen(); } public int getXVelocity() {    return speedX; } private void checkOffScreen() {    if( getX() < -20 || getX() > getWorld().getWidth() + 20 ) {      getWorld().removeObject(this);    } else if( getY() > getWorld().getHeight() + 20 ) {      getWorld().removeObject(this);    } } } The implementation of Ball is missing the code to handle moving and bouncing. As we stated earlier, we will go over all the projectile-based code after providing the code we are using as the starting point for this game. In the Ball constructor, we randomly choose a speed in the x direction and save it in the speedX instance variable. We have included one accessory method to return the value of speedX (getXVelocity()). Last, we include checkOffScreen() to remove Ball once it goes off screen. If we do not do this, we would have a form of memory leak in our application because Greenfoot will continue to allocate resources and manage any actor until it is removed from the scenario. For the Ball class, I choose to use the ball.png image, which comes with the standard installation of Greenfoot. In this article, we will learn how to create a simple particle effect. Creating an effect is more about the use of a particle as opposed to its implementation. In the following code, we create a generic particle class, Particles, that we will extend to create a RedBall particle. We have organized the code in this way to easily accommodate adding particles in the future. Here is the code: import greenfoot.*;   public class Particles extends Enemy { private int turnRate = 2; private int speed = 5; private int lifeSpan = 50; public Particles(int tr, int s, int l) {    turnRate = tr;    speed = s;    lifeSpan = l;    setRotation(-90); } public void act() {    move();    remove(); } private void move() {    move(speed);    turn(turnRate); } private void remove() {    lifeSpan--;    if( lifeSpan < 0 ) {      getWorld().removeObject(this);    } } } Our particles are implemented to move up and slightly turn each call of the act() method. A particle will move lifeSpan times and then remove itself. As you might have guessed, lifeSpan is another use of a delay variable. The turnRate property can be either positive (to turn slightly right) or negative (to turn slightly left). We only have one subclass of Particles, RedBall. This class supplies the correct image for RedBall, supplies the required input for the Particles constructor, and then scales the image according to the parameters scaleX and scaleY. 
Here's the implementation: import greenfoot.*;   public class RedBall extends Particles { public RedBall(int tr, int s, int l, int scaleX, int scaleY) {    super(tr, s, l);    getImage().scale(scaleX, scaleY); } } For RedBall, I used the Greenfoot-supplied image red-draught.png. Fountains In this game, fountains add a unique challenge. After reaching level five (see the World class CupcakeWorld), Fountain objects will be generated and randomly placed in the game. Figure 2 shows a fountain in action. A Fountain object continually spurts RedBall objects into the air like water from a fountain. Figure 2: This is a close-up of a Fountain object in the game Cupcake Counter Let's take a look at the code that implements the Fountain class: import greenfoot.*; import java.awt.Color;   public class Fountain extends Actor { private int lifespan = 75; private int startDelay = 100; private GreenfootImage img; public Fountain() {    img = new GreenfootImage(20,20);    img.setColor(Color.blue);    img.setTransparency(100);    img.fill();    setImage(img); } public void act() {    if( --startDelay == 0 ) wipeView();    if( startDelay < 0 ) createRedBallShower(); } private void wipeView() {    img.clear(); } private void createRedBallShower() { } } The constructor for Fountain creates a new blue, semitransparent square and sets that to be its image. We start with a blue square to give the player of the game a warning that a fountain is about to erupt. Since fountains are randomly placed at any location, it would be unfair to just drop one on our player and instantly end the game. This is also why RedBall is a subclass of Enemy and Fountain is not. It is safe for the player to touch the blue square. The startDelay delay variable is used to pause for a short amount of time, then remove the blue square (using the function wipeView()), and then start the RedBall shower (using the createRedBallShower() function). We can see this in the act() method. Turrets In the game, there is a turret in the top-middle of the screen that shoots purple bouncy balls at the player. It is shown in Figure 1. Why do we use a bouncy-ball shooting turret? Because this is our game and we can! The implementation of the Turret class is very simple. Most of the functionality of rotating the turret and creating Ball to shoot is handled by CupcakeWorld in the generateBalls() method already discussed. The main purpose of this class is to just draw the initial image of the turret, which consists of a black circle for the base of the turret and a black rectangle to serve as the cannon. Here is the code: import greenfoot.*; import java.awt.Color;   public class Turret extends Actor { private GreenfootImage turret; private GreenfootImage gun; private GreenfootImage img; public Turret() {    turret = new GreenfootImage(30,30);    turret.setColor(Color.black);    turret.fillOval(0,0,30,30);       gun = new GreenfootImage(40,40);    gun.setColor(Color.black);    gun.fillRect(0,0,10,35);       img = new GreenfootImage(60,60);    img.drawImage(turret, 15, 15);    img.drawImage(gun, 25, 30);    img.rotate(0);       setImage(img); } } We previously talked about the GreenfootImage class and how to use some of its methods to do custom drawing. One new function we introduced is drawImage(). This method allows you to draw one GreenfootImage into another. This is how you compose images, and we used it to create our turret from a rectangle image and a circle image. Rewards We create a Reward class for the same reason we created an Enemy class. 
We are setting ourselves up to easily add new rewards in the future. Here is the code: import greenfoot.*;   public abstract class Reward extends Actor { } The Cupcake class is a subclass of the Reward class and represents the object on the screen the player is constantly trying to collect. However, cupcakes have no actions to perform or state to keep track of; therefore, its implementation is simple: import greenfoot.*;   public class Cupcake extends Reward { } When creating this class, I set its image to be muffin.png. This is an image that comes with Greenfoot. Even though the name of the image is a muffin, it still looks like a cupcake to me. Jumpers The Jumper class is a class that will allow all subclasses of it to jump when pressing either the up arrow key or the spacebar. At this point, we just provide a placeholder implementation: import greenfoot.*;   public abstract class Jumper extends Actor { protected int actorHeight; public Jumper() {    actorHeight = getImage().getHeight(); } public void act() {    handleKeyPresses(); } protected void handleKeyPresses() { } } The next class we are going to present is the Bob class. The Bob class extends the Jumper class and then adds functionality to let the player move it left and right. Here is the code: import greenfoot.*;   public class Bob extends Jumper { private int speed = 3; private int animationDelay = 0; private int frame = 0; private GreenfootImage[] leftImages; private GreenfootImage[] rightImages; private int actorWidth; private static final int DELAY = 3; public Bob() {    super();       rightImages = new GreenfootImage[5];    leftImages = new GreenfootImage[5];       for( int i=0; i<5; i++ ) {      rightImages[i] = new GreenfootImage("images/Dawson_Sprite_Sheet_0" + Integer.toString(3+i) + ".png");      leftImages[i] = new GreenfootImage(rightImages[i]);      leftImages[i].mirrorHorizontally();    }       actorWidth = getImage().getWidth(); } public void act() {    super.act();    checkDead();    eatReward(); } private void checkDead() {    Actor enemy = getOneIntersectingObject(Enemy.class);    if( enemy != null ) {      endGame();    } } private void endGame() {    Greenfoot.stop(); } private void eatReward() {    Cupcake c = (Cupcake) getOneIntersectingObject(Cupcake.class);    if( c != null ) {      CupcakeWorld rw = (CupcakeWorld) getWorld();      rw.removeObject(c);      rw.addCupcakeCount(1);    } } // Called by superclass protected void handleKeyPresses() {    super.handleKeyPresses();       if( Greenfoot.isKeyDown("left") ) {      if( canMoveLeft() ) {moveLeft();}    }    if( Greenfoot.isKeyDown("right") ) {      if( canMoveRight() ) {moveRight();}    } } private boolean canMoveLeft() {    if( getX() < 5 ) return false;    return true; } private void moveLeft() {    setLocation(getX() - speed, getY());    if( animationDelay % DELAY == 0 ) {      animateLeft();      animationDelay = 0;    }    animationDelay++; } private void animateLeft() {    setImage( leftImages[frame++]);    frame = frame % 5;    actorWidth = getImage().getWidth(); } private boolean canMoveRight() {    if( getX() > getWorld().getWidth() - 5) return false;    return true; } private void moveRight() {    setLocation(getX() + speed, getY());    if( animationDelay % DELAY == 0 ) {      animateRight();      animationDelay = 0;    }    animationDelay++; } private void animateRight() {    setImage( rightImages[frame++]);    frame = frame % 5;    actorWidth = getImage().getWidth(); } } Like CupcakeWorld, this class is substantial. 
We will discuss each method it contains sequentially. First, the constructor's main duty is to set up the images for the walking animation. The images came from www.wikia.com and were supplied, in the form of a sprite sheet, by the user Mecha Mario. A direct link to the sprite sheet is http://smbz.wikia.com/wiki/File:Dawson_Sprite_Sheet.PNG. Note that I manually copied and pasted the images I used from this sprite sheet using my favorite image editor.

Free Internet resources

Unless you are also an artist or a musician in addition to being a programmer, you are going to be hard pressed to create all of the assets you need for your Greenfoot scenario. If you look at the credits for AAA video games, you will see that the number of artists and musicians equals or even exceeds the number of programmers. Luckily, the Internet comes to the rescue. There are a number of websites that supply legally free assets you can use. For example, the website I used to get the images for the Bob class supplies free content under the Creative Commons Attribution-Share Alike 3.0 (Unported) (CC-BY-SA) license. It is very important that you check the licensing used for any asset you download off the Internet and follow those user agreements carefully. In addition, make sure that you fully credit the source of your assets. For games, you should include a Credits screen to cite all the sources for the assets you used. The following are some good sites for free, online assets: www.wikia.com newgrounds.com http://incompetech.com opengameart.org untamed.wild-refuge.net/rpgxp.php

Next, we have the act() method. It first calls the act() method of its superclass. It needs to do this so that we get the jumping functionality that is supplied by the Jumper class. Then, we call checkDead() and eatReward(). The checkDead() method ends the game if this instance of the Bob class touches an enemy, and eatReward() adds one to our score, by calling the CupcakeWorld method addCupcakeCount(), every time it touches an instance of the Cupcake class. The rest of the class implements moving left and right. The main method for this is handleKeyPresses(). As in act(), the first thing we do is call the handleKeyPresses() contained in the Jumper superclass. This runs the code in Jumper that handles the spacebar and up arrow key presses. The key to handling key presses is the Greenfoot method isKeyDown() (see the following information box). We use this method to check whether the left arrow or right arrow keys are presently being pressed. If so, we check whether or not the actor can move left or right using the methods canMoveLeft() and canMoveRight(), respectively. If the actor can move, we then call either moveLeft() or moveRight().

Handling key presses in Greenfoot

The second tutorial explains how to control actors with the keyboard. To refresh your memory, we are going to present some information on keyboard control here. The primary method we use in implementing keyboard control is isKeyDown(). This method provides a simple way to check whether a certain key is being pressed. Here is an excerpt from Greenfoot's documentation:

public static boolean isKeyDown(java.lang.String keyName)
Check whether a given key is currently pressed down.
Parameters: keyName - the name of the key to check.
Returns: true if the key is down.

Using isKeyDown() is easy. The ease of capturing and using input is one of the major strengths of Greenfoot. Here is example code that will pause the execution of the game if the "p" key is pressed:

if( Greenfoot.isKeyDown("p") ) {
  Greenfoot.stop();
}
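One subtlety worth knowing: isKeyDown() reports the key's current state, so a held key returns true on every call to act(). If you want something to happen only once per press, you can remember the previous frame's state in a field and act only on the down transition. Here is a minimal sketch of that latch pattern (the wasPDown field is our own illustrative name, not part of the Greenfoot API):

private boolean wasPDown = false; // state of the "p" key on the previous frame

public void act() {
    boolean pDown = Greenfoot.isKeyDown("p");
    // fires exactly once per press, on the frame the key goes down
    if (pDown && !wasPDown) {
        Greenfoot.stop();
    }
    wasPDown = pDown;
}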
Next, we will discuss canMoveLeft(), moveLeft(), and animateLeft(). The canMoveRight(), moveRight(), and animateRight() methods mirror their functionality and will not be discussed. The sole purpose of canMoveLeft() is to prevent the actor from walking off the left-hand side of the screen. The moveLeft() method moves the actor using setLocation() and then animates the actor to look as though it is walking to the left. It uses a delay variable to make the walking speed look natural (not too fast). The animateLeft() method sequentially displays the walking-left images.

Platforms

The game contains several platforms that the player can jump or stand on. The platforms perform no actions and only serve as placeholders for images. We use inheritance to simplify collision detection. Here is the implementation of Platform:

import greenfoot.*;

public class Platform extends Actor {
}

Here's the implementation of BrickWall:

import greenfoot.*;

public class BrickWall extends Platform {
}

Here's the implementation of Brick:

import greenfoot.*;

public class Brick extends Platform {
}

You should now be able to compile and test Cupcake Counter. Make sure you handle any typos or other errors you introduced while inputting the code. For now, you can only move left and right.

Summary

We have created a simple game using basic movements in Greenfoot.

Resources for Article: Further resources on this subject: A Quick Start Guide to Scratch 2.0 [article] Games of Fortune with Scratch 1.4 [article] Cross-platform Development - Build Once, Deploy Anywhere [article]

Creating Cool Content

Packt
23 Apr 2015
26 min read
In this article by Alex Ogorek, author of the book Mastering Cocos2d Game Development, you'll be learning how to implement the complex, subtle game mechanics that few developers take the time to get right. This is what separates the good games from the great games. There will be many examples, tutorials, and code snippets in this article intended for adaptation in your own projects, so feel free to come back at any time to look at something you may have either missed the first time or are just curious to know about in general. In this article, we will cover the following topics: Adding a table for scores Adding subtle sliding to the units Creating movements on a Bézier curve instead of straight paths (For more resources related to this topic, see here.)

Adding a table for scores

Because we want a way to show the user their past high scores in the GameOver scene, we're going to add a table that displays the most recent high scores that are saved. For this, we're going to use CCTableView. It's still relatively new, but it works well for our purposes.

CCTableView versus UITableView

Although UITableView might be known to some of you who've made non-Cocos2d apps before, you should be aware of its downfalls when it comes to using it within Cocos2d. For example, if you want a BMFont in your table, you can't add a LabelBMFont (you could try to convert the BMFont into a TTF font and use that within the table, but that's outside the scope of this book). If you still wish to use a UITableView object (or any UIKit element, for that matter), you can create the object like normal, and add it to the scene, like this (tblScores is the name of the UITableView object):

[[[CCDirector sharedDirector] view] addSubview:tblScores];

Saving high scores (NSUserDefaults)

Before we display any high scores, we have to make sure we save them. The easiest way to do this is by making use of Apple's built-in data preservation tool, NSUserDefaults. If you've never used it before, let me tell you that it's basically a dictionary with "save" mechanics that stores the values on the device, so that the next time the user loads the device, the values are available for the app.
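If you have never touched NSUserDefaults before, the entire read/write pattern fits in a few lines. Here is a minimal sketch (the key name @"ExampleKey" is made up purely for illustration):

//write a value, then flush it to disk
[[NSUserDefaults standardUserDefaults] setInteger:42 forKey:@"ExampleKey"];
[[NSUserDefaults standardUserDefaults] synchronize];
//read it back later; integerForKey: returns 0 if the key has never been set
NSInteger saved = [[NSUserDefaults standardUserDefaults] integerForKey:@"ExampleKey"];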
Also, because there are three different values we're tracking for each gameplay, let's only say a given game is better than another game when its total score is greater. Therefore, let's create a saveHighScore method that will go through all the total scores in our saved list and see whether the current total score is greater than any of the saved scores. If so, it will insert itself and bump the rest down. In MainScene.m, add the following method:

-(NSInteger)saveHighScore {
  //save top 20 scores
  //an array of Dictionaries...
  //keys in each dictionary:
  // [DictTotalScore]
  // [DictTurnsSurvived]
  // [DictUnitsKilled]
  //read the array of high scores saved on the user's device
  NSMutableArray *arrScores = [[[NSUserDefaults standardUserDefaults] arrayForKey:DataHighScores] mutableCopy];
  //sentinel value of -1 (in other words, if a high score was not found on this play through)
  NSInteger index = -1;
  //loop through the scores in the array
  for (NSDictionary *dictHighScore in arrScores) {
    //if the current game's total score is greater than the score stored in the current index of the array...
    if (numTotalScore > [dictHighScore[DictTotalScore] integerValue]) {
      //then store that index and break out of the loop
      index = [arrScores indexOfObject:dictHighScore];
      break;
    }
  }
  //if a new high score was found
  if (index > -1) {
    //create a dictionary to store the score, turns survived, and units killed
    NSDictionary *newHighScore = @{ DictTotalScore : @(numTotalScore),
      DictTurnsSurvived : @(numTurnSurvived),
      DictUnitsKilled : @(numUnitsKilled) };
    //then insert that dictionary into the array of high scores
    [arrScores insertObject:newHighScore atIndex:index];
    //remove the very last object in the high score list (in other words, limit the number of scores)
    [arrScores removeLastObject];
    //then save the array
    [[NSUserDefaults standardUserDefaults] setObject:arrScores forKey:DataHighScores];
    [[NSUserDefaults standardUserDefaults] synchronize];
  }
  //finally return the index of the high score (whether it's -1 or an actual value within the array)
  return index;
}

Finally, call this method in the endGame method, right before you transition to the next scene:

-(void)endGame {
  //call the method here to save the high score, then grab the index of the high score within the array
  NSInteger hsIndex = [self saveHighScore];
  NSDictionary *scoreData = @{ DictTotalScore : @(numTotalScore),
    DictTurnsSurvived : @(numTurnSurvived),
    DictUnitsKilled : @(numUnitsKilled),
    DictHighScoreIndex : @(hsIndex)};
  [[CCDirector sharedDirector] replaceScene:[GameOverScene sceneWithScoreData:scoreData]];
}

Now that we have our high scores being saved, let's create the table to display them.

Creating the table

It's really simple to set up a CCTableView object. All we need to do is modify the contentSize object, and then put in a few methods that handle the size and content of each cell. So first, open the GameOverScene.h file and set the scene as a data source for the CCTableView:

@interface GameOverScene : CCScene <CCTableViewDataSource>

Then, in the initWithScoreData method, create the header labels as well as initialize the CCTableView:

//get the high score array from the user's device
arrScores = [[NSUserDefaults standardUserDefaults] arrayForKey:DataHighScores];
//create labels
CCLabelBMFont *lblTableTotalScore = [CCLabelBMFont labelWithString:@"Total Score:" fntFile:@"bmFont.fnt"];
CCLabelBMFont *lblTableUnitsKilled = [CCLabelBMFont labelWithString:@"Units Killed:" fntFile:@"bmFont.fnt"];
CCLabelBMFont *lblTableTurnsSurvived = [CCLabelBMFont labelWithString:@"Turns Survived:" fntFile:@"bmFont.fnt"];
//position the labels
lblTableTotalScore.position = ccp(winSize.width * 0.5, winSize.height * 0.85);
lblTableUnitsKilled.position = ccp(winSize.width * 0.675, winSize.height * 0.85);
lblTableTurnsSurvived.position = ccp(winSize.width * 0.875, winSize.height * 0.85);
//add the labels to the scene
[self addChild:lblTableTurnsSurvived];
[self addChild:lblTableTotalScore];
[self addChild:lblTableUnitsKilled];
//create the tableview and add it to the scene
CCTableView *tblScores = [CCTableView node];
tblScores.contentSize = CGSizeMake(0.6, 0.4);
CGFloat ratioX = (1.0 - tblScores.contentSize.width) * 0.75;
CGFloat ratioY = (1.0 - tblScores.contentSize.height) / 2;
tblScores.position = ccp(winSize.width * ratioX, winSize.height * ratioY);
tblScores.dataSource = self;
tblScores.block = ^(CCTableView *table){
  //if they press a cell, do something here.
  //NSLog(@"Cell %ld", (long)table.selectedRow);
};
[self addChild: tblScores];
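One detail in that snippet is worth pausing on: contentSize here is normalized, so CGSizeMake(0.6, 0.4) is a fraction of the parent, not a size in points. As a quick worked example (the 1024x768 scene size is assumed purely for illustration):

//on a hypothetical 1024x768 scene, a contentSize of (0.6, 0.4) works out to:
//width  = 0.6 * 1024 = 614.4 points
//height = 0.4 * 768  = 307.2 points

This is also what makes the ratioX and ratioY lines above tick: (1.0 - contentSize.width) is the fraction of the screen the table does not cover, which the code then divides up to position the table.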
With the CCTableView object's data source being set to self, we can add the three methods that will determine exactly how our table looks and what data goes in each cell (that is, row). Note that if we don't set the data source, the table view's methods will not be called; and if we set it to anything other than self, the methods will be called on that object/class instead. That being said, add these three methods:

-(CCTableViewCell*)tableView:(CCTableView *)tableView nodeForRowAtIndex:(NSUInteger)index {
  CCTableViewCell* cell = [CCTableViewCell node];
  cell.contentSizeType = CCSizeTypeMake(CCSizeUnitNormalized, CCSizeUnitPoints);
  cell.contentSize = CGSizeMake(1, 40);
  // Color every other row differently
  CCNodeColor* bg;
  if (index % 2 != 0)
    bg = [CCNodeColor nodeWithColor:[CCColor colorWithRed:0 green:0 blue:0 alpha:0.3]];
  else
    bg = [CCNodeColor nodeWithColor:[CCColor colorWithRed:0 green:0 blue:0 alpha:0.2]];
  bg.userInteractionEnabled = NO;
  bg.contentSizeType = CCSizeTypeNormalized;
  bg.contentSize = CGSizeMake(1, 1);
  [cell addChild:bg];
  return cell;
}

-(NSUInteger)tableViewNumberOfRows:(CCTableView *)tableView {
  return [arrScores count];
}

-(float)tableView:(CCTableView *)tableView heightForRowAtIndex:(NSUInteger)index {
  return 40.f;
}

The first method, tableView:nodeForRowAtIndex:, will format each cell based on its index. For now, we're going to color each cell in one of two different colors. The second method, tableViewNumberOfRows:, returns the number of rows, or cells, that will be in the table view. Since we know there are going to be 20, we could technically type 20, but what if we decide to change that number later? So, let's stick with using the count of the array. The third method, tableView:heightForRowAtIndex:, is meant to return the height of the row, or cell, at the given index. Since we aren't doing anything different with any cell in particular, we can hardcode this value to a fairly reasonable height of 40. At this point, you should be able to run the game, and when you lose, you'll be taken to the game over screen with the labels across the top as well as a table that scrolls on the right side of the screen. It's good practice when learning Cocos2d to just mess around with stuff to see what sort of effects you can make. For example, you could try using some ScaleTo actions to scale the text up from 0, or use a MoveTo action to slide it in from the bottom or the side. Feel free to see whether you can create a cool way to display the text right now. Now that we have the table in place, let's get the data displayed, shall we?

Showing the scores

Now that we have our table created, it's a simple addition to our code to get the proper numbers to display correctly. In the nodeForRowAtIndex method, add the following block of code right after adding the background color to the cell:
//Create the 4 labels that will be used within the cell (row).
CCLabelBMFont *lblScoreNumber = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d)", index+1] fntFile:@"bmFont.fnt"];
//Set the anchor point to the middle-right (default is middle-middle)
lblScoreNumber.anchorPoint = ccp(1,0.5);
CCLabelBMFont *lblTotalScore = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictTotalScore] integerValue]] fntFile:@"bmFont.fnt"];
CCLabelBMFont *lblUnitsKilled = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictUnitsKilled] integerValue]] fntFile:@"bmFont.fnt"];
CCLabelBMFont *lblTurnsSurvived = [CCLabelBMFont labelWithString:[NSString stringWithFormat:@"%d", [arrScores[index][DictTurnsSurvived] integerValue]] fntFile:@"bmFont.fnt"];
//set the position type of each label to normalized (where (0,0) is the bottom left of its parent and (1,1) is the top right of its parent)
lblScoreNumber.positionType = lblTotalScore.positionType = lblUnitsKilled.positionType = lblTurnsSurvived.positionType = CCPositionTypeNormalized;
//position all of the labels within the cell
lblScoreNumber.position = ccp(0.15,0.5);
lblTotalScore.position = ccp(0.35,0.5);
lblUnitsKilled.position = ccp(0.6,0.5);
lblTurnsSurvived.position = ccp(0.9,0.5);
//if the index we're iterating through is the same index as our High Score index...
if (index == highScoreIndex) {
  //then set the color of all the labels to a golden color
  lblScoreNumber.color = lblTotalScore.color = lblUnitsKilled.color = lblTurnsSurvived.color = [CCColor colorWithRed:1 green:183/255.f blue:0];
}
//add all of the labels to the individual cell
[cell addChild:lblScoreNumber];
[cell addChild:lblTurnsSurvived];
[cell addChild:lblTotalScore];
[cell addChild:lblUnitsKilled];

And that's it! When you play the game and end up at the game over screen, you'll see the high scores being displayed (even the scores from earlier attempts, because they were saved, remember?). Notice the high score that is yellow. It's an indication that the score you got in the game you just played is on the scoreboard, and it shows you where it is. Although the CCTableView might feel a bit weird, with things disappearing and reappearing as you scroll, let's get some Threes!-like sliding into our game. If you're considering adding a CCTableView to your own project, the key takeaway here is to make sure you modify the contentSize and position properly. By default, the contentSize is a normalized CGSize, so it runs from 0 to 1, and the anchor point is (0,0). Plus, make sure you perform these two steps: Set the data source of the table view Add the three table view methods With all that in mind, it should be relatively easy to implement a CCTableView.
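To make that takeaway concrete, here is the whole checklist compressed into one minimal sketch. Every call in it appears elsewhere in this article, but treat it as a hedged starting template (the sizes and position are placeholder values), not a drop-in snippet:

CCTableView *table = [CCTableView node];
//contentSize is normalized by default, so this is half the parent's width and height
table.contentSize = CGSizeMake(0.5, 0.5);
table.position = ccp(winSize.width * 0.25, winSize.height * 0.25);
//without this line, the three data source methods are never called
table.dataSource = self;
[self addChild:table];
//then implement tableView:nodeForRowAtIndex:, tableViewNumberOfRows:,
//and tableView:heightForRowAtIndex: in the data source class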
Adding subtle sliding to the units

If you've ever played Threes! (or if you haven't, check out the trailer at http://asherv.com/threes/, and maybe even download the game on your phone), you would be aware of the sliding feature when a user begins to make their move but hasn't yet completed the move. At the speed of the dragging finger, the units slide in the direction they're going to move, showing the user where each unit will go and how each unit will combine with another. This is useful, as it not only adds that extra layer of "cool factor" but also provides a preview of the future for the user, in case they want to revert their decision ahead of time and make a different, more calculated move. Here's a side note: if you want your game to go really viral, you have to make the user believe it was their fault that they lost, and not your "stupid game mechanics" (as some players might say). Think Angry Birds, Smash Hit, Crossy Road, Threes!, Tiny Wings... the list goes on and on with more games that became popular, and all had one underlying theme: when the user loses, it was entirely in their control to win or lose, and they made the wrong move. This unseen mechanic pushes players to play again with a better strategy in mind. And this is exactly why we want our users to see their move before it gets made. It's a win-win situation for both the developers and the players.

Sliding one unit

If we can get one unit to slide, we can surely get the rest of the units to slide by simply looping through them, modularizing the code, or some other form of generalization. That being said, we need to set up the Unit class so that it can detect how far the finger has dragged; thus, we can determine how far to move the unit. So, open Unit.h and add the following variable. It will track the distance from the previous touch position:

@property (nonatomic, assign) CGPoint previousTouchPos;

Then, in the touchMoved method of Unit.m, add the following assignment to previousTouchPos. It sets the previous touch position to the touch-down position, but only after the distance is greater than 20 units:

if (!self.isBeingDragged && ccpDistance(touchPos, self.touchDownPos) > 20) {
  self.isBeingDragged = YES;
  //add it here:
  self.previousTouchPos = self.touchDownPos;

Once that's in place, we can begin calculating the distance while the finger is being dragged. To do that, we'll do a simple check. Add the following block of code at the end of touchMoved, after the end of the initial if block:

//only if the unit is currently being dragged
if (self.isBeingDragged) {
  CGFloat dist = 0;
  //if the direction the unit is being dragged is either UP or DOWN
  if (self.dragDirection == DirUp || self.dragDirection == DirDown)
    //then subtract the previously recorded Y-value from the current touch position's Y-value to determine the distance to move
    dist = touchPos.y - self.previousTouchPos.y;
  //else if the direction the unit is being dragged is either LEFT or RIGHT
  else if (self.dragDirection == DirLeft || self.dragDirection == DirRight)
    //then subtract the previously recorded X-value from the current touch position's X-value to determine the distance to move
    dist = touchPos.x - self.previousTouchPos.x;
  //then assign the touch position for the next iteration of touchMoved to work properly
  self.previousTouchPos = touchPos;
}

The assignment of previousTouchPos at the end will ensure that while the unit is being dragged, we continue to update the touch position so that we can determine the distance. Plus, the distance is calculated only in the direction in which the unit is being dragged (up and down use Y; left and right use X). Now that we have the distance between finger drags being calculated, let's push this into a function that will move our unit based on which direction it's being dragged in. So, right after you've calculated dist in the previous code block, call the following method to move our unit based on the amount dragged:

dist /= 2; //optional
[self slideUnitWithDistance:dist withDragDirection:self.dragDirection];

Dividing the distance by 2 is optional. You may think the squares are too small and want the user to be able to see their square, so note that dividing by 2, or a larger number, will mean that for every 1 point the finger moves, the unit will move by 1/2 (or fewer) points.
With that method call ready, we need to implement it, so add the following method body for now. Since this method is rather complicated, it's going to be added in parts:

-(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir {
}

The first thing we need to do is set up variables to calculate the new x and y positions of the unit. We'll call these newX and newY, and set them to the unit's current position:

CGFloat newX = self.position.x, newY = self.position.y;

Next, we want to grab the position that the unit starts at, that is, the position the unit would be at if it were positioned at its current grid coordinate. To do that, we're going to call the getPositionForGridCoordinate method from MainScene (since that's where the positions are being calculated anyway, we might as well use that function):

CGPoint originalPos = [MainScene getPositionForGridCoord:self.gridPos];

Next, we're going to move newX or newY based on the direction in which the unit is being dragged. For now, let's just add the up direction:

if (self.dragDirection == DirUp) {
  newY += dist;
  if (newY > originalPos.y + self.gridWidth)
    newY = originalPos.y + self.gridWidth;
  else if (newY < originalPos.y)
    newY = originalPos.y;
}

In this if block, we first add the distance to the newY variable (because we're going up, we're adding to Y instead of X). Then, we want to make sure the position is at most 1 square up. We're going to use gridWidth (which is essentially the width of the square, assigned in the initCommon method). Also, we need to make sure that if the user is bringing the square back to its original position, it doesn't go into the square beneath it. So let's add the rest of the directions as else if statements:

else if (self.dragDirection == DirDown) {
  newY += dist;
  if (newY < originalPos.y - self.gridWidth)
    newY = originalPos.y - self.gridWidth;
  else if (newY > originalPos.y)
    newY = originalPos.y;
}
else if (self.dragDirection == DirLeft) {
  newX += dist;
  if (newX < originalPos.x - self.gridWidth)
    newX = originalPos.x - self.gridWidth;
  else if (newX > originalPos.x)
    newX = originalPos.x;
}
else if (self.dragDirection == DirRight) {
  newX += dist;
  if (newX > originalPos.x + self.gridWidth)
    newX = originalPos.x + self.gridWidth;
  else if (newX < originalPos.x)
    newX = originalPos.x;
}

Finally, we will set the position of the unit based on the newly calculated x and y positions:

self.position = ccp(newX, newY);

Running the game at this point should cause the unit you drag to slide along with your finger. Nice, huh? Since we have a function that moves one unit, we can very easily alter it so that every unit can be moved like this. But first, there's something you've probably noticed a while ago (or maybe just recently), and that's the unit movement being canceled only when you bring your finger back to the original touch-down position. Because we're dragging the unit itself, we can "cancel" the move by dragging the unit back to where it started. However, the finger might be in a completely different position, so we need to modify how the canceling gets determined.
To do that, in your touchEnded method of Unit.m, locate this if statement:

if (ccpDistance(touchPos, self.touchDownPos) > self.boundingBox.size.width/2)

Change it to the following, which will determine the unit's distance, and not the finger's distance:

CGPoint oldSelfPos = [MainScene getPositionForGridCoord:self.gridPos];
CGFloat dist = ccpDistance(oldSelfPos, self.position);
if (dist > self.gridWidth/2)

Yes, this means you no longer need the touchPos variable in touchEnded, if you're getting that warning and wish to get rid of it. But that's it for sliding one unit. Now we're ready to slide all the units, so let's do it!

Sliding all units

Now that we have the dragged unit sliding, let's continue and make all the units slide (even the enemy units, so that we can better predict our troops' movement). First, we need a way to move all the units on the screen. However, since the Unit class only contains information about the individual unit (which is a good thing), we need to call a method in MainScene, since that's where the arrays of units are. Moreover, we cannot simply call [MainScene method], since the arrays are instance variables, and instance variables must be accessed through an instance of the object itself. That being said, because we know that our unit will be added to the scene as a child, we can use Cocos2d to our advantage, and call an instance method on the MainScene class via the parent parameter. So, in touchMoved of Unit.m, make the following change:

[(MainScene*)self.parent slideAllUnitsWithDistance:dist withDragDirection:self.dragDirection];
//[self slideUnitWithDistance:dist withDragDirection:self.dragDirection];

Basically, we've commented out (or deleted) the old method call here, and instead called it on our parent object (which we cast as a MainScene so that we know which functions it has). But we don't have that method created yet, so in MainScene.h, add the following method declaration:

-(void)slideAllUnitsWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir;

Just in case you haven't noticed, the enum UnitDirection is declared in Unit.h, which is why MainScene.h imports Unit.h: so that we can make use of that enum in this class, and in this function in particular. Then, in MainScene.m, we're going to loop through both the friendly and enemy arrays, and call the slideUnitWithDistance function on each individual unit:

-(void)slideAllUnitsWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir {
  for (Unit *u in arrFriendlies)
    [u slideUnitWithDistance:dist withDragDirection:dir];
  for (Unit *u in arrEnemies)
    [u slideUnitWithDistance:dist withDragDirection:dir];
}

However, that still isn't functional, as we haven't declared that function in the header file for the Unit class. So go ahead and do that now. Declare the function header in Unit.h:

-(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir;

We're almost done. We initially set up our slideUnitWithDistance method with a drag direction in mind. However, only the unit that's currently being dragged will have a drag direction. Every other unit will need to use the direction it's currently facing (that is, the direction in which it's already going). To do that, we just need to modify how the slideUnitWithDistance method does its checking to determine which direction to modify the distance by. But first, we need to handle the negatives. What does that mean?
Well, if you're dragging a unit to the left and a unit being moved is supposed to be moving to the left, it will work properly, as x-10 (for example) will still be less than the grid's width. However, if you're dragging left and a unit being moved is supposed to be moving right, it won't move at all: it tries to apply a negative value, x-10, but because it needs to be moving to the right, it hits the left bound right away (anything less than the original position) and stays still. The following diagram should help explain what is meant by "handling negatives." As you can see, in the top section, when the non-dragged unit is supposed to be going left by 10 (in other words, negative 10 in the x direction), it works. But when the non-dragged unit is going with the opposite sign (in other words, positive 10 in the x direction), it doesn't. To handle this, we set up a pretty complicated if statement. It checks when the drag direction and the unit's own direction are opposite (positive versus negative), and multiplies the distance by -1 (flips it). Add this to the top of the slideUnitWithDistance method, right after you grab the newX and the original position:

-(void)slideUnitWithDistance:(CGFloat)dist withDragDirection:(enum UnitDirection)dir {
  CGFloat newX = self.position.x, newY = self.position.y;
  CGPoint originalPos = [MainScene getPositionForGridCoord:self.gridPos];
  if (!self.isBeingDragged &&
    (((self.direction == DirUp || self.direction == DirRight) && (dir == DirDown || dir == DirLeft)) ||
    ((self.direction == DirDown || self.direction == DirLeft) && (dir == DirUp || dir == DirRight)))) {
    dist *= -1;
  }
}

The logic of this if statement works as follows: suppose the unit is not being dragged, and either its direction is positive while the drag direction is negative, or its direction is negative while the drag direction is positive; then multiply the distance by -1. Finally, as mentioned earlier, we just need to handle the non-dragged units. So, in every if statement, add an "or" portion that will check for the same direction, but only if the unit is not currently being dragged. In other words, in the slideUnitWithDistance method, modify your if statements to look like this:

if (self.dragDirection == DirUp || (!self.isBeingDragged && self.direction == DirUp)) {}
else if (self.dragDirection == DirDown || (!self.isBeingDragged && self.direction == DirDown)) {}
else if (self.dragDirection == DirLeft || (!self.isBeingDragged && self.direction == DirLeft)) {}
else if (self.dragDirection == DirRight || (!self.isBeingDragged && self.direction == DirRight)) {}

Finally, we can run the game. Bam! All the units go gliding across the screen with our drag. Isn't it lovely? Now the player can better choose their move. That's it for the sliding portion. The key to unit sliding is to loop through the arrays to ensure that all the units get moved by an equal amount, hence passing the distance to the move function.

Creating movements on a Bézier curve

If you don't know what a Bézier curve is, it's basically a line that goes from point A to point B over a curve. Instead of being a straight line with two points, it uses a second set of points, called control points, that bend the line in a smooth way. When you want to apply movement with animations in Cocos2d, it's very tempting to queue up a bunch of MoveTo actions in a sequence. However, it's going to look a lot nicer (in both the game and the code) if you use a smoother Bézier curve animation.
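Before looking at a picture, it helps to know what a Bézier configuration actually computes. A cubic Bézier curve is just a weighted blend of four points: the two endpoints and the two control points. Here is a minimal sketch of the math; bezierPointAt is a hypothetical helper shown purely for illustration, since Cocos2d evaluates this for you inside its Bézier actions:

//standard cubic Bezier: B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3
static CGPoint bezierPointAt(CGFloat t, CGPoint p0, CGPoint p1, CGPoint p2, CGPoint p3) {
  CGFloat u = 1.0f - t;
  CGFloat x = u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x;
  CGFloat y = u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y;
  return ccp(x, y);
}

As t runs from 0 to 1, the point sweeps from p0 to p3; the curve never has to pass through p1 or p2, but it is pulled toward them along the way.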
Here's a good example of what a Bézier curve looks like: As you can see, the red line goes from point P0 to P3. However, the line is influenced in the direction of the control points, P1 and P2.

Examples of using a Bézier curve

Let's list a few examples where it would be a good choice to use a Bézier curve instead of just the regular MoveTo or MoveBy actions:

- A character that performs a jumping animation, for example, in Super Mario Bros
- A boomerang as a weapon that the player throws
- Launching a missile or rocket and giving it a parabolic curve
- A tutorial hand that indicates a curved path the user must make with their finger
- A skateboarder on a half-pipe ramp (if not done with Chipmunk)

There are obviously a lot of other examples that could use a Bézier curve for their movement. But let's actually code one, shall we?

Sample project – Bézier map route

First, to make things go a lot faster—as this isn't going to be part of the book's project—simply download the project from the code repository or the website. If you open the project and run it on your device or a simulator, you will notice a blue screen and a square in the bottom-left corner. If you tap anywhere on the screen, you'll see the blue square make an M shape, ending in the bottom-right corner. If you hold your finger down, it will repeat. Tap again and the animation will reset. Imagine the path this square takes is over a map, indicating the route a player's character will travel. This is a very choppy, very sharp path. Generally, paths are curved, so let's make one that is! Here is a screenshot that shows the very straight path of the blue square: The following screenshot shows the Bézier path of the yellow square: Curved M-shape

Open MainScene.h and add another CCNodeColor variable, named unitBezier:

CCNodeColor *unitBezier;

Then open MainScene.m and add the following code to the init method so that your yellow block shows up on the screen:

unitBezier = [[CCNodeColor alloc] initWithColor:[CCColor colorWithRed:1 green:1 blue:0] width:50 height:50];
[self addChild:unitBezier];
CCNodeColor *shadow2 = [[CCNodeColor alloc] initWithColor:[CCColor blackColor] width:50 height:50];
shadow2.anchorPoint = ccp(0.5,0.5);
shadow2.position = ccp(26,24);
shadow2.opacity = 0.5;
[unitBezier addChild:shadow2 z:-1];

Then, in the sendFirstUnit method, add the two lines of code that reset the yellow block's position and queue up the method that will move it:

-(void)sendFirstUnit {
  unitRegular.position = ccp(0,0);
  //Add these 2 lines
  unitBezier.position = ccp(0,0);
  [self scheduleOnce:@selector(sendSecondUnit) delay:2];

  CCActionMoveTo *move1 = [CCActionMoveTo actionWithDuration:0.5 position:ccp(winSize.width/4, winSize.height * 0.75)];
  CCActionMoveTo *move2 = [CCActionMoveTo actionWithDuration:0.5 position:ccp(winSize.width/2, winSize.height/4)];
  CCActionMoveTo *move3 = [CCActionMoveTo actionWithDuration:0.5 position:ccp(winSize.width*3/4, winSize.height * 0.75)];
  CCActionMoveTo *move4 = [CCActionMoveTo actionWithDuration:0.5 position:ccp(winSize.width - 50, 0)];
  [unitRegular runAction:[CCActionSequence actions:move1, move2, move3, move4, nil]];
}

After this, you'll need to actually create the sendSecondUnit method, like this:

-(void)sendSecondUnit {
  ccBezierConfig bezConfig1;
  bezConfig1.controlPoint_1 = ccp(0, winSize.height);
  bezConfig1.controlPoint_2 = ccp(winSize.width*3/8, winSize.height);
  bezConfig1.endPosition = ccp(winSize.width*3/8, winSize.height/2);
  CCActionBezierTo *bez1 = [CCActionBezierTo
"actionWithDuration:1.0 bezier:bezConfig1]; ccBezierConfig bezConfig2; bezConfig2.controlPoint_1 = ccp(winSize.width*3/8, 0); bezConfig2.controlPoint_2 = ccp(winSize.width*5/8, 0); bezConfig2.endPosition = ccp(winSize.width*5/8, winSize.height/2); CCActionBezierBy *bez2 = [CCActionBezierTo "actionWithDuration:1.0 bezier:bezConfig2]; ccBezierConfig bezConfig3; bezConfig3.controlPoint_1 = ccp(winSize.width*5/8, "winSize.height); bezConfig3.controlPoint_2 = ccp(winSize.width, winSize.height); bezConfig3.endPosition = ccp(winSize.width - 50, 0); CCActionBezierTo *bez3 = [CCActionBezierTo "actionWithDuration:1.0 bezier:bezConfig3]; [unitBezier runAction:[CCActionSequence actions:bez1, bez2, bez3, nil]]; } The preceding method creates three Bézier configurations and attaches them to a MoveTo command that takes a Bézier configuration. The reason for this is that each Bézier configuration can take only two control points. As you can see in this marked-up screenshot, where each white and red square represents a control point, you can make only a U-shaped parabola with a single Bézier configuration. Thus, to make three U-shapes, you need three Bézier configurations. Finally, make sure that in the touchBegan method, you make the unitBezier stop all its actions (that is, stop on reset): [unitBezier stopAllActions]; And that's it! When you run the project and tap on the screen (or tap and hold), you'll see the blue square M-shape its way across, followed by the yellow square in its squiggly M-shape. If you" want to adapt the Bézier MoveTo or MoveBy actions for your own project, you should know that you can create only one U-shape with each Bézier configuration. They're fairly easy to implement and can quickly be copied and pasted, as shown in the sendSecondUnit function. Plus, as the control points and end position are just CGPoint values, they can be relative (that is, relative to the unit's current position, the world's position, or an enemy's position), and as a regular CCAction, they can be run with any CCNode object quite easily. Summary In this article, you learned how to do a variety of things, from making a score table and previewing the next move, to making use of Bézier curves. The code was built with a copy-paste mindset, so it can be adapted for any project without much reworking (if it is required at all). Resources for Article: Further resources on this subject: Cocos2d-x: Installation [article] Dragging a CCNode in Cocos2D-Swift [article] Animations in Cocos2d-x [article]

Creating Games with Cocos2d-x is Easy and 100 percent Free

Packt
01 Apr 2015
5 min read
In this article by Raydelto Hernandez, the author of the book Building Android Games with Cocos2d-x, we will talk about the Cocos2d-x game engine, which is widely used to create Android games. The launch of the Apple App Store back in 2008 leveraged the reach of indie game developers, who since then have been able to reach millions of users and compete with large companies, outperforming them in some situations. This reality led to the trend of creating reusable game engines, such as Cocos2d-iPhone, written natively in Objective-C by the Argentine iPhone developer Ricardo Quesada. Cocos2d-iPhone allowed many independent developers to reach the top of the download charts. (For more resources related to this topic, see here.) Picking an existing game engine is a smart choice for indies and large companies alike, since it allows them to focus on the game logic rather than rewriting core features over and over again. Thus, there are many game engines out there with all kinds of licenses and characteristics. The most popular game engines for mobile systems right now are Unity, Marmalade, and Cocos2d-x; all three have the capabilities to create 2D and 3D games. Determining which one is the best in terms of ease of use and availability of tools may be debatable, but there is one objective fact that can be easily verified: among these three engines, Cocos2d-x is the only one that you can use for free no matter how much money you make using it. We highlighted in this article's title that Cocos2d-x is completely free. This was emphasized because the other two frameworks also allow some free usage; nevertheless, both of them at some point require a payment for the usage license. In order to understand why Cocos2d-x is still free and open source, we need to understand how this tool was born. Ricardo, an enthusiastic Python programmer, often participated in game creation challenges that required participants to develop games from scratch within a week. Back in those days, Ricardo and his team rewrote the core engine for each game, until they came up with the idea of creating a framework that encapsulated core game capabilities, could be used in any two-dimensional game, and was open source, so that contributions could be received worldwide. This is why Cocos2d was originally written for fun. With the launch of the first iPhone in 2007, Ricardo led the development of the port of the Cocos2d Python framework to the iPhone platform using its native language, Objective-C. Cocos2d-iPhone quickly became popular among indie game developers, some of them turning into Appillionaires, as Chris Stevens called the individuals and enterprises that made millions of dollars during the App Store bubble period. This phenomenon made game development companies look at this framework, created by hobbyists, as a tool for building their products. Zynga was one of the first big companies to adopt Cocos2d as their framework to deliver their famous FarmVille game to the iPhone in 2009. This company has been trading on NASDAQ since 2011 and has more than 2,000 employees. In July 2010, a C++ port of Cocos2d-iPhone, called Cocos2d-x, was written in China with the objective of taking the power of this framework to other platforms, such as the Android operating system, which by that time was gaining market share at a spectacular rate.
In 2011, this Cocos2d port was acquired by Chukong Technologies, the third largest mobile game development company in China, which later hired the original Cocos2d-iPhone author to join its team. Today, Cocos2d-x-based games dominate the top-grossing charts of Google Play and the App Store, especially in Asia. Recognized companies and leading studios, such as Konami, Zynga, Bandai Namco, Wooga, Disney Mobile, and Square Enix, are using Cocos2d-x in their games. Currently, there are 400,000 developers working on adding new functionalities and making this framework as stable as possible. These include engineers from Google, ARM, Intel, BlackBerry, and Microsoft, who officially support the ports to their products, such as Windows Phone, Windows, and the Windows Metro interface, with support for the Xbox planned for this year. Cocos2d-x is a very straightforward engine that requires only a little learning to grasp. I teach game development courses at many universities using this framework; within the first week, the students are capable of creating a game with the complexity of the famous title Doodle Jump. This can be achieved easily because the framework provides all the components required for a game, such as physics, audio handling, collision detection, animation, networking, data storage, user input, map rendering, scene transitions, 3D rendering, particle system rendering, font handling, menu creation, displaying forms, thread handling, and so on. This abstracts us from the low-level logic and allows us to focus on the game logic. Summary In conclusion, if you are willing to learn how to develop games for mobile platforms, I strongly recommend that you learn and use the Cocos2d-x framework, because it is easy to use, totally free, and open source. This means that you can better understand it by reading its source, you can modify it if needed, and you have the guarantee that you will never be forced to pay a license fee if your game becomes a hit. Another big advantage of this framework is its highly available documentation, including the Packt Publishing collection of Cocos2d-x game development books. Resources for Article: Further resources on this subject: Moving the Space Pod Using Touch [article] Why should I make cross-platform games? [article] Animations in Cocos2d-x [article]

Fun with Sprites – Sky Defense

Packt
25 Mar 2015
35 min read
This article is written by Roger Engelbert, the author of Cocos2d-x by Example: Beginner's Guide - Second Edition. Time to build our second game! This time, you will get acquainted with the power of actions in Cocos2d-x. I'll show you how an entire game can be built just by running the various action commands contained in Cocos2d-x to make your sprites move, rotate, scale, fade, blink, and so on. You can also use actions to animate your sprites using multiple images, like in a movie. So let's get started. In this article, you will learn:

- How to optimize the development of your game with sprite sheets
- How to use bitmap fonts in your game
- How easy it is to implement and run actions
- How to scale, rotate, swing, move, and fade out a sprite
- How to load multiple .png files and use them to animate a sprite
- How to create a universal game with Cocos2d-x

(For more resources related to this topic, see here.)

The game – sky defense

Meet our stressed-out city of...your name of choice here. It's a beautiful day when suddenly the sky begins to fall. There are meteors rushing toward the city, and it is your job to keep it safe. The player in this game can tap the screen to start growing a bomb. When the bomb is big enough to be activated, the player taps the screen again to detonate it. Any nearby meteor will explode into a million pieces. The bigger the bomb, the bigger the detonation, and the more meteors can be taken out by it. But the bigger the bomb, the longer it takes to grow it. It's not just bad news coming down, though. There are also health packs dropping from the sky, and if you allow them to reach the ground, you'll recover some of your energy.

The game settings

This is a universal game. It is designed for the iPad retina screen, and it will be scaled down to fit all the other screens. The game will be played in landscape mode, and it will not need to support multitouch.

The start project

The command line I used was: cocos new SkyDefense -p com.rengelbert.SkyDefense -l cpp -d /Users/rengelbert/Desktop/SkyDefense In Xcode, you must set the Devices field in Deployment Info to Universal, and in RootViewController.mm, the supported interface orientation is set to Landscape. The game we are going to build requires only one class, GameLayer.cpp, and you will find that the interface for this class already contains all the information it needs. Also, some of the more trivial or old-news logic is already in place in the implementation file as well. But I'll go over this as we work on the game.

Adding screen support for a universal app

Now things get a bit more complicated as we add support for smaller screens in our universal game, as well as some of the most common Android screen sizes. So open AppDelegate.cpp.
Inside the applicationDidFinishLaunching method, we now have the following code:

auto screenSize = glview->getFrameSize();
auto designSize = Size(2048, 1536);
glview->setDesignResolutionSize(designSize.width, designSize.height, ResolutionPolicy::EXACT_FIT);

std::vector<std::string> searchPaths;
if (screenSize.height > 768) {
   searchPaths.push_back("ipadhd");
   director->setContentScaleFactor(1536/designSize.height);
} else if (screenSize.height > 320) {
   searchPaths.push_back("ipad");
   director->setContentScaleFactor(768/designSize.height);
} else {
   searchPaths.push_back("iphone");
   director->setContentScaleFactor(380/designSize.height);
}

auto fileUtils = FileUtils::getInstance();
fileUtils->setSearchPaths(searchPaths);

Once again, we tell our GLView object (our OpenGL view) that we designed the game for a certain screen size (the iPad retina screen) and, once again, we want our game screen to resize to match the screen on the device (ResolutionPolicy::EXACT_FIT). Then we determine where to load our images from, based on the device's screen size. We have art for iPad retina, then for regular iPad, which is shared by iPhone retina, and for the regular iPhone. We end by setting the scale factor based on the designed target.

Adding background music

Still inside AppDelegate.cpp, we load the sound files we'll use in the game, including background.mp3 (courtesy of Kevin MacLeod from incompetech.com), which we load through the command:

auto audioEngine = SimpleAudioEngine::getInstance();
audioEngine->preloadBackgroundMusic(fileUtils->fullPathForFilename("background.mp3").c_str());

We end by setting the effects' volume down a tad:

//lower playback volume for effects
audioEngine->setEffectsVolume(0.4f);

For the background music volume, you must use setBackgroundMusicVolume. If you create some sort of volume control in your game, these are the calls you would make to adjust the volume based on the user's preference.

Initializing the game

Now back to GameLayer.cpp. If you take a look inside our init method, you will see that the game initializes by calling three methods: createGameScreen, createPools, and createActions. We'll create all our screen elements inside the first method, and then create object pools so we don't instantiate any sprites inside the main loop; and we'll create all the main actions used in our game inside the createActions method. And as soon as the game initializes, we start playing the background music, with its "should loop" parameter set to true:

SimpleAudioEngine::getInstance()->playBackgroundMusic("background.mp3", true);

We once again store the screen size for future reference, and we'll use a _running Boolean for game states. If you run the game now, you should only see the background image:

Using sprite sheets in Cocos2d-x

A sprite sheet is a way to group multiple images together in one image file. In order to texture a sprite with one of these images, you must have the information of where in the sprite sheet that particular image is found (its rectangle). Sprite sheets are often organized in two files: the image one and a data file that describes where in the image you can find the individual textures. I used TexturePacker to create these files for the game. You can find them inside the ipad, ipadhd, and iphone folders inside Resources. There is a sprite_sheet.png file for the image and a sprite_sheet.plist file that describes the individual frames inside the image.
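To give you an idea of what that data file contains, here is a rough, abbreviated sketch of a TexturePacker-style .plist entry (the pixel values are made up purely for illustration; open the actual sprite_sheet.plist in Xcode to see the real values and the full set of keys):

<key>frames</key>
<dict>
    <key>meteor.png</key>
    <dict>
        <key>frame</key>
        <string>{{2,2},{150,150}}</string>
        <key>offset</key>
        <string>{0,0}</string>
        <key>rotated</key>
        <false/>
        <key>sourceSize</key>
        <string>{150,150}</string>
    </dict>
</dict>

Each entry maps a frame name to the rectangle where that texture lives inside sprite_sheet.png, which is exactly the information SpriteFrameCache loads for us.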
This is what the sprite_sheet.png file looks like:

Batch drawing sprites

In Cocos2d-x, sprite sheets can be used in conjunction with a specialized node, called SpriteBatchNode. This node can be used whenever you wish to display multiple sprites that share the same source image inside the same node. So you could have multiple instances of a Sprite class that use a bullet.png texture, for instance. And if the source image is a sprite sheet, you can have multiple instances of sprites displaying as many different textures as you can pack inside your sprite sheet. With SpriteBatchNode, you can substantially reduce the number of calls during the rendering stage of your game, which will help when targeting less powerful systems, though not noticeably on more modern devices. Let me show you how to create a SpriteBatchNode.

Time for action – creating SpriteBatchNode

Let's begin implementing the createGameScreen method in GameLayer.cpp. Just below the lines that add the bg sprite, we instantiate our batch node:

void GameLayer::createGameScreen() {
   //add bg
   auto bg = Sprite::create("bg.png");
   ...
   SpriteFrameCache::getInstance()->addSpriteFramesWithFile("sprite_sheet.plist");
   _gameBatchNode = SpriteBatchNode::create("sprite_sheet.png");
   this->addChild(_gameBatchNode);

In order to create the batch node from a sprite sheet, we first load all the frame information described by the sprite_sheet.plist file into SpriteFrameCache. Then we create the batch node with the sprite_sheet.png file, which is the source texture shared by all sprites added to this batch node. (The background image is not part of the sprite sheet, so it's added separately before we add _gameBatchNode to GameLayer.) Now we can start putting stuff inside _gameBatchNode. First, the city:

for (int i = 0; i < 2; i++) {
   auto sprite = Sprite::createWithSpriteFrameName("city_dark.png");
   sprite->setAnchorPoint(Vec2(0.5,0));
   sprite->setPosition(Vec2(_screenSize.width * (0.25f + i * 0.5f), 0));
   _gameBatchNode->addChild(sprite, kMiddleground);

   sprite = Sprite::createWithSpriteFrameName("city_light.png");
   sprite->setAnchorPoint(Vec2(0.5,0));
   sprite->setPosition(Vec2(_screenSize.width * (0.25f + i * 0.5f), _screenSize.height * 0.1f));
   _gameBatchNode->addChild(sprite, kBackground);
}

Then the trees:

//add trees
for (int i = 0; i < 3; i++) {
   auto sprite = Sprite::createWithSpriteFrameName("trees.png");
   sprite->setAnchorPoint(Vec2(0.5f, 0.0f));
   sprite->setPosition(Vec2(_screenSize.width * (0.2f + i * 0.3f), 0));
   _gameBatchNode->addChild(sprite, kForeground);
}

Notice that here we create sprites by passing a sprite frame name. The IDs for these frame names were loaded into SpriteFrameCache through our sprite_sheet.plist file. The screen so far is made up of two instances of city_dark.png tiling at the bottom of the screen, and two instances of city_light.png also tiling. One needs to appear on top of the other, and for that we use the enumerated values declared in GameLayer.h:

enum {
   kBackground,
   kMiddleground,
   kForeground
};

We use the addChild(Node, zOrder) method to layer our sprites on top of each other, using different values for their z order. So, for example, when we later add three sprites showing the trees.png sprite frame, we add them on top of all previous sprites using the highest value for z that we find in the enumerated list, which is kForeground. Why go through the trouble of tiling the images and not using one large image instead, or combining some of them with the background image?
Because I wanted to include the greatest number of images possible inside the one sprite sheet, and have that sprite sheet be as small as possible, to illustrate all the clever ways you can use and optimize sprite sheets. This is not necessary in this particular game.

What just happened? We began creating the initial screen for our game. We are using a SpriteBatchNode to contain all the sprites that use images from our sprite sheet. So SpriteBatchNode behaves as any node does—as a container. And we can layer individual sprites inside the batch node by manipulating their z order.

Bitmap fonts in Cocos2d-x

The Cocos2d-x Label class has a static create method that uses bitmap images for the characters. The bitmap image we are using here was created with the program Glyph Designer, and in essence, it works just as a sprite sheet does. As a matter of fact, Label extends SpriteBatchNode, so it behaves just like a batch node. You have images for all the individual characters you'll need packed inside a PNG file (font.png), and then a data file (font.fnt) describing where each character is. The following screenshot shows what the font sprite sheet looks like for our game:

The difference between Label and a regular SpriteBatchNode class is that the data file also feeds the Label object information on how to write with this font. In other words, how to space out the characters and lines correctly. The Label objects we are using in the game are instantiated with the name of the data file and their initial string value:

_scoreDisplay = Label::createWithBMFont("font.fnt", "0");

And the value for the label is changed through the setString method:

_scoreDisplay->setString("1000");

Just as with every other image in the game, we also have different versions of font.fnt and font.png in our Resources folders, one for each screen definition. FileUtils will once again do the heavy lifting of finding the correct file for the correct screen. So now let's create the labels for our game.

Time for action – creating bitmap font labels

Creating a bitmap font is somewhat similar to creating a batch node. Continuing with our createGameScreen method, add the following lines for the score label:

_scoreDisplay = Label::createWithBMFont("font.fnt", "0");
_scoreDisplay->setAnchorPoint(Vec2(1,0.5));
_scoreDisplay->setPosition(Vec2(_screenSize.width * 0.8f, _screenSize.height * 0.94f));
this->addChild(_scoreDisplay);

And then add a label to display the energy level, and set its horizontal alignment to Right:

_energyDisplay = Label::createWithBMFont("font.fnt", "100%", TextHAlignment::RIGHT);
_energyDisplay->setPosition(Vec2(_screenSize.width * 0.3f, _screenSize.height * 0.94f));
this->addChild(_energyDisplay);

Add the following lines for an icon that appears next to the _energyDisplay label:

auto icon = Sprite::createWithSpriteFrameName("health_icon.png");
icon->setPosition(Vec2(_screenSize.width * 0.15f, _screenSize.height * 0.94f));
_gameBatchNode->addChild(icon, kBackground);

What just happened? We just created our first bitmap font object in Cocos2d-x. Now let's finish creating our game's sprites.

Time for action – adding the final screen sprites

The last sprites we need to create are the clouds, the bomb and shockwave, and our game state messages. Back in the createGameScreen method, add the clouds to the screen:

for (int i = 0; i < 4; i++) {
   float cloud_y = i % 2 == 0 ?
      _screenSize.height * 0.4f : _screenSize.height * 0.5f;
   auto cloud = Sprite::createWithSpriteFrameName("cloud.png");
   cloud->setPosition(Vec2(_screenSize.width * 0.1f + i * _screenSize.width * 0.3f, cloud_y));
   _gameBatchNode->addChild(cloud, kBackground);
   _clouds.pushBack(cloud);
}

Create the _bomb sprite, which the player will grow by tapping the screen:

_bomb = Sprite::createWithSpriteFrameName("bomb.png");
_bomb->getTexture()->generateMipmap();
_bomb->setVisible(false);

auto size = _bomb->getContentSize();

//add sparkle inside bomb sprite
auto sparkle = Sprite::createWithSpriteFrameName("sparkle.png");
sparkle->setPosition(Vec2(size.width * 0.72f, size.height * 0.72f));
_bomb->addChild(sparkle, kMiddleground, kSpriteSparkle);

//add halo inside bomb sprite
auto halo = Sprite::createWithSpriteFrameName("halo.png");
halo->setPosition(Vec2(size.width * 0.4f, size.height * 0.4f));
_bomb->addChild(halo, kMiddleground, kSpriteHalo);
_gameBatchNode->addChild(_bomb, kForeground);

Then create the _shockWave sprite that appears after the _bomb goes off:

_shockWave = Sprite::createWithSpriteFrameName("shockwave.png");
_shockWave->getTexture()->generateMipmap();
_shockWave->setVisible(false);
_gameBatchNode->addChild(_shockWave);

Finally, add the two messages that appear on the screen, one for our intro state and one for our gameover state:

_introMessage = Sprite::createWithSpriteFrameName("logo.png");
_introMessage->setPosition(Vec2(_screenSize.width * 0.5f, _screenSize.height * 0.6f));
_introMessage->setVisible(true);
this->addChild(_introMessage, kForeground);

_gameOverMessage = Sprite::createWithSpriteFrameName("gameover.png");
_gameOverMessage->setPosition(Vec2(_screenSize.width * 0.5f, _screenSize.height * 0.65f));
_gameOverMessage->setVisible(false);
this->addChild(_gameOverMessage, kForeground);

What just happened? There is a lot of new information regarding sprites in the previous code, so let's go over it carefully:

We started by adding the clouds. We put the sprites inside a vector so we can move the clouds later. Notice that they are also part of our batch node.

Next comes the bomb sprite and our first new call:

_bomb->getTexture()->generateMipmap();

With this we are telling the framework to create antialiased copies of this texture in diminishing sizes (mipmaps), since we are going to scale it down later. This is optional, of course; sprites can be resized without first generating mipmaps, but if you notice a loss of quality in your scaled sprites, you can fix that by creating mipmaps for their texture. The texture must have size values in so-called POT (power of 2: 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, and so on). Textures in OpenGL must always be sized this way; when they are not, Cocos2d-x will do one of two things: it will either resize the texture in memory, adding transparent pixels until the image reaches a POT size, or stop the execution on an assert. With textures used for mipmaps, the framework will stop execution for non-POT textures.

I add the sparkle and the halo sprites as children of the _bomb sprite. This will use the container characteristic of nodes to our advantage: when I grow the bomb, all its children will grow with it. Notice too that I use a third parameter to addChild for halo and sparkle:

_bomb->addChild(halo, kMiddleground, kSpriteHalo);

This third parameter is an integer tag from yet another enumerated list declared in GameLayer.h.
I can use this tag to retrieve a particular child from a sprite as follows:

auto halo = (Sprite *) _bomb->getChildByTag(kSpriteHalo);

We now have our game screen in place:

Next come object pools.

Time for action – creating our object pools

The pools are just vectors of objects. And here are the steps to create them:

Inside the createPools method, we first create a pool for meteors:

void GameLayer::createPools() {
   int i;
   _meteorPoolIndex = 0;
   for (i = 0; i < 50; i++) {
      auto sprite = Sprite::createWithSpriteFrameName("meteor.png");
      sprite->setVisible(false);
      _gameBatchNode->addChild(sprite, kMiddleground, kSpriteMeteor);
      _meteorPool.pushBack(sprite);
   }

Then we create an object pool for health packs:

   _healthPoolIndex = 0;
   for (i = 0; i < 20; i++) {
      auto sprite = Sprite::createWithSpriteFrameName("health.png");
      sprite->setVisible(false);
      sprite->setAnchorPoint(Vec2(0.5f, 0.8f));
      _gameBatchNode->addChild(sprite, kMiddleground, kSpriteHealth);
      _healthPool.pushBack(sprite);
   }

We'll use the corresponding pool index to retrieve objects from the vectors as the game progresses.

What just happened? We now have a vector of invisible meteor sprites and a vector of invisible health sprites. We'll use their respective pool indices to retrieve these from the vectors as needed, as you'll see in a moment. But first we need to take care of actions and animations. With object pools, we reduce the number of instantiations during the main loop, and we never destroy anything that can be reused. But if you need to remove a child from a node, use ->removeChild, or ->removeChildByTag if a tag is present.

Actions in a nutshell

If you remember, a node stores information about its position, scale, rotation, visibility, and opacity. And in Cocos2d-x, there is an Action class to change each one of these values over time, in effect animating these transformations. Actions are usually created with a static create method. The majority of these actions are time-based, so usually the first parameter you need to pass an action is the time length for the action. So, for instance:

auto fadeout = FadeOut::create(1.0f);

This creates a fadeout action that will take one second to complete. You can run it on a sprite, or node, as follows:

mySprite->runAction(fadeout);

Cocos2d-x has an incredibly flexible system that allows us to create any combination of actions and transformations to achieve any effect we desire. You may, for instance, choose to create an action sequence (Sequence) that contains more than one action; or you can apply easing effects (EaseIn, EaseOut, and so on) to your actions. You can choose to repeat an action a certain number of times (Repeat) or forever (RepeatForever); and you can add callbacks to functions you want called once an action is completed (usually inside a Sequence action).

Time for action – creating actions with Cocos2d-x

Creating actions with Cocos2d-x is a very simple process:

Inside our createActions method, we will instantiate the actions we can use repeatedly in our game. Let's create our first actions:

void GameLayer::createActions() {
   //swing action for health drops
   auto easeSwing = Sequence::create(
      EaseInOut::create(RotateTo::create(1.2f, -10), 2),
      EaseInOut::create(RotateTo::create(1.2f, 10), 2),
      nullptr);//mark the end of a sequence with a nullptr
   _swingHealth = RepeatForever::create( (ActionInterval *) easeSwing );
   _swingHealth->retain();

Actions can be combined in many different forms.
Here, the retained _swingHealth action is a RepeatForever action of a Sequence that will rotate the health sprite first one way, then the other, with EaseInOut wrapping the RotateTo action. RotateTo takes 1.2 seconds to rotate the sprite first to -10 degrees and then to 10. And the easing has a value of 2, which I suggest you experiment with to get a sense of what it means visually. Next we add three more actions:

//action sequence for shockwave: fade out, callback when done
_shockwaveSequence = Sequence::create(
   FadeOut::create(1.0f),
   CallFunc::create(std::bind(&GameLayer::shockwaveDone, this)),
   nullptr);
_shockwaveSequence->retain();

//action to grow bomb
_growBomb = ScaleTo::create(6.0f, 1.0);
_growBomb->retain();

//action to rotate sprites
auto rotate = RotateBy::create(0.5f, -90);
_rotateSprite = RepeatForever::create( rotate );
_rotateSprite->retain();

First, another Sequence. This will fade out the sprite and call the shockwaveDone function, which is already implemented in the class and turns the _shockWave sprite invisible when called. The last one is a RepeatForever action of a RotateBy action. In half a second, the sprite running this action will rotate -90 degrees, and will do that again and again.

What just happened? You just got your first glimpse of how to create actions in Cocos2d-x and how the framework allows for all sorts of combinations to accomplish any effect. It may be hard at first to read through a Sequence action and understand what's happening, but the logic is easy to follow once you break it down into its individual parts. But we are not done with the createActions method yet. Next come sprite animations.

Animating a sprite in Cocos2d-x

The key thing to remember is that an animation is just another type of action, one that changes the texture used by a sprite over a period of time. In order to create an animation action, you need to first create an Animation object. This object will store all the information regarding the different sprite frames you wish to use in the animation, the length of the animation in seconds, and whether or not it loops. With this Animation object, you then create an Animate action. Let's take a look.

Time for action – creating animations

Animations are a specialized type of action that require a few extra steps:

Inside the same createActions method, add the lines for the two animations we have in the game. First, we start with the animation that shows an explosion when a meteor reaches the city. We begin by loading the frames into an Animation object:

auto animation = Animation::create();
int i;
for(i = 1; i <= 10; i++) {
   auto name = String::createWithFormat("boom%i.png", i);
   auto frame = SpriteFrameCache::getInstance()->getSpriteFrameByName(name->getCString());
   animation->addSpriteFrame(frame);
}

Then we use the Animation object inside an Animate action:

animation->setDelayPerUnit(1 / 10.0f);
animation->setRestoreOriginalFrame(true);
_groundHit = Sequence::create(
   MoveBy::create(0, Vec2(0, _screenSize.height * 0.12f)),
   Animate::create(animation),
   CallFuncN::create(CC_CALLBACK_1(GameLayer::animationDone, this)),
   nullptr);
_groundHit->retain();

The same steps are repeated to create the other explosion animation, used when the player hits a meteor or a health pack.
animation = Animation::create();
for(int i = 1; i <= 7; i++) {
   auto name = String::createWithFormat("explosion_small%i.png", i);
   auto frame = SpriteFrameCache::getInstance()->getSpriteFrameByName(name->getCString());
   animation->addSpriteFrame(frame);
}

animation->setDelayPerUnit(0.5 / 7.0f);
animation->setRestoreOriginalFrame(true);
_explosion = Sequence::create(
   Animate::create(animation),
   CallFuncN::create(CC_CALLBACK_1(GameLayer::animationDone, this)),
   nullptr);
_explosion->retain();

What just happened? We created two instances of a very special kind of action in Cocos2d-x: Animate. Here is what we did:

First, we created an Animation object. This object holds the references to all the textures used in the animation. The frames were named in such a way that they could easily be concatenated inside a loop (boom1, boom2, boom3, and so on). There are 10 frames for the first animation and seven for the second.

The textures (or frames) are SpriteFrame objects we grab from SpriteFrameCache, which, as you remember, contains all the information from the sprite_sheet.plist data file. So the frames are in our sprite sheet.

Then, when all frames are in place, we determine the delay of each frame by dividing the total number of seconds we want the animation to last by the total number of frames.

The setRestoreOriginalFrame method is important here. If we set setRestoreOriginalFrame to true, then the sprite will revert to its original appearance once the animation is over. For example, if I have an explosion animation that will run on a meteor sprite, then by the end of the explosion animation, the sprite will revert to displaying the meteor texture.

Time for the actual action. Animate receives the Animation object as its parameter. (In the first animation, we shift the position of the sprite just before the explosion appears, so there is an extra MoveBy method.) And in both instances, I make a call to an animationDone callback already implemented in the class. It makes the calling sprite invisible:

void GameLayer::animationDone (Node* pSender) {
   pSender->setVisible(false);
}

We could have used the same method for both callbacks (animationDone and shockwaveDone), as they accomplish the same thing. But I wanted to show you both a callback that receives the calling node as an argument and one that does not. Respectively, these are CallFuncN and CallFunc, and they were used inside the action sequences we just created.

Time to make our game tick!

Okay, we have our main elements in place and are ready to add the final bit of logic to run the game. But how will everything work? We will use a system of countdowns to add new meteors and new health packs, as well as a countdown that will incrementally make the game harder to play. On touch, the player will start the game if the game is not running, and will also add bombs and explode them during gameplay. An explosion creates a shockwave. On update, we will check for collisions between our _shockWave sprite (if visible) and all our falling objects. And that's it. Cocos2d-x will take care of all the rest through our created actions and callbacks! So let's implement our touch events first.

Time for action – handling touches

Time to bring the player to our party. Time to implement our onTouchBegan method.
We'll begin by handling the two game states, intro and game over:

bool GameLayer::onTouchBegan (Touch * touch, Event * event) {
   //if game not running, we are seeing either intro or gameover
   if (!_running) {
      //if intro, hide intro message
      if (_introMessage->isVisible()) {
         _introMessage->setVisible(false);
      //if game over, hide game over message
      } else if (_gameOverMessage->isVisible()) {
         SimpleAudioEngine::getInstance()->stopAllEffects();
         _gameOverMessage->setVisible(false);
      }
      this->resetGame();
      return true;
   }

Here we check to see if the game is not running. If not, we check to see if any of our messages are visible. If _introMessage is visible, we hide it. If _gameOverMessage is visible, we stop all current sound effects and hide the message as well. Then we call a method called resetGame, which will reset all the game data (energy, score, and countdowns) to their initial values, and set _running to true. Next we handle the touch itself. Since onTouchBegan already receives the single touch we care about, we can use it directly:

if (touch) {
   //if bomb already growing...
   if (_bomb->isVisible()) {
      //stop all actions on bomb, halo and sparkle
      _bomb->stopAllActions();
      auto child = (Sprite *) _bomb->getChildByTag(kSpriteHalo);
      child->stopAllActions();
      child = (Sprite *) _bomb->getChildByTag(kSpriteSparkle);
      child->stopAllActions();

      //if bomb is the right size, then create shockwave
      if (_bomb->getScale() > 0.3f) {
         _shockWave->setScale(0.1f);
         _shockWave->setPosition(_bomb->getPosition());
         _shockWave->setVisible(true);
         _shockWave->runAction(ScaleTo::create(0.5f, _bomb->getScale() * 2.0f));
         _shockWave->runAction(_shockwaveSequence->clone());
         SimpleAudioEngine::getInstance()->playEffect("bombRelease.wav");
      } else {
         SimpleAudioEngine::getInstance()->playEffect("bombFail.wav");
      }
      _bomb->setVisible(false);
      //reset hits with shockwave, so we can count combo hits
      _shockwaveHits = 0;
   //if no bomb currently on screen, create one
   } else {
      Point tap = touch->getLocation();
      _bomb->stopAllActions();
      _bomb->setScale(0.1f);
      _bomb->setPosition(tap);
      _bomb->setVisible(true);
      _bomb->setOpacity(50);
      _bomb->runAction(_growBomb->clone());

      auto child = (Sprite *) _bomb->getChildByTag(kSpriteHalo);
      child->runAction(_rotateSprite->clone());
      child = (Sprite *) _bomb->getChildByTag(kSpriteSparkle);
      child->runAction(_rotateSprite->clone());
   }
}

If _bomb is visible, it means it's already growing on the screen. So on touch, we call stopAllActions() on the bomb, and we call stopAllActions() on its children, which we retrieve through our tags:

child = (Sprite *) _bomb->getChildByTag(kSpriteHalo);
child->stopAllActions();
child = (Sprite *) _bomb->getChildByTag(kSpriteSparkle);
child->stopAllActions();

If _bomb is the right size, we start our _shockWave. If it isn't, we play a bomb failure sound effect; there is no explosion and _shockWave is not made visible. If we have an explosion, then the _shockWave sprite is set to 10 percent of its scale. It's placed at the same spot as the bomb, and we run a couple of actions on it: we grow the _shockWave sprite to twice the scale the bomb was when it went off, and we run a copy of the _shockwaveSequence that we created earlier. Finally, if no _bomb is currently visible on screen, we create one, and we run clones of previously created actions on the _bomb sprite and its children.
When _bomb grows, its children grow. But when the children rotate, the bomb does not: a parent changes its children, but the children do not change their parent. What just happened? We just added part of the core logic of the game. It is with touches that the player creates and explodes bombs to stop meteors from reaching the city. Now we need to create our falling objects. But first, let's set up our countdowns and our game data. Time for action – starting and restarting the game Let's add the logic to start and restart the game. Let's write the implementation for resetGame: void GameLayer::resetGame(void) {    _score = 0;    _energy = 100;       //reset timers and "speeds"    _meteorInterval = 2.5;    _meteorTimer = _meteorInterval * 0.99f;    _meteorSpeed = 10;//in seconds to reach ground    _healthInterval = 20;    _healthTimer = 0;    _healthSpeed = 15;//in seconds to reach ground       _difficultyInterval = 60;    _difficultyTimer = 0;       _running = true;       //reset labels    _energyDisplay->setString(std::to_string((int) _energy) + "%");    _scoreDisplay->setString(std::to_string((int) _score)); } Next, add the implementation of stopGame: void GameLayer::stopGame() {       _running = false;       //stop all actions currently running    int i;    int count = (int) _fallingObjects.size();       for (i = count-1; i >= 0; i--) {        auto sprite = _fallingObjects.at(i);        sprite->stopAllActions();        sprite->setVisible(false);        _fallingObjects.erase(i);    }    if (_bomb->isVisible()) {        _bomb->stopAllActions();        _bomb->setVisible(false);        auto child = _bomb->getChildByTag(kSpriteHalo);        child->stopAllActions();        child = _bomb->getChildByTag(kSpriteSparkle);        child->stopAllActions();    }    if (_shockWave->isVisible()) {        _shockWave->stopAllActions();        _shockWave->setVisible(false);    }    if (_ufo->isVisible()) {        _ufo->stopAllActions();        _ufo->setVisible(false);        auto ray = _ufo->getChildByTag(kSpriteRay);       ray->stopAllActions();        ray->setVisible(false);    } } What just happened? With these methods we control gameplay. We start the game with default values through resetGame(), and we stop all actions with stopGame(). Already implemented in the class is the method that makes the game more difficult as time progresses. If you take a look at the method (increaseDifficulty) you will see that it reduces the interval between meteors and reduces the time it takes for meteors to reach the ground. All we need now is the update method to run the countdowns and check for collisions. Time for action – updating the game We already have the code that updates the countdowns inside the update. If it's time to add a meteor or a health pack we do it. If it's time to make the game more difficult to play, we do that too. It is possible to use an action for these timers: a Sequence action with a Delay action object and a callback. But there are advantages to using these countdowns. It's easier to reset them and to change them, and we can take them right into our main loop. So it's time to add our main loop: What we need to do is check for collisions. 
So add the following code:

if (_shockWave->isVisible()) {
   count = (int) _fallingObjects.size();
   for (i = count-1; i >= 0; i--) {
      auto sprite = _fallingObjects.at(i);
      diffx = _shockWave->getPositionX() - sprite->getPositionX();
      diffy = _shockWave->getPositionY() - sprite->getPositionY();
      if (pow(diffx, 2) + pow(diffy, 2) <= pow(_shockWave->getBoundingBox().size.width * 0.5f, 2)) {
         sprite->stopAllActions();
         sprite->runAction( _explosion->clone());
         //play sound
         SimpleAudioEngine::getInstance()->playEffect("boom.wav");
         if (sprite->getTag() == kSpriteMeteor) {
            _shockwaveHits++;
            _score += _shockwaveHits * 13 + _shockwaveHits * 2;
         }
         _fallingObjects.erase(i);
      }
   }
   _scoreDisplay->setString(std::to_string(_score));
}

If _shockWave is visible, we check the distance between it and each sprite in the _fallingObjects vector. If we hit any meteors, we increase the value of the _shockwaveHits property so we can award the player for multiple hits. Next we move the clouds:

//move clouds
for (auto sprite : _clouds) {
   sprite->setPositionX(sprite->getPositionX() + dt * 20);
   if (sprite->getPositionX() > _screenSize.width + sprite->getBoundingBox().size.width * 0.5f)
      sprite->setPositionX(-sprite->getBoundingBox().size.width * 0.5f);
}

I chose not to use a MoveTo action for the clouds to show you the amount of code that can be replaced by a simple action. If not for Cocos2d-x actions, we would have to implement logic to move, rotate, swing, scale, and explode all our sprites! And finally:

if (_bomb->isVisible()) {
   if (_bomb->getScale() > 0.3f) {
      if (_bomb->getOpacity() != 255)
         _bomb->setOpacity(255);
   }
}

We give the player an extra visual cue as to when a bomb is ready to explode by changing its opacity.

What just happened? The main loop is pretty straightforward when you don't have to worry about updating individual sprites, as our actions take care of that for us. We pretty much only need to run collision checks between our sprites, and to determine when it's time to throw something new at the player. So now the only thing left to do is grab the meteors and health packs from the pools when their timers are up. So let's get right to it.
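Before we do, here is a quick aside on the action-based alternative to these timers that was mentioned earlier. A minimal sketch, assuming the Cocos2d-x 3.x action API (this is our own illustration, not part of the book's project), might look like this:

//hypothetical alternative: replace the _meteorTimer countdown
//with a repeating Delay + callback action run on the layer
auto spawnDelay = DelayTime::create(_meteorInterval);
auto spawnCall = CallFunc::create([this]() {
    this->resetMeteor(); //grab the next pooled meteor, as shown below
});
auto spawnSequence = Sequence::create(spawnDelay, spawnCall, nullptr);
this->runAction(RepeatForever::create(spawnSequence));

The drawback, as the author points out, is that increaseDifficulty shortens _meteorInterval on the fly, which would force you to stop and rebuild this action each time, whereas a plain countdown variable can simply be reassigned inside update.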
Time for action – retrieving objects from the pool

We just need to use the correct index to retrieve the objects from their respective vectors:

To retrieve meteor sprites, we'll use the resetMeteor method:

void GameLayer::resetMeteor(void) {
   //if too many objects on screen, return
   if (_fallingObjects.size() > 30) return;

   auto meteor = _meteorPool.at(_meteorPoolIndex);
   _meteorPoolIndex++;
   if (_meteorPoolIndex == _meteorPool.size())
      _meteorPoolIndex = 0;

   int meteor_x = rand() % (int) (_screenSize.width * 0.8f) + _screenSize.width * 0.1f;
   int meteor_target_x = rand() % (int) (_screenSize.width * 0.8f) + _screenSize.width * 0.1f;

   meteor->stopAllActions();
   meteor->setPosition(Vec2(meteor_x, _screenSize.height + meteor->getBoundingBox().size.height * 0.5));

   //create action
   auto rotate = RotateBy::create(0.5f, -90);
   auto repeatRotate = RepeatForever::create( rotate );
   auto sequence = Sequence::create(
      MoveTo::create(_meteorSpeed, Vec2(meteor_target_x, _screenSize.height * 0.15f)),
      CallFunc::create(std::bind(&GameLayer::fallingObjectDone, this, meteor)),
      nullptr);

   meteor->setVisible( true );
   meteor->runAction(repeatRotate);
   meteor->runAction(sequence);
   _fallingObjects.pushBack(meteor);
}

We grab the next available meteor from the pool, then we pick random start and end x values for its MoveTo action. The meteor starts at the top of the screen and will move to the bottom, toward the city, but the x value is randomly picked each time. We rotate the meteor inside a RepeatForever action, and we use Sequence to move the sprite to its target position and then call back fallingObjectDone when the meteor has reached its target. We finish by adding the new meteor we retrieved from the pool to the _fallingObjects vector so we can check collisions with it. The method to retrieve the health sprites (resetHealth) is pretty much the same, except that the _swingHealth action is used instead of the rotation. You'll find that method already implemented in GameLayer.cpp.

What just happened? So in resetGame we set the timers, and we update them in the update method. We use these timers to add meteors and health packs to the screen by grabbing the next available one from their respective pool, and then we proceed to run collisions between an exploding bomb and these falling objects. Notice that in both resetMeteor and resetHealth we don't add new sprites if too many are on screen already:

if (_fallingObjects.size() > 30) return;

This way the game does not get ridiculously hard, and we never run out of unused objects in our pools. And the very last bit of logic in our game is our fallingObjectDone callback, called when either a meteor or a health pack has reached the ground, at which point it awards or punishes the player for letting sprites through. When you take a look at that method inside GameLayer.cpp, you will notice how we use ->getTag() to quickly ascertain which type of sprite we are dealing with (the one calling the method):

if (pSender->getTag() == kSpriteMeteor) {

If it's a meteor, we decrease the player's energy, play a sound effect, and run the explosion animation (an autorelease copy of the _groundHit action we retained earlier, so we don't need to repeat all that logic every time we need to run this action). If the item is a health pack, we increase the energy or give the player some points, play a nice sound effect, and hide the sprite.

Play the game!

We've been coding like mad, and it's finally time to run the game.
But first, don't forget to release all the items we retained. In GameLayer.cpp, add our destructor method:

GameLayer::~GameLayer () {
   //release all retained actions
   CC_SAFE_RELEASE(_growBomb);
   CC_SAFE_RELEASE(_rotateSprite);
   CC_SAFE_RELEASE(_shockwaveSequence);
   CC_SAFE_RELEASE(_swingHealth);
   CC_SAFE_RELEASE(_groundHit);
   CC_SAFE_RELEASE(_explosion);
   CC_SAFE_RELEASE(_ufoAnimation);
   CC_SAFE_RELEASE(_blinkRay);

   _clouds.clear();
   _meteorPool.clear();
   _healthPool.clear();
   _fallingObjects.clear();
}

The actual game screen will now look something like this:

Now, let's take this to Android.

Time for action – running the game on Android

Follow these steps to deploy the game to Android:

This time, there is no need to alter the manifest, because the default settings are the ones we want. So, navigate to proj.android and then to the jni folder, and open the Android.mk file in a text editor. Edit the lines in LOCAL_SRC_FILES to read as follows:

LOCAL_SRC_FILES := hellocpp/main.cpp \
                   ../../Classes/AppDelegate.cpp \
                   ../../Classes/GameLayer.cpp

Follow the instructions from the HelloWorld and AirHockey examples to import the game into Eclipse. Save it and run your application. This time, you can try out different screen sizes if you have the devices.

What just happened? You just ran a universal app on Android. And nothing could have been simpler.

Summary

In my opinion, after nodes and all their derived objects, actions are the second best thing about Cocos2d-x. They are time savers and can quickly spice things up in any project with professional-looking animations. And I hope that with the examples found in this article, you will be able to create any action you need with Cocos2d-x.

Resources for Article: Further resources on this subject: Animations in Cocos2d-x [article] Moving the Space Pod Using Touch [article] Cocos2d-x: Installation [article]

SpriteKit Framework and Physics Simulation

Packt
24 Mar 2015
15 min read
In this article by Bhanu Birani, author of the book iOS Game Programming Cookbook, you will learn about the SpriteKit game framework and about physics simulation. (For more resources related to this topic, see here.)

Getting started with the SpriteKit game framework

With the release of iOS 7.0, Apple introduced its own native 2D game framework, called SpriteKit. SpriteKit is a great 2D game engine, with support for sprites, animations, filters, and masking, and, most importantly, a physics engine to provide real-world simulation for the game. Apple provides a sample game called Adventure to help you get started with SpriteKit. The download URL for this example project is http://bit.ly/Rqaeda. This sample project provides a glimpse of the capabilities of this framework. However, the project is complicated to understand, and for learning you just want to make something simple. To gain a deeper understanding of SpriteKit-based games, we will be building a bunch of mini games in this book.

Getting ready

To get started with iOS game development, you have the following prerequisites for SpriteKit:

- You will need Xcode 5.x
- The targeted device family should be iOS 7.0+
- You should be running OS X 10.8.x or later

If all the above prerequisites are fulfilled, then you are ready to go with iOS game development. So let's start with game development using the iOS native game framework.

How to do it...

Let's start building the AntKilling game. Perform the following steps to create your new SpriteKit project: Start Xcode. Navigate to File | New | Project.... Then from the prompt window, navigate to iOS | Application | SpriteKit Game and click on Next. Fill in all the project details in the prompt window, providing AntKilling as the project name, along with your Organization Name, device as iPhone, and Class Prefix as AK. Click on Next. Select a location on the drive to save the project and click on Create. Then build the sample project to check its output. Once you build and run the project with the play button, you should see the following on your device:

How it works...

The following are the observations of the starter project: As you have seen, the sample SpriteKit project displays a label with a background color. SpriteKit works on the concept of scenes, which can be understood as the layers or screens of the game. There can be multiple scenes working at the same time; for example, there can be a gameplay scene, a HUD scene, and a score scene running at the same time in the game. Now we can look into the project in more detail. The following are the observations: In the main directory, you already have one scene created by default, called AKMyScene. Now click on AKMyScene.m to explore the code that adds the label to the screen. You should see something similar to the following screenshot: Now we will be updating this file with our code to create our AntKilling game in the next sections. We have to fulfill a few prerequisites to get started with the code, such as locking the orientation to landscape, as we want a landscape-orientation game. To change the orientation of the game, navigate to AntKilling project settings | TARGETS | General. You should see something similar to the following screenshot: Now in the General tab, uncheck Portrait from the device orientations so that the final settings look similar to the following screenshot: Now build and run the project. You should be able to see the app running in landscape orientation.
The bottom-right corner of the screen shows the number of nodes along with the frame rate.

Introduction to physics simulation

We all like games that have realistic effects and actions, so in this article we will learn ways to make our games more realistic. Have you ever wondered how to give a realistic effect to game objects? It is physics that provides a realistic effect to games and their characters, and in this article we will learn how to use physics in our games.

While developing a game using SpriteKit, you will need to interact with the world of your game frequently. The world is the main object in the game that holds all the other game objects and the physics simulation. We can also update the gravity of the game world according to our needs. The default gravity is -9.8 along the y axis, which matches the earth's gravity and makes all bodies fall down to the ground as soon as they are created. More about SpriteKit can be explored using the following link: https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Physics/Physics.html

Getting ready

The first task is to create the world and then add bodies to it, which can interact according to the principles of physics. You can create game objects in the form of sprites and associate physics bodies with them. You can also set various properties of an object to specify its behavior.

How to do it...

In this section, we will learn about the basic components that are used to develop games. We will also learn how to set game configurations, including world settings such as gravity and boundaries.

The initial step is to apply gravity to the scene. Every scene has a physics world associated with it. We can update the gravity of the physics world in our scene using the following line of code:

    self.physicsWorld.gravity = CGVectorMake(0.0f, 0.0f);

Here we have set the gravity of the scene to 0, which means the bodies will not experience any force due to gravity and will float freely in the world.

In several games we also need to set a boundary for the game. Usually, the bounds of the view can serve as the bounds of our physics world. The following code will help us set up the boundary for our game, which will match the bounds of our game scene:

    // 1 Create a physics body that borders the screen
    SKPhysicsBody* gameBorderBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
    // 2 Set physicsBody of scene to gameBorderBody
    self.physicsBody = gameBorderBody;
    // 3 Set the friction of that physicsBody to 0
    self.physicsBody.friction = 0.0f;

In the first line of code we initialize an SKPhysicsBody object. This object is used to add physics simulation to any node. We create gameBorderBody as an edge loop with dimensions equal to the current scene's frame. Then we assign that physics body to the physicsBody property of our current scene (every node, including the scene itself, has a physicsBody property through which we can attach a physics body). After this we update physicsBody.friction. The friction property defines the friction of one physics body sliding against another; here we have set it to 0 in order to let objects move freely, without slowing down.

Every game object inherits from the SKSpriteNode class, which allows a physics body to be attached to the node.
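As a quick reference, SKPhysicsBody provides factory methods for several body shapes. The following sketch (with arbitrary sizes, assumed to run inside an SKScene method where self.frame is available) shows the most common ones:

    // Volume-based bodies: these have mass and respond to gravity, forces, and collisions.
    SKPhysicsBody* circleBody = [SKPhysicsBody bodyWithCircleOfRadius:20.0f];
    SKPhysicsBody* rectBody = [SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(40.0f, 20.0f)];

    // Edge-based body: no mass or volume, never moves; ideal for walls and boundaries.
    SKPhysicsBody* borderBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];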
Let us take an example and create a game object using the following code:

    // 1
    SKSpriteNode* gameObject = [SKSpriteNode spriteNodeWithImageNamed:@"object.png"];
    gameObject.name = @"game_object";
    gameObject.position = CGPointMake(self.frame.size.width/3, self.frame.size.height/3);
    [self addChild:gameObject];

    // 2
    gameObject.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:gameObject.frame.size.width/2];
    // 3
    gameObject.physicsBody.friction = 0.0f;

We are already familiar with the first few lines of code, wherein we create the sprite reference and then add it to the scene. In the next lines, we associate a physics body with that sprite: we initialize a circular physics body with a radius of half the sprite's width and attach it to the sprite object. Then we can update various other properties of the physics body, such as friction, restitution, linear damping, and so on.

The physics body also allows you to apply a force. To apply a force you need to provide the direction in which you want it to act:

    [gameObject.physicsBody applyForce:CGVectorMake(10.0f, -10.0f)];

In this code we apply a force that pushes the body toward the bottom-right corner of the world. To provide the direction we use CGVectorMake, which accepts the vector coordinates of the physics world. You can also apply an impulse instead of a force. An impulse can be defined as a force that acts for a specific interval of time and is equal to the change in linear momentum produced over that interval:

    [gameObject.physicsBody applyImpulse:CGVectorMake(10.0f, -10.0f)];

While creating games, we frequently use static objects. To create a rectangular static object we can use the following code:

    SKSpriteNode* box = [[SKSpriteNode alloc] initWithImageNamed:@"box.png"];
    box.name = @"box_object";
    box.position = CGPointMake(CGRectGetMidX(self.frame), box.frame.size.height * 0.6f);
    [self addChild:box];
    box.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:box.frame.size];
    box.physicsBody.friction = 0.4f;
    // make physicsBody static
    box.physicsBody.dynamic = NO;

All of this code is the same except for one special property: dynamic. By default this property is set to YES, which means that all physics bodies are dynamic by default; a body becomes static once this Boolean is set to NO. Static bodies do not react to any force or impulse. Simply put, dynamic physics bodies can move while static physics bodies cannot.
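The practical difference between the two calls is when you apply them. Here is a minimal sketch (assuming an SKScene subclass and the "game_object" name used above; the vector values are arbitrary): a force acts continuously, so it is typically applied every simulation step, while an impulse is a one-time kick:

    // A force acts continuously, so apply it every frame, e.g. in -update:.
    - (void)update:(NSTimeInterval)currentTime {
        SKNode* gameObject = [self childNodeWithName:@"game_object"];
        [gameObject.physicsBody applyForce:CGVectorMake(10.0f, 0.0f)]; // steady push to the right
    }

    // An impulse changes momentum instantly, so apply it once, e.g. when a touch begins.
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        SKNode* gameObject = [self childNodeWithName:@"game_object"];
        [gameObject.physicsBody applyImpulse:CGVectorMake(0.0f, 15.0f)]; // single upward kick
    }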
Integrating physics engine with games

From this section onwards, we will develop a mini game that has a dynamic moving body and a static body. The basic concept of the game is an infinitely bouncing ball with a moving paddle that is used to give direction to the ball.

Getting ready

To develop a mini game using the physics engine, start by creating a new project. Open Xcode, go to File | New | Project, and then navigate to iOS | Application | SpriteKit Game. In the pop-up screen, provide the Product Name as PhysicsSimulation, navigate to Devices | iPhone, and click on Next as shown in the following screenshot:

Click on Next and save the project on your hard drive. Once the project is saved, you should be able to see something similar to the following screenshot:

In the project settings page, just uncheck Portrait from the Device Orientation section, as we are supporting only landscape mode for this game.

Graphics and games cannot be separated for long; you will also need some graphics for this game. Download the graphics folder, then drag it and import it into the project. Make sure that Copy items into destination group's folder (if needed) is checked, and then click on the Finish button. It should look something similar to the following screenshot:

How to do it...

Now your project template is ready for a physics-based mini game. We need to update the game template project to add the game logic. Take the following steps to integrate the basic physics objects into the game:

1. Open the file GameScene.m. This class creates the scene that will be plugged into the game. Remove all the code from this class and just add the following function:

    -(id)initWithSize:(CGSize)size {
        if (self = [super initWithSize:size]) {
            SKSpriteNode* background = [SKSpriteNode spriteNodeWithImageNamed:@"bg.png"];
            background.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2);
            [self addChild:background];
        }
        return self;
    }

   This initWithSize: method creates a blank scene of the specified size. The code inside the init block adds the background image at the center of the screen in your game.

2. Now, when you compile and run the code, you will observe that the background image is not placed correctly on the scene. To resolve this, open GameViewController.m, remove all the code from the file, and add the following function:

    -(void)viewWillLayoutSubviews {
        [super viewWillLayoutSubviews];

        // Configure the view.
        SKView* skView = (SKView *)self.view;
        if (!skView.scene) {
            skView.showsFPS = YES;
            skView.showsNodeCount = YES;

            // Create and configure the scene.
            GameScene* scene = [GameScene sceneWithSize:skView.bounds.size];
            scene.scaleMode = SKSceneScaleModeAspectFill;

            // Present the scene.
            [skView presentScene:scene];
        }
    }

   We implement viewWillLayoutSubviews to ensure that the view hierarchy is properly laid out before the scene is presented. Presenting the scene earlier (for example, in viewDidLoad) does not work reliably because the final size of the view is not yet known at that time.

3. Now compile and run the app. You should be able to see the background image placed correctly. It will look something similar to the following screenshot:

4. Now that we have the background image in place, let us add gravity to the world. Open GameScene.m and add the following line of code at the end of the initWithSize: method:

    self.physicsWorld.gravity = CGVectorMake(0.0f, 0.0f);

   This line sets the gravity of the world to 0, which means there is no gravity. Now that we have removed gravity and objects can move around freely, it is important to create a boundary around the world to hold all the objects and prevent them from going off the screen. Add the following lines of code to create an invisible boundary around the screen that holds the physics objects:

    // 1 Create a physics body that borders the screen
    SKPhysicsBody* gameborderBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
    // 2 Set physicsBody of scene to borderBody
    self.physicsBody = gameborderBody;
    // 3 Set the friction of that physicsBody to 0
    self.physicsBody.friction = 0.0f;

   In the first line, we create an edge-based physics body with a screen-sized frame. This type of physics body does not have any mass or volume and remains unaffected by forces and impulses. We then associate the object with the physics body of the scene. In the last line we set the friction of the body to 0, for seamless interaction between objects and the boundary surface.
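Putting the pieces from steps 1 and 4 together, the initWithSize: method in GameScene.m might now look roughly like the following sketch (the asset name bg.png comes from the imported graphics folder):

    -(id)initWithSize:(CGSize)size {
        if (self = [super initWithSize:size]) {
            // Centered background image.
            SKSpriteNode* background = [SKSpriteNode spriteNodeWithImageNamed:@"bg.png"];
            background.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2);
            [self addChild:background];

            // No gravity: bodies keep whatever velocity they are given.
            self.physicsWorld.gravity = CGVectorMake(0.0f, 0.0f);

            // Frictionless edge loop so objects stay on screen and bounce cleanly.
            SKPhysicsBody* gameborderBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
            self.physicsBody = gameborderBody;
            self.physicsBody.friction = 0.0f;
        }
        return self;
    }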
The final file should look something like the following screenshot:

Now we have our surface ready to hold the physics world objects. Let us create a new physics world object using the following lines of code:

    // 1
    SKSpriteNode* circularObject = [SKSpriteNode spriteNodeWithImageNamed:@"ball.png"];
    circularObject.name = ballCategoryName;
    circularObject.position = CGPointMake(self.frame.size.width/3, self.frame.size.height/3);
    [self addChild:circularObject];

    // 2
    circularObject.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:circularObject.frame.size.width/2];
    // 3
    circularObject.physicsBody.friction = 0.0f;
    // 4
    circularObject.physicsBody.restitution = 1.0f;
    // 5
    circularObject.physicsBody.linearDamping = 0.0f;
    // 6
    circularObject.physicsBody.allowsRotation = NO;

Here we create the sprite and add it to the scene. We then associate a circular physics body with the sprite object and, finally, alter the properties of that physics body. Now compile and run the application; you should be able to see the circular ball on the screen, as shown in the screenshot below:

The circular ball is added to the screen, but it does nothing yet, so it's time to add some action to the code. Add the following line of code at the end of the initWithSize: method:

    [circularObject.physicsBody applyImpulse:CGVectorMake(10.0f, -10.0f)];

This applies an impulse to the physics body, which in turn moves the associated ball sprite as well. Now compile and run the project. You should be able to see the ball move, collide with the boundary, and bounce back, as there is no friction between the boundary and the ball. So now we have an infinitely bouncing ball in the game.

How it works…

Several properties are used while creating physics bodies to define their behavior in the physics world. The following is a detailed description of the properties used in the preceding code:

- The restitution property defines the bounciness of an object. Setting the restitution to 1.0f means that the ball's collisions will be perfectly elastic with any object; that is, the ball will bounce back with a force equal to the impact.
- The linearDamping property allows the simulation of fluid or air friction. This is accomplished by reducing the linear velocity of the body. In our case, we do not want the ball to slow down while moving, and hence we have set the linear damping to 0.0f.

There's more…

You can read about all these properties in detail in Apple's developer documentation: https://developer.apple.com/library/IOs/documentation/SpriteKit/Reference/SKPhysicsBody_Ref/index.html.

Summary

In this article, you learned about the SpriteKit game framework, how to create a simple game using the SpriteKit framework, physics simulation, and also how to integrate a physics engine with games.

Resources for Article:

Further resources on this subject:

- Code Sharing Between iOS and Android [article]
- Linking OpenCV to an iOS project [article]
- Interface Designing for Games in iOS [article]

Introducing GameMaker

Packt
24 Mar 2015
5 min read
In this article by Nathan Auckett, author of the book GameMaker Essentials, you will learn what GameMaker is all about, who made it, what it is used for, and more. You will then also learn how to install GameMaker on your computer, ready for use. (For more resources related to this topic, see here.)

In this article, we will cover the following topics:

- Understanding GameMaker
- Installing GameMaker: Studio
- What is this article about?

Understanding GameMaker

Before getting started with GameMaker, it is best to know exactly what it is and what it's designed to do. GameMaker is 2D game creation software by YoYo Games. It was designed to allow anyone to easily develop games without having to learn complex programming languages such as C++, through the use of its drag and drop functionality. The drag and drop functionality allows the user to create games by visually organizing icons on screen, which represent actions and statements that will occur during the game.

GameMaker also has a built-in programming language called GameMaker Language, or GML for short. GML allows users to type out code to be run during their game. All drag and drop actions are actually made up of this GML code.

GameMaker is primarily designed for 2D games, and most of its features and functions are designed for 2D game creation. However, GameMaker does have the ability to create 3D games and has a number of functions dedicated to this.

GameMaker: Studio

There are a number of different versions of GameMaker available, most of which are unsupported because they are outdated; however, support can still be found in the GameMaker Community Forums. GameMaker: Studio is the first version of GameMaker after GameMaker HTML5 to allow users to create games and export them for use on multiple devices and operating systems, including PC, Mac, Linux, and Android, in both mobile and desktop versions.

GameMaker: Studio is designed to allow one code base (GML) to run on any device with minimal changes to the base code. Users are able to export their games to run on any supported device or system, such as HTML5, without changing any code to make things work.

GameMaker: Studio was also the first version available for download and use through the Steam marketplace. YoYo Games took advantage of the Steam Workshop and allowed Steam-based users to post and share their creations through the service.

GameMaker: Studio is sold in a number of different versions, which include more features and capabilities as the price gets higher. The standard version is free to download and use; however, it lacks some advanced features included in the higher versions and only allows exporting to Windows desktop. The professional version is the next one up from the standard version. It includes all features, but only has the Windows desktop and Windows app exports; other exports can be purchased at an extra cost ranging from $99.99 to $300. The master version is the most expensive of all the options. It comes with every feature and every export, including all future export modules in version 1.x. If you already own exports in the professional version, their prices are taken off the price of the master version.

Installing GameMaker: Studio

Installing GameMaker is performed much like any other program. In this case, we will be installing GameMaker: Studio, as this is the most up-to-date version at this point. You can find the download at the YoYo Games website, https://www.yoyogames.com/.
From the site, you can pick the free version or purchase one of the others. All the installations are basically the same. Once the installer is downloaded, we are ready to install GameMaker: Studio. This is just like installing any other program: run the file, and then follow the on-screen instructions to accept the license agreement, choose an install location, and install the software.

On the first run, you may see a progress bar appear at the top left of your screen. This is just GameMaker running its first-time setup. It will also do this during the update process as YoYo Games releases new features. Once it is done, you should see a welcome screen and will be prompted to enter your registration code. The key should be e-mailed to you when you make an account during the purchase/download process. Enter this key, and your copy of GameMaker: Studio should be registered. You may be prompted to restart GameMaker at this time. Close GameMaker and re-open it, and you should see the welcome screen and be able to choose from a number of options on it:

What is this article about?

We now have GameMaker: Studio installed and are ready to get started with it. In this article, we will be covering the essential things to know about GameMaker: Studio. This includes everything from drag and drop actions to programming in GameMaker using GameMaker Language (GML). You will learn how things in GameMaker are structured and how to organize resources to keep things as clean as possible.

Summary

In this article, we looked into what GameMaker actually is and learned that there are different versions available. We also looked at the different versions of GameMaker: Studio available for download and purchase. We then learned how to install GameMaker: Studio, which is the final step in getting ready to learn the essential skills and start making our very own games.

Resources for Article:

Further resources on this subject:

- Getting Started – An Introduction to GML [Article]
- Animating a Game Character [Article]
- Why should I make cross-platform games? [Article]