How-To Tutorials - 2D Game Development

64 Articles

Harrison Ferrone explains why C# is the preferred programming language for building games in Unity

Sugandha Lahoti
16 Dec 2019
6 min read
C# is one of the most popular programming languages used to create games in the Unity game engine. Experiences (games, AR/VR apps, and more) built with Unity have reached nearly 3 billion devices worldwide and were installed 24 billion times in the last 12 months. We spoke to Harrison Ferrone: software engineer, game developer, creative technologist, and author of the book "Learning C# by Developing Games with Unity 2019". We talked about why C# is used for designing games, the recent Unity 2019.2 release, and some tips and tricks for those developing games with Unity.

On C# and game development

Why is C# so widely used to create games? How does it compare to C++? How is C# being used in other areas such as mobile and web development?

I think Unity chose to move forward with C# instead of JavaScript or Boo because of its learning curve and its history with Microsoft. [Boo was one of the three scripting languages for the Unity game engine until it was dropped in 2014.] In my experience, C# is easier to learn than languages like C++, and that accessibility is a huge draw for game designers and programmers in general. With Xamarin mobile development and ASP.NET web applications in the mix, there's really no stopping the C# language any time soon.

What are C# scripts? How are they useful for creating games with Unity?

C# scripts are the code files that store behaviors in Unity, powering everything the engine does. While there are a lot of new tools that will allow a developer to make a game without them, scripts are still the best way to create custom actions and interactions within a game space.

Editor's Tip: To get started with creating a C# script in Unity, you can go through Chapter 1 of Harrison Ferrone's book Learning C# by Developing Games with Unity 2019.
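As a quick illustration of the point about scripts storing behaviors, here is a minimal example of the kind of C# script Unity attaches to a game object (our own sketch; the class and field names are illustrative, not from the interview or the book):

```csharp
using UnityEngine;

// A minimal Unity behavior script: rotates whatever object it is attached to.
public class Spinner : MonoBehaviour
{
    public float degreesPerSecond = 90f; // tweakable in the Inspector

    void Update()
    {
        // Runs once per frame; Time.deltaTime keeps the speed frame-rate independent.
        transform.Rotate(0f, 0f, degreesPerSecond * Time.deltaTime);
    }
}
```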
On why Harrison wrote his book, Learning C# by Developing Games with Unity 2019

Tell us the motivation behind writing your book Learning C# by Developing Games with Unity 2019. Why is developing Unity games a good way to learn the C# programming language? Why do you prefer Unity over other game engines?

My main motivation for writing the book was two-fold. First, I always wanted to be a writer, so marrying my love for technology with a lifelong dream was a no-brainer. Second, I wanted to write a beginner's book that would stay true to a beginner audience, always keeping them in mind. In terms of choosing games as a medium for learning, I've found that making something interesting and novel while learning a new skill set leads to greater absorption of the material and more overall enjoyment. Unity has always been my go-to engine because its interface is highly intuitive and easy to get started with.

You have 3 years of experience building iOS applications in Swift. You also have a number of articles and tutorials on the same on the Ray Wenderlich website. Recently, you started branching out into C++ and Unreal Engine 4. How did you get into game design and Unity development? What made you interested in building games?

I actually got into game design and Unity development first, before all the iOS and Swift experience. It was my major in university, and even though I couldn't find a job in the game industry right after I graduated, I still held onto it as a passion.

On developing games

The latest release of Unity, Unity 2019.2, has a number of interesting features such as ProBuilder, Shader Graph and effects, 2D Animation, the Burst Compiler, etc. What are some of your favorite features in this release? What are your expectations from Unity 2019.3?

I'm really excited about ProBuilder in this release, as it's a huge time saver for someone as artistically challenged as I am. I think tools like this will level the playing field for independent developers who may not have access to environment or level builders.

What are some essential tips and tricks that a game developer must keep in mind when working in Unity? What are the do's and don'ts?

I'd say the biggest thing to keep in mind when working with Unity is the component architecture that it's built on. When you're writing your own scripts, think about how they can be separated into their individual functions and structure them like that - with purpose. There's nothing worse than having a huge, bloated C# script that does everything under the sun and attaching it to a single game object in your project, then realizing it really needs to be separated into its component parts.

What are the biggest challenges today in the field of game development? What is your advice for those developing games using C#?

Reaching the right audience is always challenge number one in any industry, and game development is no different. This is especially true for indie game developers, as they have to always be mindful of who they are making their game for and purposefully design and program their games accordingly. As far as advice goes, I always say the same thing - learn design patterns and agile development methodologies; they will open up new avenues for professional programming and project management.

Rust has been touted as one of the successors of the C family of languages. The present state of game development in Rust is also quite encouraging. What are your thoughts on Rust for game dev? Do you think major game engines like Unity and Unreal will support Rust for game development in the future?

I don't have any experience with Rust, but major engines like Unity and Unreal are unlikely to adopt a new language because of the huge cost associated with a changeover of that magnitude. However, that also leaves the possibility open for another engine to be developed around Rust in the future that targets games, mobile, and/or web development.

About the Author

Harrison Ferrone was born in Chicago, IL, and raised all over. Most days, you can find him creating instructional content for LinkedIn Learning and Pluralsight, or tech editing for the Ray Wenderlich website. After a few years as an iOS developer at small start-ups, and one Fortune 500 company, he fell into a teaching career and never looked back. Throughout all this, he's bought many books, acquired a few cats, worked abroad, and continually wondered why Neuromancer isn't on more course syllabi. You can follow him on LinkedIn and GitHub.
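To make the component-architecture advice from the interview concrete, here is a sketch of splitting responsibilities into small, single-purpose scripts instead of one bloated class (class names are ours; in a Unity project each MonoBehaviour would live in its own file named after the class):

```csharp
using UnityEngine;

// Each behavior lives in its own small component...
public class Health : MonoBehaviour
{
    public int hitPoints = 100;

    public void TakeDamage(int amount)
    {
        hitPoints -= amount;
        if (hitPoints <= 0) Destroy(gameObject);
    }
}

// ...so movement can be added, removed, or reused independently of health.
public class Mover : MonoBehaviour
{
    public float speed = 5f;

    void Update()
    {
        float h = Input.GetAxis("Horizontal");
        transform.Translate(h * speed * Time.deltaTime, 0f, 0f);
    }
}
```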


Unreal Engine 4.23 releases with major new features like Chaos, Virtual Production, improvement in real-time ray tracing and more

Vincy Davis
09 Sep 2019
5 min read
Last week, Epic released the stable version of Unreal Engine 4.23 with a whopping 192 improvements. The major features, many of them in beta, include Chaos - Destruction, Multi-Bounce Reflection fallback in Real-Time Ray Tracing, Virtual Texturing, Unreal Insights, HoloLens 2 native support, Niagara improvements, and many more. Unreal Engine 4.23 will no longer support iOS 10, as iOS 11 is now the minimum required version.

What's new in Unreal Engine 4.23?

Chaos - Destruction

Labelled as "Unreal Engine's new high-performance physics and destruction system", Chaos is available in beta for users to attain cinematic-quality visuals in real-time scenes. It also gives artists high-level control over content creation and destruction. https://youtu.be/fnuWG2I2QCY

Chaos supports many distinct features:

- Geometry Collections: A new type of asset in Unreal for short-lived objects. Geometry Collection assets can be built using one or more Static Meshes, giving the artist flexibility in choosing what to simulate and how to organize and author the destruction.
- Fracturing: A Geometry Collection can be broken into pieces either individually, or by applying one pattern across multiple pieces using the Fracturing tools.
- Clustering: Artists use sub-fracturing to increase optimization. Every sub-fracture adds an extra level to the Geometry Collection. The Chaos system keeps track of these extra levels and stores the information in a Cluster, to be controlled by the artist.
- Fields: Fields can be used to control simulation and other attributes of the Geometry Collection. They let users vary the mass, make something static, or make the corners more breakable than the middle, among other things.

Unreal Insights

Currently in beta, Unreal Insights enables developers to collect and analyze data about Unreal Engine's behavior in a uniform way. The Trace System API is one of its components and is used to collect information from runtime systems consistently. Another component, the Unreal Insights Tool, supplies interactive visualization of data through the Analysis API. For in-depth details about Unreal Insights and other features, you can also check out the first preview release of Unreal Engine 4.23.

Virtual Production Pipeline Improvements

Unreal Engine 4.23 advances the virtual production pipeline with improved ways to virtually scout environments and compose shots by connecting live broadcast elements with digital representations, and more.

- In-Camera VFX: With improvements to in-camera VFX, users can achieve final shots live on set by combining real-world actors and props with Unreal Engine environment backgrounds.
- VR Scouting for Filmmakers: The new VR Scouting tools can be used by filmmakers to navigate and interact with the virtual world in VR. Controllers and settings can also be customized in Blueprints, rather than rebuilding the engine in C++.
- Live Link Datatypes and UX Improvements: The Live Link Plugin can be used to drive character animation, cameras, lights, and basic 3D transforms dynamically from other applications and data sources in the production pipeline. Other improvements include saving and loading presets for Live Link setups, better status indicators showing the current Live Link sources, and more.
- Remote Control over HTTP: Unreal Engine 4.23 users can send commands to Unreal Engine and Unreal Editor remotely over HTTP. This makes it possible for users to create customized web user interfaces to trigger changes in the project's content.
Read Also: Epic releases Unreal Engine 4.22, focuses on adding "photorealism in real-time environments"

Real-Time Ray Tracing Improvements

- Performance and Stability: expanded DirectX 12 support, improved Denoiser quality, and increased Ray Traced Global Illumination (RTGI) quality.
- Additional Geometry and Material Support: Landscape Terrain, Hierarchical Instanced Static Meshes (HISM) and Instanced Static Meshes (ISM), Procedural Meshes, Transmission with SubSurface Materials, and World Position Offset (WPO) support for Landscape and Skeletal Mesh geometries.
- Multi-Bounce Reflection Fallback: Unreal Engine 4.23 provides improved support for multi-bounce Ray Traced Reflections (RTR) by using Reflection Captures, which increases the performance of all types of intra-reflections.

Virtual Texturing

The beta version of Virtual Texturing in Unreal Engine 4.23 enables users to create and use large textures with a lower and more constant memory footprint at runtime.

- Streaming Virtual Texturing: Uses Virtual Texture assets to offer an option to stream textures from disk, rather than the existing Mip-based streaming. It minimizes the texture memory overhead and increases performance when using very large textures.
- Runtime Virtual Texturing: Provides a Runtime Virtual Texture asset. It can be used to supply shading data over large areas, making it suitable for Landscape shading.

Unreal Engine 4.23 also presents new features like Skin Weight Profiles, Animation Streaming, Dynamic Animation Graphs, Open Sound Control, Sequencer Curve Editor improvements, and more. As expected, users love the new features in Unreal Engine 4.23, especially Chaos.

https://twitter.com/rista__m/status/1170608746692673537
https://twitter.com/jayakri59101140/status/1169553133518782464
https://twitter.com/NoisestormMusic/status/1169303013149806595

To learn about the full set of updates in Unreal Engine 4.23, users can head over to the Unreal Engine blog.

Other news in Game Development

- Japanese Anime studio Khara is switching its primary 3D CG tools to Blender
- Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
- Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects


Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Natasha Mathur
25 Jan 2019
5 min read
Two days ago, the Blizzard team announced an update about a demo of the progress made by Google DeepMind's AI at StarCraft II, a real-time strategy video game. The demo was presented yesterday over a live stream, where AlphaStar, DeepMind's StarCraft II AI program, was shown beating the top two professional StarCraft II players, TLO and MaNa. The demo presented a series of five separate test matches held earlier, on 19 December, against Team Liquid's Grzegorz "MaNa" Komincz and Dario "TLO" Wünsch. AlphaStar beat the two professional players 10-0 in total (5-0 against each). After the 10 straight wins, AlphaStar was finally beaten by MaNa in a live match streamed by Blizzard and DeepMind.

https://twitter.com/LiquidTLO/status/1088524496246657030
https://twitter.com/Liquid_MaNa/status/1088534975044087808

How does AlphaStar learn?

AlphaStar begins by imitating the basic micro and macro-strategies used by players on the StarCraft ladder. A neural network was trained initially using supervised learning on anonymised human games released by Blizzard. This initial agent managed to defeat the built-in "Elite" level AI in 95% of games.

Once the agents are trained from human game replays, they are then trained against other competitors in the "AlphaStar league". This is where a multi-agent reinforcement learning process starts. New competitors are added to the league (branched from existing competitors). Each of these agents then learns from games against other competitors. This ensures that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones.

As the league progresses, new counter-strategies emerge that can defeat the earlier strategies. Each agent also has its own learning objective, which gets adapted during training: one agent might have the objective of beating one specific competitor, while another might aim to beat a whole distribution of competitors. The neural network weights of each agent are updated using reinforcement learning from its games against competitors, optimising its personal learning objective. A toy sketch of the league idea follows.
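The following is a rough illustration of the league mechanism described above; it is a toy sketch of the idea only, not DeepMind's actual training code, and all names are invented for illustration:

```csharp
using System;
using System.Collections.Generic;

// Toy sketch of a self-play "league": new agents branch off existing ones and
// keep playing every past competitor, so old strategies are never forgotten.
class LeagueSketch
{
    record Agent(string Name, int Generation);

    static void Main()
    {
        var league = new List<Agent> { new("SupervisedSeed", 0) };
        var rng = new Random(42);

        for (int gen = 1; gen <= 3; gen++)
        {
            // Branch a new competitor from a randomly chosen existing agent.
            var parent = league[rng.Next(league.Count)];
            var child = new Agent($"{parent.Name}/g{gen}", gen);

            // Train the newcomer against the whole league, not just the latest agent.
            foreach (var opponent in league)
                Console.WriteLine($"{child.Name} plays {opponent.Name}");

            league.Add(child);
        }
    }
}
```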
How does AlphaStar play the game?

Professional StarCraft players such as TLO and MaNa can issue hundreds of actions per minute (APM) on average. AlphaStar had an average APM of around 280 in its games against TLO and MaNa, significantly lower than the professional players. This is because AlphaStar starts its learning from replays and thereby mimics the way humans play the game. AlphaStar also showed an average delay between observation and action of about 350 ms.

AlphaStar may have had a slight advantage over the human players in that it interacted with the StarCraft game engine directly via its raw interface. This means it could directly observe the attributes of its own and its opponent's visible units on the map, essentially getting a zoomed-out view of the game. Human players, in contrast, have to split their time and attention to decide where to focus the camera on the map. However, analysis of the games showed that the AI agents "switched context" about 30 times per minute, similar to MaNa or TLO. According to DeepMind, this suggests that AlphaStar's success against MaNa and TLO was due to superior macro and micro-strategic decision-making rather than a superior click-rate, faster reaction times, or the raw interface.

MaNa managed to beat AlphaStar in one match

DeepMind also developed a second version of AlphaStar that played more like human players, meaning it had to choose when and where to move the camera. Two new agents were trained against the AlphaStar league: one that used the raw interface and one that learned to control the camera. "The version of AlphaStar using the camera interface was almost as strong as the raw interface, exceeding 7000 MMR on our internal leaderboard", states the DeepMind team. But the team didn't get the chance to test the AI against a human pro prior to the live stream.

In a live exhibition match, MaNa managed to defeat the new version of AlphaStar using the camera interface, which had been trained for only 7 days. "We hope to evaluate a fully trained instance of the camera interface in the near future", says the team.

The DeepMind team states that AlphaStar's performance was first tested against TLO, where it won the match. "I was surprised by how strong the agent was.. (it) takes well-known strategies.. I hadn't thought of before, which means there may still be new ways of playing the game that we haven't fully explored yet," said TLO. The agents were then trained for one extra week, after which they played against MaNa. AlphaStar again won. "I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn't have expected.. this has put the game in a whole new light for me. We're all excited to see what comes next," said MaNa.

Public reaction to the news is very positive, with people congratulating the DeepMind team on AlphaStar's win:

https://twitter.com/SebastienBubeck/status/1088524371285557248
https://twitter.com/KaiLashArul/status/1088534443718045696
https://twitter.com/fhuszar/status/1088534423786668042
https://twitter.com/panicsw1tched/status/1088524675540549635
https://twitter.com/Denver_sc2/status/1088525423229759489

To learn about the strategies developed by AlphaStar, check out the complete set of replays of AlphaStar's matches against TLO and MaNa on DeepMind's website.

Best game engines for Artificial Intelligence game development
Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
Deepmind's AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare


Unity 2018.2: Unity's second release of the year is here!

Sugandha Lahoti
12 Jul 2018
4 min read
It has been only two months since the release of Unity 2018.1, and Unity is back with its next release of the year. Unity 2018.2 builds on the features of Unity 2018.1, such as the Scriptable Render Pipeline (SRP), Shader Graph, and the Entity Component System. It also adds support for managed code debugging on iOS and Android, along with the final release of 64-bit (ARM64) support for Android devices. Let us look at the features in detail.

Scriptable Render Pipeline improvements

As mentioned above, Unity 2018.2 builds on the Scriptable Render Pipeline introduced in 2018.1. This version comes with two additional features:

- The SRP batcher: A new Unity engine inner loop for speeding up CPU rendering without compromising GPU performance. It works with the High Definition Render Pipeline (HDRP) and Lightweight Render Pipeline (LWRP), with PC DirectX 11, Metal, and PlayStation 4 currently supported.
- Scriptable shader variant stripping: Manages the number of shader variants generated, without affecting iteration time or maintenance complexity. This leads to a dramatic reduction in player build time and data size.

Performance optimizations in the Lightweight and High Definition Render Pipelines

Unity 2018.2 improves the performance of the Lightweight Render Pipeline (LWRP) with optimized tile utilization. This feature adjusts the number of loads and stores to tiles in order to optimize the memory use of mobile GPUs. It also shades lights in batches, which reduces overdraw and draw calls. Unity 2018.2 also brings better high-end visual quality to the High Definition Render Pipeline (HDRP). Improvements include volumetrics, glossy planar reflection, geometric specular AA, Proxy Screen Space Reflection & Refraction, mesh decals, and Shadow Mask.

Improvements in the C# Job System, Entity Component System, and Burst Compiler

Unity 2018.2 introduces new reactive system samples in the Entity Component System (ECS) to let developers respond to changes in component state and emulate event-driven behavior. Burst compiling for ECS is now available on all editor platforms (Windows, Mac, Linux), and game developers will be able to build AOT for standalone players (Desktop, PS4, Xbox, iOS, and Android). The C# Job System allows developers to take full advantage of the multicore processors currently available and write parallel code without worrying about the usual pitfalls of multithreaded programming (a minimal job example is sketched at the end of this article).

Updates to Shader Graph

Shader Graph, announced as a preview package in Unity 2018.1, allows developers to build shaders visually. Unity 2018.2 adds further improvements, such as High Definition Render Pipeline (HDRP) support, manual modification of vertex position, editing of the reference name for a property, editable paths for graphs, Texture 2D and 3D arrays, and more.

Texture Mipmap Streaming

Game developers can now stream texture mipmaps into memory on demand to reduce the texture memory requirements of a Unity application. This feature speeds up initial load time, gives developers more control, and is simple to enable and manage.

Particle System improvements

Unity 2018.2 has seven major improvements to the Particle System:

- Support for eight UVs, to use more custom data.
- MinMaxCurve and MinMaxGradient types in custom scripts, to match the style used by the Particle System UI.
- Particle Systems now convert colors into linear space, when appropriate, before uploading them to the GPU.
- Two new modes in the Shape module to emit from a sprite or SpriteRenderer component.
- Two new APIs for baking the geometry of a Particle System into a mesh.
- Show Only Selected (aka Solo Mode) with the Play/Restart/Stop etc. controls.
- Shaders that use separate alpha textures can now be used with particles, while using sprites in the Texture Sheet Animation module.

Unity Hub

Unity Hub (v1.0) is a new tool, to be released soon, designed to streamline onboarding and setup processes for all users. It is a centralized location to manage all Unity projects, simplifying how developers find, download, and manage Unity Editor licenses and add-on components. Hub 1.0 will ship with:

- Project templates
- Custom install location
- Asset Store packages added to new projects
- Modified project build targets
- Editor: components added post-installation

There are additional features like Vulkan support for the Editor on Windows and Linux and improvements to the Progressive Lightmapper, 2D games, the SVG importer, etc. Unity 2018.2 will also support .java and .cpp source files as plugins in a Unity project, along with updates to Cinematics and the Unity core engine. In total, there are 183 improvements and 1426 fixes in the Unity 2018.2 release. Refer to the release notes to view the full list of new features, improvements, and fixes.

Put your game face on! Unity 2018.1 is now available
Unity plugins for augmented reality application development
Unity 2D & 3D game kits simplify Unity game development for beginners
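As referenced earlier, here is a minimal sketch of the C# Job System mentioned in this release; it is a generic illustration of the public Unity.Jobs API, not code from the release notes:

```csharp
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// A minimal C# Job System example: doubles an array of values on a worker thread.
public class JobExample : MonoBehaviour
{
    struct DoubleValuesJob : IJob
    {
        public NativeArray<float> values;

        public void Execute()
        {
            for (int i = 0; i < values.Length; i++)
                values[i] *= 2f;
        }
    }

    void Start()
    {
        var data = new NativeArray<float>(new float[] { 1f, 2f, 3f }, Allocator.TempJob);
        var handle = new DoubleValuesJob { values = data }.Schedule();
        handle.Complete(); // block until the worker thread finishes
        Debug.Log(data[1]); // prints 4
        data.Dispose();     // NativeArray memory must be freed manually
    }
}
```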


Implementing Unity 2017 Game Audio [Tutorial]

Amarabha Banerjee
11 Jul 2018
11 min read
Background music and audio effects play a big role in determining any game's success or failure. Creating engaging game audio, importing audio from other sources, and customizing audio FX clips to fit the game flow are vital tasks for any game developer. In this article, we are going to discuss how to create, customize, and use third-party audio in Unity games. This article is part of the book "Unity 2017 2D Game Development Projects" written by Lauren S. Ferro & Francesco Sapio.

Basics of audio and sound FX in Unity

Adding sound in Unity is simple enough, but you can implement it better if you understand how sound travels. While this is extremely important in 3D games because of the added third dimension, it is quite important in 2D games too, just in a slightly different way. Before we discuss the differences, let's first learn what sound is and how it works, with a quick physics lesson.

Listening to the physics behind sound

What we hear is not just music, sound effects (FX), and ambient background noise. Sound is a longitudinal, mechanical (vibrating) wave. These "waves" can pass through different mediums (for example, air, water, your desk) but not through a vacuum. Therefore, no one will hear your screams in space. Sound is a variation in pressure. A region of increased pressure on a sound wave is called a compression (or condensation). A region of decreased pressure on a sound wave is called a rarefaction (or dilation). You can see this concept illustrated in the following image:

Just as the density of certain materials, such as glass and plastic, determines how much light passes through them and how that light behaves when it does (for example, bending/refracting, as described by the index of refraction), various materials (for example, liquids, solids, gases) have a similar effect on sound waves. Some materials allow sound to pass easily, while others dampen it. This is why sound studios/booths are made of certain materials that remove things such as echoes. Screaming underwater that there is a shark won't be as loud as screaming from your kitchen to tell everyone dinner is ready.

Another thing to consider is what is known as the Doppler Effect. The Doppler Effect is an increase (or decrease) in the frequency of sound (and other things, such as light or ripples in water) as the source of the sound and the person/player move toward (or away from) each other. A simple example is when an emergency vehicle passes by you: the sound of the siren is different before it reaches you, when it is near you, and once it passes you, because there is a sudden change in pitch as the siren passes. This is visualized in the following image:
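For reference, the standard textbook formula behind this effect (general physics, not from the book): with v the speed of sound, v_L the listener's speed toward the source, and v_S the source's speed toward the listener,

f_observed = f_source × (v + v_L) / (v − v_S)

so the perceived pitch rises while the source and listener approach each other and drops once they have passed.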
So, what is the point of knowing this when it comes to developing games? Well, it is particularly important, more so in 3D, in relation to how sounds are heard by players. For example, imagine that you're nearing a creek, but there are dense bushes, large pine trees, and rugged terrain. The sound that creek makes, from where the player stands in the game world, is going to be very different than if the area were a completely flat plane free from any vegetation.

When it comes to 2D games, this is not quite as important, because we are working without depth (the z-axis), but similar principles apply when players are navigating a top-down environment and come near a point of interest. You don't want that sound to be as loud when the player is far away as it would be if they were up close. Within the context of 2D and 3D sounds, Unity has a parameter for this exact thing, called Spatial Blend. We will discuss this more in the Audio Source section.

There are several ways that you can create audio within Unity, from importing your own/downloaded sounds to recording them live. As with images, Unity can import most standard audio file formats: AIFF, WAV, MP3, and Ogg, as well as tracker modules (for example, short instrument samples): .xm, .mod, .it, and .s3m.

Importing audio

Importing audio into Unity follows the same process as importing any other type of asset. We will cover the basics of what you need to know in the following sections.

Audio Listener

Have you heard the saying, "If a tree falls in a forest and no one is there to hear it, does it still make a sound?" Well, in Unity, if there is nothing to hear your audio, then the answer is no. This is because Unity has a component called an Audio Listener, which works like a microphone. To locate the Audio Listener, click the Main Camera, and then look over at the Inspector; it should be located near the bottom, as in the following image:

If for some reason it isn't there, you can always add it by clicking the button titled Add Component, typing Audio Listener, and selecting it (clicking it) from the list, as in the following image:

The important thing to remember is that an Audio Listener marks the location where sound is heard, which is why it is typically placed on the Main Camera, though it can also be placed on a Player. A single scene can only have one Audio Listener, so it's best to experiment with the placement that works best for your game. It is also important to remember that an Audio Listener works together with an Audio Source, and must have one to work.

Audio Source

The Audio Source is where the sound comes from. This can be many different objects within a Scene, as well as background music and sound FX. The Audio Source has several parameters; below we briefly discuss the main ones. To see more information about all the parameters, you can check out the official Unity documentation at the following link: https://docs.unity3d.com/2017.2/Documentation/Manual/class-AudioSource.html

You may be wondering why Spatial Blend is a slider instead of a checkbox. This is because we need to fade between 2D and 3D, and there is a good reason for this. Imagine that you're in a game and you're looking at a screen on a computer. In this case, your camera is going to be fixated on whatever is on the screen. This could be checking an inventory or even entering nuclear codes. In any case, you will want the sound being emitted from the screen to be the focal audio, so the Spatial Blend slider is going to be set closer to 2D, because you may still want ambient background noises incorporated into the experience. When the slider is closer to 2D, the sound will be the same in both speakers (or headphones). The closer you slide toward 3D, the more the volume will depend on the proximity of the Audio Listener to the Audio Source.
It will also allow things such as the Doppler Effect to be more noticeable, as it takes place in 3D space. There are also specific settings for these things.

Choosing sounds for background and FX

When it comes to picking the right kind of music for your game, just like the aesthetics, you need to think about what kind of "mood" you're trying to create. Is it a somber or uplifting kind of mood? Are you ironically contrasting the graphics (for example, happy) with gloomy music? There is really no right or wrong when it comes to your musical selection, as long as you can communicate to the player what they are supposed to feel, at least in general. For this game, I have provided you with some example "moods" that you can apply. Of course, you're welcome to choose sounds other than these that are more to your liking! All the sounds that we will use come from the Free Sound website: https://freesound.org. You will need to create an account to download them, but it's free, and there are many great sounds that you can use when creating games. That said, if you're intending to create your games for commercial purposes, please make sure that you check the Terms and Conditions on Free Sound so that you're not violating any of them. Each track will have its own attribution license, including those for commercial use, so always check! For this project, we're going to stick with the "Happy" version. But I encourage you to experiment!

Happy

- Collecting Angel Cakes: Chime sound (https://freesound.org/people/jgreer/sounds/333629/)
- Being attacked by the enemy: Cat Purr/Twit4.wav (https://freesound.org/people/steffcaffrey/sounds/262309/)
- Collecting health: correct (https://freesound.org/people/ertfelda/sounds/243701/)
- Collecting bonuses: Signal-Ring 1 (https://freesound.org/people/Vendarro/sounds/399315/)
- Background: Kirmes_Orgel_004_2_Rosamunde.mp3 (https://freesound.org/people/bilwiss/sounds/24720/)

Sad

- Collecting Angel Cakes: Glass Tap (https://freesound.org/people/Unicornaphobist/sounds/262958/)
- Being attacked by the enemy: musicbox1.wav (https://freesound.org/people/sandocho/sounds/17700/)
- Collecting health: chime.wav (https://freesound.org/people/Psykoosiossi/sounds/398661/)
- Collecting bonuses: short metallic hit (https://freesound.org/people/waveplay/sounds/366400/)
- Background: improvised chill 8 (https://freesound.org/people/waveplay/sounds/238529/)

Retro

- Collecting Angel Cakes: TF_Buzz.flac (https://freesound.org/people/copyc4t/sounds/235652/)
- Being attacked by the enemy: Game Die (https://freesound.org/people/josepharaoh99/sounds/364929/)
- Collecting health: galanghee.wav (https://freesound.org/people/metamorphmuses/sounds/91387/)
- Collecting bonuses: SW05.WAV (https://freesound.org/people/mad-monkey/sounds/66684/)
- Background: Angel-techno pop music loop (https://freesound.org/people/frankum/sounds/387410/)

Not everyone can hear well, or at all, so it pays to keep this in mind when you're developing games that may rely on audio to provide feedback to players. While subtitles can make dialogue more accessible, sound FX can be a little trickier. Therefore, when it comes to implementing audio, think about how you could complement it, even if the effect you're trying to achieve with sound is subtle. For example, if you play a "bleep" for every item collected, perhaps you could associate it with a slight glow or flash of color. The choice is up to you, but it's something to keep in mind.
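For instance, here is a sketch of that "bleep plus flash" idea (our own example; it assumes an AudioSource and SpriteRenderer on the same object, and the class name is illustrative):

```csharp
using System.Collections;
using UnityEngine;

// Pairs an audible "bleep" with a brief color flash, so a pickup is still
// perceivable when the sound is off or inaudible.
public class PickupFeedback : MonoBehaviour
{
    public AudioClip bleep;

    public void OnCollected()
    {
        GetComponent<AudioSource>().PlayOneShot(bleep);
        StartCoroutine(Flash());
    }

    IEnumerator Flash()
    {
        var sr = GetComponent<SpriteRenderer>();
        sr.color = Color.yellow;              // brief glow
        yield return new WaitForSeconds(0.15f);
        sr.color = Color.white;               // back to normal
    }
}
```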
On the other end of the spectrum, those who can hear might also want to turn the sounds off. We've all played that game (or several) that really begins to become irritating, so make sure that you also check this while you're playtesting. You don't want an awesome game to suck because your audio is intolerable and there is no option to TURN THE SOUND OFF! You've been warned.

Integrating background music in our game

Once you choose the music that best suits the feel you want to create for your game, import both the sounds and the music into the project. If you want, you can create two folders for them, SoundFX and Music, respectively. Now, in our scene, we need to do the following:

1. Create an empty game object (by clicking GameObject | Create Empty) and rename it Background Music.
2. Attach an Audio Source component (in the Inspector, click Add Component | Audio | Audio Source).
3. Drag and drop the music we decided on/downloaded into the AudioClip variable and check the Loop option, so the background music will never stop. Also check that Play on Awake is enabled (it should be by default), so the music starts playing as soon as the game starts.
4. Hit Play to start the game.
5. Lastly, adjust the volume, depending on the music you chose. This may require a bit of playtesting (remember to set the value after leaving play mode, because settings adjusted during play mode are not kept).

In the end, this is how the component should look (in the image, I chose the happy theme music and set a Volume of 0.1):

Here in this article, we have shown you how to incorporate audio effects and background music in Unity games. If you liked this article, then check out the complete book Unity 2017 2D Game Development Projects.

AI for Unity game developers: How to emulate real-world senses in your NPC agent
Working with Unity Variables to script powerful Unity 2017 games
How to use arrays, lists, and dictionaries in Unity for 3D game development
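As a footnote to the background-music setup steps above: the same configuration can also be done from code. Here is a minimal sketch (the class name is ours; the AudioSource fields used are standard Unity API):

```csharp
using UnityEngine;

// Configures a looping background-music AudioSource from code instead of the Inspector.
public class BackgroundMusic : MonoBehaviour
{
    public AudioClip musicClip; // assign your chosen track in the Inspector

    void Awake()
    {
        var source = gameObject.AddComponent<AudioSource>();
        source.clip = musicClip;
        source.loop = true;        // the music never stops
        source.spatialBlend = 0f;  // fully 2D: same volume in both speakers
        source.volume = 0.1f;      // tweak while playtesting
        source.Play();
    }
}
```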


Normal maps

Packt
19 Jan 2017
12 min read
In this article by Raimondas Pupius, the author of the book Mastering SFML Game Development, we will learn about normal maps and specular maps. (For more resources related to this topic, see here.)

Lighting can be used to create visually complex and breath-taking scenes. One of the massive benefits of having a lighting system is the ability it provides to add extra details to your scene, which wouldn't have been possible otherwise. One way of doing so is using normal maps. Mathematically speaking, the word "normal" in the context of a surface is simply a directional vector that is perpendicular to the said surface. Consider the following illustration:

In this case, the normal is facing up because that's the direction perpendicular to the plane. How is this helpful? Well, imagine you have a really complex model with many vertices; it'd be extremely taxing to render the said model because of all the geometry that would need to be processed with each frame. A clever trick to work around this, known as normal mapping, is to take the information of all of those vertices and save them on a texture that looks similar to this one:

It probably looks extremely funky, especially if viewed in grayscale (as in a print edition), but try not to think of this in terms of colors, but directions. The red channel of a normal map encodes the -x and +x values. The green channel does the same for -y and +y values, and the blue channel is used for -z to +z. Looking back at the previous image now, it's easier to confirm which direction each individual pixel is facing. Using this information on completely flat geometry would still allow us to light it in such a way that it looks like it has all of that detail; yet it would remain flat and light on performance:

These normal maps can be hand-drawn or simply generated using software such as Crazybump. Let's see how all of this can be done in our game engine.

Implementing normal map rendering

In the case of maps, implementing normal map rendering is extremely simple. We already have all the material maps integrated and ready to go, so at this time, it's simply a matter of sampling the texture of the tile-sheet normals:

```cpp
void Map::Redraw(sf::Vector3i l_from, sf::Vector3i l_to) {
  ...
  if (renderer->UseShader("MaterialPass")) {
    // Material pass.
    auto shader = renderer->GetCurrentShader();
    auto textureName = m_tileMap.GetTileSet().GetTextureName();
    auto normalMaterial = m_textureManager->
      GetResource(textureName + "_normal");
    for (auto x = l_from.x; x <= l_to.x; ++x) {
      for (auto y = l_from.y; y <= l_to.y; ++y) {
        for (auto layer = l_from.z; layer <= l_to.z; ++layer) {
          auto tile = m_tileMap.GetTile(x, y, layer);
          if (!tile) { continue; }
          auto& sprite = tile->m_properties->m_sprite;
          sprite.setPosition(
            static_cast<float>(x * Sheet::Tile_Size),
            static_cast<float>(y * Sheet::Tile_Size));
          // Normal pass.
          if (normalMaterial) {
            shader->setUniform("material", *normalMaterial);
            renderer->Draw(sprite, &m_normals[layer]);
          }
        }
      }
    }
  }
  ...
}
```

The process is exactly the same as drawing a normal tile to a diffuse map, except that here we have to provide the material shader with the texture of the tile-sheet normal map. Also note that we're now drawing to a normal buffer texture. The same is true for drawing entities as well:

```cpp
void S_Renderer::Draw(MaterialMapContainer& l_materials,
  Window& l_window, int l_layer)
{
  ...
  if (renderer->UseShader("MaterialPass")) {
    // Material pass.
    auto shader = renderer->GetCurrentShader();
    auto textures = m_systemManager->
      GetEntityManager()->GetTextureManager();
    for (auto &entity : m_entities) {
      auto position = entities->GetComponent<C_Position>(
        entity, Component::Position);
      if (position->GetElevation() < l_layer) { continue; }
      if (position->GetElevation() > l_layer) { break; }
      C_Drawable* drawable = GetDrawableFromType(entity);
      if (!drawable) { continue; }
      if (drawable->GetType() != Component::SpriteSheet) { continue; }
      auto sheet = static_cast<C_SpriteSheet*>(drawable);
      auto name = sheet->GetSpriteSheet()->GetTextureName();
      auto normals = textures->GetResource(name + "_normal");
      // Normal pass.
      if (normals) {
        shader->setUniform("material", *normals);
        drawable->Draw(&l_window,
          l_materials[MaterialMapType::Normal].get());
      }
    }
  }
  ...
}
```

We try to obtain a normal texture through the texture manager; if one is found, it is drawn to the normal map material buffer. Dealing with particles isn't much different from what we've seen already, except for one little piece of detail:

```cpp
void ParticleSystem::Draw(MaterialMapContainer& l_materials,
  Window& l_window, int l_layer)
{
  ...
  if (renderer->UseShader("MaterialValuePass")) {
    // Material pass.
    auto shader = renderer->GetCurrentShader();
    for (size_t i = 0; i < container->m_countAlive; ++i) {
      if (l_layer >= 0) {
        if (positions[i].z < l_layer * Sheet::Tile_Size) { continue; }
        if (positions[i].z >= (l_layer + 1) * Sheet::Tile_Size) { continue; }
      } else if (positions[i].z < Sheet::Num_Layers * Sheet::Tile_Size) {
        continue;
      }
      // Normal pass.
      shader->setUniform("material", sf::Glsl::Vec3(0.5f, 0.5f, 1.f));
      renderer->Draw(drawables[i],
        l_materials[MaterialMapType::Normal].get());
    }
  }
  ...
}
```

As you can see, we're actually using the material value shader in order to give the particles static normals, which always point more or less toward the camera. A normal map buffer should look something like this after you render all the normal maps to it:

Changing the lighting shader

Now that we have all of this information, let's actually use it when calculating the illumination of the pixels inside the light pass shader:

```glsl
uniform sampler2D LastPass;
uniform sampler2D DiffuseMap;
uniform sampler2D NormalMap;
uniform vec3 AmbientLight;
uniform int LightCount;
uniform int PassNumber;

struct LightInfo {
  vec3 position;
  vec3 color;
  float radius;
  float falloff;
};

const int MaxLights = 4;
uniform LightInfo Lights[MaxLights];

void main()
{
  vec4 pixel = texture2D(LastPass, gl_TexCoord[0].xy);
  vec4 diffusepixel = texture2D(DiffuseMap, gl_TexCoord[0].xy);
  vec4 normalpixel = texture2D(NormalMap, gl_TexCoord[0].xy);
  vec3 PixelCoordinates =
    vec3(gl_FragCoord.x, gl_FragCoord.y, gl_FragCoord.z);
  vec4 finalPixel = gl_Color * pixel;
  vec3 viewDirection = vec3(0, 0, 1);
  if (PassNumber == 1) { finalPixel *= vec4(AmbientLight, 1.0); } // IF FIRST PASS ONLY!
  vec3 N = normalize(normalpixel.rgb * 2.0 - 1.0);
  for (int i = 0; i < LightCount; ++i) {
    vec3 L = Lights[i].position - PixelCoordinates;
    float distance = length(L);
    float d = max(distance - Lights[i].radius, 0);
    L /= distance;
    float attenuation = 1 / pow(d / Lights[i].radius + 1, 2);
    attenuation = (attenuation - Lights[i].falloff) / (1 - Lights[i].falloff);
    attenuation = max(attenuation, 0);
    float normalDot = max(dot(N, L), 0.0);
    finalPixel += (diffusepixel *
      ((vec4(Lights[i].color, 1.0) * attenuation))) * normalDot;
  }
  gl_FragColor = finalPixel;
}
```

First, the normal map texture needs to be passed to the shader as well as sampled, which is where the NormalMap uniform and the normalpixel lookup come in. Once this is done, for each light we're drawing on the screen, the normal directional vector is calculated. This is done by first making sure that it can go into the negative range and then normalizing it. A normalized vector only represents a direction. Since the color values range from 0 to 255, negative values cannot be directly represented. This is why we first bring them into the right range by multiplying them by 2.0 and subtracting 1.0.

A dot product is then calculated between the normal vector and the normalized L vector, which now represents the direction from the light to the pixel. How much a pixel is lit up by a specific light is directly contingent upon this dot product, which here is a value from 0.0 to 1.0. A dot product is an algebraic operation that takes two vectors and produces a scalar equal to the product of their magnitudes and the cosine of the angle between them; for unit vectors, a value near 1.0 means the directions are aligned, while a value near 0.0 means they are orthogonal. We use this property to light pixels less and less, given greater and greater angles between their normals and the light.

Finally, the dot product is used again when calculating the final pixel value. The entire influence of the light is multiplied by it, which allows every pixel to be drawn differently, as if it had some underlying geometry that was pointing in a different direction. The last thing left to do now is to pass the normal map buffer to the shader in our C++ code:

```cpp
void LightManager::RenderScene() {
  ...
  if (renderer->UseShader("LightPass")) {
    // Light pass.
    ...
    shader->setUniform("NormalMap",
      m_materialMaps[MaterialMapType::Normal]->getTexture());
    ...
  }
  ...
}
```

This effectively enables normal mapping and gives us beautiful results such as this: the leaves, the character, and pretty much everything in this image now look like they have definition, ridges, and crevices; everything is lit as if it had geometry, although it's paper-thin. Note the lines around each tile in this particular instance. This is one of the main reasons why normal maps for pixel art, such as tile sheets, shouldn't be automatically generated; the generator can sample adjacent tiles and incorrectly add bevelled edges.

Specular maps

While normal maps provide us with the possibility to fake how bumpy a surface is, specular maps allow us to do the same with the shininess of a surface. This is what the same segment of the tile sheet we used as an example for a normal map looks like in a specular map:

It's not as complex as a normal map, since it only needs to store one value: the shininess factor. We can leave it up to each light to decide how much shine it will cast upon the scenery by letting it have its own values:

```cpp
struct LightBase {
  ...
  float m_specularExponent = 10.f;
  float m_specularStrength = 1.f;
};
```

Adding support for specularity

Similar to normal maps, we need to use the material pass shader to render to a specularity buffer texture:

```cpp
void Map::Redraw(sf::Vector3i l_from, sf::Vector3i l_to) {
  ...
  if (renderer->UseShader("MaterialPass")) {
    // Material pass.
    ...
    auto specMaterial = m_textureManager->GetResource(
      textureName + "_specular");
    for (auto x = l_from.x; x <= l_to.x; ++x) {
      for (auto y = l_from.y; y <= l_to.y; ++y) {
        for (auto layer = l_from.z; layer <= l_to.z; ++layer) {
          ...
          // Normal pass.
          // Specular pass.
          if (specMaterial) {
            shader->setUniform("material", *specMaterial);
            renderer->Draw(sprite, &m_speculars[layer]);
          }
        }
      }
    }
  }
  ...
}
```

We once again attempt to obtain the specularity texture; if it is found, it is passed down to the material pass shader. The same is true when rendering entities:

```cpp
void S_Renderer::Draw(MaterialMapContainer& l_materials,
  Window& l_window, int l_layer)
{
  ...
  if (renderer->UseShader("MaterialPass")) {
    // Material pass.
    ...
    for (auto &entity : m_entities) {
      ...
      // Normal pass.
      // Specular pass.
      if (specular) {
        shader->setUniform("material", *specular);
        drawable->Draw(&l_window,
          l_materials[MaterialMapType::Specular].get());
      }
    }
  }
  ...
}
```

Particles, on the other hand, also use the material value pass shader:

```cpp
void ParticleSystem::Draw(MaterialMapContainer& l_materials,
  Window& l_window, int l_layer)
{
  ...
  if (renderer->UseShader("MaterialValuePass")) {
    // Material pass.
    auto shader = renderer->GetCurrentShader();
    for (size_t i = 0; i < container->m_countAlive; ++i) {
      ...
      // Normal pass.
      // Specular pass.
      shader->setUniform("material", sf::Glsl::Vec3(0.f, 0.f, 0.f));
      renderer->Draw(drawables[i],
        l_materials[MaterialMapType::Specular].get());
    }
  }
}
```

For now, we don't want any of them to be specular at all. This can obviously be tweaked later on, but the important thing is that we have that functionality available and yielding results, such as the following:

This specularity texture needs to be sampled inside a light-pass shader, just like a normal texture. Let's see what this involves.

Changing the lighting shader

Just as before, a uniform sampler2D needs to be added to sample the specularity of a particular fragment:

```glsl
uniform sampler2D LastPass;
uniform sampler2D DiffuseMap;
uniform sampler2D NormalMap;
uniform sampler2D SpecularMap;
uniform vec3 AmbientLight;
uniform int LightCount;
uniform int PassNumber;

struct LightInfo {
  vec3 position;
  vec3 color;
  float radius;
  float falloff;
  float specularExponent;
  float specularStrength;
};

const int MaxLights = 4;
uniform LightInfo Lights[MaxLights];
const float SpecularConstant = 0.4;

void main()
{
  ...
  vec4 specularpixel = texture2D(SpecularMap, gl_TexCoord[0].xy);
  vec3 viewDirection = vec3(0, 0, 1); // Looking at positive Z.
  ...
  for (int i = 0; i < LightCount; ++i) {
    ...
    float specularLevel = 0.0;
    specularLevel =
      pow(max(0.0, dot(reflect(-L, N), viewDirection)),
        Lights[i].specularExponent * specularpixel.a) * SpecularConstant;
    vec3 specularReflection = Lights[i].color * specularLevel *
      specularpixel.rgb * Lights[i].specularStrength;
    finalPixel += (diffusepixel *
      ((vec4(Lights[i].color, 1.0) * attenuation)) +
      vec4(specularReflection, 1.0)) * normalDot;
  }
  gl_FragColor = finalPixel;
}
```

We also need to add the specular exponent and strength to each light's struct, as they are now part of it. Once the specular pixel is sampled, we need to set up the direction of the camera as well. Since that's static, we can leave it as is in the shader. The specularity of the pixel is then calculated by taking into account the dot product between the pixel's normal and the light, the color of the specular pixel itself, and the specular strength of the light. Note the use of a specular constant in the calculation. This is a value that can and should be tweaked in order to obtain the best results, as 100% specularity rarely ever looks good.

Then, all that's left is to make sure the specularity texture is also sent to the light-pass shader, in addition to the light's specular exponent and strength values:

```cpp
void LightManager::RenderScene() {
  ...
  if (renderer->UseShader("LightPass")) {
    // Light pass.
    ...
    shader->setUniform("SpecularMap",
      m_materialMaps[MaterialMapType::Specular]->getTexture());
    ...
    for (auto& light : m_lights) {
      ...
      shader->setUniform(id + ".specularExponent",
        light.m_specularExponent);
      shader->setUniform(id + ".specularStrength",
        light.m_specularStrength);
      ...
    }
  }
}
```

The result may not be visible right away, but upon closer inspection of a moving light stream, we can see that correctly mapped surfaces have a glint that moves around with the light. While this is nearly perfect, there's still some room for improvement.

Summary

Lighting is a very powerful tool when used right. Different aspects of a material can be emphasized depending on the setup of the game level, additional levels of detail can be added without too much overhead, and the overall aesthetics of the project can be raised to new heights. The full version of "Mastering SFML Game Development" offers all of this and more by not only utilizing normal and specular maps, but also using 3D shadow-mapping techniques to create omni-directional point-light shadows that breathe new life into the game world.

Resources for Article: Further resources on this subject: Common Game Programming Patterns [article] Sprites in Action [article] Warfare Unleashed Implementing Gameplay [article]

Cooking cupcake towers

Packt
03 Jan 2017
6 min read
In this article by Francesco Sapio, author of the book Getting Started with Unity 2D Game Development - Second Edition, we will see how to create our towers. This is not an easy task, but along the way we will acquire a lot of scripting skills. (For more resources related to this topic, see here.)

What a cupcake tower does

First of all, it's useful to write down what we want to achieve and define what exactly a cupcake tower is supposed to do. The best way is to write a list, to have a clear idea of what we are trying to achieve:

- A cupcake tower is able to detect pandas within a certain range.
- A cupcake tower shoots a different kind of projectile, according to its typology, against the pandas within that range. Furthermore, among the pandas in range, it uses a policy to decide which one to shoot.
- There is a reload time before the cupcake tower is able to shoot again.
- The cupcake tower can be upgraded (into a bigger cupcake!), increasing its stats and therefore changing its appearance.

Scripting the cupcake tower

There are a lot of things to implement. Let's start by creating a new script and naming it CupcakeTowerScript. As we already mentioned for the Projectile script, in this article we implement the main logic, but of course there is always room for improvement.

Shooting at pandas

Even though we don't have enemies yet, we can already start to program the behavior of the cupcake towers that shoot at them. In this section we will learn a bit about using physics to detect objects within a range.

Let's start by defining four variables. The first three are public, so we can set them in the Inspector; the last one is private, since we only need it to check how much time has elapsed. In particular, the first three variables store the parameters of our tower: the projectile prefab, its range, and its reload time. We can write the following:

```csharp
public float rangeRadius; // Maximum distance that the Cupcake Tower can shoot
public float reloadTime; // Time before the Cupcake Tower is able to shoot again
public GameObject projectilePrefab; // Projectile type that is fired from the Cupcake Tower

private float elapsedTime; // Time elapsed since the last time the Cupcake Tower shot
```

Now, in the Update() function, we need to check whether enough time has elapsed for the tower to shoot. This can easily be done using an if statement. In any case, at the end, the elapsed time should be increased:

```csharp
void Update () {
    if (elapsedTime >= reloadTime) {
        // Rest of the code
    }
    elapsedTime += Time.deltaTime;
}
```

Within the if statement, we need to reset the elapsed time, so we are able to shoot the next time. Then, we need to check whether there are any game objects within the tower's range:

```csharp
if (elapsedTime >= reloadTime) {
    // Reset elapsed time
    elapsedTime = 0;

    // Find all the gameObjects with a collider within the range of the Cupcake Tower
    Collider2D[] hitColliders = Physics2D.OverlapCircleAll(transform.position, rangeRadius);

    // Check if there is at least one gameObject found
    if (hitColliders.Length != 0) {
        // Rest of the code
    }
}
```

If there are enemies within range, we need to decide a policy for which enemy the tower should target. There are different ways to do this, and different strategies the tower itself could choose. Here, we are going to implement one where the tower targets the enemy nearest to it. To implement this policy, we need to loop over all the game objects we found in range, check whether they actually are enemies, and, using distances, pick the nearest one.
To achieve this, write the following code inside the previous if statement:

```csharp
if (hitColliders.Length != 0) {
    // Loop over all the gameObjects to identify the closest to the Cupcake Tower
    float min = int.MaxValue;
    int index = -1;
    for (int i = 0; i < hitColliders.Length; i++) {
        if (hitColliders[i].tag == "Enemy") {
            float distance = Vector2.Distance(hitColliders[i].transform.position, transform.position);
            if (distance < min) {
                index = i;
                min = distance;
            }
        }
    }
    if (index == -1)
        return;

    // Rest of the code
}
```

Once we have the target, we need to get the direction the tower will use to throw the projectile. So, let's write this:

```csharp
// Get the direction of the target
Transform target = hitColliders[index].transform;
Vector2 direction = (target.position - transform.position).normalized;
```

Finally, we need to instantiate a new projectile and assign the direction of the enemy to it, like so:

```csharp
// Create the Projectile
GameObject projectile = GameObject.Instantiate(projectilePrefab, transform.position, Quaternion.identity) as GameObject;
projectile.GetComponent<ProjectileScript>().direction = direction;
```

Instantiating game objects is usually slow and should generally be avoided (a simple pooling alternative is sketched at the end of this article); however, for learning purposes we can live with it here. And that is it for shooting at the enemies.

Upgrading the cupcake tower, making it even tastier

In order to create a function to upgrade the tower, we first need to define a variable to store the tower's current level:

```csharp
public int upgradeLevel; // Level of the Cupcake Tower
```

Then, we need an array with all the sprites for the different upgrades, like the following:

```csharp
public Sprite[] upgradeSprites; // Different sprites for the different levels of the Cupcake Tower
```

Finally, we can create our Upgrade function. We need to upgrade the graphics and increase the stats. Feel free to tweak these values as you prefer. However, don't forget to increase the level of the tower as well as to assign the new sprite. At the end, you should have something like the following:

```csharp
public void Upgrade() {
    rangeRadius += 1f;
    reloadTime -= 0.5f;
    upgradeLevel++;
    GetComponent<SpriteRenderer>().sprite = upgradeSprites[upgradeLevel];
}
```

Save the script; for now, we are done with it.

A pre-cooked cupcake tower through Prefabs

As we did with the Sprinkle, we need to do something similar for the cupcake tower. In the Prefabs folder in the Project panel, create a new Prefab by right-clicking and then navigating to Create | Prefab. Name it SprinklesCupcakeTower. Now, drag and drop Sprinkles_Cupcake_Tower_0 from the Graphics/towers folder (within the cupcake_tower_sheet-01 file) into the Scene view. Attach the CupcakeTowerScript to the object by navigating to Add Component | Script | CupcakeTowerScript. The Inspector should look like the following:

We need to assign the Pink_Sprinkle_Projectile_Prefab to the Projectile Prefab variable. Then, we need to assign the different sprites for the upgrades. In particular, we can use Sprinkles_Cupcake_Tower_* (replacing the * with the level of the cupcake tower) from the same sheet as before. Don't worry too much about the other parameters of the tower, like the range radius or the reload time, since we will see how to balance the game later on. At the end, this is what we should see:

The last step is to drag this game object onto the prefab. As a result, our cupcake tower is ready.

Summary

In this article we covered creating a cupcake tower and scripting it.
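As promised above, here is a minimal object-pool sketch as an alternative to calling Instantiate for every shot (our own illustration, not code from the book; the class name is invented):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal projectile pool: reuses inactive instances instead of
// instantiating a new one for every shot.
public class ProjectilePool : MonoBehaviour
{
    public GameObject projectilePrefab;
    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    public GameObject Spawn(Vector3 position)
    {
        GameObject projectile = pool.Count > 0
            ? pool.Dequeue()
            : Instantiate(projectilePrefab);
        projectile.transform.position = position;
        projectile.SetActive(true);
        return projectile;
    }

    public void Despawn(GameObject projectile)
    {
        projectile.SetActive(false); // hide it instead of destroying it
        pool.Enqueue(projectile);
    }
}
```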
Resources for Article:

Further resources on this subject:

- Animating a Game Character [article]
- What's Your Input? [article]
- Components in Unity [article]
Buildbox 2 Game Development: peek-a-boo

Packt
23 Sep 2016
20 min read
In this article by Ty Audronis, author of the book Buildbox 2 Game Development, the reader is taught the Buildbox 2 game development environment by example. The following excerpts from the book should help you gain an understanding of the teaching style and the feel of the book. The largest example we give is a game called Ramblin' Rover (a motocross-style game that uses everything from the most basic to the most advanced features of Buildbox). Let's take a quick look.

(For more resources related to this topic, see here.)

Making the rover jump

As we've mentioned before, we're making a hybrid game: a combination of a motocross game, a platformer, and a side-scrolling shooter. Our initial rover will not be able to shoot at anything (we'll save this feature for the next upgraded rover, which anyone can buy with in-game currency). But this rover will need to jump in order to make the game more fun.

As we know, NASA has never made a rover for Mars that jumps. But if they did, how would they do it? The surface of Mars is a combination of dust and rocks, so the surface conditions vary greatly in both traction and softness. One viable way is to make the rover move the same way a spacecraft manoeuvres (using little gas jets). And since the gravity on Mars is lower than that on Earth, this seems legit enough to include in our game.

While in our Mars Training Ground world, open the character properties for Training Rover. Drag the animated PNG sequence located in our Projects/RamblinRover/Characters/Rover001-Jump folder (a small four-frame animation) into the Jump Animation field. Now we have an animation of a jump-jet firing when we jump. We just need to make our rover actually jump. Your Properties window should look like the following screenshot:

The preceding screenshot shows the relevant sections of the character's properties window.

We're now going to revisit the Character Gameplay Settings section. Scroll the Properties window all the way down to this section. Here is where we actually configure a few settings in order to make the rover jump. The previous screenshot shows the section as we're going to set it up. You can configure your settings similarly.

The first setting we are considering is Jump Force. You may notice that the vertical force is set to 55. Since our gravity is -20 in this world, we need enough force to not only counteract the gravity, but also to give us a decent height (about half the screen). A good rule is to make our Jump Force roughly twice our Gravity.

Next is Jump Counter. We've set it to 1. By default, it's set to 0, which actually means infinity. When Jump Counter is set to 0, there is no limit to how many times a player can use the jump boost... they could effectively ride the top of the screen using the jump boost, like a flappy-bird control. So, we set it to 1 in order to limit the jumps to one at a time.

There is also a strange oddity in Buildbox that we can exploit here. The jump counter resets only after the rover hits the ground. But here's a funny thing... the rover itself never actually touches the ground (unless it crashes); only the wheels do. There is one other way to reset the jump counter: by doing a flip. What this means is that once players use their jump up, the only way to reset it is to do a flip-trick off a ramp. Add a level of difficulty and excitement to the game using a quirk of the development software!
We could trick the software into believing that the character is simply close enough to the ground to reset the counter by increasing Ground Threshold to the distance the body sits from the ground when the wheels have landed. But why do this? It's kind of cool that a player has to do a trick to reset the jump jets.

Finally, let's untick the Jump From Ground checkbox. Since we're using jets for our boost, it makes sense that the driver could activate them while in the air. Plus, as we've already said, the body never meets the ground. Again, we could raise the ground threshold, but let's not (for the reasons stated previously).

Awesome! Go ahead and give it a try by previewing the level. Try jumping on the small ramp that we created, which is used to get on top of our cave. Now, instead of barely clearing it, the rover will easily clear it, and the player can then reset the counter by doing a flip off the big ramp on top.

Making a Game Over screen

This exercise will show you how to make some connections and new nodes using Game Mind Map. The first thing we're going to want is an event listener to sense when a character dies. It sounds complex, and if we were coding a game, this would take several lines of code to accomplish. In Buildbox, it's a simple drag-and-drop operation.

If you double-click on the Game Field UI node, you'll be presented with the overlay for the UI and controls during gameplay. Since this is a basic template, you are actually presented with a blank screen. This template is meant for you to play around with on a computer, so no controls are on the screen; instead, it is assumed that you would use keyboard controls to play the demo game. This is why the screen looks blank:

There are some significant differences between the UI editor and the World editor. You'll notice that the Character tab is missing from the asset library, and there is a timeline editor at the bottom. We'll get into how to use this timeline later. For now, let's keep things simple and add our Game Over sensor.

If you expand the Logic tab in the asset library, you'll find the Event Observer object. You can drag this object anywhere onto the stage. It doesn't even have to be in the visible window (the dark area in the center of the stage). As long as it's somewhere on the stage, the game can use this logic asset. And if you do put it on the visible area of the stage, don't worry; it's an invisible asset and won't show in your game.

While the Event Observer is selected on the stage, you'll notice that its properties pop up in the properties window (on the right side of the screen). By default, the Game Over type of event is selected. But if you select this drop-down menu, you'll notice a ton of different event types that this logic asset can handle. Let's leave all of the properties at their default values (except the name; change this to Game Over) and go back to Game Mind Map (the top-left button):

Do you notice anything different? The Game Field UI node now has a Game Over output. Now, we just need a place to send this output. Right-click on the blank space of the grid area. Now you can either create a new world or a new UI. Select Add New UI and you'll see a new green node titled New UI1. This new UI will be your Game Over screen when a character dies. Before we can use this new node, it needs to be connected to the Game Over output of Game Field UI. This process is exceedingly simple.
Just hold down your left mouse button on the Game Over output's dark dot and drag it to the New UI1 node's Load dark dot (on the left side of the New UI1 node). Congratulations, you've just created your first connected node. We're not done yet, though. We need to make this Game Over screen link back to restart the game.

First, with the New UI1 node selected, change its name to Game Over UI using the parameters window (on the right of the screen). Make sure you hit your Enter key; this will commit the changed name. Now double-click on the Game Over UI node so we can add some elements to the screen. You can't have a Game Over screen without the words Game Over, so let's add some text.

So, we've pretty much completed the game field (except for some minor items that we'll address quite soon). But believe it or not, we're only halfway there! In this article, we're going to finally create our other two rovers, and we'll test and tweak our scenes with them. We'll set up all of our menus, information screens, and even a coin shop where we can use in-game currency to buy the other two rovers, or even use some real-world currency as a shortcut to buy more in-game currency. And speaking of monetization, we'll set up two different types of advertising from multiple providers to help us make some extra cash. Or, in the coin shop, players can pay a modest fee to remove all advertising! Ready? Well, here we go!

We got a fever, and the only cure is more rovers!

So now that we've created other worlds, we definitely need to set up some rovers that are capable of traversing them. Let's begin with the optimal rover for Gliese. This one is called the K.R.A.B.B. (no, it doesn't actually stand for anything... but the rover looks like a crab, and acronyms look more military-like).

Go ahead and drag all of the images in the Rover002-Body folder in as characters. Don't worry about the error message; this just tells you that only one character can be on the stage at a time. The software still loads this new character into the library, and that's all we really want at this time anyway. Of course, drag the images in the Rover002-Jump folder to the Jump Animation field, and the LaserShot.png file to the Bullet Animation field. Set up your K.R.A.B.B. with the following settings:

For Collision Shape, match this:

In the asset library, drag the K.R.A.B.B. above the Mars Training Rover. This will make it the default rover. Now, you can test your Gliese level (by soloing each scene) with this rover to make sure it's challenging, yet attainable. You'll notice some problems with the gun destroying ground objects, but we'll solve that soon enough.

Now, let's do the same with Rover 003. This one uses a single image for the Default Animation, but an image sequence for the jump. We'll get to the bullet for this one in a moment, but set it up as follows:

Collision Shape should look as follows:

You'll notice that a lot of the settings are different on this character, and you may wonder what the advantage of this is (since it doesn't lean as much as the K.R.A.B.B.). Well, it's a tank, so the damage it can take will be higher (which we'll set up shortly), and it can do multiple jumps before recharging (five, to be exact). This way, this rover can fly for short distances using flappy-bird style controls. It's going to take a lot more skill to pilot this rover, but once mastered, it'll be unstoppable. Let's move on to the bullet for this rover.
Click on the Edit button (the little pencil icon) inside Bullet Animation (once you've dragged the missile.png file into the field), and let's add a flame trail. Set up a particle emitter on the missile and position it as shown in the following screenshots:

The image on the left shows the placement of the missile and the particle emitter. On the right, you can see the flame setup. You may wonder why it is pointed in the opposite direction. This actually makes the flames look more realistic (as if they're drifting behind the missile).

Preparing graphic assets for use in Buildbox

Okay, so as we said before, the only graphic assets that Buildbox can use are PNG files. If this were just a simple tutorial on how to make Ramblin' Rover, we could leave it there. But it's not. Ramblin' Rover is just an example of how a game is made; we want to give you all of the tools and base knowledge you need to create your own games from scratch. Even if you don't create your own graphic assets, you need to be able to tell whoever is creating them for you how you want them. And more importantly, you need to know why.

Graphics are absolutely the most important part of developing a game. After all, you saw how just some eyes and sneakers made a cute character that people would want to see. Graphics create your world. They create characters that people want to see succeed. Most importantly, graphics create the feel of your game and differentiate it from other games on the market.

What exactly is a PNG file?

Anybody remember GIF files? No, not the animated GIFs that you see on most chat rooms and on Facebook (although they are related). Back in the 1990s, a still-frame GIF file was the best way to have a graphics file with a transparent background. GIFs can be used for animation and can serve a number of different purposes. However, GIFs were clunky. How so? Well, their color reduction is lossy: when an image was converted, information was lost, and artifacts and noise could show up.

Furthermore, GIFs use indexed colors. This means that anywhere from 2 to 256 colors can be used, and that's why you see something known as banding in GIF imagery. In real life, something that goes from dark to light (because of lighting and shadows) transitions smoothly, in what's known as a gradient. With indexed colors, banding occurs when the various shades fall outside of the index. In this case, the colors of these pixels are quantized (snapped) to the nearest color in the index. The images here show a noisy and banded GIF (left) versus the original picture (right):

So along came PNGs (PNG stands for Portable Network Graphics). Originally, the PNG format was what a program called Macromedia Fireworks used to save projects. Later, the same software became Adobe Fireworks, part of the Creative Cloud. Fireworks would cut up a graphics file into a table or image map and make areas of the image clickable via hyperlink for HTML web files. PNGs were still not widely supported by web browsers, so it would export the final web files as GIFs or JPEGs. But somewhere along the line, someone realized that the PNG image itself was extremely bandwidth-efficient. So, in the 2000s, PNGs started to see some support in browsers. Up until around 2008, though, Microsoft's Internet Explorer still did not support PNGs with transparency, so some strange CSS hacks were needed to utilize them.
Today, the PNG file is the most widely used network-based image format. It's lossless, has great transparency support, and is extremely efficient. PNGs are very widely used, and this is probably why Buildbox restricts compatibility to this format. Remember, Buildbox can export for multiple mobile and desktop platforms.

Alright, so PNGs are great and very compatible. But there are multiple flavours of PNG files. So, what differentiates them?

What bit-ratings mean

When dealing with bit-ratings, you have to understand that the terms 8-bit image and 24-bit image may refer to two different types of rating, or even to exactly the same type of image. Confused? Good, because when dealing with a graphics professional who creates your assets, you're going to have to be a lot more specific. So let's give you a brief education in this.

Your typical image is 8 bits per channel (8 bpc), or 24 bits total (because there are three channels: red, green, and blue). This is also what is meant by a 16.7 million-color image. The math is pretty simple. A bit is either 0 or 1. 8 bits may look something like 01100110, which gives 256 possible combinations on that channel. Why? Because to calculate the number of possibilities, you raise the number of possible values per slot to the power of the number of slots. 0 or 1 is 2 possibilities, and 8 bits is 8 slots, so 2x2x2x2x2x2x2x2 (2 to the 8th power) is 256. To combine the channels of a pixel, you multiply the possibilities: 256x256x256 is 16.7 million. This is how we know that there are 16.7 million possible colors in an 8 bpc (24-bit) image. So saying 8-bit may mean per channel or overall; this is why it's extremely important to add the word "channel" if that's what you mean.

Finally, there is a fourth channel called alpha. The alpha channel is the transparency channel. So when you're talking about a 24-bit PNG with transparency, you're really talking about a 32-bit image.

Why is this important to know? Because some graphics programs (such as Photoshop) have a 24-bit PNG option with a checkbox for transparency, while some other programs (such as the 3D software we used, Lightwave) have separate options for a 24-bit PNG and a 32-bit PNG. These are essentially the same as the Photoshop options, just with different names. By understanding what these bits per channel are and what they do, you can navigate your image-creation software's options better.

So, what's an 8-bit PNG, and why is it so important to differentiate it from an 8-bit per channel PNG (or 24-bit PNG)? Because an 8-bit PNG is highly compressed. Much like a GIF, it uses indexed colors. It also uses a great algorithm to "dither" (blend) the colors to fill them in and avoid banding. 8-bit PNG files are extremely light on resources (that is, they are much smaller files), and they still look good, unless they have transparency. Because they are so highly compressed, the alpha channel is included in the 8 bits. So, if you use 8-bit PNG files for objects that require transparency, they will end up with a white ghosting effect around them and look terrible on screen, much like a weather report where the reporter's green screen is bad.

So, the rule is...

What all this means for you is pretty simple. For objects that require transparency, always use 24-bit PNG files with transparency (also called 8 bits per channel, or 32-bit images). For objects with no transparency (such as block-shaped obstacles and objects), use 8-bit PNG files.
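To put some rough numbers behind this rule, here is a quick back-of-the-envelope calculation written as a small C# sketch. The 1024x1024 sprite size is an assumed example, and the figures are uncompressed in-memory sizes, before PNG's lossless compression is applied:

    using System;

    class PngFootprint
    {
        static void Main()
        {
            int width = 1024, height = 1024;    // assumed sprite dimensions
            long pixels = (long)width * height;

            // 8-bit indexed: 1 byte per pixel, plus a 256-entry RGBA palette
            long indexedBytes = pixels * 1 + 256 * 4;

            // 32-bit RGBA: 4 bytes per pixel (8 bits each for R, G, B, and alpha)
            long rgbaBytes = pixels * 4;

            Console.WriteLine($"8-bit indexed: {indexedBytes / 1024} KB");  // ~1025 KB
            Console.WriteLine($"32-bit RGBA:   {rgbaBytes / 1024} KB");     // 4096 KB
        }
    }

Roughly a 4x difference per image, which adds up quickly across animation frames and atlases.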
By following this rule, you'll keep your game looking great while avoiding bloated project files. In the end, Buildbox repacks all of the images in your project into atlases (which we'll cover later) that are 32-bit. However, it's always good practice to stay lean.

If you were a Buildbox 1.x user, you may remember that Buildbox had some issues with DPI (dots per inch) between the standard 72 and the 144 of retina displays. This issue is a thing of the past with Buildbox 2.

Image sequences

Think of a film strip. It's just a sequence of still images known as frames. A standard United States film runs at 24 frames per second (well, really 23.976, but let's round up for our purposes). Also, in the US, television runs at 30 frames per second (again, 29.97, but whatever... let's round up). Remember that each image in our sequence is a full image with all of the resources associated with it. We can quite literally cut our necessary resources in half by cutting this to 15 frames per second (fps).

If you open the content you downloaded and navigate to Projects/RamblinRover/Characters/Rover001-Body, you'll see that the images are named Rover001-body_001.png, Rover001-body_002.png, and so on. The final number indicates the frame's position in the sequence (first 001, then 002, and so on). The animation is really just the satellite dish rotating, along with the scanner light in the window. But what you'll really notice is that this animation is loopable. Loopable simply means that the animation can loop (play over and over again) without you noticing a bump in the footage (the final frame leads seamlessly back to the first). If you're not creating these animations yourself, you'll need to specify to your graphics professional that these animations should be loopable at 15 fps. They should understand exactly what you mean, and if they don't... you may consider finding a new animator.

Recommended software for graphics assets

For the purposes of context (now that you understand more about graphics and Buildbox), a bit of reinforcement couldn't hurt. A key piece of graphics software is the Adobe Creative Cloud subscription (http://www.adobe.com/CreativeCloud). Given its bang for the buck, it just can't be beaten. With it, you'll get Photoshop (which can be used for all graphics assets, from your game's icon to obstacles and other objects), Illustrator (which is great for navigational buttons), After Effects (very useful for animated image sequences), Premiere Pro (a video editing application for marketing videos made from screen-captured gameplay), and Audition (for editing all your sound).

You may also want some 3D software, such as Lightwave, 3D Studio Max, or Maya. These can greatly improve your ability to make characters and enemies, and to create still renders for menus and backgrounds. Most of the assets in Ramblin' Rover were created with the 3D software Lightwave.

There are free options for all of these tools. However, there are not nearly as many tutorials and resources available on the web to help you learn and create with them. One key thing to remember when using free software: if it's free... you're the product. In other words, some benefits come with paid software, such as better support and being part of the industry standard. Free software seems to be in a perpetual state of "beta testing". If you use free software, read your End User License Agreement (known as a EULA) very carefully.
Some software may require you to credit them in some way in exchange for the privilege of using their software for profit. They may even lay claim to part of your profits. Okay, let's get to actually using our graphics in Ramblin' Rover...

Summary

See? It's not that tough to follow. By using plain-English explanations combined with demonstrations of some significant and intricate processes, you'll be taken on a journey meant to stimulate your imagination and educate you on how to use the software. The book also comes with the complete project files and assets to help you follow along the entire way through the build process. You'll be making your own games in no time!

Resources for Article:

Further resources on this subject:

- What Makes a Game a Game? [article]
- Alice 3: Controlling the Behavior of Animations [article]
- Building a Gallery Application [article]
Getting Ready to Fight

Packt
08 Sep 2016
15 min read
In this article by Ashley Godbold, author of the book Mastering Unity 2D Game Development, Second Edition, we will lay the main foundation for the battle system of our game. We will create the Heads Up Display (HUD) as well as design the overall logic of the battle system. The following topics will be covered in this article:

- Creating a state manager to handle the logic behind a turn-based battle system
- Working with Mecanim in the code
- Exploring RPG UI
- Creating the game's HUD

(For more resources related to this topic, see here.)

Setting up our battle state manager

The most unique and important part of a turn-based battle system is the turns. Controlling the turns is incredibly important, and we will need something to handle the logic behind the actual turns for us. We'll accomplish this by creating a battle state machine.

The battle state manager

Starting back in our BattleScene, we need to create a state machine using all of Mecanim's handy features. Although we will still only be using a fraction of the functionality with the RPG sample, I advise you to investigate and read more about its capabilities.

Navigate to Assets/Animation/Controllers and create a new Animator Controller called BattleStateMachine, and then we can begin putting together the battle state machine. The following screenshot shows you the states, transitions, and properties that we will need:

As shown in the preceding screenshot, we have created eight states to control the flow of a battle, with two Boolean parameters to control its transitions. The transitions are defined as follows:

From Begin_Battle to Intro:
- BattleReady set to true
- Has Exit Time set to false (deselected)
- Transition Duration set to 0

From Intro to Player_Move:
- Has Exit Time set to true
- Exit Time set to 0.9
- Transition Duration set to 2

From Player_Move to Player_Attack:
- PlayerReady set to true
- Has Exit Time set to false
- Transition Duration set to 0

From Player_Attack to Change_Control:
- PlayerReady set to false
- Has Exit Time set to false
- Transition Duration set to 2

From Change_Control to Enemy_Attack:
- Has Exit Time set to true
- Exit Time set to 0.9
- Transition Duration set to 2

From Enemy_Attack to Player_Move:
- BattleReady set to true
- Has Exit Time set to false
- Transition Duration set to 2

From Enemy_Attack to Battle_Result:
- BattleReady set to false
- Has Exit Time set to false
- Transition Duration set to 2

From Battle_Result to Battle_End:
- Has Exit Time set to true
- Exit Time set to 0.9
- Transition Duration set to 5

Summing up, what we have built is a steady flow of battle, which can be summarized as follows:

1. The battle begins and we show a little introductory clip to tell the player about the battle.
2. Once the player has control, we wait for them to finish their move.
3. We then perform the player's attack and switch control over to the enemy AI.
4. If there are any enemies left, they get to attack the player (if they are not too scared and have not run away).
5. If the battle continues, we switch back to the player; otherwise, we show the battle result.
6. We show the result for five seconds (or until the player hits a key), then finish the battle and return the player to the world, together with whatever loot and experience was gained.

This is just a simple flow, which can be extended as much as you want, and as we continue, you will see all the points where you could expand it.

With our animator state machine created, we now just need to attach it to our battle manager so that it will be available when the battle runs. The following are the steps to do this:

1. Open up BattleScene.
2. Select the BattleManager game object in the project Hierarchy and add an Animator component to it.
3. Now drag the BattleStateMachine animator controller we just created into the Controller property of the Animator component.

The preceding steps attached our new battle state machine to our battle engine. Now, we just need to be able to reference the BattleStateMachine Mecanim state machine from the BattleManager script. To do so, open up the BattleManager script in Assets/Scripts and add the following variable to the top of the class:

    private Animator battleStateManager;

Then, to capture the configured Animator in our BattleManager script, we add the following to an Awake function placed before the Start function:

    void Awake() {
        battleStateManager = GetComponent<Animator>();
        if (battleStateManager == null) {
            Debug.LogError("No battleStateMachine Animator found.");
        }
    }

We have to assign it this way because all the functionality to integrate the Animator Controller is built into the Animator component. We cannot simply attach the controller directly to the BattleManager script and use it. Now that it's all wired up, let's start using it.

Getting to the state manager in the code

Now that we have our state manager running in Mecanim, we just need to be able to access it from the code. However, at first glance, there is a barrier to achieving this. The reason is that the Mecanim system uses hashes (integer ID keys for objects), not strings, to identify states within its engine (still not clear why, but probably for performance reasons).

To access the states in Mecanim, Unity provides a hashing algorithm to help you, which is fine for one-off checks but a bit of an overhead when you need per-frame access. You can check whether a state's name matches a specific string using the following:

    GetCurrentAnimatorStateInfo(0).IsName("Thing you're checking")

But there is no way to store the name of the current state in a variable. A simple solution to this is to generate and cache all the state hashes when we start, and then use the cache to talk to the Mecanim engine.

First, let's remove the placeholder code for the old enum state machine. Remove the following code from the top of the BattleManager script:

    enum BattlePhase {
        PlayerAttack,
        EnemyAttack
    }
    private BattlePhase phase;

Also, remove the following line from the Start method:

    phase = BattlePhase.PlayerAttack;

There is still a reference in the Update method for our buttons, but we will update that shortly; feel free to comment it out for now if you wish, but don't delete it.

Now, to begin working with our new state machine, we need a replica of the available states we have defined in our Mecanim state machine. For this, we just need an enumeration using the same names (you can create this either as a new C# script or simply place it in the BattleManager class), as follows:

    public enum BattleState {
        Begin_Battle,
        Intro,
        Player_Move,
        Player_Attack,
        Change_Control,
        Enemy_Attack,
        Battle_Result,
        Battle_End
    }

It may seem strange to have a duplicate of your states in the state machine and in the code; however, at the time of writing, it is necessary. Mecanim does not expose the names of the states outside of the engine other than through hashes. You can either use this approach and make it dynamic, or extract the state hashes and store them in a dictionary for use. Mecanim makes managing state machines very simple under the hood and is extremely powerful, much better than trawling through code every time you want to update the state machine.
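For a single one-off check, the hash-based pattern looks something like the following minimal sketch inside the BattleManager class (the Player_Move name comes from the states defined above; caching the hash in a static field avoids recomputing it every frame):

    // Computed once; Animator.StringToHash is the same algorithm Mecanim uses internally
    static readonly int playerMoveHash = Animator.StringToHash("Player_Move");

    bool IsInPlayerMove() {
        // Compare the current state's hash rather than a string
        return battleStateManager.GetCurrentAnimatorStateInfo(0).shortNameHash == playerMoveHash;
    }

Caching one hash works for a single state; the next step generalizes this to all of our states.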
Next, we need a location to cache the hashes the state machine needs, and a property to keep the current state so that we don't constantly query the engine for a hash. So, make sure the following using statements are at the beginning of the BattleManager class:

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

Then, add the following variables to the top of the BattleManager class:

    private Dictionary<int, BattleState> battleStateHash = new Dictionary<int, BattleState>();
    private BattleState currentBattleState;

Finally, we just need to integrate the animator state machine we have created. So, create a new GetAnimationStates method in the BattleManager class as follows:

    void GetAnimationStates() {
        foreach (BattleState state in (BattleState[])System.Enum.GetValues(typeof(BattleState))) {
            battleStateHash.Add(Animator.StringToHash(state.ToString()), state);
        }
    }

This simply generates a hash for each corresponding animation state in Mecanim and stores the resultant hashes in a dictionary that we can use without having to calculate them at runtime when we need to talk to the state machine. Sadly, there is no way to gather this information from Mecanim at runtime, as it is only available in the editor. You could gather the hashes from the animator and store them in a file to avoid this, but it won't save you much.

To complete this, we just need to call the new method in the Start function of the BattleManager script by adding the following:

    GetAnimationStates();

Now that we have our states, we can use them in our running game to control both the logic that is applied and the GUI elements that are drawn to the screen. Now add the Update function to the BattleManager class as follows:

    void Update() {
        currentBattleState = battleStateHash[battleStateManager.GetCurrentAnimatorStateInfo(0).shortNameHash];

        switch (currentBattleState) {
            case BattleState.Intro:
                break;
            case BattleState.Player_Move:
                break;
            case BattleState.Player_Attack:
                break;
            case BattleState.Change_Control:
                break;
            case BattleState.Enemy_Attack:
                break;
            case BattleState.Battle_Result:
                break;
            case BattleState.Battle_End:
                break;
            default:
                break;
        }
    }

The preceding code gets the current state from the animator state machine once per frame and then sets up a choice (switch statement) for what can happen based on the current state. (Remember, it is the state machine that decides which state follows which in the Mecanim engine, not nasty nested if statements everywhere in code.)

Now we are going to update the functionality that turns our GUI button on and off. Update the line of code in the Update method that currently reads:

    if (phase == BattlePhase.PlayerAttack) {

so that it now reads:

    if (currentBattleState == BattleState.Player_Move) {

This makes it so that the buttons are only visible when it is time for the player to perform their move. With these in place, we are ready to start adding in some battle logic.

Starting the battle

As it stands, the state machine is waiting at the Begin_Battle state for us to kick things off. Obviously, we want to do this when we are ready and all the pieces on the board are in place. When the Battle scene we added starts, we load up the player and randomly spawn a number of enemies into the fray using a coroutine function called SpawnEnemies. So, only when all the dragons are ready and waiting to be chopped down do we want to kick things off.
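The SpawnEnemies coroutine itself is built earlier in the book; purely as a point of reference, a minimal sketch of such a coroutine might look like the following (the enemyPrefab field, the spawn positions, and the two-to-four enemy count are illustrative assumptions, not the book's exact values):

    public GameObject enemyPrefab;   // hypothetical prefab reference, assigned in the Inspector

    IEnumerator SpawnEnemies() {
        int enemyCount = Random.Range(2, 5);   // spawn 2 to 4 enemies
        for (int i = 0; i < enemyCount; i++) {
            // Spread the enemies out along the x axis
            Vector3 position = new Vector3(Random.Range(-4f, 4f), 0f, 0f);
            Instantiate(enemyPrefab, position, Quaternion.identity);
            yield return new WaitForSeconds(0.5f);   // stagger the spawns
        }
    }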
To tell the state machine to start the battle, we simply add the following line just after the end of the for loop in the SpawnEnemies IEnumerator coroutine function:

    battleStateManager.SetBool("BattleReady", true);

Now, when everything is in place, the battle will finally begin.

Introductory animation

When the battle starts, we are going to display a little introductory image that states who the player is going to be fighting against. We'll have it slide into the scene and then slide out. You can do all sorts of interesting stuff with this introductory animation, like animating the individual images, but I'll leave that for you to play with. Can't have all the fun now, can I?

Start by creating a new Canvas and renaming it IntroCanvas so that we can distinguish it from the canvas that will hold our buttons. At this point, since we are adding a second canvas to the scene, we should probably rename the first one to something that is easier to identify. It's a matter of preference, but I like to use different canvases for different UI elements: for example, one for the HUD, one for pause menus, one for animations, and so on. You can put them all on a single canvas and use Panels and CanvasGroup components to distinguish between them; it's really up to you.

As a child of the new IntroCanvas, create a Panel with the properties shown in the following screenshot. Notice that the Image object's Color property is set to black with the alpha set to about half:

Now add two UI Images and a UI Text as children of the Panel. Name the first image PlayerImage and set its properties as shown in the following screenshot. Be sure to set Preserve Aspect to true:

Name the second image EnemyImage and set its properties as shown in the following screenshot:

For the text, set the properties as shown in the following screenshot:

Your Panel should now appear as mine did in the image at the beginning of this section. Now let's give this Panel its animation. With the Panel selected, select the Animation tab and hit the Create button. Save the animation as IntroSlideAnimation in the Assets/Animation/Clips folder.

At the 0:00 frame, set the Panel's X position to 600, as shown in the following screenshot:

Now, at the 0:45 frame, set the Panel's X position to 0. Place the playhead at the 1:20 frame and set the Panel's X position to 0 there as well, by selecting Add Key, as shown in the following screenshot:

Create the last frame at 2:00 by setting the Panel's X position to -600.

When the Panel slides in, it does an annoying bounce thing instead of staying put. We need to fix this by adjusting the animation curve. Select the Curves tab:

When you select the Curves tab, you should see something like the following:

The reason for the bounce is the wiggle that occurs between the two center keyframes. To fix this, right-click on the two center points on the curve (represented by red dots) and select Flat, as shown in the following screenshot:

After you do so, the curve should be constant (flat) in the center, as shown in the following screenshot:

The last thing we need to do to connect this to our BattleStateManager is to adjust the properties of the Panel's Animator. With the Panel selected, select the Animator tab. You should see something like the following:

Right now, the animation plays immediately when the scene is entered. However, since we want this to tie in with our BattleStateManager and only begin playing in the Intro state, we do not want this to be the default animation.
Create an empty state within the Animator and set it as the default state. Name this state OutOfFrame. Now make a Trigger parameter called Intro. Set the transition between the two states so that it has the following properties:

The last things we want to do before we move on are to make sure this animation does not loop, rename the new Animator, and place it in the correct subfolder. In the Project view, select IntroSlideAnimation from the Assets/Animation/Clips folder and deselect Loop Time. Rename the Panel's Animator to VsAnimator and move it to the Assets/Animation/Controllers folder.

Currently, the Panel appears right in the middle of the screen at all times, so go ahead and set the Panel's X position to 600 to get it out of the way. Now we can access this in our BattleManager script. Currently, the state machine pauses at the Intro state for a few seconds; let's have our Panel animation pop in there. Add the following variable declarations to our BattleManager script:

    public GameObject introPanel;
    Animator introPanelAnim;

And add the following to the Awake function:

    introPanelAnim = introPanel.GetComponent<Animator>();

Now add the following to the case line of the Intro state in the Update function:

    case BattleState.Intro:
        introPanelAnim.SetTrigger("Intro");
        break;

For this to work, we have to drag and drop the Panel into the Intro Panel slot in the BattleManager Inspector.

As the battle is now in progress and control is being passed to the player, we need some interaction from the user. Currently, the player can run away, but that's not at all interesting. We want our player to be able to fight! So, let's design a graphical user interface that will allow her to attack those adorable, but super mean, dragons.

Summary

Getting the battle right, based on the style of your game, is very important, as it is where the player will spend the majority of their time. Keep the player engaged and try to make each battle different in some way; repetitiveness is a tricky problem to solve, and you don't want to bore the player. Think about different attacks your player can perform that could strengthen as the player strengthens. In this article, you covered the following:

- Setting up the logic of our turn-based battle system
- Working with state machines in the code
- Different RPG UI overlays
- Setting up the HUD of our game so that our player can do more than just run away

Resources for Article:

Further resources on this subject:

- Customizing an Avatar in Flash Multiplayer Virtual Worlds [article]
- Looking Good – The Graphical Interface [article]
- The Vertex Functions [article]
Animations Sprites

Packt
03 Aug 2016
10 min read
In this article by Abdelrahman Saher and Francesco Sapio, authors of the book Unity 5.x 2D Game Development Blueprints, we will learn how to create and play animations for the player character, and see how Unity controls the player and other elements in the game. The following is what we will go through:

- Animating sprites
- Integrating animations into animators
- Continuing our platform game

(For more resources related to this topic, see here.)

Animating sprites

Creating and using animations for sprites is a bit easier than other parts of the development stage. By using animations and tools to animate our game, we have the ability to breathe some life into it. Let's start by creating a running animation for our player. There are two ways of creating animations in Unity: automatic clip creation and manual clip creation.

Automatic clip creation

This is the recommended method for creating 2D animations, as Unity is able to create the entire animation for you with a single click. If you navigate in the Project panel to Platformer Pack | Player | p1_walk, you will find the animation both as a single sprite sheet (p1_walk.png) and as a folder of PNG images, one for each frame of the animation. We will use the latter, because the single sprite sheet is not optimized for Unity and will not work perfectly.

In the Project panel, create a new folder and rename it Animations. Then, select all the PNG images in Platformer Pack | Player | p1_walk | PNG and drop them into the Hierarchy panel:

A new window will appear, giving us the option to save them as a new animation in a folder of our choosing. Let's save the animation in our new folder titled Animations as WalkAnim:

After saving the animation, look in the Project panel next to the animation file. There is now another asset with the name of one of the dropped sprites. This is an Animator Controller and, as the name suggests, it is used to control the animation. Let's rename it PlayerAnimator so that we can distinguish it later on.

In the Hierarchy panel, a game object has been automatically created with the original name of our controller. If we select it, the Inspector should look like the following:

You can always add an Animator component to a game object by clicking on Add Component | Miscellaneous | Animator.

As you can see, below the Sprite Renderer component there is an Animator component. This component controls the animation for the player and is usually accessed through a custom script to change the animations. For now, drag and drop the new PlayerAnimator controller onto our Player object.

Manual clip creation

Now, we also need a jump animation for our character. However, since we only have one sprite for the player jumping, we will manually create the animation clip for it. To achieve this, select the Player object in the Hierarchy panel and open the Animation window from Window | Animation. The Animation window will appear, as shown in the screenshot below:

As you can see, our animation WalkAnim is already selected. To create a new animation clip, click where the text WalkAnim is. A drop-down menu appears, from which you can select Create New Clip. Save the new animation in the Animations folder as JumpAnim.

On the right, you can find the animation timeline. Select the folder Platformer Pack/Player in the Project panel. Drag and drop the sprite p1_jump onto the timeline. You can see that the timeline for the animation has changed.
In fact, it now contains the jumping animation, even though it is made out of only one sprite. Finally, save what we have done so far. The Animation window's features are best used to fine-tune animations or even merge two or more animations into one.

Now the Animations folder should look like this in the Project panel:

By selecting the WalkAnim file, you will be able to see the Preview panel, which sits at the bottom of the Inspector when an object that may contain animation is selected. To test the animation, drag the Player object into the Preview panel and hit play:

In the Preview panel, you can check out your animations without having to test them directly from code. You can easily select the desired animation, then drag a game object with the corresponding Animator Controller into the Preview panel.

The Animator

In order to display an animation on a game object, you will use both Animator Components and Animator Controllers. These two work hand in hand to control the animation of any animated object that you might have, and are described below:

- Animator Controller: uses a state machine to manage the animation states and the transitions between one another, almost like a flow chart of animations.
- Animator Component: uses an Animator Controller to define which animation clips to use, and applies them to the game object when needed. It also controls the blending and transitions between them.

Let's start modifying our controller to make it right for our character animations. Click on the Player and then open the Animator window from Window | Animator. We should see something like this:

This is a state machine, albeit an automatically generated one. To move around the grid, hold the middle mouse button and drag. First, let's understand how the different kinds of nodes work:

- Entry node (marked green): used when transitioning into a state machine, provided the required conditions are met.
- Exit node (marked red): used to exit a state machine when the conditions have changed or completed. By default, it is not present, as there isn't one in the previous image.
- Default node (marked orange): the default state of the Animator, automatically transitioned to from the entry node.
- Sub-state nodes (marked grey): also called custom nodes. They typically represent a state for an object where an event will occur (in our case, an animation will be played).
- Transitions (arrows): allow state machines to switch between one another by setting the conditions that the Animator will use to decide which state becomes active.

To keep things organized, let's reorder the nodes in the grid. Drag the three sub-states to just under the Entry node. Order them, from left to right: WalkAnim, New Animation, and JumpAnim. Then, right-click on New Animation and choose Set as Layer Default State. Now our Animator window should look like the following:

To edit a node, we need to select it and modify it as needed in the Inspector. So, select New Animation and the Inspector should look like the screenshot below:

Here, we have access to all the properties of the state or node New Animation. Let's change its name to Idle. Below the name is the state's Speed, which controls how fast the animation will be played. Next, we have Motion, which refers to the animation that will be used for this state.
After we have changed the name, save the scene; this is what everything should look like now:

We can test what we have done so far by hitting play. As we can see in the Game view, the character is not animated. This is because the character is always in the Idle state, and there are no transitions to let it change state. While the game is running, we can see in the Animator window that the Idle state is playing.

Stop the game, and right-click on the WalkAnim node in the Animator window. Select Set as Layer Default State from the menu. As a result, the walking animation will be played automatically at the beginning of the game. If we press the play button again, we can see that the walk animation is played, as shown in the screenshot below:

You can experiment with the other states of the Animator. For example, you can try setting JumpAnim as the default animation, or even tweak the speed of each state to see how they are affected. Now that we know the basics of how the Animator works, let's stop the playback and revert the default state to the Idle state.

To be able to connect our states together, we need to create transitions. To achieve this, right-click on the Idle state and select Make Transition, which turns the mouse cursor into an arrow. By clicking on other states, we can connect them with a transition. In our case, click on the WalkAnim state to make a transition from the Idle state to the WalkAnim state. The Animator window should look like the following:

If we click on the arrow, we have access to its properties in the Inspector, as shown in the following screenshot:

The main properties that we might want to change are:

- Name (optional): We can assign a name to the transition. This is useful for keeping everything organized and easy to access. In this case, let's name this transition Start Walking.
- Has Exit Time: Whether or not the animation should play to the end before exiting its state when the conditions are no longer met.
- Conditions: The conditions that should be met for the transition to take place.

Let's try adding a condition and see what happens:

When we try to create a condition for our transition, the message Parameter does not exist in Controller appears next to it, which means that we need to add a parameter to be used for our condition. To create a parameter, switch to Parameters at the top left of the Animator window, add a new float using the + button, and name it PlayerSpeed, as shown in the following screenshot:

Any parameters created in the Animator are usually changed from code, and those changes affect the state of the animation. In the following screenshot, we can see the PlayerSpeed parameter on the left side:

Now that we have created a parameter, let's head back to the transition. Click the drop-down button next to the condition we created earlier and choose the parameter PlayerSpeed. After choosing the parameter, another option appears next to it. You can choose either Greater or Less, which means the transition will happen when this parameter is respectively greater than X or less than X. Don't worry, that X will be changed by our code later on. For now, choose Greater and set the value to 1, which means that when the player speed is more than one, the walk animation starts playing. You can test what we have done so far by changing the PlayerSpeed parameter at runtime.
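In practice, this parameter is driven from a movement script. The following is a minimal sketch of how that might look; the Rigidbody2D-based movement and the component layout are assumptions for illustration, while the PlayerSpeed name comes from the parameter we just created:

    using UnityEngine;

    public class PlayerAnimationDriver : MonoBehaviour {
        private Animator animator;
        private Rigidbody2D body;

        void Awake() {
            animator = GetComponent<Animator>();
            body = GetComponent<Rigidbody2D>();
        }

        void Update() {
            // Feed the absolute horizontal speed to the Animator, so the
            // Idle -> WalkAnim transition fires once it exceeds 1
            animator.SetFloat("PlayerSpeed", Mathf.Abs(body.velocity.x));
        }
    }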
Summary

This wraps up everything that we will cover in this article. So far, we have added animations to our character, to be played according to the player's controls.

Resources for Article:

Further resources on this subject:

- Animations in Cocos2d-x [Article]
- Adding Animations [Article]
- Bringing Your Game to Life with AI and Animations [Article]
The Game World

Packt
23 Feb 2016
39 min read
In this article, we will cover the basics of creating immersive areas where players can walk around and interact, as well as some of the techniques used to manage those areas. This article will give you some practical tips and tricks for the spritesheet system introduced with Unity 4.3 and how to get it to work for you. Lastly, we will also take a cursory look at how shaders work in the 2D world and the considerations you need to keep in mind when using them. However, we won't be implementing shaders, as that could be another book in itself.

The following is the list of topics that will be covered in this article:

- Working with environments
- Looking at sprite layers
- Handling multiple resolutions
- An overview of parallaxing and effects
- Shaders in 2D – an overview

(For more resources related to this topic, see here.)

Backgrounds and layers

Now that we have our hero in play, it would be nice to give him a place to live and walk around, so let's set up the home town and decorate it. Firstly, we are going to need some more assets. So, from the asset pack you downloaded earlier, grab the following assets from the Environments pack, place them in the Assets/Sprites/Environment folder, and name them as follows:

- Name the ENVIRONMENTS/STEAMPUNK/background01.png file Assets/Sprites/Environment/background01
- Name the ENVIRONMENTS/STEAMPUNK/environmentalAssets.png file Assets/Sprites/Environment/environmentalAssets
- Name the ENVIRONMENTS/FANTASY/environmentalAssets.png file Assets/Sprites/Environment/environmentalAssets2

To slice or not to slice

It is always better to pack many of the same images onto a single asset/atlas and then use the Sprite Editor to define the regions on that texture for each sprite, as long as all the sprites on that sheet are going to be used in the same scene. The reason for this is that when Unity tries to draw to the screen, it needs to send the images to be drawn to the graphics card; if there are many images to send, this can take some time. If, however, it is just one image, it is a lot simpler and more performant, with only one file to send.

There needs to be a balance: too large an image and the upload to the graphics card can take up too many resources; too many individual images and you have the same problem. The basic rules of thumb are as follows:

- If the background is a full-screen background or a large image, then keep it separate.
- If you have many images and all are for the same scene, then put them into a spritesheet/atlas.
- If you have many images but all are for different scenes, then group them as best you can, with common items on one sheet and scene-specific items on different sheets. You'll have several spritesheets to use.

You basically want to keep as much together as makes sense, and not send unnecessary images that won't get used to the graphics card. Find your balance.

The town background

First, let's add a background for the town using the Assets/Sprites/Environment/background01 texture. It is shown in the following screenshot:

With the background asset, we don't need to do anything else other than ensure that it has been imported as a sprite (in case your project is still in 3D mode), as shown in the following screenshot:

The town buildings

For the steampunk environmental assets (Assets/Sprites/Environment/environmentalAssets) shown in the following screenshot, we need a bit more work; once these assets are imported, change the Sprite Mode to Multiple and load up the Sprite Editor using the Sprite Editor button.
Next, click on the Slice button, leave the settings at their default options, and then click on the Slice button in the new window, as shown in the following screenshot:

Click on Apply and close the Sprite Editor. You will have four new sprite textures available, as seen in the following screenshot:

The extra scenery

We saw what happens when you use a grid-type split on a spritesheet and when the automatic split works well, so what about when it doesn't go so well? If we look at the Fantasy environment pack (Assets/Sprites/Environment/environmentalAssets2), we will see the following:

After you have imported it and run the Split in the Sprite Editor, you will notice that one of the sprites does not get detected very well; altering the automatic split settings in this case doesn't help, so we need to do some manual manipulation, as shown in the following screenshot:

In the previous screenshot, you can see that just two of the rocks in the top-right sprite have been identified by the splicing routine. To fix this, just delete one of the selections and then expand the other manually, using the selection points in the corner of the selection box (after clicking on the sprite box). Here's how it will look before the correction:

After correction, you should see something like the following screenshot:

This gives us some nice additional assets to scatter around our town and give it a more homely feel, as shown in the following screenshot:

Building the scene

So, now that we have some nice assets to build with, we can start building our first town.

Adding the town background

Returning to the Scene view, you should see the following:

If, however, we add our town background texture (Assets/Sprites/Backgrounds/Background.png) to the scene by dragging it to either the project hierarchy or the Scene view, you will end up with the following:

Be sure to set the background texture's position appropriately once you add it to the scene; in this case, be sure the position of the transform is centered in the view at X = 0, Y = 0, Z = 0. Unity has a tendency to set the position relative to where your 3D view is at the time of adding it, which is almost never where you want it.

Our player has vanished! The reason for this is simple: Unity's sprite system has an ordering system that comes in two parts.

Sprite sorting layers

Sorting Layers (Edit | Project Settings | Tags and Layers) are collections of sprites, bulked together to form a single group. Layers can be configured to be drawn in a specific order on the screen, as shown in the following screenshot:

Sprite sorting order

Sprites within an individual layer can be sorted, allowing you to control the draw order of sprites within that layer. The sprite Inspector is used for this purpose, as shown in the following screenshot:

Sprite Sorting Layers should not be confused with Unity's rendering layers. Layers are a separate functionality used to control whether groups of game objects are drawn or managed together, whereas Sorting Layers control the draw order of sprites in a scene.

So the reason our player is no longer seen is that it is behind the background. As they are both in the same layer and have the same sort order, they are simply drawn in the order that they appear in the project hierarchy.

Updating the scene Sorting Layers

To resolve this, let's organize our sprite rendering by adding some sprite Sorting Layers.
So, open up the Tags and Layers inspector pane (by navigating to Edit | Project Settings | Tags and Layers), as shown in the following screenshot, and add the following Sorting Layers:
- Background
- Player
- Foreground
- GUI

You can reorder the layers underneath Default at any time by selecting a row and dragging it up and down the Sorting Layers list.

With the layers set up, we can now configure our game objects accordingly. So, set the Sorting Layer on our background01 sprite to the Background layer, as shown in the following screenshot:

Then, update the PlayerSprite layer to Player; our character will now be displayed in front of the background.

You could keep both objects on the same layer and set the Sort Order value appropriately, keeping the background at a Sort Order of 0 and the player at 10, which would draw the player in front. However, as you add more items to the scene, things will get tricky quickly, so it is better to group them into layers accordingly.

Now when we return to the scene, our hero is happily displayed, but he is seen hovering in the middle of our village. So let's fix that next by simply changing his position transform in the Inspector window. Setting the Y position transform to -2 will place our hero nicely in the middle of the street (provided you have set the pivot for the player sprite to bottom), as shown in the following screenshot:

Feel free at this point to also add some more background elements, such as trees and buildings, to fill out the scene using the environment assets we imported earlier.

Working with the camera
If you try to move the player left and right at the moment, our hero happily bobs along. However, you will quickly notice that we run into a problem: the hero soon disappears from the edge of the screen. To solve this, we need to make the camera follow the hero.

When creating new scripts to implement something, remember that just about every game made with Unity has most likely implemented either the same thing or something similar. Most developers just get on with it, but others, including the Unity team themselves, are keen to share their scripts to solve these challenges. So, in most cases, we will have something to work from. Don't start a script from scratch (unless it is a very small one to solve a tiny issue) if you can help it; here are some resources to get you started:
- Unity sample projects: http://Unity3d.com/learn/tutorials/projects
- Unity Patterns: http://unitypatterns.com/
- Unity wiki scripts section: http://wiki.Unity3d.com/index.php/Scripts (also check the other sections for more detail)

Once you become more experienced, it is better to use these scripts as a reference and try to create and improve on your own, unless they are from a maintained library such as https://github.com/nickgravelyn/UnityToolbag.

To make the camera follow the player, we'll take the script from the Unity 2D sample and modify it to fit our game. This script is nice because it also includes a Mario-style buffer zone, which allows the player to move without moving the camera until they reach the edge of the screen.

Create a new script called FollowCamera in the Assets\Scripts folder, remove the Start and Update functions, and then add the following properties:

    using UnityEngine;

    public class FollowCamera : MonoBehaviour
    {
        // Distance in the x axis the player can move before the
        // camera follows.
        public float xMargin = 1.5f;

        // Distance in the y axis the player can move before the
        // camera follows.
        public float yMargin = 1.5f;

        // How smoothly the camera catches up with its target
        // movement in the x axis.
        public float xSmooth = 1.5f;

        // How smoothly the camera catches up with its target
        // movement in the y axis.
        public float ySmooth = 1.5f;

        // The maximum x and y coordinates the camera can have.
        public Vector2 maxXAndY;

        // The minimum x and y coordinates the camera can have.
        public Vector2 minXAndY;

        // Reference to the player's transform.
        public Transform player;
    }

The variables are all commented to explain their purpose, but we'll cover each as we use them.

First off, we need to get the player object's position so that we can track the camera to it. This is done by adding the following code in the Awake function (note that we look the object up before taking its transform, so that a missing object is reported instead of causing a null reference error):

    void Awake()
    {
        // Set up the reference.
        var playerObject = GameObject.Find("Player");
        if (playerObject == null)
        {
            Debug.LogError("Player object not found");
        }
        else
        {
            player = playerObject.transform;
        }
    }

An alternative to discovering the player this way is to make the player property public and then assign it in the editor. There is no right or wrong way; it's just your preference. It is also good practice to add some element of debugging to let you know if there is a problem in the scene with a missing reference; otherwise, all you will see are errors such as "object not initialized" or "variable was null".

Next, we need a couple of helper methods to check whether the player has moved near the edge of the camera's bounds, as defined by the margin variables set in the preceding code:

    bool CheckXMargin()
    {
        // Returns true if the distance between the camera and the
        // player in the x axis is greater than the x margin.
        return Mathf.Abs(transform.position.x - player.position.x) > xMargin;
    }

    bool CheckYMargin()
    {
        // Returns true if the distance between the camera and the
        // player in the y axis is greater than the y margin.
        return Mathf.Abs(transform.position.y - player.position.y) > yMargin;
    }

To finish this script, we need to check each frame to see whether the player is close to the edge and update the camera's position accordingly. We also need to check whether the camera bounds have reached the edge of the screen and, if so, not move the camera beyond them.

Comparing Update, FixedUpdate, and LateUpdate
There is usually a lot of debate about which update method should be used within a Unity game. To put it simply, the FixedUpdate method is called at regular, fixed intervals throughout the lifetime of the game and is generally used for physics and time-sensitive code. The Update method, however, is called once for each frame drawn to the screen, and as the time taken to draw each frame can vary (due to the number of objects to be drawn and so on), the Update calls end up being fairly irregular.

For more detail on the difference between Update and FixedUpdate, see the Unity Learn tutorial video at http://unity3d.com/learn/tutorials/modules/beginner/scripting/update-and-fixedupdate.

As the player is being moved by the physics system, it is better to update the camera in the FixedUpdate method:

    void FixedUpdate()
    {
        // By default the target x and y coordinates of the camera
        // are its current x and y coordinates.
        float targetX = transform.position.x;
        float targetY = transform.position.y;

        // If the player has moved beyond the x margin...
        if (CheckXMargin())
        {
            // the target x coordinate should be a Lerp between
            // the camera's current x position and the player's
            // current x position.
            targetX = Mathf.Lerp(transform.position.x, player.position.x, xSmooth * Time.fixedDeltaTime);
        }

        // If the player has moved beyond the y margin...
        if (CheckYMargin())
        {
            // the target y coordinate should be a Lerp between
            // the camera's current y position and the player's
            // current y position.
            targetY = Mathf.Lerp(transform.position.y, player.position.y, ySmooth * Time.fixedDeltaTime);
        }

        // The target x and y coordinates should not be larger
        // than the maximum or smaller than the minimum.
        targetX = Mathf.Clamp(targetX, minXAndY.x, maxXAndY.x);
        targetY = Mathf.Clamp(targetY, minXAndY.y, maxXAndY.y);

        // Set the camera's position to the target position with
        // the same z component.
        transform.position = new Vector3(targetX, targetY, transform.position.z);
    }

As they say, every game is different, and how the camera acts can be different for every game. In a lot of cases, the camera should be updated in the LateUpdate method, after all drawing, updating, and physics are complete. This, however, can be a double-edged sword if you rely on calculations performed in the FixedUpdate method, such as the Lerp above. It all comes down to tweaking your camera system to work the way you need it to.

Once the script is saved, just attach it to the Main Camera element by dragging the script onto it, or by adding a script component to the camera and selecting the script. Finally, we just need to configure the script and the camera to fit our game size: set the orthographic Size of the camera to 2.7, and the Min X and Max X values to -5 and 5 respectively.

The perils of resolution
When dealing with cameras, there is always one thing that will trip us up as soon as we try to build for another platform: resolution. By default, the Unity player in the editor runs in the Free Aspect mode, as shown in the following screenshot:

The Aspect mode (from the Aspect drop-down) can be changed to represent the resolutions supported by each platform you can target. The following is what you get when you switch your build target to each platform:

To change the build target, go into your project's Build Settings by navigating to File | Build Settings or by pressing Ctrl + Shift + B, then select a platform and click on the Switch Platform button. This is shown in the following screenshot:

When you change the Aspect drop-down to view one of these resolutions, you will notice how the aspect ratio of what is drawn to the screen changes by either stretching or compressing the visible area. If you run the editor player in full screen by clicking on the Maximize on Play button and then clicking on the play icon, you will see this change more clearly. Alternatively, you can run your project on a target device to see the proper perspective output.

The reason I bring this up here is that if you used fixed bounds settings for your camera or game objects, those values may not work for every resolution, leaving your settings out of range or (in most cases) undersized. One way around this, sketched below, is platform-dependent compilation.
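The following is a minimal, hedged sketch of that idea; CameraBoundsByPlatform and the size values are purely illustrative and not part of the sample project. Each branch is compiled only into builds for the matching platform:

    using UnityEngine;

    public class CameraBoundsByPlatform : MonoBehaviour
    {
        void Awake()
        {
    #if UNITY_METRO
            Camera.main.orthographicSize = 3.5f;   // Windows 8 / Windows Store builds
    #elif UNITY_IPHONE
            Camera.main.orthographicSize = 3.0f;   // iOS builds
    #else
            Camera.main.orthographicSize = 2.7f;   // editor and desktop builds
    #endif
        }
    }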
More generally, you can handle this by altering the settings for each build, or by using compiler directives such as #if UNITY_METRO to force defaults depending on the build target (in this example, Windows 8). To read more about compiler directives, check the Unity documentation at http://docs.unity3d.com/Manual/PlatformDependentCompilation.html.

A better FollowCamera script
If you are only targeting one device/resolution, or your background scrolls indefinitely, then the preceding manual approach works fine. However, if you want it to be a little more dynamic, then we need to know what resolution we are working in and how much space our character has to travel. We will perform the following steps to do this:

1. We will change the min and max variables to private, as we no longer need to configure them in the Inspector window. The code is as follows:

    // The maximum x and y coordinates the camera can have.
    private Vector2 maxXAndY;

    // The minimum x and y coordinates the camera can have.
    private Vector2 minXAndY;

2. To work out how much space is available in our town, we need to interrogate the rendering size of our background sprite. So, in the Awake function, we add the following line of code:

    // Get the bounds for the background texture - world size
    var backgroundBounds = GameObject.Find("background").renderer.bounds;

3. Still in the Awake function, we work out our resolution and viewable space by interrogating the ViewportToWorldPoint method on the camera and converting the result to the same coordinate space as the sprite. Note that viewport (0, 0) is the bottom-left corner of the view and (1, 1) is the top-right, so we name the variables accordingly:

    // Get the viewable bounds of the camera in world coordinates
    var camBottomLeft = camera.ViewportToWorldPoint(new Vector3(0, 0, 0));
    var camTopRight = camera.ViewportToWorldPoint(new Vector3(1, 1, 0));

4. Finally, in the Awake function, we update the min and max values using the texture size and the camera's real-world bounds. This is done using the following lines of code:

    // Automatically set the min and max values
    minXAndY.x = backgroundBounds.min.x - camBottomLeft.x;
    maxXAndY.x = backgroundBounds.max.x - camTopRight.x;

In the end, it is up to your specific implementation for the type of game you are making to decide which pattern works best.

Transitioning and bounds
So our camera follows our player, but our hero can still walk off the screen and keep going forever, so let us stop that from happening.

Towns with borders
As you saw in the preceding section, you can use Unity's camera logic to figure out where things are on the screen. You can also do more complex ray testing to check where things are, but I find this overly complex unless you depend on that level of interaction. The simpler answer is just to use the native Box2D physics system to keep things in the scene. This might seem like overkill, but the 2D physics system is very fast and fluid, and it is simple to use.

Once we add the physics components, a Rigidbody 2D (to apply physics) and a Box Collider 2D (to detect collisions), to the player, we can make use of them straight away by adding some additional collision objects to stop the player running off. To do this, and to keep things organized, we will add three empty game objects (either by navigating to GameObject | Create Empty, or by pressing Ctrl + Shift + N) to the scene (one parent and two children) to manage these collision points, as shown in the following screenshot:

I've named them WorldBounds (parent) and LeftBorder and RightBorder (children) for reference.
Next, we will position each of the child game objects at the left- and right-hand sides of the screen, as shown in the following screenshot:

Next, we will add a Box Collider 2D to each border game object and increase its height to ensure that it covers the entire height of the scene. I've set the Y size to 5 for effect, as shown in the following screenshot:

The end result should look like the following screenshot, with the two new colliders highlighted in green:

Alternatively, you could have just created one of the children, added the box collider, duplicated it (by navigating to Edit | Duplicate or by pressing Ctrl + D), and moved it. If you have to create multiples of the same thing, this is a handy tip to remember.

If you run the project now, our hero can no longer escape this town on his own. However, as we want to let him leave, we can add a script to the new border game objects so that when the hero reaches the end of the town, he can leave.

Journeying onwards
Now that we have collision zones on our town's borders, we can hook into them by using a script that activates when the hero approaches.

Create a new C# script called NavigationPrompt, clear its contents, and populate it with the following code:

    using UnityEngine;

    public class NavigationPrompt : MonoBehaviour
    {
        bool showDialog;

        void OnCollisionEnter2D(Collision2D col)
        {
            showDialog = true;
        }

        void OnCollisionExit2D(Collision2D col)
        {
            showDialog = false;
        }
    }

The preceding code gives us the framework of a collision detection script that sets a flag on and off when the character interacts with whatever the script is attached to, provided that object has a physics collision component. Without one, this script would do nothing, but it won't cause an error. (Remember that for the 2D collision messages to fire, at least one of the colliding objects also needs a Rigidbody 2D, which our player already has.)

Next, we will do something with the flag and display some GUI when it is set. So, add the following extra function to the preceding script:

    void OnGUI()
    {
        if (showDialog)
        {
            // layout start
            GUI.BeginGroup(new Rect(Screen.width / 2 - 150, 50, 300, 250));

            // the menu background box
            GUI.Box(new Rect(0, 0, 300, 250), "");

            // Information text
            GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel?");

            // Player wants to leave this location
            if (GUI.Button(new Rect(55, 100, 180, 40), "Travel"))
            {
                showDialog = false;

                // The following line is commented out for now
                // as we have nowhere to go :D
                //Application.LoadLevel(1);
            }

            // Player wants to stay at this location
            if (GUI.Button(new Rect(55, 150, 180, 40), "Stay"))
            {
                showDialog = false;
            }

            // layout end
            GUI.EndGroup();
        }
    }

The function itself is very simple and only activates if the showDialog flag has been set to true by the collision detection. Within it, we do the following:
- In the OnGUI method, we set up a dialog window region with some text and two buttons.
- One button asks if the player wants to travel, which would load the next area (commented out for now, as we only have one scene) and close the dialog.
- The other button simply closes the dialog if the hero didn't actually want to leave. As we haven't stopped the player from moving, the player can also dismiss the dialog by simply moving away.
If you now add the NavigationPrompt script to the two world border game objects (LeftBorder and RightBorder), you will get the following simple UI whenever the player collides with the edges of our world:

We can further enhance this by tagging or naming our borders to indicate a destination. I prefer tagging, as it does not interfere with how my scene looks in the project hierarchy; also, I can control which tags are available and prevent accidental mistyping.

To tag a game object, simply select a Tag using the drop-down list in the Inspector when you select the game object in the scene or project. This is shown in the following screenshot:

If you haven't set up your tags yet or just wish to add a new one, select Add Tag in the drop-down menu; this will open up the Tags and Layers window of the Inspector. Alternatively, you can call up this window by navigating to Edit | Project Settings | Tags and Layers in the menu. It is shown in the following screenshot:

You can only edit or change user-defined tags; several other tags are system-defined. You can use these as well, you just cannot change, remove, or edit them. They include Player, Respawn, Finish, Editor Only, Main Camera, and GameController.

As you can see from the preceding screenshot, I have entered two new tags called The Cave and The World, which are the two main exit points from our town.

Unity also adds an extra blank item to the end of arrays shown in the editor. This makes it easy to add more items, which can be annoying when you want a fixed size, but it is meant to help. When the project runs, however, the correct count of items will be exposed.

Once these are set up, just return to the Inspector for the two borders, and set the right one to The World and the left one to The Cave.

Now, I was quite specific in how I named these tags, as we can reuse them in the script both to aid navigation and to tell the player where they are going. To do this, simply update the "Do you want to travel" line to the following:

    // Information text
    GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel to " +
        this.tag + "?");

Here, we have simply appended the name of the destination we set in the tag to the dialog presented to the user. Now we'll get a more personal message, as shown in the following screenshot:

Planning for the larger picture
For small games, the preceding implementation is fine; however, if you are planning a larger world with many interactions, you may want more complex decisions, for example, preventing the player from continuing unless they are ready. As the following diagram shows, there are several paths the player can take, and in some cases there is only one way through.

Now, we could just build up the logic for each of these individually, as shown in the diagram, but it is better if we build a separate navigation system so that we have everything in one place; it's simply easier to manage that way. This separation is a fundamental part of any good game design. Keeping the logic and game functionality separate makes it easier to maintain in the future, especially when you need to take internationalization into account (but we will learn more about that later).

Now, we'll change to using a manager to handle all the world/scene transitions, and we'll simplify the tag names we use, as they won't need to be displayed. So, The Cave will be renamed to just Cave, and we will get the text to display from the navigation manager instead of the tag.
So, by separating the core decision-making functionality out of the prompt script, we can build a central manager for navigation. Its primary job is to maintain where a character can travel and information about each destination.

First, we'll update the tags we created earlier to the simpler identities we will use in our navigation manager (update The Cave to Cave and The World to World).

Next, we'll create a new C# script called NavigationManager in our Assets\Scripts folder, and then replace its contents with the following lines of code (note that the Dictionary type requires the using System.Collections.Generic; directive):

    using System.Collections.Generic;

    public static class NavigationManager
    {
        public static Dictionary<string, string> RouteInformation =
            new Dictionary<string, string>()
        {
            { "World", "The big bad world" },
            { "Cave", "The deep dark cave" },
        };

        public static string GetRouteInfo(string destination)
        {
            return RouteInformation.ContainsKey(destination) ?
                RouteInformation[destination] : null;
        }

        public static bool CanNavigate(string destination)
        {
            return true;
        }

        public static void NavigateTo(string destination)
        {
            // The following line is commented out for now
            // as we have nowhere to go :D
            //Application.LoadLevel(destination);
        }
    }

Notice the ? and : operators in the following statement:

    RouteInformation.ContainsKey(destination) ?
        RouteInformation[destination] : null;

These are C# conditional operators. They are effectively shorthand for the following:

    if (RouteInformation.ContainsKey(destination))
    {
        return RouteInformation[destination];
    }
    else
    {
        return null;
    }

Shorter, neater, and much nicer, don't you think? For more information, see the MSDN C# page at http://bit.ly/csharpconditionaloperator.

The script is very basic for now, but contains the following key elements, each of which can be expanded to meet the design goals of your game:
- RouteInformation: A static dictionary of all the possible destinations in the game. It is a core part of the manager, as it knows everywhere you can travel in one place.
- GetRouteInfo: A simple, controlled function to interrogate the destination list. In this example, we just return the text to be displayed in the prompt, which allows for more detailed descriptions than we could fit in tags. You could also use it to provide alternate prompts depending on what the player is carrying, whether they have a lit torch, for example.
- CanNavigate: A test to see whether navigation is possible. If you are going to limit a player's travel, you need a way to test whether they can move, allowing the logic in your game to make alternate choices if they cannot. You could instead place some sort of physical block in front of a destination to limit choice (as used in the likes of Zelda), such as an NPC or a rock. As this is only an example, we can always travel, and you can add logic to control it if you wish.
- NavigateTo: A function to instigate navigation. Once a player can travel, you control exactly what happens in the game: does navigation load the next scene straight away (as in the script currently), or does the current scene fade out, a travelling screen appear, and then the next level fade in? Granted, this does nothing at present, as we have nowhere to travel to.

You will notice that this script is different from the other scripts used so far, as it is a static class.
This means it sits in the background, only exists once in the game, and is accessible from anywhere. This pattern is useful for fixed information that isn't attached to anything; it just sits in the background waiting to be queried. Later, we will cover more advanced types and classes to handle more complicated scenarios.

With this class in place, we just need to update our previous script (and the tags) to make use of this new manager. Update the NavigationPrompt script as follows:

1. Update the collision function to only show the prompt if we can travel. The code is as follows:

    void OnCollisionEnter2D(Collision2D col)
    {
        // Only allow the player to travel if allowed
        if (NavigationManager.CanNavigate(this.tag))
        {
            showDialog = true;
        }
    }

2. When the dialog shows, display the more detailed destination text provided by the manager for the intended destination. The code is as follows:

    // Dialog detail - updated to get better detail
    GUI.Label(new Rect(15, 10, 300, 68), "Do you want to travel to " +
        NavigationManager.GetRouteInfo(this.tag) + "?");

3. If the player wants to travel, let the manager start the travel process. The code is as follows:

    // Player wants to leave this location
    if (GUI.Button(new Rect(55, 100, 180, 40), "Travel"))
    {
        showDialog = false;
        NavigationManager.NavigateTo(this.tag);
    }

The functionality I've shown here is very basic, and it is intended to make you think about how you would need to implement it for your game. With so many possibilities available, I could fill several articles on this subject alone.

Backgrounds and active elements
A slightly more advanced option when building game worlds is to add a level of immersive depth to the scene. Having a static image to show the village looks good, especially when you start adding houses and NPCs to the mix, but to really make it shine, you should layer the background and add additional active elements to liven it up. We won't add them to the sample project at this time, but it is worth experimenting with in your own projects (or try adding it to this one); it is a worthwhile effect to look into.

Parallaxing
If we look at the 2D sample provided by Unity, the background is split into several panes, each layered on top of one another and each moving at a different speed when the player moves around. There are also other elements, such as clouds, birds, buses, and taxis driving/flying around, as shown in the following screenshot:

Implementing these effects is technically very easy; you just need to have the art assets available. There are several scripts in the wiki I mentioned earlier, but the one in Unity's own 2D sample is the best I've seen. To see the script, just download the Unity Projects: 2D Platformer asset from https://www.assetstore.unity3d.com/en/#!/content/11228, and check out the BackgroundParallax script in the Assets\Scripts folder.
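Before we break down what that script does, here is a minimal parallax sketch of my own (not the sample's code) to make the idea concrete; SimpleParallax and parallaxFactor are names I've made up for illustration:

    using UnityEngine;

    // Attach to each background layer; layers with a smaller
    // parallaxFactor track the camera less and so read as further away.
    public class SimpleParallax : MonoBehaviour
    {
        public Transform cam;                 // usually the main camera's transform
        public float parallaxFactor = 0.5f;   // 0 = static, 1 = locked to the camera

        private Vector3 lastCamPosition;

        void Start()
        {
            if (cam == null)
            {
                cam = Camera.main.transform;
            }
            lastCamPosition = cam.position;
        }

        void LateUpdate()
        {
            // Move this layer by a fraction of the camera's movement this frame.
            Vector3 delta = cam.position - lastCamPosition;
            transform.position += new Vector3(delta.x * parallaxFactor, delta.y * parallaxFactor, 0);
            lastCamPosition = cam.position;
        }
    }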
The BackgroundParallax script in the platformer sample implements the following:
- An array of background images, layered correctly in the scene (which is why the script does not just discover the background sprites)
- A scaling factor to control how much the backgrounds move in relation to the camera target
- A reducing factor to offset how much each layer moves, so that they don't all move as one (or else, what would be the point? It might as well be a single image)
- A smoothing factor, so that each background moves smoothly with the target and doesn't jump around

Implementing this same model in your game would be fairly simple, provided you have texture assets that can support it. Just replicate the structure used in the platformer 2D sample and add the script. Remember, however, to update the FollowCamera script so that it still tracks the base background and can discover the size of the main play area.

Foreground objects
The other thing you can do to liven up your game is to add random foreground objects that float across your scene independently. These don't collide with anything and have nothing to do with the game itself; they are just eye candy to make your game look awesome.

The process to add these is fairly simple, but it requires some more advanced Unity features, such as coroutines, which we are not going to cover here, so we will come back to these later. In short, if you examine the BackgroundPropSpawner.cs script from the preceding Unity platformer 2D sample, it performs the following steps:
1. Create/instantiate an object to spawn.
2. Set a random position and direction for the object to travel.
3. Update the object over its lifetime.
4. Once it's out of the scene, destroy or hide it.
5. Wait for a time, and then start again.

This allows them to run on their own without impacting the gameplay itself; they just add that extra bit of depth. In some cases, I've seen particle effects used to add to the effect as well, but they are used sparingly.

Shaders and 2D
Believe it or not, all 2D elements (even in their default state) are drawn using a shader, albeit a specially written shader designed to light and draw the sprite in a very specific way. If you look at the player sprite in the Inspector, you will see that it uses a special Material called Sprites-Default, as shown in the following screenshot:

This section is purely meant to highlight all the shading options you have in the 2D system. Shaders have not changed much in this update, except for the addition of some 2D global lighting found in the default sprite shader. For more detail on shaders in general, I suggest a dedicated Unity shader book such as https://www.packtpub.com/game-development/unity-shaders-and-effects-cookbook.

Clicking on the button next to the Material field will bring up the material selector, which also shows the two other built-in default materials, as shown in the following screenshot:

However, selecting either of these will render your sprite invisible, as they require a texture and lighting to work; they won't inherit the texture from the Sprite Renderer. You can override this by creating your own material and assigning alternate sprite-style shaders.

To create a new material, just select the Assets\Materials folder (this is not crucial, but it means we create the material in a sensible place in our project folder structure) and then right-click and navigate to Create | Material.
Alternatively, do the same using the project view's Create menu option, as shown in the following screenshot:

This gives us a basic default Diffuse shader, which is fine for basic 3D objects. However, we also have two default sprite rendering shaders available. Selecting the Shader dropdown gives us the options shown in the following screenshot:

These two shaders have very specific purposes:
- Default: This shader inherits its texture from the Sprite Renderer and draws the sprite as is. It provides very basic functionality, just enough to draw the sprite (it contains its own static lighting).
- Diffuse: This shader also inherits the texture from the Sprite Renderer, but it requires an external light source, as it does not contain any lighting; this has to be applied separately. It is a slightly more advanced shader that includes offsets and other functions.

Creating one of these materials and applying it to the Sprite Renderer of a sprite will override its default constrained behavior. This opens up some additional shader options in the Inspector, as shown in the following screenshot:

These options include the following:
- Sprite Texture: Although changing the Tiling and Offset values causes a warning to appear, they still have an effect (even though the displayed value resets).
- Tint: This option changes the default light tint of the rendered sprite. It is useful for creating differently colored objects from the same sprite.
- Pixel snap: This option makes the rendered sprite crisper but narrows the drawn area. It is a trial-and-error feature (see the following note for more information).

Achieving pixel perfection in your game in Unity can be a challenge due to the number of factors that can affect it, such as the camera view size, whether the image texture is a Power Of Two (POT) size, and the import settings for the image. This is basically a trial-and-error game until you are happy with the intended result.

If you are feeling adventurous, you can extend these default shaders (although that is out of the scope of this article). The full code for these shaders can be found at http://Unity3d.com/unity/download/archive. If you write your own shaders, though, be sure to add some lighting to the scene; otherwise, they will just appear dark and unlit. Only the default sprite shader is automatically lit by Unity. Alternatively, you can use the default sprite shader as a base to create your new custom shader and retain the 2D basic lighting.

Another worthy tip is to check out the latest version of the Unity samples (beta) pack. In it, they have added logic to keep two sets of shaders in your project, one for mobile and one for desktop, plus a script that swaps them out at runtime depending on the platform. This is very cool; check it out on the Asset Store at https://www.assetstore.unity3d.com/#/content/14474 and read the full review of the pack at http://darkgenesis.zenithmoon.com/unity3dsamplesbeta-anoverview/.
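As a quick illustration of the Tint idea (and of the rock-coloring suggestion in the next section), here is a hedged sketch; RandomTint is my own name, and it uses SpriteRenderer.color rather than authoring a separate material per object:

    using UnityEngine;

    // Attach to a sprite to give each instance a random tint at runtime.
    public class RandomTint : MonoBehaviour
    {
        void Start()
        {
            SpriteRenderer sr = GetComponent<SpriteRenderer>();
            // SpriteRenderer.color multiplies the sprite texture by this tint,
            // which is what the Tint option on the sprite shaders exposes.
            sr.color = new Color(Random.value, Random.value, Random.value, 1f);
        }
    }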
Going further
If you are the adventurous sort, try expanding your project to add the following:
- Add some buildings to the town
- Set up some entry points for a building, and work that into your navigation system, for example, a shop
- Add some rocks to the scene and color each differently using a manual material; maybe even add a script to randomly set the tint color (as sketched in the previous section) instead of creating several materials
- Add a new scene for the cave using another environment background, and let the player travel between them

Summary
This has certainly been a very busy article just to add a background to our scene, but working out how each scene will work is a crucial design element for the entire game; you have to pick a pattern that works for you and your end result, as changing it later can be very detrimental (and a lot of work).

In this article, we covered the following topics:
- Some more practice with the Sprite Editor and sprite slicer, including some tips and tricks for when it doesn't work (or you want to do it yourself)
- Some camera tips, tricks, and scripts
- An overview of sprite layers and sprite sorting
- Defining boundaries in scenes
- Scene navigation management and planning levels in your game
- Some basics of how shaders work for 2D

To learn Unity 2D from the basics, you can refer to https://www.packtpub.com/game-development/learning-unity-2d-game-development-example.
Let's Get Physical – Using GameMaker's Physics System
Packt
02 Nov 2015
28 min read
In this article by Brandon Gardiner and Julián Rojas Millán, authors of the book GameMaker Cookbook, we'll cover the following topics:
- Creating objects that use physics
- Alternating the gravity
- Applying a force via magnets
- Creating a moving platform
- Making a rope

The majority of video games are ruled by physics in one way or another. 2D platformers require coded movement and jump physics. Shooters, both 2D and 3D, use ballistic calculators that vary in sophistication to work out whether you shot that guy or missed him and he's still coming to get you. Even Pong used rudimentary physics to calculate the ball's trajectory after bouncing off a paddle or wall. The next time you play a 3D shooter or action-adventure game, check whether you see the logo for Havok, a physics engine used in over 500 games since it was introduced in 2000. The point is that physics, however complex, is important in video games.

GameMaker comes with its own physics engine that can be used to recreate physics-based sandbox games, such as The Incredible Machine, or even puzzle games, such as Cut the Rope or Angry Birds. Let's take a look at how elements of these games can be accomplished using GameMaker's built-in physics engine.

Physics engine 101
In order to use GameMaker's physics engine, we first need to set it up. Let's create and test some basic physics before moving on to something more complicated.

Gravity and force
One of the things that we learned with regard to GameMaker physics was how to create our own simplistic gravity. Now that we've set up gravity using the physics engine, let's see how we can bend it to our requirements.

Physics in the environment
GameMaker's physics engine allows you to choose not only which objects are affected by external forces, but also how they are affected. Let's take a look at how this can be applied to create environmental objects in your game.

Advanced physics-based objects
Many platforming games, going all the way back to Pitfall!, have used objects such as ropes as a gameplay feature. Pitfall!, mind you, uses static rope objects to help the player avoid crocodiles, but many modern games use dynamic ropes and chains, among other things, to create a more immersive and challenging experience.

Creating objects that use physics
There's a trend in video games where developers create products that are less games than play areas: worlds and simulators in which a player may or may not be given an objective, and it wouldn't matter either way. These games can take on a life of their own; Minecraft is essentially a virtual game of building blocks, and yet it has become a genre of its own, literally making its creator, Markus Persson (also known as Notch), a billionaire in the process. While it is difficult to create, the fun in games such as Minecraft is designed by the player. If you give a player a set of tools or objects to play with, you may end up seeing an outcome you hadn't initially thought of, and that's a good thing.

The reason I mention all of this is to show how it relates to GameMaker and what we can do with it. In a sense, GameMaker is a lot like Minecraft: it is a set of tools, such as the physics engine we're about to use, that the user can employ as desired (within limits) in order to create something fun or amazing or both. What you do with these tools is up to you, but you have to start somewhere.
Let's take a look at how to build a simple physics simulator.

Getting ready
The first thing you'll need is a room. Seems simple enough, right? Well, it is. One difference, however, is that you'll need to enable physics before we begin. With the room open, click on the Physics tab and make sure that the box marked Room is Physics World is checked.

After this, we'll need some sprites and objects. For sprites, you'll need a circle, a triangle, and two squares, each of a different color. The circle is for obj_ball, the triangle is for obj_poly, one of the squares is for obj_box, and the other is for obj_ground. You'll also need four objects without sprites: obj_staticParent, obj_dynamicParent, obj_button, and obj_control.

How to do it
1. Open obj_staticParent and add two collision events: one with itself and one with obj_dynamicParent.
2. In each of the collision events, drag and drop a comment from the Control tab to the Actions box. In each comment, write Collision.
3. Close obj_staticParent and repeat steps 1 and 2 for obj_dynamicParent.
4. In obj_dynamicParent, click on Add Event, then click on Other and select Outside Room.
5. From the Main1 tab, drag and drop Destroy Instance into the Actions box.
6. Select Applies to Self.
7. Open obj_ground and set the parent to obj_staticParent.
8. Add a create event with a code block containing the following code:

    var fixture = physics_fixture_create();
    physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
    physics_fixture_set_density(fixture, 0);
    physics_fixture_set_restitution(fixture, 0.2);
    physics_fixture_set_friction(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);

9. Open the room that you created and start placing instances of obj_ground around it to create platforms, stairs, and so on. This is how mine looked:

10. Open obj_ball and set the parent to obj_dynamicParent.
11. Add a create event and enter the following code:

    var fixture = physics_fixture_create();
    physics_fixture_set_circle_shape(fixture, sprite_get_width(spr_ball) / 2);
    physics_fixture_set_density(fixture, 0.25);
    physics_fixture_set_restitution(fixture, 1);
    physics_fixture_set_friction(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);

12. Repeat steps 10 and 11 for obj_box, but use this code:

    var fixture = physics_fixture_create();
    physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
    physics_fixture_set_density(fixture, 0.5);
    physics_fixture_set_restitution(fixture, 0.2);
    physics_fixture_set_friction(fixture, 0.01);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);

13. Repeat steps 10 and 11 for obj_poly, but use this code:

    var fixture = physics_fixture_create();
    physics_fixture_set_polygon_shape(fixture);
    physics_fixture_add_point(fixture, 0, -(sprite_height / 2));
    physics_fixture_add_point(fixture, sprite_width / 2, sprite_height / 2);
    physics_fixture_add_point(fixture, -(sprite_width / 2), sprite_height / 2);
    physics_fixture_set_density(fixture, 0.01);
    physics_fixture_set_restitution(fixture, 0.1);
    physics_fixture_set_linear_damping(fixture, 0.5);
    physics_fixture_set_angular_damping(fixture, 0.01);
    physics_fixture_set_friction(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);

14. Open obj_control and add a create event using the following code:

    globalvar shape_select;
    globalvar shape_output;
    shape_select = 0;

15. Add a Step event and add the following code to a code block:

    if mouse_check_button(mb_left) && alarm[0] < 0 && !place_meeting(x, y, obj_button)
    {
        instance_create(mouse_x, mouse_y, shape_output);
        alarm[0] = 5;
    }
    if mouse_check_button_pressed(mb_right)
    {
        shape_select += 1;
    }

16. Now, add an event for alarm[0] and give it a comment stating Set Timer.
17. Place an instance of obj_control in the room that you created, but make sure that it is placed at the coordinates (0, 0).
18. Open obj_button and add a step event. Drag a code block to the Actions tab and input the following code:

    if shape_select > 2
    {
        shape_select = 0;
    }
    if shape_select = 0
    {
        sprite_index = spr_ball;
        shape_output = obj_ball;
    }
    if shape_select = 1
    {
        sprite_index = spr_box;
        shape_output = obj_box;
    }
    if shape_select = 2
    {
        sprite_index = spr_poly;
        shape_output = obj_poly;
    }

Once these steps are completed, you can test your physics environment. Use the right mouse button to select the shape you would like to create, and use the left mouse button to create it. Have fun!

How it works
While not overly complicated, there is a fair amount of activity in this recipe. Let's take a quick look at the room itself. When you created this room, you checked the box for Room is Physics World. This does exactly what it says: it enables physics in the room. If you have any physics-enabled objects in a room that is not a physics world, errors will occur. In the same menu, you'll find the gravity settings (which are vector-based) and the pixels-to-meters setting, which sets the scale of objects in the room. This setting is important, as it controls how each object is affected by the coded physics. YoYo Games based GameMaker's physics on the real world (as they should), so GameMaker needs to know how many meters each pixel represents. The higher the number, the larger the world in the room. If you place an object in two rooms with different pixel-to-meter settings, GameMaker will apply physics to each instance differently, even though the objects have the same settings, because it views them as being of differing size and weight.
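As an aside, if you prefer to configure the physics world from code rather than from the room editor's Physics tab, GameMaker exposes equivalent functions. A minimal sketch, with illustrative values mirroring the defaults discussed above, which you might place in a room creation code block:

    // Enable physics for the current room from code (illustrative values).
    physics_world_create(0.1);      // 0.1 metres per pixel, i.e. 10 pixels per metre
    physics_world_gravity(0, 10);   // the default downward gravity vector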
If you place an object in two different rooms with different pixel to meter settings, even though the objects have the same settings, GameMaker will apply physics to them differently because it views them as being of differing size and weight. Let's take a look at the objects in this simulation. Firstly, you have two parent objects: one static and the other dynamic. The static object is the only parent to one object: obj_ground. The reason for this is that static objects are not affected by outside forces in a physics world, that is, the room you built. Because of this, the ground pieces are able to ignore gravity and forces applied by other objects that collide with them. Now, neither obj_staticParent nor obj_dynamicParent contain any physics code; we saved this for our other objects. We use our parent objects to govern our collision groups using two objects instead of coding collisions in each object. So, we use drag and drop collision blocks to ensure that any children can collide with instances of one another and with themselves. Why did you drag comment blocks into these collision events? We did this so that GameMaker doesn't ignore them; the contents of each comment block are irrelevant. Also, the dynamic parent has an event that destroys any instance of its children that end up outside the room. The reason for this is simply to save memory. Otherwise, each object, even those off-screen, will be accounted for calculations at every step and this will slow everything down and eventually crash the program. Now, as we're using physics-enabled objects, let's see how each one differs from the others. When working with the object editor, you may have noticed the checkbox labelled Uses Physics. This checkbox will automatically set up the basic physics code within the selected object, but only after assuming that you're using the drag and drop method of programming. If you click on it, you'll see a new menu with basic collision options as well as several values and associated options: Density: Density in GameMaker works exactly as it does in real life. An object with a high density will be much heavier and harder to move via force than a low-density object of the same size. Think of how far you can kick an empty cardboard box versus how far you can kick a cardboard box full of bricks, assuming that you don't break your foot. Restitution: Restitution essentially governs an object's bounciness. A higher restitution will cause an object to bounce like a rubber ball, whereas a lower restitution will cause an object to bounce like a box of bricks, as mentioned in the previous example. Collision group: Collision grouping tells GameMaker how certain objects react with one another. By default, all physics objects are set to collision group 0. This means that they will not collide with other objects without a specific collision event. Assigning a positive number to this setting will cause the object in question to collide with all other objects in the same collision group, regardless of collision events. Assigning a negative number will prevent the object from colliding with any objects in that group. I don't recommend that you use collision groups unless absolutely necessary, as it takes a great deal of memory to work properly. Linear damping: Linear damping works a lot like air friction in real life. This setting affects the velocity (momentum) of objects in motion over time. Imagine a military shooter where thrown grenades don't arc, they just keep soaring through the air. We don't need this. 
This is what rockets are for.
- Angular damping: Angular damping is similar to linear damping, except that it affects an object's rotation. This setting keeps objects from spinning forever. Have you ever ridden the teacup ride at Disneyland? If so, you will know that angular damping is a good thing.
- Friction: Friction also works in a similar way to linear damping, but it affects an object's momentum as it collides with another object or surface. If you want to create icy surfaces in a platformer, friction is your friend.

We didn't use this menu in this recipe, but we did set and modify these settings through code. First, in each of the objects, we set them to use physics and then declared their shapes and collision masks. We started by declaring the fixture variable because, as you can see, it is part of each of the functions we used, and typing fixture is easier than typing physics_fixture_create() every time. The fixture variable that we bind to the object is what is actually affected by forces and other physics objects, so we must set its shape and properties in order to tell GameMaker how it should react.

In order to set the fixture's shape, we use physics_fixture_set_circle_shape, physics_fixture_set_box_shape, and physics_fixture_set_polygon_shape. These functions define the collision mask associated with the object in question. In the case of the circle, we got the radius from half the width of the sprite, whereas for the box, we found the outer edges using half the width and half the height. GameMaker then uses this information to create a collision mask to match the sprite from which the information was gathered.

When creating a fixture from a more complex sprite, you can either use the aforementioned methods to approximate a mask, or you can create a more complex shape using a polygon, as we did for the triangle. You'll notice that the code to create the triangle fixture had extra lines. This is because polygons require you to map each point of the shape you're trying to create. You can map three to eight points by telling GameMaker where each one is situated in relation to the center of the image (0, 0). One very important detail is that you cannot create a concave shape; this will result in an error. Every fixture you create must be convex. The only way to create a concave fixture is to combine multiple fixtures in the same object. If you were to take the code for the triangle, duplicate all of it in the same code block, and alter the coordinates for each point in the duplicated code, you could create concave shapes; for example, you can use two rectangles to make an L shape (see the sketch at the end of this section). This can only be done using polygon fixtures, as they are the only fixture type that lets you code the position of individual points.

Once you've coded the shape of your fixture, you can begin to code its attributes. I've described what each physics option does, and you've coded and tested them using the instructions mentioned earlier. Now, take a look at the values for each setting. The ball object has a higher restitution than the rest; did you notice how it bounced? The box object has a very low friction; it slides around on platforms as though it were made of ice. The triangle has very low density and angular damping; it is easily knocked around by the other objects and spins like crazy. You can change how objects react to forces and collisions by changing one or more of these values. I definitely recommend that you play around with these settings to see what you can come up with.
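To make the earlier point about concave shapes concrete, here is a sketch of a create event that builds an L shape from two convex, box-style polygons bound to the same object. The coordinates assume a roughly 64 x 64 sprite with a centered origin and are purely illustrative; note that GameMaker expects polygon points to be listed in clockwise order:

    // Vertical part of the "L"
    var fixture = physics_fixture_create();
    physics_fixture_set_polygon_shape(fixture);
    physics_fixture_add_point(fixture, -16, -32);
    physics_fixture_add_point(fixture,   0, -32);
    physics_fixture_add_point(fixture,   0,  32);
    physics_fixture_add_point(fixture, -16,  32);
    physics_fixture_set_density(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);

    // Horizontal foot of the "L"
    fixture = physics_fixture_create();
    physics_fixture_set_polygon_shape(fixture);
    physics_fixture_add_point(fixture,  0, 16);
    physics_fixture_add_point(fixture, 32, 16);
    physics_fixture_add_point(fixture, 32, 32);
    physics_fixture_add_point(fixture,  0, 32);
    physics_fixture_set_density(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);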
Remember how the ground objects are static? Notice how we still had to code them? That's because they still interact with other objects, just in an almost opposite fashion. Since we set the ground's density to 0, GameMaker more or less views it as an object that is infinitely dense; it cannot be moved by outside forces or collisions. It can, however, affect other objects. We don't have to set the angular and linear damping values, simply because the ground doesn't move. We do, however, have to set the restitution and friction levels, because we need to tell GameMaker how other objects should react when they come into contact with the ground. Do you want to make a rubber wall to bounce a player off? Set the restitution higher. Do you want to make that icy patch we talked about? Then lower the friction. These are fun settings to play around with, so try them out.

Alternating gravity
Gravity can be a harsh mistress; if you've ever fallen from a height, you will understand what I mean. I often think it would be great if we could somehow lessen gravity's hold on us, but then I wonder what it would be like if we could just reverse it altogether! Imagine flipping a switch and then walking on the ceiling! I, for one, think that it would be great. However, since we don't have the technology to do it in real life, I'll have to settle for doing it in video games.

Getting ready
For this recipe, let's simplify things and use the physics environment that we created in the previous recipe.

How to do it
1. In obj_control, open the code block in the create event.
2. Add the following code:

    physics_world_gravity(0, -10);

That's it! Test the environment and see what happens when you create your physics objects.

How it works
GameMaker's physics world gravity is vector-based. This means that you simply need to change the values of x and y in order to change how gravity works in a particular room. If you take a look at the Physics tab in the room editor, you'll see that there are values under x and y; the defaults are 0 for x and 10 for y. When we added this code to the control object's create event, we changed the value of y to -10, which means that gravity now flows in the opposite direction. You can change the direction through a full 360 degrees by altering the values of both x and y, and you can change the gravity's strength by raising and lowering those values.

There's more
Alternating the gravity's flow can be a lot of fun in a platformer. Several games have explored this in different ways: your character can change the gravity by hitting a switch in the game, the player can change it by pressing a button, or you can just give specific areas different gravity settings. Play around with this and see what you can create.

Applying force via magnets
Remember playing with magnets in science class when you were a kid? It was fun back then, right? Well, it's still fun; powerful magnets make a great gift for your favorite office worker. What about virtual magnets, though? Are they still fun? The answer is yes. Yes, they are.

Getting ready
Once again, we're simply going to modify our existing physics environment in order to add some new functionality.

How to do it
1. In obj_control, open the code block in the step event.
2. Add the following code:

    if keyboard_check(vk_space)
    {
        with (obj_dynamicParent)
        {
            var dir = point_direction(x, y, mouse_x, mouse_y);
            physics_apply_force(x, y, lengthdir_x(30, dir), lengthdir_y(30, dir));
        }
    }

Once you close the code block, you can test your new magnet.
Add some objects, hold down the spacebar, and see what happens.

How it works
Applying a force to a physics-enabled object in GameMaker adds a given value to the direction, rotation, and speed of said object. Force can be used to gradually propel an object in a given direction or, with a little math, as in this case, to draw objects nearer. What we're doing here is that, while the spacebar is held down, any objects in the vicinity are drawn to the magnet (in this case, your mouse). In order to accomplish this, we first declare that the following code needs to act on obj_dynamicParent, as opposed to the control object where the code resides. We then set the dir variable to the point_direction from each child of obj_dynamicParent to the mouse. From there, we can begin to apply force.

With physics_apply_force, the first two values represent the x and y coordinates of the object to which the force is being applied. Since the objects in question are not static, we simply use whatever coordinates they have at the time. The other two values are used in tandem to calculate the direction in which the object will travel and the force propelling it, in Newtons. We get these values by calculating lengthdir for both x and y: the lengthdir_x and lengthdir_y functions find the x or y component of a point at a given length (we used 30) at a given angle (we used dir, the angle towards the mouse's coordinates). Increasing the length value increases the power of the magnet.

Creating a moving platform
We've now seen both static and dynamic physics objects in GameMaker, but what happens when we want the best of both worlds? Let's take a look at how to create a platform that can move and affect other objects via collisions, but is immune to said collisions.

Getting ready
Again, we'll be using our existing physics environment, but this time we'll need a new object. Create a sprite that is 128 px wide by 32 px high and assign it to an object called obj_platform. Also, create another object called obj_kinematicParent but don't give it a sprite. Add collision events with obj_staticParent, obj_dynamicParent, and itself. Make sure that there is a comment in each event.

How to do it
1. In obj_platform, add a create event.
2. Drag a code block to the actions box and add the following code:

    var fixture = physics_fixture_create();
    physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
    physics_fixture_set_density(fixture, 0);
    physics_fixture_set_restitution(fixture, 0.2);
    physics_fixture_set_friction(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);
    phy_speed_x = 5;

3. Add a Step event with a code block containing the following code:

    if (x < 64) or (x > room_width - 64)
    {
        phy_speed_x = phy_speed_x * -1;
    }

4. Place an instance of obj_platform in the room, slightly higher than the highest instance of obj_ground.

Once this is done, you can go ahead and test it. Try dropping various objects on the platform and see what happens!

How it works
Kinematic objects in GameMaker's physics world are essentially static objects that can move. The platform has a density of 0, but it also has a speed of 5 along the x axis. You'll notice that we didn't just set speed to 5, as that would not have the desired effect in a physics world; the physics-specific phy_speed_x variable is used instead.
Creating a moving platform

We've now seen both static and dynamic physics objects in GameMaker, but what happens when we want the best of both worlds? Let's take a look at how to create a platform that can move and affect other objects via collisions, but is immune to said collisions.

Getting ready

Again, we'll be using our existing physics environment, but this time we'll need a new object. Create a sprite that is 128 px wide by 32 px high and assign it to an object called obj_platform. Also, create another object called obj_kinematicParent, but don't give it a sprite. Add collision events between it and obj_staticParent, obj_dynamicParent, and itself. Make sure that there is a comment in each event.

How to do it

1. In obj_platform, add a create event. Drag a code block to the actions box and add the following code:

    var fixture = physics_fixture_create();
    physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
    physics_fixture_set_density(fixture, 0);
    physics_fixture_set_restitution(fixture, 0.2);
    physics_fixture_set_friction(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);
    phy_speed_x = 5;

2. Add a step event with a code block containing the following code:

    if (x < 64) or (x > room_width - 64)
    {
        phy_speed_x = phy_speed_x * -1;
    }

3. Place an instance of obj_platform in the room, slightly above the highest instance of obj_ground. Once this is done, you can go ahead and test it. Try dropping various objects on the platform and see what happens!

How it works

Kinematic objects in GameMaker's physics world are essentially static objects that can move. While the platform has a density of 0, it also has a speed of 5 along the x axis. You'll notice that we didn't just use speed = 5, as this would not have the desired effect in a physics world. The code in the step event simply keeps the platform within a set boundary by multiplying its current horizontal speed by -1. Any static object to which movement is applied automatically becomes a kinematic object.
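The same pattern works vertically for an elevator-style platform. A minimal sketch, with the patrol heights being our own assumption:

    // create event addition (a sketch): move along the y axis instead
    phy_speed_y = 2;

    // step event (a sketch): bounce between two assumed heights
    if (y < 64) or (y > room_height - 64)
    {
        phy_speed_y = phy_speed_y * -1;
    }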
Making a rope

Is there anything more useful than a rope? I mean, besides your computer, your phone, or even this book. Probably a lot of things, but that doesn't make a rope any less useful. Ropes and chains are also useful in games. Some games, such as Cut the Rope, have based their entire gameplay structure around them. Let's see how we can create ropes and chains in GameMaker.

Getting ready

For this recipe, you can either continue using the physics environment that we've been working with, or you can simply start from scratch. If you've gone through the rest of this chapter, you should be fairly comfortable with setting up physics objects. I completed this recipe with a fresh .gmx file. Before we begin, go ahead and set up obj_dynamicParent and obj_staticParent with collision events for one another. Next, you'll need to create the obj_ropeHome, obj_rope, obj_block, and obj_ropeControl objects. The sprite for obj_rope can simply be a 4 px wide by 16 px high box, while obj_ropeHome and obj_block can be 32 px squares. obj_ropeControl needs to use the same sprite as obj_rope, but with the y origin set to 0. obj_ropeControl should also be invisible. As for parenting, obj_rope should be a child of obj_dynamicParent; obj_ropeHome and obj_block should be children of obj_staticParent; obj_ropeControl does not require a parent at all. As always, you'll also need a room in which to place your objects.

How to do it

1. Open obj_ropeHome and add a create event. Place a code block in the actions box and add the following code:

    var fixture = physics_fixture_create();
    physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
    physics_fixture_set_density(fixture, 0);
    physics_fixture_set_restitution(fixture, 0.2);
    physics_fixture_set_friction(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);

2. In obj_rope, add a create event with a code block. Enter the following code:

    var fixture = physics_fixture_create();
    physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
    physics_fixture_set_density(fixture, 0.25);
    physics_fixture_set_restitution(fixture, 0.01);
    physics_fixture_set_linear_damping(fixture, 0.5);
    physics_fixture_set_angular_damping(fixture, 1);
    physics_fixture_set_friction(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);

3. Open obj_ropeControl and add a create event. Drag a code block to the actions box and enter the following code:

    setLength = image_yscale - 1;
    ropeLength = 16;
    rope1 = instance_create(x, y, obj_ropeHome);
    rope2 = instance_create(x, y, obj_rope);
    physics_joint_revolute_create(rope1, rope2, rope1.x, rope1.y, 0, 0, 0, 0, 0, 0, 0);
    repeat (setLength)
    {
        ropeLength += 16;
        rope1 = rope2;
        rope2 = instance_create(x, y + ropeLength, obj_rope);
        physics_joint_revolute_create(rope1, rope2, rope1.x, rope1.y, 0, 0, 0, 0, 0, 0, 0);
    }

4. In obj_block, add a create event. Place a code block in the actions box and add the following code:

    var fixture = physics_fixture_create();
    physics_fixture_set_circle_shape(fixture, sprite_get_width(spr_ropeHome) / 2);
    physics_fixture_set_density(fixture, 0);
    physics_fixture_set_restitution(fixture, 0.01);
    physics_fixture_set_friction(fixture, 0.5);
    physics_fixture_bind(fixture, id);
    physics_fixture_delete(fixture);

5. Now, add a step event with the following code in a code block:

    phy_position_x = mouse_x;
    phy_position_y = mouse_y;

6. Place an instance of obj_ropeControl anywhere in the room. This will be the starting point of the rope. You can place multiple instances of the object if you wish.
7. For every instance of obj_ropeControl you place in the room, use the bounding box to stretch it to however long you wish. This will determine the length of your rope.
8. Place a single instance of obj_block in the room. Once you've completed these steps, you can go ahead and test them.

How it works

This recipe may seem somewhat complicated, but it's really not. What we're doing here is taking multiple instances of the same physics-enabled object and stringing them together. Since you're using instances of the same object, you only have to code one and the rest will follow. Once again, our collisions are handled by our parent objects. This way, you don't have to set collisions for each object. Also, setting the physical properties of each object is done exactly as we have done in previous recipes. By setting the density of obj_ropeHome and obj_block to 0, we're ensuring that they are not affected by gravity or collisions, but they can still collide with other objects and affect them. In this case, we set the physics coordinates of obj_block to those of the mouse so that, when testing, you can use it to collide with the rope and move it.

The most complex code takes place in the create event of obj_ropeControl. Here, we not only define how many sections of rope or chain will be used, but we also define how they are connected. To begin, the y scale of the control object is measured in order to determine how many instances of obj_rope are required. Based on how far you stretched obj_ropeControl in the room, the rope will be longer (more instances) or shorter (fewer instances). We subtract 1 from this count because GameMaker would otherwise create one instance too many to cover the distance. We then set a variable (ropeLength) to the height of the sprite used for obj_rope. This will be used later to tell GameMaker where each instance of obj_rope should be, so that we can connect them in a line.

Next, we create the object that will hold the rope: obj_ropeHome. This is a static object that will not move, no matter how much the rope moves. It is connected to the first instance of obj_rope via a revolute joint. In GameMaker, a revolute joint is used in several ways: it can act as part of a motor, moving pistons; it can act as a joint on a ragdoll body; and in this case, it acts as the connection between instances of obj_rope. A revolute joint allows the programmer to control its angle and torque, but for our purposes this isn't necessary. We declared the objects that are connected via the joint, as well as the anchor location, but the other values remain null.

Once the rope holder (obj_ropeHome) and the initial joint are set up, we can automate the creation of the rest. Using the repeat function, we can tell GameMaker to repeat a block of code a set number of times; in this case, that number is derived from how many instances of obj_rope can fit between the y origin of obj_ropeControl and the point to which you stretched it. The repeated code does a few things at once. First, it increases the value of the ropeLength variable by 16 for each instance that is calculated. Then, GameMaker copies the value of rope2 (the most recently created instance of obj_rope) into rope1. The rope2 variable is then reassigned to a newly created instance of obj_rope, placed at the new value of ropeLength so that its coordinates sit directly below those of the previous instance, thus creating a chain. This process is repeated until the set length of the overall rope is reached.
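Before moving on, here is a hedged sketch of one common extension: hanging a weight from the free end of the rope. It assumes a dynamic physics object called obj_crate, set up like obj_rope but denser; it is not part of the recipe above. Append it after the repeat block in obj_ropeControl's create event:

    // a sketch: hang an assumed obj_crate from the last rope segment
    var crate = instance_create(x, y + ropeLength + 16, obj_crate);
    physics_joint_revolute_create(rope2, crate, rope2.x, rope2.y + 8, 0, 0, 0, 0, 0, 0, 0);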
There's more

Each section of the rope is a physics object and acts in the physics world. By changing the physics settings when initially creating the rope sections, you can change how they react to collisions. How far and how quickly the rope moves when pushed by another object is very much related to the difference between their densities. If you make the rope denser than the object colliding with it, the rope will move very little. If you reverse these values, you can cause the rope to flail about wildly. Play around with the settings and see what happens; but when placing a rope or chain in a game, you really must consider what the rope and the other objects are made of. It wouldn't seem right for a lead chain to be sent flailing about by a collision with a pillow, now would it?

Summary

This article introduced GameMaker's physics system and demonstrated how it handles gravity, friction, and so on. Implement this system to make your games more realistic.

Resources for Article:

Further resources on this subject:

Getting to Know LibGDX [article]
HTML5 Game Development – A Ball-shooting Machine with Physics Engine [article]
Introducing GameMaker [article]
Getting started with Cocos2d-x

Packt
19 Oct 2015
11 min read
In this article written by Akihiro Matsuura, author of the book Cocos2d-x Cookbook, we're going to install Cocos2d-x and set up the development environment. The following topics will be covered in this article:

- Installing Cocos2d-x
- Using the cocos command
- Building the project by Xcode
- Building the project by Eclipse

Cocos2d-x is written in C++, so it can build on any platform. It is also open source, so we are free to read the framework's code; Cocos2d-x is not a black box, and this proves to be a big advantage for us when we use it. Cocos2d-x version 3, which supports C++11, was only recently released. It also supports 3D and has an improved rendering performance. (For more resources related to this topic, see here.)

Installing Cocos2d-x

Getting ready

To follow this recipe, you need to download the zip file from the official site of Cocos2d-x (http://www.cocos2d-x.org/download). In this article we've used version 3.4, which was the latest stable version available.

How to do it...

1. Unzip your file to any folder. This time, we will install in the user's home directory. For example, if the user name is syuhari, then the install path is /Users/syuhari/cocos2d-x-3.4. We call it COCOS_ROOT.
2. The following steps will guide you through the process of setting up Cocos2d-x:
   - Open the terminal.
   - Change the directory in the terminal to COCOS_ROOT, using the following command:

    $ cd ~/cocos2d-x-3.4

   - Run setup.py, using the following command:

    $ ./setup.py

   - The terminal will ask you for NDK_ROOT. Enter the NDK_ROOT path.
   - The terminal will then ask you for ANDROID_SDK_ROOT. Enter the ANDROID_SDK_ROOT path.
   - Finally, the terminal will ask you for ANT_ROOT. Enter the ANT_ROOT path.
   - After the execution of the setup.py command, you need to execute the following command to reload the system variables:

    $ source ~/.bash_profile

3. Open the .bash_profile file, and you will find that setup.py has recorded how each path is set in your system. You can view the .bash_profile file using the cat command:

    $ cat ~/.bash_profile

4. We now verify that Cocos2d-x was installed correctly. Open the terminal and run the cocos command without parameters:

    $ cocos

5. If you can see a window like the following screenshot, you have successfully completed the Cocos2d-x install process.

How it works...

Let's take a look at what we did throughout the above recipe. You can install Cocos2d-x by just unzipping it; setup.py only sets up the cocos command and the paths required for the Android build in your environment. Installing Cocos2d-x is very easy and simple. If you want to install a different version of Cocos2d-x, you can do that too. To do so, you follow the same steps given in this recipe, just with a different version.

There's more...

Setting up the Android environment is a bit tough. If you want to start developing with Cocos2d-x right away, you can postpone the Android settings until you actually need to run on Android. In that case, you don't have to install the Android SDK, NDK, and Apache Ant yet; when you run setup.py, just press Enter without entering a path for each question.
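For reference, the relevant part of .bash_profile will look something like the following sketch. The exact variable names written by setup.py vary by Cocos2d-x version, and every path here is a placeholder for your own:

    # illustrative only; your variable names and paths will differ
    export COCOS_CONSOLE_ROOT=/Users/syuhari/cocos2d-x-3.4/tools/cocos2d-console/bin
    export PATH=$COCOS_CONSOLE_ROOT:$PATH
    export NDK_ROOT=/Users/syuhari/android-ndk-r10
    export ANDROID_SDK_ROOT=/Users/syuhari/android-sdk
    export ANT_ROOT=/usr/local/bin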
Using the cocos command

The next step is using the cocos command. It is a cross-platform tool with which you can create a new project, build it, run it, and deploy it. The cocos command works for all Cocos2d-x supported platforms, and you don't need to use an IDE if you don't want to. In this recipe, we take a look at this command and explain how to use it.

How to do it...

1. You can display the cocos command's help by executing it with the --help parameter, as follows:

    $ cocos --help

2. We then move on to generating our new project. Firstly, we create a new Cocos2d-x project with the cocos new command, as shown here:

    $ cocos new MyGame -p com.example.mygame -l cpp -d ~/Documents/

The result of this command is shown in the following screenshot. The argument after the new parameter is the project name. The other parameters denote the following:

- MyGame is the name of your project.
- -p is the package name for Android. This is the application ID in the Google Play store, so you should use a reverse domain name to make it unique.
- -l is the programming language used for the project. You should use "cpp" because we will use C++.
- -d is the location in which to generate the new project. This time, we generate it in the user's Documents directory.

You can look up these parameters using the following command:

    $ cocos new --help

Congratulations, you have generated your new project. The next step is to build and run it using the cocos command.

Compiling the project

If you want to build and run for iOS, you need to execute the following command:

    $ cocos run -s ~/Documents/MyGame -p ios

The parameters are explained as follows:

- -s is the directory of the project. This can be an absolute path or a relative path.
- -p denotes which platform to run on. If you want to run on Android, you use -p android. The available options are ios, android, win32, mac, and linux.

You can run cocos run --help for more detailed information. The result of this command is shown in the following screenshot. You can now build and run iOS applications with Cocos2d-x. However, you will have to wait for a while if this is your first time building an iOS application, because the entire Cocos2d-x library has to be compiled on a first build or after a clean build.

How it works...

The cocos command can create a new project and build it. You should use the cocos command if you want to create a new project. Of course, you can also build with Xcode or Eclipse, which can be easier when developing and debugging.

There's more...

The cocos run command has other parameters. They are the following:

- --portrait will set the project to portrait orientation. This command has no argument.
- --ios-bundleid will set the bundle ID for the iOS project. However, it is not difficult to set it later.

The cocos command also includes some other commands, which are as follows:

- The compile command: This command is used to build a project. The following patterns are the useful parameters. You can see all parameters and options if you execute the cocos compile [-h] command.

    cocos compile [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE]

- The deploy command: This command only takes effect when the target platform is android. It will re-install the specified project to the Android device or simulator.

    cocos deploy [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE]

The run command executes the compile and deploy commands in sequence.
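As an example of those flags, the following sketch would build an Android package in release mode without launching it; check cocos compile --help on your version for the accepted MODE values:

    $ cocos compile -s ~/Documents/MyGame -p android -m release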
Building the project by Xcode

Getting ready

Before building the project with Xcode, you need Xcode itself, and an iOS developer account if you want to test on a physical device. However, you can also test on the iOS simulator. If you have not installed Xcode, you can get it from the Mac App Store. Once you have installed it, get it activated.

How to do it...

1. Open your project from Xcode. You can open it by double-clicking on the file placed at: ~/Documents/MyGame/proj.ios_mac/MyGame.xcodeproj.
2. Build and run from Xcode. Select the iOS simulator or the real device on which you want to run your project.

How it works...

If this is your first time building, it will take a long time, but subsequent builds will be much faster. You can develop your game faster if you develop and debug it using Xcode rather than Eclipse.

Building the project by Eclipse

Getting ready

You must finish the first recipe before you begin this step. You will also need to have Eclipse installed.

How to do it...

1. Set up NDK_ROOT:
   - Open Eclipse's Preferences.
   - Open C++ | Build | Environment.
   - Click on Add and set the new variable: the name is NDK_ROOT and the value is the NDK_ROOT path.
2. Import your project into Eclipse:
   - Open the File menu and click on Import.
   - Go to Android | Existing Android Code into Workspace.
   - Click on Next.
   - Import the project into Eclipse from ~/Documents/MyGame/proj.android.
3. Import the Cocos2d-x library into Eclipse:
   - Perform the same steps as in Step 2.
   - Import the cocos2d library project located at ~/Documents/MyGame/cocos2d/cocos/platform/android/java.
4. Build and run:
   - Click on the Run icon.
   - The first time, Eclipse asks you to select a way to run your application. Select Android Application and click on OK, as shown in the following screenshot.
   - If you have connected an Android device to your Mac, you can run your game on the real device or on an emulator. The following screenshot shows it running on a Nexus 5.

If you add .cpp files to your project, you have to modify the Android.mk file at ~/Documents/MyGame/proj.android/jni/Android.mk. This file is needed by the NDK build, and it must be updated whenever you add files. The original Android.mk would look as follows:

    LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp

If you add the TitleScene.cpp file, you have to modify it as shown in the following code:

    LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp ../../Classes/TitleScene.cpp

The preceding example shows what happens when you add the TitleScene.cpp file. If you add other files as well, you need to add all of them in the same way.

How it works...

You may get lots of errors when importing your project into Eclipse, but don't panic. The errors soon disappear after you import the Cocos2d-x library project. Because we set the NDK path, Eclipse can compile the C++ code. After you modify the C++ code, run your project in Eclipse. Eclipse automatically compiles the C++ code and the Java code, and then runs the project. It is a tedious task to fix Android.mk every time you add new C++ files. The following code is the original Android.mk:

    LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp

    LOCAL_C_INCLUDES := $(LOCAL_PATH)/../../Classes

The following code is a customized Android.mk that adds C++ files automatically:

    CPP_FILES := $(shell find $(LOCAL_PATH)/../../Classes -name *.cpp)
    LOCAL_SRC_FILES := hellocpp/main.cpp
    LOCAL_SRC_FILES += $(CPP_FILES:$(LOCAL_PATH)/%=%)
    LOCAL_C_INCLUDES := $(shell find $(LOCAL_PATH)/../../Classes -type d)

The first line collects all the C++ files under the Classes directory into the CPP_FILES variable. The second and third lines add these C++ files to the LOCAL_SRC_FILES variable, and the last line adds every directory under Classes to LOCAL_C_INCLUDES. By doing so, C++ files are automatically included in the NDK build. If you need to compile a file with an extension other than .cpp, you will need to add it manually.
There's more...

If you want to build the C++ code with the NDK manually, you can use the following command:

    $ ./build_native.py

This script is located in the ~/Documents/MyGame/proj.android directory. It uses ANDROID_SDK_ROOT and NDK_ROOT internally. If you want to see its options, run ./build_native.py --help.

Summary

Cocos2d-x is an open source, cross-platform game engine, which is free and mature. It can publish games for mobile devices and desktops, including iPhone, iPad, Android, Kindle, Windows, and Mac. The book Cocos2d-x Cookbook focuses on using version 3.4, which was the latest version of Cocos2d-x available at the time of writing. We focus on iOS and Android development, and we'll be using a Mac because we need one to develop iOS applications.

Resources for Article:

Further resources on this subject:

Creating Games with Cocos2d-x is Easy and 100 percent Free [article]
Dragging a CCNode in Cocos2D-Swift [article]
Cocos2d-x: Installation [article]
Cross-platform Building

Packt
10 Aug 2015
11 min read
In this article by Karan Sequeira, author of the book Cocos2d-x Game Development Blueprints, we'll leverage the cross-platform support of Cocos2d-x to build one of our games on Android and Windows Phone 8! (For more resources related to this topic, see here.)

Setting up the environment for Android

At this point in the timeline of technological evolution, Android needs no introduction. This mobile operating system was acquired by Google, and it has reached far and wide across the globe. It is now one of the top choices for application developers and game developers. With octa-core CPUs and ever-powerful GPUs, the sheer power offered by Android devices is a motivating factor!

While setting up the environment for Android, you have more choices than on any other mobile development platform. Your workstation could be running any of the three major operating systems (Windows, Mac OS, or Linux) and you would be able to build for Android just fine. Since Android is not fussy about its build environment, developers mostly choose their work environment based on which other platforms they will be developing for. As such, you might choose to build for Android on a machine running Mac OS, since you would then be able to build for iOS and Android on the same machine. The same applies to a machine running Windows: you would be able to build for both Android and Windows Phone, although building for Windows Phone 8 requires you to have at least Windows 8 installed. We will discuss more on that later.

Let's begin listing the various software required to set up the environment for Android.

Java Development Kit 7+

Since Java is the programming language used within the Android SDK, you must ensure that you have the environment set up to compile and run Java files. So go ahead and download the Java Development Kit (JDK) version 7 or later. You can download and install a Standard Edition (SE) version from the page available at the following link:

http://www.oracle.com/technetwork/java/javase/downloads/index.html

Mac OS comes with the JDK installed, so you won't have to follow this step if you're setting up your development environment on a Mac.

The Android SDK

Once you've downloaded the JDK, it's time to download the Android SDK from the following URL:

http://developer.android.com/sdk/index.html

If you're installing the Android SDK on Windows, a custom installer is provided that will take care of downloading and setting up the required parts of the Android SDK for you. For other operating systems, you can choose to download the respective archive files and extract them at the location of your choice.

Eclipse or the ADT bundle

Eclipse is the most commonly used IDE when it comes to Android application development. You can choose to download the standard Eclipse IDE for Java developers and then install the ADT plugin into Eclipse, or you can download the ADT bundle, which is a specialized version of Eclipse with the ADT plugin preinstalled. At the time of writing this article, the Android developer site had already deprecated ADT in favor of Android Studio. As such, we will choose the former approach for setting up our environment in Eclipse. You can download and install the standard Eclipse IDE for Java Developers for your specific machine from the following URL:

http://www.eclipse.org/downloads/
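Before going further, it's worth confirming that the JDK is reachable from a terminal or command prompt. A quick hedged check follows; the version string shown is only illustrative and yours will differ:

    $ javac -version
    javac 1.7.0_71
    $ java -version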
ADT plugin for Eclipse

Once you've downloaded Eclipse, you must install a custom plugin for it: Android Development Tools (ADT). Visit the following URL and follow the detailed instructions that will help you install the ADT plugin into Eclipse:

http://developer.android.com/sdk/installing/installing-adt.html

Once you've followed the instructions on the preceding page, you will need to inform Eclipse about the location of the Android SDK that you downloaded earlier. So, open up the Preferences page of Eclipse and, in the Android section, fill in the location where you've placed the Android SDK.

With that done, we can now fire up the SDK Manager to install a few more necessary pieces of software. To launch the Android SDK Manager, select Android SDK Manager from the Window menu in Eclipse. The resultant window should look something like this:

By default, you will see a whole lot of packages selected, of which Android SDK Platform-tools and Android SDK Build-tools are necessary. From the rest, you must select at least one of the target Android platforms. An additional package will be required if your development environment is Windows: Google USB Driver, located under the Extras list. I would suggest skipping the documentation and samples. If you already have an Android device, I would go one step further and suggest you skip downloading the system images as well. However, if you don't have an Android device, you will need at least one system image so that you can at least test on an emulator. Once you've chosen the platforms you need, proceed to install the packages and you get a window like this:

Now, you must select Accept License and click on the Install button to install the respective packages. Once these packages have been installed, you have to add their locations to the path variable on your machine.

For Windows, modify your path variable (go to Properties | Advanced Settings | Environment Variables) to include the following:

    ;E:\Android\android-sdk\platform-tools

For Mac OS, you can add the following line to the .bash_profile file found under the home directory:

    export PATH=$PATH:/Android/android-sdk/platform-tools/

The same line can also be added to the .bashrc file found under the home directory on your Linux machine. At this point, you can use Eclipse for Android development.
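A quick hedged way to confirm that platform-tools is now on your path is to query adb; the serial number below is just an illustration:

    $ adb devices
    List of devices attached
    04e8d9a8	device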
Installing Cygwin for Windows

Developers working on Linux can skip this step, as most Linux distributions come with the make utility. Also, developers working on Mac OS may download Xcode from the Mac App Store, which will install the make utility on their Macs. We need to install Cygwin on Windows specifically for the GNU make utility. So, go to the following URL and download the installer for Cygwin:

http://www.cygwin.com/install.html

1. Once you've run the .exe file that you downloaded and get a window like this, click on the Next button.
2. The next window will ask how you would like to install the required packages. Here, select the Install from Internet option and click on Next.
3. The next window will ask where you would like to install Cygwin. I'd recommend leaving it at the default value unless you have a reason to change it. Proceed by clicking on Next.
4. In the next window, you will be asked to specify a path where the installation can download the files it requires. Fill in a suitable path of your choice and click on Next.
5. In the next window, you will be asked to specify your Internet connection. Leave it at the Direct Connection option and click on Next.
6. In the next window, you will be asked to select a mirror location from where to download the installation files. Here, select the site that is geographically closest to you and click on Next.
7. In the window that follows, expand the Devel section and search for make: The GNU version of the 'make' utility. Click on the Skip option to select this package. The version of the make utility that will be installed is now displayed in place of Skip. Your window should look something like this:
8. You can now go ahead and click the Next button to begin the download and installation of the required packages. The window should look something like this:
9. Once all the packages have been downloaded, click on Finish to close the installation.

The Android NDK

Now that we have the make utility installed, we can go ahead and download the Android NDK, which will actually build our entire C++ code base. To download the Android NDK for your respective development machine, navigate to the following URL:

https://developer.android.com/tools/sdk/ndk/index.html

Unzip the downloaded archive and place it in the same location as the Android SDK. We must now add an environment variable named NDK_ROOT that points to the root of the Android NDK. For Windows, add a new user variable NDK_ROOT with the location of the Android NDK on your filesystem as its value. You can do this by going to Properties | Advanced Settings | Environment Variables. Once you've done that, the Environment Variables window should look something like this:

I'm sure you noticed the value of the NDK_ROOT variable in the previous screenshot. The value of this variable is given in Unix style and depends on the Cygwin environment, since it will be accessed within a Cygwin bash shell while executing the build script of each Android project. Mac OS and Linux users can add the following line to their .bash_profile and .bashrc files, respectively:

    export NDK_ROOT=/Android/android-ndk-r10

We have now successfully finished setting up the environment to build our Cocos2d-x games on Android. To test this, open up a Cygwin bash terminal (on Windows) or a standard terminal (on Mac OS or Linux) and navigate to the Cocos2d-x test bed located inside the samples folder of your Cocos2d-x source. Now, navigate to the proj.android folder and run the build_native.sh file. This is what my Cygwin bash terminal looks like on a Windows 7 machine:

If you've followed the aforementioned instructions correctly, the build_native.sh script will go on to compile the C++ source files required by the TestCpp project, and will produce a single shared object (.so) file in the libs folder within the proj.android folder.
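For Windows users new to Cygwin, that test-bed build would look something like this hedged sketch, which assumes the Cocos2d-x source was extracted to drive E (Cygwin exposes Windows drives under /cygdrive):

    $ cd /cygdrive/e/cocos2d-x-2.2.5/samples/Cpp/TestCpp/proj.android
    $ ./build_native.sh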
It should look something like this: As you can see, there are a few errors with the project. If you look at the Problems view (Window | Show View | Problems) located on the bottom-half of Eclipse, you might see something like this: All these errors are due to the fact that the Android project for our game depends on Cocos2d-x's Android project for Android-specific functionality, things such as the actual OpenGL surface where everything is rendered, the music player, accelerometer functionality, and many more. So let's import the Android project for Cocos2d-x located inside the following path in your Cocos2d-x source bundle: cocos2d-x-2.2.5cocos2dxplatformandroid You can import it the same way you imported TestCpp. Once the project has been imported, it will be titled libcocos2dx in Package Explorer. Now, select Clean... from the Project menu; You will notice that when the clean operation has finished, the pumpkindefense dependency on libcocos2dx is taken care of and the project for pumpkindefense builds error-free. Running the tests on Android Running the tests is as simple as right-clicking on the TestCpp project in Package Explorer and selecting Run As | Android Application. It might take a bit more time running on an emulator as compared to an actual device, but ultimately you will have something like this: Summary In this article, you learned what necessary software components are needed to set up your workstation to build and run an Android native application. You had also set up an Android Virtual Device and ran the Cocos2d-x test bed application on it. Resources for Article: Further resources on this subject: Run Xcode Run [article] Creating Games with Cocos2d-x is Easy and 100 percent Free [article] Creating Cool Content [article]
Read more
  • 0
  • 0
  • 1628

Building the Untangle Game with Canvas and the Drawing API

Packt
06 Jul 2015
25 min read
In this article by Makzan, the author of HTML5 Game Development by Example: Beginner's Guide - Second Edition, we discuss the highlighted new feature of HTML5: the canvas element. We can treat it as a dynamic area where we can draw graphics and shapes with scripts. (For more resources related to this topic, see here.)

Images in websites have been static for years. There are animated GIFs, but they cannot interact with visitors. Canvas is dynamic. We draw and modify the context in the Canvas dynamically through the JavaScript drawing API. We can also add interaction to the Canvas and thus make games. In this article, we will focus on using new HTML5 features to create games. Also, we will take a look at a core feature, Canvas, and some basic drawing techniques. We will cover the following topics:

- Introducing the HTML5 canvas element
- Drawing a circle in Canvas
- Drawing lines in the canvas element
- Interacting with drawn objects in Canvas with mouse events

The Untangle puzzle game is a game where players are given circles with some lines connecting them. The lines may intersect each other, and the players need to drag the circles so that no lines intersect anymore. The following screenshot previews the game that we are going to achieve through this article:

You can also try the game at the following URL:

http://makzan.net/html5-games/untangle-wip-dragging/

So let's start making our Canvas game from scratch.

Drawing a circle in the Canvas

Let's start our drawing in the Canvas with the most basic shape: the circle.

Time for action – drawing color circles in the Canvas

1. First, let's set up the new environment for the example. That is, an HTML file that will contain the canvas element, a jQuery library to help us in JavaScript, a JavaScript file containing the actual drawing logic, and a style sheet:

    index.html
    js/
    js/jquery-2.1.3.js
    js/untangle.js
    js/untangle.drawing.js
    js/untangle.data.js
    js/untangle.input.js
    css/
    css/untangle.css
    images/

2. Put the following HTML code into the index.html file. It is a basic HTML document containing the canvas element:

    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>Drawing Circles in Canvas</title>
      <link rel="stylesheet" href="css/untangle.css">
    </head>
    <body>
      <header>
        <h1>Drawing in Canvas</h1>
      </header>
      <canvas id="game" width="768" height="400">
        This is an interactive game with circles and lines connecting them.
      </canvas>
      <script src="js/jquery-2.1.3.min.js"></script>
      <script src="js/untangle.data.js"></script>
      <script src="js/untangle.drawing.js"></script>
      <script src="js/untangle.input.js"></script>
      <script src="js/untangle.js"></script>
    </body>
    </html>

3. Use CSS to set the background color of the Canvas inside untangle.css:

    canvas {
      background: grey;
    }

4. In the untangle.js JavaScript file, we put a jQuery document ready function and draw a color circle inside it:

    $(document).ready(function(){
      var canvas = document.getElementById("game");
      var ctx = canvas.getContext("2d");
      ctx.fillStyle = "GOLD";
      ctx.beginPath();
      ctx.arc(100, 100, 50, 0, Math.PI*2, true);
      ctx.closePath();
      ctx.fill();
    });

5. Open the index.html file in a web browser and we will get the following screenshot:

What just happened?

We have just created a simple Canvas context with a circle on it. There are not many settings for the canvas element itself. We set the width and height of the Canvas, just as we would fix the dimensions of real drawing paper. Also, we assign an ID attribute to the Canvas for easier reference in JavaScript:

    <canvas id="game" width="768" height="400">
      This is an interactive game with circles and lines connecting them.
    </canvas>

Putting in fallback content when the web browser does not support the Canvas

Not every web browser supports the canvas element. The canvas element provides an easy way to provide fallback content if the canvas element is not supported. The content also provides meaningful information for any screen reader. Anything inside the open and close tags of the canvas element is the fallback content. This content is hidden if the web browser supports the element. Browsers that don't support canvas will instead display that fallback content. It is good practice to provide useful information in the fallback content. For instance, if the canvas tag's purpose is a dynamic picture, we may consider placing an <img> alternative there. Or we may provide links to modern web browsers so the visitor can easily upgrade their browser.
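Alongside the HTML fallback, we can also guard our script. Here's a minimal hedged sketch (not part of the book's code) that bails out when the drawing API is unavailable:

    // a sketch: detect canvas support before running any drawing logic
    var canvas = document.getElementById("game");
    if (!canvas.getContext) {
      // the browser shows the fallback content; skip the game setup
      console.log("Canvas is not supported in this browser.");
    } else {
      var ctx = canvas.getContext("2d");
      // safe to draw here
    }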
The Canvas context

When we draw in the Canvas, we actually call the drawing API of the canvas rendering context. You can think of the relationship between the Canvas and the context this way: the Canvas is the frame and the context is the real drawing surface. Currently, we have 2d, webgl, and webgl2 as the context options. In our example, we'll use the 2D drawing API by calling getContext("2d"):

    var canvas = document.getElementById("game");
    var ctx = canvas.getContext("2d");

Drawing circles and shapes with the Canvas arc function

There is no dedicated circle function for drawing a circle. Instead, the Canvas drawing API provides a function to draw arcs, including full circles. The arc function accepts the following arguments:

- x: The center point of the arc on the x axis.
- y: The center point of the arc on the y axis.
- radius: The radius is the distance between the center point and the arc's perimeter. When drawing a circle, a larger radius means a larger circle.
- startAngle: The starting point is an angle in radians. It defines where on the perimeter to start drawing the arc.
- endAngle: The ending point is an angle in radians. The arc is drawn from the position of the starting angle to this end angle.
- counter-clockwise: This is a Boolean indicating whether the arc from startAngle to endAngle is drawn in a clockwise or counter-clockwise direction. This is an optional argument with the default value false.

Converting degrees to radians

The angle arguments used in the arc function are in radians instead of degrees. If you are familiar with degrees, you will need to convert degrees into radians before putting the values into the arc function. We can convert the angle unit using the following formula:

    radians = π / 180 × degrees
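As a small illustration (not from the book's code), the formula translates directly into a helper function:

    // a sketch: convert degrees to radians for use with ctx.arc
    function degToRad(degrees) {
      return Math.PI / 180 * degrees;
    }

    // for example, a quarter circle from 0 to 90 degrees:
    // ctx.arc(100, 100, 50, degToRad(0), degToRad(90), false);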
Executing the path drawing in the Canvas

When we are calling the arc function or other path drawing functions, we are not drawing the path immediately in the Canvas. Instead, we are adding it to a list of paths. These paths will not be drawn until we execute a drawing command. There are two drawing execution commands: one to fill the paths and the other to draw the stroke. We fill the paths by calling the fill function and draw the stroke of the paths by calling the stroke function, which we will use later when drawing lines:

    ctx.fill();

Beginning a path for each style

The fill and stroke functions fill and draw the paths in the Canvas, but they do not clear the list of paths. Take the following code snippet as an example. After filling our circle with the color red, we add another circle and fill it with green. What happens is that both circles end up filled with green, instead of only the new circle:

    var canvas = document.getElementById('game');
    var ctx = canvas.getContext('2d');
    ctx.fillStyle = "red";
    ctx.arc(100, 100, 50, 0, Math.PI*2, true);
    ctx.fill();

    ctx.arc(210, 100, 50, 0, Math.PI*2, true);
    ctx.fillStyle = "green";
    ctx.fill();

This is because, when calling the second fill command, the list of paths in the Canvas contains both circles. Therefore, the fill command fills both circles with green and overrides the red circle. In order to fix this issue, we want to ensure we call beginPath before drawing each new shape. The beginPath function empties the list of paths, so the next time we call the fill and stroke commands, they will only apply to the paths added after the last beginPath.

Have a go hero

We have just discussed a code snippet where we intended to draw two circles: one in red and the other in green. The code ends up drawing both circles in green. How can we add a beginPath command to the code so that it draws one red circle and one green circle correctly?

Closing a path

The closePath function will draw a straight line from the last point of the latest path to the first point of the path. This is called closing the path. If we are only going to fill the path and are not going to draw the stroke outline, the closePath function does not affect the result. The following screenshot compares the results of drawing a half circle with and without calling closePath:

Pop quiz

Q1. Do we need to use the closePath function on the shape we are drawing if we just want to fill the color and not draw the outline stroke?

1. Yes, we need to use the closePath function.
2. No, it does not matter whether we use the closePath function.
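To see the closePath difference for yourself, here's a hedged sketch that strokes two half circles side by side, one left open and one closed:

    // a sketch: stroke a half circle with and without closePath
    ctx.beginPath();
    ctx.arc(100, 100, 50, 0, Math.PI, false);
    ctx.stroke();              // an open arc

    ctx.beginPath();
    ctx.arc(250, 100, 50, 0, Math.PI, false);
    ctx.closePath();           // a straight line closes the half circle
    ctx.stroke();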
Wrapping the circle drawing in a function

Drawing a circle is a common operation that we will use a lot. It is better to create a function to draw a circle now, instead of repeating several lines of code each time.

Time for action – putting the circle drawing code into a function

Let's make a function to draw the circle and then draw some circles in the Canvas. We are going to put code in different files to keep the code simple:

1. Open the untangle.drawing.js file in our code editor and put in the following code:

    if (untangleGame === undefined) {
      var untangleGame = {};
    }

    untangleGame.drawCircle = function(x, y, radius) {
      var ctx = untangleGame.ctx;
      ctx.fillStyle = "GOLD";
      ctx.beginPath();
      ctx.arc(x, y, radius, 0, Math.PI*2, true);
      ctx.closePath();
      ctx.fill();
    };

2. Open the untangle.data.js file and put the following code into it:

    if (untangleGame === undefined) {
      var untangleGame = {};
    }

    untangleGame.createRandomCircles = function(width, height) {
      // randomly draw 5 circles
      var circlesCount = 5;
      var circleRadius = 10;
      for (var i=0; i<circlesCount; i++) {
        var x = Math.random()*width;
        var y = Math.random()*height;
        untangleGame.drawCircle(x, y, circleRadius);
      }
    };

3. Then open the untangle.js file. Replace the original code in the JavaScript file with the following code:

    if (untangleGame === undefined) {
      var untangleGame = {};
    }

    // Entry point
    $(document).ready(function(){
      var canvas = document.getElementById("game");
      untangleGame.ctx = canvas.getContext("2d");

      var width = canvas.width;
      var height = canvas.height;

      untangleGame.createRandomCircles(width, height);
    });

4. Open the HTML file in the web browser to see the result:

What just happened?

The circle-drawing code is executed after the page is loaded and ready. We used a loop to draw several circles in random places in the Canvas.

Dividing code into files

We are putting the code into different files. Currently, there are the untangle.js, untangle.drawing.js, and untangle.data.js files. The untangle.js file is the entry point of the game. We put logic related to context drawing into untangle.drawing.js, and logic related to data manipulation into the untangle.data.js file. We use the untangleGame object as the global object that's accessed across all the files. At the beginning of each JavaScript file, we have the following code to create this object if it does not exist:

    if (untangleGame === undefined) {
      var untangleGame = {};
    }

Generating random numbers in JavaScript

In game development, we often use random functions. We may want to randomly summon a monster for the player to fight, we may want to randomly drop a reward when the player makes progress, and we may want a random number to be the result of rolling a dice. In this code, we place the circles randomly in the Canvas. To generate a random number in JavaScript, we use the Math.random() function. There is no argument in the random function. It always returns a floating-point number that is equal to or greater than 0 and smaller than 1. There are two common ways to use the random function: generating a random number within a given range, and generating a true or false value.

- Getting a random integer starting at A: Math.floor(Math.random()*B)+A. The Math.floor() function cuts off the decimal part of the given number. Take Math.floor(Math.random()*10)+5 as an example. Math.random() returns a decimal number from 0 to 0.9999.... Math.random()*10 is then a decimal number from 0 to 9.9999..., so Math.floor(Math.random()*10) is an integer from 0 to 9. Finally, Math.floor(Math.random()*10)+5 is an integer from 5 to 14; in general, the result ranges over B consecutive integers starting at A.
- Getting a random Boolean: (Math.random() > 0.495) gives roughly 50 percent false and 50 percent true. We can further adjust the true/false ratio: (Math.random() > 0.7) gives roughly 70 percent false and 30 percent true.
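Here's a small hedged sketch (not from the book) that wraps these two patterns into reusable helpers:

    // a sketch: helpers for the two common random patterns
    function randomInt(start, count) {
      // an integer from start to start+count-1
      return Math.floor(Math.random() * count) + start;
    }

    function randomBool(trueRatio) {
      // true with roughly the given probability, e.g. randomBool(0.3)
      return Math.random() < trueRatio;
    }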
Saving the circle position

When we are developing a DOM-based game, we often put the game objects into DIV elements and access them later in the code logic. It is a different story in Canvas-based game development. In order to access our game objects after they are drawn in the Canvas, we need to remember their states ourselves. Let's say we want to know how many circles are drawn and where they are; we will need an array to store their positions.

Time for action – saving the circle position

1. Open the untangle.data.js file in the text editor.
2. Add the following circle object definition code to the JavaScript file:

    untangleGame.Circle = function(x, y, radius){
      this.x = x;
      this.y = y;
      this.radius = radius;
    };

3. Now we need an array to store the circles' positions. Add a new array to the untangleGame object:

    untangleGame.circles = [];

4. While drawing every circle in the Canvas, we save the position of the circle in the circles array. Add the following line before calling the drawCircle function, inside the createRandomCircles function:

    untangleGame.circles.push(new untangleGame.Circle(x, y, circleRadius));

5. After these steps, we should have the following code in the untangle.data.js file:

    if (untangleGame === undefined) {
      var untangleGame = {};
    }

    untangleGame.circles = [];

    untangleGame.Circle = function(x, y, radius){
      this.x = x;
      this.y = y;
      this.radius = radius;
    };

    untangleGame.createRandomCircles = function(width, height) {
      // randomly draw 5 circles
      var circlesCount = 5;
      var circleRadius = 10;
      for (var i=0; i<circlesCount; i++) {
        var x = Math.random()*width;
        var y = Math.random()*height;
        untangleGame.circles.push(new untangleGame.Circle(x, y, circleRadius));
        untangleGame.drawCircle(x, y, circleRadius);
      }
    };

6. Now we can test the code in the web browser. There is no visual difference between this code and the last example of drawing random circles in the Canvas. This is because we are saving the circles but have not changed any code that affects the appearance. We just make sure it looks the same and there are no new errors.

What just happened?

We saved the position and radius of each circle. This is because Canvas drawing is an immediate mode: we cannot directly access objects drawn in the Canvas, because there is no such information. All lines and shapes are drawn on the Canvas as pixels, and we cannot access the lines or shapes as individual objects. Imagine that we are drawing on a real canvas. We cannot just move a house in an oil painting, and in the same way we cannot directly manipulate any drawn items in the canvas element.

Defining a basic class definition in JavaScript

We can use object-oriented programming in JavaScript. We can define some object structures for our use. The Circle object provides a data structure for us to easily store a collection of x and y positions and radii. After defining the Circle object, we can create a new Circle instance with an x, y, and radius value using the following code:

    var circle1 = new untangleGame.Circle(100, 200, 10);

For more detailed usage of object-oriented programming in JavaScript, please check out the Mozilla Developer Center at the following link:

https://developer.mozilla.org/en/Introduction_to_Object-Oriented_JavaScript

Have a go hero

We have drawn several circles randomly on the Canvas. They are in the same style and of the same size. How about randomizing the size of the circles, and filling the circles with different colors? Try modifying the code and playing with the drawing API.
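One possible direction for that exercise, sketched with assumptions: the palette is arbitrary, and the color parameter on drawCircle is our own addition, not part of the book's code:

    // a sketch: random radius and color for each circle
    var radius = Math.floor(Math.random() * 20) + 5;   // 5 to 24 px
    var colors = ["GOLD", "TOMATO", "SKYBLUE"];
    var color = colors[Math.floor(Math.random() * colors.length)];
    // drawCircle would then accept the color as an extra argument:
    // untangleGame.drawCircle(x, y, radius, color);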
Drawing lines in the Canvas

Now that we have several circles, how about connecting them with lines? Let's draw a straight line between each pair of circles.

Time for action – drawing straight lines between each circle

1. Open the index.html file we just used in the circle-drawing example. Change the wording in h1 from drawing circles in Canvas to drawing lines in Canvas.
2. Open the untangle.data.js JavaScript file. We define a Line class to store the information that we need for each line:

    untangleGame.Line = function(startPoint, endPoint, thickness) {
      this.startPoint = startPoint;
      this.endPoint = endPoint;
      this.thickness = thickness;
    };

3. Save the file and switch to the untangle.drawing.js file. We need two more variables. Add the following lines to the JavaScript file:

    untangleGame.thinLineThickness = 1;
    untangleGame.lines = [];

4. We add the following drawLine function to our code, after the existing drawCircle function in the untangle.drawing.js file:

    untangleGame.drawLine = function(x1, y1, x2, y2, thickness) {
      var ctx = untangleGame.ctx;
      ctx.beginPath();
      ctx.moveTo(x1, y1);
      ctx.lineTo(x2, y2);
      ctx.lineWidth = thickness;
      ctx.strokeStyle = "#cfc";
      ctx.stroke();
    };

5. Then we define a new function that iterates the circle list and draws a line between each pair of circles. Append the following code to the JavaScript file:

    untangleGame.connectCircles = function() {
      // connect the circles to each other with lines
      untangleGame.lines.length = 0;
      for (var i=0; i<untangleGame.circles.length; i++) {
        var startPoint = untangleGame.circles[i];
        for (var j=0; j<i; j++) {
          var endPoint = untangleGame.circles[j];
          untangleGame.drawLine(startPoint.x, startPoint.y,
            endPoint.x, endPoint.y, untangleGame.thinLineThickness);
          untangleGame.lines.push(new untangleGame.Line(startPoint,
            endPoint, untangleGame.thinLineThickness));
        }
      }
    };

6. Finally, open the untangle.js file and add the following code before the end of the jQuery document ready function, after we have called the untangleGame.createRandomCircles function:

    untangleGame.connectCircles();

7. Test the code in the web browser. We should see lines connecting each randomly placed circle.

What just happened?

We have enhanced our code so that lines connect each generated circle. You may find a working example at the following URL:

http://makzan.net/html5-games/untangle-wip-connect-lines/

Similar to the way we saved the circle positions, we have an array to save every line segment we draw. We declare a Line class definition to store the essential information of a line segment: the start point, the end point, and the thickness of the line.

Introducing the line drawing API

These are the drawing APIs we use to draw and style the line stroke:

- moveTo: The moveTo function is like holding a pen in our hand and moving it on top of the paper without touching the paper with the pen.
- lineTo: This function is like putting the pen down on the paper and drawing a straight line to the destination point.
- lineWidth: The lineWidth property sets the thickness of the strokes we draw afterwards.
- stroke: The stroke function is used to execute the drawing. We set up a collection of moveTo, lineTo, and styling calls, and finally call the stroke function to execute them on the Canvas.

We usually draw lines by using moveTo and lineTo in pairs. Just like in the real world, we move our pen on top of the paper to the starting point of a line and put the pen down to draw a line. Then we keep drawing, or move to another position before drawing the next line. This is exactly the flow in which we draw lines on the Canvas. We have just demonstrated drawing a simple line. We can set different line styles for lines in the Canvas. For more details on line styling, please read the styling guide in W3C at http://www.w3.org/TR/2dcontext/#line-styles and the Mozilla Developer Center at https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors.
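As a taste of those styling options, here's a hedged sketch of a dashed, round-capped stroke using the standard context properties:

    // a sketch: a dashed line with rounded ends
    ctx.beginPath();
    ctx.moveTo(50, 50);
    ctx.lineTo(200, 120);
    ctx.lineWidth = 4;
    ctx.lineCap = "round";     // rounded stroke ends
    ctx.setLineDash([8, 4]);   // 8px dash, 4px gap
    ctx.strokeStyle = "#cfc";
    ctx.stroke();
    ctx.setLineDash([]);       // reset so later strokes are solid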
Using mouse events to interact with objects drawn in the Canvas

So far, we have shown that we can draw shapes in the Canvas dynamically based on our logic. There is one part missing in the game development: the input. Imagine that we can drag the circles around on the Canvas, and the connected lines will follow the circles. In this section, we will add mouse events to the canvas to make our circles draggable.

Time for action – dragging the circles in the Canvas

1. Let's continue with our previous code. Open the untangle.drawing.js file. We need a function to clear all the drawings in the Canvas. Add the following function to the end of the file:

    untangleGame.clear = function() {
      var ctx = untangleGame.ctx;
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    };

2. We also need two more functions that draw all known circles and lines. Append the following code to the untangle.drawing.js file:

    untangleGame.drawAllLines = function(){
      // draw all remembered lines
      for (var i=0; i<untangleGame.lines.length; i++) {
        var line = untangleGame.lines[i];
        var startPoint = line.startPoint;
        var endPoint = line.endPoint;
        var thickness = line.thickness;
        untangleGame.drawLine(startPoint.x, startPoint.y,
          endPoint.x, endPoint.y, thickness);
      }
    };

    untangleGame.drawAllCircles = function() {
      // draw all remembered circles
      for (var i=0; i<untangleGame.circles.length; i++) {
        var circle = untangleGame.circles[i];
        untangleGame.drawCircle(circle.x, circle.y, circle.radius);
      }
    };

3. We are done with the untangle.drawing.js file. Let's switch to the untangle.js file. Inside the jQuery document ready function, before the end of the function, we add the following code, which creates a game loop to keep drawing the circles and lines:

    // set up an interval to loop the game loop
    setInterval(gameloop, 30);

    function gameloop() {
      // clear the Canvas before re-drawing.
      untangleGame.clear();
      untangleGame.drawAllLines();
      untangleGame.drawAllCircles();
    }

4. Before moving on to the input handling code implementation, let's add the following line to the jQuery document ready function in the untangle.js file, which calls the handleInput function that we will define:

    untangleGame.handleInput();
$("#game").bind("mousedown", function(e) {    var canvasPosition = $(this).offset();    var mouseX = e.pageX - canvasPosition.left;    var mouseY = e.pageY - canvasPosition.top;      for(var i=0;i<untangleGame.circles.length;i++) {      var circleX = untangleGame.circles[i].x;      var circleY = untangleGame.circles[i].y;      var radius = untangleGame.circles[i].radius;      if (Math.pow(mouseX-circleX,2) + Math.pow(        mouseY-circleY,2) < Math.pow(radius,2)) {        untangleGame.targetCircleIndex = i;        break;      }    } });   // we move the target dragging circle // when the mouse is moving $("#game").bind("mousemove", function(e) {    if (untangleGame.targetCircleIndex !== undefined) {      var canvasPosition = $(this).offset();      var mouseX = e.pageX - canvasPosition.left;      var mouseY = e.pageY - canvasPosition.top;      var circle = untangleGame.circles[        untangleGame.targetCircleIndex];      circle.x = mouseX;      circle.y = mouseY;    }    untangleGame.connectCircles(); });   // We clear the dragging circle data when mouse is up $("#game").bind("mouseup", function(e) {    untangleGame.targetCircleIndex = undefined; }); }; Open index.html in a web browser. There should be five circles with lines connecting them. Try dragging the circles. The dragged circle will follow the mouse cursor and the connected lines will follow too. What just happened? We have set up three mouse event listeners. They are the mouse down, move, and up events. We also created the game loop, which updates the Canvas drawing based on the new position of the circles. You can view the example's current progress at: http://makzan.net/html5-games/untangle-wip-dragging-basic/. Detecting mouse events in circles in the Canvas After discussing the difference between DOM-based development and Canvas-based development, we cannot directly listen to the mouse events of any shapes drawn in the Canvas. There is no such thing. We cannot monitor the event in any shapes drawn in the Canvas. We can only get the mouse event of the canvas element and calculate the relative position of the Canvas. Then we change the states of the game objects according to the mouse's position and finally redraw it on the Canvas. How do we know we are clicking on a circle? We can use the point-in-circle formula. This is to check the distance between the center point of the circle and the mouse position. The mouse clicks on the circle when the distance is less than the circle's radius. We use this formula to get the distance between two points: Distance = (x2-x1)2 + (y2-y1)2. The following graph shows that when the distance between the center point and the mouse cursor is smaller than the radius, the cursor is in the circle: The following code we used explains how we can apply distance checking to know whether the mouse cursor is inside the circle in the mouse down event handler: if (Math.pow(mouseX-circleX,2) + Math.pow(mouseY-circleY,2) < Math.pow(radius,2)) { untangleGame.targetCircleIndex = i; break; } Please note that Math.pow is an expensive function that may hurt performance in some scenarios. If performance is a concern, we may use the bounding box collision checking. When we know that the mouse cursor is pressing the circle in the Canvas, we mark it as the targeted circle to be dragged on the mouse move event. During the mouse move event handler, we update the target dragged circle's position to the latest cursor position. When the mouse is up, we clear the target circle's reference. Pop quiz Q1. 
What just happened?

We have set up three mouse event listeners: the mouse down, move, and up events. We also created the game loop, which updates the Canvas drawing based on the new positions of the circles. You can view the example's current progress at: http://makzan.net/html5-games/untangle-wip-dragging-basic/.

Detecting mouse events in circles in the Canvas

As discussed when comparing DOM-based and Canvas-based development, we cannot directly listen to mouse events on any shapes drawn in the Canvas. There is no such thing. We can only get the mouse event of the canvas element and calculate the relative position within the Canvas. Then we change the states of the game objects according to the mouse's position and finally redraw the Canvas.

How do we know we are clicking on a circle? We can use the point-in-circle formula. This checks the distance between the center point of the circle and the mouse position. The mouse clicks on the circle when the distance is less than the circle's radius. We use the following formula to get the distance between two points:

    distance = √((x2-x1)² + (y2-y1)²)

The following graph shows that when the distance between the center point and the mouse cursor is smaller than the radius, the cursor is in the circle:

The following code we used shows how we apply distance checking to know whether the mouse cursor is inside the circle in the mouse down event handler; note that we compare squared values on both sides, which lets us skip the square root:

    if (Math.pow(mouseX-circleX, 2) + Math.pow(mouseY-circleY, 2)
        < Math.pow(radius, 2)) {
      untangleGame.targetCircleIndex = i;
      break;
    }

Please note that Math.pow is an expensive function that may hurt performance in some scenarios. If performance is a concern, we may use bounding box collision checking instead. When we know that the mouse cursor is pressing on a circle in the Canvas, we mark it as the targeted circle to be dragged in the mouse move event. During the mouse move event handler, we update the targeted circle's position to the latest cursor position. When the mouse is up, we clear the target circle's reference.

Pop quiz

Q1. Can we directly access an already drawn shape in the Canvas?

1. Yes
2. No

Q2. Which method can we use to check whether a point is inside a circle?

1. The coordinate of the point is smaller than the coordinate of the center of the circle.
2. The distance between the point and the center of the circle is smaller than the circle's radius.
3. The x coordinate of the point is smaller than the circle's radius.
4. The distance between the point and the center of the circle is bigger than the circle's radius.

Game loop

The game loop is used to redraw the Canvas to present the latest game state. If we do not redraw the Canvas after changing the state, say the position of the circles, we will not see the change.

Clearing the Canvas

When we drag a circle, we redraw the Canvas. The problem is that the already drawn shapes on the Canvas won't disappear automatically. We would keep adding new paths to the Canvas and finally mess up everything on it. The following screenshot is what would happen if we kept dragging the circles without clearing the Canvas on every redraw:

Since we have saved the entire game state in JavaScript, we can safely clear the whole Canvas and draw the updated lines and circles with the latest game state. To clear the Canvas, we use the clearRect function provided by the Canvas drawing API. The clearRect function clears a rectangular area by providing a rectangle clipping region. It accepts the following arguments as the clipping region:

    context.clearRect(x, y, width, height)

- x: The top left point of the rectangular clipping region, on the x axis.
- y: The top left point of the rectangular clipping region, on the y axis.
- width: The width of the rectangular region.
- height: The height of the rectangular region.

The x and y values set the top left position of the region to be cleared. The width and height values define how much area is to be cleared. To clear the entire Canvas, we can provide (0,0) as the top left position and the width and height of the Canvas to the clearRect function. The following code clears all things drawn on the entire Canvas:

    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

Pop quiz

Q1. Can we clear a portion of the Canvas by using the clearRect function?

1. Yes
2. No

Q2. Does the following code clear things on the drawn Canvas?

    ctx.clearRect(0, 0, ctx.canvas.width, 0);

1. Yes
2. No

Summary

You learned a lot in this article about drawing shapes and creating interaction with the new HTML5 canvas element and the drawing API. Specifically, you learned to draw circles and lines in the Canvas. We added mouse events and dragging interaction to the paths drawn in the Canvas. Finally, we succeeded in developing the dragging mechanic of the Untangle puzzle game.

Resources for Article:

Further resources on this subject:

Improving the Snake Game [article]
Playing with Particles [article]
Making Money with Your Game [article]