Looking Good – The Graphical Interface

Packt
16 Feb 2016
45 min read
We will start by creating a simple Tic-tac-toe game, using the basic pieces of GUI that Unity provides. Following this, we will discuss how we can change the styles of our GUI controls to improve the look of our game. We will also explore some tips and tricks to handle the many different screen sizes of Android devices. Finally, we will learn about a much quicker way to put our games on the device. With all that said, let's jump in. In this article, we will cover the following topics:

- User preferences
- Buttons, text, and images
- Dynamic GUI positioning
- Build and run

In this article, we will be creating a new project in Unity. The first section here will walk you through its creation and setup.

Creating a Tic-tac-toe game

The project for this article is a simple Tic-tac-toe style game, similar to what any of us might play on paper. As with anything else, there are several ways in which you can make this game. We are going to use Unity's uGUI system in order to better understand how to create a GUI for any of our other games.

The game board

The basic Tic-tac-toe game involves two players and a 3 x 3 grid. The players take turns filling squares with Xs and Os. The player who first fills a line of three squares with their letter wins the game. If all squares are filled without a player achieving a line of three, the game is a tie.

Let's start with the following steps to create our game board:

The first thing to do is to create a project for this article. So, start up Unity and we will do just that. If you have been following along so far, Unity should boot up into the last project that was open. This isn't a bad feature, but it can become extremely annoying. Think of it like this: you have been working on a project for a while and it has grown large. Now you need to quickly open something else, but Unity defaults to your huge project.
If you wait for it to open before you can work on anything else, it can consume a lot of time. To change this feature, go to the top of the Unity window and click on Edit followed by Preferences. This is the same place where we changed our script editor's preferences. This time, though, we are going to change settings in the General tab. The following screenshot shows the options that are present under the General tab. At this moment, our primary concern is the Load Previous Project on Startup option; however, we will still cover all of the options in turn. All the options under the General tab are explained in detail as follows:

- Auto Refresh: This is one of the best features of Unity. When an asset is changed outside of Unity, this option lets Unity automatically detect the change and refresh the asset inside your project.
- Load Previous Project on Startup: This is a great option, and you should make sure that it is unchecked whenever you install Unity. When checked, Unity will immediately open the last project you worked on rather than the Project Wizard.
- Compress Assets on Import: This is the checkbox for automatically compressing your game assets when they are first imported to Unity.
- Editor Analytics: This checkbox is for Unity's anonymous usage statistics. Leave it checked and the Unity Editor will occasionally send information to Unity. It doesn't hurt anything to leave it on and it helps the Unity team make the Unity Editor better; however, it comes down to personal preference.
- Show Asset Store search hits: This setting is only relevant if you plan to use the Asset Store, which can be a great source of assets and tools for any game; we are not going to use it here, though. The option does what the name suggests: it controls whether the number of results is displayed when you search the Asset Store from within the Unity Editor.
- Verify Saving Assets: This is a good one to leave off.
If this is on, every time you click on Save in Unity, a dialog box will pop up so that you can make sure to save any and all of the assets that have changed since your last save. This option is not so much about your models and textures; it is concerned with Unity's internal files, materials, and prefabs. It's best to leave it off for now.

- Skin (Pro Only): This option only applies to Unity Pro users. It gives the option to switch between the light and dark versions of the Unity Editor. It is purely cosmetic, so go with your gut on this one.

With your preferences set, now go to File and then select New Project. Click on the Browse... button to pick a location and name for the new project. We will not be using any of the included packages, so click on Create and we can get on with it.

By changing a few simple options, we can save ourselves a lot of trouble later. This may not seem like a big deal for the simple projects in this article but, for large and complex projects, not choosing the correct options can cause a lot of hassle, even if you just want to make a quick switch between projects.

Creating the board

With the new project created, we have a clean slate to create our game. Before we can create the core functionality, we need to set up some structure in our scene for our game to work and our players to interact with.

Once Unity finishes initializing the new project, we need to create a new canvas. We can do this by navigating to GameObject | UI | Canvas. The whole of Unity's uGUI system requires a canvas in order to draw anything on the screen. It has a few key components, as you can see in the following Inspector window, which allow it and everything else in your interface to work.

- Rect Transform: This is a special type of the normal Transform component that you will find on nearly every other object that you will use in your games.
It keeps track of the object's position on screen, its size, its rotation, the pivot point around which it will rotate, and how it will behave when the screen size changes. By default, the Rect Transform for a canvas is locked to the whole screen's size.

- Canvas: This component controls how it and the interface elements it controls interact with the camera and your scene. You can change this by adjusting Render Mode. The default mode, Screen Space – Overlay, means that everything will be drawn on screen, on top of everything else in the scene. The Screen Space – Camera mode will draw everything at a specific distance from the camera. This allows your interface to be affected by the perspective of the camera, but any models that are closer to the camera will appear in front of it. The World Space mode draws the canvas and the elements it controls in the world, just like any of the models in your scene.
- Graphic Raycaster: This is the component that lets you actually interact with and click on your various interface elements.

When you added the canvas, an extra object called EventSystem was also created. This is what allows our buttons and other interface elements to interact with our scripts. If you ever accidentally delete it, you can recreate it by going to the top of Unity and navigating to GameObject | UI | EventSystem.

Next, we need to adjust the way the Unity Editor displays our game so that we can easily make our game board. To do this, switch to the Game view by clicking on its tab at the top of the Scene view. Then, click on the button that says Free Aspect and select the option near the bottom: 3:2 Landscape (3:2). Most of the mobile devices your games will be played on use a screen that approximates this ratio, and the rest will not see much distortion in your game.

To allow our game to adjust to various resolutions, we need to add a new component to our canvas object.
With it selected in the Hierarchy panel, click on Add Component in the Inspector panel and navigate to Layout | Canvas Scaler. This component lets us work from a base screen resolution, automatically scaling our GUI as the device changes. To select a base resolution, select Scale With Screen Size from the UI Scale Mode drop-down list. Next, let's put 960 for X and 640 for Y. It is better to work from a larger resolution than a smaller one; if your resolution is too small, all your GUI elements will look fuzzy when they are scaled up for high-resolution devices.

To keep things organized, we need to create three empty GameObjects. Go back to the top of Unity and select Create Empty three times under GameObject. In the Hierarchy tab, click and drag them to our canvas to make them the canvas's children. To make each of them usable for organizing our GUI elements, we need to add the Rect Transform component. Find it by navigating to Add Component | Layout | Rect Transform in the Inspector for each. To rename them, click on the name at the top of the Inspector and type in a new name. Name one Board, another Buttons, and the last one Squares. Next, make Buttons and Squares children of Board. The Buttons element will hold all of the pieces of our game board that are clickable, while Squares will hold the squares that have already been selected.

To keep the Board element in the same place as the device changes, we need to change the way it anchors to its parent. Click on the box with a red cross and a yellow dot in the center at the top left of Rect Transform to expand the Anchor Presets menu. Each of these options affects which corner of the parent the element will stick to as the screen changes size. We want to select the bottom-right option with four arrows, one in each direction. This will make it stretch with the parent element. Make the same change to Buttons and Squares as well. Set the Left, Top, Right, and Bottom values of each of these objects to 0.
Also, make sure that Rotation is set to 0 on every axis and Scale is set to 1; otherwise, our interface may be scaled oddly when we work or play on it. Next, we need to change the anchor point of the board. If Anchors is not expanded, click on the little triangle on its left-hand side to expand it. Either way, the Max X value needs to be set to 0.667 so that our board will be a square and cover the left two-thirds of our screen.

This game board is the base around which the rest of our project will be created. Without it, the game won't be playable. The game squares use it to draw themselves on screen and anchor themselves to relevant places. Later, when we create menus, it is needed to make sure that a player only sees what we need them to be interacting with at that moment.

Game squares

Now that we have our base game board in place, we need the actual game squares. Without them, it is going to be rather hard to play the game. We need to create nine buttons for the player to click on, nine images for the backgrounds of the selected squares, and nine texts to display which player controls each square. To create them and set them up, perform these steps:

Navigate to GameObject | UI just like we did for the canvas, but this time select Button, Image, and Text to create everything we need. Each of the image objects needs one of the text objects as a child. Then, all of the images must be children of the Squares object and the buttons must be children of the Buttons object. All of the buttons and images need a number in their name so that we can keep them organized. Name the buttons Button0 through Button8 and the images Square0 through Square8.

The next step is to lay out our board so that we can keep things organized and in sync with our programming. We need to set each numbered set specifically. But first, pick the crossed arrows from the bottom-right corner of Anchor Presets for all of them and ensure that their Left, Top, Right, and Bottom values are set to 0.
To set each of our buttons and squares at the right place, just match the numbers to the following table. The result will be that all the squares will be in order, starting at the top left and ending at the bottom right:

Square | Min X | Min Y | Max X | Max Y
0      | 0     | 0.67  | 0.33  | 1
1      | 0.33  | 0.67  | 0.67  | 1
2      | 0.67  | 0.67  | 1     | 1
3      | 0     | 0.33  | 0.33  | 0.67
4      | 0.33  | 0.33  | 0.67  | 0.67
5      | 0.67  | 0.33  | 1     | 0.67
6      | 0     | 0     | 0.33  | 0.33
7      | 0.33  | 0     | 0.67  | 0.33
8      | 0.67  | 0     | 1     | 0.33

The last thing we need to add is an indicator to show whose turn it is. Create another Text object just like we did before and rename it Turn Indicator. After you make sure that the Left, Top, Right, and Bottom values are set to 0 again, set the Anchor Preset to the stretch-in-both-directions option again. Finally, set Min X under Anchors to 0.67.

We now have everything that we need to play a basic game of Tic-tac-toe. To check it out, select the Squares object and uncheck the box in the top-left corner of its Inspector to turn it off. When you hit Play now, you should be able to see your whole game board and click on the buttons. You can even use Unity Remote to test it with the touch settings. If you have not already done so, it would be a good idea to save the scene before continuing.

The game squares are the last piece of the setup for our initial game. It almost looks like a playable game now. We just need to add a few scripts and we will be able to play all the games of Tic-tac-toe that we could ever desire.

Controlling the game

Having a game board is one of the most important parts of creating any game. However, it does us no good if we can't control what happens when its various buttons are pressed. Let's create some scripts and write some code to fix this now:

Create two new scripts in the Project panel. Name the new scripts TicTacToeControl and SquareState. Open them and clear out the default functions. The SquareState script will hold the possible states of each square of our game board.
To do this, clear absolutely everything out of the script, including the using UnityEngine line and the public class SquareState line, so that we can replace it with a simple enumeration. An enumeration is just a list of potential values. This one is concerned with the player who controls a square. It will allow us to keep track of whether X controls it, O controls it, or whether it is clear. The Clear statement comes first and is, therefore, the default state:

public enum SquareState
{
    Clear,
    XControl,
    OControl
}

In our other script, TicTacToeControl, we need to start by adding an extra line at the very beginning, right under using UnityEngine. This line lets our code interact with the various GUI elements; most importantly for this game, it allows us to change the text showing who controls a square and whose turn it is:

using UnityEngine.UI;

Next, we need two variables that will largely control the flow of the game. They need to be added in place of the two default functions. The first defines our game board; it is an array of nine squares to keep track of who owns what. The second keeps track of whose turn it is. When the Boolean is true, the X player gets a turn. When it is false, the O player gets a turn:

public SquareState[] board = new SquareState[9];
public bool xTurn = true;

The next variable will let us change the text on screen for whose turn it is:

public Text turnIndicatorLandscape;

The next three variables will give us access to all of the GUI objects that we set up in the last section, allowing us to change the image and text based on who owns each square. We can also turn the buttons and squares on and off as they are clicked.
All of them are marked with Landscape so that we will be able to keep them straight later, when we have a second board for the Portrait orientation of devices:

public GameObject[] buttonsLandscape;
public Image[] squaresLandscape;
public Text[] squareTextsLandscape;

The last two variables for now will give us access to the images that we need to change the backgrounds:

public Sprite oImage;
public Sprite xImage;

Our first function for this script will be called every time a button is clicked. It receives the number of the button clicked, and the first thing it does is turn the button off and the square on:

public void ButtonClick(int squareIndex)
{
    buttonsLandscape[squareIndex].SetActive(false);
    squaresLandscape[squareIndex].gameObject.SetActive(true);

Next, the function checks the Boolean that we created earlier to see whose turn it is. If it is the X player's turn, the square is set to use the appropriate image and text to indicate their control. It then marks, on the script's internal board, who controls the square, before finally switching to the O player's turn:

    if (xTurn)
    {
        squaresLandscape[squareIndex].sprite = xImage;
        squareTextsLandscape[squareIndex].text = "X";
        board[squareIndex] = SquareState.XControl;
        xTurn = false;
        turnIndicatorLandscape.text = "O's Turn";
    }

This next block of code does the same thing as the previous one, except that it marks control for the O player and changes the turn to the X player:

    else
    {
        squaresLandscape[squareIndex].sprite = oImage;
        squareTextsLandscape[squareIndex].text = "O";
        board[squareIndex] = SquareState.OControl;
        xTurn = true;
        turnIndicatorLandscape.text = "X's Turn";
    }
}

That is it for the code right now. Next, we need to return to the Unity Editor and set up our new script in the scene. You can do this by creating another empty GameObject and renaming it GameControl. Add our TicTacToeControl script to it by dragging the script from the Project panel and dropping it in the Inspector panel while the object is selected.
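The chapter wires each button to ButtonClick by hand in the Inspector. As a sketch of an alternative approach, the same wiring can be done from code with Button.onClick. This is not part of the chapter's steps; the ButtonWiring class name and its public fields are assumptions for illustration:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper: wires Button0 through Button8 to ButtonClick in a loop
// instead of connecting each one in the Inspector.
public class ButtonWiring : MonoBehaviour
{
    public TicTacToeControl control; // the script on the GameControl object
    public Button[] buttons;         // Button0 through Button8, in order

    void Awake()
    {
        for (int i = 0; i < buttons.Length; i++)
        {
            // Capture a copy of the loop counter; using i directly in the
            // lambda would make every button report the final value of i.
            int index = i;
            buttons[i].onClick.AddListener(() => control.ButtonClick(index));
        }
    }
}
```

Either way works; the Inspector route the chapter takes is easier to follow visually, while the code route avoids ordering mistakes when hooking up nine nearly identical buttons.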
We now need to attach all of the object references that our script needs in order to actually work. We don't need to touch the Board or X Turn slots in the Inspector panel, but the Turn Indicator object does need to be dragged from the Hierarchy tab to the Turn Indicator Landscape slot in the Inspector panel. Next, expand the Buttons Landscape, Squares Landscape, and Square Texts Landscape settings and set each Size slot to 9. To each of the new slots, we need to drag the relevant object from the Hierarchy tab. The Element 0 object under Buttons Landscape gets Button0, Element 1 gets Button1, and so on. Do this for all of the buttons, images, and texts. Ensure that you put them in the right order, or else our script will change the wrong elements while the player is playing.

Next, we need a few images. If you have not already done so, import the starting assets for this article by going to the top of Unity, navigating to Assets | Import New Asset, and selecting the files to import them. You will need to navigate to and select each one at a time. We have ONormal and XNormal for indicating control of a square. The ButtonNormal image is used when the button is just sitting there and ButtonActive is used when the player touches the button. The Title image is going to be used for our main menu a little bit later.

In order to use any of these images in our game, we need to change their import settings. Select each of them in turn and find the Texture Type dropdown in the Inspector panel. We need to change them from Texture to Sprite (2D \ uGUI). We can leave the rest of the settings at their defaults:

- Sprite Mode: This option is used if we have a sprite sheet with multiple elements in one image.
- Packing Tag: This option is used for grouping and finding sprites in a sheet.
- Pixels To Units: This option affects the size of the sprite when it is rendered in world space.
- Pivot: This option simply changes the point around which the image will rotate.
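Changing the Texture Type by hand is fine for four images, but it gets tedious in bigger projects. As a hedged sketch (the menu path, class name, and asset paths here are all assumptions, and the exact importer API varies between Unity versions), an editor script can apply the same Sprite setting in one pass:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: marks the article's four images as Sprites in one go.
// Assumes the files sit directly under Assets/ as .png files.
public static class SpriteImportFixer
{
    [MenuItem("Tools/Mark UI Images As Sprites")]
    static void MarkAsSprites()
    {
        string[] names = { "ONormal", "XNormal", "ButtonNormal", "ButtonActive" };
        foreach (string name in names)
        {
            var importer = AssetImporter.GetAtPath("Assets/" + name + ".png")
                           as TextureImporter;
            if (importer == null) continue; // file missing or not a texture

            // Same effect as picking Sprite (2D \ uGUI) in the Inspector.
            importer.textureType = TextureImporterType.Sprite;
            importer.SaveAndReimport();
        }
    }
}
```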
For the four square images, we can click on Sprite Editor to change how the border appears when they are rendered. When clicked, a new window opens that shows our image with some green lines at the edges and some information about it in the lower right. We can drag these green lines to change the Border property. Anything outside the green lines will not be stretched with the image as it fills spaces that are larger than it. A setting of around 13 for each side will keep our whole border from stretching. Once you make any changes, ensure that you hit the Apply button to commit them.

Next, select the GameControl object once more and drag the ONormal image to the O Image slot and the XNormal image to the X Image slot.

Each of the buttons needs to be connected to the script. To do this, select each of them from the Hierarchy in turn and click on the plus sign at the bottom-right corner of the On Click () list in its Inspector. We then need to click on the little circle to the left of No Function and select GameControl from the list in the new window. Now navigate to No Function | TicTacToeControl | ButtonClick (int) to connect the function in our code to the button. Finally, for each of the buttons, put the number of the button in the number slot to the right of the function list.

To keep everything organized, rename your Canvas object GameBoard_Landscape. Before we can test it out, be sure that the Squares object is turned on by checking the box in the top-left corner of its Inspector. Also, uncheck the box of each of its image children.

This may not look like the best game in the world, but it is playable. We have buttons that call functions in our scripts. The turn indicator changes as we play. Also, each square indicates who controls it after it is selected. With a little more work, this game could look and work great.

Messing with fonts

Now that we have a basic working game, we need to make it look a little better.
We are going to add our button images and pick some new font sizes and colors to make everything more readable. Let's start with the buttons. Select one of the Button elements and you will see in the Inspector that it is made up of an Image (Script) component and a Button (Script) component. The first component controls how the GUI element appears when it just sits there. The second controls how it changes when a player interacts with it and what bit of functionality this triggers.

- Source Image: This is the base image that is displayed when the element just sits there, untouched by the player.
- Color: This controls the tinting and fading of the image that is being used.
- Material: This lets you use a texture or shader that might otherwise be used on 3D models.
- Image Type: This determines how the image will be stretched to fill the available space. Usually, it will be set to Sliced, which is for images that use a border; these can optionally be filled with a color based on the Fill Center checkbox. Otherwise, it will often be set to Simple, for example when you are using a normal image, in which case you can use the Preserve Aspect checkbox to prevent it from being stretched by oddly sized Rect Transforms.
- Interactable: This simply toggles whether or not the player is able to click on the button and trigger functionality.
- Transition: This changes how the button reacts as the player interacts with it. Color Tint causes the button to change color as it is interacted with. Sprite Swap changes the image when it is interacted with. Animation lets you define more complex animation sequences for the transitions between states.
- Target Graphic: This is a reference to the base image used for drawing the button on screen.
- Normal, Highlighted, Pressed, and Disabled slots: These define the effects or images to use when the button is not being interacted with, when it is moused over, when the player clicks on it, and when the button has been turned off.
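The slots described above can also be set from a script. The following is a minimal sketch, not part of the chapter's steps, assuming the two sprite fields are assigned in the Inspector; it mirrors the Source Image, Color, Transition, and Pressed Sprite setup done by hand below:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper that skins a single button from code: opaque color,
// SpriteSwap transition, and a pressed sprite.
public class ButtonSkin : MonoBehaviour
{
    public Sprite buttonNormal; // shown while the button sits there
    public Sprite buttonActive; // shown while the player touches it

    void Start()
    {
        var button = GetComponent<Button>();
        var image = GetComponent<Image>();

        image.sprite = buttonNormal;  // the Source Image slot
        image.color = Color.white;    // alpha at 255, so no fading

        button.transition = Selectable.Transition.SpriteSwap;

        // SpriteState is a struct, so modify a copy and assign it back.
        SpriteState state = button.spriteState;
        state.pressedSprite = buttonActive; // the Pressed Sprite slot
        button.spriteState = state;
    }
}
```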
For each of our buttons, we need to drag our ButtonNormal image from the Project panel to the Source Image slot. Next, click on the white box to the right of the Color slot to open the color picker. To stop our buttons from being faded, we need to move the A slider all the way to the right, or set the box to 255. We want to change images when our buttons are pressed, so change Transition to SpriteSwap. Mobile devices have almost no way of hovering over GUI elements, so we do not need to worry about the Highlighted state. However, we do want to add our ButtonActive image to the Pressed Sprite slot so that it will be switched in when the player touches the button.

The button squares should be blank until someone clicks on them, so we need to get rid of the text element. The easiest way to do this is to select the one under each button and delete it.

Next, we need to change the Text child of each of the image elements. It is the Text (Script) component that allows us to control how text is drawn on screen:

- Text: This is the area where we can change the text that will be drawn on screen.
- Font: This allows us to pick any font file in our project to use for the text.
- Font Style: This lets you adjust the bold and italic nature of the text.
- Font Size: This is the size of the text, just like picking a font size in your favorite word processor.
- Line Spacing: This is the distance between each line of text.
- Rich Text: This lets you use a few special HTML-style tags to affect only part of the text with a color, italics, and so on.
- Alignment: This changes where the text will be positioned in the box. The first three boxes adjust the horizontal position; the second three change the vertical position.
- Horizontal Overflow / Vertical Overflow: These adjust whether the text can be drawn outside the box, wrapped to a new line, or clipped off.
- Best Fit: This automatically adjusts the size of the text to fit a dynamically resizing element, within a Min and Max value.
- Color / Material: These change the color and texture of the text as it is drawn.
- Shadow (Script): This component adds a drop shadow to the text, just like one you might add in Photoshop.

For each of our text elements, we need to use a Font Size of 120 and the Alignment should be centered. For the Turn Indicator text element, we also need to use a Font Size of 120, and it also needs to be centered. The last thing to do is to change the Color of the text elements to a dark gray so that we can easily see them against the color of our buttons.

Now our board works, and it looks good too. Try taking a stab at adding your own images for the buttons. You will need two images: one for when the button sits there and one for when the button is pressed. Also, the default Arial font is boring. Find a new font to use for your game; you can import it just like any other asset.

Rotating devices

If you have been testing your game so far, you have probably noticed that it only looks good when the device is held in landscape mode. When it is held in portrait mode, everything becomes squished as the squares and turn indicator try to share the little amount of horizontal space that is available. As we have already set up our game board in one layout mode, it is a fairly simple matter to duplicate it for the other mode. However, it does require duplicating a good portion of our code to make it all work properly:

To make a copy of our game board, right-click on it and select Duplicate from the menu. Rename the duplicate game board GameBoard_Portrait. This will be the board used when our player's device is in portrait mode. To see our changes while we are making them, turn off the landscape game board and select 3:2 Portrait (2:3) from the drop-down list at the top left of the Game window.
Select the Board object that is a child of GameBoard_Portrait. In its Inspector panel, we need to change the anchors to use the top two-thirds of the screen rather than the left two-thirds. Values of 0 for Min X, 0.33 for Min Y, and 1 for both Max X and Max Y will make this happen. Next, Turn Indicator needs to be selected and moved to the bottom third of the screen. Values of 0 for Min X and Min Y, 1 for Max X, and 0.33 for Max Y will work well here.

Now that we have our second board set up, we need to make a place for it in our code. Open the TicTacToeControl script and scroll to the top so that we can start with some new variables. The first variable that we are going to add will give us access to the turn indicator for the portrait mode:

public Text turnIndicatorPortrait;

The next three variables will keep track of the buttons, square images, and owner text information. These are just like the three lists that we created earlier to keep track of the board while it is in landscape mode:

public GameObject[] buttonsPortrait;
public Image[] squaresPortrait;
public Text[] squareTextsPortrait;

The last two variables that we are going to add to the top of our script are for keeping track of the two canvas objects that actually draw our game boards. We need these so that we can switch between them as the user turns their device around:

public GameObject gameBoardGroupLandscape;
public GameObject gameBoardGroupPortrait;

Next, we need to update a few of our functions so that they make changes to both boards, not just the landscape board. These first two lines turn the portrait board's buttons off and squares on when the player clicks on them. They need to go at the beginning of our ButtonClick function.
Put them right after the two lines where we use SetActive on the buttons and squares of the landscape set:

buttonsPortrait[squareIndex].SetActive(false);
squaresPortrait[squareIndex].gameObject.SetActive(true);

These two lines change the image and text of the controlling square in favor of the X player for the portrait set. They go inside the if statement of our ButtonClick function, right after the two lines that do the same thing for the landscape set:

squaresPortrait[squareIndex].sprite = xImage;
squareTextsPortrait[squareIndex].text = "X";

This line goes at the end of that same if statement and changes the portrait set's turn indicator text:

turnIndicatorPortrait.text = "O's Turn";

The next two lines change the image and text in favor of the O player. They go after the same lines for the landscape set, inside the else statement of our ButtonClick function:

squaresPortrait[squareIndex].sprite = oImage;
squareTextsPortrait[squareIndex].text = "O";

This is the last line that we need to add to our ButtonClick function; it needs to be put at the end of the else statement. It simply changes the text indicating whose turn it is:

turnIndicatorPortrait.text = "X's Turn";

Next, we need to create a new function to control the switching of our game boards when the player changes the orientation of their device. We will start by defining the Update function. This is a special function called by Unity every single frame. It will allow us to check for a change in orientation each frame:

public void Update()
{

The function begins with an if statement that uses Input.deviceOrientation to find out how the player's device is currently being held. It compares the finding to the LandscapeLeft orientation to see whether the device is being held sideways, with the home button on the left side.
If the result is true, the portrait set of GUI elements is turned off while the landscape set is turned on:

    if (Input.deviceOrientation == DeviceOrientation.LandscapeLeft)
    {
        gameBoardGroupPortrait.SetActive(false);
        gameBoardGroupLandscape.SetActive(true);
    }

The next else if statement checks for the Portrait orientation, where the home button is at the bottom. It turns the portrait set on and the landscape set off if true:

    else if (Input.deviceOrientation == DeviceOrientation.Portrait)
    {
        gameBoardGroupPortrait.SetActive(true);
        gameBoardGroupLandscape.SetActive(false);
    }

This else if statement checks for LandscapeRight, when the home button is on the right side:

    else if (Input.deviceOrientation == DeviceOrientation.LandscapeRight)
    {
        gameBoardGroupPortrait.SetActive(false);
        gameBoardGroupLandscape.SetActive(true);
    }

Finally, we check for the PortraitUpsideDown orientation, which is when the home button is at the top of the device. Don't forget the extra bracket to close off and end the function:

    else if (Input.deviceOrientation == DeviceOrientation.PortraitUpsideDown)
    {
        gameBoardGroupPortrait.SetActive(true);
        gameBoardGroupLandscape.SetActive(false);
    }
}

We now need to return to Unity and select our GameControl object so that we can set up our new Inspector properties. Drag and drop the various pieces of the portrait game board from the Hierarchy to the relevant slots in the Inspector: Turn Indicator to the Turn Indicator Portrait slot, the buttons to the Buttons Portrait list in order, the squares to Squares Portrait, and their text children to Square Texts Portrait. Finally, drop the GameBoard_Portrait object in the Game Board Group Portrait slot and the GameBoard_Landscape object in the Game Board Group Landscape slot so that the Update function can toggle both boards.

We should now be able to play our game and see the board switch when we change the orientation of our device. You will have to either build your project on your device or connect using Unity Remote, because the Editor and your computer simply don't have a device orientation like your mobile device.
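Because the Editor reports DeviceOrientation.Unknown rather than a real orientation, a small fallback can make the board switch testable without a device. This is a sketch, not part of the chapter's code; the component and field names are assumptions, and it simply compares the Game window's width and height:

```csharp
using UnityEngine;

// Hypothetical Editor fallback: when no real device orientation is reported,
// pick a board based on the current screen aspect instead.
public class OrientationFallback : MonoBehaviour
{
    public GameObject gameBoardGroupLandscape;
    public GameObject gameBoardGroupPortrait;

    void Update()
    {
        // On a device, Input.deviceOrientation gives a real value and the
        // chapter's checks apply; in the Editor it stays Unknown.
        if (Input.deviceOrientation == DeviceOrientation.Unknown)
        {
            bool landscape = Screen.width > Screen.height;
            gameBoardGroupLandscape.SetActive(landscape);
            gameBoardGroupPortrait.SetActive(!landscape);
        }
    }
}
```

With this in the scene, switching the Game window between 3:2 Landscape and 3:2 Portrait flips the boards without needing Unity Remote.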
Be sure to set the display mode of your Game window to Remote in the top-left corner so that it will update along with your device while using Unity Remote.

Menus and victory

Our game is nearly complete. The last things that we need are as follows:

An opening menu where players can start a new game
A bit of code for checking whether anybody has won the game
A game over menu for displaying who won the game

Setting up the elements

Our two new menus will be quite simple when compared to the game board. The opening menu will consist of our game's title graphic and a single button, while the game over menu will have a text element to display the victory message and a button to go back to the main menu. Let's perform the following steps to set up the elements:

Let's start with the opening menu by creating a new Canvas, just like we did before, and rename it as OpeningMenu. This will allow us to keep it separate from the other screens that we have created. Next, the menu needs an Image element and a Button element as children. To make everything easier to work with, turn off the game boards with the checkbox at the top of their Inspector windows. For our image object, we can drag our Title image to the Source Image slot. For the image's Rect Transform, we need to set the Pos X and Pos Y values to 0. We also need to adjust the Width and Height. We are going to match the dimensions of the original image so that it will not be stretched. Put a value of 320 for Width and 160 for Height. To move the image to the top half of the screen, put a 0 in the Pivot Y slot. This changes the point that the image's position is measured from. For the button's Rect Transform, we again need the value of 0 for both Pos X and Pos Y. We again need a value of 320 for the Width, but this time we want a value of 100 for the Height. To move it to the bottom half of the screen, we need a value of 1 in the Pivot Y slot. Next up is to set the images for the button, just like we did earlier for the game board.
Put the ButtonNormal image in the Source Image slot. Change Transition to SpriteSwap and put the ButtonActive image in the Pressed Sprite slot. Do not forget to change Color to have an A value of 255 in the color picker so that our button is not partially faded. Finally, to change the button text for this menu, expand Button in the Hierarchy and select the Text child object. Right underneath Text in the Inspector panel for this object is a text field where we can change the text displayed on the button. A value of New Game here will work well. Also, change Font Size to 45 so that we can actually read it.

Next, we need to create the game over menu. So, turn off our opening menu and create a new canvas for our game over menu. Rename it as GameOverMenu so that we can continue to be organized. For this menu, we need a Text element and a Button element as its children. We will set this one up in an almost identical way to the previous one. Both the text and the button need values of 0 for the Pos X and Pos Y slots, with a value of 320 for Width. The text will use a Height of 160 and a Pivot Y of 0. We also need to set its Font Size to 80. You can change the default text, but it will be overwritten by our code anyway. To center our text in the menu, select the middle buttons from the two sets next to the Alignment property. The button will use a Height of 100 and a Pivot Y of 1. Also, be sure that you set the Source Image, Color, Transition, and Pressed Sprite to the proper images and settings. The last thing to set is the button's text child. Set the default text to Main Menu and give it a Font Size of 45.

That is it for setting up our menus. We have all the screens that we need to allow the player to interact with our game. The only problem is that we don't have any of the functionality to make them actually do anything.

Adding the code

To make our game board buttons work, we had to create a function in our script that they could reference and call when they are touched.
The main menu's button will start a new game, while the game over menu's button will change screens to the main menu. We will also need to create a little bit of code to clear out and reset the game board when a new game starts. If we don't, it would be impossible for the player to play more than one round without restarting the whole app.

Open the TicTacToeControl script so that we can make some more changes to it. We will start with the addition of three variables at the top of the script. The first two will keep track of the two new menus, allowing us to turn them on and off as needed. The third is for the text object in the game over screen, which will give us the ability to put up a message based on the result of the game. Next, we need to create a new function. The NewGame function will be called by the button in the main menu. Its purpose is to reset the board so that we can continue to play without having to reset the whole application.

public void NewGame() {

The function starts by setting the game to start on the X player's turn. It then creates a new array of SquareStates, which effectively wipes out the old game board. It then sets the turn indicators for both the Landscape and Portrait sets of controls:

xTurn = true;
board = new SquareState[9];
turnIndicatorLandscape.text = "X's Turn";
turnIndicatorPortrait.text = "X's Turn";

We next loop through the nine buttons and squares for both the Portrait and Landscape controls. All of the buttons are turned on and the squares are turned off using SetActive, which is the same as clicking on the little checkbox at the top-left corner of the Inspector panel:

for(int i=0;i<9;i++) {
  buttonsPortrait[i].SetActive(true);
  squaresPortrait[i].gameObject.SetActive(false);
  buttonsLandscape[i].SetActive(true);
  squaresLandscape[i].gameObject.SetActive(false);
}

The last three lines of code control which screens are visible when we change over to the game board.
By default, it chooses to turn on the Landscape board and makes sure that the Portrait board is turned off. It then turns off the main menu. Don't forget the last curly bracket to close off the function:

gameBoardGroupPortrait.SetActive(false);
gameBoardGroupLandscape.SetActive(true);
mainMenuGroup.SetActive(false);
}

Next, we need to add a single line of code to the end of the ButtonClick function. It is a simple call to check whether anyone has won the game after the buttons and squares have been dealt with:

CheckVictory();

The CheckVictory function runs through the possible combinations for victory in the game. If it finds a run of three matching squares, the SetWinner function will be called and the current game will end:

public void CheckVictory() {

A victory in this game is a run of three matching squares. We start by checking the column that is marked by our loop. If the first square is not Clear, compare it to the square below; if they match, check it against the square below that. Our board is stored as a list but drawn as a grid, so we have to add three to go down a square. The else if statement follows with checks of each row. By multiplying our loop value by three, we will skip down a row on each pass of the loop. We'll again compare the square to SquareState.Clear, then to the square to its right, and finally to the square two places to its right. If either set of conditions is correct, we'll send the first square in the set to another function to change our game screen:

for(int i=0;i<3;i++) {
  if(board[i] != SquareState.Clear && board[i] == board[i + 3] && board[i] == board[i + 6]) {
    SetWinner(board[i]);
    return;
  }
  else if(board[i * 3] != SquareState.Clear && board[i * 3] == board[(i * 3) + 1] && board[i * 3] == board[(i * 3) + 2]) {
    SetWinner(board[i * 3]);
    return;
  }
}

The following code snippet is largely the same as the if statements that we just saw. However, these lines of code check the diagonals.
If the conditions are true, we again send out to the other function to change the game screen. You probably also noticed the returns after the function calls. If we have found a winner at any point, there is no need to check any more of the board, so we'll exit the CheckVictory function early:

if(board[0] != SquareState.Clear && board[0] == board[4] && board[0] == board[8]) {
  SetWinner(board[0]);
  return;
}
else if(board[2] != SquareState.Clear && board[2] == board[4] && board[2] == board[6]) {
  SetWinner(board[2]);
  return;
}

This is the last little bit for our CheckVictory function. If no one has won the game, as determined by the previous parts of this function, we have to check for a tie. This is done by checking all the squares of the game board. If any one of them is Clear, the game has yet to finish and we exit the function. But, if we make it through the entire loop without finding a Clear square, we set the winner by declaring a tie:

for(int i=0;i<board.Length;i++) {
  if(board[i] == SquareState.Clear)
    return;
}
SetWinner(SquareState.Clear);
}

Next, we create the SetWinner function that is called repeatedly in our CheckVictory function. This function is passed who has won the game, and it initially turns on the game over screen and turns off the game board:

public void SetWinner(SquareState toWin) {
  gameOverGroup.SetActive(true);
  gameBoardGroupPortrait.SetActive(false);
  gameBoardGroupLandscape.SetActive(false);

The function then checks to see who won and picks an appropriate message for the victorText object:

if(toWin == SquareState.Clear) {
  victorText.text = "Tie!";
}
else if(toWin == SquareState.XControl) {
  victorText.text = "X Wins!";
}
else {
  victorText.text = "O Wins!";
}
}

Finally, we have the BackToMainMenu function.
This is short and sweet; it is simply called by the button on the game over screen to switch back to the main menu:

public void BackToMainMenu() {
  gameOverGroup.SetActive(false);
  mainMenuGroup.SetActive(true);
}

That is all of the code in our game. We have all of the visual pieces that make up our game and now, we also have all of the functional pieces. The last step is to put them together and finish the game.

Putting them together

We have our code and our menus. Once we connect them together, our game will be complete. To put it all together, perform the following steps:

Go back to the Unity Editor and select the GameControl object from the Hierarchy panel. The three new properties in its Inspector window need to be filled in. Drag the OpeningMenu canvas to the Main Menu Group slot and GameOverMenu to the Game Over Group slot. Also, find the text object child of GameOverMenu and drag it to the Victor Text slot. Next, we need to connect the button functionality for each of our menus. Let's start by selecting the button object child of our OpeningMenu canvas. Click on the little plus sign at the bottom right of its Button (Script) component to add a new functionality slot. Click on the circle in the center of the new slot and select GameControl from the new pop-up window, just like we did for each of our game board buttons. The drop-down list that currently says No Function is our next target. Click on it and navigate to TicTacToeControl | NewGame (). Repeat these few steps to add the functionality to the Button child of GameOverMenu. This time, select BackToMainMenu() from the list. The very last thing to do is to turn off both the game boards and the game over menu, using the checkbox in the top left of the Inspector. Leave only the opening menu on so that our game will start there when we play it. Congratulations! This is our game.
All of our buttons are set, we have multiple menus, and we even created a game board that changes based on the orientation of the player's device. The last thing to do is to build it for our devices and go show it off. A better way to build to device Now for the part of the build process that everyone itches to learn. There is a quicker and easier way to have your game built and play it on your Android device. The long and complicated way is still very good to know. Should this shorter method fail, and it will at some point, it is helpful to know the long method so that you can debug any errors. Also, the short path is only good for building for a single device. If you have multiple devices and a large project, it will take significantly more time to load them all with the short build process. Follow these steps: Start by opening the Build Settings window. Remember, it can be found under File at the top of the Unity Editor. If you have not already done so, save your scene. The option to save your scene is also found under File at the top of the Unity Editor. Click on the Add Current button to add our current scene, also the only scene, to the Scenes In Build list. If this list is empty, there is no game. Be sure to change your Platform to Android, if you haven't already done so. Do not forget to set the Player Settings. Click on the Player Settings button to open them up in the Inspector window. At the top, set the Company Name and Product Name fields. Values of TomPacktAndroid and Ch2 TicTacToe respectively for these fields will match the included completed project. Remember, these fields will be seen by the people playing your game. The Bundle Identifier field under Other Settings needs to be set as well. The format is still com.CompanyName.ProductName, so com.TomPacktAndroid.Ch2.TicTacToe will work well. In order to see our cool dynamic GUI in action on a device, there is one other setting that should be changed. 
Click on Resolution and Presentation to expand the options. We are interested in Default Orientation. The default is Portrait, but this option means that the game will be fixed in the portrait display mode. Click on the drop-down menu and select Auto Rotation. This option tells Unity to automatically adjust the game to be upright irrespective of the orientation in which the device is being held. The new set of options that popped up when Auto Rotation was selected allows us to limit the orientations that are supported. Perhaps you are making a game that needs to be wider and held in landscape orientation. By unchecking Portrait and Portrait Upside Down, Unity will still adjust (but only for the remaining orientations). On your Android device, the controls are along one of the shorter sides; these are usually the home, menu, and back or recent apps buttons. This side is generally recognized as the bottom of the device, and it is the position of these buttons that dictates what each orientation is. The Portrait mode is when these buttons are down relative to the screen. The Landscape Right mode is when they are to the right. The pattern begins to become clear, does it not? For now, leave all of the orientation options checked and we will go back to Build Settings.

The next step (and this is very important) is to connect your device to your computer and give it a moment to be recognized. If your device is not the only one connected to your computer, this shorter build path will fail. In the bottom-right corner of the Build Settings window, click on the Build And Run button. You will be asked to give the application file, the APK, a relevant name and save it to an appropriate location. A name such as Ch2_TicTacToe.apk will be fine, and saving the file to the desktop is suitable enough. Click on Save and sit back to watch the wonderful loading bar that is provided. After the application is built, there is a pushing to device step.
This means that the build was successful and Unity is now putting the application on your device and installing it. Once this is done, the game will start on the device and the loading will be done. We just learned about the Build And Run button provided by the Build Settings window. This is quick, easy, and free from the pain of using the command prompt; isn't the short build path wonderful? However, if the build process fails for any reason including being unable to find the device, the application file will not be saved. You will have to go through the entire build process again, if you want to try installing again. This isn't so bad for our simple Tic-tac-toe game, but it might consume a lot of time for a larger project. Also, you can only have one Android device connected to your computer while building. Any more devices and the build process is a guaranteed failure. Unity also doesn't check for multiple devices until after it has gone through the rest of the potentially long build process. Other than these words of caution, the Build And Run option is really quite nice. Let Unity handle the hard part of getting the game to your device. This gives us much more time to focus on testing and making a great game. If you are up for a challenge, this is a tough one: creating a single player mode. You will have to start by adding an extra button to the opening screen for selecting the second game mode. Any logic for the computer player should go in the Update function. Also, take a look at Random.Range for randomly selecting a square to take control. Otherwise, you could do a little more work and make the computer search for a square where it can win or create a line of two matches. 
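If you want a starting point for that challenge, the move-picking logic can be prototyped outside Unity before wiring it into the Update function. The following is a minimal Python sketch (Python is used here only so that the logic can run standalone; the names winner and find_computer_move are illustrative and are not part of the TicTacToeControl script). It stores the board as a flat list of nine squares, exactly like the recipe's SquareState array, takes a winning square when one exists, and otherwise falls back to a random choice, which is the role Random.Range would play in C#:

```python
import random

# Flat list of 9 squares, indexed like the recipe's board array:
# 0-2 top row, 3-5 middle row, 6-8 bottom row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line of three matches, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def find_computer_move(board, mark='O'):
    """Prefer a square that wins immediately, else pick a free square at random."""
    free = [i for i, square in enumerate(board) if square is None]
    for i in free:
        board[i] = mark          # try the move...
        if winner(board) == mark:
            board[i] = None      # ...and undo the probe before returning
            return i
        board[i] = None
    return random.choice(free)
```

Running the same probing loop with the human player's mark instead of the computer's finds squares that must be blocked, which covers the "line of two matches" part of the challenge.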
Summary

In this article, we created a complete Tic-tac-toe game with Unity's uGUI system, including menus, victory checking, a game board that adapts to device orientation, and a quicker way to build to an Android device. To learn more about Unity game development for Android, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Unity Android Game Development by Example Beginner's Guide
Unity 5 for Android Essentials

Resources for Article:

Further resources on this subject:

Finding Your Way [article]
The Blueprint Class [article]
Editor Tool, Prefabs, and Main Menu [article]

Packt
16 Feb 2016
7 min read

Metal API: Get closer to the bare metal with Metal API

The Metal framework supports 3D graphics rendering and other data computing commands. Metal is used in game development to reduce CPU overhead. In this article we'll cover:

CPU/GPU framework levels
Graphics pipeline overview

(For more resources related to this topic, see here.)

The Apple Metal API and the graphics pipeline

One of the rules, if not the golden rule, of modern video game development is to keep our games running constantly at 60 frames per second or greater. If developing for VR devices and applications, this is of even more importance, as dropped frame rates could lead to a sickening and game-ending experience for the player. In the past, being lean was the name of the game; hardware limitations restricted not only how much could be drawn to the screen but also how much memory storage a game could hold. This limited the number of scenes, characters, effects, and levels. Game development was built more with an engineering mindset, so the developers made things work with what little they had. Many of the games on 8-bit systems and earlier had levels and characters that were only different because of elaborate sprite slicing and recoloring. Over time, advances in hardware, particularly GPUs, allowed for richer graphical experiences. This led to the advent of computation-heavy 3D models, real-time lighting, robust shaders, and other effects that we can use to make our games present an even greater player experience; all while trying to stuff it all in that precious .016666 second/60 Hz window. To get everything out of the hardware and combat the clash between a designer's need to make the best-looking experience and the engineering reality of hardware limitations in even today's CPUs/GPUs, Apple developed the Metal API.

CPU/GPU framework levels

Metal is what's known as a low-level GPU API.
When we build our games on the iOS platform, there are different levels between the machine code in our GPU/CPU hardware and what we use to design our games. This goes for any piece of computer hardware we work with, be it Apple or others. For example, on the CPU side of things, at the very base of it all is the machine code. The next level up is the assembly language of the chipset. Assembly language differs based on the CPU chipset and allows the programmer to be as detailed as determining the individual registers to swap data in and out of in the processor. Just a few lines of a for-loop in C/C++ would take up a decent number of lines to code in assembly. The benefit of working in the lower levels of code is that we could make our games run much faster. However, most of the mid-to-upper level languages/APIs are made to work well enough so that this isn't a necessity anymore.

Game developers have coded in assembly even well after the very early days of game development. In the late 1990s, the game developer Chris Sawyer created his game, RollerCoaster Tycoon™, almost entirely in the x86 assembly language! Assembly can be a great challenge for any enthusiastic developer who loves to tinker with the inner workings of computer hardware.

Moving up the chain we have where C/C++ code would be, and just above that is where we'd find Swift and Objective-C code. Languages such as Ruby and JavaScript, which some developers can use in Xcode, are yet another level up.

That was about the CPU; now on to the GPU. The Graphics Processing Unit (GPU) is the coprocessor that works with the CPU to make the calculations for the visuals we see on the screen. The following diagram shows the GPU, the APIs that work with the GPU, and possible iOS games that can be made based on which framework/API is chosen. Like the CPU, the lowest level is the processor's machine code. To work as close to the GPU's machine code as possible, many developers would use Silicon Graphics' OpenGL API.
For mobile devices, such as the iPhone and iPad, it would be the OpenGL subset, OpenGL ES. Apple provides a helper framework/library to OpenGL ES named GLKit. GLKit helps simplify some of the shader logic and lessen the manual work that goes into working with the GPU at this level. For many game developers, this was practically the only option to make 3D games on the iOS device family originally; though some use of iOS's Core Graphics, Core Animation and UIKit frameworks were perfectly fine for simpler games. Not too long into the lifespan of the iOS device family, third-party frameworks came into play, which were aimed at game development. Using OpenGL ES as its base, thus sitting directly one level above it, is the Cocos2D framework. This was actually the framework used in the original release of Rovio's Angry Birds™ series of games back in 2009. Eventually, Apple realized how important gaming was for the success of the platform and made their own game-centric frameworks, that is, the SpriteKit and SceneKit frameworks. They too, like Cocos2D/3D, sat directly above OpenGL ES. When we made SKSprite nodes or SCNNodes in our Xcode projects, up until the introduction of Metal, OpenGL operations were being used to draw these objects in the update/render cycle behind the scenes. As of iOS 9, SpriteKit and SceneKit use Metal's rendering pipeline to process graphics to the screen. If the device is older, they revert to OpenGL ES as the underlying graphics API. Graphics pipeline overview Let's take a look at the graphics pipeline to get an idea, at least on an upper level, of what the GPU is doing during a single rendered frame. We can imagine the graphical data of our games being divided in two main categories: Vertex data: This is the position information of where on the screen this data can be rendered. Vector/vertex data can be expressed as points, lines, or triangles. Remember the old saying about video game graphics, "everything is a triangle." 
All of those polygons in a game are just a collection of triangles via their point/vector positions. The GPU's Vertex Processing Unit (VPU) handles this data. Rendering/pixel data: Controlled by the GPU's Rasterizer, this is the data that tells the GPU how the objects, positioned by the vertex data, will be colored/shaded on the screen. For example, this is where color channels, such as RGB and alpha, are handled. In short, it's the pixel data and what we actually see on the screen. Here's a diagram showing the graphics pipeline overview: The graphics pipeline is the sequence of steps it takes to have our data rendered to the screen. The previous diagram is a simplified example of this process. Here are the main sections that can make up the pipeline: Buffer objects: These are known as Vertex Buffer Objects in OpenGL and are of the class MTLBuffer in the Metal API. These are the objects we create in our code that are sent from the CPU to the GPU for primitive processing. These objects contain data, such as the positions, normal vectors, alphas, colors, and more. Primitive processing: These are the steps in the GPU that take our Buffer Objects, break down the various vertex and rendering data in those objects, and then draw this information to the frame buffer, which is the screen output we see on the device. Before we go over the steps of primitive processing done in Metal, we should first understand the history and basics of shaders. Summary This article gives us precise knowledge about CPU/GPU framework levels and Graphics pipeline. We also learned that to overcome hardware limitations in even today's CPU/GPUs world, Apple developed the Metal API. To learn more about iOS for game development, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: iOS Game Development By Example: https://www.packtpub.com/game-development/ios-game-development-example. 
Sparrow iOS Game Framework Beginner’s Guide: https://www.packtpub.com/game-development/sparrow-ios-game-framework-beginner%E2%80%99s-guide Resources for Article:   Further resources on this subject: Android and iOS Apps Testing at a Glance [article] Signing up to be an iOS developer [article] Introduction to GameMaker: Studio [article]

Packt
16 Feb 2016
5 min read

Audio and Animation: Hand in Hand

In this article, we are going to learn techniques to match audio pitch to animation speed. This is very useful when editing videos and creating animated content. (For more resources related to this topic, see here.)

Matching the audio pitch to the animation speed

Many artifacts sound higher in pitch when accelerated and lower when slowed down: car engines, fan coolers, vinyl on a record player, and the list goes on. If you want to simulate this kind of sound effect in an animated object that can have its speed changed dynamically, follow this recipe.

Getting ready

For this, you'll need an animated 3D object and an audio clip. Please use the files animatedRocket.fbx and engineSound.wav, available in the 1362_09_01 folder, which you can find in the code bundle of the book Unity 5.x Cookbook at https://www.packtpub.com/game-development/unity-5x-cookbook.

How to do it...

To change the pitch of an audio clip according to the speed of an animated object, please follow these steps:

Import the animatedRocket.fbx file into your Project. Select the animatedRocket.fbx file in the Project view. Then, from the Inspector view, check its Import Settings. Select Animations, then select the clip Take 001, and make sure to check the Loop Time option. Click on the Apply button, shown as follows, to save the changes: The reason why we didn't need to check the Loop Pose option is because our animation already loops in a seamless fashion. If it didn't, we could have checked that option to automatically create a seamless transition from the last to the first frame of the animation. Add the animatedRocket GameObject to the scene by dragging it from the Project view into the Hierarchy view. Import the engineSound.wav audio clip. Select the animatedRocket GameObject. Then, drag engineSound from the Project view into the Inspector view, adding it as an Audio Source for that object.
In the Audio Source component of animatedRocket, check the box for the Loop option, as shown in the following screenshot: We need to create a Controller for our object. In the Project view, click on the Create button and select Animator Controller. Name it rocketController. Double-click on the rocketController object to open the Animator window, as shown. Then, right-click on the gridded area and select the Create State | Empty option from the contextual menu. Name the new state spin and set Take 001 as its motion in the Motion field: From the Hierarchy view, select animatedRocket. Then, in the Animator component (in the Inspector view), set rocketController as its Controller and make sure that the Apply Root Motion option is unchecked, as shown: In the Project view, create a new C# Script and rename it to ChangePitch. Open the script in your editor and replace everything with the following code:

using UnityEngine;

public class ChangePitch : MonoBehaviour {
  public float accel = 0.05f;
  public float minSpeed = 0.0f;
  public float maxSpeed = 2.0f;
  public float animationSoundRatio = 1.0f;
  private float speed = 0.0f;
  private Animator animator;
  private AudioSource audioSource;

  void Start() {
    animator = GetComponent<Animator>();
    audioSource = GetComponent<AudioSource>();
    speed = animator.speed;
    AccelRocket(0f);
  }

  void Update() {
    if (Input.GetKey(KeyCode.Alpha1))
      AccelRocket(accel);
    if (Input.GetKey(KeyCode.Alpha2))
      AccelRocket(-accel);
  }

  public void AccelRocket(float accel) {
    speed += accel;
    speed = Mathf.Clamp(speed, minSpeed, maxSpeed);
    animator.speed = speed;
    float soundPitch = animator.speed * animationSoundRatio;
    audioSource.pitch = Mathf.Abs(soundPitch);
  }
}

Save your script and add it as a component to the animatedRocket GameObject. Play the scene and change the animation speed by pressing the 1 (accelerate) and 2 (decelerate) keys on your alphanumeric keyboard. The audio pitch will change accordingly.

How it works...
In the Start() method, besides storing the Animator and AudioSource components in variables, we get the initial speed from the Animator and call the AccelRocket() function, passing 0 as an argument, only so that the function calculates the resulting pitch for the Audio Source. In the Update() function, the if (Input.GetKey (KeyCode.Alpha1)) and if (Input.GetKey (KeyCode.Alpha2)) lines detect whenever the 1 or 2 keys are being pressed on the alphanumeric keyboard and call the AccelRocket() function, passing the accel float variable as an argument. The AccelRocket() function, in turn, increments speed with the received argument (the accel float variable). However, it uses the Mathf.Clamp() command to limit the new speed value between the minimum and maximum speed as set by the user. Then, it changes the Animator speed and Audio Source pitch according to the new speed's absolute value (the reason for making it an absolute value is to keep the pitch a positive number, even when the animation is reversed by a negative speed value). Also, please note that setting the animation speed, and therefore the sound pitch, to 0 will cause the sound to stop, making it clear that stopping the object's animation also prevents the engine sound from playing.

There's more...

Here is some information on how to fine-tune and customize this recipe.

Changing the Animation/Sound Ratio

If you want the audio clip pitch to be more or less affected by the animation speed, change the value of the Animation/Sound Ratio parameter.

Accessing the function from other scripts

The AccelRocket() function was made public so that it can be accessed from other scripts. As an example, we have included the ExtChangePitch.cs script in the 1362_09_01 folder. Try attaching this script to the Main Camera object and use it to control the speed by clicking on the left and right mouse buttons.
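Because the speed and pitch bookkeeping inside AccelRocket() is plain arithmetic, it can be checked outside Unity. Here is a small Python sketch of the same clamping and pitch calculation (the function name accel_rocket is illustrative and not part of the recipe; Mathf.Clamp and Mathf.Abs correspond to the clamping expression and abs() call below):

```python
def accel_rocket(speed, accel, min_speed=0.0, max_speed=2.0, ratio=1.0):
    """Mirror of AccelRocket(): clamp the new speed, then derive the pitch."""
    speed = min(max(speed + accel, min_speed), max_speed)  # Mathf.Clamp
    pitch = abs(speed * ratio)                             # Mathf.Abs keeps the pitch positive
    return speed, pitch
```

With the default minimum speed of 0.0, the clamp alone already keeps the speed non-negative; the absolute value only matters if you lower the minimum below zero to play the animation in reverse.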
Summary

In this article, we learned how to match the audio pitch to the animation speed and how to change the Animation/Sound Ratio. To learn more, please refer to the following books:

Learning Unity 2D Game Development by Example: https://www.packtpub.com/game-development/learning-unity-2d-game-development-example
Unity Game Development Blueprints: https://www.packtpub.com/game-development/unity-game-development-blueprints
Getting Started with Unity: https://www.packtpub.com/game-development/getting-started-unity

Resources for Article:

Further resources on this subject:

The Vertex Functions [article]
Lights and Effects [article]
Virtual Machine Concepts [article]
The Vertex Functions
Packt
01 Feb 2016
18 min read
In this article by Alan Zucconi, author of the book Unity 5.x Shaders and Effects Cookbook, we will see that the term shader originates from the fact that Cg has been mainly used to simulate realistic lighting conditions (shadows) on three-dimensional models. Despite this, shaders are now much more than that. They not only define the way objects are going to look, but also redefine their shapes entirely. If you want to learn how to manipulate the geometry of a three-dimensional object only via shaders, this article is for you. In this article, you will learn the following: Extruding your models Implementing a snow shader Implementing a volumetric explosion (For more resources related to this topic, see here.) In this article, we will explain that 3D models are not just a collection of triangles. Each vertex can contain data, which is essential for correctly rendering the model itself. This article will explore how to access this information in order to use it in a shader. We will also explore how the geometry of an object can be deformed simply using Cg code. Extruding your models One of the biggest problems in games is repetition. Creating new content is a time-consuming task and when you have to face a thousand enemies, the chances are that they will all look the same. A relatively cheap technique to add variations to your models is using a shader that alters its basic geometry. This recipe will show a technique called normal extrusion, which can be used to create a chubbier or skinnier version of a model, as shown in the following image with the soldier from the Unity camp (Demo Gameplay): Getting ready For this recipe, we need to have access to the shader used by the model that you want to alter. Once you have it, we will duplicate it so that we can edit it safely. It can be done as follows: Find the shader that your model is using and, once selected, duplicate it by pressing Ctrl+D. 
Duplicate the original material of the model and assign the cloned shader to it. Assign the new material to your model and start editing it. For this effect to work, your model should have normals.

How to do it…

To create this effect, start by modifying the duplicated shader as shown in the following steps:

Let's start by adding a property to our shader, which will be used to modulate its extrusion. The range presented here goes from -1 to +1; however, you might have to adjust it according to your own needs, as follows:

_Amount ("Extrusion Amount", Range(-1,+1)) = 0

Couple the property with its respective variable, as shown in the following:

float _Amount;

Change the pragma directive so that it now uses a vertex modifier. You can do this by adding vertex:function_name at the end of it. In our case, we have called the vert function, as follows:

#pragma surface surf Lambert vertex:vert

Add the following vertex modifier:

void vert (inout appdata_full v) {
  v.vertex.xyz += v.normal * _Amount;
}

The shader is now ready; you can use the Extrusion Amount slider in the material's Inspector to make your model skinnier or chubbier.

How it works…

Surface shaders work in two steps: the surface function and the vertex modifier. The vertex modifier takes the data structure of a vertex (usually called appdata_full) and applies a transformation to it. This gives us the freedom to do virtually anything with the geometry of our model. We signal the graphics processing unit (GPU) that such a function exists by adding vertex:vert to the pragma directive of the surface shader. One of the simplest yet most effective techniques that can be used to alter the geometry of a model is called normal extrusion. It works by projecting a vertex along its normal direction. This is done by the following line of code:

v.vertex.xyz += v.normal * _Amount;

The position of a vertex is displaced by _Amount units toward the vertex normal. If _Amount gets too high, the results can be quite unpleasant.
However, you can add a lot of variation to your models with smaller values.

There's more…

If you have multiple enemies and you want each one to have its own weight, you have to create a different material for each of them. This is necessary as materials are normally shared between models and changing one will change all of them. There are several ways in which you can do this; the quickest one is to create a script that automatically does it for you. The following script, once attached to an object with a Renderer, will duplicate its first material and set the _Amount property automatically, as follows:

using UnityEngine;
public class NormalExtruder : MonoBehaviour {
  [Range(-0.0001f, 0.0001f)]
  public float amount = 0;
  // Use this for initialization
  void Start () {
    Material material = GetComponent<Renderer>().sharedMaterial;
    Material newMaterial = new Material(material);
    newMaterial.SetFloat("_Amount", amount);
    GetComponent<Renderer>().material = newMaterial;
  }
}

Adding extrusion maps

This technique can actually be improved even further. We can add an extra texture (or use the alpha channel of the main one) to indicate the amount of the extrusion. This allows better control over which parts are raised or lowered. The following code shows how it is possible to achieve such an effect:

sampler2D _ExtrusionTex;
void vert(inout appdata_full v) {
  float4 tex = tex2Dlod (_ExtrusionTex, float4(v.texcoord.xy,0,0));
  float extrusion = tex.r * 2 - 1;
  v.vertex.xyz += v.normal * _Amount * extrusion;
}

The red channel of _ExtrusionTex is used as a multiplying coefficient for the normal extrusion. A value of 0.5 leaves the model unaffected; darker or lighter shades are used to extrude vertices inward or outward, respectively. You should notice that to sample a texture in a vertex modifier, tex2Dlod should be used instead of tex2D. In shaders, colour channels go from 0 to 1, although sometimes you need to represent negative values as well (such as inward extrusion).
When this is the case, treat 0.5 as zero, with smaller values as negative and higher values as positive. This is exactly what happens with normals, which are usually encoded in RGB textures. The UnpackNormal() function is used to map a value in the (0,1) range onto the (-1,+1) range. Mathematically speaking, this is equivalent to tex.r * 2 - 1. Extrusion maps are perfect for zombifying characters by shrinking the skin in order to highlight the shape of the bones underneath. The following image shows how a "healthy" soldier can be transformed into a corpse using a shader and an extrusion map. Compared to the previous example, you can notice how the clothing is unaffected. The shader used in the following image also darkens the extruded regions in order to give an even more emaciated look to the soldier:

Implementing a snow shader

The simulation of snow has always been a challenge in games. The vast majority of games simply bake snow directly into the models' textures so that their tops look white. However, what if one of these objects starts rotating? Snow is not just a lick of paint on a surface; it is a proper accumulation of material and it should be treated as such. This recipe will show how to give a snowy look to your models using just a shader. The effect is achieved in two steps. First, a white colour is used for all the triangles facing the sky. Second, their vertices are extruded to simulate the effect of snow accumulation. You can see the result in the following image:

Keep in mind that this recipe does not aim to create a photorealistic snow effect. It provides a good starting point; however, it is up to an artist to create the right textures and find the right parameters to make it fit your game.

Getting ready

This effect is purely based on shaders. We will need to do the following:

Create a new shader for the snow effect. Create a new material for the shader. Assign the newly created material to the object that you want to be snowy.
How to do it…

To create a snowy effect, open your shader and make the following changes:

Replace the properties of the shader with the following ones:

_MainColor("Main Color", Color) = (1.0,1.0,1.0,1.0)
_MainTex("Base (RGB)", 2D) = "white" {}
_Bump("Bump", 2D) = "bump" {}
_Snow("Level of snow", Range(1, -1)) = 1
_SnowColor("Color of snow", Color) = (1.0,1.0,1.0,1.0)
_SnowDirection("Direction of snow", Vector) = (0,1,0)
_SnowDepth("Depth of snow", Range(0,1)) = 0

Complete them with their relative variables, as follows:

sampler2D _MainTex;
sampler2D _Bump;
float _Snow;
float4 _SnowColor;
float4 _MainColor;
float4 _SnowDirection;
float _SnowDepth;

Replace the Input structure with the following:

struct Input {
  float2 uv_MainTex;
  float2 uv_Bump;
  float3 worldNormal;
  INTERNAL_DATA
};

Replace the surface function with the following one. It will color the snowy parts of the model white:

void surf(Input IN, inout SurfaceOutputStandard o) {
  half4 c = tex2D(_MainTex, IN.uv_MainTex);
  o.Normal = UnpackNormal(tex2D(_Bump, IN.uv_Bump));
  if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
    o.Albedo = _SnowColor.rgb;
  else
    o.Albedo = c.rgb * _MainColor;
  o.Alpha = 1;
}

Configure the pragma directive so that it uses a vertex modifier, as follows:

#pragma surface surf Standard vertex:vert

Add the following vertex modifier, which extrudes the vertices covered in snow:

void vert(inout appdata_full v) {
  float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection);
  if (dot(v.normal, sn.xyz) >= _Snow)
    v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;
}

You can now use the material's Inspector to select how much of your model is going to be covered and how thick the snow should be.

How it works…

This shader works in two steps.

Coloring the surface

The first one alters the color of the triangles that are facing the sky. It affects all the triangles with a normal direction similar to _SnowDirection. Comparing unit vectors can be done using the dot product.
When two vectors are orthogonal, their dot product is zero; it is one (or minus one) when they are parallel to each other. The _Snow property is used to decide how aligned they should be in order to be considered facing the sky. If you look closely at the surface function, you can see that we are not directly dotting the normal and the snow direction. This is because they are usually defined in different spaces. The snow direction is expressed in world coordinates, while the object normals are usually relative to the model itself. If we rotate the model, its normals will not change, which is not what we want. To fix this, we need to convert the normals from their object coordinates to world coordinates. This is done with the WorldNormalVector() function, as follows:

if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
  o.Albedo = _SnowColor.rgb;
else
  o.Albedo = c.rgb * _MainColor;

This shader simply colors the model white; a more advanced one should initialize the SurfaceOutputStandard structure with textures and parameters from a realistic snow material.

Altering the geometry

The second effect of this shader alters the geometry to simulate the accumulation of snow. Firstly, we identify the triangles that have been coloured white by testing the same condition used in the surface function. This time, unfortunately, we cannot rely on WorldNormalVector(), as the SurfaceOutputStandard structure is not yet initialized in the vertex modifier. We use this other method instead, which converts _SnowDirection into object coordinates, as follows:

float4 sn = mul(UNITY_MATRIX_IT_MV, _SnowDirection);

Then, we can extrude the geometry to simulate the accumulation of snow, as shown in the following:

if (dot(v.normal, sn.xyz) >= _Snow)
  v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;

Once again, this is a very basic effect. One could use a texture map to control the accumulation of snow more precisely or to give it a peculiar, uneven look.
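The facing test at the heart of the surface function is just a dot product compared against a threshold. The following is a minimal Python sketch of that test (the vectors and threshold values are illustrative; this is plain math, not the Unity API):

```python
def dot(a, b):
    # Component-wise dot product of two 3D vectors.
    return sum(x * y for x, y in zip(a, b))

def faces_snow(world_normal, snow_dir, snow_level):
    """A triangle is snow-covered when its world-space unit normal is
    aligned with the snow direction at least as much as snow_level.
    snow_level = 1 covers only perfectly aligned faces; lower values
    cover more of the model."""
    return dot(world_normal, snow_dir) >= snow_level

up = (0.0, 1.0, 0.0)
assert faces_snow(up, up, 1.0)               # facing straight up: covered
assert not faces_snow((1, 0, 0), up, 0.5)    # side face: bare
assert faces_snow((0.6, 0.8, 0.0), up, 0.5)  # tilted face: covered at 0.5
```

Lowering snow_level toward -1 covers progressively more of the model, which is exactly what the Level of snow slider does in the material's Inspector.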
See also

If you need high-quality snow effects and props for your game, you can also check the following resources on the Unity Asset Store:

Winter Suite ($30): A much more sophisticated version of the snow shader presented in this recipe can be found at https://www.assetstore.unity3d.com/en/#!/content/13927
Winter Pack ($60): A very realistic set of props and materials for snowy environments can be found at https://www.assetstore.unity3d.com/en/#!/content/13316

Implementing a volumetric explosion

The art of game development is a clever trade-off between realism and efficiency. This is particularly true for explosions; they are at the heart of many games, yet the physics behind them is often beyond the computational power of modern machines. Explosions are essentially nothing more than hot balls of gas; hence, the only way to correctly simulate them is by integrating a fluid simulation into your game. As you can imagine, this is infeasible for runtime applications, and many games simply simulate them with particles. When an object explodes, it is common to simply instantiate many fire, smoke, and debris particles that together can produce a believable result. This approach, unfortunately, is not very realistic and is easy to spot. There is an intermediate technique that can be used to achieve a much more realistic effect: volumetric explosions. The idea behind this concept is that explosions are not treated like a bunch of particles anymore; they are evolving three-dimensional objects, not just flat two-dimensional textures.

Getting ready

Start this recipe with the following steps:

Create a new shader for this effect. Create a new material to host the shader. Attach the material to a sphere. You can create one directly from the editor by navigating to GameObject | 3D Object | Sphere.

This recipe works well with the standard Unity Sphere; however, if you need big explosions, you might need to use a more high-poly sphere.
In fact, a vertex function can only modify the vertices of a mesh. All the other points will be interpolated using the positions of the nearby vertices. Fewer vertices mean a lower resolution for your explosions. For this recipe, you will also need a ramp texture that has, in a gradient, all the colors that your explosion will have. You can create such a texture using GIMP or Photoshop. The following is the one used for this recipe:

Once you have the picture, import it to Unity. Then, from its Inspector, make sure Filter Mode is set to Bilinear and Wrap Mode to Clamp. These two settings make sure that the ramp texture is sampled smoothly. Lastly, you will need a noisy texture. You can find many freely available noise textures on the Internet. The most commonly used ones are generated using Perlin noise.

How to do it…

This effect works in two steps: a vertex function to change the geometry and a surface function to give it the right color. The steps are as follows:

Add the following properties for the shader:

_RampTex("Color Ramp", 2D) = "white" {}
_RampOffset("Ramp offset", Range(-0.5,0.5))= 0
_NoiseTex("Noise tex", 2D) = "gray" {}
_Period("Period", Range(0,1)) = 0.5
_Amount("_Amount", Range(0, 1.0)) = 0.1
_ClipRange("ClipRange", Range(0,1)) = 1

Add their relative variables so that the Cg code of the shader can actually access them, as follows:

sampler2D _RampTex;
float _RampOffset;
sampler2D _NoiseTex;
float _Period;
float _Amount;
float _ClipRange;

Change the Input structure so that it receives the UV data of the noise texture, as shown in the following:

struct Input {
  float2 uv_NoiseTex;
};

Add the following vertex function:

void vert(inout appdata_full v) {
  float3 disp = tex2Dlod(_NoiseTex, float4(v.texcoord.xy,0,0));
  float time = sin(_Time[3] * _Period + disp.r * 10);
  v.vertex.xyz += v.normal * disp.r *
_Amount * time;
}

Add the following surface function:

void surf(Input IN, inout SurfaceOutput o) {
  float3 noise = tex2D(_NoiseTex, IN.uv_NoiseTex);
  float n = saturate(noise.r + _RampOffset);
  clip(_ClipRange - n);
  half4 c = tex2D(_RampTex, float2(n,0.5));
  o.Albedo = c.rgb;
  o.Emission = c.rgb*c.a;
}

Specify the vertex function in the pragma directive, adding the nolightmap parameter to prevent Unity from adding realistic lighting to our explosion, as follows:

#pragma surface surf Lambert vertex:vert nolightmap

The last step is to select the material and attach the two textures in the relative slots from its Inspector. This is an animated material, meaning that it evolves over time. You can watch the material changing in the editor by clicking on Animated Materials from the Scene window.

How it works…

If you are reading this recipe, you are already familiar with how surface shaders and vertex modifiers work. The main idea behind this effect is to alter the geometry of the sphere in a seemingly chaotic way, exactly as happens in a real explosion. The following image shows how such an explosion will look in the editor. You can see that the original mesh has been heavily deformed:

The vertex function is a variant of the technique called normal extrusion. The difference here is that the amount of the extrusion is determined by both the time and the noise texture. When you need a random number in Unity, you can rely on the Random.Range() function. There is no standard way to get random numbers within a shader, however; the easiest approach is to sample a noise texture. There is no single way to do this, so take the following only as an example:

float time = sin(_Time[3] * _Period + disp.r * 10);

The built-in _Time[3] variable is used to get the current time from the shader, and the red channel of the noise texture (disp.r) is used to make sure that each vertex moves independently.
The sin() function makes the vertices go up and down, simulating the chaotic behavior of an explosion. Then, the normal extrusion takes place, as shown in the following:

v.vertex.xyz += v.normal * disp.r * _Amount * time;

You should play with these numbers and variables until you find a pattern of movement that you are happy with. The last part of the effect is achieved by the surface function. Here, the noise texture is used to sample a random color from the ramp texture. However, there are two more aspects that are worth noticing. The first one is the introduction of _RampOffset. Its usage forces the explosion to sample colors from the left or right side of the texture. With positive values, the surface of the explosion tends to show more grey tones, which is exactly what happens when it is dissolving. You can use _RampOffset to determine how much fire or smoke there should be in your explosion. The second aspect introduced in the surface function is the use of clip(). The clip() function clips (removes) pixels from the rendering pipeline. When invoked with a negative value, the current pixel is not drawn. This effect is controlled by _ClipRange, which determines which pixels of the volumetric explosion are going to be transparent. By controlling both _RampOffset and _ClipRange, you have full control over how the explosion behaves and dissolves.

There's more…

The shader presented in this recipe makes a sphere look like an explosion. If you really want to use it, you should couple it with some scripts in order to get the most out of it. The best thing to do is to create an explosion object and turn it into a prefab so that you can reuse it every time you need it. You can do this by dragging the sphere into the Project window. Once that is done, you can create as many explosions as you want using the Instantiate() function. However, it is worth noticing that all the objects with the same material share the same look.
If you have multiple explosions at the same time, they should not use the same material. When you are instantiating a new explosion, you should also duplicate its material. You can do this easily with the following piece of code:

GameObject explosion = Instantiate(explosionPrefab) as GameObject;
Renderer renderer = explosion.GetComponent<Renderer>();
Material material = new Material(renderer.sharedMaterial);
renderer.material = material;

Lastly, if you are going to use this shader in a realistic way, you should attach a script to it that changes its size, _RampOffset, or _ClipRange according to the type of explosion you want to recreate.

See also

A lot more can be done to make explosions realistic. The approach presented in this recipe only creates an empty shell; the explosion in it is actually empty. An easy trick to improve it is to create particles in it. However, you can only go so far with this. The short movie, The Butterfly Effect (http://unity3d.com/pages/butterfly), created by Unity Technologies in collaboration with Passion Pictures and Nvidia, is the perfect example. It is based on the same concept of altering the geometry of a sphere; however, it renders it with a technique called volume ray casting. In a nutshell, it renders the geometry as if it were complete. You can see the following image as an example:

If you are looking for high-quality explosions, refer to Pyro Technix (https://www.assetstore.unity3d.com/en/#!/content/16925) on the Asset Store. It includes volumetric explosions and couples them with realistic shockwaves.

Summary

In this article, we saw recipes to extrude models and to implement a snow shader and a volumetric explosion.

Resources for Article:

Further resources on this subject:
Lights and Effects [article]
Looking Back, Looking Forward [article]
Animation features in Unity 5 [article]
Scenes and Menus
Packt
01 Feb 2016
19 min read
In this article by Siddharth Shekar, author of the book Cocos2d Cross-Platform Game Development Cookbook, Second Edition, we will cover the following recipes: Adding level selection scenes Scrolling level selection scenes (For more resources related to this topic, see here.) Scenes are the building blocks of any game. Generally, in any game, you have the main menu scene, from which you can navigate to different scenes, such as GameScene, OptionsScene, and CreditsScene. In each of these scenes, you have menus. Similarly, in MainScene, there is a play button that is part of a menu and that, when pressed, takes the player to GameScene, where the gameplay code runs.

Adding level selection scenes

In this section, we will take a look at how to add a level selection scene in which you will have a button for each level you want to play; selecting one loads that particular level.

Getting ready

To create a level selection screen, you will need a custom sprite that shows a background image for the button and text showing the level number. We will create these buttons first. Once the button sprites are created, we will create a new scene that we will populate with the background image, the name of the scene, the array of buttons, and the logic to change the scene to the particular level.

How to do it...

We will create a new Cocoa Touch class with CCSprite as the parent class and call it LevelSelectionBtn. Then, we will open up the LevelSelectionBtn.h file and add the following lines of code in it:

#import "CCSprite.h"
@interface LevelSelectionBtn : CCSprite
-(id)initWithFilename:(NSString *)filename StartlevelNumber:(int)lvlNum;
@end

We will create a custom init function; into this, we will pass the name of the image file, which will be the base of the button, and an integer that will be used to display the level number on top of the base button image. This is all that is required for the header class.
In the LevelSelectionBtn.m file, we will add the following lines of code:

#import "LevelSelectionBtn.h"
@implementation LevelSelectionBtn
-(id)initWithFilename:(NSString *)filename StartlevelNumber:(int)lvlNum; {
  if (self = [super initWithImageNamed:filename]) {
    CCLOG(@"Filename: %@ and levelNumber: %d", filename, lvlNum);
    CCLabelTTF *textLabel = [CCLabelTTF labelWithString:[NSString stringWithFormat:@"%d",lvlNum] fontName:@"AmericanTypewriter-Bold" fontSize:12.0f];
    textLabel.position = ccp(self.contentSize.width / 2, self.contentSize.height / 2);
    textLabel.color = [CCColor colorWithRed:0.1f green:0.45f blue:0.73f];
    [self addChild:textLabel];
  }
  return self;
}
@end

In our custom init function, we first log the filename and level number to check that we are passing the correct data in. Then, we create a text label by converting the integer to a string. The label is placed at the center of the current sprite's base image by dividing the content size of the image by two to get the center. As the background of the base image and the text are both white, the color of the text is changed to blue so that the text is actually visible. Finally, we add the label to the current class. This is all for the LevelSelectionBtn class. Next, we will create LevelSelectionScene, in which we will add the sprite buttons and the logic for changing the scene to the particular level. So, we will now create a new class, LevelSelectionScene, and in the header file, we will add the following lines:

#import "CCScene.h"
@interface LevelSelectionScene : CCScene {
  NSMutableArray *buttonSpritesArray;
}
+(CCScene*)scene;
@end

Note that apart from the usual code, we also created an NSMutableArray called buttonSpritesArray, which will be used later in the code.
Next, in the LevelSelectionScene.m file, we will add the following:

#import "LevelSelectionScene.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"
@implementation LevelSelectionScene
+(CCScene*)scene{
  return [[self alloc]init];
}
-(id)init{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];
    //Add background image
    CCSprite* backgroundImage = [CCSprite spriteWithImageNamed:@"Bg.png"];
    backgroundImage.position = CGPointMake(winSize.width/2, winSize.height/2);
    [self addChild:backgroundImage];
    //Add text heading for the scene
    CCLabelTTF *mainmenuLabel = [CCLabelTTF labelWithString:@"LevelSelectionScene" fontName:@"AmericanTypewriter-Bold" fontSize:36.0f];
    mainmenuLabel.position = CGPointMake(winSize.width/2, winSize.height * 0.8);
    [self addChild:mainmenuLabel];
    //Initialize array
    buttonSpritesArray = [NSMutableArray array];
    int widthCount = 5;
    int heightCount = 5;
    float spacing = 35.0f;
    float halfWidth = winSize.width/2 - (widthCount-1) * spacing * 0.5f;
    float halfHeight = winSize.height/2 + (heightCount-1) * spacing * 0.5f;
    int levelNum = 1;
    for(int i = 0; i < heightCount; ++i){
      float y = halfHeight - i * spacing;
      for(int j = 0; j < widthCount; ++j){
        float x = halfWidth + j * spacing;
        LevelSelectionBtn* lvlBtn = [[LevelSelectionBtn alloc] initWithFilename:@"btnBG.png" StartlevelNumber:levelNum];
        lvlBtn.position = CGPointMake(x,y);
        lvlBtn.name = [NSString stringWithFormat:@"%d",levelNum];
        [self addChild:lvlBtn];
        [buttonSpritesArray addObject:lvlBtn];
        levelNum++;
      }
    }
  }
  return self;
}

Here, we add the background image and heading text for the scene and initialize the NSMutableArray.
We will then create six new variables, as follows:

widthCount: This is the number of columns we want to have
heightCount: This is the number of rows we want
spacing: This is the distance between each of the sprite buttons so that they don't overlap
halfWidth: This is the x coordinate of the upper-left button, offset from the center of the screen
halfHeight: This is the y coordinate of the upper-left button, offset from the center of the screen
levelNum: This is a counter with an initial value of 1. It is incremented each time a button is created and provides the number shown on the button sprite

In the double loop, we get the x and y coordinates of each of the button sprites. First, to get the y position, we subtract from halfHeight the spacing multiplied by the i counter. As the value of i is initially 0, the y value remains the same as halfHeight for the topmost row. Then, for the x value of the position, we add to halfWidth the spacing multiplied by the j counter, so each column is shifted right by the spacing. After getting the x and y positions, we create a new LevelSelectionBtn sprite, passing in the btnBG.png image and the value of levelNum to create the button sprite. We set the position to the x and y values that we calculated earlier. To refer to the button by number, we assign the name of the sprite to the number of the level, converting levelNum to a string. The button is then added to the scene, and it is also added to the array we created globally, as we will need to cycle through the buttons later. Finally, we increment the value of levelNum. However, we have still not added any interactivity to the sprite buttons so that, when one is pressed, it will load the required level.
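Before wiring up the touch handling, the grid-placement math described above can be checked with a quick Python sketch. The 5 x 5 grid and 35-point spacing match the recipe; the 480 x 320 screen size is an illustrative assumption:

```python
def grid_positions(win_w, win_h, width_count, height_count, spacing):
    """Reproduce the recipe's layout: a grid of buttons centred on
    screen, laid out row by row starting from the upper-left."""
    half_width = win_w / 2 - (width_count - 1) * spacing * 0.5
    half_height = win_h / 2 + (height_count - 1) * spacing * 0.5
    positions = []
    for i in range(height_count):
        y = half_height - i * spacing       # rows move down the screen
        for j in range(width_count):
            x = half_width + j * spacing    # columns move right
            positions.append((x, y))
    return positions

pos = grid_positions(480, 320, 5, 5, 35.0)
print(pos[0])   # upper-left button: (170.0, 230.0)
print(pos[12])  # middle button lands on the screen centre: (240.0, 160.0)
```

With an odd number of rows and columns, the middle button of the grid lands exactly on the screen centre, confirming that halfWidth and halfHeight offset the first button correctly.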
To add touch interactivity, we will use the touchBegan function built right into Cocos2d. We will create more complex interfaces later, but for now, the basic touchBegan function is enough. In the same file, we will add the following code right between the init function and @end:

-(void)touchBegan:(CCTouch *)touch withEvent:(CCTouchEvent *)event{
  CGPoint location = [touch locationInNode:self];
  for (CCSprite *sprite in buttonSpritesArray) {
    if (CGRectContainsPoint(sprite.boundingBox, location)){
      CCLOG(@" you have pressed: %@", sprite.name);
      CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
      [[CCDirector sharedDirector]replaceScene:[[GameplayScene alloc]initWithLevel:sprite.name] withTransition:transition];
      self.userInteractionEnabled = false;
    }
  }
}

The touchBegan function will be called each time we touch the screen. Once we touch the screen, it gets the location of the touch and stores it in a variable called location. Then, using a for in loop, we loop through all the button sprites we added to the array. Using the CGRectContainsPoint function, we check whether the location we pressed is inside the rect of any of the sprites in the loop. We then log the button's name so that we get an indication in the console of which button number we clicked on, to be sure that the right level is loaded. A crossfade transition is created, and the current scene is swapped with GameplayScene, passing the name of the sprite that was clicked on. Finally, we set the userInteractionEnabled Boolean to false so that the current class stops listening for touches. Also, we need to enable this Boolean at the top of the init function, so we add the following line of code as highlighted:

if(self = [super init]){
  self.userInteractionEnabled = TRUE;
  CGSize winSize = [[CCDirector sharedDirector]viewSize];

How it works...
So, we are done with the LevelSelectionScene class, but we still need to add a button in MainScene to open LevelSelectionScene. In MainScene, we will add the following lines in the init function, in which we add menuBtn and a function to be called once the button is clicked on, as highlighted here:

CCButton *playBtn = [CCButton buttonWithTitle:nil
  spriteFrame:[CCSpriteFrame frameWithImageNamed:@"playBtn_normal.png"]
  highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"playBtn_pressed.png"]
  disabledSpriteFrame:nil];
[playBtn setTarget:self selector:@selector(playBtnPressed:)];

CCButton *menuBtn = [CCButton buttonWithTitle:nil
  spriteFrame:[CCSpriteFrame frameWithImageNamed:@"menuBtn.png"]
  highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"menuBtn.png"]
  disabledSpriteFrame:nil];
[menuBtn setTarget:self selector:@selector(menuBtnPressed:)];

CCLayoutBox *btnMenu;
btnMenu = [[CCLayoutBox alloc] init];
btnMenu.anchorPoint = ccp(0.5f, 0.5f);
btnMenu.position = CGPointMake(winSize.width/2, winSize.height * 0.5);
btnMenu.direction = CCLayoutBoxDirectionVertical;
btnMenu.spacing = 10.0f;
[btnMenu addChild:menuBtn];
[btnMenu addChild:playBtn];
[self addChild:btnMenu];

Don't forget to include the menuBtn.png file included in the resources folder of the project; otherwise, you will get a build error.
Next, also add the menuBtnPressed function, which will be called once menuBtn is pressed and released, as follows:

-(void)menuBtnPressed:(id)sender{
  CCLOG(@"menu button pressed");
  CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
  [[CCDirector sharedDirector]replaceScene:[[LevelSelectionScene alloc]init] withTransition:transition];
}

Now, MainScene should look similar to the following:

Click on the menu button below the play button, and you will be able to see LevelSelectionScene in all its glory. Now, click on any of the buttons to open up the gameplay scene displaying the number that you clicked on. In this case, I clicked on button number 18, which is why it shows 18 in the gameplay scene when it loads.

Scrolling level selection scenes

If your game has, say, 20 levels, it is okay to have one single level selection scene to display all the level buttons; but what if you have more? In this section, we will modify the previous section's code, create a node, and customize the class to create a scrollable level selection scene.

Getting ready

We will create a new class called LevelSelectionLayer, inheriting from CCNode, and move all the content we added in LevelSelectionScene into it. This is done so that we can have a separate class and instantiate it as many times as we want in the game.

How to do it...

In the LevelSelectionLayer.h file, we will change the code to the following:

#import "CCNode.h"
@interface LevelSelectionLayer : CCNode {
  NSMutableArray *buttonSpritesArray;
}
-(id)initLayerWith:(NSString *)filename
  StartlevelNumber:(int)lvlNum
  widthCount:(int)widthCount
  heightCount:(int)heightCount
  spacing:(float)spacing;
@end

We changed the init function so that instead of hardcoding the values, we can create a more flexible level selection layer. 
In the LevelSelectionLayer.m file, we will add the following:

#import "LevelSelectionLayer.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"

@implementation LevelSelectionLayer

- (void)onEnter{
  [super onEnter];
  self.userInteractionEnabled = YES;
}

- (void)onExit{
  [super onExit];
  self.userInteractionEnabled = NO;
}

-(id)initLayerWith:(NSString *)filename StartlevelNumber:(int)lvlNum widthCount:(int)widthCount heightCount:(int)heightCount spacing:(float)spacing{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];
    self.contentSize = winSize;
    buttonSpritesArray = [NSMutableArray array];
    float halfWidth = self.contentSize.width/2 - (widthCount-1) * spacing * 0.5f;
    float halfHeight = self.contentSize.height/2 + (heightCount-1) * spacing * 0.5f;
    int levelNum = lvlNum;
    for(int i = 0; i < heightCount; ++i){
      float y = halfHeight - i * spacing;
      for(int j = 0; j < widthCount; ++j){
        float x = halfWidth + j * spacing;
        LevelSelectionBtn* lvlBtn = [[LevelSelectionBtn alloc] initWithFilename:filename StartlevelNumber:levelNum];
        lvlBtn.position = CGPointMake(x,y);
        lvlBtn.name = [NSString stringWithFormat:@"%d",levelNum];
        [self addChild:lvlBtn];
        [buttonSpritesArray addObject: lvlBtn];
        levelNum++;
      }
    }
  }
  return self;
}

-(void)touchBegan:(CCTouch *)touch withEvent:(CCTouchEvent *)event{
  CGPoint location = [touch locationInNode:self];
  CCLOG(@"location: %f, %f", location.x, location.y);
  CCLOG(@"touched");
  for (CCSprite *sprite in buttonSpritesArray)
  {
    if (CGRectContainsPoint(sprite.boundingBox, location)){
      CCLOG(@" you have pressed: %@", sprite.name);
      CCTransition *transition = [CCTransition transitionCrossFadeWithDuration:0.20];
      [[CCDirector sharedDirector]replaceScene:[[GameplayScene alloc]initWithLevel:sprite.name] withTransition:transition];
    }
  }
}
@end

The major changes are 
highlighted here. The first is that we added and removed the touch functionality using the onEnter and onExit functions. The other major change is that we set the contentSize value of the node to winSize. Also, while specifying the upper-left coordinate of the buttons, we did not use winSize for the center but the contentSize of the node. Let's move to LevelSelectionScene now; we will use the following code:

#import "CCScene.h"
@interface LevelSelectionScene : CCScene{
  int layerCount;
  CCNode *layerNode;
}
+(CCScene*)scene;
@end

In the header file, we add two instance variables:

The layerCount variable keeps track of the layer nodes you add

The layerNode variable is an empty node added for convenience so that we can add all the layer nodes to it and move it back and forth instead of moving each layer node individually

Next, in the LevelSelectionScene.m file, we will add the following:

#import "LevelSelectionScene.h"
#import "LevelSelectionBtn.h"
#import "GameplayScene.h"
#import "LevelSelectionLayer.h"

@implementation LevelSelectionScene

+(CCScene*)scene{
  return[[self alloc]init];
}

-(id)init{
  if(self = [super init]){
    CGSize winSize = [[CCDirector sharedDirector]viewSize];
    layerCount = 1;
    //Basic CCSprite - Background Image
    CCSprite* backgroundImage = [CCSprite spriteWithImageNamed:@"Bg.png"];
    backgroundImage.position = CGPointMake(winSize.width/2, winSize.height/2);
    [self addChild:backgroundImage];
    CCLabelTTF *mainmenuLabel = [CCLabelTTF labelWithString:@"LevelSelectionScene" fontName:@"AmericanTypewriter-Bold" fontSize:36.0f];
    mainmenuLabel.position = CGPointMake(winSize.width/2, winSize.height * 0.8);
    [self addChild:mainmenuLabel];
    //empty node
    layerNode = [[CCNode alloc]init];
    [self addChild:layerNode];
    int widthCount = 5;
    int heightCount = 5;
    float spacing = 35;
    for(int i=0; i<3; i++){
      LevelSelectionLayer* lsLayer = [[LevelSelectionLayer
alloc]initLayerWith:@"btnBG.png"         StartlevelNumber:widthCount * heightCount * i + 1         widthCount:widthCount         heightCount:heightCount         spacing:spacing];       lsLayer.position = ccp(winSize.width * i, 0);       [layerNode addChild:lsLayer];     }     CCButton *leftBtn = [CCButton buttonWithTitle:nil       spriteFrame:[CCSpriteFrame frameWithImageNamed:@"left.png"]       highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"left.png"]       disabledSpriteFrame:nil];     [leftBtn setTarget:self selector:@selector(leftBtnPressed:)];     CCButton *rightBtn = [CCButton buttonWithTitle:nil       spriteFrame:[CCSpriteFrame frameWithImageNamed:@"right.png"]       highlightedSpriteFrame:[CCSpriteFrame frameWithImageNamed:@"right.png"]       disabledSpriteFrame:nil];     [rightBtn setTarget:self selector:@selector(rightBtnPressed:)];     CCLayoutBox * btnMenu;     btnMenu = [[CCLayoutBox alloc] init];     btnMenu.anchorPoint = ccp(0.5f, 0.5f);     btnMenu.position = CGPointMake(winSize.width * 0.5, winSize.height * 0.2);     btnMenu.direction = CCLayoutBoxDirectionHorizontal;     btnMenu.spacing = 300.0f;     [btnMenu addChild:leftBtn];     [btnMenu addChild:rightBtn];     [self addChild:btnMenu z:4];   }   return self; } -(void)rightBtnPressed:(id)sender{   CCLOG(@"right button pressed");   CGSize  winSize = [[CCDirector sharedDirector]viewSize];   if(layerCount >=0){     CCAction* moveBy = [CCActionMoveBy actionWithDuration:0.20       position:ccp(-winSize.width, 0)];     [layerNode runAction:moveBy];     layerCount--;   } } -(void)leftBtnPressed:(id)sender{   CCLOG(@"left button pressed");   CGSize  winSize = [[CCDirector sharedDirector]viewSize];   if(layerCount <=0){     CCAction* moveBy = [CCActionMoveBy actionWithDuration:0.20       position:ccp(winSize.width, 0)];     [layerNode runAction:moveBy];     layerCount++;   } } @end How it works... The important piece of the code is highlighted. 
Apart from adding the usual background and text, we initialize layerCount to 1 and initialize the empty layerNode variable. Next, we create a for loop, in which we add the three level selection layers by passing in the starting level number of each layer, the btnBG image, the width count, height count, and spacing between the buttons. Also, note how the layers are positioned at a width's distance from each other. The first one is visible to the player. The consecutive layers are added off screen, similarly to how we placed the second image offscreen while creating the parallax effect. Then, each level selection layer is added to layerNode as a child. We also create the left-hand side and right-hand side buttons so that we can move layerNode to the left and right once they are clicked on. We create two functions called leftBtnPressed and rightBtnPressed, in which we add the functionality for when the left or right button gets pressed. First, let's look at the rightBtnPressed function. Once the button is pressed, we log a message. Next, we get the size of the window. We then check whether the value of layerCount is greater than or equal to zero, which is true as we set the value to 1. We create a moveBy action, in which we give the negative window width for the movement in the x direction and 0 for the movement in the y direction, as we want the movement to be only in the x direction and not y. Lastly, we pass in a duration of 0.20 seconds. The action is then run on layerNode and the layerCount value is decremented. In the leftBtnPressed function, the opposite is done to move the layer in the opposite direction. Run the game to see the change in LevelSelectionScene. As you can't go left, pressing the left button won't do anything. However, if you press the right button, you will see that the layer scrolls to show the next set of buttons. 
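The button-grid placement in initLayerWith: is plain centering arithmetic, and it is easy to verify outside Cocos2d. Here is the same computation sketched in Python (the function name and coordinate values are illustrative, not part of the project):

```python
def grid_positions(center_x, center_y, width_count, height_count, spacing):
    """Mirror of the layer's button placement: the grid is centred on
    (center_x, center_y), rows laid out top to bottom, columns left to right."""
    left = center_x - (width_count - 1) * spacing * 0.5
    top = center_y + (height_count - 1) * spacing * 0.5
    return [(left + j * spacing, top - i * spacing)
            for i in range(height_count)
            for j in range(width_count)]

# A 5 x 5 grid centred in a 320 x 480 view with 35 px spacing,
# matching the counts used in LevelSelectionScene:
positions = grid_positions(160, 240, 5, 5, 35)
print(positions[0])   # → (90.0, 310.0)  top-left button
print(positions[12])  # → (160.0, 240.0) middle button sits exactly at the centre
```

Because the grid is symmetric about the view centre, the middle button of an odd-sized grid always lands on the centre point, which is a quick sanity check when tuning widthCount, heightCount, and spacing.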
Summary In this article, we learned about adding level selection scenes and scrolling level selection scenes in Cocos2d. Resources for Article: Further resources on this subject: Getting started with Cocos2d-x [article] Dragging a CCNode in Cocos2D-Swift [article] Run Xcode Run [article]
Techniques and Practices of Game AI

Packt
14 Jan 2016
10 min read
In this article by Peter L Newton, author of the book Learning Unreal AI Programming, we will understand the fundamental techniques and practices of game AI. This will be the building block to developing an amazing and interesting game AI. (For more resources related to this topic, see here.) Navigation While all the following components aren't necessary to achieve AI navigation, they all contribute critical feedback that can affect navigation. Navigating within a world is limited only by the pathways within the game. Navigation for AI is built up of the following things: Path following (path nodes): Another solution similar to NavMesh, path nodes can designate the space in which the AI traverses. Navigation mesh: Using tools such as Navigation Mesh, also known as NavMesh, you can designate areas in which the AI can traverse. NavMesh generates a plot of grids that is used to calculate the path and cost during navigation. It's important to know that this is only one of several pathfinding techniques available; we use it because it works well in this demonstration. Behavior trees: Using behavior trees to influence your AI's next destination can create a more interesting player experience. It not only calculates its requested destination, but also decides whether it should enter the screen with a cartwheel double backflip, no hands or try the triple somersault to jazz hands. Steering behaviors: Steering behaviors affect the way the AI moves while navigating to avoid obstacles. This also means using steering to create formations with your fleets that you have set to attack the king's wall. Steering can be used in many ways to influence the movement of the character. Sensory systems: Sensory systems can provide critical details, such as players nearby, sound levels, cover nearby, and many other variables of the environment that can alter movement. 
It's critical that your AI understands the changing environment so that it doesn't break the illusion of being a real opponent.

Achieving realistic movement with steering

When you think of what steering does for a car, you would be right to imagine that the same idea is applied to game AI navigation. Steering influences the movement of AI elements as they traverse to their next destination. The influences can be supplied as necessary, but we will go over the most commonly used ones. Avoidance is used essentially to avoid colliding with oncoming AI. Flocking is another key factor in steering; you commonly see an example of it while watching a school of fish. This phenomenon, known as flocking, is useful in simulating interesting group movement, whether a complete panic or a school of fish. The goal of steering behaviors is to achieve realistic movement behavior within the player's world.

Creating character with randomness and probability

AI with character is what randomness and probability add to a bot's decision making. If a bot attacked you the same way, always entered the scene the same way, and annoyed you with its laugh after every successful hit, it wouldn't make for a unique experience—the AI always does the same thing. By using randomness and probability, you can instead make the AI laugh based on probability or introduce randomness into the AI's choice of skill. Another great by-product of applying randomness and probability is that it allows you to introduce levels of difficulty. You can lower the chance of missing the skill cast or even allow the bots to aim more precisely. If you have bots that wander around looking for enemies, their next destination can be randomly chosen.

Creating complex decision making with behavior trees

Finite State Machines (FSM) allow your bot to perform transitions between states. This allows it to go from wandering to hunting and then to killing. Behavior trees are similar but allow more flexibility. 
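Before looking at Unreal's Behavior Tree editor, it helps to see how small a plain FSM really is: the wandering-to-hunting-to-killing flow just described fits in a transition table. A minimal sketch in Python (the state and event names are made up for illustration; they are not part of any Unreal API):

```python
class BotFSM:
    # Allowed transitions: current state -> {event: next state}
    TRANSITIONS = {
        "wandering": {"enemy_spotted": "hunting"},
        "hunting":   {"enemy_in_range": "killing", "enemy_lost": "wandering"},
        "killing":   {"enemy_dead": "wandering", "enemy_fled": "hunting"},
    }

    def __init__(self):
        self.state = "wandering"

    def handle(self, event):
        # Events with no transition from the current state are ignored,
        # so the machine can never reach an undefined state.
        self.state = self.TRANSITIONS[self.state].get(event, self.state)
        return self.state

bot = BotFSM()
print(bot.handle("enemy_spotted"))   # → hunting
print(bot.handle("enemy_in_range"))  # → killing
print(bot.handle("enemy_dead"))      # → wandering
```

The limitation this exposes is exactly what behavior trees address: every new behavior multiplies the transition table, whereas a tree lets you nest whole sub-behaviors under a single branch.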
Behavior trees allow hierarchical FSM, which introduces another layer of decisions. So, the bot decides between branches of behaviors that define the state it is in. There is a tool provided by UE4 called Behavior Tree. Its editor tool allows you to modify AI behavior quickly and with ease. The following sections show the components found within UE4's Behavior Tree. Root This node is the starting node that sends the signal to the next node in the tree. This would connect to a composite that begins your first tree. What you may notice is that you are required to use a composite first to define a tree and then create the task for that tree. This is because a hierarchical FSM creates branches of states. These states will be populated with other states or tasks. This allows easy transitions between multiple states. Decorators This node creates another task, which you can add on top of the node as a "decoration". This could be, for example, a Force Success decorator when using a sequence composite or using a loop to have a node's actions repeated a number of times. I used a decorator in the AI we will make that tells it to update to the next available route. Consider the following screenshot: In the preceding screenshot, you see the Attack & Destroy decorator at the top of the composite, which defines the state. This state includes two tasks, Attack Enemy and Move To Enemy, the latter of which also has a decorator telling it to execute only when the bot state is searching. Composites These are the starting points of the states. They define how the state will behave with returns and execution flow. There is a Selector in our example that will execute each of its children from left to right and doesn't fail but returns success when one of its children returns success. Therefore, this is good for a state that doesn't check for successfully executed nodes. 
The Sequence executes its children in a similar fashion to the Selector, but returns a fail message when one of its children returns fail. This means that it's required that the nodes return a success message to complete the sequence. Last but not least is Simple Parallel. This allows you to execute a task and a tree at essentially the same time. This is great for creating a state that will require another task to always be called. So, to set it up, you first need to connect it to a task that it will execute. The second task or state that is connected continues to be called with the first task until the first task returns a success message. Services Services run as long as the composite that it is added to stays activated. They tick on the intervals that you set within the properties. They have another float property that allows you to create deviations in the tick intervals. Services are used to modify the state of the AI in most cases, because it's always called. For example, in the bot that we will create, we add a service to the first branch of the tree so that it's called without interruption, thus being able to maintain the state that the bot should be in at any given movement. This service, called Detect Enemy, actually runs a deviating cycle that updates Blackboard variables, such as State and EnemyActor: Tasks Tasks do the dirty work and report with a success or failed message if necessary. They have two nodes, which you'll use most often when working with a task: Event Receive Execute, which receives the signal to execute the connected scripts, and Finish Execute, which sends the signal back, returning a true or false message on success. This is important when making a task meant for the Sequence composite. Blackboards Blackboards are used to store variables within the behavior tree of the AI. 
In our example, we store an enumeration variable, State, to store the state, TargetPoint to hold the currently targeted enemy, and Route, which stores the current route position the AI has been requested to travel to, just to name a few. Blackboards work just by setting a public variable of a node to one of the available Blackboard variables in the drop-down menu. The naming convention shown in the following screenshot makes this process streamlined: Sensory system Creating a sensory system is heavily based on the environment where the AI will be fighting the player. It will need to be able to find cover, evade the enemy, get ammo, and other features that you feel will create an immersive AI for your game. Games with AI that challenges the player create a unique individual experience. A good sensory system contributes critical information, which makes for reactive AI. In this project, we use the sensory system to detect pawns that the AI can see. We also use functions to check for the line of sight of the enemy. We check whether there is another pawn in our path. We can check for cover and other resources within the area. Machine learning Machine learning is a branch of its own. This technique allows AI to learn from situations and simulations. The inputs are from the environment, including the context in which the bot allows it to make decisive actions. In machine learning, the inputs are put within a classifier, which can predict a set of outputs with a certain level of certainty. Classifiers can be combined into ensembles to increase the accuracy of the probabilistic prediction. We don't dig heavily into this subject, but I will provide some material for those interested. Tracing Tracing allows another actor within the world to detect objects by ray tracing. A single line trace is sent out, and if it collides with an actor, the actor is returned, including the information about the impact. Tracing is used for many reasons. 
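At its core, a single line trace reduces to a ray-versus-box intersection test. Here is a minimal 2D slab-method sketch in Python purely to illustrate the idea (Unreal performs its traces in C++ through the physics engine, so none of these names correspond to its API):

```python
def line_trace(origin, direction, box_min, box_max):
    """Return the entry distance along the ray if it hits the
    axis-aligned box, or None for a miss (2D slab method)."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0:
            if not (lo <= o <= hi):
                return None          # parallel to this slab and outside it
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    if t_near <= t_far and t_far >= 0:
        return max(t_near, 0.0)      # distance to the hit point
    return None                      # ray misses the box

# Shooting straight along +x from the origin at a box ahead of us:
print(line_trace((0, 0), (1, 0), (5, -1), (7, 1)))   # → 5.0 (hit)
print(line_trace((0, 0), (0, 1), (5, -1), (7, 1)))   # → None (miss)
```

The returned distance is what lets a game report where the impact occurred; a hit-box check, as described next, is just this test run against a box attached to the opponent.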
One way it is used in FPS games is to detect hits. Are you familiar with the hit box? When your player shoots in a game, a trace is shot out that collides with the opponent's hit box, determining the damage to your opponent and, if you're skillful enough, resulting in their death. There are other shapes available for traces, such as spheres, capsules, and boxes, which allow tracing for different situations. Recently, I used the box trace for my car in order to detect objects near it. Influence mapping Influence mapping isn't a finite approach; it's the idea that specific locations on the map would contribute information that directly influences the player or AI. An example when using influence mapping with AI is presence falloff. Say we have enemy AI in a group. Their presence map would create a radial circle around the group with an intensity based on the size of the group. This way, other AI elements know that on entering this area, they're entering a zone occupied by enemy AI. Practical information isn't the only thing people use this for, so just understand that it's meant to provide another level of input to help your bot make additional decisions. Summary In this article, we saw the fundamental techniques and practices of game AI. We saw how to implement navigation, achieve realistic movement of AI elements, and create characters with randomness in order to achieve a sense of realism. We also looked at behavior trees and all their constituent elements. Further, we touched upon some aspects related to AI, such as machine learning and tracing. Resources for Article: Further resources on this subject: Overview of Unreal Engine 4[article] The Unreal Engine[article] Creating weapons for your game using UnrealScript[article]
Spacecraft – Adding Details

Packt
31 Dec 2015
6 min read
In this article by Christopher Kuhn, the author of the book Blender 3D Incredible Machines, we'll model our Spacecraft. As we do so, we'll cover a few new tools and techniques and apply things in different ways to create a final, complex model: Do it yourself—completing the body Building the landing gear (For more resources related to this topic, see here.) We'll work though the spacecraft one section at a time by adding the details. Do it yourself – completing the body Next, let's take a look at the key areas that we have left to model: The bottom of the ship and the sensor suite (on the nose) are good opportunities to practice on your own. They use identical techniques to the areas of the ship that we've already done. Go ahead and see what you can do! For the record, here's what I ended up doing with the sensor suite: Here's what I did with the bottom. You can see that I copied the circular piece that was at the top of the engine area: One of the nice things about a project as this is that you can start to copy parts from one area to another. It's unlikely that both the top and bottom of the ship would be shown in the same render (or shot), so you can probably get away with borrowing quite a bit. Even if you did see them simultaneously, it's not unreasonable to think that a ship would have more than one of certain components. Of course, this is just a way to make things quicker (and easier). If you'd like everything to be 100% original, you're certainly free to do so. Building the landing gear We'll do the landing struts together, but you can feel free to finish off the actual skids yourself: I kept mine pretty simple compared to the other parts of the ship: Once you've got the skid plate done, make sure to make it a separate object (if it's not already). We're going to use a neat trick to finish this up. Make a copy of the landing gear part and move it to the rear section (or front if you have modeled the rear). 
Then, under your mesh tab, you can assign both of these objects the same mesh data: Now, whenever you make a change to one of them, the change will carry over to the other as well. Of course, you could just model one and then duplicate it, but sometimes, it's nice to see how the part will look in multiple locations. For instance, the cutouts are slightly different between the front and back of the ship. As you model it, you'll want to make sure that it will fit both areas. The first detail that we'll add is a mounting bracket for our struts to go on: Then, we'll add a small cylinder (at this point, the large one is just a placeholder): We'll rotate it just a bit: From this, it's pretty easy to create a rear mounting piece. Once you've done this, go ahead and add a shock absorber for the front (leave room for the springs, which we'll add next): To create the spring, we'll start with a small (12-sided) circle. We'll make it so small because just like the cable reel on the grabbling gun there will naturally be a lot of geometry, and we want to keep the polygon count as low as possible. Then, in edit mode, move the whole circle away from its original center point: Having done this, you can now add a screw modifier. Right away, you'll see the effect: There are a couple of settings you'll want to make note of here. The Screw value controls the vertical gap or distance of your spring: The Angle and Steps values control the number of turns and smoothness respectively: Go ahead and play with these until you're happy. Then, move and scale your spring into a position. Once it's the way you like it, go ahead and apply the screw modifier (but don't join it to the shock absorber just yet): None of my existing materials seemed right for the spring. So, I went ahead and added one that I called Blue Plastic. At this point, we have a bit of a problem. We want to join the spring to the landing gear but we can't. 
The landing gear has an edge split modifier with a split angle value of 30, and the spring has a value of 46. If we join them right now, the smooth edges on the spring will become sharp. We don't want this. Instead, we'll go to our shock absorber. Using the Select menu, we'll pick the Sharp Edges option: By default, it will select all edges with an angle of 30 degrees or higher. Once you do this, go ahead and mark these edges as sharp: Because all the thirty degree angles are marked sharp, we no longer need the Edge Angle option on our edge split modifier. You can disable it by unchecking it, and the landing gear remains exactly the same: Now, you can join the spring to it without a problem: Of course, this does mean that when you create new edges in your landing gear, you'll now have to mark them as sharp. Alternatively, you can keep the Edge Angle option selected and just turn it up to 46 degrees—your choice. Next, we'll just pull in the ends of our spring a little, so they don't stick out: Maybe we'll duplicate it. After all, this is a big, heavy vehicle, so maybe, it needs multiple shock absorbers: This is a good place to leave our landing gear for now. Summary In this article, we finished modeling our Spaceship's landing gear. We used a few new tools within Blender, but mostly, we focused on workflow and technique. Resources for Article: Further resources on this subject: Blender 3D 2.49: Quick Start[article] Blender 3D 2.49: Working with Textures[article] Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49 [article]
Let's Get Physical – Using GameMaker's Physics System

Packt
02 Nov 2015
28 min read
In this article by Brandon Gardiner and Julián Rojas Millán, authors of the book GameMaker Cookbook, we'll cover the following topics:

Creating objects that use physics
Alternating the gravity
Applying a force via magnets
Creating a moving platform
Making a rope

(For more resources related to this topic, see here.)

The majority of video games are ruled by physics in one way or another. 2D platformers require coded movement and jump physics. Shooters, both 2D and 3D, use ballistic calculators that vary in sophistication to calculate whether you shot that guy or missed him and he's still coming to get you. Even Pong used rudimentary physics to calculate the ball's trajectory after bouncing off a paddle or wall. The point is that physics, however complex, is important in video games. GameMaker comes with its own engine that can be used to recreate physics-based sandbox games, such as The Incredible Machine, or even puzzle games, such as Cut the Rope or Angry Birds. Let's take a look at how elements of these games can be accomplished using GameMaker's built-in physics engine.

Physics engine 101

In order to use GameMaker's physics engine, we first need to set it up. Let's create and test some basic physics before moving on to something more complicated.

Gravity and force

One of the things that we learned with regard to GameMaker physics was to create our own simplistic gravity. Now that we've set up gravity using the physics engine, let's see how we can bend it according to our requirements.

Physics in the environment

GameMaker's physics engine allows you to choose not only which objects are affected by external forces but also how they are affected. Let's take a look at how this can be applied to create environmental objects in your game. 
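The "simplistic gravity" mentioned above, the kind you code by hand before enabling a physics world, boils down to per-step Euler integration. A language-agnostic sketch in Python (GameMaker runs this logic in each step event; the variable names here are illustrative, not GML built-ins):

```python
def step(y, vspeed, grav, floor_y):
    """One step of hand-rolled gravity: accelerate, move, clamp to the floor.
    y grows downward, as in GameMaker room coordinates."""
    vspeed += grav
    y += vspeed
    if y > floor_y:
        y, vspeed = floor_y, 0.0   # landed: stop falling
    return y, vspeed

# Five steps of free fall from rest with grav = 0.5:
y, vspeed = 0.0, 0.0
for _ in range(5):
    y, vspeed = step(y, vspeed, grav=0.5, floor_y=100.0)
print(y, vspeed)  # → 7.5 2.5
```

The physics engine replaces exactly this loop: instead of you incrementing vspeed each step, the room's gravity vector is applied to every dynamic fixture for you, scaled by the room's pixels-to-meters setting.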
Advanced physics-based objects Many platforming games, going all the way back to Pitfall!, have used objects, such as a rope as a gameplay feature. Pitfall!, mind you, uses static rope objects to help the player avoid crocodiles, but many modern games use dynamic ropes and chains, among other things, to create a more immersive and challenging experience. Creating objects that use physics There's a trend in video games where developers create products that have less games than play areas; worlds and simulators in which a player may or may not be given an objective and it wouldn't matter either way. These games can take on a life of their own; Minecraft is essentially a virtual game of building blocks and yet has become a genre of its own, literally making its creator, Markus Persson (also known as Notch), a billionaire in the process. While it is difficult to create, the fun in games such as Minecraft is designed by the player. If you give a player a set of tools or objects to play with, you may end up seeing an outcome you hadn't initially thought of and that's a good thing. The reason why I have mentioned all of this is to show you how it binds to GameMaker and what we can do with it. In a sense, GameMaker is a lot like Minecraft. It is a set of tools, such as the physics engine we're about to use, that the user can employ if he/she desires (of course, within limits), in order to create something funny or amazing or both. What you do with these tools is up to you, but you have to start somewhere. Let's take a look at how to build a simple physics simulator. Getting ready The first thing you'll need is a room. Seems simple enough, right? Well, it is. One difference, however, is that you'll need to enable physics before we begin. With the room open, click on the Physics tab and make sure that the box marked Room is Physics World is checked. After this, we'll need some sprites and objects. 
For sprites, you'll need a circle, triangle, and two squares, each of a different color. The circle is for obj_ball. The triangle is for obj_poly. One of the squares is for obj_box, while the other is for obj_ground. You'll also need four objects without sprites: obj_staticParent, obj_dynamicParent, obj_button, and obj_control. How to do it Open obj_staticParent and add two collision events: one with itself and one with obj_dynamicParent. In each of the collision events, drag and drop a comment from the Control tab to the Actions box. In each comment, write Collision. Close obj_staticParent and repeat steps 1-3 for obj_dynamicParent. In obj_dynamicParent, click on Add Event, and then click on Other and select Outside Room. From the Main1 tab, drag and drop Destroy Instance in the Actions box. Select Applies to Self. Open obj_ground and set the parent to obj_staticParent. Add a create event with a code block containing the following code: var fixture = physics_fixture_create(); physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2); physics_fixture_set_density(fixture, 0); physics_fixture_set_restitution(fixture, 0.2); physics_fixture_set_friction(fixture, 0.5); physics_fixture_bind(fixture, id); physics_fixture_delete(fixture); Open the room that you created and start placing instances of obj_ground around it to create platforms, stairs, and so on. This is how mine looked like: Open obj_ball and set the parent to obj_dynamicParent. 
Add a create event and enter the following code:

var fixture = physics_fixture_create();
physics_fixture_set_circle_shape(fixture, sprite_get_width(spr_ball) / 2);
physics_fixture_set_density(fixture, 0.25);
physics_fixture_set_restitution(fixture, 1);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

Repeat steps 10 and 11 for obj_box, but use this code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0.5);
physics_fixture_set_restitution(fixture, 0.2);
physics_fixture_set_friction(fixture, 0.01);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

Repeat steps 10 and 11 for obj_poly, but use this code:

var fixture = physics_fixture_create();
physics_fixture_set_polygon_shape(fixture);
physics_fixture_add_point(fixture, 0, -(sprite_height / 2));
physics_fixture_add_point(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_add_point(fixture, -(sprite_width / 2), sprite_height / 2);
physics_fixture_set_density(fixture, 0.01);
physics_fixture_set_restitution(fixture, 0.1);
physics_fixture_set_linear_damping(fixture, 0.5);
physics_fixture_set_angular_damping(fixture, 0.01);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

Open obj_control and add a create event using the following code:

globalvar shape_select;
globalvar shape_output;
shape_select = 0;

Add a Step event and add the following code to a code block:

if mouse_check_button(mb_left) && alarm[0] < 0 && !place_meeting(x, y, obj_button)
{
    instance_create(mouse_x, mouse_y, shape_output);
    alarm[0] = 5;
}
if mouse_check_button_pressed(mb_right)
{
    shape_select += 1;
}

Now, add an event to alarm[0] and give it a comment stating Set Timer. Place an instance of obj_control in the room that you created, but make sure that it is placed at the coordinates (0, 0).
Open obj_button and add a step event. Drag a code block to the Actions tab and input the following code:

if shape_select > 2
{
    shape_select = 0;
}
if shape_select = 0
{
    sprite_index = spr_ball;
    shape_output = obj_ball;
}
if shape_select = 1
{
    sprite_index = spr_box;
    shape_output = obj_box;
}
if shape_select = 2
{
    sprite_index = spr_poly;
    shape_output = obj_poly;
}

Once these steps are completed, you can test your physics environment. Use the right mouse button to select the shape you would like to create, and use the left mouse button to create it. Have fun! How it works While not overly complicated, there is a fair amount of activity in this recipe. Let's take a quick look at the room itself. When you created this room, you checked the box for Room is Physics World. This does exactly what it says it does; it enables physics in the room. If you have any physics-enabled objects in a room that is not a physics world, errors will occur. In the same menu, you have the gravity settings (which are vector-based) and pixels to meters, which sets the scale of objects in the room. This setting is important as it controls how each object is affected by the coded physics. YoYo Games based GameMaker's physics on the real world (as they should) and so GameMaker needs to know how many meters are represented by each pixel. The higher the number, the larger the world in the room. If you place an object in two different rooms with different pixel to meter settings, even though the objects have the same settings, GameMaker will apply physics to them differently because it views them as being of differing size and weight. Let's take a look at the objects in this simulation. Firstly, you have two parent objects: one static and the other dynamic. The static object is the only parent to one object: obj_ground. The reason for this is that static objects are not affected by outside forces in a physics world, that is, the room you built.
Because of this, the ground pieces are able to ignore gravity and forces applied by other objects that collide with them. Now, neither obj_staticParent nor obj_dynamicParent contains any physics code; we saved this for our other objects. We use our parent objects to govern our collision groups, using two objects instead of coding collisions in each object. So, we use drag and drop collision blocks to ensure that any children can collide with instances of one another and with themselves. Why did you drag comment blocks into these collision events? We did this so that GameMaker doesn't ignore them; the contents of each comment block are irrelevant. Also, the dynamic parent has an event that destroys any instance of its children that ends up outside the room. The reason for this is simply to save memory. Otherwise, each object, even those off-screen, will be accounted for in calculations at every step, and this will slow everything down and eventually crash the program. Now, as we're using physics-enabled objects, let's see how each one differs from the others. When working with the object editor, you may have noticed the checkbox labelled Uses Physics. This checkbox will automatically set up the basic physics code within the selected object, though it assumes that you're using the drag and drop method of programming. If you click on it, you'll see a new menu with basic collision options as well as several values and associated options: Density: Density in GameMaker works exactly as it does in real life. An object with a high density will be much heavier and harder to move via force than a low-density object of the same size. Think of how far you can kick an empty cardboard box versus how far you can kick a cardboard box full of bricks, assuming that you don't break your foot. Restitution: Restitution essentially governs an object's bounciness.
A higher restitution will cause an object to bounce like a rubber ball, whereas a lower restitution will cause an object to bounce like a box of bricks, as mentioned in the previous example. Collision group: Collision grouping tells GameMaker how certain objects react with one another. By default, all physics objects are set to collision group 0. This means that they will not collide with other objects without a specific collision event. Assigning a positive number to this setting will cause the object in question to collide with all other objects in the same collision group, regardless of collision events. Assigning a negative number will prevent the object from colliding with any objects in that group. I don't recommend that you use collision groups unless absolutely necessary, as it takes a great deal of memory to work properly. Linear damping: Linear damping works a lot like air friction in real life. This setting affects the velocity (momentum) of objects in motion over time. Imagine a military shooter where thrown grenades don't arc; they just keep soaring through the air. We don't need this. This is what rockets are for. Angular damping: Angular damping is similar to linear damping, but it affects only an object's rotation. This setting keeps objects from spinning forever. Have you ever ridden the Teacup ride at Disneyland? If so, you will know that angular damping is a good thing. Friction: Friction also works in a similar way to linear damping, but it affects an object's momentum as it collides with another object or surface. If you want to create icy surfaces in a platformer, friction is your friend. We didn't use this menu in this recipe, but we did set and modify these settings through code. First, we set each object to use physics and then declared its shape and collision mask.
We started with declaring the fixture variable because, as you can see, it is part of each of the functions we used, and typing fixture is easier than typing physics_fixture_create() every time. The fixture variable that we bind to the object is what is actually being affected by forces and other physics objects, so we must set its shape and properties in order to tell GameMaker how it should react. In order to set the fixture's shape, we use physics_fixture_set_circle_shape, physics_fixture_set_box_shape, and physics_fixture_set_polygon_shape. These functions define the collision mask associated with the object in question. In the case of the circle, we got the radius from half the width of the sprite, whereas for the box, we defined the outer edges via half the width and half the height. GameMaker then uses this information to create a collision mask to match the sprite from which the information was gathered. When creating a fixture from a more complex sprite, you can either use the aforementioned methods to approximate a mask, or you can create a more complex shape using a polygon, like we did for the triangle. You'll notice that the code to create the triangle fixture had extra lines. This is because polygons require you to map each point on the shape you're trying to create. You can map three to eight points by telling GameMaker where each one is situated in relation to the center of the image (0, 0). One very important detail is that you cannot create a concave shape; this will result in an error. Every fixture you create must have a convex shape. The only way to create a concave fixture is to actually create multiple fixtures in the same object. If you were to take the code for the triangle, duplicate all of it in the same code block, and alter the coordinates for each point in the duplicated code, you could create concave shapes. For example, you can use two rectangles to make an L shape.
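Following that duplication approach, an L shape might be sketched as two convex polygon fixtures bound to the same instance in its create event. The coordinates below are illustrative and assume a 64 x 64 px sprite with its origin at the center:

```gml
// Vertical bar of the L: a tall rectangle on the left side
var fixture = physics_fixture_create();
physics_fixture_set_polygon_shape(fixture);
physics_fixture_add_point(fixture, -32, -32);
physics_fixture_add_point(fixture, -16, -32);
physics_fixture_add_point(fixture, -16, 32);
physics_fixture_add_point(fixture, -32, 32);
physics_fixture_set_density(fixture, 0.5);
physics_fixture_set_restitution(fixture, 0.2);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

// Horizontal bar of the L: a short rectangle along the bottom
var fixture2 = physics_fixture_create();
physics_fixture_set_polygon_shape(fixture2);
physics_fixture_add_point(fixture2, -16, 16);
physics_fixture_add_point(fixture2, 32, 16);
physics_fixture_add_point(fixture2, 32, 32);
physics_fixture_add_point(fixture2, -16, 32);
physics_fixture_set_density(fixture2, 0.5);
physics_fixture_set_restitution(fixture2, 0.2);
physics_fixture_set_friction(fixture2, 0.5);
physics_fixture_bind(fixture2, id);
physics_fixture_delete(fixture2);
```

Each fixture is convex on its own; together they meet along one edge to form the concave L.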
This can only be done using a polygon fixture, as it is the only fixture that allows you to code the position of individual points. Once you've coded the shape of your fixture, you can begin to code its attributes. I've described what each physics option does, and you've coded and tested them using the instructions mentioned earlier. Now, take a look at the values for each setting. The ball object has a higher restitution than the rest; did you notice how it bounced? The box object has a very low friction; it slides around on platforms as though it is made of ice. The triangle has very low density and angular damping; it is easily knocked around by the other objects and spins like crazy. You can change how objects react to forces and collisions by changing one or more of these values. I definitely recommend that you play around with these settings to see what you can come up with. Remember how the ground objects are static? Notice how we still had to code them? Well, that's because they still interact with other objects but in an almost opposite fashion. Since we set the object's density to 0, GameMaker more or less views this as an object that is infinitely dense; it cannot be moved by outside forces or collisions. It can, however, affect other objects. We don't have to set the angular and linear damping values simply because the ground doesn't move. We do, however, have to set the restitution and friction levels because we need to tell GameMaker how other objects should react when they come in contact with the ground. Do you want to make a rubber wall to bounce a player off? Set the restitution to a higher level. Do you want to make that icy patch we talked about? Then, you need to lower the friction. These are some fun settings to play around with, so try it out. Alternating gravity Gravity can be a harsh mistress; if you've ever fallen from a height, you will understand what I mean. 
I often think it would be great if we could somehow lessen gravity's hold on us, but then I wonder what it would be like if we could just reverse it altogether! Imagine flipping a switch and then walking on the ceiling! I, for one, think that it would be great. However, since we don't have the technology to do it in real life, I'll have to settle for doing it in video games. Getting ready For this recipe, let's simplify things and use the physics environment that we created in the previous recipe. How to do it In obj_control, open the code block in the create event. Add the following code:

physics_world_gravity(0, -10);

That's it! Test the environment and see what happens when you create your physics objects. How it works GameMaker's physics world gravity is vector-based. This means that you simply need to change the values of x and y in order to change how gravity works in a particular room. If you take a look at the Physics tab in the room editor, you'll see that there are values under x and y. The default value is 0 for x and 10 for y. When we added this code to the control object's create event, we changed the value of y to -10, which means that gravity will flow in the opposite direction. You can change the direction through a full 360 degrees by altering both x and y, and you can change the gravity's strength by raising and lowering the values. There's more Alternating the gravity's flow can be a lot of fun in a platformer. Several games have explored this in different ways. Your character can change the gravity by hitting a switch in the game, the player can change it by pressing a button, or you can just give specific areas different gravity settings. Play around with this and see what you can create. Applying force via magnets Remember playing with magnets in a science class when you were a kid? It was fun back then, right? Well, it's still fun; powerful magnets make a great gift for your favorite office worker. What about virtual magnets, though? Are they still fun?
The answer is yes. Yes, they are. Getting ready Once again, we're simply going to modify our existing physics environment in order to add some new functionality. How to do it In obj_control, open the code block in the step event. Add the following code:

if keyboard_check(vk_space)
{
    with (obj_dynamicParent)
    {
        var dir = point_direction(x, y, mouse_x, mouse_y);
        physics_apply_force(x, y, lengthdir_x(30, dir), lengthdir_y(30, dir));
    }
}

Once you close the code block, you can test your new magnet. Add some objects, hold down the spacebar, and see what happens. How it works Applying a force to a physics-enabled object in GameMaker will add a given value to the direction, rotation, and speed of the said object. Force can be used to gradually propel an object in a given direction or, with a little math, as in this case, to draw objects nearer. What we're doing here is that, while the spacebar is held down, any objects in the vicinity are drawn to the magnet (in this case, your mouse). In order to accomplish this, we first declare that the following code needs to act on obj_dynamicParent, as opposed to acting on the control object where the code resides. We then set the value of the dir variable to the point_direction from each child of obj_dynamicParent to the mouse. From there, we can begin to apply force. With physics_apply_force, the first two values represent the x and y coordinates of the object to which the force is being applied. Since the object(s) in question is/are not static, we simply set the coordinates to whatever value they have at the time. The other two values are used in tandem to calculate the direction in which the object will travel and the force propelling it in newtons. We get these values, in this instance, by calculating the lengthdir for both x and y.
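As a quick variation on the same idea, negating the force components pushes objects away instead of drawing them in, turning the magnet into a repulsor. This is only a sketch; it assumes the same obj_dynamicParent setup and could sit in the same step event, tied to a different key:

```gml
// While R is held, push all dynamic children away from the mouse
if keyboard_check(ord("R"))
{
    with (obj_dynamicParent)
    {
        var dir = point_direction(x, y, mouse_x, mouse_y);
        // Same lengthdir math as the magnet, but with the signs flipped
        physics_apply_force(x, y, -lengthdir_x(30, dir), -lengthdir_y(30, dir));
    }
}
```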
The lengthdir finds the x or y component of a point at a given length (we used 30) at a given angle (we used dir, which holds the point_direction result, that is, the angle at which the mouse's coordinates lie). If you want to increase the power of the magnet, increase the length value. Creating a moving platform We've now seen both static and dynamic physics objects in GameMaker, but what happens when we want the best of both worlds? Let's take a look at how to create a platform that can move and affect other objects via collisions but is immune to said collisions. Getting ready Again, we'll be using our existing physics environment, but this time, we'll need a new object. Create a sprite that is 128 px wide by 32 px high and assign it to an object called obj_platform. Also, create another object called obj_kinematicParent but don't give it a sprite. Add collision events to obj_staticParent, obj_dynamicParent, and itself. Make sure that there is a comment in each event. How to do it In obj_platform, add a create event. Drag a code block to the actions box and add the following code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0);
physics_fixture_set_restitution(fixture, 0.2);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);
phy_speed_x = 5;

Add a Step event with a code block containing the following code:

if (x < 64) or (x > room_width - 64)
{
    phy_speed_x = phy_speed_x * -1;
}

Place an instance of obj_platform in the room, slightly higher than the highest instance of obj_ground. Once this is done, you can go ahead and test it. Try dropping various objects on the platform and see what happens! How it works Kinematic objects in GameMaker's physics world are essentially static objects that can move. While the platform has a density of 0, it also has a speed of 5 along the x axis.
You'll notice that we didn't just use speed equal to 5, as this would not have the desired effect in a physics world. The code in the step event simply causes the platform to remain within a set boundary by multiplying its current horizontal speed by -1. Any static object to which movement is applied automatically becomes a kinematic object. Making a rope Is there anything more useful than a rope? I mean, besides your computer, your phone, or even this book. Probably a lot of things, but that doesn't make a rope any less useful. Ropes and chains are also useful in games. Some games, such as Cut the Rope, have based their entire gameplay structure around them. Let's see how we can create ropes and chains in GameMaker. Getting ready For this recipe, you can either continue using the physics environment that we've been working with, or you can simply start from scratch. If you've gone through the rest of this chapter, you should be fairly comfortable with setting up physics objects. I completed this recipe with a fresh .gmx file. Before we begin, go ahead and set up obj_dynamicParent and obj_staticParent with collision events for one another. Next, you'll need to create the obj_ropeHome, obj_rope, obj_block, and obj_ropeControl objects. The sprite for obj_rope can simply be a 4 px wide by 16 px high box, while obj_ropeHome and obj_block can be 32 px squares. obj_ropeControl needs to use the same sprite as obj_rope, but with the y origin set to 0. obj_ropeControl should also be invisible. As for parenting, obj_rope should be a child of obj_dynamicParent; obj_ropeHome and obj_block should be children of obj_staticParent; obj_ropeControl does not require any parent at all. As always, you'll also need a room in which to place your objects. How to do it Open obj_ropeHome and add a create event.
Place a code block in the actions box and add the following code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0);
physics_fixture_set_restitution(fixture, 0.2);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

In obj_rope, add a create event with a code block. Enter the following code:

var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 0.25);
physics_fixture_set_restitution(fixture, 0.01);
physics_fixture_set_linear_damping(fixture, 0.5);
physics_fixture_set_angular_damping(fixture, 1);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

Open obj_ropeControl and add a create event. Drag a code block to the actions box and enter the following code:

setLength = image_yscale - 1;
ropeLength = 16;
rope1 = instance_create(x, y, obj_ropeHome);
rope2 = instance_create(x, y, obj_rope);
physics_joint_revolute_create(rope1, rope2, rope1.x, rope1.y, 0, 0, 0, 0, 0, 0, 0);
repeat (setLength)
{
    ropeLength += 16;
    rope1 = rope2;
    rope2 = instance_create(x, y + ropeLength, obj_rope);
    physics_joint_revolute_create(rope1, rope2, rope1.x, rope1.y, 0, 0, 0, 0, 0, 0, 0);
}

In obj_block, add a create event. Place a code block in the actions box and add the following code:

var fixture = physics_fixture_create();
physics_fixture_set_circle_shape(fixture, sprite_width / 2);
physics_fixture_set_density(fixture, 0);
physics_fixture_set_restitution(fixture, 0.01);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);

Now, add a step event with the following code in a code block:

phy_position_x = mouse_x;
phy_position_y = mouse_y;

Place an instance of obj_ropeControl anywhere in the room.
This will be the starting point of the rope. You can place multiple instances of the object if you wish. For every instance of obj_ropeControl you place in the room, use the bounding box to stretch it to whatever length you wish. This will determine the length of your rope. Place a single instance of obj_block in the room. Once you've completed these steps, you can go ahead and test them. How it works This recipe may seem somewhat complicated, but it's really not. What we're doing here is taking multiple instances of the same physics-enabled object and stringing them together. Since you're using instances of the same object, you only have to code one and the rest will follow. Once again, our collisions are handled by our parent objects. This way, you don't have to set collisions for each object. Also, setting the physical properties of each object is done exactly as we have done in previous recipes. By setting the density of obj_ropeHome and obj_block to 0, we're ensuring that they are not affected by gravity or collisions, but they can still collide with other objects and affect them. In this case, we set the physics coordinates of obj_block to those of the mouse so that, when testing, you can use it to collide with the rope, moving it. The most complex code takes place in the create event for obj_ropeControl. Here, we not only define how many sections of a rope or chain will be used, but we also define how they are connected. To begin, the y scale of the control object is measured in order to determine how many instances of obj_rope are required. Based on how long you stretched obj_ropeControl in the room, the rope will be longer (more instances) or shorter (fewer instances). We then set a variable (ropeLength) to the height of the sprite used for obj_rope. This will be used later to tell GameMaker where each instance of obj_rope should be so that we can connect them in a line. Next, we create the object that will hold the rope: obj_ropeHome.
This is a static object that will not move, no matter how much the rope moves. This is connected to the first instance of obj_rope via a revolute joint. In GameMaker, a revolute joint is used in several ways: it can act as part of a motor, moving pistons; it can act as a joint on a ragdoll body; in this case, it acts as the connection between instances of obj_rope. A revolute joint allows the programmer to code its angle and torque, but for our purposes, this isn't necessary. We declared the objects that are connected via the joint as well as the anchor location, but we left the other values at zero. Once the rope holder (obj_ropeHome) and initial joint are set up, we can automate the creation of the rest. Using the repeat function, we can tell GameMaker to repeat a block of code a set number of times. In this case, this number is derived from how many instances of obj_rope can fit within the distance between the y origin of obj_ropeControl and the point to which you stretched it. We subtract 1 from this number because GameMaker would otherwise create one instance too many when covering the distance in its entirety. The code that will be repeated does a few things at once. First, it increases the value of the ropeLength variable by 16 for each instance that is calculated. Then, the value of rope1 (which initially held the instance of obj_ropeHome) is replaced with that of rope2 (the most recently created instance of obj_rope). The rope2 variable is then reassigned to a new instance of obj_rope, using the updated value of ropeLength to place its coordinates directly below those of the previous instance, thus creating a chain. This process is repeated until the set length of the overall rope is reached. There's more Each section of a rope is a physics object and acts in the physics world. By changing the physics settings when initially creating the rope sections, you can see how they react to collisions.
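For instance, to get a heavy chain that barely reacts to light objects, you might raise the density and damping in obj_rope's create event. The values below are purely illustrative; compare them with the originals to feel the difference:

```gml
// A heavier, stiffer chain link (a variation on obj_rope's create event)
var fixture = physics_fixture_create();
physics_fixture_set_box_shape(fixture, sprite_width / 2, sprite_height / 2);
physics_fixture_set_density(fixture, 2);        // much denser than the original 0.25
physics_fixture_set_restitution(fixture, 0.01); // still barely bouncy
physics_fixture_set_linear_damping(fixture, 2); // sheds momentum quickly
physics_fixture_set_angular_damping(fixture, 2);
physics_fixture_set_friction(fixture, 0.5);
physics_fixture_bind(fixture, id);
physics_fixture_delete(fixture);
```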
How far and how quickly the rope moves when pushed by another object is very much related to the difference between their densities. If you make the rope denser than the object colliding with it, the rope will move very little. If you reverse these values, you can cause the rope to flail about wildly. Play around with the settings and see what happens, but when placing a rope or chain in a game, you really must consider what the rope and other objects are made of. It wouldn't seem right for a lead chain to be sent flailing about by a collision with a pillow, now would it? Summary This article introduces the physics system and demonstrates how GameMaker handles gravity, friction, and so on. Learn how to implement this system to make more realistic games. Resources for Article: Further resources on this subject: Getting to Know LibGDX [article] HTML5 Game Development – A Ball-shooting Machine with Physics Engine [article] Introducing GameMaker [article]
The Exciting Features of HaxeFlixel

Packt
02 Nov 2015
4 min read
This article by Jeremy McCurdy, the author of the book Haxe Game Development Essentials, uncovers the exciting features of HaxeFlixel. When getting into cross-platform game development, it's often difficult to pick the best tool. There are a lot of engines and languages out there to do it, but when creating 2D games, one of the best options out there is HaxeFlixel. HaxeFlixel is a game engine written in the Haxe language. It is powered by the OpenFL framework. Haxe is a cross-platform language and compiler that allows you to write code and have it run on a multitude of platforms. OpenFL is a framework that expands the Haxe API and gives you easy ways to handle things such as rendering and audio in a uniform way across different platforms. Here's a rundown of what we'll look at: Core features Display Audio Input Other useful features Multiplatform support Advanced user interface support Visual effects (For more resources related to this topic, see here.) Core features HaxeFlixel is a 2D game engine, originally based on the Flash game engine Flixel. So, what makes it awesome? Let's start with the basic things you need: display, audio, and input. Display In HaxeFlixel, most visual elements are represented by objects using the FlxSprite class. This can be anything from spritesheet animations to shapes drawn through code. This provides you with a simple and consistent way of working with visual elements. Here's an example of how the FlxSprite objects are used: You can handle things such as layering by using the FlxGroup class, which does what its name implies: it groups things together. The FlxGroup class can also be used for collision detection (check whether objects from group A hit objects from group B). It also acts as an object pool for better memory management. It's really versatile without feeling bloated. Everything visual is displayed by using the FlxCamera class. As the name implies, it's a game camera.
It allows you to do things such as scrolling, having fullscreen visual effects, and zooming in and out. Audio Sound effects and music are handled using a simple but effective sound frontend. It allows you to play sound effects and loop music clips with easy function calls. You can also manage the volume on a per-sound basis, via global volume controls, or a mix of both. Input HaxeFlixel supports many methods of input. You can use mouse, touch, keyboard, or gamepad input. This allows you to support players on every platform easily. On desktop platforms, you can easily customize the mouse cursor without the need to write special functionality. The built-in gamepad support covers mappings for the following controllers: Xbox PS3 PS4 OUYA Logitech Other useful features HaxeFlixel has a bunch of other cool features. This makes it a solid choice as a game engine. Among these are multiplatform support, advanced user interface support, and visual effects. Multiplatform support HaxeFlixel can be built for many different platforms. Much of this comes from it being built using OpenFL and its stellar cross-platform support. You can build desktop games that will work natively on Windows, Mac, and Linux. You can build mobile games for Android and iOS with relative ease. You can also target the Web by using Flash or the experimental support for HTML5. Advanced user interface support By using the flixel-ui add-on library, you can create complex game user interfaces. You can define and set up these interfaces using XML configuration files. The flixel-ui library gives you access to a lot of different control types, such as 9-sliced images, check/toggle buttons, text input, tabs, and drop-down menus. You can even localize UI text into different languages by using the Haxe firetongue library. Visual effects Another add-on is the effects library. It allows you to warp and distort sprites by using the FlxGlitchSprite and FlxWaveSprite classes.
You can also add trails to objects by using the FlxTrail class. Aside from the add-on library, HaxeFlixel also has built-in support for 2D particle effects, camera effects such as screen flashes and fades, and screen shake for added impact. Summary In this article, we discussed several features of HaxeFlixel. These include the core features of display, audio, and input. We also covered the additional features of multiplatform support, advanced user interface support, and visual effects. Resources for Article: Further resources on this subject: haXe 2: The Dynamic Type and Properties [article] Being Cross-platform with haXe [article] haXe 2: Using Templates [article]
Welcome to the Land of BludBorne

Packt
23 Oct 2015
12 min read
In this article by Patrick Hoey, the author of Mastering LibGDX Game Development, we will jump into creating the world of BludBourne (that's our game!). We will first learn some concepts and tools related to creating tile-based maps, and then we will look into starting with BludBorne! We will cover the following topics in this article: Creating and editing tile-based maps Implementing the starter classes for BludBourne (For more resources related to this topic, see here.) Creating and editing tile-based maps For the BludBourne project map locations, we will be using tilesets, which are terrain and decoration sprites in the shape of squares. These are easy to work with since LibGDX supports tile-based maps with its core library. The easiest method to create these types of maps is to use a tile-based editor. There are many different types of tilemap editors, but there are two primary ones that are used with LibGDX because they have built-in support: Tiled: This is a free and actively maintained tile-based editor. I have used this editor for the BludBourne project. Download the latest version from http://www.mapeditor.org/download.html. Tide: This is a free tile-based editor built using Microsoft XNA libraries. The targeted platforms are Windows, Xbox 360, and Windows Phone 7. Download the latest version from http://tide.codeplex.com/releases. For the BludBourne project, we will be using Tiled. The following figure is a screenshot from one of the editing sessions when creating the maps for our game: The following is a quick guide for how we can use Tiled for this project: Map View (1): The map view is the part of the Tiled editor where you display and edit your individual maps. Numerous maps can be loaded at once, using a tab approach, so that you can switch between them quickly. There is a zoom feature available for this part of Tiled in the lower right-hand corner, and it can be easily customized to fit your workflow.
The maps are provided in the project directory (under core/assets/maps), but when you wish to create your own maps, you can simply go to File | New. In the New Map dialog box, first set the Tile size dimensions, which, for our project, will be a width of 16 pixels and a height of 16 pixels. The other setting is Map size, which represents the size of your map in unit size, using the tile size dimensions as your unit scale. An example would be creating a map that is 100 units by 100 units; if our tiles have a dimension of 16 pixels by 16 pixels, then this would give us a map size of 1600 pixels by 1600 pixels. Layers (2): This represents the different layers of the currently loaded map. You can think of creating a tile map like painting a scene, where you paint the background first and build up the various elements until you get to the foreground. Background_Layer: This tile layer represents the first layer created for the tilemap. This will be the layer in which to create the ground elements, such as grass, dirt paths, water, and stone walkways. Nothing else will be shown below this layer. Ground_Layer: This tile layer will be the second layer created for the tilemap. This layer will contain buildings built on top of the ground, or other structures like mountains, trees, and villages. The primary reason is to convey a feeling of depth to the map, as well as the fact that structural tiles such as walls have transparency (an alpha channel) so that they look like they belong on the ground where they are being created. Decoration_Layer: This third tile layer will contain elements meant to decorate the landscape in order to remove repetition and make more interesting scenes. These elements include rocks, patches of weeds, flowers, and even skulls. MAP_COLLISION_LAYER: This fourth layer is a special layer designated as an object layer. This layer does not contain tiles, but will have objects, or shapes. 
This is the layer that you will configure to create areas in the map that the player character and non-player characters cannot traverse, such as walls of buildings, mountain terrain, ocean areas, and decorations such as fountains. MAP_SPAWNS_LAYER: This fifth layer is another special object layer designated only for player and non-playable character spawns, such as people in the towns. These spawns will represent the various starting locations where these characters will first be rendered on the map. MAP_PORTAL_LAYER: This sixth layer is the last object layer, designated for triggering events in order to move from one map into another. These will be locations that the player character walks over, triggering an event that activates the transition to another map. An example would be in the village map, when the player walks outside of the village map, they will find themselves on the larger world map. Tilesets (3): This area of Tiled represents all of the tilesets you will work with for the current map. Each tileset, or spritesheet, will get its own tab in this interface, making it easy to move between them. Adding a new tileset is as easy as clicking the New icon in the Tilesets area, and loading the tileset image in the New Tileset dialog. Tiled will also partition the tileset into individual tiles after you configure the tile dimensions in this dialog. Properties (4): This area of Tiled represents the different additional properties that you can set for the currently selected map element, such as a tile or object. An example of where these properties can be helpful is when we create a portal object on the portal layer. We can create a property defining the name of this portal object that represents the map to load. So, when we walk over a small tile that looks like a town in the world overview map, and trigger the portal event, we know that the map to load is TOWN because the name property on this portal object is TOWN. 
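To make the map-size arithmetic above concrete, here is a tiny illustrative helper in plain Java. This is a hypothetical snippet written for this article, not part of Tiled or LibGDX, which perform this calculation for you:

```java
public class MapSize {
    // Converts a map dimension in tile units to pixels:
    // pixel size = unit count * tile size in pixels.
    static int toPixels(int units, int tilePixels) {
        return units * tilePixels;
    }

    public static void main(String[] args) {
        // A 100 x 100 unit map with 16 x 16 pixel tiles is 1600 x 1600 pixels.
        System.out.println(toPixels(100, 16) + " x " + toPixels(100, 16)); // prints 1600 x 1600
    }
}
```

The same relationship holds in reverse: a desired pixel footprint divided by the tile size gives the unit dimensions to enter in the New Map dialog.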
After reviewing a very brief description of how we can use the Tiled editor for BludBourne, the following screenshots show the three maps that we will be using for this project. The first screenshot is of the TOWN map, which will be where our hero will discover clues from the villagers, obtain quests, and buy armor and weapons. The town has shops, an inn, as well as a few small homes of local villagers:    The next screenshot is of the TOP_WORLD map, which will be the location where our hero will battle enemies, find clues throughout the land, and eventually make his way to the evil antagonist holed up in his castle. The hero can see how the pestilence of evil has started to spread across the lands and lay ruin upon the only harvestable fields left:    Finally, we make our way to the CASTLE_OF_DOOM map, which will be where our hero, once leveled enough, will battle the evil antagonist holed up in the throne room of his own castle. Here, the hero will find many high-level enemies, as well as high-valued items for trade:     Implementing the starter classes for BludBourne Now that we have created the maps for the different locations of BludBourne, we can begin to develop the initial pieces of our source code project in order to load these maps, and move around in our world. The following diagram represents a high level view of all the relevant classes that we will be creating:   This class diagram is meant to show not only all the classes we will be reviewing in this article, but also the relationships that these classes share so that we are not developing them in a vacuum. The main entry point for our game (and the only platform specific class) is DesktopLauncher, which will instantiate BludBourne and add it along with some configuration information to the LibGDX application lifecycle. BludBourne will derive from Game to minimize the lifecycle implementation needed by the ApplicationListener interface. BludBourne will maintain all the screens for the game. 
MainGameScreen will be the primary gameplay screen that displays the different maps and the player character moving around in them. MainGameScreen will also create the MapManager, Entity, and PlayerController. MapManager provides helper methods for managing the different maps and map layers. Entity will represent the primary class for our player character in the game. PlayerController implements InputProcessor and will be the class that controls the player's input and controls on the screen. Finally, we have some asset manager helper methods in the Utility class used throughout the project.

DesktopLauncher

The first class that we will need to modify is DesktopLauncher, which the gdx-setup tool generated:

package com.packtpub.libgdx.bludbourne.desktop;

import com.badlogic.gdx.Application;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;
import com.packtpub.libgdx.bludbourne.BludBourne;

The Application class is responsible for setting up a window, handling resize events, rendering to the surfaces, and managing the application during its lifetime. Specifically, Application will provide the modules for dealing with graphics, audio, input and file I/O handling, logging facilities, memory footprint information, and hooks for extension libraries. The Gdx class is an environment class that holds static instances of the Application, Graphics, Audio, Input, Files, and Net modules as a convenience for access throughout the game. The LwjglApplication class is the backend implementation of the Application interface for the desktop. The backend package that LibGDX uses for the desktop is called LWJGL. This implementation for the desktop will provide cross-platform access to native APIs for OpenGL. This interface becomes the entry point that the platform OS uses to load your game. 
The LwjglApplicationConfiguration class provides a single point of reference for all the properties associated with your game on the desktop:

public class DesktopLauncher {
    public static void main (String[] arg) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.title = "BludBourne";
        config.useGL30 = false;
        config.width = 800;
        config.height = 600;

        Application app = new LwjglApplication(new BludBourne(), config);
        Gdx.app = app;

        //Gdx.app.setLogLevel(Application.LOG_INFO);
        Gdx.app.setLogLevel(Application.LOG_DEBUG);
        //Gdx.app.setLogLevel(Application.LOG_ERROR);
        //Gdx.app.setLogLevel(Application.LOG_NONE);
    }
}

The config object is an instance of the LwjglApplicationConfiguration class where we can set top-level game configuration properties, such as the title to display on the display window, as well as the display window dimensions. The useGL30 property is set to false, so that we use the much more stable and mature implementation of OpenGL ES, version 2.0. The LwjglApplicationConfiguration properties object, as well as our starter class instance, BludBourne, are then passed to the backend implementation of the Application class, and an object reference is then stored in the Gdx class. Finally, we will set the logging level for the game. There are four values for the logging levels, which represent various degrees of granularity for application-level messages output to standard out. LOG_NONE is a logging level where no messages are output. LOG_ERROR will only display error messages. LOG_INFO will display all messages that are not debug-level messages. Finally, LOG_DEBUG is a logging level that displays all messages.

BludBourne

The next class to review is BludBourne. 
The class diagram for BludBourne shows the attributes and method signatures for our implementation. The import packages for BludBourne are as follows:

package com.packtpub.libgdx.bludbourne;

import com.packtpub.libgdx.bludbourne.screens.MainGameScreen;
import com.badlogic.gdx.Game;

The Game class is an abstract base class which wraps the ApplicationListener interface and delegates the implementation of this interface to the Screen class. This provides a convenience for setting the game up with different screens, including ones for a main menu, options, gameplay, and cutscenes. The MainGameScreen is the primary gameplay screen that the player will see as they move their hero around in the game world:

public class BludBourne extends Game {
    public static final MainGameScreen _mainGameScreen = new MainGameScreen();

    @Override
    public void create(){
        setScreen(_mainGameScreen);
    }

    @Override
    public void dispose(){
        _mainGameScreen.dispose();
    }
}

The gdx-setup tool generated our starter class BludBourne. This is the first place where we begin to set up our game lifecycle. An instance of BludBourne is passed to the backend constructor of LwjglApplication in DesktopLauncher, which is how we get hooks into the lifecycle of LibGDX. BludBourne will contain all of the screens used throughout the game, but for now we are only concerned with the primary gameplay screen, MainGameScreen. We must override the create() method so that we can set the initial screen when BludBourne is initialized in the game lifecycle. The setScreen() method will check whether a screen is already currently active. If the current screen is already active, then it will be hidden, and the screen that was passed into the method will be shown. In the future, we will use this method to start the game with a main menu screen. We should also override dispose() since BludBourne owns the screen object references. We need to make sure that we dispose of the objects appropriately when we are exiting the game. 
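The setScreen() hand-off described above (hide the currently active screen, then show the new one) can be sketched with simplified stand-in types in plain Java. Screen and Game below are illustrative mock-ups written for this article, not the real LibGDX classes:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for LibGDX's Screen interface.
interface Screen {
    void show();
    void hide();
    void dispose();
}

// Simplified stand-in for LibGDX's Game base class: setScreen()
// hides the active screen, if any, before showing the new one.
abstract class Game {
    private Screen current;

    public void setScreen(Screen screen) {
        if (current != null) current.hide(); // hide the active screen first
        current = screen;
        if (current != null) current.show();
    }
}

public class LifecycleDemo extends Game {
    static final List<String> log = new ArrayList<>();

    public static void main(String[] args) {
        Screen main = new Screen() {
            public void show() { log.add("main:show"); }
            public void hide() { log.add("main:hide"); }
            public void dispose() { log.add("main:dispose"); }
        };
        Screen menu = new Screen() {
            public void show() { log.add("menu:show"); }
            public void hide() { log.add("menu:hide"); }
            public void dispose() { log.add("menu:dispose"); }
        };
        Game game = new LifecycleDemo();
        game.setScreen(main);  // main:show
        game.setScreen(menu);  // main:hide, then menu:show
        System.out.println(log);
    }
}
```

This is exactly the hand-off we will rely on later when switching from a main menu screen to the gameplay screen.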
Summary In this article, we first learned about tile based maps and how to create them with the Tiled editor. We then learned about the high level architecture of the classes we will have to create and implemented starter classes which allowed us to hook into the LibGDX application lifecycle. Have a look at Mastering LibGDX Game Development to learn about textures, TMX formatted tile maps, and how to manage them with the asset manager. Also included is how the orthographic camera works within our game, and how to display the map within the render loop. You can learn to implement a map manager that deals with collision layers, spawn points, and a portal system which allows us to transition between different locations seamlessly. Lastly, you can learn to implement a player character with animation cycles and input handling for moving around the game map. Resources for Article: Further resources on this subject: Finding Your Way [article] Getting to Know LibGDX [article] Replacing 2D Sprites with 3D Models [article]

Getting started with Cocos2d-x

Packt
19 Oct 2015
11 min read
In this article written by Akihiro Matsuura, author of the book Cocos2d-x Cookbook, we're going to install Cocos2d-x and set up the development environment. The following topics will be covered in this article: Installing Cocos2d-x Using the cocos command Building the project by Xcode Building the project by Eclipse Cocos2d-x is written in C++, so it can be built on any platform. Cocos2d-x is also open source, so we are free to read the game framework's code. Cocos2d-x is not a black box, and this proves to be a big advantage for us when we use it. Cocos2d-x version 3, which supports C++11, was only recently released. It also supports 3D and has improved rendering performance. (For more resources related to this topic, see here.) Installing Cocos2d-x Getting ready To follow this recipe, you need to download the zip file from the official site of Cocos2d-x (http://www.cocos2d-x.org/download). In this article we've used version 3.4, which was the latest stable version available. How to do it... Unzip your file to any folder. This time, we will install to the user's home directory. For example, if the user name is syuhari, then the install path is /Users/syuhari/cocos2d-x-3.4. We call it COCOS_ROOT. The following steps will guide you through the process of setting up Cocos2d-x: Open the terminal. Change the directory in the terminal to COCOS_ROOT, using the following command: $ cd ~/cocos2d-x-v3.4 Run setup.py, using the following command: $ ./setup.py The terminal will ask you for NDK_ROOT. Enter the NDK_ROOT path. The terminal will then ask you for ANDROID_SDK_ROOT. Enter the ANDROID_SDK_ROOT path. Finally, the terminal will ask you for ANT_ROOT. Enter the ANT_ROOT path. After the execution of the setup.py command, you need to execute the following command to add the system variables: $ source ~/.bash_profile Open the .bash_profile file, and you will find that setup.py shows how to set each path in your system. 
You can view the .bash_profile file using the cat command: $ cat ~/.bash_profile We now verify that Cocos2d-x was installed correctly: Open the terminal and run the cocos command without parameters: $ cocos If you can see a window like the following screenshot, you have successfully completed the Cocos2d-x install process. How it works... Let's take a look at what we did throughout the above recipe. You can install Cocos2d-x by just unzipping it; setup.py only sets up the cocos command and the Android build paths in your environment. Installing Cocos2d-x is very easy and simple. If you want to install a different version of Cocos2d-x, you can do that too; just follow the same steps given in this recipe with that version. There's more... Setting up the Android environment is a bit tougher. If you want to start developing with Cocos2d-x right away, you can postpone the Android setup until you actually need to run on Android. In that case, you don't have to install the Android SDK, NDK, and Apache Ant yet; when you run setup.py, just press Enter without entering a path for each question. Using the cocos command The next step is using the cocos command. It is a cross-platform tool with which you can create a new project, build it, run it, and deploy it. The cocos command works for all Cocos2d-x supported platforms. And you don't need to use an IDE if you don't want to. In this recipe, we take a look at this command and explain how to use it. How to do it... You can use the cocos command help by executing it with the --help parameter, as follows: $ cocos --help We then move on to generating our new project: Firstly, we create a new Cocos2d-x project with the cocos new command, as shown here: $ cocos new MyGame -p com.example.mygame -l cpp -d ~/Documents/ The result of this command is shown in the following screenshot: The argument after the new command is the project name. 
The other parameters that are mentioned denote the following: MyGame is the name of your project. -p is the package name for Android. This is the application ID in the Google Play store, so you should use a reverse domain name to make it unique. -l is the programming language used for the project. You should use "cpp" because we will use C++. -d is the location in which to generate the new project. This time, we generate it in the user's documents directory. You can look up these options using the following command: $ cocos new --help Congratulations, you can now generate your new project. The next step is to build and run using the cocos command. Compiling the project If you want to build and run for iOS, you need to execute the following command: $ cocos run -s ~/Documents/MyGame -p ios The parameters that are mentioned are explained as follows: -s is the directory of the project. This could be an absolute path or a relative path. -p denotes which platform to run on. If you want to run on Android, you use -p android. The available options are ios, android, win32, mac, and linux. You can run cocos run --help for more detailed information. The result of this command is shown in the following screenshot: You can now build and run iOS applications with Cocos2d-x. However, you will have to wait a long time if this is your first time building an iOS application, because the entire Cocos2d-x library has to be compiled on a first or clean build. How it works... The cocos command can create a new project and build it. You should use the cocos command if you want to create a new project. Of course, you can also build by using Xcode or Eclipse, which make developing and debugging easier. There's more... The cocos run command has other parameters. They are the following: --portrait will set the project as a portrait. This command has no argument. --ios-bundleid will set the bundle ID for the iOS project. However, it is not difficult to set it later. 
The cocos command also includes some other commands, which are as follows: The compile command: This command is used to build a project. The following patterns are useful parameters. You can see all parameters and options if you execute the cocos compile -h command. cocos compile [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE] The deploy command: This command only takes effect when the target platform is android. It will re-install the specified project to the android device or simulator. cocos deploy [-h] [-s SRC_DIR] [-q] [-p PLATFORM] [-m MODE] The run command executes the compile and deploy commands in sequence. Building the project by Xcode Getting ready Before building the project by Xcode, you require Xcode, and an iOS developer account to test on a physical device. However, you can also test on an iOS simulator. If you did not install Xcode, you can get it from the Mac App Store. Once you have installed it, get it activated. How to do it... Open your project from Xcode. You can open your project by double-clicking on the file placed at ~/Documents/MyGame/proj.ios_mac/MyGame.xcodeproj. Build and run with Xcode. You should select an iOS simulator or a real device on which you want to run your project. How it works... If this is your first time building, it will take a long time, but this is expected for a first build. You can develop your game faster if you develop and debug it using Xcode rather than Eclipse. Building the project by Eclipse Getting ready You must finish the first recipe before you begin this step. If you have not finished it yet, you will need to install Eclipse. How to do it... Setting up NDK_ROOT: Open the Eclipse preferences. Open C++ | Build | Environment. Click on Add and set a new variable with the name NDK_ROOT and the value set to your NDK_ROOT path. 
Importing your project into Eclipse: Open the File menu and click on Import. Go to Android | Existing Android Code into Workspace. Click on Next. Import the project into Eclipse from ~/Documents/MyGame/proj.android. Importing the Cocos2d-x library into Eclipse: Perform the same steps from Step 3 to Step 4. Import the cocos2d library project from ~/Documents/MyGame/cocos2d/cocos/platform/android/java. Build and run: Click on the Run icon. The first time, Eclipse asks you to select a way to run your application. Select Android Application and click on OK, as shown in the following screenshot: If you have connected an Android device to your Mac, you can run your game on a real device or an emulator. The following screenshot shows it running on a Nexus 5. If you added .cpp files to your project, you have to modify the Android.mk file at ~/Documents/MyGame/proj.android/jni/Android.mk. This file is needed for the NDK build, and it must be updated whenever you add files. The original Android.mk would look as follows: LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp If you added the TitleScene.cpp file, you have to modify it as shown in the following code: LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp ../../Classes/TitleScene.cpp The preceding example shows an instance of when you add the TitleScene.cpp file. However, if you are also adding other files, you need to list all of the added files. How it works... You get lots of errors when importing your project into Eclipse, but don't panic. After importing the cocos2d-x library, the errors soon disappear. Setting the NDK path allows Eclipse to compile C++. After you modify C++ code, run your project in Eclipse. Eclipse automatically compiles the C++ code and the Java code, and then runs the project. It is a tedious task to fix Android.mk again every time you add C++ files. 
The following code is the original Android.mk:

LOCAL_SRC_FILES := hellocpp/main.cpp ../../Classes/AppDelegate.cpp ../../Classes/HelloWorldScene.cpp
LOCAL_C_INCLUDES := $(LOCAL_PATH)/../../Classes

The following code is the customized Android.mk that adds C++ files automatically:

CPP_FILES := $(shell find $(LOCAL_PATH)/../../Classes -name *.cpp)
LOCAL_SRC_FILES := hellocpp/main.cpp
LOCAL_SRC_FILES += $(CPP_FILES:$(LOCAL_PATH)/%=%)
LOCAL_C_INCLUDES := $(shell find $(LOCAL_PATH)/../../Classes -type d)

The first line of this code gathers the C++ files under the Classes directory into the CPP_FILES variable. The second and third lines add these C++ files to the LOCAL_SRC_FILES variable. By doing so, the C++ files will be automatically compiled by the NDK. If you need to compile a file with an extension other than .cpp, you will need to add it manually. There's more... If you want to manually build the C++ code with the NDK, you can use the following command: $ ./build_native.py This script is located at ~/Documents/MyGame/proj.android. It uses ANDROID_SDK_ROOT and NDK_ROOT internally. If you want to see its options, run ./build_native.py --help. Summary Cocos2d-x is an open source, cross-platform game engine, which is free and mature. It can publish games for mobile devices and desktops, including iPhone, iPad, Android, Kindle, Windows, and Mac. The book Cocos2d-x Cookbook focuses on using version 3.4, which was the latest version of Cocos2d-x available at the time of writing. We focus on iOS and Android development, and we'll be using a Mac because we need it to develop iOS applications. Resources for Article: Further resources on this subject: Creating Games with Cocos2d-x is Easy and 100 percent Free [Article] Dragging a CCNode in Cocos2D-Swift [Article] Cocos2d-x: Installation [Article]
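The CPP_FILES trick above relies on the shell's find command to discover source files. For illustration only, the same filtering logic can be sketched in plain Java; filterCpp is a hypothetical helper written for this article, not part of the Cocos2d-x build, which uses the shell command inside Android.mk:

```java
import java.util.List;
import java.util.stream.Collectors;

public class CppFileFinder {
    // Filters a list of file paths down to the .cpp sources, mirroring
    // the effect of `find ... -name *.cpp` in the Android.mk snippet above.
    static List<String> filterCpp(List<String> paths) {
        return paths.stream()
                    .filter(p -> p.endsWith(".cpp"))
                    .sorted()
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tree = List.of(
            "Classes/AppDelegate.cpp",
            "Classes/HelloWorldScene.cpp",
            "Classes/README.md");
        // Only the two .cpp files survive the filter.
        System.out.println(filterCpp(tree));
    }
}
```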


Using the Tiled map editor

Packt
13 Oct 2015
5 min read
LibGDX is a game framework and not a game engine, which is why it doesn't have an editor to place game objects or make levels. Tiled is a 2D level/map editor well-suited for this purpose. In this article by Indraneel Potnis, the author of LibGDX Cross-platform Development Blueprints, we will learn how to draw objects and create animations. LibGDX has excellent support for rendering and reading maps/levels made with Tiled. (For more resources related to this topic, see here.) Drawing objects Sometimes, simple tiles may not satisfy your requirements. You might need to create objects with complex shapes. You can define these shape outlines easily in the editor. The first thing you need to do is create an object layer. Go to Layer | Add Object Layer: You will notice that a new layer has been added to the Layers pane called Object Layer 1. You can rename it if you like: With this layer selected, you can see the object toolbar getting enabled: You can draw basic shapes, such as a rectangle or an ellipse/circle: You can also draw a polygon and a polyline by selecting the appropriate options from the toolbar. Once you have added all the edges, click the right mouse button to stop drawing the current object: Once the polygon/polyline is drawn, you can edit it by selecting the Edit Polygons option from the toolbar: After this, select the area that encompasses your polygon in order to change to the edit mode. You can edit your polygons/polylines now: You can also add custom properties to your polygons by right-clicking on them and selecting Object Properties: You can then add custom properties as mentioned previously: You can also add tiles as objects. Click on the Insert Tile icon in the toolbar: Once you select this, you can insert tiles as objects into the map. 
You will observe that the tiles can be placed anywhere now, irrespective of the grid boundaries: To select and move multiple objects, you can select the Select Objects option from the toolbar: You can then select the area that encompasses the objects. Once they are selected, you can move them by dragging them with your mouse cursor: You can also rotate the object by dragging the indicators at the corners after they are selected: Tile animations and images Tiled allows you to create animations in the editor. Let's make an animated shining crystal. First, we will need an animation sheet of the crystal. I am using this one, which is 16 x 16 pixels per crystal: The next thing we need to do is add this sheet as a tileset to the editor and name it crystals. After you add the tileset, you can see a new tab in the Tilesets pane: Go to View | Tile Animation Editor to open the animation editor: A new window will open that will allow you to edit the animations: On the right-hand side, you will see the individual animation frames that make up the animation. This is the animation tileset, which we added. Hold Ctrl on your keyboard, and select all of them with your mouse. Then, drag them to the left window: The numbers beside the images indicate the amount of time each image will be displayed in milliseconds. The images are displayed in this order and repeat continuously. In this example, every image will be shown for 100ms or 1/10th of a second. In the bottom-left corner, you can preview the animation you just created. Click on the Close button. You can now see something like this in the Tilesets pane: The first tile represents the animation, which we just created. Select it, and you can draw the animation anywhere in the map. You can see the animation playing within the map: Lastly, we can also add images to our map. To use them, we need to add an image layer to our map. Go to Layer | Add Image Layer. You will notice that a new layer has been added to the Layers pane. 
Rename it House: To use an image, we need to set the image's path as a property for this layer. In the Properties pane, you will find a property called Image. There is a file picker next to it where you can select the image you want: Once you set the image, you can use it to draw on the map:   Summary In this article, we learned about a tool called Tiled, and we also learned how to draw various objects and make tile animations and add images. Carry on with LibGDX Cross-platform Development Blueprints to learn how to develop great games, such as Monty Hall Simulation, Whack a Mole, Bounce the Ball, and many more. You can also take a look at the vast array of LibGDX titles from Packt Publishing, a few among these are as follows: Learning Libgdx Game Development, Andreas Oehlke LibGDX Game Development By Example, James Cook LibGDX Game Development Essentials, Juwal Bose Resources for Article:   Further resources on this subject: Getting to Know LibGDX [article] Using Google's offerings [article] Animations in Cocos2d-x [article]
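The looping frame playback configured in the animation editor above (each frame shown for its duration in milliseconds, repeating continuously) can be sketched in plain Java. frameAt is a hypothetical helper written for this article, not Tiled's or LibGDX's actual implementation:

```java
public class TileAnimation {
    // Returns the index of the frame visible at a given elapsed time,
    // given each frame's duration in milliseconds. The animation loops
    // continuously, exactly as in Tiled's animation editor.
    static int frameAt(int[] durationsMs, long elapsedMs) {
        long total = 0;
        for (int d : durationsMs) total += d;
        long t = elapsedMs % total;           // wrap around to loop
        for (int i = 0; i < durationsMs.length; i++) {
            if (t < durationsMs[i]) return i; // still inside this frame
            t -= durationsMs[i];
        }
        return 0; // unreachable for positive durations
    }

    public static void main(String[] args) {
        int[] durations = {100, 100, 100, 100};      // four 100 ms crystal frames
        System.out.println(frameAt(durations, 0));   // prints 0
        System.out.println(frameAt(durations, 250)); // prints 2
        System.out.println(frameAt(durations, 450)); // prints 0 (wrapped around)
    }
}
```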


The Swift Programming Language

Packt
06 Oct 2015
25 min read
This article is by Chuck Gaffney, the author of the book iOS 9 Game Development Essentials. It delves into some vital specifics of the Swift language. (For more resources related to this topic, see here.) At the core of all game development is your game's code. It is the brain of your project and, outside of the art, sound, and various asset developments, it is where you will spend most of your time creating and testing your game. Up until Apple's Worldwide Developers Conference WWDC14 in June of 2014, the language of choice for iOS game and app development was Objective-C. At WWDC14, a new and faster programming language, Swift, was announced and is now the recommended language for all current and future iOS games and general app creation. As of the writing of this book, you can still use Objective-C to design your games, but programmers both new and seasoned will see why writing in Swift is not only easier for expressing your game's logic but also more performant. Keeping your game running at that critical 60 FPS is dependent on fast code and logic. Engineers at Apple developed the Swift programming language from the ground up with performance and readability in mind, so this language can execute certain code iterations faster than Objective-C while also keeping code ambiguity to a minimum. Swift also uses many of the methodologies and syntax found in more modern languages like Scala, JavaScript, Ruby, and Python. So let's dive into the Swift language. It is recommended that you have some basic knowledge of Object Oriented Programming (OOP) beforehand, but we will try to keep the buildup and explanation of code simple and easy to follow as we move on to the more advanced topics related to game development. Hello World! It's somewhat of a tradition in programming education to begin with a Hello World example. A Hello World program simply uses your code to display or log the text Hello World. 
It's always been the general starting point because sometimes just getting your code environment set up and having your code executing correctly is half the battle. At least, this was more the case in previous programming languages. Swift makes this easier than ever. Without going into the structure of a Swift file (which we shall do later on, and which is also much simpler than in Objective-C and past languages), here's how you create a Hello World program:

print("Hello, World!")

That's it! That is all you need to have the text "Hello, World" appear in Xcode's Debug Area output. No more semicolons Those of us who have been programming for some time might notice that the usually all-important semicolon (;) is missing. This isn't a mistake; in Swift we don't have to use a semicolon to mark the end of an expression. We can if we'd like, and some of us might still do it as a force of habit, but Swift has removed that common concern. The use of the semicolon to mark the end of an expression stems from the earliest days of programming, when code was written in simple word processors and needed a special character to represent where one expression ends and the next begins. Variables, constants, and primitive data types When programming any application, whether we are new to programming or trying to learn a different language, we should first get an understanding of how a language handles variables, constants, and various data types, such as Booleans, integers, floats, strings, and arrays. You can think of the data in your program as boxes or containers of information. Those containers can be of different flavors, or types. Throughout the life of your game, the data could change (variables, objects, and so on) or stay the same. For example, the number of lives a player has would be stored as a variable, as that is expected to change during the course of the game. That variable would then be of the primitive data type integer, which is basically a whole number. 
Data that stores, say, the name of a certain weapon or power-up in your game would be stored in what's known as a constant, as the name of that item is never going to change. In a game where the player can have interchangeable weapons or power-ups, the best way to represent the currently equipped item would be to use a variable. A variable is a piece of data that is bound to change. That weapon or power-up will also most likely have a bit more information to it than just a name or a number, the primitive types we mentioned prior. The currently equipped item would be made up of properties like its name, power, effects, index number, and the sprite or 3D model that visually represents it. Thus the currently equipped item wouldn't just be a variable of a primitive data type, but an instance of what is known as an object type. Objects in programming can hold a number of properties and functionality, and can be thought of as a black box of both function and information. The currently equipped item in our case would be sort of a placeholder that can hold an item of that type and interchange it when needed, fulfilling its purpose as a replaceable item. Swift is what's known as a type-safe language, so keeping track of the exact type of data, and even its future usage (that is, whether the data is or will be nil), is very important when working with Swift compared to other languages. Apple made Swift behave this way to help keep runtime errors and bugs in your applications to a minimum and so we can find them much earlier in the development process. Variables Let's look at how variables are declared in Swift: var lives = 3 //variable representing the player's lives lives = 1 //changes that variable to a value of 1 Those of us who have been developing in JavaScript will feel right at home here. Like JavaScript, we use the keyword var to declare a variable, and we named the variable lives.
The compiler implicitly knows that the type of this variable is a whole number, and the data type is a primitive one: integer. The type can be explicitly declared as such: var lives: Int = 3 //variable of type Int We can also represent lives with the floating point data types double or float: // lives are represented here as 3.0 instead of 3 var lives: Double = 3 //of type Double var lives: Float = 3 //of type Float Using a colon after the variable's name allows us to explicitly declare its type. Constants During your game there will be pieces of data that don't change throughout the life of the game or the game's current level or scene. This can be various data like gravity, a text label in the Heads-Up Display (HUD), the center point of a character's 2D animation, an event declaration, or the time before your game checks for new touches or swipes. Declaring constants is almost the same as declaring variables: let gravityImplicit = -9.8 //implicit declaration let gravityExplicit: Float = -9.8 //explicit declaration As we can see, we use the keyword let to declare constants. Here's another example using a string that could represent a message displayed on the screen during the start or end of a stage: let stageMessage = "Start!" stageMessage = "You Lose!" //error Since the string stageMessage is a constant, we cannot change it once it has been declared. Something like this would be better as a variable, using var instead of let. Why don't we declare everything as a variable? This is a question sometimes asked by new developers, and it's understandable why, especially since game apps tend to have a larger number of variables and more interchangeable states than an average application. When the compiler is building its internal list of your game's objects and data, more goes on behind the scenes with variables than with constants.
Without getting too much into topics like the program's stack and other details, in short, having objects, events, and data declared as constants with the let keyword is more efficient than var. In a small app on the newest devices today, though not recommended, we could possibly get away with this without seeing a great deal of loss in app performance. When it comes to video games, however, performance is critical. Buying back as much performance as possible can allow for a better player experience. Apple recommends that, when in doubt, always use let when declaring and have the compiler tell you when to change to var. More about constants As of Swift version 1.2, constants can have a conditionally controlled initial value. Prior to this update, we had to initialize a constant with a single starting value or be forced to make the property a variable. In Xcode 6.3 and newer, we can perform the following logic:
let x : SomeThing
if condition {
    x = foo()
} else {
    x = bar()
}
use(x)
An example of this in a game could be:
let stageBoss : Boss
if (stageDifficulty == gameDifficulty.hard) {
    stageBoss = Boss.toughBoss()
} else {
    stageBoss = Boss.normalBoss()
}
loadBoss(stageBoss)
With this functionality, a constant's initialization can have a layer of variance while still keeping it unchangeable, or immutable, through its use. Here, the constant stageBoss can hold one of two values based on the game's difficulty: Boss.toughBoss() or Boss.normalBoss(). The boss won't change for the course of this stage, so it makes sense to keep it as a constant. More on if and else statements is covered later in the article. Arrays, matrices, sets, and dictionaries Variables and constants can represent a collection of various properties and objects. The most common collection types are arrays, matrices, sets, and dictionaries.
An Array is an ordered list of objects, a Matrix is, in short, an array of arrays, a Set is an unordered collection of distinct objects, and a Dictionary is an unordered list that utilizes a key : value association for its data. Arrays Here's an example of an Array in Swift: let stageNames : [String] = ["Downtown Tokyo","Heaven Valley", "Nether"] The object stageNames is a collection of strings representing the names of a game's stages. Arrays are ordered with subscripts from 0 to (array length - 1). So stageNames[0] would be Downtown Tokyo, stageNames[2] would be Nether, and stageNames[4] would give an error since that's beyond the limits of the array and doesn't exist. We use [] brackets around the class type of stageNames, [String], to tell the compiler that we are dealing with an array of Strings. Brackets are also used around the individual members of this array. 2D arrays or matrices A common collection type used in physics calculations, graphics, and game design, particularly grid-based puzzle games, is the two-dimensional array or matrix. 2D arrays are simply arrays that have arrays as their members. These arrays can be expressed in a rectangular fashion in rows and columns. For example, the 4x4 (4 rows, 4 columns) tile board in the 15 Puzzle Game can be represented as such:
var tileBoard = [[1,2,3,4],
                 [5,6,7,8],
                 [9,10,11,12],
                 [13,14,15,""]]
In the 15 Puzzle game, your goal is to shift the tiles using the one empty spot (represented with the blank String ""), so they all end up in the 1-15 order we see up above. The game would start with the numbers arranged in a random but solvable order, and the player would then have to swap the numbers and the blank space. To better perform various actions on, and/or store information about, each tile in the 15 Game (and other games), it'd be better to create a tile object as opposed to using the raw values seen here.
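To sketch what that might look like, here is a minimal, hypothetical tile object; the Tile struct and its properties are assumptions made up for this illustration, not part of any real puzzle game's API:

```swift
// A hypothetical tile object for a 15 Puzzle-style board
struct Tile {
    let number: Int   // constant: the tile's face value never changes
    var row: Int      // variables: the tile's position shifts during play
    var column: Int
    var isBlank: Bool { return number == 0 }  // 0 stands in for the empty spot
}

var firstTile = Tile(number: 1, row: 0, column: 0)
firstTile.row = 1  // the tile slid down one row
print(firstTile.number)   // prints 1
print(firstTile.isBlank)  // prints false
```

Storing Tile values in a [[Tile]] matrix would then let the game track both the value and the position of every piece, instead of juggling raw numbers and strings.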
For the sake of understanding what a matrix or 2D array is, simply take note of how the array is surrounded by doubly encapsulated brackets [[]]. We will later use one of our example games, SwiftSweeper, to better understand how puzzle games use 2D arrays of objects to create a full game. Here are ways to declare blank 2D arrays with strict types:
var twoDTileArray : [[Tile]] = [] //blank 2D array of type, Tile
var anotherArray = Array<Array<Tile>>() //same array, using Generics
The variable twoDTileArray uses the double brackets [[Tile]] to declare it as a blank 2D array, or matrix, for the made-up type Tile. The variable anotherArray is a rather oddly declared array that uses the angle bracket characters <> for enclosures. It utilizes what's known as Generics. Generics is a rather advanced topic that we will touch more on later. They allow for very flexible functionality among a wide array of data types and classes. For the moment, we can think of them as a catch-all way of working with Objects. To fill in the data for either version of this array, we would then use for-loops. More on loops and iterations later in the article! Sets This is how we would make a set of various game items in Swift: var keyItems = Set([Dungeon_Prize, Holy_Armor, Boss_Key,"A"]) This set, keyItems, has various objects and a character, A. Unlike an Array, a Set is not ordered and contains unique items. So, unlike stageNames, attempting to access keyItems[1] would give an error, and there would be no guarantee which item is "second" anyway, as the placement of objects is internally random in a set. The advantage Sets have over Arrays is that Sets are great at checking for duplicated objects and for specific content searching in the collection overall. Sets make use of hashing to pinpoint items in the collection, so checking for items in a Set's contents can be much faster than in an array.
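Here is a quick, hedged sketch of that uniqueness guarantee, using made-up item names, with a plain Array alongside for contrast:

```swift
// inserting a duplicate into a Set is simply ignored
var itemSet = Set(["Sword", "Shield", "Potion"])
itemSet.insert("Sword")
print(itemSet.count)  // still 3 - Sets keep their items unique

// the same values in an Array happily keep the duplicate
let itemArray = ["Sword", "Shield", "Potion", "Sword"]
print(itemArray.count)  // 4
```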
In game development, a game's key items, which the player may only get once and should never have duplicates of, could work great as a Set. The function call keyItems.contains(Boss_Key) returns the Boolean value of true in this case. Sets were added in Swift 1.2 / Xcode 6.3. Their class is represented by the generic type Set<T>, where T is the class type of the collection. In other words, the set Set([45, 66, 1233, 234]) would be of the type Set<Int>, and our example here would be a Set<NSObject> instance due to it having a collection of various data types. We will discuss more on Generics and Class Hierarchy later in this article. Dictionaries A Dictionary can be represented this way in Swift: var playerInventory: [Int : String] = [1 : "Buster Sword", 43 : "Potion", 22: "StrengthBooster"] Dictionaries use a key : value association, so playerInventory[22] returns the value StrengthBooster based on the key 22. Both the key and value could be initialized to almost any class type*. In addition to the inventory example given, we can have the code as follows:
var stageReward: [Int : GameItem] = [:] //blank initialization
//use of the Dictionary at the end of a current stage
stageReward = [currentStage.score : currentStage.rewardItem]
*The values of a Dictionary, though rather flexible in Swift, do have limitations. The key must conform to what's known as the Hashable protocol. Basic data types like integer and string already have this functionality, so if you are to make your own classes or data structures that are to be used in Dictionaries, say mapping player actions to player input, this protocol must be utilized first. We will discuss more about Protocols later in this article. Dictionaries are like Sets in that they are unordered, but with the additional layer of having a key and a value associated with their content instead of just the hashed key. As with Sets, Dictionaries are great for quick insertion and retrieval of specific data.
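To make the Hashable requirement concrete, here is one sketch of a custom type conforming to it so that it can serve as a Dictionary key; the PlayerAction type and the control names are assumptions invented for this example (Swift 1.x-era syntax):

```swift
// A made-up action type that can key a Dictionary of control bindings
struct PlayerAction: Hashable {
    let name: String
    var hashValue: Int { return name.hashValue }  // Hashable requirement
}

// Hashable inherits from Equatable, so == must be defined as well
func ==(lhs: PlayerAction, rhs: PlayerAction) -> Bool {
    return lhs.name == rhs.name
}

// mapping player actions to player input, as mentioned above
var controls: [PlayerAction : String] = [
    PlayerAction(name: "Jump")   : "A Button",
    PlayerAction(name: "Attack") : "B Button"
]
print(controls[PlayerAction(name: "Jump")]!)  // prints "A Button"
```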
In iOS apps and in web applications, Dictionaries are what's used to parse and select items from JSON (JavaScript Object Notation) data. In the realm of game development, Dictionaries using JSON or Apple's internal data class, NSUserDefaults, can be used to save and load game data, set up game configurations, or access specific members of a game's API. For example, here's one way to save a player's high score in an iOS game using Swift: let newBestScore : Void = NSUserDefaults.standardUserDefaults().setInteger(bestScore, forKey: "bestScore") This code comes directly from a published Swift-developed game called PikiPop, which we will use from time to time to show code used in actual game applications. Again, note that Dictionaries are unordered, but Swift has ways to iterate or search through an entire Dictionary. Mutable or immutable collections One rather important discussion that we left out is how to subtract from, edit, or add to Arrays, Sets, and Dictionaries, but before we do that, we should understand the concept of mutable and immutable data or collections. A mutable collection is simply data that can be changed, added to, or subtracted from, while an immutable collection cannot be changed, added to, or subtracted from. To work with mutable and immutable collections efficiently in Objective-C, we had to explicitly state the mutability of the collection beforehand. For example, an array of the type NSArray in Objective-C is always immutable. There are methods we can call on NSArray that would edit the collection, but behind the scenes this would be creating brand new NSArrays, which would be rather inefficient if done often in the life of our game. Objective-C solved this issue with the class type NSMutableArray. Thanks to the flexibility of Swift's type inference, we already know how to make a collection mutable or immutable! The concept of constants and variables has us covered when it comes to data mutability in Swift.
Using the keyword let when creating a collection will make that collection immutable, while using var will initialize it as a mutable collection.
//mutable Array
var unlockedLevels : [Int] = [1, 2, 5, 8]
//immutable Dictionary
let playersForThisRound : [PlayerNumber:PlayerUserName] = [453:"userName3344xx5", 233:"princeTrunks", 6567: "noScopeMan98", 211: "egoDino"]
The Array of Int, unlockedLevels, can be edited simply because it's a variable. The immutable Dictionary playersForThisRound can't be changed since it's already been declared as a constant; no additional layers of ambiguity concerning additional class types. Editing or accessing collection data As long as a collection type is a variable, using the var keyword, we can make various edits to the data. Let's go back to our unlockedLevels array. Many games have the functionality of unlocking levels as the player progresses. Say the player reached the high score needed to unlock the previously locked level 3 (as 3 isn't a member of the array). We can add 3 to the array using the append function: unlockedLevels.append(3) Another neat attribute of Swift is that we can add data to an array using the += assignment operator: unlockedLevels += [3] Doing it this way, however, will simply add 3 to the end of the array. So our previous array of [1, 2, 5, 8] is now [1, 2, 5, 8, 3]. This probably isn't a desirable order, so to insert the number 3 in the third spot, unlockedLevels[2], we can use the following method: unlockedLevels.insert(3, atIndex: 2) Now our array of unlocked levels is ordered to [1, 2, 3, 5, 8]. This assumes, though, that we know the array is already sorted up to the point of insertion. There are various sorting functionalities provided by Swift that can assist in keeping an array sorted. We will leave the details of sorting to our discussion of loops and control flow later on in this article. Removing items from an array is just as simple. Let's again use our unlockedLevels array.
Imagine our game has an overworld for the player to travel to and from, and the player just unlocked a secret that triggered an event which blocked off access to level 1. Level 1 would now have to be removed from the unlocked levels. We can do it like this: unlockedLevels.removeAtIndex(0) // array is now [2, 3, 5, 8] Alternatively, imagine the player lost all of their lives and got a Game Over. A penalty for that could be to lock up the furthest level. Though probably a rather infuriating penalty, knowing that Level 8 is the furthest level in our array, we can remove it using the .removeLast() function of Array types: unlockedLevels.removeLast() // array is now [2,3,5] This is assuming we know the exact order of the collection. Sets or Dictionaries might be better for controlling certain aspects of your game. Here's a quick guide to some ways of editing a Set or a Dictionary. Set
inventory.insert("Power Ring") //.insert() adds items to a set
inventory.remove("Magic Potion") //.remove() removes a specific item
inventory.count //counts # of items in the Set
inventory.union(EnemyLoot) //combines two Sets
inventory.removeAll() //removes everything from the Set
inventory.isEmpty //returns true
Dictionary
var inventory = [Float : String]() //creates a mutable dictionary
/* one way to set an equipped weapon in a game; where 1.0 could represent the first "item slot" that would be a placeholder for the player's "current weapon" */
inventory.updateValue("Broadsword", forKey: 1.0)
//removes an item from a Dictionary based on its key
inventory.removeValueForKey(1.0)
inventory.count //counts items in the Dictionary
inventory.removeAll(keepCapacity: false) //deletes the Dictionary
inventory.isEmpty //returns true
//creates an array of the Dictionary's values
let inventoryNames = [String](inventory.values)
//creates an array of the Dictionary's keys
let inventoryKeys = [String](inventory.keys)
Iterating through collection types We can't discuss
collection types without mentioning how to iterate through them en masse. Here are some ways we'd iterate through an Array, Set, or Dictionary in Swift:
//(a) outputs every item through the entire collection
//works for Arrays, Sets, and Dictionaries but output will vary
for item in inventory {
    print(item)
}
//(b) outputs a sorted item list using Swift's sorted() function
//works for Sets
for item in sorted(inventory) {
    print("\(item)")
}
//(c) outputs every item as well as its current index
//works for Arrays, Sets, and Dictionaries
for (index, value) in enumerate(inventory) {
    print("Item \(index + 1): \(value)")
}
//(d) iterates through the keys of a Dictionary
for itemCode in inventory.keys {
    print("Item code: \(itemCode)")
}
//(e) iterates through the values of a Dictionary
for itemName in inventory.values {
    print("Item name: \(itemName)")
}
As stated previously, this is done with what's known as a for-loop; with these examples we show how Swift utilizes the for-in variation using the in keyword. The code will repeat until it reaches the end of the collection in all of these examples. In example (c) we also see the use of the Swift function enumerate(). This function returns a compound value, (index, value), for each item. This compound value is known as a tuple, and Swift's use of tuples makes for a wide variety of functionality for functions and loops as well as code blocks. We will delve more into tuples, loops, and blocks later on. Comparing Objective-C and Swift Here's a quick review of our Swift code with a comparison of the Objective-C equivalent.
Objective-C An example of the code in Objective-C is as follows:
const int MAX_ENEMIES = 10; //constant
float playerPower = 1.3; //variable
//Array of NSStrings
NSArray * stageNames = @[@"Downtown Tokyo", @"Heaven Valley", @"Nether"];
//Set of various NSObjects
NSSet *items = [NSSet setWithObjects: Weapons, Armor, HealingItems, @"A", nil];
//Dictionary with an Int:String key:value
NSDictionary *inventory = [NSDictionary dictionaryWithObjectsAndKeys:
    @"Buster Sword", [NSNumber numberWithInt:1],
    @"Potion", [NSNumber numberWithInt:43],
    @"StrengthBooster", [NSNumber numberWithInt:22],
    nil];
Swift The same code in Swift is as follows:
let MAX_ENEMIES = 10 //constant
var playerPower = 1.3 //variable
//Array of Strings
let stageNames : [String] = ["Downtown Tokyo","Heaven Valley","Nether"]
//Set of various NSObjects
var items = Set([Weapons, Armor, HealingItems,"A"])
//Dictionary with an Int:String key:value
var playerInventory: [Int : String] = [1 : "Buster Sword", 43 : "Potion", 22: "StrengthBooster"]
In the preceding code, we see some examples of variables, constants, Arrays, Sets, and Dictionaries. First we see their Objective-C syntax and then the equivalent declarations using Swift's syntax. We can see from this example how compact Swift is compared to Objective-C. Characters and strings For some time in this article we've been mentioning Strings. Strings are also a collection data type, but a specially handled collection of Characters, of the class type String. Swift is Unicode-compliant, so we can have Strings like this: let gameOverText = "Game Over!" We can have strings with emoji characters like this: let cardSuits = "♠ ♥ ♣ ♦" What we did just now was create what's known as a string literal. A string literal is when we explicitly define a String within two quotes "".
We can create empty String variables for later use in our games as such:
var emptyString = "" // empty string literal
var anotherEmptyString = String() // using type initializer
Both are valid ways to create an empty String "". String interpolation We can also create a string from a mixture of other data types, known as string interpolation. String interpolation is rather common in game development, debugging, and string use in general. The most notable examples are displaying the player's score and lives. This is how one of our example games, PikiPop, uses string interpolation to display current player stats:
//displays the player's current lives
var livesLabel = "x \(currentScene.player!.lives)"
//displays the player's current score
var scoreText = "Score: \(score)"
Take note of the "\(variable_name)" formatting. We've actually seen this before in our past code snippets. In the various print() outputs, we used this to display the value of the variable, collection, and so on that we wanted information on. In Swift, the way to output the value of a data type in a String is by using this formatting. For those of us who came from Objective-C, it's the same as this:
NSString *livesLabel = @"Lives: ";
int lives = 3;
NSString *livesText = [NSString stringWithFormat:@"%@ %d", livesLabel, lives];
Notice how Swift makes string interpolation much cleaner and easier to read than its Objective-C predecessor. Mutating strings There are various ways to change strings. We can also add to a string just the way we did while working with collection objects. Here's a basic example:
var gameText = "The player enters the stage"
gameText += " and quickly lost due to not leveling up"
/* gameText now says "The player enters the stage and quickly lost due to not leveling up" */
Since Strings are essentially arrays of characters, like arrays, we can use the += assignment operator to add to the previous String.
Also, akin to arrays, we can use the append() function to add a character to the end of a string:
let exclamationMark: Character = "!"
gameText.append(exclamationMark)
/* gameText now says "The player enters the stage and quickly lost due to not leveling up!" */
Here's how we iterate through the Characters in a string in Swift:
for character in "Start!" {
    print(character)
}
//outputs:
//S
//t
//a
//r
//t
//!
Notice how again we use the for-in loop and even have the flexibility of using a string literal as the collection being iterated through by the loop. String indices Another similarity between Arrays and Strings is the fact that a String's individual characters can be located via indices. Unlike Arrays, however, since a character can be of varying data size, broken into 21-bit numbers known as Unicode scalars, characters cannot be located in Swift with Int type index values. Instead, we can use the .startIndex and .endIndex properties of a String, and move one place ahead or one place behind an index with the .successor() and .predecessor() functions respectively, to retrieve the needed character or characters of a String. Here are some examples that use these properties and functions on our previous gameText String:
gameText[gameText.startIndex] // = T
gameText[gameText.endIndex.predecessor()] // = !
gameText[gameText.startIndex.successor()] // = h
gameText[gameText.endIndex.predecessor().predecessor()] // = p
Note that .endIndex points one position past the last character, so subscripting it directly would cause a runtime error; we first step back with .predecessor(). There are many ways to manipulate, mix, remove, and retrieve various aspects of Strings and Characters. For more information, be sure to check out the official Swift documentation on Characters and Strings here: https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/StringsAndCharacters.html. Summary There's much more to the Swift programming language than we can fit here. Throughout the course of this book, we will throw in a few extra tidbits and nuances about Swift as they become relevant to our upcoming game programming needs.
If you wish to become more versed in the Swift programming language, Apple actually provides a wonderful tool for us in what's known as a Playground. Playgrounds were introduced with the Swift programming language at WWDC14 in June of 2014 and allow us to test various code output and syntax without having to create a project, build, run, and repeat again when, in many cases, we simply need to tweak a few variables and loop iterations. There are a number of resources to check out on the official Swift developer page (https://developer.apple.com/swift/resources/). Two highly recommended Playgrounds to check out are as follows: Guided Tour Playground (https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/GuidedTour.playground.zip) This Playground covers many of the topics we mentioned in this article and more, from Hello World all the way to Generics. The second Playground to test out is the Balloons Playground (https://developer.apple.com/swift/blog/downloads/Balloons.zip). The Balloons Playground was the keynote Playgrounds demonstration from WWDC14 and shows off many of the features Playgrounds have to offer, particularly for making and testing games. Sometimes the best way to learn a programming language is to test live code, and that's exactly what Playgrounds allow us to do.
Packt
30 Sep 2015
14 min read

Overview of Physics Bodies and Physics Materials

In this article by Katax Emperor and Devin Sherry, authors of the book Unreal Engine Physics Essentials, we will take a deeper look at Physics Bodies in Unreal Engine 4. We will also look at some of the detailed properties available to these assets. In addition, we will discuss the following topics: Physical Materials – an overview For the purposes of this article, we will continue to work with Unreal Engine 4 and the Unreal_PhyProject. Let's begin by discussing Physics Bodies in Unreal Engine 4. (For more resources related to this topic, see here.) Physics Bodies – an overview When it comes to creating Physics Bodies, there are multiple ways to go about it (most of which we have covered up to this point), so we will not go into much detail about the creation of Physics Bodies. We can have Static Meshes react as Physics Bodies by checking the Simulate Physics property of the asset when it is placed in our level. We can also create Physics Bodies by creating Physics Assets and Skeletal Meshes, which automatically have the properties of physics by default. Lastly, Shape Components in blueprints, such as spheres, boxes, and capsules, will automatically gain the properties of a Physics Body if they are set for any sort of collision, overlap, or other physics simulation events. As always, remember to ensure that our asset has a collision applied to it before attempting to simulate physics or establish Physics Bodies, otherwise the simulation will not work. When we work with the Physics properties of Static Meshes, or any other assets that we will attempt to simulate physics with, we will see a handful of different parameters under the Details panel that we can change in order to produce the desired effect. Let's break down these properties: Simulate Physics: This parameter allows you to enable physics simulation for the asset you have selected.
When this option is unchecked, the asset will remain static, and once enabled, we can edit the Physics Body properties for additional customization. Auto Weld: When this property is set to True, and when the asset is attached to a parent object, such as in a blueprint, the two bodies are merged into a single rigid body. Physics settings, such as collision profiles and body settings, are determined by Root Component. Start Awake: This parameter determines whether the selected asset will Simulate Physics at the start once it is spawned or whether it will Simulate Physics at a later time. We can change this parameter with the level and actor blueprints. Override Mass: When this property is checked and set to True, we can then freely change the Mass of our asset using kilograms (kg). Otherwise, the Mass in Kg parameter will be set to a default value that is based on a computation between the physical material applied and the mass scale value. Mass in Kg: This parameter determines the Mass of the selected asset using kilograms. This is important when you work with different sized physics objects and want them to react to forces appropriately. Locked Axis: This parameter allows you to lock the physical movement of our object along a specified axis. We have the choice to lock the default axes as specified in Project Settings. We also have the choice to lock physical movement along the individual X, Y, and Z axes. We can have none of the axes either locked in translation or rotation, or we can customize each axis individually with the Custom option. Enable Gravity: This parameter determines whether the object should have the force of gravity applied to it. The force of gravity can be altered in the World Settings properties of the level or in the Physics section of the Engine properties in Project Settings. Use Async Scene: This property allows you to enable the use of Asynchronous Physics for the specified object. By default, we cannot edit this property. 
In order to do so, we must navigate to Project Settings and then to the Physics section. Under the advanced Simulation tab, we will find the Enable Async Scene parameter. In an asynchronous scene, objects (such as Destructible actors) are simulated, while a Synchronous scene is where classic physics tasks, such as a falling crate, take place. Override Walkable Slope on Instance: This parameter determines whether or not we can customize an object's walkable slope. In general, we would use this parameter for our player character, but this property enables the customization of how steep a slope an object can walk on. This can be controlled specifically by the Walkable Slope Angle parameter and the Walkable Slope Behavior parameter. Override Max Depenetration Velocity: This parameter allows you to customize Max Depenetration Velocity of the selected physics body. Center of Mass Offset: This property allows you to specify a specific vector offset for the selected object's center of mass from the calculated location. Being able to know and even modify the center of mass for our objects can be very useful when you work with sensitive physics simulations (such as flight). Sleep Family: This parameter allows you to control the set of functions that the physics object uses when in a sleep mode or when the object is moving and slowly coming to a stop. The SF Sensitive option contains values with a lower sleep threshold. This is best used for objects that can move very slowly or for improved physics simulations (such as billiards). The SF Normal option contains values with a higher sleep threshold, and objects will come to a stop in a more abrupt manner once in motion as compared to the SF Sensitive option. Mass Scale: This parameter allows you to scale the mass of our object by a scalar multiplier. The lower the number, the lower the mass of the object will become, whereas the larger the number, the larger the mass of the object will become.
This property can be used in conjunction with the Mass in Kg parameter to add more customization to the mass of the object. Angular Damping: This property is a modifier of the drag force that is applied to the object in order to reduce angular movement, which means to reduce the rotation of the object. We will go into more detail regarding Angular Damping. Linear Damping: This property is used to simulate the different types of friction that can assist in the game world. This modifier adds a drag force to reduce linear movement, reducing the translation of the object. We will go into more detail regarding Linear Damping. Max Angular Velocity: This parameter limits Max Angular Velocity of the selected object in order to prevent the object from rotating at high rates. By increasing this value, the object will spin at very high speeds once it is impacted by an outside force that is strong enough to reach the Max Angular Velocity value. By decreasing this value, the object will not rotate as fast, and it will come to a halt much faster depending on the angular damping applied. Position Solver Iteration Count: This parameter reflects the physics body's solver iteration count for its position; the solver iteration count is responsible for periodically checking the physics body's position. Increasing this value will be more CPU intensive, but better stabilized. Velocity Solver Iteration Count: This parameter reflects the physics body's solver iteration count for its velocity; the solver iteration count is responsible for periodically checking the physics body's velocity. Increasing this value will be more CPU intensive, but better stabilized. Now that we have discussed all the different parameters available to Physics Bodies in Unreal Engine 4, feel free to play around with these values in order to obtain a stronger grasp of what each property controls and how it affects the physical properties of the object. 
As there are a handful of properties, we will not go into detailed examples of each, but the best way to learn more is to experiment with these values. However, we will create various examples of physics bodies in order to explore Physics Damping and Friction.
Physical Materials – an overview
Physical Materials are assets that are used to define the response of a physics body when you dynamically interact with the game world. When you first create Physical Material, you are presented with a set of default values that are identical to the default Physical Material that is applied to all physics objects. To create Physical Material, let's navigate to Content Browser and select the Content folder so that it is highlighted. From here, we can right-click on the Content folder and select the New Folder option to create a new folder for our Physical Material; name this new folder PhysicalMaterials. Now, in the PhysicalMaterials folder, right-click on the empty area of Content Browser and navigate to the Physics section and select Physical Material. Make sure to name this new asset PM_Test. Double-click on the new Physical Material asset to open Generic Asset Editor and we should see the following values that we can edit in order to make our physics objects behave in certain ways: Let's take a few minutes to break down each of these properties: Friction: This parameter controls how easily objects can slide on this surface. The lower the friction value, the more slippery the surface. The higher the friction value, the less slippery the surface. For example, ice would have a Friction surface value of .05, whereas a Friction surface value of 1 would cause the object not to slip as much once moved. Friction Combine Mode: This parameter controls how friction is computed for multiple materials. This property is important when it comes to interactions between multiple physical materials and how we want these calculations to be made. 
Our choices are Average, Minimum, Maximum, and Multiply. Override Friction Combine Mode: This parameter allows you to set the Friction Combine Mode parameter instead of using Friction Combine Mode, found in the Project Settings | Engine | Physics section. Restitution: This parameter controls how bouncy the surface is. The higher the value, the more bouncy the surface will become. Density: This parameter is used in conjunction with the shape of the object to calculate its mass properties. The higher the number, the heavier the object becomes (in grams per cubic centimeter). Raise Mass to Power: This parameter is used to adjust the way in which the mass increases as the object gets larger. This is applied to the mass that is calculated based on a solid object. In actuality, larger objects do not tend to be solid and become more like shells (such as a vehicle). The values are clamped to 1 or less. Destructible Damage Threshold Scale: This parameter is used to scale the damage threshold for the destructible objects that this physical material is applied to. Surface Type: This parameter is used to describe what type of real-world surface we are trying to imitate for our project. We can edit these values by navigating to the Project Settings | Physics | Physical Surface section. Tire Friction Scale: This parameter is used as the overall tire friction scalar for every type of tire and is multiplied by the parent values of the tire. Tire Friction Scales: This parameter is almost identical to the Tire Friction Scale parameter, but it looks for a Tire Type data asset to associate it to. Tire Types can be created through the use of Data Assets by right-clicking on the Content Browser | Miscellaneous | Data Asset | Tire Type section. Now that we have briefly discussed how to create Physical Materials and what their properties are, let's take a look at how to apply Physical Materials to our physics bodies. 
In FirstPersonExampleMap, we can select any of the physics body cubes throughout the level and in the Details panel under Collision, we will find the Phys Material Override parameter. It is here that we can apply our Physical Material to the cube and view how it reacts to our game world. For the sake of an example, let's return to the Physical Material, PM_Test, that we created earlier, change the Friction property from 0.7 to 0.2, and save it. With this change in place, let's select a physics body cube in FirstPersonExampleMap and apply the Physical Material, PM_Test, to the Phys Material Override parameter of the object. Now, if we play the game, we will see that the cube we applied the Physical Material, PM_Test, to will start to slide more once shot by the player than it did when it had a Friction value of 0.7. We can also apply this Physical Material to the floor mesh in FirstPersonExampleMap to see how it affects the other physics bodies in our game world. From here, feel free to play around with the Physical Material parameters to see how we can affect the physics bodies in our game world. Lastly, let's briefly discuss how to apply Physical Materials to normal Materials, Material Instances, and Skeletal Meshes. To apply Physical Material to a normal material, we first need to either create or open an already created material in Content Browser. To create a material, just right-click on an empty area of Content Browser and select Material from the drop-down menu.Double-click on Material to open Material Editor, and we will see the parameter for Phys Material under the Physical Material section of Details panel in the bottom-left of Material Editor: To apply Physical Material to Material Instance, we first need to create Material Instance by navigating to Content Browser and right-clicking on an empty area to bring up the context drop-down menu. Under the Materials & Textures section, we will find an option for Material Instance. 
Double-click on this option to open Material Instance Editor. Under the Details panel in the top-left corner of this editor, we will find an option to apply Phys Material under the General section: Lastly, to apply Physical Material to Skeletal Mesh, we need to either create or open an already created Physics Asset that contains Skeletal Mesh. In the First Person Shooter Project template, we can find TutorialTPP_PhysicsAsset under the Engine Content folder. If the Engine Content folder is not visible by default in Content Browser, we need to simply navigate to View Options in the bottom-right corner of Content Browser and check the Show Engine Content parameter. Under the Engine Content folder, we can navigate to the Tutorial folder and then to the TutorialAssets folder to find the TutorialTPP_PhysicsAsset asset. Double-click on this asset to open Physical Asset Tool. Now, we can click on any of the body parts found on Skeletal Mesh to highlight it. Once this is highlighted, we can view the option for Simple Collision Physical Material in the Details panel under the Physics section. Here, we can apply any of our Physical Materials to this body part.
Summary
In this article, we discussed what Physics Bodies are and how they function in Unreal Engine 4. Moreover, we looked at the properties that are involved in Physics Bodies and how these properties can affect the behavior of these bodies in the game. Additionally, we briefly discussed Physical Materials, how to create them, and what their properties entail when it comes to affecting its behavior in the game. We then reviewed how to apply Physical Materials to static meshes, materials, material instances, and skeletal meshes. Now that we have a stronger understanding of how Physics Bodies work in the context of angular and linear velocities, momentum, and the application of damping, we can move on and explore in detail how Physical Materials work and how they are implemented. 
Resources for Article:
Further resources on this subject:
Creating a Brick Breaking Game [article]
Working with Away3D Cameras [article]
Replacing 2D Sprites with 3D Models [article]
Lights and Effects
Packt
29 Sep 2015
In this article by Matt Smith and Chico Queiroz, authors of Unity 5.x Cookbook, we will cover the following topics:
Using lights and cookie textures to simulate a cloudy day
Adding a custom Reflection map to a scene
Creating a laser aim with Projector and Line Renderer
Reflecting surrounding objects with Reflection Probes
Setting up an environment with Procedural Skybox and Directional Light
(For more resources related to this topic, see here.)
Introduction
Whether you want to make a better-looking game or add interesting features, lights and effects can boost your project and help you deliver a higher-quality product. In this article, we will look at creative ways of using lights and effects, and also take a look at some of Unity's new features, such as Procedural Skyboxes, Reflection Probes, Light Probes, and custom Reflection Sources. Lighting is certainly an area that has received a lot of attention from Unity, which now features real-time Global Illumination technology provided by Enlighten. This new technology provides better and more realistic results for both real-time and baked lighting. For more information on Unity's Global Illumination system, check out its documentation at http://docs.unity3d.com/Manual/GIIntro.html.
The big picture
There are many ways of creating light sources in Unity. Here's a quick overview of the most common methods.
Lights
Lights are placed into the scene as game objects, featuring a Light component. They can function in Realtime, Baked, or Mixed modes. Among other properties, they can have their Range, Color, Intensity, and Shadow Type set by the user. 
There are four types of lights:
Directional Light: This is normally used to simulate sunlight
Spot Light: This works like a cone-shaped spot light
Point Light: This is a bulb lamp-like, omnidirectional light
Area Light: This baked-only light type is emitted in all directions from a rectangle-shaped entity, allowing for a smooth, realistic shading
For an overview of the light types, check Unity's documentation at http://docs.unity3d.com/Manual/Lighting.html.
Different types of lights
Environment Lighting
Unity's Environment Lighting is often achieved through the combination of a Skybox material and sunlight defined by the scene's Directional Light. Such a combination creates an ambient light that is integrated into the scene's environment, and which can be set as Realtime or Baked into Lightmaps.
Emissive materials
When applied to static objects, materials featuring Emission colors or maps will cast light over nearby surfaces, in both real-time and baked modes, as shown in the following screenshot:
Projector
As its name suggests, a Projector can be used to simulate projected lights and shadows, basically by projecting a material and its texture map onto other objects.
Lightmaps and Light Probes
Lightmaps are basically texture maps generated from the scene's lighting information and applied to the scene's static objects in order to avoid the use of processing-intensive real-time lighting. Light Probes are a way of sampling the scene's illumination at specific points in order to have it applied onto dynamic objects without the use of real-time lighting.
The Lighting window
The Lighting window, which can be found by navigating to the Window | Lighting menu, is the hub for setting and adjusting the scene's illumination features, such as Lightmaps, Global Illumination, Fog, and much more. It's strongly recommended that you take a look at Unity's documentation on the subject, which can be found at http://docs.unity3d.com/Manual/GlobalIllumination.html. 
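The light types described above are usually added through the editor menus, but the same Light component can also be configured entirely from script. The following is a minimal sketch, assuming a MonoBehaviour placed in the scene; the class name, object name, and the color/rotation values are illustrative choices, not something the recipes above prescribe:

```csharp
using UnityEngine;

// Sketch: creating a sun-like Directional Light from script.
// "Sun" and the numeric values below are illustrative assumptions.
public class CreateSunLight : MonoBehaviour
{
    void Start()
    {
        GameObject lightObj = new GameObject("Sun");
        Light sun = lightObj.AddComponent<Light>();
        sun.type = LightType.Directional;          // for directional lights, rotation matters; position does not
        sun.color = new Color(1f, 0.96f, 0.84f);   // a warm, sunlight-like tint
        sun.intensity = 1f;
        sun.shadows = LightShadows.Soft;
        // Tilt the light so it casts angled shadows, like an afternoon sun
        lightObj.transform.rotation = Quaternion.Euler(50f, -30f, 0f);
    }
}
```

The same pattern works for the other light types: swap LightType.Directional for LightType.Spot or LightType.Point and set Range and Spot Angle as needed.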
Using lights and cookie textures to simulate a cloudy day
As can be seen in many first-person shooters and survival horror games, lights and shadows can add a great deal of realism to a scene, helping immensely to create the right atmosphere for the game. In this recipe, we will create a cloudy outdoor environment using cookie textures. Cookie textures work as masks for lights, adjusting the intensity of the light projection according to the cookie texture's alpha channel. This allows for a silhouette effect (just think of the bat-signal) or, as in this particular case, subtle variations that give a filtered quality to the lighting.
Getting ready
If you don't have access to an image editor, or prefer to skip the texture map elaboration in order to focus on the implementation, please use the image file called cloudCookie.tga, which is provided inside the 1362_06_01 folder.
How to do it...
To simulate a cloudy outdoor environment, follow these steps: In your image editor, create a new 512 x 512 pixel image. Using black as the foreground color and white as the background color, apply the Clouds filter (in Photoshop, this is done by navigating to the Filter | Render | Clouds menu). Learning about the Alpha channel is useful, but you could get the same result without it. Skip steps 3 to 7, save your image as cloudCookie.png and, when changing texture type in step 9, leave Alpha from Greyscale checked. Select your entire image and copy it. Open the Channels window (in Photoshop, this can be done by navigating to the Window | Channels menu). There should be three channels: Red, Green, and Blue. Create a new channel. This will be the Alpha channel. In the Channels window, select the Alpha 1 channel and paste your image into it. Save your image file as cloudCookie.PSD or TGA. Import your image file to Unity and select it in the Project view. From the Inspector view, change its Texture Type to Cookie and its Light Type to Directional. 
Then, click on Apply, as shown: We will need a surface to actually see the lighting effect. You can either add a plane to your scene (via navigating to the GameObject | 3D Object | Plane menu), or create a Terrain (menu option GameObject | 3D Object | Terrain) and edit it, if you so wish. Let's add a light to our scene. Since we want to simulate sunlight, the best option is to create a Directional Light. You can do this through the drop-down menu named Create | Light | Directional Light in the Hierarchy view. Using the Transform component of the Inspector view, reset the light's Position to X: 0, Y: 0, Z: 0 and its Rotation to X: 90; Y: 0; Z: 0. In the Cookie field, select the cloudCookie texture that you imported earlier. Change the Cookie Size field to 80, or a value that you feel is more appropriate for the scene's dimension. Please leave Shadow Type as No Shadows. Now, we need a script to translate our light and, consequently, the Cookie projection. Using the Create drop-down menu in the Project view, create a new C# Script named MovingShadows.cs. Open your script and replace everything with the following code:

using UnityEngine;
using System.Collections;

public class MovingShadows : MonoBehaviour {
    public float windSpeedX;
    public float windSpeedZ;
    private float lightCookieSize;
    private Vector3 initPos;

    void Start() {
        initPos = transform.position;
        // Store the cookie size so we know how far the light may travel before wrapping
        lightCookieSize = GetComponent<Light>().cookieSize;
    }

    void Update() {
        // Reset the light's position once it has traveled one cookie-size beyond its origin
        Vector3 pos = transform.position;
        float xPos = Mathf.Abs(pos.x);
        float zPos = Mathf.Abs(pos.z);
        float xLimit = Mathf.Abs(initPos.x) + lightCookieSize;
        float zLimit = Mathf.Abs(initPos.z) + lightCookieSize;
        if (xPos >= xLimit) pos.x = initPos.x;
        if (zPos >= zLimit) pos.z = initPos.z;
        transform.position = pos;
        // Drift the light (and so the projected clouds) according to the wind speed
        float windX = Time.deltaTime * windSpeedX;
        float windZ = Time.deltaTime * windSpeedZ;
        transform.Translate(windX, 0, windZ, Space.World);
    }
}

Save your script and apply it to the Directional Light. Select the Directional Light. 
In the Inspector view, change the parameters Wind Speed X and Wind Speed Z to 20 (you can change these values as you wish, as shown). Play your scene. The shadows will be moving.
How it works...
With our script, we are telling the Directional Light to move across the X and Z axes, causing the Light Cookie texture to be displaced as well. Also, we reset the light object to its original position whenever it traveled a distance that was either equal to or greater than the Light Cookie Size. The light position must be reset to prevent it from traveling too far, which would cause problems with real-time rendering and lighting. The Light Cookie Size parameter is used to ensure a smooth transition. The reason we are not enabling shadows is because the light angle for the X axis must be 90 degrees (or there will be a noticeable gap when the light resets to the original position). If you want dynamic shadows in your scene, please add a second Directional Light.
There's more...
In this recipe, we have applied a cookie texture to a Directional Light. But what if we were using Spot or Point Lights?
Creating Spot Light cookies
Unity documentation has an excellent tutorial on how to make Spot Light cookies. This is great to simulate shadows coming from projectors, windows, and so on. You can check it out at http://docs.unity3d.com/Manual/HOWTO-LightCookie.html.
Creating Point Light Cookies
If you want to use a cookie texture with a Point Light, you'll need to change the Light Type in the Texture Importer section of the Inspector.
Adding a custom Reflection map to a scene
Whereas Unity Legacy Shaders use individual Reflection Cubemaps per material, the new Standard Shader gets its reflection from the scene's Reflection Source, as configured in the Scene section of the Lighting window. The level of reflectiveness for each material is now given by its Metallic value or Specular value (for materials using Specular setup). 
This new method can be a real time saver, allowing you to quickly assign the same reflection map to every object in the scene. Also, as you can imagine, it helps keep the overall look of the scene coherent and cohesive. In this recipe, we will learn how to take advantage of the Reflection Source feature.
Getting ready
For this recipe, we will prepare a Reflection Cubemap, which is basically the environment to be projected as a reflection onto the material. It can be made from either six images or, as shown in this recipe, a single image file. To help us with this recipe, a Unity package has been provided, containing a prefab made of a 3D object and a basic Material (using a TIFF as Diffuse map), and also a JPG file to be used as the reflection map. All these files are inside the 1362_06_02 folder.
How to do it...
To add Reflectiveness and Specularity to a material, follow these steps: Import batteryPrefab.unitypackage to a new project. Then, select the battery_prefab object from the Assets folder, in the Project view. From the Inspector view, expand the Material component and observe the asset preview window. Thanks to the Specular map, the material already features a reflective look. However, it looks as if it is reflecting the scene's default Skybox, as shown: Import the CustomReflection.jpg image file. From the Inspector view, change its Texture Type to Cubemap, its Mapping to Latitude - Longitude Layout (Cylindrical), and check the boxes for Glossy Reflection and Fixup Edge Seams. Finally, change its Filter Mode to Trilinear and click on the Apply button, shown as follows: Let's replace the Scene's Skybox with our newly created Cubemap, as the Reflection map for our scene. In order to do this, open the Lighting window by navigating to the Window | Lighting menu. Select the Scene section and use the drop-down menu to change the Reflection Source to Custom. 
Finally, assign the newly created CustomReflection texture as the Cubemap, shown as follows: Check out the new reflections on the battery_prefab object.
How it works...
While it is the material's specular map that allows for a reflective look, including the intensity and smoothness of the reflection, the reflection itself (that is, the image you see on the reflection) is given by the Cubemap that we have created from the image file.
There's more...
Reflection Cubemaps can be achieved in many ways and have different mapping properties.
Mapping coordinates
The Cylindrical mapping that we applied was well-suited for the photograph that we used. However, depending on how the reflection image is generated, a Cubic or Spheremap-based mapping can be more appropriate. Also, note that the Fixup Edge Seams option will try to make the image seamless.
Sharp reflections
You might have noticed that the reflection is somewhat blurry compared to the original image; this is because we have ticked the Glossy Reflection box. To get a sharper-looking reflection, deselect this option; in which case, you can also leave the Filter Mode option as default (Bilinear).
Maximum size
At 512 x 512 pixels, our reflection map will probably run fine on lower-end machines. However, if the quality of the reflection map is not so important in your game's context, and the original image dimensions are big (say, 4096 x 4096), you might want to change the texture's Max Size at the Import Settings to a lower number.
Creating a laser aim with Projector and Line Renderer
Although using GUI elements, such as a cross-hair, is a valid way to allow players to aim, replacing (or combining) it with a projected laser dot might be a more interesting approach. In this recipe, we will use the Projector and Line Renderer components to implement this concept. 
Getting ready
To help us with this recipe, a Unity package has been provided, containing a sample scene featuring a character holding a laser pointer, and also a texture map named LineTexture. All files are inside the 1362_06_03 folder. Also, we'll make use of the Effects assets package provided by Unity (which you should have installed when installing Unity).
How to do it...
To create a laser dot aim with a Projector, follow these steps: Import BasicScene.unitypackage to a new project. Then, open the scene named BasicScene. This is a basic scene, featuring a player character whose aim is controlled via mouse. Import the Effects package by navigating to the Assets | Import Package | Effects menu. If you want to import only the necessary files within the package, deselect everything in the Importing package window by clicking on the None button, and then check the Projectors folder only. Then, click on Import, as shown: From the Project view, locate the ProjectorLight shader (inside the Assets | Standard Assets | Effects | Projectors | Shaders folder). Duplicate the file and name the new copy ProjectorLaser. Open ProjectorLaser. From the first line of the code, change Shader "Projector/Light" to Shader "Projector/Laser". Then, locate the line of code Blend DstColor One and change it to Blend One One. Save and close the file. The reason for editing the shader for the laser was to make it stronger by changing its blend type to Additive. However, if you want to learn more about it, check out Unity's documentation on the subject, which is available at http://docs.unity3d.com/Manual/SL-Reference.html. Now that we have fixed the shader, we need a material. From the Project view, use the Create drop-down menu to create a new Material. Name it LaserMaterial. Then, select it from the Project view and, from the Inspector view, change its Shader to Projector/Laser. From the Project view, locate the Falloff texture. 
Open it in your image editor and, except for the first and last columns of pixels, which should be black, paint everything white. Save the file and go back to Unity. Change the LaserMaterial's Main Color to red (RGB: 255, 0, 0). Then, from the texture slots, select the Light texture as Cookie and the Falloff texture as Falloff. From the Hierarchy view, find and select the pointerPrefab object (MsLaser | mixamorig:Hips | mixamorig:Spine | mixamorig:Spine1 | mixamorig:Spine2 | mixamorig:RightShoulder | mixamorig:RightArm | mixamorig:RightForeArm | mixamorig:RightHand | pointerPrefab). Then, from the Create drop-down menu, select Create Empty Child. Rename the new child of pointerPrefab as LaserProjector. Select the LaserProjector object. Then, from the Inspector view, click the Add Component button and navigate to Effects | Projector. Then, from the Projector component, set the Orthographic option as true and set Orthographic Size as 0.1. Finally, select LaserMaterial from the Material slot. Test the scene. You will be able to see the laser aim dot, as shown: Now, let's create a material for the Line Renderer component that we are about to add. From the Project view, use the Create drop-down menu to add a new Material. Name it Line_Mat. From the Inspector view, change the shader of the Line_Mat to Particles/Additive. Then, set its Tint Color to red (RGB: 255, 0, 0). Import the LineTexture image file. Then, set it as the Particle Texture for the Line_Mat, as shown: Use the Create drop-down menu from the Project view to add a C# script named LaserAim. Then, open it in your editor. 
Replace everything with the following code:

using UnityEngine;
using System.Collections;

public class LaserAim : MonoBehaviour {
    public float lineWidth = 0.2f;
    public Color regularColor = new Color(0.15f, 0, 0, 1);
    public Color firingColor = new Color(0.31f, 0, 0, 1);
    public Material lineMat;

    private Vector3 lineEnd;
    private Projector proj;
    private LineRenderer line;

    void Start() {
        // Create and configure the Line Renderer dynamically
        line = gameObject.AddComponent<LineRenderer>();
        line.material = lineMat;
        line.material.SetColor("_TintColor", regularColor);
        line.SetVertexCount(2);
        line.SetWidth(lineWidth, lineWidth);
        proj = GetComponent<Projector>();
    }

    void Update() {
        // Cast a ray forward to find where the laser should end
        RaycastHit hit;
        Vector3 fwd = transform.TransformDirection(Vector3.forward);
        if (Physics.Raycast(transform.position, fwd, out hit)) {
            lineEnd = hit.point;
            float margin = 0.5f;
            // Clip the projection just past the first object hit
            proj.farClipPlane = hit.distance + margin;
        } else {
            lineEnd = transform.position + fwd * 10f;
        }
        line.SetPosition(0, transform.position);
        line.SetPosition(1, lineEnd);
        if (Input.GetButton("Fire1")) {
            // Pulse the beam color while the fire button is held
            float lerpSpeed = Mathf.Sin(Time.time * 10f);
            lerpSpeed = Mathf.Abs(lerpSpeed);
            Color lerpColor = Color.Lerp(regularColor, firingColor, lerpSpeed);
            line.material.SetColor("_TintColor", lerpColor);
        }
        if (Input.GetButtonUp("Fire1")) {
            line.material.SetColor("_TintColor", regularColor);
        }
    }
}

Save your script and attach it to the LaserProjector game object. Select the LaserProjector GameObject. From the Inspector view, find the Laser Aim component and fill the Line Material slot with the Line_Mat material, as shown: Play the scene. The laser aim is ready, and looks as shown: In this recipe, the width of the laser beam and its aim dot have been exaggerated. Should you need a more realistic thickness for your beam, change the Line Width field of the Laser Aim component to 0.05, and the Orthographic Size of the Projector component to 0.025. Also, remember to make the beam more opaque by setting the Regular Color of the Laser Aim component brighter.
How it works... 
The laser aim effect was achieved by combining two different effects: a Projector and a Line Renderer. A Projector, which can be used to simulate light, shadows, and more, is a component that projects a material (and its texture) onto other game objects. By attaching a projector to the Laser Pointer object, we have ensured that it will face the right direction at all times. To get the right, vibrant look, we have edited the projector material's Shader, making it brighter. Also, we have scripted a way to prevent projections from going through objects, by setting its Far Clip Plane at approximately the same level as the first object that is receiving the projection. The line of code that is responsible for this action is proj.farClipPlane = hit.distance + margin;. Regarding the Line Renderer, we have opted to create it dynamically, via code, instead of manually adding the component to the game object. The code is also responsible for setting up its appearance, updating the line vertices' positions, and changing its color whenever the fire button is pressed, giving it a glowing/pulsing look. For more details on how the script works, don't forget to check out the commented code, available within the 1362_06_03 | End folder.
Reflecting surrounding objects with Reflection Probes
If you want your scene's environment to be reflected by game objects featuring reflective materials (such as the ones with high Metallic or Specular levels), then you can achieve such an effect using Reflection Probes. They allow for real-time, baked, or even custom reflections through the use of Cubemaps. Real-time reflections can be expensive in terms of processing; in which case, you should favor baked reflections, unless it's really necessary to display dynamic objects being reflected (mirror-like objects, for instance). Still, there are some ways real-time reflections can be optimized. 
In this recipe, we will test three different configurations for reflection probes:
Real-time reflections (constantly updated)
Real-time reflections (updated on-demand) via script
Baked reflections (from the Editor)
Getting ready
For this recipe, we have prepared a basic scene, featuring three sets of reflective objects: one is constantly moving, one is static, and one moves whenever it is interacted with. The Probes.unitypackage package containing the scene can be found inside the 1362_06_04 folder.
How to do it...
To reflect the surrounding objects using Reflection Probes, follow these steps: Import Probes.unitypackage to a new project. Then, open the scene named Probes. This is a basic scene featuring three sets of reflective objects. Play the scene. Observe that one of the systems is dynamic, one is static, and one rotates randomly whenever a key is pressed. Stop the scene. First, let's create a constantly updated real-time reflection probe. From the Create drop-down button of the Hierarchy view, add a Reflection Probe to the scene (Create | Light | Reflection Probe). Name it RealtimeProbe and make it a child of the System 1 Realtime | MainSphere game object. Then, from the Inspector view, the Transform component, change its Position to X: 0; Y: 0; Z: 0, as shown: Now, go to the Reflection Probe component. Set Type as Realtime; Refresh Mode as Every Frame and Time Slicing as No time slicing, shown as follows: Play the scene. The reflections will now be updated in real time. Stop the scene. Observe that the only object displaying the real-time reflections is System 1 Realtime | MainSphere. The reason for this is the Size of the Reflection Probe. From the Reflection Probe component, change its Size to X: 25; Y: 10; Z: 25. Note that the small red spheres are now affected as well. However, it is important to notice that all objects display the same reflection. 
Since our Reflection Probe's origin is placed at the same location as the MainSphere, all reflective objects will display reflections from that point of view. If you want to eliminate the reflection from the reflective objects within the Reflection Probe, such as the small red spheres, select the objects and, from the Mesh Renderer component, set Reflection Probes to Off.

8. Add a new Reflection Probe to the scene. This time, name it OnDemandProbe and make it a child of the System 2 On Demand | MainSphere game object. Then, from the Inspector view, in the Transform component, change its Position to X: 0; Y: 0; Z: 0.
9. Now, go to the Reflection Probe component. Set Type to Realtime, Refresh Mode to Via scripting, and Time Slicing to Individual faces.
10. Using the Create drop-down menu in the Project view, create a new C# Script named UpdateProbe.
11. Open your script and replace everything with the following code:

using UnityEngine;
using System.Collections;

public class UpdateProbe : MonoBehaviour
{
    private ReflectionProbe probe;

    void Awake()
    {
        probe = GetComponent<ReflectionProbe>();
        probe.RenderProbe();
    }

    public void RefreshProbe()
    {
        probe.RenderProbe();
    }
}

12. Save your script and attach it to the OnDemandProbe.
13. Now, find the script named RandomRotation, which is attached to the System 2 On Demand | Spheres object, and open it in the code editor.
14. Right before the Update() function, add the following lines:

private GameObject probe;
private UpdateProbe up;

void Awake()
{
    probe = GameObject.Find("OnDemandProbe");
    up = probe.GetComponent<UpdateProbe>();
}

15. Now, locate the line of code transform.eulerAngles = newRotation; and, immediately after it, add the following line:

up.RefreshProbe();

16. Save the script and test your scene. Observe how the Reflection Probe is updated whenever a key is pressed. Stop the scene.
17. Add a third Reflection Probe to the scene.
18. Name it CustomProbe and make it a child of the System 3 Custom | MainSphere game object. Then, from the Inspector view, in the Transform component, change its Position to X: 0; Y: 0; Z: 0.
19. Go to the Reflection Probe component. Set Type to Custom and click on the Bake button.
20. A Save File dialog window will show up. Save the file as CustomProbe-reflectionHDR.exr.
21. Observe that the reflection map does not include the reflection of the red spheres on it. To change this, you have two options: set the System 3 Custom | Spheres game object (and all its children) as Reflection Probe Static, or, from the Reflection Probe component of the CustomProbe game object, check the Dynamic Objects option and bake the map again (by clicking on the Bake button).
22. If you want your reflection Cubemap to be dynamically baked while you edit your scene, you can set the Reflection Probe Type to Baked, open the Lighting window (the Assets | Lighting menu), access the Scene section, and check the Continuous Baking option. Please note that this mode won't include dynamic objects in the reflection, so be sure to set System 3 Custom | Spheres and System 3 Custom | MainSphere as Reflection Probe Static.

How it works...
Reflection Probes act like omnidirectional cameras that render Cubemaps and apply them onto the objects within their constraints. When creating Reflection Probes, it's important to be aware of how the different types work:

Real-time Reflection Probes: Cubemaps are updated at runtime. Real-time Reflection Probes have three different Refresh Modes: On Awake (the Cubemap is baked once, right before the scene starts); Every frame (the Cubemap is constantly updated); Via scripting (the Cubemap is updated whenever the RenderProbe function is called). Since Cubemaps feature six sides, Reflection Probes support Time Slicing, so each side can be updated independently.
There are three different types of Time Slicing: All Faces at Once (renders all faces at once and calculates mipmaps over 6 frames; updates the probe in 9 frames); Individual Faces (each face is rendered over a number of frames; updates the probe in 14 frames; the results can be a bit inaccurate, but it is the least expensive solution in terms of frame-rate impact); No Time Slicing (the probe is rendered and mipmaps are calculated in one frame; it provides high accuracy, but it is also the most expensive in terms of frame rate).

Baked: Cubemaps are baked while editing the scene. Cubemaps can be either manually or automatically updated, depending on whether the Continuous Baking option is checked (it can be found in the Scene section of the Lighting window).

Custom: Custom Reflection Probes can be either manually baked from the scene (and even include dynamic objects), or created from a premade Cubemap.

There's more...
There are a number of additional settings that can be tweaked, such as Importance, Intensity, Box Projection, Resolution, HDR, and so on. For a complete view of each of these settings, we strongly recommend that you read Unity's documentation on the subject, which is available at http://docs.unity3d.com/Manual/class-ReflectionProbe.html.

Setting up an environment with Procedural Skybox and Directional Light
Besides the traditional 6 Sided and Cubemap, Unity now features a third type of skybox: the Procedural Skybox. Easy to create and set up, the Procedural Skybox can be used in conjunction with a Directional Light to provide Environment Lighting to your scene. In this recipe, we will learn about the different parameters of the Procedural Skybox.

Getting ready
For this recipe, you will need to import Unity's Standard Assets Effects package, which you should have installed when installing Unity.

How to do it...
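The Type, Refresh Mode, and Time Slicing settings discussed above can also be configured from code rather than the Inspector. The following is a minimal sketch, assuming Unity 5's ReflectionProbe scripting API (the enums live in the UnityEngine.Rendering namespace):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: configuring a Reflection Probe's refresh behavior from code,
// mirroring the Inspector settings discussed above.
public class ProbeSettingsSketch : MonoBehaviour
{
    void Awake()
    {
        ReflectionProbe probe = GetComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Realtime;
        probe.refreshMode = ReflectionProbeRefreshMode.ViaScripting;
        // Individual faces: cheapest per frame, probe updated over 14 frames
        probe.timeSlicingMode = ReflectionProbeTimeSlicingMode.IndividualFaces;
        // Trigger the first render manually, since we refresh via scripting
        probe.RenderProbe();
    }
}
```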
To set up Environment Lighting using the Procedural Skybox and Directional Light, follow these steps:

1. Create a new scene inside a Unity project. Observe that a new scene already includes two objects: the Main Camera and a Directional Light.
2. Add some cubes to your scene, including one at Position X: 0; Y: 0; Z: 0, scaled to X: 20; Y: 1; Z: 20, to be used as the ground.
3. Using the Create drop-down menu from the Project view, create a new Material and name it MySkybox.
4. From the Inspector view, use the appropriate drop-down menu to change the Shader of MySkybox from Standard to Skybox/Procedural.
5. Open the Lighting window (menu Window | Lighting) and access the Scene section. In the Environment Lighting subsection, populate the Skybox slot with the MySkybox material, and the Sun slot with the Directional Light from the scene.
6. From the Project view, select MySkybox. Then, from the Inspector view, set Sun size to 0.05 and Atmosphere Thickness to 1.4. Experiment by changing the Sky Tint color to RGB: 148; 128; 128, and the Ground color to a value that resembles the scene cube floor's color (such as RGB: 202; 202; 202). If you feel the scene is too bright, try bringing the Exposure level down to 0.85.
7. Select the Directional Light and change its Rotation to X: 5; Y: 170; Z: 0. Note that the scene should now resemble a dawning environment.
8. Let's make things even more interesting. Using the Create drop-down menu in the Project view, create a new C# Script named RotateLight.
9. Open your script and replace everything with the following code:

using UnityEngine;
using System.Collections;

public class RotateLight : MonoBehaviour
{
    public float speed = -1.0f;

    void Update()
    {
        transform.Rotate(Vector3.right * speed * Time.deltaTime);
    }
}

10. Save it and add it as a component to the Directional Light.
11. Import the Effects Assets package into your project (via the Assets | Import Package | Effects menu).
12. Select the Directional Light. Then, from the Inspector view, in the Light component, populate the Flare slot with the Sun flare.
13. From the Scene section of the Lighting window, find the Other Settings subsection. Then, set Flare Fade Speed to 3 and Flare Strength to 0.5.
14. Play the scene. You will see the sun rising and the Skybox colors changing accordingly.

How it works...
Ultimately, the appearance of Unity's native Procedural Skybox depends on the five parameters that make it up:

Sun size: The size of the bright yellow sun that is drawn onto the skybox, located according to the Directional Light's Rotation on the X and Y axes.
Atmosphere Thickness: This simulates how dense the atmosphere is for this skybox. Lower values (less than 1.0) are good for simulating outer space settings. Moderate values (around 1.0) are suitable for earth-based environments. Values slightly above 1.0 can be useful when simulating air pollution and other dramatic settings. Exaggerated values (more than 2.0) can help to illustrate extreme conditions or even alien settings.
Sky Tint: The color that is used to tint the skybox. It is useful for fine-tuning or creating stylized environments.
Ground: This is the color of the ground. It can really affect the Global Illumination of the scene, so choose a value that is close to the level's terrain and/or geometry (or a neutral one).
Exposure: This determines the amount of light that gets into the skybox. Higher levels simulate overexposure, while lower values simulate underexposure.

It is important to notice that the Skybox's appearance will respond to the scene's Directional Light, which plays the role of the Sun. In this case, rotating the light around its X axis can create dawn and sunset scenarios, whereas rotating it around its Y axis will change the position of the sun, changing the cardinal points of the scene.
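The five parameters above are exposed as properties of the Skybox/Procedural shader, so they can also be driven at runtime from a script. The sketch below assumes the shader property names _AtmosphereThickness and _Exposure, which should be verified against the shader's source before relying on them:

```csharp
using UnityEngine;

// Sketch: tweaking the Procedural Skybox at runtime by driving its
// material properties. The property names are assumptions based on the
// Skybox/Procedural shader and should be checked in the shader source.
public class SkyboxTweaker : MonoBehaviour
{
    public float atmosphereThickness = 1.4f;
    public float exposure = 0.85f;

    void Update()
    {
        // The skybox material currently assigned in the Lighting window
        Material sky = RenderSettings.skybox;
        sky.SetFloat("_AtmosphereThickness", atmosphereThickness);
        sky.SetFloat("_Exposure", exposure);
        // Ask Unity to refresh the environment lighting to match the new sky
        DynamicGI.UpdateEnvironment();
    }
}
```

Note that, as with the rotating Directional Light in this recipe, runtime changes like these only affect the scene's Global Illumination when Ambient GI is set to Realtime.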
Also, regarding the Environment Lighting, note that although we have used the Skybox as the Ambient Source, we could have chosen a Gradient or a single Color instead, in which case the scene's illumination wouldn't be attached to the Skybox's appearance.

Finally, also regarding the Environment Lighting, please note that we have set the Ambient GI to Realtime. The reason for this was to allow real-time changes in the GI, promoted by the rotating Directional Light. In case we didn't need these changes at runtime, we could have chosen the Baked alternative.

Summary
In this article, you have taken a hands-on approach to a number of Unity's lighting system features, such as cookie textures, Reflection maps, Lightmaps, Light and Reflection Probes, and Procedural Skyboxes. The article also demonstrated the use of Projectors.

Resources for Article:
Further resources on this subject:
Animation features in Unity 5 [article]
Scripting Strategies [article]
Editor Tool, Prefabs, and Main Menu [article]