
How-To Tutorials - 3D Game Development

115 Articles

Ogre 3D: Fixed Function Pipeline and Shaders

Packt
25 Nov 2010
13 min read
OGRE 3D 1.7 Beginner's Guide — create real-time 3D applications using OGRE 3D from scratch:

- Easy-to-follow introduction to OGRE 3D
- Create exciting 3D applications using OGRE 3D
- Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins
- Get challenged to be creative and make fun and addictive games on your own
- A hands-on do-it-yourself approach with over 100 examples

Introduction

The Fixed Function Pipeline is the rendering pipeline on the graphics card that produces those nice shiny pictures we love looking at. As the prefix Fixed suggests, the developer doesn't have a lot of freedom to manipulate the Fixed Function Pipeline. We can tweak some parameters using the material files, but nothing fancy. That's where shaders can help fill the gap. Shaders are small programs that can be loaded onto the graphics card and then function as a part of the rendering process. They can be thought of as little programs written in a C-like language with a small but powerful set of functions. With shaders, we can almost completely control how our scene is rendered and also add a lot of new effects that weren't possible with only the Fixed Function Pipeline.

Render Pipeline

To understand shaders, we first need to understand how the rendering process works as a whole. When rendering, each vertex of our model is translated from local space into camera space, and then each triangle gets rasterized. This means the graphics card calculates how to represent the model in an image. These image parts are called fragments. Each fragment is then processed and manipulated: we could apply a specific part of a texture to a fragment to texture our model, or we could simply assign it a color when rendering a model in only one color. After this processing, the graphics card tests whether the fragment is covered by another fragment that is nearer to the camera, or whether it is the fragment nearest to the camera. If it is the nearest, the fragment gets displayed on the screen. On newer hardware, this test can occur before the fragment is processed, which can save a lot of computation time if most of the fragments won't be seen in the end result. (The original article includes a very simplified diagram of this pipeline.)

With almost every new graphics card generation, new shader types have been introduced. It began with vertex and pixel/fragment shaders. The task of the vertex shader is to transform the vertices into camera space and, if needed, modify them in any way, such as when doing animations completely on the GPU. The pixel/fragment shader gets the rasterized fragments and can apply a texture to them or manipulate them in other ways, for example, for lighting models with per-pixel accuracy.

Time for action – our first shader application

Let's write our first vertex and fragment shaders:

1. In our application, we only need to change the material that is used. Change it to MyMaterial13 and also remove the second quad:

   manual->begin("MyMaterial13", RenderOperation::OT_TRIANGLE_LIST);

2. Now we need to create this material in our material file. First, we are going to define the fragment shader.
   Ogre 3D needs five pieces of information about the shader:

   - The name of the shader
   - In which language it is written
   - In which source file it is stored
   - What the main function of this shader is called
   - In which profile we want the shader to be compiled

   All this information goes into the material file:

   fragment_program MyFragmentShader1 cg
   {
      source Ogre3DBeginnersGuideShaders.cg
      entry_point MyFragmentShader1
      profiles ps_1_1 arbfp1
   }

3. The vertex shader needs the same parameters, but we also have to define a parameter that is passed from Ogre 3D to our shader. This contains the matrix that we will use for transforming our quad into camera space:

   vertex_program MyVertexShader1 cg
   {
      source Ogre3DBeginnersGuideShaders.cg
      entry_point MyVertexShader1
      profiles vs_1_1 arbvp1

      default_params
      {
         param_named_auto worldViewMatrix worldviewproj_matrix
      }
   }

4. The material itself just uses the vertex and fragment shader names to reference them:

   material MyMaterial13
   {
      technique
      {
         pass
         {
            vertex_program_ref MyVertexShader1
            {
            }
            fragment_program_ref MyFragmentShader1
            {
            }
         }
      }
   }

5. Now we need to write the shader itself. Create a file named Ogre3DBeginnersGuideShaders.cg in the media/materials/programs folder of your Ogre 3D SDK. Each shader looks like a function. One difference is that we can use the out keyword to mark a parameter as an outgoing parameter instead of the default incoming parameter. The out parameters are used by the rendering pipeline for the next rendering step: the out parameters of a vertex shader are processed and then passed into the pixel shader as in parameters, and the out parameter of a pixel shader is used to create the final render result. Remember to use the correct name for the function; otherwise, Ogre 3D won't find it.

6. Let's begin with the fragment shader because it's easier:

   void MyFragmentShader1(out float4 color: COLOR)

7. The fragment shader will return the color blue for every pixel we render:

   {
      color = float4(0,0,1,0);
   }

8. That's all for the fragment shader; now we come to the vertex shader. The vertex shader has three parameters—the position of the vertex, the translated position of the vertex as an out variable, and a uniform variable for the matrix we are using for the translation:

   void MyVertexShader1(
      float4 position : POSITION,
      out float4 oPosition : POSITION,
      uniform float4x4 worldViewMatrix)

9. Inside the shader, we use the matrix and the incoming position to calculate the outgoing position:

   {
      oPosition = mul(worldViewMatrix, position);
   }

10. Compile and run the application. You should see our quad, this time rendered in blue.

What just happened?

Quite a lot happened here; we will start with step 2. Here we defined the fragment shader we are going to use. Ogre 3D needs five pieces of information for a shader. We define a fragment shader with the keyword fragment_program, followed by the name we want the fragment program to have, then a space, and at the end, the language in which the shader will be written. As with programs in general, shader code was originally written in assembly: in the early days, programmers had to write shader code in assembly because there wasn't any other language to use. But, as with general programming languages, high-level languages soon arrived to ease the pain of writing shader code. At the moment, there are three different languages that shaders can be written in: HLSL, GLSL, and CG. The shader language HLSL is used by DirectX, and GLSL is the language used by OpenGL. CG was developed by NVIDIA in cooperation with Microsoft and is the language we are going to use.
These high-level languages are compiled during the startup of our application into their respective assembly code. So shaders written in HLSL can only be used with DirectX, and GLSL shaders only with OpenGL. CG, however, can compile to both DirectX and OpenGL shader assembly code; that's the reason why we are using it, to be truly cross-platform.

That's two of the five pieces of information that Ogre 3D needs. The other three are given in the curly brackets. The syntax is like a property file—first the key and then the value. One key we use is source, followed by the file where the shader is stored. We don't need to give the full path; just the filename will do, because Ogre 3D scans our directories and only needs the filename to find the file. Another key we are using is entry_point, followed by the name of the function we are going to use for the shader. In the code file, we created a function called MyFragmentShader1 and we are giving Ogre 3D this name as the entry point for our fragment shader. This means that each time we need the fragment shader, this function is called.

The function has only one parameter, out float4 color : COLOR. The prefix out signals that this parameter is an out parameter, meaning we will write a value into it which will be used by the render pipeline later on. The type of this parameter is called float4, which is simply an array of four float values. For colors, we can think of it as a tuple (r,g,b,a), where r stands for red, g for green, b for blue, and a for alpha: the typical tuple used to describe colors. After the name of the parameter, there is a : COLOR. In CG, this is called a semantic, describing what the parameter is used for in the context of the render pipeline. The semantic : COLOR tells the render pipeline that this is a color. In combination with the out keyword and the fact that this is a fragment shader, the render pipeline can deduce that this is the color we want our fragment to have.

The last piece of information we supply uses the keyword profiles with the values ps_1_1 and arbfp1. To understand this, we need to talk a bit about the history of shaders. With each generation of graphics cards, a new generation of shaders has been introduced. What started as fairly simple C-like programming languages without even IF conditions are now really complex and powerful programming languages. Right now, there are several different shader versions, each with a unique function set. Ogre 3D needs to know which of these versions we want to use. ps_1_1 means pixel shader version 1.1, and arbfp1 means fragment program version 1. We need both profiles because ps_1_1 is a DirectX-specific function set and arbfp1 is a function subset for OpenGL. We say we are cross-platform, but sometimes we still need to define values for both platforms. All profiles can be found at http://www.ogre3d.org/docs/manual/manual_18.html. That's all that is needed to define the fragment shader in our material file.

In step 3, we defined our vertex shader. This part is very similar to the fragment shader definition code; the main difference is the default_params block. This block defines parameters that are given to the shader during runtime. param_named_auto defines a parameter that is automatically passed to the shader by Ogre 3D. After this key, we need to give the parameter a name, and after that, the keyword for the value we want it to have. We name the parameter worldViewMatrix; any other name would also work. The value we want it to have has the key worldviewproj_matrix. This key tells Ogre 3D that we want our parameter to have the value of the WorldViewProjection matrix. This matrix is used for transforming vertices from local into camera space. A list of all keyword values can be found at http://www.ogre3d.org/docs/manual/manual_23.html#SEC128. How we use these values will be seen shortly.

Step 4 used the work we did before. As always, we defined our material with one technique and one pass; we didn't define a texture unit, but used the keyword vertex_program_ref. After this keyword, we need to put the name of a vertex program we defined—in our case, this is MyVertexShader1. If we wanted, we could have put some more parameters into the definition, but we didn't need to, so we just opened and closed the block with curly brackets. The same is true for fragment_program_ref.

Writing a shader

Now that we have defined all the necessary things in our material file, let's write the shader code itself. Step 6 defines the function head with the parameter we discussed before, so we won't go deeper here. Step 7 defines the function body; for this fragment shader, the body is extremely simple. We created a new float4 tuple (0,0,1,0), which describes the color blue, and assigned this color to our out parameter color. The effect is that everything rendered with this material will be blue. There isn't more to the fragment shader, so let's move on to the vertex shader.

Step 8 defines the function header. The vertex shader has three parameters—two are marked as positions using CG semantics, and the third is a 4x4 matrix of float values named worldViewMatrix. Before this parameter's type definition, there is the keyword uniform. Each time our vertex shader is called, it gets a new vertex as the position parameter input, calculates the position of this new vertex, and saves it in the oPosition parameter. This means that with each call, these parameters change. This isn't true for the worldViewMatrix. The keyword uniform denotes parameters that are constant over one draw call. When we render our quad, the worldViewMatrix doesn't change, while the rest of the parameters are different for each vertex processed by our vertex shader. Of course, in the next frame, the worldViewMatrix will probably have changed.

Step 9 creates the body of the vertex shader. In the body, we multiply the vertex that we got with the matrix to get the vertex translated into camera space. This translated vertex is saved in the out parameter to be processed by the rendering pipeline. We will look more closely into the render pipeline after we have experimented with shaders a bit more.

Texturing with shaders

We have painted our quad in blue, but we would like to use the previous texture.

Time for action – using textures in shaders

1. Create a new material named MyMaterial14. Also create two new shaders named MyFragmentShader2 and MyVertexShader2. Remember to copy the fragment and vertex program definitions into the material file. Add a texture unit with the rock texture to the material:

   texture_unit
   {
      texture terr_rock6.jpg
   }

2. We need to add two new parameters to our fragment shader. The first is a two-tuple of floats for the texture coordinates; we use the TEXCOORD0 semantic to mark the parameter as the first set of texture coordinates we are using. The other new parameter is of type sampler2D, which is another name for a texture. Because the texture doesn't change on a per-fragment basis, we mark it as uniform.
   This keyword indicates that the parameter value comes from outside the CG program and is set by the rendering environment—in our case, by Ogre 3D:

   void MyFragmentShader2(float2 uv : TEXCOORD0,
                          out float4 color : COLOR,
                          uniform sampler2D texture)

3. In the fragment shader, replace the color assignment with the following line:

   color = tex2D(texture, uv);

4. The vertex shader also needs some new parameters—one float2 for the incoming texture coordinates and one float2 for the outgoing texture coordinates. Both use the TEXCOORD0 semantic because one is the incoming and the other is the outgoing TEXCOORD0:

   void MyVertexShader2(
      float4 position : POSITION,
      out float4 oPosition : POSITION,
      float2 uv : TEXCOORD0,
      out float2 oUv : TEXCOORD0,
      uniform float4x4 worldViewMatrix)

5. In the body, we calculate the outgoing position of the vertex:

   oPosition = mul(worldViewMatrix, position);

6. For the texture coordinates, we assign the incoming value to the outgoing value:

   oUv = uv;

7. Remember to change the used material in the application code, and then compile and run it. You should see the quad with the rock texture.
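For reference, here is roughly what the complete Ogre3DBeginnersGuideShaders.cg file looks like once the snippets from both examples above are gathered into one place; nothing new is added beyond the article's own code:

   // First example: render everything in plain blue
   void MyFragmentShader1(out float4 color : COLOR)
   {
      color = float4(0,0,1,0);
   }

   // Transform the vertex from local space into camera space
   void MyVertexShader1(float4 position : POSITION,
                        out float4 oPosition : POSITION,
                        uniform float4x4 worldViewMatrix)
   {
      oPosition = mul(worldViewMatrix, position);
   }

   // Second example: sample the rock texture at the interpolated UV coordinates
   void MyFragmentShader2(float2 uv : TEXCOORD0,
                          out float4 color : COLOR,
                          uniform sampler2D texture)
   {
      color = tex2D(texture, uv);
   }

   // Pass the texture coordinates through to the fragment shader
   void MyVertexShader2(float4 position : POSITION,
                        out float4 oPosition : POSITION,
                        float2 uv : TEXCOORD0,
                        out float2 oUv : TEXCOORD0,
                        uniform float4x4 worldViewMatrix)
   {
      oPosition = mul(worldViewMatrix, position);
      oUv = uv;
   }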


Materials with Ogre 3D

Packt
25 Nov 2010
7 min read
OGRE 3D 1.7 Beginner's Guide — a hands-on, do-it-yourself introduction to creating real-time 3D applications with OGRE 3D, with over 100 examples.

Creating a white quad

We will use this to create a sample quad that we can experiment with.

Time for action – creating the quad

We will start with an empty application and insert the code for our quad into the createScene() function:

1. Begin by creating the manual object:

   Ogre::ManualObject* manual = mSceneMgr->createManualObject("Quad");
   manual->begin("BaseWhiteNoLighting", RenderOperation::OT_TRIANGLE_LIST);

2. Create four points for our quad:

   manual->position(5.0, 0.0, 0.0);
   manual->textureCoord(0,1);
   manual->position(-5.0, 10.0, 0.0);
   manual->textureCoord(1,0);
   manual->position(-5.0, 0.0, 0.0);
   manual->textureCoord(1,1);
   manual->position(5.0, 10.0, 0.0);
   manual->textureCoord(0,0);

3. Use indices to describe the quad:

   manual->index(0);
   manual->index(1);
   manual->index(2);
   manual->index(0);
   manual->index(3);
   manual->index(1);

4. Finish the manual object and convert it to a mesh:

   manual->end();
   manual->convertToMesh("Quad");

5. Create an instance of the entity and attach it to the scene using a scene node:

   Ogre::Entity* ent = mSceneMgr->createEntity("Quad");
   Ogre::SceneNode* node = mSceneMgr->getRootSceneNode()->createChildSceneNode("Node1");
   node->attachObject(ent);

6. Compile and run the application. You should see a white quad.

What just happened?

We used our knowledge to create a quad and attach to it a material that simply renders everything in white. The next step is to create our own material.

Creating our own material

Always rendering everything in white isn't exactly exciting, so let's create our first material.

Time for action – creating a material

Now, we are going to create our own material using the white quad we created.

1. Change the material name in the application from BaseWhiteNoLighting to MyMaterial1:

   manual->begin("MyMaterial1", RenderOperation::OT_TRIANGLE_LIST);

2. Create a new file named Ogre3DBeginnersGuide.material in the media/materials/scripts folder of our Ogre 3D SDK.

3. Write the following code into the material file:

   material MyMaterial1
   {
      technique
      {
         pass
         {
            texture_unit
            {
               texture leaf.png
            }
         }
      }
   }

4. Compile and run the application. You should see a white quad with a plant drawn onto it.

What just happened?

We created our first material file. In Ogre 3D, materials can be defined in material files. To be able to find our material files, we need to put them in a directory listed in resources.cfg, like the one we used. We could also give the path to the file directly in code using the ResourceManager. To use the material defined in the material file, we just had to use its name during the begin call of the manual object. The interesting part is the material file itself.

Materials

Each material starts with the keyword material, the name of the material, and then an opening curly bracket. To end the material, use a closing curly bracket—this technique should be very familiar to you by now. Each material consists of one or more techniques; a technique describes a way to achieve the desired effect. Because there are a lot of different graphics cards with different capabilities, we can define several techniques, and Ogre 3D goes from top to bottom and selects the first technique that is supported by the user's graphics card.

Inside a technique, we can have several passes. A pass is a single rendering of your geometry. For most of the materials we are going to create, we only need one pass. However, some more complex materials might need two or three passes, so Ogre 3D enables us to define several passes per technique. In this pass, we only define a texture unit. A texture unit defines one texture and its properties. This time the only property we define is the texture to be used. We use leaf.png as the image for our texture. This texture comes with the SDK and is in a folder that gets indexed by resources.cfg, so we can use it without any extra work on our side.

Have a go hero – creating another material

Create a new material called MyMaterial2 that uses Water02.jpg as an image.

Texture coordinates take two

There are different strategies used when texture coordinates are outside the 0 to 1 range. Now, let's create some materials to see them in action.

Time for action – preparing our quad

We are going to use the quad from the previous example with the leaf texture material:

1. Change the texture coordinates of the quad from the range 0 to 1 to the range 0 to 2. The quad code should then look like this:

   manual->position(5.0, 0.0, 0.0);
   manual->textureCoord(0,2);
   manual->position(-5.0, 10.0, 0.0);
   manual->textureCoord(2,0);
   manual->position(-5.0, 0.0, 0.0);
   manual->textureCoord(2,2);
   manual->position(5.0, 10.0, 0.0);
   manual->textureCoord(0,0);

2. Now compile and run the application. Just as before, we will see a quad with a leaf texture, but this time we will see the texture four times.

What just happened?

We simply changed our quad to have texture coordinates that range from zero to two. This means that Ogre 3D needs to use one of its strategies to render texture coordinates that are larger than 1. The default mode is wrap: each value over 1 is wrapped back to be between zero and one. The original article includes a diagram showing this effect and how the texture coordinates are wrapped; outside the corners are the original texture coordinates, inside the corners are the values after wrapping, and the four texture repetitions are shown with their implicit texture coordinates.

We have seen how our texture gets wrapped using the default texture wrapping mode. Our plant texture shows the effect pretty well, but it doesn't show the usefulness of this technique. Let's use another texture to see the benefits of the wrapping mode.

Using the wrapping mode with another texture

Time for action – adding a rock texture

For this example, we are going to use another texture; otherwise, we wouldn't see the effect of this texture mode:

1. Create a new material similar to the previous one, except change the used texture to terr_rock6.jpg:

   material MyMaterial3
   {
      technique
      {
         pass
         {
            texture_unit
            {
               texture terr_rock6.jpg
            }
         }
      }
   }

2. Change the used material from MyMaterial1 to MyMaterial3:

   manual->begin("MyMaterial3", RenderOperation::OT_TRIANGLE_LIST);

3. Compile and run the application. You should see a quad covered in a rock texture.

What just happened?

This time, the quad seems like it's covered in one single texture. We don't see any obvious repetitions like we did with the plant texture. The reason for this is that, as we already know, the texture wrapping mode repeats the texture. The texture was created in such a way that at the left end of the texture, the texture starts again with its right side, and the same is true for the lower end. This kind of texture is called seamless. The texture we used was prepared so that the left and right sides fit perfectly together, and the same goes for the upper and lower parts of the texture. If this weren't the case, we would see the places where the texture repeats.
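For the "Have a go hero" exercise above, one possible MyMaterial2 definition would look like the following sketch, assuming Water02.jpg is one of the textures shipped with the SDK and indexed by resources.cfg:

   material MyMaterial2
   {
      technique
      {
         pass
         {
            texture_unit
            {
               // swap in the water texture named in the exercise
               texture Water02.jpg
            }
         }
      }
   }

As with MyMaterial1, using it is just a matter of passing its name to manual->begin() in the application code.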


Ogre 3D: Double Buffering

Packt
25 Nov 2010
5 min read
OGRE 3D 1.7 Beginner's Guide — a hands-on, do-it-yourself introduction to creating real-time 3D applications with OGRE 3D, with over 100 examples.

Introduction

When a scene is rendered, it normally isn't rendered directly to the buffer that is displayed on the monitor. Instead, the scene is rendered to a second buffer and, when the rendering is finished, the buffers are swapped. This is done to prevent the artifacts that can appear if we render to the same buffer that is displayed on the monitor.

The FrameListener function frameRenderingQueued is called after the scene has been rendered to the back buffer—the buffer that isn't displayed at the moment. Before the buffers are swapped, the rendering result has already been created but is not yet displayed. Directly after the frameRenderingQueued function is called, the buffers get swapped, and then the application gets the return value and closes itself. That's the reason why we see an image this time. Now, we will see what happens when frameRenderingQueued also returns true.

Time for action – returning true in the frameRenderingQueued function

Once again we modify the code to test the behavior of the FrameListener:

1. Change frameRenderingQueued to return true:

   bool frameRenderingQueued(const Ogre::FrameEvent& evt)
   {
      std::cout << "Frame queued" << std::endl;
      return true;
   }

2. Compile and run the application. You should see Sinbad for a short period of time before the application closes, and the following three lines should be in the console output:

   Frame started
   Frame queued
   Frame ended

What just happened?

Now that the frameRenderingQueued handler returns true, it will let Ogre 3D continue to render until the frameEnded handler returns false. As in the last example, the render buffers were swapped, so we saw the scene for a short period of time. After the frame was rendered, the frameEnded function returned false, which closes the application and, in this case, doesn't change anything from our perspective.

Time for action – returning true in the frameEnded function

Now let's test the last of the three possibilities.

1. Change frameEnded to return true:

   bool frameEnded(const Ogre::FrameEvent& evt)
   {
      std::cout << "Frame ended" << std::endl;
      return true;
   }

2. Compile and run the application. You should see the scene with Sinbad and an endless repetition of the following three lines:

   Frame started
   Frame queued
   Frame ended

What just happened?

Now all event handlers return true and, therefore, the application will never close itself; it would run forever, as long as we don't close it ourselves.

Adding input

We have an application that runs forever and have to force it to close; that's not neat. Let's add input and the possibility to close the application by pressing Escape.

Time for action – adding input

Now that we know how the FrameListener works, let's add some input.

1. We need to include the OIS header file to use OIS:

   #include "OIS/OIS.h"

2. Remove all functions from the FrameListener and add two private members to store the InputManager and the Keyboard:

   OIS::InputManager* _InputManager;
   OIS::Keyboard* _Keyboard;

3. The FrameListener needs a pointer to the RenderWindow to initialize OIS, so we need a constructor that takes the window as a parameter:

   MyFrameListener(Ogre::RenderWindow* win)
   {

4. OIS will be initialized using a list of parameters; we also need a window handle in string form for the parameter list. Create the three variables needed to store the data:

   OIS::ParamList parameters;
   unsigned int windowHandle = 0;
   std::ostringstream windowHandleString;

5. Get the handle of the RenderWindow and convert it into a string:

   win->getCustomAttribute("WINDOW", &windowHandle);
   windowHandleString << windowHandle;

6. Add the string containing the window handle to the parameter list using the key "WINDOW":

   parameters.insert(std::make_pair("WINDOW", windowHandleString.str()));

7. Use the parameter list to create the InputManager:

   _InputManager = OIS::InputManager::createInputSystem(parameters);

8. With the manager, create the keyboard:

   _Keyboard = static_cast<OIS::Keyboard*>(
      _InputManager->createInputObject(OIS::OISKeyboard, false));

9. What we created in the constructor, we need to destroy in the destructor:

   ~MyFrameListener()
   {
      _InputManager->destroyInputObject(_Keyboard);
      OIS::InputManager::destroyInputSystem(_InputManager);
   }

10. Create a new frameStarted function, which captures the current state of the keyboard; if Escape is pressed, it returns false, otherwise it returns true:

   bool frameStarted(const Ogre::FrameEvent& evt)
   {
      _Keyboard->capture();
      if(_Keyboard->isKeyDown(OIS::KC_ESCAPE))
      {
         return false;
      }
      return true;
   }

11. The last thing to do is to change the instantiation of the FrameListener in the startup function so that it gets a pointer to the render window:

   _listener = new MyFrameListener(window);
   _root->addFrameListener(_listener);

12. Compile and run the application. You should see the scene and now be able to close it by pressing the Escape key.

What just happened?

We added input processing capabilities to our FrameListener, and we didn't use any of the example classes—only our own versions.
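Putting the pieces from these steps together, the FrameListener might end up looking roughly like the following sketch. The class name, member names, and the assumption that it derives from Ogre::FrameListener all follow the snippets above; include paths vary with your SDK layout:

   #include <sstream>
   #include <OGRE/Ogre.h>
   #include "OIS/OIS.h"

   class MyFrameListener : public Ogre::FrameListener
   {
   public:
      MyFrameListener(Ogre::RenderWindow* win)
      {
         // OIS needs the window handle, passed as a string, to hook into our window
         OIS::ParamList parameters;
         unsigned int windowHandle = 0;
         std::ostringstream windowHandleString;
         win->getCustomAttribute("WINDOW", &windowHandle);
         windowHandleString << windowHandle;
         parameters.insert(std::make_pair("WINDOW", windowHandleString.str()));

         _InputManager = OIS::InputManager::createInputSystem(parameters);
         _Keyboard = static_cast<OIS::Keyboard*>(
            _InputManager->createInputObject(OIS::OISKeyboard, false));
      }

      ~MyFrameListener()
      {
         _InputManager->destroyInputObject(_Keyboard);
         OIS::InputManager::destroyInputSystem(_InputManager);
      }

      bool frameStarted(const Ogre::FrameEvent& evt)
      {
         // Poll the keyboard once per frame; returning false ends the render loop
         _Keyboard->capture();
         return !_Keyboard->isKeyDown(OIS::KC_ESCAPE);
      }

   private:
      OIS::InputManager* _InputManager;
      OIS::Keyboard*     _Keyboard;
   };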


Introduction to Blender 2.5 Color Grading - A Sequel

Packt
18 Nov 2010
2 min read
Colorizing with hue adjustment

For a quick and dirty colorization of images, hue adjustment is your best friend. However, the danger with using hue adjustment is that you don't have as much control over your tones as you do with color curves.

To add the hue adjustment node in Blender's Node Editor Window, press SHIFT A, then choose Color, then finally Hue Saturation Value. This adds the Hue Saturation Value Node, which is basically used to adjust the image's tint (hue), saturation (from grayscale to vibrant colors), and value (brightness). Later on in this article, you'll see just how useful this node can be, but for now, let's stick with just the hue adjustment aspect of this node.

To colorize your images, simply slide the Hue slider. When using the hue slider, it's a good rule of thumb to keep adjustments to a minimum, but for special purposes, you can set them however you want. The original article shows example renders with the Hue slider at 0.0, 0.209, 0.333, 0.431, 0.671, 0.853, and 1.0.
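The same node can also be added from a script. The following is a minimal Python sketch of the idea; the node identifier and property names (CompositorNodeHueSat, color_hue, and so on) have changed between Blender versions, so treat it as illustrative rather than exact:

   import bpy

   scene = bpy.context.scene
   scene.use_nodes = True              # same as enabling "Use Nodes" in the Node Editor
   tree = scene.node_tree

   # Add a Hue Saturation Value node (Add > Color > Hue Saturation Value)
   hsv = tree.nodes.new(type='CompositorNodeHueSat')
   hsv.color_hue = 0.333               # the Hue slider
   hsv.color_saturation = 1.0
   hsv.color_value = 1.0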


Introduction to Blender 2.5: Color Grading

Packt
11 Nov 2010
11 min read
Blender 2.5 Lighting and Rendering Bring your 3D world to life with lighting, compositing, and rendering Render spectacular scenes with realistic lighting in any 3D application using interior and exterior lighting techniques Give an amazing look to 3D scenes by applying light rigs and shadow effects Apply color effects to your scene by changing the World and Lamp color values A step-by-step guide with practical examples that help add dimensionality to your scene        I would like to thank a few people who have made this all possible and I wouldn't be inspired doing this now without their great aid: To Francois Tarlier (http://www.francois-tarlier.com) for patiently bearing with my questions, for sharing his thoughts on color grading with Blender, and for simply developing things to make these things existent in Blender. A clear example of this would be the addition of the Color Balance Node in Blender 2.5's Node Compositor (which I couldn't live without). To Matt Ebb (http://mke3.net/) for creating tools to make Blender's Compositor better and for supporting the efforts of making one. And lastly, to Stu Maschwitz (http://www.prolost.com) for his amazing tips and tricks on color grading. Now, for some explanation. Color grading is usually defined as the process of altering and/or enhancing the colors of a motion picture or a still image. Traditionally, this happens by altering the subject photo-chemically (color timing) in a laboratory. But with modern tools and techniques, color grading can now be achieved digitally. Software like Apple's Final Cut Pro, Adobe's After Effects, Red Giant Software’s Magic Bullet Looks, etc. Luckily, the latest version of Blender has support for color grading by using a selection and plethora of nodes that will then process our input accordingly. However, I really want to stress here that often, it doesn't matter what tools you use, it all really depends on how crafty and artistic you are, regardless of whatever features your application has. Normally, color grading could also be related to color correction in some ways, however strictly speaking, color correction deals majorly on a “correctional” aspect (white balancing, temperature changes, etc.) rather than a specific alteration that would otherwise be achieved when applied with color grading. With color grading, we can turn a motion picture or still image into different types of mood and time of the day, we can fake lens filters and distortions, highlight part of an image via bright spotting, remove red eye effects, denoise an image, add glares, and a lot more. With all the things mentioned above, they can be grouped into three major categories, namely: Color Balancing Contrasting Stylization Material Variation Compensation With Color Balancing, we are trying to fix tint errors and colorizations that occurred during hardware post-production, something that would happen when recording the data into, say, a camera's memory right after it has been internally processed. Or sometimes, this could also be applied to fix some white balance errors that were overlooked while shooting or recording. These are, however, non-solid rules that aren't followed all the time. We can, however, use color balancing to simply correct the tones of an image or frame such that the human skin will look more natural with respect to the scene it is located at. Contrasting deals with how subject/s are emphasized with respect to the scene it is located at. It could also refer to vibrance and high dynamic imaging. 
It could also be just a general method of “popping out” necessary details present in a frame. Stylization refers to effects that are added on top of the original footage/image after applying color correction, balancing, etc. Some examples would be: dreamy effect, day to night conversion, retro effect, sepia, and many more. And last but not the least is Material Variation Compensation. Often, as artists, there will come a point in time that after hours and hours of waiting for your renders to finish, you will realize at the last minute that something is just not right with how the materials are set up. If you're on a tight deadline, rerendering the entire sequence or frame is not an option. Thankfully, but not absolute all the time, we can compensate this by using color grading techniques to specifically tell Blender to adjust just a portion of an image that looks wrong and save us a ton of time if we were to rerender again. However, with the vast topics that Color Grading has, I can only assume that I will only be leading you to the introductory steps to get you started and for you to have a basis for your own experiments. To have a view of what we could possibly discuss, you can check some of the videos I've done here: http://vimeo.com/13262256 http://vimeo.com/13995077 And to those of you interested with some presets, Francois Tarlier has provided some in this page http://code.google.com/p/ft-projects/downloads/list. Outlining some of the aspects that we'll go through in Part 1 of this article, here's a list of the things we will be doing: Loading Image Files in the Compositor Loading Sequence Files in the Compositor Loading Movie Files in the Compositor Contrasting with Color Curves Colorizing with Color Curves Color Correcting with Color Curves And before we start, here are some prerequisites that you should have: Latest Blender 2.5 version (grab one from http://www.graphicall.org or from the latest svn updates) Movies, Footages, Animations (check http://www.stockfootageforfree.com for free stock footages) Still Images Intermediate Blender skill level Initialization With all the prerequisites met and before we get our hands dirty, there are some things we need to do. Fire up Blender 2.5 and you'll notice (by default) that Blender starts with a cool splash screen and with it on the upper right hand portion, you can see the Blender version number and the revision number. As much as possible, you would want to have a similar revision number as what we'll be using here, or better yet, a newer one. This will ensure that tools we'll be using are up to date, bug free, and possibly feature-pumped. Move the mouse over the image to enlarge it. (Blender 2.5 Initial Startup Screen) After we have ensured we have the right version (and revision number) of Blender, it's time to set up our scenes and screens accordingly to match our ideal workflow later on. Before starting any color grading session, make sure you have a clear plan of what you want to achieve and to do with your footages and images. This way you can eliminate the guessing part and save a lot of time in the process. Next step is to make sure we are in the proper screen for doing color grading. You'll see in the menu bar at the top that we are using the “Default” screen. This is useful for general-purpose Blender workflow like Modeling, Lighting, and Shading setup. To harness Blender's intuitive interface, we'll go ahead and change this screen to something more obvious and useful. 
(Screen Selection Menu)

Click the button on the left of the screen selection menu and you'll see a list of screens to choose from. For this purpose, we'll choose "Compositing". After enabling the screen, you'll notice that Blender's default layout has changed to something more varied, though not very dramatically.

(Choosing the Compositing Screen)

The Compositing screen lets us work seamlessly with color grading because, by default, it has everything we need to start our session. By default, the compositing screen has the Node Editor on top, the UV/Image Editor on the lower left-hand side, and the 3D View on the lower right-hand side. In the far right corner, at the same height as these three windows, is the Properties Window, and lastly (but not so obviously) is the Timeline Window, which sits just below the Properties Window in the far lower right corner of your screen.

Since we won't be digging too much into Blender's 3D aspect here, we can go ahead and ignore the lower right view (3D View), or better yet, let's merge the UV/Image Editor into the 3D View so that the UV/Image Editor occupies most of the lower half of the screen. You could also merge the Properties Window and the Timeline Window so that the only thing present on the far right-hand side is the Properties Window.

(Merging the Screen Windows) (Merged Screens)

Under the Node Editor Window, click on and enable Use Nodes. This will tell Blender that we'll be using the node system in conjunction with the settings we'll be enabling later on.

(Enabling "Use Nodes")

After clicking on Use Nodes, you'll notice nodes start appearing in the Node Editor Window, namely the Render Layer and Composite nodes. This is one good hint that Blender now recognizes the nodes as part of its rendering process. But that's not enough yet. In the far right window (Properties Window), look for the Shading and Post Processing tabs under Render. If you can't see some parts, just scroll until you do.

(Locating the Shading and Post Processing Tabs)

Under the Shading tab, disable all check boxes except for Texture. This will ensure that we won't get any funny output later on, and it will simplify debugging if we do run into errors.

(Disabling Shading Options)

Next, let's proceed to the Post Processing tab and disable Sequencer. Then let's make sure that Compositing is enabled and checked.

(Disabling Post Processing Options)

That's it for now, but we'll get back to the Properties Window whenever necessary. Let's move our attention back to the Node Editor Window above. The same keyboard shortcuts apply here as in the 3D Viewport. To review, here are the shortcuts we might find helpful while working in the Node Editor Window:

   Select Node          Right Mouse Button
   Confirm              Left Mouse Button
   Zoom In              Mouse Wheel Up / CTRL + Mouse Wheel Drag
   Zoom Out             Mouse Wheel Down / CTRL + Mouse Wheel Drag
   Pan Screen           Middle Mouse Drag
   Move Node            G
   Box Selection        B
   Delete Node          X
   Make Links           F
   Cut Links            CTRL + Left Mouse Button
   Hide Node            H
   Add Node             SHIFT A
   Toggle Full Screen   SHIFT SPACE

Now, let's select the Render Layer Node and delete it. We won't be needing it now, since we're not directly working with Blender's internal render layer system yet; we'll be solely focusing our attention on loading images and footage for grading work. Select the Composite Node and move it far to the right, just to get it out of view for now.
(Deleting the Render Layer Node and Moving the Composite Node) Loading image files in the compositor Blender's Node Compositor can upload pretty much any image format you have. Most of the time, you might want only to work with JPG, PNG, TIFF, and EXR file formats. But choose what you prefer, just be aware though of the image format's compression features. For most of my compositing tasks, I commonly use PNG, it being a lossless type of image, meaning, even after processing it a few times, it retains its original quality and doesn't compress which results in odd results, like in a JPG file. However, if you really want to push your compositing project and use data such as z-buffer (depth), etc. you'll be good with EXR, which is one of the best out there, but it creates such huge file sizes depending on the settings you have. Play around and see which one is most comfortable with you. For ease, we'll load up JPG images for now. With the Node Editor Window active, left click somewhere on an empty space on the left side, imagine placing an imaginative cursor there with the left mouse button. This will tell Blender to place here the node we'll be adding. Next, press SHIFT A. This will bring up the add menu. Choose Input then click on Image. (Adding an Image Node) Most often, when you have the Composite Node selected before performing this action, Blender will automatically connect and link the newly added node to the composite node. If not, you can connect the Image Node's image output node to the Composite Node's image input node. (Image Node Connected to Composite Node) To load images into the Compositor, simply click on Open on the the Image Node and this will bring up a menu for you to browse on. Once you've chosen the desired image, you can double left click on the image or single click then click on Open. After that is done, you'll notice the Image Node's and the Composite Node's preview changed accordingly. (Image Loaded in the Compositor) This image is now ready for compositing work.
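For readers who prefer to script this setup, a rough Python equivalent of the steps above is sketched below: enable nodes, drop the Render Layers node, add an Image node, and wire it to the Composite node. Exact type and socket names can differ between Blender versions, and the image path is a placeholder:

   import bpy

   scene = bpy.context.scene
   scene.use_nodes = True                       # "Use Nodes" in the Node Editor
   tree = scene.node_tree

   # Delete the default Render Layers node; we only composite loaded images here
   for node in list(tree.nodes):
       if node.type == 'R_LAYERS':
           tree.nodes.remove(node)

   # Add an Image input node and load a picture into it
   image_node = tree.nodes.new(type='CompositorNodeImage')
   image_node.image = bpy.data.images.load('/path/to/footage_frame.jpg')

   # Connect the Image node to the Composite node
   composite = tree.nodes['Composite']
   tree.links.new(image_node.outputs['Image'], composite.inputs['Image'])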


Blender 2.5: creating a UV texture

Packt
21 Oct 2010
4 min read
Before we can create a custom UV texture, we need to export our current UV map from Blender to a file that an image manipulation program, such as GIMP or Photoshop, can read. Exporting our UV map If we have GIMP downloaded, we can export our UV map from Blender to a format that GIMP can read. To do this, make sure we can view our UV map in the Image Editor. Then, go to UVs | Export UV Layout. Then save the file in a folder you can easily get to, naming it UV_layout or whatever you like. (Move the mouse over the image to enlarge.) Now it's time to open GIMP! Downloading GIMP Before we begin, we need to first get an image manipulation program. If you don't have one of the high-end programs, such as Photoshop, there still is hope. There's a wonderful free (and open source) program called GIMP, which parallels Photoshop in functionality. For the sake of creating our textures, we will be using GIMP, but feel free to use whatever you are personally most comfortable with. To download GIMP, visit the program's website at http://www.gimp.org and download the right version for your operating system. Mac Users will need to install X11 so GIMP will run. Consult your Mac OS installation guide for instructions on how to install. Windows users, you will need to install the GTK+ Runtime Environment to run GIMP—the download installer should warn you about this during installation. To install GTK+, visit http://www.gtk.org. Hello GIMP! When we open GIMP for the first time, we should have a 3-window layout, similar to the following screen: Create a new document by selecting File | New. You can also use the Ctrl+N keyboard shortcut. This should bring up a dialog box with a list of settings we can use to customize our new document. Because Blender exported our UV map as an SVG file, we can choose any size image we want, because we can scale the image to fit our document. SVG stands for Scalable Vector Graphic. Vector graphics are images defined by mathematically calculated paths, allowing them to be scaled infinitely without the pixilation caused when raster images are enlarged beyond a certain point. Change the Width and Height attributes to 2000 each. This will create a texture image 2000 pixels wide by 2000 pixels high. Click on OK to create our new document. Getting reference images Before we can create a UV texture for our wine bottle, which will primarily define the bottle's label, we need to know what is typically on a wine bottle's label. If you search the web for any wine bottle, you'll get a pretty good idea of what a wine bottle label looks like. However, for our purposes, we're going to use the following image: Notice how there's typically the name of the wine company, the type of wine, and the year it was made. We're going to use all of these in our own wine bottle label. Importing our UV map A nice thing about GIMP is that we can import images as layers into our current file. We're going to do just this with our UV map. Go to File | Open as Layers... to bring up the file selection dialog box. Navigate to the UV map we saved earlier and open it. Another dialog box will pop up—we can use this to tell GIMP how we want our SVG to appear in our document. Change the Width and Height attributes to match our working document—2000px by 2000px. Click on OK to confirm. Not every file type will bring up this dialog box—it's specific to SVG files only. We should now see our UV map in the document as a new layer. Before we continue, we should change the background color of our texture. 
Our label is going to be white, so we are going to need to distinguish our label from the rest of the wine bottle's material. With our background layer selected, fill the layer with a black color using the Fill tool. Next, we can create the background color of the label. Create a new layer by clicking on the New Layer button. Name it label_background. Using the Marquee Selection tool, make a selection similar to the following image: Fill it, using the Fill tool, with white. This will be the background for our label—everything else we add with be made in relation to this layer. Keep the UV map layer on top as often as possible. This will help us keep a clear view of where our graphics are in relation to our UV map at all times.

Lighting an Outdoor Scene in Blender

Packt
19 Oct 2010
7 min read
  Blender 2.5 Lighting and Rendering Bring your 3D world to life with lighting, compositing, and rendering Render spectacular scenes with realistic lighting in any 3D application using interior and exterior lighting techniques Give an amazing look to 3D scenes by applying light rigs and shadow effects Apply color effects to your scene by changing the World and Lamp color values A step-by-step guide with practical examples that help add dimensionality to your scene        Getting the right files Before we get started, we need a scene to work with. There are three scenes provided for our use—an outdoor scene, an indoor scene, and a hybrid scene that incorporates elements that are found both inside as well as outside. All these files can be downloaded from http://www.cgshark.com/lightingand-rendering/ The file we are going to use for this scene is called exterior.blend. This scene contains a tricycle, which we will light as if it were a product being promoted for a company. To download the files for this tutorial, visit http://www.cgshark.com/lighting-and-rendering/ and select exterior.blend. Blender render settings In computer graphics, a two-dimensional image is created from three-dimensional data through a computational process known as rendering. It's important to understand how to customize Blender's internal renderer settings to produce a final result that's optimized for our project, be it a single image or a full-length film. With the settings Blender provides us, we can set frame rates for animation, image quality, image resolution, and many other essential parts needed to produce that optimized final result. The Scene menu We can access these render settings through the Scene menu. Here, we can adjust a myriad of settings. For the sake of these projects, we are only going to be concerned with: Which window Blender will render our image in How render layers are set up Image dimensions Output location and file type Render settings The first settings we see when we look at the Scene menu are the Render settings. Here, we can tell Blender to render the current frame or an animation using the render buttons. We can also choose what type of window we want Blender to render our image in using the Display options. The first option (and the one chosen by default) is Full Screen. This renders our image in a window that overlaps the three-dimensional window in our scene. To restore the three-dimensional view, select the Back to Previous button at the top of the window. The next option is the Image Editor that Blender uses both for rendering as well as UV editing. This is especially useful when using the Compositor, allowing us to see our result alongside our composite node setup. By default, Blender replaces the three-dimensional window with the Image Editor. The last option is the option that Blender has used, by default, since day one—New Window. This means that Blender will render the image in a newly created window, separate from the rest of the program's interface. For the sake of these projects, we're going to keep this setting at the default setting—Full Screen. Dimensions settings These are some of the most important settings that we can set when dealing with optimizing our project output. We can set the image size, frame rate, frame range, and aspect ratio of our render. 
Luckily for us, Blender provides us with preset render settings, common in the film industry: HDTV 1080P HDTV 720P TV NTSC TV PAL TV PAL 16:9 Because we want to keep our render times relatively low for our projects, we're going to set our preset dimensions to TV NTSC, which results in an image 720 pixels wide by 480 pixels high. If you're interested in learning more about how the other formats behave, feel free to visit http://en.wikipedia.org/wiki/Display_resolution. Output settings These settings are an important factor when determining how we want our final product to be viewed. Blender provides us with numerous image and video types to choose from. When rendering an animation or image sequence, it's always easier to manually set the folder we want Blender to save to. We can tell Blender where we want it to save by establishing the path in the output settings. By default on Macintosh, Blender saves to the /tmp/ folder. Now that we understand how Blender's renderer works, we can start working with our scene! Establishing a workflow The key to constantly producing high-quality work is to establish a well-tested and efficient workflow. Everybody's workflow is different, but we are going to follow this series of steps: Evaluate what the scene we are lighting will require. Plan how we want to lay out the lamps in our scene. Set lamp positions, intensities, colors, and shadows, if applicable. Add materials and textures. Tweak until we're satisfied. Evaluating our scene Before we even begin to approach a computer, we need to think about our scene from a conceptual perspective. This is important, because knowing everything about our scene and the story that's taking place will help us produce a more realistic result. To help kick start this process, we can ask ourselves a series of questions that will get us thinking about what's happening in our scene. These questions can pertain to an entire array of possibilities and conditions, including: Weather What is the weather like on this particular day? What was it like the day before or the day after? Is it cloudy, sunny, or overcast? Did it rain or snow? Source of light Where is the light coming from? Is it in front of, to the side, or even behind the object? Remember, light is reflected and refracted until all energy is absorbed; this not only affects the color of the light, but the quality as well. Do we need to add additional light sources to simulate this effect? Scale of light sources What is the scale of our light sources in relation to our three-dimensional scene? Believe it or not, this factor carries a lot of weight when it comes to the quality of the final render. If any lights feel out of place, it could potentially affect the believability of the final product. The goal of these questions is to prove to ourselves that the scene we're lighting has the potential to exist in real life. It's much harder, if not impossible, to light a scene if we don't know how it could possibly act in the real world. Let's take a look at these questions. What is the weather like? In our case, we're not concerned with anything too challenging, weather wise. The goal of this tutorial is to depict our tricycle in an environment that reflects the effects of a sunny, cloudless day. To achieve this, we are going to use lights with blue and yellow hues for simulating the effect the sun and sky will have on our tricycle. What are the sources of our light and where are they coming from in relation to our scene? 
In a real situation, the sun would provide most of the light, so we'll need a key light that simulates how the sun works. In our case, we can use a Sun lamp. The key to positioning light sources within a three-dimensional scene is to find a compromise between achieving the desired mood of the image and effectively illuminating the object being presented. What is the scale of our light sources? The sun is rather large, but because of the nature of the Sun lamp in Blender, we don't have to worry about the scale of the lamp in our three-dimensional scene. Sometimes—more commonly when working with indoor scenes, such as the scene we'll approach later—certain light sources need to be of certain sizes in relation to our scene, otherwise the final result will feel unnatural. Although we will be using a realistic approach to materials, textures, and lighting, we are going to present this scene as a product visualization. This means that we won't explicitly show a ground plane, allowing the viewer to focus on the product being presented, in this case, our tricycle.
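As a side note, the render dimensions and output path discussed at the start of this article can also be set from a short Python snippet, which is handy when many scenes share the same TV NTSC preset. The property names below match recent bpy versions and may differ slightly in older 2.5 builds:

   import bpy

   render = bpy.context.scene.render
   render.resolution_x = 720            # TV NTSC preset: 720 x 480
   render.resolution_y = 480
   render.resolution_percentage = 100
   render.fps = 30                       # NTSC is nominally 29.97 fps
   render.filepath = '/tmp/'             # where rendered frames are written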

Unity 3D Game Development: Don't Be a Clock Blocker

Packt
29 Sep 2010
9 min read
  Unity 3D Game Development by Example Beginner's Guide A seat-of-your-pants manual for building fun, groovy little games quickly Build fun games using the free Unity 3D game engine even if you've never coded before Learn how to "skin" projects to make totally different games from the same file – more games, less effort! Deploy your games to the Internet so that your friends and family can play them Packed with ideas, inspiration, and advice for your own game design and development Stay engaged with fresh, fun writing that keeps you awake as you learn Read more about this book (For more resources on Unity 3D, see here.) We've taken a baby game like Memory and made it slightly cooler by changing the straight-up match mechanism and adding a twist: matching disembodied robot parts to their bodies. Robot Repair is a tiny bit more interesting and more challenging thanks to this simple modification. There are lots of ways we could make the game even more difficult: we could quadruple the number of robots, crank the game up to a 20x20 card grid, or rig Unity up to some peripheral device that issues a low-grade electrical shock to the player every time he doesn't find a match. NOW who's making a baby game? These ideas could take a lot of time though, and the Return-On-Investment (ROI) we see from these features may not be worth the effort. One cheap, effective way of amping up the game experience is to add a clock. Apply pressure What if the player only has x seconds to find all the matches in the Robot Repair game? Or, what if in our keep-up game, the player has to bounce the ball without dropping it until the timer runs out in order to advance to the next level? In this article let's: Program a text-based countdown clock to add a little pressure to our games Modify the clock to make it graphical, with an ever-shrinking horizontal bar Layer in some new code and graphics to create a pie chart-style clock That's three different countdown clocks, all running from the same initial code, all ready to be put to work in whatever Unity games you dream up. Roll up your sleeves—it's time to start coding! Time for action – prepare the clock script Open your Robot Repair game project and make sure you're in the game Scene. We'll create an empty GameObject and glue some code to it. Go to GameObject | Create Empty. Rename the empty Game Object Clock. Create a new JavaScript and name it clockScript. Drag-and-drop the clockScript onto the Clock Game Object. No problem! We know the drill by now—we've got a Game Object ready to go with an empty script where we'll put all of our clock code. Time for more action – prepare the clock text In order to display the numbers, we need to add a GUIText component to the Clock GameObject, but there's one problem: GUIText defaults to white, which isn't so hot for a game with a white background. Let's make a quick adjustment to the game background color so that we can see what's going on. We can change it back later. Select the Main Camera in the Hierarchy panel. Find the Camera component in the Inspector panel. Click on the color swatch labeled Back Ground Color, and change it to something darker so that our piece of white GUIText will show up against it. I chose a "delightful" puce (R157 G99 B120). Select the Clock Game Object from the Hierarchy panel. It's not a bad idea to look in the Inspector panel and confirm that the clockScript script was added as a component in the preceding instruction. With the Clock Game Object selected, go to Component | Rendering | GUIText. 
This is the GUIText component that we'll use to display the clock numbers on the screen. In the Inspector panel, find the GUIText component and type whatever in the blank Text property. In the Inspector panel, change the clock's X position to 0.8 and its Y position to 0.9 to bring it into view. You should see the word whatever in white, floating near the top-right corner of the screen in the Game view.

Right, then! We have a Game Object with an empty script attached. That Game Object has a GUIText component to display the clock numbers. Our game background is certifiably hideous. Let's code us some clock.

Still time for action – change the clock text color

Double-click the clockScript. Your empty script, with one lone Update() function, should appear in the code editor. The very first thing we should consider is doing away with our puce background by changing the GUIText color to black instead of white. Let's get at it. Write the built-in Start function and change the GUIText color:

function Start(){
    guiText.material.color = Color.black;
}
function Update() {
}

Save the script and test your game to see your new black text. If you feel comfy, you can change the game background color back to white by clicking on the Main Camera Game Object and finding the color swatch in the Inspector panel. The white whatever GUIText will disappear against the white background in the Game view because the color-changing code that we just wrote runs only when we test the game (try testing the game to confirm this). If you ever lose track of your text, or it's not displaying properly, or you just really wanna see it on the screen, you can change the camera's background color to confirm that it's still there. If you're happy with this low-maintenance, disappearing-text arrangement, you can move on to the Prepare the clock code section. But, if you want to put in a little extra elbow grease to actually see the text, in a font of your choosing, follow these next steps.

Time for action rides again – create a font texture and material

In order to change the font of this GUIText, and to see it in a different color without waiting for the code to run, we need to import a font, hook it up to a Material, and apply that Material to the GUIText. Find a font that you want to use for your game clock. I like the LOLCats standby Impact. If you're running Windows, your fonts are likely to be in the C:\Windows\Fonts directory. If you're a Mac user, you should look in the Library/Fonts folder. Drag the font into the Project panel in Unity. The font will be added to your list of Assets. Right-click (or secondary-click) an empty area of the Project panel and choose Create | Material. You can also click on the Create button at the top of the panel. Rename the new Material to something useful. Because I'm using the Impact font, and it's going to be black, I named mine "BlackImpact" (incidentally, "Black Impact" is also the name of my favorite exploitation film from the 70s). Click on the Material you just created in the Project panel. In the Inspector panel, click on the color swatch labeled Main Color and choose black (R0 G0 B0), then click on the little red X to close the color picker. In the empty square area labeled None (Texture 2D), click on the Select button, and choose your font from the list of textures (mine was labeled impact - font texture). At the top of the Inspector panel, there's a drop-down labeled Shader. Select Transparent/Diffuse from the list.
You'll know it worked when the preview sphere underneath the Inspector panel shows your chosen font outline wrapped around a transparent sphere. Pretty cool! Click on the Clock Game Object in the Hierarchy panel. Find the GUIText component in the Inspector panel. Click and drag your font—the one with the letter A icon—from the Project panel into the parameter labeled Font in the GUIText component. You can also click the drop-down arrow (the parameter should say None (Font) initially) and choose your font from the list. Similarly, click-and-drag your Material—the one with the gray sphere icon—from the Project panel into the parameter labeled Material in the GUIText component. You can also click on the drop-down arrow (the parameter should say None (Material) initially) and choose your Material from the list. Just as you always dreamed about since childhood, the GUIText changes to a solid black version of the fancy font you chose! Now, you can definitely get rid of that horrid puce background and switch back to white. If you made it this far and you're using a Material instead of the naked font option, it's also safe to delete the guiText.material.color = Color.black; line from the clockScript. Time for action – what's with the tiny font? The Impact font, or any other font you choose, won't be very… impactful at its default size. Let's change the import settings to biggify it. Click on your imported font—the one with the letter A icon—in the Project panel. In the Inspector panel, you'll see the True Type Font Importer. Change the Font Size to something respectable, like 32, and press the Enter key on your keyboard. Click on the Apply button. Magically, your GUIText cranks up to 32 points (you'll only see this happen if you still have a piece of text like "whatever" entered into the Text parameter of the GUIText of the Clock Game Object component). What just happened - was that seriously magic? Of course, there's nothing magical about it. Here's what happened when you clicked on that Apply button: When you import a font into Unity, an entire set of raster images is created for you by the True Type Font Importer. Raster images are the ones that look all pixelly and square when you zoom in on them. Fonts are inherently vector instead of raster, which means that they use math to describe their curves and angles. Vector images can be scaled up any size without going all Rubik's Cube on you. But, Unity doesn't support vector fonts. For every font size that you want to support, you need to import a new version of the font and change its import settings to a different size. This means that you may have four copies of, say, the Impact font, at the four different sizes you require. When you click on the Apply button, Unity creates its set of raster images based on the font that you're importing.
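The excerpt above stops just before the counting logic itself, but the idea the article is building toward is straightforward: subtract Time.deltaTime from a starting value every frame and push the result into the GUIText. The following is only a rough sketch of that idea, not the book's own listing. It is written in C# rather than the JavaScript the book uses (Unity accepts both), and names such as startTime and ClockSketch are placeholders:

using UnityEngine;

public class ClockSketch : MonoBehaviour
{
    // How many seconds the player gets; a placeholder default, adjustable in the Inspector.
    public float startTime = 30.0f;
    private float timeRemaining;

    void Start()
    {
        timeRemaining = startTime;
        // Only needed if you skipped the font Material steps above.
        guiText.material.color = Color.black;
    }

    void Update()
    {
        // Count down in real time, clamping at zero.
        timeRemaining -= Time.deltaTime;
        if (timeRemaining < 0.0f)
        {
            timeRemaining = 0.0f;
        }
        // Display whole seconds in the GUIText component on this Game Object.
        guiText.text = Mathf.CeilToInt(timeRemaining).ToString();
    }
}

Attached to the Clock Game Object in place of (or alongside) the JavaScript version, a script along these lines would drive the text-based countdown; the graphical bar and pie-chart variants the article promises build on the same timer value.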


Introduction to Game Development Using Unity 3D

Packt
24 Sep 2010
9 min read
Unity 3D Game Development by Example Beginner's Guide A seat-of-your-pants manual for building fun, groovy little games quickly Read more about this book (For more resources on this subject, see here.)

Technology is a tool. It helps us accomplish amazing things, hopefully more quickly and more easily and more amazingly than if we hadn't used the tool. Before we had newfangled steam-powered hammering machines, we had hammers. And before we had hammers, we had the painful process of smacking a nail into a board with our bare hands. Technology is all about making our lives better and easier. And less painful.

Introducing Unity 3D

Unity 3D is a new piece of technology that strives to make life better and easier for game developers. Unity is a game engine or a game authoring tool that enables creative folks like you to build video games. By using Unity, you can build video games more quickly and easily than ever before. In the past, building games required an enormous stack of punch cards, a computer that filled a whole room, and a burnt sacrificial offering to an ancient god named Fortran. Today, instead of spanking nails into boards with your palm, you have Unity. Consider it your hammer—a new piece of technology for your creative tool belt.

Unity takes over the world

We'll be distilling our game development dreams down to small, bite-sized nuggets instead of launching into any sweepingly epic open-world games. The idea here is to focus on something you can actually finish instead of getting bogged down in an impossibly ambitious opus. When you're finished, you can publish these games on the Web, Mac, or PC. The team behind Unity 3D is constantly working on packages and export options for other platforms. At the time of this writing, Unity could additionally create games that can be played on the iPhone, iPod, iPad, Android devices, Xbox Live Arcade, PS3, and Nintendo's WiiWare service. Each of these tools is an add-on functionality to the core Unity package, and comes at an additional cost. As we're focusing on what we can do without breaking the bank, we'll stick to the core Unity 3D program for the remainder of this article. The key is to start with something you can finish, and then for each new project that you build, to add small pieces of functionality that challenge you and expand your knowledge. Any successful plan for world domination begins by drawing a territorial border in your backyard.

Browser-based 3D? Welcome to the future

Unity's primary and most astonishing selling point is that it can deliver a full 3D game experience right inside your web browser. It does this with the Unity Web Player—a free plugin that embeds and runs Unity content on the Web.

Time for action – install the Unity Web Player

Before you dive into the world of Unity games, download the Unity Web Player. Much the same way the Flash player runs Flash-created content, the Unity Web Player is a plugin that runs Unity-created content in your web browser. Go to http://unity3D.com. Click on the install now! button to install the Unity Web Player. Click on Download Now! Follow all of the on-screen prompts until the Web Player has finished installing. Welcome to Unity 3D! Now that you've installed the Web Player, you can view the content created with the Unity 3D authoring tool in your browser.

What can I build with Unity?

In order to fully appreciate how fancy this new hammer is, let's take a look at some projects that other people have created with Unity.
While these games may be completely out of our reach at the moment, let's find out how game developers have pushed this amazing tool to its very limits. FusionFall The first stop on our whirlwind Unity tour is FusionFall—a Massively Multiplayer Online Role-Playing Game (MMORPG). You can find it at fusionfall.com. You may need to register to play, but it's definitely worth the extra effort! FusionFall was commissioned by the Cartoon Network television franchise, and takes place in a re-imagined, anime-style world where popular Cartoon Network characters are all grown up. Darker, more sophisticated versions of the Powerpuff Girls, Dexter, Foster and his imaginary friends, and the kids from Codename: Kids Next Door run around battling a slimy green alien menace. Completely hammered FusionFall is a very big and very expensive high-profile game that helped draw a lot of attention to the then-unknown Unity game engine when the game was released. As a tech demo, it's one of the very best showcases of what your new technological hammer can really do! FusionFall has real-time multiplayer networking, chat, quests, combat, inventory, NPCs (non-player characters), basic AI (artificial intelligence), name generation, avatar creation, and costumes. And that's just a highlight of the game's feature set. This game packs a lot of depth. Should we try to build FusionFall? At this point, you might be thinking to yourself, "Heck YES! FusionFall is exactly the kind of game I want to create with Unity, and this article is going to show me how!" Unfortunately, a step-by-step guide to creating a game the size and scope of FusionFall would likely require its own flatbed truck to transport, and you'd need a few friends to help you turn each enormous page. It would take you the rest of your life to read, and on your deathbed, you'd finally realize the grave error that you had made in ordering it online in the first place, despite having qualified for free shipping. Here's why: check out the game credits link on the FusionFall website: http://www.fusionfall.com/game/credits.php. This page lists all of the people involved in bringing the game to life. Cartoon Network enlisted the help of an experienced Korean MMO developer called Grigon Entertainment. There are over 80 names on that credits list! Clearly, only two courses of action are available to you: Build a cloning machine and make 79 copies of yourself. Send each of those copies to school to study various disciplines, including marketing, server programming, and 3D animation. Then spend a year building the game with your clones. Keep track of who's who by using a sophisticated armband system. Give up now because you'll never make the game of your dreams. Another option Before you do something rash and abandon game development for farming, let's take another look at this. FusionFall is very impressive, and it might look a lot like the game that you've always dreamed of making. This article is not about crushing your dreams. It's about dialing down your expectations, putting those dreams in an airtight jar, and taking baby steps. Confucius said: "A journey of a thousand miles begins with a single step." I don't know much about the man's hobbies, but if he was into video games, he might have said something similar about them—creating a game with a thousand awesome features begins by creating a single, less feature-rich game. So, let's put the FusionFall dream in an airtight jar and come back to it when we're ready. 
We'll take a look at some smaller Unity 3D game examples and talk about what it took to build them. Off-Road Velociraptor Safari No tour of Unity 3D games would be complete without a trip to Blurst.com—the game portal owned and operated by indie game developer Flashbang Studios. In addition to hosting games by other indie game developers, Flashbang has packed Blurst with its own slate of kooky content, including Off-Road Velociraptor Safari. (Note: Flashbang Studios is constantly toying around with ways to distribute and sell its games. At the time of this writing, Off-Road Velociraptor Safari could be played for free only as a Facebook game. If you don't have a Facebook account, try playing another one of the team's creations, like Minotaur China Shop or Time Donkey). In Off-Road Velociraptor Safari, you play a dinosaur in a pith helmet and a monocle driving a jeep equipped with a deadly spiked ball on a chain (just like in the archaeology textbooks). Your goal is to spin around in your jeep doing tricks and murdering your fellow dinosaurs (obviously). For many indie game developers and reviewers, Off-Road Velociraptor Safari was their first introduction to Unity. Some reviewers said that they were stunned that a fully 3D game could play in the browser. Other reviewers were a little bummed that the game was sluggish on slower computers. We'll talk about optimization a little later, but it's not too early to keep performance in mind as you start out. Fewer features, more promise If you play Off-Road Velociraptor Safari and some of the other games on the Blurst site, you'll get a better sense of what you can do with Unity without a team of experienced Korean MMO developers. The game has 3D models, physics (code that controls how things move around somewhat realistically), collisions (code that detects when things hit each other), music, and sound effects. Just like FusionFall, the game can be played in the browser with the Unity Web Player plugin. Flashbang Studios also sells downloadable versions of its games, demonstrating that Unity can produce standalone executable game files too. Maybe we should build Off-Road Velociraptor Safari? Right then! We can't create FusionFall just yet, but we can surely create a tiny game like Off-Road Velociraptor Safari, right? Well... no. Again, this article isn't about crushing your game development dreams. But the fact remains that Off-Road Velociraptor Safari took five supremely talented and experienced guys eight weeks to build on full-time hours, and they've been tweaking and improving it ever since. Even a game like this, which may seem quite small in comparison to full-blown MMO like FusionFall, is a daunting challenge for a solo developer. Put it in a jar up on the shelf, and let's take a look at something you'll have more success with.


Adding Sound, Music, and Video in 3D Game Development with Microsoft Silverlight 3: Part 1

Packt
19 Nov 2009
3 min read
A game needs sound, music and video. It has to offer the player attractive background music. It must also generate sounds associated with certain game events. When a spaceship shoots a laser beam, a sound must accompany this action. Reproducing videos showing high-quality previously rendered animations is a good idea during transitions between one stage and the next.

Hear the UFOs coming

So far, we have worked with 3D scenes showing 3D models with textures and different kinds of lights. We took advantage of C# object-oriented capabilities and we animated 3D models and moved the cameras. We have read values from many different input devices and we added physics, artificial intelligence, amazing effects, gauges, statistics, skill levels, environments, and stages. However, the game does not use the speakers at all because there is no background music and there are no in-game sounds. Thus, we have to sort this issue out. Modern games use videos to dazzle the player before starting each new stage. They use amazing sound effects and music custom prepared for the game by renowned artists. How can we add videos, music, and sounds in Silverlight? We can do this by taking advantage of the powerful multimedia classes offered by Silverlight 3. However, as a game uses more multimedia resources than other simpler applications, we must be careful to avoid including unnecessary resources in the files that must be downloaded before starting the application.

Time for action – installing tools to manipulate videos

The 3D digital artists used Blender to create an introductory video showing a high quality rendered animation for five seconds. They took advantage of Blender's animation creation features, as shown in the following screenshot: A spaceship flies in a starry universe for a few seconds. Then, the camera navigates through the stars. Your project manager wants you to add this video as an introduction to the game. However, as the video file is in AVI (Audio Video Interleave) format and Silverlight 3 does not support this format, you have to convert the video to an appropriate format. The creation of video animations for a game is very complex and requires specialist skills. We are going to simplify this process by using an existing video. First, we must download and install an additional tool that will help us in converting an existing video to the most appropriate file formats used in Silverlight 3. The necessary tools will depend on the applications the digital artists use to create the videos. However, we will be using some tools that will work fine with our examples. Download one of the following files:

Expression Encoder 2
  Download link: http://www.microsoft.com/expression/try-it/try-it-v2.aspx
  File name: Encoder_Trial_en.exe
  Description: It is a commercial tool, but the trial offers a free fully functional version for 30 days. This tool will enable us to encode videos to the appropriate format to use in Silverlight 3.

Expression Encoder 3
  Download link: http://www.microsoft.com/expression/try-it
  File name: Encoder_Trial_en.exe
  Description: It is the newest trial version of the aforementioned commercial tool.

Run the installers and follow the steps to complete the installation wizards. If you installed Expression Encoder 2, download and install its Service Pack 1. The download link for it is http://www.microsoft.com/expression/tryit/try-it-v2.aspx#encodersp1 (file name: EncoderV2SP1_en.exe).
Once you have installed one of the versions of Expression Encoder, you will be able to load and encode many video files in different file formats, as shown in the following screenshot:
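The excerpt of Part 1 ends here. As a rough orientation (an assumption on my part, not a step the excerpt prescribes), once the AVI has been encoded to a Silverlight-friendly format such as WMV, playing it back follows the same MediaElement pattern this series uses for audio later on. The file and control names below are placeholders:

// Hedged sketch: play an encoded intro video from code-behind.
// "Media/introduction.wmv" is a placeholder path for the encoded output.
MediaElement medIntroduction = new MediaElement();
LayoutRoot.Children.Add(medIntroduction);
medIntroduction.Source = new Uri("Media/introduction.wmv", UriKind.Relative);
medIntroduction.AutoPlay = true; // start playback as soon as the media opens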

Adding Sound, Music, and Video in 3D Game Development with Microsoft Silverlight 3: Part 2

Packt
19 Nov 2009
5 min read
Time for action – animating projections

Your project manager wants you to animate the perspective transform applied to the video while it is being reproduced. We are going to add a StoryBoard in XAML code to animate the PlaneProjection instance: Stay in the project, 3DInvadersSilverlight. Open MainPage.xaml and replace the PlaneProjection definition with the following line (we have to add a name to refer to it):

<PlaneProjection x:Name="proIntroduction" RotationX="-40" RotationY="15" RotationZ="-6" LocalOffsetX="-70" LocalOffsetY="-105" />

Add the following lines of code before the end of the definition of the cnvVideo Canvas:

<Canvas.Resources>
    <Storyboard x:Name="introductionSB">
        <DoubleAnimation Storyboard.TargetName="proIntroduction"
                Storyboard.TargetProperty="RotationX"
                From="-40" To="0" Duration="0:0:5"
                AutoReverse="False" RepeatBehavior="1x" />
    </Storyboard>
</Canvas.Resources>

Now, add the following line of code before the end of the PlayIntroductionVideo method (to start the animation):

introductionSB.Begin();

Build and run the solution. Click on the button and the video will start its reproduction after the transition effect. While the video is being played, the projection will be animated, as shown in the following diagram:

What just happened?

Now, the projection that shows the video is animated while the video is being reproduced.

Working with a StoryBoard in XAML to animate a projection

First, we added a name to the existing PlaneProjection (proIntroduction). Then, we were able to create a new StoryBoard with a DoubleAnimation instance as a child, with the StoryBoard's TargetName set to proIntroduction and its TargetProperty set to RotationX. Thus, the DoubleAnimation controls proIntroduction's RotationX value. The RotationX value will go from -40 to 0 in five seconds—the same time as the video's duration:

From="-40" To="0" Duration="0:0:5"

The animation will run once (1x) and it won't reverse its behavior:

AutoReverse="False" RepeatBehavior="1x"

We added the StoryBoard inside Canvas.Resources. Thus, we were able to start it by calling its Begin method, in the PlayIntroductionVideo procedure:

introductionSB.Begin();

We can define StoryBoard instances and different Animation (System.Windows.Media.Animation) subclass instances, such as DoubleAnimation, using XAML code. This way, we can create amazing animations for many properties of many other UIElements defined in XAML code.

Time for action – solving navigation problems

When the game starts, there is an undesired side effect. The projected video appears in the right background, as shown in the following screenshot: This usually happens when working with projections. Now, we are going to solve this small problem: Stay in the 3DInvadersSilverlight project. Open MainPage.xaml.cs and add the following line before the first one in the medIntroduction_MediaEnded method:

cnvVideo.Visibility = Visibility.Collapsed;

Build and run the solution. Click on the button and after the video reproduction and animation, the game will start without the undesired background, as shown in the following screenshot:

What just happened?

Now, once the video finishes its reproduction and associated animation, we have hidden the Canvas that contains it. Hence, there are no parts of the previous animation visible when the game starts.

Time for action – reproducing music

Great games have appealing background music.
Now, we are going to search and add background music to our game: As with other digital content, sound and music have a copyright owner and a license. Hence, we must be very careful when downloading sound and music for our games. We must read licenses before deploying our games with these digital contents embedded. One of the 3D digital artists found a very cool electro music sample for reproduction as background music. You have to pay to use it. However, you can download a free demo (Distorted velocity. 1) from http://www.musicmediatracks.com/music/Style/Electro/. Save the downloaded MP3 file (distorted_velocity._1.mp3) in the previously created media folder (C:Silverlight3DInvaders3DMedia). You can use any other MP3 sound for this exercise. The aforementioned MP3 demo is not included in the accompanying source code. Stay in the 3DInvadersSilverlight project. Right-click on the Media sub-folder in the 3DInvadersSilverlight.Web project and select Add | Existing item… from the context menu that appears. Go to the folder in which you copied the downloaded MP3 file (C:Silverlight3DInvaders3DMedia). Select the MP3 file and click on Add. This way, the audio file will be part of the web project, in the Media folder, as shown in the following screenshot: Now, add the following lines of code at the beginning of the btnStartGame button's Click event. This code will enable the new background music to start playing:

// Background music
MediaElement backgroundMusic = new MediaElement();
LayoutRoot.Children.Add(backgroundMusic);
backgroundMusic.Volume = 0.8;
backgroundMusic.Source = new Uri("Media/distorted_velocity._1.mp3", UriKind.Relative);
backgroundMusic.Play();

Build and run the solution. Click on the button and turn on your speakers. You will hear the background music while the transition effect starts.
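The snippet above plays the track once. Background music normally loops; one way to get that behavior (an assumption of mine, not a step from the article) is to rewind the MediaElement when its MediaEnded event fires:

// Hedged sketch: loop the background music by restarting it when it ends.
backgroundMusic.MediaEnded += (s, args) =>
{
    backgroundMusic.Position = TimeSpan.Zero; // rewind to the start
    backgroundMusic.Play();                   // and play it again
};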


Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 2

Packt
18 Nov 2009
6 min read
Time for action – simulating fluids with movement Your project manager is amazed with the shower of dozens of meteors in the background. However, he wants to add a more realistic background. He shows you a water simulation sample using Farseer Physics Engine. He wants you to use the wave simulation capabilities offered by this powerful physics simulator to create an asteroids belt. First, we are going to create a new class to define a fluid model capable of setting the initial parameters and updating a wave controller provided by the physics simulator. We will use Farseer Physics Engine's wave controller to add real-time fluids with movement for our games. The following code is based on the Silverlight water sample offered with the physics simulator. However, in this case, we are not interested in collision detection capabilities because we are going to create an asteroid belt in the background. Stay in the 3DInvadersSilverlight project. Create a new class—FluidModel. Replace the default using declarations with the following lines of code (we are going to use many classes and interfaces from Farseer Physics Engine): using System;using FarseerGames.FarseerPhysics;using FarseerGames.FarseerPhysics.Controllers;using FarseerGames.FarseerPhysics.Mathematics; Add the following public property to hold the WaveController instance: public WaveController WaveController { get; private set; } Add the following public properties to define the wave generator parameters: public float WaveGeneratorMax { get; set; }public float WaveGeneratorMin { get; set; }public float WaveGeneratorStep { get; set; } Add the following constructor without parameters: public FluidModel(){ // Assign the initial values for the wave generator parameters WaveGeneratorMax = 0.20f; WaveGeneratorMin = -0.15f; WaveGeneratorStep = 0.025f;} Add the Initialize method to create and configure the WaveController instance using the PhysicsSimulator instance received as a parameter: public void Initialize(PhysicsSimulator physicsSimulator){ // The wave controller controls how the waves move // It defines how big and how fast is the wave // It is represented as set of points equally spaced horizontally along the width of the wave. WaveController = new WaveController(); WaveController.Position = ConvertUnits.ToSimUnits(-20, 5); WaveController.Width = ConvertUnits.ToSimUnits(30); WaveController.Height = ConvertUnits.ToSimUnits(3); // The number of vertices that make up the surface of the wave WaveController.NodeCount = 40; // Determines how quickly the wave will dissipate WaveController.DampingCoefficient = .95f; // Establishes how fast the wave algorithm runs (in seconds) WaveController.Frequency = .16f; //The wave generator parameters simply move an end-point of the WaveController.WaveGeneratorMax = WaveGeneratorMax; WaveController.WaveGeneratorMin = WaveGeneratorMin; WaveController.WaveGeneratorStep = WaveGeneratorStep; WaveController.Initialize();} Add the Update method to update the wave controller and update the points that draw the waves shapes: public void Update(TimeSpan elapsedTime){ WaveController.Update((float) elapsedTime.TotalSeconds);} What just happened? We now have a FluidModel class that creates, configures, and updates a WaveController instance according to an associated physics simulator. As we are going to work with different gravitational forces, we are going to use another independent physics simulator to work with the FluidModel instance in our game. 
Simulating waves

The wave controller offers many parameters to represent a set of points equally spaced horizontally along the width of one or many waves. The waves can be: big or small, fast or slow, tall or short. The wave controller's parameters allow us to determine the number of vertices that make up the surface of the wave by assigning a value to its NodeCount property. In this case, we are going to create waves with 40 nodes and each point is going to be represented by an asteroid:

WaveController.NodeCount = 40;

The Initialize method defines the position, width, height and other parameters for the wave controller. We have to convert our position values to the simulator values. Thus, we use the ConvertUnits.ToSimUnits method. For example, this line defines the 2D Vector for the wave's upper left corner (X = -20 and Y = 5):

WaveController.Position = ConvertUnits.ToSimUnits(-20, 5);

The best way to understand each parameter is changing its values and running the example using these new values. Using a wave controller we can create amazing fluids with movement.

Time for action – creating a subclass for a complex asteroid belt

Now, we are going to create a specialized subclass of Actor (Balder.Core.Runtime.Actor) to load, create, and update a fluid with waves. This class will enable us to encapsulate an independent asteroid belt and add it to the game. In this case, it is a 3D character composed of many models (many instances of Mesh). Stay in the 3DInvadersSilverlight project. Create a new class, FluidWithWaves (a subclass of Actor) using the following declaration:

public class FluidWithWaves : Actor

Replace the default using declarations with the following lines of code (we are going to use many classes and interfaces from Balder, Farseer Physics Engine and lists):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;
// BALDER
using Balder.Core;
using Balder.Core.Geometries;
using Balder.Core.Math;
using Balder.Core.Runtime;
// FARSEER PHYSICS
using FarseerGames.FarseerPhysics;
using FarseerGames.FarseerPhysics.Collisions;
using FarseerGames.FarseerPhysics.Dynamics;
using FarseerGames.FarseerPhysics.Factories;
using FarseerGames.FarseerPhysics.Mathematics;
// LISTS
using System.Collections.Generic;

Add the following protected variables to hold references for the RealTimeGame and the Scene instances:

protected RealTimeGame _game;
protected Scene _scene;

Add the following private variables to hold the associated FluidModel instance, the collection of points that define the wave and the list of meshes (asteroids):

private FluidModel _fluidModel;
private PointCollection _points;
private List<Mesh> _meshList;

Add the following constructor with three parameters—the RealTimeGame, the Scene, and the PhysicsSimulator instances:

public FluidWithWaves(RealTimeGame game, Scene scene, PhysicsSimulator physicsSimulator)
{
    _game = game;
    _scene = scene;
    _fluidModel = new FluidModel();
    _fluidModel.Initialize(physicsSimulator);
    int count = _fluidModel.WaveController.NodeCount;
    _points = new PointCollection();
    for (int i = 0; i < count; i++)
    {
        _points.Add(new Point(
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.XPosition[i]),
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.CurrentWave[i])));
    }
}

Override the LoadContent method to load the meteors' meshes and set their initial positions according to the points that define the wave:

public override void LoadContent()
{
    base.LoadContent();
    _meshList = new List<Mesh>(_points.Count);
    for (int i = 0; i < _points.Count; i++)
    {
        Mesh mesh = _game.ContentManager.Load<Mesh>("meteor.ase");
        _meshList.Add(mesh);
        _scene.AddNode(mesh);
        mesh.Position.X = (float)_points[i].X;
        mesh.Position.Y = (float)_points[i].Y;
        mesh.Position.Z = 0;
    }
}

Override the Update method to update the fluid model and then change the meteors' positions taking into account the points that define the wave according to the elapsed time:

public override void Update()
{
    base.Update();
    // Update the fluid model with the real-time game elapsed time
    _fluidModel.Update(_game.ElapsedTime);
    _points.Clear();
    for (int i = 0; i < _fluidModel.WaveController.NodeCount; i++)
    {
        Point p = new Point(
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.XPosition[i]),
            ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.CurrentWave[i])
                + ConvertUnits.ToDisplayUnits(_fluidModel.WaveController.Position.Y));
        _points.Add(p);
    }
    // Update the positions for the meshes that define the wave's points
    for (int i = 0; i < _points.Count; i++)
    {
        _meshList[i].Position.X = (float)_points[i].X;
        _meshList[i].Position.Y = (float)_points[i].Y;
    }
}
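The excerpt stops before the actor is hooked into the game, but the constructor defined above already tells us what it needs: the running game, the scene, and a physics simulator of its own. The following is only a hedged sketch of that wiring; the separate simulator follows the article's remark about working with many independent physics engines, and how the actor is then registered with Balder's runtime follows the same pattern as the game's other actors and is not shown here:

// Illustrative only: give the asteroid belt its own, independent simulator
// so its wave physics does not interfere with the gameplay simulator.
PhysicsSimulator beltSimulator = new PhysicsSimulator(new Vector2(0, 0));

// Construct the belt actor with the game, the scene, and its simulator;
// the parameters match the FluidWithWaves constructor defined above.
FluidWithWaves asteroidBelt = new FluidWithWaves(this, _scene, beltSimulator);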


Unity Game Development: Interactions (Part 1)

Packt
18 Nov 2009
8 min read
To detect physical interactions between game objects, the most common method is to use a Collider component—an invisible net that surrounds an object's shape and is in charge of detecting collisions with other objects. The act of detecting and retrieving information from these collisions is known as collision detection. Not only can we detect when two colliders interact, but we can also pre-empt a collision and perform many other useful tasks by utilizing a technique called Ray Casting, which draws a Ray—put simply, an invisible (non-rendered) vector line between two points in 3D space—which can also be used to detect an intersection with a game object's collider. Ray casting can also be used to retrieve lots of other useful information such as the length of the ray (therefore—distance), and the point of impact of the end of the line. In the given example, a ray facing the forward direction from our character is demonstrated. In addition to the direction, a ray can also be given a specific length, or allowed to cast until it finds an object. Over the course of the article, we will work with the outpost model. Because this asset has been animated for us, the animation of the outpost's door opening and closing is ready to be triggered—once the model is placed into our scene. This can be done with either collision detection or ray casting, and we will explore what you will need to do to implement either approach. Let's begin by looking at collision detection and when it may be appropriate to use ray casting instead of, or in complement to, collision detection. Exploring collisions When objects collide in any game engine, information about the collision event becomes available. By recording a variety of information upon the moment of impact, the game engine can respond in a realistic manner. For example, in a game involving physics, if an object falls to the ground from a height, then the engine needs to know which part of the object hit the ground first. With that information, it can correctly and realistically control the object's reaction to the impact. Of course, Unity handles these kinds of collisions and stores the information on your behalf, and you only have to retrieve it in order to do something with it. In the example of opening a door, we would need to detect collisions between the player character's collider and a collider on or near the door. It would make little sense to detect collisions elsewhere, as we would likely need to trigger the animation of the door when the player is near enough to walk through it, or to expect it to open for them. As a result, we would check for collisions between the player character's collider and the door's collider. However, we would need to extend the depth of the door's collider so that the player character's collider did not need to be pressed up against the door in order to trigger a collision, as shown in the following illustration. However, the problem with extending the depth of the collider is that the game interaction with it becomes unrealistic. In the example of our door, the extended collider protruding from the visual surface of the door would mean that we would bump into an invisible surface which would cause our character to stop in their tracks, and although we would use this collision to trigger the opening of the door through animation, the initial bump into the extended collider would seem unnatural to the player and thus detract from their immersion in the game. 
So while collision detection will work perfectly well between the player character collider and the door collider, there are drawbacks that call for us as creative game developers to look for a more intuitive approach, and this is where ray casting comes in. Ray casting While we can detect collisions between the player character's collider and a collider that fits the door object, a more appropriate method may be to check for when the player character is facing the door we are expecting to open and is within a certain distance of this door. This can be done by casting a ray forward from the player's forward direction and restricting its length. This means that when approaching the door, the player needn't walk right up to it—or bump into an extended collider—in order for it to be detected. It also ensures that the player cannot walk up to the door facing away from it and still open it—with ray casting they must be facing the door in order to use it, which makes sense. In common usage, ray casting is done where collision detection is simply too imprecise to respond correctly. For example, reactions that need to occur with a frame-by-frame level of detail may occur too quickly for a collision to take place. In this instance, we need to preemptively detect whether a collision is likely to occur rather than the collision itself. Let's look at a practical example of this problem. The frame miss In the example of a gun in a 3D shooter game, ray casting is used to predict the impact of a gunshot when a gun is fired. Because of the speed of an actual bullet, simulating the flight path of a bullet heading toward a target is very difficult to visually represent in a way that would satisfy and make sense to the player. This is down to the frame-based nature of the way in which games are rendered. If you consider that when a real gun is fired, it takes a tiny amount of time to reach its target—and as far as an observer is concerned it could be said to happen instantly—we can assume that even when rendering over 25 frames of our game per second, the bullet would need to have reached its target within only a few frames. In the example above, a bullet is fired from a gun. In order to make the bullet realistic, it will have to move at a speed of 500 feet per second. If the frame rate is 25 frames per second, then the bullet moves at 20 feet per frame. The problem with this is a person is about 2 feet in diameter, which means that the bullet will very likely miss the enemies shown at 5 and 25 feet away that would be hit. This is where prediction comes into play. Predictive collision detection Instead of checking for a collision with an actual bullet object, we find out whether a fired bullet will hit its target. By casting a ray forward from the gun object (thus using its forward direction) on the same frame that the player presses the fire button, we can immediately check which objects intersect the ray. We can do this because rays are drawn immediately. Think of them like a laser pointer—when you switch on the laser, we do not see the light moving forward because it travels at the speed of light—to us it simply appears. Rays work in the same way, so that whenever the player in a ray-based shooting game presses fire, they draw a ray in the direction that they are aiming. With this ray, they can retrieve information on the collider that is hit. Moreover, by identifying the collider, the game object itself can be addressed and scripted to behave accordingly. 
Even detailed information, such as the point of impact, can be returned and used to affect the resultant reaction, for example, causing the enemy to recoil in a particular direction. In our shooting game example, we would likely invoke scripting to kill or physically repel the enemy whose collider the ray hits, and as a result of the immediacy of rays, we can do this on the frame after the ray collides with, or intersects the enemy collider. This gives the effect of a real gunshot because the reaction is registered immediately. It is also worth noting that shooting games often use the otherwise invisible rays to render brief visible lines to help with aim and give the player visual feedback, but do not confuse these lines with ray casts because the rays are simply used as a path for line rendering. Adding the outpost Before we begin to use both collision detection and ray casting to open the door of our outpost, we'll need to introduce it to the scene. To begin, drag the outpost model from the Project panel to the Scene view and drop it anywhere—bear in mind you cannot position it when you drag-and-drop; this is done once you have dropped the model (that is, let go off the mouse). Once the outpost is in the Scene, you'll notice its name has also appeared in the Hierarchy panel and that it has automatically become selected. Now you're ready to position and scale it!  
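Before Part 2 walks through the actual scripts, it may help to see roughly what the ray cast approach described above looks like in code. This is only a sketch of the idea, not the article's own listing: it is written in C# (the series' scripts are in JavaScript), the reach value is an assumption, and the outpostDoor tag matches the one used later in Part 2:

using UnityEngine;

public class DoorRaySketch : MonoBehaviour
{
    // How close the player must be before the door reacts; placeholder value.
    public float reach = 3.0f;

    void Update()
    {
        RaycastHit hit;
        // Cast from the player's position along its facing direction,
        // limited to 'reach' so only a nearby, faced door is detected.
        if (Physics.Raycast(transform.position, transform.forward, out hit, reach))
        {
            if (hit.collider.gameObject.tag == "outpostDoor")
            {
                // Trigger the door-opening routine here, e.g. the OpenDoor()
                // function developed in Part 2 of this article.
            }
        }
    }
}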

Unity Game Development: Interactions (Part 2)

Packt
18 Nov 2009
14 min read
Opening the outpost

In this section, we will look at the two differing approaches for triggering the animation, giving you an overview of the two techniques that will both become useful in many other game development situations. In the first approach, we'll use collision detection—a crucial concept to get to grips with as you begin to work on games in Unity. In the second approach, we'll implement a simple ray cast forward from the player.

Approach 1—Collision detection

To begin writing the script that will trigger the door-opening animation and thereby grant access to the outpost, we need to consider which object to write a script for. In game development, it is often more efficient to write a single script for an object that will interact with many other objects, rather than writing many individual scripts that check for a single object. With this in mind, when writing scripts for a game such as this, we will write a script to be applied to the player character in order to check for collisions with many objects in our environment, rather than a script made for each object the player may interact with, which checks for the player.

Creating new assets

Before we introduce any new kind of asset into our project, it is good practice to create a folder in which we will keep assets of that type. In the Project panel, click on the Create button, and choose Folder from the drop-down menu that appears. Rename this folder Scripts by selecting it and pressing Return (Mac) or by pressing F2 (PC). Next, create a new JavaScript file within this folder simply by leaving the Scripts folder selected and clicking on the Project panel's Create button again, this time choosing JavaScript. By selecting the folder you want a newly created asset to be in before you create it, you will not have to create and then relocate your asset, as the new asset will be made within the selected folder. Rename the newly created script from the default—NewBehaviourScript—to PlayerCollisions. JavaScript files have the file extension of .js, but the Unity Project panel hides file extensions, so there is no need to attempt to add it when renaming your assets. You can also spot the file type of a script by looking at its icon in the Project panel. JavaScript files have a 'JS' written on them, C# files simply have 'C#', and Boo files have an image of a Pacman ghost, a nice little informative pun from the guys at Unity Technologies!

Scripting for character collision detection

To start editing the script, double-click on its icon in the Project panel to launch it in the script editor for your platform—Unitron on Mac, or Uniscite on PC.

Working with OnControllerColliderHit

By default, all new JavaScripts include the Update() function, and this is why you'll find it present when you open the script for the first time. Let's kick off by declaring variables we can utilise throughout the script. Our script begins with the definition of six variables—three public member variables and three private variables. Their purposes are as follows:

doorIsOpen: a private true/false (boolean) type variable acting as a switch for the script to check if the door is currently open.

doorTimer: a private floating-point (decimal-placed) number variable, which is used as a timer so that once our door is open, the script can count a defined amount of time before self-closing the door.

currentDoor: a private GameObject variable used to store the specific currently opened door.
Should you wish to add more than one outpost to the game at a later date, then this will ensure that opening one of the doors does not open them all, which it does by remembering the most recent door hit.

doorOpenTime: a floating-point (potentially decimal) numeric public member variable, which will be used to allow us to set the amount of time we wish the door to stay open in the Inspector.

doorOpenSound/doorShutSound: Two public member variables of data type AudioClip, for allowing sound clip drag-and-drop assignment in the Inspector panel.

Define the variables above by writing the following at the top of the PlayerCollisions script you are editing:

private var doorIsOpen : boolean = false;
private var doorTimer : float = 0.0;
private var currentDoor : GameObject;
var doorOpenTime : float = 3.0;
var doorOpenSound : AudioClip;
var doorShutSound : AudioClip;

Next, we'll leave the Update() function briefly while we establish the collision detection function itself. Move down two lines from:

function Update(){
}

And write in the following function:

function OnControllerColliderHit(hit : ControllerColliderHit){
}

This establishes a new function called OnControllerColliderHit. This collision detection function is specifically for use with player characters such as ours, which use the CharacterController component. Its only parameter, hit, is a variable that stores information on any collision that occurs. By addressing the hit variable, we can query information on the collision, including—for starters—the specific game object our player has collided with. We will do this by adding an if statement to our function. So within the function's braces, add the following if statement:

function OnControllerColliderHit(hit : ControllerColliderHit){
    if(hit.gameObject.tag == "outpostDoor" && doorIsOpen == false){
    }
}

In this if statement, we are checking two conditions, firstly that the object we hit is tagged with the tag outpostDoor and secondly that the variable doorIsOpen is currently set to false. Remember here that two equals symbols (==) are used as a comparative, and the two ampersand symbols (&&) simply say 'and also'. The end result means that if we hit the door's collider that we have tagged and if we have not already opened the door, then it may carry out a set of instructions. We have utilized the dot syntax to address the object we are checking for collisions with by narrowing down from hit (our variable storing information on collisions) to gameObject (the object hit) to the tag on that object. If this if statement is valid, then we need to carry out a set of instructions to open the door. This will involve playing a sound, playing one of the animation clips on the model, and setting our boolean variable doorIsOpen to true. As we are to call multiple instructions—and may need to call these instructions as a result of a different condition later when we implement the ray casting approach—we will place them into our own custom function called OpenDoor. We will write this function shortly, but first, we'll call the function in the if statement we have, by adding:

OpenDoor();

So your full collision function should now look like this:

function OnControllerColliderHit(hit : ControllerColliderHit){
    if(hit.gameObject.tag == "outpostDoor" && doorIsOpen == false){
        OpenDoor();
    }
}

Writing custom functions

Storing sets of instructions you may wish to call at any time should be done by writing your own functions.
Instead of having to write out a set of instructions or "commands" many times within a script, writing your own functions containing the instructions means that you can simply call that function at any time to run that set of instructions again. This also makes tracking mistakes in code—known as Debugging—a lot simpler, as there are fewer places to check for errors. In our collision detection function, we have written a call to a function named OpenDoor. The brackets after OpenDoor are used to store parameters we may wish to send to the function—using a function's brackets, you may set additional behavior to pass to the instructions inside the function. We'll take a look at this in more depth later in this article under the heading Function Efficiency. Our brackets are empty here, as we do not wish to pass any behavior to the function yet.

Declaring the function

To write the function we need to call, we simply begin by writing:

function OpenDoor(){
}

In between the braces of the function, much in the same way as the instructions of an if statement, we place any instructions to be carried out when this function is called.

Playing audio

Our first instruction is to play the audio clip assigned to the variable called doorOpenSound. To do this, add the following line to your function by placing it within the curly braces, after { and before }:

audio.PlayOneShot(doorOpenSound);

To be certain, it should look like this:

function OpenDoor(){
    audio.PlayOneShot(doorOpenSound);
}

Here we are addressing the Audio Source component attached to the game object this script is applied to (our player character object, First Person Controller), and as such, we'll need to ensure later that we have this component attached; otherwise, this command will cause an error. Addressing the audio source using the term audio gives us access to four functions, Play(), Stop(), Pause(), and PlayOneShot(). We are using PlayOneShot because it is the best way to play a single instance of a sound, as opposed to playing a sound and then switching clips, which would be more appropriate for continuous music than sound effects. In the brackets of the PlayOneShot command, we pass the variable doorOpenSound, which will cause whatever sound file is assigned to that variable in the Inspector to play. We will download and assign this and the clip for closing the door after writing the script.

Checking door status

One condition of our if statement within our collision detection function was that our boolean variable doorIsOpen must be set to false. As a result, the second command inside our OpenDoor() function is to set this variable to true. This is because the player character may collide with the door several times when bumping into it, and without this boolean, they could potentially trigger the OpenDoor() function many times, causing sound and animation to recur and restart with each collision. By adding in a variable that when false allows the OpenDoor() function to run and then disallows it by setting the doorIsOpen variable to true immediately, any further collisions will not re-trigger the OpenDoor() function. Add the line:

doorIsOpen = true;

to your OpenDoor() function now by placing it between the curly braces, after the previous command you just added.

Playing animation

We have already imported the outpost asset package and looked at various settings on the asset before introducing it to the game in this article. One of the tasks performed in the import process was the setting up of animation clips using the Inspector.
By selecting the asset in the Project panel, we specified in the Inspector that it would feature three clips:

idle (a 'do nothing' state)
dooropen
doorshut

In our OpenDoor() function, we'll call upon a named clip using a String of text to refer to it. However, first we'll need to state which object in our scene contains the animation we wish to play. Because the script we are writing is to be attached to the player, we must refer to another object before referring to the animation component. We do this by stating the line:

var myOutpost : GameObject = GameObject.Find("outpost");

Here we are declaring a new variable called myOutpost by setting its type to be a GameObject and then selecting a game object with the name outpost by using GameObject.Find. The Find command selects an object in the current scene by its name in the Hierarchy and can be used as an alternative to using tags. Now that we have a variable representing our outpost game object, we can use this variable with dot syntax to call animation attached to it by stating:

myOutpost.animation.Play("dooropen");

This simply finds the animation component attached to the outpost object and plays the animation called dooropen. The Play() command can be passed any string of text characters, but this will only work if the animation clips have been set up on the object in question. Your finished OpenDoor() custom function should now look like this:

function OpenDoor(){
    audio.PlayOneShot(doorOpenSound);
    doorIsOpen = true;
    var myOutpost : GameObject = GameObject.Find("outpost");
    myOutpost.animation.Play("dooropen");
}

Reversing the procedure

Now that we have created a set of instructions that will open the door, how will we close it once it is open? To aid playability, we will not force the player to actively close the door but instead establish some code that will cause it to shut after a defined time period. This is where our doorTimer variable comes into play. We will begin counting as soon as the door becomes open by adding a value of time to this variable, and then check when this variable has reached a particular value by using an if statement. Because we will be dealing with time, we need to utilize a function that will constantly update, such as the Update() function we had awaiting us when we created the script earlier. Create some empty lines inside the Update() function by moving its closing curly brace } a few lines down. Firstly, we should check if the door has been opened, as there is no point in incrementing our timer variable if the door is not currently open. Write in the following if statement to increment the timer variable with time if the doorIsOpen variable is set to true:

if(doorIsOpen){
    doorTimer += Time.deltaTime;
}

Here we check if the door is open — this is a variable that by default is set to false, and will only become true as a result of a collision between the player object and the door. If the doorIsOpen variable is true, then we add the value of Time.deltaTime to the doorTimer variable. Bear in mind that simply writing the variable name as we have done in our if statement's condition is the same as writing doorIsOpen == true. Time.deltaTime is a member of the Time class that runs independent of the game's frame rate. This is important because your game may be run on varying hardware when deployed, and it would be odd if time slowed down on slower computers and was faster when better computers ran it.
As a result, when adding time, we can use Time.deltaTime to measure the time taken to complete the last frame, and with this information we can count in real time regardless of frame rate.

Next, we need to check whether our timer variable, doorTimer, has reached a certain value, which means that a certain amount of time has passed. We will do this by nesting an if statement inside the one we just added; this means the if statement we are about to add will only be checked when the doorIsOpen condition is true. Add the following code below the time-incrementing line, inside the existing if statement:

if(doorTimer > doorOpenTime){
    shutDoor();
    doorTimer = 0.0;
}

This addition to our code will be checked constantly as soon as the doorIsOpen variable becomes true, and waits until the value of doorTimer exceeds the value of the doorOpenTime variable. Because we are using Time.deltaTime as the incremental value, this means three real-time seconds will have passed, unless of course you change the value of doorOpenTime from its default of 3 in the Inspector. Once doorTimer has exceeded that value, the shutDoor() function is called and the doorTimer variable is reset to zero so that it can be used again the next time the door is triggered. If this reset were not included, doorTimer would get stuck above the threshold, and the door would close again as soon as it was opened. Your completed Update() function should now look like this:

function Update(){
    if(doorIsOpen){
        doorTimer += Time.deltaTime;
        if(doorTimer > doorOpenTime){
            shutDoor();
            doorTimer = 0.0;
        }
    }
}

Now, add the following function, called shutDoor(), to the bottom of your script. Because it performs largely the same job as OpenDoor(), we will not discuss it in depth. Simply observe that a different animation is called on the outpost, and that our doorIsOpen variable gets reset to false so that the entire procedure may start over:

function shutDoor(){
    audio.PlayOneShot(doorShutSound);
    doorIsOpen = false;
    var myOutpost : GameObject = GameObject.Find("outpost");
    myOutpost.animation.Play("doorshut");
}
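For reference, here is a sketch of how the whole door script fits together once the pieces above are in place. The variable declarations and the collision detection function were written earlier in the article and are not shown in this excerpt, so the versions below are reconstructions: in particular, the OnControllerColliderHit callback and the hit.gameObject.name == "door" check are assumptions for illustration, and the name or tag you actually used for the door may differ.

// Assumed declarations from earlier in the article: sound clips, door state, and timing
var doorOpenSound : AudioClip;
var doorShutSound : AudioClip;
var doorIsOpen : boolean = false;
var doorTimer : float = 0.0;
var doorOpenTime : float = 3.0;

// Assumed collision detection from earlier in the article; "door" is a placeholder name
function OnControllerColliderHit(hit : ControllerColliderHit){
    if(hit.gameObject.name == "door" && doorIsOpen == false){
        OpenDoor();
    }
}

function Update(){
    // Count only while the door is open, then shut it after doorOpenTime seconds
    if(doorIsOpen){
        doorTimer += Time.deltaTime;
        if(doorTimer > doorOpenTime){
            shutDoor();
            doorTimer = 0.0;
        }
    }
}

function OpenDoor(){
    audio.PlayOneShot(doorOpenSound);
    doorIsOpen = true;
    var myOutpost : GameObject = GameObject.Find("outpost");
    myOutpost.animation.Play("dooropen");
}

function shutDoor(){
    audio.PlayOneShot(doorShutSound);
    doorIsOpen = false;
    var myOutpost : GameObject = GameObject.Find("outpost");
    myOutpost.animation.Play("doorshut");
}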
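One optional design note, not something the article itself asks you to do: GameObject.Find searches the scene by name every time it is called, so calling it inside both OpenDoor() and shutDoor() repeats that search each time the door opens or closes. A common alternative is to look the outpost up once when the script starts and reuse the reference. A minimal sketch of that idea follows; the private variable shown here is made up for this example.

// Cache the outpost reference once instead of searching for it on every open/close
private var myOutpost : GameObject;

function Start(){
    myOutpost = GameObject.Find("outpost");
}

function OpenDoor(){
    audio.PlayOneShot(doorOpenSound);
    doorIsOpen = true;
    myOutpost.animation.Play("dooropen");
}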
Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 1

Packt
18 Nov 2009
7 min read
A 3D game must be attractive. It has to offer amazing effects for the main characters and in the background. A spaceship has to fly through a meteor shower. An asteroid belt has to draw waves while a UFO pursues a spaceship. A missile should make a plane explode. The real world shows us things moving everywhere. Most of these scenes, however, aren't repetitive sequences. Hence, we have to combine great designs, artificial intelligence (AI), and advanced physics to create special effects.

Working with 3D characters in the background

So far, we have added physics, collision detection capabilities, life, and action to our 3D scenes. We were able to simulate real-life effects for the collision of two 3D characters by adding some artificial intelligence. However, we need to combine this action with additional effects to create a realistic 3D world. Players want to move the camera while playing so that they can watch amazing effects. They want to be part of each 3D scene as if it were a real-life situation.

How can we create complex and realistic backgrounds capable of adding realistic behavior to the game? We can do this by combining everything we have learned so far with a good object-oriented design. We have to create random situations combined with more advanced physics. We have to add more 3D characters with movement to the scenes. We must add complexity to the backgrounds. We can work with many independent physics engines to simulate parallel worlds. In real life, there are concurrent and parallel worlds. We have to reproduce this behavior in our 3D scenes.

Time for action – adding a transition to start the game

Your project manager does not want the game to start immediately. He wants you to add a button in order to allow the player to start the game by clicking on it. As you are using Balder, adding a button is not as simple as expected. We are going to add a button to the main page, and we are going to change Balder's default game initialization:

1. Stay in the 3DInvadersSilverlight project. Expand App.xaml in the Solution Explorer and open App.xaml.cs, the C# code for App.xaml.
2. Comment out the following line of code (we are not going to use Balder's services in this class):

   //using Balder.Silverlight.Services;

3. Comment out the following line of code in the event handler for the Application_Startup event, after the line this.RootVisual = new MainPage();:

   //TargetDevice.Initialize<InvadersGame>();

4. Open the XAML code for MainPage.xaml and add the following lines of code after the line (you will see a button with the title Start the game!):

   <!-- A button to start the game -->
   <Button x:Name="btnStartGame" Content="Start the game!"
           Canvas.Left="200" Canvas.Top="20"
           Width="200" Height="30"
           Click="btnStartGame_Click"></Button>

5. Now, expand MainPage.xaml in the Solution Explorer and open MainPage.xaml.cs, the C# code for MainPage.xaml. Add the following line of code at the beginning (as we are going to use many of Balder's classes and interfaces):

   using Balder.Silverlight.Services;

6. Add the following lines of code to program the event handler for the button's Click event (this code will initialize the game using Balder's services):

   private void btnStartGame_Click(object sender, RoutedEventArgs e)
   {
       btnStartGame.Visibility = Visibility.Collapsed;
       TargetDevice.Initialize<InvadersGame>();
   }

7. Build and run the solution. Click on the Start the game! button and the UFOs will begin their chase game.
The button will make a transition to start the game, as shown in the following screenshots:

What just happened?

You could use a Start the game! button to start a game using Balder's services. Now, you will be able to offer the player more control over some parameters before starting the game. We commented out the code that started the game during the application start-up. Then, we added a button to the main page (MainPage). The code programmed in its Click event handler initializes the desired Balder.Core.Game subclass (InvadersGame) using just one line:

TargetDevice.Initialize<InvadersGame>();

This initialization adds a new Canvas, controlled by Balder, as another child of the layout root to render the 3D scenes. Thus, we had to make some changes in order to add a simple button to control this initialization.

Time for action – creating a low polygon count meteor model

The 3D digital artists are creating models for many aliens. They do not have the time to create simple models. Hence, they teach you to use Blender and 3D Studio Max to create simple models with a low polygon count. Your project manager wants you to add dozens of meteors to the existing chase game. A gravitational force must attract these meteors, and they have to appear at random initial positions in the 3D world.

First, we are going to create a low polygon count meteor using 3D Studio Max. Then, we are going to add a texture based on a PNG image and export the 3D model to the ASE format, compatible with Balder. As previously explained, we have to do this in order to export the ASE format with a bitmap texture definition enveloping the meshes. We can also use Blender or any other 3D DCC tool to create this model. We have already learned how to export to the ASE format from Blender. Thus, this time, we are going to learn the necessary steps to do it using 3D Studio Max.

1. Start 3D Studio Max and create a new scene.
2. Add a sphere with six segments. Locate the sphere in the scene's center.
3. Use the Uniform Scale tool to resize the low polygon count sphere to 11.329 on all three axes, as shown in the following screenshot:
4. Click on the Material Editor button. Click on the first material sphere, in the Material Editor window's upper-left corner.
5. Click on the small square at the right side of the Diffuse color rectangle, as shown in the following screenshot:
6. Select Bitmap from the list shown in the Material/Map Browser window that pops up and click on OK.
7. Select the PNG file to be used as a texture to envelope the sphere. You can use Bricks.PNG, previously downloaded from http://www.freefoto.com/. You just need to add a reference to a bitmap file. Then, click on Open. The Material Editor preview panel will show a small sphere thumbnail enveloped by the selected bitmap, as shown in the following screenshot:
8. Drag the new material and drop it on the sphere. If you are facing problems, remember that the 3D digital artist created a similar sphere a few days ago and left the meteor.max file in the following folder: C:\Silverlight3D\Invaders3D\3DModels\METEOR.
9. Save the file using the name meteor.max in the previously mentioned folder.
10. Now, you have to export the model to the ASE format with the reference to the texture. Therefore, select File | Export and choose ASCII Scene Export (*.ASE) in the Type combo box.
11. Select the aforementioned folder, enter the file name meteor.ase, and click on Save.
12. Check the following options in the ASCII Export dialog box
    (they are unchecked by default):

    Mesh Normals
    Mapping Coordinates
    Vertex Colors

    The dialog box should be similar to the one shown in the following screenshot:
13. Click on OK. Now, the model is available as an ASE 3D model with a reference to the texture. You will have to change the absolute path to the bitmap that defines the texture in order to allow Balder to load the model in a Silverlight application.