
How-To Tutorials - 3D Game Development

115 Articles

Blender 2.5: Detailed Render of the Earth from Space

Packt
25 May 2011
10 min read
Blender 2.5 HOTSHOT: Challenging and fun projects that will push your Blender skills to the limit

Our purpose is to create a very detailed view of the earth from space. By detailed, we mean that it includes land, oceans, and clouds, and not only their color and specular reflection but also the roughness they appear to have when seen from space. For this project, we are going to do some work with textures and get them properly set up for our needs (and for Blender's way of working).

What Does It Do?

We will create a nice image of the earth resembling the beautiful photographs taken from orbit, showing the sun rising over the rim of the planet. For this, we will need to work carefully with some textures, set up a basic scene, and create a fairly complex node setup for compositing the final result. In the final image, we will get some very nice effects: the volumetric look of the atmosphere around the rim, the strong highlight of the sun as it rises over the rim of the earth, and the very calm, bluish look of the dark side of the earth when lit by the moon.

Why Is It Awesome?

With this project, we are going to see how important it is to have good textures to work with. Having the right textures for the job saves a lot of time when producing a high-quality render. Not only are we going to work with some very good textures that are freely available on the Internet, but we are also going to do some hand tweaking to tune them exactly as we need. Along the way we will learn how much time can be saved by simply preprocessing the textures into finalized maps that can be fed directly to the material, without resorting to complex tricks that would only cause headaches.

One of the nicest aspects of this project is seeing how far we can take a very simple scene by using Blender's compositor. We are definitely going to learn some useful compositing tricks.

Your Hotshot Objectives

This project will be tackled in five parts:

1. Preprocessing the textures
2. Object setup
3. Lighting setup
4. Compositing preparation
5. Compositing

Mission Checklist

The key to the success of this project is getting the right set of quality images at a sufficiently high resolution. Go to www.archive.org and look up www.oera.net/How2.htm on the Wayback Machine. Choose the snapshot from the Apr 18, 2008 link, then click on the image titled Texture maps of the Earth and Planets. Once there, download these images:

- Earth texture natural colors
- Earth clouds
- Earth elevation/bump
- Earth water/land mask

Remember to save the high-resolution version of each image and put it in the tex folder inside the project's main folder. We will also need Gimp to preprocess the textures, so make sure it is installed. We'll be working with version 2.6.

Preprocessing the Textures

The textures we downloaded are quite good, both in resolution and in how cleanly they separate each aspect of the earth's shading. There is a catch, though: using the clouds, elevation, and water/land textures as they are would cause a lot of headaches inside Blender. So let's do some basic preprocessing to produce a finalized, separate map for each channel of the shader that will be created.

Engage Thrusters

For each texture we work on, make sure the previous one is closed, to avoid mixing up the wrong textures.
Clouds Map

Drag the EarthClouds_2500x1250.jpg image from the tex folder into Gimp's empty window to load it. Now locate the Layers window, right-click on the thumbnail of the Background layer, and select the entry labeled Add Layer Mask... from the menu. In the dialog box, select the Grayscale copy of layer option. Once the mask is added to the layer, the black part of the texture should look transparent.

Looking at the image after adding the mask, we'll notice the clouds seem too transparent. To fix this, we will adjust the layer's mask directly. Go to the Layers window and click on the thumbnail of the mask (the one on the right-hand side) to make it active (its border should become white). Then go to the main window (the one containing the image) and open Colors | Curves.... In the Adjust Color Curves dialog, add two control points and shape the curve as shown in the next screenshot. The purpose of this curve is to make the light gray pixels of the mask lighter and the dark ones darker; the steep slope between the two control points makes the border of the mask sharper. Make sure the Value channel is selected and click OK. Now take a look at the image and notice how strong the contrast is and how well defined the clouds are.

Finally, go to Image | Mode | RGB to set the image's internal data format to a safe format (avoiding the risk of Blender getting confused by it). Now we only need to go to File | Save A Copy... and save the image as EarthClouds.png in the tex folder of the project. In the dialogs asking for confirmation, make sure to tell Gimp to apply the layer mask (click Export in the first dialog). For the PNG settings, the default values are fine. Close the current image in Gimp, leaving the main window empty, so we can start working on the next texture.

Specular Map

Start by dragging the image named EarthMask_2500x1250.jpg onto Gimp's main window to open it. Then drag EarthClouds_2500x1250.jpg over the previous one to add it as a separate layer.

Now we need to make sure that the images are correctly aligned. Go to View | Zoom | 4:1 (400%) so that the layer can be moved with pixel precision. Then go to the bottom right-hand corner of the window and click-and-drag the four-arrows icon until one of the image's corners is visible in the viewport. Once the right spot is in view, activate the Move tool in the Toolbox. Finally, drag the clouds layer so that its corner exactly matches the corner of the water/land image, then switch to another zoom level by going to View | Zoom | 1:4 (25%).

Now go to the Layers window, select the EarthClouds layer, and set its blending mode to Multiply (the Mode drop-down above the layers list). Then go to the main window and select Colors | Invert. Finally, switch the image to RGB mode via Image | Mode | RGB and the processing is done. Remember to save the image as EarthSpecMap.jpg in the tex folder of the project and close it in Gimp.

The purpose of creating this specular map is to correctly mix the specularity of the ocean (full) with that of the clouds above it (null). This way, we get correct specularity both in the ocean and in the clouds.
If we just used the water/land mask to control specularity, the clouds above the ocean would have specular reflection, which is wrong.

Bump Map

The bump map controls the roughness of the material; it is very important, as it adds a lot of detail to the final render without requiring actual geometry to represent it.

First, drag EarthElevation_2500x1250.jpg into Gimp's main window to open it. Then drag EarthClouds_2500x1250.jpg over the previous one so that it loads as a layer above it. Zoom in via View | Zoom | 4:1 (400%), drag the view so that one of the corners is visible, and use the Move tool to get the clouds layer exactly matching the elevation layer. Then switch back to a wider view via View | Zoom | 1:4 (25%).

Now it's time to add a mask to the clouds layer. Right-click on the clouds layer, select the Add Layer Mask... entry from the menu, choose the Grayscale copy of layer option in the dialog box, and click Add.

What we have so far is a map that defines how intense the roughness of the surface will be at each point. But there is a problem: the clouds are as bright as, or even brighter than, the Andes and the Himalayas, which means the render process would distort them quite a lot. Since we know that the roughness of the clouds must be lower, let's perform another step to correct the map accordingly. Select the left thumbnail of the clouds layer (the layer's color channel), then go to the main window and open the Levels tool via Colors | Levels.... In the Output Levels part of the dialog box, change the value 255 (on the right-hand side) to 66 and click OK. Now we have a map that clearly gives a stronger value to the highest mountains on earth than to the clouds, which is exactly what we needed. Finally, change the image mode to RGB (Image | Mode | RGB) and save the image as EarthBumpMap.jpg in the tex folder of the project.

Notice that we are mixing the bump maps of the clouds and the mountains. The reason is that working with separate bump maps would put us in a very tricky situation inside Blender; working with a single bump map is much easier than trying to mix two or more. Now we can close Gimp, since we will work exclusively inside Blender from now on.

Objective Complete - Mini Debriefing

This part of the project was just a preparation of the textures. We had to create these new textures for three reasons:

- To give the clouds texture a proper alpha channel, which will save us trouble when working with it in Blender.
- To control the specular map properly in the regions covered by clouds, as the clouds must not have specular reflection.
- To create a single, unified bump map for the whole planet, which will save us a lot of trouble when controlling the Normal channel of the material in Blender.

Notice that we use the term "bump map" for a texture that controls the "normal" channel of the material. We don't call it a "normal map" because a normal map is a special kind of texture that isn't coded in grayscale, unlike our texture.
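For readers who prefer to script this preprocessing rather than clicking through Gimp, here is a minimal sketch of the same three steps using the Pillow imaging library. Pillow itself, the curve approximation, and the way the bump layers are combined are our own assumptions (the article only prescribes the Gimp steps), so treat this as a starting point rather than a drop-in replacement:

```python
# pip install pillow -- assumed library; not part of the article's workflow
from PIL import Image, ImageChops

clouds = Image.open("tex/EarthClouds_2500x1250.jpg").convert("L")
mask = Image.open("tex/EarthMask_2500x1250.jpg").convert("L")
elev = Image.open("tex/EarthElevation_2500x1250.jpg").convert("L")

# 1) Clouds map: use a contrast-boosted copy of the clouds as their own
#    alpha channel (rough stand-in for the Curves step on the layer mask).
alpha = clouds.point(lambda v: 0 if v < 64
                     else 255 if v > 160
                     else int((v - 64) * 255 / 96))
Image.merge("RGBA", (clouds, clouds, clouds, alpha)).save("tex/EarthClouds.png")

# 2) Specular map: multiply the clouds over the water/land mask, then invert,
#    exactly mirroring the Multiply + Invert steps above.
spec = ImageChops.invert(ImageChops.multiply(mask, clouds))
spec.convert("RGB").save("tex/EarthSpecMap.jpg")

# 3) Bump map: scale the clouds down (Levels output 255 -> 66) and keep the
#    brighter of clouds and elevation, so mountains stay stronger than clouds.
dim_clouds = clouds.point(lambda v: v * 66 // 255)
bump = ImageChops.lighter(elev, dim_clouds)
bump.convert("RGB").save("tex/EarthBumpMap.jpg")
```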


Panda3D Game Development: Scene Effects and Shaders

Packt
20 Apr 2011
8 min read
While brilliant gameplay is the key to a fun and successful game, it is essential to deliver beautiful visuals to provide a pleasing experience and immerse the player in the game world. The look of many modern productions is dominated by all sorts of visual magic, creating the jaw-dropping visual density that players soak up with joy and that makes them feel connected to the action and gameplay they are experiencing. The appearance of your game matters a lot to its reception by players, so it is important to know how to leverage your technology to get the best possible looks out of it. This article will show you how Panda3D allows you to create great-looking games using lights, shaders, and particles.

Adding lights and shadows in Panda3D

Lights and shadows are very important techniques for producing a great presentation. Proper scene lighting sets the mood and adds depth to an otherwise flat-looking scene, while shadows add realism and, more importantly, root the shadow-casting objects to the ground, destroying the impression of models floating in mid-air. This recipe will show you how to add lights to your game scenes and make objects cast shadows to boost your visuals.

Getting ready

You need to create the setup presented in Setting up the game structure before proceeding, as this recipe continues and builds upon that base code.

How to do it...

This recipe consists of these tasks:

1. Add the following code to Application.py:

```python
from direct.showbase.ShowBase import ShowBase
from direct.actor.Actor import Actor
from panda3d.core import *

class Application(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)

        # Animated panda actor and a simple ground plane
        self.panda = Actor("panda", {"walk": "panda-walk"})
        self.panda.reparentTo(render)
        self.panda.loop("walk")

        cm = CardMaker("plane")
        cm.setFrame(-10, 10, -10, 10)
        plane = render.attachNewNode(cm.generate())
        plane.setP(270)

        self.cam.setPos(0, -40, 6)

        # One light of each available type
        ambLight = AmbientLight("ambient")
        ambLight.setColor(Vec4(0.2, 0.1, 0.1, 1.0))
        ambNode = render.attachNewNode(ambLight)
        render.setLight(ambNode)

        dirLight = DirectionalLight("directional")
        dirLight.setColor(Vec4(0.1, 0.4, 0.1, 1.0))
        dirNode = render.attachNewNode(dirLight)
        dirNode.setHpr(60, 0, 90)
        render.setLight(dirNode)

        pntLight = PointLight("point")
        pntLight.setColor(Vec4(0.8, 0.8, 0.8, 1.0))
        pntNode = render.attachNewNode(pntLight)
        pntNode.setPos(0, 0, 15)
        self.panda.setLight(pntNode)  # affects only the panda

        sptLight = Spotlight("spot")
        sptLens = PerspectiveLens()
        sptLight.setLens(sptLens)
        sptLight.setColor(Vec4(1.0, 0.0, 0.0, 1.0))
        sptLight.setShadowCaster(True)
        sptNode = render.attachNewNode(sptLight)
        sptNode.setPos(-10, -10, 20)
        sptNode.lookAt(self.panda)
        render.setLight(sptNode)

        # Enable the shader generator for per-pixel lighting and shadows
        render.setShaderAuto()
```

2. Start the program with the F6 key. You will see the following scene:

How it works...

As we can see when starting our program, the panda is lit by multiple lights, casting shadows onto itself and the ground plane. Let's see how we achieved this effect. After setting up the scene containing our panda and a ground plane, one light of each available type is added to the scene. The general pattern we follow is to create a new light instance and add it to the scene using the attachNewNode() method. Finally, the light is turned on with setLight(), which causes the calling object and all of its children in the scene graph to receive the light. We use this to make the point light affect only the panda, not the ground plane.
Shadows are very simple to enable and disable using the setShadowCaster() method, as we can see in the code that initializes the spotlight. The line render.setShaderAuto() enables the shader generator, which makes the lighting per-pixel. The shader generator also needs to be enabled for shadows to work: if this line is removed, lighting will look coarser and no shadows will be visible at all.

Watch the number of lights you are adding to your scene! Every light that contributes to the scene adds computation cost, which will hurt you if you intend to use hundreds of lights in a scene. Always try to detect the nearest lights in the level, use those for lighting, and disable the rest to save performance (see the sketch after the light-type overview below).

There's more...

In the sample code, we add several types of lights with different properties, which may need some further explanation.

Ambient light sets the base tone of a scene. It has no position or direction; its color is simply added to all surface colors in the scene, which keeps unlit parts of the scene from appearing completely black. You shouldn't set the ambient color to very high intensities, as this decreases the effect of other lights and makes the scene appear flat and washed out.

Directional lights do not have a position; only their orientation counts. This light type is generally used to simulate sunlight: it comes from a general direction and affects all light-receiving objects equally.

A point light illuminates the scene from a point of origin from which light spreads in all directions. You can think of it as a (very abstract) light bulb.

Spotlights, just like the headlights of a car or a flashlight, create a cone of light that originates from a given position and points in a given direction. The way the light spreads is determined by a lens, just like the viewing frustum of a camera.
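Picking up the performance note above, here is a minimal sketch of one way to keep only the nearest few lights enabled. The method name, the self.lightNodes list, and the budget of four lights are our own assumptions for illustration; only setLight(), clearLight(), getDistance(), and the task interface come from Panda3D itself:

```python
    # Minimal sketch (ours, not from the recipe): enable only the k nearest lights.
    def updateLights(self, task):
        k = 4  # assumed budget of simultaneously active lights
        # self.lightNodes is an assumed list of light NodePaths in the level
        byDistance = sorted(self.lightNodes,
                            key=lambda lightNP: lightNP.getDistance(self.panda))
        for lightNP in byDistance[:k]:
            render.setLight(lightNP)    # enable the nearest lights
        for lightNP in byDistance[k:]:
            render.clearLight(lightNP)  # disable the rest to save shader work
        return task.cont

    # Registered like any other task, e.g.:
    # taskMgr.add(self.updateLights, "UpdateLights")
```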
Using light ramps

The lighting system of Panda3D allows you to pull off some additional tricks to create dramatic effects with scene lights. In this recipe, you will learn how to use light ramps to modify the lights' effect on the models and actors in your game scenes.

Getting ready

In this recipe we will extend the code created in Adding lights and shadows, found earlier in this article. Please review that recipe before proceeding if you haven't done so yet.

How to do it...

Light ramps can be used like this:

1. Open Application.py and modify the existing code as shown:

```python
from direct.showbase.ShowBase import ShowBase
from direct.actor.Actor import Actor
from panda3d.core import *
from direct.interval.IntervalGlobal import *

class Application(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.panda = Actor("panda", {"walk": "panda-walk"})
        self.panda.reparentTo(render)
        self.panda.loop("walk")

        cm = CardMaker("plane")
        cm.setFrame(-10, 10, -10, 10)
        plane = render.attachNewNode(cm.generate())
        plane.setP(270)

        self.cam.setPos(0, -40, 6)

        # Brighter lights than in the previous recipe, so the HDR ramps
        # have intensities above 1.0 to work with
        ambLight = AmbientLight("ambient")
        ambLight.setColor(Vec4(0.3, 0.2, 0.2, 1.0))
        ambNode = render.attachNewNode(ambLight)
        render.setLight(ambNode)

        dirLight = DirectionalLight("directional")
        dirLight.setColor(Vec4(0.3, 0.9, 0.3, 1.0))
        dirNode = render.attachNewNode(dirLight)
        dirNode.setHpr(60, 0, 90)
        render.setLight(dirNode)

        pntLight = PointLight("point")
        pntLight.setColor(Vec4(3.9, 3.9, 3.8, 1.0))
        pntNode = render.attachNewNode(pntLight)
        pntNode.setPos(0, 0, 15)
        self.panda.setLight(pntNode)

        sptLight = Spotlight("spot")
        sptLens = PerspectiveLens()
        sptLight.setLens(sptLens)
        sptLight.setColor(Vec4(1.0, 0.4, 0.4, 1.0))
        sptLight.setShadowCaster(True)
        sptNode = render.attachNewNode(sptLight)
        sptNode.setPos(-10, -10, 20)
        sptNode.lookAt(self.panda)
        render.setLight(sptNode)

        render.setShaderAuto()

        # Switch to the next light ramp every three seconds
        self.activeRamp = 0
        toggle = Func(self.toggleRamp)
        switcher = Sequence(toggle, Wait(3))
        switcher.loop()

    def toggleRamp(self):
        if self.activeRamp == 0:
            render.setAttrib(LightRampAttrib.makeDefault())
        elif self.activeRamp == 1:
            render.setAttrib(LightRampAttrib.makeHdr0())
        elif self.activeRamp == 2:
            render.setAttrib(LightRampAttrib.makeHdr1())
        elif self.activeRamp == 3:
            render.setAttrib(LightRampAttrib.makeHdr2())
        elif self.activeRamp == 4:
            render.setAttrib(LightRampAttrib.makeSingleThreshold(0.1, 0.3))
        elif self.activeRamp == 5:
            render.setAttrib(LightRampAttrib.makeDoubleThreshold(0, 0.1, 0.3, 0.8))
        self.activeRamp += 1
        if self.activeRamp > 5:
            self.activeRamp = 0
```

2. Press F6 to start the sample and watch it switch through the available light ramps, as shown in this screenshot:

How it works...

The original lighting equation that Panda3D uses to calculate the final screen color of a lit pixel clamps color intensities to a range from zero to one. By using light ramps, we can go beyond these limits, or even define our own, to create dramatic effects just like the ones in the sample program.

In the sample code, we increase the lighting intensity and add a method that switches between the available light ramps, beginning with LightRampAttrib.makeDefault(), which sets the default clamping thresholds for the lighting calculations. Then, the high dynamic range ramps are enabled one after another. These light ramps allow a higher range of color intensities that go beyond the standard range between zero and one. The high intensities are then mapped back into the displayable range, allocating different shares of it to overly bright values. With makeHdr0(), we allocate a quarter of the displayable range to brightness values greater than one; with makeHdr1() it is a third, and with makeHdr2() Panda3D uses half of the color range for overly bright values. This doesn't come without side effects, though.
By increasing the range used for high intensities, we decrease the range available for color intensities that lie between 0 and 1, losing contrast and making the scene look gray and washed out.

Finally, the makeSingleThreshold() and makeDoubleThreshold() methods let us create very interesting lighting effects. With a single threshold, lighting values below the given limit are ignored, while anything that exceeds the threshold is set to the intensity given in the second parameter of the method. The double threshold works analogously, but lighting intensity is normalized to one of two possible values, depending on which of the two thresholds was exceeded.
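As a quick usage note, a thresholded ramp is an easy way to get a flat, cartoon-style look. The following one-liner is our own sketch, assuming the scene is already lit and setShaderAuto() is active; the parameter order follows the (threshold, level) pattern used in the recipe, and the exact values are illustrative:

```python
# Minimal sketch: a two-tone, cel-shaded look using a threshold ramp.
# Light below 0.3 is ignored; anything above it is flattened to 0.7.
render.setAttrib(LightRampAttrib.makeSingleThreshold(0.3, 0.7))

# Revert to standard clamped lighting at any time:
# render.setAttrib(LightRampAttrib.makeDefault())
```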


Introduction to Color Theory and Lighting Basics in Blender

Packt
14 Apr 2011
7 min read
Basic color theory

To fully understand how light works, we need a basic understanding of what color is and how different colors interact with each other. The study of this phenomenon is known as color theory.

What is color?

When light comes in contact with an object, the object absorbs a certain amount of that light. The rest is reflected into the eye of the viewer in the form of color. The easiest way to visualize colors and their relations is in the form of a color wheel.

Primary colors

There are millions of colors, but only three that cannot be created through color mixing: red, yellow, and blue. These are known as primary colors, and they are used to create the other colors on the color wheel through a process known as color mixing. Through color mixing, we get the other "sets" of colors, including secondary and tertiary colors.

Secondary colors

Secondary colors are created when two primary colors are mixed together. For example, mixing red and blue makes purple, red and yellow make orange, and blue and yellow make green.

Tertiary colors

It's natural to assume that, because mixing two primary colors creates a secondary color, mixing two secondary colors would create a tertiary color. Surprisingly, this isn't the case. A tertiary color is, in fact, the result of mixing a primary and a secondary color together. This gives us the remainder of the color wheel:

- Red-orange
- Orange-yellow
- Chartreuse
- Turquoise
- Indigo
- Violet-red

Color relationships

There are other relationships between colors that we should know about before we start using Blender. The first is complementary colors. Complementary colors are colors that sit across from each other on the color wheel; red and green, for example, are complements. Complementary colors are especially useful for creating contrast in an image, because mixing them together darkens the hue. In a computer program, mixing perfect complements results in black, while mixing complements in a more traditional medium such as oil pastels gives more of a dark brown hue. In both cases, though, the complements are used to create a darker value. Be wary of using complementary colors in computer graphics: if complementary colors mix accidentally, the result is black artifacts in images or animations.

The other color relationship to be aware of is analogous colors. Analogous colors are colors found next to each other on the color wheel. For example, red, red-orange, and orange are analogous. Here's the kicker: red, orange, and yellow can be analogous as well. A good rule of thumb is that as long as you don't span more than one primary color on the color wheel, the colors are most likely considered analogous.

Color temperature

Understanding color temperature is an essential step in understanding how lights work; at the very least, it helps us understand why certain lights emit the colors they do. No light source emits a constant light wavelength. Even the sun, although considered a constant light source, is filtered by the atmosphere to varying degrees depending on the time of day, changing its perceived color. Color temperature is typically measured in kelvins (K) and spans a color range from a red to a blue hue, as in the image below:

Real world, real lights

So how is color applicable beyond a two-dimensional color wheel? In the real world, our eyes perceive color because light from the sun, which contains all colors in the visible spectrum, is reflected off of objects in our field of vision.
As light hits an object, some wavelengths are absorbed, while the rest are reflected. Those reflected rays determine the color we perceive that particular object to be. Of course, the sun isn't our only source of light. There are many different types of natural and artificial light sources, each with its own unique properties. The most common light sources we may try to simulate in Blender include:

- Candlelight
- Incandescent light
- Fluorescent light
- Sunlight
- Skylight

Candlelight

Candlelight is a source of light as old as time. It has been used for thousands of years and is still used today in many cases. The color temperature of a candle's light is about 1500 K, giving it a warm red-orange hue. Candlelight also tends to create very high contrast between lit and unlit areas of a room, which makes for a very effective dramatic effect.

Incandescent light bulbs

When most people hear the term "light bulb", the incandescent light bulb immediately comes to mind. It's also known as a tungsten-halogen light bulb. It's your typical household light bulb, burning at approximately 2800 K-3200 K. This color temperature still falls within the orange-yellow part of the spectrum, but it is noticeably brighter than the light of a candle.

Fluorescent light bulbs

Fluorescent lights are an alternative to incandescent bulbs. Also known as mercury vapor lights, fluorescents burn at a color temperature range of 3500 K-5900 K, allowing them to emit a color anywhere between a yellow and a white hue. They're commonly used to light a large area effectively, such as a warehouse, school hallway, or conference room.

The sun and the sky

Now let's take a look at some natural sources of light. The most obvious example is the sun. The sun burns at a color temperature of approximately 5500 K, giving it its bright white color. We rarely use pure white as a light's color in 3D, though, as it makes a scene look too artificial. Instead, we may choose a color that best suits the scene at hand. For example, if we are lighting a desert scene, we may choose a beige color to simulate light bouncing off the sand. But even so, this still doesn't produce an entirely realistic effect. This is where the next source of light comes in: the sky. The sky can produce an entire array of colors, from deep purple to orange to bright blue, covering a color temperature range of 6000 K-20,000 K. That's a huge range! We can really use this to our advantage in our 3D scenes; the color of the sky can have the final say in the mood of your scene.

Chromatic adaptation

What is chromatic adaptation? We're all more familiar with this process than you may realize. As lighting changes, the color we perceive from the world around us changes, and to accommodate those changes, our eyes adjust what we see toward something we're more familiar with (or what our brains would consider normal). When working in 3D, you have to keep this in mind: even though your 3D scene may be lit correctly in physical terms, it may not look natural, because the computer renders the final image objectively, without the chromatic adaptation that we, as humans, are used to.

Take this image for example. In the top image, the second card from the left appears to be a stronger shade of pink than the corresponding card in the bottom picture. Believe it or not, they are the exact same color, but because of the red hue of the second photo, our brains change how we perceive that image.
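To make the earlier point about complements mixing to black concrete, here is a tiny sketch. It assumes RGB colors given as 0-1 floats and multiply blending as the mixing model (which is how accidental darkening usually shows up in a compositor); note that on the painter's red-yellow-blue wheel used above, the complement pairs differ (there, red pairs with green):

```python
# Minimal sketch: multiply-blending a color with its RGB complement gives black.
def complement(color):
    return tuple(1.0 - c for c in color)

def multiply(a, b):
    return tuple(x * y for x, y in zip(a, b))

red = (1.0, 0.0, 0.0)
print(complement(red))                 # (0.0, 1.0, 1.0) -> cyan
print(multiply(red, complement(red)))  # (0.0, 0.0, 0.0) -> black
```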


Setting Up Panda3D and Configuring Development Tools

Packt
14 Apr 2011
7 min read
Panda3D 1.7 Game Developer's Cookbook

Panda3D is a very powerful and feature-rich game engine that comes with a lot of features needed for creating modern video games. Using Python as a scripting language to interface with the low-level programming libraries makes it easy to create games quickly, because this layer of abstraction neatly hides many of the complexities of handling assets, hardware resources, or graphics rendering. This also allows simple games and prototypes to be created very quickly and keeps the code needed to get things going to a minimum.

Panda3D is a complete game engine package. This means that it is not just a collection of game programming libraries with a nice Python interface, but also includes all the supplementary tools for previewing, converting, and exporting assets, as well as for packing game code and data for redistribution. Delivering such tools is a very important aspect of a game engine and helps increase the productivity of a development team.

The Panda3D engine is a very nice set of building blocks for creating entertainment software, scaling nicely to the needs of hobbyists, students, and professional game development teams. Panda3D is known to have been used in projects ranging from one-shot experimental prototypes to full-scale commercial MMORPG productions like Toontown Online or Pirates of the Caribbean Online. Before you can start a new project and use all the powerful features provided by Panda3D to their fullest, though, you need to prepare your working environment and tools. By the end of this article, you will have a strong set of programming tools at hand, as well as the knowledge of how to configure Panda3D to your future projects' needs.

Downloading and configuring NetBeans to work with Panda3D

When writing code, having the right set of tools at hand, and feeling comfortable using them, is very important. Panda3D uses Python for scripting, and there are plenty of good integrated development environments available for this language, like IDLE, Eclipse, or Eric. Of course, Python code can be written using the excellent Vim or Emacs editors too. Tastes differ, and every programmer has his or her own preferences when it comes to this decision. To make things easier and have a uniform working environment, however, we are going to use the free NetBeans IDE for developing Python scripts. This choice was made out of pure preference; any of the many great alternatives could be used for following the recipes in this article, but may require different steps for the initial setup and for getting the samples to run. In this recipe we will install and configure the NetBeans integrated development environment to suit our needs for developing games with Panda3D using the Python programming language.

Getting ready

Before beginning, be sure to download and install Panda3D. To download the engine SDK and tools, go to www.panda3d.org/download.php.

The Panda3D Runtime for End-Users is a prebuilt redistributable package containing a player program and a browser plugin. These can be used to easily run packaged Panda3D games. Under Snapshot Builds, you will find daily builds of the latest version of the Panda3D engine. These are to be handled with care, as they are not meant for production purposes. Finally, the link labeled Panda3D SDK for Developers is the one you need to follow to retrieve a copy of the Panda3D development kit and tools.
This link will always take you to the latest release of Panda3D, which at the time of writing is version 1.7.0. This version was marked as unstable by the developers, but has worked in a stable way for this article. It also added a great number of interesting features, like the web browser plugin, an advanced shader and graphics pipeline, and built-in shadow effects, which really are worth a try.

Click the link that says Panda3D SDK for Developers to reach the page shown in the following screenshot. Here you can select one of the SDK packages for the platforms on which Panda3D is available. This article assumes a NetBeans setup on Windows, but most of the samples should work on the alternative platforms too, as most of Panda3D's features have been ported to all of these operating systems.

To download and install the Panda3D SDK, click the Panda3D SDK 1.7.0 link at the top of the page and download the installer package. Launch the program and follow the installation wizard, always choosing the default settings. In this and all of the following recipes we'll assume the install path to be C:\Panda3D-1.7.0, which is the default installation location. If you chose a different location, it might be a good idea to note the path and be prepared to adapt the presented file and folder paths to your needs!

How to do it...

Follow these steps to set up your Panda3D game development environment:

1. Point your web browser to netbeans.org and click the prominent Download FREE button.
2. Ignore the big table showing all kinds of different versions on the following page and scroll down. Click the link that says JDK with NetBeans IDE Java SE bundle, then click the Downloads link to the right to proceed.
3. On the next page, select Windows in the Platform dropdown menu and tick the checkbox to agree to the license agreement. Click the Continue button to proceed.
4. Follow the instructions on the next page and click the file name to start the download.
5. Launch the installer and follow the setup wizard. Once installed, start the NetBeans IDE.
6. In the main toolbar, click Tools | Plugins and select the tab labeled Available Plugins. Browse the list until you find Python and tick the checkbox next to it.
7. Click Install. This starts a wizard that downloads and installs the necessary features for Python development. At the end of the installation wizard you will be prompted to restart the NetBeans IDE, which finishes the setup of the Python feature.
8. Once NetBeans reappears on your screen, click Tools | Python Platforms.
9. In the Python Platform Manager window, click the New button and browse for the file C:\Panda3D-1.7.0\python\ppython.exe.
10. Select Python 2.6.4 from the platforms list and click the Make Default button. Your settings should now reflect the ones shown in the following screenshot.
11. Finally, select the Python Path tab and, once again, compare your settings to the screenshot.
12. Click the Close button and you are done!

How it works...

In the preceding steps we configured NetBeans to use the Python runtime that drives the Panda3D engine, and as we can see, it is very easy to install and set up our working environment for Panda3D.

There's more...

Unlike other game engines, Panda3D follows an interesting approach in its internal architecture. While the more common approach is to embed a scripting runtime into the game engine's executable, Panda3D uses the Python runtime as its main executable.
The engine modules that handle things such as loading assets, rendering graphics, or playing sounds are implemented as native extension modules. These are loaded by Panda3D's custom Python interpreter as needed when we use them in our script code. Essentially, the architecture of Panda3D turns the usual hierarchy between native code and the scripting runtime upside down. While in other game engines native code initiates calls into the embedded scripting runtime, Panda3D reverses the direction of program flow: the Python runtime is the core element of the engine, and script code initiates calls into the native programming libraries. To understand Panda3D, it is important to understand this architectural decision. Whenever we start the ppython executable, we start up the Panda3D engine. If you ever get into a situation where you are compiling your own Panda3D runtime from source code, don't forget to revisit the Python platform configuration steps of this recipe (registering ppython.exe and making it the default) so that NetBeans points at your custom runtime executable!
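As a quick sanity check of the toolchain, a minimal Panda3D program such as the following can be run with ppython (or from NetBeans once the platform is configured). This particular snippet is our own sketch, not part of the recipe:

```python
# hello.py -- minimal sketch to verify the Panda3D setup.
# Run with: C:\Panda3D-1.7.0\python\ppython.exe hello.py
import direct.directbase.DirectStart  # starts the engine and opens a window

# Load one of the sample models shipped with the SDK and show it
smiley = loader.loadModel("smiley")
smiley.reparentTo(render)
smiley.setPos(0, 10, 0)

run()  # enter Panda3D's main loop
```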


Collision Detection and Physics in Panda3D Game Development

Packt
30 Mar 2011
12 min read
Panda3D 1.7 Game Developer's Cookbook: Over 80 recipes for developing 3D games with Panda3D, a full-scale 3D game engine

In a video game, the game world or level defines the boundaries within which the player is allowed to interact with the game environment. But how do we enforce these boundaries? How do we keep the player from running through walls? This is where collision detection and response come into play. Collision detection and response not only allow us to keep players from passing through the level boundaries, but also form the basis for many forms of interaction. For example, lots of actions in games are started when the player hits an invisible collision mesh, called a trigger, which initiates a scripted sequence as a response to the player entering its boundaries.

Simple collision detection and response form the basis for nearly all forms of interaction in video games. They are responsible for keeping the player within the level, for crates being pushable, and for telling if and where a bullet hit the enemy. What if we could add some extra magic to the mix to make our games even more believable, immersive, and entertaining? Let's think again about pushing crates around: what happens if the player pushes a stack of crates? Do they just move as if they had been glued together, or will they start to tumble and eventually topple over? This is where we add physics to the mix to make things more interesting, realistic, and dynamic.

In this article, we will take a look at the various collision detection and physics libraries that the Panda3D engine allows us to work with. Putting in some extra effort, we will also see that it is not very hard to integrate a physics engine that is not part of the Panda3D SDK.

Using the built-in collision detection system

Not all problems concerning world and player interaction need to be handled by a fully fledged physics API; sometimes a much more basic and lightweight system is enough for our purposes. This is why in this recipe we dive into the collision handling system that is built into the Panda3D engine.

Getting ready

This recipe relies upon the project structure created in Setting up the game structure (code download, Ch:1), in Setting Up Panda3D and Configuring Development Tools.

How to do it...
Let's go through this recipe's tasks:

1. Open Application.py and add the import statements as well as the constructor of the Application class:

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import *
import random

class Application(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.cam.setPos(0, -50, 10)
        self.setupCD()
        self.addSmiley()
        self.addFloor()
        taskMgr.add(self.updateSmiley, "UpdateSmiley")
```

2. Next, add the method that initializes the collision detection system:

```python
    def setupCD(self):
        base.cTrav = CollisionTraverser()
        base.cTrav.showCollisions(render)
        self.notifier = CollisionHandlerEvent()
        self.notifier.addInPattern("%fn-in-%in")
        self.accept("frowney-in-floor", self.onCollision)
```

3. Next, implement the method for adding the frowney model to the scene:

```python
    def addSmiley(self):
        self.frowney = loader.loadModel("frowney")
        self.frowney.reparentTo(render)
        self.frowney.setPos(0, 0, 10)
        self.frowney.setPythonTag("velocity", 0)
        col = self.frowney.attachNewNode(CollisionNode("frowney"))
        col.node().addSolid(CollisionSphere(0, 0, 0, 1.1))
        col.show()
        base.cTrav.addCollider(col, self.notifier)
```

4. The following methods add a floor plane to the scene and handle the collision response:

```python
    def addFloor(self):
        floor = render.attachNewNode(CollisionNode("floor"))
        floor.node().addSolid(CollisionPlane(Plane(Vec3(0, 0, 1),
                                                   Point3(0, 0, 0))))
        floor.show()

    def onCollision(self, entry):
        vel = random.uniform(0.01, 0.2)
        self.frowney.setPythonTag("velocity", vel)
```

5. Add this last piece of code, which makes the frowney model bounce up and down:

```python
    def updateSmiley(self, task):
        vel = self.frowney.getPythonTag("velocity")
        z = self.frowney.getZ()
        self.frowney.setZ(z + vel)
        vel -= 0.001
        self.frowney.setPythonTag("velocity", vel)
        return task.cont
```

6. Hit the F6 key to launch the program:

How it works...

We start off with some setup code that calls the other initialization routines. We also add the task that will update the smiley's position.

In the setupCD() method, we initialize the collision detection system. To be able to find out which scene objects collided and issue the appropriate responses, we create an instance of the CollisionTraverser class and assign it to base.cTrav. The variable name is important, because this way Panda3D will automatically update the CollisionTraverser every frame. The engine checks whether a CollisionTraverser was assigned to that variable and automatically adds the required tasks to Panda3D's update loop. Additionally, we enable debug drawing, so collisions are visualized at runtime; this overlays a visualization of the collision meshes the collision detection system uses internally. In the last lines of setupCD(), we instantiate a collision handler that sends a message through Panda3D's event system whenever a collision is detected. The call addInPattern("%fn-in-%in") defines the pattern for the name of the event created when a collision is encountered for the first time: %fn is replaced by the name of the object that bumps into the object whose name is inserted in place of %in. Take a look at the event handler added below it to get an idea of what these events look like.

After the code for setting up the collision detection system is ready, we add the addSmiley() method, where we first load the model and then create a new collision node, which we attach to the model's node so it is moved around together with the model. We also add a sphere collision shape, defined by its local center coordinates and radius.
This is the shape that defines the boundaries the collision system tests against to determine whether two objects have touched. To complete this step, we register our new collision node with the collision traverser and configure it to use the collision handler that sends events as a collision response.

Next, we add an infinite floor plane and the event handling method that reacts to collision notifications. Although the debug visualization shows us a limited rectangular area, this plane actually has unlimited width and height. In our case, this means that at any given x- and y-coordinate, objects will register a collision when any point on their bounding volume reaches a z-coordinate of 0. It's also important to note that the floor is not registered as a collider here. This is contrary to what we did for the frowney model, and it guarantees that the model acts as the collider and the floor is treated as the collidee when contact between the two is detected.

While the onCollision() method makes the smiley model go up again, the code in updateSmiley() constantly drags it downwards; setting the velocity tag on the frowney model to a positive or negative value, respectively, does this in these two methods. We can think of this as forces being applied: whenever we encounter a collision with the ground plane, we give the model a one-shot bounce, but what goes up must come down, so we continuously apply gravity by decreasing the model's velocity every frame.

There's more...

This sample only touched a few of the features of Panda3D's collision system. The following sections are an overview of what else is possible. For more details, take a look at Panda3D's API reference.

Collision Shapes

In the sample code, we used CollisionPlane and CollisionSphere, but several more shapes are available:

- CollisionBox: A simple rectangular shape. Crates, boxes, and walls are typical uses for this kind of collision shape.
- CollisionTube: A cylinder with rounded ends. This type of collision mesh is often used as a bounding volume for first- and third-person game characters.
- CollisionInvSphere: This shape can be thought of as a bubble that contains objects, like a fish bowl. Everything outside the bubble is reported as colliding. A CollisionInvSphere may be used to delimit the boundaries of a game world, for example.
- CollisionPolygon: This collision shape is formed from a set of vertices and allows for the creation of freeform collision meshes. This kind of shape is the most complex to test for collisions, but also the most accurate one. It comes in handy whenever polygon-level collision detection is important, for example when doing hit detection in a shooter.
- CollisionRay: A line that, starting from one point, extends to infinity in a given direction. Rays are usually shot into a scene to determine whether one or more objects intersect with them. This can be used for tasks like finding out whether a bullet fired in a given direction hit a target, or simple AI tasks like checking whether a bot is approaching a wall.
- CollisionLine: Like CollisionRay, but extends to infinity in both directions.
- CollisionSegment: A special form of ray that is limited by two end points.
- CollisionParabola: Another special type of ray that is bent. The flight curves of ballistic objects are commonly described as parabolas.
Naturally, we would use this kind of ray to find collisions for bullets, for example. (A ray-cast sketch follows at the end of this recipe.)

Collision Handlers

Just as with collision shapes, we only used CollisionHandlerEvent in this recipe's sample program, even though several more collision handler classes are available:

- CollisionHandlerPusher: Automatically keeps the collider out of intersecting vertical geometry, like walls.
- CollisionHandlerFloor: Like CollisionHandlerPusher, but works in the horizontal plane.
- CollisionHandlerQueue: A very simple handler; all it does is add any intersecting objects to a list.
- PhysicsCollisionHandler: This collision handler should be used together with Panda3D's built-in physics engine. Whenever this handler finds a collision, the appropriate response is calculated by the simple physics engine built into the engine.

Using the built-in physics system

Panda3D has a built-in physics system that treats its entities as simple particles with masses to which forces may be applied. This physics system is a great deal simpler than a fully featured rigid body one, but it is still enough for creating some nice, simple physics effects cheaply, quickly, and easily.

Getting ready

To be prepared for this recipe, please first follow the steps found in Setting up the game structure (code download, Ch:1). The collision detection system of Panda3D will also be used, so reading up on it in Using the built-in collision detection system might be a good idea!

How to do it...

The following steps are required to work with Panda3D's built-in physics system:

1. Edit Application.py and add the required import statements as well as the constructor of the Application class:

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import *
from panda3d.physics import *

class Application(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        self.cam.setPos(0, -50, 10)
        self.setupCD()
        self.setupPhysics()
        self.addSmiley()
        self.addFloor()
```

2. Next, add the methods for initializing the collision detection and physics systems to the Application class:

```python
    def setupCD(self):
        base.cTrav = CollisionTraverser()
        base.cTrav.showCollisions(render)
        self.notifier = CollisionHandlerEvent()
        self.notifier.addInPattern("%fn-in-%in")
        self.notifier.addOutPattern("%fn-out-%in")
        self.accept("smiley-in-floor", self.onCollisionStart)
        self.accept("smiley-out-floor", self.onCollisionEnd)

    def setupPhysics(self):
        base.enableParticles()
        gravNode = ForceNode("gravity")
        render.attachNewNode(gravNode)
        gravityForce = LinearVectorForce(0, 0, -9.81)
        gravNode.addForce(gravityForce)
        base.physicsMgr.addLinearForce(gravityForce)
```

3. Next, implement the method for adding a model and physics actor to the scene:

```python
    def addSmiley(self):
        actor = ActorNode("physics")
        actor.getPhysicsObject().setMass(10)
        self.phys = render.attachNewNode(actor)
        base.physicsMgr.attachPhysicalNode(actor)

        self.smiley = loader.loadModel("smiley")
        self.smiley.reparentTo(self.phys)
        self.phys.setPos(0, 0, 10)

        thrustNode = ForceNode("thrust")
        self.phys.attachNewNode(thrustNode)
        self.thrustForce = LinearVectorForce(0, 0, 400)
        self.thrustForce.setMassDependent(1)
        thrustNode.addForce(self.thrustForce)

        col = self.smiley.attachNewNode(CollisionNode("smiley"))
        col.node().addSolid(CollisionSphere(0, 0, 0, 1.1))
        col.show()
        base.cTrav.addCollider(col, self.notifier)
```

4. Add this last piece of source code, which adds the floor plane to the scene and toggles the thruster, to Application.py:

```python
    def addFloor(self):
        floor = render.attachNewNode(CollisionNode("floor"))
        floor.node().addSolid(CollisionPlane(Plane(Vec3(0, 0, 1),
                                                   Point3(0, 0, 0))))
        floor.show()

    def onCollisionStart(self, entry):
        base.physicsMgr.addLinearForce(self.thrustForce)

    def onCollisionEnd(self, entry):
        base.physicsMgr.removeLinearForce(self.thrustForce)
```

5. Start the program by pressing F6:

How it works...

After adding the mandatory libraries and initialization code, we proceed to the code that sets up the collision detection system. Here we register event handlers for when the smiley starts or stops colliding with the floor. The calls involved in setupCD() are very similar to the ones used in Using the built-in collision detection system.

Instead of moving the smiley model in our own update task, we use the built-in physics system to calculate new object positions based on the forces applied to them. In setupPhysics(), we call base.enableParticles() to fire up the physics system. We attach a new ForceNode to the scene graph, so all physics objects will be affected by the gravity force, and we register the force with base.physicsMgr, which is automatically defined when the physics engine is initialized and ready.

In the first couple of lines in addSmiley(), we create a new ActorNode, give it a mass, attach it to the scene graph, and register it with the physics manager class. The graphical representation, which is the smiley model in this case, is then added to the physics node as a child, so it is moved automatically as the physics system updates. We also add a ForceNode to the physics actor. This acts as a thruster that applies a force pushing the smiley upwards whenever it intersects the floor. As opposed to the gravity force, the thruster force is set to be mass dependent. This means that no matter how heavy we make the smiley, it will always be accelerated at the same rate by the gravity force, while the thruster force would need to be more powerful if we increased the smiley's mass.

The last step when adding a smiley is adding its collision node and shape, which leads us to the final methods added in this recipe, where we add the floor plane and arrange for the thruster to be enabled when the collision starts and disabled when the objects' contact phase ends.
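To round off the collision-shape overview, here is a minimal sketch of the classic ray-cast pattern using CollisionRay together with CollisionHandlerQueue. The function name, the downward ray, and the one-off traversal are our own choices for illustration; it assumes the usual DirectStart/ShowBase globals (render) are available:

```python
# Minimal sketch (ours, not from the recipe): find the ground height below a node.
def groundHeightBelow(nodePath):
    # Shoot a ray straight down from the node's origin.
    ray = CollisionRay(0, 0, 0, 0, 0, -1)  # origin and direction (down)
    rayNode = nodePath.attachNewNode(CollisionNode("ground-ray"))
    rayNode.node().addSolid(ray)

    queue = CollisionHandlerQueue()  # simply records all intersections
    traverser = CollisionTraverser()
    traverser.addCollider(rayNode, queue)
    traverser.traverse(render)       # one-off traversal, not per-frame

    height = None
    if queue.getNumEntries() > 0:
        queue.sortEntries()          # nearest hit first
        hit = queue.getEntry(0).getSurfacePoint(render)
        height = hit.getZ()
    rayNode.removeNode()             # clean up the temporary collider
    return height
```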


Ogre 3D FAQs

Packt
14 Mar 2011
8 min read
OGRE 3D 1.7 Beginner's Guide: Create real-time 3D applications using OGRE 3D from scratch

Q: What is Ogre 3D?
A: Creating 3D scenes and worlds is an interesting and challenging problem, but the results are hugely rewarding and the process of getting there can be a lot of fun. Ogre 3D helps you create your own scenes and worlds. Ogre 3D is one of the biggest open source 3D render engines and enables its users to create and interact freely with their scenes.

Q: What are the system requirements for Ogre 3D?
A: You need a compiler to compile the applications, and your computer should have a graphics card with 3D capabilities. It is best if the graphics card supports DirectX 9.0.

Q: From where can I download the Ogre 3D software?
A: Ogre 3D is a cross-platform render engine, so there are a lot of different packages for the different platforms. The following are the steps to download and install the Ogre 3D SDK:

1. Go to http://www.ogre3d.org/download/sdk
2. Download the appropriate package.
3. Copy the installer to the directory you would like your OgreSDK to be placed in.
4. Double-click on the installer; this will start a self-extractor.
5. You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1. Open this folder. It should look similar to the following screenshot.

Q: Which are the different versions of the Ogre 3D SDK?
A: Ogre supports many different platforms and, because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for Mac OS X, and one Ubuntu package. There is also a package for MinGW and one for the iPhone. If you like, you can download the source code and build Ogre 3D yourself. If you want to use another operating system, take a look at the Ogre 3D Wiki at http://www.ogre3d.org/wiki, which contains detailed tutorials on how to set up your development environment for many different platforms.

Q: What do you mean by a scene graph?
A: A scene graph is one of the most used concepts in graphics programming. Simply put, it's a way to store information about a scene. A scene graph has a root and is organized like a tree. The important thing about a scene graph is that each node's transformation is relative to its parent: if we modify the orientation of the parent, the children are affected by the change as well.

Q: What are Spotlights?
A: Spotlights are just like flashlights in their effect. They have a position and a direction in which they illuminate the scene. The direction was the first thing we set after creating the light; it simply defines where the spotlight is pointed. The next two parameters we set were the inner and the outer angles of the spotlight. The inner part of the spotlight illuminates the area with the full power of the light source's color, while the outer part of the cone lights the illuminated objects with less power. This is done to emulate the effect of a real flashlight.

Q: What is the difference between frame-based and time-based movement?
A: With frame-based movement, the entity is moved the same distance each frame; with time-based movement, the entity is moved the same distance each second. (See the sketch after this answer.)
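The idea is language-agnostic; here is our own minimal sketch of it in Python (matching the rest of this collection), where dt stands for the time elapsed since the last frame:

```python
# Minimal sketch (ours, not from the book): the same movement written two ways.
SPEED = 5.0  # units per second

def move_frame_based(x):
    return x + 0.1         # same distance every frame: faster at high FPS

def move_time_based(x, dt):
    return x + SPEED * dt  # same distance every second, regardless of FPS
```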
Q: What is a window handle, and how is it used by our application and the operating system?
A: A window handle is simply a number used as an identifier for a certain window. This number is created by the operating system, and each window has a unique handle. The input system needs this handle because without it, it couldn't receive input events. Ogre 3D creates a window for us, so to get the window handle, we need to ask for it with the following line:

win->getCustomAttribute("WINDOW", &windowHnd);

Q: What does a scene manager do?
A: A scene manager does a lot of things, which becomes obvious when we take a look at the documentation: there are lots of functions that start with create, destroy, get, set, and has. One important task the scene manager fulfills is the management of objects, which can be scene nodes, entities, lights, or many other object types that Ogre 3D has. The scene manager acts as a factory for these objects and also destroys them. Ogre 3D works on the principle that he who creates an object also destroys it. Every time we want an entity or scene node deleted, we must use the scene manager; otherwise, Ogre 3D might try to free the same memory later, which could result in an ugly application crash. Besides object management, it manages a scene, as its name suggests. This can include optimizing the scene and calculating the positions of each object in the scene for rendering. It also implements efficient culling algorithms.

Q: Which three functions does the FrameListener interface offer, and at which point is each of them called?
A: A FrameListener is based on the observer pattern. We can add a class instance which inherits from the Ogre::FrameListener interface to our Ogre 3D root instance using the addFrameListener() method of Ogre::Root. When this instance is added, our class gets notified when certain events happen. The three functions the FrameListener interface offers are:

- frameStarted, which is called before the frame is rendered
- frameRenderingQueued, which is called after the frame is rendered but before the buffers are swapped
- frameEnded, which is called after the current frame has been rendered and displayed

Q: What is a particle system?
A: A particle system consists of two to three different constructs: an emitter, particles, and (optionally) affectors. The most important of these is the particle itself, as the name particle system suggests. A particle displays a color or texture using a quad or the point render capability of the graphics card. When the particle uses a quad, that quad is always rotated to face the camera. Each particle has a set of parameters, including a time to live, direction, and velocity. There are a lot of different parameters, but these three are the most important to the concept of particle systems. The time to live parameter controls the life and death of a particle; normally, a particle doesn't live for more than a few seconds before it is destroyed. This effect can be seen in the demo when we look up at the smoke cone: there is a point where the smoke vanishes, because those particles' time to live counters reached zero and they were destroyed. An emitter creates a predefined number of particles per second and can be seen as the source of the particles. Affectors, on the other hand, don't create particles but change some of their parameters; an affector could change the direction, velocity, or color of the particles created by the emitter.

Q: Which add-ons are available for Ogre 3D, and where can I get them?
Q: Which add-ons are available for Ogre 3D? Where can I get them?
A: The following are some of the add-ons available for Ogre 3D:

Hydrax: Hydrax is an add-on that adds the capability of rendering pretty water scenes to Ogre 3D. With this add-on, water can be added to a scene, and a lot of different settings are available, such as setting the depth of the water, adding foam effects, underwater light rays, and so on. The add-on can be found at http://www.ogre3d.org/tikiwiki/Hydrax.

Caelum: Caelum is another add-on, which introduces sky rendering with day and night cycles to Ogre 3D. It renders the sun and moon correctly using a date and time. It also renders weather effects like snow or rain and a complex cloud simulation to make the sky look as real as possible. The wiki site for this add-on is http://www.ogre3d.org/tikiwiki/Caelum.

Particle Universe: Particle Universe is a commercial add-on that adds a new particle system to Ogre 3D, which allows many more effects than the normal Ogre 3D particle system. It also comes with a Particle Editor, allowing artists to create particles in a separate application which the programmer can load as a particle script later. This plugin can be found at http://www.ogre3d.org/tikiwiki/Particle+Universe+plugin.

Summary
In this article we took a look at some of the most frequently asked questions on Ogre 3D. The article Common Mistakes: Ogre Wiki would be helpful for further queries pertaining to Ogre 3D.

Further resources on this subject:
- Starting Ogre 3D [Article]
- Installation of Ogre 3D [Article]
- Materials with Ogre 3D [Article]
- The Ogre Scene Graph [Article]
- OGRE 3D 1.7 Beginner's Guide [Book]

Animating in Panda3D
Packt
01 Mar 2011
8 min read
Panda3D 1.6 Game Engine Beginner's Guide: Create your own computer game with this 3D rendering and game development framework
- The first and only guide to building a finished game using Panda3D
- Learn about tasks that can be used to handle changes over time
- Respond to events like keyboard key presses, mouse clicks, and more
- Take advantage of Panda3D's built-in shaders and filters to decorate objects with gloss, glow, and bump effects
- Follow a step-by-step, tutorial-focused process that matches the development process of the game, with plenty of screenshots and thoroughly explained code for easy pick up

Actors and Animations
An Actor is a kind of object in Panda3D that adds more functionality to a static model. Actors can include joints within them. These joints have parts of the model tied to them and are rotated and repositioned by animations to make the model move and change. Actors are stored in .egg and .bam files, just like models. Animation files include information on the position and rotation of joints at specific frames in the animation. They tell the Actor how to posture itself over the course of the animation. These files are also stored in .egg and .bam files.

Time for action – loading Actors and Animations
Let's load up an Actor with an animation and start it playing to get a feel for how this works:

1. Open a blank document in Notepad++ and save it as Anim_01.py in the Chapter09 folder.
2. We need a few imports to start with. Put these lines at the top of the file:

import direct.directbase.DirectStart
from pandac.PandaModules import *
from direct.actor.Actor import Actor

3. We won't need a lot of code for our class' __init__ method, so let's just plow through it here:

class World:
    def __init__(self):
        base.disableMouse()
        base.camera.setPos(0, -5, 1)
        self.setupLight()
        self.kid = Actor("../Models/Kid.egg",
            {"Walk" : "../Animations/Walk.egg"})
        self.kid.reparentTo(render)
        self.kid.loop("Walk")
        self.kid.setH(180)

4. The next thing we want to do is steal our setupLight() method from the Track class and paste it into this class:

    def setupLight(self):
        primeL = DirectionalLight("prime")
        primeL.setColor(VBase4(.6,.6,.6,1))
        self.dirLight = render.attachNewNode(primeL)
        self.dirLight.setHpr(45,-60,0)
        render.setLight(self.dirLight)

        ambL = AmbientLight("amb")
        ambL.setColor(VBase4(.2,.2,.2,1))
        self.ambLight = render.attachNewNode(ambL)
        render.setLight(self.ambLight)
        return

5. Lastly, we need to instantiate the World class and call the run() method:

w = World()
run()

6. Save the file and run it from the command prompt to see our loaded model with an animation playing on it, as depicted in the following screenshot:

What just happened?
Now we have an animated Actor in our scene, slowly looping through a walk animation. We made that happen with only three lines of code:

self.kid = Actor("../Models/Kid.egg",
    {"Walk" : "../Animations/Walk.egg"})
self.kid.reparentTo(render)
self.kid.loop("Walk")

The first line creates an instance of the Actor class. Unlike with models, we don't need to use a method of loader. The Actor class constructor takes two arguments: the first is the filename of the model that will be loaded. This file may or may not contain animations in it. The second argument is for loading additional animations from separate files. It's a dictionary of animation names and the files they are contained in. The names in the dictionary don't need to correspond to anything; they can be any string:
myActor = Actor(modelPath,
    {NameForAnim1 : Anim1Path,
     NameForAnim2 : Anim2Path, ...})

The names we give animations when the Actor is created are important because we use those names to control the animations. For instance, the last line calls the method loop() with the name of the walking animation as its argument. If the reference to the Actor is removed, the animations will be lost. Make sure not to remove the reference to the Actor until both the Actor and its animations are no longer needed.

Controlling animations
Since we're talking about the loop() method, let's discuss the different controls for playing and stopping animations. There are four basic methods we can use:

- play("AnimName"): This method plays the animation once from beginning to end.
- loop("AnimName"): This method is similar to play(), but the animation doesn't stop when it's over; it replays from the beginning.
- stop() or stop("AnimName"): If called without an argument, this method stops all the animations currently playing on the Actor. If called with an argument, it stops only the named animation. Note that the Actor will remain in whatever posture it is in when this method is called.
- pose("AnimName", FrameNumber): Places the Actor in the pose dictated by the supplied frame without playing the animation.

We have some more advanced options as well. Firstly, we can provide optional fromFrame and toFrame arguments to play() or loop() to restrict the animation to specific frames:

myActor.play("AnimName", fromFrame = FromFrame, toFrame = ToFrame)

We can provide both arguments, or just one of them. For the loop() method, there is also the optional argument restart, which can be set to 0 or 1. It defaults to 1, which means the animation restarts from the beginning. If given a 0, it starts looping from the current frame.

We can also use the getNumFrames("AnimName") and getCurrentFrame("AnimName") methods to get more information about a given animation. The getCurrentAnim() method returns a string that tells us which animation is currently playing on the Actor.

The final method in our list of basic animation controls sets the speed of the animation:

myActor.setPlayRate(1.5, "AnimName")

The setPlayRate() method takes two arguments. The first is the new play rate, expressed as a multiplier of the original frame rate. If we feed in .5, the animation plays half as fast; if we feed in 2, it plays twice as fast. If we feed in -1, the animation plays at its normal speed, but in reverse.

Have a go hero – basic animation controls
Experiment with the various animation control methods we've discussed to get a feel for how they work; a possible starting point is sketched below. Load the Stand and Thoughtful animations from the animations folder as well, and use player input or delayed tasks to switch between animations and change frame rates. Once we're comfortable with what we've gone over so far, we'll move on.
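Here is one possible starting point for that experiment (a sketch, not from the book): it assumes the World class from Anim_01.py and Stand.egg and Thoughtful.egg files in the Animations folder, and binds a few keys to the control methods described above using Panda3D's event system:

# Load three animations instead of one
self.kid = Actor("../Models/Kid.egg",
    {"Walk" : "../Animations/Walk.egg",
     "Stand" : "../Animations/Stand.egg",
     "Thoughtful" : "../Animations/Thoughtful.egg"})
self.kid.reparentTo(render)
self.kid.setH(180)

# base.accept() registers keyboard event handlers;
# the extraArgs list supplies arguments to each method
base.accept("1", self.kid.loop, ["Walk"])        # loop the walk
base.accept("2", self.kid.play, ["Thoughtful"])  # play once
base.accept("3", self.kid.pose, ["Stand", 0])    # freeze in a pose
base.accept("s", self.kid.stop)                  # stop everything
base.accept("f", self.kid.setPlayRate, [2, "Walk"])   # double speed
base.accept("r", self.kid.setPlayRate, [-1, "Walk"])  # reverse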
Animation blending
Actors aren't limited to playing a single animation at a time. Panda3D is advanced enough to offer us a very handy piece of functionality called blending. To explain blending, it's important to understand that an animation is really a series of offsets to the basic pose of the model. They aren't absolutes; they are changes from the original. With blending turned on, Panda3D can combine these offsets.

Time for action – blending two animations
We'll blend two animations together to see how this works.

1. Open Anim_01.py in the Chapter09 folder.
2. We need to load a second animation to be able to blend. Change the line where we create our Actor to look like the following code:

self.kid = Actor("../Models/Kid.egg",
    {"Walk" : "../Animations/Walk.egg",
     "Thoughtful" : "../Animations/Thoughtful.egg"})

3. Now, we just need to add this code above the line where we start looping the Walk animation:

self.kid.enableBlend()
self.kid.setControlEffect("Walk", 1)
self.kid.setControlEffect("Thoughtful", 1)

4. Resave the file as Anim_02.py and run it from the command prompt.

What just happened?
Our Actor is now performing both animations to their full extent at the same time. This is possible because we made the call to the self.kid.enableBlend() method and then set the amount of influence each animation has on the model with the self.kid.setControlEffect() method. We can turn off blending later on by using the self.kid.disableBlend() method, which returns the Actor to the state where playing or looping a new animation stops any previous animations.

Using the setControlEffect() method, we can alter how much each animation controls the model. The numeric argument we pass to setControlEffect() represents the percentage of the animation's offsets that will be applied, with 1 being 100%, 0.5 being 50%, and so on.

When blending animations together, the look of the final result depends a great deal on the model and animations being used; much of the work needed to achieve a good result falls to the artist. Blending works well for transitioning between animations, and in that case it can be handy to use tasks to dynamically alter the influence each animation has on the model over time, as sketched at the end of this section.

Honestly, though, the result we got with blending here is pretty unpleasant. Our model is hardly walking at all, and he looks like he has a nervous twitch. This is because both animations are affecting the entire model at full strength, so the Walk and Thoughtful animations are fighting for control over the arms, legs, and everything else, and what we end up with is a combination of both animations' offsets.

Furthermore, it's important to understand that when blending is enabled, every animation with a control effect higher than 0 will always affect the model, even if the animation isn't currently playing. The only way to remove an animation's influence is to set its control effect to 0. This obviously causes problems when we want to play an animation that moves the character's legs and another that moves his arms at the same time, without having them interfere with each other. For that, we have to use subparts.
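As an illustration of that transition technique, here is a hedged sketch (not from the book) of a task that fades the Walk animation out and the Thoughtful animation in over one second; taskMgr and Task are Panda3D's standard task machinery, and the one-second duration is an assumed value:

from direct.task import Task

def crossFade(self, task):
    # task.time is the number of seconds since the task started
    blend = min(task.time / 1.0, 1.0)  # ramps 0 -> 1 over one second
    self.kid.setControlEffect("Walk", 1.0 - blend)
    self.kid.setControlEffect("Thoughtful", blend)
    if blend >= 1.0:
        return Task.done   # fade finished; stop running this task
    return Task.cont       # run again next frame

# Start the fade with: taskMgr.add(self.crossFade, "crossFade")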

Installing Panda3D
Packt
11 Feb 2011
4 min read
Getting started with Panda3D installation packages
The kind folks who produce Panda3D have made it very easy to get Panda3D up and working. You don't need to worry about any compiling, library linking, or other difficult, multi-step processes. The Panda3D website provides executable files that take care of all the work for you. These files even install the version of Python they need to operate correctly, so you don't need to go elsewhere for it.

Time for action - downloading and installing Panda3D
I know what you're thinking: "Less talk, more action!" Here are the step-by-step instructions for installing Panda3D:

1. Navigate your web browser to www.Panda3D.org.
2. Under the Downloads option, you'll see a link labeled SDK. Click it.
3. If you are using Windows, scroll down this page and you'll find a section titled Download other versions. Find the link to Panda3D SDK 1.6.2 and click it.
4. If you aren't using Windows, click on the platform you are using (Mac, Linux, or any other OS). That will take you to a page that has the downloads for that platform. Scroll down to the Download other versions section and find the link to Panda3D SDK 1.6.2, as before.
5. When the download is complete, run the file and this screen will pop up:
6. Click Next to continue and then accept the terms. After that, you'll be prompted about where you want to install Panda3D. The default location is just fine. Click the Install button to continue.
7. Wait for the progress bar to fill up. When it's done, you'll see another prompt. This step really isn't necessary. Just click No and move on.
8. When you have finished the installation, you can verify that it's working by going to Start Menu | All Programs | Panda3D 1.6.2 | Sample Programs | Ball in Maze | Run Ball in Maze. A window will open, showing the Ball in Maze sample game, where you tilt a maze to make a ball roll around while trying to avoid the holes.

What just happened?
You may be wondering why we skipped a part of the installation during step 7. That step of the process caches some of the assets, like 3D models, that come with Panda3D. Essentially, by spending a few minutes caching these files now, the sample programs that come with Panda3D will load a few seconds faster the first time we run them, that's all. Now that we've got Panda3D up and running, let's get ourselves an advanced text editor to do our coding in.

Switching to an advanced text editor
The next thing we need is Notepad++. Why, you ask? Well, to code with Python all you really need is a text editor, like the Notepad that comes with Windows XP. After typing your code you just have to save the file with a .py extension. Notepad itself is kind of dull, though, and it doesn't have many features to make coding easier. Notepad++ is a text editor very similar to Notepad. It can open pretty much any text file and it comes with a pile of features to make coding easier. To highlight some fan favorites, it provides language mark-up, a Find and Replace feature, and file tabs to organize multiple open files. The language mark-up will change the color and fonts of specific parts of your code to help you visually understand and organize it. With Find and Replace you can easily change a large number of variable names and also quickly and easily update code. File tabbing keeps all of your open code files in one window and makes it easy to switch back and forth between them.
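As a quick end-to-end test of both the editor and the installation, type the following two lines into Notepad++, save them as a .py file (test.py, say, a name chosen here for illustration), and run the file; an empty gray Panda3D window should appear. This minimal sketch uses the same import the later examples rely on:

# Importing DirectStart opens the window and initializes the engine
import direct.directbase.DirectStart

# Hand control over to Panda3D's main loop
run()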

Installation of Ogre 3D
Packt
09 Feb 2011
6 min read
OGRE 3D 1.7 Beginner's Guide

Downloading and installing Ogre 3D
The first step we need to take is to install and configure Ogre 3D.

Time for action – downloading and installing Ogre 3D
We are going to download the Ogre 3D SDK and install it so that we can work with it later.

1. Go to http://www.ogre3d.org/download/sdk.
2. Download the appropriate package. If you need help picking the right package, take a look at the next What just happened section.
3. Copy the installer to a directory you would like your OgreSDK to be placed in.
4. Double-click on the installer; this will start a self-extractor.
5. You should now have a new folder in your directory with a name similar to OgreSDK_vc9_v1-7-1.
6. Open this folder. It should look similar to the following screenshot:

What just happened?
We just downloaded the appropriate Ogre 3D SDK for our system. Ogre 3D is a cross-platform render engine, so there are a lot of different packages for these different platforms. After downloading, we extracted the Ogre 3D SDK.

Different versions of the Ogre 3D SDK
Ogre supports many different platforms, and because of this, there are a lot of different packages we can download. Ogre 3D has several builds for Windows, one for MacOSX, and one Ubuntu package. There is also a package for MinGW and for the iPhone. If you like, you can download the source code and build Ogre 3D by yourself. This article will focus on the Windows pre-built SDK and how to configure your development environment. If you want to use another operating system, you can look at the Ogre 3D Wiki, which can be found at http://www.ogre3d.org/wiki. The wiki contains detailed tutorials on how to set up your development environment for many different platforms.

Exploring the SDK
Before we begin building the samples which come with the SDK, let's take a look at the SDK itself. We will look at the structure the SDK has on a Windows platform; on Linux or MacOS the structure might look different. First, we open the bin folder. There we will see two folders, namely, debug and release. The same is true for the lib directory. The reason is that the Ogre 3D SDK comes with debug and release builds of its libraries and dynamically linked/shared libraries. This makes it possible to use the debug build during development, so that we can debug our project. When we finish the project, we link our project against the release build to get the full performance of Ogre 3D.

When we open either the debug or release folder, we will see many DLL files, some cfg files, and two executables (exe). The executables are for content creators to update their content files to the new Ogre version, and therefore are not relevant for us. OgreMain.dll is the most important DLL; it is the compiled Ogre 3D source code we will load later. All DLLs with Plugin_ at the start of their name are Ogre 3D plugins we can use with Ogre 3D. Ogre 3D plugins are dynamic libraries which add new functionality to Ogre 3D using the interfaces Ogre 3D offers. This can be practically anything, but often it is used to add features like better particle systems or new scene managers. The Ogre 3D community has created many more plugins, most of which can be found in the wiki; the SDK simply includes the most generally used ones. The DLLs with RenderSystem_ at the start of their name are, surely not surprisingly, wrappers for the different render systems that Ogre 3D supports. In this case, these are Direct3D9 and OpenGL.
In addition to these two systems, Ogre 3D also has Direct3D10, Direct3D11, and OpenGL ES (OpenGL for Embedded Systems) render systems.

Besides the executables and the DLLs, we have the cfg files. cfg files are config files that Ogre 3D can load at startup. Plugins.cfg simply lists all plugins Ogre 3D should load at startup. These are typically the Direct3D and OpenGL render systems and some additional scene managers. quakemap.cfg is a config file needed when loading a level in the Quake3 map format; we don't need this file, but one of the samples does. resources.cfg contains a list of all resources, like a 3D mesh, a texture, or an animation, which Ogre 3D should load during startup. Ogre 3D can load resources from the file system or from a ZIP file. When we look at resources.cfg, we will see lines like the following (a sketch of what such a file can look like for your own project appears at the end of this article):

Zip=../../media/packs/SdkTrays.zip
FileSystem=../../media/thumbnails

Zip= means that the resource is in a ZIP file and FileSystem= means that we want to load the contents of a folder. resources.cfg makes it easy to load new resources or change the path to resources, so it is often used to load resources, especially by the Ogre samples. Speaking of samples, the last cfg file in the folder is samples.cfg. We don't need to use this cfg file; it is simply a list of all the Ogre samples the SampleBrowser should load. But we don't have a SampleBrowser yet, so let's build one.

The Ogre 3D samples
Ogre 3D comes with a lot of samples, which show all the kinds of different render effects and techniques Ogre 3D can do. Before we start working on our own application, we will take a look at the samples to get a first impression of Ogre's capabilities.

Time for action – building the Ogre 3D samples
To get a first impression of what Ogre 3D can do, we will build the samples and take a look at them.

1. Go to the Ogre3D folder.
2. Open the Ogre3d.sln solution file.
3. Right-click on the solution and select Build Solution. Visual Studio should now start building the samples. This might take some time, so get yourself a cup of tea until the compile process is finished.
4. If everything went well, go into the Ogre3D/bin folder.
5. Execute SampleBrowser.exe. You should see the following on your screen:

Try the different samples to see all the nice features Ogre 3D offers.

What just happened?
We built the Ogre 3D samples using our own Ogre 3D SDK. After this, we are sure to have a working copy of Ogre 3D.
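For reference, here is a sketch of what a resources.cfg for a small project of our own might look like; the bracketed section names are resource groups, and the paths shown are illustrative assumptions, not files shipped with the SDK:

# Resource group loaded while the application boots
[Bootstrap]
Zip=../../media/packs/SdkTrays.zip

# Everything else our scenes need
[General]
FileSystem=../../media/models
FileSystem=../../media/textures
Zip=../../media/packs/level1.zip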

3D Animation Techniques with XNA Game Studio 4.0
Packt
14 Jan 2011
3 min read
Object animation
We will first look at the animation of objects as a whole. The most common ways to animate an object are rotation and translation (movement). We will begin by creating a class that will interpolate a position and rotation value between two extremes over a given amount of time. We could also have it interpolate between two scaling values, but it is very uncommon for an object to change size in a smooth manner during gameplay, so we will leave it out for simplicity's sake.

The ObjectAnimation class has a number of parameters: starting and ending position and rotation values, a duration over which to interpolate those values, and a Boolean indicating whether the animation should loop or just remain at the end value after the duration has passed:

public class ObjectAnimation
{
    Vector3 startPosition, endPosition, startRotation, endRotation;
    TimeSpan duration;
    bool loop;
}

We will also store the amount of time that has elapsed since the animation began, and the current position and rotation values:

TimeSpan elapsedTime = TimeSpan.FromSeconds(0);

public Vector3 Position { get; private set; }
public Vector3 Rotation { get; private set; }

The constructor will initialize these values:

public ObjectAnimation(Vector3 StartPosition, Vector3 EndPosition,
    Vector3 StartRotation, Vector3 EndRotation,
    TimeSpan Duration, bool Loop)
{
    this.startPosition = StartPosition;
    this.endPosition = EndPosition;
    this.startRotation = StartRotation;
    this.endRotation = EndRotation;
    this.duration = Duration;
    this.loop = Loop;

    Position = startPosition;
    Rotation = startRotation;
}

Finally, the Update() function takes the amount of time that has elapsed since the last update and updates the position and rotation values accordingly:

public void Update(TimeSpan Elapsed)
{
    // Update the time
    this.elapsedTime += Elapsed;

    // Determine how far along the duration value we are (0 to 1)
    float amt = (float)elapsedTime.TotalSeconds / (float)duration.TotalSeconds;

    if (loop)
        while (amt > 1) // Wrap the time if we are looping
            amt -= 1;
    else // Clamp to the end value if we are not
        amt = MathHelper.Clamp(amt, 0, 1);

    // Update the current position and rotation
    Position = Vector3.Lerp(startPosition, endPosition, amt);
    Rotation = Vector3.Lerp(startRotation, endRotation, amt);
}

As a simple example, we'll create an animation (in the Game1 class) that rotates our spaceship in a circle over a few seconds. We'll also have it move the model up and down for demonstration's sake:

ObjectAnimation anim;

We initialize it in the constructor:

models.Add(new CModel(Content.Load<Model>("ship"),
    Vector3.Zero, Vector3.Zero, new Vector3(0.25f), GraphicsDevice));

anim = new ObjectAnimation(new Vector3(0, -150, 0),
    new Vector3(0, 150, 0),
    Vector3.Zero, new Vector3(0, -MathHelper.TwoPi, 0),
    TimeSpan.FromSeconds(10), true);

We update it as follows:

anim.Update(gameTime.ElapsedGameTime);

models[0].Position = anim.Position;
models[0].Rotation = anim.Rotation;
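The same class handles one-shot motion: with Loop set to false, Update() clamps amt to 1, so the object eases to the end value and stays there. A sketch (not from the article) of a door-style animation that swings a model a quarter turn over two seconds and then holds:

// One-shot: loop == false, so the rotation stops at -Pi/2 and holds
ObjectAnimation doorAnim = new ObjectAnimation(
    Vector3.Zero, Vector3.Zero,               // no translation
    Vector3.Zero, new Vector3(0, -MathHelper.PiOver2, 0),
    TimeSpan.FromSeconds(2), false);

// Per frame, exactly as before:
// doorAnim.Update(gameTime.ElapsedGameTime);
// models[0].Rotation = doorAnim.Rotation;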

Advanced Lighting in 3D Graphics with XNA Game Studio 4.0
Packt
22 Dec 2010
9 min read
3D Graphics with XNA Game Studio 4.0: A step-by-step guide to adding the 3D graphics effects used by professionals to your XNA games.
- Improve the appearance of your games by implementing the same techniques used by professionals in the game industry
- Learn the fundamentals of 3D graphics, including common 3D math and the graphics pipeline
- Create an extensible system to draw 3D models and other effects, and learn the skills to create your own effects and animate them

Implementing a point light with HLSL
A point light is just a light that shines equally in all directions around itself (like a light bulb) and falls off over a given distance. In this case, a point light is simply modeled as a directional light that slowly fades to darkness over a given distance. To achieve a linear attenuation, we would simply divide the distance between the light and the object by the attenuation distance, invert the result (subtract it from 1), and then multiply the lambertian lighting by the result. This would cause an object directly next to the light source to be fully lit, and an object at the maximum attenuation distance to be completely unlit. In practice, however, we raise the result of the division to a given power before inverting it, to achieve a more exponential falloff:

Katt = 1 - (d / a)^f

In this equation, Katt is the brightness scalar that we will multiply the lighting amount by, d is the distance between the vertex and the light source, a is the distance at which the light should stop affecting objects, and f is the falloff exponent that determines the shape of the curve.
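To make the curve concrete with the default values used below (a = 5000, f = 2): a vertex at d = 2500, halfway to the attenuation distance, gets Katt = 1 - (2500 / 5000)^2 = 1 - 0.25 = 0.75, or 75% of full brightness, whereas a purely linear falloff (f = 1) would leave only 50%. Higher falloff exponents therefore keep the light close to full strength over most of its range and fade it out quickly near the edge.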
We can implement this easily with HLSL and a new Material class. The new Material class is similar to the material for a directional light, but specifies a light position rather than a light direction. For the sake of simplicity, the effect we will use will not calculate specular highlights, so the material does not include a "specularity" value. It does include two new values, LightAttenuation and LightFalloff, which specify the distance at which the light is no longer visible and the power to raise the division to.

public class PointLightMaterial : Material
{
    public Vector3 AmbientLightColor { get; set; }
    public Vector3 LightPosition { get; set; }
    public Vector3 LightColor { get; set; }
    public float LightAttenuation { get; set; }
    public float LightFalloff { get; set; }

    public PointLightMaterial()
    {
        AmbientLightColor = new Vector3(.15f, .15f, .15f);
        LightPosition = new Vector3(0, 0, 0);
        LightColor = new Vector3(.85f, .85f, .85f);
        LightAttenuation = 5000;
        LightFalloff = 2;
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientLightColor"] != null)
            effect.Parameters["AmbientLightColor"].SetValue(AmbientLightColor);
        if (effect.Parameters["LightPosition"] != null)
            effect.Parameters["LightPosition"].SetValue(LightPosition);
        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);
        if (effect.Parameters["LightAttenuation"] != null)
            effect.Parameters["LightAttenuation"].SetValue(LightAttenuation);
        if (effect.Parameters["LightFalloff"] != null)
            effect.Parameters["LightFalloff"].SetValue(LightFalloff);
    }
}

The new effect has parameters to reflect those values:

float4x4 World;
float4x4 View;
float4x4 Projection;

float3 AmbientLightColor = float3(.15, .15, .15);
float3 DiffuseColor = float3(.85, .85, .85);
float3 LightPosition = float3(0, 0, 0);
float3 LightColor = float3(1, 1, 1);
float LightAttenuation = 5000;
float LightFalloff = 2;

texture BasicTexture;
sampler BasicTextureSampler = sampler_state
{
    texture = <BasicTexture>;
};
bool TextureEnabled = true;

The vertex shader output struct now includes a copy of the vertex's world position, which will be used to calculate the light falloff (attenuation) and light direction:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
    float3 Normal : TEXCOORD1;
    float4 WorldPosition : TEXCOORD2;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    output.WorldPosition = worldPosition;
    output.UV = input.UV;
    output.Normal = mul(input.Normal, World);

    return output;
}

Finally, the pixel shader calculates the light much the same way that the directional light did, but uses a per-vertex light direction rather than a global light direction. It also determines how far along the attenuation distance the vertex's position is and darkens it accordingly.
The texture, ambient light, and diffuse color are calculated as usual:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 diffuseColor = DiffuseColor;

    if (TextureEnabled)
        diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb;

    float3 totalLight = float3(0, 0, 0);
    totalLight += AmbientLightColor;

    float3 lightDir = normalize(LightPosition - input.WorldPosition);
    float diffuse = saturate(dot(normalize(input.Normal), lightDir));

    float d = distance(LightPosition, input.WorldPosition);
    float att = 1 - pow(clamp(d / LightAttenuation, 0, 1), LightFalloff);

    totalLight += diffuse * att * LightColor;

    return float4(diffuseColor * totalLight, 1);
}

We can now achieve the above image using the following scene setup from the Game1 class:

models.Add(new CModel(Content.Load<Model>("teapot"),
    new Vector3(0, 60, 0), Vector3.Zero, new Vector3(60), GraphicsDevice));
models.Add(new CModel(Content.Load<Model>("ground"),
    Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice));

Effect simpleEffect = Content.Load<Effect>("PointLightEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

PointLightMaterial mat = new PointLightMaterial();
mat.LightPosition = new Vector3(0, 1500, 1500);
mat.LightAttenuation = 3000;

models[0].Material = mat;
models[1].Material = mat;

camera = new FreeCamera(new Vector3(0, 300, 1600),
    MathHelper.ToRadians(0), // No yaw
    MathHelper.ToRadians(5), // Pitched up 5 degrees
    GraphicsDevice);

Implementing a spot light with HLSL
A spot light is similar in theory to a point light, in that it fades out after a given distance. However, the fading is not done around the light source, but is based on the angle between the light's actual direction and the direction from the light to the object. If the angle is larger than the light's "cone angle", we will not light the vertex:

Katt = (dot(p - lp, ld) / cos(a))^f

In this equation, Katt is still the scalar that we will multiply our diffuse lighting by, p is the position of the vertex, lp is the position of the light, ld is the direction of the light, a is the cone angle, and f is the falloff exponent.
Our new spot light material reflects these values:

public class SpotLightMaterial : Material
{
    public Vector3 AmbientLightColor { get; set; }
    public Vector3 LightPosition { get; set; }
    public Vector3 LightColor { get; set; }
    public Vector3 LightDirection { get; set; }
    public float ConeAngle { get; set; }
    public float LightFalloff { get; set; }

    public SpotLightMaterial()
    {
        AmbientLightColor = new Vector3(.15f, .15f, .15f);
        LightPosition = new Vector3(0, 3000, 0);
        LightColor = new Vector3(.85f, .85f, .85f);
        ConeAngle = 30;
        LightDirection = new Vector3(0, -1, 0);
        LightFalloff = 20;
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientLightColor"] != null)
            effect.Parameters["AmbientLightColor"].SetValue(AmbientLightColor);
        if (effect.Parameters["LightPosition"] != null)
            effect.Parameters["LightPosition"].SetValue(LightPosition);
        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);
        if (effect.Parameters["LightDirection"] != null)
            effect.Parameters["LightDirection"].SetValue(LightDirection);
        if (effect.Parameters["ConeAngle"] != null)
            effect.Parameters["ConeAngle"].SetValue(
                MathHelper.ToRadians(ConeAngle / 2));
        if (effect.Parameters["LightFalloff"] != null)
            effect.Parameters["LightFalloff"].SetValue(LightFalloff);
    }
}

Now we can create a new effect that will render a spot light. We will start by copying the point light's effect and making the following changes to the second block of effect parameters:

float3 AmbientLightColor = float3(.15, .15, .15);
float3 DiffuseColor = float3(.85, .85, .85);
float3 LightPosition = float3(0, 5000, 0);
float3 LightDirection = float3(0, -1, 0);
float ConeAngle = 90;
float3 LightColor = float3(1, 1, 1);
float LightFalloff = 20;

Finally, we can update the pixel shader to perform the lighting calculations:
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 diffuseColor = DiffuseColor;

    if (TextureEnabled)
        diffuseColor *= tex2D(BasicTextureSampler, input.UV).rgb;

    float3 totalLight = float3(0, 0, 0);
    totalLight += AmbientLightColor;

    float3 lightDir = normalize(LightPosition - input.WorldPosition);
    float diffuse = saturate(dot(normalize(input.Normal), lightDir));

    // (dot(p - lp, ld) / cos(a))^f
    float d = dot(-lightDir, normalize(LightDirection));
    float a = cos(ConeAngle);

    float att = 0;
    if (a < d)
        att = 1 - pow(clamp(a / d, 0, 1), LightFalloff);

    totalLight += diffuse * att * LightColor;

    return float4(diffuseColor * totalLight, 1);
}

If we were to then set up the material as follows and use our new effect, we would see the following result:

SpotLightMaterial mat = new SpotLightMaterial();
mat.LightDirection = new Vector3(0, -1, -1);
mat.LightPosition = new Vector3(0, 3000, 2700);
mat.LightFalloff = 200;

Drawing multiple lights
Now that we can draw one light, the natural question to ask is how to draw more than one. This, unfortunately, is not simple. There are a number of approaches, the easiest of which is to loop through a certain number of lights in the pixel shader and sum a total lighting value. Let's create a new shader based on the directional light effect that we created in the last chapter to do just that. We'll start by copying that effect, then modifying some of the effect parameters as follows. Notice that instead of a single light direction and color, we now have an array of three of each, allowing us to draw up to three lights:

#define NUMLIGHTS 3

float3 DiffuseColor = float3(1, 1, 1);
float3 AmbientColor = float3(0.1, 0.1, 0.1);
float3 LightDirection[NUMLIGHTS];
float3 LightColor[NUMLIGHTS];
float SpecularPower = 32;
float3 SpecularColor = float3(1, 1, 1);

Second, we need to update the pixel shader to do the lighting calculations once per light:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Start with diffuse color
    float3 color = DiffuseColor;

    // Texture if necessary
    if (TextureEnabled)
        color *= tex2D(BasicTextureSampler, input.UV);

    // Start with ambient lighting
    float3 lighting = AmbientColor;

    float3 normal = normalize(input.Normal);
    float3 view = normalize(input.ViewDirection);

    // Perform lighting calculations per light
    for (int i = 0; i < NUMLIGHTS; i++)
    {
        float3 lightDir = normalize(LightDirection[i]);

        // Add lambertian lighting
        lighting += saturate(dot(lightDir, normal)) * LightColor[i];

        float3 refl = reflect(lightDir, normal);

        // Add specular highlights
        lighting += pow(saturate(dot(refl, view)), SpecularPower) * SpecularColor;
    }

    // Calculate final color
    float3 output = saturate(lighting) * color;
    return float4(output, 1);
}

We now need a new Material class to work with this shader:

public class MultiLightingMaterial : Material
{
    public Vector3 AmbientColor { get; set; }
    public Vector3[] LightDirection { get; set; }
    public Vector3[] LightColor { get; set; }
    public Vector3 SpecularColor { get; set; }

    public MultiLightingMaterial()
    {
        AmbientColor = new Vector3(.1f, .1f, .1f);
        LightDirection = new Vector3[3];
        LightColor = new Vector3[] { Vector3.One, Vector3.One, Vector3.One };
        SpecularColor = new Vector3(1, 1, 1);
    }

    public override void SetEffectParameters(Effect effect)
    {
        if (effect.Parameters["AmbientColor"] != null)
            effect.Parameters["AmbientColor"].SetValue(AmbientColor);
        if (effect.Parameters["LightDirection"] != null)
            effect.Parameters["LightDirection"].SetValue(LightDirection);
        if (effect.Parameters["LightColor"] != null)
            effect.Parameters["LightColor"].SetValue(LightColor);
        if (effect.Parameters["SpecularColor"] != null)
            effect.Parameters["SpecularColor"].SetValue(SpecularColor);
    }
}

If we wanted to replicate the three-directional-light setup found in the BasicEffect class, we would now just need to copy the light direction values over to our shader:

Effect simpleEffect = Content.Load<Effect>("MultiLightingEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

MultiLightingMaterial mat = new MultiLightingMaterial();

BasicEffect effect = new BasicEffect(GraphicsDevice);
effect.EnableDefaultLighting();

mat.LightDirection[0] = -effect.DirectionalLight0.Direction;
mat.LightDirection[1] = -effect.DirectionalLight1.Direction;
mat.LightDirection[2] = -effect.DirectionalLight2.Direction;

mat.LightColor = new Vector3[] {
    new Vector3(0.5f, 0.5f, 0.5f),
    new Vector3(0.5f, 0.5f, 0.5f),
    new Vector3(0.5f, 0.5f, 0.5f)
};

models[0].Material = mat;
models[1].Material = mat;
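One consequence of the fixed NUMLIGHTS loop is that all three lights always contribute; the shader has no explicit "off" switch. A simple convention (an assumption of this sketch, not something the shader enforces) is to treat a black light as disabled, since a zero color contributes no diffuse light:

// Use only the first light; the others add no diffuse light
mat.LightColor = new Vector3[] {
    new Vector3(0.5f, 0.5f, 0.5f),
    Vector3.Zero, // effectively disabled
    Vector3.Zero  // effectively disabled
};

Note one subtlety: in the pixel shader above, the specular term is multiplied by SpecularColor but not by LightColor[i], so a zeroed color removes only the diffuse contribution; scaling the specular line by LightColor[i] as well would make this convention complete.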

Introduction to HLSL in 3D Graphics with XNA Game Studio 4.0
Packt
21 Dec 2010
16 min read
3D Graphics with XNA Game Studio 4.0: A step-by-step guide to adding the 3D graphics effects used by professionals to your XNA games.
- Improve the appearance of your games by implementing the same techniques used by professionals in the game industry
- Learn the fundamentals of 3D graphics, including common 3D math and the graphics pipeline
- Create an extensible system to draw 3D models and other effects, and learn the skills to create your own effects and animate them

Getting started
The vertex shader and pixel shader are contained in the same code file, called an Effect. The vertex shader is responsible for transforming geometry from object space into screen space, usually using the world, view, and projection matrices. The pixel shader's job is to calculate the color of every pixel onscreen. It is given information about the geometry visible at whatever point onscreen it is being run for, and takes into account lighting, texturing, and so on. For your convenience, I've provided the starting code for this article here:

public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    List<CModel> models = new List<CModel>();
    Camera camera;

    MouseState lastMouseState;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        graphics.PreferredBackBufferWidth = 1280;
        graphics.PreferredBackBufferHeight = 800;
    }

    // Called when the game should load its content
    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);

        models.Add(new CModel(Content.Load<Model>("ship"),
            new Vector3(0, 400, 0), Vector3.Zero, new Vector3(1f),
            GraphicsDevice));
        models.Add(new CModel(Content.Load<Model>("ground"),
            Vector3.Zero, Vector3.Zero, Vector3.One, GraphicsDevice));

        camera = new FreeCamera(new Vector3(1000, 500, -2000),
            MathHelper.ToRadians(153), // Turned around 153 degrees
            MathHelper.ToRadians(5),   // Pitched up 5 degrees
            GraphicsDevice);

        lastMouseState = Mouse.GetState();
    }

    // Called when the game should update itself
    protected override void Update(GameTime gameTime)
    {
        updateCamera(gameTime);
        base.Update(gameTime);
    }

    void updateCamera(GameTime gameTime)
    {
        // Get the new keyboard and mouse state
        MouseState mouseState = Mouse.GetState();
        KeyboardState keyState = Keyboard.GetState();

        // Determine how much the camera should turn
        float deltaX = (float)lastMouseState.X - (float)mouseState.X;
        float deltaY = (float)lastMouseState.Y - (float)mouseState.Y;

        // Rotate the camera
        ((FreeCamera)camera).Rotate(deltaX * .005f, deltaY * .005f);

        Vector3 translation = Vector3.Zero;

        // Determine in which direction to move the camera
        if (keyState.IsKeyDown(Keys.W)) translation += Vector3.Forward;
        if (keyState.IsKeyDown(Keys.S)) translation += Vector3.Backward;
        if (keyState.IsKeyDown(Keys.A)) translation += Vector3.Left;
        if (keyState.IsKeyDown(Keys.D)) translation += Vector3.Right;

        // Move 4 units per millisecond, independent of frame rate
        translation *= 4 * (float)gameTime.ElapsedGameTime.TotalMilliseconds;

        // Move the camera
        ((FreeCamera)camera).Move(translation);

        // Update the camera
        camera.Update();

        // Update the mouse state
        lastMouseState = mouseState;
    }

    // Called when the game should draw itself
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        foreach (CModel model in models)
            if (camera.BoundingVolumeIsInView(model.BoundingSphere))
                model.Draw(camera.View, camera.Projection,
                    ((FreeCamera)camera).Position);
        base.Draw(gameTime);
    }
}

Assigning a shader to a model
In order to draw a model with XNA, it needs to have an instance of the Effect class assigned to it. Recall from the first chapter that each ModelMeshPart in a Model has its own Effect. This is because each ModelMeshPart may need to have a different appearance; one ModelMeshPart may, for example, make up the armor on a soldier while another makes up the head. If the two used the same effect (shader), then we could end up with a very shiny head or a very dull piece of armor. Instead, XNA gives us the option to give every ModelMeshPart a unique effect.

In order to draw our models with our own effects, we need to replace the BasicEffect of every ModelMeshPart with our own effect loaded from the content pipeline. For now, we won't worry about the fact that each ModelMeshPart can have its own effect; we'll just be assigning one effect to an entire model. Later, however, we will add more functionality to allow different effects on each part of a model.

Before we start replacing the instances of BasicEffect assigned to our models, we need to extract some useful information from them, such as which texture and color to use for each ModelMeshPart. We will store this information in a new class that each ModelMeshPart will keep a reference to using its Tag property:

public class MeshTag
{
    public Vector3 Color;
    public Texture2D Texture;
    public float SpecularPower;

    public Effect CachedEffect = null;

    public MeshTag(Vector3 Color, Texture2D Texture, float SpecularPower)
    {
        this.Color = Color;
        this.Texture = Texture;
        this.SpecularPower = SpecularPower;
    }
}

This information will be extracted using a new function in the CModel class:

private void generateTags()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            if (part.Effect is BasicEffect)
            {
                BasicEffect effect = (BasicEffect)part.Effect;
                MeshTag tag = new MeshTag(effect.DiffuseColor,
                    effect.Texture, effect.SpecularPower);
                part.Tag = tag;
            }
}

This function will be called along with buildBoundingSphere() in the constructor:

...
buildBoundingSphere();
generateTags();
...

Notice that the MeshTag class has a CachedEffect variable that is not currently used. We will use this value as a place to store a reference to an effect that we want to be able to restore to the ModelMeshPart on demand. This is useful when we want to draw a model using a different effect temporarily, without having to completely reload the model's effects afterwards. The functions that allow us to do this are as shown:

// Store references to all of the model's current effects
public void CacheEffects()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            ((MeshTag)part.Tag).CachedEffect = part.Effect;
}

// Restore the effects referenced by the model's cache
public void RestoreEffects()
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
            if (((MeshTag)part.Tag).CachedEffect != null)
                part.Effect = ((MeshTag)part.Tag).CachedEffect;
}

We are now ready to start assigning effects to our models. We will look at this in more detail in a moment, but it is worth noting that every Effect has a dictionary of effect parameters. These are the variables the Effect takes into account when performing its calculations: the world, view, and projection matrices, or colors and textures, for example.
We modify a number of these parameters when assigning a new effect, so that each ModelMeshPart can inform the effect of its specific texture and color properties:

public void SetModelEffect(Effect effect, bool CopyEffect)
{
    foreach (ModelMesh mesh in Model.Meshes)
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            Effect toSet = effect;

            // Copy the effect if necessary
            if (CopyEffect)
                toSet = effect.Clone();

            MeshTag tag = ((MeshTag)part.Tag);

            // If this ModelMeshPart has a texture, set it to the effect
            if (tag.Texture != null)
            {
                setEffectParameter(toSet, "BasicTexture", tag.Texture);
                setEffectParameter(toSet, "TextureEnabled", true);
            }
            else
                setEffectParameter(toSet, "TextureEnabled", false);

            // Set our remaining parameters to the effect
            setEffectParameter(toSet, "DiffuseColor", tag.Color);
            setEffectParameter(toSet, "SpecularPower", tag.SpecularPower);

            part.Effect = toSet;
        }
}

// Sets the specified effect parameter to the given effect, if it
// has that parameter
void setEffectParameter(Effect effect, string paramName, object val)
{
    if (effect.Parameters[paramName] == null)
        return;

    if (val is Vector3)
        effect.Parameters[paramName].SetValue((Vector3)val);
    else if (val is bool)
        effect.Parameters[paramName].SetValue((bool)val);
    else if (val is Matrix)
        effect.Parameters[paramName].SetValue((Matrix)val);
    else if (val is Texture2D)
        effect.Parameters[paramName].SetValue((Texture2D)val);
}

The CopyEffect parameter of this function is very important. If we specify false, telling the CModel not to copy the effect per ModelMeshPart, any changes made to the effect will be reflected anywhere else the effect is used. This is a problem if we want each ModelMeshPart to have a different texture, or if we want to use the same effect on multiple models. Instead, we can specify true to have the CModel copy the effect for each mesh part so that each part can set its own effect parameters.

Finally, we need to update the Draw() function to handle Effects other than BasicEffect:

public void Draw(Matrix View, Matrix Projection, Vector3 CameraPosition)
{
    // Calculate the base transformation by combining
    // translation, rotation, and scaling
    Matrix baseWorld = Matrix.CreateScale(Scale)
        * Matrix.CreateFromYawPitchRoll(Rotation.Y, Rotation.X, Rotation.Z)
        * Matrix.CreateTranslation(Position);

    foreach (ModelMesh mesh in Model.Meshes)
    {
        Matrix localWorld = modelTransforms[mesh.ParentBone.Index] * baseWorld;

        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            Effect effect = meshPart.Effect;

            if (effect is BasicEffect)
            {
                ((BasicEffect)effect).World = localWorld;
                ((BasicEffect)effect).View = View;
                ((BasicEffect)effect).Projection = Projection;
                ((BasicEffect)effect).EnableDefaultLighting();
            }
            else
            {
                setEffectParameter(effect, "World", localWorld);
                setEffectParameter(effect, "View", View);
                setEffectParameter(effect, "Projection", Projection);
                setEffectParameter(effect, "CameraPosition", CameraPosition);
            }
        }

        mesh.Draw();
    }
}

Creating a simple effect
We will create our first effect now, and assign it to our models so that we can see the result. To begin, right-click on the content project, choose Add New Item, and select Effect File. Call it something like SimpleEffect.fx. The code for the new file is as follows.
Don't worry; we'll go through each piece in a moment:

float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);
    output.Position = mul(worldPosition, viewProjection);

    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return float4(.5, .5, .5, 1);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

To assign this effect to the models in our scene, we need to first load it in the game's LoadContent() function, then use the SetModelEffect() function to assign the effect to each model. Add the following to the end of the LoadContent() function:

Effect simpleEffect = Content.Load<Effect>("SimpleEffect");

models[0].SetModelEffect(simpleEffect, true);
models[1].SetModelEffect(simpleEffect, true);

If you were to run the game now, you would notice that the models appear both flat and gray. This is the correct behavior, as the effect doesn't yet contain the code necessary to do anything else. After we break down each piece of the shader, we will add some more exciting behavior.

Let's begin at the top. The first three lines in this effect are its effect parameters. These three should be familiar to you; they are the world, view, and projection matrices (in HLSL, float4x4 is the equivalent of XNA's Matrix class). There are many types of effect parameters, and we will see more later.

float4x4 World;
float4x4 View;
float4x4 Projection;

The next few lines are where we define the structures used in the shaders. In this case, the two structs are VertexShaderInput and VertexShaderOutput. As you might guess, these two structs are used to send input into the vertex shader and retrieve the output from it. The data in the VertexShaderOutput struct is then interpolated between vertices and sent to the pixel shader. This way, when we access the Position value in the pixel shader for a pixel that sits between two vertices, we get the actual position of that location instead of the position of one of the two vertices. In this case, the input and output are very simple: just the position of the vertex before and after it has been transformed using the world, view, and projection matrices:

struct VertexShaderInput
{
    float4 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
};
You may note that the members of these structs are a little different from the properties of a class in C#, in that they must also include what are called semantics. Microsoft's definition of shader semantics is as follows (http://msdn.microsoft.com/en-us/library/bb509647%28VS.85%29.aspx):

A semantic is a string attached to a shader input or output that conveys information about the intended use of a parameter.

Basically, we need to specify what we intend to do with each member of our structs so that the graphics card can correctly map the vertex shader's outputs to the pixel shader's inputs. For example, in the previous code, we use the POSITION0 semantic to tell the graphics card that this value is the one that holds the position at which to draw the vertex.

The next few lines are the vertex shader itself. Basically, we are just multiplying the input (object space, or untransformed) vertex position by the world, view, and projection matrices (the mul function is part of HLSL and is used to multiply matrices and vectors) and returning that value in a new instance of the VertexShaderOutput struct:

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);
    output.Position = mul(worldPosition, viewProjection);

    return output;
}

The next bit of code makes up the pixel shader. It accepts a VertexShaderOutput struct as its input (which is passed from the vertex shader), and returns a float4, the equivalent of XNA's Vector4 class, in that it is basically a set of four floating point (decimal) numbers. We use the COLOR0 semantic for our return value to let the pipeline know that this function is returning the final pixel color. In this case, those numbers represent the red, green, blue, and transparency values, respectively, of the pixel that we are shading. In this extremely simple pixel shader, we are just returning the color gray (.5, .5, .5), so any pixel covered by the model we are drawing will be gray (like in the previous screenshot):

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    return float4(.5, .5, .5, 1);
}

The last part of the shader is the technique definition. Here, we tell the graphics card which vertex and pixel shader versions to use (every graphics card supports a different set, but in this case we are using vertex shader 1.1 and pixel shader 2.0) and which functions in our code make up the vertex and pixel shaders:

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_1_1 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

Texture mapping
Let's now improve our shader by allowing it to render the texture each ModelMeshPart has assigned. As you may recall, the SetModelEffect() function in the CModel class attempts to set the texture of each ModelMeshPart to its respective effect, but it does so only if it finds the BasicTexture parameter on the effect. Let's add this parameter to our effect now (under the world, view, and projection parameters):

texture BasicTexture;

We need one more parameter in order to draw textures on our models, and that is an instance of a sampler. The sampler is used by HLSL to retrieve the color of a texture at a given position, which will be useful later in our pixel shader, where we will need to retrieve the color from the texture corresponding to the point on the model we are shading:

sampler BasicTextureSampler = sampler_state
{
    texture = <BasicTexture>;
};

A third effect parameter will allow us to turn texturing on and off:

bool TextureEnabled = false;

Every model that has a texture should also have what are called texture coordinates. The texture coordinates are two-dimensional coordinates called UV coordinates, ranging from (0, 0) to (1, 1), that are assigned to every vertex in the model. These coordinates correspond to the point on the texture that should be drawn onto that vertex: a UV coordinate of (0, 0) corresponds to the top-left of the texture and (1, 1) corresponds to the bottom-right. The texture coordinates allow us to wrap two-dimensional textures onto the three-dimensional surfaces of our models.
We need to include the texture coordinates in the input and output of the vertex shader, and add the code to pass the UV coordinates through the vertex shader to the pixel shader:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 UV : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    float4 worldPosition = mul(input.Position, World);
    float4x4 viewProjection = mul(View, Projection);
    output.Position = mul(worldPosition, viewProjection);

    output.UV = input.UV;

    return output;
}

Finally, we can use the texture sampler, the texture coordinates (also called UV coordinates), and HLSL's tex2D function to retrieve the texture color corresponding to the pixel we are drawing on the model:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float3 output = float3(1, 1, 1);

    if (TextureEnabled)
        output *= tex2D(BasicTextureSampler, input.UV);

    return float4(output, 1);
}

If you run the game now, you will see that the textures are properly drawn onto the models.

Texture sampling
The problem with texture sampling is that we are rarely able to simply copy each pixel from a texture directly onto the screen, because our models bend and distort the texture due to their shape. Textures are distorted further by the transformations we apply to our models, such as rotation. This means that we almost always have to calculate the approximate position in a texture to sample from and return that value, which is what HLSL's sampler2D does for us.

There are a number of considerations to make when sampling. How we sample from our textures can have a big impact on both our game's appearance and performance: more advanced sampling (or filtering) algorithms look better but slow down the game. Mip mapping refers to the use of multiple sizes of the same texture. These multiple sizes are calculated before the game is run and stored in the same texture, and the graphics card will swap them out on the fly, using a smaller version of the texture for objects in the distance, and so on. Finally, the address mode that we use when sampling affects how the graphics card handles UV coordinates outside the (0, 1) range. For example, if the address mode is set to "clamp", the UV coordinates will be clamped to (0, 1). If the address mode is set to "wrap", the coordinates will be wrapped through the texture repeatedly; this can be used to create a tiling effect on terrain, for example.

For now, because we are drawing so few models, we will use anisotropic filtering. We will also enable mip mapping and set the address mode to "wrap":

sampler BasicTextureSampler = sampler_state
{
    texture = <BasicTexture>;
    MinFilter = Anisotropic; // Minification Filter
    MagFilter = Anisotropic; // Magnification Filter
    MipFilter = Linear;      // Mip-mapping
    AddressU = Wrap;         // Address Mode for U Coordinates
    AddressV = Wrap;         // Address Mode for V Coordinates
};

This will give our models a nice, smooth appearance in the foreground and a uniform appearance in the background.
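Anisotropic filtering is the most expensive of the common filter modes, so for scenes with many textured models a cheaper configuration is worth knowing. The following variant is a sketch (the effects in this article all use the anisotropic version above); it uses linear filtering and clamped addressing, which avoids tiling and costs less per sample:

sampler ClampedTextureSampler = sampler_state
{
    texture = <BasicTexture>;
    MinFilter = Linear;  // Cheaper minification
    MagFilter = Linear;  // Cheaper magnification
    MipFilter = Linear;  // Keep mip-mapping
    AddressU = Clamp;    // UVs outside (0, 1) stick to the texture edge
    AddressV = Clamp;
};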

The Ogre 3D scene graph

Packt
20 Dec 2010
13 min read
Creating a scene node

We will learn how to create a new scene node and attach our 3D model to it.

How to create a scene node with Ogre 3D

We will follow these steps:

In the old version of our code, we had the following two lines in the createScene() function:

    Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.mesh");
    mSceneMgr->getRootSceneNode()->attachObject(ent);

Replace the last line with the following:

    Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1");

Then add the following two lines; the order of these two lines is irrelevant for the resulting scene:

    mSceneMgr->getRootSceneNode()->addChild(node);
    node->attachObject(ent);

Compile and start the application.

What just happened?

We created a new scene node named Node1. Then we added the scene node to the root scene node. After this, we attached our previously created 3D model to the newly created scene node so it would be visible.

How to work with the RootSceneNode

The call mSceneMgr->getRootSceneNode() returns the root scene node. This scene node is a member variable of the scene manager. When we want something to be visible, we need to attach it to the root scene node or to a node which is a child or descendant of it. In short, there needs to be a chain of child relations from the root node to the node; otherwise, it won't be rendered.

As the name suggests, the root scene node is the root of the scene, so the entire scene will be, in some way, attached to it. Ogre 3D uses a so-called scene graph to organize the scene. This graph is like a tree: it has one root, the root scene node, and each node can have children. We already used this characteristic when we called mSceneMgr->getRootSceneNode()->addChild(node); there we added the created scene node as a child to the root. Directly afterwards, we added another kind of child to the scene node with node->attachObject(ent);. Here, we added an entity to the scene node. We have two different kinds of objects we can add to a scene node. Firstly, we have other scene nodes, which can be added as children and can have children themselves. Secondly, we have entities that we want rendered. Entities aren't children and can't have children themselves. They are data objects which are associated with the node and can be thought of as leaves of the tree. There are a lot of other things we can add to a scene, like lights, particle systems, and so on. We will learn later what these things are and how to use them; right now, we only need entities. Our current scene graph looks like this:

The first thing we need to understand is what a scene graph is and what it does. A scene graph is used to represent how different parts of a scene are related to each other in 3D space.

3D space

Ogre 3D is a 3D rendering engine, so we need to understand some basic 3D concepts. The most basic construct in 3D is a vector, which is represented by an ordered triple (x,y,z). Each position in 3D space can be represented by such a triple using the Euclidean coordinate system for three dimensions. It is important to know that there are different kinds of coordinate systems in 3D space. The only difference between the systems is the orientation of the axes and the positive rotation direction. There are two systems that are widely used, namely, the left-handed and the right-handed versions. In the following image, we see both systems: on the left side, we see the left-handed version; on the right side, the right-handed one.
Source: http://en.wikipedia.org/wiki/File:Cartesian_coordinate_system_handedness.svg

The names left- and right-handed are based on the fact that the orientation of the axes can be reconstructed using the left and right hand. The thumb is the x-axis, the index finger the y-axis, and the middle finger the z-axis. We need to hold our hands so that we have a ninety-degree angle between thumb and index finger and also between middle and index finger. When using the right hand, we get a right-handed coordinate system; when using the left hand, we get the left-handed version.

Ogre uses the right-handed system, but rotates it so that the positive part of the x-axis points right and the negative part of the x-axis points left. The y-axis points up and the z-axis points out of the screen; this is known as the y-up convention. This sounds confusing at first, but we will soon learn to think in this coordinate system. The website http://viz.aset.psu.edu/gho/sem_notes/3d_fundamentals/html/3d_coordinates.html contains a rather good picture-based explanation of the different coordinate systems and how they relate to each other.

Scene graph

A scene graph is one of the most used concepts in graphics programming. Simply put, it's a way to store information about a scene. We already discussed that a scene graph has a root and is organized like a tree. But we didn't touch on the most important function of a scene graph. Each node of a scene graph has a list of its children as well as a transformation in 3D space. The transformation is composed of three aspects, namely, the position, the rotation, and the scale. The position is a triple (x,y,z), which obviously describes the position of the node in the scene. The rotation is stored using a quaternion, a mathematical concept for storing rotations in 3D space, but we can think of rotations as a single floating point value per axis, describing how the node is rotated, using radians as units. Scaling is quite easy; again, it uses a triple (x,y,z), and each part of the triple is simply the factor to scale the corresponding axis with.

The important thing about a scene graph is that the transformation is relative to the parent of the node. If we modify the orientation of the parent, the children will also be affected by this change. When we move the parent 10 units along the x-axis, all children will also be moved by 10 units along the x-axis. The final orientation of each child is computed using the orientation of all of its parents. This fact will become clearer with the next diagram: the position of MyEntity in this scene will be (10,0,0) and MyEntity2 will be at (10,10,20). Let's try this in Ogre 3D.

Pop quiz – finding the position of scene nodes

Look at the following tree and determine the end positions of MyEntity and MyEntity2:

    MyEntity(60,60,60) and MyEntity2(0,0,0)
    MyEntity(70,50,60) and MyEntity2(10,-10,0)
    MyEntity(60,60,60) and MyEntity2(10,10,10)

Setting the position of a scene node

Now, we will try to create the setup of the scene from the earlier diagram.

Time for action – setting the position of a scene node

Add this new line after the creation of the scene node:

    node->setPosition(10,0,0);

To create a second entity, add this line at the end of the createScene() function:

    Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad.mesh");
mesh"); Then create a second scene node: Ogre::SceneNode* node2 = mSceneMgr->createSceneNode("Node2"); Add the second node to the first one: node->addChild(node2); Set the position of the second node: node2->setPosition(0,10,20); Attach the second entity to the second node: node2->attachObject(ent2); Compile the program and you should see two instances of Sinbad: What just happened? We created a scene which matches the preceding diagram. The first new function we used was at step 1. Easily guessed, the function setPosition(x,y,z) sets the position of the node to the given triple. Keep in mind that this position is relative to the parent. We wanted MyEntity2 to be at (10,10,20), because we added node2, which holds MyEntity2, to a scene node which already was at the position (10,0,0). We only needed to set the position of node2 to (0,10,20). When both positions combine, MyEntity2 will be at (10,10,20). Pop quiz – playing with scene nodes We have the scene node node1 at (0,20,0) and we have a child scene node node2, which has an entity attached to it. If we want the entity to be rendered at (10,10,10), at which position would we need to set node2? (10,10,10) (10,-10,10) (-10,10,-10) Have a go hero – adding a Sinbad Add a third instance of Sinbad and let it be rendered at the position (10,10,30). Rotating a scene node We already know how to set the position of a scene node. Now, we will learn how to rotate a scene node and another way to modify the position of a scene node. Time for action – rotating a scene node We will use the previous code, but create completely new code for the createScene() function. Remove all code from the createScene() function. First create an instance of Sinbad.mesh and then create a new scene node. Set the position of the scene node to (10,10,0), at the end attach the entity to the node, and add the node to the root scene node as a child: Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad. mesh"); Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1"); node->setPosition(10,10,0); mSceneMgr->getRootSceneNode()->addChild(node); node->attachObject(ent); Again, create a new instance of the model, also a new scene node, and set the position to (10,0,0): Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad. mesh"); Ogre::SceneNode* node2 = mSceneMgr->createSceneNode("Node2"); node->addChild(node2); node2->setPosition(10,0,0); Now add the following two lines to rotate the model and attach the entity to the scene node: node2->pitch(Ogre::Radian(Ogre::Math::HALF_PI)); node2->attachObject(ent2); Do the same again, but this time use the function yaw instead of the function pitch and the translate function instead of the setPosition function: Ogre::Entity* ent3 = mSceneMgr->createEntity("MyEntity3","Sinbad. mesh"); Ogre::SceneNode* node3 = mSceneMgr->createSceneNode("Node3",); node->addChild(node3); node3->translate(20,0,0); node3->yaw(Ogre::Degree(90.0f)); node3->attachObject(ent3); And the same again with roll instead of yaw or pitch: Ogre::Entity* ent4 = mSceneMgr->createEntity("MyEntity4","Sinbad. mesh"); Ogre::SceneNode* node4 = mSceneMgr->createSceneNode("Node4"); node->addChild(node4); node4->setPosition(30,0,0); node4->roll(Ogre::Radian(Ogre::Math::HALF_PI)); node4->attachObject(ent4); Compile and run the program, and you should see the following screenshot: What just happened? We repeated the code we had before four times and always changed some small details. The first repeat is nothing special. 
It is just the code we had before, and this instance of the model will be our reference model, to see what happens to the other three instances we made afterwards. In step 4, we added the following additional line:

    node2->pitch(Ogre::Radian(Ogre::Math::HALF_PI));

The function pitch(Ogre::Radian(Ogre::Math::HALF_PI)) rotates a scene node around the x-axis. As said before, this function expects a radian as its parameter, and we used half of pi, which means a rotation of ninety degrees.

In step 5, we replaced the function call setPosition(x,y,z) with translate(x,y,z). The difference between setPosition(x,y,z) and translate(x,y,z) is that setPosition sets the position, no surprises here. translate, on the other hand, adds the given values to the position of the scene node, so it moves the node relative to its current position. If a scene node has the position (10,20,30) and we call setPosition(30,20,10), the node will then have the position (30,20,10). On the other hand, if we call translate(30,20,10), the node will have the position (40,40,40). It's a small, but important, difference. Both functions can be useful if used in the correct circumstances: when we want to position a node in a scene, we would use the setPosition(x,y,z) function; however, when we want to move a node already positioned in the scene, we would use translate(x,y,z).

Also, we replaced pitch(Ogre::Radian(Ogre::Math::HALF_PI)) with yaw(Ogre::Degree(90.0f)). The yaw() function rotates the scene node around the y-axis. Instead of Ogre::Radian(), we used Ogre::Degree(). Of course, pitch and yaw still need a radian to be used; however, Ogre 3D offers the class Degree(), which has a cast operator, so the compiler can automatically cast it into a Radian(). Therefore, the programmer is free to use a radian or a degree to rotate scene nodes. The mandatory use of these classes makes sure that it's always clear which unit is used, preventing confusion and possible error sources.

Step 6 introduces the last of the three different rotate functions a scene node has, namely, roll(). This function rotates the scene node around the z-axis. Again, we could use roll(Ogre::Degree(90.0f)) instead of roll(Ogre::Radian(Ogre::Math::HALF_PI)).

The program, when run, shows a non-rotated model and all three possible rotations. The leftmost model isn't rotated, the second model from the left is rotated around the x-axis, the third is rotated around the y-axis, and the rightmost model is rotated around the z-axis. Each of these instances shows the effect of a different rotate function. In short, pitch() rotates around the x-axis, yaw() around the y-axis, and roll() around the z-axis. We can use either Ogre::Degree(degree) or Ogre::Radian(radian) to specify how much we want to rotate.

Pop quiz – rotating a scene node

Which are the three functions to rotate a scene node?

    pitch, yawn, roll
    pitch, yaw, roll
    pitching, yaw, roll

Have a go hero – using Ogre::Degree

Remodel the code we wrote in the previous section in such a way that each occurrence of Ogre::Radian is replaced with Ogre::Degree and vice versa, while the rotations stay the same.

Scaling a scene node

We have already covered two of the three basic operations we can use to manipulate our scene graph. Now it's time for the last one, namely, scaling.

Time for action – scaling a scene node

Once again, we start with the same code block we used before.

Remove all code from the createScene() function and insert the following code block:

    Ogre::Entity* ent = mSceneMgr->createEntity("MyEntity","Sinbad.mesh");
mesh"); Ogre::SceneNode* node = mSceneMgr->createSceneNode("Node1"); node->setPosition(10,10,0); mSceneMgr->getRootSceneNode()->addChild(node); node->attachObject(ent); Again, create a new entity: Ogre::Entity* ent2 = mSceneMgr->createEntity("MyEntity2","Sinbad. mesh"); Now we use a function that creates the scene node and adds it automatically as a child. Then we do the same thing we did before: Ogre::SceneNode* node2 = node->createChildSceneNode("node2"); node2->setPosition(10,0,0); node2->attachObject(ent2); Now, after the setPosition() function, call the following line to scale the model: node2->scale(2.0f,2.0f,2.0f); Create a new entity: Ogre::Entity* ent3 = mSceneMgr->createEntity("MyEntity3","Sinbad. mesh"); Now we call the same function as in step 3, but with an additional parameter: Ogre::SceneNode* node3 = node->createChildSceneNode("node3",Ogre:: Vector3(20,0,0)); After the function call, insert this line to scale the model: node3->scale(0.2f,0.2f,0.2f); Compile the program and run it, and you should see the following image:

Environmental Effects in 3D Graphics with XNA Game Studio 4.0

Packt
16 Dec 2010
10 min read
3D Graphics with XNA Game Studio 4.0

A step-by-step guide to adding the 3D graphics effects used by professionals to your XNA games.

Improve the appearance of your games by implementing the same techniques used by professionals in the game industry
Learn the fundamentals of 3D graphics, including common 3D math and the graphics pipeline
Create an extensible system to draw 3D models and other effects, and learn the skills to create your own effects and animate them

We will look at a technique called region growing to add plants and trees to the terrain's surface, and finish by combining the terrain with our sky box, water, and billboarding effects to create a mountain scene:

Building a terrain from a heightmap

A heightmap is a 2D image that stores, in each pixel, the height of the corresponding point on a grid of vertices. The pixel values range from 0 to 1, so in practice we will multiply them by the maximum height of the terrain to get the final height of each vertex. We build a terrain out of vertices and indices as a large rectangular grid with the same number of vertices as the number of pixels in the heightmap.

Let's start by creating a new Terrain class. This class will keep track of everything needed to render our terrain: textures, the effect, vertex and index buffers, and so on.

    public class Terrain
    {
        VertexPositionNormalTexture[] vertices; // Vertex array
        VertexBuffer vertexBuffer;              // Vertex buffer
        int[] indices;                          // Index array
        IndexBuffer indexBuffer;                // Index buffer
        float[,] heights;                       // Array of vertex heights
        float height;                           // Maximum height of terrain
        float cellSize;                         // Distance between vertices on x and z axes
        int width, length;                      // Number of vertices on x and z axes
        int nVertices, nIndices;                // Number of vertices and indices
        Effect effect;                          // Effect used for rendering
        GraphicsDevice GraphicsDevice;          // Graphics device to draw with
        Texture2D heightMap;                    // Heightmap texture
    }

The constructor will initialize many of these values:

    public Terrain(Texture2D HeightMap, float CellSize, float Height,
        GraphicsDevice GraphicsDevice, ContentManager Content)
    {
        this.heightMap = HeightMap;
        this.width = HeightMap.Width;
        this.length = HeightMap.Height;
        this.cellSize = CellSize;
        this.height = Height;
        this.GraphicsDevice = GraphicsDevice;

        effect = Content.Load<Effect>("TerrainEffect");

        // 1 vertex per pixel
        nVertices = width * length;

        // (Width - 1) * (Length - 1) cells, 2 triangles per cell,
        // 3 indices per triangle
        nIndices = (width - 1) * (length - 1) * 6;

        vertexBuffer = new VertexBuffer(GraphicsDevice,
            typeof(VertexPositionNormalTexture), nVertices,
            BufferUsage.WriteOnly);

        indexBuffer = new IndexBuffer(GraphicsDevice,
            IndexElementSize.ThirtyTwoBits, nIndices,
            BufferUsage.WriteOnly);
    }

Before we can generate any normals or indices, we need to know the dimensions of our grid. We know that the width and length are simply the width and height of our heightmap, but we need to extract the height values from the heightmap.
We do this with the getHeights() function:

    private void getHeights()
    {
        // Extract pixel data
        Color[] heightMapData = new Color[width * length];
        heightMap.GetData<Color>(heightMapData);

        // Create heights[,] array
        heights = new float[width, length];

        // For each pixel
        for (int y = 0; y < length; y++)
            for (int x = 0; x < width; x++)
            {
                // Get color value (0 - 255)
                float amt = heightMapData[y * width + x].R;

                // Scale to (0 - 1)
                amt /= 255.0f;

                // Multiply by max height to get final height
                heights[x, y] = amt * height;
            }
    }

This will initialize the heights[,] array, which we can then use to build our vertices. When building vertices, we simply lay out a vertex for each pixel in the heightmap, spaced according to the cellSize variable. Note that this will create (width - 1) * (length - 1) "cells", each with two triangles:

The function that does this is as shown:

    private void createVertices()
    {
        vertices = new VertexPositionNormalTexture[nVertices];

        // Calculate the position offset that will center the terrain
        // at (0, 0, 0)
        Vector3 offsetToCenter = -new Vector3(((float)width / 2.0f) * cellSize,
            0, ((float)length / 2.0f) * cellSize);

        // For each pixel in the image
        for (int z = 0; z < length; z++)
            for (int x = 0; x < width; x++)
            {
                // Find position based on grid coordinates and height in
                // heightmap
                Vector3 position = new Vector3(x * cellSize,
                    heights[x, z], z * cellSize) + offsetToCenter;

                // UV coordinates range from (0, 0) at grid location (0, 0)
                // to (1, 1) at grid location (width, length)
                Vector2 uv = new Vector2((float)x / width, (float)z / length);

                // Create the vertex
                vertices[z * width + x] = new VertexPositionNormalTexture(
                    position, Vector3.Zero, uv);
            }
    }

When we create our terrain's index buffer, we need to lay out two triangles for each cell in the terrain. All we need to do is find the indices of the vertices at each corner of each cell, and create the two triangles by specifying those indices in clockwise order. For example, to create the triangles for the first cell in the preceding screenshot, we would specify the triangles as [0, 1, 4] and [4, 1, 5].

    private void createIndices()
    {
        indices = new int[nIndices];
        int i = 0;

        // For each cell
        for (int x = 0; x < width - 1; x++)
            for (int z = 0; z < length - 1; z++)
            {
                // Find the indices of the corners
                int upperLeft = z * width + x;
                int upperRight = upperLeft + 1;
                int lowerLeft = upperLeft + width;
                int lowerRight = lowerLeft + 1;

                // Specify upper triangle
                indices[i++] = upperLeft;
                indices[i++] = upperRight;
                indices[i++] = lowerLeft;

                // Specify lower triangle
                indices[i++] = lowerLeft;
                indices[i++] = upperRight;
                indices[i++] = lowerRight;
            }
    }

The last thing we need to calculate for each vertex is the normal. Because we are creating the terrain from scratch, we will need to calculate all of the normals based only on the height data that we are given. This is actually much easier than it sounds: to calculate the normals, we simply calculate the normal of each triangle of the terrain and add that normal to each vertex involved in the triangle. Once we have done this for each triangle, we normalize each vertex normal, averaging the influences of all the triangles connected to that vertex.
    private void genNormals()
    {
        // For each triangle
        for (int i = 0; i < nIndices; i += 3)
        {
            // Find the position of each corner of the triangle
            Vector3 v1 = vertices[indices[i]].Position;
            Vector3 v2 = vertices[indices[i + 1]].Position;
            Vector3 v3 = vertices[indices[i + 2]].Position;

            // Cross the vectors between the corners to get the normal
            Vector3 normal = Vector3.Cross(v1 - v2, v1 - v3);
            normal.Normalize();

            // Add the influence of the normal to each vertex in the
            // triangle
            vertices[indices[i]].Normal += normal;
            vertices[indices[i + 1]].Normal += normal;
            vertices[indices[i + 2]].Normal += normal;
        }

        // Average the influences of the triangles touching each vertex
        for (int i = 0; i < nVertices; i++)
            vertices[i].Normal.Normalize();
    }

We'll finish off the constructor by calling these functions in order and then setting the vertices and indices that we created into their respective buffers:

    createVertices();
    createIndices();
    genNormals();

    vertexBuffer.SetData<VertexPositionNormalTexture>(vertices);
    indexBuffer.SetData<int>(indices);

Now that we've created the framework for this class, let's create the TerrainEffect.fx effect. This effect will, for the moment, be responsible for some simple directional lighting and texture mapping. We'll need a few effect parameters:

    float4x4 View;
    float4x4 Projection;

    float3 LightDirection = float3(1, -1, 0);

    float TextureTiling = 1;

    texture2D BaseTexture;
    sampler2D BaseTextureSampler = sampler_state
    {
        Texture = <BaseTexture>;
        AddressU = Wrap;
        AddressV = Wrap;
        MinFilter = Anisotropic;
        MagFilter = Anisotropic;
    };

The TextureTiling parameter determines how many times our texture is repeated across the terrain's surface. Simply stretching the texture across the terrain would look bad, because it would need to be stretched to a very large size; "tiling" it across the terrain will look much better. We will need a very standard vertex shader:

    struct VertexShaderInput
    {
        float4 Position : POSITION0;
        float2 UV : TEXCOORD0;
        float3 Normal : NORMAL0;
    };

    struct VertexShaderOutput
    {
        float4 Position : POSITION0;
        float2 UV : TEXCOORD0;
        float3 Normal : TEXCOORD1;
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        output.Position = mul(input.Position, mul(View, Projection));
        output.Normal = input.Normal;
        output.UV = input.UV;
        return output;
    }

The pixel shader is also very standard, except that we multiply the texture coordinates by the TextureTiling parameter. This works because the texture sampler's address mode is set to "wrap", so the sampler will simply wrap texture coordinates past the edge of the texture, creating the tiling effect.
    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        float light = dot(normalize(input.Normal), normalize(LightDirection));
        light = clamp(light + 0.4f, 0, 1); // Simple ambient lighting

        float3 tex = tex2D(BaseTextureSampler, input.UV * TextureTiling);
        return float4(tex * light, 1);
    }

The technique definition is the same as in our other effects:

    technique Technique1
    {
        pass Pass1
        {
            VertexShader = compile vs_2_0 VertexShaderFunction();
            PixelShader = compile ps_2_0 PixelShaderFunction();
        }
    }

In order to use the effect with our terrain, we'll need to add a few more member variables to the Terrain class:

    Texture2D baseTexture;
    float textureTiling;
    Vector3 lightDirection;

These values will be set from the constructor:

    public Terrain(Texture2D HeightMap, float CellSize, float Height,
        Texture2D BaseTexture, float TextureTiling, Vector3 LightDirection,
        GraphicsDevice GraphicsDevice, ContentManager Content)
    {
        this.baseTexture = BaseTexture;
        this.textureTiling = TextureTiling;
        this.lightDirection = LightDirection;
        // etc...

Finally, we can simply set these effect parameters along with the View and Projection parameters in the Draw() function:

    effect.Parameters["BaseTexture"].SetValue(baseTexture);
    effect.Parameters["TextureTiling"].SetValue(textureTiling);
    effect.Parameters["LightDirection"].SetValue(lightDirection);

Let's now add the terrain to our game. We'll need a new member variable in the Game1 class:

    Terrain terrain;

We'll need to initialize it in the LoadContent() method:

    terrain = new Terrain(Content.Load<Texture2D>("terrain"), 30, 4800,
        Content.Load<Texture2D>("grass"), 6, new Vector3(1, -1, 0),
        GraphicsDevice, Content);

Finally, we can draw it in the Draw() function:

    terrain.Draw(camera.View, camera.Projection);

Multitexturing

Our terrain looks pretty good as it is, but to make it more believable, the texture applied to it needs to vary: snow and rocks at the peaks, for example. To do this, we will use a technique called multitexturing, which uses the red, blue, and green channels of a texture as a guide to where to draw the textures that correspond to those channels. For example, sand may correspond to red, snow to blue, and rock to green. Adding snow would then be as simple as painting blue onto the areas of this "texture map" that correspond with peaks on the heightmap. We will also have one extra texture that fills in the areas where no colors have been painted onto the texture map: grass, for example.

To begin with, we will need to expand our effect's texture parameters from one texture to five: the texture map, the base texture, and the three color channel mapped textures.
    texture RTexture;
    sampler RTextureSampler = sampler_state
    {
        texture = <RTexture>;
        AddressU = Wrap;
        AddressV = Wrap;
        MinFilter = Anisotropic;
        MagFilter = Anisotropic;
    };

    texture GTexture;
    sampler GTextureSampler = sampler_state
    {
        texture = <GTexture>;
        AddressU = Wrap;
        AddressV = Wrap;
        MinFilter = Anisotropic;
        MagFilter = Anisotropic;
    };

    texture BTexture;
    sampler BTextureSampler = sampler_state
    {
        texture = <BTexture>;
        AddressU = Wrap;
        AddressV = Wrap;
        MinFilter = Anisotropic;
        MagFilter = Anisotropic;
    };

    texture BaseTexture;
    sampler BaseTextureSampler = sampler_state
    {
        texture = <BaseTexture>;
        AddressU = Wrap;
        AddressV = Wrap;
        MinFilter = Anisotropic;
        MagFilter = Anisotropic;
    };

    texture WeightMap;
    sampler WeightMapSampler = sampler_state
    {
        texture = <WeightMap>;
        AddressU = Clamp;
        AddressV = Clamp;
        MinFilter = Linear;
        MagFilter = Linear;
    };

Second, we need to update our pixel shader to draw these textures onto the terrain:

    float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
    {
        float light = dot(normalize(input.Normal), normalize(LightDirection));
        light = clamp(light + 0.4f, 0, 1);

        float3 rTex = tex2D(RTextureSampler, input.UV * TextureTiling);
        float3 gTex = tex2D(GTextureSampler, input.UV * TextureTiling);
        float3 bTex = tex2D(BTextureSampler, input.UV * TextureTiling);
        float3 base = tex2D(BaseTextureSampler, input.UV * TextureTiling);

        float3 weightMap = tex2D(WeightMapSampler, input.UV);

        float3 output = clamp(1.0f - weightMap.r - weightMap.g - weightMap.b, 0, 1);
        output *= base;
        output += weightMap.r * rTex + weightMap.g * gTex + weightMap.b * bTex;

        return float4(output * light, 1);
    }

We'll need to add a way to set these values to the Terrain class:

    public Texture2D RTexture, BTexture, GTexture, WeightMap;

All we need to do now is set these values on the effect in the Draw() function:

    effect.Parameters["RTexture"].SetValue(RTexture);
    effect.Parameters["GTexture"].SetValue(GTexture);
    effect.Parameters["BTexture"].SetValue(BTexture);
    effect.Parameters["WeightMap"].SetValue(WeightMap);

To use multitexturing in our game, we'll need to set these values in the Game1 class:

    terrain.WeightMap = Content.Load<Texture2D>("weightMap");
    terrain.RTexture = Content.Load<Texture2D>("sand");
    terrain.GTexture = Content.Load<Texture2D>("rock");
    terrain.BTexture = Content.Load<Texture2D>("snow");
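To see what the multitexturing blend computes for a single pixel, here is a small standalone C++ sketch of our own (not part of the XNA project; the colors are invented for the example) that mirrors the weight-map math from the pixel shader above:

    #include <algorithm>
    #include <cstdio>

    struct Color3 { float r, g, b; };

    // Mirrors the shader: leftover weight goes to the base texture, and the
    // weight map's channels blend in the R, G, and B detail textures.
    Color3 blend(Color3 base, Color3 rTex, Color3 gTex, Color3 bTex, Color3 w)
    {
        float baseW = std::min(1.0f, std::max(0.0f, 1.0f - w.r - w.g - w.b));
        Color3 o;
        o.r = baseW * base.r + w.r * rTex.r + w.g * gTex.r + w.b * bTex.r;
        o.g = baseW * base.g + w.r * rTex.g + w.g * gTex.g + w.b * bTex.g;
        o.b = baseW * base.b + w.r * rTex.b + w.g * gTex.b + w.b * bTex.b;
        return o;
    }

    int main()
    {
        // A pixel whose weight map is half red (sand) and half blue (snow)
        Color3 grass = {0.2f, 0.6f, 0.2f}, sand = {0.9f, 0.8f, 0.5f};
        Color3 rock  = {0.5f, 0.5f, 0.5f}, snow = {1.0f, 1.0f, 1.0f};
        Color3 w     = {0.5f, 0.0f, 0.5f};
        Color3 o = blend(grass, sand, rock, snow, w);
        std::printf("%.2f %.2f %.2f\n", o.r, o.g, o.b); // 0.95 0.90 0.75
        return 0;
    }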

Starting Ogre 3D

Packt
25 Nov 2010
7 min read
OGRE 3D 1.7 Beginner's Guide

Create real-time 3D applications using OGRE 3D from scratch

Easy-to-follow introduction to OGRE 3D
Create exciting 3D applications using OGRE 3D
Create your own scenes and monsters, play with the lights and shadows, and learn to use plugins
Get challenged to be creative and make fun and addictive games on your own
A hands-on do-it-yourself approach with over 100 examples

Introduction

Up until now, the ExampleApplication class has started and initialized Ogre 3D for us; now we are going to do it ourselves.

Time for action – starting Ogre 3D

This time we are working on a blank sheet.

Start with an empty code file, include Ogre.h, and create an empty main function:

    #include "Ogre.h"

    int main(void)
    {
        return 0;
    }

Create an instance of the Ogre 3D Root class; this class needs the name of the plugin configuration file:

    Ogre::Root* root = new Ogre::Root("plugins_d.cfg");

If the config dialog can't be shown or the user cancels it, close the application:

    if(!root->showConfigDialog())
    {
        return -1;
    }

Create a render window:

    Ogre::RenderWindow* window = root->initialise(true,"Ogre3D Beginners Guide");

Next, create a new scene manager:

    Ogre::SceneManager* sceneManager = root->createSceneManager(Ogre::ST_GENERIC);

Create a camera and name it Camera:

    Ogre::Camera* camera = sceneManager->createCamera("Camera");
    camera->setPosition(Ogre::Vector3(0,0,50));
    camera->lookAt(Ogre::Vector3(0,0,0));
    camera->setNearClipDistance(5);

With this camera, create a viewport and set the background color to black:

    Ogre::Viewport* viewport = window->addViewport(camera);
    viewport->setBackgroundColour(Ogre::ColourValue(0.0,0.0,0.0));

Now, use this viewport to set the aspect ratio of the camera:

    camera->setAspectRatio(Ogre::Real(viewport->getActualWidth()) /
        Ogre::Real(viewport->getActualHeight()));

Finally, tell the root to start rendering:

    root->startRendering();

Compile and run the application; you should see the normal config dialog and then a black window. This window can't be closed by pressing Escape because we haven't added key handling yet. You can close the application by pressing CTRL+C in the console from which the application was started.

What just happened?

We created our first Ogre 3D application without the help of the ExampleApplication. Because we aren't using the ExampleApplication any longer, we had to include Ogre.h ourselves, which was previously included by ExampleApplication.h. Before we can do anything with Ogre 3D, we need a root instance. The root class is a class that manages the higher levels of Ogre 3D, creates and saves the factories used for creating other objects, loads and unloads the needed plugins, and a lot more. We gave the root instance one parameter: the name of the file that defines which plugins to load. The following is the complete signature of the constructor:

    Root(const String& pluginFileName = "plugins.cfg",
         const String& configFileName = "ogre.cfg",
         const String& logFileName = "Ogre.log")

Besides the name of the plugin configuration file, the function also takes the names of the Ogre configuration file and the log file. We needed to change the first filename because we are using the debug version of our application and therefore want to load the debug plugins. The default value is plugins.cfg, which is correct for the release folder of the Ogre 3D SDK, but our application is running in the debug folder, where the filename is plugins_d.cfg.
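For reference, a plugins_d.cfg for a debug build typically looks something like the following. The exact plugin list depends on your SDK version and render systems, so treat this as an illustrative sketch rather than a file to copy verbatim:

    # Folder where the plugin libraries live
    PluginFolder=.

    # Debug builds load the _d variants of the plugins
    Plugin=RenderSystem_Direct3D9_d
    Plugin=RenderSystem_GL_d
    Plugin=Plugin_ParticleFX_d
    Plugin=Plugin_OctreeSceneManager_d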
ogre.cfg contains the settings for starting the Ogre application that we selected in the config dialog. This saves the user from making the same changes every time they start our application; with this file, Ogre 3D can remember their choices and use them as defaults for the next start. This file is created if it doesn't exist, so we don't append an _d to the filename and can use the default; the same is true for the log file.

Using the root instance, we let Ogre 3D show the config dialog to the user in step 3. When the user cancels the dialog or anything goes wrong, we return -1 and with this the application closes. Otherwise, we created a new render window and a new scene manager in step 4. Using the scene manager, we created a camera, and with the camera we created the viewport; then, using the viewport, we calculated the aspect ratio for the camera. After creating all requirements, we told the root instance to start rendering, so our result would be visible. Following is a diagram showing which object was needed to create the other:

Adding resources

We have now created our first Ogre 3D application which doesn't need the ExampleApplication. But one important thing is missing: we haven't loaded and rendered a model yet.

Time for action – loading the Sinbad mesh

We have our application; now let's add a model.

After setting the aspect ratio and before starting the rendering, add the zip archive containing the Sinbad model to our resources:

    Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
        "../../Media/packs/Sinbad.zip", "Zip");

We don't want to index more resources at the moment, so index all added resources now:

    Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();

Now create an instance of the Sinbad mesh and add it to the scene:

    Ogre::Entity* ent = sceneManager->createEntity("Sinbad.mesh");
    sceneManager->getRootSceneNode()->attachObject(ent);

Compile and run the application; you should see Sinbad in the middle of the screen:

What just happened?

We used the ResourceGroupManager to index the zip archive containing the Sinbad mesh and texture files, and after this was done, we told it to load the data with the createEntity() call in step 3.

Using resources.cfg

Adding a new line of code for each zip archive or folder we want to load is a tedious task and we should try to avoid it. The ExampleApplication used a configuration file called resources.cfg in which each folder or zip archive was listed, and all the content was loaded using this file. Let's replicate this behavior.

Time for action – using resources.cfg to load our models

Using our previous application, we are now going to parse resources.cfg.
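Before writing the parsing code, it helps to know what the file looks like. A minimal resources_d.cfg might look like the following (a sketch with example paths; the section name in brackets and the Zip or FileSystem type on each line are exactly the strings our code will read out):

    [General]
    Zip=../../Media/packs/Sinbad.zip
    FileSystem=../../Media/models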
Replace the loading of the zip archive with an instance of a config file pointing at resources_d.cfg:

    Ogre::ConfigFile cf;
    cf.load("resources_d.cfg");

First get the iterator, which goes over each section of the config file:

    Ogre::ConfigFile::SectionIterator sectionIter = cf.getSectionIterator();

Define three strings to save the data we are going to extract from the config file, and iterate over each section:

    Ogre::String sectionName, typeName, dataname;
    while (sectionIter.hasMoreElements())
    {

Get the name of the section:

    sectionName = sectionIter.peekNextKey();

Get the settings contained in the section and, at the same time, advance the section iterator; also create an iterator for the settings themselves:

    Ogre::ConfigFile::SettingsMultiMap* settings = sectionIter.getNext();
    Ogre::ConfigFile::SettingsMultiMap::iterator i;

Iterate over each setting in the section:

    for (i = settings->begin(); i != settings->end(); ++i)
    {

Use the iterator to get the name and the type of the resource:

    typeName = i->first;
    dataname = i->second;

Use the resource name, type, and section name to add it to the resource index:

    Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
        dataname, typeName, sectionName);

Compile and run the application, and you should see the same scene as before.
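Putting the fragments above together (and closing both loops), the whole replacement for the zip-archive loading looks roughly like this; as in the earlier section, a final initialiseAllResourceGroups() call indexes everything that was added:

    Ogre::ConfigFile cf;
    cf.load("resources_d.cfg");

    // Walk every [section] in the file and register each type=path entry
    Ogre::ConfigFile::SectionIterator sectionIter = cf.getSectionIterator();
    Ogre::String sectionName, typeName, dataname;
    while (sectionIter.hasMoreElements())
    {
        sectionName = sectionIter.peekNextKey();
        Ogre::ConfigFile::SettingsMultiMap* settings = sectionIter.getNext();
        Ogre::ConfigFile::SettingsMultiMap::iterator i;
        for (i = settings->begin(); i != settings->end(); ++i)
        {
            typeName = i->first;   // for example "Zip" or "FileSystem"
            dataname = i->second;  // the path of the archive or folder
            Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
                dataname, typeName, sectionName);
        }
    }
    Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();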