
Implementing lighting & camera effects in Unity 2018

  • 12 min read
  • 04 May 2018


Today, we will explore lighting and camera effects in Unity 2018. We will start with cameras, covering perspectives, frustums, and Skyboxes. Next, we will look at a few uses of multiple cameras, including mini-maps. We will also cover the different types of lighting, explore reflection probes, and conclude with a look at shadows.

Working with cameras


Cameras render scenes so that the user can view them.

Think about the hidden complexity of this statement. Our games are 3D, but players view them on 2D displays such as televisions, computer monitors, or mobile devices. Fortunately for us, Unity makes implementing cameras easy work.

Cameras are GameObjects and can be edited using transform tools in the Scene view as well as in the Inspector panel. Every scene must have at least one camera. In fact, when a new scene is created, Unity creates a camera named Main Camera. As you will see later in this chapter, a scene can have multiple cameras.

In the Scene view, cameras are indicated with a white camera silhouette, as shown in the following screenshot:

lighting-camera-effects-unity-2018-img-0


When we click our Main Camera in the Hierarchy panel, we are provided with a Camera Preview in the Scene view. This gives us a preview of what the camera sees as if it were in game mode. We also have access to several parameters via the Inspector panel. The Camera component in the Inspector panel is shown here:

lighting-camera-effects-unity-2018-img-1


Let's look at each of these parameters in relation to our Cucumber Beetle game; a short script after this list shows how several of them can also be set from code:

  • The Clear Flags parameter lets you switch between Skybox, Solid Color, Depth Only, and Don't Clear. The selection here informs Unity which parts of the screen to clear. We will leave this setting as Skybox. You will learn more about Skyboxes later in this chapter.
  • The Background parameter is used to set the default background fill (color) of your game world. This will only be visible after all game objects have been rendered and if there is no Skybox. Our Cucumber Beetle game will have a Skybox, so this parameter can be left with the default color.
  • The Culling Mask parameter allows you to select and deselect the layers you want the camera to render. The default selection options are Nothing, Everything, Default, TransparentFX, Ignore Raycast, Water, and UI. For our game, we will select Everything. If you are not sure which layer a game object is associated with, select it and look at the Layer parameter in the top section of the Inspector panel. There you will see the assigned layer. You can easily change the layer as well as create your own unique layers. This gives you fine-grained rendering control.
  • The Projection parameter allows you to select which projection, perspective or orthographic, you want for your camera. We will cover both of those projections later in this chapter. When perspective projection is selected, we are given access to the Field of View parameter. This is for the width of the camera's angle of view. The value range is 1-179°. You can use the slider to change the values and see the results in the Camera Preview window. When orthographic projection is selected, an additional Size parameter is available. This refers to the viewport size. For our game, we will select perspective projection with the Field of View set to 60.
  • The Clipping Planes parameters include Near and Far. These set the closest and furthest distances from the camera at which rendering occurs. For now, we will leave the default settings of 0.3 and 1000 for the Near and Far parameters, respectively.
  • The Viewport Rect parameter has four components – X, Y, W, and H – that determine where the camera will be drawn on the screen. As you would expect, the X and Y components refer to horizontal and vertical positions, and the W and H components refer to width and height. You can experiment with these values and see the changes in the Camera Preview. For our game, we will leave the default settings.
  • The Depth parameter is used when we implement more than one camera. We can set a value here to determine the camera's priority in relation to others. Larger values indicate a higher priority. The default setting is sufficient for our game.
  • The Rendering Path parameter defines what rendering methods our camera will use. The options are Use Graphics Settings, Forward, Deferred, Legacy Vertex Lit, and Legacy Deferred (light prepass). We will use the Use Graphics Settings option for our game, which also uses the default setting.
  • The Target Texture parameter is not something we will use in our game. When a render texture is assigned here, the camera renders to that texture instead of to the screen.
  • The Occlusion Culling parameter is a powerful setting. If enabled, Unity will not render objects that are occluded, or not seen by the camera. An example would be objects inside a building. If the camera can currently only see the external walls of the building, then none of the objects inside those walls can be seen. So, it makes sense to not render those. We only want to render what is absolutely necessary to help ensure our game has smooth gameplay and no lag. We will leave this as enabled for our game.
  • The Allow HDR parameter is a checkbox that toggles a camera's High Dynamic Range (HDR) rendering. We will leave the default setting of enabled for our game.
  • The Allow MSAA parameter is a toggle that determines whether our camera will use a Multisample Anti-Aliasing (MSAA) render target. MSAA is a computer graphics optimization technique and we want this enabled for our game.
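
All of these settings can also be changed from a script, which is handy when a camera needs to be configured at runtime. The following is a minimal sketch, assuming a script attached to the camera GameObject; the CameraSetup class name is ours, and the values simply mirror the choices described above:

    using UnityEngine;

    // Minimal sketch: applies the Camera settings discussed above from code.
    // Attach this to the Main Camera GameObject.
    [RequireComponent(typeof(Camera))]
    public class CameraSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();

            cam.clearFlags = CameraClearFlags.Skybox;   // Clear Flags
            cam.backgroundColor = Color.black;          // Background (only visible without a Skybox)
            cam.cullingMask = ~0;                       // Culling Mask: Everything
            cam.orthographic = false;                   // Projection: Perspective
            cam.fieldOfView = 60f;                      // Field of View
            cam.nearClipPlane = 0.3f;                   // Clipping Planes: Near
            cam.farClipPlane = 1000f;                   // Clipping Planes: Far
            cam.rect = new Rect(0f, 0f, 1f, 1f);        // Viewport Rect: X, Y, W, H
            cam.depth = -1f;                            // Depth (the Main Camera default)
            cam.useOcclusionCulling = true;             // Occlusion Culling
            cam.allowHDR = true;                        // Allow HDR
            cam.allowMSAA = true;                       // Allow MSAA
        }
    }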

Understanding camera projections


There are two camera projections used in Unity: perspective and orthographic. With perspective projection, the camera renders a scene from its point of view as it exists in the scene. Using this projection, the further away an object is from the camera, the smaller it will be displayed. This mimics how we see things in the real world. Because of the desire to produce realistic games, or games that approximate the real world, perspective projection is the most commonly used in modern games.

The other projection is orthographic. An orthographic camera renders a scene uniformly, without any perspective. This means that objects further away will not be displayed smaller than objects closer to the camera. This type of camera is commonly used for top-down games and is the default camera projection in 2D games and Unity's UI system.

We will use perspective projection for our Cucumber Beetle game.
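
If you want to compare the two projections while the game is running, the camera's projection can be flipped from a script. This is only a small sketch, assuming the script sits on the camera itself; the P key binding is our own choice:

    using UnityEngine;

    // Minimal sketch: press P to flip the attached camera between
    // perspective and orthographic projection at runtime.
    [RequireComponent(typeof(Camera))]
    public class ProjectionToggle : MonoBehaviour
    {
        public float orthographicSize = 5f; // the Size parameter used while orthographic

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.P))
            {
                Camera cam = GetComponent<Camera>();
                cam.orthographic = !cam.orthographic;

                if (cam.orthographic)
                {
                    cam.orthographicSize = orthographicSize;
                }
            }
        }
    }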

Orientating your frustum


When a camera is selected in the Hierarchy view, its frustum is visible in the Scene view. A frustum is a geometric shape that looks like a pyramid that has had its top cut off, as illustrated here:

lighting-camera-effects-unity-2018-img-2


The near, or top, plane is parallel to its base. The base is also referred to as the far plane. The frustum's shape represents the viewable region of your game; only objects in that region are rendered. Using the camera object in the Scene view, we can change our camera's frustum shape.
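
Normally the frustum is only drawn while the camera is selected. If you would like to keep it visible in the Scene view at all times, a small gizmo script can redraw it; this is a sketch only, and the FrustumGizmo name is ours:

    using UnityEngine;

    // Sketch: draws the camera's frustum in the Scene view even when the
    // camera is not the selected GameObject.
    [RequireComponent(typeof(Camera))]
    public class FrustumGizmo : MonoBehaviour
    {
        void OnDrawGizmos()
        {
            Camera cam = GetComponent<Camera>();

            // DrawFrustum works in the space defined by Gizmos.matrix, so
            // move that space to the camera's position and rotation first.
            Gizmos.color = Color.yellow;
            Gizmos.matrix = Matrix4x4.TRS(transform.position, transform.rotation, Vector3.one);
            Gizmos.DrawFrustum(Vector3.zero, cam.fieldOfView, cam.farClipPlane,
                               cam.nearClipPlane, cam.aspect);
        }
    }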

Creating a Skybox


When we create game worlds, we typically create the ground, buildings, characters, trees, and other game objects. What about the sky? By default, there will be a textured blue sky in your Unity game projects. That sky is sufficient for testing but does not add to an immersive gaming experience. We want a bit more realism, such as clouds, and that can be accomplished by creating a Skybox.

A Skybox is a six-sided cube visible to the player beyond all other objects. So, when a player looks beyond your objects, what they see is your Skybox. As we said, Skyboxes are six-sided cubes, which means you will need six separate images that can essentially be clamped to each other to form the cube.

The following screenshot shows the Default Skybox that Unity projects start with as well as the completed Custom Skybox you will create in this section:

lighting-camera-effects-unity-2018-img-3


Perform the following steps to create a Skybox:

  1. In the Project panel, create a Skybox subfolder in the Assets folder. We will use this folder to store our textures and materials for the Skybox.
  2. Drag the provided six Skybox images, or your own, into the new Skybox folder.
  3. Ensure the Skybox folder is selected in the Project panel.
  4. From the top menu, select Assets | Create | Material. In the Project panel, name the material Skybox.
  5. With the Skybox material selected, turn your attention to the Inspector panel.
  6. Select the Shader drop-down menu and select SkyBox | 6 Sided.
  7. Use the Select button for each of the six images and navigate to the images you added in step 2. Be sure to match the appropriate texture to the appropriate cube face. For example, the SkyBox_Front texture matches the Front[+Z] cube face on the Skybox Material.
  8. In order to assign our new Skybox to our scene, select Window | Lighting | Settings from the top menu. This will bring up the Lighting settings dialog window.
  9. In the Lighting settings dialog window, click on the small circle to the right of the Skybox Material input field. Then, close the selection window and the Lighting window. Refer to the following screenshot:

lighting-camera-effects-unity-2018-img-4


You will now be able to see your Skybox in the Scene view. When you click on the Camera in the Hierarchy panel, you will also see the Skybox as it will appear from the camera's perspective.
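
If you ever need to swap Skyboxes while the game is running, the same assignment can be made from a script through RenderSettings. This sketch assumes the Skybox material has been placed in a Resources folder so it can be loaded by name:

    using UnityEngine;

    // Sketch: assigns a Skybox material from code instead of the Lighting window.
    public class SkyboxAssigner : MonoBehaviour
    {
        void Start()
        {
            // Assumes Assets/Resources/Skybox.mat exists.
            Material skybox = Resources.Load<Material>("Skybox");

            if (skybox != null)
            {
                RenderSettings.skybox = skybox;
                DynamicGI.UpdateEnvironment(); // refresh ambient lighting from the new sky
            }
        }
    }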

Be sure to save your scene and your project.

Using multiple cameras


Our Unity games must have at least one camera, but we are not limited to using just one. As you will see, we will attach our main camera, or primary camera, to our player character. It will be as if the camera is following the character around the game environment. This will become the eyes of our character. We will play the game through our character's view.

A common use of a second camera is to create a mini-map that can be seen in a small window on top of the game display. These mini-maps can be made to toggle on and off or be permanent/fixed display components. Implementations might consist of a fog-of-war display, a radar showing enemies, or a global top-down view of the map for orientation purposes. You are only limited by your imagination.
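
A mini-map camera is mostly a matter of setting the parameters we covered earlier on a second camera. The following sketch (the class name, key values, and the idea of following a target are all our own example choices) places a top-down orthographic camera in the upper-right corner of the screen:

    using UnityEngine;

    // Sketch: configures a second, top-down camera as a mini-map window.
    // Attach to a dedicated camera GameObject, not the Main Camera.
    [RequireComponent(typeof(Camera))]
    public class MiniMapCamera : MonoBehaviour
    {
        public Transform target;      // usually the player character
        public float height = 30f;    // how far above the target the map camera sits

        void Start()
        {
            Camera cam = GetComponent<Camera>();
            cam.orthographic = true;               // uniform, top-down projection
            cam.orthographicSize = 20f;            // how much of the world the map shows
            cam.depth = 1f;                        // draw on top of the main camera
            cam.rect = new Rect(0.75f, 0.75f, 0.25f, 0.25f); // upper-right corner window
        }

        void LateUpdate()
        {
            if (target != null)
            {
                transform.position = target.position + Vector3.up * height;
                transform.rotation = Quaternion.Euler(90f, 0f, 0f); // look straight down
            }
        }
    }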

Another use of multiple cameras is to provide the player with the ability to switch between third-person and first-person views. You will remember that the first-person view puts the player's arms in view, while in the third-person view, the player's entire body is visible. We can use two cameras in the appropriate positions to support viewing from either camera. In a game, you might make this a toggle—say, with the C keyboard key—that switches from one camera to the other. Depending on what is happening in the game, the player might enjoy this ability.
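
One simple way to implement such a toggle is to keep both cameras in the scene and enable only one at a time. This is a sketch under that assumption, with both cameras assigned in the Inspector:

    using UnityEngine;

    // Sketch: switches between a first-person and a third-person camera
    // with the C key. Only one of the two cameras is active at a time.
    public class CameraSwitcher : MonoBehaviour
    {
        public Camera firstPersonCamera;
        public Camera thirdPersonCamera;

        void Start()
        {
            firstPersonCamera.enabled = true;
            thirdPersonCamera.enabled = false;
        }

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.C))
            {
                firstPersonCamera.enabled = !firstPersonCamera.enabled;
                thirdPersonCamera.enabled = !thirdPersonCamera.enabled;
            }
        }
    }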

Some single-player games feature multiple playable characters. Giving the player the ability to switch between these characters gives them greater control over the game strategy. To achieve this, we would need to have cameras attached to each playable character and then give the player the ability to swap characters. We would do this through scripting. This is a fairly advanced use of multiple cameras.

Another use of multiple cameras is adding specialty views in a game. These specialty views might include looking through a door's peep-hole, looking through binoculars at the top of a skyscraper, or even looking through a periscope. We can attach cameras to objects and change their viewing parameters to create unique camera use in our games. We are only limited by our own game designs and imagination.

We can also use cameras as cameras. That's right! We can use the camera game object to simulate actual in-game cameras. One example is implementing security cameras in a prison-escape game.

Working with lighting


In the previous sections, we explored the uses of cameras for Unity games. Just like in the real world, cameras need lights to show us objects. In Unity games, we use multiple lights to illuminate the game environment.

In Unity, we have both dynamic lighting techniques as well as light baking options for better performance. We can add numerous light sources throughout our scenes and selectively enable or disable an object's ability to cast or receive shadows. This level of specificity gives us tremendous opportunity to create realistic game scenes.
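
As a small illustration of that flexibility, lights can be created and configured entirely from code, and each renderer can opt in or out of casting and receiving shadows. The following is a sketch; the class name and values are example choices:

    using UnityEngine;

    // Sketch: creates a point light from code and shows how an individual
    // object can opt in or out of casting and receiving shadows.
    public class LightingExample : MonoBehaviour
    {
        public MeshRenderer someObjectRenderer; // any renderer in the scene, assigned in the Inspector

        void Start()
        {
            // Add a new point light to the scene.
            GameObject lightGO = new GameObject("Runtime Point Light");
            Light pointLight = lightGO.AddComponent<Light>();
            pointLight.type = LightType.Point;
            pointLight.range = 10f;
            pointLight.intensity = 1.5f;
            pointLight.shadows = LightShadows.Soft;   // this light casts soft shadows

            // Per-object control over shadow casting and receiving.
            if (someObjectRenderer != null)
            {
                someObjectRenderer.shadowCastingMode = UnityEngine.Rendering.ShadowCastingMode.On;
                someObjectRenderer.receiveShadows = true;
            }
        }
    }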

Perhaps the secret behind Unity's ability to so realistically render light and shadows is that Unity models the actual behavior of lights and shadows. Real-time global illumination gives us the ability to instantiate multiple lights in each scene, each with the ability to directly or indirectly impact objects in the scene that are within range of the light sources.

We can also add and manipulate ambient light in our game scenes. This is often done with Skyboxes, a tri-colored gradient, or even a single color. Each new scene in Unity has default ambient lighting, which we can control by editing the values in the Lighting window. In that window, you have access to the following settings:

  • Environment
  • Real-time Lighting
  • Mixed Lighting
  • Lightmapping Settings
  • Other Settings
  • Debug Settings


No changes to these are required for our game at this time. We have already set the environmental lighting to our Skybox.
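
For reference, the same ambient lighting choices (Skybox, tri-colored gradient, or a single color) can also be read and set from a script through RenderSettings. This is only a sketch of the relevant properties:

    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch: the ambient lighting options exposed in the Lighting window,
    // set from code via RenderSettings.
    public class AmbientLightExample : MonoBehaviour
    {
        void Start()
        {
            // Skybox-driven ambient light (our current setting).
            RenderSettings.ambientMode = AmbientMode.Skybox;

            // Alternatively, a tri-colored gradient...
            // RenderSettings.ambientMode = AmbientMode.Trilight;
            // RenderSettings.ambientSkyColor = Color.cyan;
            // RenderSettings.ambientEquatorColor = Color.gray;
            // RenderSettings.ambientGroundColor = Color.black;

            // ...or a single flat color.
            // RenderSettings.ambientMode = AmbientMode.Flat;
            // RenderSettings.ambientLight = Color.gray;
        }
    }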

When we create our scenes in Unity, we have three options for lighting. We can use real-time dynamic light, use the baked lighting approach, or use a mixture of the two. Our games perform more efficiently with baked lighting, compared to real-time dynamic lighting, so if performance is a concern, try using baked lighting where you can.

To summarize, we have discussed how to create interesting lighting and camera effects using Unity 2018. This article is an extract from the book Getting Started with Unity 2018 written by Dr. Edward Lavieri. The book will help you create fun-filled, real-world games with Unity 2018.
