Chapter 2. 3D Viewing and Object Picking

The recipes covered in this chapter include:

  • Implementing a vector-based camera with FPS style input support
  • Implementing the free camera
  • Implementing the target camera
  • Implementing view frustum culling
  • Implementing object picking using the depth buffer
  • Implementing object picking using color-based picking
  • Implementing object picking using scene intersection queries

Introduction

In this chapter, we will look at recipes for handling 3D viewing tasks and object picking in OpenGL v3.3 and above. Real-time simulations, games, and other graphics applications all require a virtual camera, a virtual viewer from whose point of view the 3D scene is rendered. The virtual camera is itself placed in the 3D world and has a specific direction, called the camera look direction. Internally, the virtual camera is a collection of translations and rotations stored in the viewing matrix.

Moreover, the projection settings of the virtual camera control how big or small objects appear on screen, the kind of functionality a real-world camera controls through its lens. These settings are stored in the projection matrix. In addition to providing the viewing and projection matrices, the virtual camera can also help reduce the amount of geometry pushed to the GPU, through a process called view frustum culling: rather than rendering all of the objects in the scene, only those visible to the virtual camera are rendered, which improves the runtime performance of the application.
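
As a rough illustration (a hedged sketch, not necessarily the book's exact code), building such a projection matrix with the GLM library used throughout this chapter comes down to a single call:

// Sketch: the projection matrix P built from the camera parameters.
// Note that older GLM versions took fovy in degrees, while recent
// versions expect radians.
P = glm::perspective(fovy, aspectRatio, near, far);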

Implementing a vector-based camera with FPS style input support

We will begin this chapter by designing a simple class to handle the camera. In a typical OpenGL application, viewing operations are carried out to place a virtual object on screen; we leave the details of the intermediate transformations to a typical graduate text on computer graphics, like the one given in the See also section of this recipe. This recipe will focus on designing a simple and efficient camera class. We create a simple inheritance hierarchy with a base class called CAbstractCamera, from which we will inherit two classes, CFreeCamera and CTargetCamera, as shown in the following figure:

Getting ready

The code for this recipe is in the Chapter2/src directory. The CAbstractCamera class is defined in the AbstractCamera.[h/cpp] files.

class CAbstractCamera
{
public:
  CAbstractCamera(void);
  ~CAbstractCamera(void);
  void SetupProjection(const float fovy, const float aspectRatio, const float near=0.1f, const float far=1000.0f);
  virtual void Update() = 0;
  virtual void Rotate(const float yaw, const float pitch, const float roll);
  const glm::mat4 GetViewMatrix() const;
  const glm::mat4 GetProjectionMatrix() const;
  void SetPosition(const glm::vec3& v);
  const glm::vec3 GetPosition() const;
  void SetFOV(const float fov);
  const float GetFOV() const;
  const float GetAspectRatio() const; 
  void CalcFrustumPlanes();
  bool IsPointInFrustum(const glm::vec3& point);
  bool IsSphereInFrustum(const glm::vec3& center, const float radius);
  bool IsBoxInFrustum(const glm::vec3& min, const glm::vec3& max);
  void GetFrustumPlanes(glm::vec4 planes[6]);
  glm::vec3 farPts[4];
  glm::vec3 nearPts[4];
protected:
  float yaw, pitch, roll, fov, aspect_ratio, Znear, Zfar;
  static glm::vec3 UP;
  glm::vec3 look;
  glm::vec3 up;
  glm::vec3 right; 
  glm::vec3 position;
  glm::mat4 V;       //view matrix
  glm::mat4 P;       //projection matrix
  CPlane planes[6];  //Frustum planes
};

We first declare the constructor/destructor pair. Next, the function for setting the projection for the camera is specified. Then some functions for updating the camera matrices based on rotation values are declared. Following these, the accessors and mutators are defined.

The class declaration is concluded with the view frustum culling-specific functions. Finally, the member fields are declared. The inheriting class needs to provide the implementation of one pure virtual function, Update, which recalculates the matrices and orientation vectors. The movement of the camera is based on three orientation vectors, namely look, up, and right.
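
As a minimal sketch of the shared behavior (assuming, per the note in the next recipe, that the angles are already in radians), the base class Rotate function presumably just stores the new angles and defers to the subclass; the actual implementation is in the Chapter2/src/AbstractCamera.cpp file:

void CAbstractCamera::Rotate(const float y, const float p, const float r) {
  yaw = y; pitch = p; roll = r;  // angles are expected in radians
  Update();  // pure virtual: the subclass rebuilds V and the basis vectors
}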

How to do it…

In a typical application, we will not use the CAbstractCamera class. Instead, we will use either the CFreeCamera class or the CTargetCamera class, as detailed in the following recipes. In this recipe, we will see how to handle input using the mouse and keyboard.

In order to handle the keyboard events, we perform the following processing in the idle callback function (a sketch of the full callback follows the list):

  1. Check for the keyboard key press event.
  2. If the W or S key is pressed, move the camera in the look vector direction:
    if( GetAsyncKeyState(VK_W) & 0x8000)
      cam.Walk(dt);
    if( GetAsyncKeyState(VK_S) & 0x8000)
      cam.Walk(-dt);
  3. If the A or D key is pressed, move the camera in the right vector direction:
    if( GetAsyncKeyState(VK_A) & 0x8000)
      cam.Strafe(-dt); 
    if( GetAsyncKeyState(VK_D) & 0x8000)
      cam.Strafe(dt);
  4. If the Q or Z key is pressed, move the camera in the up vector direction:
    if( GetAsyncKeyState(VK_Q) & 0x8000)
      cam.Lift(dt); 
    if( GetAsyncKeyState(VK_Z) & 0x8000)
      cam.Lift(-dt);
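
Wired together, the idle callback could look like the following sketch; the handler name OnIdle and the frame-time bookkeeping are illustrative assumptions, while the key handling is exactly the code listed above:

void OnIdle() {
  // illustrative frame-time calculation (seconds since the last frame)
  static float last_time = 0;
  float current_time = glutGet(GLUT_ELAPSED_TIME)/1000.0f;
  float dt = current_time - last_time;
  last_time = current_time;

  if( GetAsyncKeyState(VK_W) & 0x8000) cam.Walk(dt);
  if( GetAsyncKeyState(VK_S) & 0x8000) cam.Walk(-dt);
  if( GetAsyncKeyState(VK_A) & 0x8000) cam.Strafe(-dt);
  if( GetAsyncKeyState(VK_D) & 0x8000) cam.Strafe(dt);
  if( GetAsyncKeyState(VK_Q) & 0x8000) cam.Lift(dt);
  if( GetAsyncKeyState(VK_Z) & 0x8000) cam.Lift(-dt);

  glutPostRedisplay();  // request a redraw with the updated camera
}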

For handling mouse events, we attach two callbacks: one for mouse movement and the other for mouse click events (their registration is sketched after the list):

  1. Define the mouse down and mouse move event handlers.
  2. Determine the mouse input choice (the zoom or rotate state) in the mouse down event handler based on the mouse button clicked:
    if(button == GLUT_MIDDLE_BUTTON)
      state = 0;
    else
      state = 1;
  3. If zoom state is chosen, calculate the fov value based on the drag amount and then set up the camera projection matrix:
    if (state == 0) {
      fov += (y - oldY)/5.0f;
      cam.SetupProjection(fov, cam.GetAspectRatio());
    }
  4. If the rotate state is chosen, calculate the rotation amount (pitch and yaw). If mouse filtering is enabled, use the filtered mouse input, otherwise use the raw rotation amount:
    else {
      rY += (y - oldY)/5.0f; 
      rX += (oldX-x)/5.0f; 
      if(useFiltering) 
        filterMouseMoves(rX, rY);
      else {
        mouseX = rX;
        mouseY = rY;
      } 
      cam.Rotate(mouseX,mouseY, 0);
    }
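
Both handlers are registered with freeGLUT during initialization; a short sketch (the handler names OnMouseDown and OnMouseMove are illustrative):

glutMouseFunc(OnMouseDown);   // button press: choose the zoom or rotate state
glutMotionFunc(OnMouseMove);  // drag: update the fov or the rotation angles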

There's more…

It is always better to use filtered mouse input, which gives smoother movement. In the recipes, we use a simple average filter over the last 10 inputs, weighted by their temporal distance, so that the most recent input is given the most weight and the oldest input the least. The filtered result is used as shown in the following code snippet:

void filterMouseMoves(float dx, float dy) {
  for (int i = MOUSE_HISTORY_BUFFER_SIZE - 1; i > 0; --i) {
    mouseHistory[i] = mouseHistory[i - 1];
  }
  mouseHistory[0] = glm::vec2(dx, dy);
  float averageX = 0.0f,  averageY = 0.0f, averageTotal = 0.0f, currentWeight = 1.0f;
 
  for (int i = 0; i < MOUSE_HISTORY_BUFFER_SIZE; ++i) {
    glm::vec2 tmp=mouseHistory[i];
    averageX += tmp.x * currentWeight;
    averageY += tmp.y * currentWeight;
    averageTotal += 1.0f * currentWeight;
    currentWeight *= MOUSE_FILTER_WEIGHT;
  }
  mouseX = averageX / averageTotal;
  mouseY = averageY / averageTotal;
}
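
The two constants are defined elsewhere in the source. Plausible values (assumptions, not necessarily the book's exact ones) are a ten-entry history, matching the 10 inputs mentioned above, and a decay factor below one so that each older sample contributes progressively less:

const int MOUSE_HISTORY_BUFFER_SIZE = 10;  // number of samples averaged
const float MOUSE_FILTER_WEIGHT = 0.75f;   // per-step decay for older samples
glm::vec2 mouseHistory[MOUSE_HISTORY_BUFFER_SIZE];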

Note

When using filtered mouse input, make sure that the history buffer is filled with the appropriate initial value; otherwise you will see a sudden jerk in the first few frames.
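
One way to do this (an illustrative sketch) is to flood the history buffer with the first mouse sample before any filtering takes place:

for (int i = 0; i < MOUSE_HISTORY_BUFFER_SIZE; ++i)
  mouseHistory[i] = glm::vec2(mouseX, mouseY);  // no motion in the history yet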

See also

Smooth mouse filtering

Implementing the free camera

The free camera is the first camera type we will implement in this recipe. A free camera does not have a fixed target; it has a position from which it can look in any direction.

Getting ready

The following figure shows a free viewing camera. When we rotate the camera, it rotates in place at its position; when we move the camera, it keeps looking in the same direction.

The source code for this recipe is in the Chapter2/FreeCamera directory. The CFreeCamera class is defined in the Chapter2/src/FreeCamera.[h/cpp] files. The class interface is as follows:

class CFreeCamera : public CAbstractCamera
{
public:
  CFreeCamera(void);
  ~CFreeCamera(void);
  void Update();
  void Walk(const float dt);
  void Strafe(const float dt);
  void Lift(const float dt);
  void SetTranslation(const glm::vec3& t);
  glm::vec3 GetTranslation() const;
  void SetSpeed(const float speed);
  const float GetSpeed() const;
protected:
  float speed; //move speed of camera in m/s
  glm::vec3 translation;
};

How to do it…

The steps needed to implement the free camera are as follows (the assembled Update function is sketched after the list):

  1. Define the CFreeCamera class and add a vector to store the current translation.
  2. In the Update method, calculate the new orientation (rotation) matrix, using the current camera orientations (that is, yaw, pitch, and roll amount):
    glm::mat4 R = glm::yawPitchRoll(yaw,pitch,roll);

    Note

    Make sure that the yaw, pitch, and roll angles are in radians.

  3. Translate the camera position by the translation amount:
    position+=translation;

    If we need to implement a free camera which gradually comes to a halt, we should gradually decay the translation vector by adding the following code after the key events are handled:

    glm::vec3 t = cam.GetTranslation();
    if(glm::dot(t,t)>EPSILON2) {
       cam.SetTranslation(t*0.95f); 
    }

    If no decay is needed, then we should clear the translation vector to 0 in the CFreeCamera::Update function after translating the position:

    translation = glm::vec3(0);
  4. Transform the look and up vectors by the current rotation matrix, and take their cross product to obtain the right vector, completing the orthonormal basis:
    look = glm::vec3(R*glm::vec4(0,0,1,0));
    up = glm::vec3(R*glm::vec4(0,1,0,0));
    right = glm::cross(look, up);
  5. Determine the camera target point:
    glm::vec3 tgt = position+look;
  6. Use the glm::lookAt function to calculate the new view matrix using the camera position, target, and the up vector:
    V = glm::lookAt(position, tgt, up); 
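
Assembled from the steps above, the complete CFreeCamera::Update function reads roughly as follows (a sketch of the no-decay variant; the shipped implementation in Chapter2/src/FreeCamera.cpp may differ in detail):

void CFreeCamera::Update() {
  glm::mat4 R = glm::yawPitchRoll(yaw, pitch, roll);
  position += translation;
  translation = glm::vec3(0);               // no-decay variant: consume the motion
  look  = glm::vec3(R*glm::vec4(0,0,1,0));  // rotate the orthonormal basis
  up    = glm::vec3(R*glm::vec4(0,1,0,0));
  right = glm::cross(look, up);
  glm::vec3 tgt = position + look;
  V = glm::lookAt(position, tgt, up);       // rebuild the view matrix
}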

There's more…

The Walk function simply translates the camera in the look direction:

void CFreeCamera::Walk(const float dt) {
  translation += (look*dt);
}

The Strafe function translates the camera in the right direction:

void CFreeCamera::Strafe(const float dt) {
  translation += (right*dt);
}

The Lift function translates the camera in the up direction:

void CFreeCamera::Lift(const float dt) {
  translation += (up*dt);
}

Running the demo application renders an infinite checkered plane, as shown in the following figure. The free camera can be moved around by pressing the W, S, A, D, Q, and Z keys. Dragging with the left mouse button rotates the camera at its current position to change the look direction; dragging with the middle button zooms the camera in the look direction.

See also

DHPOWare OpenGL camera demo – Part 1

Implementing the target camera

The target camera works the opposite way: rather than the position, the target remains fixed, while the camera moves or rotates around it. Some operations, such as panning, move both the target and the camera position together.

Getting ready

The following figure shows an illustration of a target camera. Note that the small box is the target position for the camera.

The code for this recipe resides in the Chapter2/TargetCamera directory. The CTargetCamera class is defined in the Chapter2/src/TargetCamera.[h/cpp] files. The class declaration is as follows:

class CTargetCamera : public CAbstractCamera
{
public:
  CTargetCamera(void);
  ~CTargetCamera(void);
  void Update();
  void Rotate(const float yaw, const float pitch, const float roll);
  void SetTarget(const glm::vec3 tgt);
  const glm::vec3 GetTarget() const;
  void Pan(const float dx, const float dy);
  void Zoom(const float amount );
  void Move(const float dx, const float dy);
protected:
  glm::vec3 target;
  float minRy, maxRy;
  float distance;
  float minDistance, maxDistance;
};

How to do it…

We implement the target camera as follows (the assembled Update function is sketched after the list):

  1. Define the CTargetCamera class with a target position (target), the rotation limits (minRy and maxRy), the distance between the target and the camera position (distance), and the distance limits (minDistance and maxDistance).
  2. In the Update method, calculate the new orientation (rotation) matrix using the current camera orientations (that is, yaw, pitch, and roll amount):
    glm::mat4 R = glm::yawPitchRoll(yaw,pitch,roll);
  3. Use the distance to get an offset vector and then rotate this vector by the current rotation matrix:
    glm::vec3 T = glm::vec3(0,0,distance);
    T = glm::vec3(R*glm::vec4(T,0.0f));
  4. Get the new camera position by adding the translation vector to the target position:
    position = target + T;
  5. Recalculate the orthonormal basis and then the view matrix:
    look = glm::normalize(target-position);
    up = glm::vec3(R*glm::vec4(UP,0.0f));
    right = glm::cross(look, up);
    V = glm::lookAt(position, target, up);
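
Assembled from the steps above, CTargetCamera::Update reads roughly as follows (a sketch; the shipped code in Chapter2/src/TargetCamera.cpp may additionally clamp the rotation to the minRy/maxRy limits):

void CTargetCamera::Update() {
  glm::mat4 R = glm::yawPitchRoll(yaw, pitch, roll);
  glm::vec3 T = glm::vec3(0, 0, distance);  // offset along the view axis
  T = glm::vec3(R*glm::vec4(T, 0.0f));      // rotate the offset
  position = target + T;
  look  = glm::normalize(target - position);
  up    = glm::vec3(R*glm::vec4(UP, 0.0f));
  right = glm::cross(look, up);
  V = glm::lookAt(position, target, up);
}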

There's more…

The Move function moves both the position and the target by the same amount along the look and right vectors:

void CTargetCamera::Move(const float dx, const float dy) {
  glm::vec3 X = right*dx;
  glm::vec3 Y = look*dy;
  position += X + Y;
  target += X + Y;
  Update();
}

The Pan function moves in the screen's xy plane only; hence, the up vector is used instead of the look vector:

void CTargetCamera::Pan(const float dx, const float dy) {
  glm::vec3 X = right*dx;
  glm::vec3 Y = up*dy;
  position += X + Y;
  target += X + Y;
  Update();
}

The Zoom function moves the position in the look direction:

void CTargetCamera::Zoom(const float amount) {
  position += look * amount;
  distance = glm::distance(position, target);
  distance = std::max(minDistance, std::min(distance, maxDistance)); // clamp to the allowed range
  Update(); // recomputes position from the clamped distance
}
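
The clamp on the third line can equivalently be written with glm::clamp, which states the intent more directly:

distance = glm::clamp(distance, minDistance, maxDistance);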

The demonstration for this recipe renders an infinite checkered plane, as in the previous recipe, and is shown in the following figure:

See also

DHPOWare OpenGL camera demo – Part 1

Implementing view frustum culling

When working with a lot of polygonal data, there is a need to reduce the amount of geometry pushed to the GPU for processing. There are several techniques for scene management, such as quadtrees, octrees, and BSP trees. These techniques sort the geometry in visibility order, so that some objects can be culled from display altogether, which reduces the workload on the GPU.

Even before such techniques are used, there is an additional step that most graphics applications perform: view frustum culling. This process removes geometry that is not in the current camera's view frustum; the idea is that if an object is not viewable, it should not be processed. A frustum is a chopped pyramid with its tip at the camera position and its base at the far clip plane. The near clip plane is where the pyramid is chopped, as shown in the following figure. Any geometry inside the viewing frustum is displayed.

Getting ready

For this recipe, we will create a grid of points that are moved in a sine wave using a simple vertex shader. The geometry shader does the view frustum culling by only emitting vertices that are inside the viewing frustum. The calculation of the viewing frustum is carried out on the CPU, based on the camera projection parameters. We will follow the geometric approach in this tutorial. The code implementing this recipe is in the Chapter2/ViewFrustumCulling directory.

How to do it…

We will implement view frustum culling by taking the following steps:

  1. Define a vertex shader that displaces the object-space vertex position using a sine wave in the y axis:
    #version 330 core
    layout(location = 0) in vec3 vVertex;  
    uniform float t;
    const float PI = 3.14159265;
    void main()
    {
      gl_Position=vec4(vVertex,1)+vec4(0,sin(vVertex.x*2*PI+t),0,0);
    }
  2. Define a geometry shader that performs the view frustum culling calculation on each vertex passed in from the vertex shader:
    #version 330 core
    layout (points) in;
    layout (points, max_vertices=3) out;
    uniform mat4 MVP;
    uniform vec4 FrustumPlanes[6];
    bool PointInFrustum(in vec3 p) {
      for(int i=0; i < 6; i++) 
      {
        vec4 plane=FrustumPlanes[i];
        if ((dot(plane.xyz, p)+plane.w) < 0)
          return false;
      }
      return true;
    }
    void main()
    {
      //get the basic vertices
      for(int i=0;i<gl_in.length(); i++) { 
        vec4 vInPos = gl_in[i].gl_Position;
        vec2 tmp = (vInPos.xz*2-1.0)*5;
        vec3 V = vec3(tmp.x, vInPos.y, tmp.y);
        gl_Position = MVP*vec4(V,1);
        if(PointInFrustum(V)) { 
          EmitVertex();
        } 
      }
      EndPrimitive();
    }
  3. To render the particles as rounded points, we do a simple distance test in the fragment shader, discarding all fragments that fall outside a circle of radius 0.5 in point sprite coordinates:
    #version 330 core
    layout(location = 0) out vec4 vFragColor;
    void main() {
      vec2 pos = (gl_PointCoord.xy-0.5);
      if (dot(pos, pos) > 0.25) discard;  // outside the circle of radius 0.5
      vFragColor = vec4(0,0,1,1);
    }
  4. On the CPU side, call the CAbstractCamera::CalcFrustumPlanes() function to calculate the viewing frustum planes. Get the calculated frustum planes as a glm::vec4 array by calling CAbstractCamera::GetFrustumPlanes(), and then pass these to the shader. The xyz components store the plane's normal, and the w coordinate stores the distance of the plane. After these calls we draw the points:
    pCurrentCam->CalcFrustumPlanes();
    glm::vec4 p[6];
    pCurrentCam->GetFrustumPlanes(p);
    pointShader.Use();
      glUniform1f(pointShader("t"), current_time);
      glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); 
      glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0]));
      glBindVertexArray(pointVAOID);
      glDrawArrays(GL_POINTS,0,MAX_POINTS);
    pointShader.UnUse();

How it works…

There are two main parts to this recipe: calculating the viewing frustum planes and checking whether a given point is in the viewing frustum. The first calculation is carried out in the CAbstractCamera::CalcFrustumPlanes() function; refer to the Chapter2/src/AbstractCamera.cpp file for details.

In this function, we follow the geometric approach, whereby we first calculate the eight points of the frustum at the near and far clip planes. Theoretical details about this method are well explained in the reference given in the See also section. Once we have the eight frustum points, we use three of these points successively to get the bounding planes of the frustum. Here, we call the CPlane::FromPoints function, which generates a CPlane object from the given three points. This is repeated to get all six planes.
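
A minimal sketch of what CPlane::FromPoints computes (the member names N and d are assumptions; the real class ships with the book's source code): the plane normal is the normalized cross product of two edge vectors, and the distance term follows from any point on the plane.

CPlane CPlane::FromPoints(const glm::vec3& v0, const glm::vec3& v1, const glm::vec3& v2) {
  CPlane p;
  p.N = glm::normalize(glm::cross(v1-v0, v2-v0));  // plane normal
  p.d = -glm::dot(p.N, v0);  // so that dot(N, x) + d == 0 for points on the plane
  return p;
}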

Testing whether a point is in the viewing frustum is carried out in the geometry shader's PointInFrustum function, which is defined as follows:

bool PointInFrustum(in vec3 p) {
  for(int i=0; i < 6; i++) {
    vec4 plane=FrustumPlanes[i];
    if ((dot(plane.xyz, p)+plane.w) < 0)
        return false;
  }
  return true;
}

This function iterates through all six frustum planes. In each iteration, it checks the signed distance of the given point p with respect to the ith frustum plane, which is simply the dot product of the plane normal with the given point, plus the plane distance. If the signed distance is negative for any of the planes, the point is outside the viewing frustum and we can safely reject it. If the point has a positive signed distance for all six frustum planes, it is inside the viewing frustum. Note that the frustum planes are oriented so that their normals point inside the viewing frustum.

There's more…

The demonstration implementing this recipe shows two cameras: the local camera (camera 1), which shows the sine wave, and a world camera (camera 2), which shows the whole world, including the first camera's frustum. We can toggle the current camera by pressing 1 for camera 1 and 2 for camera 2. When in the camera 1 view, dragging the left mouse button rotates the scene, and the total number of points in the viewing frustum is displayed in the title bar. In the camera 2 view, left-clicking rotates camera 1, and the displayed viewing frustum is updated so we can see what the camera view should contain.

In order to see the total number of visible vertices emitted from the geometry shader, we use a hardware query. The shader and rendering code are bracketed between the begin and end query calls, as shown in the following code:

glBeginQuery(GL_PRIMITIVES_GENERATED, query);
pointShader.Use(); 
  glUniform1f(pointShader("t"), current_time);
  glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); 
  glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0]));
  glBindVertexArray(pointVAOID);
  glDrawArrays(GL_POINTS,0,MAX_POINTS);
pointShader.UnUse();
glEndQuery(GL_PRIMITIVES_GENERATED);

After these calls, the query result is retrieved by calling:

GLuint res;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &res);

If successful, this call returns the total number of vertices emitted from the geometry shader, and that is the total number of vertices in the viewing frustum.
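
The query object itself is an ordinary OpenGL object created once during initialization, for example:

GLuint query;
glGenQueries(1, &query);  // create the hardware query object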

Note

Note that for the camera 2 view, all points are emitted. Hence, the total number of points is displayed in the title bar.

When in the camera 1 view (see the following figure), we see the close-up of the wave as it displaces the points in the Y direction. In this view, the points are rendered in blue color. Moreover, the total number of visible points is written in the title bar. The frame rate is also written to show the performance benefit from view frustum culling.

When in the camera 2 view (see the following figure), we can click-and-drag the left mouse button to rotate camera 1. This allows us to see the updated viewing frustum and the visible points. In the camera 2 view, visible points in the camera 1 view frustum are rendered in magenta color, the viewing frustum planes are in red color, and the invisible points (in camera 1 viewing frustum) are in blue color.

Getting ready

For this recipe, we will create a grid of points that are moved in a sine wave using a simple vertex shader. The geometry shader does the view frustum culling by only emitting vertices that are inside the viewing frustum. The calculation of the viewing frustum is carried out on the CPU, based on the camera projection parameters. We will follow the geometric approach in this tutorial. The code implementing this recipe is in the Chapter2/ViewFrustumCulling directory.

How to do it…

We will implement view frustum culling by taking the following steps:

  1. Define a vertex shader that displaces the object-space vertex position using a sine wave in the y axis:
    #version 330 core
    layout(location = 0) in vec3 vVertex;  
    uniform float t;
    const float PI = 3.141562;
    void main()
    {
      gl_Position=vec4(vVertex,1)+vec4(0,sin(vVertex.x*2*PI+t),0,0);
    }
  2. Define a geometry shader that performs the view frustum culling calculation on each vertex passed in from the vertex shader:
    #version 330 core
    layout (points) in;
    layout (points, max_vertices=3) out;
    uniform mat4 MVP;
    uniform vec4 FrustumPlanes[6];
    bool PointInFrustum(in vec3 p) {
      for(int i=0; i < 6; i++) 
      {
        vec4 plane=FrustumPlanes[i];
        if ((dot(plane.xyz, p)+plane.w) < 0)
          return false;
      }
      return true;
    }
    void main()
    {
      //get the basic vertices
      for(int i=0;i<gl_in.length(); i++) { 
        vec4 vInPos = gl_in[i].gl_Position;
        vec2 tmp = (vInPos.xz*2-1.0)*5;
        vec3 V = vec3(tmp.x, vInPos.y, tmp.y);
        gl_Position = MVP*vec4(V,1);
        if(PointInFrustum(V)) { 
          EmitVertex();
        } 
      }
      EndPrimitive();
    }
  3. To render particles as rounded points, we do a simple trigonometric calculation by discarding all fragments that fall outside the radius of the circle:
    #version 330 core
    layout(location = 0) out vec4 vFragColor;
    void main() {
      vec2 pos = (gl_PointCoord.xy-0.5);
      if(0.25<dot(pos,pos))	discard;
      vFragColor = vec4(0,0,1,1);
    }
  4. On the CPU side, call the CAbstractCamera::CalcFrustumPlanes() function to calculate the viewing frustum planes. Get the calculated frustum planes as a glm::vec4 array by calling CAbstractCamera::GetFrustumPlanes(), and then pass these to the shader. The xyz components store the plane's normal, and the w coordinate stores the distance of the plane. After these calls we draw the points:
    pCurrentCam->CalcFrustumPlanes();
    glm::vec4 p[6];
    pCurrentCam->GetFrustumPlanes(p);
    pointShader.Use();
      glUniform1f(pointShader("t"), current_time);
      glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); 
      glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0]));
      glBindVertexArray(pointVAOID);
      glDrawArrays(GL_POINTS,0,MAX_POINTS);
    pointShader.UnUse();

How it works…

There are two main parts of this recipe: calculation of the viewing frustum planes and checking if a given point is in the viewing frustum. The first calculation is carried out in the CAbstractCamera::CalcFrustumPlanes() function. Refer to the Chapter2/src/AbstractCamera.cpp files for details.

In this function, we follow the geometric approach, whereby we first calculate the eight points of the frustum at the near and far clip planes. Theoretical details about this method are well explained in the reference given in the See also section. Once we have the eight frustum points, we use three of these points successively to get the bounding planes of the frustum. Here, we call the CPlane::FromPoints function, which generates a CPlane object from the given three points. This is repeated to get all six planes.

Testing whether a point is in the viewing frustum is carried out in the geometry shader's PointInFrustum function, which is defined as follows:

bool PointInFrustum(in vec3 p) {
  for(int i=0; i < 6; i++) {
    vec4 plane=FrustumPlanes[i];
    if ((dot(plane.xyz, p)+plane.w) < 0)
        return false;
  }
  return true;
}

This function iterates through all of the six frustum planes. In each iteration, it checks the signed distance of the given point p with respect to the ith frustum plane. This is a simple dot product of the plane normal with the given point and adding the plane distance. If the signed distance is negative for any of the planes, the point is outside the viewing frustum so we can safely reject the point. If the point has a positive signed distance for all of the six frustum planes, it is inside the viewing frustum. Note that the frustum planes are oriented in such a way that their normals point inside the viewing frustum.

There's more…

The demonstration implementing this recipe shows two cameras, the local camera (camera 1) which shows the sine wave and a world camera (camera 2) which shows the whole world, including the first camera frustum. We can toggle the current camera by pressing 1 for camera 1 and 2 for camera 2. When in camera 1 view, dragging the left mouse button rotates the scene, and the information about the total number of points in the viewing frustum are displayed in the title bar. In the camera 2 view, left-clicking rotates camera 1, and the displayed viewing frustum is updated so we can see what the camera view should contain.

In order to see the total number of visible vertices emitted from the geometry shader, we use a hardware query. The whole shader and the rendering code are bracketed in the begin/end query call as shown in the following code:

glBeginQuery(GL_PRIMITIVES_GENERATED, query);
pointShader.Use(); 
  glUniform1f(pointShader("t"), current_time);
  glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); 
  glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0]));
  glBindVertexArray(pointVAOID);
  glDrawArrays(GL_POINTS,0,MAX_POINTS);
pointShader.UnUse();
glEndQuery(GL_PRIMITIVES_GENERATED);

After these calls, the query result is retrieved by calling:

GLuint res;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &res);

If successful, this call returns the total number of vertices emitted from the geometry shader, and that is the total number of vertices in the viewing frustum.

Note

Note that for the camera 2 view, all points are emitted. Hence, the total number of points is displayed in the title bar.

When in the camera 1 view (see the following figure), we see the close-up of the wave as it displaces the points in the Y direction. In this view, the points are rendered in blue color. Moreover, the total number of visible points is written in the title bar. The frame rate is also written to show the performance benefit from view frustum culling.

When in the camera 2 view (see the following figure), we can click-and-drag the left mouse button to rotate camera 1. This allows us to see the updated viewing frustum and the visible points. In the camera 2 view, visible points in the camera 1 view frustum are rendered in magenta color, the viewing frustum planes are in red color, and the invisible points (in camera 1 viewing frustum) are in blue color.

How to do it…

We will implement view frustum culling by taking the following steps:

Define a vertex shader that displaces the object-space vertex position using a sine wave in the y axis:
#version 330 core
layout(location = 0) in vec3 vVertex;  
uniform float t;
const float PI = 3.141562;
void main()
{
  gl_Position=vec4(vVertex,1)+vec4(0,sin(vVertex.x*2*PI+t),0,0);
}
Define a
  1. geometry shader that performs the view frustum culling calculation on each vertex passed in from the vertex shader:
    #version 330 core
    layout (points) in;
    layout (points, max_vertices=3) out;
    uniform mat4 MVP;
    uniform vec4 FrustumPlanes[6];
    bool PointInFrustum(in vec3 p) {
      for(int i=0; i < 6; i++) 
      {
        vec4 plane=FrustumPlanes[i];
        if ((dot(plane.xyz, p)+plane.w) < 0)
          return false;
      }
      return true;
    }
    void main()
    {
      //get the basic vertices
      for(int i=0;i<gl_in.length(); i++) { 
        vec4 vInPos = gl_in[i].gl_Position;
        vec2 tmp = (vInPos.xz*2-1.0)*5;
        vec3 V = vec3(tmp.x, vInPos.y, tmp.y);
        gl_Position = MVP*vec4(V,1);
        if(PointInFrustum(V)) { 
          EmitVertex();
        } 
      }
      EndPrimitive();
    }
  2. To render particles as rounded points, we do a simple trigonometric calculation by discarding all fragments that fall outside the radius of the circle:
    #version 330 core
    layout(location = 0) out vec4 vFragColor;
    void main() {
      vec2 pos = (gl_PointCoord.xy-0.5);
      if(0.25<dot(pos,pos))	discard;
      vFragColor = vec4(0,0,1,1);
    }
  3. On the CPU side, call the CAbstractCamera::CalcFrustumPlanes() function to calculate the viewing frustum planes. Get the calculated frustum planes as a glm::vec4 array by calling CAbstractCamera::GetFrustumPlanes(), and then pass these to the shader. The xyz components store the plane's normal, and the w coordinate stores the distance of the plane. After these calls we draw the points:
    pCurrentCam->CalcFrustumPlanes();
    glm::vec4 p[6];
    pCurrentCam->GetFrustumPlanes(p);
    pointShader.Use();
      glUniform1f(pointShader("t"), current_time);
      glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); 
      glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0]));
      glBindVertexArray(pointVAOID);
      glDrawArrays(GL_POINTS,0,MAX_POINTS);
    pointShader.UnUse();

How it works…

There are two main parts of this recipe: calculation of the viewing frustum planes and checking if a given point is in the viewing frustum. The first calculation is carried out in the CAbstractCamera::CalcFrustumPlanes() function. Refer to the Chapter2/src/AbstractCamera.cpp files for details.

In this function, we follow the geometric approach, whereby we first calculate the eight points of the frustum at the near and far clip planes. Theoretical details about this method are well explained in the reference given in the See also section. Once we have the eight frustum points, we use three of these points successively to get the bounding planes of the frustum. Here, we call the CPlane::FromPoints function, which generates a CPlane object from the given three points. This is repeated to get all six planes.

Testing whether a point is in the viewing frustum is carried out in the geometry shader's PointInFrustum function, which is defined as follows:

bool PointInFrustum(in vec3 p) {
  for(int i=0; i < 6; i++) {
    vec4 plane=FrustumPlanes[i];
    if ((dot(plane.xyz, p)+plane.w) < 0)
        return false;
  }
  return true;
}

This function iterates through all of the six frustum planes. In each iteration, it checks the signed distance of the given point p with respect to the ith frustum plane. This is a simple dot product of the plane normal with the given point and adding the plane distance. If the signed distance is negative for any of the planes, the point is outside the viewing frustum so we can safely reject the point. If the point has a positive signed distance for all of the six frustum planes, it is inside the viewing frustum. Note that the frustum planes are oriented in such a way that their normals point inside the viewing frustum.

There's more…

The demonstration implementing this recipe shows two cameras, the local camera (camera 1) which shows the sine wave and a world camera (camera 2) which shows the whole world, including the first camera frustum. We can toggle the current camera by pressing 1 for camera 1 and 2 for camera 2. When in camera 1 view, dragging the left mouse button rotates the scene, and the information about the total number of points in the viewing frustum are displayed in the title bar. In the camera 2 view, left-clicking rotates camera 1, and the displayed viewing frustum is updated so we can see what the camera view should contain.

In order to see the total number of visible vertices emitted from the geometry shader, we use a hardware query. The whole shader and the rendering code are bracketed in the begin/end query call as shown in the following code:

glBeginQuery(GL_PRIMITIVES_GENERATED, query);
pointShader.Use(); 
  glUniform1f(pointShader("t"), current_time);
  glUniformMatrix4fv(pointShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP)); 
  glUniform4fv(pointShader("FrustumPlanes"), 6, glm::value_ptr(p[0]));
  glBindVertexArray(pointVAOID);
  glDrawArrays(GL_POINTS,0,MAX_POINTS);
pointShader.UnUse();
glEndQuery(GL_PRIMITIVES_GENERATED);

After these calls, the query result is retrieved by calling:

GLuint res;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &res);

If successful, this call returns the total number of vertices emitted from the geometry shader, and that is the total number of vertices in the viewing frustum.
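
Note that glGetQueryObjectuiv with GL_QUERY_RESULT blocks until the result is available. The query object itself must be generated beforehand; a minimal sketch follows (the placement at initialization/shutdown is an assumption):

GLuint query;
glGenQueries(1, &query);        // once, at initialization

// ... issue the begin/end query and draw calls shown above ...

// optionally poll first to avoid stalling the pipeline
GLint available = 0;
glGetQueryObjectiv(query, GL_QUERY_RESULT_AVAILABLE, &available);
if (available) {
  GLuint res;
  glGetQueryObjectuiv(query, GL_QUERY_RESULT, &res);
}

glDeleteQueries(1, &query);     // once, at shutdown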

Note

Note that for the camera 2 view, all points are emitted. Hence, the total number of points is displayed in the title bar.

When in the camera 1 view (see the following figure), we see a close-up of the wave as it displaces the points in the Y direction. In this view, the points are rendered in blue. In addition, the total number of visible points and the frame rate are written to the title bar to show the performance benefit of view frustum culling.

When in the camera 2 view (see the following figure), we can click and drag the left mouse button to rotate camera 1. This lets us see the updated viewing frustum and the visible points. In the camera 2 view, points inside the camera 1 view frustum are rendered in magenta, the viewing frustum planes in red, and points outside the camera 1 frustum in blue.


See also

Lighthouse 3D view frustum culling

Implementing object picking using the depth buffer

Often when working on projects, we need the ability to pick graphical objects on screen. In OpenGL versions prior to 3.0, the selection buffer was used for this purpose, but that buffer has been removed from the modern OpenGL 3.3 core profile. Fortunately, several alternative methods remain. In this recipe, we will implement a simple picking technique using the depth buffer.

Getting ready

The code for this recipe is in the Chapter2/Picking_DepthBuffer folder. Relevant source files are in the Chapter2/src folder.

How to do it…

Picking using the depth buffer can be implemented as follows:

  1. Enable depth testing:
    glEnable(GL_DEPTH_TEST);
  2. In the mouse down event handler, read the depth value from the depth buffer using the glReadPixels function at the clicked point:
    glReadPixels( x, HEIGHT-y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
  3. Unproject the 3D point, vec3(x,HEIGHT-y,winZ), to obtain the object-space point from the clicked screen-space point x,y and the depth value winZ. Make sure to invert the y value by subtracting it from HEIGHT:
    glm::vec3 objPt = glm::unProject(glm::vec3(x,HEIGHT-y,winZ), MV, P, glm::vec4(0,0,WIDTH, HEIGHT));
  4. Check the distance of each scene object from the object-space point objPt. If the distance is within the object's bounds and is the smallest found so far, store the object's index:
    size_t i=0;
    float minDist = 1000;
    selected_box=-1;
    for(i=0;i<3;i++) { 
      float dist = glm::distance(box_positions[i], objPt);
      if( dist<1 && dist<minDist) {
        selected_box = i;
        minDist = dist;
      }
    }
  5. Based on the selected index, color the object as selected:
    glm::mat4 T = glm::translate(glm::mat4(1), box_positions[0]);
    cube->color = (selected_box==0)?glm::vec3(0,1,1):glm::vec3(1,0,0);
    cube->Render(glm::value_ptr(MVP*T));
    
    T = glm::translate(glm::mat4(1), box_positions[1]);
    cube->color = (selected_box==1)?glm::vec3(0,1,1):glm::vec3(0,1,0);
    cube->Render(glm::value_ptr(MVP*T));
    
    T = glm::translate(glm::mat4(1), box_positions[2]);
    cube->color = (selected_box==2)?glm::vec3(0,1,1):glm::vec3(0,0,1);
    cube->Render(glm::value_ptr(MVP*T));

How it works…

This recipe renders three cubes, in red, green, and blue, on the screen. When the user clicks on any of these cubes, the depth buffer is read to find the depth value at the clicked point. The object-space point is then obtained by unprojecting (glm::unProject) the clicked point (x, HEIGHT-y, winZ). A loop then iterates over all objects in the scene to find the object nearest to this object-space point, and the index of that object is stored.
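
Putting the steps together, the mouse-down handler amounts to the following sketch (the handler signature and the MV, P, WIDTH, HEIGHT, box_positions, and selected_box variables are assumed from the snippets above):

void OnMouseDown(int x, int y) {
  // read the depth at the clicked pixel (note the y flip)
  float winZ = 0;
  glReadPixels(x, HEIGHT - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

  // unproject the window-space point to object space
  glm::vec3 objPt = glm::unProject(glm::vec3(x, HEIGHT - y, winZ), MV, P,
                                   glm::vec4(0, 0, WIDTH, HEIGHT));

  // select the nearest box whose centre lies within 1 unit of objPt
  selected_box = -1;
  float minDist = 1000;
  for (int i = 0; i < 3; i++) {
    float dist = glm::distance(box_positions[i], objPt);
    if (dist < 1 && dist < minDist) {
      selected_box = i;
      minDist = dist;
    }
  }
}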

There's more…

In the demonstration application for this recipe, when the user clicks on any cube, the currently selected box changes color to cyan to signify selection, as shown in the following figure:


See also

Picking tutorial at OGLDEV

Implementing object picking using color

Another method used for picking objects in a 3D world is color-based picking. In this recipe, we will use the same scene as in the last recipe.

Getting ready

The code for this recipe is in the Chapter2/Picking_ColorBuffer folder. Relevant source files are in the Chapter2/src folder.

How to do it…

To enable picking with the color buffer, the following steps are needed:

  1. Disable dithering. This is done to prevent any color mismatch during the query:
    glDisable(GL_DITHER);
  2. In the mouse down event handler, read the color value at the clicked position from the color buffer using the glReadPixels function:
    GLubyte pixel[4];
    glReadPixels(x, HEIGHT-y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
  3. Compare the color value at the clicked point to the color values of all objects to find the intersection:
    selected_box=-1;
    if(pixel[0]==255 && pixel[1]==0 && pixel[2]==0) {
      cout<<"picked box 1"<<endl;
      selected_box = 0;
    }
    if(pixel[0]==0 && pixel[1]==255 && pixel[2]==0) {
      cout<<"picked box 2"<<endl;
      selected_box = 1;
    }
    if(pixel[0]==0 && pixel[1]==0 && pixel[2]==255) {
      cout<<"picked box 3"<<endl;
      selected_box = 2;
    }

How it works…

This method is simple to implement: we check the color of the pixel where the mouse is clicked. Since dithering might alter the written color values, we disable it. The pixel's r, g, and b values are then checked against all of the scene objects' colors and the appropriate object is selected. We could also have used the float data type, GL_FLOAT, when reading and comparing the pixel values; however, due to floating-point imprecision, the test might not be accurate. Therefore, we use the integral data type GL_UNSIGNED_BYTE.
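
With only three boxes, hard-coded color comparisons are fine, but the same idea scales to many objects by encoding each object's index directly into the color used for the picking pass. The following is a hedged sketch of such an encoding; it is not part of this recipe's code:

// Encode a 24-bit object index into an RGB color for the picking pass
glm::vec3 IndexToColor(int index) {
  int i = index + 1;  // reserve 0 (black) for "nothing picked"
  return glm::vec3(((i >> 0)  & 0xFF) / 255.0f,
                   ((i >> 8)  & 0xFF) / 255.0f,
                   ((i >> 16) & 0xFF) / 255.0f);
}

// Decode the index back from the pixel read with glReadPixels
int ColorToIndex(const GLubyte pixel[4]) {
  return (pixel[0] | (pixel[1] << 8) | (pixel[2] << 16)) - 1;  // -1 == none
}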

The demonstration application for this recipe uses the scene from the previous recipe. In this demonstration also, the user left-clicks on a box and the selection is highlighted in cyan, as shown in the following figure:

See also

Lighthouse3d color coded picking tutorial (http://www.lighthouse3d.com/opengl/picking/index.php3?color1).


Implementing object picking using scene intersection queries

The final picking method we will cover involves casting rays into the scene to determine the object nearest to the viewer. We will use the same scene as in the last two recipes: three cubes (colored red, green, and blue) placed near the origin.

Getting ready

The code for this recipe is in the Chapter2/Picking_SceneIntersection folder. Relevant source files are in the Chapter2/src folder.

How to do it…

For picking with scene intersection queries, take the following steps:

  1. Get two object-space points by unprojecting the screen-space point (x, HEIGHT-y) with two different depth values, one at z=0 and the other at z=1:
    glm::vec3 start = glm::unProject(glm::vec3(x,HEIGHT-y,0), MV, P, glm::vec4(0,0,WIDTH,HEIGHT));
    glm::vec3   end = glm::unProject(glm::vec3(x,HEIGHT-y,1), MV, P, glm::vec4(0,0,WIDTH,HEIGHT));
  2. Set the current camera position as eyeRay.origin, and obtain eyeRay.direction by normalizing the difference of the two object-space points, end and start, as follows:
    eyeRay.origin     =  cam.GetPosition();
    eyeRay.direction  =  glm::normalize(end-start);
  3. For all of the objects in the scene, find the intersection of the eye ray with the Axis-Aligned Bounding Box (AABB) of the object. Store the nearest intersected object index:
    float tMin = numeric_limits<float>::max();
    selected_box = -1;
    for(int i=0;i<3;i++) {
      glm::vec2 tMinMax = intersectBox(eyeRay, boxes[i]);
      if(tMinMax.x<tMinMax.y && tMinMax.x<tMin) {
        selected_box=i;
        tMin = tMinMax.x;
      }
    }
    if(selected_box==-1)
      cout<<"No box picked"<<endl;
    else
      cout<<"Selected box: "<<selected_box<<endl;

How it works…

The method discussed in this recipe first casts a ray from the camera position in the clicked direction, and then checks all of the scene objects' bounding boxes for intersection. There are two parts: estimating the ray direction from the clicked point, and the ray-AABB intersection test. We first focus on estimating the ray direction.

We know that after projection, the x and y values lie in the -1 to 1 range in normalized device coordinates, while the window-space depth values lie in the 0 to 1 range, with 0 at the near clip plane and 1 at the far clip plane. We first take the screen-space point and unproject it with a z value of 0, which gives us the object-space point at the near clip plane. Next, we unproject the same screen-space point with a z value of 1, which gives us the object-space point at the far clip plane. Subtracting the two unprojected object-space points gives us the ray direction. We store the camera position as eyeRay.origin and the normalized ray direction as eyeRay.direction.

After calculating the eye ray, we check it for intersection with all of the scene geometries. If the object-bounding box intersects the eye ray and it is the nearest intersection, we store the index of the object. The intersectBox function is defined as follows:

glm::vec2 intersectBox(const Ray& ray, const Box& cube) {
  glm::vec3 inv_dir = 1.0f/ray.direction;
  glm::vec3   tMin = (cube.min - ray.origin) * inv_dir;
  glm::vec3   tMax = (cube.max - ray.origin) * inv_dir;
  glm::vec3     t1 = glm::min(tMin, tMax);
  glm::vec3     t2 = glm::max(tMin, tMax);
  float tNear = max(max(t1.x, t1.y), t1.z);
  float  tFar = min(min(t2.x, t2.y), t2.z);
  return glm::vec2(tNear, tFar);
}
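
As a short usage sketch, the returned pair can be interpreted as follows (the Ray and Box structs are assumed to match how they are used above; the recipe's selection loop compares tNear values directly instead):

struct Ray { glm::vec3 origin, direction; };
struct Box { glm::vec3 min, max; };

// The ray hits the box when the slab intervals overlap (tNear <= tFar)
// and the exit point lies in front of the origin (tFar >= 0). A negative
// tNear with a positive tFar means the ray origin is inside the box.
bool HitsBox(const Ray& ray, const Box& box) {
  glm::vec2 t = intersectBox(ray, box);
  return t.x <= t.y && t.y >= 0.0f;
}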

There's more…

The intersectBox function works by intersecting the ray with a pair of slabs for each of the three axes. It then finds the tNear and tFar values: the box can only intersect the ray if tNear is less than tFar across all three axes, so the code takes the largest per-axis near value as tNear and the smallest per-axis far value as tFar. If tFar is less than tNear, the ray misses the box. For further details, refer to the See also section.

The demonstration application for this recipe uses the same scene as in the last two recipes. In this case also, left-clicking the mouse selects a box, which is highlighted in cyan, as shown in the following figure:


See also
