Chapter 3. Offscreen Rendering and Environment Mapping
In this chapter, we will cover:
- Implementing the twirl filter using the fragment shader
- Rendering a skybox using static cube mapping
- Implementing a mirror with render-to-texture using FBO
- Rendering a reflective object using dynamic cube mapping
- Implementing area filtering (sharpening/blurring/embossing) on an image using convolution
- Implementing the glow effect
Introduction
Offscreen rendering is a powerful feature of modern graphics APIs. In modern OpenGL, it is implemented using framebuffer objects (FBOs). Applications of offscreen rendering include post-processing effects such as glows, dynamic cubemaps, mirror effects, deferred rendering techniques, and image processing techniques. Nowadays, almost all games use this feature to achieve stunning visual effects with high rendering quality and detail. FBOs greatly simplify offscreen rendering, since the programmer uses an FBO the way they would use any other OpenGL object. This chapter will focus on using FBOs to carry out image processing effects for implementing digital convolution and glow. In addition, we will also elaborate on how to use the FBO for the mirror effect and dynamic cube mapping.
Implementing the twirl filter using the fragment shader
We will use a simple image manipulation operator in the fragment shader by implementing the twirl filter on the GPU.
Getting ready
This recipe builds upon the image loading recipe from Chapter 1, Introduction to Modern OpenGL. The code for this recipe is contained in the Chapter3/TwirlFilter directory.
How to do it…
Let us get started with the recipe as follows:
- Load the image as in the ImageLoader recipe from Chapter 1, Introduction to Modern OpenGL. Set the texture wrap mode to GL_CLAMP_TO_BORDER.
int texture_width = 0, texture_height = 0, channels = 0;
GLubyte* pData = SOIL_load_image(filename.c_str(), &texture_width, &texture_height, &channels, SOIL_LOAD_AUTO);
// Flip the image on the Y axis since SOIL loads it inverted
int i, j;
for( j = 0; j*2 < texture_height; ++j ) {
  int index1 = j * texture_width * channels;
  int index2 = (texture_height - 1 - j) * texture_width * channels;
  for( i = texture_width * channels; i > 0; --i ) {
    GLubyte temp = pData[index1];
    pData[index1] = pData[index2];
    pData[index2] = temp;
    ++index1;
    ++index2;
  }
}
glGenTextures(1, &textureID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture_width, texture_height, 0, GL_RGB, GL_UNSIGNED_BYTE, pData);
SOIL_free_image_data(pData);
- Set up a simple pass-through vertex shader that outputs the texture coordinates for texture lookup in the fragment shader, as given in the ImageLoader recipe of Chapter 1.
void main() {
  gl_Position = vec4(vVertex*2.0-1, 0, 1);
  vUV = vVertex;
}
- Set up the fragment shader that first shifts the texture coordinates, performs the twirl transformation, and then converts the shifted texture coordinates back for texture lookup.
void main() {
  vec2 uv = vUV - 0.5;
  float angle = atan(uv.y, uv.x);
  float radius = length(uv);
  angle += radius * twirl_amount;
  vec2 shifted = radius * vec2(cos(angle), sin(angle));
  vFragColor = texture(textureMap, shifted + 0.5);
}
- Render a 2D screen-space quad and apply the two shaders, as was done in the ImageLoader recipe in Chapter 1.
void OnRender() {
  glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
  shader.Use();
  glUniform1f(shader("twirl_amount"), twirl_amount);
  glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
  shader.UnUse();
  glutSwapBuffers();
}
How it works…
Twirl is a simple 2D transformation which deforms the image. In polar coordinates, this transformation is given simply as follows:

g(r, θ) = f(r, θ + t·r)

In this equation, t is the amount of twirl applied on the input image f. In practice, our images are a 2D function f(x,y) of Cartesian coordinates. We first convert the Cartesian coordinates to polar coordinates (r, θ) by using the following transformation:

r = √(x² + y²),  θ = tan⁻¹(y/x)

Here, x and y are the two Cartesian coordinates. In the fragment shader, we first offset the texture coordinates so that the origin is at the center of the image. Next, we get the angle θ and radius r.
void main() {
vec2 uv = vUV-0.5;
float angle = atan(uv.y, uv.x);
float radius = length(uv);
We then increment the angle by the given amount, multiplied by the radius. Next, we convert the polar coordinates back to Cartesian coordinates.
angle+= radius*twirl_amount;
vec2 shifted = radius* vec2(cos(angle), sin(angle));
Finally, we offset the texture coordinates back to the original position. The transformed texture coordinates are then used for texture lookup.
vFragColor = texture(textureMap, (shifted+0.5));
}
There's more…
The demo application implementing this recipe shows the rendered image; the twirl amount can be adjusted using the - and + keys.
Since the texture wrap mode was set to GL_CLAMP_TO_BORDER, pixels outside the image receive the border color, which is black by default. In this recipe, we applied the twirl effect to the whole image. As an exercise, we invite the reader to limit the twirl to a specific zone within the image; for example, within a radius of, say, 150 pixels from the center of the image. Hint: compare each fragment's distance from the center against the given pixel limit and attenuate the twirl accordingly, as the sketch below illustrates.
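The following fragment shader is a minimal sketch of one possible solution, assuming a 512 x 512 image; the image_size uniform and the falloff logic are illustrative additions, not part of the book's code.
#version 330 core
layout(location=0) out vec4 vFragColor;
smooth in vec2 vUV;
uniform sampler2D textureMap;
uniform float twirl_amount;
uniform vec2 image_size; // assumed uniform, e.g., vec2(512,512)
void main() {
  vec2 uv = vUV - 0.5;
  float radius = length(uv);
  // Convert the 150 pixel limit into normalized texture space
  float limit = 150.0 / image_size.x;
  // Smoothly attenuate the twirl to zero at the limit radius
  float falloff = 1.0 - smoothstep(0.0, limit, radius);
  float angle = atan(uv.y, uv.x) + radius * twirl_amount * falloff;
  vec2 shifted = radius * vec2(cos(angle), sin(angle));
  vFragColor = texture(textureMap, shifted + 0.5);
}
Inside the limit radius the twirl is applied in full strength at the center and fades out smoothly; beyond it the image is left untouched.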
Rendering a skybox using static cube mapping
This recipe will show how to render a skybox object using static cube mapping. Cube mapping is a simple technique for generating a surrounding environment. There are several methods, such as sky dome, which uses a spherical geometry; skybox, which uses a cubical geometry; and skyplane, which uses a planar geometry. For this recipe, we will focus on skyboxes using the static cube mapping approach. The cube mapping process needs six images that are placed on each face of a cube. The skybox is a very large cube that moves with the camera but does not rotate with it.
Getting ready
The code for this recipe is contained in the Chapter3/Skybox directory.
How to do it…
Let us get started with the recipe as follows:
- Set up the vertex array and vertex buffer objects to store a unit cube geometry (a minimal sketch of such a setup is shown after this list).
- Load the skybox images using an image loading library, such as SOIL.
int texture_widths[6];
int texture_heights[6];
int channels[6];
GLubyte* pData[6];
cout<<"Loading skybox images: ..."<<endl;
for(int i=0;i<6;i++) {
  cout<<"\tLoading: "<<texture_names[i]<<" ... ";
  pData[i] = SOIL_load_image(texture_names[i], &texture_widths[i], &texture_heights[i], &channels[i], SOIL_LOAD_AUTO);
  cout<<"done."<<endl;
}
- Generate a cubemap OpenGL texture object and bind the six loaded images to the GL_TEXTURE_CUBE_MAP texture targets. Also make sure that the image data loaded by the SOIL library is deleted after the texture data has been stored into the OpenGL texture.
glGenTextures(1, &skyboxTextureID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTextureID);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
GLint format = (channels[0]==4) ? GL_RGBA : GL_RGB;
for(int i=0;i<6;i++) {
  glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, format, texture_widths[i], texture_heights[i], 0, format, GL_UNSIGNED_BYTE, pData[i]);
  SOIL_free_image_data(pData[i]);
}
- Set up a vertex shader (see Chapter3/Skybox/shaders/skybox.vert) that outputs the vertex's object space position as the texture coordinate.
smooth out vec3 uv;
void main() {
  gl_Position = MVP*vec4(vVertex,1);
  uv = vVertex;
}
- Add a cubemap sampler to the fragment shader. Use the texture coordinates output from the vertex shader to sample the cubemap sampler object in the fragment shader (see Chapter3/Skybox/shaders/skybox.frag).
layout(location=0) out vec4 vFragColor;
uniform samplerCube cubeMap;
smooth in vec3 uv;
void main() {
  vFragColor = texture(cubeMap, uv);
}
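For the first step, the following is a minimal sketch of a unit cube VAO/VBO setup; the identifiers (cubeVAOID, cubeVBOID, cubeIndicesID) are placeholders rather than names from the book's accompanying code.
GLuint cubeVAOID, cubeVBOID, cubeIndicesID;
glGenVertexArrays(1, &cubeVAOID);
glBindVertexArray(cubeVAOID);
// Eight corners of a unit cube centered at the origin
GLfloat vertices[8][3] = {
  {-0.5f,-0.5f,-0.5f}, { 0.5f,-0.5f,-0.5f}, { 0.5f, 0.5f,-0.5f}, {-0.5f, 0.5f,-0.5f},
  {-0.5f,-0.5f, 0.5f}, { 0.5f,-0.5f, 0.5f}, { 0.5f, 0.5f, 0.5f}, {-0.5f, 0.5f, 0.5f}
};
// Twelve triangles, two per face
GLushort indices[36] = { 0,1,2, 0,2,3,  4,5,6, 4,6,7,
                         0,3,7, 0,7,4,  1,5,6, 1,6,2,
                         0,4,5, 0,5,1,  3,2,6, 3,6,7 };
glGenBuffers(1, &cubeVBOID);
glBindBuffer(GL_ARRAY_BUFFER, cubeVBOID);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, &cubeIndicesID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, cubeIndicesID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
Since the skybox is viewed from the inside, either disable back-face culling when drawing it or order the indices so that the triangles face inwards.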
How it works…
There are two parts to this recipe. The first part, which loads an OpenGL cubemap texture, is self-explanatory. We load the six images and bind these to an OpenGL cubemap texture target. There are six cubemap texture targets corresponding to the six sides of a cube: GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. Since their identifiers are defined consecutively, we offset the first target by the loop variable to move to the next cubemap texture target in the following code:
for(int i=0;i<6;i++) {
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, format,texture_widths[i], texture_heights[i], 0, format,GL_UNSIGNED_BYTE, pData[i]);
SOIL_free_image_data(pData[i]);
}
The second part is the shader responsible for sampling the cubemap texture. This work is carried out in the fragment shader (Chapter3/Skybox/shaders/skybox.frag). In the rendering code, we set the skybox shader and then render the skybox, passing it the MVP matrix, which is obtained as follows:
glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, dist));
glm::mat4 Rx = glm::rotate(glm::mat4(1), rX, glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 MV = glm::rotate(Rx, rY, glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 S = glm::scale(glm::mat4(1), glm::vec3(1000.0));
// The translation T is left out of the MVP, so the skybox stays
// centered on the viewer; only the rotations and scale apply.
glm::mat4 MVP = P*MV*S;
skybox->Render( glm::value_ptr(MVP));
To sample the correct location in the cubemap texture we need a vector. This vector can be obtained from the object space vertex positions that are passed to the vertex shader. These are passed through the uv output attribute to the fragment shader.
Tip
In this recipe, we scaled a unit cube. While it is not necessary to have a unit cube, one thing that we have to be careful with is that the size of the cube after scaling should not be greater than the far clip plane distance. Otherwise, our skybox will be clipped.
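To make this concrete: the unit cube spans ±0.5 on each axis, so after scaling by 1000 its half-extent is 500, and its farthest corner lies at 500·√3 ≈ 866 units from the viewer. Any projection whose far plane exceeds this distance is safe. A hypothetical setup (the 2000.0f far plane is an illustrative value, not from the book's code) might be:
// Far plane (2000) comfortably exceeds the skybox corner at ~866 units
glm::mat4 P = glm::perspective(45.0f, (float)WIDTH/HEIGHT, 0.1f, 2000.0f);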
There's more…
The demo application implementing this recipe shows a statically cube mapped skybox, which can be looked around by dragging with the left mouse button. This gives the user the feeling of a surrounding environment.
Implementing a mirror with render-to-texture using FBO
We will now use the FBO to render a mirror object on the screen. In a typical offscreen rendering OpenGL application, we set up the FBO first by calling the glGenFramebuffers function and passing it the number of FBOs desired; the second parameter stores the returned identifier. After the FBO object is generated, it has to be bound to the GL_FRAMEBUFFER, GL_DRAW_FRAMEBUFFER, or GL_READ_FRAMEBUFFER target. Following this call, the texture to be bound to the FBO's color attachment is attached by calling the glFramebufferTexture2D function.
There can be more than one color attachment on an FBO. The maximum number of color attachments supported by the GPU can be queried using the GL_MAX_COLOR_ATTACHMENTS parameter. The type and dimensions of the texture have to be specified, and the texture need not have the same size as the screen. However, all color attachments on the FBO must have the same dimensions. At any time, only a single FBO can be bound for a drawing operation and, similarly, only one can be bound for a reading operation. In addition to the color attachments, an FBO also provides depth and stencil attachment points.
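For example, this limit can be queried as follows:
// Query the number of color attachments supported by the GPU
GLint maxColorAttachments = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxColorAttachments);
printf("Max color attachments: %d\n", maxColorAttachments);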
If depth testing is required, a render buffer is also generated and bound by calling glGenRenderbuffers followed by the glBindRenderbuffer function. For render buffers, the depth buffer's data type and its dimensions have to be specified. After all these steps, the render buffer is attached to the framebuffer by calling the glFramebufferRenderbuffer function.
After the setup of the framebuffer and render buffer objects, the framebuffer completeness status has to be checked by calling glCheckFramebufferStatus, passing it the framebuffer target. This ensures that the FBO setup is correct. The function returns the status as an identifier; if the returned value is anything other than GL_FRAMEBUFFER_COMPLETE, the FBO setup was unsuccessful.
Tip
Make sure to check the Framebuffer status after the Framebuffer is bound.
Similar to other OpenGL objects, we must delete the framebuffer and renderbuffer objects, as well as any texture objects used for offscreen rendering, once they are no longer needed, by calling the glDeleteFramebuffers and glDeleteRenderbuffers functions. These are the typical steps needed to enable offscreen rendering using FBOs in modern OpenGL.
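As a sketch, cleanup for the objects created in this recipe might look like the following (using the identifiers introduced in the steps below):
// Release the offscreen-rendering objects once they are no longer needed
glDeleteFramebuffers(1, &fboID);
glDeleteRenderbuffers(1, &rbID);
glDeleteTextures(1, &renderTextureID);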
Getting ready
The code for this recipe is contained in the Chapter3/MirrorUsingFBO directory.
How to do it…
Let us get started with the recipe as follows:
- Initialize the framebuffer and renderbuffer objects' color and depth attachments, respectively. The render buffer is required if we need depth testing for the offscreen rendering, and the depth precision is specified using the glRenderbufferStorage function.
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glGenRenderbuffers(1, &rbID);
glBindRenderbuffer(GL_RENDERBUFFER, rbID);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, WIDTH, HEIGHT);
- Generate the offscreen texture that the FBO will render to. The last parameter of glTexImage2D is NULL, which tells OpenGL that we do not have any content yet; please provide a new block of GPU memory, which gets filled when the FBO is used as a render target.
glGenTextures(1, &renderTextureID);
glBindTexture(GL_TEXTURE_2D, renderTextureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
- Attach the Renderbuffer to the bound Framebuffer object and check for Framebuffer completeness.
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderTextureID, 0);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbID);
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if(status==GL_FRAMEBUFFER_COMPLETE) {
  printf("FBO setup succeeded.");
} else {
  printf("Error in FBO setup.");
}
- Unbind the Framebuffer object as follows:
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
- Create a quad geometry to act as a mirror:
mirror = new CQuad(-2);
- Render the scene normally from the point of view of the camera. Since the unit color cube is rendered at the origin, we translate it up along the Y axis so that its image can be viewed completely in the mirror.
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
grid->Render(glm::value_ptr(MVP));
localR[3][1] = 0.5;
cube->Render(glm::value_ptr(P*MV*localR));
- Store the current modelview matrix and then change the modelview matrix such that the camera is placed at the mirror object position. Also make sure to laterally invert this modelview matrix by scaling by -1 on the X axis.
glm::mat4 oldMV = MV;
glm::vec3 V = glm::vec3(-MV[2][0], -MV[2][1], -MV[2][2]);
glm::vec3 R = glm::reflect(V, mirror->normal);
MV = glm::lookAt(mirror->position, mirror->position + R, glm::vec3(0,1,0));
MV = glm::scale(MV, glm::vec3(-1,1,1));
- Bind the FBO, set up the FBO color attachment for the draw buffer (GL_COLOR_ATTACHMENT0, or any other attachment to which a texture is attached), and clear the FBO. The glDrawBuffer function enables the code to draw to a specific color attachment on the FBO. In our case, there is a single color attachment, so we set it as the draw buffer.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
- Set the modified modelview matrix and render the scene again. Also make sure to only render from the shiny side of the mirror.
if(glm::dot(V, mirror->normal) < 0) {
  grid->Render(glm::value_ptr(P*MV));
  cube->Render(glm::value_ptr(P*MV*localR));
}
- Unbind the FBO and restore the default draw buffer (GL_BACK_LEFT).
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK_LEFT);
Tip
Note that there are several aliases for the back buffer. The real back buffer is GL_BACK_LEFT, which is also referred to by the GL_BACK alias. The default Framebuffer has up to four color buffers, namely GL_FRONT_LEFT, GL_FRONT_RIGHT, GL_BACK_LEFT, and GL_BACK_RIGHT. If stereo rendering is not active, then only the left buffers are active, that is, GL_FRONT_LEFT (the active front color buffer) and GL_BACK_LEFT (the active back color buffer).
- Finally, render the mirror quad using the saved modelview matrix.
MV = oldMV;
glBindTexture(GL_TEXTURE_2D, renderTextureID);
mirror->Render(glm::value_ptr(P*MV));
How it works…
The mirror algorithm used in the recipe is very simple. We first get the view direction vector (V) from the viewing matrix. We reflect this vector about the normal of the mirror (N). Next, the camera is placed at the mirror object's position, looking along the reflected vector. Finally, the modelview matrix is scaled by -1 on the X axis. This ensures that the image is laterally inverted, as in a mirror. Details of the method are covered in the reference in the See also section.
There's more…
Details of the Framebuffer object can be obtained from the Framebuffer object specifications (see the See also section). The demo application implementing this recipe renders the scene together with its reflection in the mirror quad.
See also
- The official OpenGL registry, Framebuffer object specifications: http://www.opengl.org/registry/specs/EXT/framebuffer_object.txt
- OpenGL Superbible, Fifth Edition, Chapter 8, pages 354-358, Richard S. Wright, Addison-Wesley Professional
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html
Rendering a reflective object using dynamic cube mapping
Now we will see how to use dynamic cube mapping to render a real-time scene to a cubemap render target. This allows us to create reflective surfaces. In modern OpenGL, offscreen rendering (also called render-to-texture) functionality is exposed through FBOs.
Getting ready
In this recipe, we will render a box with encircling particles. The code is contained in the Chapter3/DynamicCubemap directory.
How to do it…
Let us get started with the recipe as follows:
- Create a cubemap texture object.
glGenTextures(1, &dynamicCubeMapID);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_CUBE_MAP, dynamicCubeMapID);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
for (int face = 0; face < 6; face++) {
  glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA, CUBEMAP_SIZE, CUBEMAP_SIZE, 0, GL_RGBA, GL_FLOAT, NULL);
}
- Set up an FBO with the cubemap texture as an attachment.
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glGenRenderbuffers(1, &rboID);
glBindRenderbuffer(GL_RENDERBUFFER, rboID);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, CUBEMAP_SIZE, CUBEMAP_SIZE);
// Attach the renderbuffer (rboID) as the depth attachment
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboID);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X, dynamicCubeMapID, 0);
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
  cerr<<"Frame buffer object setup error."<<endl;
  exit(EXIT_FAILURE);
} else {
  cerr<<"FBO set up successfully."<<endl;
}
- Set the viewport to the size of the offscreen texture and render the scene six times, without the reflective object, to the six sides of the cubemap using the FBO.
glViewport(0, 0, CUBEMAP_SIZE, CUBEMAP_SIZE);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X, dynamicCubeMapID, 0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glm::mat4 MV1 = glm::lookAt(glm::vec3(0), glm::vec3(1,0,0), glm::vec3(0,-1,0));
DrawScene( MV1*T, Pcubemap);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, dynamicCubeMapID, 0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glm::mat4 MV2 = glm::lookAt(glm::vec3(0), glm::vec3(-1,0,0), glm::vec3(0,-1,0));
DrawScene( MV2*T, Pcubemap);
//...similar for rest of the faces
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
- Restore the viewport and the modelview matrix, and render the scene normally.
glViewport(0, 0, WIDTH, HEIGHT);
DrawScene(MV, P);
- Set the cubemap shader and then render the reflective object.
glBindVertexArray(sphereVAOID);
cubemapShader.Use();
T = glm::translate(glm::mat4(1), p);
glUniformMatrix4fv(cubemapShader("MVP"), 1, GL_FALSE, glm::value_ptr(P*(MV*T)));
glUniform3fv(cubemapShader("eyePosition"), 1, glm::value_ptr(eyePos));
glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_SHORT, 0);
cubemapShader.UnUse();
How it works…
Dynamic cube mapping renders the scene six times from the reflective object's position, using six virtual cameras placed there. For rendering to the cubemap texture, an FBO is used with a cubemap texture attachment. The cubemap texture's GL_TEXTURE_CUBE_MAP_POSITIVE_X target is bound to the GL_COLOR_ATTACHMENT0 color attachment of the FBO. The last parameter of glTexImage2D is NULL, since this call just allocates the memory for offscreen rendering; the real data will be populated when the FBO is set as the render target.
The scene is then rendered to the cubemap texture, without the reflective object, by placing six cameras at the reflective object's position, one looking along each direction. The cubemap projection matrix (Pcubemap) is given a 90 degree field of view.
Pcubemap = glm::perspective(90.0f,1.0f,0.1f,1000.0f);
This renders the scene into the cubemap texture. For each side, a new MVP matrix is obtained by multiplying the projection matrix with the new MV matrix (obtained using the glm::lookAt function). This is repeated for all six sides of the cube; a sketch of the full loop is given below. Next, the scene is rendered normally and the reflective object is finally rendered using the generated cubemap, producing the reflective environment. Rendering each frame six times into an offscreen target hinders performance, especially if there are complex objects in the scene, so this technique should be used with caution.
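Since the listing above shows only the first two faces, the following is a sketch of the full six-face loop; the four remaining up vectors are an assumption that follows the usual cubemap camera convention, consistent with the two faces the book does show.
glm::vec3 targets[6] = { glm::vec3( 1, 0, 0), glm::vec3(-1, 0, 0),
                         glm::vec3( 0, 1, 0), glm::vec3( 0,-1, 0),
                         glm::vec3( 0, 0, 1), glm::vec3( 0, 0,-1) };
glm::vec3 ups[6]     = { glm::vec3( 0,-1, 0), glm::vec3( 0,-1, 0),
                         glm::vec3( 0, 0, 1), glm::vec3( 0, 0,-1),
                         glm::vec3( 0,-1, 0), glm::vec3( 0,-1, 0) };
for(int face = 0; face < 6; ++face) {
  // Attach the current cubemap face as the color attachment
  glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
      GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, dynamicCubeMapID, 0);
  glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
  glm::mat4 MVface = glm::lookAt(glm::vec3(0), targets[face], ups[face]);
  DrawScene(MVface * T, Pcubemap);
}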
The cubemap vertex shader outputs the object space vertex positions and normals.
#version 330 core
layout(location=0) in vec3 vVertex;
layout(location=1) in vec3 vNormal;
uniform mat4 MVP;
smooth out vec3 position;
smooth out vec3 normal;
void main() {
position = vVertex;
normal = vNormal;
gl_Position = MVP*vec4(vVertex,1);
}
The cubemap fragment shader uses the object space vertex positions to determine the view vector. The reflection vector is then obtained by reflecting the view vector at the object space normal.
#version 330 core
layout(location=0) out vec4 vFragColor;
uniform samplerCube cubeMap;
smooth in vec3 position;
smooth in vec3 normal;
uniform vec3 eyePosition;
void main() {
vec3 N = normalize(normal);
vec3 V = normalize(position-eyePosition);
vFragColor = texture(cubeMap, reflect(V,N));
}
There's more…
The demo application implementing this recipe renders a reflective sphere with eight cubes pulsating around it.
In this recipe, we could also use layered rendering, by using the geometry shader to output to a different Framebuffer object layer. This can be achieved by writing the appropriate face index to the gl_Layer attribute in the geometry shader and setting the appropriate viewing transformation for each face. This is left as an exercise for the reader; a starting-point sketch follows.
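The geometry shader below is a minimal sketch of that idea. It assumes the whole cubemap texture is attached to the FBO with glFramebufferTexture (so gl_Layer can route primitives to individual faces) and that a MVPs[6] uniform array, an assumed name, holds the per-face view-projection matrices.
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices=18) out;
uniform mat4 MVPs[6]; // assumed per-face view-projection matrices
void main() {
  for(int face = 0; face < 6; ++face) {
    gl_Layer = face; // route this primitive to the corresponding cubemap face
    for(int i = 0; i < 3; ++i) {
      gl_Position = MVPs[face] * gl_in[i].gl_Position;
      EmitVertex();
    }
    EndPrimitive();
  }
}
With this approach, the scene is drawn once and the geometry shader replicates each triangle six times, avoiding the six separate render passes at the cost of extra geometry-stage work.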
See also
- Check the OpenGL wiki page at http://www.opengl.org/wiki/Geometry_Shader#Layered_rendering
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html
Implementing area filtering (sharpening/blurring/embossing) on an image using convolution
We will now see how to do area filtering, that is, 2D image convolution, to implement effects such as sharpening, blurring, and embossing. There are several ways to achieve image convolution in the spatial domain. The simplest approach is to use a loop that iterates through a given image window and computes the sum of products of the image intensities with the convolution kernel. A more efficient method, as far as the implementation is concerned, is separable convolution, which breaks up the 2D convolution into two 1D convolutions. However, this approach requires an additional pass.
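As a quick illustration of why separability helps, a 3 x 3 Gaussian kernel factors into the outer product of two 1D kernels, so convolving with it can be done as a vertical 1D pass followed by a horizontal 1D pass:

$$\frac{1}{16}\begin{bmatrix}1 & 2 & 1\\ 2 & 4 & 2\\ 1 & 2 & 1\end{bmatrix} = \left(\frac{1}{4}\begin{bmatrix}1\\ 2\\ 1\end{bmatrix}\right)\left(\frac{1}{4}\begin{bmatrix}1 & 2 & 1\end{bmatrix}\right), \qquad f * h = (f * h_{col}) * h_{row}$$

This reduces the per-pixel cost from n x n texture fetches to 2n, at the price of the extra pass.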
Getting ready
This recipe is built on top of the image loading recipe discussed in the first chapter. If you feel a bit lost, we suggest skimming through it so we are on the same page. The code for this recipe is contained in the Chapter3/Convolution directory. For this recipe, most of the work takes place in the fragment shader.
How to do it…
Let us get started with the recipe as follows:
- Create a simple pass-through vertex shader that outputs the clip space position and the texture coordinates which are to be passed into the fragment shader for texture lookup.
#version 330 core
in vec2 vVertex;
out vec2 vUV;
void main() {
  gl_Position = vec4(vVertex*2.0-1,0,1);
  vUV = vVertex;
}
- In the fragment shader, we declare a constant array called kernel, which stores our convolution kernel. Changing the convolution kernel values dictates the output of the convolution. The default kernel sets up a sharpening convolution filter. Refer to Chapter3/Convolution/shaders/shader_convolution.frag for details.
const float kernel[] = float[9](-1,-1,-1,
                                -1, 8,-1,
                                -1,-1,-1);
- In the fragment shader, we run a nested loop that iterates through the current pixel's neighborhood and multiplies each kernel value with the corresponding pixel's value. This is continued over an n x n neighborhood, where n is the width/height of the kernel.
for(int j=-1;j<=1;j++) {
  for(int i=-1;i<=1;i++) {
    color += kernel[index--] * texture(textureMap, vUV+(vec2(i,j)*delta));
  }
}
- After the nested loops, we divide the color value by the total number of values in the kernel. For a 3 x 3 kernel, we have nine values. Finally, we add the convolved color value to the current pixel's value. The assembled shader is sketched after this list.
color /= 9.0;
vFragColor = color + texture(textureMap, vUV);
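Putting these pieces together, the complete convolution fragment shader could look like the following sketch. The computation of delta (the size of one texel, obtained here via textureSize) and the sampler/varying names are our assumptions, since the listing above elides them:

#version 330 core
layout(location=0) out vec4 vFragColor;
uniform sampler2D textureMap;
in vec2 vUV;
const float kernel[9] = float[9](-1,-1,-1,
                                 -1, 8,-1,
                                 -1,-1,-1);
void main() {
  vec2 delta = 1.0/vec2(textureSize(textureMap, 0)); // one texel (assumed)
  vec4 color = vec4(0);
  int index = 8; // walking the kernel backwards flips it, giving a true convolution
  for(int j=-1;j<=1;j++) {
    for(int i=-1;i<=1;i++) {
      color += kernel[index--] * texture(textureMap, vUV + vec2(i,j)*delta);
    }
  }
  color /= 9.0;                                  // normalize by the kernel size
  vFragColor = color + texture(textureMap, vUV); // add back the original pixel
}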
How it works…
For a 2D digital image f(x,y), the processed image g(x,y), after the convolution operation with an m x n kernel h(x,y) (where m = 2a+1 and n = 2b+1), is defined mathematically as follows:

$$g(x,y) = \sum_{s=-a}^{a}\sum_{t=-b}^{b} h(s,t)\, f(x-s,\, y-t)$$
For each pixel, we simply sum the product of the current image pixel value with the corresponding coefficient in the kernel in the given neighborhood. For details about the kernel coefficients, we refer the reader to any standard text on digital image processing, like the one given in the See also section.
The overall algorithm works like this. We set up our FBO for offscreen rendering. We render our image on the offscreen render target of the FBO, instead of the back buffer. Now the FBO attachment stores our image. Next, we set the output from the first step (that is, the rendered image on the FBO attachment) as input to the convolution shader in the second pass. We render a full-screen quad on the back buffer and apply our convolution shader to it. This performs convolution on the input image. Finally, we swap the back buffer to show the result on the screen.
After the image is loaded and an OpenGL texture has been generated, we render a screen-aligned quad. This allows the fragment shader to run for the whole screen. In the fragment shader, for the current fragment, we iterate through its neighborhood and sum the product of the corresponding kernel entry with the looked-up value. After the loop terminates, the sum is divided by the total number of kernel coefficients. Finally, the convolution sum is added to the current pixel's value. There are several different kinds of kernels; the ones used in this recipe are listed in the following table, with one common set of 3 x 3 values for each.
Tip
Based on the wrapping mode set for the texture, for example, GL_CLAMP_TO_BORDER or GL_REPEAT, the convolution result will differ. In the case of the GL_CLAMP_TO_BORDER wrapping mode, pixels outside the image are not considered, whereas in the case of the GL_REPEAT wrapping mode, the out-of-image pixel information is obtained from the pixel at the wrapped position.
Effect | Kernel matrix
---|---
Sharpening | [-1 -1 -1; -1 8 -1; -1 -1 -1]
Blurring / Unweighted Smoothing | (1/9) [1 1 1; 1 1 1; 1 1 1]
3 x 3 Gaussian blur | (1/16) [1 2 1; 2 4 2; 1 2 1]
Emboss north-west direction | [1 0 0; 0 0 0; 0 0 -1]
Emboss north-east direction | [0 0 1; 0 0 0; -1 0 0]
Emboss south-east direction | [-1 0 0; 0 0 0; 0 0 1]
Emboss south-west direction | [0 0 -1; 0 0 0; 1 0 0]
There's more…
We have only touched on the topic of digital image convolution. For details, we refer the reader to the See also section. In the demo application, the user can set the required kernel and then press the Space bar to see the filtered image output. Pressing the Space bar once again shows the normal, unfiltered image.
See also
- Digital Image Processing, Third Edition, Rafael C. Gonzalez and Richard E. Woods, Prentice Hall
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html
Implementing the glow effect
Now that we know how to perform offscreen rendering and blurring, we will put this knowledge to use by implementing the glow effect. The code for this recipe is in the Chapter3/Glow directory. In this recipe, we will render a set of points encircling a cube. Every 50 frames, four alternate points glow.
How to do it…
Let us get started with the recipe as follows:
- Render the scene normally by rendering the points and the cube. The particle shader renders the GL_POINTS primitives (which, by default, render as quads) as circles.
grid->Render(glm::value_ptr(MVP));
cube->Render(glm::value_ptr(MVP));
glBindVertexArray(particlesVAO);
particleShader.Use();
glUniformMatrix4fv(particleShader("MVP"), 1, GL_FALSE, glm::value_ptr(MVP*Rot));
glDrawArrays(GL_POINTS, 0, 8);
The particle vertex shader is as follows:
#version 330 core
layout(location=0) in vec3 vVertex;
uniform mat4 MVP;
smooth out vec4 color;
const vec4 colors[8] = vec4[8](vec4(1,0,0,1), vec4(0,1,0,1), vec4(0,0,1,1), vec4(1,1,0,1), vec4(0,1,1,1), vec4(1,0,1,1), vec4(0.5,0.5,0.5,1), vec4(1,1,1,1));
void main() {
  gl_Position = MVP*vec4(vVertex,1);
  color = colors[gl_VertexID/4];
}
The particle fragment shader is as follows:
#version 330 core
layout(location=0) out vec4 vFragColor;
smooth in vec4 color;
void main() {
  vec2 pos = gl_PointCoord-0.5;
  if(dot(pos,pos)>0.25)
    discard;
  else
    vFragColor = color;
}
- Set up a single FBO with two color attachments. The first attachment is for rendering of scene elements requiring glow and the second attachment is for blurring.
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glGenTextures(2, texID);
glActiveTexture(GL_TEXTURE0);
for(int i=0;i<2;i++) {
  glBindTexture(GL_TEXTURE_2D, texID[i]);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, RENDER_TARGET_WIDTH, RENDER_TARGET_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
  glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0+i, GL_TEXTURE_2D, texID[i], 0);
}
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
  cerr<<"Frame buffer object setup error."<<endl;
  exit(EXIT_FAILURE);
} else {
  cerr<<"FBO set up successfully."<<endl;
}
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
- Bind the FBO, set the viewport to the size of the attachment texture, set the draw buffer to render to the first color attachment (GL_COLOR_ATTACHMENT0), and render the part of the scene which needs glow.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboID);
glViewport(0,0,RENDER_TARGET_WIDTH,RENDER_TARGET_HEIGHT);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_POINTS, offset, 4);
particleShader.UnUse();
- Set the draw buffer to render to the second color attachment (GL_COLOR_ATTACHMENT1) and bind the FBO texture attached to the first color attachment. Set the blur shader, which convolves the input with a simple unweighted smoothing filter.
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBindTexture(GL_TEXTURE_2D, texID[0]);
- Render a screen-aligned quad and apply the blur shader to the rendering result from the first color attachment of the FBO. This output is written to the second color attachment.
blurShader.Use();
glBindVertexArray(quadVAOID);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
- Disable FBO rendering, reset the default draw buffer (GL_BACK_LEFT) and viewport, bind the texture attached to the FBO's second color attachment, draw a screen-aligned quad, and blend the blur output with the existing scene using additive blending.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer(GL_BACK_LEFT);
glBindTexture(GL_TEXTURE_2D, texID[1]);
glViewport(0,0,WIDTH,HEIGHT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
blurShader.UnUse();
glDisable(GL_BLEND);
How it works…
The glow effect works by first rendering the candidate elements of the scene for glow into a separate render target. After rendering, a smoothing filter is applied on the rendered image containing the elements requiring glow. The smoothed output is then additively blended with the current rendering on the frame buffer, as shown in the following figure:
Note that we could also carry out the blending in the fragment shader. Assuming that the two images to be blended are bound to their texture units and their shader samplers are texture1 and texture2, the additive blending shader code would look like this:
#version 330 core
uniform sampler2D texture1;
uniform sampler2D texture2;
layout(location=0) out vec4 vFragColor;
smooth in vec2 vUV;
void main() {
vec4 color1 = texture(texture1, vUV);
vec4 color2 = texture(texture2, vUV);
vFragColor = color1+color2;
}
Additionally, we can also apply separable convolution, but that requires two passes. The process requires three color attachments. We first render the scene normally on the first color attachment, while the glow effect objects are rendered on the second color attachment. The third color attachment is then set as the render target, while the second color attachment acts as input. A full-screen quad is then rendered with the vertical smoothing shader, which simply iterates through a column of pixels. This vertically smoothed result is written to the third color attachment.
The second color attachment is then set as output, while the result of the vertical smoothing pass (which was written to the third color attachment) is set as input. The horizontal smoothing shader is then applied along a row of pixels, which smooths the entire image. The result is written to the second color attachment. Finally, the blend shader combines the result from the first color attachment with the result from the second color attachment. Note that the same effect could be carried out using two separate FBOs, a rendering FBO and a filtering FBO, which gives us more flexibility, as we can downsample the filtering result to take advantage of hardware linear filtering. This technique is used in the Implementing variance shadow mapping recipe in Chapter 4, Lights and Shadows. A sketch of the 1D smoothing shaders is given below.
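The following is a minimal sketch of the vertical smoothing pass described above, assuming a 3-tap unweighted filter and the same sampler and varying names as the convolution recipe; the horizontal pass is identical except that the offset runs along x instead of y:

#version 330 core
layout(location=0) out vec4 vFragColor;
uniform sampler2D textureMap;
in vec2 vUV;
void main() {
  vec2 delta = 1.0/vec2(textureSize(textureMap, 0)); // one texel (assumed)
  vec4 sum = vec4(0);
  for(int j=-1;j<=1;j++) {
    sum += texture(textureMap, vUV + vec2(0.0, float(j)*delta.y)); // column of pixels
  }
  vFragColor = sum/3.0;
}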
There's more…
The demo application for this recipe shows a simple unit cube encircled by eight points. The first four points are rendered in red and the latter four are rendered in green. The application applies glow to the first four points. After every 50 frames, the glow shifts to the latter four points and so on for the lifetime of the application. The output result from the application is shown in the following figure:
See also
- Glow sample in NVIDIA OpenGL SDK v10
- FBO tutorial by Song Ho Ahn: http://www.songho.ca/opengl/gl_fbo.html