Saturday, 23 March 2013

Frame Buffer Objects

In several of my previous posts, Frame Buffer Objects (FBOs) came up in the context of effects such as particles, bloom, and shadow mapping, but none of those posts gave them a great deal of attention. This week, I will be giving a tutorial on FBOs: some code, and a step-by-step on how to make one. Enjoy :)

What is an FBO?

A Frame Buffer Object (FBO) is an OpenGL extension for doing off-screen rendering. FBOs capture images that would normally be drawn to the screen and let you use those captured images to perform image filters and post-processing effects. Rendering into an FBO gives a programmer an easy and efficient way to render out a scene with effects such as cel-shading or shadow mapping. In short, Frame Buffer Objects make otherwise expensive tasks much easier.

The nature of an FBO is rather simple. Below are the steps of rendering an image to a texture using a Frame Buffer Object.

Steps to rendering an image to a texture

  1. Generate a handle for a framebuffer object and generate handles for a depth render-buffer object and for a texture object. (These will later be attached to the framebuffer object).
  2. Bind the framebuffer object to the context.
  3. Bind the depth render buffer object to the context 
    1. Assign storage attributes to it
    2. Attach it to the framebuffer object
  4. Bind the texture object to the context
    1. Assign storage attributes to it
    2. Assign texture parameters to it.
    3. Attach it to the framebuffer object
  5. Render
  6. Un-bind the framebuffer object from the context.
NOTE: Steps 3 and 4 are interchangeable. In the example I show, I bound the texture object first just to show it can be done in a different order.
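
To make those steps concrete, here is a minimal sketch of all six in order, using a depth renderbuffer as the steps describe (the creator later in this post uses a depth texture instead). The width and height values are just example dimensions.

const unsigned int width = 1280, height = 720;   // example dimensions

// Step 1: generate handles for the framebuffer, depth renderbuffer, and texture
unsigned int fbo, depthRenderbuffer, colourTexture;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &depthRenderbuffer);
glGenTextures(1, &colourTexture);

// Step 2: bind the framebuffer object to the context
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Step 3: bind the depth renderbuffer, give it storage, and attach it
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRenderbuffer);

// Step 4: bind the texture, give it storage and parameters, and attach it
glBindTexture(GL_TEXTURE_2D, colourTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colourTexture, 0);

// Step 5: render the scene here as you normally would

// Step 6: un-bind the framebuffer so later draws go to the screen again
glBindFramebuffer(GL_FRAMEBUFFER, 0);
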
Coding your very own FBO

Personally, I found coding FBOs to be tricky, but like any programming challenge, it just takes time and patience, so don't panic if you don't get them working on your first try. When writing your FBO code, you'll want it to be flexible, and it is worth encapsulating everything in a class (or at least a reusable function) for ease of use. That way, when you implement a shader with multiple passes, you can easily point each pass at the right render target and avoid any issues.

NOTE: The blocks of code below are C++ (using the OpenGL API).

unsigned int CreateFBO(unsigned int numColourTargets, bool useDepth,
                       unsigned int *colourTextureListOut,
                       unsigned int *depthTextureOut,
                       unsigned int width, unsigned int height,
                       bool useLinearFiltering, bool useHDR)
{
   // Creation code goes here, walked through step by step below
}
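
For example, a call to it might look something like this (the argument values here are just an illustration):

// two colour targets and a depth texture at 1280x720, linear filtering, no HDR
unsigned int colourTextures[2];
unsigned int depthTexture;
unsigned int fbo = CreateFBO(2, true, colourTextures, &depthTexture,
                             1280, 720, true, false);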

This function takes in various arguments: the number of colour targets, whether we are using depth, and so on. The names are fairly self-explanatory, but note that width and height are the dimensions of the render target (typically the screen size). Let us continue to what exists inside the braces.

if (!numColourTargets && !useDepth)
   return 0;

unsigned int fboHandle;

// generate the FBO in graphics memory and bind it to the context
glGenFramebuffers(1, &fboHandle);
glBindFramebuffer(GL_FRAMEBUFFER, fboHandle);

Here we say that if we have no colour targets and we're not using depth, there is nothing to create, so we return 0. Next, we declare our FBO handle as stated in the first step, generate the FBO in graphics memory, and bind it to the context.

Next, we need to create textures for our colour storage. This particular FBO creator supports Multiple Render Targets (MRT). Multiple Render Targets is simply a feature of GPUs that allows the programmable rendering pipeline to render to several target textures at once, which is very handy for techniques such as deferred shading that need more than one output per pass.

if (numColourTargets && colourTextureListOut)
{
   // choose formats and filtering based on the useHDR / useLinearFiltering flags
   GLenum internalFormat = useHDR ? GL_RGBA16F : GL_RGBA8;
   GLenum internalType   = useHDR ? GL_FLOAT   : GL_UNSIGNED_BYTE;
   GLenum filter         = useLinearFiltering ? GL_LINEAR : GL_NEAREST;

   // texture creation
   glGenTextures(numColourTargets, colourTextureListOut);
   for (unsigned int i = 0; i < numColourTargets; ++i)
   {
      glBindTexture(GL_TEXTURE_2D, colourTextureListOut[i]);
      glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0,
                   GL_RGBA, internalType, 0);

      // texture parameters such as clamping and the min/mag filters
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, filter);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, filter);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

      // attach the texture to the framebuffer as colour attachment i
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                             GL_TEXTURE_2D, colourTextureListOut[i], 0);
   }
}
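
One extra step worth noting: when more than one colour attachment is in use, OpenGL also has to be told which attachments the pipeline should draw into. A minimal sketch, using the same variables as above:

// direct fragment output to every colour attachment we just created
GLenum drawBuffers[16];   // 16 is a typical upper limit on colour attachments
for (unsigned int i = 0; i < numColourTargets; ++i)
   drawBuffers[i] = GL_COLOR_ATTACHMENT0 + i;
glDrawBuffers(numColourTargets, drawBuffers);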

Now that we have created our colour textures, given them their parameters, and attached them to the framebuffer, we can move on to creating a texture for depth storage.

if (useDepth && depthTextureOut)
{
   // generate the texture handle
   glGenTextures(1, depthTextureOut);
   // bind it
   glBindTexture(GL_TEXTURE_2D, *depthTextureOut);
   // create a 2D texture with a 24-bit depth format
   glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

   // texture parameters such as the min/mag filters and clamping
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

   // attach the depth texture to the framebuffer
   glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_TEXTURE_2D, *depthTextureOut, 0);
}

Once you have created your texture for depth storage, you can begin wrapping up your FBO creator. Include a simple completeness check to see whether there were any errors with the framebuffer status.

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
   printf("\n ERROR: FBO creation failed.");
   return 0;
}

As a matter of good practice, unbind the texture and framebuffer so the program returns to the state it was in before, then return the handle.

glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
return fboHandle;
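
To close the loop, here is a rough sketch of how the handle and textures from the example call earlier might be used each frame. DrawScene and DrawFullScreenQuad are placeholders for your own rendering functions:

// render the scene into the FBO instead of the screen
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 1280, 720);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
DrawScene();

// switch back to the default framebuffer and use what we just rendered
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, colourTextures[0]);  // feed it to a post-processing shader
DrawFullScreenQuad();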

Conclusion

That is it! A very simple FBO creator. When writing your own, following the six steps listed above will certainly ease you into it. Being patient and consulting a variety of sources is key to writing a successful FBO. Thank you for reading, I hope you enjoyed it and learned something!


Source: Bailey, Mike, and Steve Cunningham. Graphics Shaders: Theory and Practice, Second Edition. New York: CRC Press, 2011. Print.

Sunday, 17 March 2013

Bump & Displacement Mapping

For as long as digital environments have been around, enhancing the appeal of objects has been a goal programmers and artists have striven towards. Many years ago in the world of video games, textures and objects often looked flat and lacked definition. When creating a digital environment, style, appeal, and realism are critical to immersing your player in it, whether that means foliage, structures, characters, cracks in a wall, or more. A popular way to enhance the realism and appeal of objects is through bump mapping and displacement mapping, two techniques used in computer graphics to add convincing surface detail.

What is bump mapping? 

Bump mapping is a computer graphics method of adding richer detail to an object through changes in the lighting calculations. Bump mapping can make a perfectly round sphere look like a rocky planet without any vertices changing; only the way the light influences the object changes. While this method is great for creating realistic walls, floors, and other objects, the drawback is that the object's vertices are not affected: along its silhouette, that jagged-looking rock is still a perfectly smooth sphere, and if the object casts a shadow, that shadow will look rounded instead of jagged. While the not-so-bumpy edges may be a drawback, bump mapping is an excellent way of providing realism to flat objects. In the image below, there is a significant amount of bump mapping in the mountainous regions of the earth.


Once we visually understand the effects of bump mapping, we can begin to understand where and how to use it. While it would be entirely possible to manually model all of that detail into the vertices of the object, this would be too time consuming, and bump mapping gives a convincing, far less expensive effect. It can be used to add definition and realism to different components of a 3D environment to further immerse the user in that environment. Without bump mapping, an object that could look interesting would simply look flat.

What is displacement mapping?

While bump mapping is excellent for providing realism to objects, it has drawbacks. Bump mapping does not actually alter the geometry, and this is apparent when looking at the edges of a bump mapped object. Also, the heavier the bump mapping, the more obvious it becomes that the surface is only shaded to look bumpy rather than actually shaped that way.

Another method of adding definition to an object is displacement mapping. Displacement mapping literally moves the vertices of an object to enhance its definition, and by displacing the vertices it is possible to achieve highly realistic effects: instead of a flat surface or a faked bump, the geometry itself changes. This method of achieving realism is becoming more popular, with games often using it to achieve bumpy walls or stone floors. The image below demonstrates displacement.

The bottom picture is the original image, and the top is the displaced version.
Note how the displaced image has crevices and rugged definition.

What do they share in common?

Displacement and bump mapping use similar inputs to achieve their effects. Both begin with an original mesh and a texture (often called a noise map or displacement map) that will alter how the original mesh appears, or, in the case of displacement, its actual shape. For bump mapping, because we often want a rough look, a noise map is suitable; this will give the appearance of roughness across the original mesh. For displacement mapping, however, the texture needs to describe whatever shape you actually want to displace the mesh into.

Can you guess which is bump mapped and which is displacement mapped?

How does bump mapping work?

Bump mapping works very differently from displacement mapping. As mentioned before, displacement mapping literally moves the vertices of the original mesh, whereas bump mapping changes the lighting calculations by use of a texture. This texture can be a noise map or a texture designed specifically for the object, and the modified normals it produces change how light falls across the surface without moving any geometry. For this example, I will use a floor to demonstrate how one would go about doing bump mapping.
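
To make "changes in the lighting calculations" concrete, here is a small C++ sketch of the idea: the surface normal is nudged by a value sampled from the bump texture before the usual diffuse lighting term is computed. The Vec3 struct, the helper functions, and the bumpStrength parameter are just placeholders for whatever your renderer uses, not the exact shader behind the images here.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
   float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
   return { v.x / len, v.y / len, v.z / len };
}

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse lighting with a bump-perturbed normal: the surface normal is nudged
// by an offset sampled from the bump texture, then used in place of the
// original normal. bumpStrength scales how pronounced the bumps look.
float BumpedDiffuse(Vec3 surfaceNormal, Vec3 bumpSample, Vec3 lightDir, float bumpStrength)
{
   Vec3 perturbed = normalize({ surfaceNormal.x + bumpSample.x * bumpStrength,
                                surfaceNormal.y + bumpSample.y * bumpStrength,
                                surfaceNormal.z + bumpSample.z * bumpStrength });
   return std::max(dot(perturbed, lightDir), 0.0f);
}
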

No bump mapping vs bump mapping
In order to achieve the effect above, a texture is needed. The programmer can specify a normal map that contains the modified normals for each point of the mesh. Strictly speaking, that is normal mapping, which is closely related to bump mapping but not quite the same thing. The fundamental differences are:

  • Bump mapping disturbs the existing normals of a model (normals that are usually defined in a 3D modelling program), whereas normal mapping replaces those normals entirely.
  • Each colour channel of the normal map encodes an 8-bit bending of the pixel's normal along one axis, one axis per channel: R, G, and B.
  • A normal map uses both the R and G colour channels to bend the surface normals to the artist's needs. A blank normal map would use the values R: 128, G: 128, B: 255; the B channel is maxed because it represents a flat region whose normal points straight out of the surface.


In the image above, the rocky floor is a mixture of different reds, greens, and blues. To author a normal map properly, you must understand that the red and green channels start at 128 because that value represents neutral. Imagine a scale from -1 to 1 mapped onto the 0-255 range of each channel (-1 being 0, 0 being 128, and 1 being 255): any R or G value other than 128 bends the normal and therefore changes how the surface is lit.
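
That -1 to 1 remapping is easy to express in code. Here is a small sketch of decoding one 8-bit normal-map texel into a usable normal (the struct and function names are just placeholders):

#include <cstdint>
#include <cmath>

struct Normal { float x, y, z; };

// Convert one 8-bit RGB normal-map texel into a unit normal.
// 0 maps to -1, 128 maps to (roughly) 0, and 255 maps to +1.
Normal DecodeNormalTexel(uint8_t r, uint8_t g, uint8_t b)
{
   Normal n = { r / 255.0f * 2.0f - 1.0f,
                g / 255.0f * 2.0f - 1.0f,
                b / 255.0f * 2.0f - 1.0f };
   float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
   return { n.x / len, n.y / len, n.z / len };
}

// The "blank" texel R:128 G:128 B:255 decodes to approximately (0, 0, 1),
// i.e. a normal pointing straight out of the surface.
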

Knowing this, an artist can play with these colour values to achieve different intensities of bump mapping. As an ending note, it is best to author your normal maps at a much higher resolution than the final resolution will be; this gives the artist room to add extra detail and take special care so the final product is exactly what the game designers want.

How does displacement mapping work?

With displacement mapping, a "map" is used to displace the mesh. In the example image to the left, the white of the displacement map is used as a strength value to move the vertices of the original mesh. When programming your very own displacement shader, you will want to do several things (there is a small code sketch after this list):

1.) Read in the object you wish to displace.
2.) Load in your displacement texture (the displacement map).
3.) Wrap your displacement map around your object.
4.) Sample the locations of the original object's vertices in relation to the displacement map.
5.) Whatever colour or greyscale value you read from the sampled locations becomes your displacement value.
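
Here is a minimal CPU-side sketch of steps 4 and 5: sample the displacement map at each vertex's texture coordinate and push the vertex along its normal by that amount. In a real game this would usually live in a vertex shader, and every name here is a placeholder.

#include <cstdint>
#include <vector>

struct Vertex
{
   float px, py, pz;   // position
   float nx, ny, nz;   // normal
   float u, v;         // texture coordinate into the displacement map
};

// Greyscale displacement map: one byte per texel, 0 = no displacement, 255 = full.
struct DisplacementMap
{
   int width, height;
   std::vector<uint8_t> texels;

   float Sample(float u, float v) const
   {
      int x = static_cast<int>(u * (width  - 1));
      int y = static_cast<int>(v * (height - 1));
      return texels[y * width + x] / 255.0f;
   }
};

// Move each vertex along its normal by the sampled map value times maxHeight.
void DisplaceMesh(std::vector<Vertex> &mesh, const DisplacementMap &map, float maxHeight)
{
   for (Vertex &vert : mesh)
   {
      float amount = map.Sample(vert.u, vert.v) * maxHeight;
      vert.px += vert.nx * amount;
      vert.py += vert.ny * amount;
      vert.pz += vert.nz * amount;
   }
}
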

Relating those steps to the example image, the displacement map would be wrapped around the mesh and sampled at each vertex location. Where the map is white, and assuming the programmer set white to be a positive displacement value, the original mesh will be pushed outward accordingly. A programmer could also use different kinds of displacement maps for different objectives.

For example, to create a jagged stone floor, you could use a multi-coloured texture as your displacement map. Let us assume the programmer sets red to displace the mesh along positive Y and blue to displace it along negative Y. The artist would then paint the map with different variations of red and blue, so when the displacement texture is run through the program, you get not only the effect of stone blocks rising out of the ground, but also different levels of depression between the stones.
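
Under that hypothetical red/blue convention, the per-vertex displacement amount would simply be the difference between the two channels; a tiny sketch:

#include <cstdint>

// Red pushes the vertex up (+Y), blue pushes it down (-Y); maxHeight scales the result.
float ChannelDisplacement(uint8_t red, uint8_t blue, float maxHeight)
{
   return (red / 255.0f - blue / 255.0f) * maxHeight;
}

The resulting value would then be added to the vertex's Y position, just as in the greyscale sketch above.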

Use in video games

As per usual: Crysis examples

Bump without displacement

Bump and displacement being best of friends
Conclusion

Both methods of adding detail to an environment have become more and more prevalent as new hardware is released. As better technology comes to life, we will see even more impressive visuals in video games and an exciting future of high-tech, realistic environments! Whether it is adding appeal through bump mapping or definition through displacement mapping, it is becoming crucial for modern games to include these methods of adding detail to enhance their appeal and style.

Thank you for reading, I hope you enjoyed it!