Thursday, 28 February 2013

Graphics in Video Games: Cel-Shading



Cel-shading, also known as toon shading, is a common method of lighting and shading in computer graphics used to achieve a hand-drawn effect. It is often used to mimic a comic book or cartoon feel, and it is becoming more and more popular, especially within the games industry. "Cel" refers to the clear sheets of acetate that were painted on for traditional 2D cartoon animation, like the original Bugs Bunny show.

What is cel-shading exactly?

Within computer graphics, cel-shading is an increasingly popular way to achieve a unique and fun art style. Some notable games to include cel-shading are Borderlands, The Legend of Zelda: The Wind Waker, and Team Fortress 2. Cel-shading differs from photo-realistic lighting in that rather than calculating smooth lighting per pixel, it produces lighting that is much more "blocky" in appearance. While smooth diffuse lighting is the more realistic approach and gives more definition to a character, a cel-shaded character appears flatter, but this can complement the game's style and design.

Jet Set Radio - comparing diffuse lighting (left) with cel-shading (right).
So, if cel-shading is used to achieve a cartoon feel, how can I use this to my benefit for game design? How can a programmer take an artist's dream and turn it into computer graphics reality? There are several challenges a programmer must understand before tackling cel-shading. In the image above, the character on the left shows diffuse lighting that gives him depth and definition. However, the character on the right has only one definite shadow, which gives limited depth. Another distinct feature is the black edge line surrounding the cel-shaded character. I will discuss how a programmer can overcome these challenges and get a cool cel-shaded effect.

Coding cel-shading

As mentioned before, there are two particular challenges that a coder faces: blocky lighting that looks cartoony, and thick black lines that outline the character edges. First, I'll outline how we can generate the blocky shading, then move on to the edges. (NOTE: the example image below is our base scene with NO shading effects. It has 4 objects loaded from an object loader with attached textures.)



To get proper toon shading we need two shaders: a fragment shader and a vertex shader. The vertex shader will transform vertices and send the required variables to the fragment shader for toon shading. Let's get started on some code!

With the vertex shader we'll need two variables to handle the texture coordinates and the normals. Once we declare these variables, we transform the vertex positions and pass the normals and texture coordinates along. This tells the fragment shader exactly where to draw the toon shadows and how to apply proper coloring.
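A minimal sketch of such a vertex shader might look like this (I'm assuming the same old fixed-function style of GLSL used in the rest of these snippets, and the names texcoord and normal are the ones picked up by the fragment shader later):

varying vec2 texcoord; // passed to the fragment shader
varying vec3 normal;

void main()
{
    // Forward the texture coordinates and the eye-space normal
    texcoord = gl_MultiTexCoord0.st;
    normal   = gl_NormalMatrix * gl_Normal;

    // Standard vertex transform
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}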

Moving on to the fragment shader, there will be a little more programming required here. As before, we'll need the texcoord and normal information, as well as 2 uniform variables: the inputImage and the qmap (in this example, the qmap is simply a black-to-white gradient).

Our qmap
Also, we will need a place for the light to come from. In the example I will show you, I chose to put the light at the position of the camera; however, you can really place it wherever you feel is best. Now that the initial variables have been chosen, it is time to start working on the shading component of the scene.
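For reference, the inputs to the fragment shader might be declared like this (a sketch; how lightEyeDir is supplied, as a uniform or computed per fragment, is an assumption on my part):

varying vec2 texcoord;        // interpolated texture coordinates
varying vec3 normal;          // interpolated surface normal

uniform sampler2D inputImage; // the object's texture
uniform sampler2D qmap;       // black-to-white gradient used to band the lighting

uniform vec3 lightEyeDir;     // direction towards the light (here, the camera)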

What we first must do is ensure all of our vectors are normalized. Once this is done, we can begin to write our qmap calculation to apply the lighting properly. First, we want to calculate a diffuse term, because we will need it later when calculating the "blocky" value of our shadows.


// Make sure our normal vector has a length of 1
vec3 n = normalize(normal);

// Standard diffuse term: how directly the surface faces the light
float diffuse = max(0.0, dot(n, lightEyeDir));

// Look the diffuse value up in our q-map to quantize it into bands
float blocky = texture2D(qmap, vec2(diffuse, 0.5)).r;

// The object's texture colour at this fragment
vec3 c = texture2D(inputImage, texcoord).rgb;


With these lines of code, we calculate our diffuse term as the maximum of 0 and the dot product of the normalized normal and the light direction vector. We then use it to look up the blocky value, which gives us thick strips of shadowed areas rather than a smooth gradient like most games. The last line, which will make more sense in a second, takes the vector c and fills it with the inputImage texture data at our texture coordinates. All that is left is multiplying our blocky value by our c vector to get the desired result.

gl_FragData[0].rgb = c * blocky;

In the end, we get something like this:



Edges!

While the image above gives us the blocky look we want, it hardly looks like the usual cel-shaded effect we are used to seeing. Now it is time to implement the thick black lines around the edges of the objects in order to truly get that old cartoon feel. To achieve this, we will use something called the "Sobel filter", an edge-detection filter that is applied to the normals/depth to generate an edge texture. Using the Sobel filter, we can generate black texels along the edges of our objects and leave the remainder of the image white (which becomes important later)!

With the Sobel filter, we will define a kernel, similar to Gaussian blur, but we will use this kernel to decide whether a pixel is on an edge or not. Before we write our Sobel filter, be aware that we must run the edge detection so it picks up on both horizontal and vertical edges. The calculations for vertical and horizontal edges are fundamentally the same; the only difference is the weights we use when calculating the sum (as seen later).

So, our Sobel filter will have to take in the xy image coordinates and the image data. Next, we will need the pixel size and a sum vector for accumulating the weighted texture samples at each kernel location.

Assuming we are using the kernel:

| -1  0  1 |
| -2  0  2 |
| -1  0  1 |
we can develop the rest of our shader. (NOTE: depending on how much black outline you want, you can change these values, but they are fairly sensitive to change.)


(NOTE: This is for calculating the horizontal edges)

vec2 ps = pixelSize;
vec2 offset[6] = vec2[]( vec2(-ps.s, -ps.t), vec2(ps.s, -ps.t),
                         vec2(-ps.s,  0.0),  vec2(ps.s,  0.0),
                         vec2(-ps.s,  ps.t), vec2(ps.s,  ps.t) );

vec3 sum = vec3(0.0);

Once we have this, we can then calculate the sum at every component of the kernel location in relation to the image data.


sum += -1.0 * texture2D(image, offset[0]+texcoord).rgb;
sum +=  1.0 * texture2D(image, offset[1]+texcoord).rgb;
sum += -2.0 * texture2D(image, offset[2]+texcoord).rgb;

sum +=  2.0 * texture2D(image, offset[3]+texcoord).rgb;
sum += -1.0 * texture2D(image, offset[4]+texcoord).rgb;
sum +=  1.0 * texture2D(image, offset[5]+texcoord).rgb;


Lastly, we take the dot product of the sum with itself and return 1.0 if the result is less than 1.0; otherwise we return 0.0. This ensures the edge texture is either black or white. If we want to calculate the vertical direction, we treat our kernel slightly differently, because the weights target different sample positions.
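To make that final step concrete, here is a minimal sketch of it as a small helper (the function name edgeValue is my own; the 1.0 cutoff is the sensitivity value mentioned in the note above):

// Turn the accumulated Sobel sum into a black-or-white edge value
float edgeValue(vec3 sum)
{
    // A strong gradient means we are sitting on an edge, so the pixel goes black
    if (dot(sum, sum) < 1.0)
        return 1.0; // not an edge -> white
    return 0.0;     // edge -> black
}

For the other direction, the kernel is transposed, so the six samples come from the rows above and below the pixel (including directly above and below) with weights -1, -2, -1 and 1, 2, 1.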

After all that fun code, all that is left is to combine the results of the horizontal and vertical Sobel filters, and then we can render out our image. With just edges, our scene will look something like this:

Finishing touches

Currently, we have two images: one with the shading we want but no lines, and one with lines but no shading. If we take the result of both and multiply them together in a final composite pass, we get the image shown below.
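A rough sketch of that composite pass might look like this (the uniform names toonScene and edgeMask are my own assumptions; each is the texture produced by the corresponding earlier pass):

uniform sampler2D toonScene; // the blocky-shaded colour image
uniform sampler2D edgeMask;  // white everywhere except the black edge lines

void main()
{
    vec3 shaded = texture2D(toonScene, gl_TexCoord[0].st).rgb;
    vec3 edges  = texture2D(edgeMask,  gl_TexCoord[0].st).rgb;

    // White (1.0) leaves the shading untouched, black (0.0) draws the outline
    gl_FragData[0].rgb = shaded * edges;
}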


In this image, we have our nice black lines complemented by the fun blocky shading style that we wanted. By playing with different variables, such as the Sobel kernel or the qmap, we can achieve different styles and visuals for our product.
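For example, one simple variation (an assumption on my part, not what the scene above uses) is to drop the qmap texture entirely and quantize the diffuse term directly in the shader:

// Quantize a diffuse value into a fixed number of flat lighting bands
float toonBands(float diffuse, float levels)
{
    return floor(diffuse * levels) / levels;
}

Swapping the texture2D(qmap, ...) lookup for something like toonBands(diffuse, 3.0) gives a similar banded look without the extra texture, though the qmap gives an artist finer control over where the bands fall.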

In conclusion

To summarize everything discussed, cel-shading is a computer graphics technique for achieving a cartoon lighting effect. It is characterized by blocky lighting and thick black outlines around objects and characters. Programmers will need to know how to implement a proper qmap to get the right amount of shading, as well as Sobel filters to get exactly the amount of black outline their game needs.

Thank you for reading, hope you enjoyed it! Rate and comment!



Friday, 22 February 2013

Graphics in Video Games: Bloom

When developing a video game, regardless of the platform or whether it will be 2D or 3D, the developers must consider how they will utilize the power of different computer graphics effects. How developers manipulate image data to achieve different effects is absolutely critical to achieving different objectives in game development. An example of a powerful graphics effect is bloom. Bloom is used to amplify light in a scene, create more realistic effects, or enhance the look and feel of something. This week, I will be discussing bloom in video games, along with some GLSL shader code for achieving a bloom effect.

Syndicate: A newer game known for excessive bloom.

What is bloom?


Bloom, also known as glow or light bloom, is a computer graphics effect used to amplify light in a scene. As mentioned before, it can be used to enhance realism. Bloom produces 'light bleeding', where bright light extends onto other parts of the scene. In the image above, you can clearly see the light bleeding onto the player character's gun. This light bleeding effect adds to the illusion of a very bright light.

How can bloom change a game?

Bloom, in most cases, is simply used to enhance the visual appeal of something, whether it be an item, part of the environment, or otherwise. Bloom can also help define a game's mood. While some games over-intensify their bloom, a good example of properly used bloom is Far Cry 3.

Far Cry 3 takes place in a wild, dangerous, yet beautiful jungle spread across several islands. In order to achieve that sense of beauty and wonder, as well as cool effects, bloom comes in handy.


There are several uses of bloom in the image above. Perhaps the first thing someone may notice is the very well done explosion effect. However, using images for particles isn't enough: the light from the particles is amplified to enhance the explosion, so it draws more attention and looks more powerful. Along the outside is a glow that reinforces the explosion, and the interior is very white. That excessive white comes from the threshold, an important component of bloom. Threshold, in a nutshell, makes colors closer to white whiter and colors closer to black blacker. This draws contrast and exaggerates (which is also a principle of animation) the effect a programmer is trying to achieve.
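As a rough illustration of the idea (my own sketch, not Far Cry's actual shader), a threshold can be applied to a colour like this:

// Push colours above the cutoff towards white and the rest towards black
vec3 applyThreshold(vec3 colour, float threshold)
{
    // smoothstep gives a sharp contrast curve centred on the threshold
    return smoothstep(vec3(threshold - 0.1), vec3(threshold + 0.1), colour);
}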

The other use of bloom is clearly in the background. The light is so intense that it whites out the background and adds blur as well. This is used to add realism to the scene and to draw contrast between the mountains and the sky. As with the threshold discussion, the mountains are made very dark while the sky is very white. The light is so intense that you can see the colour bleeding onto the dark mountains.

Now that different uses of bloom in Far Cry 3 have been discussed, let's move on to some GLSL fun.

How a programmer may code bloom

Seeing bloom and knowing generally how it works is one thing, but coding it is another thing entirely. First and foremost, use FBO's to render everything out quickly. Second, we want to understand what we need for at least a basic bloom effect. We will need to control the lighting in the scene, so we rely on three different values: luminance, threshold, and middle grey. Middle grey is a controller for the strength of the luminance, allowing us to use different levels of luminance and threshold without going overboard.

Although controlling these values alone would only give us a very basic shader, there is a problem: just controlling the different light values is not enough to produce a proper bloom effect. Bloom also blurs the surroundings to give it that overbearing effect. In GLSL, we also have to consider down-sampling and composite passes. Thus, we need shaders that handle the blur, the down-sampling, and the composite.


Gaussian Blur

Gaussian blur is a method for blurring effectively in computer graphics. For bloom, it can be used to give the effect of an incredibly bright light that overwhelms different parts of the scene, or the scene as a whole. We write the blur in the fragment shader because we want to blur the pixel data itself. To do this we use what is called a "kernel", in this case a 3x3 square.

The kernel acts as a 3x3 matrix that our computer uses to filter the data. It takes the weighted sum of all the pixels the kernel covers and divides it by the sum of the weights. So in the end, the source pixel is replaced by a weighted sum of itself and nearby pixels. This method is called convolution kernel filtering.

Taking the plain average is not the only option, either. An alternative is to weight the kernel more strongly towards the centre, so that each pixel is not weighted equally and samples near the centre contribute a larger amount.
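A sketch of that centre-weighted version might look like this (it uses the classic 1-2-1 approximation of a Gaussian; the function and parameter names are my own):

vec3 gaussianBlur3x3(sampler2D image, vec2 texCoord, vec2 pixelSize)
{
    vec3 colourOut = vec3(0.0);

    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            // 1-2-1 weighting in each direction: corners get 1, edges 2, centre 4
            float w = (x == 0 ? 2.0 : 1.0) * (y == 0 ? 2.0 : 1.0);
            vec2 offset = vec2(float(x), float(y)) * pixelSize;
            colourOut += w * texture2D(image, texCoord + offset).rgb;
        }
    }

    // The weights sum to 16, so divide by 16 to preserve brightness
    return colourOut / 16.0;
}

Compared to the evenly weighted average used later in this post, this keeps detail near the centre of the kernel while still softening the surroundings.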

GLSL Code


Our original image

After discussing the different variables we need to control bloom, and giving an overview of Gaussian blur, we can start to work on some code. Using a fragment shader called brightPass, we need the following:

uniform sampler2D inputImage;

uniform float luminance;
uniform float middleGrey;
uniform float threshold;

These allow us to control the luminance of our scene or object; the "middleGrey" is used to control the grey scale, and the threshold is used to control the white in the scene (as in the Far Cry example). Then, we can calculate the lighting for our bloom scene using this shader.

void main()
{
    vec3 colour = texture2D(inputImage, gl_TexCoord[0].st).rgb;

    // Scale the scene by the middle grey / luminance ratio (exposure control)
    colour *= (middleGrey / luminance);

    // Boost bright areas relative to the threshold
    colour *= 1.0 + (colour / (threshold * threshold));

    // Subtract a dark cutoff and remap the result into a displayable range
    colour -= 0.5;
    colour /= (1.0 + colour);

    gl_FragColor.rgb = colour;
}


The image below demonstrates a much more powerful illumination and blurs the scene slightly. This additional brightness, while it may be ugly, can be useful in some gaming situations. For example, if the programmer wants the player to be exposed to a massive amount of light very quickly, this type of high-powered bloom can be useful.


However, this is only the brightPass fragment shader; we still need the blur, the down-sampling, and the composite. For our blur fragment shader, as I mentioned before, we need a kernel. However, we also need our image data and pixel size if we want to blur.


uniform sampler2D inputImage;
uniform vec2 pixelSize;


Now, we need to build our box blur. We will take samples at nine offsets around the pixel, one pixel apart in x and y (hence vec2 pixelSize), and output a 3D colour vector containing the blurred result.


vec3 colourOut = vec3(0.0);

vec2 offset[9] = vec2[]( // Bottom row of the kernel
                         vec2(-pixelSize.x, -pixelSize.y),
                         vec2( 0.0,         -pixelSize.y),
                         vec2(+pixelSize.x, -pixelSize.y),

                         // Middle row of the kernel
                         vec2(-pixelSize.x, 0.0),
                         vec2( 0.0,         0.0),
                         vec2(+pixelSize.x, 0.0),

                         // Top row of the kernel
                         vec2(-pixelSize.x, +pixelSize.y),
                         vec2( 0.0,         +pixelSize.y),
                         vec2(+pixelSize.x, +pixelSize.y) );

// Sum the nine samples and average them
for( int i = 0; i < 9; i++ )
{
    colourOut += texture2D(inputImage, texCoord + offset[i] ).rgb;
}

return ( colourOut / 9.0 );

Here, we take the pixel data at each kernel offset, add the samples together, and divide by nine, so each output pixel becomes the average of itself and its neighbours, which blurs the image. This is effectively our blur shader (strictly speaking a box blur rather than a true Gaussian, but the effect is similar). However, we are not done just yet. We require just one more shader: the composite shader, which outputs the colour of the scene combined with the bloom and the added blur. Because it outputs pixel data, it is a fragment shader.

First, we need the input data for the bloom and scene.


uniform sampler2D input_scene;
uniform sampler2D input_bloom;

Once these are declared, we can write our composite main.


// 'Screen' blend: brightens the scene without letting the result blow past 1.0
vec3 screen(vec3 colour1, vec3 colour2)
{
    return ( 1.0 - (1.0 - colour1) * (1.0 - colour2) );
}

void main()
{
    vec3 colourA = texture2D(input_scene, gl_TexCoord[0].st).rgb;
    vec3 colourB = texture2D(input_bloom, gl_TexCoord[0].st).rgb;

    gl_FragColor.rgb = screen(colourA, colourB);
}

In the end, this gives us a nice blur effect and a bloomed scene.





Before
After

In conclusion

Bloom in video games is a common method of enhancing the realism of effects and of the environment the player is immersed in by amplifying the scene's lighting. It is a powerful tool that can hinder a game or enhance the player's experience.

I hope this blog taught you a thing or two about basic bloom and how you can implement it. Thank you for reading, don't forget to rate and comment!


Sunday, 10 February 2013

Particles in Video Games: Part II


Last week I discussed the basics of particles in video games. This included a basic overview of what particles and particle systems are, different implementations, and why particles are important in video games. However, there are also limitations to what I discussed: CPU-rendered particles are very inefficient and limited in how good they can look. This week, I will discuss how to improve upon CPU-based rendering by utilizing frame buffer objects (FBO's) and geometry shaders.

The premier way to render any graphics system is through the graphics processing unit, or simply the GPU. The GPU is designed to rapidly build images in a frame buffer to be displayed. GPU's are incredibly common in almost every modern device: mobile phones, personal computers, game consoles, etc. GPU's are very effective at rendering computer graphics because they can take blocks of data and process them in parallel. For a particle system, being able to render out blocks of data in a fast and effective manner is critical to producing desirable particles.

FBO's

The question is: for a particle system, why use FBO's to help render your particles? What is an FBO? A frame buffer object is an OpenGL extension for doing off-screen rendering, including rendering to a texture. By capturing what would normally be drawn to the screen, it lets us apply various types of post-processing effects and image filtering. That being said, FBO's are used primarily for post-processing of rendered images (for example, bloom and blurring) and for composition between different scenes.

Geometry Shaders

Geometry shaders also represent a way to improve your particle system. A geometry shader, in short, is a GLSL shader that governs the processing of primitives. In the OpenGL pipeline, it runs after primitive assembly. Geometry shaders can also create new primitives, which in turn can be used to help render textures onto quads, a common practice for particle coders. Geometry shaders have several advantages:
  • Unlike vertex shaders, a geometry shader can create new primitives.
  • Can do layered rendering. This means it can render different primitives to different layers in the frame buffer.
  • Excellent for generating large numbers of small primitives, such as the quads used for particles.

How does this help programmers?

A particle programmer would be very wise to maximize the efficiency gained from FBO's. Rendering to a texture allows the programmer to create much more realistic effects, applying different post-processing effects to give them the desired look. This is not the only benefit: because of the additional speed and parallel rendering, a video game can support many emitters handling many different particle systems that utilize all sorts of different effects. Not only will particles render faster, they can also achieve maximum appeal for minimal cost.

A geometry shader is an incredibly powerful tool for a programmer. While FBO's assist with rendering, a geometry shader can create new primitives quickly. Many programmers will render their particles onto primitives such as planes (quads), spheres, or other shapes. This means that their emitter renders quickly because of FBO's and can also create new primitives using geometry shaders, as sketched below.
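As a rough sketch of the idea (not tied to any particular engine; the uniform names and the #version are my own assumptions), a geometry shader that expands each particle point into a camera-facing quad might look like this:

#version 150

layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 projection;    // projection matrix (points arrive in eye space)
uniform float particleSize; // half-width of each particle quad

out vec2 texCoord;          // passed to the fragment shader for texturing

void main()
{
    vec4 centre = gl_in[0].gl_Position;

    // Corner offsets of the quad, emitted as a triangle strip
    vec2 corners[4] = vec2[]( vec2(-1.0, -1.0), vec2( 1.0, -1.0),
                              vec2(-1.0,  1.0), vec2( 1.0,  1.0) );

    for (int i = 0; i < 4; i++)
    {
        vec4 corner = centre + vec4(corners[i] * particleSize, 0.0, 0.0);
        gl_Position = projection * corner;
        texCoord    = corners[i] * 0.5 + 0.5; // map [-1, 1] to [0, 1]
        EmitVertex();
    }
    EndPrimitive();
}

Because the offset is applied in eye space before projection, the quad always faces the camera, which is exactly what a particle billboard needs.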

Examples 

Crysis is a good example of a game that utilized geometry shaders in its particle systems.


In this image, the geometry shader could randomly generate the line bursts shooting outwards from the central explosion as well as trail size and intensity. This way, the game can have a unique explosion every time.


For similar reasons as above, the game can enhance player experience by adding in uniqueness to every explosion effect.

In Conclusion

To sum up this two-part blog about particles: they are a great method for achieving a wide array of different effects in video games. They can be used to set a game's mood and to enhance a player's experience. Particle systems, especially good ones, are most certainly a challenge for any programmer. Thank you for reading, comment and rate!