Note: This is an older post updated with a lot of new information. Some source code snippets are coming soon.
I took a short break from my rendering project to keep my other coding skills in shape by trying to finish a simple game in a day or two. More on that to come. But right now, I just finished improving the depth of field blur and combined it with the light bloom effect.
Also, a change of scenery seems to be in order, don’t you think?
These models are not created by me, of course. I’m just using them for demonstration purposes.
Applying blur in the depth of field
The Depth of Field process currently requires two shaders: one to create a Gaussian-blurred copy of the original composite image, and a second to blend the original and blurred images depending on depth values. This second shader takes two important constants, the “focal range” and the “sharp focus”. It linearly interpolates between the pixel values of the two render targets depending on the values of these constants. The sharp focus is a 0-1 depth value within the viewport range that produces the image at its sharpest, where no blending is done with the blurred image. The focal range is the depth distance over which the image transitions from completely sharp to completely blurred.
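The original shader code isn't shown here, so as an illustration, here is a small Python sketch of that per-pixel blend. The function and parameter names (`dof_blend_factor`, `sharp_focus`, `focal_range`) are my own hypothetical labels for the two constants described above, not identifiers from the actual shader:

```python
def dof_blend_factor(pixel_depth, sharp_focus, focal_range):
    """Return 0.0 at the plane of sharp focus and 1.0 at full blur.

    pixel_depth and sharp_focus are 0-1 depth values; focal_range is the
    depth distance over which sharpness falls off completely.
    """
    t = abs(pixel_depth - sharp_focus) / focal_range
    return min(max(t, 0.0), 1.0)  # clamp to [0, 1], like HLSL saturate()

def composite(sharp_color, blurred_color, blend):
    # Linear interpolation per channel, equivalent to HLSL's lerp()
    return tuple(s + (b - s) * blend for s, b in zip(sharp_color, blurred_color))
```

A pixel exactly at the sharp focus depth gets a blend factor of 0 (pure original image), and anything at least `focal_range` away gets 1 (pure blurred image).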
The background blur is a 13-tap Gaussian blur applied twice. Applied once, it looks a bit like a box blur; by decreasing the sampling distance and applying it twice, the blur appears more smoothed out. While this is less accurate to real-life out-of-focus backgrounds, aesthetically speaking it looks a bit nicer. Together, and with 15 large area lights, these effects keep the program busy at 45-60 fps, so some optimization should be done soon. I can't imagine, then, how much slower it will perform on the Xbox 360.
I improved the Gaussian blur by pre-calculating the steps and blur weights on the CPU. This code is borrowed from the GraphicsRunner variance shadow mapping example. The weights are calculated only once, during initialization, in the blur shader component of the rendering chain. This moves more work into the pre-shader stage, which lets the pixel shader run with fewer instructions and more constant values, improving performance. There's only one variable to adjust: the number of steps you want to sample and calculate for the blur kernel.
Previously, there was also a constant in the shader that had to be changed to the same value by hand. Now the shader component reads the number of steps directly from the effect's sampleWeights parameter, so all arrays are set to the same size automatically.
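To make the pre-calculation concrete, here is a sketch in Python of how a 1D Gaussian kernel can be computed once on the CPU. This is my own illustrative version, not the GraphicsRunner sample's actual code; the `blur_amount` (sigma) parameter and function names are assumptions:

```python
import math

def gaussian_kernel(num_steps, blur_amount=2.0):
    """Precompute normalized weights and texel offsets for a 1D Gaussian
    blur of (2 * num_steps + 1) taps, so the pixel shader only reads
    constants instead of evaluating the Gaussian per pixel."""
    def gaussian(x):
        sigma = blur_amount
        return math.exp(-(x * x) / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)

    offsets = list(range(-num_steps, num_steps + 1))     # in texels; scale by 1/texture size in the shader
    weights = [gaussian(x) for x in offsets]
    total = sum(weights)
    weights = [w / total for w in weights]               # normalize so the image doesn't darken
    return weights, offsets
```

With `num_steps = 6` this yields the 13-tap kernel mentioned above; the weights and offsets would then be uploaded as effect parameters during initialization.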
Further filter improvements
There are at least two ways I can think of to improve the look and performance of the depth of field. First, you can compute an N-tap filter using fewer than N texture lookups. This article on bloom filtering explains it in more detail, but in short it relies on the hardware's inexpensive linear texture filtering. It also has a nice chart of manually set blend weights for filter kernels of many sizes. It's something I might look into if I ever want more performance from the blur filters, but they're fast enough as they are. Notice that there is some bleeding of colors between the foreground and background, which is a result of soft focus:
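The linear-filter trick works by merging each pair of adjacent taps into a single lookup placed between the two texels, letting the bilinear filter blend them for free. A sketch of how the merged weights and offsets fall out of the math (my own illustrative function, assuming a symmetric kernel given as one-sided weights with the center tap first):

```python
def linear_sample_kernel(weights):
    """Given one-sided Gaussian weights [w0, w1, w2, w3, w4] (center first),
    merge each adjacent pair (w1,w2), (w3,w4), ... into one bilinear lookup:
    combined weight = w_a + w_b, offset = (a*w_a + b*w_b) / (w_a + w_b).
    Sampling at that fractional offset with linear filtering reproduces
    the two discrete taps exactly."""
    center = (weights[0], 0.0)
    merged = []
    for a in range(1, len(weights) - 1, 2):
        b = a + 1
        w = weights[a] + weights[b]
        off = (a * weights[a] + b * weights[b]) / w
        merged.append((w, off))
    return center, merged
```

For the widely used 9-tap kernel this collapses each side from four lookups to two, cutting a 9-tap blur to 5 texture fetches per pixel.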
Soft focus comes simply from interpolating the blurred image with the original image. If you have ever used Photoshop, it's similar to applying a Gaussian blur filter and then fading the filter by a certain amount. It's a good enough depth of field effect, but for more accuracy we would adjust the sampling distance based on each pixel's Z-distance from the camera. That would not only improve the appearance in some cases, but also make the effect more efficient, since it could be computed in a single shader, reducing the need to pass render targets around. However, some people like the ethereal look produced by soft focus, so how depth of field is applied can be a matter of taste.
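The idea of scaling the sampling distance by depth can be sketched like this. Again this is only an illustration of the single-pass approach described above, not implemented code; the names and the linear falloff are assumptions:

```python
def blur_radius(pixel_depth, sharp_focus, focal_range, max_radius):
    """Scale the blur kernel's sampling distance by the pixel's distance
    from the plane of sharp focus: in-focus pixels sample a tight
    neighborhood, far-out-of-focus pixels a wide one."""
    t = min(abs(pixel_depth - sharp_focus) / focal_range, 1.0)
    return t * max_radius  # in texels; scale by 1/texture size in the shader
```

Because the kernel width itself shrinks to zero at the focal plane, sharp pixels never pull in blurred neighbors, which is what removes the color bleeding that soft focus produces.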
I will also just mention that I am not creating smaller render targets or otherwise downsampling the source images for this effect; there's just too much texture swimming and flicker for my liking. What you see in the images are all full-size render targets being computed and blended. Done right, the results are pleasing to look at and quick to compute.
- Philip Rideout’s bloom tutorial: http://prideout.net/archive/bloom/
- Digitalerr0r on depth of field: http://digitalerr0r.wordpress.com/2009/05/16/xna-shader-programming-tutorial-20-depth-of-field/