Project overhaul

Well, I’ve gotten to the point where I’m satisfied enough with my engine to get some actual game coding done. There are still issues related to shadow rendering but those can be worked on alongside everything else. Right now I’m taking a look at everything from a top-down view and planning to trim down the engine to a leaner size. I will talk more about my game project in a later post… just don’t expect it to be groundbreaking or anything like that 😛

For the sake of actually finishing, I don’t want to get carried away with a one-size-fits-all engine that has more than what I truly need. This has been an ongoing learning project lasting over two months, and I feel like it’s come pretty far. The engine isn’t immensely huge (I am focusing on a lightweight design), but now that I’ve worked on it for this long, I have a better idea of what works for my upcoming game, and what I can take out so I don’t wind up with something too big and general-purpose.

The rendering library

The biggest change I made in the past week or so is moving all the rendering code into its own library project. It can finally be considered something of an engine, or at the very least, a rendering library. For so long I had just a MeteorEngineDemo project, which is the “game” that uses the renderer, and MeteorEngineDemo.Content, which grouped the effect files from the renderer together with the meshes used by the game. Having looked at other open-source rendering engines in the past, I knew that this wasn’t going to work in the long run. This is just modular programming 101: the effects and rendering logic belong to the engine and have nothing to do with the content of any particular game, and I gotta follow the DRY principle.

After moving the rendering code into its own project, I now have a library project called MeteorEngine, which is referenced by the MeteorEngineDemo, and the library in turn references MeteorEngine.Content. That last one contains all the content that the engine needs to use (effect files, special meshes, etc). That way, all the content can be compiled and added automatically into a separate content folder that’s used by the MeteorEngine.dll at runtime. These projects will probably be renamed from “MeteorEngine” to “MeteorRenderer” since I’m not concerned with building something that handles all sorts of game logic, and to make it clear that it’s for graphics only.
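
As a rough sketch of how that plays out at runtime (the folder name matches my content project, but the asset path and class shape here are just placeholders), the engine gets its own ContentManager rooted at that separate folder:

    // Sketch: the engine loads its own compiled content (effects, special meshes)
    // from its dedicated content folder. The asset path is a placeholder.
    using System;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;

    public class MeteorRenderer
    {
        ContentManager engineContent;
        Effect gBufferEffect;

        public MeteorRenderer(IServiceProvider services)
        {
            // Rooted at the engine's own content output, separate from the
            // game's Content directory.
            engineContent = new ContentManager(services, "MeteorEngine.Content");
        }

        public void LoadContent()
        {
            gBufferEffect = engineContent.Load<Effect>("Effects/GBuffer");
        }
    }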

Now my workflow is greatly improved- I can add a reference to this library with any other project I’m working on, while still being able to update the library’s source, and all the updates are reflected in all the other projects referencing it. Of course, this leads to the downside of making sure that all the code in my projects will still work with the library, but I’m doing a fairly good job in separating the rendering logic from everything else.

Variance shadow mapping didn’t prove flexible enough for my needs, so I rolled back to standard shadow mapping with an orthographic projection. The shadows are manually filtered thanks to some code from XNA Info. Also, the renderer now uses two HdrBlendable render targets: one for light accumulation, and a second for compositing the completed image (before post-processing). These gave me a bit of trouble, since they require a Point filter and they threw off my texture sampling states, causing crashes on startup or textures that displayed incorrectly. To my surprise, they didn’t seem to impact rendering performance a whole lot. As an added bonus, I also switched to the Nuclex font importer for drawing better-looking SpriteFonts.
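
For reference, the fix amounts to just a couple of lines; here’s a minimal sketch (the target sizes and sampler slots are only examples):

    // Sketch: HdrBlendable render targets and the Point filter they require.
    // Resolution and sampler slots below are illustrative.
    RenderTarget2D lightRT = new RenderTarget2D(GraphicsDevice, 1280, 720, false,
        SurfaceFormat.HdrBlendable, DepthFormat.None);
    RenderTarget2D compositeRT = new RenderTarget2D(GraphicsDevice, 1280, 720, false,
        SurfaceFormat.HdrBlendable, DepthFormat.Depth24);

    // Before sampling these targets in a later pass, force Point filtering;
    // leaving a linear sampler bound is what caused the crashes and bad sampling.
    GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;
    GraphicsDevice.SamplerStates[1] = SamplerState.PointClamp;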

Performance

So you’re curious to know how well it does, huh? Well, here’s a quick overview of my specs. My home/testing computer has a Radeon HD 4670 video card and an Intel Core 2 Duo E4500 with 3 GB of RAM. So it’s nothing close to mind-blowing, and probably somewhere in the mid-range as far as video specs go. I also have a MacBook, but unfortunately I can only use the Reach profile on there :-/

Okay, so this pic isn’t HD resolution, but check out dat framerate:

Don’t mind the zero triangle count in the readout- I gotta fix that soon. Since it’s light pre-pass, geometry is rendered at least twice- three times, actually, counting the shadow map pass. The post-effects applied are FXAA, depth of field, and light bloom, in that order.

At 720p the framerate is still a respectable 124 fps, though that’s with the post-effects disabled and the renderer switched to deferred mode. With effects enabled, it hovers just below 80 fps.

Rendering the entire Sponza model is slower- between 65 and 70 fps with one 2048 x 2048 shadow render target. This is mostly because it requires a lot more draw calls- the model is made up of almost 400 separate meshes, and culling each one by brute force also takes time. It might render more efficiently if I merged some of the smaller meshes together and re-exported the model, but I’d rather move on and create my own models.

Cutting down the code to size

The design for this library didn’t just spring up the moment I started. Over a year ago, when I was learning from DirectX tutorials, I came up with a low-level design for a rendering system. I drew it on a sheet of paper, emphasizing the most important connections between the components (sheet coming later!).

The diagram includes some non-rendering components- the game state management (the AppState and StateManager)- which I added to better visualize how the renderer integrates with the whole program. I also got too carried away with static classes, which are probably not necessary for the design. But this is the basis of my current rendering engine, and looking back on it, I want to reduce it down to these bare components.

Some of these components will differ from the original design. Effects will be part of a SceneRenderer class, not the Scene class. The entry point of the renderer (Display) will only exist to load up the SceneRenderer and RenderProfiles, and to plug Scenes and Cameras into it. Also, I won’t need anything like a ResourceMaker in XNA, because the ContentManager and content pipeline take care of all of that.
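
A loose sketch of that arrangement (the class names come from the diagram, but the method shapes and the LightPrePassProfile type are only assumptions about where I’ll end up):

    // Sketch of the planned layout: Display only wires things together,
    // while SceneRenderer owns the effects and render profiles.
    public class Display
    {
        SceneRenderer renderer;

        public void Initialize(GraphicsDevice device, ContentManager content)
        {
            renderer = new SceneRenderer(device, content);
            renderer.SetProfile(new LightPrePassProfile()); // or a deferred profile
        }

        public void Draw(Scene scene, Camera camera)
        {
            // Scenes and Cameras are plain data plugged into the renderer.
            renderer.Draw(scene, camera);
        }
    }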

To this end, there will also be a clearer separation of data and data processors. I already have this in place with Scenes and SceneRenderers, as the Scenes are just containers for lights and models. This allows me to make some further refinements to the code:

  • Separation of data and its processing systems
  • Changing data objects from reference types to value types wherever possible (on the other hand, it makes little sense to have value type objects that contain reference types)
  • Reducing memory allocations, as one way to minimize garbage collection. This would include, for example, less resizing of lists for meshes, visible instances, etc. (see the sketch after this list). A lot of this matters more on the Xbox 360, which I don’t have, so unfortunately I can’t test it to the fullest.
  • Condensing some related objects to a single, but still manageable system. For my game, I’m set on what specific rendering systems to use, so I could trade away some flexibility for more simplicity.
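
On the allocation point, here’s a minimal sketch of what I mean; the capacity and the MeshInstance type are only stand-ins for whatever per-mesh data the scene actually tracks.

    // Sketch: pre-size the visible instance list once and reuse it every frame,
    // instead of letting it grow or allocating a new list per frame.
    public class Scene
    {
        // Reserve a rough upper bound up front so Add() rarely has to resize.
        public readonly List<MeshInstance> VisibleInstances = new List<MeshInstance>(512);

        public void CullInstances(BoundingFrustum frustum, List<MeshInstance> allInstances)
        {
            // Clear() keeps the backing array, so this produces no garbage per frame.
            VisibleInstances.Clear();

            for (int i = 0; i < allInstances.Count; i++)
            {
                if (frustum.Intersects(allInstances[i].BoundingSphere))
                    VisibleInstances.Add(allInstances[i]);
            }
        }
    }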

Experimenting with more advanced HLSL is also an option. With classes supported in Shader Model 3.0 under XNA, as confirmed by this forum post, things get more interesting for building shader frameworks around deferred rendering. They give us more ways to manage reusable code and abstraction in shader functions. Here’s a really good article that shows how function-pointer-like behavior can be done- fully compatible with Shader Model 3.0! For those not used to OOP in HLSL, it’s a big eye-opener for designing shader code.

So now that it seems like I’m on the final stretch of working on this as a standalone project, it’s time to battle-test it. I already made another project which will be the “new version” of my game, and threw in my rendering library along with it. The screenshots you saw above are taken from that project. For now it’s going to be a matter of adding my own content and whatever game code I could reuse from the old project. As I’m working on my game, the rendering library will also keep changing along the way.

Deferred rendering with skyboxes

XNA Dude has only one animation: "haters gonna hate"

There is one simple but important rendering feature that helps make any game world more immersive- skyboxes. I had been planning to add them for a while, but for a deferred renderer the solution didn’t come to me immediately.

You mainly deal with rendered scenes that later get “flattened” into textures projected onto the viewport, so adding backdrops can’t be done the way it is with forward rendering. Skyboxes also shouldn’t be treated with the same lighting effects as other objects, and they cannot interfere with the scene’s 3D space. I’ve read a few threads on the App Hub forums on how people handle backdrops with deferred renderers, and there was a lot of conflicting information. Should I use a depth-clearing effect, or should I treat the skybox like a light? Or can the two approaches be combined for more flexibility? Fortunately, the solution was simpler than I expected.

No special skybox class is used- it’s just another InstancedModel object like any other, except that it’s not placed in the culling list. So we can assume it can be anything: a box, a sphere, or something in between. There’s no need to cull a skybox most of the time, and if it’s drawn after everything else, we save fillrate since a lot of the screen will already be covered by scene objects. The skybox can be scaled, positioned, and rotated manually so that it looks right with the rest of the scene.

What did come to me more naturally was the concept of viewport depth. By splitting up the viewport’s depth range, the skybox can be rendered separately from the rest of the scene without interfering with or blocking other objects. With this in mind, a separate rendering method is used just for the skybox, which restricts the viewport to a very far depth range. No big depth-sorting tricks are necessary; we can just keep GraphicsDevice.DepthStencilState at Default. These are the steps to draw the skybox with everything else (a code sketch of the viewport trick follows the list):

  • Clear the GBuffer
  • Set viewport depth range from 0 to n (some value very close to 1)
    If using deferred rendering:

    • Render the scene to color, normals, and depth in the GBuffer
    • Set viewport depth range from n to 1.0
    • Render the skybox with face culling disabled, only to the color of the GBuffer
    • Accumulate all lighting

    If using light pre-pass rendering:

    • Render the scene to normals and depth in the GBuffer
    • Accumulate all lighting
    • Clear the viewport, render the scene with just the color
    • Set viewport depth range from n to 1.0
    • Render the skybox’s color with face culling disabled
  • Combine color with lighting to draw the final image
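
Here’s a minimal sketch of the viewport part of this in XNA; the split value n and the two Draw calls are placeholders for the renderer’s actual passes.

    // Sketch: restrict the viewport's depth range so the skybox always lands
    // behind the rest of the scene. The split value n is illustrative.
    const float n = 0.999f;

    // Scene passes: depth range [0, n)
    Viewport vp = GraphicsDevice.Viewport;
    vp.MinDepth = 0f;
    vp.MaxDepth = n;
    GraphicsDevice.Viewport = vp;
    DrawSceneToGBuffer();   // placeholder for the normal scene/G-buffer passes

    // Skybox pass: depth range [n, 1], face culling off, depth state left at Default
    vp.MinDepth = n;
    vp.MaxDepth = 1f;
    GraphicsDevice.Viewport = vp;
    GraphicsDevice.RasterizerState = RasterizerState.CullNone;
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    DrawSkybox();           // placeholder for drawing the skybox model's color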

Everything looks right, except that the skybox is unlit, so it still wouldn’t show up. We could increase the ambient lighting a fair amount, but that would make everything else too bright and washed out. Instead, a special case is made for depths greater than n, setting the ambient lighting bright enough for the skybox to show correctly. This was simple to add to one of the lighting shaders, where depth can be sampled. To keep the skybox from moving, we just translate its position along with the camera. We also need to clamp the skybox textures so that no visible seams appear.

Skinned animations and shadows now in place

The Meteor engine is starting to come into its own with two important new features: I have now integrated skinned animations and shadows into the rendering. In comparison, adding some good-looking directional shadows was fairly easy- there was just a lot of redundant drawing code to clean up so the shadows are easier to adjust. The mesh animations are a different problem altogether. Both presented some good challenges, and both still have problems I need to overcome.

Skinned Animations and Blender

The skinned animation feature is working but still underdeveloped- for one thing, models only link to the default (first) animation clip, and I only have the Dude model to test it on. That one was easy to import because it was already made for a skinned animation sample, but making another compatible model like it proved to be no easy task.

I have a greater appreciation for what 3D artists do, because getting models animated and exported properly has proven to be a huge hassle. Having taken a course on it before, I’m decent with 3D modeling tools, including Blender. But actually modeling props and characters is a huge time sink, so I’m just making small adjustments to models I found elsewhere.

I tried binding an armature (a skeleton in Blender) to another test model and was unable to assign blend weights automatically. So instead I took the much longer route of manually painting blend weights for each bone myself. With some models this became tedious, because they have underlying nooks and crevices that make them hard to paint properly, and I have to zoom and turn the camera at all sorts of odd angles.

The issue is that XNA will not accept a skinned model if even one vertex has a weight of zero- every vertex needs to be influenced by at least one bone. When dealing with character meshes of several thousand vertices, assigning weights automatically is a great time saver, but only when it works properly. I decided to stick with the Dude model for now and make do with a simple flexible tube that’s not complex in shape. Right now I’m busy cleaning up and improving the rendering code, but I’ll get back to the animated mesh problem eventually.

Animating the Models

J. Coluna’s recent rendering sample helped me with applying bone transformations on the GPU. It’s just a short series of weighted matrix additions to apply the transformations from the bone matrix palette. I created two separate vertex shader functions to deal with skinned and non-skinned meshes, the former having the extra matrix calculations. Skinned meshes and static meshes are grouped into different lists to organize the rendering. I suggest you check out his solution if you want to combine animation with your own shaders in a similar way.
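
On the CPU side, the per-frame work looks roughly like this sketch; I’m assuming the AnimationPlayer type from Microsoft’s Skinned Model sample, and “Bones” and the technique name are just my placeholders.

    // Sketch: advance the animation and hand the bone matrix palette to the effect.
    // AnimationPlayer comes from the XNA Skinned Model sample; parameter and
    // technique names are placeholders.
    animationPlayer.Update(gameTime.ElapsedGameTime, true, Matrix.Identity);
    Matrix[] boneTransforms = animationPlayer.GetSkinTransforms();

    skinnedEffect.Parameters["Bones"].SetValue(boneTransforms);
    skinnedEffect.CurrentTechnique = skinnedEffect.Techniques["GBufferAnimated"];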

So the animation code seems manageable, but my implementation is clumsy. For every effect technique that renders a mesh, I copied that technique, added “Animated” to its name, and replaced the vertex shader with the one supporting bone animations. This made my already huge GBuffer/uber-effects file a bit longer, and I have to keep copying more of this boilerplate whenever I make a shader for another rendering style. Any missed technique will crash the program, since it fails to find one with the proper name. This isn’t a big worry since I’m the only one working on the code, but I do want a better way to handle skinned models in the shaders and cut the code down a bit.
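
For now the selection is just string-based, something along these lines (the technique names are examples, and mesh.IsSkinned is a placeholder flag):

    // Sketch: pick the "Animated" variant of a technique for skinned meshes.
    // A missing technique name throws at runtime, which is the fragile part.
    string baseTechnique = "GBuffer";
    string techniqueName = mesh.IsSkinned ? baseTechnique + "Animated" : baseTechnique;

    effect.CurrentTechnique = effect.Techniques[techniqueName];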

For now, though, my biggest hurdle on skinning is not technical- it clearly lies in the fact that I gotta learn how to use Blender more effectively to skin and animate whatever models I want to.

Variance Shadow Mapping

I briefly covered this topic in the last post, when I was still experimenting with the results it would produce once integrated into the engine. After a lot of parameter tweaking, it’s finally starting to look great. To my surprise, the shadows do not fade out close to the shadow casters, so objects don’t look like they’re floating. I’m okay with not always having very defined shapes for the shadows.

You can still get some good quality with just a single 512×512 map in the Sponza scene, and with enough saturation the edges look great, without a lot of the noticeable bleeding artifacts that come from using a screen-space blur. With results like these, I can hold off on implementing SSAO for a while. Here you can see a screenshot with only the directional light and shadows applied:

Shadowing only works with directional lights for now. Each light has a “casts shadows” property that can be set to true or false, and all lights that have it turned on will be considered for drawing shadow maps. A cool feature is that, for artistic purposes, the intensity of the shadows can be adjusted independently of the brightness of the light that produces them. This means you can dim or darken the shadows as the light angle changes, or even have a “dummy” directional light that is completely colorless (no light) but still uses its direction and distance properties to cast shadows overhead. This makes it easier to create visually striking scenes.
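
In code, this boils down to a couple of extra properties on the light object; a rough sketch (the property names approximate, but aren’t necessarily, what’s in the engine):

    // Sketch: a directional light whose shadow settings are decoupled from its color.
    public class DirectionalLight
    {
        public Vector3 Direction;
        public Color Color;             // can be completely black for a "dummy" light
        public float Intensity;

        public bool CastsShadows;       // lights with this enabled get a shadow map
        public float ShadowIntensity;   // shadow darkness, independent of Intensity
    }

    // A colorless light that exists only to cast overhead shadows:
    DirectionalLight shadowCaster = new DirectionalLight
    {
        Direction = Vector3.Down,
        Color = Color.Black,
        Intensity = 0f,
        CastsShadows = true,
        ShadowIntensity = 0.6f
    };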

A Few Kinks in the Armor

For now, I am using a separate Camera object to represent the light’s point of view, as with any shadow mapping technique. But the camera’s orientation is not yet tied to the light’s viewing direction. This means the camera’s orientation is essentially useless and doesn’t factor into the rendering of the depth map and shadow projection at all. But I need this information to correctly cull the scene objects from the light’s point of view. As it is, shadows pop in and out as the “main” camera moves, because the shadow mapping effect only renders the objects visible from that camera. Once I figure out how to assign shadow casters and receivers to the appropriate light, that problem should go away.
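
The fix I have in mind is to cull against a frustum built from the light’s own view and projection instead of the main camera’s. A sketch of that, assuming an orthographic projection for the directional light (the bounds and the MeshInstance/shadowCasters names are placeholders):

    // Sketch: cull shadow casters against the light's frustum, not the camera's.
    Matrix lightView = Matrix.CreateLookAt(lightPosition,
        lightPosition + light.Direction, Vector3.Up);
    Matrix lightProjection = Matrix.CreateOrthographic(100f, 100f, 1f, 200f);

    BoundingFrustum lightFrustum = new BoundingFrustum(lightView * lightProjection);

    shadowCasters.Clear();
    foreach (MeshInstance instance in scene.Instances)
    {
        // Only objects inside the light's view need to go into the shadow map.
        if (lightFrustum.Intersects(instance.BoundingSphere))
            shadowCasters.Add(instance);
    }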

Another problem showed up when I combined the skinned animation shader with the shadow mapping effect. Those animated models are giving me a hard time! Here, the Dude model is mostly dark, with light spots in random places, even where they’re not supposed to be. They move a bit and generally stick around the same areas when he’s walking, which just looks odd in the final render.

At first I thought that I didn’t correctly calculate the normals for the shader, but then I realized that the shadow map projection doesn’t take normals into account at all, so there’s definitely something weird going on with how the shadows are being projected onto them.

What Else I Can Improve

Other than these major issues, I’m pretty happy with how these features brought more life into the renderings. There are some pending goals that I plan to take on next week, aiming to get at least two of them done:

  • Better implementation of cameras for directional lights
  • Possible parallel-split variance shadows, for even better quality
  • An option to use more traditional shadow mapping with some screen-space blur
  • Support hardware instancing of meshes
  • A straightforward .obj file splitter program to break up large .obj models into smaller files (I’ll release it to the public)
  • Frustum culling from the light’s point of view

I’ll be sure to cover at least one of these topics soon, and continue providing samples related to these.

XNA 4.0 variance shadow mapping

I’ve just updated and refined another code sample from XNA Community, from XNA version 3.1 to 4.0. This one is on variance shadow mapping, which is basically a way to get shadow maps that are filterable- that is, you can apply any kind of texture filter to the shadow map to give it a smoother look. Optionally, and usually, a Gaussian blur is applied. Overall, variance shadow mapping improves the visual quality of the shadows while giving more leeway in the size and number of textures needed to produce good results.

In the sample code, the program uses one 1024×1024 texture for the shadow map and applies a two-pass Gaussian blur. This blur is not done in screen space, but because the original shadow map can be filtered, the result is almost indistinguishable from a screen-space blur, which would produce leaking artifacts if done with normal shadow maps. Most of the heavy image computation is done in this step.

The shadow uses a 2-channel 32-bit texture for the depth map, in contrast to the single-channel floating-point format used in conventional shadow mapping. This lets us store two “moments”, which are simply the depth and the squared depth. From these we can calculate the mean depth and variance for a given pixel in the map. One noticeable drawback of variance shadow mapping is light bleeding between shadows whose casters are at very different depths. An easy fix to reduce this effect is to raise the shadow term to a certain exponent, but raise it too high and the shadows get dampened too much.
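
For the curious, the math is short enough to sketch; this is the standard Chebyshev upper bound that variance shadow mapping uses, with the exponent trick at the end. It normally lives in the pixel shader, but written out as plain code:

    // Sketch: per-pixel variance shadow test from the two stored moments.
    // moment1 = E[depth], moment2 = E[depth^2], depth = receiver depth in light space.
    float GetShadowFactor(float moment1, float moment2, float depth, float bleedExponent)
    {
        // Fully lit if the receiver is in front of the mean occluder depth.
        if (depth <= moment1)
            return 1.0f;

        // Variance = E[d^2] - E[d]^2, clamped to avoid dividing by zero.
        float variance = Math.Max(moment2 - moment1 * moment1, 0.00001f);
        float delta = depth - moment1;

        // Chebyshev's inequality: upper bound on the fraction of light reaching this depth.
        float pMax = variance / (variance + delta * delta);

        // Raising pMax to an exponent reduces light bleeding, at the cost of
        // darkening the soft edges of the shadows.
        return (float)Math.Pow(pMax, bleedExponent);
    }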

I plan to use some form of variance shadow mapping in my graphics engine, and in the meantime I’ll try to improve on it so the light bleeding is removed more effectively. You can download the current sample project here, compatible with XNA 4.0.


Depth of field, revisited!

Note: This is an older post updated with a lot of new information. Some source code snippets are coming soon.

I took a short break from my rendering project to keep my other coding skills in shape by trying to finish a simple game in a day or two. More on that to come. But right now, I just finished improving the depth of field blur and combined it with the light bloom effect.

Also, a change of scenery seems to be in order, don’t you think?

Spencer's Estate from RE5

These models are not created by me, of course. I’m just using them for demonstration purposes.

Applying blur in the depth of field

The depth of field process currently requires two shaders: one to create a Gaussian-blurred copy of the original composite image, and a second to blend the original and blurred images depending on depth. The blend shader takes two important constants, the “focal range” and the “sharp focus”. It linearly interpolates between the pixel values of the two render targets depending on these constants. The sharp focus is a 0-1 depth value within the viewport range at which the image is at its sharpest, with no blending done with the blurred image. The focal range is the distance over which the image goes from completely sharp to completely blurred.
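
In XNA terms, the blend pass is just a full-screen draw with those two constants fed into the effect; roughly like this sketch (the parameter names and values are mine, not necessarily what the shader uses):

    // Sketch: the depth of field blend pass over the composite image.
    // Parameter names and the example values are illustrative.
    dofBlendEffect.Parameters["SharpFocus"].SetValue(0.15f);  // 0-1 depth of sharpest focus
    dofBlendEffect.Parameters["FocalRange"].SetValue(0.25f);  // sharp-to-blurred distance
    dofBlendEffect.Parameters["BlurredTexture"].SetValue(blurredRT);
    dofBlendEffect.Parameters["DepthTexture"].SetValue(depthRT);

    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        SamplerState.PointClamp, null, null, dofBlendEffect);
    spriteBatch.Draw(compositeRT, Vector2.Zero, Color.White);
    spriteBatch.End();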

The background blur is a 13-tap Gaussian blur applied twice on top of it. Applied once, it looks a bit like a box blur; by decreasing the sampling distance and applying it twice, the blur comes out more smoothed out. While this is less true to how real out-of-focus backgrounds look, aesthetically speaking it’s a bit nicer. Together, and with 15 large area lights, these effects keep the program busy at 45-60 fps, so some optimization should be done soon. I can’t imagine how much slower it would run on the Xbox 360.

I improved the Gaussian blur by pre-calculating the steps and blur weights on the CPU. This code is borrowed from the GraphicsRunner variance shadow mapping example. The weights are calculated only once, during initialization, in the blur shader component of the rendering chain. This adds more pre-shader work, which lets the pixel shader run with fewer instructions and more constant values, improving performance. There’s only one variable to adjust- the number of steps you want to sample for the blur kernel- plus a constant in the shader that has to be changed to the same value. The shader component now reads the number of steps directly from the effect’s sampleWeights parameter, so all the arrays are sized consistently.
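
The precomputation itself is just a normalized Gaussian evaluated once per tap; something like the sketch below, where sampleWeights is the parameter mentioned above and sampleOffsets stands in for whatever the offset array happens to be called.

    // Sketch: precompute Gaussian weights and texel offsets on the CPU once,
    // then hand them to the blur effect. Kernel size and sigma are adjustable.
    void SetBlurParameters(Effect blurEffect, int sampleCount, float sigma, float texelSize)
    {
        float[] weights = new float[sampleCount];
        float[] offsets = new float[sampleCount];
        float total = 0f;

        int half = sampleCount / 2;
        for (int i = 0; i < sampleCount; i++)
        {
            float x = i - half;
            weights[i] = (float)Math.Exp(-(x * x) / (2f * sigma * sigma));
            offsets[i] = x * texelSize;
            total += weights[i];
        }

        // Normalize so the weights sum to one and the image doesn't dim or brighten.
        for (int i = 0; i < sampleCount; i++)
            weights[i] /= total;

        blurEffect.Parameters["sampleWeights"].SetValue(weights);
        blurEffect.Parameters["sampleOffsets"].SetValue(offsets);
    }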

Further filter improvements

There are at least two ways I can think of to improve the look and performance of the depth of field. Apparently, you can compute an N x N filter with fewer than N texture lookups per pass. This article on bloom filtering explains it in more detail, but it essentially relies on the hardware’s inexpensive linear texture filter. It also has a nice chart for manually setting blend weights for filter kernels of many sizes. It’s something I might look into if I ever want a bit more performance from the blur filters, but they’re working fast enough as it is. Notice that there is some bleeding of colors between the foreground and background, which is a result of soft focus:

C. Viper in Spencer's Estate. It's almost like I'm playing UMvC3!

Soft focus happens simply from interpolating the blurred image with the original. If you’ve ever used Photoshop, it’s similar to applying a Gaussian blur filter and then fading the filter by a certain amount. It’s a good enough depth of field effect, but for more accuracy we would adjust the blur’s sampling distance based on the Z-distance to the camera. That would not only improve the appearance in some cases, but also make the effect more efficient, since it could be computed in a single shader, reducing the need to pass render targets around. However, some people like the ethereal look produced by soft focus, so how depth of field is applied can be a matter of taste.

I will also mention that I am not creating smaller render targets or otherwise downsampling the source images for this effect- there’s just too much texture swimming and flicker for my liking. Everything you see in the images is a full-size render target being computed and blended. When done right, the results are pleasing to look at and computed quickly.
