Skinned animations and shadows now in place

The Meteor engine is starting to come into its own with two important new features: I have now integrated skinned animations and shadows into the rendering. Of the two, adding some good-looking directional shadows was fairly easy; there was just a lot of redundant drawing code to clean up so the shadows are easier to adjust. The mesh animations are a different problem altogether. Both presented some good challenges for me, and both still have problems I need to overcome.

Skinned Animations and Blender

The skinned animation feature is working but still underdeveloped. For one thing, models only play the default (first) animation clip, and I only have the Dude model to test with. That one was easy to import because it was already made for a skinned animation sample, but making another compatible model proved to be no easy task.

I have a greater appreciation for what 3D artists do, because getting models animated and exported properly has proven to be a huge hassle. Having taken a course on it before, I'm decent with 3D modeling tools, including Blender. But actually modeling props and characters is a huge time sink, so I'm just making small adjustments to models I found elsewhere.

I tried binding an armature (a skeleton in Blender) to another test model and was unable to assign blend weights automatically. So instead I took the much longer route of manually painting blend weights for each bone myself. With some models this became tedious, because underlying nooks and crevices make it hard to paint properly, and I have to zoom and turn the camera at all sorts of odd angles.

The issue is that XNA will not accept a skinned model if even one vertex has a weight of zero; every vertex needs to be influenced by at least one bone. With character meshes of several thousand vertices, assigning weights automatically is a great time saver, but only when it works properly. For now I decided to stick with the Dude model, plus a simple flexible tube that's not complex in shape. Right now I'm busy cleaning and improving the rendering code, but I'll get back to the animated mesh problem eventually.

Animating the Models

J. Coluna's recent rendering sample helped me with applying bone transformations on the GPU. The vertex shader just blends the matrices in the bone palette, a short set of weighted matrix additions, and applies the result. I created two separate vertex shader functions to deal with skinned and non-skinned meshes, the former having the extra matrix calculations. Skinned meshes and static meshes are grouped in different lists to organize the rendering. I suggest you check his solution out if you want to combine animation with your own shaders.
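To make those matrix calculations concrete, here is the per-vertex skinning math as a CPU-side C# sketch (bones stands in for the shader's bone matrix palette, while indices and weights stand in for the vertex's blend indices and weights; in the engine this actually runs in the vertex shader):

// Blend the (up to) four bone matrices that influence this vertex,
// weighted by the vertex's blend weights (which should sum to 1),
// then transform the position and normal by the blended result
Matrix skin = bones[(int)indices.X] * weights.X
            + bones[(int)indices.Y] * weights.Y
            + bones[(int)indices.Z] * weights.Z
            + bones[(int)indices.W] * weights.W;

Vector3 skinnedPosition = Vector3.Transform(vertexPosition, skin);
Vector3 skinnedNormal = Vector3.TransformNormal(vertexNormal, skin);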

So the animation code seems manageable, but my implementation is clumsy. For every effect technique that renders a mesh, I copied that technique, added "Animated" to its name, and replaced the vertex shader with the one supporting bone animations. This made my already huge GBuffer / uber-effects file a bit longer, and I have to keep copying more into it whenever I make a shader for another rendering style. Any missed technique will crash the program when it fails to find one with the proper name. This is not a big worry since I'm the only one working on the code, but I do want a better way to handle skinned models in the shaders and to cut down on the code a bit.
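One small safeguard I could add in the meantime is to look up techniques by name and fall back instead of crashing. A minimal sketch, assuming each mesh knows whether it is skinned (isSkinned and baseTechnique are illustrative names, not engine API):

// Prefer the "Animated" variant of a technique for skinned meshes, but
// fall back to the base technique on a miss instead of crashing
// (the technique collection returns null for a name it can't find)
string name = isSkinned ? baseTechnique + "Animated" : baseTechnique;
effect.CurrentTechnique = effect.Techniques[name] ?? effect.Techniques[baseTechnique];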

For now, though, my biggest hurdle with skinning is not technical: I simply need to learn how to use Blender more effectively so I can skin and animate whatever models I want.

Variance Shadow Mapping

I briefly covered this topic in the last post, when I was still experimenting with the results it would produce once integrated into the engine. After a lot of parameter tweaking, it's finally starting to look great. To my surprise, the shadows did not fade out in close proximity to the shadow casters, so objects don't look like they're floating. I'm okay with the shadow shapes not always being very defined.
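Most of that tweaking boils down to the core of the VSM test: a Chebyshev upper bound computed from the two moments stored in the shadow map. Here it is as a small C# sketch for reference (in the engine this math lives in the pixel shader, and minVariance is one of the parameters I have been tuning):

// moments.X holds the mean depth E[x], moments.Y the mean squared
// depth E[x^2]; returns the fraction of light reaching the surface
float ChebyshevUpperBound(Vector2 moments, float depth, float minVariance)
{
	if (depth <= moments.X)
		return 1f; // fully lit

	float variance = Math.Max(moments.Y - moments.X * moments.X, minVariance);
	float d = depth - moments.X;
	return variance / (variance + d * d);
}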

You can still get good quality with just a single 512×512 map in the Sponza scene, and with enough saturation the edges look great, without a lot of the noticeable bleeding artifacts that come from using a screen-space blur. With results like these, I can hold off on implementing SSAO for a while. Here you can see a screenshot with only the directional light and shadows applied:

Shadowing only works with directional lights for now. Each light has a "casts shadows" property which can be set to true or false, and all lights that have it turned on are considered for drawing shadow maps. A cool feature is that, for artistic purposes, you can adjust the intensity of the shadows independently from the brightness of the light that produces them. This means you can dim or darken the shadows as the light angle changes, or even have a "dummy" directional light that is completely colorless (no light) but whose direction and distance properties still cast shadows overhead. That opens up more possibilities for visually striking scenes.
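As a rough sketch of how that looks in scene setup code (the class and property names here are illustrative, not the engine's exact API):

// A hypothetical light description with independent shadow controls
class SceneDirectionalLight
{
	public Vector3 Direction;
	public Color Color;
	public bool CastsShadows;     // lights with this on get a shadow map pass
	public float ShadowIntensity; // shadow darkness, independent of brightness
}

// A "dummy" light: contributes no color, but still casts shadows overhead
SceneDirectionalLight shadowCaster = new SceneDirectionalLight
{
	Direction = Vector3.Normalize(new Vector3(-1f, -2f, -1f)),
	Color = Color.Black,
	CastsShadows = true,
	ShadowIntensity = 0.75f
};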

A Few Kinks in the Armor

For now, I am using a separate Camera object to represent the light's point of view, as with any shadow mapping technique. But the camera's orientation is not yet tied to the light's viewing direction. This means the camera's orientation is essentially useless; it does not factor into rendering the depth map or the shadow projection at all. But I need this information to correctly cull the scene objects from the light's point of view. As it is, shadows pop in and out as the "main" camera moves, because the shadow mapping effect renders only the objects visible from that camera. Once I figure out how to assign shadow casters and receivers to the appropriate light, that problem should go away.

I noticed another problem when I combined the skinned animation shader with the shadow mapping effect. Those animated models are giving me a hard time! Here, the Dude model is mostly dark, with light spots in random places, even where they're not supposed to be lit. The spots move a bit but generally stick around the same areas as he walks, which just looks odd in the final render.

At first I thought that I didn’t correctly calculate the normals for the shader, but then I realized that the shadow map projection doesn’t take normals into account at all, so there’s definitely something weird going on with how the shadows are being projected onto them.

What Else I Can Improve

Other than these major issues, I'm pretty happy with how these features brought more life into the renderings. There are some pending goals that I plan to take on next week, and I hope to get at least two of them done:

  • Better implementation of cameras for directional lights
  • Possible parallel-split variance shadows, for even better quality
  • An option to use more traditional shadow mapping with some screen-space blur
  • Support for hardware instancing of meshes
  • A straightforward .obj file splitter program to break up large .obj models into smaller files (I'll release it to the public)
  • Frustum culling from the light's point of view

I’ll be sure to cover at least one of these topics soon, and continue providing samples related to these.

Bounding boxes for your meshes

While making progress with my rendering engine, one of my goals for this week is to finally implement some kind of frustum culling for the meshes. I could have taken the easier route of using the pre-built bounding spheres that come with every 3D model loaded in an XNA program, but I wanted tighter-fitting bounding boxes instead. They simply work better for selection and picking, and more meshes get culled out of the frustum, which means fewer false positives and less geometry being rendered off-screen.

Tank model with boxes enclosing all of its parts

Today I finally finished the first major step: creating the bounding boxes. Just figuring out how to create the boxes correctly proved to be a frustrating chore. The difficult part wasn't the formula for building a box from points, but getting the correct set of points from each mesh. That required a proper understanding of the GetData method on each mesh part's VertexBuffer. I will show you how I obtained the data to create those boxes.

There are many questions online about how to correctly build a bounding box for a mesh object, and a lot of them are answered with outdated information, or with solutions that don't fit that particular user's case. I browsed through several solutions and a few code samples, but they were not working for me. Sometimes the program crashed with an out-of-bounds exception; other times the boxes were obviously misaligned with the meshes, even after double-checking that the correct transformations were in place. But I finally came up with a solution that combined a few approaches to read and add the vertex data.

Building The Box

Bounding boxes are just simple geometric boxes that can be represented with two three-dimensional points: the minimum and maximum coordinates. The distance between those two points is the longest possible diagonal of the box, and the points can be thought of as one corner at the front of the box and the opposite corner at the back. These boxes are usually created as mesh metadata, at build time or when resources are initialized. It would be very costly to re-read the vertices and update the bounding boxes every frame; besides, you should use matrix transformations for that. Here is how we would usually initialize a mesh model:

public MeshModel(String modelPath)
{
	/* Load your model from a file here */

	// Set up model data
	boundingBoxes = new List<BoundingBox>();

	Matrix[] transforms = new Matrix[model.Bones.Count];
	model.CopyAbsoluteBoneTransformsTo(transforms);

	foreach (ModelMesh mesh in model.Meshes)
	{
		Matrix meshTransform = transforms[mesh.ParentBone.Index];
		boundingBoxes.Add(BuildBoundingBox(mesh, meshTransform));
	}
}

This would typically go in the constructor or initialization method of the class that holds your model object and its related data. In this case, we have a List of BoundingBox objects, used to keep track of the upper and lower bounds of every mesh the model contains. Possible uses are basic picking and collision testing, and debugging those tests by drawing wireframe boxes on the screen (which I will cover further in this article).
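Picking, for example, becomes a one-liner per box once you have a pick ray (GetPickRay is a hypothetical helper; you would typically build the ray from the mouse position with Viewport.Unproject):

// Test a pick ray against every mesh box. Intersects returns the
// distance along the ray to the box, or null if the ray misses it
Ray pickRay = GetPickRay(mouseX, mouseY);

foreach (BoundingBox box in boundingBoxes)
{
	float? distance = pickRay.Intersects(box);
	if (distance != null)
	{
		// this mesh is under the cursor
	}
}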

You may have noticed the BuildBoundingBox method used when adding to the BoundingBox list. This is where we create an accurate, tight-fitting box for every mesh, and to do this we need to read the vertex data of all its mesh parts. It takes a ModelMesh object and a Matrix, which is the bone transformation for that particular mesh.

The method loops through all the mesh parts, transforming each vertex by the mesh's bone matrix and tracking the minimum and maximum positions found so far, and returns the smallest possible bounding box that contains them:

private BoundingBox BuildBoundingBox(ModelMesh mesh, Matrix meshTransform)
{
	// Create initial variables to hold min and max xyz values for the mesh
	Vector3 meshMax = new Vector3(float.MinValue);
	Vector3 meshMin = new Vector3(float.MaxValue);

	foreach (ModelMeshPart part in mesh.MeshParts)
	{
		// The stride is how big, in bytes, one vertex is in the vertex buffer
		// We have to use this as we do not know the make up of the vertex
		int stride = part.VertexBuffer.VertexDeclaration.VertexStride;

		VertexPositionNormalTexture[] vertexData = new VertexPositionNormalTexture[part.NumVertices];
		part.VertexBuffer.GetData(part.VertexOffset * stride, vertexData, 0, part.NumVertices, stride);

		// Find minimum and maximum xyz values for this mesh part.
		// Transform each vertex by the mesh's bone matrix first, so
		// parts with rotated bones still yield a correct axis-aligned box
		for (int i = 0; i < vertexData.Length; i++)
		{
			Vector3 vertPosition = Vector3.Transform(vertexData[i].Position, meshTransform);

			// update our values from this vertex
			meshMin = Vector3.Min(meshMin, vertPosition);
			meshMax = Vector3.Max(meshMax, vertPosition);
		}
	}

	// Create the bounding box
	BoundingBox box = new BoundingBox(meshMin, meshMax);
	return box;
}

A lot of important stuff just happened here. First is the setup of vertexData, an array of VertexPositionNormalTexture structures. This is one of several built-in vertex structures that can be used to classify and organize vertex data. I used this one because my vertex buffer contains position, normal, and texture coordinates up front, and no color data. It tells GetData where the position data sits within each vertex, and position is the only data we need to create the box.

However, this is not enough to determine the alignment and structure of the vertex buffer. We also need to know the vertex stride, which is simply the number of bytes each vertex occupies. This number varies depending on how your meshes were created and what data was imported, and it can even differ between vertex buffers. With that piece of info, stepping through the vertex buffer is straightforward, with the vertex stride ensuring that we read from the correct offsets. The vertexData then goes to an inner loop where we examine each vertex, checking whether we have found a new minimum or maximum position. The minimum and maximum start out at extreme opposite values, so the first vertex replaces both.

Notice that each vertex is transformed by the mesh's parent bone matrix before the min/max comparison. Transforming only the final two points instead would give wrong results for parts whose bone transforms include rotation, since the transformed min and max would no longer be the extremes of the box. After the loop is done, we have the only two points that matter, and a new bounding box is created and returned from them. Optionally, you can also create a custom bounding sphere from the bounding box.
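XNA can derive that sphere directly from the box:

// A sphere that fully contains the box; often a tighter fit than the
// default bounding sphere generated by the content pipeline
BoundingSphere sphere = BoundingSphere.CreateFromBoundingBox(box);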

Drawing the boxes for debugging

Now that our boxes are stored in place, let's put them to some use. We are going to draw the bounding boxes that correspond to each model's meshes. If they are drawn together with the model, the wireframes will correctly hide behind solid objects.

Every BoundingBox can produce a Vector3 array representing the eight corners of the box, via its GetCorners method. The first four corners are the front side, and the last four are the back. We are going to use a line list to draw the 12 edges of the box. Each line connects a pair of corners, and the following index array forms the edges:

// Initialize an array of indices for the box. 12 lines require 24 indices
short[] bBoxIndices = {
	0, 1, 1, 2, 2, 3, 3, 0, // Front edges
	4, 5, 5, 6, 6, 7, 7, 4, // Back edges
	0, 4, 1, 5, 2, 6, 3, 7 // Side edges connecting front and back
};

Now, in the drawing loop, we loop through the model's bounding boxes, set up the vertices, and draw a LineList from them with any desired effect. This example uses a BasicEffect called boxEffect.

// Use inside a drawing loop
foreach (BoundingBox box in boundingBoxes)
{
	Vector3[] corners = box.GetCorners();
	VertexPositionColor[] primitiveList = new VertexPositionColor[corners.Length];

	// Assign the 8 box vertices
	for (int i = 0; i < corners.Length; i++)
	{
		primitiveList[i] = new VertexPositionColor(corners[i], Color.White);
	}

	/* Set your own effect parameters here */

	boxEffect.World = Matrix.Identity;
	boxEffect.View = View;
	boxEffect.Projection = Projection;
	boxEffect.TextureEnabled = false;
	boxEffect.VertexColorEnabled = true; // use the vertex colors assigned above

	// Draw the box with a LineList
	foreach (EffectPass pass in boxEffect.CurrentTechnique.Passes)
	{
		pass.Apply();
		GraphicsDevice.DrawUserIndexedPrimitives(
			PrimitiveType.LineList, primitiveList, 0, 8,
			bBoxIndices, 0, 12);
	}
}

In practice, if you scale, rotate, or move your model, make sure to apply the same world transformation to all of its boxes as well. Also, remember that it is best to move any shader parameters that won't change out of the drawing loop and set them only once.
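A minimal sketch of that, assuming worldMatrix holds the model's current world transform. Transforming all eight corners and rebuilding the box handles rotation correctly, unlike transforming just the two extreme points:

// Rebuild the box in world space from its transformed corners
Vector3[] corners = box.GetCorners();
for (int i = 0; i < corners.Length; i++)
{
	corners[i] = Vector3.Transform(corners[i], worldMatrix);
}
BoundingBox worldBox = BoundingBox.CreateFromPoints(corners);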

That's all there is to it. This will render solid-colored wireframe boxes, not just around your models as a whole, but around every mesh they contain.
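And with the boxes in place, the frustum culling that motivated all of this becomes a cheap per-mesh test. A minimal sketch, assuming a camera object that exposes View and Projection matrices:

// Build the view frustum once per frame, then test each mesh box
BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection);

foreach (BoundingBox box in boundingBoxes)
{
	if (frustum.Intersects(box))
	{
		// box is at least partially visible, so draw this mesh
	}
}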