Multiple textures and supporting more materials

I’m looking for ways to improve the content processor for meshes and my workflow for exporting .fbx files for XNA. Yesterday I figured out how to keep the correct texture filenames for the meshes and not have Blender assign “untitled” for the name. The fix: remove any unwanted texture images in the UV/Image Editor by Shift-clicking the Unlink button, then save the file and re-open it.

Now I can export a single mesh with multiple diffuse textures. I’m also looking to do the same to support multiple normal map textures for models, using this forum thread as a guide. This also works with specular maps, glow maps and just about any kind of texture data you want. Just give them different property names and have the content processor pick them up.
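As a rough illustration, the content processor side of this might look something like the sketch below. This is an assumption on my part, not the exact code: it copies every named texture reference from the material through to an effect material, so the runtime can bind each one to a shader parameter of the same name.

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Processors;

// Sketch only: assumes the extra textures (normal, specular, glow maps)
// were given distinct property names in the .fbx material.
[ContentProcessor]
public class MultiTextureModelProcessor : ModelProcessor
{
    protected override MaterialContent ConvertMaterial(
        MaterialContent material, ContentProcessorContext context)
    {
        EffectMaterialContent effectMaterial = new EffectMaterialContent();

        // Carry every named texture over, keyed by its property name,
        // so "NormalMap", "SpecularMap", etc. survive into the runtime model.
        foreach (KeyValuePair<string, ExternalReference<TextureContent>> texture
            in material.Textures)
        {
            effectMaterial.Textures.Add(texture.Key, texture.Value);
        }

        return base.ConvertMaterial(effectMaterial, context);
    }
}
```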

Although editing .fbx files by hand is fine, I would rather have the files fully prepared upon export, but Blender made a huge oversight by not supporting custom properties in its .fbx exporter. So disappointed at the Blender team for that. Maybe someone has made a script to add that feature in?

Supporting multiple textures

I have tested texture layering on this awesome dwarf model I found on Open Game Art. The clothing, armor and body are different mesh objects in the blend file and in XNA.

Each mesh has its own diffuse, normal and specular texture loaded by the content processor. The GBuffer only writes to one channel for the spec map’s brightness, so the specular output is in monochrome. Colored specular highlights are something I would add later on. Most likely I will add support for multiple textures in meshes with a Dictionary that stores the key names mapping to effect parameters. I will also improve how parameters are read for rendering with custom effects. For instance, here is part of the rendering code for a mesh using a custom (non-GBuffer) effect:

if (instancedModel.animationPlayer != null)


It looks fine at first. All meshes have a World transformation matrix, and if a texture is found for the current mesh, set that parameter to the texture sampler. Also, animated models have an animationPlayer, so we know that it has bone matrix transformations.
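Since only a fragment of that code appears above, here is a hypothetical reconstruction of the setup it describes; the parameter names (World, Texture, bones) and the model fields are my own placeholders, not the exact source.

```csharp
// Sketch of the per-mesh parameter setup described above (names assumed).
// Every mesh gets a world transform.
effect.Parameters["World"].SetValue(instancedModel.MeshWorldMatrices[meshIndex]);

// Bind a diffuse texture only if the current mesh has one.
if (instancedModel.Textures[meshIndex] != null)
    effect.Parameters["Texture"].SetValue(instancedModel.Textures[meshIndex]);

// Animated models have an animationPlayer, so they carry
// bone matrix transformations for skinning.
if (instancedModel.animationPlayer != null)
    effect.Parameters["bones"].SetValue(
        instancedModel.animationPlayer.GetSkinTransforms());
```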

But there’s a problem: what if you have an effect to render models that does not need the textures? I see two quick but not-so-great fixes for this: check if the Texture parameter exists in the rendering code, or add a sampler for Texture in the shader. The first one is not very scalable if you start needing to check for many shader parameters, and the second one is just wasteful.

Currently I am using that second option as a temporary workaround to draw a depth map of objects from the light’s perspective. It has to support both static and animated objects, but textures are not important for this shader. The only purpose of adding a Texture parameter to it is to keep the program from crashing; otherwise the renderer is going to look up a null parameter. If you choose a shader that was compiled with diffuseTex defined, and your mesh has nothing assigned for that, the shader will try to sample from something that’s not bound to it. It’s hacky and I don’t want to keep doing things this way.
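For reference, the first workaround (checking for the parameter before setting it) is only a couple of lines, since in XNA 4 indexing EffectParameterCollection by a name the effect doesn’t declare returns null rather than throwing. The meshTexture variable here is a placeholder:

```csharp
// Option 1: only set the texture if the compiled effect actually declares it.
// A missing name in effect.Parameters returns null in XNA, so this is safe.
EffectParameter textureParam = effect.Parameters["Texture"];

if (textureParam != null && meshTexture != null)
    textureParam.SetValue(meshTexture);
```

It works, but as noted above it doesn’t scale well once many optional parameters need this treatment.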

Adding more flexibility for different materials

My next objective is to add some form of multiple material support with custom shaders. At first it will check to see if an object has diffuse, normal and specular textures, and whether the shader needs them. Later on I want to really support a wide array of materials for different types of objects. So nearly anything from reflective objects to transparent objects, or objects that require special types of lighting.

Something along the lines of a data-driven renderer is probably what I am looking for. It would use a combination of material states/types to choose the appropriate commands and shaders to render the object with. It’s most commonly done using a long or integer type as a bitmask, which is how I plan to do it. I will start with a small set of material and texture types to choose from, and then gradually add more as I figure out how to get different effects working in the renderer.
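The bitmask idea can be sketched with a [Flags] enum; the flag names here are placeholders for whatever material and texture types the renderer ends up supporting.

```csharp
using System;

// Sketch of bitmask material types; flag names are placeholders.
[Flags]
public enum MaterialType : long
{
    None        = 0,
    Diffuse     = 1 << 0,
    NormalMap   = 1 << 1,
    SpecularMap = 1 << 2,
    Transparent = 1 << 3,
    Reflective  = 1 << 4
}

public static class MaterialExample
{
    public static void Main()
    {
        // A mesh's material is just a combination of flags...
        MaterialType material = MaterialType.Diffuse | MaterialType.NormalMap;

        // ...and the renderer tests flags to pick shaders and render states.
        if ((material & MaterialType.NormalMap) != 0)
        {
            // bind the normal map and select a normal-mapped technique
        }
    }
}
```

Adding a new material type later is then just a new flag plus the rendering path that handles it.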


Meteor Engine source code is now public

I have finally published the code to the Meteor rendering engine, and it’s available on CodePlex. Actually, it’s been up for several days; I was up against the 30-day deadline CodePlex places on setting up the project area before making it public. For now there are no official downloads- you can only browse the source code. Anyone daring enough can get the latest revision of the source code and build a library from it. How to use the library is a different thing altogether.

It’s still continually being worked on and improved, so what I will describe below may be subject to change. I have added a new blog category, Meteor Engine, to better keep track of all the updates for this project. It is published under the MIT license.

The source code is organized into two project folders- MeteorContentProcessor and MeteorEngine. Although there’s still some cleaning up to do, the engine is small enough to easily describe the source files in a single post. So here we go.

MeteorContentProcessor files

MeteorContentProcessor is self-explanatory- it’s the content building pipeline for the models and other resources. It’s very similar to the content processor of XNA’s normal mapping sample, although there are a few tweaks to it.

It has very few files, and here are short descriptions for what they do:

  • DeferredMaterialProcessor.cs – Overrides MaterialProcessor.BuildTexture to convert normal maps to a signed format.
  • DeferredModelProcessor.cs – All models enter through here. Checks to see if it has normal maps and adds them, and also integrates the GBuffer effect. Specular map support is still in progress.
  • DeferredRendererModel.cs – A leftover from Catalin Zima’s deferred renderer, it also handles models but is currently unsupported. It’s just there for reference now.
  • DeferredTextureProcessor.cs – Used by DeferredMaterialProcessor to encode the normal maps.

MeteorEngine files

The MeteorEngine folder contains the main engine project code. The single source file in the root of this folder, Core.cs, is the entry point for the setup and execution of the engine’s processes. 90% of the code is in the Graphics folder, which is further split into rendering components, shader classes, helpers and render profiles.

Here is a rundown of what the source files do. For completeness I will also list the shader files that derive from the base shader class. Because I did not plan the launch as well as I had hoped, there are redundant, unused files all over the place that need to be removed soon. I grayed these out in the file list, since they won’t be used any time soon. The file list will get shorter after I update.


Core.cs – This contains the Core class, the entry point of the engine which is responsible for loading and using cameras, scenes, and rendering profiles. It’s a bit of a mess right now- the class is not terribly long but it’s still disorganized and a bunch of stuff is hardcoded just for debugging purposes. This will likely be split into more classes in order to distinguish the functionality better.


  • Graphics/ – If Core.cs can be thought of as the hood of a car, here’s what’s under it.
  • Graphics/Scene.cs – Defines a scene, which consists of several static and/or skinned models, and different types of lights. Also provides a way to add them.


  • Graphics/Components/ – This includes custom extensions to XNA components, which are used by renderer classes.
  • Graphics/Components/DrawableComponent.cs – Implements IDrawable interface and extends MeteorComponent. Provides and handles the graphicsDevice service. The Core class extends from this.
  • Graphics/Components/MeteorComponent.cs – Implements IGameComponent, for the purpose of using component services instead of directly accessing the Game class.


  • Components/Cameras/ – Contains different kinds of cameras based on behavior.
  • Components/Cameras/Camera.cs – Base camera class, which defines and sets up a FPS style (euler angle) camera. It can be used as-is, but it provides no interactivity.
  • Components/Cameras/ChaseCamera.cs – A camera that follows a specific point.
  • Components/Cameras/DragCamera.cs – Camera that rotates when the mouse is dragged and uses WASD keys for movement along its view.
  • Components/Cameras/FreeCamera.cs – Camera that rotates when the mouse is moved, also uses WASD keys for movement.


  • Components/Drawables/ – Define different types of geometry that can be drawn onto the screen.
  • Components/Drawables/EntityInstance.cs – A possible holdover for supporting instanced meshes in the future.
  • Components/Drawables/InstancedModel.cs – Loads model files, prepares them, and also does manipulation such as scaling, moving and rotation.
  • Components/Drawables/QuadRenderer.cs – Creates and draws a quad that is typically used in screen space.


  • Components/Lights/ – Defines different types of lights used for shading geometry.
  • Components/Lights/DirectionalLight.cs – Structure for directional light data.
  • Components/Lights/PointLight.cs – Structure for point light data. It is treated as an instanced mesh for deferred rendering.


  • Graphics/Helpers/ – Functionality that aids or supplements the rendering process, but generic enough that they do not fit in any particular context.
  • Graphics/Helpers/CopyShader.cs – Produces a deep copy of a render target with a basic effect while keeping the original. I thought about putting this shader in RenderShaders, but its purpose feels much more like a utility than a visual effect.
  • Graphics/Helpers/GaussianBlur.cs – This was adapted from the XNA bloom shader example. A Gaussian blur filter meant for offline computation, and can be used for screen-space blur or textures.
  • Graphics/Helpers/RenderInput.cs – Used by shader classes to keep inputs for any given combination of resources to run effects with. These input types are cameras, scenes, and render targets.
  • Graphics/Helpers/RenderStats.cs – Provides simple profiling functions to measure time elapsed for different processes. Mostly used in the shader classes.
  • Graphics/Helpers/StringBuilderNumeric.cs – A StringBuilder class by Gavin Pugh, it outputs garbage-free strings converted from numeric types. Useful for debug rendering.


  • Graphics/Renderer/ – All files related to the usage of inputs for rendering.
  • Graphics/Renderer/RenderProfile.cs – Extends DrawableComponent. Loads up different shaders used for a complete rendering profile. Pools render targets for rendering and debugging for each step, which is called a RenderTask.
  • Graphics/Renderer/SceneRenderer.cs – Renders through a list of meshes in a scene.  Any combination of scene and camera can be used. It’s also responsible for basic culling and organizing meshes.


  • Graphics/RenderShaders/ – User-definable shaders for providing different effects.
  • Graphics/RenderShaders/BaseRenderer.cs – Base shader class which provides render inputs, universal shader parameters and profiling.

The PostProcessingShaders and SceneShaders folders reside in Graphics/RenderShaders.

Post-Processing Shaders

  • PostProcessingShaders/ – Post-processing (usually) screen-space effects. All shaders here only take render target inputs.
  • PostProcessingShaders/BloomShader.cs – Applies a bloom effect with a 4-pass render. It can additionally change contrast and saturation.
  • PostProcessingShaders/BlurShader.cs – Screen-space blur using Gaussian blur.
  • PostProcessingShaders/DepthOfFieldShader.cs – Produces a soft-focus blur by interpolating between a blurred image and a sharply focused image, using a depth map for adjustments.
  • PostProcessingShaders/FXAAShader.cs – Fast Approximate Anti-Aliasing. Typically used for deferred rendering setups, right after the color and lighting pass.
  • PostProcessingShaders/SSAOShader.cs – Reads a normal map and depth map to produce screen-space ambient occlusion, with a randomized normal texture.

Scene Shaders

  • SceneShaders/ – Scene effects. All shaders here use some combination of a camera and scene, with the exception of CombinationShader.cs.
  • SceneShaders/CombinationShader.cs – Combines lighting and diffuse/albedo render targets to provide a complete image. Used for deferred and light pre-pass rendering.
  • SceneShaders/DiffuseShader.cs – Draws diffuse/albedo textures from scene meshes.
  • SceneShaders/GBufferShader.cs – Produces a GBuffer with either two or three render targets using MRT. The larger GBuffer class extends from the small GBuffer class.
  • SceneShaders/LightShader.cs – Takes normal and depth maps to render directional and point lighting. Optionally does shadow mapping with drawing the scene from the light’s POV, and supports single-pass PSSM for up to 4 frustum splits.

Render Profiles

  • Graphics/SampleRenderProfiles/ – Contains rendering setups derived from RenderProfile.cs, as examples of different types of rendering that are possible.
  • Graphics/SampleRenderProfiles/DeferredRenderer.cs – Setup for deferred rendering, and some post-effects.
  • Graphics/SampleRenderProfiles/LightPrePassRenderer.cs – Setup for light pre-pass rendering, and some post-effects.

Properties/ – Contains the AssemblyInfo file for this project.

MeteorEngine.Content files

These files are loaded in the content project, and are used mainly by the shader and SceneRenderer classes. These include default textures for models that have no compatible texture types, a random noise texture, and the .fx files compiled by the shader objects that use them. Keep in mind that this is content only for the engine library, and is separate from the content of the game project that uses the engine.

Further updates

As I have some further cleaning up to do in the code repository, some files will be removed, and others may be renamed. I will explain a simple example use of the engine in a future post. Come back to this post soon to see the updates for the next build.

Tightening up graphics, and the other subsystems

– This post has been a hold-over for a few weeks, but I decided it’s now time to flush it to the screen! –

After coming to grips with my shortage of artistic inspiration, I’ve decided to go on full steam ahead with my game features and underlying systems. More menus, more screens- including the ubiquitous Pause screen- a continuous rebuild of the underlying system that controls them, and I even did some touch-ups with the visuals. Bubble Tower is slowly but surely becoming more polished in gameplay and presentation.

First I want to talk about the visuals, because it will just be a short overview. There are no graphics other than menu text, bubbles, and backgrounds, but I’m moving toward using 3D rendered graphics for a lot of it. So I started with the most obvious: the bubbles. The original spritesheet contained 8 bubbles, one for each color. I decided to stick with this principle but generate the spritesheet at runtime by drawing the mesh to a render target with 8 different colors and viewports. I get to keep the original code that splits up the spritesheet and selects the appropriate colored bubble to draw.
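The render-target trick can be sketched like this; the tile size, color array and draw helper are assumptions for illustration, not the actual game code.

```csharp
// Sketch: render one bubble mesh 8 times into a render target, once per
// color, using viewports to lay the tiles out side by side (names assumed).
RenderTarget2D spriteSheet = new RenderTarget2D(
    graphicsDevice, tileSize * 8, tileSize, false,
    SurfaceFormat.Color, DepthFormat.Depth24);

graphicsDevice.SetRenderTarget(spriteSheet);
graphicsDevice.Clear(Color.Transparent);

for (int i = 0; i < bubbleColors.Length; i++)
{
    // Restrict drawing to the i-th tile of the sheet
    graphicsDevice.Viewport = new Viewport(i * tileSize, 0, tileSize, tileSize);

    bubbleEffect.Parameters["DiffuseColor"].SetValue(bubbleColors[i].ToVector3());
    DrawBubbleMesh();
}

graphicsDevice.SetRenderTarget(null);
// spriteSheet can now be split up exactly like the original 8-bubble texture.
```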

What’s nice about this is that the bubbles can be animated, or they can be replaced with another mesh. After working on a shader to get the highlights and shading looking just right on the bubbles, I messed around with some other geometric meshes, and found one that I got stuck on. I was inspired by the gem puzzle games and wanted to use more interesting shapes. Mine look more like some sort of paper origami balls than bubbles now, but I actually like the look. So after all it looks like I am getting somewhere with the visuals. Afterwards I just stuck in a wallpaper as a test background.

Improving the screen system

The more I try to make the screens do what I want, the more bugs I find, but I’m smoothing out the screen system as I keep progressing. The screens have the ability to transition in and out of view, with user-defined times (user-defined animations are still in the works). This added a lot of code bloat to the GameScreen class, so I isolated all the transition logic into its own Transition class. Now every screen just has a Transition that it can use internally.

I’ve also made a new type of screen, the Splash Screen. Splash Screens are non-interactive; they stay in the center of the screen for a fixed amount of time, then leave. They can have text or an image. In the interest of time, I initially reused a class that displays menu text, but broke it down further because it wasn’t using the button click functions.

It’s also possible to load a series of splash screens, each following in succession, with just one function call. The trick is passing a string array with your text in it, and storing it in the splash screen. The first splash screen uses the first string of text, removes it from the array, and when it’s about to exit, passes the shorter array to a new SplashScreen constructor. If there is a transition time, you will see the text/images smoothly crossfade between each other. This feature is great for displaying several game logos in a row, or having a visual countdown with your own text. I would not use it for long still-frame cutscenes, though. The player doesn’t have control of the splash screens, so not being able to skip through them would be very annoying.
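The chaining described above can be sketched as follows; the base class members and the OnExiting hook are assumptions standing in for the real screen system.

```csharp
using System;

// Sketch of chained splash screens; class and member names are assumed.
public class SplashScreen : GameScreen
{
    string[] messages;

    public SplashScreen(string[] messages)
    {
        this.messages = messages;
        // Display messages[0] for this screen's fixed duration.
    }

    // Called when this screen is about to exit.
    protected void OnExiting()
    {
        // Hand the remaining messages off to the next screen in the chain.
        if (messages.Length > 1)
        {
            string[] remaining = new string[messages.Length - 1];
            Array.Copy(messages, 1, remaining, 0, remaining.Length);
            ScreenManager.AddScreen(new SplashScreen(remaining));
        }
    }
}
```

Each screen peels off one string and constructs its successor, so one call with the full array produces the whole sequence.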

Rules and control flow

Adding new splash screens on top of interactive screens has made the control flow a tad unpredictable, because I want some splash screens to take control away from the player, but not others. I initially decided to make a distinction between interactive and non-interactive screens, and loop through only the interactive screens to read input events, but that just made the results more confusing.

Here is the current list of screen classes that are used in this game:

  • GameScreen
  • GameMenu
    • MainMenu
    • LevelSelectMenu
    • PauseMenu
    • GameOverMenu
  • MainGameMode
  • LevelEditorMode
  • BackgroundScreen
  • LoadingScreen
  • SplashScreen

The distinction between the menus may seem redundant, but I won’t need many menus anyway, and I’m totally fine hard-coding the pathways in them instead of using external scripts. On the quest to make the screen manager more robust, here are some rules I’ve set out to implement:

  • If a new screen is added, start reading input (if it’s interactive)
  • If the new screen is marked “exclusive”, stop updating/reading input for other screens
  • If an exclusive screen is removed, read input from the other screens again
  • Screens may start reading input as soon as they’re entering, or when they’ve completely entered
  • Screens should stop reading input as soon as they’re leaving

Pretty straightforward stuff. Here’s where the rules become more involved:

All screens undergoing transitions should be allowed to complete the transition phase.

What this basically means is, if a screen is exiting or entering, the transition timer that counts down or up must not be stopped at any cost. Just keep it going. To accomplish this, the transition updates need to be decoupled from the user-defined update code.

The reasoning behind this is that, sometimes, an exclusive screen may enter at the same time other screens are entering. The exclusive screen wants to stop the others from updating, but if the transitions are run in the Update function, those other screens will never get to complete their transitions! Decoupling the two is the way to go, so I moved all the transition logic to an UpdateState function, which always gets called no matter what.
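In sketch form, the split looks like this; the exact class layout is assumed, but the key point is that the manager calls UpdateState unconditionally while Update can be suppressed.

```csharp
// Sketch: transitions tick in UpdateState, which the manager always calls,
// while game logic lives in Update, which an exclusive screen can suppress.
public abstract class GameScreen
{
    public Transition Transition { get; protected set; }

    // Always called, even when an exclusive screen is on top,
    // so entering/exiting screens can finish their transitions.
    public virtual void UpdateState(GameTime gameTime)
    {
        Transition.Update(gameTime);
    }

    // Only called for screens that are currently allowed to update.
    public virtual void Update(GameTime gameTime) { }
}
```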

Avoid adding or removing screens while other screens are being queued for updates/input reading.

The screen manager has two special lists, ScreensToUpdate and ScreensToReadInput. ScreensToUpdate is a mainstay from the XNA code sample for game state management- it simply tells the manager to call Update on the screens in it. ScreensToReadInput was added later, and it tells the manager to call HandleInput on the screens in that list. This is done before Update is called on all the screens.

Here is the order of functions that are called in the game’s main loop:

  • Read input from screens
  • Update screen events
  • Draw each screen

There is no particular spot that says where AddScreen or RemoveScreen is called. They may be called one or multiple times where screens are updated or reading input. That can have undesirable effects and can lead to subtle bugs if we’re not being careful.

Suppose that in one screen inside the ScreensToReadInput list, a key press was detected which tells the screen manager to remove this screen and then add another one. That would modify the list while we’re still looping inside of it! This would throw an exception in a foreach loop, so you have to iterate with a for loop instead. Still, modifying a list of screens while screens are updating can lead to unexpected behavior.

Updating screen lists

To resolve this problem, screens should not use AddScreen or RemoveScreen at all. These two functions can be made private, and there are two new functions to supplement them: PushToAdd and PushToRemove. Those would be the functions that screens can call, to notify the screen manager that there are new screens waiting to be added or removed.

Both functions would add a screen to one of two new lists: ScreensToAdd and ScreensToRemove. They are a sort of waiting list for the screen manager to go through. The purpose of this is to move the AddScreen and RemoveScreen calls into a separate function, away from updating and reading input. The new loop of functions would look like this:

  • Add or remove screens
  • Read input from screens
  • Update screen events
  • Draw each screen

The ScreensToAdd and ScreensToRemove lists are guaranteed to be cleared by the time the screen manager is done with the first step. Now we would know exactly where new screens are pushed or removed from all the lists.
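Putting the pieces together, the deferred add/remove scheme might look like the sketch below; the field and method names follow the terms used in this post, but the actual implementation details are assumed.

```csharp
using System.Collections.Generic;

// Sketch of the deferred add/remove queues described above (names assumed).
public class ScreenManager
{
    List<GameScreen> screens = new List<GameScreen>();
    List<GameScreen> screensToAdd = new List<GameScreen>();
    List<GameScreen> screensToRemove = new List<GameScreen>();

    // Screens call these instead of AddScreen/RemoveScreen directly.
    public void PushToAdd(GameScreen screen) { screensToAdd.Add(screen); }
    public void PushToRemove(GameScreen screen) { screensToRemove.Add(screen); }

    public void Update(GameTime gameTime)
    {
        // Step 1: flush the waiting lists before any input or updates,
        // so the screen lists are never modified mid-iteration.
        foreach (GameScreen screen in screensToAdd) AddScreen(screen);
        foreach (GameScreen screen in screensToRemove) RemoveScreen(screen);
        screensToAdd.Clear();
        screensToRemove.Clear();

        // Steps 2-3: read input, then update, on a now-stable screen list.
        // ...
    }

    // Now private: only the manager itself mutates the real screen list.
    void AddScreen(GameScreen screen) { screens.Add(screen); }
    void RemoveScreen(GameScreen screen) { screens.Remove(screen); }
}
```

Since the queues are drained and cleared before input or updates run, a key press can safely request screen changes without invalidating any loop in progress.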

Determining screen priority

Why not just have the topmost screen receive input, you might ask? Sometimes you’d want more than one screen being interactive at a time, like a group of menus for a strategy game. And for the aforementioned splash screens that should still allow player control of the screens underneath, I’d have to give them a special flag or property that marks them as not “exclusive”. I plan to use splash screens more in this way, such as displaying large score numbers or messages for clearing certain groups of bubbles.

Once I have all these problems sorted out, though, I can go back to adding more game-centric features, and hopefully adding a two-player mode. Still don’t know how I’ll work that one out, as I am just using a keyboard and don’t even have an Xbox controller to hook up. I can always take the more daring route and start coding an AI to play against the player. That would certainly be another challenge, but most likely I will just start out making the “AI” fire bubbles at random times in random directions, then gradually mold its crazy mind into something more coherent. But for now, on to polishing more menus!