Applying ECS to a simple game

I’ve been going back to work on my Entity-Component-System framework. Aside from being a side project, I also plan to use it for my voxel platform game. I’ve already created a Minimum Viable Product with it: the Bomberman clone I mentioned a few posts back. Animations are still very buggy and there is no AI implemented, but a barebones 2-4 player version is working.

Previously I initialized all the Components, Systems, and Entity templates in the framework code. While that was fine for testing out the game, it’s not good for portability, so I moved all that initialization code out and updated the framework so that it can accept new Components, Systems, and templates from outside.

Finally, I isolated the code into its own assembly, so it’s possible to just link it as a DLL. This also meant removing any XNA/MonoGame-specific classes, and all the rendering will have to be done from outside. In short, it’s really meant for game logic only, and your game will have to handle the rendering separately.

The framework itself is lightweight (and I hope it stays that way), and consists of only 5 files/classes: Component, EntityManager, EntitySystem, EntityTemplate, and SystemManager. The SystemManager handles all the high-level control and flow of the EntitySystems, which you derive your custom systems from. EntityTemplate is a simple class used as a blueprint of Components that define an Entity, and is deep-cloneable. EntityManager handles the creation of Entities from these templates, as well as the organization of their components. Despite its name, there is no Entity class. I think I will rename this manager to “ComponentManager” in a later revision.
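To give a rough idea of how those five classes fit together, here is a minimal sketch in C#. The post doesn’t show the framework’s actual code, so every signature below is a guess for illustration only:

using System.Collections.Generic;

public abstract class Component
{
    // Components are plain data; a memberwise copy is enough as long as
    // every field is a value type (see the cloning caveat further down).
    public virtual Component Clone() { return (Component)MemberwiseClone(); }
}

public class EntityTemplate
{
    public List<Component> Components = new List<Component>();

    // Deep-cloneable: each new entity gets its own copies of the components
    public EntityTemplate DeepClone()
    {
        var copy = new EntityTemplate();
        foreach (var c in Components)
            copy.Components.Add(c.Clone());
        return copy;
    }
}

public class EntityManager
{
    // Despite the name, there is no Entity class; an entity is just an ID
    // keyed to a bag of components.
    private readonly Dictionary<int, List<Component>> componentsById =
        new Dictionary<int, List<Component>>();
    private int nextId;

    public int CreateEntity(EntityTemplate template)
    {
        componentsById[nextId] = template.DeepClone().Components;
        return nextId++;
    }
}

public abstract class EntitySystem
{
    protected EntityManager EntityManager;
    protected EntitySystem(EntityManager manager) { EntityManager = manager; }
    public abstract void Update(float dt);
}

// Drawable systems get an extra Draw(), called from a separate part of the game loop
public abstract class DrawableEntitySystem : EntitySystem
{
    protected DrawableEntitySystem(EntityManager manager) : base(manager) { }
    public abstract void Draw();
}

public class SystemManager
{
    private readonly List<EntitySystem> systems = new List<EntitySystem>();
    public void Add(EntitySystem system) { systems.Add(system); }
    public void Update(float dt)
    {
        foreach (var system in systems)
            system.Update(dt);
    }
}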

The Bomberman game has the following components:

  • Bomb
  • Collision
  • InputContext
  • PlayerInfo
  • PowerUp
  • ScreenPosition
  • Spread
  • Sprite
  • TilePosition
  • TimedEffect

They are used by the following systems:

  • BombSystem
  • CollisionSystem
  • ExplosionSystem
  • InputSystem
  • MovementSystem
  • PlayerSystem
  • PowerUpSystem
  • TileSystem

Some of the systems are more generic than others. A couple, like the Bomb system or the Power-up system, have very specific logic pertaining to the game, while others, like the Input system, are pretty abstract and can be adapted to other game types with little or no change. Some are Drawable systems, so they have an extra Draw() function that is called in a separate part of the game loop.

The funny thing is that I was going to talk about using Messages in this update, but in the meantime I did away with them completely. Messages were a static list of Message objects in the base System class. They were mostly used for one-time player-triggered events (like setting a bomb), and every system had access to them, but I decided to just pass the InputContext component into the systems that depend on player input.

Setup and Gameplay

The game is started by initializing all the components and systems and then creating the entire set of Entities using a Level class. This class has only one job- to lay out the level. Specifically, it adds the components needed to make the tiles, sprites, and players. My implementation of the game pre-allocates 9 Bomb entities (the maximum a player can have) for each player.
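A sketch of that one job, with hypothetical names, building on the framework sketch above:

public class Level
{
    // Lays out the level: creates tile, sprite, and player entities,
    // and pre-allocates each player's bombs up front.
    public void Layout(EntityManager manager, EntityTemplate tileTemplate,
                       EntityTemplate playerTemplate, EntityTemplate bombTemplate,
                       int playerCount)
    {
        // ... tile and sprite layout omitted ...

        for (int p = 0; p < playerCount; p++)
        {
            manager.CreateEntity(playerTemplate);

            // 9 is the maximum number of bombs a player can hold, so they
            // are all created now instead of being allocated mid-game.
            for (int b = 0; b < 9; b++)
                manager.CreateEntity(bombTemplate);
        }
    }
}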

Each player can be custom-controlled, but that’s facing issues now that I’ve moved from invoking methods to instantiate new Entities to deep-cloning them. This works well as long as none of the components have reference types.

The only Component with reference types is the InputContext component, as it needs to keep a Dictionary of the available control mappings. This breaks deep-cloning, and thus with multiple players, they all end up sharing the same control scheme. On top of that, it makes the component too bloated, especially with the helper functions to initialize the mappings. So I am figuring out how to represent an arbitrary group of control mappings using value types only.
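To make the problem concrete: a memberwise clone copies the Dictionary reference, not the Dictionary itself, so every cloned InputContext points at the same mappings. A contrived example, since the real InputContext’s fields aren’t shown here:

using System.Collections.Generic;
using Microsoft.Xna.Framework.Input;

public class InputContext : Component
{
    // Reference type: Clone() copies the reference, so all cloned
    // InputContexts end up sharing this one Dictionary instance.
    public Dictionary<Keys, int> Mappings = new Dictionary<Keys, int>();
}

public static class CloneProblemDemo
{
    public static void Run()
    {
        var original = new InputContext();
        original.Mappings[Keys.Space] = 1;     // player 1: Space drops a bomb

        var copy = (InputContext)original.Clone();
        copy.Mappings[Keys.Enter] = 1;         // meant for player 2 only...

        // ...but original.Mappings now contains Keys.Enter too, because
        // both components share the same Dictionary.
    }
}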

The game starts immediately after setup, and every InputContext that is tied in with a PlayerInfo controls a player. Movement around the level is handled with the Movement System, while placing and remote-detonating bombs is handled with the Bomb System.

The Input System detects key and button presses from an InputContext’s available control mappings, and changes two integers, “current action” and “current state”, based on them. It is up to other systems to determine what to do with these values, if anything.
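Something along these lines, with made-up names for the fields; what each action number means is left entirely to the systems that consume it:

using System.Collections.Generic;
using Microsoft.Xna.Framework.Input;

public static class InputReader
{
    // Scans an InputContext's control mappings against the keyboard and
    // records the result as two plain integers for other systems to read.
    public static void Read(KeyboardState keys, Dictionary<Keys, int> mappings,
                            ref int currentAction, ref int currentState)
    {
        currentAction = 0;              // assumption: 0 means no action
        currentState = 0;
        foreach (var mapping in mappings)
        {
            if (keys.IsKeyDown(mapping.Key))
            {
                currentAction = mapping.Value;
                currentState = 1;       // assumption: 1 means pressed
            }
        }
    }
}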

The Tile System is responsible for keeping sprites aligned to a tile grid, or giving them their closest “tile coordinates”, which are important for knowing where a bomb should be placed, for example.
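The snapping itself can be as simple as rounding to the nearest grid cell. A sketch, assuming square tiles of a known pixel size:

using System;
using Microsoft.Xna.Framework;

public static class TileMath
{
    // Closest tile coordinates for a screen-space position,
    // e.g. for deciding which tile a bomb lands on.
    public static Point ToTileCoords(Vector2 position, int tileSize)
    {
        return new Point((int)Math.Round(position.X / (float)tileSize),
                         (int)Math.Round(position.Y / (float)tileSize));
    }
}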

The Collision System is self-explanatory. It handles different collision types, differentiated by enum, for solid, destructible, and damaging objects, as well as the response (it pushes players off walls, for example). If a player walks into an explosion, the Collision System knows.

An Explosion System is used to propagate the explosions in pre-set directions. By default that’s all 4 cardinal directions, with a bomb’s Power attribute copied to a Spread component and decremented with each tile. It keeps creating more explosions until this attribute reaches 0 or it hits a wall.
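The propagation loop might look something like this; IsWall and CreateExplosion are hypothetical stand-ins for the real tile lookup and entity creation:

using Microsoft.Xna.Framework;

public static class ExplosionSpread
{
    private static readonly Point[] Directions =
    {
        new Point(1, 0), new Point(-1, 0), new Point(0, 1), new Point(0, -1)
    };

    // Walk outward from the bomb's tile in all 4 cardinal directions,
    // creating one explosion per tile. The spread power drops by one per
    // tile, and each direction stops at a wall or when the power runs out.
    public static void Spread(Point bombTile, int power)
    {
        foreach (Point dir in Directions)
        {
            for (int step = 1; step <= power; step++)
            {
                var tile = new Point(bombTile.X + dir.X * step,
                                     bombTile.Y + dir.Y * step);
                if (IsWall(tile))
                    break;
                CreateExplosion(tile, power - step);
            }
        }
    }

    private static bool IsWall(Point tile) { return false; }        // stub
    private static void CreateExplosion(Point tile, int power) { }  // stub
}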

The Powerup System compares power-up tile locations against the players’ own tile locations; if two identical locations are found, we know a player is over a power-up and it can be applied.

There used to be a system for drawing sprites, but I decided to remove it and have the rendering be done outside the ECS scope. This makes the code more portable and you can use your own renderer.

Now that the game is done (with minimal specs), I am ready to extend the framework’s use and produce one of the games I have wanted to make for a while: a top-down arena-style shooter. It will share similar components and systems for player movement, tile collision, and power-ups (which will be renamed simply to items). I plan to make it in 2D at first, but eventually switch the renderer to 3D and also offer customizable maps.

New Blog for SeedWorld

My SeedWorld voxel engine is getting its own blog, since I will be making a game out of it. It will be an open-ended RPG in the visual style of Cube World. I will be documenting the progress of this game and engine here: seedworldgame.wordpress.com. I’ll try to post updates there regularly as I add new features.

With that said, this blog won’t be completely abandoned. I’ll still be using it for programming articles and other smaller projects I may have. However, SeedWorld will be my top priority, so make sure to follow that one as well!

One Game A Month: Here comes a new game project!

What? Another game already? That’s right, but this one will not be as big as my racing game project, which I expect to be ongoing for several months and likely at least a year. No, this will be a short-term project, planned for just one month as part of the One Game A Month quest. I want to get in the habit of finishing games quicker. (Maybe then I could rename the blog Electronic Meteor Games! Imagine that.) I want a game I can make quickly and easily, and that leverages the coding experience I have gained so far. So it will re-use some of the code I’m working on now, refactored to fit the needs of the game.

The game will be a twin-stick top-down shooter. The idea may not be original, but carrying it out should be fairly easy. I don’t have a name for it yet; I only know that its features will include multiple levels and upgradeable weapons, local multiplayer (not sure yet if I can finish online networking code in a month), and a cool lighting atmosphere for the graphics. So, basically what one would expect from a top-down shooter. Characters and setting will be fairly abstract and basic. I don’t have much know-how for modeling human characters, so it will be robots blasting other robots.

Here are the main goals I intend to follow for the month-long project:

  • Simplistic but nice to look at graphics and setting
  • Multiple weapons and enemy types
  • Controller support (gotta really get a controller for that though 😛)
  • Learn some more AI programming as I go along
  • Use what I learned from Meteor Engine for the graphics
  • A lighting mechanic to hide/show parts of the map (somewhat of a “fog of war” for a shooter)

I have been mostly inspired to do a top-down shooter by some of the fast-paced games being put up on Steam Greenlight. It’s a genre that is simple, fun, and engaging for many people, and I believe that a (stripped-down) top-down shooter can be another good game for budding programmers, comparable to platform games. So for this month, I will be slowing down progress on the racing game to work on this one.

On the AI side, I have been reading this set of tutorials on creating a state machine. Many game programmers may be familiar with the game-state-switching pattern for coding a complete game. These tutorials take it further by applying it in other ways, like setting up rooms for a dungeon crawler, or computer-controlled AI characters that follow their own motives. The latter is the one I’m most interested in. I plan to implement the tutorial code in this game to give me a head start on the AI. It won’t be pretty, but the functionality is what counts here.
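The core of such a state machine is small. Here is a bare-bones take on the pattern (my own sketch, not the tutorial’s code):

// Generic state machine: each state knows what to do when entered, on
// every update, and when exited. An AI character owns one of these and
// swaps states as its motives change.
public interface IState<T>
{
    void Enter(T owner);
    void Execute(T owner);
    void Exit(T owner);
}

public class StateMachine<T>
{
    private readonly T owner;
    private IState<T> current;

    public StateMachine(T owner) { this.owner = owner; }

    public void ChangeState(IState<T> next)
    {
        if (current != null)
            current.Exit(owner);
        current = next;
        current.Enter(owner);
    }

    public void Update()
    {
        if (current != null)
            current.Execute(owner);
    }
}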

For graphics, I mentioned the Meteor Engine, but I will not be using it as-is. Rather, the game will have its own graphics code that takes several ideas from the engine. It will be a trimmed-down, sort of “lite” version of the engine code, using mainly deferred rendering. The intent is to provide a setting with many moving lights, and most outdoor daytime scenes aren’t good for that. Features include mostly dark rooms, bullets that light up the room along the path they take, reflective surfaces on characters and level objects, and point-light shadows. A lot of the visual inspiration comes from the Frank Engine demo, so expect the game to look more or less like that.

I will code this with XNA, as usual, but I will also try to get it portable to MonoGame. I have been researching this for a while, but attempts to port any of my code to other platforms haven’t gone well so far. MonoGame (in its current 3.0 version) on Mac seems to be a no-go with Snow Leopard; something to do with the Apple SDKs not being up to date with what MonoDevelop uses, so I would have to upgrade Xcode to 4.2, which requires a Lion upgrade. Not up to doing that right now. So it will likely be on Linux before Mac 😛 The cross-platform support is not part of the month-long deadline; like online multiplayer, it’s just something I would like to do to take my game further.

I would like to get started on programming the game today if I want to finish it before the 30th. The plan for today: use a placeholder model for the character, draw everything with basic graphics, and make the character shoot in all directions. At that point it’s not very different, logically, from a scrolling shoot-em-up. So look forward to more posts related to my month-long game. It’s been a while since I actually released a game, and I want this to be the most complete game I’ve released so far.

Terrain picking solved

Finally, I figured out how to do precise terrain picking, so now I can make a lot of progress! As I’m going to be using BEPU for the physics engine, I just let BEPU do the dirty work. Using it to create a terrain height field and doing ray intersection testing is pretty intuitive. Storing 4 million points is no problem for it, but I may look into its source code to see how it does the intersection tests so efficiently.
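For reference, the picking call itself boils down to very little code. This is from memory of BEPU’s v1-era API, so verify the names against your version; it assumes a Terrain built from the height field has already been added to the Space:

using BEPUphysics;
using Microsoft.Xna.Framework;

public static class TerrainPicker
{
    // Casts a ray into the physics space and returns the exact hit point
    // on whatever it strikes (the terrain included), or null on a miss.
    public static Vector3? Pick(Space space, Vector3 origin, Vector3 direction)
    {
        RayCastResult result;
        if (space.RayCast(new Ray(origin, direction), 10000f, out result))
            return result.HitData.Location;
        return null;
    }
}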

In the meantime, though, I can move on to creating the brush and mesh placement tools. Mesh placement should be easy, as I want most objects to be touching the ground. Placing meshes will also work like a brush, so you can fill the landscape with loads of objects quickly. For now I have this video as a proof of concept for testing.

Some ideas on the placement tools:
– Mesh brushes will likely be done by way of Poisson disk sampling, as demonstrated here, so the spacing of objects looks natural and you don’t have to think much about how their placement looks (see the sketch after this list).
– Objects can still be changed individually if you wish. A single Selection tool will make it possible to change an object’s scaling and location.
– Rotation can be done on the basis of either having all objects orient towards the surface normal, or ignoring the Y component. Rocks, for example, are fine rotating in all directions, but trees and buildings should generally point up.
– A toggle switch for each object so you can snap its vertical position to the ground surface in one click.
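The linked demo covers proper Poisson disk sampling; as a simpler stand-in, here is the brute-force “dart throwing” variant, which gives the same even-but-random look and is fine for editor-time brushes:

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

public static class PoissonBrush
{
    // Propose random points inside the brush area and reject any that land
    // within minDist of an already accepted point. Stops after maxMisses
    // consecutive rejections, i.e. once the area is effectively full.
    public static List<Vector2> Sample(float width, float height,
                                       float minDist, int maxMisses, Random rng)
    {
        var points = new List<Vector2>();
        float minDistSq = minDist * minDist;
        int misses = 0;

        while (misses < maxMisses)
        {
            var candidate = new Vector2((float)rng.NextDouble() * width,
                                        (float)rng.NextDouble() * height);

            bool tooClose = false;
            foreach (var p in points)
            {
                if (Vector2.DistanceSquared(p, candidate) < minDistSq)
                {
                    tooClose = true;
                    break;
                }
            }

            if (tooClose)
                misses++;
            else
            {
                points.Add(candidate);
                misses = 0;
            }
        }
        return points;
    }
}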

Physical interactions with the objects will come a bit later. I will at least need a wrapper class to associate mesh instances with BEPU Entities.

The current game plan

Bye bye geo-clipmaps, here comes geo-mipmapping! I’ve been busy converting my terrain rendering code to use it from now on. It’s not fully complete, but the basics work. I just need to fix up the holes that appear at the borders between different detail meshes. The terrain system has advanced a lot from when I used geo-clipmaps. While it still only supports two textures, the cliff texture can be sampled twice at two different scales to remove any semblance of repetition. No fractals or noise patterns here; it’s all from pre-made texture images. I’m also messing around with a vignette effect, currently made part of the Depth of Field shader. The engine is also now running with a game screen framework built on top of the official XNA sample.

[screenshot: screen28-1]

Now to move on to the game structure itself. I’m not too keen on drawing UML diagrams or other fancy charts, partly because I have never gotten really comfortable with a diagram editor. I’d rather sketch them on paper and take photos with a phone. But I do have a tree-like structure to organize my code in.

This planning comes easier now that I’ve figured out the first thing I want to do with my terrain renderer: put it in a game state that will later become the in-game map editor.

The puzzle game I made earlier was a good learning experience in understanding how a screen and menu system can be put together, where multiple states and screens run independently of each other. The game may not be 100% done, but its code is stable enough for me to port the screen system into this new game. Because this will be the most complex game I’ve attempted, I look forward to seeing how far I can take it. With a loading screen and transitions to game modes in place, it will at least finally start feeling like something with a greater purpose than a tech demo.

The graphics engine is still a work in progress so I will work on it together with making the game. The game code will be organized in three different areas: Core, Game, and Screens.

Core

  • Graphics engine (my own)
  • Physics engine (BEPU or maybe Bullet)
  • File saving/loading
  • Input
  • Networking
  • Screen system (from my last game)
  • Menu interactions
  • Screen drawing/updating system

Game

  • Game logic, AI
  • Player interactions
  • Game objects
  • Editor
  • Editing tools and functions

Screens

  • Background(s)
  • Loading screen
  • Menus
  • Gameplay modes
  • HUDs and interfaces

Core contains all the systems that deal with the lower-level workings of the game. Sending data to the graphics engine, setting up and managing physics, input management, and the loading and saving of files all go here.

Game contains all the game-specific logic: setting the rules, game modes, and specific interactions with game objects. These all tie into Core in some way, depending on what they are responsible for doing. A more specific area, Editor, includes all the tools and functions used for the game’s map editor mode.

Screens can be seen sort of like game states, and also like components that, when grouped together, describe a game state or mode of behavior. They are loaded and run by the screen system, and either specialize in displaying information related to the game or tell the user what actions are available. Background screens, gameplay screens, HUDs, inventory screens, and menus all belong here.

As you may have noticed, the three groups tend to progress from low-level to high-level code. This was not really intended, but does give me a better idea of how to pass data around.

The graphics engine is already running in the screen system. When the program launches, it adds a Screen to a list, which loads the content to be rendered. Here is the game loading a terrain in real-time, with some interactions handled by an “Editor” screen.

(lolwut, YouTube detected this video to be shaky. It’s all rigid camera movements in here)

There are a few issues I have to take care of with the screens and graphics. Both the screen system and the graphics engine are loaded as XNA game components, which means they draw and update automatically within the game, outside of the screen system’s control. Although the content loading code is in the Editor screen, I need to be able to explicitly choose the order in which graphics are drawn, so that any graphics set up in a particular screen get drawn with that screen’s Draw call.

Triplanar normal mapping for terrain

First, before getting into terrain normal mapping, I added mouse picking for objects. I have some interactivity now!

[screenshot: screen26-1]

This was taken from an XNA code sample, which I then modified to support instanced meshes. So now it’s able to pick the exact instances that the ray intersects, and it displays their mesh names. It doesn’t do anything more than that for now, but it’s the first step towards editing objects in the level editor.

Mapping the terrain

The new update fixes a problem that’s been bugging me for a few weeks- combining normal mapping with triplanar texturing. It was a tricky affair, as the normal maps get re-oriented along three planes, so you also have to shift the normals accordingly. After revising how I did my regular normal mapping for other objects, I was able to get correct triplanar normal mapping for the terrain. This goes for both forward and deferred rendering.

I have only two regular textures- the base texture for mostly flat areas, and a blend texture for cliffs in steep areas. My normal map is for the cliff texture, and no normal mapping is applied to the flat areas. You can also set a bump intensity, which increases the roughness of the terrain. Naturally, with great roughness comes great responsibility- fewer specular highlights. So you have to tune the specular and roughness to achieve a good balance. Most of the time, terrain doesn’t need specular lighting, but it’s needed for wet and icy areas.

Bump up the volume

Terrain normals, binormals, and tangents are all calculated on the CPU, which is the ideal way to go, as it saves the overhead of recomputing them every frame. In the vertex shader, the normal, binormal, and tangent are transformed to view space and packed into a 3×3 matrix.

output.TangentToWorld[0] = mul(normalize(mul(input.tangent, World)), View);
output.TangentToWorld[1] = mul(normalize(mul(input.binormal, World)), View);
output.TangentToWorld[2] = mul(normalize(mul(input.Normal, World)), View);

In the main pixel shader function, we must first compute the normal mapping output before it can be combined with the vertex normals for output.

PixelShaderOutput PixelTerrainGBuffer(VT_Output input)
{
    PixelShaderOutput output;

    // Sample normal map color. 4 is the texture scale
    float3 normal = TriplanarNormalMapping(input, 4);

    // Transform the normal from tangent space into view space,
    // then output it in [0, 1] space
    float3 normalFromMap = mul(normal, input.TangentToWorld);
    normalFromMap = normalize(normalFromMap);
    output.Normal.rgb = 0.5f * (normalFromMap + 1.0f);

    // ... Then output the other G-Buffer stuff

    return output;
}

The textures are expected to be in the [0, 1] range and TriplanarNormalMapping outputs them to [-1, 1] so they are properly transformed with the TBN matrix. After that we can set the normals right back to the [0, 1] range for the lighting pass. Remember that it outputs to an unsigned format, so if we don’t do this, all values below zero will be lost.

The following function computes triplanar normal mapping for terrains.

float3 TriplanarNormalMapping(VT_Output input, float scale = 1)
{
    float tighten = 0.3679f;

    float mXY = saturate(abs(input.Normal.z) - tighten);
    float mXZ = saturate(abs(input.Normal.y) - tighten);
    float mYZ = saturate(abs(input.Normal.x) - tighten);

    float total = mXY + mXZ + mYZ;
    mXY /= total;
    mXZ /= total;
    mYZ /= total;

    float3 cXY = tex2D(normalMapSampler, input.NewPosition.xy / scale).rgb;
    float3 cXZ = float3(0, 0, 1);
    float3 cYZ = tex2D(normalMapSampler, input.NewPosition.zy / scale).rgb;

    // Convert texture lookups to the [-1, 1] range
    cXY = 2.0f * cXY - 1.0f;
    cYZ = 2.0f * cYZ - 1.0f;

    float3 normal = cXY * mXY + cXZ * mXZ + cYZ * mYZ;
    normal.xy *= bumpIntensity;
    return normal;
}

Note that where I define the texture lookups, the XZ plane is just set to a normal pointing directly towards the viewer. The X and Y values are in the [-1, 1] range, and Z is by default 1 because it is not used for view-space coordinates. So don’t forget to flip normalized negative values! Then X and Y are multiplied by the bumpIntensity. The default bump intensity is 1, and an intensity of 0 will completely ignore the normal map for the final output.

A lot of my texture mapping code was adapted from Memoirs of a Texel. Take caution if you want to follow that guide: there is a glaring mistake in its code that I noticed only after seeing this GPU Gems example (see Example 1-3). You need to clamp your weight values to between 0 and 1 before averaging them out; the blog article doesn’t do this in its code, and you will otherwise get many dark patches in your textures. I fixed this with the saturate() function shown in the example above. This goes for regular texture mapping as well as normal mapping.

Here are some screenshots with the normal mapping in place. The bump intensity is set to 1.8 for a greater effect.

Edit: I’ve used some better textures for testing now. I got some free texture samples at FilterForge.

[screenshots: screen27-4, screen27-3, screen26-4, screen26-2]

Normal computation is the same for forward rendering as it is for deferred rendering. The normals as they contribute to lighting would still be in the [0, 1] range in view space.

Some screenshots of forward rendering

Here are a few quick test screens of forward rendering in the engine, now that I have added shader support for terrains. The differences are subtle but somewhat surprising. First, I’m testing out normal maps to see if they look OK.

[screenshots: screen25-2, screen25-3]

Now for some money shots- comparing deferred and forward side by side.

[screenshot: screen25-4]

Then I changed the splits so each half of the view is rendered separately. Deferred rendering is on the left (with shadows in the first image) and forward rendering is on the right.

[screenshots: screen25-5, screen25-6]

I’ve noticed some harsher lighting in the forward renderer, which is easier to see in the last screenshot, as shadows are disabled. Compared to deferred rendering, the colors all look somewhat more saturated. Even the skybox looks a little different. This does make the terrain more pronounced, though. I still haven’t figured out which steps in the shading process cause these differences. Perhaps it is the different surface formats I use for forward rendering (RGB10A2) versus deferred (HDR for lighting, RGBA8 for the G-Buffer), or how I am calculating the normal mapping. Update: I hadn’t applied gamma correction to the textures; after that was done, both screens look more or less the same.

These are not post-edits, by the way. They were taken in real-time, as I decided to use the engine’s ability to handle simultaneous rendering profiles and output the buffers to split-screen. This is a good way to compare and tweak results as I’m creating new shaders, like a “before and after” test. It’s simply a matter of adding an extra profile to the list, and the engine loops through them. The only big limits are buffer memory and GPU performance.

For these tests, both images are actually rendered at full resolution, but cropped/resized for the frame buffer. It’s not efficient, but it’s good enough for early testing right now. However, the ability to swap in a forward renderer is a game changer for development- now I can test and compile my builds on my old MacBook. It will be pretty interesting to see how it holds up on that crappy integrated GMA X3100 chip, which only supports the Reach XNA profile. The renderer will be made selectable at runtime depending on graphics adapter support.

This means possibly downgrading to Shader Model 2.0 for the forward renderer, or maybe just having two versions, so that if your computer supports SM 3.0, the more advanced one is used for better visuals. There’s no way I could transfer my current shadow mapping setup to 2.0 without some serious nerfing of features. As long as it doesn’t run too choppily on integrated graphics, I will be fine. Speaking of shader changes, all my scene shaders use the same vertex shader programs, so it looks like a good time to break up the .fx files into smaller parts.