SeedWorld (my voxel world engine) first update

So here’s what I’ve been working on for the past two weeks. I’ve wanted to build a procedural voxel-style world for some time and have finally started, only 4 years late on the voxel/cube graphics trend 😀

I am still facing a lot of technical issues, which is to be expected this early in development, but I’ve still made a lot of progress. I finally have an octave noise function that I am very satisfied with; it creates those very believable rolling hills you see a lot in procedural landscapes. Here is a breakdown of the current technical specs of the voxel world generation.

  • Voxel data is discarded as soon as chunk meshes are made. Chunks store only vertex data at the minimum*
  • Far draw distance (something I want to emphasize on faster PCs)
  • World divided into 32x32x256 chunks, with an area roughly 2000×2000 in size for the visible portion
  • Multi-threaded support for voxel and mesh generation

Future specs include:

  • Material determines the attributes in game, color gradients, and sounds for interaction feedback
  • Persistent storage for voxels only in the player’s immediate surroundings*
  • Different biomes which affect the visuals and interactivity of the materials

*This supports interactivity for making the world destructible, but only where it makes sense (near the player), and keeps the managed memory footprint low.

Voxel World – First Attempt

Eventually, I will want to make some sort of action/adventure game with the voxel/cube engine once it is fleshed out well enough. So far it has been a mostly smooth experience seeing what goes into the code. To get a head start I picked out some pre-existing code for 2D and 3D simplex noise. I quickly learned how readily the noise functions make a continuous texture without being repetitive, which is of huge importance when making a procedurally generated world.

I started working on this on Saturday the 6th, and made some decent progress by Sunday night, making a cube world with 3D Simplex noise and some mesh optimization to avoid rendering hidden cubes. The custom shader is very simple and low bandwidth, taking only a Vector3 for local position and a Byte4 for color. The mesh-building code also adds a shade of green to cubes that are visible from above, and the rest are brownish in color. This creates a somewhat convincing grass-and-dirt look. Finally I implemented some basic brute-force culling to avoid rendering invisible chunks. Quick and dirty procedural world!
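To make the mesh-building idea concrete, here is a Python sketch (illustrative only; the engine itself is not Python, and the colors and function names are made up). It shows the two rules described above: only emit cube faces whose neighboring voxel is empty, and tint voxels green when they are exposed from above, brown otherwise.

```python
# Illustrative sketch, not the actual engine code.
GREEN = (96, 160, 64)   # hypothetical "grass" color
BROWN = (120, 90, 60)   # hypothetical "dirt" color

def visible_faces(voxels, x, y, z):
    """Return the face directions of voxel (x, y, z) not hidden by a solid neighbor."""
    dirs = {"+x": (1, 0, 0), "-x": (-1, 0, 0), "+y": (0, 1, 0),
            "-y": (0, -1, 0), "+z": (0, 0, 1), "-z": (0, 0, -1)}
    faces = []
    for name, (dx, dy, dz) in dirs.items():
        if (x + dx, y + dy, z + dz) not in voxels:  # neighbor empty -> face visible
            faces.append(name)
    return faces

def voxel_color(voxels, x, y, z):
    # Green when the voxel is exposed from above, brown otherwise
    return GREEN if (x, y + 1, z) not in voxels else BROWN
```

The same neighbor test drives both the hidden-face removal and the grass-and-dirt look.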

Posted Image

Posted Image

I found some problems with this voxel-cube generator. I frequently ran into out-of-memory exceptions whenever I went higher than 192^3 cubes. I was splitting the world into 32^3-sized chunks, but that didn’t really help the memory problem. My guess is that the triple-nested loops used to calculate the noise values are wound too tight, and they might benefit from single-dimensional arrays. Also, I was storing the chunks in a resizable list, but it makes more sense to use a fixed-size array to keep track of them. Finally, while interesting to look at, the shapes produced by 3D noise weren’t very desirable, so I switched to 2D to make a more believable height map. From there, I will experiment with some 3D noise to get some interesting rock formations and caves going.
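The single-dimensional-array idea is simple enough to sketch. Instead of a jagged `[x][y][z]` structure, one flat allocation plus an index function covers a whole chunk (Python here for illustration; the axis ordering is an assumption):

```python
# One flat array per chunk instead of triple-nested arrays.
SX, SY, SZ = 32, 256, 32   # chunk dimensions (ordering is an assumption)

def index(x, y, z):
    """Map 3D voxel coordinates to a single flat-array index."""
    return (x * SY + y) * SZ + z

voxels = bytearray(SX * SY * SZ)   # one byte per voxel, one allocation
voxels[index(3, 100, 7)] = 1       # set a voxel
```

A single allocation is friendlier to the garbage collector than thousands of small nested arrays, which is likely where those out-of-memory exceptions were coming from.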

Improvements in Terrain Generation

When I was still tweaking different combinations of noise patterns, I could only come up with either very large, smooth, round hills, or many small but very bumpy ones. No repetition, but very bland to look at.

I had the basic idea down: combine many layers of Simplex noise at different frequencies, offsetting the X and Y for each of them just a little. But I had a derp moment when I realized I should be reducing the amplitude (effectively, the height variation) as I increase the frequency for best results. JTippets’ article on world generation really helped here.
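The fix boils down to a few lines. Here is a minimal octave-noise (fBm) sketch in Python; `noise2` is just a stand-in for a real 2D simplex function, and the offsets and parameter names are made up for illustration:

```python
import math

def noise2(x, y):
    # Stand-in for 2D simplex noise, just for demonstration;
    # a real implementation would go here.
    return math.sin(x * 12.9898 + y * 78.233)

def octave_noise(x, y, octaves=5, lacunarity=2.0, gain=0.5):
    """Sum noise layers, raising frequency and lowering amplitude each octave."""
    total, amplitude, frequency, max_amp = 0.0, 1.0, 1.0, 0.0
    for i in range(octaves):
        # small per-octave offset so the layers don't line up
        total += amplitude * noise2(x * frequency + i * 17.0,
                                    y * frequency + i * 31.0)
        max_amp += amplitude
        amplitude *= gain        # the key fix: less height variation...
        frequency *= lacunarity  # ...as the detail gets finer
    return total / max_amp       # normalize back to roughly [-1, 1]
```

With `gain` at 0.5 each finer layer contributes half the height of the previous one, which is what turns "many little bumpy hills" into rolling terrain.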

Here are some screenshots of various builds, in order of progression. Here is “revision 2” as it follows the first build mentioned in my last journal entry:

Posted Image

Already in revision 2 I have added optimized mesh generation to remove hidden faces. The wireframe render shows this well.

Revision 3 shows the vast improvements in terrain generation that I mentioned previously. The draw distance is improved, and the noise patterns create much more natural-looking hills and valleys. Color is determined by height variation and whether or not the block is a “surface” block. The white patches you see are the sides of steep hills whose top face isn’t visible.

Posted Image

Between revisions 3 and 4, I tried out ways to speed up voxel generation, mostly with octrees. That didn’t work out as planned, for reasons I will explain later in this post, so I went back to my previous way of adding voxels. The biggest feature update here is simple vertex ambient occlusion through extensive neighbor-voxel lookups.

Posted Image

Posted Image

It is a subtle update, but it improves the appearance of the landscape a lot. I applied the AO method discussed in the 0FPS blog. The solution is actually simple to implement; the tedious part was combining the numerical ID lookups for all the neighbor voxels so that each side is lit correctly. I should really change those numbers into enums for voxel locations so the code is less confusing.
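The core rule from the 0FPS article fits in a few lines. For each vertex of a cube face you look at the three voxels diagonally adjacent to it (two "side" neighbors and one "corner" neighbor) and compute an occlusion level, sketched here in Python:

```python
def vertex_ao(side1, side2, corner):
    """0FPS-style ambient occlusion level for one cube-face vertex.

    side1, side2, corner: True if the corresponding neighbor voxel is solid.
    Returns 0 (fully occluded, darkest) through 3 (fully open).
    """
    if side1 and side2:
        return 0  # both edges blocked: fully occluded regardless of corner
    return 3 - (int(side1) + int(side2) + int(corner))
```

The special case for two solid sides is what keeps the shading consistent at inside corners; the tedious part, as noted above, is wiring up the right neighbor lookups for all 24 face-vertex combinations.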

Here is a screenshot showing just the AO effect.

Posted Image

It was around revision 4 that I made a Git repo for the project, which has also been uploaded to a private Bitbucket account.

Performance stats, you say? Unfortunately I am not yet counting the FPS in the engine, and I believe the way I’m using a stopwatch to track chunk update times is wrong: when it reads 15 milliseconds (about 67 FPS) the program becomes incredibly slow, as if it were updating only twice per second, but at 10 milliseconds or less it runs silky smooth without any jerky movement.

What I can tell you, though, is that I’m currently sticking to updating just one 32x32x256 chunk per frame in order to keep that smooth framerate. At 60 chunks per row, it’s still quick enough for world generation to keep up with movement up to around 25 blocks/second. This is throttled by a variable that tells the program how many “dirty” chunks it should update per frame. My processor is a Pentium G3258, a value CPU but still decent for many modern games (as long as they don’t depend heavily on multi-threading), especially since it is overclockable; I have mine at 4.2 GHz. If you have a CPU that can run 4 threads, or has 4 cores or more, you should be able to update several chunks per frame very easily.
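The throttling variable described above amounts to a small dirty-chunk queue with a per-frame budget. A Python sketch (class and method names are made up for illustration):

```python
from collections import deque

class ChunkUpdater:
    """Rebuild at most `budget` dirty chunks per frame."""

    def __init__(self, budget=1):
        self.budget = budget     # the throttle variable from the post
        self.dirty = deque()     # chunks waiting for a mesh rebuild
        self.rebuilt = []        # stand-in for the finished meshes

    def mark_dirty(self, chunk):
        self.dirty.append(chunk)

    def update(self):
        # Called once per frame: drain only up to `budget` chunks
        for _ in range(min(self.budget, len(self.dirty))):
            self.rebuilt.append(self.dirty.popleft())  # stand-in for mesh rebuild
```

Raising `budget` trades frame time for faster world catch-up, which is exactly the knob a quad-core CPU lets you turn up.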

About using octrees: I have not seen any performance gains from them so far. I wanted to use octrees to find potentially visible voxels without brute-forcing through every voxel in the array. The good news: I got the octrees to technically work (and did some nice test renders), and I learned how to do so using Z-curve ordering and Morton encoding, so at least I gained some interesting knowledge. The bad news: reducing the number of voxel lookups with octrees did not let me update more chunks per frame, which was the ultimate goal. So I am putting the octree-related code aside for now; maybe it will come in handy later.
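For readers who haven’t met Morton encoding before: it interleaves the bits of the x, y, and z coordinates into a single integer, so that voxels close in 3D space tend to be close in memory (the Z-curve ordering mentioned above). A standard bit-twiddling version, sketched in Python for 10-bit coordinates:

```python
def part1by2(n):
    """Spread the bits of a 10-bit integer so they occupy every third bit."""
    n &= 0x3FF
    n = (n | (n << 16)) & 0xFF0000FF
    n = (n | (n << 8))  & 0x0300F00F
    n = (n | (n << 4))  & 0x030C30C3
    n = (n | (n << 2))  & 0x09249249
    return n

def morton3(x, y, z):
    """Interleave x, y, z bits into a single Z-curve (Morton) index."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)
```

The resulting index doubles as a linear octree key: each group of three bits picks one of the eight children at the next level down.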

Persistent local voxel storage concept, and future updates

The persistent storage for local voxels is definitely something I want to implement and make a key feature of my engine. Keeping voxel data for the entire visible world is wasteful; really, you only need to know what is immediately around you. After all, if you have a pickaxe, you are not going to reach a block that is 500 meters away. This data store will update as you move around the world, holding at most 4 chunks’ worth of voxels.
This can be applied further with other objects that may interact with the world surface. Say you are a mage that can cast a destructive fireball. Upon impact, you want to calculate the voxel data for the area around the fireball so it can make a crater. Or an ice ball to freeze the surface. Obviously you want these calculations to be done very quickly, so it sounds like a good way to stress test the engine with lots of fireballs and who knows what else being thrown around.

Other features I want to add soon are pseudo-random rock formations and slope-steepness measurement, which will help in generating other pseudo-random elements. I’ll probably add the voxel trees first, to add more to the landscape.

One Game A Month: Here comes a new game project!

What? Another game already? That’s right, but this one will not be as big as my racing game project, which I expect to be ongoing for several months and likely at least a year. No, this will be a short-term project, planned for just one month as part of the One Game A Month quest. I want to get in the habit of finishing games quicker. (Maybe then I could rename the blog Electronic Meteor Games! Imagine that.) I want a game I can make more quickly and easily, one that leverages the coding experience I have gained so far. So it will re-use some of the code I’m working on right now, refactored to fit the needs of the game.

The game will be a twin-stick top-down shooter. The idea may not be original, but carrying it out should be fairly easy. I do not have a name for it yet; I only know that its features will include multiple levels, upgradeable weapons, local multiplayer (not sure yet if I can finish online networking code in a month), and a cool lighting atmosphere for the graphics. So, basically what one may expect from a top-down shooter. Characters and setting will be fairly abstract and basic. I don’t have much know-how for modeling human characters, so it will be robots blasting other robots.

Here are the main goals I intend to follow for the month-long project:

  • Simplistic but nice to look at graphics and setting
  • Multiple weapons and enemy types
  • Controller support (gotta really get a controller for that though 😛 )
  • Learn some more AI programming as I go along
  • Use what I learned from Meteor Engine for the graphics
  • A lighting mechanic to hide/show parts of the map (somewhat of a “fog of war” for a shooter)

I have been mostly inspired by some of the fast-paced games being put up on Steam Greenlight to do a top-down shooter. It’s a genre that is simple fun and engaging for many people, and I believe that a (stripped down) top-down shooter can be another good game for budding programmers, comparable to platform games. So for this month, I will be slowing down progress of the racing game to work on this one.

On the AI side, I have been reading this set of tutorials on creating a state machine. Many game programmers may be familiar with the game-state switching pattern used to structure a complete game. These tutorials take it further, applying it in other ways, like setting up rooms for a dungeon crawler or computer-controlled AI characters that follow their own motives. The latter is the one I’m most interested in. I plan to implement the tutorial code in this game to give me a head start on the AI. It won’t be pretty, but the functionality is what counts here.
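For reference, the pattern itself is tiny. Here is a minimal state-machine sketch in Python; the state names (`Patrol`, `Chase`) and the `log` field are made up for illustration and are not from the tutorials:

```python
class State:
    """Base state: enter/exit hooks plus a per-frame update."""
    def enter(self, fsm): pass
    def exit(self, fsm): pass
    def update(self, fsm): pass

class StateMachine:
    def __init__(self, initial):
        self.state = initial

    def change(self, new_state):
        # Let the old state clean up, then initialize the new one
        self.state.exit(self)
        self.state = new_state
        self.state.enter(self)

    def update(self):
        self.state.update(self)

class Patrol(State):
    def update(self, fsm):
        fsm.log = "patrolling"   # stand-in for walking a route

class Chase(State):
    def enter(self, fsm):
        fsm.log = "chase started"  # stand-in for acquiring a target
```

An enemy robot would call `change(Chase())` when it spots the player, and the enter/exit hooks keep the transition logic out of the per-frame code.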

For graphics, I mentioned the Meteor Engine, but I will not be using it as-is. Rather, the game will have its own graphics code that will take several ideas from the engine. It will be a trimmed down, sort of “lite” version of the engine code, using mainly deferred rendering for graphics. The intent is to provide a setting with many moving lights, and most outdoor daytime scenes aren’t good for that. Features include mostly dark rooms, bullets that light up the room along the path they take, reflective surfaces on characters and level objects, and point light shadows. A lot of the visual inspiration comes from the Frank Engine demo, so expect the game to look more or less like that.

I will code this with XNA, as usual, but I will also try to make it portable to MonoGame. I have been researching this for a while, but my attempts to port any of my code to other platforms haven’t gone well so far. MonoGame (in its current 3.0 version) on Mac seems to be a no-go with Snow Leopard: the Apple SDKs are not up to date with what MonoDevelop uses, so I would have to upgrade Xcode to 4.2, which requires a Lion upgrade. Not up for doing that right now. So it will likely run on Linux before Mac 😛 The cross-platform support is not part of the month-long deadline; like online multiplayer, it’s just something I would like to do to take my game further.

I would like to start programming the game today if I want to finish it before the 30th. The plan for today: use a placeholder model for the character, draw everything with basic graphics, and make the character shoot in all directions. At that point it’s not very different, logically, from a scrolling shoot-em-up. So look forward to more posts related to my month-long game. It’s been a while since I actually released a game, and I want this to be the most complete game I’ve released so far.

Terrain picking solved

Finally, I figured out how to do precise terrain picking, so now I can make a lot of progress! As I’m going to be using BEPU for the physics engine, I just let BEPU do the dirty work. Using it to create a terrain height field and do ray intersection testing is pretty intuitive. Storing 4 million points is no problem for it, but I may look into its source code to see how it does the intersection tests so efficiently.
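To be clear, this is not how BEPU does it (its height-field tests are much smarter), but a naive ray-march gives a feel for what terrain picking has to compute. A Python sketch, with a hypothetical `height_at(x, z)` callback standing in for the height field:

```python
def pick_terrain(height_at, origin, direction, max_dist=1000.0, step=0.5):
    """Walk along the ray; return the first sample at or below the terrain."""
    x, y, z = origin
    dx, dy, dz = direction
    t = 0.0
    while t < max_dist:
        px, py, pz = x + dx * t, y + dy * t, z + dz * t
        if py <= height_at(px, pz):
            return (px, py, pz)  # hit (could be refined by bisection)
        t += step
    return None  # ray left the terrain without hitting it
```

The fixed step size is the weakness here: a real implementation steps through height-field cells exactly, which is presumably part of what makes BEPU's version so efficient.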

In the meantime, though, I can move on to creating the brush and mesh placement tools. Mesh placement should be easy, as I want most objects to touch the ground. Placed meshes will also be treated as brushes, so you can fill the landscape with loads of objects quickly. For now I have this video as a proof of concept for testing.

Some ideas on the placement tools:
– Mesh brushes will likely use Poisson disk sampling as demonstrated here, so the spacing of objects looks natural and you don’t have to think much about how their placement looks.
– Objects can still be changed individually if you wish. A single Selection tool will make it possible to change an object’s scaling and location.
– Rotation can be done on the basis of either having all objects orient towards the surface normal, or ignoring the Y component. Rocks, for example, are fine rotating in all directions, but trees and buildings should generally point up.
– A toggle switch for each object so you can snap its vertical position to the ground surface in one click.
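The spacing rule behind Poisson-disk-style placement can be sketched with simple dart throwing: accept random points only if they keep a minimum distance from everything placed so far. (The linked demonstration uses a faster algorithm; this Python sketch, with made-up parameters, only illustrates the spacing constraint.)

```python
import random

def scatter(width, depth, min_dist, attempts=2000, rng=None):
    """Dart-throwing sketch: random points that keep `min_dist` spacing."""
    rng = rng or random.Random(42)  # fixed seed for repeatable layouts
    points = []
    for _ in range(attempts):
        p = (rng.uniform(0, width), rng.uniform(0, depth))
        # accept only if the new point is far enough from all accepted points
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= min_dist ** 2
               for q in points):
            points.append(p)
    return points
```

Each accepted point would become one placed mesh instance, which is why the result looks evenly filled without a visible grid.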

Physical interactions with the objects will come a bit later. I will at least need a wrapper class to associate mesh instances with BEPU Entities.

The current game plan

Bye bye geo-clipmaps, here comes geo-mipmapping! I’ve been busy converting my terrain rendering code to use it from now on. It’s not fully complete, but the basics work; I just need to fix the holes that appear at the borders between different detail meshes. The terrain system has advanced a lot since I used geo-clipmaps. While it still supports only two textures, the cliff texture can be sampled twice at two different scales to remove any semblance of repetition. No fractals or noise patterns here; it’s all from pre-made texture images. I’m also messing around with a vignette effect, currently part of the Depth of Field shader. The engine is also now running with a game screen framework built on top of the official XNA sample.

screen28-1

Now to move on to the game structure itself. I’m not too keen on drawing UML diagrams or other fancy charts, partly because I have never gotten really comfortable with a diagram editor. I’d rather sketch them on paper and take photos with a phone. But I do have a tree-like structure to organize my code in.

This planning comes easier as I figure out the first thing I want to do with my terrain renderer is to put it in a game state that will later on be the in-game map editor.

The puzzle game I made earlier was a good learning experience in understanding how a screen and menu system can be put together, where multiple states and screens run independently of each other. The game may not be 100% done, but its code is stable enough for me to port the screen system into this new game. Because this would be the most complex game I’ve attempted, I look forward to seeing how far I can take it. Once a loading screen and transitions to game modes are in place, it will finally start feeling more like something with a greater purpose than a tech demo.

The graphics engine is still a work in progress so I will work on it together with making the game. The game code will be organized in three different areas: Core, Game, and Screens.

Core

  • Graphics engine (my own)
  • Physics engine (BEPU or maybe Bullet)
  • File saving/loading
  • Input
  • Networking
  • Screen system (from my last game)
  • Menu interactions
  • Screen drawing/updating system

Game

  • Game logic, AI
  • Player interactions
  • Game objects
  • Editor
  • Editing tools and functions

Screens

  • Background(s)
  • Loading screen
  • Menus
  • Gameplay modes
  • HUDs and interfaces

Core contains all the systems that deal with the lower-level workings of the game. Sending data to the graphics engine, setting up and managing physics, input management, and the loading and saving of files all go here.

Game contains all the game-specific logic: setting the rules, game modes, and specific interactions with game objects. These all tie into Core in some way, depending on what they are responsible for. A more specific area, Editor, includes all the tools and functions used for the game’s map editor mode.

Screens can be seen sort of like game states, and also like components that, when grouped together, describe a game state or mode of behavior. They are loaded and run from the screen system, and either specialize in displaying information related to the game or tell the user what actions are available. Background screens, gameplay screens, HUDs, inventory screens, and menus all belong here.

As you may have noticed, the three groups tend to progress from low-level to high-level code. This was not really intended, but does give me a better idea of how to pass data around.

The graphics engine is already running in the screen system. When the program launches, it adds a Screen to a list, which loads the content to be rendered. Here is the game loading a terrain in real-time, with some interactions handled by an “Editor” screen.

(lolwut, YouTube detected this video to be shaky. It’s all rigid camera movements in here)

There are a few issues I have to take care of with the screens and graphics. Both the screen system and the graphics engine are loaded as XNA game components, which means they draw and update automatically within the game, outside of the screen system’s control. Although the content loading code is in the Editor screen, I need the option to choose explicitly what order the graphics are drawn in, so that any graphics set in a particular screen get drawn with that screen’s Draw call.

Triplanar normal mapping for terrain

First, before having gotten into terrain normal mapping, I added mouse picking for objects. I have some interactivity now!

screen26-1

This is taken from an XNA code sample, which I modified to support instanced meshes. So now it’s able to pick the exact instances that the ray intersects and display their mesh names. It doesn’t do anything other than that for now, but it’s the first step towards editing objects in the level editor.

Mapping the terrain

The new update fixes a problem that’s been bugging me for a few weeks: combining normal mapping with triplanar texturing. It was a tricky affair, as the normal maps get re-oriented along three planes, so you also have to shift the normals accordingly. After revising how I did my regular normal mapping for other objects, I was able to get correct triplanar normal mapping for the terrain. This goes for both forward and deferred rendering.

I have only two regular textures: the base texture for mostly flat areas, and a blend texture for cliffs in steep areas. My normal map is for the cliff texture; no normal mapping is applied to the flat areas. You can also set a bump intensity, which increases the roughness of the terrain. Naturally, with great roughness comes great respons… fewer specular highlights. So you have to tune the specular and roughness to achieve a good balance. Most of the time, terrain doesn’t need specular lighting, but it’s needed for wet and icy areas.

Bump up the volume

Terrain normals, binormals, and tangents are all calculated on the CPU, which is the ideal way to go, as it saves the overhead of doing it every frame. In the vertex shader, the normal, binormal, and tangent are transformed to view space and packed into a 3×3 matrix.

output.TangentToWorld[0] = mul(normalize(mul(input.tangent, World)), View);
output.TangentToWorld[1] = mul(normalize(mul(input.binormal, World)), View);
output.TangentToWorld[2] = mul(normalize(mul(input.Normal, World)), View);

In the main pixel shader function, we must first compute the normal-map sample before combining it with the interpolated vertex basis.

PixelShaderOutput PixelTerrainGBuffer(VT_Output input)
{
    PixelShaderOutput output = (PixelShaderOutput)0;

    // Sample the normal map. 4 is the texture scale
    float3 normal = TriplanarNormalMapping(input, 4);

    // Transform the tangent-space normal with the TBN matrix,
    // then pack it into the [0, 1] range for the G-Buffer
    float3 normalFromMap = mul(normal, input.TangentToWorld);
    normalFromMap = normalize(normalFromMap);
    output.Normal.rgb = 0.5f * (normalFromMap + 1.0f);

    // ... Then output the other G-Buffer stuff

    return output;
}

The normal-map textures are stored in the [0, 1] range, and TriplanarNormalMapping converts them to [-1, 1] so they are properly transformed with the TBN matrix. After that, we can map the normals right back to the [0, 1] range for the lighting pass. Remember that the G-Buffer uses an unsigned format, so if we don’t do this, all values below zero will be lost.

The following function computes triplanar normal mapping for terrains.

float3 TriplanarNormalMapping(VT_Output input, float scale = 1)
{
    float tighten = 0.3679f;

    float mXY = saturate(abs(input.Normal.z) - tighten);
    float mXZ = saturate(abs(input.Normal.y) - tighten);
    float mYZ = saturate(abs(input.Normal.x) - tighten);

    float total = mXY + mXZ + mYZ;
    mXY /= total;
    mXZ /= total;
    mYZ /= total;

    // .rgb avoids the implicit float4-to-float3 truncation warning
    float3 cXY = tex2D(normalMapSampler, input.NewPosition.xy / scale).rgb;
    float3 cXZ = float3(0, 0, 1);   // flat areas: no normal map, points at the viewer
    float3 cYZ = tex2D(normalMapSampler, input.NewPosition.zy / scale).rgb;

    // Convert texture lookups to the [-1, 1] range
    cXY = 2.0f * cXY - 1.0f;
    cYZ = 2.0f * cYZ - 1.0f;

    float3 normal = cXY * mXY + cXZ * mXZ + cYZ * mYZ;
    normal.xy *= bumpIntensity;
    return normal;
}

Note that where I define the texture lookups, the XZ plane is just set to a normal pointing directly towards the viewer: its X and Y are 0, and Z is 1 by default because that plane is not normal mapped. For the sampled planes, don’t forget to convert the values from [0, 1] back to [-1, 1], or the negative components will be lost. Then X and Y are multiplied by bumpIntensity. The default intensity is 1, and an intensity of 0 will completely ignore the normal map in the final output.

A lot of my texture mapping code was adapted from Memoirs of a Texel. Be careful if you want to follow that guide: there is a glaring mistake in its code that I noticed only after seeing this GPU Gems example (see example 1-3). You need to clamp your weight values between 0 and 1 before averaging them out; the blog article doesn’t do this, and without the clamp you will get many dark patches in your textures. I fixed this with the saturate() function shown in the example above. This goes for regular texture mapping as well as normal mapping.

Here are some screenshots with the normal mapping in place. The bump intensity is set to 1.8 for a greater effect.

Edit: I’ve used some better textures for testing now. I got some free texture samples at FilterForge.

screen27-4

screen27-3

screen26-4

screen26-2

Normal computation is the same for forward rendering as it is for deferred rendering. The normals as they contribute to lighting would still be in the [0, 1] range in view space.

Looking back, and more game details

It’s been about a year and a half since I started this blog, and back then my engine project was only 2 months old. This January was the first time my blog received more than 1000 visits in a month, and with February being the second month in a row, I hope it continues. My most popular posts used to be the code sample articles (I should probably update those), but now more of my recent posts have been catching on. I still plan to post some helpful code articles for variety, but they will be based more on what I’m currently working on.

Also, I have started my own GameDev journal to show my progress to the community, and I hope to continue promoting my game there. I used to think you needed to be a paid member to create journals, but I guess that’s no longer the case. This blog will still be updated more liberally and frequently, while articles in the journal will be posted with a bit more consideration and preparation.

Outside of this blog, I have gotten more involved in the game development scene by going to local meetups, usually organized by IGDA members, hearing the war stories of experienced developers (indie and AAA), and learning ways to get your games to a finished, workable state, along with other ways to make your life easier. Meeting people here has been fun, and although I haven’t collaborated with anyone on a project, it’s not out of the question. I would also like to get into a game jam in person, whenever I can schedule enough time for it.

Racing game details

Hey, you know that racing game that’s supposed to happen? Well, I still plan on making it happen, and my ideas are becoming a bit more solidified now. I always planned on making anything but a super-serious racing sim, because those take too long to get right, with all the details in the physics and the authenticity of the look and feel. Also, I won’t have permission to use real-life cars in the game. Previously I said I wanted to make an off-road racer, but that might still lead to some high expectations for the landscapes and environment, and I would have to limit how the vehicles handle. So for now, it’s a general arcadey racer that will just happen to have some trucks and ATVs in it.

Now on to some of the main features of the game.

Multiplayer

One of the things I want my racing game to have: MULTIPLAYER. That’s a big one. I want people to race other people, especially across different computers, which means network play. I have never made a networked game before, so this part is completely new to me. I don’t know how to set up a waiting area or find available players online. Should I use a client-server or a P2P architecture? So many things to figure out. Lidgren is a popular library for online gaming in XNA, which I will probably use anyway, for reasons stated later. Work on this likely won’t start until about halfway through development.

Before that, and easier to test, is to have local multiplayer. My graphics engine will make split-screen support a breeze, with the way I can hook up different cameras to the same scene and draw it several times. The multiple controller support should be easy as well.

Customizing tracks

Another big one is, wait for it: track creation. Yeah, it sounds ambitious, but I figure if I am going to make the track creation tools easy for me to program, they might as well be user-friendly and simple enough for players to use. See ModNation Racers and a bit of TrackMania for inspiration, but more so ModNation Racers, because its editor looks more fun and inviting to use. And I don’t mean to brag much, but holy cow, my graphics engine looks about as good as MNR’s… I’m basically halfway there 😉 Their splatting techniques do not use a million textures, and cliff textures automatically appear on steep sides.

“Why are you making a clone?” you may ask. Well, it’s obviously one of my favorite games, and it gets me inspired. Other people have made clones for more than just learning purposes, like Counter-Strike clones or, _gulp_, Minecraft clones. (Is it just me, or are Minecraft clones the new MMO clones?) One important point, however, is that MNR is only sold on Sony’s platforms, and I do not plan to compete directly with the other popular customizable racers. Besides, TrackMania and TrackVerse seem to co-exist well on the PC, and TrackVerse has put its own spin on the genre with a faster-paced WipEout feel. It would be great if a WipEout-like game could come out for XBLA or XBLIG, but I digress.

Other platforms

That brings me to another feature I plan later down the road, which is cross-platform support. The ultimate goal would be, aside from Windows, to run it natively on Mac and Linux. This obviously means I cannot use XNA forever, and going to MonoGame for non-Windows builds looks like the most attractive option. For those platforms I will need to rebuild the engine library with the MonoGame references instead of the XNA ones. The sooner I can test out the engine on Linux, the better. Lidgren is also used with MonoGame, which will smooth out porting the networking code when I get to it.

The reason I’m sticking with XNA for Windows is that it uses DirectX, while MonoGame for Windows uses OpenGL, and driver compatibility is more of an issue with OpenGL on Windows. I had Linux Mint installed before but removed it a few months ago. DarkGenesis’ blog looks like a good resource for porting MonoGame applications, so I’m bookmarking it for now.

Gradually I will be integrating the screen system code that I wrote for the puzzle game I worked on a few months ago. It includes (almost) everything I need to create menus and screen states for the game. This code is big enough to be in its own library, and it may be easier if I just build it as such.

So these are some of the main features of the game. It’s going to be a tough road ahead, but I think I can make a lot of progress in a few months. Of these three, the first one I will work on is the track editor. It’s going to be the backbone of the game’s development process, so it makes sense to begin there.

Some screenshots of forward rendering

Here are a few quick test screens of forward rendering in the engine, now that I have added shader support for terrains. The differences are subtle but somewhat surprising. First I’m testing normal maps to see if they look OK.

screen25-2

screen25-3

Now for some money shots- comparing deferred and forward side by side.

screen25-4

Then I changed the splits so each half of the view is rendered separately. Deferred rendering is on the left (with shadows in the first image) and forward rendering is on the right.

screen25-5
screen25-6

I’ve noticed some harsher lighting in the forward renderer, which is easier to see in the last screenshot, since shadows are disabled. Compared to deferred rendering, the colors all look somewhat more saturated; even the skybox looks a little different. This does make the terrain more pronounced, though. I still haven’t figured out which steps in the shading process cause these differences. Perhaps it is the different surface formats I use for forward rendering (RGB10A2) versus deferred (HDR for lighting, RGBA8 for the G-Buffer), or how I am calculating the normal mapping. Update: I hadn’t applied gamma correction to the textures; after that was done, both screens look more or less the same.

These are not post-edits by the way. These are taken in real-time, as I decided to use the engine’s ability to handle simultaneous rendering profiles and output the buffers to split-screen. This is a good way to compare and tweak various results as I’m creating new shaders, like a “before and after” test. It’s simply a matter of adding an extra constructor to the list of profiles, and the engine loops through them. The only big limits are the buffer memory and GPU performance.

For these tests, both images are actually rendered at full resolution, then cropped and resized for the frame buffer. It’s not efficient, but it’s good enough for early testing right now. However, the ability to swap in a forward renderer is a game changer for development: now I can test and compile my builds on my old MacBook. It will be interesting to see how it holds up on that crappy integrated GMA X3100 chip, which only supports the Reach XNA profile. The renderer will be selectable at runtime depending on graphics adapter support.

This means possibly downgrading to Shader Model 2.0 for the forward renderer, or maybe keeping two versions, so that if your computer supports SM 3.0 you get the more advanced one for better visuals. There’s no way I could transfer my current shadow mapping setup to 2.0 without seriously nerfing features. As long as it doesn’t run too choppy on integrated graphics, I will be fine. Speaking of shader changes, all my scene shaders use the same vertex shader programs, so it looks like a good time to break the .fx files into smaller parts.