The current game plan

Bye bye geo-clipmaps, here comes geo-mipmapping! I’ve been busy converting my terrain rendering code to use it from now on. It’s not fully complete, but the basics work; I just need to fix up the holes that appear at the borders between different detail meshes. The terrain system has advanced a lot since I was using geo-clipmaps. While it still only supports two textures, the cliff texture can be sampled twice at two different scales to remove any obvious repetition. No fractals or noise patterns here; it’s all from pre-made texture images. I’m also messing around with a vignette effect, currently part of the Depth of Field shader. The engine is also now running with a game screen framework built on top of the official XNA sample.

screen28-1

Now to move on to the game structure itself. I’m not too keen on drawing UML diagrams or other fancy charts, partly because I’ve never gotten really comfortable with any diagram editor. I’d rather sketch them on paper and take photos of them with my phone. But I do have a tree-like structure to organize my code in.

This planning comes easier now that I’ve figured out the first thing I want to do with my terrain renderer: put it in a game state that will later become the in-game map editor.

The puzzle game I made earlier was a good learning experience in understanding how a screen and menu system can be put together, where multiple states and screens run independently of each other. The game may not be 100% done, but its code is stable enough for me to port the screen system into this new game. Because this would be the most complex game I’ve attempted, I look forward to seeing how far I can take it. With a loading screen and transitions to game modes in place, it will finally start feeling like something with a greater purpose than a tech demo.

The graphics engine is still a work in progress, so I will develop it alongside the game. The game code will be organized into three areas: Core, Game, and Screens.

Core

  • Graphics engine (my own)
  • Physics engine (BEPU or maybe Bullet)
  • File saving/loading
  • Input
  • Networking
  • Screen system (from my last game)
  • Menu interactions
  • Screen drawing/updating system

Game

  • Game logic, AI
  • Player interactions
  • Game objects
  • Editor
  • Editing tools and functions

Screens

  • Background(s)
  • Loading screen
  • Menus
  • Gameplay modes
  • HUDs and interfaces

Core contains all the systems that deal with the lower-level workings of the game. Sending data to the graphics engine, setting up and managing physics, handling input, and loading and saving files all go here.

Game contains the game-specific logic: setting the rules, game modes, and interactions with game objects. These all tie into Core in some way, depending on what they are responsible for. A more specific area, Editor, includes all the tools and functions used for the game’s map editor mode.

Screens can be seen sort of like game states, and also like components that, when grouped together, describe a game state or mode of behavior. They are loaded and run by the screen system, and either specialize in displaying information related to the game or tell the user what actions are available. Background screens, gameplay screens, HUDs, inventory screens, and menus all belong here.

As you may have noticed, the three groups tend to progress from low-level to high-level code. This was not really intended, but does give me a better idea of how to pass data around.

The graphics engine is already running in the screen system. When the program launches, it adds a Screen to a list, which loads the content to be rendered. Here is the game loading a terrain in real-time, with some interactions handled by an “Editor” screen.

(lolwut, YouTube detected this video to be shaky. It’s all rigid camera movements in here)

There are a few issues I have to take care of with the screens and graphics. Both the screen system and graphics engine are loaded as XNA game components, which means they draw and update automatically within the game, outside of the screen system’s control. Although the content loading code is in the Editor screen, I need the option to choose explicitly what order the graphics are drawn in, so that any graphics set up in a particular screen get drawn in that screen’s Draw call.
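
One way I could fix this, as a rough sketch (GraphicsEngine and GameScreen are stand-ins for my actual classes): a DrawableGameComponent with Visible set to false still updates automatically but no longer draws on its own, so the screen can call Draw itself.

using Microsoft.Xna.Framework;

// Sketch: keep the graphics engine as a game component for updates,
// but suppress its automatic Draw and call it from the owning screen.
public class EditorScreen : GameScreen
{
    GraphicsEngine graphics; // a DrawableGameComponent

    public override void LoadContent()
    {
        // Update() still runs automatically, but Draw() no longer does
        graphics.Visible = false;
    }

    public override void Draw(GameTime gameTime)
    {
        // The screen now controls exactly when its graphics render
        graphics.Draw(gameTime);
    }
}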

Triplanar normal mapping for terrain

First, before getting into terrain normal mapping, I added mouse picking for objects. I have some interactivity now!

screen26-1

This started as an XNA code sample, which I modified to support instanced meshes. So now it’s able to pick the exact instances that the ray intersects, and it displays their mesh names. It doesn’t do anything else for now, but it’s the first step towards editing objects in the level editor.
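
The gist of the picking test, as a simplified sketch (MeshInstance and its bounds are placeholders for my types; the actual sample does more precise triangle tests):

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Cast a ray from the mouse cursor into the scene and return the
// closest intersected instance.
MeshInstance PickInstance(int mouseX, int mouseY, Viewport viewport,
    Matrix view, Matrix projection, IEnumerable<MeshInstance> instances)
{
    // Unproject the mouse point at the near and far planes to form a ray
    Vector3 near = viewport.Unproject(new Vector3(mouseX, mouseY, 0),
        projection, view, Matrix.Identity);
    Vector3 far = viewport.Unproject(new Vector3(mouseX, mouseY, 1),
        projection, view, Matrix.Identity);
    Ray ray = new Ray(near, Vector3.Normalize(far - near));

    MeshInstance picked = null;
    float closest = float.MaxValue;

    foreach (MeshInstance instance in instances)
    {
        // Move the bounding sphere into world space before testing
        BoundingSphere bounds = instance.Bounds.Transform(instance.World);
        float? hit = ray.Intersects(bounds);
        if (hit.HasValue && hit.Value < closest)
        {
            closest = hit.Value;
            picked = instance;
        }
    }
    return picked;
}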

Mapping the terrain

The new update fixes a problem that’s been bugging me for a few weeks: combining normal mapping with triplanar texturing. It was a tricky affair, as the normal maps get re-oriented along three planes, so you also have to shift the sampled normals accordingly. After revising how I did my regular normal mapping for other objects, I was able to get correct triplanar normal mapping for the terrain. This goes for both forward and deferred rendering.

I have only two regular textures: the base texture for mostly flat areas, and a blend texture for cliffs in steep areas. My normal map is for the cliff texture, and no normal mapping is applied to the flat areas. You can also set a bump intensity, which increases the roughness of the terrain. Naturally, with great roughness comes great respons- less specular highlights. So you have to tune the specular and roughness together to achieve a good balance. Most of the time, terrain doesn’t need specular lighting, but it’s needed for wet and icy areas.

Bump up the volume

Terrain normals, binormals, and tangents are all calculated on the CPU, which is the ideal way to go, as it saves the overhead of recomputing them every frame. In the vertex shader, the normal, binormal, and tangent are transformed to view space and packed into a 3×3 matrix.

// Build the TBN matrix by rotating the tangent-space basis into view space
output.TangentToWorld[0] = mul(normalize(mul(input.tangent, World)), View);
output.TangentToWorld[1] = mul(normalize(mul(input.binormal, World)), View);
output.TangentToWorld[2] = mul(normalize(mul(input.Normal, World)), View);
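
For reference, the CPU side boils down to taking slopes along the grid axes with central differences and crossing them. A simplified sketch, assuming a regular heightmap grid (interior points only, without the edge clamping my real code needs):

using Microsoft.Xna.Framework;

// Build the tangent frame for an interior grid point. heights[x, z]
// holds terrain heights, cellSize is the spacing between grid points.
void ComputeTangentFrame(float[,] heights, int x, int z, float cellSize,
    out Vector3 normal, out Vector3 tangent, out Vector3 binormal)
{
    // Slope along the X axis
    tangent = Vector3.Normalize(new Vector3(
        2 * cellSize, heights[x + 1, z] - heights[x - 1, z], 0));

    // Slope along the Z axis
    binormal = Vector3.Normalize(new Vector3(
        0, heights[x, z + 1] - heights[x, z - 1], 2 * cellSize));

    // Cross the two slopes for the surface normal (points up on flat ground)
    normal = Vector3.Cross(binormal, tangent);
}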

In the main pixel shader function, we must first compute the normal mapping output before it can contribute to the G-Buffer’s normal output.

PixelShaderOutput PixelTerrainGBuffer(VT_Output input)
{
    PixelShaderOutput output;

    // Sample the normal map. 4 is the texture scale
    float3 normal = TriplanarNormalMapping(input, 4);

    // Transform the normal with the TBN matrix,
    // then pack it into the [0, 1] range for the G-Buffer
    float3 normalFromMap = mul(normal, input.TangentToWorld);
    normalFromMap = normalize(normalFromMap);
    output.Normal.rgb = 0.5f * (normalFromMap + 1.0f);

    // ... Then output the other G-Buffer stuff

    return output;
}

The texture data is expected to be in the [0, 1] range, and TriplanarNormalMapping remaps it to [-1, 1] so it can be properly transformed with the TBN matrix. After that we pack the normals right back into the [0, 1] range for the lighting pass. Remember that the G-Buffer uses an unsigned format, so if we don’t do this, all values below zero will be lost.

The following function computes triplanar normal mapping for terrains.

float3 TriplanarNormalMapping(VT_Output input, float scale = 1)
{
    // Threshold to sharpen the blend between projection planes
    float tighten = 0.3679f;

    // Blend weights for each plane, taken from the vertex normal
    float mXY = saturate(abs(input.Normal.z) - tighten);
    float mXZ = saturate(abs(input.Normal.y) - tighten);
    float mYZ = saturate(abs(input.Normal.x) - tighten);

    // Normalize the weights so they sum to 1
    float total = mXY + mXZ + mYZ;
    mXY /= total;
    mXZ /= total;
    mYZ /= total;

    // Sample the normal map on the XY and YZ planes; the XZ plane
    // just uses a flat normal pointing towards the viewer
    float3 cXY = tex2D(normalMapSampler, input.NewPosition.xy / scale).xyz;
    float3 cXZ = float3(0, 0, 1);
    float3 cYZ = tex2D(normalMapSampler, input.NewPosition.zy / scale).xyz;

    // Convert texture lookups to the [-1, 1] range
    cXY = 2.0f * cXY - 1.0f;
    cYZ = 2.0f * cYZ - 1.0f;

    // Blend the three plane samples and scale the bump strength
    float3 normal = cXY * mXY + cXZ * mXZ + cYZ * mYZ;
    normal.xy *= bumpIntensity;
    return normal;
}

Note that where I define the texture lookups, the XZ plane is just set to a normal pointing directly towards the viewer. The X and Y values of the samples end up in the [-1, 1] range, while Z stays at 1 by default for that plane. This is why the unsigned texture data has to be expanded back to signed values before blending; don’t forget that step! Then X and Y are multiplied by bumpIntensity. The default intensity is 1, and an intensity of 0 will completely ignore the normal map in the final output.

A lot of my texture mapping code was adapted from Memoirs of a Texel. Take caution if you want to follow that guide: there is a glaring mistake in its code that I noticed only after seeing this GPU Gems example (see example 1-3). You need to clamp your weight values to between 0 and 1 before averaging them out, which the blog article doesn’t do in its code. Otherwise you will get many dark patches in your textures. I fixed this with the saturate() calls shown in the example above. This goes for regular texture mapping as well as normal mapping.

Here are some screenshots with the normal mapping in place. The bump intensity is set to 1.8 for a greater effect.

Edit: I’ve used some better textures for testing now. I got some free texture samples at FilterForge.

screen27-4

screen27-3

screen26-4

screen26-2

Normal computation is the same for forward rendering as it is for deferred rendering. The normals as they contribute to lighting would still be in the [0, 1] range in view space.

Some screenshots of forward rendering

Here are a few quick test screens of forward rendering in the engine, now that I have added shader support for terrains. The differences are subtle but somewhat surprising. First I’m testing the normal maps to see if they look OK.

screen25-2

screen25-3

Now for some money shots- comparing deferred and forward side by side.

screen25-4

Then I changed the splits so each half of the view is rendered separately. Deferred rendering is on the left (with shadows in the first image) and forward rendering is on the right.

screen25-5
screen25-6

I’ve noticed some harsher lighting in the forward renderer, which is easier to see in the last screenshot, as shadows are disabled. Compared to the deferred rendering, the colors all look somewhat more saturated. Even the skybox looks a little different. This does make the terrain more pronounced, though. I hadn’t figured out which steps in the shading process caused these differences. Perhaps it was the different surface formats I use for forward rendering (RGB10A2) versus deferred (HDR for lighting, RGBA8 for the G-Buffer), or how I calculate the normal mapping. Update: it turned out I wasn’t applying gamma correction to the textures; after that was done, both sides look more or less the same.

These are not post-edits, by the way. They are taken in real-time, as I decided to use the engine’s ability to handle simultaneous rendering profiles and output the buffers to split-screen. This is a good way to compare and tweak results as I’m creating new shaders, like a “before and after” test. It’s simply a matter of adding an extra profile constructor to the list, and the engine loops through them. The only big limits are buffer memory and GPU performance.

For these tests, both images are actually rendered at full resolution, then cropped/resized for the frame buffer. It’s not efficient, but it’s good enough for early testing right now. However, the ability to swap in a forward renderer is a game changer for development: now I can test and compile my builds on my old MacBook. It will be interesting to see how it holds up on that crappy integrated GMA X3100 chip, which only supports the XNA Reach profile. The renderer will be selectable at runtime depending on what the graphics adapter supports.

This means possibly downgrading to Shader Model 2.0 for the forward renderer. Or maybe just keeping two versions, so if your computer supports SM 3.0 it uses the more advanced one for better visuals. There’s no way I could transfer my current shadow mapping setup to 2.0 without some serious nerfing of features. As long as it doesn’t run too choppy on integrated graphics, I will be fine. Speaking of shader changes, all my scene shaders use the same vertex shader programs, so it looks like a good time to break up the .fx files into smaller parts.

Forward rendering and scene objects

Now it’s time to get serious about ramping up production. I’ve brought up wanting to make a level editor before, so here’s how I plan on making it. The rendering pipeline isn’t as good as I thought it was. Part of this has to do with how XNA’s content processors create models and meshes for your game, but another part is how I am handling the instancing of meshes.

The problem with the code is more in accessing a particular object in the scene than in creating one. I don’t see myself implementing a loading/saving system for scenes without fixing this first.

First, an aside: For a while I knew that the pine tree model I used in previous screenshots had some issues with how it was made, and I was right. It had a suspiciously high poly count. For reasons unknown to me, it had literally hundreds of overlapping polygons in several areas. Fortunately, they were hiding underneath the canopy, so removing them didn’t really change its overall appearance.

And remove them I did. I may not be a modeling wiz with Blender, but I know enough to “buff out” meshes in need of some clean-up. I set a temporary shortcut to delete faces with a single click and, with much mouse abuse, cut the mesh down from almost 2700 polys to under 900: a 67% reduction! After the fix, the test scene’s render time dropped by 25ms, and the model is now actually usable for the game. I also improved the alpha testing so the mipmapped textures look good at all distances, and added a kind of texture splatting that draws a different texture on steep slopes.

screen25-1

A new way to draw

I started to introduce the forward renderer to my engine. This will provide an option for low-performance hardware, when deferred rendering is too slow. Currently it only supports one directional light, no shadows, and the terrain shader isn’t complete yet, but it’s already testable. I ran my test scene, and it averaged about 6.25 to 6.67 milliseconds per frame, while the deferred renderer (with shadows and terrain disabled) took 9 to 10 milliseconds per frame. Plus, having hardware MSAA is always nice.

It didn’t come without some significant rework of the scene rendering code, but this turned out to be a blessing in disguise. Older versions of the engine used the mesh’s GBuffer effect to draw objects in the Scene Renderer, but that’s of no use when I don’t want deferred rendering. I could have added some forward rendering shaders to the effect file, but then it would no longer be just a GBuffer effect; it would be a mash of many different rendering schemes. I chose to keep forward rendering in its own effect, and instead have the SceneRenderer set the current effect you want to use, and have that (hopefully) render the scene correctly.

There are two overloads for drawing a model in the Scene Renderer: one with a GBuffer effect and one with a custom effect. Once I removed the G-Buffer requirement, the overloads made sense in a different way: the custom-effect overload doesn’t set parameters for camera settings, while the “standard”, previously G-Buffer-only overload does. Now my rendering system is a bit more generic, determined by whether or not you pass the camera variables directly to the Scene Renderer.
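
In rough terms, the two overloads now look something like this (heavily simplified; Scene, Camera, and DrawScene are stand-ins for my actual code):

using Microsoft.Xna.Framework.Graphics;

// The "standard" overload: the Scene Renderer sets up the camera
// parameters on the effect before drawing.
public void Draw(Scene scene, Effect effect, Camera camera)
{
    effect.Parameters["View"].SetValue(camera.View);
    effect.Parameters["Projection"].SetValue(camera.Projection);
    DrawScene(scene, effect);
}

// The custom-effect overload: the caller has already configured the
// effect, so no camera parameters are touched here.
public void Draw(Scene scene, Effect effect)
{
    DrawScene(scene, effect);
}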

So in the end, only the Shader classes are responsible for supplying the effects and sending the appropriate one to the Scene Renderer. With this, implementing the forward shader class was a lot easier, and I created a new Render Profile to use it with.

Object pooling

After I brought instancing to the engine, the methods were altered to directly change either the last instance (by default) or, given a number, the instance at that index. That isn’t going to cut it in the long run.

For a user interface, you shouldn’t need to care where the object’s mesh comes from. It’s more intuitive to think of objects as separate entities. So I will introduce a ModelEntity class, which defines a position, rotation, and scale, and, most importantly, holds a reference to exactly one instance of a model. Entities just need distinct names so you can tell which is which.

The current code to make two identical models would be:

scene.Model("fighterShipModel"); // Makes a ship model as it didn't exist yet
scene.Model("fighterShipModel").NewInstance();

The Model method lazy-loads: if the model doesn’t exist yet, it loads the content and creates the first instance, but if it already exists, it just returns the model. This has a hidden bug/gotcha… if you only typed the second line, you’d get two ships regardless! Also, the only way to access the instances individually is to remember the order in which you made them, or to assign an InstancedModel object explicitly. Otherwise it’s lost in the void forever (or, to be more accurate, stuck at the origin).
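
The lazy loading looks roughly like this (a simplified sketch of the idea, as fields and a method on my scene class), and you can see where the gotcha comes from:

using System.Collections.Generic;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

// Fields on the scene class
ContentManager content;
Dictionary<string, InstancedModel> models =
    new Dictionary<string, InstancedModel>();

// Simplified sketch of the lazy-loading Model method
public InstancedModel Model(string modelName)
{
    InstancedModel model;
    if (!models.TryGetValue(modelName, out model))
    {
        // Not loaded yet: load the content and create the first instance.
        // This is the gotcha: Model("x").NewInstance() on a model that
        // doesn't exist yet creates one instance here AND one more there.
        model = new InstancedModel(content.Load<Model>(modelName));
        model.NewInstance();
        models.Add(modelName, model);
    }
    return model;
}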

With the new Entity system, the new code should read something like this:

scene.AddModelEntity("yourFighterShip", "fighterShipModel");
scene.AddModelEntity("comradeFighterShip", "fighterShipModel");

Maybe I will just call the method AddEntity instead; it could be much clearer. Only the rendering code needs to access everything about the model’s mesh data. With that said, my scenes would benefit from additional object pools; not necessarily for improved performance, but to group the data differently for use in different contexts. An Entity only needs to know about its position, rotation, color, what model represents it, and so on. It won’t simply be a part of a list of mesh data instances.

So the next thing to do is have a pool of Entities for easy lookup by name. Optionally, the program can name an entity for you, because the name may not be very important. An example is many rock meshes copied hundreds of times as part of the scenery. They are still individual models, but you don’t care what each one is called. All you care about is that they are rocks and that you can select them and move them around. Eventually, when the physics system is in place, they can have their own collision shapes as well. Fortunately, BEPU should make this simple, because it has a straightforward way of accessing all of them.
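
The pool could be as simple as a dictionary plus a counter for auto-generated names. A sketch of the idea, not final code (the ModelEntity constructor shown is hypothetical):

using System.Collections.Generic;

// Sketch of the entity pool: look up entities by name, and generate
// names like "rockModel_12" when the caller doesn't care what it's called.
public class EntityPool
{
    Dictionary<string, ModelEntity> entities =
        new Dictionary<string, ModelEntity>();
    int autoNameCounter = 0;

    public ModelEntity Add(string entityName, string modelName)
    {
        ModelEntity entity = new ModelEntity(modelName);
        entities.Add(entityName, entity);
        return entity;
    }

    // Let the pool pick a name when it doesn't matter (scenery, etc.)
    public ModelEntity Add(string modelName)
    {
        return Add(modelName + "_" + (autoNameCounter++), modelName);
    }

    public ModelEntity this[string entityName]
    {
        get { return entities[entityName]; }
    }
}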

More outdoor lighting, and starting the game

Right now I’m at a point where I can see art becoming a more important thing to consider. I might resort to making some of my own models, and get re-acquainted with modeling tools. I used to work with Maya at school but now it has to be Blender or nothing.

But there are a few tweaks I made to the directional lighting. In particular, the ambient term now contributes to the shadowed areas more directly, instead of being added in the final step where the base textures are blended with the lighting render target. All this while still staying a bit shy of the instruction limit of the very heavy directional light shader (oh boy, will it be fun trying to squeeze this into a single-pass shader for forward rendering). Just as notable, I managed to push back the far bounds of the shadow cascade blends, which means you get crisp shadows at a longer viewing distance! Here is how it looks after a few blending tweaks.

screen24-1

The plant models you see here are provided by BKcore. He does some pretty awesome work of his own, so I suggest giving his site a look (it almost makes me want to try 3D in web browsers). These models in particular are very good at pushing what the shadow renderer can do. And if you’re not a fan of crisp shadows, you can increase the kernel size to spread the samples more.

Guess it’s also time for a new video so here it is. The latest build of the engine in progress. Activate your HD buttons for an optimal experience.

Oh yeah, there’s a game….

I’ve also officially begun work on the game, by which I mean I created a new project and called it “OffroadRacing”. Right now it’s nothing more than some of the assets copied over, plus some cameras and physics being set up. The physics aren’t working with the rendering yet: I can make entity bodies and update them, but they’re not bound to the models. It’s proving trickier than I thought because of how instancing is set up in my engine, and I probably want to make it possible for each instance to have its own rigid body.

The world scene hierarchy is as follows: Scene -> Model -> Mesh -> Instance. A scene has all unique models, with unique names; models have multiple meshes, each with its own world matrix (for independent movement); and each mesh can have multiple instances. The reason I instance by mesh and not by model is that the mesh is where each separate vertex buffer is stored, and larger models with many meshes can be culled more finely.
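
As a skeleton, that hierarchy looks something like this (member names are illustrative, not my exact code):

using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Skeleton of the hierarchy: Scene -> Model -> Mesh -> Instance
public class Scene
{
    // All unique models, keyed by unique name
    public Dictionary<string, InstancedModel> Models =
        new Dictionary<string, InstancedModel>();
}

public class InstancedModel
{
    public List<SceneMesh> Meshes = new List<SceneMesh>();
}

public class SceneMesh
{
    // Each mesh keeps its own world matrix for independent movement,
    // and owns its vertex buffer, so instancing happens at this level
    public Matrix World;
    public List<Matrix> Instances = new List<Matrix>();
}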

However, for a program outside the engine’s scope, accessing a specific instance of a mesh is still cumbersome. The instances aren’t treated as particles, so I still want a good degree of control for moving and modifying them; they are only treated as batches for rendering. I am considering making MeshInstance a more high-level class, something where you can hold a reference to the more important objects and use a string to name them. Maybe the player wouldn’t care whether his vehicle is kicking up “Rock_523” or mowing down “Grass_118”, but for level creation it will work in a pinch for setting up physics on various objects.

Once I can get some cubes to rain down on the ground, I will get into testing static bodies, and then perhaps finally get some kinematic action going. From there I will shift to making more personalized levels to travel through, adding some triangle picking for placing objects as a start.

Here’s my list of the more important things to work on:

Racing game

  • EVERYTHING!!! (but especially starting on the physics)
  • A rudimentary level editor; it won’t have much of a visual interface at first
  • Navigable physical body, just to move the camera for now

Engine

  • Easier manipulation of mesh instances
  • Object picking
  • Forward rendering
  • Impostor/billboard drawing
  • Easier-to-configure shaders

By the way, I need to reorganize the categories of my posts. Basically everything here is related to XNA in some way, but I don’t have too many categories to start with, so I don’t really need sub-categories that don’t make sense. Anyways, that’s all for now.

Engine changes on the way

Just a minor update this time. I haven’t done much more with the terrain or rendering code, but here’s what I am planning next for the engine.

In addition to terrain and culling, I’m doing a bunch of work on the internals themselves, to make the code simpler and more readable. Gone will be the DragCamera and FreeCamera classes, which were really not much more than wrappers with XNA input functionality shoehorned in. From now on, input will be decided by the user, as I felt it shouldn’t be the responsibility of the rendering engine to decide what controls what.

Additionally, debug features will be in their own class, with the input also separated. This will probably lead to the development of an immediate-mode GUI, which would be just enough to do some of the more common actions for scene editing. I don’t really want to make a full-fledged GUI library, because there are too many of those already, and it’s easier to code GUI elements with methods instead of several objects and events.

Back on the rendering side of things: some more cleanup of shader code, and figuring out a way to finally add full specular mapping support without bloating the G-Buffer too much. If that fails, I’ll bite the bullet and use an additional buffer for specular data. Also, a forward renderer! It seems like everyone and their grandma is focused on deferred lighting these days, leaving the simple yet overlooked forward rendering in the dust. Deferred rendering and lighting is not always the silver bullet for graphics… it has a relatively high upfront cost and isn’t worth it for basic outdoor lighting situations. It should be friends with forward rendering, which is still faster for many setups.

More user-friendliness

I don’t want users to dig too deep into the codebase to make the configuration changes they want in their game. For instance, right now the only way to change the Gaussian blur factor is to go into the effect class that uses the blur you want to change, and edit the parameter value there. I want shader parameters to be editable at a high level, without exposing the internals of the class that uses them. If possible, users should be able to use the engine as a library and still tweak all the settings for the built-in effects.

This leads to more accessible methods to edit or create scenes as well. Along with the GUI interaction mentioned above, the changes you make should be persistent. Yep, that means loading and saving files for scenes. This feature will come eventually, but I already think JSON would be a good format to represent the data. It’s concise, and there are many serializers/parsers for JSON out there, which is why I prefer it.
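
As a quick sketch of what that could look like with Json.NET (the data shape here is hypothetical, just to show the round trip):

using System.IO;
using Newtonsoft.Json;

// Hypothetical data shape for one saved scene entity
public class EntityData
{
    public string Name;
    public string ModelName;
    public float[] Position; // x, y, z
}

// Serialize a list of entities to a scene file, and read it back
public static class SceneFile
{
    public static void Save(string path, EntityData[] entities)
    {
        File.WriteAllText(path,
            JsonConvert.SerializeObject(entities, Formatting.Indented));
    }

    public static EntityData[] Load(string path)
    {
        return JsonConvert.DeserializeObject<EntityData[]>(
            File.ReadAllText(path));
    }
}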

With these changes, I can hopefully get closer to releasing an official candidate for people to download.

Big progress on geo clipmaps and terrain

First I have to mention that three weeks ago, I attended an IGDA meetup in my town, and it went pretty well. I got to catch up with a friend I met at a previous meetup, and even managed to meet a developer whom I had previously known only from YouTube. The topic for the meetup was game jam survival techniques. It provided some insight into making games under a very tight deadline, and we got to hear some humbling experiences from people who have gone through many of those battles and how they managed through them.

It’s reassuring to know that completing the game is not the only important goal; knowing how to work as a team and re-evaluating your strengths and weaknesses matter even more. The bottleneck that developers overlook is not their game’s performance but their own performance. Figuring out what you can cut to meet your goals in less time is important. You can shave some milliseconds off a scene render, but it’s better to cut days or weeks from your work for the sake of completion.

Last time I mentioned that I wanted to make a racing game. It will be an off-road racing game with an emphasis on speed, long courses, and arcade-like action. So if the DiRT series is considered to fall somewhere between sim and arcade, I consider my own game to be between arcade and DiRT. I’ve had a few other game ideas in the past, mostly that puzzle game, which I finished but lacked the will to polish, add more levels and modes to, etc. I want to get back to it eventually, but for now I’d like to continue on my main project: the terrain engine and editor for the racing game.

Keeping this in mind, I decided to take a more headstrong approach to my progress. Working too long on one feature is mind-numbing and demotivating, and I seem to work better rearranging and re-organizing my task priorities, ignoring the ones that seem more wasteful. I guess this can be considered agile programming for one person.

A start on geo clipmaps

With that said, I wanted to shift focus to fixing the initial bugs and shortcomings of the terrain. It was tempting to start a map editor alongside it, but that had problems of its own, with me trying to get viewports playing nicely with graphics device states and other controls (WinForms definitely isn’t my strongest point). So I continued headstrong into making my terrain look and perform better.

Previously I said that I would start making a basic clipmap system for the terrain. So that’s what I’ve worked on for the past few days, and man, did I make a lot of progress. Remember that desert scene I had at the beginning? I went back to square one and tested some clipmapping with wireframes.

First I created a new InnerClipmap class, which is responsible for setting vertices for only a portion of the terrain, given a position that acts as the center. The Terrain class is responsible for keeping the entire heightmap in memory and drawing whatever the clipmaps send to it. I made the camera and the dude the center in various tests, and the terrain meshes move in stepped increments along with them.

clipmaps1

You cannot see it here, but all those grids are solid squares- they overlap each other in the center. Obviously we cannot draw redundant vertices, as it’s wasteful and produces ugly Z-fighting.

The next step was to “punch” a hole right through those outer clipmaps. I’ve read a few articles about geo clipmaps, but I didn’t follow any of them to a tee. For instance, the GPU Gems article suggests splitting the clipmaps into parts and moving those parts individually. That seemed too thorough to me. All I know is, if each clipmap doubles in size, each hole will be (roughly) between 1/4 and 3/4 of the grid’s width and height. So, if vertices fall in that area, they won’t be added to the index buffer.
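
The index-building loop then just skips any quad that falls inside the hole. Something like this (a simplified sketch, assuming a square grid of gridSize cells per side):

using System.Collections.Generic;

// Build the index list for one clipmap ring, skipping quads in the hole
// (the middle half of the grid in both directions). Vertices are laid
// out in a (gridSize + 1) x (gridSize + 1) grid.
List<int> BuildRingIndices(int gridSize)
{
    List<int> indices = new List<int>();
    int holeStart = gridSize / 4;      // hole begins at 1/4 of the grid...
    int holeEnd = 3 * gridSize / 4;    // ...and ends at 3/4

    for (int z = 0; z < gridSize; z++)
        for (int x = 0; x < gridSize; x++)
        {
            // Skip cells in the hole; the next finer clipmap covers them
            if (x >= holeStart && x < holeEnd && z >= holeStart && z < holeEnd)
                continue;

            int topLeft = z * (gridSize + 1) + x;
            int topRight = topLeft + 1;
            int bottomLeft = topLeft + gridSize + 1;
            int bottomRight = bottomLeft + 1;

            // Two triangles per quad
            indices.Add(topLeft); indices.Add(bottomLeft); indices.Add(topRight);
            indices.Add(topRight); indices.Add(bottomLeft); indices.Add(bottomRight);
        }

    return indices;
}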

clipmaps2

Bam, holes are cut! But they are too big: some sides don’t align completely, so inner rows and columns need to be inserted according to some specific positioning rules, which I didn’t get around to fixing until much later. There’s also another thing to note. I only update the vertex and index buffers when the clipmaps should move, instead of re-calculating them every frame. This wasn’t too tough to accomplish: I store the previous center position, and if the current position passes a threshold (usually the grid cell size), we rebuild the buffers.
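
The movement check itself is just a distance test against the center the buffers were last built around. A sketch (lastCenter and cellSize are fields, and RebuildBuffers stands in for my update code):

using System;
using Microsoft.Xna.Framework;

// Only rebuild the clipmap buffers once the center has moved at least
// one grid cell away from where they were last built.
void UpdateClipmap(Vector3 center)
{
    if (Math.Abs(center.X - lastCenter.X) >= cellSize ||
        Math.Abs(center.Z - lastCenter.Z) >= cellSize)
    {
        // Snap the new center to the grid so the vertices don't swim
        lastCenter = new Vector3(
            (float)Math.Floor(center.X / cellSize) * cellSize, 0,
            (float)Math.Floor(center.Z / cellSize) * cellSize);

        RebuildBuffers(lastCenter);
    }
}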

I went back to solid fill mode and re-loaded the scene. I intentionally chose a camera position and direction where the clipmaps have little space between them, but there’s still the matter of broken seams, a result of height values not being averaged out where the levels meet at the edges. But still, things are starting to look good again.

clipmaps3

I also decided to switch to a heightmap with more consistent rolling hills and mounds. Basically just a tileable Perlin noise texture that I found online. This allowed me to make the terrain much larger.

Textures, trees and more

Now, this is where I decided to move in another direction for a while. Since I’m starting to do more serious work with terrain, I figured it needs a few subtle things to make it even better. So for the first time, my scenes have depth fog. I hardcoded some values for color and exponential depth falloff in the composite shader, so everything rendered in the previous passes gets the fog treatment. These values will be user-assignable later on.

Also, I got tired of seeing deserts, so I switched to a grass texture. It’s the same one found in Riemer’s tutorial, which did help cut down the time to get started with heightmaps. But, man, does it look terrible in the distance.

terrain1

I solved this problem with a quick fix: sampling the same texture twice on the GPU. The first sample uses a small texture scale for close distances, and the second uses a larger scale for farther distances. The two samples are then blended with a formula similar to the one used for the fog. The result is that the ground looks less repetitive at any distance.

terrain2

There’s a basic but very useful function in my terrain code, adapted from an official XNA sample called Tank on a Heightmap. This was another real timesaver for me. Basically, it “pins” a tank to the surface height no matter where it is on the map. It also adjusts the angle based on the interpolation of normals, but that’s not something I need right now. I consider this a crucial step to efficiently building worlds and levels for my game.

How do I start using it? Well, for starters, all the previous screenshots are from builds where the dude is always walking on the ground. Basically, I use a function called Terrain.GetHeight(Vector3 position), and it returns the height, which I reassign to the position’s Y component. But wait: I can use it to place many objects on the ground in a loop, and what does a landscape need? Trees! I looked into a folder of tree models that I had downloaded for free online, and loaded a pine tree into my project. Added a random with a seed, set it to a loop, and that was fast.
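
The placement loop is about as simple as it sounds. A sketch, with the scene calls approximating my API (Translate() here is a stand-in for however the instance actually gets positioned):

using System;
using Microsoft.Xna.Framework;

// Scatter trees across the map, pinned to the terrain surface.
// A fixed seed means the same forest gets generated every run.
Random random = new Random(12345);

for (int i = 0; i < 1200; i++)
{
    Vector3 position = new Vector3(
        (float)(random.NextDouble() * terrainWidth), 0,
        (float)(random.NextDouble() * terrainLength));

    // Pin the tree to the ground
    position.Y = terrain.GetHeight(position);

    // Translate() is illustrative, not my exact method name
    scene.Model("pineTree").NewInstance().Translate(position);
}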

terrain3

All of a sudden, my efforts became a lot more fruitful. Looking at these trees, even if they’re not real, gave me a feeling of enlightenment and a better focus. I have a much clearer sense of direction for how to approach this project. Here’s another view, with depth of field added. (The speck in the lower left clouds isn’t a glitch- it’s a floating box I forgot to remove)

terrain4

Unfortunately, this silver lining has its cloud. What you don’t notice from these screenshots is that the trees (or rather, their polygon and vertex count) have slowed the rendering to a crawl. I placed anywhere between 1000 and 1500 instanced trees, and everything now runs chuggy. At its worst it can drop somewhere below 10 FPS. This is largely caused by the cascaded shadow mapping, because it re-draws a lot of those trees for every split. But hey, now at least we’re getting somewhere. The engine is getting a lot of heavy-duty work, so it’s time to strengthen it and put it through its paces. Part of the problem is the tree mesh itself, though: each one is 2992 triangles, heavy on the polycount if they are to be drawn a very large number of times.

I could find a better tree that’s much lighter on polygons, but still, some optimizations are in order. LOD grouping and occlusion culling to the rescue? I know I can brute-force cull 10,000 simple cubes with no problem, so for now brute-force culling isn’t slowing me down. So yeah, on to mesh LODs as one of the solutions, and then on to some serious editing.