

Fog Of War

May 29, 2021

A long long time ago, I was prototyping a basic city builder in Unity and implemented fog of war for it. To my surprise, fog of war is an effect that many developers struggle with, so much so that I released FogOfWar on the Unity Asset Store, and it has had quite a bit of success over the years. In this post, I will decipher the magic behind how fog of war works.


For those who are new to the concept of fog of war, these are the features we typically want:

  • Map is hidden by default.
  • Certain units clear a path in the fog as they move through it.
  • Different units may have different shapes (circles, cones, boxes, etc).
  • When leaving an unfogged area, it may return to being fogged or partially fogged.
  • Enemy units that are in the fog are hidden.

Fog of war is typically found in real-time strategy games, but can also be found in other top-down genres, such as shooters and adventure games.

Representing the Fog

Before we can do anything, we must understand what the fog is. Internally, we want to represent it as a texture that covers the entire map. This texture will contain a single 8-bit channel (R instead of RGBA), where 255 is fully fogged, and 0 is fully unfogged. Immediately you will see limitations in this:

  • The texture resolution will dictate how fine the precision will be. There are some tricks we can use later to avoid blurring.
  • All pixels must be rectangular. If you have a hexagonal grid, this can still be solved, but you will need to have a way to convert from the hex grid to the texture’s pixel and back. It’s important to have a texture representation so that we can make use of hardware acceleration later on!
  • There are only 256 shades/steps of fog. Since our monitors only output 256 shades per color channel, this should not be an issue. It may not be sufficient if you have slow-clearing fog, in which case you can increase the channel to 16 or 32 bits.
  • Anything outside the map range cannot be measured. Everything outside will need to be treated as completely fogged, or completely unfogged.
  • Only a finite area can be represented. This can be solved using chunking (we’ll discuss this later).
  • Only 2D spaces can be represented. For 3D fog volumes, the same logic here can be applied, but you will need to layer multiple fog textures on top of each other, or use a 3D texture.

As harsh as these limitations may seem, in my experience, this is adequate for 99% of use cases.
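
In Unity, for example, a texture like this can be allocated as follows. This is just a minimal sketch, assuming a recent Unity version (for SetPixelData) and a resolution value of your choosing:

// A minimal sketch: allocate a single-channel 8-bit texture, fully fogged.
Texture2D fogTexture = new Texture2D(resolution, resolution, TextureFormat.R8, false);
byte[] fogPixels = new byte[resolution * resolution];
for (int i = 0; i < fogPixels.Length; ++i)
	fogPixels[i] = 255; // 255 = fully fogged
fogTexture.SetPixelData(fogPixels, 0);
fogTexture.Apply();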

Representing the Map

Firstly, we need to state where the fog-able region of the map lies.

class FogMap
{
	Vector2 worldMinPosition;
	Vector2 worldMaxPosition;
}

If the worldMinPosition is (20, 30) and the worldMaxPosition is (40, 50), then the world center point of the map would be (30, 40). This is calculated by averaging the min and max points.

I really need to emphasise that these positions/scales are in world-space. This distinction is important because we will also be working in fog-space. Fog-space is the texture coordinates of the fog texture. You may want to represent this in pixels, or in normalized coordinates (0 to 1). I will be using normalized coordinates just to keep things clear.

So, the next step is to be able to convert between world-space and fog-space, and back again.

Vector2 WorldToFogSpace(Vector2 worldPos)
{
	// Inverse Lerp (the division is component-wise)
	return (worldPos - worldMinPosition) / (worldMaxPosition - worldMinPosition);
}

Vector2 FogToWorldSpace(Vector2 fogPos)
{
	// Lerp
	return (worldMaxPosition - worldMinPosition) * fogPos + worldMinPosition;
}
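
Since we will eventually be reading and writing individual pixels, a helper that converts normalized fog-space into pixel coordinates will also come in handy. This is a sketch; textureWidth and textureHeight are assumed variables holding the fog texture's dimensions, and the clamping stops out-of-range positions from indexing outside the texture:

// Convert normalized fog-space (0 to 1) to integer pixel coordinates.
Vector2Int FogToPixel(Vector2 fogPos)
{
	int x = Clamp((int)(fogPos.x * textureWidth), 0, textureWidth - 1);
	int y = Clamp((int)(fogPos.y * textureHeight), 0, textureHeight - 1);
	return new Vector2Int(x, y);
}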

With just this information, we can start doing the exciting stuff.

Manipulating the Fog

So we have a unit in world-space and we want to clear a simple box around its location. Too easy:

void ClearBoxInFog(Vector2 worldPos, Vector2 worldSize)
{
	Vector2 minFogPos = WorldToFogSpace(worldPos - worldSize / 2);
	Vector2 maxFogPos = WorldToFogSpace(worldPos + worldSize / 2);
	SetFogInBox(fogTexture, minFogPos, maxFogPos, 0);
}

The hard part here is SetFogInBox(). This will take the fog-space dimensions of the box and set all of those pixels in the texture to 0 (recall that 0 is completely unfogged).

The implementation of this really depends on your use case. Is your fog map relatively low resolution (roughly less than 128x128)? If this is the case, you may benefit from manipulating the texture all on the CPU by implementing your own rasterizer (multithreading is definitely possible here!). For any higher resolution, you will need to look at hardware acceleration. Using either compute shaders or rendering through the traditional render pipeline can work. Obviously these are out of scope for what we’re talking about here, so let’s move on.
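
For reference, a naive CPU implementation of SetFogInBox() might look something like this (an unoptimized sketch over a raw byte array, reusing the hypothetical FogToPixel() helper from earlier):

// Naive CPU rasterizer: fill a fog-space box with a single fog value.
void SetFogInBox(byte[] fogPixels, Vector2 minFogPos, Vector2 maxFogPos, byte value)
{
	Vector2Int min = FogToPixel(minFogPos);
	Vector2Int max = FogToPixel(maxFogPos);
	for (int y = min.y; y <= max.y; ++y)
		for (int x = min.x; x <= max.x; ++x)
			fogPixels[y * textureWidth + x] = value;
}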

I have used a box as an example here, but there are many other shapes you can easily support. You can add any shape that you can describe with a formula, and for any shape you can't, use a texture. Texture sampling on the CPU can be a pain to optimise, but you have no excuse if you're implementing it on the GPU.

Another point to make is that our fog is not binary (on or off). If you want to clear a circle, have the fog fade out along the edges to remove any aliasing.
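
As a sketch, a soft-edged circle clear can compare each pixel's world-space distance from the center against the radius, then ramp the fog value up over a falloff band (radius and falloff here are in world units, and the hypothetical helpers from earlier are reused):

// Clear a circle with a soft edge to avoid aliasing.
// Fog is 0 inside the radius and ramps up to 255 across the falloff band.
void ClearCircleInFog(byte[] fogPixels, Vector2 worldPos, float radius, float falloff)
{
	Vector2 extents = new Vector2(radius + falloff, radius + falloff);
	Vector2Int min = FogToPixel(WorldToFogSpace(worldPos - extents));
	Vector2Int max = FogToPixel(WorldToFogSpace(worldPos + extents));
	for (int y = min.y; y <= max.y; ++y)
	{
		for (int x = min.x; x <= max.x; ++x)
		{
			// find this pixel's center in world-space
			Vector2 fogPos = new Vector2((x + 0.5f) / textureWidth, (y + 0.5f) / textureHeight);
			float distance = Distance(FogToWorldSpace(fogPos), worldPos);
			// 0 inside the radius, fading to 255 at radius + falloff
			byte value = (byte)(Clamp01((distance - radius) / falloff) * 255);
			int index = y * textureWidth + x;
			fogPixels[index] = Min(fogPixels[index], value);
		}
	}
}

Taking the Min at the end means overlapping units never re-fog each other's pixels.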


Okay, so we’ve got our fog stored in a texture, and it’s clearing it up for us. But this isn’t going to interact well with our game unless we can check whether an enemy unit is in the fog or not.

bool IsUnitInFog(Vector2 unitWorldPos)
{
	Vector2 fogPos = WorldToFogSpace(unitWorldPos);
	return fogTexture.Sample(fogPos) > 128;
}
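
If your fog lives on the CPU in Unity, Texture2D.GetPixelBilinear() can do the sampling for you: it takes normalized coordinates and returns colors in the 0-1 range, so the equivalent check looks like this (a sketch assuming fogTexture is a Texture2D):

// Unity CPU-side version using bilinear sampling (coordinates are normalized).
bool IsUnitInFog(Vector2 unitWorldPos)
{
	Vector2 fogPos = WorldToFogSpace(unitWorldPos);
	return fogTexture.GetPixelBilinear(fogPos.x, fogPos.y).r > 0.5f;
}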

Putting it Together

So putting this all together, we end up with a loop like this:

Texture fogTexture;
Unit[] units;

void UpdateFog()
{
	// clear all pixels
	SetAllPixels(fogTexture, 255);
	
	// rasterize all units
	for (int i = 0; i < units.Length; ++i)
	{
		RenderUnitToFog(fogTexture, units[i]);
	}
	
	RenderToScreen(fogTexture);
}

If we want to remember the fog and leave it fully cleared, we can remove the SetAllPixels() call so that the fog will remain as it was before. But what if we want to leave past-explored areas as partially fogged (let’s say a fog value of 128)? For this, we will need to have a few extra textures.

Texture currentFogTexture;
Texture pastFogTexture;
Texture combinedFogTexture;
Unit[] units;

void UpdateFog()
{
	// clear all pixels
	SetAllPixels(currentFogTexture, 255);
	
	// rasterize all units
	for (int i = 0; i < units.Length; ++i)
	{
		RenderUnitToFog(currentFogTexture, units[i]);
	}
	
	// move fog data
	pastFogTexture = AddToPastFogTexture(pastFogTexture, currentFogTexture);
	combinedFogTexture = CombineFogTextures(pastFogTexture, currentFogTexture);
	
	RenderToScreen(combinedFogTexture);
}

The implementation of AddToPastFogTexture() will go through each pixel and perform the following operation:

pastValue = Min(pastValue, currentValue);


That way, pastFogTexture will always contain, for each pixel, a 0-255 value of the most cleared that part of the map has ever been.

CombineFogTextures() will do the following operation:

byte partialValue = (byte)(128 + pastValue * 127 / 255); // maps 0-255 to 128-255
combinedValue = Min(currentValue, partialValue);


So the combined value will be the partial value (which bottoms out at 128 for fully explored areas) or the current value, whichever is less fogged.
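
On the CPU, both per-pixel operations can be rolled into a single pass. A minimal sketch over raw byte arrays:

// One pass over all pixels: update the explored history, then build the
// texture that actually gets rendered.
void UpdateFogHistory(byte[] current, byte[] past, byte[] combined)
{
	for (int i = 0; i < current.Length; ++i)
	{
		// the most cleared this pixel has ever been
		past[i] = Min(past[i], current[i]);
		// explored areas bottom out at half fog (128)
		byte partialValue = (byte)(128 + past[i] * 127 / 255);
		combined[i] = Min(current[i], partialValue);
	}
}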

Rendering the Fog

Alright, this is the important part. What’s the point of all this if we can’t see it? Unfortunately, there’s no single best way of rendering the fog, but there are plenty of options.

Post Process

This is the easiest approach to implement and the most modular. It’s what I use in my FogOfWar Unity asset.

After the entire scene has been rendered (without fog), we will have the color of our scene and a depth buffer. Using the depth buffer, we can reconstruct the world position of any visible pixel. The method of doing this varies greatly per game engine and rendering API, but the basic premise is: determine the pixel being rendered in normalized view space (0 to 1), sample the depth texture at this position to get the depth value, then convert the depth value to a world position using the inverse view-projection matrix.

With the world-space position of the pixel, we can then call our WorldToFogSpace() function from earlier to sample the fog texture.
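
As a concrete (though simplified) example, here is roughly what the C# side might look like in Unity's built-in render pipeline. The shader behind fogMaterial is not shown, and the _FogTexture, _WorldMin and _WorldMax property names are placeholders:

// A sketch of hooking the fog up as a post process (built-in render pipeline).
// fogMaterial's shader (not shown) reconstructs the world position from depth
// and samples the fog texture, as described above.
[RequireComponent(typeof(Camera))]
class FogOfWarPostProcess : MonoBehaviour
{
	public Material fogMaterial;
	public Texture fogTexture;
	public Vector2 worldMinPosition;
	public Vector2 worldMaxPosition;

	void Start()
	{
		// make sure the camera renders a depth texture for us to sample
		GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
	}

	void OnRenderImage(RenderTexture source, RenderTexture destination)
	{
		fogMaterial.SetTexture("_FogTexture", fogTexture);
		fogMaterial.SetVector("_WorldMin", worldMinPosition);
		fogMaterial.SetVector("_WorldMax", worldMaxPosition);
		Graphics.Blit(source, destination, fogMaterial);
	}
}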

Pros:

  • A single draw call to render the fog.
  • Only one shader is required.
  • Works with both deferred and forward renderers.
  • Fog is only rendered once per visible pixel.

Cons:

  • Only depth-writing surfaces will receive correct fog values. This may be an issue with transparent objects such as windows or particles.

Per-Shader

In your world-space shaders, get the world position of every fragment being rendered and put it through our WorldToFogSpace() function from earlier. Then use that position to sample the fog texture. This works similarly to lights in a forward renderer.

Pros:

  • Works with all objects, regardless of transparency.

Cons:

  • Must modify every single shader that is rendering in world space.
  • May have color issues with deferred renderers.
  • Discarded pixels will be more expensive due to the fog being calculated. This can be circumvented with a depth prepass.

Alternative Solutions

I’ve heard of some games using grid-like meshes, or rendering a plane over the entire map with the fog texture on it, which gives more controls to the CPU. Older games (pre-GPU) would rasterize over their scene, or even use tiled sprites. It wouldn’t be unreasonable to even have a dense particle system over the scene. It all depends on what look you’re going for and what limitations you’re dealing with.

Optimisations

Before we get into some of the more advanced things you can do, we should probably optimise what we’ve done so far.

For general optimisations:

  • Reduce the resolution of the map. This is almost always the largest bottleneck. If this is not an option, consider adding some sort of chunking system (discussed in Advanced Effects).
  • Reduce the number of units being updated. Do units need to be updated if they’re offscreen? Can they be updated every X number of frames instead? Could a simpler shape be used instead when they are offscreen?
  • Update the fog over multiple in-game frames. For example, the fog may update once every 5 in-game frames. This is surprisingly bearable, as the fog does not change significantly on a frame-by-frame basis (see the sketch below).
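
A minimal sketch of that last point, using an arbitrary interval of 5 frames:

// Only update the fog every updateInterval frames.
const int updateInterval = 5;

void Update()
{
	if (Time.frameCount % updateInterval == 0)
		UpdateFog();
}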

CPU optimisations:

  • If you have lots of units, you may be able to process each unit on a separate thread, which should ease the load considerably.

GPU optimisations:

  • When sampling the fog texture from the CPU, use a compute shader or a down-sampled copy of the texture.
  • Batch together unit shapes into a single draw call using instancing.

Advanced Effects

Chunking

If your scene can be broken down into 2D chunks, you can do something like this:

// keep a copy of the old fog, shift the map, then reapply with an offset
Texture fogTextureCopy = fogTexture.Copy();
Vector2 fogOffset = WorldToFogSpace(map.worldCenter) - WorldToFogSpace(newCenter);
fogTexture.CopyPixelsFrom(fogTextureCopy, fogOffset);
map.worldCenter = newCenter;

What this is doing is keeping a copy of the fog texture, moving the map’s position, then reapplying the copied fog with an offset so that it stays aligned with the old map position. This can create a seamless experience where you can keep walking infinitely and there will always be more fog. Obviously, if you want to remember the fog from previous chunks, you’ll need to store them somewhere and pull them up whenever you swap chunks.

Minimaps

It’s common to reflect the fog clearing in a minimap. The concept is exactly the same as rendering to the screen. You can optimise by just taking the fog texture and applying it over the top of your other minimap UI. Once again, it comes down to what look you’re trying to achieve.

Line Of Sight

If a wall stands between your unit and a point in the fog, should the fog be cleared? If your answer is yes, you will want to implement a line of sight component. This behaves much like a 2D lighting system, so we’ll use a similar strategy here.


The plan is to raycast outward from the unit’s position, store the max visible distance of each ray, and use that to determine the visibility later on when we’re rasterizing the unit’s shape.

float[] GetVisibilityDistances(Vector2 worldPos, int rayCount, float maxRayDistance)
{
	float[] distances = new float[rayCount];
	for (int i = 0; i < rayCount; ++i)
	{
		// spread the rays evenly around a full circle
		float radians = (float)i / rayCount * PI * 2;
		Vector2 direction = new Vector2(Sin(radians), Cos(radians));
		if (Physics.Raycast(worldPos, direction, maxRayDistance, out float hitDistance))
			distances[i] = hitDistance;
		else
			distances[i] = maxRayDistance;
	}
	return distances;
}


If you’re rasterizing on the GPU, this array of distances can be converted to a 1D texture (or multiple can be combined into a 2D texture) to be passed to the GPU.

Obviously, all these raycasts are going to be a significant hit on performance. I recommend keeping the ray count as low as possible, such as 8 for a full circle. If the unit’s shape is a semicircle, it will be better to only raycast within the visible semicircle by limiting the angle. With this, you may be able to get the raycast count down to 2 or 3 per unit.
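
When rasterizing the unit's shape, each fog pixel can then compare its distance from the unit against the ray stored for its angle. A sketch, matching the (Sin, Cos) direction convention used when casting the rays above:

// Returns true if a fog pixel is within the unit's line of sight.
bool IsPixelVisible(Vector2 unitWorldPos, Vector2 pixelWorldPos, float[] distances)
{
	Vector2 toPixel = pixelWorldPos - unitWorldPos;
	// Atan2(x, y) matches the (Sin, Cos) direction convention above
	float radians = Atan2(toPixel.x, toPixel.y);
	if (radians < 0)
		radians += PI * 2;
	int rayIndex = (int)(radians / (PI * 2) * distances.Length) % distances.Length;
	return Length(toPixel) <= distances[rayIndex];
}

For smoother edges, you can interpolate between the two nearest rays instead of snapping to one.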

Is This Just A Fancy 2D Lighting System?

At this point, you might be thinking that our fog of war system can also work as a 2D lighting system. Technically, yes. If you substitute the word “unit” with “light source”, and “line of sight” with “lit areas”, then it is practically a 2D lighting system. This happens to be what many users of my FogOfWar asset actually use it for. And although it works, I must stress that what we have created here is not optimized for lighting, and is very much lacking in features.

For example, how can we set the color of unit shapes? How can we implement shadow bounces/GI? How can we have this interact with normals, glossiness, roughness, AO, and all the other cool PBR stuff?

It is very possible to achieve all this, but there would need to be large changes made throughout the process. Maybe another time…

Avoiding Aliasing

Texture-based fog of war is very prone to aliasing. This is where you can see the individual pixels in the fog texture. The obvious solution is to increase the resolution of the fog texture, yet this will quickly kill performance.

Alternatively, you can apply a blur to the final fog texture. This will need to be non-destructive (i.e. it should output to a separate temporary texture, not the persistent fog values, otherwise the blur will seep across the map over time). A basic gaussian blur can do wonders for aliased fog:
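
Here is a minimal sketch of one horizontal pass of a separable blur (run a matching vertical pass on its output; a simple 1-2-1 kernel is shown, but a wider gaussian kernel works the same way):

// One horizontal blur pass. src is the persistent fog, dst is a temporary
// buffer, so the original fog values are never modified.
void BlurHorizontal(byte[] src, byte[] dst, int width, int height)
{
	for (int y = 0; y < height; ++y)
	{
		for (int x = 0; x < width; ++x)
		{
			int left = Max(x - 1, 0);
			int right = Min(x + 1, width - 1);
			int sum = src[y * width + left] + src[y * width + x] * 2 + src[y * width + right];
			dst[y * width + x] = (byte)(sum / 4);
		}
	}
}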


Improving The Visuals

So far, our fog has just been a plain boring color. Wouldn’t it be cool if we had some sort of animating cloud texture instead? Maybe the clouds could animate away instead of just an instant blend?

If we’re rendering the fog with shaders (including post processing), we will have a world position for each pixel. We can convert this from a 3D position to a 2D position (depending on the axes you’re using), then use that as texture coordinates for a tiling texture. Alternatively, you can use the screen space position to render a screen space texture. Both of these have the ability to render the output from a separate render target, allowing full control over animation of the clouds.
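
For example, assuming a Y-up world, a fragment might derive its cloud texture coordinates like this (cloudTiling and cloudScrollSpeed are hypothetical parameters, and time drives the animation):

// Tile a cloud texture in world-space (Y-up), scrolling it over time.
Vector2 cloudUV = new Vector2(worldPos.x, worldPos.z) * cloudTiling + cloudScrollSpeed * time;
fogColor = cloudTexture.Sample(cloudUV);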


There’s one more thing irking me. If our map resolution is quite low, we’ll see these boring gradients when the background is blending with the fog. If we have a heightmap of our cloud texture, we can add variation to the fog amount. Each fragment on screen can do this:

fogAmount = saturate(fogAmount + (cloudHeightTexture.Sample(uv) - 0.5) * variance);

The variance variable should be between 0 and 1, and controls how strongly the height affects the fog amount. This is the equivalent of height-based blending for surfaces.


