This time, let’s look at how we can add some smoothness to our shadows. There exists a variety of techniques to do so, the most popular being percentage closer filtering (PCF). The idea is simple: instead of sampling the shadow map only once and getting a binary result (shadow or no shadow), we also sample the surrounding shadow map texels and average the resulting shadow/no shadow decisions like so:

const int halfkernelWidth = 2;

float result = 0;
for(int y=-halfkernelWidth; y<=halfkernelWidth; ++y)
{
    for(int x=-halfkernelWidth; x<=halfkernelWidth; ++x)
    {
        // offset the lookup by whole shadow map texels
        float4 texCoords = float4(splitInfo.TexCoords + float2(x,y) / ShadowMapSize, 0, 0);
        result += (splitInfo.LightSpaceDepth <  tex2Dlod( ShadowMapSampler, texCoords ).r);
    }
}
// average the binary shadow decisions
result /= ((halfkernelWidth*2+1)*(halfkernelWidth*2+1));


Note that there exist more efficient and better looking sampling patterns, but for the sake of simplicity I’m sticking with the one above. In any case, averaging yes/no decisions yields a result that can only take on a set of discrete values (26 for the 25-tap example above), which means that our shadow levels will be more or less heavily quantized. Check out the image below: on the left the shadowed scene and on the right a closeup of a shadow border with the above PCF filter kernel and the corresponding pixel value histogram. You can clearly see the shadow value quantization in the image as well as in the pixel color histogram.

While this method produces good (with some tweaking excellent) results, it is rather sample hungry: in the above case we need a total of 25 samples for each shadow lookup, which is just too much for less powerful graphics hardware. So let’s look at the alternatives: instead of using a regular grid sampling pattern, let’s switch to a stochastic sampling pattern: pick a set of sampling positions from a disk centered at the current pixel and sample only those. A commonly used sampling pattern in this context is the Poisson disk: its sampling positions are chosen uniformly at random but with a minimum distance constraint. It can be shown that this sampling pattern has some highly desirable properties, like most of its energy being concentrated in the high frequencies. In the images below you can see the filter kernel on the left (12 taps) and the resulting shadow border on the right.
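
In the shader, such a kernel can simply be stored as a constant array. The snippet below is a minimal sketch of the declarations the code further down assumes (numSamples, PoissonKernel, PoissonKernelScale); the offsets are example values for a precomputed 12-tap Poisson disk, not necessarily the exact ones used in the sample, and PoissonKernelScale is filled in from the application per split:

#define numSamples 12

// precomputed Poisson disk offsets inside the unit disk (example values)
static const float2 PoissonKernel[numSamples] =
{
    float2(-0.326212f, -0.405810f), float2(-0.840144f, -0.073580f),
    float2(-0.695914f,  0.457137f), float2(-0.203345f,  0.620716f),
    float2( 0.962340f, -0.194983f), float2( 0.473434f, -0.480026f),
    float2( 0.519456f,  0.767022f), float2( 0.185461f, -0.893124f),
    float2( 0.507431f,  0.064425f), float2( 0.896420f,  0.412458f),
    float2(-0.321940f, -0.932615f), float2(-0.791559f, -0.597710f)
};

// scale of the kernel in shadow map texture coordinates, one entry per split
// (plus one for the 'no split found' case)
float PoissonKernelScale[NumSplits + 1];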

Notice how the shadow border became much more concentrated and somewhat less blocky. Still not quite satisfying though. So let’s apply another approach, proposed by [Mittring07]: apply a random rotation to the filter kernel before sampling. This changes the sampling positions from pixel to pixel, reducing artifacts across neighbouring pixels. Note that randomness in shaders is a tricky issue: each pixel is shaded independently of the surrounding pixels, so traditional random number generators that depend on the previously drawn random number cannot be used here. And usually, it is a requirement that the generated random number be stable with respect to the camera: you want to avoid changing your random number, and with it your shading result, while the camera stands still, otherwise you might perceive flickering even in still scenes. Ideally your random number should not change even when the camera moves. I chose to precalculate a set of random numbers at application start and upload them via a texture. The texture is then indexed via the fragment’s world space position, which guarantees stability with a stationary camera. Since each random number represents a rotation angle $\phi$ and all we do is rotate the filter kernel, I directly store $\cos(\phi)$ and $\sin(\phi)$ in the texture so we only need to do a multiply-add in the fragment shader.

// generate a volume texture for our random numbers
mRandomTexture3D = new Texture3D(mGraphicsDevice, 32, 32, 32, false, SurfaceFormat.Rg32);

// fill with cos/sin of random rotation angles, packed into unsigned 16 bit values
var random = new Random();
Func<int, IEnumerable<UInt16>> randomRotations = (count) =>
{
    return Enumerable
        .Range(0, count)
        .Select(i => (float)(random.NextDouble() * Math.PI * 2))
        .SelectMany(r => new[] { Math.Cos(r), Math.Sin(r) })
        .Select(v => (UInt16)((v * 0.5 + 0.5) * UInt16.MaxValue));
};

mRandomTexture3D.SetData(randomRotations(mRandomTexture3D.Width * mRandomTexture3D.Height * mRandomTexture3D.Depth).ToArray());


The fragment shader then looks up the rotation values based on the fragment’s world position and rotates the Poisson disk before doing the shadow map lookups:

// get random kernel rotation (cos, sin) from texture, based on fragment world position
float2 randomValues = tex3Dlod(RandomSampler3D, randomTexCoord3D).rg;
float2 rotation = randomValues * 2 - 1;

float result = 0;
for(int s=0; s<numSamples; ++s)
{
    // compute rotated sample position
    float2 poissonOffset = float2(
        rotation.x * PoissonKernel[s].x - rotation.y * PoissonKernel[s].y,
        rotation.y * PoissonKernel[s].x + rotation.x * PoissonKernel[s].y
    );

    float4 randomizedTexCoords = float4(splitInfo.TexCoords + poissonOffset * PoissonKernelScale[splitInfo.SplitIndex], 0, 0);
    result += splitInfo.LightSpaceDepth <  tex2Dlod( ShadowMapSampler, randomizedTexCoords ).r;
}

// normalize yes/no decisions and combine with ndotl term
float shadowFactor = result / numSamples * t;


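One detail the snippet above glosses over is where randomTexCoord3D comes from. A minimal sketch, assuming the interpolated world position is available in the fragment shader and using an assumed constant RandomSamplingScale to control how quickly the rotations vary across the scene, could look like this:

// scale from world units to random texture repetitions (assumed shader constant)
float RandomSamplingScale;

// tex3Dlod expects a float4 coordinate; w selects the mip level
float4 randomTexCoord3D = float4(input.WorldPosition.xyz * RandomSamplingScale, 0);
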
And here are the results:

To the eye, the transition between shadow and no shadow looks much smoother than with the 5×5 block kernel – even though the shadow levels are even more quantized: check the histogram, there are only 12 different shadow levels. The scene itself doesn’t necessarily look better overall, but once textures and lighting are added things change drastically: the ‘noise’ mostly disappears in the surface detail and is much less noticeable, except on very uniformly shaded surfaces.

Note that I held back one detail: how do we calculate the ndotl-based term t? Up to now we set t = (ndotl > 0), but this doesn’t work well together with the stochastic sampling approach shown above: (ndotl > 0) produces hard shadow edges whereas the rotated Poisson disk gives us a dithered looking fall-off. Combining the two looks quite weird, as shown below. So we need to dither ndotl as well:

float l = saturate(smoothstep(0, 0.2, ndotl));
float t = smoothstep(randomValues.x * 0.5, 1.0f, l);


So instead of creating a hard edge by computing ndotl > 0, we use the smoothstep function to create a smooth transition between 0 and 1 when ndotl lies in the range $[0, \dots, 0.2]$ and store it in the variable l. We then feed our random value randomValues.x and l into the smoothstep function again. If l lies in the range $[\text{randomValues.x} \cdot 0.5, \dots, 1]$ it will be smoothly interpolated between 0 and 1, based on how close it is to either end of that range. But since l represents a smooth transition between 0 (shadow) and 1 (no shadow) and randomValues.x (ideally) follows a uniform distribution, this means that the further l moves away from the shadow, the more likely t will receive a value close to 1 (no shadow) too. Conversely, the closer l gets to the shadow border, the more likely t will be close to 0 (shadow). There is still a random factor in there which can push t above 0 even when l is very small – but that is unlikely. The two images below illustrate the effect of dithering ndotl: on the left the scene is rendered without, and on the right with ndotl dithering.

Stay tuned for some more images and videos in the next post!

Let’s face it: even after stabilization, the shadows in my sample still look pretty bad. Very blocky, hard shadow edges and strange shapes everywhere… So let’s do something about it! But first a short disclaimer: the effectiveness of the techniques I propose in this post is highly dependent on the scene and your visual target. There is no one approach that will fix all your shadow artifacts once and for all – it’s rather through trial and error and a combination of well-known techniques that you can get to a state you are satisfied with. I have yet to come across a game that does a perfect job with real time shadows.

But let’s start now, with something very simple: during shading, we know the surface normal for each position on the geometry. So let’s make use of that: if a surface normal points away from the light source we know for sure that that point does not receive any light. Hence it is in shadow. Determining whether the surface normal is pointing away from the light is simple: just take the dot product between the surface normal and the light direction and check if it’s less than 0. So let’s combine that with the shadow map lookup from GetShadowFactor in Cascaded Shadow Maps (3):

// compute shadow factor: 0 if in shadow, 1 if not
float GetShadowFactor( ShadowSplitInfo splitInfo, float ndotl )
{
    float storedDepth = tex2Dlod( ShadowMapSampler, float4( splitInfo.TexCoords, 0, 0)).r;

    return (splitInfo.LightSpaceDepth < storedDepth) * (ndotl > 0);
}


where ndotl is assumed to contain the dot product between surface normal and light direction. Note that we are combining the two terms “pixel is occluded by some occluder in the shadow map” and “surface is oriented away from the light” with a simple multiplication. This equation can be interpreted in a probabilistic sense, in that we are saying that a point can only be lit if it is not occluded by some object AND the surface is oriented towards the light. Check out the comparison below: shading without the ndotl term on the left, with the ndotl term on the right. The geometry based ndotl term clearly helps a lot with thin surfaces, where the shadow map doesn’t have enough $Z$ precision to separate the front and back facing polygons. It can, however, only be as good as the surface normals, so correct surface normals are a must.

Next, let’s have a look at another geometric approach: depth bias. When rendering the shadow map, we store the scene’s distance with respect to the light source in a more or less accurate manner. In my case, I requested a SurfaceFormat.Single shadow map format, which means 32 bits (floating point) per pixel. But not all developers/hardware can afford such a surface format; sometimes 24 or even 16 bits per pixel are a requirement. Under such circumstances, the depth range that can be stored in the shadow map is quite limited, resulting in a high quantization error for the stored depth. This means that when we read back the stored depth from the shadow map, we get a depth value that is only more or less close to the real depth of the corresponding surface element. So we can either end up thinking that a given surface point is in shadow because the stored depth value is (due to quantization) closer to the light source than the real depth value, or that the surface point is not in shadow because, due to quantization, it is further away from the light source than in reality. Since the first problem is much more visible than the latter, a simple workaround has been invented: add some bias to the stored shadow depth. There is even hardware support for this functionality: the D3DRS_DEPTHBIAS and D3DRS_SLOPESCALEDEPTHBIAS renderstates, where D3DRS_DEPTHBIAS corresponds to a constant bias and D3DRS_SLOPESCALEDEPTHBIAS to a surface slope based bias. Slope based bias can be very helpful with surface elements that are oriented at a steep angle with respect to the light source. Unfortunately these renderstates only apply to the position interpolator, so they are of little use to us because we are passing the depth via a color interpolator. So we have to emulate them in the fragment shader:

float2 DepthBias;

// shadow map pixel shader: write out the biased depth as color
// (function and parameter type names assumed)
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // emulate slope scaled bias: the bias grows with the depth slope across neighbouring pixels
    float depthSlopeBias = max(
        abs(ddx(input.Depth)),
        abs(ddy(input.Depth))
    );
    return float4( input.Depth + depthSlopeBias * DepthBias.x + DepthBias.y, 0, 0, 0 );
}


The DepthBias values are passed to the shader via shader constants. Check out the images below:

On the left you can see the shadowed scene without any bias and on the right with tweaked bias values.

# Stable CSM

So… how can we fix this? First, let’s agree to make the shadow map size in world space constant, say shadowMapSize, and decouple it from light and camera rotation. In my case I chose to align the shadow map to the world space $X$ and $Z$ axes. I do so by exchanging the shadow view matrix for one whose $X$ and $Y$ axes point in the direction of world space $X$ and $Z$, positioned at mLightPosition, like so:

// Remember: XNA uses a right handed coordinate system, i.e. -Z goes into the screen
var look = Vector3.Normalize(arena.BoundingSphere.Center - mLightPosition);

// build the light's frame (X axis = world X, Y axis = -world Z, Z axis = -look,
// positioned at mLightPosition) and invert it to obtain the shadow view matrix
// (the assignment target name is assumed)
mShadowView = Matrix.Invert(
    new Matrix(
         1,                0,                0,                0,
         0,                0,               -1,                0,
        -look.X,          -look.Y,          -look.Z,           0,
         mLightPosition.X, mLightPosition.Y, mLightPosition.Z, 1
    )
);


Note that the $Y$ axis is flipped in order to preserve culling order in the final view transform. Also note that this approach only works as long as mLightPosition does not lie in the $y=0$ plane in world space as then the resulting matrix becomes singular.

Now let’s tackle camera movement: as outlined before, the problem is that even the slightest movement of the shadow map affects the whole scene, since every scene position changes its (sub-pixel) position within the shadow map. What we need, however, is that the scene positions stay constant, at least relative to their corresponding shadow map pixel. So instead of moving the shadow map continuously, let’s move it in fixed increments of one shadow map pixel. When moving the shadow map this way, each world space position might fall into a different shadow map texel than in the frame before – but the relative position within the shadow map texel will stay the same, which means no more moving shadow boundaries.

So how can we implement this? Given the view transform defined above, all we need to do is adjust the shadow projections: we want to place the shadow map corners at discrete positions only, separated by some value, e.g. quantizationStep. Remember, in one of my previous posts, Cascaded Shadow Mapping (1), we defined the extent of the shadow projection matrices based on the values min and max, which were determined from the view frustum. All we need to do now is make sure the $X$ and $Y$ coordinates of min and max are properly discretized:

var quantizationStep = 1.0f / shadowMapSize;

// snap min to the nearest multiple of quantizationStep
var qx = (float)Math.IEEERemainder(min.X, quantizationStep);
var qy = (float)Math.IEEERemainder(min.Y, quantizationStep);

min.X -= qx;
min.Y -= qy;



Using the adjusted min and max values we create the shadow projection matrix as described before:

Projection = Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, minZ, maxZ);


The effect of these few lines of code is dramatic. Check out the video below:

You can clearly see how the scene shadows are stabilized, almost all artifacts during camera movements and rotations are gone. The remaining artifacts stem from transitions between the different shadow splits, as shown in the last part of the video.

Having seen how we can render the shadow map for each split in the last post, it’s time to look at the final step: mapping the shadow atlas onto the scene. Basically, for each pixel in the scene we need to determine whether it lies in shadow or not. To do so we first need to determine the appropriate shadow split for that pixel. This can be done in a couple of different ways, for example by determining the split based on the distance to the viewer or by picking the split based on the lookup texture coordinates. I chose the latter, as it makes better use of the higher resolution maps. Check out this article for a nice comparison.

So in order to determine the shadow split we basically transform the current fragment’s world position by each shadow transform and then pick the first shadow map where the coordinates fall within the range of the shadow atlas partition. Let’s define some helper functions first:

#define NumSplits 4

// the "world to shadow atlas partition" transforms

// the bounding rectangle (in texture coordinates) for each split
float4 TileBounds[NumSplits];

sampler ShadowMapSampler = sampler_state
{
    magfilter = POINT;    minfilter = POINT;    mipfilter = POINT;
};

struct ShadowData
{
    float4 TexCoords_0_1;
    float4 TexCoords_2_3;
    float4 LightSpaceDepth;
};

// compute shadow parameters (tex coords and depth)
// for a given world space position
ShadowData GetShadowData( float4 worldPosition )
{
    ShadowData result;

    float4 texCoords[NumSplits];
    float lightSpaceDepth[NumSplits];

    for( int i=0; i<NumSplits; ++i )
    {
        float4 lightSpacePosition = mul( worldPosition, ShadowTransform[i] );
        texCoords[i] = lightSpacePosition / lightSpacePosition.w;
        lightSpaceDepth[i] = texCoords[i].z;
    }

    result.TexCoords_0_1 = float4(texCoords[0].xy, texCoords[1].xy);
    result.TexCoords_2_3 = float4(texCoords[2].xy, texCoords[3].xy);
    result.LightSpaceDepth = float4( lightSpaceDepth[0],
                                     lightSpaceDepth[1],
                                     lightSpaceDepth[2],
                                     lightSpaceDepth[3] );

    return result;
}

struct ShadowSplitInfo
{
    float2 TexCoords;
    float  LightSpaceDepth;
    int    SplitIndex;
};

// find split index, texcoords and light space depth for given shadow data
ShadowSplitInfo GetSplitInfo( ShadowData shadowData )
{
    float2 texCoords[NumSplits] =
    {
        shadowData.TexCoords_0_1.xy,
        shadowData.TexCoords_0_1.zw,
        shadowData.TexCoords_2_3.xy,
        shadowData.TexCoords_2_3.zw
    };

    float lightSpaceDepth[NumSplits] =
    {
        shadowData.LightSpaceDepth.x,
        shadowData.LightSpaceDepth.y,
        shadowData.LightSpaceDepth.z,
        shadowData.LightSpaceDepth.w
    };

    for( int splitIndex=0; splitIndex < NumSplits; splitIndex++ )
    {
        // pick the first split whose texture coordinates fall inside its atlas partition
        // (TileBounds component layout assumed: min x, max x, min y, max y)
        if( texCoords[splitIndex].x >= TileBounds[splitIndex].x && texCoords[splitIndex].x <= TileBounds[splitIndex].y &&
            texCoords[splitIndex].y >= TileBounds[splitIndex].z && texCoords[splitIndex].y <= TileBounds[splitIndex].w )
        {
            ShadowSplitInfo result;
            result.TexCoords = texCoords[splitIndex];
            result.LightSpaceDepth = lightSpaceDepth[splitIndex];
            result.SplitIndex = splitIndex;

            return result;
        }
    }

    ShadowSplitInfo result = { float2(0,0), 0, NumSplits };
    return result;
}

// compute shadow factor: 0 if in shadow, 1 if not
float GetShadowFactor( ShadowSplitInfo splitInfo )
{
    float storedDepth = tex2Dlod( ShadowMapSampler, float4( splitInfo.TexCoords, 0, 0)).r;

    return (splitInfo.LightSpaceDepth <  storedDepth);
}


Armed with these definitions we can now add shadowing to any shader. All we need to do is call GetShadowData to convert a given world position into shadow atlas texture coordinates and then GetShadowFactor to do the lookup. Note that since the quantities in ShadowData are linear, I would suggest calling GetShadowData in the vertex shader and interpolating the result, in order to save fragment shader instructions. I collected these functions in a header file Shadow.h, which can be included by any shader.
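
To make the usage concrete, a shadowed pixel shader could look roughly like the sketch below. Only GetShadowData, GetSplitInfo and GetShadowFactor come from Shadow.h; the entry point name and the input layout are made up for the example, and for brevity everything is done per fragment here:

#include "Shadow.h"

float4 ShadowedPixelShader( float4 worldPosition : TEXCOORD0 ) : COLOR0
{
    // world position -> per-split atlas coordinates -> split selection -> depth compare
    ShadowData shadowData = GetShadowData( worldPosition );
    ShadowSplitInfo splitInfo = GetSplitInfo( shadowData );
    float shadowFactor = GetShadowFactor( splitInfo );

    return float4( shadowFactor.xxx, 1 );
}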

What’s with ShadowTransform[i] though? The matrix is assumed to transform a vertex’s world space position into the texture coordinates of its corresponding shadow split $i$. Actually nothing new – the combined shadow view and projection matrix – if it weren’t for the nasty coordinate space differences: after projection, a vertex ends up in clip space, which is defined as (ignoring the $Z$ coordinate) $[-1, \dots, 1] \times [-1, \dots, 1]$. But we need texture coordinates for the shadow map lookup, which range over $[0, \dots, 1] \times [0, \dots, 1]$. More specifically, we need to index into the sub rectangle of the corresponding shadow map in the shadow atlas. And to make things even worse: the y axis of clip space points upwards while the y axis in texture space points downwards: top left in clip space is $(-1,1)$ whereas top left in texture coordinates is $(0,0)$. Luckily we can extend the combined shadow view and projection matrix to do the remapping:

// compute block index into shadow atlas
int tileX = i % 2;
int tileY = i / 2;

// tile matrix: maps from clip space to shadow atlas block
var tileMatrix = Matrix.Identity;
tileMatrix.M11 = 0.25f;
tileMatrix.M22 = -0.25f;
tileMatrix.Translation = new Vector3(0.25f + tileX * 0.5f, 0.25f + tileY * 0.5f, 0);

// now combine with shadow view and projection
// (matrix variable names assumed; XNA uses row vectors, so the tile matrix is applied last)
ShadowTransform[i] = mShadowView * mShadowProjection[i] * tileMatrix;

So the first two diagonal elements of tileMatrix scale the clip space $x$ and $y$ coordinates down to $[-\frac{1}{4}, \dots, \frac{1}{4}]$ and the translation component shifts the coordinates to $[0, \dots, \frac{1}{2}]$. The tileX * 0.5f and tileY * 0.5f part then offsets the coordinates into the proper shadow atlas partition. Simple as that. Beware though, this only works for orthographic shadow projections.
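
Written out for the $x$ coordinate: a clip space value $x \in [-1, \dots, 1]$ is mapped to the texture coordinate $u = \frac{1}{4}x + \frac{1}{4} + \frac{\text{tileX}}{2}$, so $x=-1$ lands on the left edge of the partition at $u = \frac{\text{tileX}}{2}$ and $x=1$ on its right edge at $u = \frac{\text{tileX}}{2} + \frac{1}{2}$. The negative sign on M22 additionally performs the $y$ flip from clip space to texture space.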

Check out the images below, from left to right: The scene with shadows, shadows per split, shadow map resolution per split.

Have a look at the next visualization as well: Each pixel in the shadow map is back-projected into world space again and rendered as a cube. This sort of illustrates how the shadow map ‘sees’ the scene. You can clearly see the stepping on tilted surfaces, as well as discretization errors. Note that this rendering method is *very* demanding on the GPU (and I haven’t had time to put in some optimization), so the video is somewhat choppy.

So last time we saw how we can partition the view frustum into several subfrustums and how to compute a projection matrix for each. This time we’ll look at how the shadow maps are actually rendered.

For starters, I chose to render all shadow maps into a single texture atlas instead of assigning a unique texture to each shadow split. This saves us from switching render targets for each shadow split and also simplifies the shadow mapping shader. In this sample I am using a shadow atlas of 1024×1024 pixels,

mShadowMap = new RenderTarget2D(mGraphicsDevice, 1024, 1024,
false, SurfaceFormat.Single, DepthFormat.Depth24);


and each shadow split is rendered into a 512×512 subrectangle. Ideally we’d render into the depth buffer only (double speed!) and then either resolve to a texture (XBox 360) or directly bind it as texture (DirectX). Unfortunately this is not possible in XNA. So I use a 32 bit float render target for the shadow map and a separate depth buffer.

During the shadow map render pass I then bind the atlas as render target and, for each split, set up the viewport to only render into the corresponding shadow atlas partition. Once the viewport is set we can finally render the model using the shadow transform as combined view and projection matrix:

// bind shadow atlas as render target and clear
mGraphicsDevice.SetRenderTarget(mShadowMap);
mGraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.White, 1.0f, 0);

// get model bone transforms
Matrix[] transforms = new Matrix[arena.Bones.Count];
arena.CopyAbsoluteBoneTransformsTo(transforms);

for (int i = 0; i < mNumShadowSplits; ++i)
{
    // set up viewport to render into the split's 512x512 atlas partition
    {
        int x = i % 2;
        int y = i / 2;
        var viewPort = new Viewport(x * 512, y * 512, 512, 512);

        mGraphicsDevice.Viewport = viewPort;
    }

    // Draw the arena model
    foreach (ModelMesh mesh in arena.Meshes)
    {
        foreach (var effect in mesh.Effects)
        {
            effect.Parameters["World"].SetValue(transforms[mesh.ParentBone.Index]);
        }

        mesh.Draw();
    }
}


As for the shader used during this render pass, it’s nothing special: It just transforms each vertex into shadow clip space and then writes out the depth value as color – like any other shadow mapping shader:

float4x4 World;
float4x4 ViewProjection;

struct VertexShaderInput
{
    float4 Position        : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position        : POSITION0;
    float  Depth           : COLOR0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;

    // transform into shadow clip space and pass the depth on via a color interpolator
    float4 worldPosition = mul(input.Position, World);
    output.Position = mul(worldPosition, ViewProjection);
    output.Depth = output.Position.z / output.Position.w;

    return output;
}
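
The corresponding pixel shader (not shown above) is equally unspectacular. A minimal sketch, with an arbitrary function name, just writes the interpolated depth into the red channel – the depth bias variant shown further up extends exactly this step:

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // the shadow atlas stores the light space depth in its single (red) channel
    return float4(input.Depth, 0, 0, 0);
}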