Let’s face it: even after stabilization, the shadows in my sample still look pretty bad. Very blocky, hard shadow edges and strange shapes everywhere… So let’s do something about it! But first a short disclaimer: the effectiveness of the techniques I propose in this post is highly dependent on the scene and your visual target. There is no one approach that will fix all your shadow artifacts once and for all – it’s rather through trial and error and a combination of well-known techniques that you can get to a state you are satisfied with. I have yet to come across a game that does a perfect job with real-time shadows.

But let’s start now, with something very simple: during shading, we know the surface normal for each position on the geometry. So let’s make use of that: if a surface normal points away from the light source, we know for sure that that point does not receive any light. Hence it is in shadow. Determining whether the surface normal is pointing away from the light is simple: just take the dot product between the surface normal and the light direction and check if it’s less than 0. So let’s combine that with the shadow map lookup from `GetShadowFactor` in Cascaded Shadow Maps (3):

// compute shadow factor: 0 if in shadow, 1 if not.
float GetShadowFactor( ShadowData shadowData, float ndotl )
{
    ShadowSplitInfo splitInfo = GetSplitInfo( shadowData );
    float storedDepth = tex2Dlod( ShadowMapSampler, float4( splitInfo.TexCoords, 0, 0 ) ).r;
    return (splitInfo.LightSpaceDepth < storedDepth) * (ndotl > 0);
}

where `ndotl` is assumed to contain the dot product between surface normal and light direction. Note that we are combining the two terms “pixel is occluded by some occluder in the shadow map” and “surface is oriented away from the light” with a simple multiplication. This equation can be interpreted in a probabilistic sense, in that we are saying that a point can only be lit if it is not occluded by some object AND the surface is oriented towards the light. Check out the comparison below: shading without the `ndotl` term on the left, with the `ndotl` term on the right.

The geometry-based `ndotl` term is clearly helping a lot with thin surfaces, where the shadow map doesn’t have enough precision to tell the front and back facing polygons apart. It can, however, only be as good as the surface normals, so correct surface normals are a must.
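For reference, here is a minimal sketch of where `ndotl` could come from in the lighting shader. The names `LightDirection`, `WorldNormal`, `GetShadowData` and `DiffuseColor` are assumptions here – adapt them to your own vertex output and shader constants:

```hlsl
// Sketch only -- LightDirection is assumed to be the normalized direction
// the light travels in, input.WorldNormal an interpolated (not yet
// normalized) surface normal, and GetShadowData / DiffuseColor placeholders
// for your own shadow setup and material color.
float3 LightDirection;
float3 DiffuseColor;

float4 LightingPS( VertexShaderOutput input ) : COLOR0
{
    float3 normal = normalize( input.WorldNormal );
    // dot product with the direction *towards* the light
    float ndotl = dot( normal, -LightDirection );
    float shadowFactor = GetShadowFactor( GetShadowData( input ), ndotl );
    // both the shadow map result and the surface orientation darken the pixel
    float3 diffuse = saturate( ndotl ) * shadowFactor * DiffuseColor;
    return float4( diffuse, 1 );
}
```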

Next, let’s have a look at another geometric approach: depth bias. When rendering the shadow map, we store the scene depth with respect to the light source in a more or less accurate manner. In my case, I requested a `SurfaceFormat.Single` shadow map format, which means 32 bits (floating point) per pixel. But not all developers/hardware can afford such a surface format; sometimes 24 or even 16 bits per pixel are a requirement. Under such circumstances, the depth range that can be stored in the shadow map is quite limited, resulting in a high quantization error for the stored depth. This means that when we read back the stored depth from the shadow map, we’ll get a depth value that is only more or less close to the real depth of the corresponding surface element. So we can either end up thinking that a given surface point is in shadow because the stored depth value is (due to quantization) closer to the light source than the real depth value, or that the surface point is not in shadow because, due to quantization, it is further away from the light source than in reality. Since the first problem is much more visible than the latter, a simple workaround has been invented: add some bias to the stored depth. There is even hardware support for this functionality: the `D3DRS_DEPTHBIAS` and `D3DRS_SLOPESCALEDEPTHBIAS` render states, where `D3DRS_DEPTHBIAS` corresponds to a constant bias and `D3DRS_SLOPESCALEDEPTHBIAS` to a surface-slope-based bias. Slope-based bias can be very helpful with surface elements that are oriented at a steep angle with respect to the light source. Unfortunately, these render states apply only to the depth values generated via the position interpolator, so they are of not much use to us because we are passing the depth via a color interpolator. So we have to emulate them in the fragment shader:

float2 DepthBias;

float4 ShadowPS( VertexShaderOutput input ) : COLOR0
{
    float depthSlopeBias = max( abs(ddx(input.Depth)), abs(ddy(input.Depth)) );
    return float4( input.Depth + depthSlopeBias * DepthBias.x + DepthBias.y, 0, 0, 0 );
}
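As a rough, back-of-the-envelope guide for choosing the constant part of the bias (assuming the stored depth is normalized to [0, 1], as it is when we write it out ourselves): the bias should at least cover the quantization step of the shadow map format,

```latex
% quantization step of a b-bit normalized depth value in [0, 1]
\Delta z = \frac{1}{2^{b}}
% e.g. for a 16 bit format:
\Delta z = \frac{1}{2^{16}} \approx 1.5 \cdot 10^{-5}
```

Anything smaller than this step gets rounded away during storage, so a constant bias below it cannot have any effect; in practice you will usually end up tweaking the values well above this lower bound.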

The `DepthBias` values are passed to the shader via shader constants. Check out the images below:

On the left you can see the shadowed scene without any bias and on the right with tweaked bias values.