Cascaded Shadow Maps (2)

So last time we saw how we can partition the view frustum into several subfrustums and how to compute a projection matrix for each. This time we’ll look at how the shadow maps are actually rendered.

For starters, I chose to render all shadow maps into a single texture atlas instead of assigning a unique texture to each shadow split. This saves us from switching render targets between shadow splits and also simplifies the shadow mapping shader. In this sample I am using a shadow atlas of 1024×1024 pixels,

mShadowMap = new RenderTarget2D(mGraphicsDevice, 1024, 1024, 
                                false, SurfaceFormat.Single, DepthFormat.Depth24);

and each shadow split is rendered into a 512×512 subrectangle. Ideally we’d render into the depth buffer only (double speed!) and then either resolve it to a texture (Xbox 360) or bind it directly as a texture (DirectX). Unfortunately, neither is possible in XNA, so I use a 32 bit float render target for the shadow map plus a separate depth buffer.

During the shadow map render pass I then bind the atlas as render target and, for each split, set up the viewport to only render into the corresponding shadow atlas partition. Once the viewport is set we can finally render the model using the shadow transform as combined view and projection matrix:

// bind shadow atlas as render target and clear
mGraphicsDevice.SetRenderTarget(mShadowMap);
mGraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.White, 1.0f, 0);

// get model bone transforms
Matrix[] transforms = new Matrix[arena.Bones.Count];
arena.CopyAbsoluteBoneTransformsTo(transforms);

for (int i = 0; i < mNumShadowSplits; ++i)
{
    // set up viewport
    {
        int x = i % 2;
        int y = i / 2;
        var viewPort = new Viewport(x * 512, y * 512, 512, 512);

        mGraphicsDevice.Viewport = viewPort;
    }

    // Draw the arena model
    foreach (ModelMesh mesh in arena.Meshes)
    {
        foreach (var effect in mesh.Effects)
        {
            effect.Parameters["ViewProjection"].SetValue(ShadowSplitProjections[i]);
            effect.Parameters["World"].SetValue(transforms[mesh.ParentBone.Index]);
        }

        mesh.Draw();
    }
}
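As an aside, the `i % 2` / `i / 2` indexing above lays the splits out in a 2×2 grid. The same mapping gives the scale and offset needed to address a split’s tile when sampling the atlas later. A quick sketch of the arithmetic (Python; the function names are mine, the 1024/512 sizes are those of the sample):

```python
ATLAS_SIZE = 1024  # shadow atlas resolution in pixels
TILE_SIZE = 512    # resolution of one shadow split

def atlas_viewport(split_index):
    """Pixel rectangle (x, y, width, height) of a split's tile in the 2x2 grid."""
    x = (split_index % 2) * TILE_SIZE
    y = (split_index // 2) * TILE_SIZE
    return (x, y, TILE_SIZE, TILE_SIZE)

def atlas_uv_scale_offset(split_index):
    """Scale and offset mapping [0,1]^2 shadow map UVs into the tile's atlas UVs."""
    x, y, w, h = atlas_viewport(split_index)
    return (w / ATLAS_SIZE, h / ATLAS_SIZE), (x / ATLAS_SIZE, y / ATLAS_SIZE)
```

The scale/offset pair is what the shadow mapping shader needs in order to look up the correct tile for a given split.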

As for the shader used during this render pass, it’s nothing special: It just transforms each vertex into shadow clip space and then writes out the depth value as color – like any other shadow mapping shader:

float4x4 World;
float4x4 ViewProjection;

struct VertexShaderInput
{
    float4 Position        : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position        : POSITION0;
    float  Depth           : COLOR0;
};

VertexShaderOutput ShadowVS(VertexShaderInput input)
{
    VertexShaderOutput output = (VertexShaderOutput)0;

    float4 worldPosition = mul(input.Position, World);
    output.Position = mul(worldPosition, ViewProjection);
    output.Depth = output.Position.z / output.Position.w;

    return output;
}

float4 ShadowPS(VertexShaderOutput input) : COLOR0
{
    return float4( input.Depth, 0, 0, 0 );
}

Check out the images below: On the left you can see the scene and the view frustum and on the right the corresponding shadow atlas.

Obviously, the split distances don’t match the arena dimensions very well, but I’ll get into that in another post.

Cascaded Shadow Maps (1)

As mentioned in my previous post, cascaded shadow mapping is a technique to shadow large areas at reasonable memory and run-time cost. The basic idea is simple: Split the view frustum into several sub frustums and render a shadow map for each split. Since, naturally, splits closer to the viewer cover less area in world space (i.e. under perspective projection) you get better shadow map resolution close to the camera and less resolution further away. Check out the image below: Each split is visualized in a different color and the corresponding shadow map has been filled with a block pattern. All shadow maps are of the same resolution. You can clearly see how shadow maps closer to the viewer (red and green) allocate much higher detail per world space unit than shadow maps further away from the viewer (blue and yellow). This gives us a highly desired quality: Lots of resolution/details close to the viewer and less far away – without the need for a huge shadow map.
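To put a rough number on that claim, consider four shadow maps of 512×512 texels each, covering splits whose world-space footprint grows with distance. The split widths below are made-up illustrative values, not taken from the sample:

```python
TILE = 512  # shadow map resolution per split (texels)

# Hypothetical world-space width covered by each split's shadow map,
# growing with distance from the viewer.
split_widths = [25.0, 50.0, 100.0, 200.0]

# texels of shadow map per world-space unit, per split
densities = [TILE / w for w in split_widths]
```

In this example the nearest split packs eight times as many texels per world space unit as the farthest one, even though all four shadow maps have the same resolution.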

So how can we get this done? First we have to take the same steps as in regular shadow mapping: Define the shadow casting light i.e. light direction and position and the corresponding projection into the shadow map. In my case I chose to simulate sun light which can be represented by a simple orthogonal projection. I create the shadow transform like so:

// shadow view matrix
mShadowView = Matrix.CreateLookAt(mLightPosition, mLightPosition + mLightDirection, Vector3.Up);

// determine shadow projection based on model bounding sphere
{
    var center =  Vector3.Transform(arena.BoundingSphere.Center, mShadowView);

    var min = center - new Vector3(arena.BoundingSphere.Radius);
    var max = center + new Vector3(arena.BoundingSphere.Radius);

    mShadowProjection = Matrix.CreateOrthographicOffCenter( min.X, max.X, min.Y, max.Y, -max.Z, -min.Z);
}

mShadowTransform = mShadowView * mShadowProjection;

So the shadow view matrix is a simple look-at matrix placed at the light position mLightPosition and looking towards mLightDirection. As up vector I just use the y-axis, which works fine as long as mLightDirection doesn’t point straight up. With this definition I can transform the model’s bounding sphere center into shadow space (i.e. compute the position of the bounding sphere center relative to the light) and obtain the minimal axis aligned bounding box (in shadow space) enclosing the sphere by simply adding and subtracting the bounding sphere radius along each coordinate axis. The resulting vectors min and max thus give us the shadow frustum we need to project into our shadow map in order to cover the whole model. Note that view space looks along the negative z axis, so we need to negate the Z values.
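The same min/max construction can be written down independently of XNA. The following Python sketch (function name and parameters are mine) takes a bounding sphere center already transformed into light space and returns the six orthographic bounds, including the negated Z values discussed above:

```python
def ortho_bounds_from_sphere(center_ls, radius):
    """Axis-aligned orthographic frustum (left, right, bottom, top, near, far)
    enclosing a bounding sphere given in light/view space. View space looks
    down -Z, so near/far come from the negated Z extents."""
    cx, cy, cz = center_ls
    min_v = (cx - radius, cy - radius, cz - radius)
    max_v = (cx + radius, cy + radius, cz + radius)
    left, right = min_v[0], max_v[0]
    bottom, top = min_v[1], max_v[1]
    near, far = -max_v[2], -min_v[2]  # negate Z: the box [-12,-8] becomes [8,12]
    return (left, right, bottom, top, near, far)
```

A sphere of radius 2 centered 10 units in front of the light (i.e. at z = -10 in light space) yields near = 8 and far = 12, as expected.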

The resulting matrix mShadowTransform thus gives us the projection of world space coordinates into our shadow map. Now, in cascaded shadow mapping we have not one, but multiple shadow maps. We therefore need to define multiple versions of mShadowTransform, one for each shadow split. And we also need to align the splits to the view frustum. So let’s start with the view frustum: Let’s say we split the frustum at constant distances from the viewer, say the first split should range from -1 to -100, the second split from -100 to -300 and the third split from -300 to -600. Given these split distances we can use the functionality described in the post Frustum Splits to figure out the world space positions of the sub frustum corners for each split. Once we know the world space positions for each split we’re almost done: We just need to adjust the projection matrix to focus on the split frustum.

// determine clip space split distances
var splitDistances = new[] {-1, -100.0f, -300.0f, -600 }
    .Select(d =>
    {
        var c = Vector4.Transform(new Vector3(0, 0, d), camera.projectionMatrix);
        return c.Z / c.W;
    })
    .ToArray();

// determine split projections
var splitData = Enumerable.Range(0, mNumShadowSplits).Select(i =>
{
    var n = splitDistances[i];
    var f = splitDistances[i + 1];

    // get frustum split corners and transform into shadow space
    var viewProjInverse = Matrix.Invert(camera.viewMatrix * camera.projectionMatrix);
    var frustumCorners = splitFrustum(n, f, viewProjInverse)
        .Select(v => Vector3.Transform(v, mShadowView));

    var min = frustumCorners.Aggregate((v1, v2) => Vector3.Min(v1, v2));
    var max = frustumCorners.Aggregate((v1, v2) => Vector3.Max(v1, v2));
        
    // determine the min/max z values based on arena bounding box
    var arenaBB = GeometryHelper.transformBoundingBox(arena.CollisionData.geometry.boundingBox, mShadowView);
    var minZ = -arenaBB.Max.Z;
    var maxZ = -arenaBB.Min.Z;

    // return orthographic projection
    return new
    {
        Distance = f,
        Projection = Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, minZ, maxZ)
    };
}).ToArray();

// compute final split transforms
ShadowSplitProjections = splitData.Select(s => mShadowView * s.Projection).ToArray();
ShadowSplitDistances = splitData.Select(s => s.Distance).ToArray();

This gives us an array ShadowSplitProjections of matrices, each of which projects the scene into one shadow map from the light’s point of view. Note that I also store the clip space split distances, i.e. the view space split distances from above converted into clip space. Have a look at the images below:

The images on the left show the view frustum split into three sub frustums, while the images on the right show the shadow transform’s frustum for each split.

This concludes the first step of implementing cascaded shadow mapping. In the next post I’ll show how to finally render the shadow maps and how to project them onto the scene. Hope you find it as interesting as I do 🙂

Transform Axis Aligned Bounding Boxes

Axis aligned bounding boxes are quite often used in computer graphics, for example for fast collision detection. Often, you need to transform a given axis aligned bounding box by some matrix and then convert it into an axis aligned bounding box in the resulting coordinate space again. Here’s an efficient algorithm to do so:

public static BoundingBox transformBoundingBox(BoundingBox boundingBox, Matrix m)
{
    var xa = m.Right * boundingBox.Min.X;
    var xb = m.Right * boundingBox.Max.X;

    var ya = m.Up * boundingBox.Min.Y;
    var yb = m.Up * boundingBox.Max.Y;

    var za = m.Backward * boundingBox.Min.Z;
    var zb = m.Backward * boundingBox.Max.Z;

    return new BoundingBox(
        Vector3.Min(xa, xb) + Vector3.Min(ya, yb) + Vector3.Min(za, zb) + m.Translation,
        Vector3.Max(xa, xb) + Vector3.Max(ya, yb) + Vector3.Max(za, zb) + m.Translation
    );
}

Note that Matrix.Right returns the first three components of the Matrix’s first column vector, i.e. (M_{11}\ M_{21}\ M_{31})^T, and Matrix.Up and Matrix.Backward return the second and third column vectors respectively. So looking at the source code above, all you have to do is multiply the bounding box min and max values in each coordinate direction by the corresponding Matrix column vector and then combine the results. Why does this work?

Let’s think about the definition of an axis aligned bounding box: It can be described by a center position \mathbf{c} = (c_x c_y c_z)^T and offsets for each coordinate axis \mathbf{r} = (r_x r_y r_z)^T. We can then write the box’s delimiting positions as follows:

\mathbf{B} = \left[ \mathbf{\min} \begin{pmatrix} c_x \pm r_x \\ c_y \pm r_y \\ c_z \pm r_z \end{pmatrix}, \mathbf{\max} \begin{pmatrix} c_x \pm r_x \\ c_y \pm r_y \\ c_z \pm r_z \end{pmatrix} \right] = \left[ \mathbf{c} - \mathbf{r}, \mathbf{c} + \mathbf{r} \right]

Note that the \mathbf{\min} and \mathbf{\max} operators are applied to each dimension independently. Since each dimension’s offset in \mathbf{r} is added and subtracted independently of the other dimensions, the \mathbf{\min} and \mathbf{\max} operators range over 2 \cdot 2 \cdot 2 = 2^3 = 8 corner positions. So, let’s transform these positions by some matrix \mathbf{M} and then convert the resulting corners into an axis aligned bounding box \mathbf{B}' again by finding the minimum and maximum values in each coordinate direction:

\mathbf{B}' = \left[\mathbf{\min} \bigg(\mathbf{M}\begin{pmatrix} c_x \pm r_x \\ c_y \pm r_y \\ c_z \pm r_z \\ 1 \end{pmatrix} \bigg), \mathbf{\max} \bigg( \mathbf{M}\begin{pmatrix} c_x \pm r_x \\ c_y \pm r_y \\ c_z \pm r_z \\ 1 \end{pmatrix} \bigg) \right]

In my previous post Rant: Matrix Layouts I pointed out that we can split up matrix vector products into a sum over the product of the vector’s components with individual matrix columns. Let’s try this here, just for the \mathbf{\min} vector \mathbf{B}'_{min} of \mathbf{B}' first:

\mathbf{\min} \bigg(\mathbf{M}\begin{pmatrix} c_x \pm r_x \\ c_y \pm r_y \\ c_z \pm r_z \\ 1 \end{pmatrix}\bigg) = \mathbf{\min} \bigg( \mathbf{M}_{|1}(c_x \pm r_x) + \mathbf{M}_{|2}(c_y \pm r_y) + \mathbf{M}_{|3}(c_z \pm r_z) + \mathbf{M}_{|4}\bigg)

Note how something magic happened here: each component of \mathbf{c} and \mathbf{r} appears only once in the resulting equation and we only have two possible values for each component! Now we just need to split up the \mathbf{\min} operator:

\mathbf{B}'_{min} = \mathbf{\min} \bigg( \mathbf{M}_{|1}(c_x \pm r_x) \bigg) + \mathbf{\min} \bigg(\mathbf{M}_{|2}(c_y \pm r_y)\bigg) + \mathbf{\min} \bigg(\mathbf{M}_{|3}(c_z \pm r_z)\bigg) + \mathbf{M}_{|4}

and we are done: We successfully decomposed the minimum over all eight bounding box coordinates into a component-wise minimum and a sum over all dimensions. Obviously, the same procedure holds for the maximum bounding box extent as well. Have another look at the code above again: It performs the very computation mentioned just before.
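The decomposition is easy to verify numerically: transform all eight corners the brute-force way and compare against the per-axis min/max sums. A Python sketch (all names are mine; the matrix is given as three column vectors plus a translation, mirroring Right/Up/Backward/Translation):

```python
from itertools import product

def transform_aabb(bb_min, bb_max, cols, translation):
    """Transform an AABB using the per-axis min/max decomposition from the text.
    cols holds the three matrix column vectors, translation the fourth column."""
    lo = list(translation)
    hi = list(translation)
    for axis in range(3):
        # scale the axis' column by the min and max coordinate of that axis
        a = [c * bb_min[axis] for c in cols[axis]]
        b = [c * bb_max[axis] for c in cols[axis]]
        for k in range(3):
            lo[k] += min(a[k], b[k])
            hi[k] += max(a[k], b[k])
    return tuple(lo), tuple(hi)

def transform_aabb_bruteforce(bb_min, bb_max, cols, translation):
    """Reference: transform all 8 corners and take the component-wise min/max."""
    corners = []
    for xs in product([bb_min[0], bb_max[0]], [bb_min[1], bb_max[1]],
                      [bb_min[2], bb_max[2]]):
        p = [translation[k] + sum(cols[a][k] * xs[a] for a in range(3))
             for k in range(3)]
        corners.append(p)
    lo = tuple(min(c[k] for c in corners) for k in range(3))
    hi = tuple(max(c[k] for c in corners) for k in range(3))
    return lo, hi
```

Both paths produce the same box, e.g. for a 90° rotation about the z axis plus a translation, while the decomposed version touches each axis only once instead of visiting all eight corners.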

Frustum Splits

Recently, I have picked up work on one of my older projects again: Bounce. While being quite an elegant demonstration of collision detection via Binary Space Partitioning Trees, it lacks quite a bit on the visual side. So I decided to add some real-time shadows to the scene, implementing cascaded shadow mapping as it is one of the most widely used approaches for rendering real-time shadows in games. I will post my thoughts on this blog and, of course, share the code. So check back every now and then in case you’re interested!

When it comes to cascaded shadow mapping, the first thing you have to do is split the view frustum into multiple parts. Each split will receive its own shadow map, based on the idea that splits closer to the viewer cover less area and hence offer higher shadow map resolution. Check out the image below: The view frustum is partitioned into four splits where each split is rendered in a different color.

But how can you calculate the coordinates of each sub frustum given the current camera? Consider the camera representation used in most games: a view matrix \mathbf{V} which transforms the geometry into view (eye) space and a projection matrix \mathbf{P}, combined with perspective division, which projects the resulting view space geometry into clip space. So a world space vertex \mathbf{v} is transformed into its clip space position \mathbf{v}' like so:

\tilde{\mathbf{v}} = \mathbf{V} \mathbf{P} \mathbf{v} \qquad \mathbf{v}' = \tilde{\mathbf{v}}/\tilde{v}_w \qquad \qquad (1)

In DirectX, clip space is defined as [-1,1] \times [-1,1] \times [0,1]. Being a (scaled) unit cube, it’s really easy to split clip space into sub frustums: Simply pick the corners (\pm 1, \pm 1) and the desired split depths (n,f) and the following points define your axis aligned frustum box:

\mathbf{p}_{min} = \begin{pmatrix}-1\\-1\\n\end{pmatrix} \qquad \mathbf{p}_{max} = \begin{pmatrix}1 \\ 1 \\ f \end{pmatrix}

Note that a little care has to be taken when picking the split depth values, as the distribution of z values in clip space is non-linear. Anyway, having seen that it’s easy to define the split frustums in clip space, all we need to do now is convert clip space positions back to world space. This can be done by multiplying the clip space position by the inverse view-projection transform and subsequently converting the result from homogeneous coordinates to regular three dimensional coordinates:

\tilde{\mathbf{v}} = (\mathbf{V} \mathbf{P})^{-1} \mathbf{v}' \qquad \mathbf{v} = \tilde{\mathbf{v}}/\tilde{v}_w

Let’s see some code! The following function computes the corners of a split frustum in world space, given the distances of the near and far planes in clip space:

public IEnumerable<Vector3> splitFrustum(float clipSpaceNear, float clipSpaceFar, 
                                            Matrix viewProjectionInverse)
{
    var clipCorners = new[]
    {
        new Vector3( -1,  1, clipSpaceNear ),
        new Vector3(  1,  1, clipSpaceNear ), 
        new Vector3(  1, -1, clipSpaceNear ),
        new Vector3( -1, -1, clipSpaceNear ), 
        new Vector3( -1,  1, clipSpaceFar  ),
        new Vector3(  1,  1, clipSpaceFar  ),
        new Vector3(  1, -1, clipSpaceFar  ),
        new Vector3( -1, -1, clipSpaceFar  )
    };

    return clipCorners.Select(v =>
    {
        var vt = Vector4.Transform(v, viewProjectionInverse);
        vt /= vt.W;

        return new Vector3(vt.X, vt.Y, vt.Z);
    });
}

The only downside of this method is that we need to know the values clipSpaceNear and clipSpaceFar of the near and far plane in clip space – usually you only know them in view space. Not much of an issue though, as we can use formula (1) to convert view space depth into clip space.

float[] viewSpaceDepth = {-50.0f, -500.0f};
var clipSpaceDepth = viewSpaceDepth.Select(c =>
{
    var d = Vector4.Transform(new Vector3(0, 0, c), camera.projectionMatrix);
    return d.W != 0 ? d.Z / d.W : 0; 
}).ToArray();

Matrix viewProjInverse = Matrix.Invert(camera.viewMatrix * camera.projectionMatrix);
var frustumCorners = splitFrustum(clipSpaceDepth[0], clipSpaceDepth[1], 
                                  viewProjInverse).ToArray();
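If you want to sanity-check the view-space-to-clip-space depth conversion without running the XNA code, the z and w components of the transform can be written out by hand. The sketch below (function name mine) assumes a D3D/XNA-style right-handed perspective projection with near plane n and far plane f, whose relevant matrix entries are f/(n-f) and nf/(n-f):

```python
def perspective_clip_depth(d, n, f):
    """Clip space depth in [0,1] for a view space depth d (negative, since view
    space looks down -Z), under a right-handed D3D-style perspective projection."""
    clip_z = d * f / (n - f) + n * f / (n - f)
    clip_w = -d
    return clip_z / clip_w
```

The near plane maps to 0 and the far plane to 1, but a point only one sixth of the way into the frustum already lands beyond 0.99, illustrating the non-linear distribution of clip space z values.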

One of the big advantages of this method is the fact that it works with arbitrary projection transforms, like for example orthographic projections as shown in the image below:
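To illustrate that the method is projection-agnostic, here is a minimal Python sketch that unprojects clip space corners through a symmetric right-handed orthographic projection inverted by hand. It assumes an identity view matrix, so the unprojected corners are directly in world space; all names are mine:

```python
def unproject_ortho(clip, hw, hh, n, f):
    """Invert a symmetric right-handed orthographic projection by hand:
    x' = x/hw, y' = y/hh, z' = (z + n)/(n - f), where hw/hh are the half-width
    and half-height of the ortho volume and n/f the near/far plane distances."""
    cx, cy, cz = clip
    return (cx * hw, cy * hh, cz * (n - f) - n)

def split_frustum_ortho(clip_near, clip_far, hw, hh, n, f):
    """World space corners of a sub-frustum given its clip space depth range."""
    clip_corners = [(sx, sy, cz)
                    for cz in (clip_near, clip_far)
                    for sx in (-1, 1)
                    for sy in (-1, 1)]
    return [unproject_ortho(c, hw, hh, n, f) for c in clip_corners]
```

For an orthographic projection the resulting sub-frustums are simple boxes: the full clip depth range [0, 1] maps straight back to the view space range [-n, -f].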

QTangents

In their 2011 SIGGRAPH presentation Spherical Skinning with Dual-Quaternions and QTangents, Crytek proposed a highly efficient way of representing tangent space per vertex: Instead of storing the basis vectors, they store a quaternion representing the tangent space rotation and reconstruct the basis vectors in the shader. Being a simple and straightforward idea, I decided to implement this approach in the anima skinning sample.

As a first step, the tangent space vectors need to be converted into quaternion form. I did so by assembling a rotation matrix that represents the rotation of an arbitrary world space vector into tangent space:

\mathbf{T} = \left(\begin{array}{l}\mathbf{B}_{binormal} \\ \mathbf{B}_{tangent} \\ \mathbf{B}_{normal} \end{array}\right)

where \mathbf{B}_{binormal} denotes the tangent space binormal vector (in row form!) etc. This matrix can then be converted into a quaternion by one of the standard algorithms. Note, however, that special care has to be taken during this step: First of all, \mathbf{T} might not be a proper rotation matrix. It is reasonable to assume that orthonormality is guaranteed, i.e.

\mathbf{T}\mathbf{T}^T = \mathbf{T}^T \mathbf{T} = \mathbf{I}

for the identity matrix \mathbf{I}, but \mathbf{T} might still encode a reflection. If this is the case, the matrix-to-quaternion conversion will fail because unit quaternions cannot encode reflections. Fortunately, the matrix can be converted into a regular rotation by simply picking any basis vector and reflecting it as well. In my case I chose to always reflect the \mathbf{B}_{normal} vector. Have a look at the figure below, which illustrates the case where \mathbf{B}_{binormal} is reflected.

Note that after the reflection of the normal, handedness is restored. Of course, reconstructing the resulting quaternion yields a tangent space with a flipped normal vector, so we need to un-flip it before we can use it in any shading algorithm! Thus, we need to store a flag along with each quaternion that indicates whether the normal vector needs to be flipped after reconstruction. And this is where the smart guys from Crytek scored their points: Realizing that for a given quaternion \mathbf{q} its negation -\mathbf{q} represents the exact same rotation, we can enforce non-negativity of any chosen quaternion element without any impact on the reconstructed tangent space. But this also means that we can use the sign of that very element to store our reflection flag! This brings the memory requirements for the whole tangent frame down to 4 floating point values. Not bad! The only thing that can go wrong now is when the chosen quaternion element is zero. In theory this should not be an issue because IEEE 754 distinguishes between +0 and -0, but GPUs don’t always stick to this rule. In this case we can set the value of this element to a very small bias, say 1e-7. Here’s how I implemented the tangent space matrix to QTangent conversion:

// generate tangent frame rotation matrix
Math::Matrix3x4 tangentFrame(
    binormal.x,    binormal.y,    binormal.z,    0,
    tangent.x,     tangent.y,     tangent.z,     0,
    normal.x,      normal.y,      normal.z,      0
);

// flip the normal (third row) in case the tangent frame encodes a reflection
float scale = tangentFrame.Determinant() < 0 ? -1.0f : 1.0f;

tangentFrame.data[2][0] *= scale;
tangentFrame.data[2][1] *= scale;
tangentFrame.data[2][2] *= scale;

// convert to quaternion
Math::Quaternion tangentFrameQuaternion = tangentFrame;

// make sure we don't end up with 0 as w component
{
    const float threshold = 0.000001f;
    const float renormalization = sqrt( 1.0f - threshold * threshold );

    if( abs( tangentFrameQuaternion.data.w ) < threshold )
    {
        tangentFrameQuaternion.data.w =  tangentFrameQuaternion.data.w >= 0
                                            ? threshold
                                            : -threshold;
        tangentFrameQuaternion.data.x *= renormalization;
        tangentFrameQuaternion.data.y *= renormalization;
        tangentFrameQuaternion.data.z *= renormalization;
    }
}

// encode the reflection in the quaternion's w element: make the sign of w
// negative if the normal needs to be flipped, positive otherwise
float qs = (scale<0 && tangentFrameQuaternion.data.w>0.f) || 
           (scale>0 && tangentFrameQuaternion.data.w<0) ? -1.f : 1.f;

tangentFrameQuaternion.data.x *= qs;
tangentFrameQuaternion.data.y *= qs;
tangentFrameQuaternion.data.z *= qs;
tangentFrameQuaternion.data.w *= qs;

On a side note: As implemented in the code above, reflection properties of a matrix can be detected via the matrix’s determinant, i.e. if a matrix \mathbf{T} encodes a reflection then

det(\mathbf{T}) = -1

holds. In order to obtain the world space tangent frame vectors we need to rotate the tangent frame quaternion into world space first. This can be done by concatenating it with the vertex’s blended bone transform. In the case of dual quaternion bone transforms we simply need to multiply the real part \mathbf{q}_r of the bone dual quaternion \hat{\mathbf{q}} = \mathbf{q}_r + \epsilon \mathbf{q}_d with the tangent space quaternion \mathbf{t}:

\mathbf{t}' = \mathbf{q}_r \, \mathbf{t}

The tangent space vectors can then be reconstructed via a standard quaternion to matrix routine. Note that it is advisable to only reconstruct two of the three basis vectors and recompute the third via a cross product in order to guarantee orthonormality.

float2x3 QuaternionToTangentBitangent( float4 q )
{
    return float2x3(
      1-2*(q.y*q.y + q.z*q.z),  2*(q.x*q.y + q.w*q.z),    2*(q.x*q.z - q.w*q.y),
      2*(q.x*q.y - q.w*q.z),    1-2*(q.x*q.x + q.z*q.z),  2*(q.y*q.z + q.w*q.x)
    );
}

float3x3 GetTangentFrame( float4 worldTransform, TangentFrame tangentFrame )
{
    float4 q = QuaternionMultiply( worldTransform,  tangentFrame.Rotation );
    float2x3 tBt = QuaternionToTangentBitangent( q );

    return float3x3(
        tBt[0],
        tBt[1],
        cross(tBt[0],tBt[1]) * (tangentFrame.Rotation.w < 0 ? -1 : 1)
    );
}
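The reconstruction math can be round-tripped outside the shader. The Python sketch below mirrors QuaternionToTangentBitangent (quaternion given as (x, y, z, w); function names are mine) and recovers the normal via the cross product, here for a 90° rotation about the z axis:

```python
import math

def quat_to_tangent_bitangent(q):
    """First two rows of the rotation matrix for quaternion q = (x, y, z, w),
    mirroring the shader's QuaternionToTangentBitangent."""
    x, y, z, w = q
    t = (1 - 2*(y*y + z*z), 2*(x*y + w*z), 2*(x*z - w*y))
    b = (2*(x*y - w*z), 1 - 2*(x*x + z*z), 2*(y*z + w*x))
    return t, b

def cross(a, b):
    """Standard 3D cross product, used to recover the normal."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# 90 degree rotation about the z axis: the x axis maps to y, y maps to -x,
# and the recovered normal stays on the z axis.
q = (0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
tangent, bitangent = quat_to_tangent_bitangent(q)
normal = cross(tangent, bitangent)
```

In the shader, the cross product is additionally multiplied by the sign of the stored w component to undo the normal flip for reflected tangent frames.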
