Normal Mapping

Normal mapping is a highly popular technique in video game development and other real-time rendering applications where the polygon count needs to be kept as low as possible. It allows us, at reasonable cost, to add high-frequency surface detail to arbitrary surfaces of low polygon count. The basic idea is simple: encode the extra detail in an additional texture and use it during shading.

In the case of normal mapping we simulate additional surface detail by storing the orientation of the surface normal at each texel. This gives us much finer-grained normal information than vertex normals, since the texture covers the model at a much higher resolution. The straightforward way to accomplish this would be to simply store the exact surface normal at each texel. At run-time, all we'd have to do is sample the texture in the fragment shader and replace the interpolated normal with the sampled one. This approach is usually called Object Space Normal Mapping, as the normals are defined relative to the whole object. While simple and efficient, this approach has some significant drawbacks: we cannot animate the vertices of the model, as this would change their world space orientation, and we cannot reuse the same map on mirrored geometry. As a result, most games and applications use Tangent Space Normal Mapping where, as the name implies, the normals are specified relative to the object's local tangent space.

Have a look at the image above: We define the local tangent space by the orthogonal basis vectors 'normal', 'tangent' and 'binormal'. Relative to this coordinate system, we can represent any other normal vector (in red in the image above) by a linear combination of the three basis vectors:

\mathbf{n}' = b \mathbf{B}_{binormal} + t \mathbf{B}_{tangent} + n \mathbf{B}_{normal}

where \mathbf{B}_{binormal} denotes the basis vector called 'binormal', and so on. We will thus store the coefficients of the linear combination, \{b, t, n\}, in our normal map and evaluate the equation above in the fragment shader. But where do we get the basis vectors from? Usually the tangent space basis vectors are computed during model export or import. The standard procedure aligns tangent and binormal with the directions of the UV coordinates and computes the normal vector as the cross product of tangent and binormal; details can be found in various publications. Once the tangent space basis vectors are known for each vertex, they are passed to the vertex shader, which transforms them into world space according to its associated world space transform and then passes them on to the fragment shader. So, let's have a look at some code.
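First, as a rough illustration of the export-time computation mentioned above (a minimal sketch, not the code used in this project, assuming simple Vec2/Vec3 helper types), here is how the per-triangle tangent frame can be derived from a triangle's position and UV deltas:

// Minimal helper types for the sketch; the real pipeline uses Assimp's aiVector3D.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

Vec3 operator-( Vec3 a, Vec3 b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 operator*( Vec3 a, float s ) { return { a.x * s, a.y * s, a.z * s }; }
Vec3 cross( Vec3 a, Vec3 b )
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Tangent frame of one triangle (p0, p1, p2) with UVs (uv0, uv1, uv2).
// Solves  e1 = du1 * T + dv1 * B  and  e2 = du2 * T + dv2 * B  for T and B,
// i.e. it aligns tangent and binormal with the U and V directions.
void ComputeTangentFrame( Vec3 p0, Vec3 p1, Vec3 p2,
                          Vec2 uv0, Vec2 uv1, Vec2 uv2,
                          Vec3& tangent, Vec3& binormal, Vec3& normal )
{
    const Vec3 e1   = p1 - p0;
    const Vec3 e2   = p2 - p0;
    const float du1 = uv1.x - uv0.x, dv1 = uv1.y - uv0.y;
    const float du2 = uv2.x - uv0.x, dv2 = uv2.y - uv0.y;

    const float r = 1.0f / ( du1 * dv2 - du2 * dv1 );  // assumes non-degenerate UVs
    tangent  = ( e1 * dv2 - e2 * dv1 ) * r;
    binormal = ( e2 * du1 - e1 * du2 ) * r;
    normal   = cross( tangent, binormal );             // normal as the cross product
}

In practice these per-triangle frames are averaged per vertex and re-orthogonalized; when importing through Assimp, the aiProcess_CalcTangentSpace post-processing step can generate them for you.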

In order to get normal, tangent and binormal data into the DirectX vertex buffer, I added converters for each vector to the mesh importer:

if( mesh->HasTangentsAndBitangents() )
{
    // normals -> TEXCOORD5
    converters.push_back( new aiVector3DConverter( D3DDECLUSAGE_TEXCOORD, 5, 
        offset, mesh->mNormals, vertexCount ) );
    offset += converters.back()->Size();

    // tangents -> TEXCOORD6
    converters.push_back( new aiVector3DConverter( D3DDECLUSAGE_TEXCOORD, 6, 
        offset, mesh->mTangents, vertexCount ) );
    offset += converters.back()->Size();

    // bitangents -> TEXCOORD7
    converters.push_back( new aiVector3DConverter( D3DDECLUSAGE_TEXCOORD, 7, 
        offset, mesh->mBitangents, vertexCount ) );
    offset += converters.back()->Size();
}
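For reference, the matching D3D9 vertex declaration entries would look roughly like this; the offsets shown here are placeholders, since the real ones are accumulated by the converters above:

#include <d3d9.h>

// Hypothetical excerpt of the vertex declaration for the tangent frame data.
D3DVERTEXELEMENT9 tangentFrameElements[] =
{
    // stream, offset, type,            method,                usage,                 usage index
    { 0, 48, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 5 },  // normal
    { 0, 60, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 6 },  // tangent
    { 0, 72, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 7 },  // binormal
    D3DDECL_END()
};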

Note that they are sent to the GPU via the TEXCOORD5 through TEXCOORD7 semantics. The vertex shader input needed to reflect this fact, so I extended the VertexShaderInput struct with the definition of tangent frame data:

struct TangentFrame
{
    float3 Normal     : TEXCOORD5;
    float3 Tangent    : TEXCOORD6;
    float3 Binormal   : TEXCOORD7;
};

struct VertexShaderInput
{
    float4 Position                    : POSITION;
    TangentFrame TangentFrameData;
    float2 TexCoord                    : TEXCOORD0;
    float4 BlendWeights                : BLENDWEIGHT0;
    uint4 BlendIndices                 : BLENDINDICES0;
};

The vertex shader itself now needs to transform the tangent space vectors into world space and pass them on to the pixel shader:

struct VertexShaderOutput
{
    float4 Position                    : POSITION0;
    float2 TexCoord                    : TEXCOORD0;
    float3x3 TangentFrame              : TEXCOORD2;
};

VertexShaderOutput Model_VS( VertexShaderInput input )
{
    VertexShaderOutput result;
    result.TexCoord = input.TexCoord;

    // blend the bone transforms according to the vertex' skinning weights
    float3x4 blendedTransform =
        BoneTransforms[input.BlendIndices.x] * input.BlendWeights.x +
        BoneTransforms[input.BlendIndices.y] * input.BlendWeights.y +
        BoneTransforms[input.BlendIndices.z] * input.BlendWeights.z +
        BoneTransforms[input.BlendIndices.w] * input.BlendWeights.w;

    // position
    {
        float4 posH = float4( input.Position.xyz, 1.f );
        float4 blendedPosition = mul( posH, blendedTransform );

        result.Position = mul( blendedPosition, ViewProjection );
    }

    // tangent space
    {
        // basis vectors as rows: binormal, tangent, normal
        float3x3 tf = float3x3(
            input.TangentFrameData.Binormal,
            input.TangentFrameData.Tangent,
            input.TangentFrameData.Normal
         );

        // mul( tf, transpose( M ) ) applies M to each row vector,
        // rotating the three basis vectors into world space
        result.TangentFrame = mul( tf, transpose( (float3x3)blendedTransform ) );
    }

    return result;
}

The tangent space vectors, now in world space, are interpolated across the current polygon and passed on to the fragment shader. In the fragment shader we sample the normal map and compute the surface normal according to the equation above. Beware of an implementation detail: depending on the texture format, individual texels store values in the range [0, 1], while our tangent space coefficients can range over [-1, 1]. So we need to remap the texel values before plugging them into the equation above.
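Also note that with the three basis vectors stored as the rows of a 3x3 matrix, evaluating the linear combination boils down to a single vector-matrix product:

\mathbf{n}' = \begin{pmatrix} b & t & n \end{pmatrix} \begin{pmatrix} \mathbf{B}_{binormal} \\ \mathbf{B}_{tangent} \\ \mathbf{B}_{normal} \end{pmatrix}

which is exactly what the mul() call in the pixel shader below computes: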

float4 Model_PS( VertexShaderOutput input ) : COLOR0
{
    // remap the texel values from [0, 1] to [-1, 1] and evaluate the linear
    // combination of the (world space) tangent frame vectors
    float3 textureNormal = tex2D( NormalSampler, input.TexCoord ).xyz * 2 - 1;
    float3 normal = normalize( mul( textureNormal, input.TangentFrame ) );

    const float ambient = 0.75f;
    const float diffuseFactor = 0.25f;

    // simple Lambert term
    float diffuse = dot( normal, LightDirection.xyz );
    float4 textureColor = tex2D( DiffuseSampler, input.TexCoord );

    return textureColor * (ambient + diffuse * diffuseFactor);
}

Let's look at the result in pictures: below you can see the albedo texture for Frank's shoes and the corresponding normal map.
Next you can see the Frank model rendered with a simple Lambert shader, with and without the normal map. You can clearly make out the additional detail the normal map adds to the model.
