# Dual Quaternion Skinning

In the last two posts we’ve reviewed skeletal animation and smooth skinning based on matrices (which, incidentally, is usually called linear blend skinning), so now we’re ready to look at an alternative: dual quaternion skinning. Linear blend skinning has some serious disadvantages that come down to one simple truth: the weighted average $\mathbf{W}'$ of all bone transforms that influence one vertex

$\mathbf{v}' = \sum_i w_i \mathbf{W}_i(\mathbf{v}) = \underbrace{(\sum_i w_i \mathbf{W}_i )}_{\mathbf{W}'}(\mathbf{v}) = \mathbf{W}'(\mathbf{v})$

is simply not a satisfying method of blending the transforms of multiple bones. I am going to discuss these issues in more detail in a later post, but for now let’s just say that linearly blending rotation matrices results in a number of undesired artifacts.

Dual quaternions provide a solution for some of these artifacts, as the interpolation of rotations also blends the rotation centers. Let me point you to the following paper, which provides a more in-depth explanation as well as an upper error bound on the accuracy of the rotation interpolation.

Without going into too much detail, let me point out that dual quaternions are an extension to regular quaternions, where two quaternions $\mathbf{q}_r$ and $\mathbf{q}_d$ are concatenated via the dual element $\epsilon$:

$\hat{\mathbf{q}} = \mathbf{q}_r + \epsilon \mathbf{q}_d, \quad \epsilon^2 = 0$

What makes dual quaternions so interesting is the fact that each unit dual quaternion can be interpreted as a rigid body transform representing a rotation and a translation. I am going to spare you the formulas for addition, multiplication, inverse etc. of dual quaternions here; knowing the fundamental property $\epsilon^2 = 0$, we can easily derive them ourselves. As the paper also points out, the concatenation of two rigid body transforms represented by unit dual quaternions is accomplished by multiplying the two dual quaternions. The weighted blending of multiple unit dual quaternions corresponds to an approximate interpolation of the corresponding transforms. Perfect circumstances for skinning!
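As an example of such a derivation, the product of two dual quaternions follows directly from distributing the factors and applying $\epsilon^2 = 0$:

$\hat{\mathbf{q}}_1 \hat{\mathbf{q}}_2 = (\mathbf{q}_{r1} + \epsilon \mathbf{q}_{d1})(\mathbf{q}_{r2} + \epsilon \mathbf{q}_{d2}) = \mathbf{q}_{r1} \mathbf{q}_{r2} + \epsilon (\mathbf{q}_{r1} \mathbf{q}_{d2} + \mathbf{q}_{d1} \mathbf{q}_{r2})$

since the $\epsilon^2 \, \mathbf{q}_{d1} \mathbf{q}_{d2}$ term vanishes.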

So let’s get our hands dirty and dive into some code. In principle, all I am doing is replacing the current representation of a joint transform (a 4×3 matrix) by a dual quaternion. Since I wanted to keep the linear blend skinning code path and avoid copy/pasting lots of code, I decided to templatize the shared code, making the joint transform type a template parameter. So as a first step I updated the skeleton class:

```cpp
template<class BoneTransform>
class SkeletonGeneric
{
    std::vector<int> mParents;
    std::vector<BoneTransform> mTransforms;
    std::vector<BoneTransform> mBindingTransforms;

    virtual BoneTransform Concatenate( const BoneTransform& first,
                                       const BoneTransform& second ) const
    {
        return second * first;
    }

    ...
};
```


Note the new function Concatenate(), which is from now on used wherever two bone transforms need to be concatenated. I provide a default implementation that just applies the multiplication operator BoneTransform::operator*, but this definition can easily be overridden to do more specialized things.
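To illustrate how such a hook gets used, here is a minimal sketch (the stand-in transform type and the world-transform pass are mine for illustration, not the post’s actual code) where bones are composed only ever through Concatenate():

```cpp
#include <cstddef>
#include <vector>

// Illustrative stand-in bone transform: a 1-D offset, so that
// concatenation (offset addition) is easy to check by hand.
struct Offset
{
    float t = 0.0f;
    Offset operator*( const Offset& rhs ) const { return { t + rhs.t }; }
};

template<class BoneTransform>
class SkeletonGeneric
{
public:
    std::vector<int> mParents;               // parent joint index, -1 = root
    std::vector<BoneTransform> mTransforms;  // local joint transforms

    virtual BoneTransform Concatenate( const BoneTransform& first,
                                       const BoneTransform& second ) const
    {
        return second * first;
    }

    // Parents are assumed to precede their children, so a single
    // forward pass turns local transforms into world transforms.
    std::vector<BoneTransform> ComputeWorldTransforms() const
    {
        std::vector<BoneTransform> world( mTransforms.size() );
        for ( std::size_t i = 0; i < mTransforms.size(); ++i )
            world[i] = ( mParents[i] < 0 )
                ? mTransforms[i]
                : Concatenate( mTransforms[i], world[mParents[i]] );
        return world;
    }

    virtual ~SkeletonGeneric() = default;
};
```

A derived skeleton can then override Concatenate(), for example to renormalize after multiplying unit dual quaternions, without touching the traversal code.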

Next, I defined a dual quaternion class and implemented all the basic dual quaternion math. Have a look at math.cpp if you’re interested. The dual quaternion class also possesses a couple of specialized constructors that allow conversion of animation data (translation, rotation pairs) and binding transform data (4×3 matrices) into dual quaternion form. I also had to update the renderer and the UI in order to support switching between linear blend skinning and dual quaternion skinning on keypress, but I am going to leave out these boring details.

These steps concluded the CPU implementation: evaluating animation channels, computing joint world space transforms and uploading the transform data to the GPU work just the same as before. The only difference is that the amount of data sent to the GPU every frame is quite a bit smaller: 8 floats instead of 12 per joint.

On the GPU side I needed to update the code for blending the bone transforms and for transforming the vertex position and normal to work with dual quaternions. Blending is quite simple: compute a weighted average of the bone transforms and renormalize:

```hlsl
float2x4 GetBlendedDualQuaternion( uint4 boneIndices, float4 boneWeights )
{
    float2x4 dq0 = GetBoneDualQuaternion( boneIndices.x );
    float2x4 dq1 = GetBoneDualQuaternion( boneIndices.y );
    float2x4 dq2 = GetBoneDualQuaternion( boneIndices.z );
    float2x4 dq3 = GetBoneDualQuaternion( boneIndices.w );

    float2x4 blendedDQ =
        dq0 * boneWeights.x +
        dq1 * boneWeights.y +
        dq2 * boneWeights.z +
        dq3 * boneWeights.w;

    // normalize by the length of the rotation (real) part
    float normDQ = length( blendedDQ[0] );
    return blendedDQ / normDQ;
}
```
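The same blend can be sketched on the CPU for reference (a minimal port, with my own struct layout); note that, as in the shader, both halves are divided by the length of the blended rotation part:

```cpp
#include <cmath>

struct DualQuat
{
    float real[4];  // rotation quaternion (x, y, z, w)
    float dual[4];  // translation-encoding quaternion
};

// Weighted sum of four bone dual quaternions, renormalized by the
// length of the blended real part (mirrors the HLSL version).
DualQuat BlendDualQuats( const DualQuat dq[4], const float w[4] )
{
    DualQuat out = {};
    for ( int i = 0; i < 4; ++i )
        for ( int c = 0; c < 4; ++c )
        {
            out.real[c] += w[i] * dq[i].real[c];
            out.dual[c] += w[i] * dq[i].dual[c];
        }

    float norm = std::sqrt( out.real[0] * out.real[0] + out.real[1] * out.real[1] +
                            out.real[2] * out.real[2] + out.real[3] * out.real[3] );
    for ( int c = 0; c < 4; ++c )
    {
        out.real[c] /= norm;
        out.dual[c] /= norm;
    }
    return out;
}
```

One refinement the code above omits: quaternions double-cover rotations ($\hat{\mathbf{q}}$ and $-\hat{\mathbf{q}}$ encode the same transform), so implementations commonly flip the sign of each bone’s dual quaternion to lie in the same hemisphere as a pivot (e.g. the first bone’s) before summing.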


Similar to regular quaternions, a vertex is transformed via a ‘sandwich’ type operation:

$\mathbf{\hat{v}}' = \mathbf{\hat{q}}\mathbf{\hat{v}}\overline{\mathbf{\hat{q}}}^*$

where $\hat{\mathbf{v}}$ is the vertex in dual quaternion form (just a translation) and $\overline{\hat{\mathbf{q}}}^*$ is the conjugate of $\hat{\mathbf{q}}$ (actually the combination of both types of conjugates that exist for dual quaternions). As shown in the paper, this operation can be simplified to

$\mathbf{v}' = \mathbf{v} + 2 \overrightarrow{\mathbf{q}}_r \times ( \overrightarrow{\mathbf{q}}_r \times \mathbf{v} + \dot{q}_r \mathbf{v} ) + 2 [ \dot{q}_r \overrightarrow{\mathbf{q}}_d - \dot{q}_d \overrightarrow{\mathbf{q}}_r + \overrightarrow{\mathbf{q}}_r \times \overrightarrow{\mathbf{q}}_d ]$

where $\overrightarrow{\mathbf{q}}$ denotes the ‘imaginary’ or ‘vector’ part of quaternion $\mathbf{q}$ and $\dot{q}$ denotes the scalar part. Assuming that our dual quaternions only ever perform rotations and translations, the transform for the normal can be written as

$\mathbf{n}' = \mathbf{n} + 2 \overrightarrow{\mathbf{q}}_r \times ( \overrightarrow{\mathbf{q}}_r \times \mathbf{n} + \dot{q}_r \mathbf{n} )$

which corresponds to evaluating only the rotational part of the dual quaternion. Given these definitions, I can now present the relevant parts of the vertex shader:

```hlsl
float3 transformPositionDQ( float3 position, float4 realDQ, float4 dualDQ )
{
    return position +
           2 * cross( realDQ.xyz, cross( realDQ.xyz, position ) + realDQ.w * position ) +
           2 * ( realDQ.w * dualDQ.xyz - dualDQ.w * realDQ.xyz +
                 cross( realDQ.xyz, dualDQ.xyz ) );
}

float3 transformNormalDQ( float3 normal, float4 realDQ, float4 dualDQ )
{
    return normal + 2 * cross( realDQ.xyz, cross( realDQ.xyz, normal ) +
                               realDQ.w * normal );
}
```

```hlsl
// ... vertex shader main function, signature elided ...
{
    result.TexCoord = input.TexCoord;
    result.Color = float4( 0.75, 0.75, 0.75, 1 );

    // blend bone DQs
    float2x4 blendedDQ
        = GetBlendedDualQuaternion( input.BlendIndices, input.BlendWeights );

    // transform position into clip space
    float3 blendedPosition = transformPositionDQ( input.Position.xyz,
                                                  blendedDQ[0], blendedDQ[1] );
    result.Position = mul( float4( blendedPosition.xyz, 1 ), ViewProjection );

    // transform normal vector into world space
    result.Normal = transformNormalDQ( input.Normal, blendedDQ[0], blendedDQ[1] );
    return result;
}
```