Last time we reviewed skeletal animation. Let's now get into how we can deform a mesh based on that skeleton. What we want to achieve is that each vertex of the mesh moves along when the skeleton is animated. The simplest idea is to attach each vertex to exactly one bone and transform the vertex's position by that bone's world transform. That is, the transformed position v' of a vertex v, attached to a bone with world space transform W, is

v' = W * v

Remember though: joint transforms always expect data relative to the joint's local coordinate space. This means that we need to store the mesh's vertices in the local coordinate system of their associated joint! That is not a big restriction, because our modeling tool can do it for us during export, but it also means that we cannot render the geometry without the skeleton.

Now, attaching each vertex to one bone has some serious limitations: it's simply not a realistic model for most common materials. This is especially true for materials like skin or cloth. Consider human skin in joint areas: it stretches and wrinkles all over the place. So let's make our model a little more sophisticated: let's allow each vertex to be influenced by more than one bone. For each vertex, we thus define a list of bones the vertex is attached to and a weight factor for each bone in that list. Provided that the weight factors w_i are normalized (i.e. they sum to one), we can then simply transform the vertex once by each bone W_i and sum up the results, weighted by the weight factors:

v' = Σ_i w_i * (W_i * v)
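To make the weighted blend concrete, here is a minimal CPU-side sketch of linear blend skinning for a single vertex. The `Vec3`/`Mat4` types and the `skinVertex` helper are illustrative stand-ins, not the engine's actual math classes:

```cpp
#include <array>
#include <cstddef>

// Minimal stand-ins for illustration; a real engine would use its own math types.
struct Vec3 { float x, y, z; };

struct Mat4 {
    float m[4][4]; // row-major 4x4 matrix

    // Transform a point (w = 1 implied).
    Vec3 transformPoint( const Vec3& v ) const {
        return {
            m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3],
            m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3],
            m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3],
        };
    }
};

// Blend a vertex over up to four bones: v' = sum_i w_i * (W_i * v).
// Assumes the weights are normalized (they sum to one).
Vec3 skinVertex( const Vec3& v,
                 const std::array<const Mat4*, 4>& bones,
                 const std::array<float, 4>& weights )
{
    Vec3 result{ 0.f, 0.f, 0.f };
    for( std::size_t i = 0; i < 4; ++i ) {
        if( weights[i] == 0.f ) continue;       // unused influence
        Vec3 p = bones[i]->transformPoint( v ); // transform by bone i
        result.x += weights[i] * p.x;
        result.y += weights[i] * p.y;
        result.z += weights[i] * p.z;
    }
    return result;
}
```

A vertex weighted half-and-half between a stationary bone and a translated bone ends up exactly halfway between the two transformed positions, which is precisely the stretching behavior we want near joints.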

Beware though: aren't we forgetting something? Right, joint transforms expect their data relative to their local coordinate system. But how can we accomplish that, given that each vertex can only be defined relative to one coordinate system? Of course, we could store N copies of the mesh, but that would be a huge waste of memory. So let's instead store our vertices in world space and transform them into joint local space whenever required. This can be accomplished by adding yet another transformation for each bone, the so-called Binding Transform B_i. This yields

v' = Σ_i w_i * (W_i * B_i * v)

You might ask yourself: how the heck am I going to come up with these binding transforms? Well, the answer is simple: the binding transforms are just a snapshot of the skeleton's world space joint transforms at the time of binding. So whenever the artist poses the model and then binds the skin, the world space joint transform is recorded for each bone and stored. Note that since a bone's world space transform W maps data from joint local space to world space, the corresponding binding transform is actually the inverse of that recorded transform:

B = W_bind^-1
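As a sketch of how such an inverse could be computed at binding time: for rigid joint transforms (rotation plus translation, no scale), the inverse is simply the transposed rotation block together with the negated, back-rotated translation. The `Mat4` type and `rigidInverse` name are illustrative, not the post's actual classes:

```cpp
#include <cstddef>

// Illustrative row-major 4x4 matrix; assumed rigid (rotation + translation only).
struct Mat4 { float m[4][4]; };

// B = W^-1 for a rigid world transform W: invert the rotation by transposing it,
// then rotate the negated translation back into joint local space (t' = -R^T * t).
Mat4 rigidInverse( const Mat4& w )
{
    Mat4 inv{}; // zero-initialized
    // Transpose the 3x3 rotation block.
    for( std::size_t r = 0; r < 3; ++r )
        for( std::size_t c = 0; c < 3; ++c )
            inv.m[r][c] = w.m[c][r];
    // t' = -R^T * t
    for( std::size_t r = 0; r < 3; ++r )
        inv.m[r][3] = -( inv.m[r][0] * w.m[0][3]
                       + inv.m[r][1] * w.m[1][3]
                       + inv.m[r][2] * w.m[2][3] );
    inv.m[3][3] = 1.f;
    return inv;
}

// At binding time, one such inverse would be recorded per bone, e.g.:
// mBindingTransforms[bone] = rigidInverse( worldTransformAtBindTime( bone ) );
```

If the bind pose contains scaling, a general 4x4 inverse is needed instead; the rigid case just makes the "snapshot and invert" idea easy to see.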

Let's look at some code now; hopefully this will make things clearer. First we need to extend the existing skeleton class to store the binding transform for each bone.

```cpp
class Skeleton
{
    std::vector<int>         mParents;           // parent index per bone (-1 for the root)
    std::vector<aiMatrix4x4> mBindingTransforms; // inverse bind pose per bone
    std::vector<aiMatrix4x4> mTransforms;        // current local transform per bone
    // …
};
```

Now we can modify the `getWorldTransform` method to take the binding transform into account:

```cpp
aiMatrix4x4 Skeleton::getWorldTransform( int bone ) const
{
    int p = mParents[bone];
    // Start with the bone's local transform applied after the binding transform,
    // then walk up the hierarchy, accumulating the parents' transforms.
    aiMatrix4x4 result = mTransforms[bone] * mBindingTransforms[bone];
    while( p >= 0 )
    {
        result = mTransforms[p] * result;
        p = mParents[p];
    }
    return result;
}
```

Next, each vertex needs to know the indices of the bones it is attached to, as well as the weighting factors. So I extended the vertex declaration to include a `UBYTE4` vector storing up to four bone indices per vertex and a `FLOAT4` vector storing the corresponding weight factors. In fact, I created a new type of data converter which computes both values at once. Currently I am passing the bone indices and weights via the `D3DDECLUSAGE_BLENDINDICES` and `D3DDECLUSAGE_BLENDWEIGHT` semantics to the vertex shader. The bone matrices themselves are passed to the shader via an array of vertex shader constants. The vertex shader now looks like this (unimportant parts stripped for the sake of clarity):
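On the CPU side, the per-vertex data could look like the following sketch: four byte-sized bone indices plus four float weights, with a helper that rescales the weights so they sum to one. The field layout and names here are illustrative; the actual declaration depends on the vertex format in use:

```cpp
#include <cstdint>

// Illustrative skinned-vertex layout: position, normal, UV, plus
// four bone indices (UBYTE4) and four weights (FLOAT4).
struct SkinnedVertex {
    float        position[3];
    float        normal[3];
    float        texCoord[2];
    std::uint8_t blendIndices[4]; // up to four bone influences per vertex
    float        blendWeights[4]; // one weight per influence
};

// Rescale the weights so they sum to one; the blended position computed in
// the shader is only meaningful for normalized weights.
void normalizeWeights( SkinnedVertex& v )
{
    float sum = v.blendWeights[0] + v.blendWeights[1]
              + v.blendWeights[2] + v.blendWeights[3];
    if( sum > 0.f ) {
        for( int i = 0; i < 4; ++i )
            v.blendWeights[i] /= sum;
    }
}
```

Normalizing at export or load time keeps the shader simple: it can blend the four influences without re-checking the weight sum per vertex per frame.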

```hlsl
float4   LightDirection     : LIGHTDIRECTION;
float4x4 BoneTransforms[16] : BONETRANSFORMS;

struct VertexShaderInput
{
    float4 Position     : POSITION;
    float3 Normal       : NORMAL;
    float2 TexCoord     : TEXCOORD0;
    float4 BlendWeights : BLENDWEIGHT0;
    uint4  BlendIndices : BLENDINDICES0;
};

VertexShaderOutput Model_VS( VertexShaderInput input )
{
    VertexShaderOutput result;
    result.Normal   = input.Normal.xyz;
    result.TexCoord = input.TexCoord;
    result.Color    = input.BlendWeights;

    float4 posH = float4( input.Position.xyz, 1.f );

    // Transform the position by each influencing bone and blend by the weights.
    float4 blendedPosition =
          mul( posH, BoneTransforms[input.BlendIndices.x] ) * input.BlendWeights.x
        + mul( posH, BoneTransforms[input.BlendIndices.y] ) * input.BlendWeights.y
        + mul( posH, BoneTransforms[input.BlendIndices.z] ) * input.BlendWeights.z
        + mul( posH, BoneTransforms[input.BlendIndices.w] ) * input.BlendWeights.w;

    result.Position = mul( blendedPosition, ViewProjection );
    return result;
}
```
