The model renderer v.0.1

Alright, so here I was, having defined a couple of DataConverters like so:

// Used for positions, normals, tangents, bitangents and 3D texcoords
struct aiVector3DConverter : public ArrayDataConverter

// Used for colors
struct aiColor4DConverter : public ArrayDataConverter

// Used for 2D texcoords
struct aiVector3DToFloat2Converter : public ArrayDataConverter

These simple definitions allowed me to convert vertex positions, normals, colors and UVs to a DirectX 9 compatible data format. After model loading, I simply create a list of converters for each data stream in the mesh and then gather all vertex elements (via CopyType()) and the actual data (via CopyData()) into the corresponding buffers:

// build vertex declaration
    std::vector<D3DVERTEXELEMENT9> vertexElements;
    for( unsigned int i=0; i<converters.size(); ++i )
        converters[i]->CopyType( vertexElements );

    // terminate the declaration (endElement is the D3DDECL_END() sentinel)
    vertexElements.push_back( endElement );

    context->Device()->CreateVertexDeclaration( &vertexElements[0], 
                                                &result.m_pVertexDeclaration );

// now create vertex buffer
    const int vertexBufferSize = importedMesh->mNumVertices * result.m_VertexSize;
    context->Device()->CreateVertexBuffer( vertexBufferSize, 0, 0, 
        D3DPOOL_DEFAULT, &result.m_pVertexBuffer, NULL );

    BYTE* vertexData;
    result.m_pVertexBuffer->Lock( 0, 0, reinterpret_cast<void**>( &vertexData ), 0 );

    for( unsigned int v=0; v<importedMesh->mNumVertices; ++v )
        for( unsigned int i=0; i<converters.size(); ++i )
            converters[i]->CopyData( &vertexData[v * result.m_VertexSize], v );

    result.m_pVertexBuffer->Unlock();

    result.m_VertexCount = importedMesh->mNumVertices;
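The converter base interface itself isn't shown above, so here is a flattened, hypothetical sketch of the CopyType()/CopyData() idea: each converter advertises its element layout and writes one vertex worth of its stream into an interleaved buffer. All member names, the float-only payload and the `VertexElement` stand-in for D3DVERTEXELEMENT9 are my assumptions, not the actual code.

```cpp
#include <cstring>
#include <vector>

// Stand-in for D3DVERTEXELEMENT9: just a byte offset and a size.
struct VertexElement { int offset; int size; };

// Hypothetical, flattened version of the converter idea (the real code
// derives aiVector3DConverter etc. from an ArrayDataConverter base).
struct ArrayDataConverter
{
    int          mOffset;      // byte offset of this element inside one vertex
    const float* mSource;      // source stream, e.g. a float view of aiMesh::mVertices
    int          mComponents;  // floats per element (3 for positions/normals)

    // Append this stream's element description to the vertex declaration.
    void CopyType( std::vector<VertexElement>& elements ) const
    {
        VertexElement e = { mOffset, mComponents * (int)sizeof(float) };
        elements.push_back( e );
    }

    // Write vertex v of this stream into the interleaved destination vertex.
    void CopyData( unsigned char* destVertex, unsigned int v ) const
    {
        std::memcpy( destVertex + mOffset,
                     mSource + v * mComponents,
                     mComponents * sizeof(float) );
    }
};
```

Running all converters once per vertex, each with its own offset, is what produces the interleaved vertex buffer layout the declaration describes.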

Done! Simple as that. The only part that is still somewhat messy is the creation of all these converters, which needs an existence check for every stream type. But well… I figured I wouldn't overcomplicate things.
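Those per-stream existence checks might look roughly like this. The `MeshStub` type and the string output are purely illustrative; the real Assimp aiMesh queries are HasNormals(), HasVertexColors(channel) and HasTextureCoords(channel), the latter two taking a channel index.

```cpp
#include <string>
#include <vector>

// Tiny stand-in for aiMesh, just enough to show the stream checks.
struct MeshStub
{
    bool normals, colors, texCoords;
    bool HasNormals()       const { return normals; }
    bool HasVertexColors()  const { return colors; }
    bool HasTextureCoords() const { return texCoords; }
};

// One converter per stream that is actually present in the mesh.
std::vector<std::string> BuildConverterList( const MeshStub& mesh )
{
    std::vector<std::string> converters;
    converters.push_back( "aiVector3DConverter(positions)" ); // always present
    if( mesh.HasNormals() )
        converters.push_back( "aiVector3DConverter(normals)" );
    if( mesh.HasVertexColors() )
        converters.push_back( "aiColor4DConverter(colors)" );
    if( mesh.HasTextureCoords() )
        converters.push_back( "aiVector3DToFloat2Converter(uv0)" );
    return converters;
}
```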

So, given the vertex declaration and the vertex buffer, the next step was to create an index buffer. I won't bother you with the code here as it is really straightforward. The only part worth mentioning is that I decided to use 16-bit indices where possible, i.e. when the number of vertices is less than 0xFFFF = 65535. This is a really simple thing to do but quite effective at saving video memory across lots of models. I've rarely come across meshes with more than 65535 vertices per material – at least on consoles.
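The decision itself boils down to a one-liner; this sketch (the enum and function names are mine) shows the choice and the resulting memory cost, using the post's slightly conservative "less than 0xFFFF" threshold:

```cpp
#include <cstdint>

enum IndexFormat { Index16, Index32 };

// 16-bit indices can address vertices 0..65534 under the post's
// "vertex count < 0xFFFF" rule, at half the memory of 32-bit indices.
IndexFormat ChooseIndexFormat( unsigned int vertexCount )
{
    return vertexCount < 0xFFFF ? Index16 : Index32;
}

unsigned int IndexBufferBytes( unsigned int indexCount, IndexFormat fmt )
{
    return indexCount * ( fmt == Index16 ? (unsigned int)sizeof(uint16_t)
                                         : (unsigned int)sizeof(uint32_t) );
}
```

For a 10,000-triangle mesh that is 60,000 bytes of indices instead of 120,000 – exactly the halving the post is after.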

Almost done! All that was missing was a simple shader and the code to set up the GPU.
In terms of shading I decided to go bare bones:

float4x4 WorldViewProjection : WORLDVIEWPROJECTION;
float4 LightDirection        : LIGHTDIRECTION;

struct VertexShaderInput
{
    float4 Position : POSITION;
    float3 Normal   : NORMAL;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float3 Normal   : TEXCOORD1;
    float2 TexCoord : TEXCOORD0;
};

VertexShaderOutput Model_VS( VertexShaderInput input )
{
    VertexShaderOutput result;
    result.Position = mul( float4( input.Position.xyz, 1 ), WorldViewProjection );
    result.Normal   = input.Normal;
    result.TexCoord = input.TexCoord;

    return result;
}

float4 Model_PS( VertexShaderOutput input ) : COLOR0
{
    const float ambient = 0.3f;
    float diffuse = saturate( dot( input.Normal, LightDirection.xyz ) );

    return float4( float3(1,0,0) * (ambient + diffuse), 1 );
}

technique Model
{
    pass P0
    {
        VertexShader = compile vs_2_0 Model_VS();
        PixelShader  = compile ps_2_0 Model_PS();
    }
}
So basically a simple Lambert shader with some hard-coded material parameters. Loading the shader using D3DX is really simple: a call to D3DXCreateEffectFromFile() is enough. And finally we can add the code for actually rendering the model:

void Model::Render( RenderContext* context )
{
    for( unsigned int m=0; m<mMeshes.size(); ++m )
    {
        Mesh& mesh = mMeshes[m];

        D3DXHANDLE hTechnique = mesh.m_pEffect->GetTechniqueByName( "Model" );
        D3DXHANDLE hWorldViewProjection = 
            mesh.m_pEffect->GetParameterBySemantic( NULL, "WORLDVIEWPROJECTION" );
        D3DXHANDLE hLightDirection = 
            mesh.m_pEffect->GetParameterBySemantic( NULL, "LIGHTDIRECTION" );

        context->Device()->SetVertexDeclaration( mesh.m_pVertexDeclaration );
        context->Device()->SetStreamSource( 0, mesh.m_pVertexBuffer, 
            0, mesh.m_VertexSize );
        context->Device()->SetIndices( mesh.m_pIndexBuffer );

        D3DXMATRIX worldViewProjection = 
            context->GetViewMatrix() * context->GetProjectionMatrix();
        mesh.m_pEffect->SetMatrix( hWorldViewProjection, &worldViewProjection );

        D3DXVECTOR4 lightDirection = D3DXVECTOR4( 0, 1, 0, 1 );
        mesh.m_pEffect->SetVector( hLightDirection, &lightDirection );

        mesh.m_pEffect->SetTechnique( hTechnique );

        UINT cPasses;
        mesh.m_pEffect->Begin( &cPasses, 0 );

        for( unsigned int iPass = 0; iPass < cPasses; iPass++ )
        {
            mesh.m_pEffect->BeginPass( iPass );

            context->Device()->DrawIndexedPrimitive( D3DPT_TRIANGLELIST, 
                0, 0, mesh.m_VertexCount, 0, mesh.m_TriangleCount );

            mesh.m_pEffect->EndPass();
        }

        mesh.m_pEffect->End();
    }

    context->Device()->SetStreamSource( 0, NULL, 0, 0 );
    context->Device()->SetIndices( NULL );
}

And that’s it! Lots of code this time 🙂

