About Theo

Theodor is a passionate graphics programmer with a strong interest in cutting-edge technology. He has worked on multiple shipped titles for current- and next-generation game consoles.

MacBook Dual Boot

I recently replaced the Superdrive of my MacBook Pro with a regular HDD in order to add some disk space to my chronically full SSD. The installation went fine until I wanted to put Windows 7 on the new HDD: Windows simply refused to boot. I tried all sorts of rescue tools, boot managers, etc., but no luck, until I stumbled upon the following instructions in the Apple discussion forums:

  1. Insert the Windows CD and boot into OSX as usual.
  2. Run Boot Camp, partition your OSX drive and wait for it to reboot.
  3. When the MacBook goes to reboot, hold down the power button until it shuts off.
  4. Swap the OSX drive in the primary HDD location for the HDD intended for Windows.
  5. Power on, delete all partitions, format, and install Windows to the new drive (hold the Option key on startup to boot from the CD).
  6. I personally installed all updates and made sure Windows was running fine at this point. I was able to reboot and the MacBook would power straight into Windows. Holding down the Option key showed both the Superdrive and the Windows drive.
  7. Take out the Windows drive, put the OSX drive back, run Boot Camp to remove the partition, and shut down.
  8. Take out the Superdrive, move the Windows drive into the Superdrive bay with an HDD caddy, and put the OSX drive back into its original position.

Prerequisites, for reference: a regular OSX drive in drive bay 0, the Superdrive in drive bay 1 (the standard config), and a Windows 7 DVD.

Weird, right? But anyhow, this worked like a charm for me and I finally have a fully working dual-boot system again. Oh, and on a side note: if you plan to share data between MacOS and Windows via an HDD partition, your best format option is probably exFAT: Windows can’t read HFS+, MacOS can’t write NTFS, and FAT32 has some nasty limitations like a 4GB maximum file size.

WordPress Customization

Now that my work on the Xbox One title Ryse is done, I can finally invest some time in my blog again. Yay! For starters, I have cleaned up the website code: initially (due to time constraints) I had made all my changes directly to the WordPress source. Not a good idea, as these changes have to be painfully reapplied every time the software is updated.

Child Themes

A very convenient way to customize a WordPress site is the use of child themes. The concept is simple: you create a new theme that inherits the settings of an existing theme and override whatever you need. To do so, simply create a folder for your theme in the wp-content/themes directory and put a file called style.css inside. Here’s the beginning of my style.css:

/*
 Theme Name:     Twenty Eleven Customized
 Author:         Theodor Mader
 Author URI:     http://theomader.com
 Template:       twentyeleven
 Version:        1.0.0
*/

@import url("../twentyeleven/style.css");

/* =Theme customization starts here
-------------------------------------------------------------- */

Note that the Theme Name and Template entries are required. You can now override the original theme’s styles by simply adding your own rules after the import statement.

Syntax Highlighter

I had also made quite a few changes to the Syntax Highlighter Evolved plugin to better fit my blog’s style and contents. Fortunately, the plugin is nicely customizable, and with only a handful of lines of code you can load a custom style sheet and custom syntax definitions.

In order to get a custom style sheet into Syntax Highlighter, you simply need to add a filter for syntaxhighlighter_themes and add the name of your style sheet to the array of style sheets that gets passed in. Don’t forget to register your style sheet with WordPress as well, for example via wp_register_style. Syntax definitions (or ‘brushes’, as they are called in Syntax Highlighter) can be added in a similar way via the syntaxhighlighter_brushes filter. Since these are JavaScript files, they also need to be registered with WordPress via wp_register_script. For the sake of simplicity, I added the PHP code to the functions.php of my child theme and do the script/style registration via the init action. Here’s the relevant code from my functions.php:

<?php
    
////////////// syntax highlighter customizations /////////////////

add_action( 'init', 'sh_register_customizations' );
add_filter( 'syntaxhighlighter_themes', 'sh_add_custom_style' );
add_filter( 'syntaxhighlighter_brushes', 'sh_add_custom_brushes' );
 
// Register with wordpress
function sh_register_customizations()
{
  $sh_customizations_uri = get_stylesheet_directory_uri() . '/syntaxhighlighter/';
    
  // customized sh style
  wp_register_style('syntaxhighlighter-theme-default_custom', $sh_customizations_uri . 'shCoreDefaultCustom.css', array('syntaxhighlighter-core'), '0.1');

  // custom sh brushes
  wp_register_script( 'syntaxhighlighter-brush-cg', $sh_customizations_uri . 'shBrushCg.js', array('syntaxhighlighter-core'), '0.1' );
  wp_register_script( 'syntaxhighlighter-brush-cppcustom', $sh_customizations_uri . 'shBrushCppCustom.js', array('syntaxhighlighter-core'), '0.1' );
  wp_register_script( 'syntaxhighlighter-brush-csharpcustom', $sh_customizations_uri . 'shBrushCSharpCustom.js', array('syntaxhighlighter-core'), '0.1' );
}

function sh_add_custom_style($themes)
{
  $themes['default_custom'] = 'Default Custom';
  return $themes;
}

function sh_add_custom_brushes( $brushes )
{
  $brushes['cg'] = 'cg';
  $brushes['cppcustom'] = 'cppcustom';
  $brushes['csharpcustom'] = 'csharpcustom';
 
  return $brushes;
}

?>

Oh, and here’s a neat trick I picked up on another website: you can expand the Syntax Highlighter code boxes on hover by adding, for example, width: 150%; and a transition like transition: width 0.2s ease 0s; to the .syntaxhighlighter style:

.syntaxhighlighter {
    transition: width 0.2s ease 0s;
}

.syntaxhighlighter:hover {
    width: 150% !important;
}

Note that this scales the boxes up by a fixed 150% instead of to the exact required size, but unfortunately I haven’t found a way to do the proper scaling without some form of scripting. You can toggle overflow: visible; on hover, but so far I couldn’t figure out how to make this a smooth transition via CSS alone.


Shadow Quality (2)

This time, let’s look at how we can add some smoothness to our shadows. A variety of techniques exist to do so, the most popular being percentage closer filtering (PCF). The idea is simple: instead of sampling the shadow map only once and getting a binary result (shadow or no shadow), we also sample the surrounding shadow map texels and average the resulting shadow/no-shadow decisions like so:

const int halfkernelWidth = 2;

float result = 0;
for(int y=-halfkernelWidth; y<=halfkernelWidth; ++y)
{
    for(int x=-halfkernelWidth; x<=halfkernelWidth; ++x)
    {
        float4 texCoords = float4(splitInfo.TexCoords + float2(x,y) / ShadowMapSize, 0, 0);
        result += (splitInfo.LightSpaceDepth <  tex2Dlod( ShadowMapSampler, texCoords ).r);
    }
}
result /= ((halfkernelWidth*2+1)*(halfkernelWidth*2+1));

Note that there exist a couple of more efficient and better-looking sampling patterns, but for the sake of simplicity I’m sticking with the one above. In any case, averaging yes/no decisions yields a result that can only take on a set of discrete values (25 in the above example), which means that our shadow levels will be more or less heavily quantized. Check out the image below: on the left the shadowed scene, and on the right a closeup of a shadow border with the above PCF filter kernel and the corresponding pixel value histogram. You can clearly see the shadow value quantization in the image as well as in the pixel color histogram.

While this method produces good (with some tweaking, excellent) results, it is rather sample hungry: in the above case we need a total of 25 samples for each shadow lookup, which is just too much for less powerful graphics hardware. So let’s look at an alternative: instead of using a regular grid sampling pattern, let’s switch to a stochastic one: pick a set of sampling positions from a disk centered at the current pixel and sample only those. A commonly used sampling pattern in this context is the Poisson disk: its sampling positions are chosen uniformly at random but with a minimum-distance constraint. It can be shown that this sampling pattern has some highly desirable properties, like most of its energy being concentrated in the high frequencies. In the images below you can see the filter kernel on the left (12 taps) and the resulting shadow border on the right.
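
As a side note, such a kernel can be precomputed offline with simple dart throwing: draw candidate points uniformly from the unit disk and reject any that land too close to an already accepted point. Below is a minimal C# sketch of the idea; the function name, the minDistance parameter and the use of XNA’s Vector2 are my own choices for illustration, not necessarily what the sample does. The larger you can push the minimum distance for a given sample count, the more evenly the taps get spread out.

// Sketch: precompute a Poisson-disk-like sampling kernel via dart throwing.
// Candidates are drawn uniformly from the unit disk and rejected if they fall
// closer than 'minDistance' to an already accepted sample. Note that a too
// large 'minDistance' for the requested sample count will never terminate.
// (uses System.Linq and Microsoft.Xna.Framework.Vector2)
static Vector2[] GeneratePoissonDisk(int numSamples, float minDistance, Random random)
{
    var samples = new List<Vector2>();
    while (samples.Count < numSamples)
    {
        // uniform sample on the unit disk (sqrt gives uniform area density)
        var angle = random.NextDouble() * Math.PI * 2;
        var radius = Math.Sqrt(random.NextDouble());
        var candidate = new Vector2(
            (float)(radius * Math.Cos(angle)),
            (float)(radius * Math.Sin(angle)));

        // enforce the minimum distance constraint
        if (samples.All(s => Vector2.Distance(s, candidate) >= minDistance))
            samples.Add(candidate);
    }
    return samples.ToArray();
}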

Notice how the shadow border became much more concentrated and somewhat less blocky. Still not quite satisfying, though. So let’s apply another approach, proposed by [Mittring07]: apply a random rotation to the filter kernel before sampling. This changes the sampling positions from pixel to pixel, reducing artifacts between neighbouring pixels. Note that randomness in shaders is a tricky issue: each pixel is shaded independently of the surrounding pixels, so traditional random number generators that depend on the previously drawn random number cannot be used here. Also, it is usually a requirement that the generated random number be stable with respect to the camera: you want to avoid changing your random number, and with it your shading result, when the camera stands still, otherwise you might perceive flickering even in still scenes. Ideally, your random number should not change even when the camera moves. I chose to precalculate a set of random numbers at application start and upload them via a texture. The texture is then indexed via the fragment’s world space position, which guarantees stability with a standing-still camera. Since each random number represents a rotation angle \phi and all we do is rotate the filter kernel, I directly store cos(\phi) and sin(\phi) in the texture, so we only need to do a multiply-add in the fragment shader.

// generate a volume texture for our random numbers
mRandomTexture3D = new Texture3D(mGraphicsDevice, 32, 32, 32, false, SurfaceFormat.Rg32);

// fill with cos/sin of random rotation angles
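// ('random' is assumed to be a System.Random instance created elsewhere, e.g. at application start)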
Func<int, IEnumerable<UInt16> > randomRotations = (count) =>
    {
        return Enumerable
            .Range(0,count)
            .Select(i => (float)(random.NextDouble() * Math.PI * 2))
            .SelectMany(r => new[]{ Math.Cos(r), Math.Sin(r) })
            .Select( v => (UInt16)((v*0.5+0.5) * UInt16.MaxValue));
    };

mRandomTexture3D.SetData(randomRotations(mRandomTexture3D.Width * mRandomTexture3D.Height * mRandomTexture3D.Depth).ToArray());

The fragment shader then looks up the rotation values based on the fragment’s world position and rotates the Poisson disk before doing the shadow map lookups:

// get random kernel rotation (cos, sin) from texure, based on fragment world position
float4 randomTexCoord3D = float4(shadowData.WorldPosition.xyz*100, 0);
float2 randomValues = tex3Dlod(RandomSampler3D, randomTexCoord3D).rg;
float2 rotation = randomValues * 2 - 1;

float result = 0;
for(int s=0; s<numSamples; ++s)
{
    // compute rotated sample position
    float2 poissonOffset = float2(
        rotation.x * PoissonKernel[s].x - rotation.y * PoissonKernel[s].y,
        rotation.y * PoissonKernel[s].x + rotation.x * PoissonKernel[s].y
    );

    // perform shadow map look up and add binary shadow/no shadow decision to result
    const float4 randomizedTexCoords = float4(splitInfo.TexCoords + poissonOffset * PoissonKernelScale[splitInfo.SplitIndex], 0, 0);
    result += splitInfo.LightSpaceDepth <  tex2Dlod( ShadowMapSampler, randomizedTexCoords).r;
}

// normalize yes/no decisions and combine with ndotl term
float shadowFactor = result / numSamples * t;
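
The PoissonKernel offsets and the per-split PoissonKernelScale factors used above are ordinary shader constants. Here’s a minimal sketch of how they might be uploaded from the C# side; mShadowEffect, the call to the dart-throwing helper from above and the per-split scale values are assumptions for illustration, not the sample’s actual code.

// Sketch: upload the Poisson kernel and per-cascade scale factors to the effect.
// 'mShadowEffect' is an assumed effect instance; the parameter names match the
// shader above, the scale values below are placeholders that need tweaking.
Vector2[] poissonKernel = GeneratePoissonDisk(12, 0.4f, new Random());
float[] poissonKernelScale = { 1.0f, 0.5f, 0.25f, 0.125f };  // one entry per shadow split

mShadowEffect.Parameters["PoissonKernel"].SetValue(poissonKernel);
mShadowEffect.Parameters["PoissonKernelScale"].SetValue(poissonKernelScale);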

And here are the results:

To the eye, the transition between shadow and no shadow looks much smoother than with the 5×5 block kernel, even though the shadow levels are even more quantized: check the histogram, there are only 12 different shadow levels. The scene itself doesn’t necessarily look better overall, but once textures and lighting are added, things change drastically: the ‘noise’ mostly disappears into the surface detail and is much less noticeable, except on very uniformly shaded surfaces.

Note that I held back one detail: how do we calculate the ndotl-based term t? Up to now we set t = (ndotl > 0), but this doesn’t work well together with the stochastic sampling approach shown above: (ndotl > 0) produces hard shadow edges, whereas the rotated Poisson disk gives us a dithered-looking fall-off. Combining the two looks quite weird, as shown below. So we need to dither ndotl as well:

float l = saturate(smoothstep(0, 0.2, ndotl));
float t = smoothstep(randomValues.x * 0.5, 1.0f, l);
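
For reference, smoothstep(a, b, x) performs a cubic Hermite interpolation between its edge values:

\text{smoothstep}(a, b, x) = 3t^2 - 2t^3, \qquad t = \text{saturate}\!\left(\frac{x - a}{b - a}\right)

so smoothstep(0, 0.2, ndotl) rises smoothly from 0 to 1 as ndotl goes from 0 to 0.2 and stays at 1 beyond that.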

So instead of creating a hard edge by computing ndotl > 0, we use the smoothstep function to create a smooth transition between 0 and 1 when ndotl lies in the range [0, 0.2], and store the result in the variable l. We then feed our random value randomValues.x and l into the smoothstep function again. If l lies in the range [randomValues.x, 1], it is smoothly interpolated between 0 and 1, based on how close it is to either randomValues.x or 1. Since l represents a smooth transition between 0 (shadow) and 1 (no shadow) and randomValues.x (ideally) follows a uniform distribution, the further l moves away from the shadow, the more likely t will receive the value 1 (no shadow) too. Conversely, the closer l gets to the shadow border, the more likely t will be 0 (shadow). There is still a random factor in there which can cause t to be 1 even when l is close to 0, but that is very unlikely. The two images below illustrate the effect of dithering on ndotl: on the left the scene is rendered without, and on the right with ndotl dithering.

Stay tuned for some more images and videos in the next post!


Shadow Quality (1)

Let’s face it: even after stabilization, the shadows in my sample still look pretty bad. Very blocky, hard shadow edges and strange shapes everywhere… So let’s do something about it! But first a short disclaimer: the effectiveness of the techniques I propose in this post is highly dependent on the scene and your visual target. There is no single approach that will fix all your shadow artifacts once and for all; rather, it’s through trial and error and a combination of well-known techniques that you can get to a state you are satisfied with. I have yet to come across a game that does a perfect job with real-time shadows.

But let’s start now, with something very simple: during shading, we know the surface normal for each position on the geometry. So let’s make use of that: if a surface normal points away from the light source, we know for sure that that point does not receive any light; hence it is in shadow. Determining whether the surface normal is pointing away from the light is simple: just take the dot product between the surface normal and the light direction and check whether it’s less than 0. So let’s combine that with the shadow map lookup from GetShadowFactor in Cascaded Shadow Maps (3):

// compute shadow factor: 0 if in shadow, 1 if not.
float GetShadowFactor( ShadowData shadowData, float ndotl )
{
    ShadowSplitInfo splitInfo = GetSplitInfo( shadowData );
    float storedDepth = tex2Dlod( ShadowMapSampler, float4( splitInfo.TexCoords, 0, 0)).r;

    return (splitInfo.LightSpaceDepth < storedDepth) * (ndotl > 0);
}

where ndotl is assumed to contain the dot product between surface normal and light direction. Note that we are combining the two terms “the pixel is occluded by some occluder in the shadow map” and “the surface is oriented away from the light” with a simple multiplication. This can be interpreted in a probabilistic sense: we are saying that a point can only be lit if it is not occluded by some object AND the surface is oriented towards the light. Check out the comparison below: shading without the ndotl term on the left, with the ndotl term on the right. The geometry-based ndotl term clearly helps a lot with thin surfaces, where the shadow map doesn’t have enough Z precision to tell the front- and back-facing polygons apart. It can, however, only be as good as the surface normals, so correct surface normals are a must.

Next, let’s have a look at another geometric approach: depth bias. When rendering the shadow map, we store the scene’s distance with respect to the light source in a more or less accurate manner. In my case, I requested a SurfaceFormat.Single shadow map format, which means 32 bits (floating point) per pixel. But not all developers/hardware can afford such a surface format; sometimes 24 or even 16 bits per pixel are a requirement. Under such circumstances, the depth range that can be stored in the shadow map is quite limited, resulting in a high quantization error for the stored depth. This means that when we read back the stored depth from the shadow map, we get a depth value that is only more or less close to the real depth of the corresponding surface element. We can thus end up thinking that a given surface point is in shadow because the stored depth value is (due to quantization) closer to the light source than the real depth value, or that the surface point is not in shadow because, due to quantization, it is further away from the light source than in reality. Since the first problem is much more visible than the latter, a simple workaround has been invented: add some bias to the stored shadow depth.

There is even hardware support for this functionality: the D3DRS_DEPTHBIAS and D3DRS_SLOPESCALEDEPTHBIAS render states, where D3DRS_DEPTHBIAS corresponds to a constant bias and D3DRS_SLOPESCALEDEPTHBIAS to a surface-slope-based bias. Slope-based bias can be very helpful with surface elements that are oriented at a steep angle with respect to the light source. Unfortunately, these render states apply only to the position interpolator, so they are not much use to us because we are passing the depth via a color interpolator. So we have to emulate them in the fragment shader:

float2 DepthBias;
float4 ShadowPS(VertexShaderOutput input) : COLOR0
{
    float depthSlopeBias = max(
        abs(ddx(input.Depth)), 
        abs(ddy(input.Depth))
    );
    return float4( input.Depth + depthSlopeBias * DepthBias.x + DepthBias.y, 0, 0, 0 );
}

The DepthBias values are passed to the shader via shader constants. Check out the images below:

On the left you can see the shadowed scene without any bias and on the right with tweaked bias values.
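
In case you’re wondering how the bias values reach the shader: they’re set as a regular shader constant from the application. A minimal sketch, where mShadowEffect is an assumed name and the bias values are placeholders that need per-scene tweaking:

// Sketch: set the slope-scaled (x) and constant (y) depth bias used by ShadowPS.
// 'mShadowEffect' is an assumed effect instance; the values are placeholders.
mShadowEffect.Parameters["DepthBias"].SetValue(new Vector2(2.0f, 0.0005f));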


Stable CSM

Now that we have basic shadowing working, let’s look into improving shadow quality. When loading up the current state of the sample, the first thing you might notice is that the shadows show a weird ‘shimmering’ or ‘swimming’ effect: the boundaries where shadowed area meets non-shadowed area seem to be in constant flux, with pixels coming in and out of shadow as soon as the camera moves. The reason for this type of artifact is the discretization of the scene into shadow map pixels. Imagine the shadow map as a regular grid moving with the camera: all parts of the scene that fall within one grid cell will look up the same depth value. Now think about some shadow-casting edge somewhere in the scene and assume we are moving towards it. As soon as any shadow map texel comes in contact with the edge, its stored depth value will be overwritten by the edge depth, putting the whole scene area covered by said texel into shadow. If we continue moving towards the edge, the depth value of the shadow map texel will not change, but the scene area covered by it will continue to move, effectively making it look like the shadow boundary is moving away from the viewer. And as soon as the next texel comes in contact with the edge, the whole game starts over. A similar effect occurs when the shadow map changes size every frame.

So… how can we fix this? First, let’s make the shadow map size in world space constant, say shadowMapSize, and decouple it from light and camera rotation. In my case I chose to align the shadow map to the world space X and Z axes. I do so by exchanging the shadow view matrix for a matrix whose X and Y axes point in the direction of world space X and Z, positioned at mLightPosition, like so:

// Remember: XNA uses a right handed coordinate system, i.e. -Z goes into the screen
var look = Vector3.Normalize(arena.BoundingSphere.Center - mLightPosition);
mShadowView = Matrix.Invert(
    new Matrix(
        1,              0,              0,             0,
        0,              0,             -1,             0, 
       -look.X,        -look.Y,        -look.Z,        0,
        mLightPosition.X, mLightPosition.Y, mLightPosition.Z,     1
    )
);

Note that the Y axis is flipped in order to preserve culling order in the final view transform. Also note that this approach only works as long as the look direction does not lie in the world space XZ plane (i.e. look.Y must not be 0, which happens when mLightPosition is at the same height as the look-at target), as the resulting matrix otherwise becomes singular.

Now let’s tackle camera movement. As outlined before, the problem is that even the slightest movement of the shadow map affects the whole scene, as each scene position changes its position within the shadow map (subpixel-wise speaking). What we need, however, is for the scene positions to stay constant, at least relative to their corresponding texel. So instead of moving the shadow map continuously, let’s move it in fixed increments of one shadow map texel. When moving the shadow map this way, each world space position might fall into a different shadow map texel than the frame before, but the relative position within the texel stays the same, which means no more moving shadow boundaries.

So how can we implement this? Given our view transform defined as above, all we need to do is adjust the shadow projections: we want to place the shadow map corners at discrete positions only, separated by some value, e.g. quantizationStep. Remember, in one of my previous posts, Cascaded Shadow Mapping (1), we defined the extent of the shadow projection matrices based on the values min and max, which were determined from the view frustum. All we need to do now is make sure the X and Y coordinates of min and max are properly discretized:

var quantizationStep = 1.0f / shadowMapSize;
var qx = (float)Math.IEEERemainder(min.X, quantizationStep);
var qy = (float)Math.IEEERemainder(min.Y, quantizationStep);

min.X -= qx;
min.Y -= qy;

max.X += shadowMapSize;
max.Y += shadowMapSize;

Using the adjusted min and max values we create the shadow projection matrix as described before:

Projection = Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, minZ, maxZ);

The effect of these few lines of code is dramatic. Check out the video below:

You can clearly see how the scene shadows are stabilized, almost all artifacts during camera movements and rotations are gone. The remaining artifacts stem from transitions between the different shadow splits, as shown in the last part of the video.
