In a previous post I discussed the use of reciprocal depth in depth buffer generation. I gave some figures showing the problematic hyperbolic distribution of values in the depth buffer and its dependence on the near plane.

Let’s expand a bit on that and investigate strategies to better distribute depth values. In the following I will be using a right-handed coordinate system, i.e. $-z$ points forward, and (as usual) vectors are multiplied from the right. View space depth is denoted $z$ and the near/far planes are $n$ and $f$.

## Standard Depth

The standard DirectX projection matrix as produced by `D3DXMatrixPerspectiveFovRH`,

$$P = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & \frac{f}{n-f} & \frac{nf}{n-f} \\ 0 & 0 & -1 & 0 \end{pmatrix},$$

transforms view space positions $(x, y, z, 1)^T$ into clip space positions

$$\begin{pmatrix} x_c \\ y_c \\ z_c \\ w_c \end{pmatrix} = P \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} s_x x \\ s_y y \\ \frac{f}{n-f} z + \frac{nf}{n-f} \\ -z \end{pmatrix}$$

and results in depth buffer values

$$z_{buf} = \frac{z_c}{w_c} = \frac{f}{f-n}\left(1 + \frac{n}{z}\right).$$

As shown before, this can cause a significant warp of the resulting depth values due to the division by $z$.
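A minimal sketch of this mapping (my own illustration, not code from the post; the function name and the example plane distances are mine):

```python
# Standard depth: z_buf = f/(f-n) * (1 + n/z) for view space z in [-n, -f]
# (right-handed convention, -z points forward).
def standard_depth(z, n, f):
    return f / (f - n) * (1.0 + n / z)

n, f = 0.1, 1000.0                     # example near/far planes
print(standard_depth(-n, n, f))        # near plane -> 0.0
print(standard_depth(-f, n, f))        # far plane  -> ~1.0
# the hyperbolic warp: only 1 unit into a 1000 unit scene, the depth
# value is already past 0.9
print(standard_depth(-1.0, n, f))
```

Note how almost the entire buffer range is spent on the region right behind the near plane.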

## Reverse Depth

Reverse depth aims to better distribute depth values by reversing clip space: Instead of mapping the interval $[-n, -f]$ to $[0, 1]$, the projection matrix is adjusted to produce $[1, 0]$. This can be achieved by multiplying the projection matrix with a simple ‘z reversal’ matrix, yielding

$$P_{rev} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix} P = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & \frac{n}{f-n} & \frac{nf}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

The advantage of this mapping is that it’s much better suited to storing depth values in floating point format: Close to the near plane, where similar view depth values are pushed far apart by the hyperbolic depth distribution, not much floating point precision is required. It is thus safe to map these values to the vicinity of 1.0 where the floating point exponent is ‘locked’. Similar values near the far plane on the other hand are compressed to even closer clip space values and thus benefit from the extremely precise range around 0.0. Interestingly this results in the least precision in the middle area of the view space depth range.
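The reversed mapping can be sketched the same way (again my own code and example values, not the author’s):

```python
# Reverse depth: z_buf = -n/(f-n) - n*f/((f-n) z), mapping the near plane
# to 1.0 and the far plane to 0.0.
def reverse_depth(z, n, f):
    return -n / (f - n) - (n * f) / ((f - n) * z)

n, f = 0.1, 1000.0                    # example near/far planes
print(reverse_depth(-n, n, f))        # near plane -> 1.0
print(reverse_depth(-f, n, f))        # far plane  -> ~0.0
# distant geometry lands in the very precise float range around 0.0
print(reverse_depth(-0.9 * f, n, f))
```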

## Linear Depth (i.e. W-Buffer)

As the name already suggests, the idea here is to write out the view space depth value itself, normalized to the range $[0, 1]$:

$$z_{buf} = \frac{-z - n}{f - n}$$

Unfortunately, without explicit hardware support, this method causes significant performance overhead as it requires depth export from pixel shaders.
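For completeness, a sketch of the linear mapping (my own code; the comment on fixed-function interpolation is my reasoning, hedged accordingly):

```python
# Linear ("W-buffer") depth: affine in z. The fixed-function z_c / w_c
# produces values of the form a + b/z (w_c = -z), which can never be affine
# in z for a perspective projection -- presumably why this path needs
# explicit depth export from the pixel shader.
def linear_depth(z, n, f):
    return (-z - n) / (f - n)

n, f = 0.1, 1000.0                      # example near/far planes
print(linear_depth(-n, n, f))           # near plane -> 0.0
print(linear_depth(-f, n, f))           # far plane  -> 1.0
print(linear_depth(-(n + f) / 2, n, f)) # midpoint   -> ~0.5
```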

## Asymptotic behaviour

For situations where extremely large view distances are required, one can let $f$ approach infinity. For standard depth we get

$$\lim_{f \to \infty} P = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & -1 & -n \\ 0 & 0 & -1 & 0 \end{pmatrix}$$

and for reverse depth

$$\lim_{f \to \infty} P_{rev} = \begin{pmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & 0 & n \\ 0 & 0 & -1 & 0 \end{pmatrix}.$$

Note how the matrices are suddenly free of approximate numerical computations, resulting in fewer rounding and truncation errors.
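The limit depth formulas are easy to check numerically (my own sketch; function names are mine):

```python
# Infinite far plane variants, obtained by letting f -> infinity
# in the finite-far-plane formulas.
def standard_depth_inf(z, n):
    return 1.0 + n / z   # limit of f/(f-n) * (1 + n/z)

def reverse_depth_inf(z, n):
    return -n / z        # limit of -n/(f-n) - n*f/((f-n) z)

n = 0.1                                # example near plane
print(standard_depth_inf(-n, n))       # near plane -> 0.0
print(reverse_depth_inf(-n, n))        # near plane -> 1.0
# depth approaches 1.0 (standard) / 0.0 (reverse) but never reaches it
print(standard_depth_inf(-1e9, n), reverse_depth_inf(-1e9, n))
```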

## Precision Analysis

With all these possibilities, which one will fit best for a given scenario? I try to answer this question by comparing depth resolution for each projection type. The idea is simple: View space depth is sampled in regular intervals between $n$ and $f$. Each sampled depth value is transformed into clip space and then into the selected buffer format. The next adjacent buffer value is then projected back into view space. This gives the minimum view space distance two objects need to be apart so they don’t map to the same buffer value – hence the minimum separation required to avoid z-fighting.
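This procedure can be reconstructed in a few lines. The sketch below is my own (not the post’s code), shown for reverse depth only, with `float32` and a 24-bit unorm integer buffer as the two formats:

```python
import numpy as np

def buffer_resolution(z, n, f, fmt='float32'):
    """Minimum view space separation avoiding z-fighting at depth z (z < 0)."""
    # project to a reverse depth value in [0, 1]
    d = -n / (f - n) - (n * f) / ((f - n) * z)
    if fmt == 'float32':
        # adjacent representable float32 value
        d_next = np.nextafter(np.float32(d), np.float32(2.0))
    else:
        # adjacent level of a 24-bit unsigned normalized integer buffer
        d_next = (round(d * (2**24 - 1)) + 1) / (2**24 - 1)
    # invert d = -n/(f-n) - n*f/((f-n) z)  =>  z = -n*f / ((f-n) d + n)
    z_next = -n * f / ((f - n) * float(d_next) + n)
    return abs(z_next - z)

n, f = 0.1, 1000.0   # example near/far planes
for z in (-1.0, -10.0, -100.0, -999.0):
    print(z, buffer_resolution(z, n, f), buffer_resolution(z, n, f, 'int24'))
```

Running this already hints at the results discussed below: reverse depth in `float32` keeps the resolution tiny across the whole range, while the integer buffer degrades towards the far plane.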

The following graph overlays depth resolution for the different projection methods. Click the legend on top of the graph to toggle individual curves. The slider on the bottom lets you restrict the displayed depth range. The zoom button resamples the graph for the current depth range. Use the reset button to reset the depth range.


## Discussion

Results depend a lot on the buffer type. For floating point buffers, reverse depth clearly beats the other candidates: It has the lowest error rates and is impressively stable w.r.t. extremely close near planes. As the distance between the near and far planes increases, the computation of $f - n$ starts to drop more and more mantissa bits of $n$, effectively making the projection converge gracefully to the reverse infinite far plane projection. On top of that, on the AMD GCN architecture a 32-bit floating point depth buffer comes at the same cost as 24-bit integer depth.
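The mantissa-bit argument can be demonstrated directly (my own illustration, using float32 arithmetic via numpy):

```python
import numpy as np

# As f grows, the float32 subtraction f - n loses the low mantissa bits of n,
# so the reverse-projection entries n/(f-n) and n*f/(f-n) drift towards their
# infinite-far-plane limits (0 and n).
n = np.float32(0.1)
for f in (np.float32(1e3), np.float32(1e6), np.float32(1e9)):
    print(float(f), float(n / (f - n)), float((n * f) / (f - n)))

# at f = 1e9 the spacing of float32 around f exceeds n entirely:
print(np.float32(1e9) - n == np.float32(1e9))  # True
```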

With integer buffers, linear Z would be the method of choice if it didn’t entail the massive performance overhead. Reverse depth seems to perform slightly better than standard depth, especially towards the far plane. But both methods share the sensitivity to small near plane values.

## Reverse Depth on OpenGL

In order to get proper reverse depth working on OpenGL one needs to work around OpenGL’s definition of clip space. Ways to do so are either the `ARB_clip_control` extension or a combination of `glDepthRangedNV` and a custom far clipping plane. The result should then be the same as reverse depth on DirectX. I included the *Reversed OpenGL* curve to visualise the depth resolution if clip space is kept at $[-1, 1]$ and the regular window transform `glDepthRange(0, 1)` is used to transform clip space values into buffer values.

## Links

- Creating vast game worlds
- Infinite Projection Matrix notes
- Maximizing Depth Buffer Range and Precision
- Tightening the Precision of Perspective Rendering