Did you ever wonder how some algorithm would perform with a slightly different Gaussian blur kernel? Well, then this page might come in handy: just enter the desired standard deviation and the kernel size (both in pixels) and press the “Calculate Kernel” button. You’ll get the corresponding kernel weights for use in a one- or two-pass blur algorithm in two neat tables below.
One dimensional Kernel
This kernel is useful for a two-pass algorithm: first perform a horizontal blur with the weights below, then perform a vertical blur on the resulting image (or vice versa).
0.06136 | 0.24477 | 0.38774 | 0.24477 | 0.06136
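As a concrete illustration, here is a minimal JavaScript sketch of such a two-pass blur. The names are illustrative (this is not the calculator’s own source), and it assumes a grayscale image stored as a flat array with samples clamped at the borders:

// One 1D convolution pass over a grayscale image stored as a flat
// Float32Array of size width * height. `weights` is an odd-length
// 1D kernel such as the row in the table above.
function blur1D(src, width, height, weights, horizontal) {
  var dst = new Float32Array(width * height);
  var r = (weights.length - 1) / 2; // kernel radius
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var acc = 0;
      for (var k = -r; k <= r; k++) {
        // clamp sample coordinates to the image border
        var sx = horizontal ? Math.min(width - 1, Math.max(0, x + k)) : x;
        var sy = horizontal ? y : Math.min(height - 1, Math.max(0, y + k));
        acc += weights[k + r] * src[sy * width + sx];
      }
      dst[y * width + x] = acc;
    }
  }
  return dst;
}

// Two-pass blur: horizontal pass first, then vertical pass on the result.
function gaussianBlur(src, width, height, weights) {
  var tmp = blur1D(src, width, height, weights, true);
  return blur1D(tmp, width, height, weights, false);
}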
Two dimensional Kernel
The weights below can be used directly in a single-pass blur algorithm: n × n samples per pixel for an n × n kernel.
0.003765 | 0.015019 | 0.023792 | 0.015019 | 0.003765
0.015019 | 0.059912 | 0.094907 | 0.059912 | 0.015019
0.023792 | 0.094907 | 0.150342 | 0.094907 | 0.023792
0.015019 | 0.059912 | 0.094907 | 0.059912 | 0.015019
0.003765 | 0.015019 | 0.023792 | 0.015019 | 0.003765
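Incidentally, the 2D kernel is simply the outer product of the 1D kernel with itself: each 2D weight is the product of its row and column weights (e.g. 0.06136 × 0.24477 ≈ 0.015019 above). In JavaScript (illustrative names):

// Build the 2D kernel as the outer product of the 1D kernel with itself.
function outerProduct(weights) {
  return weights.map(function (wi) {
    return weights.map(function (wj) { return wi * wj; });
  });
}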
Analysis & Implementation Details
Below you can find a plot of the continuous distribution function and the discrete kernel approximation. One thing to look out for is the tails of the distribution versus the kernel support: for the current configuration we have 1.24% of the curve’s area outside the discrete kernel. Note that the weights are renormalized such that the sum of all weights is one. In other words, the probability mass outside the discrete kernel is redistributed to the pixels within the kernel, in proportion to their weights.
The weights are calculated by numerical integration of the continuous Gaussian distribution over each discrete kernel tap. Take a look at the JavaScript source in case you are interested.
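The script itself is linked rather than inlined, but the idea is easy to sketch. The following JavaScript uses illustrative names of my own, with Simpson’s rule standing in for whatever quadrature the actual source uses: it integrates the unnormalized Gaussian over each unit-wide tap and then renormalizes.

// Approximate the kernel weights by integrating the (unnormalized)
// Gaussian over each unit-wide tap [i - 0.5, i + 0.5], then
// renormalizing so the weights sum to one.
function gaussian(x, sigma) {
  return Math.exp(-(x * x) / (2 * sigma * sigma));
}

function kernelWeights(kernelSize, sigma) {
  var r = (kernelSize - 1) / 2;
  var raw = [];
  for (var i = -r; i <= r; i++) {
    // Simpson's rule on [i - 0.5, i + 0.5]: (f(a) + 4 f(mid) + f(b)) / 6
    raw.push((gaussian(i - 0.5, sigma) + 4 * gaussian(i, sigma) + gaussian(i + 0.5, sigma)) / 6);
  }
  var sum = raw.reduce(function (s, w) { return s + w; }, 0);
  return raw.map(function (w) { return w / sum; }); // renormalize
}

// kernelWeights(5, 1) comes out close to the table above; an exact
// closed-form variant using erf appears in the comments below.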
Comments
Hi, this is really handy, but I’m getting different values when I calculate it myself. It’d be nice to see the code you use to generate and normalise the kernel. I’m using the following C#, which can be easily pasted into LINQPad:
double Guassian(int x, double sigma) {
  double c = 2.0 * sigma * sigma;
  return Math.Exp(-(x * x) / c) / Math.Sqrt(c * Math.PI);
}

double[] GuassianTerms(int kernalSize, double sigma) {
  var terms = new double[kernalSize];
  for (int i = 0; i < kernalSize; i++)
    terms[i] = Guassian(i - kernalSize / 2, sigma);
  var sum = terms.Aggregate((t1, t2) => t1 + t2); // aggregate to normalise result
  Console.WriteLine(String.Join("\r\n", terms.Select(i => (i / sum).ToString("0.00000"))));
  return terms.Select(t => t / sum).ToArray();
}
Seems some of the code was stripped. Here it is: http://pastebin.com/bKLYdmdi
Hi Aranda,
The JS code is linked in the post, check it out: http://dev.theomader.com/scripts/gaussian_weights.js
Out of curiosity: How different are the results? Looks like we are using the same normalization but a different sampling strategy.
Cheers,
Theo
Pingback: Screen Space Glossy Reflections | Roar11.com
Pingback: Gaussian Blur | The blog at the bottom of the sea
This is cool. However, you are missing a potential optimization: if you get bilinear filtering for free (as you do with GPU texture sampling), you can leverage that to get two samples for the price of one!
Say you have a kernel of width 5 with weights a, b, c, d, e corresponding to pixels with values p0, p1, p2, p3, p4. The positions of the samples are -2, -1, 0, 1, 2.
The total kernel result is k = ap0 + bp1 + cp2 + dp3 + ep4.
You can evaluate this kernel equivalently with only 3 samples instead of 5: one in the center, and one each somewhere between p0 and p1, and between p3 and p4, respectively.
The task is to figure out WHERE that somewhere is, and what the WEIGHT of that sample should be.
The contribution of the first two samples to the kernel total is
ap0 + bp1 = (a+b)( a/(a+b) p0 + b/(a+b) p1 )
Bilinearly filtering p0 and p1 along one axis with lerp weight t gives:
(t)p0 + (1-t)p1
Setting t = a/(a+b), we get
1-t = (a+b)/(a+b) - a/(a+b) = b/(a+b)
So a/(a+b) p0 + b/(a+b) p1 can be expressed as (t)p0 + (1-t)p1, and
ap0 + bp1 = (a+b)( a/(a+b) p0 + b/(a+b) p1 ) = (a+b)( (t)p0 + (1-t)p1 )
We therefore use t = a/(a+b) to place the uv offset, and a+b as the weight of the dual sample. We know the sample needs to sit somewhere between -2 and -1, so we set it to -1 - t = -1 - a/(a+b).
As an example, for a 5 tap kernel of sigma=1, the calculator gives us these weights:
0.06136 0.24477 0.38774 0.24477 0.06136
Plugging these into the equations,
t = 0.06136 / (0.06136 + 0.24477) = 0.2004, therefore
-1 - t = -1.2004
and
a+b = 0.30613
So the new kernel that evaluates to the same result would have weights:
0.30613 0.38774 0.30613
With sample offsets
-1.2004 0 1.2004
Notice that the sample offset -1.2004 is closer to p1 (at -1) than to p0 (at -2). This makes sense, because the weight of p1 is higher than the weight of p0, and lerping gives us the correct proportion between the two weights.
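Here’s a small JavaScript sketch of that folding step, generalized to any odd-length symmetric kernel. All names are illustrative, not from the calculator’s source:

// Fold an odd-length, symmetric 1D kernel into (weight, offset) pairs
// that exploit hardware bilinear filtering.
function bilinearKernel(weights) {
  var r = (weights.length - 1) / 2; // kernel radius, e.g. 2 for 5 taps
  var samples = [{ weight: weights[r], offset: 0 }]; // center tap stays
  var i = r - 1; // innermost tap on the negative side
  while (i >= 1) { // merge adjacent taps in pairs, moving outward
    var b = weights[i], a = weights[i - 1]; // inner (b) and outer (a) tap
    var t = a / (a + b); // lerp factor toward the outer tap
    var offset = (i - r) - t; // inner tap position, shifted outward by t
    samples.push({ weight: a + b, offset: offset });
    samples.push({ weight: a + b, offset: -offset }); // mirrored twin
    i -= 2;
  }
  if (i === 0) { // odd number of side taps: the outermost one stays alone
    samples.push({ weight: weights[0], offset: -r });
    samples.push({ weight: weights[0], offset: r });
  }
  return samples;
}

// bilinearKernel([0.06136, 0.24477, 0.38774, 0.24477, 0.06136]) gives
// weights 0.38774, 0.30613, 0.30613 at offsets 0, -1.2004, +1.2004.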
It would be cool if you updated your calculator to calculate optimal weights and offsets in this way. I found your page at the top of the google search results, so I think enough people might be using this as a reference to be a useful addition.
Good luck, and thanks for the article!
Steve
Pingback: Online Gaussian kernel generator
There is a better way to integrate than the Monte Carlo integration in your code. Take the integral of the Gaussian function:
∫ e^(-1/2 ((x-μ)/σ)^2) / (σ sqrt(2π)) dx = 1/2 erf((x-μ)/(sqrt(2) σ)) + constant
with erf being the error function: https://en.wikipedia.org/wiki/Error_function. I gave it a try, works fine:
//from http://picomath.org/javascript/erf.js.html
function erf(x) {
  // constants
  var a1 = 0.254829592;
  var a2 = -0.284496736;
  var a3 = 1.421413741;
  var a4 = -1.453152027;
  var a5 = 1.061405429;
  var p = 0.3275911;

  // Save the sign of x
  var sign = 1;
  if (x < 0)
    sign = -1;
  x = Math.abs(x);

  // A&S formula 7.1.26
  var t = 1.0 / (1.0 + p * x);
  var y = 1.0 - (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * Math.exp(-x * x);
  return sign * y;
}

var sqrt_2 = Math.sqrt(2);

// antiderivative of the Gaussian pdf (up to the +0.5 constant);
// taking differences of this gives the exact integral over each tap
function def_int_gaussian(x, mu, sigma) {
  return 0.5 * erf((x - mu) / (sqrt_2 * sigma));
}

var sigma = 1;
var mu = 0;
var kernel_size = 5;
var start_x = -(kernel_size / 2);
var end_x = (kernel_size / 2);
var step = 1;

var coeff = [];
var last_int = def_int_gaussian(start_x, mu, sigma);
for (var xi = start_x; xi < end_x; xi += step) {
  var new_int = def_int_gaussian(xi + step, mu, sigma);
  coeff.push(new_int - last_int);
  last_int = new_int;
}

var sum = 0;
for (var i in coeff) {
  sum += coeff[i];
}

//normalize
for (var i in coeff) {
  coeff[i] /= sum;
}
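For sigma = 1 and kernel_size = 5 this reproduces, to the printed precision, the weights in the tables above: 0.06136, 0.24477, 0.38774, 0.24477, 0.06136.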