David Baron's Weblog

What does a blur radius mean?

Friday, 2011-02-25, 19:00 -0800

[Note that this blog entry contains a good bit of markup, including script and SVG, and will probably not syndicate very well.]

A bunch of Web platform features involve blurring. For example, the CSS text-shadow property lets a shadow be both positioned and blurred. Each shadow is given with three numbers: the first two give the position and the third gives the blur radius. For example:

text with a shadow
span {
  font: italic 3em serif;
  color: rgb(255, 64, 0);
  text-shadow: silver 0.2em 0.2em 0.07em;
}

The CSS box-shadow property is similar, though it also takes a fourth number, a spread radius, which I won't discuss here, except to say that nothing discussed here is relevant to it.

The HTML canvas element has a similar shadow mechanism (its 2D context's shadowOffsetX, shadowOffsetY, shadowBlur, and shadowColor properties), which applies to all drawing operations.
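As a sketch (the function name, offsets, and blur value here are my own illustrative choices), the canvas equivalent of the CSS example above looks something like this:

```javascript
// Sketch of the canvas shadow properties, mirroring the CSS example above.
// In a real page the context would come from a canvas element, e.g.:
//   const ctx = document.getElementById('c').getContext('2d');
function drawShadowedText(ctx) {
  ctx.font = 'italic 48px serif';
  ctx.fillStyle = 'rgb(255, 64, 0)';
  ctx.shadowColor = 'silver';
  ctx.shadowOffsetX = 10;  // shadow position, like the 0.2em offsets in CSS
  ctx.shadowOffsetY = 10;
  ctx.shadowBlur = 4;      // the blur radius this post is about
  ctx.fillText('text with a shadow', 10, 60);
}
```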

Different browsers, however, have historically done different things for the same blur radius, both in terms of blurring algorithm and what the radius means for that algorithm (i.e., how blurry a given radius makes things). In some cases the same browser has done different things for canvas and the CSS properties. Over the past year, the CSS and HTML specifications have changed (for CSS) to define what this blur radius means or (for HTML) so that they agree with each other on this definition.

(I'm largely ignoring SVG filters here, which also have a blurring mechanism, since the rules for it have been stable for years. But I'll explain at the end how they relate to the rules for canvas and CSS.)

In Firefox 4, we've changed our implementation to match these changes in the specs.

Here, I'll explain what the blur radius now means. To do that properly, I need to explain how blurring works. So let's look at how we'd blur an image or any other pixel-based display. I'm going to do this with grayscale images for now, since it's simpler, but this treatment extends to RGB images and to premultiplied RGBA images, by just doing the same math for each color channel.

So how would we do a blur operation that takes some source image and produces a blurrier result image? In general, we'd do this using what's called a kernel function, where each pixel in the result image is a weighted average of pixels near that location in the source image. For example, we might use a small grid of weights, centered on the pixel being computed, to combine the nearby pixels in the source image.

In other words, we compute the pixel at a given position in the result image by multiplying the kernel's numbers by the color values of the nearby source pixels and summing. (It doesn't matter if 0 is black and 1 is white, or 0 is black and 255 is white, as long as we're consistent.) It's important that the numbers in the kernel add up to 1; that's what makes it a weighted average, and what keeps it from darkening or lightening the image. All the values further away, outside the grid, are 0 in such a kernel function; the grid only needs to be as big as the nonzero region. (Note that I'm ignoring what happens when we're near the edge of the image. There are multiple options, including assuming that the pixels at the edge of the image extend out infinitely, and assuming that everything past the edge of the image is transparent.)
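As a sketch of this weighted average (the 3×3 kernel here is my own illustrative choice, not the post's original grid; note its entries sum to 1):

```javascript
// A small blur kernel whose nine weights sum to 1.
const kernel = [
  [1 / 16, 2 / 16, 1 / 16],
  [2 / 16, 4 / 16, 2 / 16],
  [1 / 16, 2 / 16, 1 / 16],
];

// src is a grayscale image as an array of rows of values; (x, y) is a pixel
// that doesn't touch the edge. The result is a weighted average of the
// 3×3 neighborhood around (x, y).
function blurredPixel(src, x, y) {
  let sum = 0;
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      sum += kernel[dy + 1][dx + 1] * src[y + dy][x + dx];
    }
  }
  return sum;
}
```

Because the weights sum to 1, a flat region of the image comes out unchanged, which is the "no darkening or lightening" property above.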

However, computing a blur this way is very expensive, especially for large blurs. It requires that, for each pixel in the result image, we look at a large number of pixels in the source image and multiply each by some weight specific to that result pixel. For a blur that takes input from pixels only up to 10 pixels away from the result pixel, this would mean looking at up to 441 (21×21) pixels in the source image. This would make blurring very slow.

So let's look at what we could do faster. One thing that we can do quickly is a blur in one dimension, with a kernel function whose values are uniform across a specific width. For example, the function

    f(x) = 1/7 for −3 ≤ x ≤ 3, and 0 otherwise

does a horizontal blur that looks at pixels in the source image up to three pixels away, horizontally, from the result pixel. We can also represent this kernel function as a graph:

[Graph: a constant value of 1/7 from −3 to 3, and 0 elsewhere; axis marked −4 to 4]

where the area under the graph is 1.

A blur with this kernel function is called a box blur. It's quick to compute because we can keep a running total as we move along the row; we don't need to do an amount of work proportional to the size of the blur for each pixel in the result image. In particular, we can compute it as follows (assuming we're treating the area past the edges of the image as transparent):

  1. Add up the values in the first four pixels in the row in the source image.
  2. Put the current value divided by seven in the first pixel in the row in the result image.
  3. Add the fifth pixel of the source.
  4. Put the current value divided by seven in the second pixel in the row in the result image.
  5. ...
  6. Add the seventh pixel of the source.
  7. Put the current value divided by seven in the fourth pixel in the row in the result image.
  8. Add the eighth pixel of the source and subtract the first pixel.
  9. Put the current value divided by seven in the fifth pixel in the row in the result image.
  10. ...
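The running-total steps above can be sketched as follows (a sketch in JavaScript; boxBlurRow and its radius parameter are my own names, and pixels past the edges count as transparent, i.e. zero):

```javascript
// Box blur of one row with a running total, so the work per output pixel is
// constant regardless of the blur width. radius = 3 gives the width-7 blur
// from the steps above.
function boxBlurRow(src, radius = 3) {
  const width = 2 * radius + 1;  // 7 for radius 3
  const out = new Array(src.length);
  let total = 0;
  // Prime the total with the pixels the first window can see: 0..radius.
  for (let i = 0; i <= radius && i < src.length; i++) total += src[i];
  for (let i = 0; i < src.length; i++) {
    out[i] = total / width;
    const incoming = i + radius + 1;  // pixel entering the window
    const outgoing = i - radius;      // pixel leaving the window
    if (incoming < src.length) total += src[incoming];
    if (outgoing >= 0) total -= src[outgoing];
  }
  return out;
}
```

Note how each iteration does only one add and at most one subtract, no matter how wide the blur is.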

So far, this isn't a very interesting looking kernel function, though. It produces an ugly and horizontal-only blur.

But we can do something else here. We can run the same blur again; doing so is equivalent to a single blur whose kernel is the convolution of the box kernel with itself. When we do that, we end up with a kernel function like this:


[Graph: a triangular kernel peaking at 1/7 at 0 and falling linearly to 0 at ±7; axis marked −7 to 7]

Alternatively, we can pretend we have a large number of pixels and draw a smooth graph:

[Graph: the same triangular kernel drawn as a smooth, piecewise-linear curve]

(When we did a single box blur, we had a graph that was piecewise constant. Now, with a double box blur, we have a graph that's piecewise linear. If you don't know what this means, don't worry about it.)

We can keep repeating this convolution, and as we do, the central limit theorem says we'll end up with a distribution that is closer and closer to a Gaussian, also known as a normal distribution. When we do a box blur three times, we have a function that is piecewise quadratic, and quite close to a Gaussian:
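One way to see this convergence concretely (a sketch; convolve is my own helper, operating on discrete kernels): convolving the width-7 box kernel with itself once gives the triangular double-box kernel, and once more gives the near-Gaussian triple-box kernel.

```javascript
// Discrete convolution of two kernels: the kernel of "blur with a, then
// blur with b" is the convolution of the two kernels.
function convolve(a, b) {
  const out = new Array(a.length + b.length - 1).fill(0);
  for (let i = 0; i < a.length; i++) {
    for (let j = 0; j < b.length; j++) {
      out[i + j] += a[i] * b[j];
    }
  }
  return out;
}

const box = new Array(7).fill(1 / 7);  // the width-7 box kernel
const double = convolve(box, box);     // piecewise linear: a triangle
const triple = convolve(double, box);  // piecewise quadratic: near-Gaussian
```

Each convolution preserves the sum of 1, so every pass remains a weighted average.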

[Graph: a Gaussian overlaid with a triple box blur; the two curves nearly coincide]

We could run more than three passes, but three is (as shown above) already quite close.

Now, the Gaussian function has a very interesting property. The product of a Gaussian kernel function in the horizontal direction and one in the vertical direction is a kernel function composed of perfect circles. In other words, if we do a Gaussian blur horizontally and then do the same vertically, the contribution of a point in the source image to a nearby point in the result image is a function only of the distance between the points; it doesn't matter at all what component of that distance is horizontal and what component is vertical. Or, to put it another way, there's no difference between an image that is rotated 45 degrees, run through a horizontal Gaussian blur and then a vertical Gaussian blur or one where the rotation happens last.
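Written out with the standard one-dimensional Gaussian kernel, the separability argument is that the product of a horizontal and a vertical Gaussian depends only on the distance r from the center:

```latex
G_\sigma(x)\,G_\sigma(y)
  = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2}
    \cdot \frac{1}{\sigma\sqrt{2\pi}}\, e^{-y^2/2\sigma^2}
  = \frac{1}{2\pi\sigma^2}\, e^{-(x^2 + y^2)/2\sigma^2}
  = \frac{1}{2\pi\sigma^2}\, e^{-r^2/2\sigma^2},
  \qquad r = \sqrt{x^2 + y^2}.
```

Since r is unchanged by rotation, so is the two-dimensional kernel, which is why the order of rotation and blurring doesn't matter.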

So the standard technique for blurring is to do exactly this: approximate a Gaussian blur by doing a triple box blur: three quick passes over the image in one dimension and then three quick passes over the image in the other dimension.

We use a Gaussian blur because it's computationally easy to approximate and relatively smooth-looking, not, as far as I can tell, because it matches the way physical blurring happens with lenses or non-point light sources. My understanding is that this approximation technique is common across browsers, although Chrome uses (or used?) a single box blur, which gives the blur square-looking artifacts.

So, getting back to CSS and HTML: what does this blur radius mean? A Gaussian distribution is described by two parameters: the mean (μ) and the standard deviation (σ). We obviously want the mean (the center) of the kernel function applied to the source image to be the same as the pixel in the result image that we're computing. The blur effect is now defined by css3-background and by HTML to be a Gaussian blur with the standard deviation (σ) equal to half the given blur radius, with allowance for reasonable approximation error. So the kernel function looks, in one dimension, like this:

[Graph: a Gaussian kernel with its standard deviation σ marked, and the blur radius equal to 2σ]

This means that a blur with a 10px radius has an effect that extends a little past 10 pixels, but the bulk of the visible effect is within the 10px blur radius.
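To make the relation concrete, here's a sketch (gaussianKernel is my own name, not a platform API) that builds a discrete one-dimensional Gaussian kernel from a CSS-style blur radius, using the spec's σ = radius / 2:

```javascript
// Build a discrete 1D Gaussian kernel for a given CSS-style blur radius.
// The standard deviation is half the radius, per css3-background and HTML.
function gaussianKernel(radius) {
  const sigma = radius / 2;
  const half = Math.ceil(3 * sigma);  // cover ±3σ; tails beyond are tiny
  const values = [];
  let sum = 0;
  for (let x = -half; x <= half; x++) {
    const v = Math.exp(-(x * x) / (2 * sigma * sigma));
    values.push(v);
    sum += v;
  }
  // Normalize so the kernel sums to 1 (a weighted average).
  return values.map(v => v / sum);
}
```

For a 10px radius this gives σ = 5, so the kernel's support extends somewhat past 10 pixels while most of its weight stays within the radius, matching the description above.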

I mentioned SVG at the beginning, and I'll mention it again here. The SVG feGaussianBlur filter primitive has a stdDeviation attribute, which takes the standard deviation (σ) of the Gaussian blur. So in SVG, the number given is half the number that you would give to get the same blur with CSS or canvas.

[Update 2016-03-11, 14:45 +0800: The “in SVG” rule also applies to the blur() filter function for the ‘filter’ CSS property, since the CSS filter functions are modeled on SVG filters. This means that the distinction is now that the length given is 2σ for the blur in a CSS ‘box-shadow’ or ‘text-shadow’, and the length given is σ for the blur in an SVG feGaussianBlur filter or CSS blur() filter.]

See also: