Progressive Tiling 2D Blue Noise Point Sets Encoded as Images

by Sean Barrett, 2021-08-01

Alpha channel of image-encoded point set.
Values rescaled for visibility.

RGB channels of image-encoded point set.

Three simultaneous blue noise point sets
from a single progressive set.


This page reintroduces progressive blue noise point sets: blue noise point sets that can be truncated to any number of points while still producing blue noise. Multiple progressive blue noise point sets are provided, encoded as images of various sizes.


Blue noise is a type of noise (randomness) without low frequencies in its spectrum.

Blue noise point sets are collections of points whose distribution is "blue noise"-y. Often, such point sets are generated by Poisson disk sampling, such that the points are no closer to each other than some specified minimum distance.

Tiling blue noise point sets are point sets contained in some square region (e.g. x=[0..1), y=[0..1)) such that the point set can be repeatedly tiled at a regular spacing and the tiling retains the blue noise property locally (although this does introduce low frequencies at the tiling scale). Basically, this means that the Poisson disk sampling for the point set guarantees points are no closer to each other than a specific distance even assuming wraparound; roughly, the Poisson disk sampling occurs on a (distorted) torus.

Progressive blue noise point sets are point sets that can be listed in some order such that any prefix of the list is itself a blue noise point set. This idea appears to have been first introduced by McCool & Fiume in "Hierarchical Poisson Disk Sampling Distributions" (1992). When generated with Poisson disk sampling, the points become progressively more closely spaced as more of the list is included.

2D blue noise point sets encoded as images distribute the points over an image grid whose size is related to the range of the points divided by the minimum spacing used in the Poisson disk sampling. Each point is stored in the image pixel corresponding to its coarse position in the grid, and the pixel value encodes additional information such as fine positioning or location in a progressive list.

Image Encoding Details

Each image available below on this page is encoded as follows, interpreting each color channel as a value from 0..255. For every pixel for which alpha does not equal 0, a point is located within that pixel. The alpha channel holds the integer part of that point's spacing threshold, in units of image pixels, and the blue channel holds the fractional part, so the full threshold is alpha + blue/256; thresholding at a spacing value k keeps exactly the points whose threshold is at least k, and the kept points are no closer to each other than (approximately) k pixels. The red and green channels hold the subpixel x and y position of the point within its pixel.

Note that this encoding gives the alpha channel a very different distribution from that of a blue noise image, though it can be nearly monotonically remapped to one. The image of the alpha channel at the top of this page was remapped to more closely resemble such an image, although not completely.

In other words:
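A minimal C decoding sketch (the `decode_point` helper name and the exact 1/256 subpixel scaling are illustrative assumptions; the channel semantics are as described on this page):

```c
typedef struct { unsigned char r, g, b, a; } Pixel;

/* Returns 1 and writes the point position if pixel (i,j) contains a point
   that is active at minimum spacing 'spacing' (in image-pixel units). */
int decode_point(Pixel p, int i, int j, float spacing, float *px, float *py)
{
   if (p.a == 0) return 0;                  /* no point in this pixel */
   float threshold = p.a + p.b / 256.0f;    /* 8.8 fixed point from A and B */
   if (threshold < spacing) return 0;       /* not active at this spacing */
   *px = i + p.r / 256.0f;                  /* subpixel x from red channel */
   *py = j + p.g / 256.0f;                  /* subpixel y from green channel */
   return 1;
}
```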

(Note the threshold operation described above automatically discards image pixels in which no point is present.)

If k is below the square root of 2, the image grid cannot capture the full density of points and some will be missing, giving lower quality results, but points are defined in the image for 1 < k < 100 in case that's useful.

This format is designed to facilitate random access to points based on approximate location. No explicit ordering for the truncatable "sorted list" is provided, although one can easily be produced by extracting an explicit list of points and sorting it by threshold value.
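For example, the ordering could be recovered with something like the following sketch (the `extract_progressive_list` helper is hypothetical; it assumes the channel layout described on this page):

```c
#include <stdlib.h>

typedef struct { float x, y, threshold; } Point;

static int by_threshold_desc(const void *a, const void *b)
{
   float ta = ((const Point *)a)->threshold;
   float tb = ((const Point *)b)->threshold;
   return (ta < tb) - (ta > tb);   /* largest spacing first */
}

/* rgba: w*h*4 bytes; out must have room for w*h points; returns count.
   After sorting, any prefix of 'out' is a blue noise point set. */
int extract_progressive_list(const unsigned char *rgba, int w, int h, Point *out)
{
   int n = 0;
   for (int j = 0; j < h; ++j)
      for (int i = 0; i < w; ++i) {
         const unsigned char *p = &rgba[(j * w + i) * 4];
         if (p[3] == 0) continue;               /* no point in this pixel */
         out[n].x = i + p[0] / 256.0f;          /* red  = subpixel x */
         out[n].y = j + p[1] / 256.0f;          /* green = subpixel y */
         out[n].threshold = p[3] + p[2] / 256.0f;  /* alpha + blue/256 */
         ++n;
      }
   qsort(out, n, sizeof(Point), by_threshold_desc);
   return n;
}
```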

Encoding as an image, rather than defining some special grid file format, has the advantage of being relatively easy to remember and use, and it makes the data straightforward to access from GPU shaders. Here are some mnemonic devices to help remember the image channel meanings:

To a first approximation, the point is located at the center of its image pixel. Integer spacing thresholds only need to compare against the alpha channel, corresponding to the "alpha test" functionality available in traditional graphics APIs; the units of the values in the alpha channel are image pixels. The subpixel positions for x and y can be accessed from a GPU shading language as the vector components x and y of an image pixel. (If loaded as 8-bit UNORM values, technically they should be rescaled by 255/256, but acceptable results can be achieved without doing so, as the spacing error will be at most 1/256th.) In a bytestream ordered RGBA, B and A form a little-endian 16-bit value which can be interpreted as an 8.8 fixed-point value to be thresholded.

Another option would have been to arrange the alpha channel so that it resembles traditional blue noise. That would be very different from the format used here, where more than half of the pixels have a value of less than 2 (interpreting encoded values as 0..255), with an exponentially decaying distribution and a largest value of 100; the format here is instead designed to easily support thresholding that guarantees a specified minimum spacing.


Not every possible spacing alpha+blue/256 appears in these images. To make creation of these images more tractable, only a small number of blue values were generated for each alpha (typically from 2..4 for large spacings to 8..12 for small spacings). Therefore small changes in the threshold value may produce no change in the thresholded point set. Larger spacings are omitted from smaller images, as they would not produce a meaningfully blue-noise distribution.

Due to rounding, points may be 1/256th closer together than specified by the threshold.

Multiple images are available for most sizes, each using a different random seed to produce independent blue noise point sets. You can also use rotations and reflections of a single image to achieve similar effects.

These images are available for free for any and all uses and may be freely used without giving any credit. Formally, I place them in the public domain; I apply the Creative Commons CC0 license to them; steal this book; this machine destroys fascists.


It is plausible that a progressive blue noise point set thresholded at some value might be lower quality than one generated directly for that value with Poisson disk sampling: because each level is seeded with points generated at a slightly larger spacing, the space might be filled less efficiently than if sampling had started from an empty area. In other words, there might be fewer points, with a worse distribution, than if the set were generated directly.

Or this might not be true. I didn't test it. I decided the results were acceptable for my applications.

The McCool92 paper referenced above claims to find such distributions empirically acceptable, but I don't know whether that finding applies to my generation method.


I use the images provided on this page for object placement in simulated 3D worlds: objects like vegetation, which naturally tends to maintain some minimum spacing due to natural processes.

A classic Poisson disk sampling algorithm can be quite fast, and it can be adapted to include almost all of the features described in this section except random access. For me the primary advantage is separation of concerns, or equivalently, simplicity of code. The code for placing objects does not have to include a Poisson disk sampler modified to support extra features; the complexity has been isolated to another program, and the object placement code just thresholds an image.

Multiple scales

A single progressive blue noise point set image can be used to place objects at multiple scales by using different thresholds. A traditional precomputed blue noise point set is fixed at a single scale, but can be rescaled arbitrarily by multiplying the point coordinates by a constant. However, in such a case, the multiple scales of points are not aware of each other. Using a progressive blue noise set with two thresholds allows one set of objects to be placed with one spacing, and a second set of objects to be placed using a second smaller spacing, automatically avoiding the first set of objects as well.

A progressive blue noise point set thresholded at three different values

Unfortunately, the smaller-spaced objects only avoid the larger-spaced objects by the smaller spacing. In the above image, black points are far from each other, but green points come as close to black points as they come to other green points. In a more realistic placement approach, we might want red and green objects to remain a bit further from black objects, e.g. because the black objects themselves are larger. If this sort of spacing is required, additional explicit rejection testing must be performed.
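Such a rejection test might look like the following sketch, where `blacks` is a hypothetical list of already-placed large (black) objects and `extra` is the additional spacing required around them:

```c
typedef struct { float x, y; } Pt2;

/* Returns 1 if candidate point p lies within 'extra' pixels of any
   already-placed large object, in which case it should be rejected. */
int too_close_to_black(Pt2 p, const Pt2 *blacks, int n, float extra)
{
   for (int i = 0; i < n; ++i) {
      float dx = p.x - blacks[i].x, dy = p.y - blacks[i].y;
      if (dx * dx + dy * dy < extra * extra)
         return 1;
   }
   return 0;
}
```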

Note that if a black or red object is suppressed for some other reason, its slot should fall through to the next smaller-spaced color, so that suppression does not leave a hole in the placement. In other words, generation of the above image with optional object suppression by any desired method would resemble something like:

   if (pixel.alpha >= black_object_threshold && !suppress_black_object(x,y))
      ... generate black object at this pixel ...
   else if (pixel.alpha >= red_object_threshold && !suppress_red_object(x,y))
      ... generate red object at this pixel ...
   else if (pixel.alpha >= green_object_threshold && !suppress_green_object(x,y))
      ... generate green object at this pixel ...
Here, testing the blue channel is omitted for the sake of clarity.

Continuous spacing

Objects can be placed with a continuously varying spacing by computing a spatially-varying value to use as the threshold, e.g. with a noise function or a user-defined map.
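As a sketch, with a hypothetical user-defined `spacing_map` function (here just a linear gradient, with purely illustrative numbers), the threshold test becomes:

```c
/* Hypothetical density map: desired minimum spacing grows linearly from
   2 pixels at x=0 to 20 pixels at x=512. */
static float spacing_map(float x, float y)
{
   float u = x / 512.0f;
   (void)y;
   if (u < 0) u = 0;
   if (u > 1) u = 1;
   return 2.0f + 18.0f * u;
}

/* A point whose encoded threshold is t (alpha + blue/256) survives
   wherever the locally desired spacing is no larger than t. */
static int point_active(float t, float x, float y)
{
   return t >= spacing_map(x, y);
}
```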

Random access

A small area can be computed/recomputed while still maintaining consistency with the rest of the area. For example, users can edit a continuous-spacing map and an object placement system can regenerate objects in that area without concern for interactions with other objects outside the edited area, and in a way that is consistent regardless of what order areas are recomputed.

GPU shaders computing values for a single output point can threshold a small number of noise image pixels to check if there are nearby active points, typically 1, 4, or 9 pixels.
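A CPU-side sketch of that neighborhood check might look like this (the `point_nearby` helper is illustrative; it assumes the channel layout described on this page and a tiling image, so neighbor lookups wrap around the edges):

```c
/* Is there an active point within 'radius' pixels of query (qx,qy)?
   Checks the 3x3 pixel neighborhood around the query, which suffices
   for radius <= 1 pixel. */
int point_nearby(const unsigned char *rgba, int w, int h,
                 float qx, float qy, float spacing, float radius)
{
   int ci = (int)qx, cj = (int)qy;
   for (int dj = -1; dj <= 1; ++dj) {
      for (int di = -1; di <= 1; ++di) {
         int i = (ci + di + w) % w, j = (cj + dj + h) % h;  /* wraparound */
         const unsigned char *p = &rgba[(j * w + i) * 4];
         if (p[3] == 0) continue;                       /* no point here */
         if (p[3] + p[2] / 256.0f < spacing) continue;  /* inactive at this spacing */
         float px = (ci + di) + p[0] / 256.0f;  /* unwrapped x from red */
         float py = (cj + dj) + p[1] / 256.0f;  /* unwrapped y from green */
         float dx = px - qx, dy = py - qy;
         if (dx * dx + dy * dy <= radius * radius)
            return 1;
      }
   }
   return 0;
}
```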

Future work

Generate mipmaps which contain only the points for larger spacings to allow GPU shaders to limit their searches better; e.g. if nearby wide-spaced objects need to be known from further away. In mipmaps, the R & G subpixel values need to cover a larger area of the original images, since the image pixel coordinates have less precision, so the final positions will be more coarsely quantized. For consistency, the points which appear in mipmaps should be quantized in the higher resolution maps to match. This quantization shouldn't be objectionable since the points in the mipmaps are more widely spaced anyway.

Note that I have never tried to use these images from GPU shaders.


I tried several approaches, but in the end I just used the naive solution: applying the algorithm from "Fast Poisson Disk Sampling in Arbitrary Dimensions" (Bridson07) repeatedly at smaller and smaller minimum distances, each pass seeded with the points from the previous pass. (Each pass gets slower, since it is seeded with, and must generate more points than, the previous pass, so the run time was dominated by the final few passes using Poisson disk sizes between 1 and 1.2 pixels.)
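The overall scheme can be sketched as follows; note this substitutes naive dart throwing for Bridson's algorithm purely to keep the example short, and all constants (tile size, pass ratio, try count) are illustrative rather than the ones actually used:

```c
#include <stdlib.h>

#define MAXPTS 4096
typedef struct { float x, y; } Pt;

/* squared distance on a size x size tile with wraparound (a torus) */
static float torus_dist2(Pt a, Pt b, float size)
{
   float dx = a.x - b.x, dy = a.y - b.y;
   if (dx < 0) dx = -dx;
   if (dy < 0) dy = -dy;
   if (dx > size / 2) dx = size - dx;
   if (dy > size / 2) dy = size - dy;
   return dx * dx + dy * dy;
}

/* one pass: add random points at least minDist from all existing points
   (naive dart throwing standing in for Bridson07) */
static int poisson_pass(Pt *pts, int n, float minDist, float size, int tries)
{
   for (int t = 0; t < tries; ++t) {
      Pt c;
      c.x = size * (rand() / (RAND_MAX + 1.0f));
      c.y = size * (rand() / (RAND_MAX + 1.0f));
      int ok = 1;
      for (int i = 0; i < n; ++i)
         if (torus_dist2(c, pts[i], size) < minDist * minDist) { ok = 0; break; }
      if (ok && n < MAXPTS)
         pts[n++] = c;
   }
   return n;
}

/* repeated passes at shrinking spacings, each seeded with the previous
   pass's points; the resulting list, in generation order, is progressive */
static int generate_progressive(Pt *pts, float size)
{
   int n = 0;
   for (float d = size / 4; d > 1.0f; d *= 0.95f)
      n = poisson_pass(pts, n, d, size, 4000);
   return n;
}
```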

I was unaware of the McCool92 paper referenced above at the time I did this work. I only found it while I was finishing writing this article after I hit upon the word "progressive" as an appropriate term and searched for that. It didn't make sense at that time to try to regenerate new data using their method.

The specific sequence of minimum spacings I used to generate most of the images on this page (except, I think, the 4096x4096 images, which used fewer steps) was computed by the following code:

            if (curDist < 4)
               curDist *= 0.968f;
            else if (curDist < 6)
               curDist *= 0.98f;
            else if (curDist < 12)
               curDist *= 0.99f;
            else if (curDist < 32)
               curDist -= 0.125f;
            else
               curDist -= 0.25f;