What is Bitmapping

In an 8-bit grayscale image, each pixel uses 8 bits to store tonal information, meaning its value can range from 0 (black) to 255 (white), including every integer value in between. Each pixel in a grayscale image can therefore take one of 256 possible values. Bitmapping is the process of converting an 8-bit image to a 1-bit image: a binary image where each pixel is either 0 (black) or 1 (white). A variety of algorithms exist to determine whether each 8-bit pixel is allocated black or white. Most of these use some form of error diffusion: rounding a pixel to either black or white, then dividing the remainder among nearby pixels before they in turn are rounded. This ensures that if a pixel was rounded by a large amount (i.e. it was close to medium gray), nearby pixels are more likely to be rounded in the opposite direction. All of the following algorithms start in the top left corner of the image and move from left to right, top to bottom. For ease of understanding, all 8-bit grayscale values are represented in the following examples by a decimal between 0 (black) and 1 (white).
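
For comparison, here is a minimal sketch of the naive approach, with no error diffusion at all. It assumes NumPy and a 2D array of 8-bit values; the function name is purely illustrative.

```python
import numpy as np

def naive_bitmap(gray: np.ndarray) -> np.ndarray:
    """Round each pixel to black (0) or white (1) with no error diffusion.

    `gray` is assumed to be a 2D uint8 array of values 0-255.
    """
    # Rescale 0-255 to the 0.0-1.0 range used in the examples below.
    normalized = gray.astype(np.float64) / 255.0
    # Values at or above 0.5 become white (1); everything else becomes black (0).
    return (normalized >= 0.5).astype(np.uint8)
```

Thresholding every pixel independently like this flattens all tonal detail in mid-gray regions, which is exactly the problem error diffusion is meant to solve.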

The Quantization Error

Any pixel with a final grayscale value below 0.5 (0-127 out of 255) will be rounded down to black. Any pixel with a final grayscale value at or above 0.5 (128-255 out of 255) will be rounded up to white. The quantization error for a given pixel is the pixel's original value in 8-bit grayscale minus the pixel's new value in the 1-bit bitmap. For example, a grayscale pixel with a value of 0.4 (60% gray) is rounded down to 0 (black), with a quantization error of 0.4 (0.4 - 0 = 0.4). A grayscale pixel with a value of 0.75 (25% gray) will be rounded up to 1 (white), with a quantization error of -0.25 (0.75 - 1 = -0.25). Each of the following algorithms takes the quantization error for the pixel being evaluated and divides it among nearby pixels. This means that by the time a given pixel is evaluated, its value may have changed significantly from its original grayscale value, as it will have received a share of the quantization error from each of its already-evaluated neighbors.
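
As a sketch of that rounding step in code (plain Python, with values already on the 0-to-1 scale; the function name is illustrative):

```python
def quantize(value: float) -> tuple[int, float]:
    """Round a 0.0-1.0 gray value to 0 or 1 and return (new_value, quantization_error)."""
    new_value = 1 if value >= 0.5 else 0
    # Quantization error = original value minus new value.
    return new_value, value - new_value

# The two examples from the text:
print(quantize(0.4))   # (0, 0.4)   -> rounded down to black, error of 0.4
print(quantize(0.75))  # (1, -0.25) -> rounded up to white, error of -0.25
```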

Floyd-Steinberg

The Floyd-Steinberg algorithm was published in 1976 by Robert W. Floyd and Louis Steinberg. It divides the quantization error among four adjacent pixels according to the following diagram.

              #   7/16
3/16   5/16   1/16

In this diagram, the # represents the pixel currently under evaluation. Its quantization error is split between the pixel to its right and the three pixels below it, with each pixel receiving the fraction of the total quantization error listed in the chart. The pixel to the left was the last pixel to be evaluated, and therefore receives none of the error.
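
A minimal sketch of the whole pass, assuming NumPy and a 2D array of gray values already scaled to 0.0-1.0 (the function name and structure are illustrative, not the authors' original code):

```python
import numpy as np

def floyd_steinberg(gray: np.ndarray) -> np.ndarray:
    """Dither a 2D array of 0.0-1.0 gray values to 0/1 with Floyd-Steinberg."""
    img = gray.astype(np.float64).copy()
    height, width = img.shape
    for y in range(height):            # top to bottom
        for x in range(width):         # left to right
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            error = old - new
            # Spread the quantization error to the four unvisited neighbors.
            if x + 1 < width:
                img[y, x + 1] += error * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[y + 1, x - 1] += error * 3 / 16
                img[y + 1, x] += error * 5 / 16
                if x + 1 < width:
                    img[y + 1, x + 1] += error * 1 / 16
    return img.astype(np.uint8)
```

At the edges of the image, some neighbors fall outside the array; this sketch simply drops that portion of the error, which is one common way of handling the boundary.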

Jarvis

Published around the same time as Floyd-Steinberg by J. F. Jarvis, C. N. Judice, and W. H. Ninke of Bell Labs, the Jarvis algorithm divides the error more finely, spreading it among 12 nearby pixels, as shown below.

              #      7/48   5/48
3/48   5/48   7/48   5/48   3/48
1/48   3/48   5/48   3/48   1/48
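
Since Jarvis (and Stucki, below) differ from Floyd-Steinberg only in which neighbors receive error and in what proportions, one way to sketch them is a single function that takes the weight table as data. The kernel format and the diffuse() name here are assumptions for illustration, not an established API.

```python
import numpy as np

# (row offset, column offset, weight) for each neighbor that receives error.
JARVIS = [
    (0, 1, 7/48), (0, 2, 5/48),
    (1, -2, 3/48), (1, -1, 5/48), (1, 0, 7/48), (1, 1, 5/48), (1, 2, 3/48),
    (2, -2, 1/48), (2, -1, 3/48), (2, 0, 5/48), (2, 1, 3/48), (2, 2, 1/48),
]

def diffuse(gray: np.ndarray, kernel) -> np.ndarray:
    """Dither 0.0-1.0 gray values to 0/1, spreading error according to `kernel`."""
    img = gray.astype(np.float64).copy()
    height, width = img.shape
    for y in range(height):
        for x in range(width):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            error = old - new
            # Neighbors outside the image are skipped, dropping their share of the error.
            for dy, dx, weight in kernel:
                ny, nx = y + dy, x + dx
                if 0 <= ny < height and 0 <= nx < width:
                    img[ny, nx] += error * weight
    return img.astype(np.uint8)
```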

Stucki 

Stucki is a further development of the Jarvis algorithm. It uses the same 12 nearby pixels as Jarvis, but with slightly different ratios.

              #      8/42   4/42
2/42   4/42   8/42   4/42   2/42
1/42   2/42   4/42   2/42   1/42
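
Expressed in the same hypothetical (row offset, column offset, weight) format as the Jarvis sketch above, the Stucki weights would be the following, and could be passed to the same diffuse() function:

```python
# Stucki weights, in the same illustrative kernel format as the Jarvis sketch.
STUCKI = [
    (0, 1, 8/42), (0, 2, 4/42),
    (1, -2, 2/42), (1, -1, 4/42), (1, 0, 8/42), (1, 1, 4/42), (1, 2, 2/42),
    (2, -2, 1/42), (2, -1, 2/42), (2, 0, 4/42), (2, 1, 2/42), (2, 2, 1/42),
]
```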