Optimized balancing of resolution and depth of field

The two most important variables in image processing are a pair of apparent opposites. Maximizing just one of them will not lead to success; what is required instead is skillful juggling of the laws of optics.

Image: Lens viewing a chessboard

Image processing often demands both: maximum resolution and perfect depth of field. Unfortunately, these two key optical parameters work against each other. Only by skillfully balancing aperture, wavelength, and image scale can the best possible imaging be achieved for every task.

High resolution with short wavelengths

Even with a lens free of aberrations, resolution is limited by diffraction. An object point is always imaged by the optical system as a diffraction disk with a bright center surrounded by light and dark interference rings. The radius of this disk, known as the Airy disk, is obtained in µm by multiplying the wavelength of the illuminating light by the f-stop and a factor of 1.22. If two adjacent image points are offset by the radius of this Airy disk, the bright center of one point coincides exactly with the first minimum of the other. The Rayleigh criterion is then met, and the limit is reached at which the two points can still just be distinguished by eye. Besides keeping the f-stop as small as possible, the wavelength of the illumination plays a decisive role. With green light (550 nm) at f/8, two adjacent points 5.4 µm apart can still just be distinguished; with blue light (450 nm), this distance shrinks to 4.4 µm.
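The numbers above can be checked directly from the Rayleigh criterion. This is a minimal sketch, not part of the article: the function name is my own, and f/8 is the aperture implied by the quoted figures (1.22 · 0.55 µm · 8 ≈ 5.4 µm).

```python
def airy_radius_um(wavelength_nm: float, f_stop: float) -> float:
    """Airy disk radius in µm: 1.22 * wavelength * f-number.

    By the Rayleigh criterion, two image points separated by this
    radius can still just be distinguished.
    """
    return 1.22 * (wavelength_nm / 1000.0) * f_stop

# Green vs. blue illumination at f/8:
green = airy_radius_um(550, 8)   # ~5.4 µm
blue = airy_radius_um(450, 8)    # ~4.4 µm
print(f"green: {green:.1f} um, blue: {blue:.1f} um")
```

Shortening the wavelength is thus a "free" resolution gain that does not cost any depth of field, unlike opening the aperture.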

In many cases, the required resolution and the field of view are already predefined. If, for example, a structure measuring 5 µm still needs to be rendered with high contrast, its image must be larger than the limiting resolution given by the Rayleigh criterion, and the sensor should sample it with at least two pixels. For a common pixel size of 3.45 µm, this means the structure must be imaged at a size of at least 7 µm, better still 10 µm, which calls for an image scale of 2.
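The sampling argument can be sketched as a small helper (an assumption of mine, not from the article: two pixels per structure is the bare Nyquist minimum, three the more comfortable margin):

```python
def required_scale(structure_um: float, pixel_um: float,
                   pixels_per_structure: int = 2) -> float:
    """Minimum image scale so the structure spans the desired pixel count."""
    return (pixels_per_structure * pixel_um) / structure_um

# 5 µm structure on a 3.45 µm pixel sensor:
required_scale(5, 3.45)      # ~1.38 -> 6.9 µm image, the bare minimum
required_scale(5, 3.45, 3)   # ~2.07 -> ~10 µm image, the "better still" case
```

The second case corresponds to the image scale of 2 named in the text.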

Increased depth with small aperture and small scale

The image with maximum sharpness is only produced in the image plane, but real objects are rarely completely flat. A certain depth range must therefore be imaged within which the blur does not exceed a maximum value. For close-range imaging, this depth of field is proportional to the f-stop and inversely proportional to the square of the image scale. In the example above, with a required resolution of 5 µm, an image scale of 2 at an aperture of f/11 yields a depth of field of only 0.3 mm (see Figure 4). An image scale of 1 at f/8 delivers a resolution of only 5.5 µm, but the associated depth of field is twice as large. Here, a small loss in sharpness is offset by a major gain in depth.
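The article does not state the depth-of-field formula behind Figure 4, but the common close-range approximation DOF ≈ 2·c·N·(β+1)/β² (c = circle of confusion, N = f-stop, β = image scale) reproduces both quoted values; the function name and this choice of formula are assumptions of mine.

```python
def depth_of_field_mm(f_stop: float, scale: float,
                      confusion_um: float = 20.0) -> float:
    """Close-range depth of field in mm (common approximation).

    DOF = 2 * c * N * (beta + 1) / beta**2, with the 20 µm circle
    of confusion used for Figure 4.
    """
    c_mm = confusion_um / 1000.0
    return 2 * c_mm * f_stop * (scale + 1) / scale**2

depth_of_field_mm(11, 2)  # ~0.33 mm, the "0.3 mm" case in the text
depth_of_field_mm(8, 1)   # ~0.64 mm, roughly twice as large
```

The β² in the denominator is why halving the image scale gains more depth than any realistic amount of stopping down.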

To achieve the greatest possible depth of field, small image scales and a correspondingly smaller sensor are advantageous. Simply stopping down a lens at a large image scale on a large sensor will not lead to success, since stopping down enlarges the Airy disk and costs resolution.

Figure 1: As defined by the Rayleigh criterion, the limit of resolution is reached when the intensity maximum of one image point coincides with the first intensity minimum of the adjacent one.

Figure 2: If an object point is shifted out of the focal plane by the value a’, a circle of confusion u’ is produced in the image. The diameters of the resulting circles of confusion u’1 and u’2 are proportional to the aperture diameters D1 and D2.

Figure 3: Resolution in the object plane for different image scales at a wavelength of 550 nm. For 450 nm the resolution values need to be multiplied by a factor of 0.82; for 880 nm this factor is 1.6.

Figure 4: Depth of field for different image scales and aperture values, with a circle of confusion diameter of 20 µm.

published in: inVision ePaper | Objektive und Beleuchtungen 2020