16.01.2013

Megapixel lenses – what resolution does my image processing application really need?

Lenses are increasingly being advertised with the highest possible number of megapixels. This value, however, says nothing about the attainable lens resolution, which ultimately makes it harder to choose a suitable lens. This article explores the relationship between the choice of lens and the choice of sensor, and which values actually indicate the resolution that can be attained.

"Megapixel lens" is an inexact term for lenses that have been optimised for a particular sensor with more than 1 megapixel, where the sensor's pixel count is used to indicate the resolution of the lens. Even for a given pixel count, sensors can have different dimensions and therefore different pixel sizes. To determine the resolvable structure detail, the chip or pixel size and the image scale must therefore be specified.

How do you choose the right lens?

To adhere to the target cost framework for the optics of an image processing application while capturing the relevant image information on the sensor, the interplay between a range of parameters needs to be considered carefully. A higher lens resolution is not automatically better, because it generally comes at a higher cost. High-resolution lenses, for example, use aspherical elements. These are only economical to manufacture in large quantities, as with consumer lenses produced in their tens of millions. The capital goods sector, however, cannot always use an off-the-shelf lens for its optics. If a lens has to be developed specifically for an image processing application, aspherical elements would make it considerably more expensive, because such lenses can only be produced in quantities of a few thousand. The required lens resolution should therefore be only as high as absolutely necessary.

The first step when developing or choosing a lens is to establish how much structure detail actually needs to be resolved on the test object. Two basic constraints apply: the resolution is limited by the sensor on the one hand, and by the required depth of field on the other. Lens resolution and depth of field are fundamentally linked: the higher the resolution, the smaller the depth of field, and vice versa.

 

Fig. 1: Key criterion for the evaluation of lenses – the MTF

A key criterion for the evaluation of lenses in machine vision is the MTF (modulation transfer function). Usually, this is specified for spatial frequencies on the image side. The ideal MTF curve is dependent on the diameter of the aperture stop of the lens and is called the diffraction limit. It is shown here for an image-side numerical aperture of 0.025 and a wavelength of 550 nm. The spatial frequency for the MTF value 0.2 (62.5 LP/mm in this example) is assumed as the resolution limit for machine vision lenses.
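The ideal curve in fig. 1 can be reproduced numerically. The sketch below assumes the standard diffraction-limited MTF formula for incoherent illumination (the article does not state the formula explicitly) and searches for the 20% point by bisection; with NA = 0.025 and λ = 550 nm it lands close to the 62.5 LP/mm quoted in the caption.

```python
import math

def diffraction_mtf(freq_lp_mm, na, wavelength_mm):
    """Diffraction-limited MTF for incoherent illumination (standard formula)."""
    cutoff = 2.0 * na / wavelength_mm          # cutoff frequency in LP/mm
    s = freq_lp_mm / cutoff
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

# Values from fig. 1: image-side numerical aperture 0.025, wavelength 550 nm
na, wl = 0.025, 550e-6                         # wavelength in mm

# Bisection: find the spatial frequency where the MTF drops to 20%
lo, hi = 0.0, 2.0 * na / wl
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if diffraction_mtf(mid, na, wl) > 0.2:
        lo = mid
    else:
        hi = mid

print(f"cutoff: {2 * na / wl:.1f} LP/mm, MTF = 20% at {lo:.1f} LP/mm")
```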

The different meanings of resolution

 

The definition of resolution varies depending on the application and the area of development. In sensor development, it indicates the number of pixels of an image sensor; the unit used in this context is megapixels. In optical development, however, resolution refers to the smallest structure detail that can just be distinguished. Here it is stated, for example, as the maximum transferable spatial frequency in line pairs per millimetre (LP/mm) or as the minimum resolvable structure detail in µm.

Resolution limit of the lens

In the machine vision sector too, spatial frequencies have proved to be a suitable way to describe the test objects. According to Ernst Abbe, an object can be thought of as being composed of many grids with different periods, amplitudes, orientations and directions of propagation. These must then all be transferred through the lens to the sensor. The structure detail is described by the period of the grid Δr or its inverse – the associated spatial frequency R. This is particularly clear when imaging complex objects with various fine details such as printed circuit boards. Even individual points, edges or lines, however, can be described as the sum of spatial frequencies.
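Abbe's idea can be illustrated with a short calculation: a sharp-edged bar pattern, such as a row of PCB tracks, decomposes into its fundamental spatial frequency plus odd harmonics, all of which the lens would have to transfer for a perfectly sharp image. The sketch below is illustrative; only the 200 µm track spacing is taken from the article's later example.

```python
import math

# A bar pattern (period 200 µm, i.e. 5 LP/mm) modelled as a square wave.
period_mm = 0.2
fundamental = 1.0 / period_mm                  # 5 LP/mm

def fourier_amplitude(harmonic, samples=10000):
    """Fourier sine coefficient of the square wave via numerical integration."""
    acc = 0.0
    for i in range(samples):
        x = (i + 0.5) / samples * period_mm
        square = 1.0 if x < period_mm / 2 else -1.0
        acc += square * math.sin(2 * math.pi * harmonic * x / period_mm)
    return 2.0 * acc / samples

# Only the odd harmonics carry energy: 5, 15, 25 LP/mm, ...
for n in (1, 2, 3, 4, 5):
    print(f"{n * fundamental:5.1f} LP/mm: amplitude {fourier_amplitude(n):+.3f}")
```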

The quality of the transfer of the spatial frequencies through the lens is characterised by the modulation transfer function (MTF). It defines the ratio of image contrast to object contrast as a function of the spatial frequency R: MTF(R) = M′(R)/M(R). The contrast or modulation M is defined as M = (Imax − Imin)/(Imax + Imin). The contrast can therefore vary between a minimum value of 0 and a maximum value of 1 (100%, reached when Imin = 0). In image processing, the analogue greyscale value can be used instead of the intensity. The shape of the ideal MTF curve is determined by the diameter of the aperture stop of the lens (fig. 1). For physical reasons, the MTF decreases as the spatial frequency increases. There are two ways of reading this curve: either as the transfer function itself, i.e. as the MTF in percent, or as the image contrast M′(R) obtained when the object has the ideal contrast M(R) = 1. The minimum detectable contrast is in the 10-20% range. The spatial frequency at which the MTF has dropped to 20% is therefore often taken as the resolution limit. If the object modulation is already reduced, the image contrast can fall below 20%. Real MTF data is sometimes specified for a lens or can be requested from its manufacturer. It is essential to note whether the curves refer to spatial frequencies on the image side (R′) or on the object side (R); usually, the image-side values are specified. These can be converted using the image scale of the lens β′: R = R′ · β′.
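These definitions can be put into a small sketch. The greyscale readings are hypothetical; the image-side frequency of 36 LP/mm and the image scale of 0.139 are taken from the PCB example later in the article.

```python
def modulation(i_max, i_min):
    """Contrast M = (Imax - Imin) / (Imax + Imin); greyscale values work too."""
    return (i_max - i_min) / (i_max + i_min)

def object_side_frequency(r_image_lp_mm, beta):
    """Convert an image-side spatial frequency R' to the object side: R = R' * beta'."""
    return r_image_lp_mm * beta

m_object = modulation(255, 0)   # ideal test target: full modulation -> 1.0
m_image = modulation(135, 65)   # hypothetical greyscale readings -> 0.35
mtf = m_image / m_object
print(f"MTF at this frequency: {mtf:.2f}")

print(f"object-side frequency: {object_side_frequency(36, 0.139):.1f} LP/mm")
```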

 

Fig. 2: Relevant size for machine vision – the definitively resolved spatial frequency

The maximum spatial frequency resolvable by the sensor is determined by the Nyquist criterion: one period is sampled by two pixels, giving 1/(2p′). If the position of the object relative to the sensor is unfavourable, this spatial frequency can no longer be resolved. The definitively resolved spatial frequency amounts to half of the Nyquist frequency: 1/(4p′). This quantity is primarily relevant for machine vision applications, where the speed of the process does not allow the position of the object relative to the sensor to be readjusted.

 

Resolution limit of the sensor

The maximum spatial frequency detectable by the sensor occurs when exactly one light and one dark stripe each fall on a pixel. This is the Nyquist frequency R′_Nyquist = 1/(2p′), which depends on the pixel size p′. Above this frequency, aliasing effects can occur, i.e. structures appear in the image with periods that were not present in the original. If the phase of a pattern at the Nyquist frequency is unfavourable relative to the pixel grid, each pixel receives half a light and half a dark stripe, so that all pixels detect the same greyscale value (fig. 2); this spatial frequency can then no longer be resolved either. The definitively resolved spatial frequency R′_definitive amounts to half of the Nyquist frequency: even if the pattern shifts relative to the pixel grid, at least one pixel always detects the minimum intensity and one the maximum intensity. This resolution criterion is more suitable for the machine vision sector, as fast processes are frequently involved which do not allow readjustment of the object relative to the sensor (table 1).

 

Table 1: Sensor resolution – limited by the pixel size

Overview of the definitively resolved spatial frequencies for different common pixel sizes. They amount to half of the Nyquist frequency.
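The table values follow directly from the pixel size. The sketch below recomputes both limits; the pixel sizes listed are common illustrative values, not necessarily those of the original table.

```python
def nyquist_lp_mm(pixel_um):
    """Nyquist frequency R'_Nyquist = 1/(2 p'), in LP/mm for p' in µm."""
    return 1000.0 / (2.0 * pixel_um)

def definitive_lp_mm(pixel_um):
    """Definitively resolved frequency R'_definitive = 1/(4 p'), in LP/mm."""
    return 1000.0 / (4.0 * pixel_um)

# Illustrative common pixel sizes in µm - check the sensor datasheet
for p in (2.2, 3.45, 4.4, 5.5, 7.4):
    print(f"p' = {p:4.2f} um: Nyquist {nyquist_lp_mm(p):6.1f} LP/mm, "
          f"definitive {definitive_lp_mm(p):5.1f} LP/mm")
```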

What is the relationship between lens resolution and sensor resolution?   

To make the best use of the sensor resolution, the lens must be able to transfer the definitively resolved spatial frequency on the image side; the MTF should therefore still be at least 20% at this frequency: MTF(R′_definitive) ≥ 20%. With the image scale, the minimum resolvable structure detail on the object side can then also be calculated from this image-side limit frequency:

Δr_min = 1/R = 1/(R′_definitive · β′) = 4p′/β′

This means that if only the spatial frequency on the image side is specified for a lens, no conclusions can be drawn about the attainable spatial resolution. Only in combination with the image scale of the lens can this type of information be obtained.

The relationship stated above shows that the image scale, the spatial resolution of the lens and the limit resolution of the sensor depend on each other. When designing an image processing application, therefore, only two of these quantities can be chosen freely.
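This dependency can be captured in two small helper functions (a sketch; the function names are our own):

```python
def min_structure_detail_um(pixel_um, beta):
    """Smallest definitively resolvable object-side detail: 4 p' / beta'."""
    return 4.0 * pixel_um / beta

def min_image_scale(pixel_um, detail_um):
    """Minimum image scale needed to definitively resolve a given object detail."""
    return 4.0 * pixel_um / detail_um

# With the 4.4 µm pixel and the 200 µm track spacing of the PCB example:
print(f"minimum image scale: {min_image_scale(4.4, 200.0):.3f}")
print(f"detail at beta' = 0.139: {min_structure_detail_um(4.4, 0.139):.0f} um")
```

With the values from the printed circuit board example, `min_image_scale` returns the 0.088 quoted below.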

Application example: printed circuit board inspection

On printed circuit boards, the position and presence of components, soldering points, pin connections, PCB tracks, lettering and markings typically need to be recognised. Isolated structures below the resolution limit defined by the MTF can still be detected, as the information coming from them is not overlaid by neighbouring structures; individual points, lines and edges fall into this category. Periodic structure details such as PCB tracks or pin connections are critical, however. These can be assigned to a particular spatial frequency that must be transferred by the system. The PCB tracks in figure 3 are 200 µm apart, which corresponds to a spatial frequency of 5 LP/mm. For the camera selected in the example, with a pixel size of 4.4 µm, the minimum image scale is therefore 0.088. The definitively resolved spatial frequency on the camera side is 56.8 LP/mm, and the MTF of the lens was at least 20% at this spatial frequency. The image scale of the lens was 0.139 and therefore much higher than the minimum value; the spatial frequency of the PCB tracks on the image side is thus 36 LP/mm, below the limit frequency. This allows a certain depth of field range, which is defined such that the MTF is still at least 20% at its edges. For the actual lens this range can be calculated using an optical design program and in this case was 9 mm.
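The numbers of the example can be checked step by step:

```python
# Reproducing the figures from the printed circuit board example
track_period_um = 200.0                      # track spacing on the object
object_freq = 1000.0 / track_period_um       # -> 5 LP/mm
pixel_um = 4.4
definitive = 1000.0 / (4.0 * pixel_um)       # R'_definitive -> 56.8 LP/mm
beta_min = object_freq / definitive          # minimum image scale -> 0.088
beta_actual = 0.139
image_freq = object_freq / beta_actual       # image-side frequency -> 36 LP/mm

print(f"object frequency : {object_freq:.1f} LP/mm")
print(f"definitive limit : {definitive:.1f} LP/mm")
print(f"minimum scale    : {beta_min:.3f}")
print(f"image-side freq  : {image_freq:.1f} LP/mm (below the limit)")
```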

 

Fig. 3: Perfect sharpness – the best imaging properties

(Section of an image taken of a printed circuit board. Photo: Vision & Control)

The finest structures are the PCB tracks that are 200 µm apart and therefore have a spatial frequency of 5 line pairs/mm. The camera had a pixel pitch of 4.4 µm. With the lens used (image scale 0.139), the relevant structure details within a depth of field range of 9 mm can still be definitively resolved.

 

The right lens for every application

What resolution an image processing application really needs depends first on the finest structure detail that is to be resolved on the test object; in the printed circuit board example, these are the PCB tracks with a spatial frequency of 5 LP/mm. There is a fixed relationship between the minimum resolvable structure detail on the object side, the image scale and the resolution limit of the sensor. To resolve this object information, the lens and sensor must therefore be matched to each other so that the sensor can definitively resolve the spatial frequencies transferred by the lens. The required lens resolution should nevertheless be no higher than the definitive resolution capacity of the sensor. This is the only way to adhere to the target cost framework for the optics of an image processing application.