Selection and evaluation of telecentric lenses

The success of implementing a measurement or inspection task will depend to a large extent on specifying the correct lens. The following article provides an overview of the most important selection criteria for telecentric lenses.

Here, a particular focus is on the specifications for the resolution and edge position uncertainty, because the choice of which of these two criteria to apply depends greatly on the specifics of the application.

Telecentric lenses are essential tools for precise measurement and demanding inspection tasks. Whereas standard, i.e. entocentric, camera lenses always render the image more or less in perspective, telecentric lenses depict objects without perspective distortion. Objects are therefore always reproduced at the same size, regardless of their distance from the lens, and edge occlusions are avoided. In essence, telecentric lenses are characterized by principal rays that run parallel to the optical axis on the object side (image 1(a)).

image 1: (a) Functional principle of a telecentric lens with object-side telecentricity, and definition of the telecentric angle and telecentric range based on the permissible change in size of the test object. (b) Bi-telecentric lens with additional image-side telecentricity.

This can be used, for example, to inspect catalytic converters for dirt and contamination (image 2).

 

image 2: Images of a passenger car catalytic converter (width 80 mm, depth 76 mm) taken with different perspectives: left: entocentric; right: telecentric. Source: Vision & Control

 

In addition, the shape of test objects is reproduced very accurately, making it possible to measure, for example, pitches and angles (image 3).

 

image 3: Shape and dimensional accuracy checks by means of telecentric imaging: angle measurement on a bolt thread, pitch check on coil springs and shape inspection of hypodermic needles. Source: Vision & Control

It is also possible to inspect highly detailed objects with many features, such as printed circuit boards (image 4).

 

image 4: Detail-rich test object from the electronics industry. A high lens resolution is necessary to resolve the conductor tracks and component connections and also for character recognition.

1.      Determining the selection criteria based on important lens parameters

The most important parameter is the size of the test object. It defines the front diameter of the lens, which needs to be at least as large as the object plus an additional allowance to avoid vignetting (image 1(a)).

The required image scale can be determined based on the sensor size of the camera used. The definition of the image scale will, in some cases, limit the maximum achievable resolution or depth of field.

If the required resolution cannot be achieved with a particular lens, then a smaller section of the test object needs to be depicted and therefore the image scale increased.

As the next step, the required working distance is specified so that the test object can be positioned in the object plane of the lens. At this point it is also necessary to define the spectral range that makes the relevant object features most visible. This is followed by the telecentricity and, as key parameters, the resolution and depth of field, which are discussed in more detail in the following sections. Then the further optical parameters such as distortion, lateral chromatic aberration, longitudinal chromatic aberration and field curvature, as well as brightness and edge light falloff, can be specified. As the final step, to ensure that the lens fits the intended application, mechanical data such as the lens dimensions, mounting threads, weight and ambient conditions need to be checked.
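The first of these steps can be illustrated with a short calculation. The following Python sketch uses purely hypothetical example values (a 40 mm x 30 mm test object, a 2/3-inch sensor with 5 µm pixels, a 2 mm vignetting allowance); none of these numbers come from the article. It merely shows how the minimum front diameter, the image scale and the object-side footprint of one pixel follow from the object and sensor sizes.

# Minimal sketch of the first selection steps; all numbers are hypothetical
# example values, not specifications from the article.
object_width_mm  = 40.0      # width of the test object
object_height_mm = 30.0      # height of the test object
vignetting_allow = 2.0       # extra allowance on the front diameter, per side
sensor_width_mm  = 8.8       # e.g. a 2/3-inch sensor, 8.8 mm x 6.6 mm
sensor_height_mm = 6.6
pixel_size_um    = 5.0

# Front diameter: at least the object size (here taken as the object diagonal)
# plus an allowance against vignetting
object_diagonal = (object_width_mm**2 + object_height_mm**2) ** 0.5
front_diameter_min = object_diagonal + 2 * vignetting_allow

# Image scale |beta'|: the sensor must cover the full object field
beta = min(sensor_width_mm / object_width_mm,
           sensor_height_mm / object_height_mm)

# Object-side footprint of one pixel: a first, purely geometric estimate of the
# smallest resolvable object detail (the optics are ignored at this point)
pixel_on_object_um = pixel_size_um / beta

print(f"front diameter >= {front_diameter_min:.1f} mm")
print(f"image scale |beta'| = {beta:.3f}")
print(f"pixel footprint on the object = {pixel_on_object_um:.1f} um")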

 

2.      Telecentricity

The telecentricity can be quantified by specifying a maximum permitted angle of the principal rays relative to the optical axis, known as the telecentric angle φ (image 1(a)). Any deviation of the principal rays from parallelism leads to errors in the image scale and to edge occlusions.

This angle can be calculated from the permissible change in size of the object Δh or of its image Δh'. The two values are linked by the image scale: |β'| = Δh'/Δh. It is thus possible to require that the change in size on the image side does not exceed a particular magnitude, typically one pixel (Δh' ≤ 1 pixel). The telecentric range z is then the range over which the object can be moved back and forth along the optical axis without the permissible change in size being exceeded. The telecentric angle can deviate in both the positive and the negative direction; it is determined by the ratio of the permissible change in height in object space or image space to the telecentric range:

tan(φ) = Δh/z = Δh'/(z·|β'|)

In an ideal scenario the telecentric range and the depth of field range are the same. Then the object can be moved axially back and forth within the depth of field range without the change in size of its depicted image exceeding a particular value.
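As a minimal numerical illustration, the following sketch evaluates the formula above for hypothetical values (a permitted image-side change of one 5 µm pixel, |β'| = 0.5 and a telecentric range of 10 mm); these numbers are examples, not specifications from the article.

import math

# Telecentricity budget from tan(phi) = dh/z = dh'/(z*|beta'|);
# all numbers are hypothetical example values.
pixel_size_mm = 0.005      # permitted image-side size change dh' = 1 pixel
beta          = 0.5        # image scale |beta'|
z_mm          = 10.0       # telecentric range (ideally equal to the depth of field range)

dh_object_mm = pixel_size_mm / beta              # permitted object-side size change dh
phi_deg = math.degrees(math.atan(dh_object_mm / z_mm))

print(f"permitted object-side change dh = {dh_object_mm * 1000:.1f} um")
print(f"maximum telecentric angle phi   = {phi_deg:.4f} degrees")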

In the case of bi-telecentric lenses, the principal rays on the image side are also parallel to the main axis of the system (image 1(b)). As a result, changes to the image scale can be avoided in the event of fluctuations in the distance between the sensor and the lens. In addition, shading is avoided on sensors with microlens arrays, and they offer homogeneous image illumination.

 

3.      Resolution vs. edge position uncertainty

The definition of the resolution depends on the specifics of the intended application. In some applications, complex objects need to be depicted with as much detail as possible. This applies in particular to the inspection of printed circuit boards, where conductor tracks in the micrometer range need to be inspected. In other applications it is enough to know whether a defect is present at all in order to identify and reject an object as "defective", for example in the case of contamination in a catalytic converter. Here, the main issue is the size of the smallest detectable feature. However, there are also inspection tasks in which, rather than resolving the finest structural details, we are interested in a precise depiction of the object contours. This is the case in applications where objects are to be tested for compliance with shape and/or dimensional accuracy requirements (image 3). Such tasks represent a significant proportion of industrial image processing.

 

3.1 A key comparison tool for lenses: MTF

The most important characteristic curve for predicting the detectability of object details is the modulation transfer function (MTF). It indicates the ratio of image to object contrast M'/M with which individual spatial frequencies are transmitted by the optical system. At an object contrast of 1, the MTF directly yields the expected image contrast. Typically, the image-side spatial frequency R' is stated in line pairs per millimeter and is formed as the reciprocal of the image-side period Δr' of a sinusoidal intensity distribution (insets in images 5(a) and (b)). The link between the object-side structural detail Δr and the image-side spatial frequency R' is given by the image scale: Δr = 1/(R'·|β'|).

As complex objects contain entire spectra of spatial frequencies, these should all be transmitted with as high a contrast as possible. The maximum contrast is limited by diffraction, which causes a natural roll-off towards higher spatial frequencies; at the so-called cutoff frequency R'G the contrast drops to zero altogether. This diffraction limit can be reached either with well-corrected systems or by stopping down the aperture, albeit at the cost of a lower cutoff frequency. Images 5(a) and (b) show the diffraction-limited MTF curves for a lens with an effective f-number of 6.3 and 25.1 respectively. The image scale of the optical system is 0.5 in both cases, and the wavelength is 550 nm. For a structure to still be resolved, the contrast should be at least 20%. As a result, the minimum resolvable structural period on the object side is 10 µm in the left-hand image and 40 µm in the right-hand one.
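The 10 µm and 40 µm values can be reproduced approximately with the standard diffraction-limited MTF of an aberration-free, incoherently illuminated system. The following sketch assumes exactly that idealization (real lenses lie below this curve) and uses the f-numbers, wavelength and image scale quoted above; the 20% criterion and the conversion Δr = 1/(R'·|β'|) are applied as described in the text.

import numpy as np

wavelength_mm = 550e-6     # 550 nm
beta = 0.5                 # image scale |beta'| as in images 5(a) and (b)

def mtf_diffraction(R, f_number):
    """Diffraction-limited MTF at the image-side spatial frequency R (lp/mm)."""
    cutoff = 1.0 / (wavelength_mm * f_number)            # cutoff frequency R'_G
    x = np.clip(R / cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x * x))

for f_number in (6.3, 25.1):
    R = np.linspace(0.0, 1.0 / (wavelength_mm * f_number), 2000)
    mtf = mtf_diffraction(R, f_number)
    R20 = R[mtf >= 0.2].max()                            # highest frequency with >= 20% contrast
    dr_object_um = 1e3 / (R20 * beta)                    # object-side period: dr = 1/(R'*|beta'|)
    print(f"f/{f_number}: R' at 20% contrast = {R20:.0f} lp/mm, "
          f"smallest object-side period = {dr_object_um:.0f} um")

For f/6.3 this yields roughly 10 µm and for f/25.1 roughly 40 µm, in line with the values read from the figures.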

So, if it is important to obtain a detailed image – as is the case in printed circuit board inspection – then the requirement should be that the lens still transmits a particular spatial frequency with a defined minimum contrast.

 

3.2 For identification of defects: Point and line detection

If individual points or lines, such as dirt particles or craters, are to be detected on an object, it is difficult to state a maximum spatial frequency for this. The imaging characteristics of an optical system for points and lines are described by the point spread function and the line spread function. Both behave in similar ways, which is why only the point spread function is explained here by way of example. Images 5(c) and (d) show the ideal functions for the two optical systems. The insets show a simulated image of a point with a diameter of 50 µm, sampled in each case with a pixel size of 5 µm. The interplay between the MTF and the point spread function is evident here: the higher the cutoff frequency of the optical system, the narrower the point spread function and the sharper the depiction of the point, meaning that, overall, smaller defects can also be detected.

It makes sense here to have a direct requirement that points of a certain diameter or lines of a certain width can still be detected. This can already be verified in the design of the optical system via an image simulation.
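The article does not specify how such an image simulation is carried out. As a stand-in, the following sketch convolves the geometric image of a 50 µm point (as in images 5(c) and (d)) with a diffraction-limited Airy point spread function and then averages the result into 5 µm camera pixels; the diffraction-limited PSF, the image scale of 0.5 and the fine sampling grid are assumptions made only for this illustration.

import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j1

# Assumptions for illustration: diffraction-limited Airy PSF, incoherent imaging,
# 550 nm, image scale 0.5; point diameter and pixel size as in image 5.
wavelength_mm = 550e-6
f_number      = 6.3          # effective f-number (use 25.1 for the second case)
beta          = 0.5          # image scale |beta'|
pixel_mm      = 5e-3         # camera pixel size (image side)
spot_diam_mm  = 50e-3        # object-side diameter of the point-like defect

# Fine image-side grid: 0.25 µm sampling over a 100 µm x 100 µm patch
dx = 0.25e-3
x = np.arange(-200, 200) * dx
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

# Geometric (unblurred) image of the point: a disc of diameter spot_diam_mm * |beta'|
geom = (R <= spot_diam_mm * beta / 2).astype(float)

# Airy PSF; first zero at 1.22 * lambda * N_eff
arg = np.pi * R / (wavelength_mm * f_number)
psf = np.where(arg > 1e-9, (2 * j1(arg) / np.where(arg > 1e-9, arg, 1.0)) ** 2, 1.0)
psf /= psf.sum()

# Diffraction-blurred image, then averaged into 5 µm pixels (20 x 20 fine samples each)
blurred = fftconvolve(geom, psf, mode="same")
n = int(round(pixel_mm / dx))
m = blurred.shape[0]
pixels = blurred.reshape(m // n, n, m // n, n).mean(axis=(1, 3))
print(f"simulated camera image: {pixels.shape}, peak pixel value {pixels.max():.3f}")

Repeating the same simulation with the effective f-number 25.1 widens the blurred spot noticeably, which is the effect visible when comparing images 5(c) and (d).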

 

3.3 A frequent requirement for machine vision: edge detection

In the detection of object contours, the shape of the edge transition plays a significant role, because objects are localized on the basis of the calculated edge location. The edge location can be determined more accurately the better the optical system transmits the edge transition and the more pixels are used to sample it. The edge imaging properties of a lens are described by the edge spread function.

Images 5(e) and (f) show the ideal edge image curves for the two lenses. These generally have an inflection point at half the maximum intensity, which corresponds exactly to the ideal edge location. Imaging errors can distort the edge contour.

 

image 5: Relationship between the modulation transfer function, point spread function and edge spread function of a lens for two different f-numbers, each at a wavelength of 550 nm.

This function is also linked to the MTF: the lower the maximum spatial frequency, and therefore the resolution of the lens, the wider the edge transition. Although a wider edge transition gives the image a less sharp appearance, it also means that more pixels can be used to sample the edge, which in turn enables a more reliable determination of the edge location.

Here, the specification is made using the slope of the curve or the edge image width, which is typically stated as the width of the rise in normalized image intensity from 10% to 90%. Images 5(e) and (f) show that, for a pixel size of 5 µm, the entire edge transition is sampled with just 2 to 3 pixels by the high-resolution lens, whereas the lower-resolution lens requires 4 to 5 pixels, which corresponds to an almost twofold gain in measuring accuracy.
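These pixel counts can be estimated from an idealized edge spread function. The sketch below derives it from the diffraction-limited MTF (the same idealization as in section 3.1, i.e. a best case) and reports the 10%-to-90% edge image width in units of 5 µm pixels; note that this width is narrower than the full edge transition read off the figure.

import numpy as np

wavelength_mm = 550e-6
pixel_um = 5.0

def edge_width_10_90(f_number, n=2**14, dx_mm=0.1e-3):
    """Image-side 10%-90% edge width (in um) of a diffraction-limited lens."""
    freqs = np.fft.rfftfreq(n, d=dx_mm)                  # spatial frequencies in lp/mm
    cutoff = 1.0 / (wavelength_mm * f_number)            # cutoff frequency R'_G
    u = np.clip(freqs / cutoff, 0.0, 1.0)
    mtf = (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u * u))
    lsf = np.fft.fftshift(np.fft.irfft(mtf, n=n))        # line spread function
    esf = np.cumsum(lsf)                                 # edge spread function
    esf = (esf - esf.min()) / (esf.max() - esf.min())    # normalize to 0..1
    i10 = np.argmax(esf >= 0.1)
    i90 = np.argmax(esf >= 0.9)
    return (i90 - i10) * dx_mm * 1e3

for f_number in (6.3, 25.1):
    w = edge_width_10_90(f_number)
    print(f"f/{f_number}: 10%-90% edge width = {w:.1f} um "
          f"(about {w / pixel_um:.1f} pixels of 5 um)")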

If accurate detection of the edge location is important, the resolution should therefore not be specified too high. A compromise needs to be found so that the structure of the test object is still reliably resolved while the edges are sampled with a sufficient number of pixels.

 

4.      Depth of field

The term 'depth of field' means that the required resolution or edge position uncertainty is maintained over a certain range in front of and behind the object plane. Typically, the MTF should still be 20% at the limits of the depth of field range. To achieve this, however, the maximum possible resolution has to be sacrificed: if the MTF for a spatial frequency has already dropped to 20% in the object plane, there is barely any leeway left for depth of field. Depth of field and resolution are fundamentally linked to each other: the product of these two variables is always constant. So, if an application requires greater depth of field, the resolution has to be reduced. This also means that, since the resolution increases with a larger image scale, the depth of field drops at the same time.
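As a rough numerical illustration of this trade-off, the following sketch compares the two effective f-numbers used in image 5. The smallest resolvable object-side detail uses the 20% MTF criterion from section 3.1; the depth-of-field figure uses a common wave-optical rule of thumb (about 2·λ·N_eff² on the image side, divided by β'² for the object side), which is an assumption for illustration only, not a data-sheet value.

# Resolution / depth-of-field trade-off for the two effective f-numbers of image 5.
# The depth-of-field estimate is a common wave-optical rule of thumb, not a value
# from the article or a data sheet.
wavelength_mm = 550e-6
beta = 0.5

for f_number in (6.3, 25.1):
    cutoff = 1.0 / (wavelength_mm * f_number)            # diffraction cutoff R'_G in lp/mm
    r20 = 0.69 * cutoff                                  # approx. frequency where the MTF drops to 20%
    dr_object_um = 1e3 / (r20 * beta)                    # smallest object-side period
    dof_object_mm = 2 * wavelength_mm * f_number**2 / beta**2
    print(f"f/{f_number}: smallest detail ~ {dr_object_um:.0f} um, "
          f"depth of field ~ {dof_object_mm:.2f} mm")

Stopping down from f/6.3 to f/25.1 therefore buys depth of field at the price of resolvable detail, which is exactly the compromise described above.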

 

Summary

A number of details are required for the correct selection and assessment of a telecentric lens. In addition, the optical parameters are wavelength-dependent. For this reason, it is difficult to fully characterize a lens in a data sheet. In cases of doubt, the manufacturer can be consulted, or the manufacturer can provide so-called black-box models of the lenses, from which all of the optical data can then be determined. The discussion also shows that it is important to consider beforehand whether an application requires detail-rich images (high resolution) or whether object contours need to be measured accurately (low edge position uncertainty). If theoretical predictions are difficult, the lens should be tested beforehand; here, adjustable apertures are helpful tools for finding the best balance between resolution and depth of field.

Contact

Press contact

 

 

pth-mediaberatung GmbH

Friedrich-Bergius-Ring 20

D-97076 Würzburg / Germany

 

Contact Person

Mr Paul-Thomas Hinkel

Phone: +49 9 31 / 32 93 0-0

E-Mail: cg@mediaberatung.de

Internet: www.mediaberatung.de


Company contact

 

 

Vision & Control GmbH

Mittelbergstr. 16

D-98527 Suhl / Germany

 

Contact Person

Ms Beate Koch

Phone: +49 36 81 / 79 74-34

E-Mail: presse@vision-control.com

Internet: www.vision-control.com