So, it is worth taking a closer look at how humans and machines differ in their visual perception.
Human vision is an achievement of evolution that has been optimised over millions of years. It is designed for grasping situations quickly and reducing complex information to the essentials. To make that possible, our eyes are connected to a very special kind of image processor: the human brain. Thousands of nerve fibres working in parallel are interconnected in such a way that patterns can be recognised in the signals even as they are being transmitted to the brain, a special ability that leads to lightning-fast results.
Our vision is fascinating and makes use of unique features, but is by no means infallible. It also has quirks and shortcomings. Not everything is always visible to the human eye, and it is also susceptible to optical illusions.
Much though humans and machines may differ, they nonetheless have similarities. The processes of seeing, recognising, deciding and executing are not alike by chance: humans seek to replicate vision by technical means on the basis of their own insights into it. And since the technology rests on different operating principles, has different properties from humans and is governed by different parameters, it produces different recognition results.
If humans see faults in a workpiece or look for faults, they do so by means of highly complex control processes that run in succession:
- Position the workpiece (take it in your hand and move it around)
- Create optimal light conditions (move to a light source and hold the part in the light)
- Position and turn your body and hands constantly to maintain optimal light conditions at all times (constantly optimising the image-capture situation)
- Turn and push the part in order to evaluate the entire object (hand−eye coordination)
- Continually check decisions: is what you see a fault or not? (constant comparison with an individual error threshold that is itself not constant)
And, most importantly:
- Even with new and previously unseen features or faults, assess the part as either good or a reject
And what do machines offer to counter this?
- A combination of image processing and machine components (that is frequently not optimal for solving the task)
- An image processing program that is at present mostly rigid and inflexible (written by a non-specialist in the subject on the basis of his or her knowledge and heavily dependent on light and environmental conditions)
- A handling system that is limited in its range of movements
- Fixed performance data
- Recognition of known faults that are listed in a fault catalogue or are precisely defined mathematically in respect of colour, size and shape
- An overall concept that reflects the state of knowledge of an individual or a team as laid down in the requirement specifications
What both testers - the human and the machine - have in common is that each in their own way seeks to compensate for imperfect environmental conditions.
Humans remain flexible whereas machines know only their fixed parameters.
Humans rely on their fast-learning, error-tolerant, flexible system of vision together with experience-based cognitive sight. They skilfully and intuitively use their hands with their natural six degrees of freedom of movement. In this way, humans can compensate for imperfections most flexibly across the widest range of individual inspection tasks and objects. And if a decision appears to be dubious, the part is inspected again and the decision is reconsidered on the basis of very specific criteria.
With machines, it is entirely different. They can only compensate to a limited extent because temporal and economic limits are involved. The machine runs on a rigid, specialised program whose results are also heavily dependent on lighting conditions. Handling is frequently limited to fewer than six degrees of freedom of movement, and flexibility is limited by the constraints of the program. With everything optimised for speed and a mass throughput of similar parts, there is hardly any time to react flexibly.
This comparison shows that the strong points of humans are their flexibility, their spontaneity and their creativity, whereas the machine stands for reliability, accuracy and speed. Tiring and constantly recurring inspection work weighs heavily on humans. As long as no suitable test equipment was available during mechanisation, industrialisation and automation, this burden was a necessary evil. Image processing has brought considerable relief to humans.
In practice, expectations of image processing are very high. It is often the particularly sophisticated and expensive high-end applications and their basic technical data that potential users remember. This data is then swiftly but erroneously seen as being the technical performance standard.
But image processing does not always work as fast as the user might like it to. Nor, as a matter of principle, does it work without wear and tear, and it is often difficult to use. There are many different reasons why that is the case, but with so much appearing to be technically feasible, it is often overburdened with tasks and, once installed, not infrequently mutates into a jack of all trades expected to solve every problem.
Corporate views on the subject often differ widely. Some claim that image processing can do almost everything. Others, based on bad experiences, have largely lost confidence in the performance of image processing.
In the resulting discussions people are fond of ignoring certain risks of using the technology, such as task overload, lack of precision in defining tasks, technical requirements that do not match the technology employed or lack of knowledge on the part of the participants.
Critical error propagation
In few technologies are the effects of error propagation as critical as in image processing. From the test object, the lighting, the optics and the image sensor via the signal-converting, signal-conducting and signal-processing electronics to the software, errors that occur are passed on and added up. It becomes especially critical when working at extreme physical limits, where the desired resolution can often only be reached with the aid of statistical methods. And how reliable is an overall result that is based on a sequence of wide-ranging mathematical corrections? Specifically, if the result is based on a sequence of shading correction, distortion correction and colour correction, the conclusion needs to be checked for reliability. In this regard, the old engineering principle "error avoidance before error compensation" could not be of greater relevance.
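The effect of such a correction chain can be made tangible with a rough estimate. The following sketch is illustrative only; all per-stage error figures are assumed example values, not data from practice. It contrasts the worst case, in which the errors of every stage simply add up, with the more favourable case of independent random errors combining in quadrature:

```python
# Illustrative sketch: how small per-stage errors in an image-processing
# chain can accumulate in the final result.
# All stage error figures below are ASSUMED example values.
import math

# Assumed relative errors contributed by each stage of the chain (fractions)
stages = {
    "lighting": 0.010,
    "optics": 0.008,
    "sensor": 0.005,
    "electronics": 0.004,
    "shading correction": 0.006,
    "distortion correction": 0.007,
    "colour correction": 0.009,
}

# Worst case: every stage error pushes the result in the same direction
worst_case = sum(stages.values())

# Independent random errors combine in quadrature (root sum of squares)
independent = math.sqrt(sum(e ** 2 for e in stages.values()))

print(f"worst case:  {worst_case:.1%}")
print(f"independent: {independent:.1%}")
```

Even in the favourable independent case the combined uncertainty remains larger than that of any single stage, which is why avoiding an error at its source beats correcting it downstream.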
High performance comes at a price
Only with extremely high computing power can image processing register what the human eye takes in almost instantly. The machine's cycle time sets the pace for image data processing, and the speed at which workpieces are conveyed determines the time that is available for image recording.
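A back-of-the-envelope calculation shows how tightly the conveyor couples these quantities. The belt speed, field of view, sensor resolution and exposure time below are assumed example values, not figures from the article:

```python
# Sketch with ASSUMED example values: the conveyor speed fixes how long a
# part stays in the camera's field of view, and the exposure time fixes
# how much motion blur is recorded.

conveyor_speed_mm_s = 500.0    # assumed belt speed: 0.5 m/s
field_of_view_mm = 100.0       # assumed field of view along travel direction
pixel_size_mm = 100.0 / 2000   # assumed: 100 mm imaged onto 2000 pixels

# Time window during which the part is inside the field of view
time_in_view_s = field_of_view_mm / conveyor_speed_mm_s

# Motion blur for an assumed 1 ms exposure
exposure_s = 0.001
blur_mm = conveyor_speed_mm_s * exposure_s
blur_px = blur_mm / pixel_size_mm

print(f"time in view: {time_in_view_s * 1000:.0f} ms")
print(f"motion blur:  {blur_px:.1f} px")
```

With these numbers the part is visible for only 200 ms, and even a 1 ms exposure already smears the image over several pixels, which is one reason why line speed so directly limits what image processing can achieve.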
Image processing eyes see things differently to human eyes: different receivers for different wavelength ranges, different perceptions of brightness, and lighting that is tailored to the task.
As a rule, algorithms for processing image data are much faster than the mechanical processes. Typical values are around 500 µs for a blob analysis of an entire four-megapixel image and 10 ms for calculating an edge location to sub-pixel accuracy. Depending on the size and nature of the pattern and the size of the image to be scanned, a pattern search can easily take a three-digit number of milliseconds.
Working through a test program consisting of many functions takes time, and the higher the camera resolution the more data must be processed. That takes more time or requires more computing power - and with it a higher economic input.
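The consequence is that a test program has to fit its processing steps into the machine's cycle-time budget. The sketch below uses the order-of-magnitude figures quoted above for the blob and edge steps; the pattern-search time and the cycle time are assumed example values:

```python
# Sketch of a cycle-time budget for a test program. Blob and edge timings
# follow the order-of-magnitude figures quoted in the text; the
# pattern-search time and cycle time are ASSUMED example values.

steps_ms = {
    "blob analysis (4 MP image)": 0.5,      # ~500 us, per the text
    "edge location (sub-pixel)": 10.0,      # ~10 ms, per the text
    "pattern search": 120.0,                # assumed: three-digit ms range
}

cycle_time_ms = 150.0  # assumed machine cycle time

total_ms = sum(steps_ms.values())
headroom_ms = cycle_time_ms - total_ms

print(f"processing total: {total_ms:.1f} ms of {cycle_time_ms:.0f} ms budget")
print(f"headroom:         {headroom_ms:.1f} ms")
```

A single pattern search already dominates the budget here; doubling the camera resolution would roughly scale the per-pixel steps with it, forcing either a longer cycle or more computing power.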
It is also a fundamental fact that there is no one image processing solution that can do everything. Many roads lead to the objective - from the fast and reliable solution that everybody wants to the slow and unreliable solution that nobody wants. Practice will tell which category your solution comes under. Along with the material factors required for peak performance, the technical competence and experience of the users determine the performance and reliability of image processing solutions.
As a very rough guide, the success of an image processing solution can be said to depend half on skilful systematics, a quarter on experience gained and a further quarter on dedicated trials.
The environment also exerts its influence
Extraneous light, vibrations, temperature, dust and dirt and aggressive environments are critical points that affect what image processing can see and thereby the results. And it is not just the physical environment that matters; the human factor also influences the final results through its interaction with the system before, during and after the project.
So, it would be wrong to assume that good organisation and perfect technology are enough to solve all problems. People who work with image processing and are expected to integrate it permanently into their individual work processes need to be won over. Image processing technology does not optimise itself and the relevant processes on its own. It often also requires an expert with in-depth knowledge as a "carer". In addition, it requires well-trained employees who have been prepared to take on new challenges.
published in: inspect 2_2017