High-speed machine vision systems are everywhere in advanced manufacturing, but not all of them are equal. Many vendors claim a superior product that can magically achieve amazing results in high-speed applications, but the key to any successful high-speed vision system lies in a few basic yet important fundamentals:
- Trigger Efficiency
  - How quickly you can trigger the camera, without delay, upon request
- Luminous Intensity
  - How much light is reflected from the object of interest
- CMOS Sensor Quantum Efficiency
  - How effectively the CMOS sensor converts photons into electrons
- Camera Controller & ADC Conversion
  - The speed of the camera in converting electrons to digital values using the onboard ADC
- Rolling vs. Global Shutter
  - Whether all pixels are exposed at once (global) or line by line (rolling)
- Software Architecture & Efficiency
  - How fast you can process the image while taking advantage of all hardware capabilities, such as multiple computation cores
Camera
For high-speed applications, you want a global shutter instead of a rolling shutter. If the salesperson does not tell you what type of shutter the camera has, it is likely rolling, and you should look elsewhere. Without going into detail, a rolling shutter exposes the sensor line by line, so a fast-moving object appears skewed or distorted compared with the image from a global shutter, which exposes every pixel at once.
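The skew is easy to see in a toy simulation (this is not camera code, and the numbers are purely illustrative): each row of a rolling-shutter frame is exposed slightly later, so a moving vertical edge comes out slanted.

```python
import numpy as np

# Toy rolling-shutter simulation: a bright object whose right edge moves
# 1 pixel per row-readout time. Illustrative numbers only.
rows, cols, speed_px_per_row = 8, 16, 1

def capture(rolling):
    img = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        # With a rolling shutter, row r is exposed later, so the edge has moved.
        edge = 4 + (speed_px_per_row * r if rolling else 0)
        img[r, :edge] = 1
    return img

print(capture(rolling=False))  # straight vertical edge
print(capture(rolling=True))   # edge slants further right on later rows
```

The global-shutter frame keeps the edge straight; the rolling-shutter frame smears it diagonally, which is exactly the distortion described above.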
Resolution also plays a huge role in high-speed applications. High-resolution cameras are generally unsuitable because resolution has an inverse relationship with frame rate (FPS) and a non-linear relationship with computational load. In short, pick a camera with only as much resolution as you need, and further reduce the region of interest (ROI) to increase FPS and achieve the optimal hardware setup.
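The arithmetic makes the point concrete. Data rate scales with width × height × FPS, so a 5 MP sensor at the same frame rate must move roughly 16× the data of VGA (the frame rate and resolutions below are example figures, not from any particular camera):

```python
# Pixel throughput: data rate = width * height * fps * bytes per pixel.
def data_rate_mb_s(width, height, fps, bytes_per_px=1):
    return width * height * fps * bytes_per_px / 1e6

print(data_rate_mb_s(640, 480, 500))    # VGA at 500 fps  -> 153.6 MB/s
print(data_rate_mb_s(2448, 2048, 500))  # 5 MP at 500 fps -> 2506.752 MB/s
```

That second figure exceeds what common camera interfaces can sustain, which is why high-resolution cameras cap out at much lower frame rates, and why shrinking the ROI directly buys back FPS.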
Luminous Intensity
Hardware and software need to be carefully designed to achieve optimal results in high-speed applications. On the hardware side, light intensity is the most important factor, and it can be significantly amplified by strobe overdrive and focusing lenses.
- Strobe overdrive allows an LED light source to be driven well above its rated continuous output for brief periods without permanent damage to the LED.
- A focusing lens redirects wasted luminance onto the single spot of interest, significantly increasing the light intensity there.
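The reason overdrive is safe at short pulse widths is that the LED's thermal limit is (to a first approximation) on average power, not peak power. A quick back-of-the-envelope check, with made-up ratings for illustration:

```python
# Strobe overdrive sketch (illustrative numbers, not a real LED datasheet).
rated_current_ma = 500       # continuous drive rating
overdrive_current_ma = 5000  # 10x pulse drive during the strobe
pulse_us, period_us = 20, 1000  # 20 us flash every 1 ms -> 2% duty cycle

# Average current seen by the LED over one strobe period.
avg_ma = overdrive_current_ma * pulse_us / period_us
print(avg_ma)  # 100.0 mA, well under the 500 mA continuous rating
```

Always follow the strobe controller and LED manufacturer limits; the point of the sketch is only that a 10× brighter flash at a low duty cycle can still keep the average load far below the continuous rating.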
It can be tempting to compensate for a lack of luminous intensity with 'software gain', artificially increasing the brightness of the image through image processing. This pitch is common among sales 'engineers' trying to sell you a fancy light with a strange name. While gain does brighten the image, basic gain is linear, so it amplifies image noise just as much as the signal. The recommendation is to purchase a brighter light instead of that multispectral light that sounds impressive but offers no real benefit. This is not to say image processing cannot be used to increase brightness, but keep image noise in mind, as most 'gain' algorithms are linear or constant.
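A small NumPy demonstration of why linear gain buys no signal-to-noise improvement (the image and noise levels are simulated, not from a real sensor):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dim image: a uniform scene plus Gaussian sensor noise.
signal = np.full((64, 64), 40.0)            # dim, uniform scene
noise = rng.normal(0.0, 5.0, signal.shape)  # read noise, sigma = 5
dim = signal + noise

# Linear "software gain": brightness and noise scale by the same factor.
gain = 3.0
bright = dim * gain

print(round(dim.std(), 2))     # noise floor of the dim image
print(round(bright.std(), 2))  # exactly gain times larger: SNR unchanged
```

The mean brightness triples, but so does the noise standard deviation, so the image looks brighter and grainier in equal measure. Only more actual light (or a longer exposure, which high speed rules out) improves SNR.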
Triggering
Triggering is the most basic yet most overlooked area in high-speed vision applications. Say your camera can communicate over EtherNet/IP or USB, but it also has one of those odd-looking 8-pin connectors. Instead of finding out what the 8-pin connector does, you simply connect the camera to the controller and perform what is called a software trigger. The issue with this approach is the delay in triggering. As a result, you will often get inconsistent images as the object moves by, or, even worse, be left with little to no time to perform the image processing. While USB and EtherNet/IP may be fast enough for basic applications, they introduce significant delay in high-speed vision applications.
The optimal setup is simple. Configure the camera for hardware triggering, then connect the trigger sensor directly to the camera. Lastly, ensure your lighting source is also connected directly to the camera. As a result, when the trigger sensor fires, the signal goes directly to the camera with minimal delay.
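To get an intuition for the stakes, consider a rough timing model. The latency figures below are hypothetical round numbers for illustration, not measurements of any particular camera or bus; the useful coincidence is that an object moving 1 m/s travels 1 µm per µs of delay.

```python
import random

random.seed(1)

# Hypothetical latency figures (illustrative only, not measured values).
HW_TRIGGER_US = 10       # sensor wired directly to the camera trigger input
SW_ROUND_TRIP_US = 2000  # host software stack + USB/Ethernet transit
SW_JITTER_US = 500       # OS scheduling jitter on the host side

def software_trigger_delay():
    # Fixed round trip plus unpredictable host-side jitter.
    return SW_ROUND_TRIP_US + random.uniform(0, SW_JITTER_US)

def hardware_trigger_delay():
    return HW_TRIGGER_US

sw = software_trigger_delay()
print(f"software trigger: {sw:.0f} us -> object at 1 m/s moved {sw:.0f} um")
print(f"hardware trigger: {HW_TRIGGER_US} us -> object moved {HW_TRIGGER_US} um")
```

Note that the software path is not only slower but jittery, which is why software-triggered frames of a moving part land at inconsistent positions from shot to shot.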
Software Architecture & Efficiency
On the software side, the efficiency of the algorithm, the available computation power, and the communication handshaking dictate the speed of image processing, which is an important part of any high-speed machine vision application. For communication handshaking, the focus is on reducing delay; for example, the trigger signal should go directly from the trigger sensor to the image sensor and should not be relayed through the camera controller. Computation power and the algorithm go hand in hand for image processing speed: if multiple processing cores are available, a parallel algorithm should be used instead of a serial one to take advantage of them.
To take advantage of multiple cores, one may divide the image into multiple sections, process each section separately, and then join the results. For example, if the goal is to inspect a uniform surface, one can first divide the image into multiple sections, process each section in parallel, and then join the results for the final output. If the image is color, one can instead split it into three images, one per color channel, process them concurrently on multiple cores, and then combine the results.
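The split-process-join pattern can be sketched in a few lines. Here the per-section check (counting dark pixels) is a stand-in for whatever inspection the application actually performs:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def inspect_section(section):
    # Placeholder inspection: count pixels darker than a threshold.
    return int((section < 50).sum())

def inspect_parallel(image, n_sections=4):
    # Split the frame into horizontal strips and process them concurrently.
    strips = np.array_split(image, n_sections, axis=0)
    with ThreadPoolExecutor(max_workers=n_sections) as pool:
        counts = list(pool.map(inspect_section, strips))
    return sum(counts)  # join the per-strip results

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (480, 640), dtype=np.uint8)

# Parallel split/join must agree with processing the whole frame at once.
assert inspect_parallel(frame) == inspect_section(frame)
```

Threads suffice here because NumPy releases the GIL during array operations; for heavy pure-Python per-section work, a process pool would be the usual substitute.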
How parallel computing is done varies with the programming language used, and it remains largely the domain of lower-level languages. In many cases, one may not have the time, the ability, or the desire to implement parallel computing to speed up image processing. If that is the case, focus on core clock speed instead of core count.
Takeaways
- Pick a camera with only as much resolution as you need.
- Avoid using gain unless you have no other choice.
- Apply denoising in the image processing pipeline if gain is used.
- Pick a global shutter over a rolling shutter.
- Use a fast single core if you do not want to bother with parallel programming.
- Reduce needless delay in communication, including triggering.
- A custom machine vision system will always be faster than any off-the-shelf 'smart' camera.