Visual inspection is the cornerstone of most quality control workflows. When performed by humans, the process is expensive, error-prone, and inefficient: pseudo-scrap (false reject) and slippage rates of 10% to 20%, along with production bottlenecks, are not uncommon. Under the name IQZeProd (Inline Quality control for Zero-error Products), researchers at Fraunhofer IWU are developing new inline monitoring solutions to recognize defects as early in the production process as possible for a variety of materials such as wood, plastics, metals, and painted surfaces. The system fuses data from a variety of sensors to recognize structural and surface defects as the components travel along the production line. The goal is to make industrial manufacturing processes more robust and sustainable by increasing process reliability and improving defect detection. At the heart of the system are the researchers' own Xeidana® software framework and a matrix of twenty industrial cameras. The researchers had very specific camera criteria: global-shutter monochrome sensor; low-jitter real-time triggering; reliable data transmission at very high data rates; and straightforward integration into their software framework. They selected GigE Vision-standard industrial cameras from The Imaging Source.
While Xeidana's framework approach offers the flexibility necessary to process data from optical, thermal, multi-spectral, polarization, or non-optical sensors (e.g. eddy current), many inspection tasks are completed using the data delivered by standard optical sensors. Project manager Alexander Pierer commented, "We often use data fusion to redundantly scan critical component areas. This redundancy can consist of scanning the same region from different perspectives, which simulates the 'mirroring' used during manual inspection." To acquire the visual data needed to complete these tasks, the researchers created a camera matrix consisting of twenty GigE industrial cameras: nineteen monochrome and one color.
Due to their intrinsic physical properties, monochrome sensors deliver higher detail, improved sensitivity, and less noise than their color counterparts. Pierer notes: "Monochrome sensors are sufficient for detecting defects that appear as differences in brightness on the surface. While color data is very important for us humans, in technical applications it very often does not provide additional information. We use the color camera for color tone analysis, by means of an HSI transformation, to detect color deviations that may indicate a problem with paint coating thickness."
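The hue-based check Pierer describes can be sketched in a few lines. The snippet below is a minimal illustration, not the project's actual algorithm: it uses Python's built-in HSV conversion as a stand-in for the HSI transform mentioned above, and the tolerance value is purely illustrative.

```python
import colorsys

def hue_deviation_deg(rgb, reference_rgb):
    """Absolute hue difference (degrees) between a measured RGB sample
    and a reference color, wrapping around the color circle."""
    h1, _, _ = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb])
    h2, _, _ = colorsys.rgb_to_hsv(*[c / 255.0 for c in reference_rgb])
    diff = abs(h1 - h2) * 360.0
    return min(diff, 360.0 - diff)

def coating_ok(rgb, reference_rgb, tolerance_deg=8.0):
    """Flag a paint sample whose hue drifts beyond an (illustrative)
    tolerance -- a color shift that could hint at coating problems."""
    return hue_deviation_deg(rgb, reference_rgb) <= tolerance_deg

# A slightly shifted red still passes; a strongly shifted one does not.
print(coating_ok((200, 30, 30), (210, 25, 25)))   # True
print(coating_ok((200, 120, 30), (210, 25, 25)))  # False
```

Working in a hue-based color space rather than raw RGB makes the check largely independent of brightness, which varies with illumination and viewing angle.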
Task requirements and short exposure times meant that the engineers had very precise camera criteria. Pierer continues: "The main selection criteria were a global shutter and real-time triggering with very low jitter, because we shoot the parts in motion with very short exposure times in the 10 µs range. The exposure of the camera and the Lumimax illumination (iiM AG), which is also triggered via hardware input, must be absolutely synchronous. We tested some of your competitors here, and many of them had problems. It was also important to us that the ROI could already be limited to relevant areas in the camera's firmware in order to optimize the network load for image transmission. Furthermore, we depend on reliable data transmission at very high data rates. Since the parts are inspected in throughput, image failures or fragmented image transmissions must not occur."
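Why an in-camera (firmware) ROI matters for network load can be illustrated with some back-of-the-envelope arithmetic. The sensor resolution, frame rate, and ROI height below are assumed for illustration only; they are not the project's actual values.

```python
def stream_mb_per_s(width, height, fps, bytes_per_pixel=1):
    """Uncompressed payload of a Mono8 camera stream, in MB/s.
    With a firmware ROI, only the cropped region ever leaves the camera."""
    return width * height * bytes_per_pixel * fps / 1e6

# Hypothetical 5-MP global-shutter camera at 20 fps:
full = stream_mb_per_s(2448, 2048, 20)   # full frame
roi = stream_mb_per_s(2448, 600, 20)     # ROI limited to the relevant band
print(f"full frame: {full:.1f} MB/s, ROI: {roi:.1f} MB/s")
```

Cropping in the camera rather than on the host reduces the per-camera load from roughly 100 MB/s to under 30 MB/s in this example, which is what keeps a twenty-camera matrix within the capacity of the network.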
Over the course of the project, the team built several systems: some for industrial settings and others for demonstration and testing purposes. In a typical industrial setting, where the components under inspection remain constant, the imaging provided by fixed-focus industrial cameras met the team's requirements. For the demo/test system, however, the researchers used a variety of components, including metal parts, wooden blanks, and 3D-printed plastics, which required cameras with an adjustable field of view (FOV). The Imaging Source's monochrome zoom cameras with integrated, motorized zoom offered this functionality.
With over 20 sensors of varying kinds delivering data to the system, there is a data stream on the order of 400 MB/s to contend with. Pierer explains, "The system is designed for throughput speeds of up to 1 m/s. [...] Every three to four seconds, the twenty-camera matrix creates 400 images. Added to this is the data coming from the hyperspectral line camera and the roughness measurement system, all of which must be processed and evaluated within the 10 second cycle time. In order to meet this requirement, so-called massively parallel data processing is necessary, involving 28 computing cores (CPU) and the graphics processing unit (GPU). This parallelization enables the inspection system to keep pace with the production cycle, delivering an inline-capable system with 100% control." Optimized for modern multi-core systems, Xeidana's modular framework allows application engineers to quickly assemble a massively parallel, application-specific quality control program from a system of plug-ins that can be extended with new functionality via a variety of imaging libraries.
The system's data fusion capabilities can be used in several ways, depending on what information is likely to provide the soundest results. In addition to the more standard machine vision inspection tasks, the team of researchers is currently working on integrating other non-destructive evaluation techniques, such as 3D vision, as well as additional sensors from outside the visible spectrum (e.g. x-ray, radar, UV, terahertz) to detect other types of surface and internal defects.
Because Xeidana supports massively parallel processing, deep learning techniques can also be applied to defect detection in components whose inspection criteria are not readily quantified or defined. Pierer clarifies, "These methods are especially important for organic components with an irregular texture, such as wood and leather, as well as for textiles." Because machine learning techniques can be difficult to apply in certain contexts (e.g. limited traceability of the classification decision and the inability to adjust algorithms manually during commissioning), Pierer adds, "we mostly rely on classical image processing algorithms and statistical methods of signal processing in our projects. Only when we reach our limits do we switch to machine learning."
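A classical statistical check of the kind Pierer alludes to can be very compact. The sketch below flags pixels whose gray value lies unusually far from the image mean; the function name, the k-sigma threshold, and the sample data are illustrative assumptions, not the project's actual method.

```python
from statistics import mean, pstdev

def brightness_defects(pixels, k=2.5):
    """Classical statistical approach: return indices of pixels whose gray
    value lies more than k standard deviations from the mean -- a simple
    stand-in for brightness-difference defect checks (illustrative only)."""
    mu, sigma = mean(pixels), pstdev(pixels)
    if sigma == 0:
        return []  # perfectly uniform surface: nothing to flag
    return [i for i, p in enumerate(pixels) if abs(p - mu) > k * sigma]

# A mostly uniform surface with one dark scratch pixel at index 6:
row = [120, 122, 118, 121, 119, 120, 30, 121, 120, 122]
print(brightness_defects(row))  # [6]
```

Unlike a trained classifier, every decision here is fully traceable, and the threshold k can be tuned by hand during commissioning, which is exactly the advantage the quote describes.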
Acknowledgement: The Imaging Source Europe GmbH is an active member of the industry working group of the IQZeProd project and is in close professional exchange with the research partners. The IGF project IQZeProd (232 EBG) of the German Research Association for Measurement, Control and Systems Engineering (DFMRS), Linzer Str. 13, 28359 Bremen, was funded by the AiF within the framework of the program for the promotion of joint industrial research (IGF) by the Federal Ministry of Economics and Energy on the basis of a resolution of the German Bundestag. The final report of IGF project 232 EBG is available to the interested public in the Federal Republic of Germany and can be obtained from the DFMRS, Linzer Str. 13, 28359 Bremen, or from the Fraunhofer IWU, Reichenhainer Straße 88, 09126 Chemnitz.
Post published by TIS Marketing on December 7, 2020.
Established in 1990, The Imaging Source is one of the leading manufacturers of industrial cameras, video converters and embedded vision components for factory automation, quality assurance, medicine, science, security and a variety of other markets.
Our comprehensive range of cameras with USB 3.1, USB 3.0, USB 2.0, GigE, and MIPI interfaces, along with other innovative machine vision products, is renowned for high quality and the ability to meet the performance requirements of demanding applications.