The Imaging Source blog

MVTec Innovation Day 2021: Online!

Published on January 18, 2021

On February 3, 2021 from 9:00 AM - 8:00 PM (CET), system developers and decision-makers are invited to join MVTec for their annual Innovation Day. Due to the continuing pandemic, this year's customer event will take place online with the choice between two packages: "Light" Experience and "Full" Experience. Registration for both the "Full" and "Light" Experience will be open until February 2, 2021. Customers residing in the EU who register for the "Full" Experience by January 20 will receive an exclusive catering and giveaway package. Agenda highlights include Deep Learning, OCR, the HALCON Toolbox, Shape-based and Surface-based matching as well as exciting insights from Research@mvtec. MVTec has also developed an event app to help attendees better plan their agendas and connect with other participants.

<strong>MVTec Innovation Day 2021</strong>: Agenda highlights include Deep Learning, OCR, the HALCON Toolbox, Shape-based and Surface-based matching as well as information from Research@mvtec.

Learn More

TIS Microscopy Cameras Deliver Images for Digital Pathology

Published on January 14, 2021

For a number of serious blood diseases such as leukemia, multiple myeloma and lymphoma, differential counting of blood cells in bone marrow smears is the diagnostic gold standard. Currently, these morphological assessments are still performed manually by pathologists and other highly-trained lab personnel and demand a great deal of concentration and precision from the technicians performing them. "Human factors" such as stress, fatigue, distraction, and level of training can make the interpretation of such tests prone to error, or, as the scientists themselves call it, "inter-operator variation". To improve accuracy and efficiency in diagnostic hematology, aetherAI has developed Microscope x Hema, a complete digital pathology system which uses The Imaging Source's USB 3.0 color microscopy cameras to create digital images which are then processed using Deep Learning techniques.

The Imaging Source USB microscope cameras provide images for <strong>aetherAI's <i>Microscope x Hema</i></strong> system which uses Deep Learning to improve accuracy and speed in the evaluation of bone marrow smears.

Cell Classification via Deep Learning

In order to properly train the system's CNNs, the company worked with the National Taiwan University Hospital to develop the world's first differential-counting AI model for bone marrow smears. The model is trained on a comprehensive image dataset of 500,000 annotated bone marrow samples. Microscope x Hema's embedded solution includes AI-powered microscope control software, an AI model for differential counting and dedicated hardware to support AI inferencing. Images made using standard optical microscopes often contain complex backgrounds which can negatively impact efficient cell analysis. The high-sensitivity CMOS sensor of the 20 MP DFK 33UX183 microscope camera delivers low-noise images (high signal-to-noise ratio). The camera's image pre-processing reduces any residual noise and enhances the edges and contours of the image, highlighting details and reducing image blur. Microscope x Hema's image algorithms extract features from the images and then set parameters such as shape, contour, irregular fragments, color and texture. The workflow is complete once the system has classified and counted the cells in the sample.
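The classify-and-count workflow described above can be sketched in a few lines of Python. The feature names, thresholds and the rule-based stand-in for the CNN below are purely illustrative, not aetherAI's actual model:

```python
from collections import Counter

# Hypothetical per-cell feature records; a real system would extract these
# (shape, contour, color, texture) from the pre-processed microscope image.
cells = [
    {"area": 120, "irregularity": 0.10},
    {"area": 300, "irregularity": 0.45},
    {"area": 115, "irregularity": 0.12},
]

def classify(cell):
    # Stand-in for the trained CNN: a simple threshold rule on two
    # illustrative features.
    if cell["irregularity"] > 0.3:
        return "blast"
    return "lymphocyte" if cell["area"] < 150 else "other"

counts = Counter(classify(c) for c in cells)
total = sum(counts.values())
# Count and percentage per category, as in the on-screen report.
report = {label: (n, round(100 * n / total, 1)) for label, n in counts.items()}
print(report)
```

The final dictionary mirrors the kind of per-category count-and-percentage report the software's user interface displays.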

Differentiation and categorization of nucleated bone marrow cells using <strong>aetherAI's <i>Microscope x Hema</i></strong> which analyses images made by <strong>DFK 33UX183</strong> microscope cameras. <i>Image: aetherAI</i>

By easing the burden on healthcare professionals, aetherAI aims to improve the quality of medical diagnostics by "providing solutions for digital pathology and AI-powered diagnostic support." Company founder, Dr. Joe Yeh, stated "the AI revolution will realize the ultimate value of digital medical images and bring healthcare to the next level."

The software's user interface displays the pre-processed image of bone marrow cells, and provides a report on the percentage and number of cell categories. <i>Image: aetherAI</i>

Multi-sensor Data Fusion for In-line Visual Inspection

Published on December 7, 2020

Visual inspection is the cornerstone of most quality control workflows. When performed by humans, the process is expensive, prone to error, and inefficient: pseudo-scrap and slippage rates of 10-20% and production bottlenecks are not uncommon. Under the name IQZeProd (Inline Quality control for Zero-error Products), researchers at Fraunhofer IWU are developing new, inline monitoring solutions to recognize defects as early in the production process as possible for a variety of materials such as wood, plastics, metals, and painted surfaces. The system fuses data from a variety of sensors to recognize structural and surface defects as the components travel along the production line. The goal is to make industrial manufacturing processes more robust and sustainable by increasing process reliability and improving defect detection. At the heart of the system is the researchers' own Xeidana® software framework and a matrix of twenty industrial cameras. The researchers had very specific camera criteria: a global-shutter monochrome sensor; low-jitter real-time triggering; reliable data transmission at very high data rates; and straightforward integration into their software framework. They selected GigE Vision-standard industrial cameras from The Imaging Source.

Image data from IQZeProd's twenty TIS GigE industrial cameras as well as data from hyperspectral and non-optical sensors are fused using the Xeidana software framework to enable an inline QC system with zero errors. <i>Image: Fraunhofer IWU</i>

While Xeidana's framework approach offers the flexibility necessary to process data from optical, thermal, multi-spectral, polarization or non-optical sensors (e.g. eddy current), many inspection tasks are completed using the data delivered by standard optical sensors. Project manager, Alexander Pierer, commented, "We often use data fusion to redundantly scan critical component areas. This redundancy can consist of scanning the same region from different perspectives, which simulates 'mirroring' used during manual inspection." To acquire the visual data needed to complete these tasks, the researchers created a camera matrix consisting of twenty GigE industrial cameras: nineteen monochrome and one color.

Nineteen monochrome industrial cameras gather data from critical component areas. Xeidana processes the redundant data to simulate the process known as 'mirroring' - a technique commonly used for manual inspection. <i>Image: Fraunhofer IWU</i>

Monochrome Sensors: Optimal for Defect Detection

Due to their intrinsic physical properties, monochrome sensors deliver higher detail, improved sensitivity, and less noise than their color counterparts. Pierer notes: "monochrome sensors are sufficient for detecting defects that appear as differences in brightness on the surface. While color data is very important for us humans, in technical applications the color data very often does not provide additional information. We use the color camera for color tone analysis, by means of HSI-Transformation, to detect color deviations that may indicate a problem with paint coating thickness."
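Pierer's hue-based coating check can be illustrated with a short sketch. Here Python's built-in HSV conversion stands in for the HSI transform, and the reference hue and tolerance are invented values, not the project's actual thresholds:

```python
import colorsys

# Reference hue for the expected paint color (normalized 0..1) and a
# tolerance. Both values are illustrative, not Fraunhofer's thresholds.
REFERENCE_HUE = 0.58   # a blue-ish coating
TOLERANCE = 0.05

def hue_deviation(r, g, b):
    """Return the absolute hue deviation of an RGB pixel (values 0..1)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Hue is circular: hues 0.99 and 0.01 are only 0.02 apart.
    d = abs(h - REFERENCE_HUE)
    return min(d, 1.0 - d)

def coating_ok(pixel):
    # A large hue deviation may indicate a problem with coating thickness.
    return hue_deviation(*pixel) <= TOLERANCE

print(coating_ok((0.1, 0.3, 0.9)))   # True for this pixel
```

In practice such a check would run over many pixels of the inspected surface rather than a single sample value.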

Task requirements and short exposure times meant that the engineers had very precise camera criteria: Pierer continues, "The main selection criteria were global shutter and real-time triggering with very low jitter, because we shoot the parts in motion with very short exposure times in the 10µs range. The exposure between the camera and the Lumimax illumination (iiM AG), which is also triggered via hardware input, must be absolutely synchronous. We tested some of your competitors here and many of them had problems. It was also important to us that the ROI could already be limited to relevant areas in the camera's firmware in order to optimize the network load for image transmission. Furthermore, we are dependent on reliable data transmission at very high data rates. Since the parts are inspected in throughput, image failures or fragmented image transmissions must not occur."
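The benefit of limiting the ROI in the camera's firmware is easy to quantify with a back-of-envelope calculation. The sensor geometry and frame rate below are illustrative, not the actual IQZeProd configuration:

```python
def payload_mbps(width, height, bits_per_pixel, fps):
    """Approximate image payload in Mbit/s (protocol overhead ignored)."""
    return width * height * bits_per_pixel * fps / 1e6

# Full frame vs. a firmware-limited ROI on a hypothetical 5 MP
# monochrome sensor running at 20 frames/s with 8-bit pixels.
full = payload_mbps(2448, 2048, 8, 20)
roi  = payload_mbps(2448, 512, 8, 20)   # only the band containing the part

print(f"full frame: {full:.0f} Mbit/s, ROI: {roi:.0f} Mbit/s")
```

Cropping to a quarter of the sensor height cuts the per-camera network load by the same factor, which matters when twenty cameras share the transmission path.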

Motorized Zoom Cameras Allow for Quick Adjustments to FOV

Over the course of the project, the team built several systems: for industrial settings as well as for demonstration and testing purposes. In the typical industrial setting where the components under inspection remain constant, the imaging provided by the fixed-focus industrial cameras met the team's requirements. For the demo/test system, however, the researchers were using a number of diverse components including metal parts, wooden blanks and 3D-printed plastics which required cameras with an adjustable field of view (FOV). The Imaging Source's monochrome zoom cameras with integrated, motorized zoom offered this functionality.

Zoom cameras provide a rapidly adjustable field of view (FOV), allowing the demo system to scan components of diverse size and shape. <i>Images: Fraunhofer IWU</i>

Massively Parallel Processing Keeps Pace with Data Transmission and Enables Deep Learning

With over 20 sensors of varying kinds delivering data to the system, there is a data stream on the order of 400 MB/s to contend with. Pierer explains, "The system is designed for throughput speeds of up to 1 m/s. [...] Every three to four seconds, the twenty-camera matrix creates 400 images. Added to this is the data coming from the hyperspectral line camera and the roughness measurement system, all of which must be processed and evaluated within the 10 second cycle time. In order to meet this requirement, so-called massively parallel data processing is necessary, involving 28 computing cores (CPU) and the graphics processing unit (GPU). This parallelization enables the inspection system to keep pace with the production cycle, delivering an inline-capable system with 100% control." Optimized for modern multi-core systems, Xeidana's modular framework allows application engineers to quickly assemble an application-specific, massively parallel quality-control program from plug-ins, which can be extended with new functionality via a variety of imaging libraries.
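The fan-out described above can be sketched as a toy example in Python. Xeidana itself is a compiled plug-in framework; here a thread pool and a trivial brightness-deviation check merely stand in for the worker pool and a real inspection plug-in:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins: each "image" is a flat list of pixel values and
# the inspection step is a pure function, loosely mirroring a plug-in.
def inspect(image):
    # Flag the image if any pixel deviates strongly from the image mean.
    mean = sum(image) / len(image)
    return any(abs(p - mean) > 50 for p in image)

images = [[100] * 64 for _ in range(400)]   # one 400-image batch
images[7][3] = 200                          # inject one synthetic "defect"

# Fan the batch out across worker threads, as a stand-in for the
# 28-core massively parallel evaluation described above.
with ThreadPoolExecutor(max_workers=8) as pool:
    defects = list(pool.map(inspect, images))

print(sum(defects), "defective image(s) found")
```

Because each image is evaluated independently, the batch parallelizes trivially; the real system applies the same idea across CPU cores and the GPU to stay within the cycle time.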

The system's data fusion capabilities can be used in several ways depending on which information is likely to provide the soundest results. In addition to the more standard machine vision inspection tasks, the researchers are currently working on integrating other non-destructive evaluation techniques such as 3D vision as well as additional sensors from the non-visible spectrum (e.g. x-ray, radar, UV, terahertz) to detect other types of surface and internal defects.

Processing network. Blue and yellow modules execute individual image processing tasks in parallel. <i>Image: Fraunhofer IWU</i>

Because Xeidana supports massively parallel processing, Deep Learning techniques can also be applied to defect detection of components whose inspection criteria are not readily quantified or defined. Pierer clarifies, "These methods are especially important for organic components with an irregular texture, such as wood and leather, as well as for textiles." Because machine learning techniques are sometimes tricky to apply in certain contexts (e.g. limited traceability of the classification decision and the inability to adjust algorithms manually during commissioning), Pierer adds, "we mostly rely on classical image processing algorithms and statistical methods of signal processing in our projects. Only when we reach our limits do we switch to machine learning."

Acknowledgement: The Imaging Source Europe GmbH is an active member of the industry working group of the IQZeProd project and is in close professional exchange with the research partners. The IGF project IQZeProd (232 EBG) of the German Research Association for Measurement, Control and Systems Technology (DFMRS), Linzer Str. 13, 28359 Bremen, was funded by the AiF within the framework of the program for the promotion of joint industrial research (IGF) by the Federal Ministry of Economics and Energy, based on a resolution of the German Bundestag. The final report of IGF project 232 EBG is available to the interested public in the Federal Republic of Germany and can be obtained from the DFMRS, Linzer Str. 13, 28359 Bremen, and from the Fraunhofer IWU, Reichenhainer Straße 88, 09126 Chemnitz.

MVTec Releases HALCON 20.11: Customers save up to 20%

Published on October 27, 2020

On November 20, 2020, MVTec will release HALCON 20.11. This release includes many new and improved features, such as optimized technologies for code reading, OCR, 3D and deep learning. The great news is that this new version will be released simultaneously for the HALCON Steady and HALCON Progress editions, meaning HALCON Steady customers will have access to the many new features from the last three Progress releases, including anomaly detection, the generic box finder, and optimized identification technologies.

To introduce customers to the new functionalities found in HALCON 20.11, MVTec will be offering a free webinar on two dates in order to accommodate schedules and time zones.

Limited-time Offer!

To celebrate the release of HALCON 20.11, customers will receive a 20% discount on all HALCON Steady 20.11 SDK products. This discount applies to new licenses, upgrades and deep learning add-ons for the HALCON Steady edition.

Additionally, each new SDK subscription for HALCON Progress received during the campaign period will be extended by 2 months at no additional charge (thereafter extended for the regular 12 months).

To take advantage of the special discounted price, please contact our HALCON sales staff to receive a quotation.

Please Note: The Imaging Source is a certified MVTec distributor for Germany, Austria, Switzerland and a number of other countries around the world. Please click here to find the authorized MVTec distributor for your country.

Expanded Embedded Vision Product Line

Published on August 28, 2020

In order to efficiently access the performance features of the NVIDIA and Raspberry Pi 4 platforms for embedded vision applications, The Imaging Source offers a portfolio of MIPI CSI-2 board cameras. Many embedded machine vision applications (especially multi-camera applications) require cable lengths longer than the maximum of 20 cm possible with MIPI CSI-2 cameras. Using the FPD-Link III bridge, cable lengths of up to 15 m can be achieved. Image data is transmitted via a thin (Ø 2.8 mm) coaxial cable at up to 4.16 Gbps, with image data, control commands and power supply transmitted simultaneously. This bandwidth is sufficient, for example, to transfer image data from a 5 MP camera at 30 frames/s. Developers can choose between MIPI CSI-2 and FPD-Link III board cameras as well as FPD-Link III cameras with IP67-rated housing. The compact cameras are available in monochrome and color versions with the latest CMOS image sensors from Sony and ON Semiconductor (global shutter and rolling shutter, with resolutions from 0.3 MP [VGA] to 8.3 MP).
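That bandwidth claim is easy to sanity-check, assuming 8-bit raw pixels and ignoring protocol overhead:

```python
# Back-of-envelope check: does a 5 MP camera at 30 frames/s fit into
# the FPD-Link III serial link? (Assumes 8-bit raw Bayer pixels.)
LINK_GBPS = 4.16
mp, fps, bits = 5_000_000, 30, 8

required_gbps = mp * bits * fps / 1e9
print(required_gbps, "Gbit/s needed of", LINK_GBPS, "available")
assert required_gbps < LINK_GBPS
```

At 1.2 Gbit/s of raw payload, the link even leaves headroom for deeper pixel formats or higher frame rates.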

Whether an application requires MIPI CSI-2 or FPD Link interface, high resolution or high frame rates, The Imaging Source offers a broad portfolio of embedded vision camera modules.

Why Choose MIPI CSI-2 or FPD-Link-III Camera Interfaces?

The Imaging Source provides MIPI CSI-2 and FPD-Link III camera carrier boards with up to 6 camera inputs (platform dependent) for connecting the board cameras to the NVIDIA® Jetson Nano™, NVIDIA® Jetson Xavier™ NX, NVIDIA® Jetson AGX Xavier™ and Raspberry Pi 4. Some users may wonder why they should use MIPI CSI-2 or FPD-Link III cameras instead of USB 3 or GigE cameras. The answer lies in the hardware features of the embedded platforms themselves, namely the hardware-accelerated image signal processor (ISP). The platforms' MIPI CSI-2 interfaces are directly connected to the ISP, avoiding latencies and data conversions. The Imaging Source's MIPI CSI-2 cameras have been specifically designed to pass raw image data directly to this interface. Since these embedded board cameras include only essential functionality, they are particularly cost effective. The ISP handles hardware-accelerated operations such as de-bayering, color correction, color space conversion, white balance, lens correction and image data compression (e.g. H.264/H.265).
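De-bayering is the ISP step that turns the sensor's raw color mosaic into full RGB pixels. The nearest-neighbor toy version below is for illustration only; a real ISP does this in hardware with proper interpolation:

```python
# Minimal nearest-neighbor demosaic of an RGGB Bayer mosaic (pure Python,
# illustrative only). Each 2x2 cell holds one R, two G and one B sample.
def demosaic_rggb(raw):
    """raw: 2D list (even dimensions) of sensor values; returns 2D list of (r, g, b)."""
    h, w = len(raw), len(raw[0])
    out = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y][x]
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2   # average the two greens
            b = raw[y + 1][x + 1]
            for dy in (0, 1):
                for dx in (0, 1):
                    out[y + dy][x + dx] = (r, g, b)   # share one RGB per 2x2 cell
    return out

rgb = demosaic_rggb([[10, 20],
                     [30, 40]])
print(rgb[0][0])   # (10, 25.0, 40)
```

Offloading this per-pixel work to the ISP is exactly what frees the embedded platform's CPU and GPU for the actual vision task.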

Two-camera development kit with the NVIDIA Jetson Nano (MIPI CSI-2 interface) and added active cooling to prevent thermal throttling.

Platforms Bring the Power of AI to Embedded Vision

In addition to their ISPs, the aforementioned embedded computers from NVIDIA (e.g. NVIDIA Jetson Nano, Xavier NX* and AGX Xavier*) offer a GPU with CUDA cores and several MIPI-CSI-2 camera interfaces which make them predestined for AI and demanding machine vision applications. The NVIDIA platforms are capable of running several neural networks in parallel to implement image segmentation, image classification and object recognition. The NVIDIA SDK "JetPack" supports all NVIDIA Jetson-based embedded platforms.

Carrier board with six FPD Link III camera modules using the Jetson AGX Xavier.

NVIDIA offers extensive software libraries for deep learning and for image and video processing. The Imaging Source provides the corresponding camera drivers, which are seamlessly integrated into the NVIDIA software framework. This allows image data to be transferred directly to a pre-trained deep learning module.

For less demanding image processing tasks, the Raspberry Pi 4 with a MIPI-CSI-2 camera interface and an ISP is also an excellent choice. Here too, the MIPI-CSI-2 interface is directly connected to the ISP. The Raspberry Pi 4, for example, is capable of compressing high-resolution H.264 images and sending them via Ethernet or WLAN.

The NVIDIA and Raspberry Pi embedded platforms are supported by large open-source and maker communities. Support questions can be posted in the appropriate forums, where developers can exchange information with one another. NVIDIA also offers free online training courses on the subject of deep learning.

*The Imaging Source's latest embedded products for NVIDIA's Xavier NX and AGX Xavier platforms will be available in Q1 2021. Please contact us if you would like to be informed as the products become available.

The above article was published in the September 2020 edition of the German-language industry journal Markt&Technik under the title, FPD-Link III ergänzt MIPI CSI-2.

Looking for older posts?

We have been blogging since 2007. All old posts are in the archive and tagcloud.

About The Imaging Source

Established in 1990, The Imaging Source is one of the leading manufacturers of industrial cameras, frame grabbers and video converters for production automation, quality assurance, logistics, medicine, science and security.

Our comprehensive range of cameras with USB 3.1, USB 3.0, USB 2.0 and GigE interfaces, along with our other innovative machine vision products, is renowned for high quality and the ability to meet the performance requirements of demanding applications.

Automated Imaging Association ISO 9001:2015 certified

Contact us