The Imaging Source blog

Updates to the TIS Linux Library: Linux for Embedded

Published on April 13, 2021

Maintained by thousands of professional and volunteer programmers, Linux has a strong and vibrant development community whose wealth of support resources has helped make it the dominant operating system for embedded developers.

Linux, the highly stable, open-source operating system, enjoys widespread acceptance among embedded developers because of its low cost, scalability and easy customization. For years, The Imaging Source has continuously maintained and developed its Linux library, including the tiscamera SDK, an open-source project published under the Apache 2.0 license. The latest release of the tiscamera SDK includes additions and updates that improve embedded application performance:

  • Supports the integration of the NVIDIA Jetson and Raspberry Pi platforms with The Imaging Source's USB and GigE cameras
  • Supports The Imaging Source's GStreamer element for MIPI-based cameras on Raspberry Pi 4, enabling standardized camera handling
  • General stability and performance improvements

You can find all the changes to the tiscamera SDK in the changelog.
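
As a quick illustration of how the SDK's GStreamer integration is typically used, the sketch below opens a camera via the tiscamera GStreamer source element (tcambin) and displays a live stream. It is a minimal example, not part of the release notes; the caps values are arbitrary, and element or property names may vary between tiscamera versions.

    # Minimal sketch: live preview from a The Imaging Source camera through
    # the tiscamera GStreamer element (tcambin). Requires GStreamer with
    # PyGObject and an installed tiscamera; values below are illustrative.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    pipeline = Gst.parse_launch(
        "tcambin name=source "
        "! video/x-raw,format=BGRx,width=640,height=480,framerate=30/1 "
        "! videoconvert ! autovideosink"
    )

    pipeline.set_state(Gst.State.PLAYING)
    try:
        # Block until an error occurs or the stream ends
        pipeline.get_bus().timed_pop_filtered(
            Gst.CLOCK_TIME_NONE,
            Gst.MessageType.ERROR | Gst.MessageType.EOS)
    finally:
        pipeline.set_state(Gst.State.NULL)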

MVTec Innovation Day 2021: Online!

Published on January 18, 2021

On February 3, 2021, from 9:00 AM to 8:00 PM (CET), system developers and decision-makers are invited to join MVTec for their annual Innovation Day. Due to the continuing pandemic, this year's customer event will take place online, with a choice between two packages: the "Light" Experience and the "Full" Experience. Registration for both packages will be open until February 2, 2021. Customers residing in the EU who register for the "Full" Experience by January 20 will receive an exclusive catering and giveaway package. Agenda highlights include Deep Learning, OCR, the HALCON Toolbox, shape-based and surface-based matching as well as exciting insights from Research@mvtec. MVTec has also developed an event app to help attendees better plan their agendas and connect with other participants.

MVTec Innovation Day 2021: Agenda highlights include Deep Learning, OCR, the HALCON Toolbox, shape-based and surface-based matching as well as information from Research@mvtec.

Learn More

TIS Microscopy Cameras Deliver Images for Digital Pathology

Published on January 14, 2021

For a number of serious blood diseases such as leukemia, multiple myeloma and lymphoma, differential counting of blood cells in bone marrow smears is the diagnostic gold standard. Currently, these morphological assessments are still performed manually by pathologists and other highly trained lab personnel and demand a great deal of concentration and precision from the technicians performing them. "Human factors" such as stress, fatigue, distraction and level of training can make the interpretation of such tests prone to error, or, as the scientists themselves call it, "inter-operator variation". To improve accuracy and efficiency in diagnostic hematology, aetherAI has developed Microscope x Hema, a complete digital pathology system which uses The Imaging Source's USB 3.0 color microscopy cameras to create digital images that are then processed using Deep Learning techniques.

The Imaging Source USB microscope cameras provide images for aetherAI's Microscope x Hema system, which uses Deep Learning to improve accuracy and speed in the evaluation of bone marrow smears.

Cell Classification via Deep Learning

In order to properly train the system's CNNs, the company worked with the National Taiwan University Hospital to develop the world's first differential-counting AI model for bone marrow smears. The model is trained on a comprehensive image dataset of 500,000 annotated bone marrow samples. Microscope x Hema's embedded solution includes AI-powered microscope control software, an AI model for differential counting and dedicated hardware to support AI inferencing. Images made using standard optical microscopes often contain complex backgrounds which can negatively impact efficient cell analysis. The 20 MP DFK 33UX183 microscope cameras feature a high-sensitivity CMOS sensor which delivers low-noise images with a high signal-to-noise ratio. The cameras' image pre-processing reduces any residual noise and enhances the edges and contours of the image, highlighting details and reducing image blur. Microscope x Hema's image algorithms extract features from the images using parameters such as shape, contour, irregular fragments, color and texture. The workflow is complete once the system has classified and counted the cells in the sample.
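
As a rough illustration of this kind of pre-processing and feature-extraction workflow, the sketch below denoises a frame, enhances edges and derives simple per-cell features. It is not aetherAI's implementation; the filename, filters and thresholds are assumptions chosen only to make the steps concrete.

    # Illustrative sketch only (not aetherAI's actual pipeline): denoise a
    # microscope frame, enhance edges/contours, then extract simple per-cell
    # features (area, circularity, mean color) for classification and counting.
    import cv2
    import numpy as np

    frame = cv2.imread("bone_marrow_smear.png")   # hypothetical input image

    # 1. Noise reduction that preserves cell edges
    denoised = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)

    # 2. Edge and contour enhancement via unsharp masking
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

    # 3. Simple segmentation of cell candidates
    gray = cv2.cvtColor(sharpened, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # 4. Per-cell feature vectors: shape, contour irregularity, color
    features = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 100:                            # skip fragments and noise
            continue
        perimeter = cv2.arcLength(c, closed=True)
        circularity = 4 * np.pi * area / (perimeter ** 2)
        x, y, w, h = cv2.boundingRect(c)
        mean_color = cv2.mean(sharpened[y:y + h, x:x + w])[:3]
        features.append((area, circularity, *mean_color))

    print(f"{len(features)} candidate cells ready for classification and counting")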

Differentiation and categorization of nucleated bone marrow cells using aetherAI's Microscope x Hema, which analyzes images made by DFK 33UX183 microscope cameras. Image: aetherAI

By easing the burden on healthcare professionals, aetherAI aims to improve the quality of medical diagnostics, "providing solutions for digital pathology and AI-powered diagnostic support." Company founder Dr. Joe Yeh stated, "The AI revolution will realize the ultimate value of digital medical images and bring healthcare to the next level."

The software's user interface displays the pre-processed image of bone marrow cells and provides a report on the percentage and number of each cell category. Image: aetherAI

Multi-sensor Data Fusion for In-line Visual Inspection

Published on December 7, 2020

Visual inspection is the cornerstone of most quality control workflows. When performed by humans, the process is expensive, prone to error and inefficient: pseudo-scrap and slippage rates of 10%-20% and production bottlenecks are not uncommon. Under the name IQZeProd (Inline Quality control for Zero-error Products), researchers at Fraunhofer IWU are developing new inline monitoring solutions to recognize defects as early in the production process as possible for a variety of materials such as wood, plastics, metals and painted surfaces. The system fuses data from a variety of sensors to recognize structural and surface defects as the components travel along the production line. The goal is to make industrial manufacturing processes more robust and sustainable by increasing process reliability and improving defect detection. At the heart of the system is the researchers' own Xeidana® software framework and a matrix of twenty industrial cameras. The researchers had very specific camera criteria: a global-shutter monochrome sensor, low-jitter real-time triggering, reliable data transmission at very high data rates and straightforward integration into their software framework. They selected GigE Vision-standard industrial cameras from The Imaging Source.

Image data from IQZeProd's twenty TIS GigE industrial cameras as well as data from hyperspectral and non-optical sensors are fused using the Xeidana software framework to enable an inline QC system with zero errors. Image: Fraunhofer IWU

While Xeidana's framework approach offers the flexibility necessary to process data from optical, thermal, multi-spectral, polarization or non-optical sensors (e.g. eddy current), many inspection tasks are completed using the data delivered by standard optical sensors. Project manager Alexander Pierer commented, "We often use data fusion to redundantly scan critical component areas. This redundancy can consist of scanning the same region from different perspectives, which simulates 'mirroring' used during manual inspection." To acquire the visual data needed to complete these tasks, the researchers created a camera matrix consisting of twenty GigE industrial cameras: nineteen monochrome and one color.

Nineteen monochrome industrial cameras gather data from critical component areas. Xeidana processes the redundant data to simulate the process known as 'mirroring', a technique commonly used for manual inspection. Image: Fraunhofer IWU

Monochrome Sensors: Optimal for Defect Detection

Due to their intrinsic physical properties, monochrome sensors deliver higher detail, improved sensitivity and less noise than their color counterparts. Pierer notes: "Monochrome sensors are sufficient for detecting defects that appear as differences in brightness on the surface. While color data is very important for us humans, in technical applications the color data very often does not provide additional information. We use the color camera for color tone analysis, by means of an HSI transformation, to detect color deviations that may indicate a problem with paint coating thickness."
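
The idea behind this color tone analysis can be sketched in a few lines: transform the color image into a hue-based color space (HSV below, which is closely related to HSI) and flag regions whose hue drifts from a reference tone. The reference value and tolerance are invented for illustration and are not the project's actual parameters.

    # Hedged sketch of hue-based color tone analysis (not the Xeidana code):
    # compare the mean hue of the inspected region against a reference tone.
    import cv2
    import numpy as np

    REFERENCE_HUE = 105.0   # hypothetical hue of a correctly coated surface
    MAX_DEVIATION = 6.0     # hypothetical tolerance (OpenCV hue range: 0-179)

    frame = cv2.imread("painted_surface.png")     # hypothetical input image
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)  # HSV is closely related to HSI
    hue = hsv[:, :, 0].astype(np.float32)

    deviation = abs(float(hue.mean()) - REFERENCE_HUE)
    if deviation > MAX_DEVIATION:
        print(f"Hue deviation {deviation:.1f}: possible coating-thickness problem")
    else:
        print("Color tone within tolerance")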

Task requirements and short exposure times meant that the engineers had very precise camera criteria. Pierer continues: "The main selection criteria were global shutter and real-time triggering with very low jitter, because we shoot the parts in motion with very short exposure times in the 10 µs range. The exposure of the camera and the Lumimax illumination (iiM AG), which is also triggered via hardware input, must be absolutely synchronous. We tested some of your competitors here and many of them had problems. It was also important to us that the ROI could already be limited to the relevant areas in the camera's firmware in order to optimize the network load for image transmission. Furthermore, we are dependent on reliable data transmission at very high data rates. Since the parts are inspected in throughput, image failures or fragmented image transmissions must not occur."
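
A back-of-the-envelope calculation shows why cropping the ROI in the camera's firmware matters for the GigE link; the resolutions and frame rate below are illustrative, not the project's actual figures.

    # Illustrative arithmetic: an on-camera ROI shrinks every transmitted frame
    # and therefore the sustained load on the GigE link (roughly 125 MB/s max).
    def frame_megabytes(width, height, bytes_per_pixel=1):   # 8-bit mono
        return width * height * bytes_per_pixel / 1e6

    FPS = 100                                   # hypothetical trigger rate
    full_frame = frame_megabytes(1920, 1200)    # hypothetical full sensor
    roi_frame = frame_megabytes(1920, 400)      # hypothetical relevant strip

    print(f"full frames: {full_frame * FPS:.0f} MB/s")   # ~230 MB/s: link saturated
    print(f"ROI frames:  {roi_frame * FPS:.0f} MB/s")    # ~77 MB/s: headroom left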

Motorized Zoom Cameras Allow for Quick Adjustments to FOV

Over the course of the project, the team built several systems, both for industrial settings and for demonstration and testing purposes. In the typical industrial setting, where the components under inspection remain constant, the imaging provided by the fixed-focus industrial cameras met the team's requirements. For the demo/test system, however, the researchers were using a number of diverse components, including metal parts, wooden blanks and 3D-printed plastics, which required cameras with an adjustable field of view (FOV). The Imaging Source's monochrome zoom cameras with integrated, motorized zoom offered this functionality.

Zoom cameras provide a rapidly adjustable field of view (FOV), allowing the demo system to scan components of diverse size and shape. Images: Fraunhofer IWU

Massively Parallel Processing Keeps Pace with Data Transmission and Enables Deep Learning

With over 20 sensors of varying kinds delivering data to the system, there is a data stream on the order of 400 MB/s to contend with. Pierer explains, "The system is designed for throughput speeds of up to 1 m/s. [...] Every three to four seconds, the twenty-camera matrix creates 400 images. Added to this is the data coming from the hyperspectral line camera and the roughness measurement system, all of which must be processed and evaluated within the 10 second cycle time. In order to meet this requirement, so-called massively parallel data processing is necessary, involving 28 computing cores (CPU) and the graphics processing unit (GPU). This parallelization enables the inspection system to keep pace with the production cycle, delivering an inline-capable system with 100% control." Optimized for modern multi-core systems, Xeidana's modular framework allows application engineers to quickly realize a massively parallel, application-specific quality control program using a system of plug-ins that can be extended with new functionality via a variety of imaging libraries.
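
The fan-out itself can be pictured in a few lines: distribute the per-image inspection tasks of one acquisition cycle across all available CPU cores. This is a conceptual sketch, not the Xeidana framework; the function names and numbers are placeholders.

    # Conceptual sketch of massively parallel per-image processing (not Xeidana):
    # one acquisition cycle of the camera matrix is spread over all CPU cores.
    from multiprocessing import Pool, cpu_count

    def inspect(image_id):
        # Placeholder for one processing chain: filtering, segmentation and
        # defect classification for a single camera image.
        return image_id, "OK"

    if __name__ == "__main__":
        image_ids = range(400)                    # images per acquisition cycle
        with Pool(processes=cpu_count()) as pool: # e.g. 28 cores in the project
            results = pool.map(inspect, image_ids)
        defects = [r for r in results if r[1] != "OK"]
        print(f"{len(results)} images inspected, {len(defects)} defect candidates")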

The system's data fusion capabilities can be used in several ways depending on what information is likely to provide the soundest results. In addition to the more standard machine vision inspection tasks, the team of researchers are currently working on integrating other non-destructive evaluation techniques such as 3D vision as well as additional sensors from the non-visible spectrum (e.g. x-ray, radar, UV, terahertz) to detect other types of surface and internal defects.

Processing network. Blue and yellow modules execute individual image processing tasks in parallel. Image: Fraunhofer IWU

Because Xeidana supports massively parallel processing, Deep Learning techniques can also be applied to defect detection for components whose inspection criteria are not readily quantified or defined. Pierer clarifies, "These methods are especially important for organic components with an irregular texture, such as wood and leather, as well as for textiles." Because machine learning techniques are sometimes tricky to apply in certain contexts (e.g. limited traceability of the classification decision and the inability to adjust algorithms manually during commissioning), Pierer adds, "We mostly rely on classical image processing algorithms and statistical methods of signal processing in our projects. Only when we reach our limits do we switch to machine learning."

Acknowledgement: The Imaging Source Europe GmbH is an active member of the industry working group of the IQZeProd project and is in close professional exchange with the research partners. The IGF project IQZeProd (232 EBG) of the German Research Association for Measurement, Control and Systems Engineering (DFMRS), Linzer Str. 13, 28359 Bremen, was funded by the AiF within the framework of the program for the promotion of joint industrial research (IGF) by the Federal Ministry of Economics and Energy, based on a resolution of the German Bundestag. The final report of IGF project 232 EBG is available to the interested public in the Federal Republic of Germany and can be obtained from the DFMRS, Linzer Str. 13, 28359 Bremen, or from Fraunhofer IWU, Reichenhainer Straße 88, 09126 Chemnitz.

MVTec Releases HALCON 20.11: Customers save up to 20%

Published on October 27, 2020

On November 20, 2020, MVTec will release HALCON 20.11. This release includes many new and improved features such as optimized technologies for code reading, OCR, 3D vision and deep learning. The great news is that this new version will be released simultaneously for the HALCON Steady and HALCON Progress editions, meaning HALCON Steady customers will have access to the many new features from the last three Progress releases, including anomaly detection, the generic box finder and optimized identification technologies.

To introduce customers to the new functionalities found in HALCON 20.11, MVTec will be offering a free webinar on two dates in order to accommodate schedules and time zones.

Limited-time Offer!

To celebrate the release of HALCON 20.11, customers will receive a 20% discount on all HALCON Steady 20.11 SDK products. This discount applies to new licenses, upgrades and deep learning add-ons for the HALCON Steady edition.

Additionally, each new SDK subscription for HALCON Progress received during the campaign period will be extended by 2 months at no additional charge (thereafter extended for the regular 12 months).

To take advantage of the special discounted price, please contact our HALCON sales staff to receive a quotation.

Please Note: The Imaging Source is a certified MVTec distributor for Germany, Austria, Switzerland and a number of other countries around the world. Please click here to find the authorized MVTec distributor for your country.

Looking for older posts?

We have been blogging since 2007. All old posts are in the archive and tagcloud.

About The Imaging Source

Established in 1990, The Imaging Source is one of the leading manufacturers of industrial cameras, video converters and embedded vision components for factory automation, quality assurance, medicine, science, security and a variety of other markets.

Our comprehensive range of cameras with USB 3.1, USB 3.0, USB 2.0, GigE and MIPI interfaces, together with our other innovative machine vision products, is renowned for high quality and the ability to meet the performance requirements of demanding applications.


Contact us