The Imaging Source blog

New 5 MP Polarization Cameras: A New Tool for Industrial Imaging

Published on September 27, 2019

Sony's Polarsens™ 5.1 MP global-shutter CMOS image sensors (IMX250MZR / IMX250MYR) capture visual data which cannot be obtained using standard monochrome and color sensors. The Imaging Source's new USB 3.0 and GigE polarization cameras feature Polarsens technology, which uses four-directional (0°, 45°, 90°, 135°) nanowire micro-polarizers placed in front of each 2x2 pixel array (calculation unit) to deliver multi-directional polarized images.

Illustration of Sony's Polarsens sensor structure.
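Because each 2x2 calculation unit carries one pixel per polarization angle, the four channels can be recovered from a raw frame by simple subsampling. A minimal NumPy sketch; note that the angle-to-pixel mapping used here is an assumption about the layout and should be verified against the sensor documentation:

```python
import numpy as np

def split_polarization_channels(raw):
    """Split a Polarsens-style raw frame into its four polarization
    channels by 2x2 subsampling. The angle-to-position mapping below
    (90/45 on the top row, 135/0 on the bottom) is an assumed layout;
    check your camera's documentation for the actual arrangement."""
    return {
        90:  raw[0::2, 0::2],
        45:  raw[0::2, 1::2],
        135: raw[1::2, 0::2],
        0:   raw[1::2, 1::2],
    }

# Example with a synthetic 4x4 frame: each channel comes out 2x2,
# i.e. a quarter of the raw resolution.
frame = np.arange(16).reshape(4, 4)
channels = split_polarization_channels(frame)
assert all(c.shape == (2, 2) for c in channels.values())
```

Each channel has a quarter of the sensor's resolution, which is why a 5.1 MP Polarsens sensor yields four ~1.3 MP polarization images.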

Many materials, such as plastics, glass, metals and liquids display intrinsic polarization properties which can result in glare and other image artifacts in standard intensity images.

Standard intensity images of transparent objects often yield little useful visual data.

The sensors' polarization filters exploit these polarization properties to visualize material stress and surface scratches, as well as to reduce glare, improve edge detection and enhance contrast in low-contrast materials.

Images from the DZK 33UX250: (left) AoLP processing of the polarization data with HSV color mapping shows residual stress in plastic; (right) images using DoLP processing of the polarization data to reduce glare and improve contrast for defect and presence inspection.

Standard intensity image of black granite pyramid shows low contrast (left). DoLP processing of the polarization data (middle) adds contrast; AoLP processing of the polarization data with HSV color mapping (right) from DZK 33UX250 adds additional image information which can be used for effective segmentation.

On-chip, four-channel polarization: Users can isolate specific channels for additional image processing.
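The DoLP and AoLP quantities referenced above are standard functions of the four channel intensities via the linear Stokes parameters. A minimal sketch of that computation (not The Imaging Source's implementation):

```python
import numpy as np

def dolp_aolp(i0, i45, i90, i135):
    """Degree (DoLP) and angle (AoLP) of linear polarization computed
    from the four channel intensities via the linear Stokes parameters."""
    s0 = i0 + i90                      # total intensity
    s1 = i0 - i90                      # 0 deg minus 90 deg component
    s2 = i45 - i135                    # 45 deg minus 135 deg component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aolp = 0.5 * np.arctan2(s2, s1)    # radians
    return dolp, aolp

# Fully horizontally polarized light: all intensity passes the 0 deg
# filter, none passes 90 deg, and half passes 45 deg and 135 deg.
i0, i45, i90, i135 = (np.array([1.0]), np.array([0.5]),
                      np.array([0.0]), np.array([0.5]))
dolp, aolp = dolp_aolp(i0, i45, i90, i135)   # DoLP -> 1.0, AoLP -> 0.0
```

A common way to build the HSV visualizations shown in the images above is to map AoLP to hue, DoLP to saturation and total intensity to value; this mapping is a typical convention, not necessarily the exact processing used for the article's images.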

The 5.1 MP cameras are available as color and monochrome variants with either a GigE (max. 24 fps) interface or a faster USB 3.0 interface (max. 75 fps). For additional information, please have a look at our whitepaper.

MVTec Releases HALCON 19.11: Customers save 50%

Published on September 26, 2019

On November 15th, MVTec will release HALCON 19.11, the latest version of the HALCON Progress Edition. With this release, MVTec has equipped its comprehensive machine vision software with many new and optimized functions that advance the level of machine vision technology available to OEMs, integrators, and end-users worldwide.

Deep-learning-based inspection tasks, for example, can be implemented much more efficiently via HALCON 19.11's anomaly detection algorithm, which now requires only a handful of training images. Additionally, HALCON's new generic box finder is able to determine the exact position and size of arbitrary boxes within 3D point clouds. HALCON 19.11 further combines groundbreaking deep learning functions with improvements to core technologies such as code reading and 3D vision. These and many other innovations are included in the latest version of the HALCON Progress Edition.
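HALCON's anomaly detection algorithm is proprietary, and the sketch below is not it. It only illustrates the general idea behind few-shot anomaly detection: learn statistics from a handful of defect-free images and flag pixels that deviate from them. All names are illustrative.

```python
import numpy as np

def fit_normal_model(good_images):
    """Fit per-pixel mean/std from a handful of defect-free images.
    NOT HALCON's algorithm: a minimal illustration of training an
    anomaly detector from very few 'good' samples."""
    stack = np.stack(good_images).astype(float)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def anomaly_map(image, mean, std):
    """Per-pixel anomaly score: deviation from the learned statistics."""
    return np.abs(image.astype(float) - mean) / std

# Five synthetic 'good' 8x8 images (uniform gray plus sensor noise).
good = [np.full((8, 8), 100.0) + np.random.default_rng(i).normal(0, 1, (8, 8))
        for i in range(5)]
mean, std = fit_normal_model(good)

test = good[0].copy()
test[4, 4] = 200.0                    # inject a bright defect
scores = anomaly_map(test, mean, std)
assert scores.argmax() == 4 * 8 + 4   # the defect pixel scores highest
```

Real algorithms learn far richer representations than per-pixel statistics, but the workflow (train on few good samples, score deviations) is the same idea.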

Special Offer - 50% Discount on the First Year

To celebrate the release of HALCON 19.11, we are offering an upgrade campaign from October 1 to December 15, 2019: HALCON Steady customers are eligible to receive a 50% discount on every newly purchased HALCON Progress SDK subscription during the first year.

The HALCON Progress SDK subscription includes the complete deep learning functionality and offers you and your company the opportunity to use the latest features of HALCON 19.11 at a reduced price.

To take advantage of the special discounted price, please contact our HALCON support staff.

TIS to Make its Debut at Embedded VISION Europe (EVE) 2019

Published on September 25, 2019

On October 24 - 25, 2019, The Imaging Source will attend Embedded VISION Europe (EVE) in Stuttgart, Germany, where our technical sales and project managers will showcase our latest product developments.

Join us at Embedded VISION Europe (EVE) 2019.

The Imaging Source will present its new MIPI / CSI-2 module lineup together with a novel FPD-Link III™ serializer / deserializer bridge. The new product line features a variety of industrial sensor modules and supported platforms. The compact camera modules offload demosaicing, color correction and other post-processing tasks to the ISP of the embedded target platform.

For applications requiring longer cable lengths, The Imaging Source offers a bridge solution using the FPD-Link protocol. The FPD-Link III bridge allows cable lengths of up to 15 m with simultaneous transmission of data, control signals and power over a single compact coaxial cable.

The Imaging Source provides embedded system solutions based on the most powerful embedded platform currently on the market: the NVIDIA Jetson family (TX2, Nano and AGX Xavier). In addition to its powerful GPU, the Jetson offers a dedicated ISP which processes 12 CSI-2 camera lanes at up to 1.5 Gbps per lane and up to six simultaneous camera streams.
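A quick back-of-the-envelope check of what those lane figures imply, using only the numbers quoted above (real CSI-2 links also carry protocol overhead, so usable throughput is somewhat lower):

```python
# Lane figures quoted in the article.
LANES = 12
GBPS_PER_LANE = 1.5
MAX_STREAMS = 6

aggregate_gbps = LANES * GBPS_PER_LANE               # 18.0 Gbps total
lanes_per_stream = LANES // MAX_STREAMS              # 2 lanes per camera
per_stream_gbps = lanes_per_stream * GBPS_PER_LANE   # 3.0 Gbps per camera

# For scale: a 1080p, 8-bit monochrome stream at 60 fps needs roughly
stream_gbps = 1920 * 1080 * 8 * 60 / 1e9             # ~1.0 Gbps, fits easily
```

So with six cameras sharing two lanes each, every stream still has roughly 3 Gbps of raw lane bandwidth available.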

AI Revolutionizes Markerless Pose Extraction from Videography

Published on August 9, 2019

Which neural circuits drive adaptive motor behavior? How are these behaviors represented in the neural code? Researchers at the Mathis Lab (The Rowland Institute at Harvard University) are unlocking the answers to these questions by studying brain/behavior interaction. The team, led by Mackenzie Mathis, "[aims] to understand how neural circuits contribute to adaptive motor behaviors." The challenge is to relate specific brain events to particular behaviors. Using mice as a model, the scientists are tracking behavioral events and corresponding brain activity using high-speed videography provided by The Imaging Source DMK 37BUX287 cameras and machine learning algorithms from their own open-source toolbox, DeepLabCut.

Researchers at Mathis Lab use machine learning tools and optogenetics to understand how neural circuits contribute to adaptive motor behaviors. Image credit: Cassandra Klos

Fundamentally, the researchers must be able to accurately and rigorously track mouse behavior and deliver quantitative data to describe animal movement. "We care how animals adapt to their environment, so watching their motor actions is a great way to start to interpret how the brain does this. Therefore, the first step in our research is to observe the animals during learning new tasks," says Dr. Mathis. Her research team relies on a multi-camera system using DMK 37BUX287 cameras. Their test subjects are fast: "[...] mice can reach out and grab an object in about 200 ms, so we wanted high frame rates and good resolution" says Dr. Mathis.
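A quick bit of arithmetic shows why the 200 ms reach demands high frame rates. The rates below are illustrative: the DMK 37BUX287 supports several hundred fps at reduced resolutions, but the exact modes should be checked against the camera's spec sheet.

```python
def frames_captured(fps, event_seconds):
    """Number of frames a camera records during a behavioral event."""
    return int(fps * event_seconds)

# A 200 ms mouse reach, as described by Dr. Mathis:
assert frames_captured(500, 0.2) == 100  # ~100 frames resolve the reach
assert frames_captured(30, 0.2) == 6     # a standard 30 fps camera sees 6
```

At consumer frame rates, the entire reaching movement would be captured in only a handful of frames, far too few for detailed pose tracking.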

Videography provides an efficient method of recording animal behavior, but pose extraction (i.e. the geometric configuration of multiple body parts) has been a problem for researchers for years. In human studies, state-of-the-art motion capture is achieved by using markers to track joints and limb movement, or very recently, by new deep learning methods. With animals, however, such methods are impractical for a variety of reasons. Until now, this meant animal behavior was tracked using manually digitized videography (i.e. humans coding videos frame by frame), a labor-intensive process which was often imprecise and could add hundreds of hours to research projects.

Currently, DeepLabCut supports a two-camera setup: two DMK 37BUX287 cameras are used to capture high-speed videography whose frames are used for markerless 3D pose extraction. Image credit: Cassandra Klos

In order to automate pose extraction, Dr. Mathis's team developed DeepLabCut: open-source software for markerless pose estimation of user-defined body parts. Based on the (human) pose estimation algorithm, DeeperCut, the researchers use deep-convolutional-network-based algorithms which they have specifically trained for the task. In a paper published in Nature Neuroscience, the authors write that the team was able to dramatically reduce the amount of training data necessary by "adapting pretrained models to new tasks [...through] a phenomenon known as transfer learning." DeepLabCut has become so robust and efficient that even with a relatively small number of images (~200), "the algorithm achieves excellent tracking performance". Many scientists are hailing the development of the software as a "game changer". Mathis Lab also uses The Imaging Source's IC Capture and has published a camera control API for The Imaging Source cameras on GitHub.

DeepLabCut automatically tracks and labels (red, white and blue dots) a mouse's movements. Image credit: Mackenzie Mathis

Machine Vision Technology Forum: Register Now

Published on July 18, 2019

October 8 marks the start of STEMMER IMAGING's fourth Machine Vision Technology Forum. Approximately 40 leading machine vision manufacturers will present their latest developments and state-of-the-art technology, for both newcomers and pros, in a series of presentations and exhibitions. Specifically, attendees can level-up their machine vision expertise and speak with experts from seven areas: IIoT, Embedded Vision, 3D Technology, Machine Learning, Spectral Imaging, Future Trends and Fundamentals. During the five-city European tour, The Imaging Source will present its latest embedded vision solutions and give a talk on the advantages of FPD-Link III - a technology which allows for cable lengths of up to 15 m.

Machine Vision Technology Forum: Tour 2019 features the latest developments in new and emerging applications for newcomers and pros alike.

Registration has already begun, so please click on the links above for additional information regarding each of the five events.

Looking for older posts?

We have been blogging since 2007. All older posts can be found in the archive and tagcloud.

About The Imaging Source

Established in 1990, The Imaging Source is one of the leading manufacturers of industrial cameras, frame grabbers and video converters for production automation, quality assurance, logistics, medicine, science and security.

Our comprehensive range of cameras with USB 3.1, USB 3.0, USB 2.0 and GigE interfaces, together with our other innovative machine vision products, is renowned for high quality and the ability to meet the performance requirements of demanding applications.

Automated Imaging Association ISO 9001:2015 certified

Contact us