The Imaging Source blog

3D Vision Made Easy

Published on June 21, 2017

3D Data Acquisition: Passive and Active Techniques

Whether it is an industrial smart robot in the age of IIoT using three-dimensional data to orient itself in its workspace, a reverse vending machine counting empty bottles in a case, or a surface inspection system alerting personnel to the smallest material defect - three-dimensional information, acquired from the environment and the objects therein by modern 3D sensors, is central to many industrial applications of the future.

Currently, there are a variety of technologies on the market which can be used to collect three-dimensional information from a scene. One critical distinction among them, however, is between active and passive techniques: active techniques such as lidar (light detection and ranging) or time-of-flight sensors use an active light source to provide distance information; passive techniques rely solely upon the camera-acquired image data - similar to depth perception in the human visual system.

Too little computing power, high prices and imprecise results put the brakes on early 3D systems in many applications. Thanks to improvements in computer performance and high-resolution sensors, however, the technology is finding its way into more and more applications.

Each of these techniques has its advantages and disadvantages. Time-of-flight systems as a rule require less computational power and place few limitations on scene structure, but the maximum spatial resolution of current ToF systems (800 x 600 pixels) is relatively low, and their outdoor use is very limited due to infrared radiation from the sun. Newer sensors on the market, by contrast, have enabled passive multi-view stereo vision systems to offer very high spatial resolution; they are, however, processor intensive and perform poorly when confronted with low-contrast or repeated textures. Nevertheless, today's computational resources as well as optional pattern projectors make real-time operation of stereo systems at high spatial and depth resolutions possible. Precisely for this reason, passive multi-view stereo systems are among the most popular and flexible systems for the acquisition of 3D information.

Multi-view stereo systems consist of two or more cameras which simultaneously record data from a scene. If the cameras are calibrated and a real-world point in the scene can be located as a pixel in each camera, its three-dimensional position can be reconstructed from those pixels via triangulation. The highest level of precision which can be obtained depends on the distance between the cameras (baseline), the convergence angle between the cameras, the sensor's pixel size and the focal length. The essential steps of calibration and correspondence matching alone make great demands on the underlying image processing algorithms.
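As a rough illustration of the triangulation step, the following minimal Python sketch uses OpenCV to reconstruct a single 3D point from one pixel correspondence between two calibrated views. The file names and pixel coordinates are assumed values for illustration; this is not The Imaging Source SDK.

```python
# Minimal sketch: triangulating one 3D point from a calibrated camera pair.
# All file names and pixel values here are assumptions for illustration.
import cv2
import numpy as np

# 3x4 projection matrices P = K [R | t] from a prior calibration (hypothetical files).
P_left = np.load("P_left.npy")
P_right = np.load("P_right.npy")

# One corresponding pixel in each view, shape 2x1 as OpenCV expects (assumed values).
x_left = np.array([[412.0], [309.5]])
x_right = np.array([[371.2], [309.5]])

# Linear triangulation returns the point in homogeneous coordinates (4x1).
X_h = cv2.triangulatePoints(P_left, P_right, x_left, x_right)
X = (X_h[:3] / X_h[3]).ravel()  # Euclidean 3D point, in the calibration's units
print("Reconstructed 3D point:", X)
```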

Stereo Vision Systems in Real-time Use

Camera calibration determines the position and orientation of the individual cameras (external parameters) as well as the focal length, principal point and distortion parameters (internal parameters), the latter being significantly influenced by the selected lenses.

Camera calibration is usually performed using a two-dimensional calibration pattern, such as a checkerboard or a dot grid, in which control points can be easily and unambiguously detected and whose measurements - such as the distances between control points - are precisely known. Next, image sequences of the calibration pattern are recorded, with the pattern in varying positions and orientations. Image processing algorithms then detect the control points in the individual images: edge and corner detection algorithms serve as the basis when using a checkerboard pattern, for example, and blob detection algorithms when using a dot pattern. This yields a multitude of 3D-2D correspondences between the calibration object and the individual images. Based on these correspondences, an optimization process subsequently delivers the camera parameters.

Example detection results for a calibration pattern in various positions and orientations. Via the detected control points from the calibration pattern, the camera's internal and external parameters can be determined.
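For readers who want to experiment, the procedure described above can be sketched with OpenCV's standard calibration functions. The pattern size, square size and file names below are assumptions; this illustrates the general method, not The Imaging Source's own calibration tooling.

```python
# Minimal sketch of checkerboard-based calibration with OpenCV.
# Pattern dimensions, square size and file names are assumed values.
import glob
import cv2
import numpy as np

pattern_size = (9, 6)   # inner corners per row/column (assumed)
square_size = 25.0      # edge length of one square in mm (assumed)

# 3D control points of the flat pattern in its own coordinate system (z = 0).
obj_pts = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
obj_pts *= square_size

object_points, image_points = [], []
for fname in glob.glob("calib_*.png"):  # hypothetical image sequence; at least one image assumed
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
        object_points.append(obj_pts)   # the 3D-2D correspondences
        image_points.append(corners)

# The optimization delivers the internal parameters (camera matrix, distortion)
# and, per image, the external parameters (rotation and translation vectors).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```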

While the calibration is run only once (assuming the camera parameters do not change during system operation), the significantly more processor-intensive task of finding correspondences between the views must be carried out for each image in order to deliver the scene's 3D information. In the case of a stereo system, correspondences between two views are identified. In preprocessing, the images are usually undistorted by means of the internal distortion parameters. For a pixel in the reference image, there is a subsequent search for the corresponding point in the target image which represents the same 3D coordinate in the observed scene. Assuming Lambertian reflectance (i.e. a perfectly diffuse surface), local regions around corresponding points in the reference and target image should be very similar. Their similarity is therefore computed with a similarity measure; the normalized cross-correlation is one well-established example.
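As a concrete example of such a similarity measure, here is a minimal NumPy implementation of the normalized cross-correlation between two image regions (an illustrative helper, not part of any SDK mentioned here):

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized image regions.

    Returns a value in [-1, 1]; 1 means identical up to brightness and
    contrast, which makes the measure robust to simple illumination changes.
    """
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```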

Correspondence Points

Not every point in the target image needs to be considered: geometrically, the potentially corresponding points lie on a line, the so-called epipolar line. Correspondences therefore need only be searched for along these epipolar lines. To accelerate the search further, the undistorted input images are often rectified: they are transformed so that all corresponding epipolar lines share the same vertical image coordinate. Accordingly, for any given point in the reference image, one need only search along the line with the same vertical coordinate when looking for correspondences in the target image. While the algorithmic complexity of the search remains the same, the prior rectification allows for a more efficient implementation. Furthermore, if the minimum and maximum working distances of the scene are known, the search range along the epipolar lines can be restricted further in order to accelerate it.

Above: original image pair from The Imaging Source’s stereo vision system. Below: rectified image pair. For a point in the reference image (below, left), a corresponding point need only be searched for along the same image line in the target image (shown lower right as a red line for demonstration purposes).
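In OpenCV terms, this rectification step can be sketched as follows. The calibration results (K1, dist1, K2, dist2, R, T), the file names and the image size are assumed to be available from the steps above and are placeholders:

```python
# Sketch: stereo rectification so that corresponding epipolar lines share
# the same image row. All inputs are assumed results of a stereo calibration.
import cv2
import numpy as np

calib = np.load("stereo_calib.npz")  # hypothetical file holding the calibration
K1, dist1, K2, dist2 = calib["K1"], calib["dist1"], calib["K2"], calib["dist2"]
R, T = calib["R"], calib["T"]        # rotation/translation between the two cameras
image_size = (2592, 1944)            # sensor resolution (assumed)

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, dist1, K2, dist2, image_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, dist1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, dist2, R2, P2, image_size, cv2.CV_32FC1)

left_rect = cv2.remap(cv2.imread("left.png"), map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(cv2.imread("right.png"), map2x, map2y, cv2.INTER_LINEAR)
# A correspondence for pixel (u, v) in left_rect now lies on row v of right_rect;
# Q is kept for converting disparities into metric 3D points later.
```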

Once all candidate regions along the epipolar line have been compared with the reference region, the region with the greatest similarity is, as a rule (in the case of local stereo algorithms), selected as the final correspondence. When the correspondence search is complete, every pixel of the reference image for which an unambiguous correspondence has been found carries distance information (in a rectified stereo vision system) in the form of the disparity - in other words, the offset in pixels along the epipolar line. The result is referred to as the disparity image or disparity map.
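Tying this together with the ncc() helper above, a local, winner-take-all correspondence search for a single reference pixel might look like the following deliberately naive sketch (real systems vectorize this heavily; parameter values are assumptions):

```python
# Sketch: winner-take-all disparity for one reference pixel on a rectified
# pair, using the ncc() helper defined above. Illustrative only.
def match_pixel(left, right, u, v, half=4, max_disp=64):
    """Return the disparity with maximal NCC along the epipolar line (row v)."""
    ref = left[v - half:v + half + 1, u - half:u + half + 1]
    best_d, best_score = 0, -1.0
    # The search range can be restricted via known min/max working distances.
    for d in range(0, min(max_disp, u - half)):
        cand = right[v - half:v + half + 1, u - d - half:u - d + half + 1]
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```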

With the help of the previously calibrated internal and external parameters, the disparity can in turn be converted into actual metric distance information. If the distance is calculated for every point where a disparity could be estimated, the result is a three-dimensional model in the form of what is known as a point cloud. In the case of low-contrast or repetitive patterns in a scene, local stereo techniques can yield less reliable disparity estimates, since many points with low uniqueness will exist in the target view. Global stereo techniques can help in such cases but are considerably more processor intensive, as they place additional demands on the final disparity map (e.g. in the form of a smoothness constraint that penalizes discontinuities). Often it is easier here to project an artificial structure onto the object in order to make the correspondences unambiguous (projected texture stereo). The projector need not be calibrated with reference to the cameras, since it serves only as a source of artificial texture.

Visualization of the disparity estimate and the final point cloud using an SDK from The Imaging Source. Left: disparity map relative to the reference image. Middle: 3D view of the textured point cloud. Right: color-coded point cloud showing distance from the camera.
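As an open-source counterpart to the visualization above, the following sketch estimates a dense disparity map with OpenCV's semi-global matcher (which applies exactly the kind of smoothness penalty mentioned above) and reprojects it to a metric point cloud. It assumes the rectified images and the reprojection matrix Q from the rectification sketch:

```python
# Sketch: dense disparity estimation and reprojection to a point cloud.
# Assumes left_rect/right_rect and Q from the rectification sketch above.
import cv2
import numpy as np

left_gray = cv2.cvtColor(left_rect, cv2.COLOR_BGR2GRAY)
right_gray = cv2.cvtColor(right_rect, cv2.COLOR_BGR2GRAY)

# Semi-global matching: local matching costs plus smoothness penalties P1/P2
# on neighboring disparities - a compromise between local and global methods.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9,
                             P1=8 * 9 * 9, P2=32 * 9 * 9)
disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

# Q (from cv2.stereoRectify) encodes baseline and focal length; it converts
# pixel disparities into metric XYZ coordinates.
points = cv2.reprojectImageTo3D(disparity, Q)
valid = disparity > 0              # keep only pixels with a disparity estimate
point_cloud = points[valid]        # N x 3 array of 3D points
print("Point cloud size:", point_cloud.shape[0])
```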

Acceleration via GPUs

When high frame rates and high spatial resolution are needed, modern GPUs calculate the 3D information at significantly accelerated speeds. For the final integration of a stereo vision system into an existing environment, The Imaging Source relies on modular solutions: the acquisition of 3D data can be achieved using either The Imaging Source's own C++ SDK with optional GPU acceleration, in connection with cameras from The Imaging Source, or MVTec's HALCON programming environment. While the SDK allows for the easy calibration of stereo vision systems as well as the acquisition and visualization of the 3D data, HALCON offers additional modalities such as hand-eye calibration for the integration of robotic systems and additional algorithms such as the registration of CAD models against acquired 3D data.

The above article, written by Dr. Oliver Fleischmann (Project Manager at The Imaging Source), was published in the May edition of the German-language industry journal Computer&AUTOMATION under the title, "3D-Sehen leicht gemacht." Please click these links to find additional information about the IC 3D Stereo Camera System and about the IC 3D SDK.

The Imaging Source at Control 2017

Published on June 2, 2017

Automated quality assurance continues to grow in importance, and the visitor numbers at this year's Control in Stuttgart reflect this fact. From May 9-12, Control hosted almost 30,000 visitors from 106 countries (up by about 13% compared to last year) and enjoyed increased participation in its educational events. The Imaging Source supports its customers in the development of machine vision solutions to suit their needs and, to this end, displayed its new stereo 3D-vision system and 42 MP autofocus camera as well as its most up-to-date USB and GigE cameras featuring the latest Sony and ON Semiconductor sensors. Since efficient industrial machine vision relies on powerful machine-vision software, a live OCR demonstration of MVTec's MERLIC software also featured prominently.

Control 2017: Stereo 3D-vision system as well as zoom and autofocus cameras on display at Germany's premier trade show for quality assurance.

Anniversary with Nikon's Strategic Partner Program

Published on May 8, 2017

May marks the first anniversary of Nikon Metrology's designation of The Imaging Source as a key supplier within its Strategic Partnership program. Nikon Metrology provides precision instruments for optical inspection as well as visual and mechanical metrology solutions. In partnering with The Imaging Source, Nikon Metrology is able to offer its customers a wider palette of imaging hardware for inspection needs - enabling greater product precision and flexibility.

The Imaging Source: Key Supplier in Nikon Metrology's Strategic Partner Program

The anniversary provides an opportunity to express our appreciation for being part of a program which supports Nikon's goal of offering the widest range of metrology and microscope-imaging solutions possible. The Imaging Source looks forward to contributing to the continuing success of Nikon Metrology's Strategic Partner Program.

DMK 33GP031: Dropwatching for Inkjet Applications

Published on May 2, 2017

The following interview with The Institute for Printing (iPrint) gives specific application information about the DMK 33GP031 mentioned in our May 1, 2017 blog post. If you would like more information about the Inkjet Training course, please visit iPrint.

You chose the camera model DMK 33GP031.
Why was this particular model right for your application?

The DMK 33GP031 fulfills all of our demands regarding available drivers (our inkjet analysis system works primarily with Matlab drivers), synchronization, resolution, bit depth and the frame rates necessary for the analysis of inkjet systems - and this with, to our knowledge, an unbeatable price-to-performance ratio. For many inkjet analyses (especially volume measurement and measurements made using multiple independent color channels), multiple cameras are necessary. Thanks to the Gigabit Ethernet interface, we can equip all of our controllers with minimal effort.

What specifically were the cameras used for?
During the Inkjet Training course, we use the DMK 33GP031 for inkjet droplet analysis (dropwatching). The camera exposure, droplet generator and flash diode are synchronized using an FPGA-based controller. Inkjet drops have a velocity of approximately 1-20 m/s, so very short flash durations are required in order to capture the smallest high-speed drops with minimal motion blur. Even with a flash duration of only a microsecond, at a droplet speed of 10 m/s the droplet image is smeared across 10 µm. By controlling the flash durations, several droplet exposures can be stacked. By illuminating a droplet twice at different points during its flight, droplet speeds and flight angles can be determined independently of the background fluctuations influencing droplet velocity (jitter).
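To make the arithmetic behind these numbers explicit, here is a tiny Python sketch; the flash delay and measured offset in the second half are hypothetical values, not figures from the interview:

```python
# Sketch: the motion-blur arithmetic from the interview (first half uses the
# interview's values; the double-flash values below are hypothetical).
drop_velocity = 10.0     # m/s, droplet speed named in the interview
flash_duration = 1e-6    # s, one-microsecond strobe

# Distance the droplet travels while the flash is on = apparent elongation.
blur = drop_velocity * flash_duration
print(f"Image elongation: {blur * 1e6:.0f} um")  # -> 10 um, as stated

# Double-flash velocity estimate: two exposures separated by a known delay.
flash_delay = 50e-6       # s (hypothetical)
measured_offset = 500e-6  # m, droplet displacement between exposures (hypothetical)
velocity = measured_offset / flash_delay
print(f"Estimated droplet velocity: {velocity:.1f} m/s")
```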

In order to capture and measure details of droplet formation, resolutions of approximately 1-2.5 µm/pixel are necessary. At 2.2 µm, the DMK 33GP031's pixels are already quite small, so we were able to use an inexpensive standard lens with minimal magnification in order to reach the resolution we needed. Additionally, grayscale information is valuable for interpolating contours or for stacking multiple measurements. With 12-bit depth, the DMK 33GP031 offers adequate intensity information for most inkjet analyses. Thanks to the 5 MP resolution, we can maintain a large field of view even at high resolutions.
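As a rough check on these optics numbers (my arithmetic, not iPrint's), the required optical magnification follows directly from the pixel size and the target object-space resolution:

```python
# Sketch: required optical magnification for a target object-space resolution.
# Object-space sampling = pixel size / magnification.
pixel_size_um = 2.2                    # DMK 33GP031 pixel size (from the interview)
for target_um_per_px in (1.0, 2.5):    # resolution range named in the interview
    magnification = pixel_size_um / target_um_per_px
    print(f"{target_um_per_px} um/pixel -> {magnification:.2f}x magnification")
# -> roughly 0.9x to 2.2x, i.e. close to 1x, which is why an inexpensive
#    standard lens with minimal magnification sufficed.
```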

In comparison to the previous model, the DMK 33GP031 also reaches higher frame rates at 12-bit depth, and it supports controlled exposure times via the trigger, which allowed for greatly simplified and efficient synchronization with our analysis system. A further significant advantage of the camera is its high frame rate when using a reduced region of interest (ROI): at 2592 x 24 pixels, we reached frame rates of up to 500 Hz; when used with two cameras and overlaid measuring techniques, we were able to verify the presence of all droplets at frequencies of over 10 kHz.

How do the images/data you acquire from the cameras figure into your application (e.g. quality control etc.)?
The captured images enable us to adjust the settings of the printing system or the printer itself, which means printing with higher quality and efficiency. Because each piezo inkjet printhead requires a slightly different working voltage to produce the same droplet velocity, printhead manufacturers specify a nominal voltage with which a nominal droplet size and velocity can be achieved.

In order to achieve the same results with one's own printing fluid or ink as the test ink achieves at the specified nominal voltage, we use droplet analysis to test several voltages until we reach settings that give us the desired printing properties.

What software did you use in connection with the cameras?
We used Matlab, IC Capture and IC Measure (we also use the DMK 23GP031 on our microscopes).

How was your experience with the cameras and the software?
The performance of The Imaging Source cameras is very good, and the high-performance driver package enables simple, efficient integration. Occasionally, the Matlab interface had some problems during longer live previews on lower-performance PCs. In such cases, we use IC Capture for the live preview before the measurements are made; in the settings, it is possible to switch from the live preview to Matlab. IC Capture is also a helpful tool for the hassle-free recording of image sequences and for testing camera settings and performance. We also use IC Measure in conjunction with our microscopes; with its variety of measurement capabilities and its compatibility, we find it to be an excellent tool for microscopy.

Mind the Gap: Dropwatching with Machine Vision

Published on May 1, 2017

The following is an overview of an inkjet technology workshop and its use of the DMK 33GP031. For more technical information about how the DMK 33GP031 was used in the iPrint workshop lab, please see the interview here.

Since the 1970s, when inkjet printing first became commercially viable, the technology has commanded the continued interest of researchers. In short, it is a non-contact method for the digital delivery and positioning of extremely small volumes of material or fluid with precision and at high frequency. As such, inkjet technology is fundamentally a form of additive manufacturing; it is currently used in printed electronics, direct printing, ceramics and textiles, and even with experimental jettable fluids and substrates such as live-cell printing for biofabrication, organic semiconductors and organic light-emitting diodes. The Institute for Printing (iPrint) at the College of Engineering and Architecture at the University of Applied Sciences of Western Switzerland began an inkjet training course in 2015 which offers multidisciplinary professionals a week of theoretical and practical training with the various components of inkjet systems.

DMK 33GP031: GigE monochrome camera used for dropwatching during iPrint's Inkjet Training courses.

During the course, participants learn that a key aspect of the printing process is determining the print quality delivered by an inkjet printhead. This monitoring process begins with the properties of the jetted drop: drop velocity, jetting direction, presence of satellite drops and drop volume. These droplet properties must be tightly controlled in order to achieve and maintain the precision required by the application at hand. Machine vision systems are used to image the droplets during printing, which allows for the continuous measurement of droplet properties - a process known as dropwatching. There are many possible imaging configurations for dropwatching, each with its own advantages: nozzle plate analysis, multi-camera dropwatching, overlaid imaging, multi-wavelength dropwatching or dropwatching in the printing gap. For their course, iPrint selected The Imaging Source's DMK 33GP031 industrial monochrome cameras. Regardless of which method is preferred, dropwatching is critical for performance optimization, print system (or printhead) analysis, reliability verification and monitoring of the printing process.

Dropwatching: image of jetted droplets with ligaments captured by the DMK 33GP031, GigE monochrome camera.

As iPrint notes on their website, [inkjet research is] "by its very nature, multidisciplinary as it requires cutting-edge skills from various domains, notably those of mechanical engineering, chemistry and nanotechnologies." Typical participants are highly skilled engineers or chemists who are experts in one area of inkjet but who might have only a limited working knowledge of other aspects of the technology; these participants are looking to broaden their knowledge base and develop a cross-system awareness of the entire process.


About The Imaging Source

Established in 1990, The Imaging Source is one of the leading manufacturers of industrial cameras, frame grabbers and video converters for production automation, quality assurance, logistics, medicine, science and security.

Our comprehensive range of cameras with USB 3.0, USB 2.0, GigE, FireWire 400 and FireWire 800 interfaces, along with our other machine vision products, is renowned for being innovative, of high quality and for constantly meeting the performance requirements of demanding applications.
