The Imaging Source blog

MERLIC 4: New Release on February 15

Published on February 13, 2019

On February 15, 2019, MVTec will release the latest version of its all-in-one software, MERLIC. MERLIC is a software product for quickly building machine vision applications without any need for programming. An image-centered user interface and intuitive interaction concepts provide an efficient workflow, saving both time and cost.

MERLIC provides powerful tools to design and build complete machine vision applications with a graphical user interface, integrated PLC communication, and image acquisition based on industry standards. All standard machine vision tools such as calibration, measuring, counting, checking, reading and position determination are included in MVTec MERLIC.

New Release Offers Parallelization, PLC Integration and 3D Vision

MERLIC 4, the successor to MERLIC 3, will be released on February 15, 2019. The release addresses strong customer demand for parallelization, PLC integration and 3D vision.

The new version provides optimized process integration via Hilscher cifX cards which are able to communicate with common fieldbus and real-time Ethernet industrial protocols such as EtherCAT or PROFINET. This makes it possible to seamlessly integrate machine vision systems running MERLIC with a programmable logic controller (PLC).

Moreover, parallel processing and the parallel execution of different tools are major highlights of the new version, along with a set of new 3D vision tools.

Redesigned Tool Flow

The completely redesigned tool flow provides an even clearer and more intuitive user interface than before. Some obsolete features are no longer included in MERLIC 4: for example, the MERLIC engine has been removed, and the interface for custom tools has changed. Customers who want to create a custom tool from a HALCON procedure should now contact the usual support channels, which will provide a converted MERLIC tool on demand.

As with MERLIC 3, MERLIC 4 will be available in the three editions "Standard", "Advanced" and "Professional". Additionally, it will be possible to purchase the multiple/remote frontends functionality as an add-on to the standard version.

For current MERLIC customers, an upgrade from all previous MERLIC versions to MERLIC 4 is offered free of charge. If you have any questions regarding whether an upgrade is suitable for your needs or if you need assistance in your decision to buy MERLIC, please feel free to contact us.

TIS to Make its Debut at Embedded World 2019

Published on February 11, 2019

From February 26 - 28, 2019, The Imaging Source will attend embedded world for the first time. Our technical sales and project managers will be staffing the booth (Hall 3A, Booth 417) in Nuremberg, Germany to showcase our latest product developments.

<strong>embedded world 2019:</strong> Join us at the leading international fair for embedded systems.

The Imaging Source will present its new MIPI/CSI-2 module lineup together with a novel FPD-Link III™ serializer/deserializer bridge. The new product line features a variety of industrial sensor modules and supported platforms. The compact camera modules offload demosaicing, color correction and other post-processing tasks to the ISP of the embedded target platform.

For applications where longer cable lengths are required, The Imaging Source offers a bridge solution using the FPD-Link protocol. The FPD-Link III bridge allows for cable lengths of up to 15 m and the simultaneous transmission of data, control channels (e.g. I2C or CAN) and power over a single compact coaxial cable.

The Imaging Source provides embedded system solutions based on the most powerful embedded platform currently on the market: the NVIDIA Jetson TX1/TX2. In addition to its powerful GPU, it offers a dedicated ISP which processes 12 CSI-2 camera lanes at up to 1.5 Gbps per lane and supports up to six simultaneous camera streams.

In addition to the new MIPI/CSI-2 module lineup, "AI Labs", a subsidiary of The Imaging Source dedicated to automation intelligence and artificial intelligence, will introduce their first product, Pick & Load, to the public.

Pick & Load is an affordable and efficient solution for the automation of machine loading and unloading tasks. Even small and medium-sized enterprises can now automate their CNC loading and unloading tasks. A compact 3D sensor with active illumination is integrated with an embedded system and smart software to deliver a cost-effective solution with low power dissipation and a compact form factor. With a minimum amount of time and effort, Pick & Load can be configured to load and unload unpalletized parts from tables, pallets or drawers.

Please contact our Sales Department if you require an entry pass to embedded world 2019.

TIS Drives TUfast: Students Win Driverless Challenge

Published on February 8, 2019

Illustration showing sensor coverage for the muc018: Various sensing techniques including camera-based data provide the 360-degree coverage necessary for autonomous driving.

For over 15 years, TUfast, a student initiative at the Technical University of Munich, has been taking part in "Formula SAE" competitions. In 2010, a spin-off team for high-efficiency vehicles and sustainable mobility was introduced: TUfast Eco Team. After winning three first prizes with the debut of their first UrbanConcept car (muc017) in 2017, the team wanted to take on a new challenge: autonomous driving. The Imaging Source sponsored DFK 33GX265e cameras and accessories to help the team get to the finish line.

Autonomous Driving has its Challenges

The Shell Eco-marathon hosts three distinct races: the Shell Eco-marathon, the Drivers' World Championship, and the Autonomous UrbanConcept Competition. TUfast's goal was to participate not only in the newest event (Autonomous UrbanConcept) but also in the traditional efficiency races with a driver. Limited to one car, the team had to develop a vehicle concept that would allow them to use all the equipment necessary to compete in the autonomous driving challenges while keeping the vehicle's overall weight low. To fulfill these requirements, TUfast created an innovative vehicle concept using a modular design: the equipment necessary for autonomous driving could be completely and efficiently removed to reduce weight for the efficiency races.

Stereo cameras make the rounds: Student making preliminary tests using camera-based data in the muc018.

The team took part in several autonomous driving tasks at the Shell Eco-marathon including the "Parking Challenge" where the team relied solely on vision-based recognition of the parking blocks. The software maneuvered the vehicle into the optimal parking position using the camera-based data with an ultrasound system navigating the last few centimeters. The TUfast Eco Team was the only team to successfully complete the task. Overall, TUfast earned a very respectable second place for the Autonomous UrbanConcept competition, trailing the first-place recipient by only a few points.

A security driver is required for all autonomous driving challenges. White gloves enable judges to easily see if the driver makes contact with the steering wheel.

After the Event is Before the Event

On the heels of the Shell Eco-marathon in London, the students are currently planning next year's competitive season. The students will further use the stereo camera system to explore new, more advanced topics such as depth estimation and stereo SLAM (visual odometry via simultaneous localization and mapping) for the 2019 racing season.
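The depth estimation the students plan to explore reduces, for a calibrated and rectified stereo pair, to simple triangulation: depth equals focal length (in pixels) times the baseline between the two cameras, divided by the disparity of a matched feature. A minimal sketch with illustrative numbers (the parameter values are assumptions, not the actual calibration of the team's DFK 33GX265e cameras):

```python
# Triangulation for a rectified stereo pair: depth is the focal length
# (in pixels) times the baseline (distance between the two cameras, in
# meters), divided by the disparity (pixel shift of a feature between
# the left and right images).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return the depth in meters for one matched feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 12 cm baseline, 16 px disparity -> 6 m away:
print(depth_from_disparity(800.0, 0.12, 16.0))
```

Nearby objects produce large disparities and distant ones small disparities, which is why depth resolution degrades with range; a wider baseline counteracts this at the cost of a larger blind zone close to the vehicle.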

Machine Vision Cameras Guide CTA's Prototype Telescope

Published on January 8, 2019

Artist rendering of CTA's Large-Size Telescope Array. <i>Image: Akihiro Ikeshita, Mero-TSK, International</i>

On October 10, 2018, the first prototype Large-Size Telescope (LST-1) of the Cherenkov Telescope Array (CTA) project was officially inaugurated at the northern array site situated at the Observatorio Roque de los Muchachos (Canary Islands). Two months later on December 19, 2018, the prototype telescope delivered its first images. This next-generation telescope serves as the working prototype for planned arrays in the northern and southern hemispheres. Over 100 telescopes will eventually be built for the arrays which together will comprise the CTA Observatory (CTAO). The University of Tokyo, a consortium member and major contributor to the CTA project, worked with The Imaging Source to supply cameras for the telescope's Active Mirror Control (AMC) system.

The large number of telescopes planned for the arrays will deliver unprecedented sensitivity (10 times that of current systems) and accuracy in the detection and imaging of high-energy gamma rays. Using a design based on current-generation gamma-ray detectors called Imaging Air Cherenkov Telescopes (IACT), the LST-1 has a 23 m reflector covered by 198 hexagonal mirror segments. In order to maintain optimal accuracy, each of the 198 mirrors must maintain a precise angle in relation to the main camera and its 265 photomultiplier tube modules, which sit 28 m above the reflector.

Fig. 2, LST-1 construction phase: hexagonal mirror segment with cutaway corner (bottom-most corner) for the CMOS guidance camera. <i>Image: T. Inada (ICRR, U-Tokyo)</i>

Accurate Mirror Angles via Machine Vision

Project requirements specify that rapid telescope repositioning toward a desired target area be accomplished in under 20 seconds. Additionally, weather conditions and the reflector's own weight (approx. 50 tons) cause deformations in the dish and the camera support structure which affect the alignment between the 198 mirrors and the telescope's main camera. These factors make an efficient and reliable system for mirror adjustment (i.e. focus) critical. When the telescope was first designed, several methods were considered including a laser-scan system and gyroscope system. In the end, none of these methods proved feasible due to price and performance concerns.

Scientists from the University of Tokyo were tasked with delivering a viable, cost-effective solution. They turned to machine vision and selected The Imaging Source's GigE monochrome cameras for the project. The GigE cameras feature global shutter sensors with 1.2 MP resolution. The CMOS cameras' compact and robust form meant they could be easily positioned in an IP67 case for additional protection from the elements. Once encased, the CMOS cameras were then mounted in a cutaway corner of each mirror (fig. 2 and below right). The reference point of each mirror is defined by the Optical Axis Reference Laser (OARL), whose wavelength is in the near infrared region. The CMOS camera in each mirror measures the position of the OARL's light spot on the main camera's target in order to identify the current mirror direction with respect to the optical axis.
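The spot-position measurement described above amounts to locating a bright laser spot in a monochrome frame, typically via an intensity-weighted centroid. A minimal sketch of that idea (the nested-list frame and threshold value are illustrative assumptions, not the project's actual code, which operates on frames from the GigE cameras):

```python
# Intensity-weighted centroid of a bright spot in a monochrome image,
# represented here as a plain nested list of pixel intensities.
def spot_centroid(image, threshold):
    """Return the (x, y) centroid of pixels at or above `threshold`."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value >= threshold:
                total += value
                sx += x * value
                sy += y * value
    if total == 0.0:
        return None  # no spot found above the threshold
    return (sx / total, sy / total)

# A small synthetic frame with a bright spot around x=2, y=1..2:
frame = [
    [0, 0,   0, 0],
    [0, 0, 200, 0],
    [0, 0, 100, 0],
    [0, 0,   0, 0],
]
print(spot_centroid(frame, 50))  # -> (2.0, 1.333...)
```

Weighting by intensity gives sub-pixel precision, which matters here: a fraction of a pixel on the main camera's target corresponds to a small but significant angular error at the mirror.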

Mirror segment back

Each camera is connected to a board computer via the GigE interface. When the telescope is moved to a new target, the mirrors are adjusted based on look-up tables which store the correct position of each mirror. Because the look-up tables are pre-defined, however, they do not take into account structural changes caused by weather and the telescope's own weight. On the basis of the OARL position captured by the CMOS camera, a corrected position is calculated and sent to the actuators on the back of each mirror (image right) so that each mirror can be adjusted to the required angle.
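The two-stage adjustment described above can be sketched as a coarse position from the look-up table plus a proportional correction derived from the measured spot offset. The function name, the two-axis representation and the gain value below are illustrative assumptions, not the observatory's actual control code:

```python
# Two-stage mirror adjustment: a coarse actuator position taken from the
# pre-defined look-up table, plus a proportional correction derived from
# the offset between the measured OARL spot and its target position.
def corrected_actuator_position(lut_position, measured_spot, target_spot, gain):
    """Return the actuator position adjusted for the measured spot offset."""
    dx = target_spot[0] - measured_spot[0]
    dy = target_spot[1] - measured_spot[1]
    return (lut_position[0] + gain * dx,
            lut_position[1] + gain * dy)

# Look-up table says (100, 200); the spot sits 2 px off-target on each
# axis, so the correction nudges both actuator axes:
print(corrected_actuator_position((100.0, 200.0), (12.0, 8.0), (10.0, 10.0), 0.5))
# -> (99.0, 201.0)
```

This closed-loop step is what lets the system compensate for the deformations that the static look-up tables cannot capture.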

Cherenkov Radiation and Gamma Ray Research

First discovered accidentally by defense satellites in the 1960s, gamma-ray bursts (GRBs) from deep space result from the most violent interactions in the universe. Gamma rays, the highest-energy waves on the electromagnetic spectrum, are about 10 trillion times more energetic than visible light and are a form of ionizing radiation, making them biologically hazardous. Fortunately for life on Earth, the planet's atmosphere absorbs virtually all gamma rays before they reach the surface, which is why the first gamma-ray detectors were satellite-borne observatories.

Upon entry into Earth's atmosphere, gamma rays produce subatomic particle cascades. These charged particles emit a blue light called Cherenkov radiation. In the early 1980s, scientists at the Whipple Observatory developed a terrestrial telescope system using Cherenkov radiation to detect and track gamma rays to their sources.

Artist's rendering: Capturing Cherenkov radiation to track gamma rays. <i>Image: CTAO</i>

Much like seeing through the body with X-rays, gamma rays allow astrophysicists to examine some of the most violent environments in the universe and to study cosmic objects such as black holes and supernovae. This new data will enable fundamental discoveries in physics, in particular into the nature and properties of dark matter.

Future Expectations for CTAO

In addition to the LST, two additional telescope sizes are required to completely cover the range of energies: Medium-Sized Telescope (MST) and Small-Sized Telescope (SST). Somewhere between 2021 and 2025, the number of telescopes online worldwide should be high enough to enable large-scale data collection which will dramatically improve accuracy and sensitivity.

Technical details provided in the article are based on the research paper by Prof. M. Hayashida, Prof. M. Teshima et al., published in Proceedings of Science under the title "The Optical System for the Large Size Telescope of the Cherenkov Telescope Array." For detailed information about the Cherenkov Telescope Array project and its scientific goals, please visit: www.cta-observatory.org/.

ITE 2018: Yokohama, Japan

Published on December 17, 2018

From December 5-7, The Imaging Source and its partner Argo attended the ITE (International Technical Exhibition on Image Technology and Equipment) in Yokohama, Japan. One of the largest technology trade shows in Japan, the show's core themes included robot vision, 3D imaging, medicine, ITS and deep learning. Product engineers, systems designers and researchers showed particular interest in The Imaging Source's autofocus and 42 MP cameras. When paired with high-resolution lenses, the 42 MP camera delivers a vision solution with an excellent price-performance ratio. In addition to the over 100 cameras on display, which included the newest cameras with USB 3.1 (gen. 1) and ix Industrial® Ethernet interfaces, booth demonstrations also featured The Imaging Source's latest end-user software for camera control, used for a multi-camera demonstration.

The latest vision software and hardware solutions at <strong>ITE 2018</strong> Yokohama, Japan

Many thanks to ARGO Corp. for a great show!

Looking for older posts?

We have been blogging since 2007. All old posts are in the archive and tag cloud.

About The Imaging Source

Established in 1990, The Imaging Source is one of the leading manufacturers of industrial cameras, frame grabbers and video converters for production automation, quality assurance, logistics, medicine, science and security.

Our comprehensive range of cameras with USB 3.1, USB 3.0, USB 2.0 and GigE interfaces, along with other innovative machine vision products, is renowned for high quality and the ability to meet the performance requirements of demanding applications.

Automated Imaging Association ISO 9001:2015 certified

Contact us