AI Revolutionizes Markerless Pose Extraction from Videography

Published on August 9, 2019

Which neural circuits drive adaptive motor behavior? How are these behaviors represented in the neural code? Researchers at the Mathis Lab (The Rowland Institute at Harvard University) are unlocking the answers to these questions by studying brain/behavior interaction. The team, led by Mackenzie Mathis, "[aims] to understand how neural circuits contribute to adaptive motor behaviors." The challenge is to relate specific brain events to particular behaviors. Using mice as a model, the scientists are tracking behavioral events and corresponding brain activity using high-speed videography provided by The Imaging Source DMK 37BUX287 cameras and machine learning algorithms from their own open-source toolbox, DeepLabCut.

Researchers at <strong>Mathis Lab</strong> use machine learning tools and optogenetics to understand how neural circuits contribute to adaptive motor behaviors. <i>Image credit: Cassandra Klos</i>

Fundamentally, the researchers must be able to accurately and rigorously track mouse behavior and deliver quantitative data describing animal movement. "We care how animals adapt to their environment, so watching their motor actions is a great way to start to interpret how the brain does this. Therefore, the first step in our research is to observe the animals during learning new tasks," says Dr. Mathis. Her research team relies on a multi-camera system using DMK 37BUX287 cameras. Their test subjects are fast: "[...] mice can reach out and grab an object in about 200 ms, so we wanted high frame rates and good resolution," says Dr. Mathis.
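The arithmetic behind that requirement is simple: the faster the movement, the more frames per second are needed to resolve it. A back-of-envelope sketch (the 30 and 500 fps figures below are illustrative choices, not specifications from the article):

```python
# Back-of-envelope: how many frames capture a ~200 ms mouse reach?
# (Frame rates below are illustrative; the article only gives the reach duration.)
REACH_DURATION_S = 0.2

def frames_captured(fps, duration_s=REACH_DURATION_S):
    # Number of frames recorded during the movement at a given frame rate.
    return round(fps * duration_s)

frames_captured(30)   # standard video: only a handful of frames
frames_captured(500)  # high-speed capture: enough to resolve the reach
```

At conventional video rates the entire reach spans just a few frames, which is why high-frame-rate cameras matter for this kind of kinematic analysis.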

Videography provides an efficient method of recording animal behavior, but pose extraction (i.e., recovering the geometric configuration of multiple body parts) has been a problem for researchers for years. In human studies, state-of-the-art motion capture is achieved by using markers to track joints and limb movement, or, very recently, by new deep learning methods. With animals, however, such methods are impractical for a variety of reasons. As a result, until recently, animal behavior was tracked using manually digitized videography (i.e., humans coding videos frame by frame): a labor-intensive process that was often imprecise and could add hundreds of hours to research projects.
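Whatever the source of the pose data, the payoff is the same: once each frame yields keypoint coordinates, quantitative kinematics follow directly. A minimal sketch (the keypoint names and coordinates are hypothetical, chosen only for illustration):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, between segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# A "pose" for one frame: a set of named 2D keypoints (hypothetical values).
pose = {"shoulder": (0.0, 0.0), "elbow": (1.0, 0.0), "wrist": (1.0, 1.0)}
elbow_angle = joint_angle(pose["shoulder"], pose["elbow"], pose["wrist"])
```

Tracking such angles and distances frame by frame turns raw video into the kind of movement data the researchers need, whether the keypoints come from human coders or from automated tracking.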

Currently, <strong>DeepLabCut</strong> supports a two-camera setup: two <strong>DMK 37BUX287</strong> cameras capture high-speed videography whose frames are used for markerless 3D pose extraction. <i>Image credit: Cassandra Klos</i>

To automate pose extraction, Dr. Mathis's team developed DeepLabCut: open-source software for markerless pose estimation of user-defined body parts. Building on DeeperCut, a (human) pose estimation algorithm, the researchers use deep-convolutional-network-based algorithms which they have specifically trained for the task. In a paper published in Nature Neuroscience, the authors write that the team was able to dramatically reduce the amount of training data necessary by "adapting pretrained models to new tasks [...through] a phenomenon known as transfer learning." DeepLabCut has become so robust and efficient that even with a relatively small number of images (~200), "the algorithm achieves excellent tracking performance". Many scientists have hailed the software as a "game changer". Mathis Lab also uses The Imaging Source's IC Capture and has published a camera control API for The Imaging Source cameras on GitHub.
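The transfer-learning idea the paper describes can be seen in miniature below. This is a deliberately toy sketch, not DeepLabCut's actual code: a "pretrained" feature extractor is kept frozen, and only a small output head is fitted to the new task, which is why comparatively few labeled examples suffice.

```python
# Toy illustration of transfer learning (not DeepLabCut's implementation).

def features(x):
    # Stand-in for a frozen, pretrained backbone: fixed basis functions
    # that are reused as-is and never updated during training.
    return [x, x * x, 1.0]

def predict(weights, x):
    return sum(w * f for w, f in zip(weights, features(x)))

def train_head(data, steps=2000, lr=0.1):
    # Full-batch gradient descent on squared error, fitting only the
    # new task head; the backbone stays untouched.
    weights = [0.0, 0.0, 0.0]
    n = len(data)
    for _ in range(steps):
        grad = [0.0, 0.0, 0.0]
        for x, y in data:
            err = predict(weights, x) - y
            for i, f in enumerate(features(x)):
                grad[i] += err * f / n
        weights = [w - lr * g for w, g in zip(weights, grad)]
    return weights

# A small labeled set suffices because the backbone already does the hard work.
data = [(x / 10, 3.0 * (x / 10) ** 2 + 0.5) for x in range(-10, 11)]
head = train_head(data)
```

The same economy drives DeepLabCut's data efficiency: starting from a network pretrained on a large dataset, only modest task-specific labeling (on the order of a couple hundred frames, per the article) is needed to reach strong tracking performance.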

DeepLabCut automatically tracks and labels (red, white and blue dots) a mouse's movements. Image credit: Mackenzie Mathis


Post published by TIS Marketing on August 9, 2019.

About The Imaging Source

Established in 1990, The Imaging Source is one of the leading manufacturers of industrial cameras, video converters and embedded vision components for factory automation, quality assurance, medicine, science, security and a variety of other markets.

Our comprehensive range of cameras with USB 3.1, USB 3.0, USB 2.0, GigE, and MIPI interfaces, along with other innovative machine vision products, is renowned for high quality and the ability to meet the performance requirements of demanding applications.

