AI Revolutionizes Markerless Pose Extraction from Videography

Published on August 9, 2019 by TIS Marketing.

Which neural circuits drive adaptive motor behavior? How are these behaviors represented in the neural code? Researchers at the Mathis Lab (The Rowland Institute at Harvard University) are unlocking the answers to these questions by studying the interaction between brain and behavior. The team, led by Mackenzie Mathis, "[aims] to understand how neural circuits contribute to adaptive motor behaviors." The challenge is to relate specific brain events to particular behaviors. Using mice as a model, the scientists track behavioral events and the corresponding brain activity with high-speed videography from The Imaging Source DMK 37BUX287 cameras and machine learning algorithms from their own open-source toolbox, DeepLabCut.

Researchers at Mathis Lab use machine learning tools and optogenetics to understand how neural circuits contribute to adaptive motor behaviors. Image credit: Cassandra Klos

Fundamentally, the researchers must be able to track mouse behavior accurately and rigorously and deliver quantitative data describing animal movement. "We care how animals adapt to their environment, so watching their motor actions is a great way to start to interpret how the brain does this. Therefore, the first step in our research is to observe the animals as they learn new tasks," says Dr. Mathis. Her research team relies on a multi-camera system using DMK 37BUX287 cameras. Their test subjects are fast: "[...] mice can reach out and grab an object in about 200 ms, so we wanted high frame rates and good resolution," says Dr. Mathis.
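For readers curious what such a high-speed acquisition loop involves, here is a minimal sketch using OpenCV. The lab itself records with IC Capture and The Imaging Source's camera APIs (see below), so the device index, resolution, and frame rate here are illustrative placeholders rather than the lab's configuration.

```python
import cv2

# Minimal high-speed capture sketch (assumptions: a camera is reachable at
# index 0 and accepts these settings; 720x540 at 500 fps is an illustrative
# approximation of what a fast machine-vision camera can deliver).
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 720)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 540)
cap.set(cv2.CAP_PROP_FPS, 500)  # a ~200 ms reach spans ~100 frames at 500 fps

writer = cv2.VideoWriter("session.avi", cv2.VideoWriter_fourcc(*"MJPG"),
                         500.0, (720, 540))

for _ in range(5000):  # record roughly 10 s at the requested frame rate
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)

cap.release()
writer.release()
```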

Videography provides an efficient method of recording animal behavior, but pose extraction (i.e., recovering the geometric configuration of multiple body parts) has challenged researchers for years. In human studies, state-of-the-art motion capture is achieved by using markers to track joints and limb movement or, very recently, by new deep learning methods. With animals, however, such methods are impractical for a variety of reasons. This meant that, until now, animal behavior was tracked using manually digitized videography (i.e., humans coding videos frame by frame), a labor-intensive process that was often imprecise and could add hundreds of hours to research projects.
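Concretely, the output of pose extraction for each video frame is just a set of 2D body part coordinates. A rough sketch of that representation, with hypothetical body part names and values:

```python
# One frame's pose: each user-defined body part mapped to its image
# coordinates and a detection confidence (all values illustrative).
pose_frame = {
    "snout":     (312.4, 188.0, 0.97),  # (x_px, y_px, confidence)
    "left_paw":  (280.1, 241.6, 0.93),
    "right_paw": (335.8, 244.2, 0.91),
    "tail_base": (402.0, 310.5, 0.88),
}
# A full video yields one such record per frame, i.e., a time series of
# poses from which movement kinematics can be computed.
```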

Currently, DeepLabCut supports a two-camera setup: two DMK 37BUX287 cameras capture high-speed video whose frames are used for markerless 3D pose extraction. Image credit: Cassandra Klos
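The principle behind recovering 3D coordinates from two synchronized views is standard stereo triangulation. Below is a minimal, self-contained sketch using OpenCV; the projection matrices and image coordinates are placeholder values standing in for real calibration data and per-view 2D predictions, not DeepLabCut's internal code.

```python
import cv2
import numpy as np

# 3x4 projection matrices for the two cameras (placeholder calibration:
# in practice these come from calibrating the camera pair).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])    # reference camera
R, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))  # slight rotation of camera 2
t = np.array([[-0.1], [0.0], [0.0]])             # ~10 cm baseline
P2 = np.hstack([R, t])

# Matching 2D detections of one body part (e.g., a paw) in both views,
# shape (2, N) as cv2.triangulatePoints expects (normalized coordinates).
pts1 = np.array([[0.32], [0.18]])
pts2 = np.array([[0.25], [0.18]])

# Triangulate to homogeneous 3D coordinates, then dehomogenize.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print("3D position:", X)
```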

In order to automate pose extraction, Dr. Mathis's team developed DeepLabCut: an open-source software package for markerless pose estimation of user-defined body parts. Building on DeeperCut, a (human) pose estimation algorithm, the researchers use deep convolutional networks which they have trained specifically for the task. In a paper published in Nature Neuroscience, the authors write that the team was able to dramatically reduce the amount of training data required by "adapting pretrained models to new tasks [...through] a phenomenon known as transfer learning." DeepLabCut has become so robust and efficient that even with a relatively small number of labeled images (~200), "the algorithm achieves excellent tracking performance". Many scientists have hailed the software as a "game changer". Mathis Lab also uses The Imaging Source's IC Capture and has published a camera control API for The Imaging Source cameras on GitHub.
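In practice, the published DeepLabCut package exposes this workflow as a handful of Python calls. The outline below follows the project's documented API, though exact arguments vary by version, and the project name and video paths here are hypothetical:

```python
import deeplabcut

# Create a project from example videos (hypothetical names and paths).
config = deeplabcut.create_new_project(
    "ReachingTask", "researcher", ["/data/videos/reach_session1.avi"],
    copy_videos=True)

deeplabcut.extract_frames(config)           # sample frames to annotate
deeplabcut.label_frames(config)             # GUI: label ~200 frames by hand
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)            # fine-tunes a pretrained network
                                            # (transfer learning)
deeplabcut.evaluate_network(config)         # check held-out pixel errors
deeplabcut.analyze_videos(config, ["/data/videos/new_session.avi"])
```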

DeepLabCut automatically tracks and labels (red, white and blue dots) a mouse's movements. Image credit: Mackenzie Mathis