
Deep learning system helps create more accurate picture of what’s happening in complex brain circuits

New research by Matt Kaufman leverages modern math and machine learning to capture neuron activity accurately in both time and space.

Scientists have long been interested in understanding how neurons in the brain coordinate their activity to drive motor, sensory, and cognitive processes. As a result, a great deal of research has gone into methods for recording the activity of these brain cells as they perform millisecond-scale computations.

These recording techniques favor one of two kinds of information: they capture either a smaller number of neurons with high precision in time, or many more neurons with less timing precision. This tradeoff has made it difficult to understand exactly what is happening in complex brain circuits.

But a recent study published in Nature Neuroscience in November 2022 by researchers in the lab of Matthew Kaufman, PhD, at the University of Chicago, together with scientists at Emory University and the University of North Carolina at Chapel Hill, could change this. These scientists developed a deep learning system that recovers timing information from a technique that favors recording large numbers of neurons, creating a tool that can analyze what is happening in the brain precisely in both time and space.

The system does this by learning the underlying rules of brain activity to predict how neurons' firing will change over time, even when using a recording technique that images the brain slowly, with relatively few frames per second.

Kaufman, who is an Assistant Professor of Organismal Biology and Anatomy, compares it to taking a video of a ball bouncing in a darkened room. From any single frame of the video, you are not likely to see anything. But if you watch the full video, you can pick out the ball fairly easily, because you know the underlying rules the ball follows. It obeys the laws of physics, so its location from one frame to the next is mostly predictable. There is only one ball bouncing around the room, and you know where the walls it might hit are located.

“You can get a very good estimate of where this ball is going at any moment in time, even though each individual frame is really lousy,” Kaufman said.
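A toy sketch can make the analogy concrete. The snippet below is an illustration of the idea, not the model from the paper: it tracks a bouncing ball from uniformly noisy "frames" by combining the known physics with each lousy measurement. The gravity, noise level, starting guess, and simple alpha-beta filter are all assumptions chosen for the demo.

```python
# Toy illustration of the "ball in a dark room" analogy (not the paper's model):
# knowing the dynamics lets a simple filter track a bouncing ball from frames
# that are individually very noisy. All parameters here are made up.
import numpy as np

rng = np.random.default_rng(0)
dt, g = 0.02, -9.8                    # time step (s) and gravity; toy values
n_steps = 300

# Simulate the "true" ball: 1-D height with elastic bounces off the floor.
pos, vel, true_path = 1.0, 0.0, []
for _ in range(n_steps):
    vel += g * dt
    pos += vel * dt
    if pos < 0.0:                     # bounce: reflect off the floor
        pos, vel = -pos, -vel
    true_path.append(pos)
true_path = np.array(true_path)

# Each "frame" is really lousy: heavy noise on every position reading.
obs = true_path + rng.normal(0.0, 0.3, n_steps)

# A tracker that knows the rules: predict with the same physics, then nudge
# position and velocity toward each measurement (a simple alpha-beta filter).
est_pos, est_vel = 0.5, 0.0           # deliberately wrong starting guess
alpha, beta = 0.15, 0.05
estimates = []
for t in range(n_steps):
    est_vel += g * dt                 # prediction: the laws of physics
    est_pos += est_vel * dt
    if est_pos < 0.0:                 # prediction: the walls of the room
        est_pos, est_vel = -est_pos, -est_vel
    residual = obs[t] - est_pos       # correction: one lousy frame
    est_pos += alpha * residual
    est_vel += beta * residual / dt
    estimates.append(est_pos)
estimates = np.array(estimates)

print(f"mean error of raw frames:    {np.abs(obs - true_path).mean():.3f}")
print(f"mean error of filtered path: {np.abs(estimates - true_path).mean():.3f}")
```

Even starting from a wrong guess, the filtered estimate typically tracks the true path far more closely than any individual noisy frame, because the known dynamics rule out most candidate trajectories.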

Tradeoffs in time and space

This deep learning algorithm had previously been applied to a technique called electrophysiology, which favors temporal resolution. In this approach, researchers record the activity of neurons through electrodes, but the number of neurons is limited, and the locations of the recording sites are imprecise because researchers cannot see their targets. It is also difficult to determine what type of neuron is being recorded, making this technique less suited to understanding how different types of cells in specific brain areas work together.

A different approach, called 2-photon calcium imaging, is better suited for pinpointing spatial location because it can record thousands of neurons of various cell types in 3D. This technique relies on the principle that calcium floods into neurons when they are activated. Scientists have engineered molecules that fluoresce in response to calcium, so that cells briefly emit light when they are active. Two-photon imaging shines a laser into the brain and scans across the field of neurons; when the laser hits an activated neuron, the calcium sensor molecules give off light that a detector can pick up. Though 2-photon calcium imaging gives researchers much richer information about what is going on in the brain, its time resolution is limited to tenths of a second, not milliseconds.
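A standard way to see why timing is lost (this is the textbook picture of calcium indicators, not code from the study) is to model fluorescence as a spike train convolved with a slowly decaying kernel and then sampled at the frame rate. The decay constant, spike rate, and noise level below are assumed values:

```python
# Minimal sketch of why calcium imaging blurs timing (standard textbook model,
# not the paper's code): each spike triggers a fluorescence transient that
# decays over hundreds of milliseconds, and the trace is sampled at ~30 Hz.
import numpy as np

rng = np.random.default_rng(1)
dt_ms = 1.0                                   # simulate at 1 kHz resolution
t = np.arange(0, 2000, dt_ms)                 # 2 seconds of activity
spikes = (rng.random(t.size) < 0.005).astype(float)  # ~5 Hz random spiking

# Exponential indicator kernel; tau is an assumed decay constant (~400 ms),
# in the right ballpark for common genetically encoded calcium sensors.
tau_ms = 400.0
kernel = np.exp(-np.arange(0, 2000, dt_ms) / tau_ms)
fluor = np.convolve(spikes, kernel)[: t.size]

# The microscope only sees this trace every ~33 ms (30 frames per second),
# so millisecond spike times are smeared across whole frames.
frame_idx = np.arange(0, t.size, 33)
frames = fluor[frame_idx] + rng.normal(0, 0.05, frame_idx.size)  # measurement noise

print(f"{int(spikes.sum())} spikes at 1 ms resolution -> "
      f"{frames.size} fluorescence samples at ~30 Hz")
```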

The key innovation from Kaufman and his colleagues is applying the deep learning algorithm to 2-photon calcium imaging in order to recover the lost temporal resolution. Their system, which they call RADICaL (Recurrent Autoencoder for Discovering Imaged Calcium Latents), had to work around the inherent slowness of scanning technologies like 2-photon imaging. Two-photon imaging operates by measuring the brightness of one pixel in the field, moving over to measure the next pixel, and so on, until every pixel in the image has been scanned. Even with this process heavily optimized, researchers can only image at around 30 frames per second.
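To make that arithmetic concrete, here is a small sketch (with an assumed 512-line image; real line counts vary by microscope) of why a raster-scanned frame has no single timestamp: each scan line is acquired a fraction of a millisecond after the previous one, and the top and bottom of the frame end up tens of milliseconds apart.

```python
# Sketch of why a raster scan has no single timestamp per frame: assuming a
# 512-line image at 30 frames per second with uniform line times (made-up
# numbers), each row is acquired slightly later than the one before it.
import numpy as np

frame_rate = 30.0                 # frames per second
n_lines = 512                     # scan lines per frame (assumed)
frame_period_ms = 1000.0 / frame_rate

# Acquisition time of each line, measured from the start of its frame.
line_times_ms = np.arange(n_lines) * frame_period_ms / n_lines

print(f"frame period: {frame_period_ms:.1f} ms")
print(f"top line at {line_times_ms[0]:.2f} ms, "
      f"bottom line at {line_times_ms[-1]:.2f} ms within the same frame")
```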

The need for speed

To address this speed limit, Kaufman and colleagues turned to a photography technique dating back over a century for inspiration: the rolling shutter. In the early days of photography, before modern camera shutters had been invented, photographers pulled a piece of fabric with a slit in it across the film. The moving slit exposed the film one line at a time, from the bottom of the image to the top, so different parts of the image were exposed at different times.

Imagine using a rolling shutter in a dark room when a flash goes off partway through the exposure: only the part of the image exposed after the flash will be bright. “If you know when each line was exposed, you can figure out very precisely when the flash was. If I have 1,000 lines in my photo, I can get 1,000 times the accuracy,” Kaufman said.
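The arithmetic behind that quote is easy to demonstrate. In this toy numpy version (illustrative numbers only, not data from the study), knowing each line's exposure time pins the flash down to roughly one-thousandth of the total exposure:

```python
# Toy version of the rolling-shutter flash argument (illustrative numbers):
# each of 1,000 lines is exposed at a slightly different time, so the first
# bright line localizes the flash to within about one line period.
import numpy as np

n_lines = 1000
exposure_ms = 100.0                                  # total sweep time (assumed)
line_times = np.linspace(0.0, exposure_ms, n_lines)  # when each line is exposed

true_flash_ms = 41.7                                 # unknown in practice
brightness = (line_times >= true_flash_ms).astype(float)  # bright after flash

first_bright = np.argmax(brightness > 0)             # first line exposed post-flash
estimate_ms = line_times[first_bright]
print(f"estimated flash time: {estimate_ms:.2f} ms "
      f"(resolution ~{exposure_ms / n_lines:.2f} ms instead of {exposure_ms:.0f} ms)")
```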

Because 2-photon imaging is a scanning technology, neurons at the top of the frame are imaged at a different time than neurons at the bottom. So the researchers divided each image into three strips: top, middle, and bottom. This let them pinpoint when each neuron was sampled. Instead of a scan rate of 30 frames per second, they could treat the data as 90 frames per second in which only a third of the neurons are sampled on each “frame.” That tripled the time resolution, but left a lot of missing data.

To keep the missing data from confusing the deep learning algorithm, Kaufman’s collaborators, Chethan Pandarinath at Emory and Andrea Giovannucci at the University of North Carolina at Chapel Hill, came up with a neat trick: training the system so that it learns only from the data that was actually sampled at any given time. This, combined with RADICaL’s ability to learn the underlying “rules” of the neurons’ activity, allowed the researchers to make full use of the improved temporal resolution.
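A minimal sketch of those two ideas might look like the following. The array shapes, the even three-way split of neurons, and the `masked_mse` helper are hypothetical stand-ins, not the RADICaL implementation; the point is that each 30 fps frame becomes three 90 fps “subframes,” each containing only one strip’s neurons, and that the model is scored only where data was actually sampled.

```python
# Minimal sketch of the two ideas in this paragraph (shapes and names are
# hypothetical, not the RADICaL code): (1) re-index each 30 fps frame as
# three 90 fps "subframes," each holding only one strip of neurons, and
# (2) score model predictions only where data was actually sampled.
import numpy as np

n_frames, n_neurons = 100, 90
frames = np.random.rand(n_frames, n_neurons)       # stand-in 30 fps recording
strip = np.repeat([0, 1, 2], n_neurons // 3)       # top/middle/bottom strip id

# (1) Expand to 90 fps: subframe 3*t+s holds strip s of frame t; everything
# else in each subframe is marked as not sampled.
subframes = np.full((3 * n_frames, n_neurons), np.nan)
for t in range(n_frames):
    for s in range(3):
        subframes[3 * t + s, strip == s] = frames[t, strip == s]
sampled = ~np.isnan(subframes)                     # mask of real measurements

# (2) Masked loss: reconstructions are compared to the data only at sampled
# entries, so the missing two-thirds never produce a training signal.
def masked_mse(prediction, target, mask):
    return np.mean((prediction[mask] - target[mask]) ** 2)

prediction = np.random.rand(*subframes.shape)      # stand-in model output
print(f"masked loss over {sampled.sum()} sampled entries: "
      f"{masked_mse(prediction, subframes, sampled):.4f}")
```

This mirrors the trick described above: the unsampled entries contribute nothing to the error, so they cannot mislead the network during training.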

The result is the ability to resolve much more precise timing information from a recording technique that already provided rich spatial information, easing the quintessential tradeoff of brain recording and letting scientists understand precisely what is occurring in the brain in both time and space.

“It shows us what power we have when we leverage modern math and machine learning to understand what's going on in the brain. Math is the tool that we have used to understand the universe, and we're part of the universe. So, it makes sense to bring the incredible power of machine learning and neural networks to neuroscience and understanding the human brain.”

Matthew Kaufman, PhD

Assistant Professor of Organismal Biology and Anatomy
Assistant Professor, Neuroscience Institute
