Scientists have long been interested in understanding how neurons in the brain coordinate their activity to drive motor, sensory, and cognitive processes. As such, a great deal of research has gone into methods that can record the activity of these brain cells as they perform millisecond-scale computations.
These recording techniques favor one of two types of information: they either capture fewer neurons' activity more precisely in time, or more neurons with less timing precision. This tradeoff has made it difficult to understand exactly what is happening in complex brain circuits.
But a recent study, published in Nature Neuroscience in November 2022 by researchers in the lab of Matthew Kaufman, PhD, at the University of Chicago, along with scientists at Emory University and the University of North Carolina at Chapel Hill, could change this. These scientists developed a deep learning system that recovers precise timing information from a technique that favors recording large numbers of neurons, creating a tool that can analyze what is happening in the brain precisely in both time and space.
The system does this by learning the underlying rules of brain activity to predict how neurons' firing will change over time, even when using a recording technique that images the brain slowly, with relatively few frames per second.
Kaufman, who is an Assistant Professor of Organismal Biology and Anatomy, compares it to trying to take a video recording of a ball bouncing in a darkened room. If all you have is one frame of video, you're not likely to see anything. But if you watch the full video, you can pick out the ball pretty easily, because you know the underlying rules the ball follows. It adheres to the laws of physics, so its location from one frame to the next is mostly predictable. There is only one ball bouncing around the room, and you know where the walls it might hit are located.
“You can get a very good estimate of where this ball is going at any moment in time, even though each individual frame is really lousy,” Kaufman said.
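For readers who want to see the logic of the analogy in code, the sketch below (written for this article, not taken from the study) uses a classic Kalman filter to track a simulated bouncing ball from noisy single-frame position measurements. The time step, noise levels, and bounce rule are illustrative assumptions; the point is that knowing the dynamics lets the filter recover a trajectory far more precise than any one frame.

```python
# Illustrative sketch (not the authors' code): a minimal Kalman filter
# tracking a bouncing ball from noisy position measurements. Like the
# deep learning system, it combines known dynamics (gravity, walls)
# with lousy individual observations to recover a precise trajectory.
import numpy as np

dt, g = 0.05, -9.8               # time step (s) and gravity (m/s^2); assumed values
F = np.array([[1, dt],           # state transition for [position, velocity]
              [0, 1]])
B = np.array([0.5 * dt**2, dt])  # how gravity enters the state
H = np.array([[1.0, 0.0]])       # each frame measures position only
Q = np.eye(2) * 1e-4             # process noise: the dynamics are nearly exact
R = np.array([[0.5]])            # measurement noise: each frame is lousy

x = np.array([5.0, 0.0])         # estimated state: ball 5 m up, at rest
P = np.eye(2)                    # uncertainty of the estimate

rng = np.random.default_rng(0)
true_x = np.array([5.0, 0.0])
for step in range(100):
    # Simulate the true ball, bouncing off the floor at y = 0
    true_x = F @ true_x + B * g
    if true_x[0] < 0:
        true_x[0], true_x[1] = -true_x[0], -0.8 * true_x[1]
    z = true_x[0] + rng.normal(0, np.sqrt(R[0, 0]))  # one noisy frame

    # Predict: roll the known dynamics forward
    x = F @ x + B * g
    P = F @ P @ F.T + Q
    if x[0] < 0:                 # the filter also knows where the wall is
        x[0], x[1] = -x[0], -0.8 * x[1]

    # Update: fold in the lousy measurement
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

    if step % 20 == 0:
        print(f"t={step*dt:4.1f}s  measured={z:6.2f}  estimated={x[0]:6.2f}  true={true_x[0]:6.2f}")
```

The deep learning system works on the same principle, except that instead of hand-written physics, it learns the "rules of the room" directly from neural activity.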
Tradeoffs in time and space
This deep learning algorithm had previously been applied to a technique called electrophysiology, which favors temporal resolution. In this approach, researchers record the activity of neurons through electrodes, but the number of neurons that can be captured is limited, and the locations of the recording sites are imprecise because researchers cannot see their targets. Moreover, it is difficult to determine what type of neuron is being recorded, making this technique less suitable for understanding how different types of cells in specific brain areas work together.
A different approach, called two-photon calcium imaging, is better suited to pinpointing precise spatial locations because it can record thousands of neurons of various cell types in 3D. This technique relies on the principle that calcium floods into neurons when they are activated. Scientists have engineered molecules that fluoresce in response to calcium, so that cells briefly emit light when they are active. Two-photon imaging shines a laser into the brain and scans across a field of neurons; when the laser hits an activated neuron, the calcium-sensitive molecules give off light that a detector can pick up. Though two-photon calcium imaging gives researchers much richer information about what's going on in the brain, it can only resolve timing to tenths of a second, not milliseconds.
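A small simulation makes that timing limitation concrete. The sketch below is illustrative only; the indicator decay time and frame rate are assumed round numbers, not values from the paper. It generates millisecond-scale spikes, smears them through a slow calcium indicator, and samples the result at a typical imaging frame rate, showing how spikes tens of milliseconds apart become indistinguishable.

```python
# Illustrative sketch (not from the study): why calcium imaging blurs
# timing. Millisecond-scale spikes are read out through a slow calcium
# indicator and a low frame rate, smearing events that electrophysiology
# would resolve cleanly. The decay time and frame rate are assumptions.
import numpy as np

fs = 1000                          # simulate at 1 kHz (1 ms resolution)
t = np.arange(0, 2.0, 1 / fs)      # two seconds of activity
spikes = np.zeros_like(t)
spikes[[200, 230, 260, 1400]] = 1  # three rapid spikes, then one later

# Calcium indicator: fast rise, slow (~0.5 s) exponential decay
tau_decay = 0.5
kernel = np.exp(-np.arange(0, 2.0, 1 / fs) / tau_decay)
fluor = np.convolve(spikes, kernel)[: len(t)]

# Two-photon "camera": sample the fluorescence at ~10 frames per second
frame_rate = 10
frames = fluor[:: fs // frame_rate]

print(f"spike times (ms): {np.flatnonzero(spikes)}")
print(f"frames captured over 2 s: {len(frames)}")
# The three spikes spanning 60 ms fall between frames ~100 ms apart,
# so their individual timing is lost in the imaging data.
```

Recovering that lost timing from the slow, smeared frames is precisely the problem the new deep learning system is designed to solve.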