This new AI tool uses brain signals to predict what a mouse sees
Although researchers have previously decoded thoughts using artificial intelligence and MRI scans, we are still far from technology that would let us see the world through another person's eyes.
Now, a team of scientists from Switzerland has moved one step closer to making this possible.
In a demonstration, researchers from the Swiss Federal Institute of Technology (EPFL) in Lausanne had a mouse watch a black-and-white film and reconstructed what the animal saw using a new AI tool the team developed.
The novel machine-learning algorithm, called CEBRA, reveals hidden structure in data recorded from the brain and can predict complex information, such as what a mouse is seeing.
“This work is just one step towards the theoretically-backed algorithms that are needed in neurotechnology to enable high-performance BMIs (brain-machine-interfaces),” says Mackenzie Mathis, EPFL’s Bertarelli Chair of Integrative Neuroscience, who headed the study.
Mathis and her team used an animal model for the study, recording the brain activity of 50 mice as they watched a 30-second movie clip nine times. The researchers then trained the artificial intelligence (AI) program, CEBRA, to link the brain data to the movie clip.
During training, CEBRA learns to map brain activity to specific frames of the movie. According to the press release, it performs well even with data from less than 1% of the neurons in the visual cortex, a brain area that in mice contains roughly 0.5 million neurons.
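As a rough illustration of that training step, here is a minimal sketch using the open-source cebra Python package, which exposes a scikit-learn-style fit/transform interface. The data below are synthetic placeholders, and the hyperparameters are illustrative rather than the values used in the study:

```python
import numpy as np
import cebra

# Synthetic stand-in data: 9 repeats of a 30-second clip at 30 Hz
# (900 frames per repeat), recorded from 200 neurons. Real inputs
# would be calcium-imaging or electrophysiology traces.
rng = np.random.default_rng(0)
n_repeats, n_frames, n_neurons = 9, 900, 200
train_activity = rng.normal(size=(n_repeats * n_frames, n_neurons)).astype("float32")
train_frames = np.tile(np.arange(n_frames), n_repeats)

# Supervised ("hypothesis-driven") CEBRA: the movie-frame index is
# passed as a continuous auxiliary label, so the learned latent
# space is shaped by what was on screen at each timestep.
model = cebra.CEBRA(
    output_dimension=8,     # size of the latent embedding
    batch_size=512,
    max_iterations=1000,    # kept small for a quick demo
    device="cuda_if_available",
)
model.fit(train_activity, train_frames[:, None].astype("float32"))

# Project the neural activity into the learned latent space.
train_embedding = model.transform(train_activity)
```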
The team then played the clip a 10th time and tested whether CEBRA could predict the order of the frames from the brain activity alone, decoding the natural video the mice watched on a frame-by-frame basis. It achieved greater than 95% decoding accuracy.
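The paper pairs the learned embeddings with a simple decoder for this kind of frame-by-frame readout. The sketch below, continuing from the synthetic training snippet above, uses a k-nearest-neighbour classifier from scikit-learn as a stand-in for the study's actual decoding setup:

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical held-out 10th repeat of the clip (synthetic here).
heldout_activity = rng.normal(size=(n_frames, n_neurons)).astype("float32")
heldout_frames = np.arange(n_frames)

# Embed the held-out activity, then decode each timestep's movie
# frame by nearest-neighbour lookup in the training embedding.
heldout_embedding = model.transform(heldout_activity)
decoder = KNeighborsClassifier(n_neighbors=5)
decoder.fit(train_embedding, train_frames)
predicted_frames = decoder.predict(heldout_embedding)

# Frame-by-frame accuracy; the study reports >95% on real data
# (random noise like this would decode at chance level).
accuracy = (predicted_frames == heldout_frames).mean()
print(f"decoding accuracy: {accuracy:.2%}")
```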
“The goal of CEBRA is to uncover structure in complex systems. And, given the brain is the most complex structure in our universe, it’s the ultimate test space for CEBRA. It can also give us insight into how the brain processes information and could be a platform for discovering new principles in neuroscience by combining data across animals, and even species,” says Mathis.
“This algorithm is not limited to neuroscience research, as it can be applied to many datasets involving time or joint information, including animal behavior and gene-expression data. Thus, the potential clinical applications are exciting.”
Study abstract:
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our ability to record large neural and behavioural data increases, there is growing interest in modelling neural dynamics during adaptive behaviours to probe neural representations. In particular, although neural latent embeddings can reveal underlying correlates of behaviour, we lack nonlinear techniques that can explicitly and flexibly leverage joint behaviour and neural data to uncover neural dynamics. Here, we fill this gap with a new encoding method, CEBRA, that jointly uses behavioural and neural data in a (supervised) hypothesis- or (self-supervised) discovery-driven manner to produce both consistent and high-performance latent spaces. We show that consistency can be used as a metric for uncovering meaningful differences, and the inferred latents can be used for decoding. We validate its accuracy and demonstrate our tool’s utility for both calcium and electrophysiology datasets, across sensory and motor tasks and in simple or complex behaviours across species. It allows leverage of single- and multi-session datasets for hypothesis testing or can be used label free. Lastly, we show that CEBRA can be used for the mapping of space, uncovering complex kinematic features, for the production of consistent latent spaces across two-photon and Neuropixels data, and can provide rapid, high-accuracy decoding of natural videos from visual cortex.