Figure: The researchers compared model outputs (spiking activity, top; decoder accuracy, bottom) with real neural data (left column) across several working memory models (right column). The model most similar to the real data is the "PS" model with short-term synaptic plasticity.
Between reading the Wi-Fi password off a café's menu board and getting back to your laptop to type it in, you have to keep it in mind. If you've ever wondered how your brain does that, you're asking a question about working memory that researchers have been trying to answer for decades. Now, MIT neuroscientists have published a key new insight into how it works.
In a study published in PLOS Computational Biology, scientists at the Picower Institute for Learning and Memory compared measurements of brain cell activity in animals performing a working memory task with the output of computer models representing two basic mechanistic theories of how information is held in mind. The findings strongly favor the idea that networks of neurons store information by making short-lived changes to the pattern of their connections, or synapses, and contradict the traditional alternative that memory is maintained by neurons remaining continuously active (like idling engines). While both kinds of models allowed information to be held in mind, only the versions that let synapses transiently change their connections ("short-term synaptic plasticity") produced neural activity patterns that mimicked what was actually observed in real brains at work.
Senior author Earl K. Miller acknowledged that the idea that brain cells hold memories by staying constantly "switched on" may be simpler, but it doesn't reflect what nature is doing, and it can't produce the sophisticated mental flexibility that can arise from intermittent neural activity backed up by short-term synaptic plasticity.

"You need these kinds of mechanisms to give working memory activity the freedom it needs to be flexible," said Miller, the Picower Professor of Neuroscience in MIT's Department of Brain and Cognitive Sciences (BCS). "If working memory were nothing more than sustained activity, it would be as simple as a light switch. But working memory is as complex and dynamic as our thoughts."
Co-first author Leo Kozachkov, whose MIT doctoral work, completed last November, included the theoretical modeling for this study, said that matching the computer models against real-world data was crucial.

"Most people think of working memory as something that 'happens' in neurons – that persistent neural activity gives rise to persistent thoughts. However, that view has recently come under scrutiny because it doesn't really agree with the data," said Kozachkov, who was co-supervised by co-senior author Jean-Jacques Slotine, a professor in BCS and mechanical engineering. "Using artificial neural networks with short-term synaptic plasticity, we showed that synaptic activity, rather than neural activity, can serve as the substrate of working memory. An important takeaway from our paper is that these 'plastic' neural network models are more brain-like, in a quantitative sense, and also have additional functional advantages in terms of robustness."
Matching the models to nature
Kozachkov co-led the study with MIT graduate student John Tauber. Their goal was not just to determine how working memory information might be held in mind, but to shed light on which way nature actually does it. That meant starting with "ground truth" measurements of the electrical "spiking" activity of hundreds of neurons in the prefrontal cortex of animals playing a working memory game. In each of many rounds, the animal was shown an image, which then disappeared. A second later it saw two images, including the original, and had to look at the original to earn a small reward. The key moment is that intervening second, called the "delay period," in which the image must be kept in mind ahead of the test.
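To make the task concrete, here is a minimal sketch of one such delayed match-to-sample trial in Python. The image names, trial logic, and timing are illustrative assumptions, not the exact parameters used in the experiments.

```python
import random

def run_trial(images, delay_s=1.0):
    """One delayed match-to-sample trial, as described above (toy version)."""
    sample = random.choice(images)         # an image is shown, then disappears
    # ... delay period of roughly one second: the image must be held in mind ...
    distractor = random.choice([im for im in images if im != sample])
    test_pair = random.sample([sample, distractor], k=2)   # two images appear
    correct_choice = sample                # reward only for looking at the original
    return sample, test_pair, correct_choice

print(run_trial(["apple", "banana", "car", "dog"]))
```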
The team observed what Miller's lab has seen many times before: the neurons spike heavily when the original image is shown, spike only intermittently during the delay, and then spike again when the image must be recalled at test (these dynamics are governed by an interplay of beta- and gamma-frequency brain rhythms). In other words, spiking is intense when information must first be stored and when it must be recalled, but is only sporadic when the information merely has to be maintained. Spiking is not sustained during the delay.
The team also trained a software "decoder" to read out the working memory information from these spiking measurements. The decoder was highly accurate when spiking was high, but not when spiking was low, as during the delay. This suggests that spiking does not carry the information during the delay period.
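As an illustration of that decoding step (a sketch of the general technique, not the authors' actual analysis pipeline), one can train a cross-validated linear classifier on population spike counts in each time bin and watch accuracy fall when spiking becomes sparse. The array shapes, bin counts, and the choice of logistic regression below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: spike counts for 200 neurons on 400 trials,
# binned over time across the sample, delay, and test epochs.
rng = np.random.default_rng(0)
n_trials, n_neurons, n_bins = 400, 200, 60
spike_counts = rng.poisson(2.0, size=(n_trials, n_neurons, n_bins))
image_id = rng.integers(0, 2, size=n_trials)   # which sample image was shown

def decode_over_time(counts, labels):
    """Cross-validated decoding accuracy of the remembered image, per time bin."""
    accuracy = []
    for t in range(counts.shape[2]):
        X = counts[:, :, t]                     # trials x neurons at time t
        clf = LogisticRegression(max_iter=1000)
        accuracy.append(cross_val_score(clf, X, labels, cv=5).mean())
    return np.array(accuracy)

acc = decode_over_time(spike_counts, image_id)
# With this random synthetic data, accuracy hovers near chance (0.5).
# In the real recordings, accuracy was high around the sample and the test,
# but dropped during the delay, when spiking was sparse.
print(acc.round(2))
```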
But this raises a key question: if spiking can't hold the information, what can? Researchers including Mark Stokes of the University of Oxford have proposed that changes in the relative strength, or "weights," of synapses could store the information instead.
The MIT team put this idea to the test by computationally modeling neural networks embodying two versions of each main theory. As with the real animals, the machine-learning networks were trained to perform the same working memory task and to output neural activity that could likewise be interpreted by a decoder. The result was that the networks that allowed short-term synaptic plasticity to encode information spiked when the actual brain spiked and fell quiet when it didn't.
The networks that relied on sustained spiking to maintain the memory spiked all the time, including when the natural brain was not spiking. And the decoder results showed that accuracy dropped during the delay in the synaptic plasticity models but remained unnaturally high in the persistent-spiking models.
In another layer of analysis, the team built a decoder that read out the information from the synaptic weights instead. They found that during the delay, the synapses represented the working memory information even though the spiking did not.
Of the two model versions featuring short-term synaptic plasticity, the most realistic, called "PS-Hebb," includes a negative feedback loop that keeps the neural network stable and robust, Kozachkov said.
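As a rough illustration of the short-term synaptic plasticity idea (a toy sketch of the general mechanism, not the authors' PS or PS-Hebb models), the recurrent weights of a rate network can be given a fast, Hebbian, decaying component driven by recent activity, so a memory trace can persist in the weights even while firing falls off. All names, constants, and time scales below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                              # number of recurrent units
W_slow = rng.normal(0, 1.0 / np.sqrt(n), (n, n))     # fixed "structural" weights

def step(r, W_fast, inp, dt=0.01, tau=0.1, tau_w=1.0, eta=0.5):
    """One Euler step of a rate network with a fast Hebbian weight component.

    r      -- unit firing rates
    W_fast -- short-term synaptic component; decays back toward zero
    inp    -- external input (the sample image during encoding, zero in the delay)
    """
    W = W_slow + W_fast
    r = r + (dt / tau) * (-r + np.tanh(W @ r + inp))
    # Hebbian fast-weight update: grows with coactivity, decays over ~tau_w seconds.
    W_fast = W_fast + dt * (eta * np.outer(r, r) - W_fast / tau_w)
    return r, W_fast

# Present a stimulus briefly, then run a silent delay period.
r, W_fast = np.zeros(n), np.zeros((n, n))
stimulus = rng.normal(0.0, 1.0, n)
for _ in range(50):                                  # sample epoch: input drives activity
    r, W_fast = step(r, W_fast, stimulus)
for _ in range(100):                                 # delay epoch: no input
    r, W_fast = step(r, W_fast, np.zeros(n))
# Even if the rates r decay toward zero, W_fast retains a stimulus-specific
# imprint -- the kind of trace a decoder trained on synaptic weights could read.
print(np.abs(r).mean(), np.abs(W_fast).mean())
```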
The functioning of working memory
Beyond better matching nature, the synaptic plasticity models conferred other benefits that likely matter to real brains. One is that the plasticity models retained information in their synaptic weights even after as many as half of their artificial neurons had been "ablated." The persistent-activity models broke down after losing just 10-20 percent of their synapses.
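The robustness test can be pictured as silencing a random subset of model units and re-measuring how well the remembered image can still be decoded. The sketch below (continuing the toy rate-network example above; the function name and ablation levels are illustrative, not the paper's procedure) zeroes out units along with their incoming and outgoing synapses.

```python
import numpy as np

def ablate_units(W, rates, fraction, rng):
    """Silence a random fraction of units: zero their rates and their synapses."""
    n = W.shape[0]
    dead = rng.choice(n, size=int(fraction * n), replace=False)
    W = W.copy()
    W[dead, :] = 0.0        # remove outgoing synapses
    W[:, dead] = 0.0        # remove incoming synapses
    rates = rates.copy()
    rates[dead] = 0.0
    return W, rates

# Usage sketch: for each ablation level, re-run the delay period and re-decode
# the remembered image from the surviving synaptic weights. In the study, the
# plasticity models still held the information after up to ~50% of units were
# ablated, while the persistent-activity models collapsed after a 10-20% loss.
rng = np.random.default_rng(2)
W_damaged, r_damaged = ablate_units(np.ones((100, 100)), np.ones(100), 0.5, rng)
print((W_damaged == 0).mean(), (r_damaged == 0).mean())
```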
And, Miller added, intermittent spiking consumes less energy than constant spiking. Furthermore, brief bursts of spiking, rather than sustained spiking, leave room in time for more than one item to be held in memory.
Studies have shown that people can hold up to four different items in working memory. Miller's lab plans new experiments to determine whether models featuring intermittent spiking and synaptic-weight-based information storage also match real neural data when animals must remember multiple things rather than a single image.
In addition to Miller, Kozachkov, Tauber, and Slotine, the paper's other authors are Mikael Lundqvist and Scott Brincat.
Paper: "Robust and brain-like working memory through short-term synaptic plasticity," PLOS Computational Biology.