Back in the day when information was fed into computers on punch cards, the data was the holes in the paper, not the paper itself. A far out (but similar) theory of how memories are stored in the brain just got a lot more support [Neuron vol. 93 pp. 6 – 8, 132 – 146 ’17].
The theory says that memories are stored in the proteins and sugar polymers surrounding neurons rather than the neurons themselves. These go by the name of extracellular matrix, and memories are the holes drilled in it which allow synapses to form.
Here’s some stuff I wrote about the idea when I first ran across it two years ago.
——
An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just where, and in what chemical form, memories are stored in the brain. I find the idea plausible and extremely surprising.
You won’t find the background material to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories from the past half century, each seriously flawed in my opinion, of how and where memory is stored in the brain.
#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory), in which the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3 … which excited neuron #n, which excited #1 again. Plausible, in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down with general anesthesia without erasing memory. However, the RAM in the computers of the 70s used the localized buildup of charge to store bits and bytes. Since charge would leak away from where it was stored, it had to be refreshed constantly (e.g. at least 12 times a second) or it would be lost. Yet another reason data should always be backed up frequently.
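The delay-line idea is easy to sketch in code. Here is a toy model (my own illustration, not from the paper): the stored bits circulate through a fixed-length loop, and each bit emerging at the output is immediately re-injected at the input. The "memory" exists only as long as the loop keeps running, which is exactly why anesthesia is such a problem for the reverberating-circuit theory.

```python
from collections import deque

def make_delay_line(bits):
    """A toy delay line: bits circulate through a fixed-length loop."""
    return deque(bits)

def tick(line):
    """One time step: the bit emerging at the output is re-injected."""
    bit = line.popleft()   # bit arrives at the output end...
    line.append(bit)       # ...and is immediately fed back into the input
    return bit

line = make_delay_line([1, 0, 1, 1])
readout = [tick(line) for _ in range(8)]
print(readout)  # the stored pattern repeats: [1, 0, 1, 1, 1, 0, 1, 1]
```

Stop calling `tick` (the analogue of shutting the circuit down) and there is nowhere for the pattern to live.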
#2 CaMKII — more plausible. There’s lots of it in the brain (2% of all protein in an area called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. For it to start doing so, calcium levels inside the neuron must rise. The enzyme is complicated, comprising 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate and reset the switch (actually there must be). Also, proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).
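The switch-like behavior can be captured in a minimal toy model (rate constants and threshold here are purely illustrative, not measured values): autophosphorylation runs when calcium is present or when enough phosphates are already on board, while phosphatases constantly strip phosphate off. A brief calcium pulse flips the system into a self-sustaining "on" state.

```python
def step(p, calcium, k_auto=0.5, k_phosphatase=0.2, threshold=3, total=36):
    """One toy update of phosphorylated sites on a 12-subunit holoenzyme.

    total = 12 subunits x up to 3 phosphates each; all rates hypothetical.
    """
    # autophosphorylation needs calcium OR enough existing phosphates
    gain = k_auto * (total - p) if (calcium or p >= threshold) else 0.0
    loss = k_phosphatase * p          # phosphatases constantly remove phosphate
    return max(0.0, min(total, p + gain - loss))

# a calcium pulse flips the switch "on"...
p = 0.0
for _ in range(5):
    p = step(p, calcium=True)
# ...and the state persists after calcium returns to baseline
for _ in range(50):
    p = step(p, calcium=False)
print(round(p, 1))   # settles near its "on" fixed point (~25.7 of 36 sites)
```

An enzyme that was never pulsed stays at zero, which is the bistability the theory needs. What the model also makes obvious is the objection in the text: turn over the protein itself (reset `p` to 0) and the stored bit is gone.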
#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.
This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory, in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The remaining 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true; in the developing brain it’s excitatory.
So the authors of the paper worked on why this should be. GABA opens channels in the neuronal membrane to the chloride ion. When chloride flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that the effect depends on the negative ions (mostly proteins) inside the cell and outside it (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.
For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.
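For the quantitatively inclined, the sign of GABA’s effect falls out of the Nernst equation: whether opening a chloride channel hyperpolarizes or depolarizes the cell depends on whether chloride’s reversal potential sits below or above the resting potential, which in turn is set by the chloride concentrations on the two sides of the membrane. The concentrations below are illustrative round numbers, not measurements from the paper.

```python
import math

def nernst_mV(conc_out_mM, conc_in_mM, z=-1, temp_C=37.0):
    """Nernst reversal potential (millivolts) for an ion of valence z."""
    R, F = 8.314, 96485.0                 # J/(mol*K), C/mol
    T = temp_C + 273.15
    return 1000.0 * (R * T / (z * F)) * math.log(conc_out_mM / conc_in_mM)

# illustrative chloride concentrations in mM
adult  = nernst_mV(conc_out_mM=120, conc_in_mM=5)    # low internal Cl-
infant = nernst_mV(conc_out_mM=120, conc_in_mM=30)   # high internal Cl-

resting_mV = -65.0  # a typical resting potential
print(round(adult, 1), round(infant, 1))  # roughly -84.9 and -37.0
# E_Cl below rest: opening Cl- channels hyperpolarizes (inhibitory, adult)
# E_Cl above rest: opening them depolarizes (excitatory, developing brain)
```

Shift the intracellular (or local extracellular) negative-ion balance and the same chloride channel flips from inhibitory to excitatory — which is the lever the authors propose the sulfated proteoglycans are pulling.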
Here’s what the authors say (in passing): “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation: that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, what the local variations in the concentration of its components are, and whether sulfate is added to or removed from them, and if so by what and how quickly.
—-
So how does the new work support this idea? It involves a structure I’ve never talked about — the lysosome (for more info see https://en.wikipedia.org/wiki/Lysosome). It’s basically a bag of at least 40 digestive and synthetic enzymes inside the cell, which chops up anything brought to it (e.g. bacteria). Mutations in the enzymes cause all sorts of (fortunately rare) neurologic diseases — mucopolysaccharidoses, lipid storage diseases (Gaucher’s, Farber’s); the list goes on and on.
So I’ve always thought of the structure as a Pandora’s box best kept closed. I always thought of lysosomes as confined to the cell body, but according to this paper they’re also found in dendrites. Even more interesting, a rather unphysiologic treatment of neurons in culture (depolarization by high potassium) causes the lysosomes to migrate to the neuronal membrane and release their contents outside the cell. One enzyme released is cathepsin B, a proteolytic enzyme which chops up TIMP1 outside the cell. So what? TIMP1 is an endogenous inhibitor of the Matrix MetalloProteinases (MMPs), which break down the extracellular matrix. So what?
Are neurons ever depolarized by natural events? Constantly: by synaptic transmission, by action potentials, and spontaneously. So here we have a way that neuronal activity can cause holes in the extracellular matrix — the holes in the punch cards, if you will.
Speculation? Of course. But that’s the fun of reading this stuff. As Mark Twain said, “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”
Comments
“Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).”
You said CaMKII works as a complex with 12 copies. It’s not so hard to imagine the switch persisting if the copies are always coming and going a few at a time. A new one arrives, gets its phosphorylation state changed, and replaces a current member of the complex.
It could be analogous to the “latch” circuit of an SRAM computer memory. Electrons are constantly leaking away and being replaced, but the state of a given latch is likely to persist for centuries.
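[The commenter’s latch analogy can be sketched in code. In this toy model (purely illustrative, not a circuit simulation), cross-coupled feedback keeps pulling the stored node back toward whichever rail it is closest to, so the state survives even though the underlying charge is perpetually leaking and being replenished.]

```python
import random

def latch_step(v, noise=0.05):
    """One step of a toy bistable latch.

    Feedback drives the node voltage toward the nearer rail (0 or 1),
    while random leakage perturbs it. Individual charge comes and goes;
    the *state* persists.
    """
    target = 1.0 if v > 0.5 else 0.0          # feedback restores the rail
    v += 0.5 * (target - v)                   # restoring pull
    v += random.uniform(-noise, noise)        # leakage / thermal noise
    return min(1.0, max(0.0, v))

random.seed(0)
v = 0.9                                        # latch written to "1"
for _ in range(10_000):
    v = latch_step(v)
print(v > 0.5)  # the bit survives constant charge turnover: True
```

Swap “voltage” for “fraction of phosphorylated subunits” and this is the same argument as the CaMKII switch above: a stable state maintained by components that are individually transient.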
Another analogy is a room full of babies that are screaming because the other babies are screaming. You can gradually replace every baby and the room remains loud. In fact it is likely to remain loud longer if replacement goes at just the right speed, slow enough for the newcomer to be irritated (phosphorylation adjustment), but not so slow that a baby develops fatigue and stops screaming (chemical damage to protein unit).
As a graduate student, the thing I hear most often in conversations is the concept of memory as a discrete “packet” of information kept at a specific locus somewhere in the brain (the hippocampus) or on the neuron (within a synapse or the local ECM, in this case). I think this is implicit in theories #2 and #3 that you’ve mentioned. The idea fits with how we think of a memory as a discrete representation of a specific snapshot of the past, but I can’t think of a popular theory that has been backed by evidence demonstrating its sufficiency (vs. the necessity of CaMKII or the hippocampal region, which is inevitably tested by blunt genetic or surgical methods that likely induce sweeping perturbations of brain function).
I think there is another way to conceptualize memory storage that is relatively uncharacterized (to me, anyway; I’m a cell biologist, not a systems or circuit guy) but very plausible. We know from fMRI that information processing and recall in the brain require the collaboration of various brain regions that are likely connected by specific neuronal circuits. Moreover, within the last few years, groups have shown using sophisticated live imaging that motor task training requires specific synapse formation in neurons of the motor cortex, and this seems to be reversible when one inhibits the formation of those synapses by ectopically activating inhibitory proteins with light (the motor cortex makes a good model here because it is optically accessible, with a correlated training paradigm).
Because synapses are essentially physical connections between neighboring neurons, through which information is passed down a circuit (converted from electrical to chemical and back to electrical impulses at each neuronal “unit”), I think it’s worth considering that memories could actually be represented within neural circuits themselves, connecting all of the individual brain regions responsible for processing the various classes of information (sight, sound, written and spoken language, feelings), and possibly accounting for the richness of memories as we recall them.
Remember caroling last Christmas? That could be a circuit, specifically uniting your senses of hearing, temperature if it was cold out, sight if it was dark outside with gentle snowfall, etc. Or the delicious dinner you had afterward? What if that’s another circuit, bridging your individual impressions of the tastes and smells and comfort of being at home/with family. To form these circuits would only require the addition of new synapses here and there, since each neuron can in theory synapse with hundreds or thousands of neighbors. I’m not sure what mechanism would induce recall, but maybe it would mimic the stimulation of these spatially disparate information processing systems in the same way as occurred initially.
I don’t think the technology exists to test this directly yet, but I’m interested in hearing people’s thoughts.
TS — interesting comment, thanks. Being from outside the field, be very careful about what you accept as true from the fMRI literature. Until recently researchers weren’t checking to see whether subjects were asleep in the scanner (over half were during one test), and the statistical tests used to look for significant findings had errors in the software and were used inappropriately. For details (including references) see https://luysii.wordpress.com/2016/07/17/functional-mri-research-is-a-scientific-sewer/
What you describe is the network theory of memory, as opposed to what has been called the grandmother cell — a single neuron responsible for storing the concept “grandmother.” Similar reasoning applies to memories.