Tag Archives: wiring diagram of the brain

Silent synapses

For about the past 20 years we’ve been able to observe dendritic spines forming synapses in the living (rodent) brain — for months!  In 1970, if you told me that, I’d have said you were smoking something.  The surprising finding is that dendritic spines are a work in progress, being newly formed and removed all the time.  The early literature (e.g. 10 years ago) is contentious about how long a given spine lasts, but most agree that spine plasticity is present every time it’s looked for.  Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ].

It is yet another reason why a wiring diagram of the brain wouldn’t help you understand it.  For much more on this please see — https://luysii.wordpress.com/2021/04/25/the-wiring-diagram-of-the-brain-takes-another-hit/

Not only that, but not all of these new synapses are functional, i.e. stimulating the presynaptic side doesn’t result in a response in the postsynaptic side.  These are the silent synapses.  This is thought to be due to a lack of postsynaptic ion channels which can respond to released neurotransmitter.  In particular, AMPAR ion channels, which respond to glutamic acid, are thought to be absent in the silent synapse.  Only after stimulation of NMDAR ion channels (which are thought to be present) are AMPAR ion channels inserted into the postsynaptic membrane, converting it to an active synapse.

Obviously, in the fetal brain most synapses are newly formed, hence likely to be silent. It was thought that silent synapses are few and far between in the adult brain.

Not so, says Nature vol. 612 pp. 323 – 327 ’22.  They used super-resolution protein imaging to study some 2,234 synapses in layer V pyramidal neurons in the adult mouse primary visual cortex (probably the best studied piece of cortex in the brain).  Amazingly, some 25% of these synapses lacked AMPARs and were presumably silent.  Most of them were found where you’d expect — at the tips of dendritic filopodia, which are moving around looking to form a new synapse.

If this is generally true of the cerebral cortex, it helps explain our ability to learn.  In this sense the brain is both similar and not similar to neural nets, which learn by increasing or decreasing the efficiency (weights) of connections between ‘neurons’ as they are exposed to stimuli with feedback.  The connections (synapses) are fixed in neural nets, but the individual synapses are not fixed in the human brain.  However, if you think of all the connections between two neurons in our brains as a ‘synapse’, then efficiency is clearly being adjusted as synapses form and die.
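To make the comparison concrete, here is a minimal sketch in Python, with made-up numbers not taken from any of the papers above: in an artificial net the connection between two units is permanent and learning just nudges its weight, while between two real neurons the effective ‘weight’ is the sum of individual synapses that can appear (sometimes silent at first), be unsilenced, or be pruned.

```python
# A minimal sketch, not a model from any of the papers above; all numbers invented.

# Artificial net: the connection always exists; learning nudges one number.
weight = 0.5
learning_rate, error_signal = 0.1, 0.3
weight += learning_rate * error_signal          # weight update; topology unchanged

# Brain-like pair of neurons: the 'connection' is whatever synapses currently exist.
synapses = [0.2, 0.4, 0.1]                      # strengths of current synapses
synapses.append(0.0)                            # a newly formed, silent synapse (no AMPARs yet)
synapses[-1] = 0.3                              # NMDAR activity 'unsilences' it (AMPARs inserted)
synapses.pop(0)                                 # an old spine retracts
effective_weight = sum(synapses)                # what the pair looks like from the outside

print(f"net weight: {weight:.2f}, effective biological weight: {effective_weight:.2f}")
```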

Synapses on Axons!

Every now and then a paper comes along which shows how little we really know about the brain and how it works.  Even better, it demands a major rethink of what we thought we knew.  Such a paper is — https://www.cell.com/neuron/fulltext/S0896-6273(22)00656-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627322006560%3Fshowall%3Dtrue

which I doubt you can get unless you are a subscriber to Neuron.    What [ Neuron vol. 110 pp. 2889 – 2890 ’22 ] does is pretty much prove that an axon from one neuron can synapse on an axon of another neuron.  When one neuron is stimulated the axon of another neuron fires an impulse (an action potential) as measured by patch clamping the second axon.  This happens way too fast after stimulation to be explained by volume neurotransmission (about which more later).  Such synapses are well known on the initial segment of the axon as it leaves the cell body (the soma) of the neuron.

But these synapses occur very near the end of the axon, in the part of the brain the parent neuron (a midbrain dopamine neuron) innervates (the striatum).  The neurotransmitter involved is acetylcholine, and the striatum has lots of neurons using acetylcholine as a neurotransmitter.  There are two basic types of acetylcholine receptor in the brain — muscarinic and nicotinic.  Muscarinic receptors are slow acting and change the internal chemistry of the neuron.  This takes time.  Nicotinic receptors are ion channels, and when they open, an action potential is nearly immediate.  Also, a drug blocking the nicotinic acetylcholine receptor blocks action potential formation after stimulation.

Why is this work so radical? (which of course means that it must be repeated by others).  It implies that all sorts of computations in the brain can occur locally at the end of an axon, far away from the neuron cell body which is supposed to be in total control of it.  The computations could occur without any input from the cell body, and spontaneous activity of the axons they studied occurs without an impulse from the cell body.  If replicated, we’re going to have to rethink our models of how the brain actually works.  The authors note that they have just studied one system, but other workers are certain to study others, to find out how general this is.

Neuropil is an old term for areas of the brain with few neuronal or glial cell bodies, but lots of neural and glial processes.  It never was much studied, and our brain has lots of it.  Perhaps it is actually performing computations, in which case it must be added to the 80 billion neurons we are thought to have.

Now for a bit more detail

The cell body of the parent neuron of the axon being synapsed on uses dopamine as a neurotransmitter.  It sits in the pars compacta of the substantia nigra, a fair piece away from the target they studied.  “Individual neurons of the pars compacta are calculated to give rise to 4.5 meters of axons once all the branches are summed” — [ Neuron vol. 96 p. 651 ’17 ].  These axons release dopamine all over the brain, not necessarily synapsing with a neuron.  So when that single neuron fires, dopamine is likely to bathe every neuron in the brain.  This is called volume neurotransmission, which is important because the following neurotransmitters use it — dopamine, serotonin, acetylcholine and norepinephrine.  Each has only a small number of cells using it as a transmitter.  The ramification of these neurons is incredible.

So now you see why massive release of any of the 4 neurotransmitters mentioned (norepinephrine, serotonin, dopamine, acetylcholine) would have profound effects on brain states.  The four are vitally involved in emotional state and psychiatric disease.  The SSRIs treat depression by preventing reuptake of released serotonin.  Cocaine has similar effects on dopamine.  The list goes on and on and on.

Axons synapsing on other axons is yet another reason to modify our rather tattered wiring diagram of the brain — https://luysii.wordpress.com/2011/04/10/would-a-wiring-diagram-of-the-brain-help-you-understand-it/

Parts of your brain that move

Sick of COVID19 and omicron are you?  So am I.  It’s time for some hard core neuroscience.  Looking at slides or electron micrographs gives a very static picture of the brain.  There are parts of the brain that move.

Microglia are the macrophages of the brain.  They’re actually rather creepy, extending and retracting processes and feeling up neurons, removing synapses from processes.  They use receptors for ATP and ADP to detect when a neuron is in trouble.  A new cellular specialization is described — Somatic Purinergic Junctions — a combination of mitochondria, reticular membrane structures, vesicle-like membrane structures and clusters of a particular voltage gated potassium channel (Kv2.1).  You can actually watch this happening.   [ Science vol. 367 pp. 510 – 511, 528 – 537 ’20 ].

For about the past 20 years we’ve been able to observe dendritic spines for months in the living (rodent) brain.  In 1970, if you told me that, I’d have said you were smoking something.  The surprising finding is that dendritic spines are a work in progress, being newly formed and removed all the time.  The early literature (e.g. 10 years ago) is contentious about how long a given spine lasts, but most agree that spine plasticity is present every time it’s looked for.  Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ].

It is yet another reason why a wiring diagram of the brain wouldn’t help you understand it.   For much more on this please see — https://luysii.wordpress.com/2021/04/25/the-wiring-diagram-of-the-brain-takes-another-hit/

Now on to Long Term Potentiation — https://en.wikipedia.org/wiki/Long-term_potentiation.  This is basically a persistent strengthening of synapses based on recent patterns of activity: patterns of synaptic activity that produce a long-lasting increase in signal transmission between two neurons.  One of the things that happens with long term potentiation is that the potentiated dendritic spines enlarge.  Now we know that there are all sorts of proteins crossing the synaptic cleft between the presynaptic axon terminal and the postsynaptic dendritic spine.  They hold the two together.

So the authors of Nature vol. 600 pp. 696 – 689 ’21 wondered whether the enlargement of the spine changed neurotransmission.  They studied CA3 neurons in slice culture preparations of the rat hippocampus, in which synapses form between axons and dendrites.  They mimicked LTP (and produced dendritic spine enlargement) by two-photon uncaging of glutamic acid.  Spine enlargement ensued, which then pushed on the presynaptic bouton.  This caused increased release of glutamic acid by the presynaptic neuron.

This may actually be the mechanism behind long term potentiation.

The wiring diagram of the brain takes another hit

Is there anything duller than wire? It conducts electricity. That’s about it. Copper wires conduct better than aluminum wires. So what.  End of story.

That’s pretty much the way we thought of axons, the wires of the nervous system. Thicker axons conduct faster than thin ones, and insulated axons conduct faster than non-insulated ones. The insulation is made out of fat and called myelin.  Just as fat in meat looks white, a bunch of axons sheathed by myelin looks white, which is how white matter got its name. 

Those of you old enough to remember vinyl records, know just how different a record sounds when played at the wrong speed.  That’s what an MS patient has to deal with.  The disease attacks white matter mostly, which means that when myelin is lost or damaged, nerve impulses slow down.  Information gets through, but it’s garbled. 

So we knew that losing myelin causes trouble, but other than that, it was assumed that myelin, once laid down by the cell producing it (the oligodendrocyte) was stable unless trauma or disease damaged it. 

That was until adaptive myelination came along roughly 10 years ago.  There is an excellent review [ Neuron vol. 109 pp. 1258 – 1273 ’21 ] which is irritating to read if you are looking for solid experimental facts.  This is not the fault of the authors.  They are trying to picture the frontier of a fast moving field.  By nature there is a lot of speculation in such an article, which would be a lot shorter (and duller) without it.  

However the following words occur frequently — could (43), has been understood (3), suggested (6), would (12), may (39) and is thought to (2).

The cells making the myelin are just that: cells.  Since the myelin they make is confined within them, a myelinated axon looks like a string of hot dogs, each dog the province of one oligodendrocyte.  The space between the hot dogs is called the node (of Ranvier), and this is why myelinated axons conduct faster.  The impulse jumps between the nodes (saltatory conduction).

Adaptive myelination comes in when you stimulate an axon — the myelin gets thicker, meaning that it conducts faster.  Also, neuronal activity is held to alter myelin geometry (the internodes between nodes get longer, meaning the axon conducts faster).

Not all axons are myelinated, and activity ‘is thought to’ increase myelination of them. 
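For readers who like numbers, here is a toy sketch of why caliber and myelin matter for speed. The scaling rules (conduction velocity roughly proportional to diameter for myelinated axons, roughly to the square root of diameter for unmyelinated ones) are textbook approximations; the constants and the function name below are my own illustrative assumptions, not figures from the Neuron review.

```python
import math

def conduction_velocity(diameter_um: float, myelinated: bool) -> float:
    """Very rough conduction velocity estimate in meters per second (illustrative constants)."""
    if myelinated:
        return 6.0 * diameter_um            # ~6 m/s per micron of diameter: a common rule of thumb
    return 2.0 * math.sqrt(diameter_um)     # unmyelinated fibers: much slower, ~sqrt(diameter) scaling

for d_um in (0.5, 1.0, 5.0):
    print(f"{d_um} um axon: myelinated ~{conduction_velocity(d_um, True):.1f} m/s, "
          f"unmyelinated ~{conduction_velocity(d_um, False):.1f} m/s")
```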

This has extremely profound consequences for how we think the brain works.  At the end of this post you’ll find an older one arguing that a wiring diagram of the brain (how the neurons are connected to each other) is far from enough to understand the brain.  But that post assumes that the wires are pretty much fixed in how they act.  The Neuron article shows that this is wrong.

Imagine if the connections between transistors on a computer chip grew and shrank depending on how much current flowed through them.  That appears to be the case for the brain.

Here’s the old post

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every one of the 10^23 molecules of a mole or so of gas in a container is brought home.  Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain.  They want a ‘wiring diagram’ of the brain, i.e. a list of every neuron, and for each neuron a list of the neurons it receives synapses from, and a third list of the neurons it sends synapses to.  For the non-neuroscientist — the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana).  This is why you need the second and third lists.
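For the computationally inclined, here is a minimal sketch of what such a wiring diagram amounts to as a data structure: a directed graph stored as two adjacency lists per neuron, one for inputs and one for outputs, precisely because synapses are (to a first approximation) one-way. The neuron names are of course invented.

```python
from collections import defaultdict

inputs = defaultdict(set)    # neuron -> neurons that synapse onto it (the second list)
outputs = defaultdict(set)   # neuron -> neurons it synapses onto (the third list)

def add_synapse(pre: str, post: str) -> None:
    """Record a one-way connection from the presynaptic to the postsynaptic neuron."""
    outputs[pre].add(post)
    inputs[post].add(pre)

add_synapse("A", "B")
add_synapse("C", "B")
add_synapse("B", "A")                 # connections need not be reciprocal

print(sorted(inputs["B"]))            # ['A', 'C'] -- who talks to B
print(sorted(outputs["B"]))           # ['A']      -- who B talks to
```

At the scale quoted below (80 billion neurons, 150 trillion synapses), merely storing the bare edge list at 16 bytes per synapse would run to a couple of petabytes, before annotating anything about what each synapse does.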

Clearly a monumental undertaking and one which grows more monumental with the passage of time.  Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them).  This is where the neurological urban myth of the loss of 10,000 neurons each day came from.  For details see https://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 ’11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses.  Well, that’s not a mole of synapses but it is a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other to at least get a schematic diagram.

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it.  I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnection between neurons and just how many there are.  Figure 2 (p. 179) is particularly revealing, showing a 3-dimensional reconstruction using the high resolution obtainable with the electron microscope.  Stare at figure 2f for a while and try to figure out what’s going on.  It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing.  There are at least 3 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses).  A wiring diagram alone won’t tell you this.

2. When I was starting out, the following statement would have seemed impossible.  It is now possible to watch synapses in the living brain of an awake animal for extended periods of time.  But we now know that synapses come and go in the brain.  The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times.  Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ].  So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses.  Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses (it’s called volume neurotransmission).  Their importance in psychiatry and drug addiction is unparalleled.  Examples of such volume transmitters include serotonin, dopamine and norepinephrine.  Drugs of abuse affecting their action include cocaine and amphetamine.  Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg.  This is of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles.  There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse.  The construction of the brain from the fertilized egg must be in some sense statistical.  Remarkable that it happens at all.  Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.

 

 

The brain is far more wired up than we thought

The hardest thing in experimental psychology is being smart enough to think of something truly clever.  Here is a beauty, showing that the brain is far more wired together than we ever thought.

First some background.  You’ve probably heard of the blind spot (although you’ve never seen it).  It’s the part of your eye where all the nerve fibers from the sensory part of the eye (the retina) are collected together, forming the optic nerve.  Through an ophthalmoscope it appears as a white oval (largest diameter under 2 millimeters).  It’s white because it’s all nerve fibers (1,000,000 of them) with no sensory retina overlying it.  So if you shine a very narrow light on it, you’ll see nothing.  That’s the blind spot.

Have a look at https://en.wikipedia.org/wiki/Visual_system.  Both eyes project to both halves of the brain.  Because the blind spot is off to one side in your visual field, the other eye maps a different part of its retina to the same area of the brain.  But if you patch that other eye, the corresponding area on one side of the brain gets no input.

 

 In the healthy visual system, the cortical representation of the blind spot (BS) of the right eye receives information from the left eye only (and vice versa). Therefore, if the left eye is patched, the cortex corresponding to the BS of the right eye is deprived of its normal bottom-up input.

Proc. Natl. Acad. Sci. vol. 117 pp. 11059 – 11067 ’20 https://www.pnas.org/content/pnas/117/20/11059.full.pdf

Hopefully you’ll be able to follow the link and look at figure 1 p. 11060 which will explain things.

Patching the left eye deprives that area of visual cortex of any input at all.

Here comes the brunt of the paper — within minutes of patching the left eye, the cortical representation of that spot begins to widen.  It starts responding to stimuli from areas outside its usual receptive field.

Nerves just don’t grow that fast, so the connections have to have been there to begin with.   So the brain is more wired together than we thought.  Perhaps this is just true of the visual system.

If not, the work has profound implications for neurologic rehabilitation.

I do apologize for not being able to explain this better, but the work is sufficiently important that you should know about it.

Addendum 4 June — here’s another shot at explaining things.

    • As you look straight ahead, light falls on the part of your retina with the highest spatial resolution (the macula). The blind spot due to the optic nerve is found closer to your nose, which means that in the right eye, the retina surrounding the blind spot ‘sees’ light coming from toward your ear. Light from the same direction (your right ear) will NOT fall on the optic nerve of your left eye (which is toward your nose), so information from that area gets back to the brain (which is why you don’t see your blind spot).

      Now visual space (say looking toward the right) is sent back to the brain coherently, so that areas of visual space transmitted by either eye go to the same place in the brain.

      So if you now cover your left eye, there is an area of the brain (corresponding to the blind spot of the right eye) which is getting no information from the retina at all. So it is effectively blind. Technology permits us to actively stimulate the retina anywhere we want. We also have ways to measure activity of the brain in any small area (functional MRI). Activity increases with visual input.

      Now with the left eye patched, stimulate with light directed at the right eye’s blind spot. Nothing happens (no increase in activity) in the cortical area representing that part of the visual field. It isn’t getting any input. So it is possible to accurately map the representation of the right eye’s blind spot in the brain in terms of the brain areas responding to it.

      Next visually stimulate the right eye with light hitting the retina adjacent to the right eye’s blind spot. Initially the blind spot area of the brain shows no activity. After just a few minutes, the area of the brain for the right eye’s blind spot begins to respond to stimuli it never responded to initially. This implies that those two areas of the brain have connections between them, that were always there, as new nerve processes just don’t grow that fast.

      To be clever enough to think of a way to show this is truly brilliant. Bravo to the authors.

 

Why don’t serotonin neurons die like dopamine neurons do in Parkinson’s disease

Say what ?  “This proportion will likely be higher in rat dopaminergic neurons, which have even larger axonal arbors with ~500,000 presynapses, or in human serotonergic neurons, which are estimated to extend axons for 350 meters” – from [ Science vol. 366 3aaw9997 p. 4 ’19 ]

I thought I was reasonably well informed, but I found these numbers astounding, so I looked up the papers.  Here is how such a statement can be made, with chapter and verse.

“The validity of the single-cell axon length measurements for dopaminergic and cholinergic neurons can be independently checked with calculations based on the total volume of the target territory, the density of the particular type of axon (axon length per volume of target territory), and the number of neuronal cell bodies giving rise to that type of axon.  These population analyses are made possible by the availability of antibodies that localize to different types of axons: anti-ChAT for cholinergic axons (also visualized with acetylcholine esterase histochemistry), anti-tyrosine hydroxylase for striatal dopaminergic axons, and anti-serotonin for serotonergic axons.

The human data for axon density and neuron counts have been published for forebrain cholinergic neurons and for serotonergic neurons projecting from the dorsal raphe nucleus to the cortex, and cortical volume estimates for humans are available from MRI analyses; forebrain cholinergic neuron data is also available for chimpanzees. These calculations lead to axon length estimates of 107 m and 31 m, respectively, for human and chimpanzee forebrain cholinergic neurons, and an axon length estimate of 170–348 meters for human serotonergic neurons.”

H. Wu, J. Williams, J. Nathans, Complete morphologies of basal forebrain cholinergic neurons in the mouse. eLife 3, e02444 (2014). doi: 10.7554/eLife.02444; pmid: 24894464
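Here is a back-of-the-envelope sketch of the population calculation described in the quote: axon length per neuron = (axon density, i.e. length per unit volume) x (target volume) / (number of parent neurons). The input numbers below are placeholders I’ve chosen purely to illustrate the arithmetic; they are not the published measurements, though the output lands in the same hundreds-of-meters range quoted above.

```python
# Placeholders only -- NOT the published measurements.
axon_density_m_per_mm3 = 100.0      # assumed: meters of serotonergic axon per mm^3 of cortex
cortical_volume_mm3 = 500_000.0     # assumed: human cortical volume (from MRI), in mm^3
raphe_neurons = 170_000.0           # assumed: serotonergic neurons projecting to cortex

total_axon_length_m = axon_density_m_per_mm3 * cortical_volume_mm3
per_neuron_m = total_axon_length_m / raphe_neurons
print(f"~{per_neuron_m:.0f} meters of axon per neuron")   # lands in the hundreds of meters
```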

How in the world can these neurons survive as long as they do?

Not all of them do.  At birth there are 450,000 neurons in the substantia nigra (one side or both sides?), declining to 275,000 by age 60.  Patients with Parkinsonism all had cell counts below 140,000 [ Ann. Neurol. vol. 24 pp. 574 – 576 ’88 ].  Catecholamines such as dopamine and norepinephrine are easily oxidized to quinones, and this may be the ‘black stuff’ in the substantia nigra (which is Latin for black stuff).

Here are the numbers for serotonin neurons in the few brain nuclei in which they are found (chiefly the dorsal raphe nucleus).  Fewer than for dopamine — a mere 165,000 +/- 34,000 — https://www.ncbi.nlm.nih.gov/pubmed

So despite being too small to be seen with the naked eye, yet having a total axon length of three or four football fields, these neurons appear to last as long as we do.  Have we missed a neurological disease due to loss of serotonin neurons?

Why should the axons of dopamine, serotonin and norepinephrine neurons be so long and branch so widely?  Because they release their transmitters diffusely in the brain.  Diffusion alone is too slow, so the axonal apparatus must get the transmitter there and release it locally into the brain’s extracellular space; no postsynaptic specializations are present in volume neurotransmission — that’s the point.  This is one of the reasons that a wiring diagram of the brain isn’t enough — https://luysii.wordpress.com/2011/04/10/would-a-wiring-diagram-of-the-brain-help-you-understand-it/.

Just think of that dopamine neuron with 500,000 presynapses.  Synthesis and release must be general, as the neuron couldn’t possibly address an individual synapse.

The more we know the more remarkable the brain becomes.

 

Prolegomena to reading Fall by Neal Stephenson

As a college freshman I spent hours trying to untangle Kant’s sentences in “Prolegomena to Any Future Metaphysics”  Here’s sentence #1.   “In order that metaphysics might, as science, be able to lay claim, not merely to deceitful persuasion, but to insight and conviction, a critique of reason itself must set forth the entire stock of a priori concepts, their division according to the different sources (sensibility, understanding, and reason), further, a complete table of those concepts, and the analysis of all of them along with everything that can be derived from that analysis; and then, especially, such a critique must set forth the possibility of synthetic cognition a priori through a deduction of these concepts, it must set forth the principles of their use, and finally also the boundaries of that use; and all of this in a complete system.”

This post is something to read before tackling “Fall” by Neal Stephenson, a prolegomena if you will.  Hopefully it will be more comprehensible than Kant.  I’m only up to p. 83 of a nearly 900 page book.  But so far the book’s premise seems to be that if you knew each and every connection (synapse) between every neuron, you could resurrect the consciousness of an individual (i.e. from a wiring diagram).  Perhaps Stephenson will get more sophisticated as I proceed through the book.  Perhaps not.  But he’s clearly done a fair amount of neuroscience homework.

So read the following old post about why a wiring diagram of the brain isn’t enough to explain how it works.   Perhaps he’ll bring in the following points later in the book.

Here’s the old post.  Some serious (and counterintuitive) scientific results to follow in tomorrow’s post.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every one of the 10^23 molecules of a mole or so of gas in a container is brought home.  Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain.  They want a ‘wiring diagram’ of the brain, i.e. a list of every neuron, and for each neuron a list of the neurons it receives synapses from, and a third list of the neurons it sends synapses to.  For the non-neuroscientist — the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana).  This is why you need the second and third lists.

Clearly a monumental undertaking and one which grows more monumental with the passage of time.  Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them).  This is where the neurological urban myth of the loss of 10,000 neurons each day came from.  For details see https://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 ’11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses.  Well, that’s not a mole of synapses but it is a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other to at least get a schematic diagram.

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it.  I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnection between neurons and just how many there are.  Figure 2 (p. 179) is particularly revealing, showing a 3-dimensional reconstruction using the high resolution obtainable with the electron microscope.  Stare at figure 2f for a while and try to figure out what’s going on.  It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing.  There are at least 3 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses).  A wiring diagram alone won’t tell you this.

2. When I was starting out, the following statement would have seemed impossible.  It is now possible to watch synapses in the living brain of an awake animal for extended periods of time.  But we now know that synapses come and go in the brain.  The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times.  Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ].  So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses.  Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses (it’s called volume neurotransmission).  Their importance in psychiatry and drug addiction is unparalleled.  Examples of such volume transmitters include serotonin, dopamine and norepinephrine.  Drugs of abuse affecting their action include cocaine and amphetamine.  Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg.  This is of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles.  There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse.  The construction of the brain from the fertilized egg must be in some sense statistical.  Remarkable that it happens at all.  Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.

 

How the brain really works (maybe)

Stare at the picture just below long and hard. It’s where the brain probably does its calculation — no, not the neuron in the center. No, not the astrocyte just above. Enlarge the picture many times. It’s all those tiny little circles and ellipses you see around the apical dendrite. They all represent nerve and glial processes. A few ellipses have very dark borders — this is myelin (which insulates them allowing them to conduct nerve impulses faster, and which also insulates them from being affected by the goings on of nerve processes next to them). Note that most of the nerve processes do NOT have myelin around them.

Now look at the bar at the lower right in the picture which tells you the magnification. 5 um is 5 microns or 50,000 Angstroms or roughly 10 times the wavelength of visible light (4,000 – 8,000 Angstroms). Look at the picture again and notice just how closely the little circles and ellipses are applied to each other (certainly closer than 1/10 of the bar). This is exactly why there was significant debate between two of the founders of neurohistology — Camillo Golgi and Santiago Ramón y Cajal.

Unlike every other tissue in the body the brain is so tightly packed that it is impossible to see the cells that make it up with the usual stains used by light microscopists. People saw nuclei all right but they thought the brain was a mass of tissue with nuclei embedded in it (like a slime mold). It wasn’t until the late 1800s that Camillo Golgi developed a stain which would now and then outline a neuron with all its processes. Another anatomist (Santiago Ramón y Cajal) used Golgi’s technique and argued with Golgi that yes, the brain was made of cells. Fascinating that Golgi, the man responsible for the stain showing nerve cells, didn’t buy it. This was a very hot issue at the time, and the two received a joint Nobel prize in 1906 (only 5 years after the prizes began).

The paper discussed below gives a possible reason why the brain is built like this — i.e. it’s how it works!

Pictures are impressive, but could it be all artifact? To see something with an electron microscope (which this picture is) you really have to process the tissue to a fare-thee-well. One example from way back in the day when I started medical school (1962). Electron microscopy was just coming in, and the first thing we were supposed to see was something called the unit membrane surrounding each cell — two dark lines surrounding a light line, the whole mess being about 60 – 80 Angstroms thick. The dark lines were held to be proteins and the light line was supposed to be lipid. Fresh off 2 years of grad school in chemistry, I tried to figure out just what the chemical treatments used to put tissue in a form suitable for electron microscopy would do to proteins and lipids. It was impossible, but I came away impressed with just how vigorous and traumatic what the microscopists were doing actually was.

To make a long story short — the unit membrane was an artifact of fixation. We now know that the cell membrane has a thickness half that of the unit membrane, with all sorts of proteins going through the lipid.

This is something to keep in mind, for you to avoid being snowed by such pictures. Clinical neurologists and neurosurgeons know quite well that a brain lacking oxygen and glucose swells (a huge clinical problem), and dead brain is exactly that.

Even with all these caveats about electron microscopy of the brain, I think the picture above is pretty close to reality. In favor of tight packing is the following work (along with the staining work of over a century ago). [ Proc. Natl. Acad. Sci. vol. 103 pp. 5567 – 5572 ’06 ] injected spheres of different sizes (quantum dots actually) into the rat cerebral cortex, and watched how far they got from the site of injection. Objects ‘as large as’ 350 Angstroms were able to diffuse freely. This was larger than the width seen on electron microscopy (180 Angstroms) but still quite small and too small to be ‘seen’ with visible light.

What’s the point of all this? Simply that the neuropil of the cerebral cortex (all the stuff in the picture which isn’t the cell body of the neuron or the astrocyte) could be where the real computations of the cerebral cortex actually take place. In my opinion, ‘could be’ should be ‘is’ in the previous sentence.

Why? Because of the work described in a previous post — which is repeated in toto below the line of ****

Briefly, the authors of that paper claim to be able to look at the electrical activity of these small processes in the neuropil. How small? A diameter of 5 microns or less. This had never been done before. It was a tremendous technical achievement to do this in a living animal. What they found was that the frequency of spikes in these processes (likely dendrites) during sleep was 7 times greater than the frequency of spikes recorded next to the cell body (soma), something which had been done many times before. During wakefulness, the frequency of spikes in the neuropil was 10-fold greater.

I’ve always found it remarkable that most neurons in the cerebral cortex aren’t firing all that rapidly (a few spikes per second — Science vol. 304 pp. 523 – 524, 559 – 563 ’04 ). Neurons (particularly sensory neurons) can fire a lot faster than that — ‘up to’ 500/second.

Perhaps this work explains why — the real calculations are being done in the neuropil by the dendrites.

Even more remarkably, it is possible that the processes of the neuropil are influencing each other without synapses between them because they are so closely packed. The membrane potential shifts the authors measured were much larger than the spikes in the dendrites. So the real computations being performed by the brain might not involve synapses at all ! This would be an explanation of why brain cells and their processes are so squeezed together. So they can talk to each other. No other organ in the body is like this throughout.

This post is already long enough, but the implications are worth exploring further. I’ve written about wiring diagrams of the brain, and how it is at least possible that they wouldn’t tell you how the brain worked — https://luysii.wordpress.com/2011/04/10/would-a-wiring-diagram-of-the-brain-help-you-understand-it/.

There is another possible reason that the wiring diagram wouldn’t be enough to give you an understanding. Here is an imperfect analogy. Suppose you had a complete map of every road and street in the USA, along with the address of every house, building and structure in it. In addition you could also measure the paths of all the vehicles on the roads for one day. Would this tell you how the USA worked? It would tell you nothing about what was going on inside the structures, or how it influenced traffic on the roads.

The paper below is seminal, because for the first time, it allows us to see what brain neurons are doing in all their parts — not just the cell body or the axon (which is all we’ve been able to look at before).

If these speculations are true, the brain is a much more powerful parallel processor than anything we are able to build presently (and possibly in the future). Each pyramidal neuron in the cortex would then be a microprocessor locally influencing all those in its vicinity — and in a cubic millimeter of the cerebral cortex (1,000 x 1,000 x 1,000 microns) there are 20,000 – 100,000 neurons (Science vol. 304 pp. 523 – 524, 559 – 563 ’04).

Fascinating stuff — stay tuned
*****

A staggeringly important paper (if true)

Our conception of how our brain does what it does has just been turned upside down, inside out and from the middle to each end — if the following paper holds up [ Science vol. 355 p. 1281, eaaj1497 pp. 1 – 10 ’17 ]. The authors claim to be able to measure electrical activity in dendrites in a living, behaving animal for days at a time. Dendrites are about the size of the smallest electrodes we have, so impaling them destroys them. The technical details of what they did are crucial, as much of what they report may be artifact due to injury produced by the way they acquired their data.

First a picture of a pyramidal neuron of the cerebral cortex — https://en.wikipedia.org/wiki/Pyramidal_cell — the cell body is only 20 microns in diameter (the giant pyramidal neurons giving rise to the corticospinal tract are much larger, with diameters of 100 microns). Look at the picture in the article. If the cell body (soma) is 20 microns, the dendrites arising from it (particularly the apical dendrite) are at most 5 microns thick.

Here’s what they did. A tetrode is a bundle of 4 very fine electrodes. Bundle diameter is only 30 – 40 microns with a 5 micron gap between the tips. This allows an intact dendrite to be caught in the gap. The authors note that chronically implanted tetrodes produce an immune response, in which glial cells proliferate and wall off the tetrode, shielding it from the extracellular medium by forming a high impedance sheath. This allows the tetrode to measure the electrical activity of a dendrite caught between the 4 tips (and hopefully little else).

How physiologic is this activity? Remember that epilepsy developing after head trauma is thought to be due to abnormal electrical activity due to glial scars, and a glial scar is exactly what is found around the tetrode. So a lot more work needs to be done replicating this, and studying similar events in neuronal culture (without glia present).

Well, those are the caveats. What did they find? The work involved 9 rats and 22 individually adjustable tetrodes. They found that spikes in the dendrites were quite different from the spikes found by a tetrode next to the pyramidal cell body. The dendritic spikes were larger (570 – 2,100 microvolts) vs. 80 microvolts recorded extracellularly for spikes arising at the soma. Of course when the soma is impaled by an electrode you get a much larger spike.

More importantly, the dendritic spike rates were 5 times greater than the somatic spike rates during slow wave sleep and 10 times greater during exploration when awake. The authors call these dendritic action potentials (DAPs). Their amplitude was always positive.

They were also able to measure how the membrane potential of the dendrite fluctuated. The membrane potential fluctuations were always larger than the dendritic spikes themselves (by 7 fold). The size of the fluctuations correlated with DAP magnitude and rate.

So all the neuronal spikes and axonal action potentials we’ve been measuring over the years (because they were all we could measure) may be irrelevant to what the brain is really doing. Maybe the real computation is occurring within dendrites.

Now we know we can put an electrode in the brain outside of any neuron and record something called a local field potential — which is held to be a weighted sum of transmembrane currents due to synaptic and dendritic activity and arises within 250 microns of the electrode (and probably closer than that).

So fluctuating potentials are out there in the substance of the brain, outside any neuronal structure. Is it possible that the changes in membrane potential in dendrites are felt by other dendrites, and if so is this where the brain’s computations are really taking place? Could synapses be irrelevant to this picture, and each pyramidal neuron not be a transistor but a complex analog CPU? Heady stuff. It certainly means goodbye to the McCulloch-Pitts model — https://en.wikipedia.org/wiki/Artificial_neuron.

NonAlgorithmic Intelligence

Penrose was right. Human intelligence is nonAlgorithmic. But that doesn’t mean that our physical brains produce consciousness and intelligence using quantum mechanics (although all matter is what it is because of quantum mechanics). The parts (even small ones like neurotubules) contain so much mass that their associated de Broglie wavelength is far too small to exhibit quantum mechanical effects. Here Penrose got roped in by Hameroff into thinking that neurotubules were the carriers of the quantum mechanical indeterminacy. They aren’t; they are just too big. The dimer of alpha and beta tubulin contains 900 amino acids — a mass of around 90,000 Daltons (or 90,000 hydrogen atoms — which are small enough to show quantum mechanical effects).

So why was Penrose right? Because neural nets, which are inherently nonAlgorithmic, are showing intelligent behavior. AlphaGo, which beat the world champion, is the most recent example, but others include facial recognition and image classification [ Nature vol. 529 pp. 484 – 489 ’16 ].

Nets are trained on real world images and told whether they are right or wrong. I suppose this is programming of a sort, but it is certainly nonAlgorithmic. As the net learns from experience it adjusts the strength of the connections between its neurons (synapses if you will).
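Here is a minimal sketch of that sort of feedback-driven weight adjustment: the classic perceptron rule, standing in for the vastly more elaborate training actually used in systems like AlphaGo. The toy data and learning rate are invented for illustration.

```python
# Toy data: (inputs, desired output); the 'net' is a single thresholded unit.
examples = [((1.0, 0.0), 1), ((0.0, 1.0), 0), ((1.0, 1.0), 1), ((0.0, 0.0), 0)]
weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                                    # repeated exposure to the stimuli
    for (x1, x2), target in examples:
        output = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
        error = target - output                        # the feedback: right or wrong
        weights[0] += rate * error * x1                # strengthen or weaken each connection
        weights[1] += rate * error * x2
        bias += rate * error

print(weights, bias)                                   # the learned connection strengths
```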

So it should be a simple matter to find out just how AlphaGo did it — just get a list of the neurons it contains, and the number and strengths of the synapses between them. I can’t find out just how many neurons and connections there are, but I do know that thousands of CPUs and graphics processors were used. I doubt that there were 80 billion neurons or a trillion connections between them (which is what our brains are currently thought to have).

Just print out the above list (assuming you have enough paper) and look at it. Will you understand how AlphaGo won? I seriously doubt it. You will understand it less well than looking at a list of the positions and momenta of 80 billion gas molecules will tell you its pressure and temperature. Why? Because in statistical mechanics you assume that the particles making up an ideal gas are featureless, identical and do not interact with each other. This isn’t true for neural nets.

It also isn’t true for the brain. Efforts are underway to find a wiring diagram of a small area of the cerebral cortex. The following will get you started — https://www.quantamagazine.org/20160406-brain-maps-micron-program-iarpa/

Here’s a quote from the article to whet your appetite.

“By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.”

I don’t think this will help us understand how the brain works any more than the above list of neurons and connections from AlphaGo. There are even more problems with such a list. Connections (synapses) between neurons come and go (and they increase and decrease in strength as in the neural net). Some connections turn on the receiving neuron, some turn it off. I don’t think there is a good way to tell what a given connection is doing just by looking at a slice of it under the electron microscope. Lastly, some of our most human attributes (emotion) are due not to connections between neurons but to release of neurotransmitters generally into the brain, not at the very localized synapse, so they won’t show up on a wiring diagram. This is called volume neurotransmission, and the transmitters are serotonin, norepinephrine and dopamine. Not convinced? Among agents modifying volume neurotransmission are cocaine, amphetamine, antidepressants and antipsychotics. Fairly important.

So I don’t think we’ll ever truly understand how the neural net inside our head does what it does.

Trouble in River City (aka Brain City)

300 European neuroscientists are unhappy. If 50,000,000 Frenchmen can’t be wrong, can the neuroscientists be off base? They don’t like that way things are going in a Billion Euro project to computationally model the brain (Science vol. 345 p. 127 ’14 11 July, Nature vol. 511 pp. 133 – 134 ’14 10 July). What has them particularly unhappy is that one of the sections involving cognitive neuroscience has been eliminated.

A very intelligent Op-Ed in the New York Times of 12 July by a psychology professor notes that we have no theory of how to go from neurons, their connections and their firing of impulses, to how the brain produces thought. Even better, he notes that we have no idea of what such a theory would look like, or what we would accept as an explanation of brain function in terms of neurons.

While going from the gene structure in our DNA to cellular function, from there to function at the level of the organ, and from the organ to the function of the organism is more than hard (see https://luysii.wordpress.com/2014/07/09/heres-a-drug-target-for-schizophrenia-and-other-psychiatric-diseases/) at least we have a road map to guide us. None is available to take us from neurons to thought, and the 300 argue that concentrating only on neurons, while producing knowledge, won’t give us the explanation we seek. The 300 argue that we should progress on all fronts, which the people running the project reject as too diffuse.

I’ve posted on this problem before — I don’t think a wiring diagram of the brain (while interesting) will tell us what we want to know. Here’s part of an earlier post — with a few additions and subtractions.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every one of the 10^23 molecules of a mole or so of gas in a container is brought home. Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain. They want a ‘wiring diagram’ of the brain, i.e. a list of every neuron, and for each neuron a list of the neurons it receives synapses from, and a third list of the neurons it sends synapses to. For the non-neuroscientist — the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana). This is why you need the second and third lists.

Clearly a monumental undertaking and one which grows more monumental with the passage of time. Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them). This is where the neurological urban myth of the loss of 10,000 neurons each day came from. For details see https://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 ’11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses. Well, that’s not a mole of synapses but it is a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other to at least get a schematic diagram.

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it. I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnection between neurons and just how many there are. Figure 2 (p. 179) is particularly revealing, showing a 3-dimensional reconstruction using the high resolution obtainable with the electron microscope. Stare at figure 2f for a while and try to figure out what’s going on. It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing. There are at least 6 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses). A wiring diagram alone won’t tell you this.

2. When I was starting out, the following statement would have seemed impossible. It is now possible to watch synapses in the living brain of an awake animal for extended periods of time. But we now know that synapses come and go in the brain. The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times. Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ]. So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses. Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses (it’s called volume neurotransmission). Their importance in psychiatry and drug addiction is unparalleled. Examples of such volume transmitters include serotonin, dopamine and norepinephrine. Drugs of abuse affecting their action include cocaine and amphetamine. Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

4. (new addition) A given neuron doesn’t contact another neuron just once, as far as we know. So how do you account for this with a graph (which I think allows only one connection between any two nodes)?

5. (new addition) All connections (synapses) aren’t created equal. Some are far, far away from the part of the neuron (the axon) which actually fires impulses, so many of them have to be turned on at once for firing to occur. So in addition to the excitatory/inhibitory dichotomy, you’d have to put another number on each link in the graph, for the probability of a given synapse producing an effect. In general this isn’t known for most synapses.

6. (new addition) Some connections never directly cause a neuron to fire or not fire. They just increase or decrease the probability that a neuron will fire or not fire with impulses at other synapses. These are called neuromodulators, and the brain has tons of different ones.

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg. This is of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles. There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse. The construction of the brain from the fertilized egg must be in some sense statistical. Remarkable that it happens at all. Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.

(Addendum 17 July ’14) I’m fortunate enough to have a family member who worked at Bell Labs (when it existed) and who knows much more about graph theory than I do. Here are his points, with a few comments of mine in reply.

Seventh paragraph: Still don’t understand the purpose of the three lists, or what that buys that you don’t get with a graph model. See my comments later in this email.

“nobody’s brain is strong enough to comprehend it”: At some level, this is true of virtually every phenomenon of Nature or science. We only begin to believe that we “comprehend” something when some clever person devises a model for the phenomenon that is phrased in terms of things we think we already understand, and then provides evidence (through analysis and perhaps simulation) that the model gives good predictions of observed data. As an example, nobody comprehended what caused the motion of the planets in the sky until science developed the heliocentric model of the solar system and Newton et al. developed calculus, with which he was able to show (assuming an inverse-square behavior of the force of gravity) that the heliocentric model explained observed data. On a more pedestrian level, if a stone-age human was handed a personal computer, his brain couldn’t even begin to comprehend how the thing does what it does — and he probably would not even understand exactly what it is doing anyway, or why. Yet we modern humans, at least us engineers and computer scientists, think we have a pretty good understanding of what the personal computer does, how it does it, and where it fits in the scheme of things that modern humans want to do.

Of course we do, that’s because we built it.

On another level, though, even computer scientists and engineers don’t “really” understand how a personal computer works, since many of the components depend for their operation on quantum mechanics, and even Feynman supposedly said that nobody understands quantum mechanics: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Penrose actually did think the brain worked by quantum mechanics, because what it does is nonAlgorithmic. That’s been pretty much shot down.

As for your six points:

Point 1 I disagree with. It is quite easy to express excitatory or inhibitory behavior in a wiring diagram (graph). In fact, this is done all the time in neural network research!

Point 2: Updating a graph is not necessarily a big deal. In fact, many systems that we analyze with graph theory require constant updating of the graph. For example, those who analyze, monitor, and control the Internet have to deal with graphs that are constantly changing.

Point 3: Can be handled with a graph model, too. You will have to throw in additional edges that don’t represent synapses, but instead represent the effects of neurotransmitters. Will this get to be a complicated graph? You bet. But nobody ever promised an uncomplicated model. (Although uncomplicated — simple — models are certainly to be preferred over complicated ones.)

Point 4: This can be easily accounted for in a graph. Simply place additional edges in the graph to account for the additional connections. This adds complexity, but nobody ever promised the model would be simple. Another alternative, as I mentioned in an earlier email, is to use a hypergraph.

Point 5: Not sure what your point is here. Certainly there is a huge body of research literature on probabilistic graphs (e.g., Markov Chain models), so there is nothing you are saying here that is alien to what graph theory researchers have been doing for generations. If you are unhappy that we don’t know some probabilistic parameters for synapses, you can’t be implying that scientists must not even attempt to discover what these parameters might be. Finding these parameters certainly sounds like a challenge, but nobody ever claimed this was an un-challenging line of research.

In addition to not knowing the parameters, you’d need a ton of them, as it’s been stated frequently in the literature that the ‘average’ neuron has 10,000 synapses impinging on it. I’ve never been able to track this one down. It may be neuromythology, like the 10,000 neurons we’re said to lose every day. With 10,000 adjustable parameters you could make a neuron sing the Star Spangled Banner. Perhaps this is why we can sing the Star Spangled Banner.

Point 6: See my comments on Point 5. Probabilistic graph models have been well-studied for generations. Nothing new or frightening in this concept.
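To tie his points together, here is a minimal sketch of how a graph model could carry the extra biology raised in points 1 through 6: each edge gets attributes rather than being a bare line, and a pair of neurons may be joined by several distinct edges (a multigraph). The attribute names, neuron names, and numbers are my own illustrative choices, not anything from the literature.

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    sign: int               # +1 excitatory, -1 inhibitory (point 1)
    strength: float         # efficacy, which can be updated over time (points 2 and 5)
    p_effect: float         # probability this contact actually influences firing (point 5)
    modulatory: bool        # neuromodulator-style influence rather than direct firing (point 6)
    volume: bool = False    # non-synaptic, volume-transmission 'edge' (point 3)

@dataclass
class Connection:
    pre: str
    post: str
    contacts: list = field(default_factory=list)   # several distinct contacts per pair (point 4)

c = Connection("dopamine_neuron_1", "striatal_neuron_7")       # invented names
c.contacts.append(Synapse(sign=+1, strength=0.4, p_effect=0.2, modulatory=False))
c.contacts.append(Synapse(sign=+1, strength=0.1, p_effect=0.05, modulatory=True))
c.contacts.append(Synapse(sign=+1, strength=0.0, p_effect=0.0, modulatory=False, volume=True))

# A crude summary of the pair's net fast (non-modulatory) influence
net = sum(s.sign * s.strength * s.p_effect for s in c.contacts if not s.modulatory)
print(f"{c.pre} -> {c.post}: {len(c.contacts)} edges, net fast effect {net:.3f}")
```

Whether any of those parameters can actually be measured for real synapses is, of course, exactly the open question raised above.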