
Prolegomena to reading Fall by Neal Stephenson

As a college freshman I spent hours trying to untangle Kant’s sentences in “Prolegomena to Any Future Metaphysics.” Here’s sentence #1: “In order that metaphysics might, as science, be able to lay claim, not merely to deceitful persuasion, but to insight and conviction, a critique of reason itself must set forth the entire stock of a priori concepts, their division according to the different sources (sensibility, understanding, and reason), further, a complete table of those concepts, and the analysis of all of them along with everything that can be derived from that analysis; and then, especially, such a critique must set forth the possibility of synthetic cognition a priori through a deduction of these concepts, it must set forth the principles of their use, and finally also the boundaries of that use; and all of this in a complete system.”

This post is something to read before tackling “Fall” by Neal Stephenson, a prolegomenon if you will.  Hopefully it will be more comprehensible than Kant.  I’m only up to p. 83 of a nearly 900-page book.  But so far the book’s premise seems to be that if you knew each and every connection (synapse) between every neuron, you could resurrect the consciousness of an individual (i.e., from a wiring diagram).  Perhaps Stephenson will get more sophisticated as I proceed through the book.  Perhaps not.  But he’s clearly done a fair amount of neuroscience homework.

So read the old post below about why a wiring diagram of the brain isn’t enough to explain how it works.  Perhaps he’ll bring these points in later in the book.

Here’s the old post.  Some serious (and counterintuitive) scientific results to follow in tomorrow’s post.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every one of the ~10^23 molecules of a mole or so of gas in a container is brought home.  Instead we need to know only the average energy of the molecules and the volume they are confined in to get the pressure and the temperature.
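
To make that concrete, here’s a minimal sketch in Python (illustrative numbers only): the ideal gas law recovers the pressure from nothing more than the particle count, the volume and the temperature, with no per-molecule bookkeeping at all.

```python
# Ideal gas law: P * V = N * k_B * T -- the macroscopic state from averages alone.
k_B = 1.380649e-23   # Boltzmann constant, J/K
N   = 6.022e23       # one mole of molecules
T   = 298.0          # room temperature, K
V   = 0.0224         # ~22.4 liters, in m^3

P = N * k_B * T / V            # pressure, in pascals
print(f"{P / 101325:.2f} atm") # ~1.09 atm
```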

However, people are taking the first approach in an attempt to understand the brain.  They want a ‘wiring diagram’ of the brain: a list of every neuron and, for each neuron, a list of the neurons that synapse onto it and a list of the neurons it synapses onto.  For the non-neuroscientist: the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana, the endocannabinoids).  This one-way traffic is why you need the second and third lists.
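
In code, such a wiring diagram is just a directed graph; a minimal sketch (the neuron names are hypothetical) might look like this:

```python
from collections import defaultdict

# Synapses point one way, so each neuron needs two lists:
# who talks to it, and whom it talks to.
incoming = defaultdict(set)   # neuron -> neurons synapsing onto it
outgoing = defaultdict(set)   # neuron -> neurons it synapses onto

def add_synapse(pre, post):
    """Record a one-way connection from neuron pre to neuron post."""
    outgoing[pre].add(post)
    incoming[post].add(pre)

add_synapse("A", "B")  # the diagram records the connection,
add_synapse("B", "C")  # but not whether it excites or inhibits
```

Note what the structure does not contain: the sign or strength of any connection, which is exactly the first problem listed below.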

Clearly a monumental undertaking, and one which grows more monumental with the passage of time.  When I was starting out in the ’60s, the estimate was that we had about a billion neurons (no one could possibly count each of them).  This is where the neurological urban myth of the loss of 10,000 neurons each day came from.  For details see https://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 ’11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses.  Well, that’s not a mole of synapses, but it is about a quarter of a nanoMole of them.  People are nonetheless trying to see which areas of the brain are connected to each other, to at least get a schematic diagram.
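
The mole arithmetic is a one-liner to check:

```python
AVOGADRO = 6.022e23
print(150e12 / AVOGADRO)  # ~2.5e-10 mol, i.e. about a quarter of a nanomole
```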

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it.  I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnections between neurons and just how many there are.  Figure 2 (p. 179) is particularly revealing, showing a 3-dimensional reconstruction using the high resolution obtainable with the electron microscope.  Stare at figure 2.f a while and try to figure out what’s going on.  It’s both amazing and humbling.

But even assuming that someone or something could comprehend it, you still wouldn’t have enough information to figure out how the brain does what it clearly is doing.  There are at least 3 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses).  A wiring diagram alone won’t tell you which is which.

2. When I was starting out, the following statement would have seemed impossible: it is now possible to watch synapses in the living brain of an awake animal for extended periods of time.  And we now know that synapses come and go in the brain.  The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times.  Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ].  So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses.  Certain neurotransmitters are released diffusely into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses at all (this is called volume neurotransmission).  Their importance in psychiatry and drug addiction is unparalleled.  Examples of such volume transmitters include serotonin, dopamine and norepinephrine.  Drugs of abuse affecting their action include cocaine and amphetamine.  Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

Statistical mechanics works because one molecule is pretty much like another.  This certainly isn’t true for neurons.  Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg.  This is a picture of the cerebral cortex; neurons are fairly creepy-looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles.  There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron and each and every synapse.  The construction of the brain from the fertilized egg must be in some sense statistical.  Remarkable that it happens at all.  Embryologists are intensively working on how this happens; thousands of papers on the subject appear each year.
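
A rough information budget shows why direct coding is hopeless. Assume (illustratively) 2 bits per nucleotide, and that specifying a synapse means at minimum naming its target among 80 billion neurons:

```python
import math

genome_bits  = 3.2e9 * 2              # ~2 bits per nucleotide
bits_per_syn = math.log2(80e9)        # ~36 bits just to name one target neuron
synapse_bits = 150e12 * bits_per_syn  # bits needed to specify every synapse

print(f"genome:   {genome_bits:.1e} bits")   # ~6.4e9
print(f"synapses: {synapse_bits:.1e} bits")  # ~5.4e15, nearly a million-fold more
```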

 

NonAlgorithmic Intelligence

Penrose was right. Human intelligence is nonAlgorithmic. But that doesn’t mean that our physical brains produce consciousness and intelligence using quantum mechanics (although all matter is what it is because of quantum mechanics). The parts (even small ones like neurotubules) contain so much mass that their associated de Broglie wavelength is far too short for quantum mechanical effects to show up. Here Penrose got roped in by Hameroff into thinking that neurotubules were the carriers of the quantum mechanical indeterminacy. They aren’t; they are just too big. The dimer of alpha and beta tubulin contains 900 amino acids, a mass of around 90,000 Daltons (90,000 hydrogen atoms’ worth; a single hydrogen atom is small enough to show quantum mechanical effects, but 90,000 of them bound together are not).
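
‘Too big’ can be made quantitative with a back-of-the-envelope de Broglie estimate (thermal speed at body temperature assumed):

```python
import math

h   = 6.626e-34     # Planck constant, J*s
k_B = 1.380649e-23  # Boltzmann constant, J/K
amu = 1.6605e-27    # atomic mass unit, kg

m = 90_000 * amu                  # the ~90 kDa tubulin dimer
v = math.sqrt(3 * k_B * 310 / m)  # thermal speed at 310 K (body temperature)
print(h / (m * v))                # ~5e-13 m, far smaller than the dimer itself
```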

So why was Penrose right? Because neural nets, which are inherently nonAlgorithmic, are showing intelligent behavior. AlphaGo, which beat the world champion at Go, is the most recent example [ Nature vol. 529 pp. 484 – 489 ’16 ], but others include facial recognition and image classification.

Nets are trained on real-world images and told whether they are right or wrong. I suppose this is programming of a sort, but it is certainly nonAlgorithmic. As the net learns from experience, it adjusts the strength of the connections between its neurons (synapses, if you will).
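
Here is a toy version of that kind of learning: a single artificial neuron learning logical AND from right/wrong feedback. It is nothing like AlphaGo’s deep network, and the numbers are arbitrary, but the principle is the same: nobody writes down an algorithm for the task; the ‘program’ ends up in the connection strengths.

```python
import random

w = [random.uniform(-1, 1), random.uniform(-1, 1)]           # connection strengths
b = random.uniform(-1, 1)                                    # firing threshold
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

for _ in range(100):
    for (x0, x1), target in data:
        out = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
        err = target - out         # the 'right or wrong' feedback
        w[0] += 0.1 * err * x0     # strengthen or weaken each connection
        w[1] += 0.1 * err * x1
        b    += 0.1 * err

print(w, b)  # a net that now computes AND, yet the numbers alone don't explain it
```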

So it should be a simple matter to find out just how AlphaGo did it: just get a list of the neurons it contains, and the number and strengths of the synapses between them. I can’t find out just how many neurons and connections there are, but I do know that thousands of CPUs and graphics processors were used. I doubt that there were 80 billion neurons or 150 trillion connections between them (which is what our brains are currently thought to have).

Just print out the above list (assuming you have enough paper) and look at it. Will you understand how AlphaGo won? I seriously doubt it. You will understand it even less well than a list of the positions and momenta of 80 billion gas molecules would tell you the gas’s pressure and temperature. Why? Because in statistical mechanics you assume that the particles making up an ideal gas are featureless, identical and do not interact with each other. None of this is true for neural nets.

It also isn’t true for the brain. Efforts are underway to find a wiring diagram of a small area of the cerebral cortex. The following will get you started — https://www.quantamagazine.org/20160406-brain-maps-micron-program-iarpa/

Here’s a quote from the article to whet your appetite.

“By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.”

I don’t think this will help us understand how the brain works any more than the above list of neurons and connections from AlphaGo would. There are even more problems with such a list. Connections (synapses) between neurons come and go (and they increase and decrease in strength, as in the neural net). Some connections turn on the receiving neuron, some turn it off. I don’t think there is a good way to tell what a given connection is doing just by looking at a slice of it under the electron microscope. Lastly, some of our most human attributes (emotion) are due not to connections between neurons but to the release of neurotransmitters diffusely into the brain rather than at the very localized synapse, so they won’t show up on a wiring diagram. This is called volume neurotransmission, and the transmitters are serotonin, norepinephrine and dopamine. Not convinced? Among the agents modifying volume neurotransmission are cocaine, amphetamine, the antidepressants and the antipsychotics. Fairly important.

So I don’t think we’ll ever truly understand how the neural net inside our head does what it does.

Maybe chemistry just isn’t that important in wiring the brain

Even the strongest chemical ego may not survive a current paper which states that the details of ligand-receptor binding just aren’t that important in wiring the fetal brain.

The paper starts by noting that there isn’t enough information in our 3.2 gigaBase genome to specify each and every synapse.  Each cubic milliMeter of cerebral cortex is stated to contain a billion of them [ Cell vol. 163 pp. 277 – 280 ’15 ].

If you have enough receptors and ligands and use them combinatorially, you actually can specify quite a few synapses.  We have 70 different protocadherin gene products found on the neuronal surface.  They can bind to each other and to themselves.  The fruitfly has the Dscam gene, which guides axons to their proper positions; because of alternative splicing, some 38,016 Dscam isoforms are possible.
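
The combinatorial arithmetic is impressive. If, say, each neuron displayed a random dozen of the 70 protocadherins (the dozen is my illustrative assumption, not the papers’ number), the count of distinct surface ‘barcodes’ is astronomical:

```python
from math import comb

print(comb(70, 12))  # ~1.1e13 possible protocadherin 'barcodes'
print(38_016)        # Dscam isoforms in the fly, from alternative splicing alone
```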

It’s not too hard to think of these different proteins on the neuronal surface as barcodes, specifying which neuron will bind to which.

Not so, says [ Cell vol. 163 pp. 285 – 291 ’15 ].  What is important is that there are a lot of them, and that a neuron expressing one of them is unlikely to bump into another neuron carrying the same one.  Neurons ‘like’ to form synapses, and will even form synapses with themselves (one process synapsing on another) if nothing else is around.  These self-synapses are called autapses.  How likely is this?  Well, under each square millimeter of cortex in man there are some 100,000 neurons, each with multiple dendrites and a profusely branched axon.  Self-synapse formation is a real problem.

The paper says that the structure of all these protocadherins, Dscams and similar surface molecules is irrelevant to the program they are carrying out — not synapsing on yourself.  If a process bumps into another in the packed cortex carrying the same surface molecule, the ‘homophilic’ binding prevents self-synapse formation.  So the chemical diversity is just the instantiation of the ‘don’t synapse with yourself’ rule; what matters is not what the diversity is chemically but simply that there is a lot of it.
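
A toy simulation shows why sheer diversity suffices (all numbers hypothetical): give each neuron one random barcode from a large pool, forbid synapses between matching barcodes, and ask how often two different neurons falsely match.

```python
import random

POOL = 10**13      # possible barcodes (cf. the combinatorial arithmetic above)
NEURONS = 100_000  # roughly the count under a square millimeter of cortex

barcodes = [random.randrange(POOL) for _ in range(NEURONS)]

def may_synapse(i, j):
    """Homophilic match means 'this is me': refuse to form the synapse."""
    return barcodes[i] != barcodes[j]

print(may_synapse(0, 0))  # False: a neuron always recognizes itself

# Probability that ANY two of the 100,000 neurons share a barcode
# (the birthday problem): small enough that the rule almost never misfires.
p_no_clash = 1.0
for k in range(NEURONS):
    p_no_clash *= 1 - k / POOL
print(f"P(false match somewhere) ~ {1 - p_no_clash:.0e}")  # ~5e-04
```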

This is “It’s not the bricks, it’s the plan” in another guise — https://luysii.wordpress.com/2015/09/27/it-aint-the-bricks-its-the-plan/

The neuron as motherboard

Back in the day, when transistors were fairly large and the techniques for putting them together on silicon were primitive by today’s standards, each functionality was put on a separate component which was then placed on a substrate called the motherboard.  Memory was one component, the central processing unit (CPU) another, each about the size of a small cellphone today.  Later on, as more and more transistors could be packed on a chip, functionality such as memory could be embedded in the CPU chip.  We still have motherboards today, as functionality undreamed of back then (graphics processors, disk drives) can be placed on them.

It’s time to look at individual neurons as motherboards rather than as CPUs which sum inputs and then fire.  The old model was to have a neuron look like an oak tree, with each leaf functioning as an input device (a dendritic spine).  If enough of them were stimulated at once, a nerve impulse would occur at the trunk (the axon).  To pursue the analogy a bit further, the axon has zillions of side branches (like the underground roots) which then contact other neurons.  Probably the best picture of this is the mangrove trees I saw in China, whose roots are above ground.

How would a contraption like this learn anything?  If an impulse arrives at an axonal branch touching a leaf (a dendritic spine), i.e., at a synapse, the spine doesn’t always respond.  The more times impulses hit the leaf while it is responding to something else, the more likely the spine is to respond in the future (this is called long term potentiation, aka LTP).
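
In neural-net language this is roughly Hebbian plasticity (“cells that fire together wire together”). A cartoon of LTP, not a biophysical model:

```python
weight = 0.2  # initial strength of one synapse
RATE = 0.05   # how fast coincidences strengthen it

# (input spike arrived?, was the spine's neuron already active?)
events = [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]

for pre, post in events:
    weight += RATE * pre * post      # only coincident activity potentiates
    print(f"weight = {weight:.2f}")  # ends at 0.35: the synapse 'remembers'
```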

We’ve always thought that different parts of the dendritic tree (leaves and branches) receive different sorts of information and can remember it (by LTP).  Only recently have we been able to study different leaves and branches of the same neuron and record from them in a living, intact animal.  Well, we can, and what the following rather technical description says is that different areas of a single neuron are ‘trained’ for different tasks.  So a single neuron is far more than a transistor or even a collection of switches.  It’s an entire motherboard (a full-fledged computer to you).

Presently Intel can put billions of transistors on a chip.  But we have billions of neurons, each of which has tens of thousands of leaves (synapses) impinging on it, along with a memory of what happened at each leaf.

That’s a metaphorical way of describing the results of the following paper (given in full jargon mode).

[ Nature vol. 520 pp. 180 – 185 ’15 ] Different motor learning tasks induce dendritic calcium spikes on different apical tuft branches of individual layer V pyramidal neurons in mouse motor cortex.  These branch-specific calcium spikes cause long-lasting potentiation of the postsynaptic dendritic spines active at the time of spike generation.