
NonAlgorithmic Intelligence

Penrose was right. Human intelligence is nonAlgorithmic. But that doesn’t mean that our physical brains produce consciousness and intelligence using quantum mechanics (although all matter is what it is because of quantum mechanics). The parts involved (even small ones like neurotubules) contain so much mass that their associated de Broglie wavelength is far too small for quantum mechanical effects to show up. Here Penrose got roped in by Kauffman into thinking that neurotubules were the carriers of quantum mechanical indeterminacy. They aren’t; they are simply too big. The dimer of alpha and beta tubulin contains about 900 amino acids, a mass of around 90,000 Daltons, i.e., the mass of some 90,000 hydrogen atoms (a single hydrogen atom being small enough to show quantum mechanical effects).

So why was Penrose right? Because neural nets, which are inherently nonAlgorithmic, are showing intelligent behavior. AlphaGo, which beat the world champion at Go, is the most recent example, but others include facial recognition and image classification [ Nature vol. 529 pp. 484 – 489 ’16 ].

Nets are trained on real-world images and told whether they are right or wrong. I suppose this is programming of a sort, but it is certainly nonAlgorithmic. As the net learns from experience, it adjusts the strength of the connections between its neurons (synapses if you will).
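To make ‘adjusting the strength of the connections’ concrete, here is a minimal sketch of error-driven learning in a single artificial neuron (a perceptron). It is only an illustration of the general idea; the task, learning rate and data are made up, and AlphaGo’s actual network was vastly larger and trained differently.

```python
import random

# Toy illustration of learning by adjusting connection strengths: a single
# artificial neuron is shown examples, told whether it was right or wrong,
# and nudges its synaptic weights accordingly. A sketch of the idea only,
# not anything AlphaGo actually ran.

def train(examples, epochs=25, rate=0.1):
    weights = [0.0, 0.0]          # strengths of the two input 'synapses'
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            fired = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
            error = target - fired            # feedback: 0 if right, +/-1 if wrong
            weights[0] += rate * error * x1   # strengthen or weaken each connection
            weights[1] += rate * error * x2
            bias += rate * error
    return weights, bias

# made-up task: answer 1 when the first input is larger than the second
data = [(random.random(), random.random()) for _ in range(300)]
data = [((x1, x2), 1 if x1 > x2 else 0) for x1, x2 in data]
print(train(data))
```

Notice that nothing in the finished list of weights says ‘compare the two inputs’; the rule is implicit in the numbers, which is the point made below.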

So it should be a simple matter to find out just how AlphaGo did it: get a list of the neurons it contains, and the number and strengths of the synapses between them. I can’t find out just how many neurons and connections there are, but I do know that thousands of CPUs and graphics processors were used. I doubt that there were 80 billion neurons or the hundred trillion or so connections between them that our brains are currently thought to have.

Just print out the above list (assuming you have enough paper) and look at it. Will you understand how AlphaGo won? I seriously doubt it. You will understand it even less well than a list of the positions and momenta of 80 billion gas molecules would tell you the gas’s pressure and temperature. Why? Because in statistical mechanics you assume that the particles making up an ideal gas are featureless, identical and do not interact with each other, which is why the enormous list collapses into a few averages. None of this is true for the neurons of a neural net.
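Here is why the gas list is compressible while the neural net list is not: under the ideal-gas assumptions the whole list collapses into two averages. A sketch with made-up numbers (the particle count, box volume and molecular mass are chosen only for illustration):

```python
import math
import random

# Under the ideal-gas assumptions (identical, featureless, non-interacting
# particles), the huge list of molecular velocities collapses into two
# averages: temperature and pressure. All numbers below are illustrative.

k_B = 1.380649e-23        # Boltzmann constant, J/K
m = 6.6e-26               # mass of one molecule (roughly N2), kg
N = 100_000               # far fewer than 80 billion, but the point is the same
V = 1e-3                  # box volume, m^3

# fabricate the 'list of momenta': velocity components drawn as if at 300 K
sigma = math.sqrt(k_B * 300.0 / m)
velocities = [(random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))
              for _ in range(N)]

mean_KE = sum(0.5 * m * (vx*vx + vy*vy + vz*vz) for vx, vy, vz in velocities) / N
T = 2.0 * mean_KE / (3.0 * k_B)     # temperature is just the mean kinetic energy
P = N * k_B * T / V                 # and the ideal-gas law gives the pressure

print(f"recovered T = {T:.1f} K, P = {P:.1f} Pa")
```

No such averaging works when every ‘particle’ (neuron) is different and everything interacts with everything else.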

It also isn’t true for the brain. Efforts are underway to find a wiring diagram of a small area of the cerebral cortex. The following will get you started — https://www.quantamagazine.org/20160406-brain-maps-micron-program-iarpa/

Here’s a quote from the article to whet your appetite.

“By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.”

I don’t think this will help us understand how the brain works any more than the above list of neurons and connections from AlphaGo would. There are even more problems with such a list. Connections (synapses) between neurons come and go (and they increase and decrease in strength, as in the neural net). Some connections turn the receiving neuron on, some turn it off, and I don’t think there is a good way to tell which a given connection is doing just by looking at a slice of it under the electron microscope. Lastly, some of our most human attributes (emotion) are due not to connections between neurons but to the release of neurotransmitters diffusely into the brain rather than at the very localized synapse, so they won’t show up on a wiring diagram. This is called volume neurotransmission, and the transmitters involved are serotonin, norepinephrine and dopamine. Not convinced it matters? Among the agents modifying volume neurotransmission are cocaine, amphetamine, antidepressants and antipsychotics. Fairly important.

So I don’t think we’ll ever truly understand how the neural net inside our head does what it does.


Maybe chemistry just isn’t that important in wiring the brain

Even the strongest chemical ego may not survive a recent paper, which states that the details of ligand-receptor binding just aren’t that important in wiring the fetal brain.

The paper starts by noting that there isn’t enough information in our 3.2 gigabase genome to specify each and every synapse. Each cubic millimeter of cerebral cortex is stated to contain a billion of them [ Cell vol. 163 pp. 277 – 280 ’15 ].
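Rough arithmetic shows how lopsided the comparison is. The cortical volume and bits-per-synapse figures below are my own loose assumptions for illustration; only the 3.2 gigabases and the billion synapses per cubic millimeter come from the text above.

```python
# Back-of-the-envelope version of 'the genome can't specify every synapse'.
# Cortical volume and bits-per-synapse are assumed; genome size and synapse
# density are the figures quoted in the post.

genome_bases = 3.2e9
bits_in_genome = genome_bases * 2          # 4 possible bases = 2 bits per base

synapses_per_mm3 = 1e9
cortex_volume_mm3 = 5e5                    # assumed order of magnitude for human cortex
total_synapses = synapses_per_mm3 * cortex_volume_mm3

bits_per_synapse = 1                       # even a single bit per synapse is too much
shortfall = (total_synapses * bits_per_synapse) / bits_in_genome

print(f"genome capacity    : {bits_in_genome:.2e} bits")
print(f"synapses to specify: {total_synapses:.2e}")
print(f"shortfall factor   : {shortfall:.0f}x")
```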

If you have enough receptors and ligands and use them combinatorially, you actually can specify quite a few synapses. We have some 70 different protocadherin gene products on the neuronal surface, and they can bind to each other and to themselves. The fruitfly has the dscam genes, which guide axons to their proper positions; because of alternative splicing, some 38,016 dscam isoforms are possible.
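Combinatorial use is what makes those numbers go a long way. The assumption below that a neuron displays around fifteen protocadherin isoforms at once is illustrative, not from the paper; the 70 protocadherins and 38,016 dscam isoforms are the figures above.

```python
from math import comb

# How combinatorial expression multiplies the number of possible surface 'codes'.
# The number of isoforms expressed per neuron (15) is an assumption for illustration.

protocadherins = 70
expressed_per_neuron = 15
protocadherin_codes = comb(protocadherins, expressed_per_neuron)

dscam_isoforms = 38016                     # from one gene, by alternative splicing

print(f"distinct protocadherin combinations: {protocadherin_codes:.2e}")
print(f"dscam isoforms from a single gene  : {dscam_isoforms}")
```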

It’s not too hard to think of these different proteins on the neuronal surface as barcodes, specifying which neuron will bind to which.

Not so, says [ Cell vol. 163 pp. 285 – 291 ’15 ]. What is important is that there are a lot of them, and that a neuron expressing one of them is unlikely to bump into another neuron carrying the same one. Neurons ‘like’ to form synapses, and will even form synapses with themselves (one process synapsing on another) if nothing else is around. These self-synapses are called autapses. How likely is a process to run into another process of the same neuron? Well, under each square millimeter of cortex in man there are some 100,000 neurons, each with multiple dendrites and axonal branches, so self-synapse formation is a real problem.

The paper says that the structure of all these protocadherins, dscams and similar surface molecules is irrelevant to the program they are carrying out, namely not synapsing on yourself. If a process bumps into another process carrying the same surface molecule in the packed cortex, the ‘homophilic’ binding prevents self-synapse formation. So the chemical diversity is just the instantiation of the ‘don’t synapse with yourself’ rule; what’s important is that there is a lot of diversity. Just what this diversity is chemically matters far less than the fact that there is a lot of it.
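A sketch of that logic: give every neuron one tag drawn from a pool, let matching tags repel, and the only thing the size of the pool controls is how often two different neurons accidentally match and wrongly refuse to connect. The pool sizes and trial counts below are invented.

```python
import random

# 'Don't synapse with yourself': a contact is refused when both processes carry
# the same surface tag. Processes of the same neuron always match, so autapses
# are always blocked; the cost is that two DIFFERENT neurons occasionally share
# a tag and wrongly refuse to connect. Pool sizes and trials are invented.

def wrongful_refusal_rate(n_tags, trials=100_000):
    refused = 0
    for _ in range(trials):
        a = random.randrange(n_tags)    # tag of neuron A's process
        b = random.randrange(n_tags)    # tag of neuron B's process
        if a == b:                      # homophilic match -> contact refused
            refused += 1
    return refused / trials

for n_tags in (10, 1_000, 38_016):
    rate = wrongful_refusal_rate(n_tags)
    print(f"{n_tags:>6} tags: ~{rate:.4%} of between-neuron contacts wrongly refused")
```

With tens of thousands of tags the collateral damage is negligible, whatever the tags are made of chemically.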

This is “It’s not the bricks, it’s the plan” in yet another guise: https://luysii.wordpress.com/2015/09/27/it-aint-the-bricks-its-the-plan/

The neuron as motherboard

Back in the day, when transistors were fairly large and the techniques for putting them together on silicon were primitive by today’s standards, each functionality was put on a separate component, which was then placed on a substrate called the motherboard. Memory was one component, the central processing unit (CPU) another, each about the size of a small cellphone today. Later on, as more and more transistors could be packed on a chip, functionality such as memory could be embedded in the CPU chip itself. We still have motherboards today, since functionality undreamed of back then (graphics processors, disc drives) can be placed on them.

It’s time to look at individual neurons as motherboards rather than as CPUs which sum inputs and then fire. The old model was to have a neuron look like an oak tree, with each leaf functioning as an input device (dendritic spine). If enough of them were stimulated at once, a nerve impulse would occur at the trunk (the axon). To pursue the analogy a bit further, the axon has zillions of side branches (think of the underground roots) which then contact other neurons. Probably the best example of this is the mangrove trees I saw in China, where the roots are above ground.

How would a contraption like this learn anything? If an impulse arrives at an axonal branch touching a leaf (dendritic spine), i.e., at a synapse, the spine doesn’t always respond. The more times impulses hit the leaf while it is responding to something else, the more likely the spine is to respond in the future (this is called long-term potentiation, aka LTP).
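A crude sketch of that use-dependent strengthening (a Hebbian caricature with made-up numbers; real LTP involves NMDA receptors, calcium and much more):

```python
import random

# Caricature of long-term potentiation: the spine's chance of responding goes up
# a little every time an impulse arrives while the neuron is already active.
# Starting probability and increment are made-up numbers.

class Spine:
    def __init__(self):
        self.p_respond = 0.2                       # initial chance of responding

    def receive_impulse(self, neuron_already_active):
        responded = random.random() < self.p_respond
        if neuron_already_active:                  # coincident activity strengthens
            self.p_respond = min(1.0, self.p_respond + 0.05)
        return responded

spine = Spine()
for _ in range(10):                                # ten paired stimulations
    spine.receive_impulse(neuron_already_active=True)
print(f"response probability after pairing: {spine.p_respond:.2f}")
```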

We’ve always thought that different parts of the dendritic tree (leaves and branches) receive different sorts of information and can remember it (by LTP). Only recently have we been able to study different leaves and branches of the same neuron and record from them in a living, intact animal. Well, now we can, and what the following rather technical description says is that different areas of a single neuron are ‘trained’ for different tasks. So a single neuron is far more than a transistor or even a collection of switches. It’s an entire motherboard (a full-fledged computer to you).

Presently Intel can put billions of transistors on a chip. But we have billions of neurons, each of which has tens of thousands of leaves (synapses) impinging on it, along with a memory of what happened at each leaf.

That’s a metaphorical way of describing the results of the following paper (given in full jargon mode).

[ Nature vol. 520 pp. 180 – 185 ’15 ] Different motor learning tasks induce dendritic calcium spikes on different apical tuft branches of individual layer V pyramidal neurons in mouse motor cortex. These branch-specific calcium spikes cause long-lasting potentiation of the postsynaptic dendritic spines active at the time of spike generation.
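In plainer terms, the finding could be caricatured as a rule: when a given apical tuft branch fires a calcium spike, only the spines on that branch that were active at that moment get lastingly strengthened. The sketch below invents the branch and spine counts and the size of the potentiation.

```python
from dataclasses import dataclass, field
from typing import List

# Caricature of the Nature 520 result: a calcium spike on one branch potentiates
# only the spines on THAT branch that were active at the time. Counts and the
# 1.5x potentiation factor are invented for illustration.

@dataclass
class Branch:
    weights: List[float] = field(default_factory=lambda: [1.0] * 5)   # 5 spines

    def calcium_spike(self, active_spines):
        for i in active_spines:            # long-lasting potentiation of active spines
            self.weights[i] *= 1.5

neuron = {"branch_A": Branch(), "branch_B": Branch()}

# 'task 1' drives a spike on branch A while spines 0 and 2 are active;
# branch B is untouched, so a second task can leave its mark elsewhere
neuron["branch_A"].calcium_spike(active_spines=[0, 2])
print(neuron["branch_A"].weights)   # [1.5, 1.0, 1.5, 1.0, 1.0]
print(neuron["branch_B"].weights)   # [1.0, 1.0, 1.0, 1.0, 1.0]
```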