Tag Archives: volume neurotransmission

The four hour cure for depression: what is Ketamine doing?

It is a sad state of affairs when you look forward to writing a post on depression.



From Nature 2 July — “G4, a type of swine flu virus from China, can proliferate in human airway cells.  34/338 pig farm workers in China have antibodies to it.  In ferrets G4 causes lung inflammation and coughing.”

Well that’s enough reason to flee to the solace of the basic neuroscience of depression.



The drugs we use for depression aren’t great.  They don’t help at least a third of the patients, and they usually take several weeks to work for endogenous depression.  They seemed to work faster in my MS patients who had a relapse and were quite naturally depressed by an exogenous event completely out of their control.

Enter ketamine, which, when given IV, can transiently lift depression within a few hours.  You can find more details and references in an article in Neuron ( vol. 101 pp. 774 – 778 ’19 ) written by the guys at Yale who did some of the original work.  Here’s the gist of the article.  A single dose of ketamine produced antidepressant effects that began within hours, peaked in 24 – 72 hours, and dissipated within 2 weeks (if ketamine wasn’t repeated).  This occurred in 50 – 75% of people with treatment resistant depression.  Remarkably, one third of treated patients went into remission.

This simply has to be telling us something very important about the neurochemistry of depression.

Naturally there has been a lot of work on the neurochemical changes produced by ketamine, none of which I’ve found convincing ( see https://luysii.wordpress.com/2019/10/27/how-does-ketamine-lift-depression/ ) until the following paper [ Neuron  vol. 106 pp. 715 – 726 ’20 ].

In what follows you need some basic knowledge of synaptic structure, but here’s a probably inadequate elevator pitch.  Synapses have two sides, pre- and post-.  On the presynaptic side, neurotransmitters are enclosed in synaptic vesicles.  Their contents are released into the synaptic cleft when an action potential arrives from elsewhere in the neuron.  The neurotransmitters flow across the very narrow cleft to bind to receptors on the postsynaptic side, triggering (or not) a response in the postsynaptic neuron.  Presynaptic terminals vary in the number of vesicles they contain.

Synapses are able to change their strength (how likely an action potential is to produce a postsynaptic response).  Otherwise our brains wouldn’t be able to change and learn anything.  This is called synaptic plasticity.

One way to change the strength of a synapse is to adjust the number of synaptic vesicles found on the presynaptic side.   Presynaptic neurons form synapses with many different neurons.  The average neuron in the cerebral cortex is post-synaptic to thousands of neurons.

We think that synaptic plasticity involves changes at particular synapses but not at all of them.

Not so with ketamine, according to the paper.  It changes the number of presynaptic vesicles at all synapses of a given neuron by the same percentage — this is called synaptic scaling.  Given 3 synapses containing 60, 50 and 40 vesicles, upward synaptic scaling by 20% would add 12 vesicles to the first, 10 to the second and 8 to the third.  The paper states that this is exactly what ketamine does to neurons using glutamic acid (the major excitatory neurotransmitter in the brain).  Even more interesting is the fact that lithium, which treats mania, has the opposite effect, decreasing the number of vesicles at each synapse by the same percentage.
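For the arithmetically inclined, here is the multiplicative scaling rule in a few lines of Python (the vesicle counts are just the illustrative numbers above, not data from the paper):

```python
def scale_synapses(vesicle_counts, factor):
    """Multiplicative synaptic scaling: every synapse of the neuron
    changes by the same percentage, so the ratios between synapses
    (presumably what the neuron has learned) are preserved."""
    return [round(n * factor) for n in vesicle_counts]

before = [60, 50, 40]
after = scale_synapses(before, 1.20)   # upward scaling by 20% -> [72, 60, 48]
down = scale_synapses(before, 0.80)    # the lithium direction  -> [48, 40, 32]
```

Note that 72/60 and 60/50 both equal 1.2: the relative strengths of the synapses are untouched; only the overall gain changes.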

I found this rather depressing when I first read it, as I realized that there was no chemical process intrinsic to a neuron which could possibly work quickly enough to change all the synapses at once.  To do this you need a drug which goes everywhere at once.

But you don’t.  There are certain brain nuclei which send their processes everywhere in the brain.  Not only that, but their processes contain varicosities which release their neurotransmitter even where there is no post-synaptic apparatus.  One such nucleus (the pars compacta of the substantia nigra) has such extensively ramified processes that “individual neurons of the pars compacta are calculated to give rise to 4.5 meters of axons once all the branches are summed” — [ Neuron vol. 96 p. 651 ’17 ].  So when that single neuron fires, dopamine is likely to bathe every neuron in the brain.  We think that something similar occurs in the locus coeruleus of the lower brain, which has only 15,000 neurons and releases norepinephrine, and also in the raphe nuclei of the brainstem, which release serotonin.

It should be no surprise that drugs which alter neurotransmission by these neurotransmitters are used to treat various psychiatric diseases.  Some drugs of abuse alter them as well (cocaine and speed release norepinephrine, LSD binds one of the serotonin receptors, etc., etc.)

The substantia nigra contains only 450,000 neurons at birth, so you don’t need a big nucleus to affect our 80 billion neuron brains.

So the question before the house is: have we missed other nuclei in the brain which control volume neurotransmission by glutamic acid?  If they exist, could their malfunction be a cause of mania and/or depression?  There is plenty of room for 10,000 to 100,000 neurons to hide in an 80 billion neuron brain.

Time to think outside the box, people.  Here is an example: since ketamine blocks activation of one receptor for glutamic acid, could there be a system using volume neurotransmission which releases a receptor inhibitor?

Addendum 7 July — I sent a copy of the post to the authors and received this back from one of them. “Thank you very much for your kind words and interest in our work. Your explanation is quite accurate (my only suggestion would be to replace “vesicles” with “receptors”, as the changes we propose are postsynaptic). Reading your blog reassures us that our review article accomplished its main goal of reaching beyond the immediate neuroscience community to a wider audience like yourself.”


Why don’t serotonin neurons die like dopamine neurons do in Parkinson’s disease?

Say what?  “This proportion will likely be higher in rat dopaminergic neurons, which have even larger axonal arbors with ~500,000 presynapses, or in human serotonergic neurons, which are estimated to extend axons for 350 meters” – from [ Science vol. 366 eaaw9997 p. 4 ’19 ]

I thought I was reasonably well informed, but I found these numbers astounding, so I looked up the papers.  Here is how such a statement can be made, with chapter and verse.

“The validity of the single-cell axon length measurements for dopaminergic and cholinergic neurons can be independently checked with calculations based on the total volume of the target territory, the density of the particular type of axon (axon length per volume of target territory), and the number of neuronal cell bodies giving rise to that type of axon.  These population analyses are made possible by the availability of antibodies that localize to different types of axons: anti-ChAT for cholinergic axons (also visualized with acetylcholine esterase histochemistry), anti-tyrosine hydroxylase for striatal dopaminergic axons, and anti-serotonin for serotonergic axons.

The human data for axon density and neuron counts have been published for forebrain cholinergic neurons and for serotonergic neurons projecting from the dorsal raphe nucleus to the cortex, and cortical volume estimates for humans are available from MRI analyses; forebrain cholinergic neuron data is also available for chimpanzees. These calculations lead to axon length estimates of 107 m and 31 m, respectively, for human and chimpanzee forebrain cholinergic neurons, and an axon length estimate of 170–348 meters for human serotonergic neurons.”
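The population-level check described in the quote is a single multiplication and division; here is a sketch (the numbers in the example call are hypothetical, purely to show how the units work out):

```python
def axon_length_per_neuron(axon_density_m_per_mm3, territory_volume_mm3, n_neurons):
    """Average axon length per neuron from population data:
    (axon length per unit volume of target territory) x (territory volume)
    gives the total axon length of that type; dividing by the number of
    cell bodies giving rise to that type of axon gives the per-neuron length."""
    total_length_m = axon_density_m_per_mm3 * territory_volume_mm3
    return total_length_m / n_neurons

# hypothetical numbers, for illustration only:
# 2 m of axon per mm^3, a 1000 mm^3 territory, 10 cell bodies
example = axon_length_per_neuron(2.0, 1000.0, 10)  # 200 meters per neuron
```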

H. Wu, J. Williams, J. Nathans, Complete morphologies of basal forebrain cholinergic neurons in the mouse. eLife 3, e02444 (2014). doi: 10.7554/eLife.02444; pmid: 24894464

How in the world can these neurons survive as long as they do?

Not all of them do.  At birth there are 450,000 neurons in the substantia nigra (one side or both sides?), declining to 275,000 by age 60.  Patients with Parkinsonism all had cell counts below 140,000 [ Ann. Neurol. vol. 24 pp. 574 – 576 ’88 ].  Catecholamines such as dopamine and norepinephrine are easily oxidized to quinones, and this may be the ‘black stuff’ in the substantia nigra (which is Latin for black stuff).

Here are the numbers for serotonin neurons in the few brain nuclei (the dorsal raphe nucleus) in which they are found.  Fewer than for dopamine: a mere 165,000 +/- 34,000 — https://www.ncbi.nlm.nih.gov/pubmed

So despite being too thin to be seen, with a total axon length longer than a football field, they appear to last as long as we do.  Have we missed a neurological disease due to loss of serotonin neurons?

Why should the axons of dopamine, serotonin and norepinephrine neurons be so long and branch so widely?  Because they release their transmitters diffusely in the brain, and diffusion over such distances is far too slow, so the axonal apparatus must carry the transmitter there and release it locally into the brain’s extracellular space.  No postsynaptic specializations are present in volume neurotransmission — that’s the point.  This is one of the reasons that a wiring diagram of the brain isn’t enough — https://luysii.wordpress.com/2011/04/10/would-a-wiring-diagram-of-the-brain-help-you-understand-it/.
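How slow is diffusion?  The one-dimensional Einstein relation (time scales as distance squared over twice the diffusion coefficient) makes the point.  The diffusion coefficient below is an assumed typical order of magnitude for a small molecule in tissue, not a measured value for any particular transmitter:

```python
def diffusion_time_seconds(distance_m, diffusion_coeff_m2_per_s):
    """Rough time for a molecule to diffuse a given distance
    (1-D Einstein relation: t ~ x^2 / 2D)."""
    return distance_m ** 2 / (2.0 * diffusion_coeff_m2_per_s)

D = 1e-10  # m^2/s, assumed typical for a small molecule in brain tissue

across_cleft = diffusion_time_seconds(20e-9, D)  # ~20 nm synaptic cleft: microseconds
across_brain = diffusion_time_seconds(0.1, D)    # ~10 cm: about 5e7 seconds
```

Microseconds across a synaptic cleft, the better part of two years across 10 centimeters of brain: hence the 4.5 meters of axon to do the delivery.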

Just think of that dopamine neuron with 500,000 presynapses.  Synthesis and release must be general, as the neuron couldn’t possibly address an individual synapse.

The more we know the more remarkable the brain becomes.


NonAlgorithmic Intelligence

Penrose was right.  Human intelligence is nonAlgorithmic.  But that doesn’t mean that our physical brains produce consciousness and intelligence using quantum mechanics (although all matter is what it is because of quantum mechanics).  The parts (even small ones like neurotubules) contain so much mass that their associated de Broglie wavelength is far too small for quantum mechanical effects to appear.  Here Penrose got roped in by Hameroff into thinking that neurotubules were the carriers of the quantum mechanical indeterminacy.  They aren’t; they are just too big.  The dimer of alpha and beta tubulin contains 900 amino acids — a mass of around 90,000 Daltons (or 90,000 hydrogen atoms, which are small enough to show quantum mechanical effects).

So why was Penrose right?  Because neural nets, which are inherently nonAlgorithmic, are showing intelligent behavior.  AlphaGo, which beat the world champion, is the most recent example, but others include facial recognition and image classification [ Nature vol. 529 pp. 484 – 489 ’16 ].

Nets are trained on real world images and told whether they are right or wrong. I suppose this is programming of a sort, but it is certainly nonAlgorithmic. As the net learns from experience it adjusts the strength of the connections between its neurons (synapses if you will).

So it should be a simple matter to find out just how AlphaGo did it — just get a list of the neurons it contains, and the number and strengths of the synapses between them. I can’t find out just how many neurons and connections there are, but I do know that thousands of CPUs and graphics processors were used. I doubt that there were 80 billion neurons or a trillion connections between them (which is what our brains are currently thought to have).

Just print out the above list (assuming you have enough paper) and look at it.  Will you understand how AlphaGo won?  I seriously doubt it.  You will understand it no better than a list of the positions and momenta of 80 billion gas molecules would tell you the gas’s pressure and temperature.  Why?  Because in statistical mechanics you assume that the particles making up an ideal gas are featureless, identical and do not interact with each other.  This isn’t true for neural nets.
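For comparison, here is why the gas list actually is tractable: kinetic theory throws away everything about individual molecules except averages.  A sketch, assuming identical non-interacting particles (exactly the assumption that fails for neurons):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def pressure_and_temperature(particle_mass_kg, speeds_m_s, volume_m3):
    """Ideal-gas kinetic theory: P = N m <v^2> / 3V and
    (3/2) k T = (1/2) m <v^2>.  Only the mean square speed
    survives; which molecule had which speed is irrelevant."""
    n = len(speeds_m_s)
    mean_v2 = sum(v * v for v in speeds_m_s) / n
    pressure = n * particle_mass_kg * mean_v2 / (3.0 * volume_m3)
    temperature = particle_mass_kg * mean_v2 / (3.0 * K_B)
    return pressure, temperature
```

No such two-number summary exists for a net of distinct, interacting neurons; that is the author’s point.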

It also isn’t true for the brain. Efforts are underway to find a wiring diagram of a small area of the cerebral cortex. The following will get you started — https://www.quantamagazine.org/20160406-brain-maps-micron-program-iarpa/

Here’s a quote from the article to whet your appetite.

“By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.”

I don’t think this will help us understand how the brain works any more than the above list of neurons and connections from AlphaGo.  There are even more problems with such a list.  Connections (synapses) between neurons come and go (and they increase and decrease in strength, as in the neural net).  Some connections turn on the receiving neuron, some turn it off.  I don’t think there is a good way to tell what a given connection is doing just by looking at a slice of it under the electron microscope.  Lastly, some of our most human attributes (emotion) are due not to connections between neurons but to release of neurotransmitters generally into the brain, not at the very localized synapse, so they won’t show up on a wiring diagram.  This is called volume neurotransmission, and the transmitters are serotonin, norepinephrine and dopamine.  Not convinced?  Among the agents modifying volume neurotransmission are cocaine, amphetamine, antidepressants and antipsychotics.  Fairly important.

So I don’t think we’ll ever truly understand how the neural net inside our head does what it does.

Trouble in River City (aka Brain City)

300 European neuroscientists are unhappy.  If 50,000,000 Frenchmen can’t be wrong, can the neuroscientists be off base?  They don’t like the way things are going in a billion Euro project to computationally model the brain (Science vol. 345 p. 127 ’14 11 July, Nature vol. 511 pp. 133 – 134 ’14 10 July).  What has them particularly unhappy is that one of the sections, involving cognitive neuroscience, has been eliminated.

A very intelligent Op-Ed in the New York Times of 12 July by a psychology professor notes that we have no theory of how to go from neurons, their connections and their firing of impulses, to how the brain produces thought.  Even better, he notes that we have no idea what such a theory would look like, or what we would accept as an explanation of brain function in terms of neurons.

While going from the gene structure in our DNA to cellular function, from there to function at the level of the organ, and from the organ to the function of the organism is more than hard (see https://luysii.wordpress.com/2014/07/09/heres-a-drug-target-for-schizophrenia-and-other-psychiatric-diseases/) at least we have a road map to guide us. None is available to take us from neurons to thought, and the 300 argue that concentrating only on neurons, while producing knowledge, won’t give us the explanation we seek. The 300 argue that we should progress on all fronts, which the people running the project reject as too diffuse.

I’ve posted on this problem before — I don’t think a wiring diagram of the brain (while interesting) will tell us what we want to know. Here’s part of an earlier post — with a few additions and subtractions.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every of the 10^23 molecules of a mole or so of gas in a container is brought home. Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain.  They want a ‘wiring diagram’ of the brain, e.g. a list of every neuron, and for each neuron a second list of the neurons it receives synapses from, and a third list of the neurons it sends synapses to.  For the non-neuroscientist: the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana).  This is why you need the second and third lists.
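The three lists are just a directed graph stored as adjacency lists; a minimal sketch (neuron names hypothetical):

```python
from collections import defaultdict

class WiringDiagram:
    """For each neuron, two maps: the neurons it receives synapses
    from (inputs) and the neurons it sends synapses to (outputs).
    Synapses are directed, which is why one symmetric list won't do."""
    def __init__(self):
        self.inputs = defaultdict(set)   # neuron -> its presynaptic partners
        self.outputs = defaultdict(set)  # neuron -> its postsynaptic partners

    def add_synapse(self, pre, post):
        self.outputs[pre].add(post)
        self.inputs[post].add(pre)

# toy circuit: A drives B and C, and C also drives B
w = WiringDiagram()
w.add_synapse("A", "B")
w.add_synapse("A", "C")
w.add_synapse("C", "B")
```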

Clearly a monumental undertaking and one which grows more monumental with the passage of time. Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them). This is where the neurological urban myth of the loss of 10,000 neurons each day came from. For details see https://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 ’11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses.  Well, that’s not a mole of synapses, but it is about a quarter of a nanomole of them.  People are nonetheless trying to see which areas of the brain are connected to each other, to at least get a schematic diagram.
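The mole arithmetic, for anyone who wants to check it (Avogadro’s number is the only input besides the synapse count):

```python
AVOGADRO = 6.02214076e23  # particles per mole

synapses = 150e12                      # 150 trillion synapses
nanomoles = synapses / AVOGADRO * 1e9  # comes out to about 0.25 nanomole
```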

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it. I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnection between neurons and just how many there are. Figure 2 (p. 179) is particularly revealing showing a 3 dimensional reconstruction using the high resolutions obtainable by the electron microscope. Stare at figure 2.f. a while and try to figure out what’s going on. It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing. There are at least 6 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses).  A wiring diagram alone won’t tell you this.

2. When I was starting out, the following statement would have seemed impossible: it is now possible to watch synapses in the living brain of an awake animal for extended periods of time.  And we now know that synapses come and go in the brain.  The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times.  Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ].  So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses.  Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses (this is called volume neurotransmission).  Their importance in psychiatry and drug addiction is unparalleled.  Examples of such volume transmitters include serotonin, dopamine and norepinephrine.  Drugs of abuse affecting their action include cocaine and amphetamine.  Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

4. (new addition) A given neuron doesn’t contact another neuron just once, as far as we know.  So how do you account for this in a graph?  (A simple graph allows only one edge between any two nodes.)

5. (new addition) All connections (synapses) aren’t created equal.  Some are far, far away from the part of the neuron (the axon) which actually fires impulses, so many of them have to be turned on at once for firing to occur.  So in addition to the excitatory/inhibitory dichotomy, you’d have to put another number on each link in the graph: the probability of a given synapse producing an effect.  In general this isn’t known for most synapses.

6. (new addition) Some connections never directly cause a neuron to fire or not fire.  They just increase or decrease the probability that a neuron will fire or not fire in response to impulses at other synapses.  These are called neuromodulators, and the brain has tons of different ones.
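Points 1, 4 and 5 above are really complaints about what a bare edge list leaves out; in graph terms they say that each edge needs attributes, and that the same pair of neurons can share several edges.  A sketch of such an annotated edge list (all names and numbers hypothetical; the dynamics, volume transmission and neuromodulation of points 2, 3 and 6 are what this still doesn’t capture):

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    pre: str                    # presynaptic neuron
    post: str                   # postsynaptic neuron
    excitatory: bool            # point 1: the sign of the connection
    release_probability: float  # point 5: not all synapses are equal

# point 4: a plain list of edges is already a multigraph,
# since the same (pre, post) pair can appear repeatedly
connectome = [
    Synapse("A", "B", excitatory=True,  release_probability=0.9),
    Synapse("A", "B", excitatory=True,  release_probability=0.2),  # second contact, same pair
    Synapse("C", "B", excitatory=False, release_probability=0.5),
]
```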

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg. This is of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles.  There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse.  The construction of the brain from the fertilized egg must be in some sense statistical.  Remarkable that it happens at all.  Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.
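The back of the envelope behind “must be in some sense statistical,” using the counts quoted above:

```python
nucleotides = 3.2e9   # base pairs in the genome
neurons = 80e9
synapses = 150e12

neurons_per_nucleotide = neurons / nucleotides    # 25 neurons per base pair
synapses_per_nucleotide = synapses / nucleotides  # ~47,000 synapses per base pair
```

Even if every single base pair were devoted to wiring, each would have to specify tens of thousands of synapses; the genome can only supply rules, not a blueprint.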

(Addendum 17 July ’14) I’m fortunate enough to have a family member who worked at Bell Labs (when it existed) and who knows much more about graph theory than I do. Here are his points, with a few comments back from me.

Seventh paragraph: Still don’t understand the purpose of the three lists, or what that buys that you don’t get with a graph model. See my comments later in this email.

“nobody’s brain is strong enough to comprehend it”: At some level, this is true of virtually every phenomenon of Nature or science. We only begin to believe that we “comprehend” something when some clever person devises a model for the phenomenon that is phrased in terms of things we think we already understand, and then provides evidence (through analysis and perhaps simulation) that the model gives good predictions of observed data. As an example, nobody comprehended what caused the motion of the planets in the sky until science developed the heliocentric model of the solar system and Newton et al. developed calculus, with which he was able to show (assuming an inverse-square behavior of the force of gravity) that the heliocentric model explained observed data. On a more pedestrian level, if a stone-age human was handed a personal computer, his brain couldn’t even begin to comprehend how the thing does what it does — and he probably would not even understand exactly what it is doing anyway, or why. Yet we modern humans, at least us engineers and computer scientists, think we have a pretty good understanding of what the personal computer does, how it does it, and where it fits in the scheme of things that modern humans want to do.

Of course we do, that’s because we built it.

On another level, though, even computer scientists and engineers don’t “really” understand how a personal computer works, since many of the components depend for their operation on quantum mechanics, and even Feynman supposedly said that nobody understands quantum mechanics: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Penrose actually did think the brain worked by quantum mechanics, because what it does is nonAlgorithmic. That’s been pretty much shot down.

As for your six points:

Point 1 I disagree with. It is quite easy to express excitatory or inhibitory behavior in a wiring diagram (graph). In fact, this is done all the time in neural network research!

Point 2: Updating a graph is not necessarily a big deal. In fact, many systems that we analyze with graph theory require constant updating of the graph. For example, those who analyze, monitor, and control the Internet have to deal with graphs that are constantly changing.

Point 3: Can be handled with a graph model, too. You will have to throw in additional edges that don’t represent synapses, but instead represent the effects of neurotransmitters. Will this get to be a complicated graph? You bet. But nobody ever promised an uncomplicated model. (Although uncomplicated — simple — models are certainly to be preferred over complicated ones.)

Point 4: This can be easily accounted for in a graph. Simply place additional edges in the graph to account for the additional connections. This adds complexity, but nobody ever promised the model would be simple. Another alternative, as I mentioned in an earlier email, is to use a hypergraph.

Point 5: Not sure what your point is here. Certainly there is a huge body of research literature on probabilistic graphs (e.g., Markov Chain models), so there is nothing you are saying here that is alien to what graph theory researchers have been doing for generations. If you are unhappy that we don’t know some probabilistic parameters for synapses, you can’t be implying that scientists must not even attempt to discover what these parameters might be. Finding these parameters certainly sounds like a challenge, but nobody ever claimed this was an un-challenging line of research.

In addition to not knowing the parameters, you’d need a ton of them, as it’s been stated frequently in the literature that the ‘average’ neuron has 10,000 synapses impinging on it. I’ve never been able to track this one down. It may be neuromythology, like the 10,000 neurons we’re said to lose every day. With 10,000 adjustable parameters you could make a neuron sing the Star Spangled Banner. Perhaps this is why we can sing the Star Spangled Banner.

Point 6: See my comments on Point 5. Probabilistic graph models have been well-studied for generations. Nothing new or frightening in this concept.