Category Archives: Uncategorized

The prion battles continue with a historical note at the end

Now that we know proteins don’t have just one shape, and that 30% of them have unstructured parts, it’s hard to remember how radical Prusiner’s proposal was back in the 80s: that a particular conformation (PrPSc) of the normal prion protein (PrPC) caused other prion proteins to adopt it and so caused disease. Actually Kurt Vonnegut got there first with Ice-9 in “Cat’s Cradle” in 1963. If you’ve never read it, do so; you’ll like it.

There was huge resistance to Prusiner’s idea, but eventually it became accepted by everyone except Laura Manuelidis (about whom more later). People kept saying the true infectious agent was a contaminant in the preparations Prusiner used to infect mice, and that the prion protein (PrPC) was irrelevant.

The argument that convinced me Prusiner was right was PMCA (Protein Misfolding Cyclic Amplification), in which you start with a seed of PrPSc (the misfolded form of the normal prion protein PrPC) and incubate it with a 10,000-fold excess of normal PrPC, which the evil PrPSc converts to more of itself. Then you sonicate what you’ve got, breaking it into small fragments, and continue the process with another 10,000-fold excess of normal PrPC. Repeat this 20 times. This should certainly dilute out any infectious agent along for the ride (no living tissue is involved at any point). You still get PrPSc at the end. For details see Cell vol. 121 pp. 195 – 206 ’05.
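
To get a feel for why serial PMCA rules out a conventional contaminant, here is the exponent bookkeeping as a quick sketch (the 10,000-fold excess and 20 rounds are from the paper; the rest is just arithmetic):

```python
# Each PMCA round dilutes the original seed (and any contaminant
# riding along with it) 10,000-fold into fresh normal PrPC.
dilution_per_round = 10_000
rounds = 20

total_dilution = dilution_per_round ** rounds  # 10^80
print(f"cumulative dilution of the original seed: 10^{rounds * 4}")
```

A 10^80 dilution is far beyond the point where even one molecule of any starting contaminant could remain, yet PrPSc keeps appearing.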

Now comes [ Proc. Natl. Acad. Sci. vol. 117 pp. 23815 – 23822 ’20 ], which claims to be able to separate the infectivity of prions from their toxicity. Highly purified mouse prions show no signs of toxicity (neurite fragmentation, dendritic spine density changes) in cell culture, but are still infectious, producing disease when injected into another mouse brain.

Even worse, treatment of brain homogenates from prion-infected mice with sodium lauroylsarcosine destroys the toxicity to cultured neurons without reducing infectivity to the next mouse.

So if this paper can be replicated it implies that the prion protein triggers some reaction in the intact brain which then kills the animals.

Manuelidis thought in 2011 that the prion protein is a reaction to infection, and that we haven’t found the true culprit. I think PMCA pretty much destroyed that idea.

So basically we’re almost back to square one with what causes prion disease. Just to keep you thinking. Consider this. We can knock out the prion protein gene in mice. Guess what? The mice are normal. However, injection of the abnormal prion protein (PrPSc) into their brains (which is what researchers do to transmit the disease) doesn’t cause any disease.

Historical notes: I could have gone to Yale Med when Manuelidis was there, but chose Penn instead. According to Science vol. 332 pp. 1024 – 1027 ’11 she was one of 6 women in her class, graduating in 1967, and at 24 married her 48-year-old professor (Manuelidis). By today’s rather Victorian standards of consent, given the power differential between teacher and student, that would probably have gotten both of them bounced out.

So I went to Penn Med, graduating in ’66. Prusiner graduated in ’68. He and I were in the same medical fraternity (Nu Sigma Nu). Don’t think Animal House; medical fraternities were a place to get some decent food, play a piano-shaped object and very occasionally party. It’s very likely that we shared a meal, but I have no recollection.

Graduating along with me was another Nobel Laureate to be — Mike Brown, he of the statins. Obviously a smart guy, but he didn’t seem outrageously smarter than the rest of us.

Cells are not bags of cytoplasm

How Ya Gonna Keep ’em Down on the Farm (After They’ve Seen Paree) is a song of 100+ years ago, when World War I had just ended. In 1920, for the first time, America was 50/50 urban/rural. Now it’s 82% urban.

What does this have to do with cellular biology? A lot. One of the first second messengers to be discovered was cyclic adenosine monophosphate (cAMP). It binds to an enzyme complex called protein kinase A (PKA), activating it and making it phosphorylate all sorts of proteins, changing their activity. But PKA doesn’t float free in the cell. We have some 47 genes for proteins (called AKAPs, for A-Kinase Anchoring Proteins) which bind PKA and localize it to various places in the cell. cAMP is made by an enzyme called adenylyl cyclase, of which we have 10 types, each localized to various places in the cell (because most of them are membrane-embedded). We also have hundreds of G Protein Coupled Receptors (GPCRs) localized in various parts of the cell (apical, basal, primary cilia, adhesion structures etc. etc.), many of which when activated stimulate (by yet another complicated mechanism) adenylyl cyclase to make cAMP.

So the cell tries to keep cAMP relatively localized when it is formed (down on the farm if you will). Why have all these ways of making it if it’s going to run all over the cell?

Actually the existence of localized signaling by cAMP is rather controversial, particularly when you can measure how fast it is moving around. All studies previous to Cell vol. 182 pp. 1379 – 1381, 1519 – 1530 ’20 found free diffusion of cAMP.

This study found that cAMP (at low concentrations) was essentially immobile, remaining down on the farm where it was formed.

The authors used a fluorescent analog of cAMP, which allowed them to use fluorescence fluctuation spectroscopy, giving the probability distribution function of an individual molecule occupying a given position in space and time (SpatioTemporal Image Correlation Spectroscopy — STICS).

Fascinating as the study is, it is light years away from physiologic — the fluorescent cAMP analog was not formed by anything resembling a physiologic mechanism (e.g. by adenylyl cyclase). A precursor to the fluorescent cAMP was injected into the cell and broken down by ‘intracellular esterases’ to form the fluorescent cAMP analog.

Then the authors constructed a protein made of three parts: (1) a phosphodiesterase (PDE), which broke down the fluorescent cAMP analog, and (2) another protein — the sensor — which fluoresced when it bound the cAMP analog. The two were connected by (3) a flexible protein linker, i.e. the ‘ruler’ of the previous post. The ruler could be made of various lengths.

Then levels of fluorescent cAMP were obtained by injecting it into the cell, or by stimulating a receptor.

If the sensor was 100 Angstroms away from the PDE, it never showed signs of cAMP, implying that the PDE was destroying the cAMP before it could reach the sensor, which in turn implies that diffusion was quite slow. This was at low concentrations of the fluorescent cAMP analog. At high injection concentrations the cAMP overcame the sites binding it in the cell and moved past the sensor.

It was a lot of work, but it convincingly (to me) showed that cAMP doesn’t move freely in the cell unless its concentration is high enough to overcome the binding sites available to it.

They made another molecule containing (1) protein kinase A, (2) a ruler and (3) a phosphodiesterase. If the kinase and phosphodiesterase were close enough together, cAMP never got to PKA at all.

Another proof that phosphodiesterase enzymes can create a zone where there is no free cAMP (although there is still some bound to proteins).

Hard stuff (to explain) but nonetheless impressive, and shows why we must consider the cell a bunch of tiny principalities jealously guarding their turf like medieval city states.


A molecular ruler

Time to cleanse your mind by leaving the contentious world of social issues and entering the realm of pure thought with some elegant chemistry. 

You are asked to construct a molecular ruler with a persistence length of 150 Angstroms. 

Hint #1: use a protein

Hint #2: use alpha helices

Spoiler alert — nature got there first. 

The ruler was constructed and used in an interesting paper on cAMP nanodomains (about which more in the next post).

It’s been around since 2011 [ Proc. Natl. Acad. Sci. vol. 108 pp. 20467 – 20472 ’11 ] and I’m embarrassed to admit I’d never heard of it.

It’s basically a run of 4 negatively charged amino acids (glutamic acid or aspartic acid) followed by a run of 4 positively charged amino acids (lysine, arginine). This is a naturally occurring motif found in a variety of species. 

My initial (incorrect) thought was that this couldn’t work, as the run of 4 positively charged amino acids would bend back at the end and bind to the 4 negatively charged ones. But this can’t happen even if you make the peptide chain planar, as the positive charges would alternate sides of the planar peptide backbone.

Recall that there are 3.6 amino acids per turn of the alpha helix, meaning that between a run of 4 glutamic acids/aspartic acids and an adjacent run of 4 lysines/arginines, an ionic bond is certain to form between the side chains (not between adjacent amino acids on the backbone, but probably between residues 3 or 4 positions apart).

Since a complete turn of the alpha helix is only 5.4 Angstroms, a persistence length of 150 Angstroms means about 28 turns of the helix, using 28 * 3.6 ≈ 100 amino acids, or about 12 blocks of ++++---- charged amino acids.
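
The arithmetic above, spelled out (using the standard 3.6 residues per turn and the 5.4 Angstrom helical pitch):

```python
pitch = 5.4               # Angstroms per turn of the alpha helix
residues_per_turn = 3.6
target_length = 150.0     # desired persistence length, Angstroms

turns = target_length / pitch                  # ~27.8 turns
residues = turns * residues_per_turn           # ~100 amino acids
blocks = residues / 8                          # ~12 eight-residue blocks

rise_per_residue = pitch / residues_per_turn   # 1.5 Angstroms per residue
block_increment = 8 * rise_per_residue         # 12 Angstroms per block added
```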

The beauty of the technique is that by starting with an 8 amino acid ++++---- block, you can add length to your ruler in 12 Angstrom increments (8 residues at a rise of 1.5 Angstroms each). This is exactly what Cell vol. 182 pp. 1519 – 1530 ’20 did. But that’s for the next post.

247 ZeptoSeconds

247 zeptoseconds is not the track time of the fastest Marx brother. It is the time light takes to travel across a hydrogen molecule (H2) before it kicks out an electron — the photoelectric effect.

But what is a zeptosecond anyway? There are 10^21 zeptoseconds in a second. That’s a lot. A thousand times more than the number of seconds since the big bang, which is only 60 x 60 x 24 x 365 x 13.8 x 10^9 = 4.35 x 10^17. Not that big a deal to a chemist anyway, since 10^21 is about 1/600th of the number of molecules in a mole.
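
The comparison is easy to check:

```python
# Seconds since the big bang vs. zeptoseconds in one second.
seconds_per_year = 60 * 60 * 24 * 365          # 31,536,000
age_of_universe = seconds_per_year * 13.8e9    # ~4.35e17 seconds
zeptoseconds_per_second = 10 ** 21

print(f"{age_of_universe:.2e}")                # ~4.35e+17

# And the chemist's yardstick: Avogadro's number dwarfs 10^21.
avogadro = 6.022e23
print(round(avogadro / zeptoseconds_per_second))  # ~602
```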

You can read all about it in Science vol. 370 pp. 339 – 341 ’20 — if you have a subscription.

Studying photoionization allows you to study the way light is absorbed by molecules, something important to any chemist. The 247 zeptoseconds is the birth time of the emitted electron. It depends on the travel time of the photon across the hydrogen molecule.

They don’t quite say trajectory of the photon, but it is implied, even though in quantum mechanics (which is what we’re dealing with here) there is no such thing as a trajectory. All we have are measurements at time t1 and time t2. We are not permitted to say what the photon is doing between these two times when we’ve done measurements. Our experience in the much larger classical physics world makes us think that there is such a thing.

It is the peculiar doublethink quantum mechanics forces on us. Chemists know this when they think about something as simple as the 2s orbital: spherically symmetric, with electron density on either side of a node. The node is where you never find the electron. Well, if you never find it there, how can it have a trajectory from one side to the other?

Quantum mechanics is full of conundrums like that. Feynman warned us not to think about them, but doing so will take your mind off the pandemic (and if you’re good, off the election as well).

It’s worth reading the article in Quanta which asks if wavefunctions tunnel through a barrier at speeds faster than light — here’s a link — It will make your head spin.

Here’s a link to an earlier post about the doublethink quantum mechanics forces on us

Here’s the post itself

Doublethink and angular momentum — why chemists must be adept at it

Chemists really should know lots and lots about angular momentum, which is intimately involved in 3 of the 4 quantum numbers needed to describe atomic electronic structure. Despite this, I never really understood what was going on until taking the QM course and digging into chapters 10 and 11 of Giancoli’s physics book (pp. 248 – 310, 4th edition).

Quick, what is the angular momentum of a single particle (say a planet) moving around a central object (say the sun)? Its magnitude is the product of the particle’s mass, its speed and its distance from the center (m * v * r for circular motion), but what is its direction? There must be a direction, since angular momentum is a vector. The (unintuitive to me) answer is that the angular momentum vector points upward (resp. downward) from the plane of motion of the planet around the center of mass of the sun-planet system if the planet is moving counterclockwise (resp. clockwise), according to the right hand rule. On the other hand, the momentum of a particle moving in a straight line is just its mass times its velocity vector (i.e. in the same direction).

Why the difference? This unintuitive answer makes sense if, instead of a single point mass, you consider the rotation of a solid (i.e. rigid) object around an axis. At a given time the velocity vectors of points in the object either point in different directions or, if they point in the same direction, have different magnitudes; since the object is solid, points farther from the axis are moving faster. The only sensible thing to do is point the angular momentum vector along the axis of rotation (it’s the only thing which has a constant direction).

Mathematically, this is fairly simple to do (but only in 3 dimensions). The vector from the axis of rotation to the planet (call it r) and the vector of instantaneous linear velocity of the planet (call it v) do not point in the same direction, so they define a plane (if they did point in the same direction, the planet would be either hurtling into the sun or speeding directly away, hence not rotating). In 3 dimensions, there is a unique direction at 90 degrees to that plane. The vector cross product of r and v gives a vector pointing in this direction (to get a unique vector, you must use the right or the left hand rule). Nicely, the larger r and v, the larger the angular momentum vector (which makes sense). In more than 3 dimensions there isn’t a unique direction away from a plane, which is why the cross product doesn’t work there (although there are mathematical analogs of it).
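
A minimal numerical illustration of L = m (r x v); the numbers are made up (a unit mass circling counterclockwise in the x-y plane):

```python
def cross(a, b):
    """Vector cross product in 3 dimensions (right hand rule)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

m = 1.0                 # mass
r = (1.0, 0.0, 0.0)     # position relative to the axis of rotation
v = (0.0, 2.0, 0.0)     # instantaneous velocity (counterclockwise)

L = tuple(m * c for c in cross(r, v))
print(L)  # (0.0, 0.0, 2.0): points along +z, out of the orbital plane,
          # perpendicular to both r and v, as the right hand rule says
```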

This also explains why I never understood momentum (angular or otherwise) till now. It’s very easy to conflate linear momentum with force and I did. Get hit by a speeding bullet and you feel a force in the same direction as the bullet — actually the force you feel is what you’ve done to the bullet to change its momentum (force is basically defined as anything that changes momentum).

So the angular momentum of an object is never in the direction of its instantaneous linear velocity. But why should chemists care about angular momentum? Solid state physicists, particle physicists etc. etc. get along just fine without it pretty much, although quantum mechanics is just as crucial for them. The answer is simply because the electrons in a stable atom hang around the nucleus and do not wander off to infinity. This means that their trajectories must continually bend around the nucleus, giving each trajectory an angular momentum.

Did I say trajectory? This is where the doublethink comes in. Trajectory is a notion of the classical world we experience. Consider any atomic orbital containing a node (e.g. everything but a 1s orbital). Zeno would have had a field day with them. Nodes are surfaces in space where the electron is never to be found. They separate the various lobes of the orbital from each other. How does the electron get from one lobe to the other by a trajectory? We do know that the electron is in all the lobes, because a series of measurements will find the electron in each lobe of the orbital (but only in one lobe per measurement). The electron can’t make the trip, because there is no trip possible. Goodbye to the classical notion of trajectory, and with it the classical notion of angular momentum.

But the classical notions of trajectory and angular momentum still help you think about what’s going on (assuming anything IS in fact going on down there between measurements). We know quite a lot about angular momentum in atoms. Why? Because the angular momentum operators of QM commute with the Hamiltonian operator of QM, meaning that they have a common set of eigenfunctions, hence a common set of eigenvalues (e.g. energies). We can measure these energies (really the differences between them — that’s what a spectrum really is) and quantum mechanics predicts this better than anything else.

Further doublethink — a moving charge creates a magnetic field, and a magnetic field affects a moving charge, so placing a moving charge in a magnetic field should alter its energy. This accounts for the Zeeman effect (the splitting of spectral lines in a magnetic field). Trajectories help you understand this (even if they can’t really exist in the confines of the atom).

The death of the synonymous codon – V

The coding capacity of our genome continues to amaze. The redundancy of the genetic code has been put to yet another use. Depending on how much you know, skip the following four links and read on. Otherwise all the background you need to understand the following is in them.

There really is no way around the redundancy producing synonymous codons. If you want to code for 20 different amino acids with only four choices at each position, two positions (4^2 = 16) won’t do. You need three positions, which gives you 64 possibilities (61 after the three stop codons are taken into account) and the redundancy that comes along with it. The previous links show how the redundant codons for some amino acids aren’t redundant at all but are used to code for the speed of translation, or for exonic splicing enhancers and inhibitors. Different codons for the same amino acid can produce wildly different effects while leaving the amino acid sequence of a given protein alone.
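
The counting argument in one breath:

```python
from itertools import product

bases = "ACGU"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
stop_codons = {"UAA", "UAG", "UGA"}
sense_codons = [c for c in codons if c not in stop_codons]

# 4^2 = 16 < 20 amino acids, so two positions can't work;
# 4^3 = 64 gives 61 sense codons, and redundancy is forced.
print(len(codons), len(sense_codons))  # 64 61
```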

The latest example — Proc. Natl. Acad. Sci. vol. 117 pp. 24936 – 24946 ’20 — is even more impressive, as it implies that our genome may be coding for far more proteins than we thought.

The work concerns mitochondrial DNA polymerase gamma (POLG), which is a hotspot for mutations (with over 200 known), 4 of which cause fairly rare neurologic diseases.

Normally translation of mRNA into protein begins with something called an initiator codon (AUG), which codes for methionine. However, in the case of POLG, a CUG triplet (not AUG) located in the 5′ leader of POLG messenger RNA (mRNA) initiates translation almost as efficiently (∼60 to 70%) as an AUG in optimal context. This CUG directs translation of a conserved 260-triplet-long overlapping open reading frame (ORF) called POLGARF (POLG Alternative Reading Frame — surely they could have come up with something more euphonious).

Not only that, but the reading frame is shifted by one (-1), meaning that the protein looks nothing like POLG, with a completely different amino acid composition. “We failed to find any significant similarity between POLGARF and other known or predicted proteins or any similarity with known structural motifs. It seems likely that POLGARF is an intrinsically disordered protein (IDP) with a remarkably high isoelectric point (pI =12.05 for a human protein).” They have no idea what POLGARF does.

Yet mammals make the protein. It gets more and more interesting, because the CUG triplet is part of something called a MIR (Mammalian-wide Interspersed Repeat) which (based on comparative genomics with a lot of different animals) entered the POLG gene 135 million years ago.

Using the teleological reasoning typical of biology, POLGARF must be doing something useful, or it would have been mutated away, long ago.

The authors note that other mutations (even from one synonymous codon to another — hence the title of this post) could cause other diseases due to changes in POLGARF amino acid coding. So while different synonymous codons might code for the same amino acid in POLG, they probably code for something wildly different in POLGARF.

So the same segment of the genome is coding for two different proteins.

Is this a freak of nature? Hardly. We have an estimated 368,000 mammalian interspersed repeats in our genome —

Could they be turning on transcription of other proteins that we hadn’t dreamed of? Algorithms looking for protein-coding genes probably all look for AUG codons and then look for open reading frames following them.
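
Here’s a toy sketch of that gene-finding heuristic (the function and the sequence are made up for illustration). An AUG-only scan misses anything initiated the way POLGARF is; adding CUG to the start set finds it:

```python
STOP_CODONS = {"UAA", "UAG", "UGA"}

def find_orfs(rna, start_codons=("AUG",)):
    """Return (start, end) of each ORF: a start codon followed
    in-frame by a stop codon."""
    orfs = []
    for i in range(len(rna) - 2):
        if rna[i:i + 3] in start_codons:
            for j in range(i + 3, len(rna) - 2, 3):
                if rna[j:j + 3] in STOP_CODONS:
                    orfs.append((i, j + 3))
                    break
    return orfs

rna = "CUGAAAAAAUGA"  # made-up sequence: CUG start, in-frame UGA stop
print(find_orfs(rna))                    # [] -- AUG-only scan finds nothing
print(find_orfs(rna, ("AUG", "CUG")))    # [(0, 12)]
```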

As usual Shakespeare got there first “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.”

Certainly the paper of the year for intellectual interest and speculation.

Finishing Feynman Early

Have you ever finished what you thought of as a monumental (intellectual or otherwise) task early — such as finally writing up that paper, senior thesis, grant proposal etc. etc.  Well I just did.

It was finishing volume II of the Feynman Lectures on Physics on electromagnetism earlier than I had thought possible.  Feynman’s clarity of writing is legendary, which doesn’t mean that what he is writing about is simple or easy to grasp.  Going through vol. II is pleasant but intense intellectual work.

The volumes are paginated rather differently from most books. Leighton and Sands wrote up each lecture as it was given, so each lecture is paginated separately, but the book isn’t paged sequentially. Always wanting to know how much farther I had to go, I added the page counts together, producing a running total as I read the book. FYI, the 42 lectures (chapters) contain 531 pages.

So I’m plowing along through chapter 37 “Magnetic Materials” starting on p. 455 with its 13 pages knowing that there are still 63 intense pages to go at the end when I get to the last paragraph which begins “We now close our study of electricity and magnetism.”

Say what?  The rest is on fluid dynamics and elasticity (which I’m not terribly interested in).  So I’m done.  I feel like Wile E. Coyote chasing the Road Runner and suddenly running off a cliff.

However, the last chapter (42) is not to be missed by any of you.  It is the clearest explanation of curvature and curved space you’ll ever find.  All you need for the first 7 pages (which explain curvature) is high school geometry, the formula for the circumference of a circle (2 * pi * r) and the volume of a sphere ((4/3) * pi * r^3).   That’s it.  Suitable for your nontechie friends.
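
Feynman’s curvature test can even be checked numerically. On a sphere, a circle whose radius r is measured along the surface has circumference 2 * pi * R * sin(r/R), which falls short of the flat-space 2 * pi * r; the deficit is how a surface-dweller detects curvature without ever leaving. A quick sketch:

```python
import math

R = 1.0   # radius of the sphere
r = 0.5   # radius of the circle, measured along the surface

C_curved = 2 * math.pi * R * math.sin(r / R)   # circumference on the sphere
C_flat = 2 * math.pi * r                       # what flat geometry predicts

deficit = C_flat - C_curved   # positive: the sphere's circle comes up short
print(C_flat > C_curved)      # True
```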

Of course later on he uses the curvature of the space we live in to explain what Einstein did in general relativity and his principle of equivalence and his theory of gravitation (in another 7 pages).  I wish I’d read it before I tried to learn general relativity.

I did make an attempt to explain the Riemann curvature tensor — but it’s nowhere as good.

Here’s a link —

Here are links to my other posts on the Feynman Lectures on Physics

The pleasures of reading Feynman on Physics – V

Feynman finally gets around to discussing tensors 376 pages into volume II in “The Feynman Lectures on Physics” and a magnificent help it is (to me at least).  Tensors must be understood to have a prayer of following the math of General Relativity (a 10 year goal, since meeting classmate Jim Hartle who wrote a book “Gravity” on the subject).

There are so many ways to describe what a tensor is (particularly by mathematicians and physicists) that it isn’t obvious that they are talking about the same thing.   I’ve written many posts about tensors, as the best way to learn something is to try to explain it to someone else (a set of links to the posts will be found at the end).

So why is Feynman so helpful to me?  After plowing through 370 pages of Callahan’s excellent book, we get to something called the ‘energy-momentum tensor’, aka the stress-energy tensor.  This floored me, as it appeared to have little to do with gravity, talking about flows of energy and momentum. However, it is only 5 pages away from the relativistic field equations, so it must be understood.

Back in the day, I started reading books about tensors such as the tensor of inertia, the stress tensor etc.  These were usually presented as if you knew why they were developed, and just given in a mathematical form which left my intuition about them empty.

Tensors were developed years before Einstein came up with special relativity (1905) or general relativity (1915).

This is where Feynman is so good.  He starts with the problem of electrical polarizability (which is familiar if you’ve plowed this far through volume II) and shows exactly why a tensor is needed to describe it, i.e. he derives the tensor from known facts about electromagnetism.  Then on to the tensor of inertia (another derivation).  This allows you to see where all that notation comes from. That’s all very nice, but so far you are dealing with just matrices.  Then on to tensors over 4 vector spaces (a rank 4 tensor), not the same thing as a 4-tensor, which is a tensor over a 4-dimensional vector space.

Then finally we get to the 4 tensor (a tensor over a 4 dimensional vector space) of electromagnetic momentum.  Here are the 16 components of Callahan’s energy momentum tensor, clearly explained.  The circle is finally closed.

He briefly goes into the way tensors transform under a change of coordinates, which for many authors is the most important thing about them.   So his discussion doesn’t contain the usual blizzard of superscripts and subscripts. Covariant and contravariant are blessedly absent. Here the best explanation of how they transform is in Jeevanjee, “An Introduction to Tensors and Group Theory for Physicists,” chapter 3, pp. 51 – 74.

Here are a few of the posts I’ve written about tensors trying to explain them to myself (and hopefully you)

Death rates from coronavirus drop in half 2 months after Georgia loosens lockdown restrictions

There were apocalyptic predictions of doom when Georgia loosened its lockdown restrictions against the pandemic coronavirus SARS-CoV-2 on 25 April.  Here they are

From The Atlantic — “Georgia’s Experiment in Human Sacrifice — The state is about to find out how many people need to lose their lives to shore up the economy.” —

A month later (25 May) not much had happened —

7 day moving average of new cases of COVID19 ending 25 April — 740

7 day moving average of new cases of COVID19 ending 13 May — 525 (the state allows 14 days for all the data to roll in, so the last date they regard as having secure numbers is the 7th of May and here the number is 539)

7 day moving averages of deaths from COVID19 ending 25 April — 35

7 day moving average of deaths from COVID19 ending 13 May — 24 (the state allows 14 days for all the data to roll in, so the last date they regard as having secure numbers is the 7th of May and here the number is 27).
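
For the record, the 7-day moving average quoted above is nothing more than this (the daily numbers below are made up for illustration, not Georgia’s data):

```python
def moving_average(daily, window=7):
    """Average of each day with the preceding window-1 days."""
    return [sum(daily[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(daily))]

daily_deaths = [30, 40, 20, 35, 45, 25, 50, 10, 60]   # hypothetical counts
smoothed = moving_average(daily_deaths)
print(smoothed[0])  # 35.0 -- the mean of the first seven days
```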

Back on 25 May I wrote “People who assumed (on purely correlative evidence) that lockdowns prevented new cases, and that lifting them would cause a marked increase in new cases and deaths, are clearly wrong.  It’s possible that cases will spike in the future proving them right, but pretty unlikely.  It’s only fair to give the doomsayers a sporting chance and followup is planned in a month.”

So here’s the followup.   The 7 day moving average of daily deaths had dropped to 17 as of 11 June.  Remember Georgia waits 14 days as data filters in to regard the numbers as definitive.  Here’s the link —

So the death rate from COVID-19 dropped in half 2 months after Georgia loosened some of the lockdown restrictions.

There are only two useful statistics in all of this: the moving average of the daily death rate and the number of COVID19 cases in the hospital.  I no longer follow the number of new cases, because they include people with a positive antibody test (all of whom have recovered).  We know that most cases are asymptomatic.  It’s very hard to get the second number, people sick in the hospital with COVID19 (I’ve tried with no luck).  COVID19 used to mean that you were sick — no longer; it now counts positive antibody tests, rendering the number relatively useless.  By choosing whom to test, numbers can be easily inflated —

Daily death rates are great for cherry picking scare headlines — it’s worth looking at this article from Tampa —

It contains a great figure with the number of deaths each day from March onward on which is superimposed the moving average — the range is from 10 to 100.  Even more impressive is the fall on weekends and the rise during the week.

Fortunately, every Friday  Florida releases the weekly results for antibody testing, so we’ll be able to see how many of these new cases of COVID19 are people who have recovered from it.

Here’s another link — well worth looking at — with the number of new cases in Florida in one graph (with the marked increase in the past week) and the number of deaths from the disease just below.  The deaths in the past week are the lowest they’ve been in a month —


These are grim times.  I was going to write yet another post on the virus, but it can wait.  We all need a joke, so here goes.

This guy has been going to a psychiatrist for years. She’d been very supportive.

On a recent session he said, I had a strange dream last night.

Psychiatrist’s ears perk up.

Tell me about it.

Well, I dreamed that you were my mother.

Psychiatrist sits bolt upright in her chair.

That’s fascinating what did you do?

Well I was so upset I couldn’t get back to sleep, so I got up, had a cup of coffee and went in to the office

That’s a breakfast?

Covid19 could be coming for you

A friend and his wife are getting 3 days of meals delivered to their room in their retirement home.  Clearly a great way to socially isolate themselves.  This will help ‘flatten the curve’.  What that means is that the peak won’t be as high, so we won’t run out of beds and respirators.

But look at the curve in this article —

Now integrate the area under the curve.  It looks like the number of cases is comparable (actually more under the flattened curve).

Add this to the extreme likelihood that covid19 will become endemic in the population (given the number of cases out there).  This means that absent a vaccine or a treatment, you will meet it sooner or later with whatever biologic resources you have.

On the positive side, the amount of research into the way the virus kills is matched only by the number of therapeutic trials underway (both enormous).  The way the journals have opened up, so that results are widely available gratis and freely shared, is impressive.

Do glia think?

Do glia think, Dr. Gonatas?  This was part of an exchange between G. Milton Shy, head of neurology at Penn, and Nick Gonatas, a brilliant neuropathologist who worked with Shy as the two of them described new disease after new disease in the 60s (myotubular (centronuclear) myopathy, nemaline myopathy, mitochondrial myopathy and oculopharyngeal muscular dystrophy).

Gonatas was claiming that a small glial tumor caused a marked behavioral disturbance, and Shy was demurring.  Just after I graduated, the Texas Tower shooting brought the question back up in force —

A recent paper [ Neuron vol. 105 pp. 954 – 956, 1036 – 1047 ’20] gives good evidence that glia are more than the janitors and the maintenance crew of the brain.

Glia cover most synapses (so neurotransmitter there doesn’t leak out, I thought) giving rise to the term tripartite synapse (presynaptic terminal + postsynaptic membrane + glial covering).

Here’s what they studied.  The cerebral cortex projects some of its axons (which use glutamic acid as a neurotransmitter) to a much studied nucleus in animals (the nucleus accumbens).  This is synapse #1. The same nucleus gets a projection of axons from the brainstem ventral tegmental area (VTA), which uses dopamine as a neurotransmitter.  However, the astrocytes (a type of glia) covering synapse #1 have the D1 dopamine receptor (there are 5 different dopamine receptors) on them.  It isn’t clear whether the dopamine neurons actually synapse (synapse #2) on the astrocytes, or whether the dopamine just leaks out of the synaptic cleft to the covering glia.

Optogenetic stimulation of the VTA dopamine neurons results in an elevation of calcium in the astrocytes (a sign of stimulation). Chemogenetic activation of these astrocytes depresses the presynaptic terminals of the neurons projecting to the nucleus accumbens from the cerebral cortex.  How does this work?  Stimulated astrocytes release ATP or its product adenosine, which binds to the A1 purinergic receptor on the presynaptic terminal of the cortical projection.

So what?

The following sure sounds like the astrocyte here is critical to brain function.  Activation of the astrocyte D1 receptor contributes to the locomotor hyperactivity seen after an injection of amphetamine.

Dopamine is intimately involved in reward, psychosis, learning and other processes (antipsychotics and drugs for hyperactivity manipulate it).  That the humble astrocyte is involved in dopamine action takes it out of the maintenance crew and puts it in to management.

A final note about Dr. Shy.  He was a brilliant and compelling teacher, and instead of the usual 1% of a medical school class going into neurology, some 5% of ours did.  In 1967 he ascended to the chair of the pinnacle of American neurology at the time, Columbia University.  Sadly, he died the month he assumed the chair.  Scuttlebutt has it that he misdiagnosed his own heart attack as ‘indigestion’ and was found dead in his chair.