Category Archives: Neurology & Psychiatry

Feeling mentally soggy? It could be that your brain has shrunk what with it being winter and all

I wouldn’t have believed that part of the brain can shrink by 40% and then regrow, but exactly that happens to the Etruscan shrew in the winter. So maybe it’s not you, or even COVID-19.

It’s a fascinating critter — the world’s smallest mammal, tipping the scales at 1.8 grams (about 6 aspirins). It has a very rapid metabolic rate, eating twice its weight daily. Things get tight in the winter, so it shrinks its brain. Remember that even in big and sluggish us, as we sit here reading (or writing) this, our brain is receiving 20% of our cardiac output despite being 3% or so of our body weight. For more about the shrew see

For more detail see

What’s really exciting is that the number of neurons increases in the shrew’s brain come summer. Since it’s a mammal, we’re not talking about lizards regrowing limbs, but something evolutionarily close to us. For more detail see

There are actually some conditions with reversible cerebral atrophy, and as a neurologist I made sure to look for them.

Here they are

1. Alcoholism

2. Adrenal corticosteroids (exogenous or endogenous)

3. Malnutrition in kids

4. Depakene (valproic acid)

5. Anorexia nervosa.


Neuroscience can no longer ignore phase separation

As a budding organic chemist, I always found physical chemistry rather dull, particularly phase diagrams. Organic reactions give you a very clear and intuitive picture of energy and entropy without the math.

In the past few years cell biology has been finding phase changes everywhere. Now it comes to neuroscience, as the synaptic active zone (where vesicles are released) turns out to be an example of a phase change (macromolecular condensation, liquid-liquid phase separation, biomolecular condensates — it goes by a lot of names, as the field is new). If you are new to the field, have a look at an excerpt from an earlier post before proceeding further — it is to be found after the ****

Although the work [ Nature vol. 588 pp. 454 – 458 ’20 ] was done in C. elegans with proteins SYD2 (aka liprinAlpha) and ELKS1, humans have similar proteins.

Phase separation accounts for a variety of cellular organelles not surrounded by membranes. The best known example is the nucleolus, but others include Cajal bodies, ProMyelocytic Leukemia bodies (PML bodies), germline P granules, processing bodies and stress granules.

These nonmembranous organelles have 3 properties in common

1. They arise as a phase separation from the surrounding milieu

3. They are dynamic. Proteins move in and out of them in seconds (rather than minutes, hours or longer as is typical for stable complexes.

They are usually made of proteins and RNA, and the proteins in them usually have low complexity sequences (one example contains 60 amino acids, of which 45 are one of alanine, serine, proline and arginine).
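To make the low-complexity idea concrete, here is a tiny sketch; the 60-residue sequence is invented to match the composition quoted above, not taken from a real protein:

```python
# Invented 60-residue stretch: 45 residues drawn from alanine (A),
# serine (S), proline (P) and arginine (R), matching the example above.
seq = "ASPR" * 11 + "A" + "GLKTVEDNQHMFWYC"

favored = set("ASPR")
fraction = sum(1 for r in seq if r in favored) / len(seq)
print(len(seq), fraction)   # 60 residues, 0.75 of them from just 4 amino acids
```

A real low-complexity detector (e.g. the SEG algorithm) uses windowed compositional entropy, but the composition fraction above captures the basic idea.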

Back to the synaptic active zone. ELKS1 and SYD2 both have phase separation regions (which aren’t of low complexity, but both have lots of amino acids capable of making pi-pi contacts). They undergo phase separation during an early stage of synapse development. Later they solidify and bind other proteins found in the active presynaptic zone. You can make mutant ELKS1 and SYD2 lacking these regions, but the synapses they form are abnormal.

The liquid phase scaffold formed by SYD2 and ELKS1 can be reconstituted in vitro. It binds and incorporates downstream synaptic components. Both proteins are large (SYD2 has 1,139 amino acids, ELKS1 has 836).

What is remarkable is that you can take a phase separation motif from human proteins (FUS, which when mutated can cause ALS, or hnRNPA2), put it into SYD2 and ELKS1 mutants lacking their phase separation regions, and have the proteins form a normal presynaptic active zone.

This is remarkable and exciting stuff.

****

Advances in cellular biology have largely come from chemistry.  Think DNA and protein structure, enzyme analysis.  However, cell biology is now beginning to return the favor and instruct chemistry by giving it new objects to study. Think phase transitions in the cell, liquid liquid phase separation, liquid droplets, and many other names (the field is in flux) as chemists begin to explore them.  Unlike most chemical objects, they are big, or they wouldn’t have been visible microscopically, so they contain many, many more molecules than chemists are used to dealing with.

These objects do not have any sort of definite stoichiometry and are made of RNA and the proteins which bind them (and sometimes DNA).  They go by any number of names (processing bodies, stress granules, nuclear speckles, Cajal bodies, Promyelocytic leukemia bodies, germline P granules).  Recent work has shown that DNA may be compacted similarly using the linker histone [ PNAS vol. 115 pp. 11964 – 11969 ’18 ].

The objects are defined essentially by looking at them.  By golly they look like liquid drops, and they fuse and separate just like drops of water.  Once this is done they are analyzed chemically to see what’s in them.  I don’t think theory can predict them now, and they were never predicted a priori as far as I know.

No chemist in their right mind would have made them to study.  For one thing they contain tens to hundreds of different molecules.  Imagine trying to get a grant to see what would happen if you threw that many different RNAs and proteins together in varying concentrations.  Physicists have worked for years on phase transitions (but usually with a single molecule — think water).  So have chemists — think crystallization.

Proteins move in and out of these bodies in seconds.  Proteins found in them do have low complexity of amino acids (mostly made of only a few of the 20), and unlike enzymes, their sequences are intrinsically disordered, so forget the lock and key and induced fit concepts for enzymes.

Are they a new form of matter?  Is there any limit to how big they can be?  Are the pathologic precipitates of neurologic disease (neurofibrillary tangles, senile plaques, Lewy bodies) similar?  There certainly are plenty of distinct proteins in the senile plaque, but they don’t look like liquid droplets.

It’s a fascinating field to study.  Although made of organic molecules, there seems to be little for the organic chemist to say, since the interactions aren’t covalent.  Time for physical chemists and polymer chemists to step up to the plate.

Biden’s cerebral aneurysm

A friend sent me a semi-hysterical rant from a neurosurgeon about the dangers of President Biden’s cerebral aneurysm. Not to worry. This happened in 1988, and the aneurysm was successfully clipped, although it ruptured during surgery. The only possible complication at this point is normal pressure hydrocephalus (occult hydrocephalus). That’s a medical mouthful, so here’s some background to put it all into context.

If you’ve ever seen a blister on an inner tube, that’s what a cerebral aneurysm looks like. They usually look like a round ball on the side of an artery in the brain. They look nothing like an aneurysm of the aorta. To treat them, one puts a clip around the neck of the aneurysm to prevent the pressure in the adjacent artery from bursting it. As Dr. Tom Langfitt, the neurosurgeon who taught medical students, interns and residents at Penn Med in the ’60s, said: “they’ll stare you down every time.” To put a clip around the neck of the aneurysm you have to jiggle and move it, which may cause it to break. This happened during surgery on President Biden in 1988.

Remarkably, Neal Kassell, the neurosurgeon who operated on President Biden, was an undergraduate at Penn when I was a neurology resident there in ’67 – ’68. Even before med school (he graduated Penn Med in ’72) he was vitally interested in neurosurgery, hung around the hospital, and would observe Langfitt in action in the OR.

What is there to worry about? Relatively little. It is possible that Biden is developing another aneurysm. One well known complication of a ruptured intracranial aneurysm is something called occult hydrocephalus (or normal pressure hydrocephalus). Blood is extremely inflammatory, and as the inflammation resolves it can cause scarring (fibrosis) of the linings of the brain. This can impede the flow of spinal fluid.

What are the symptoms? Cognitive decline for one, something that’s been endlessly discussed by pundits, politicians and the voters. The other symptom which even you can look for is difficulty walking, in particular beginning to walk. People with this seem to have feet glued to the floor and have problems initiating walking.

Diagnosis — in Biden’s case, a CAT scan to see if the cerebral ventricles are larger than they should be.

Why not an MRI? Because the clips used back in 1988 contain magnetizable material, and entering the strong magnetic field of an MRI scanner would rip the clips off the aneurysm and kill Biden.

I think the chances of occult hydrocephalus developing 32 years after the aneurysm are remote. If it were going to happen it would have already. In the meantime, watch him start to walk.

Maybe the senile plaque really is a tombstone

“Thinking about pathologic changes in neurologic disease has been simplistic in the extreme.  Initially both senile plaques and neurofibrillary tangles were assumed to be causative for Alzheimer’s.  However there are 3 possible explanations for any microscopic change seen in any disease.  The first is that they are causative (the initial assumption).  The second is that they are a pile of spent bullets, which the neuron uses to defend itself against the real killer.  The third is they are tombstones, the final emanations of a dying cell.” I’ve thought this way for years, and the quote is from 2012:

We now have some evidence that the senile plaque may be just a tombstone marking a dead neuron. Certainly attempts to remove the plaques haven’t helped patients despite billions spent in the attempt.

A recent paper [ Proc. Natl. Acad. Sci. vol. 117 pp. 28625 – 28631 ’20 ] not only provides a new way to look at Alzheimer’s disease, but immediately suggests (to me at least) a way to test the idea. If the test proves correct, it will turn the focus of Alzheimer disease research on its ear.

Not to leave anyone behind, the senile plaque is largely made of a small fragment (the Abeta peptide, 40 or 42 amino acids long) of a much larger protein (the amyloid precursor protein [ APP ] — well over 800 amino acids). Mutations in APP with the net effect of producing more Abeta are associated with familial Alzheimer’s disease, as are mutations in the enzymes chopping up APP to form Abeta (presenilin1, etc.).

The paper summarizes some evidence that the real culprit is neuronal uptake of the Abeta peptide either as a monomer, or a collection of monomers (an oligomer) or even the large aggregate of monomers seen under the microscope as the senile plaque.

The paper gives clear evidence that a 30 amino acid fragment of Abeta, by itself and without oligomerization, can be taken up by neurons. Not only that, but they found the protein on the neuronal cell surface that Abeta binds to as well.

Ready to be shocked?

The protein taking Abeta into the neuron is the prion protein (PrPC), which can cause mad cow disease, wasting disease of elk and all sorts of horrible neurologic diseases, among them Creutzfeldt-Jakob disease.

Now to explain a bit of the jargon which follows. The amino acids making up our proteins come in two forms which are mirror images of each other. All our amino acids are of the L form, but the authors were able to synthesize the 42 amino acid Abeta peptide (Abeta42 below) using all L or all D amino acids.

It’s time to let the authors speak for themselves.

“In previous experiments we compared toxicity of L- and D-Aβ42. We found that, under conditions where L-Aβ42 reduced cell viability over 50%, D-Aβ42 was either nontoxic (PC12) or under 20% toxic. We later showed that L-Aβ is taken up approximately fivefold more efficiently than D-Aβ (28), suggesting that neuronal Aβ uptake and toxicity are linked.”

Well, if so, then the plaque is the tombstone of a neuron which took up too much Abeta.

Well how could you prove this? Any thoughts?

Cell models are nice, but animal models are probably better (although they’ve never resulted in useful therapy for stroke despite decades of trying).

Enter the 5xFAD mouse — it was engineered to have 3 mutations in APP known to cause Familial Alzheimer’s Disease + 2 more in Presenilin1 (which also cause FAD). The poor mouse starts getting Abeta deposition in its brain under two months of age (mice live about two years).

Now people aren’t really sure just what the prion protein (PrPC) actually does, and mice have been made without it (knockout mice). They are viable and fertile, and initially appear normal, but abnormalities appear as the mouse ages if you look hard enough. But still . . .

So what?

Now either knock out the PrPC gene in the 5xFAD mouse or mate the two different mouse strains.

The genes (APP, presenilin1 and PrPC) are on different chromosomes (#21, #14 and #20 respectively). So the first generation (F1) will have a normal counterpart of each of the 3 genes, along with a pathologic gene (i.e., they will be heterozygous for the 3 genes).

When members of F1 are bred with each other, we’d expect some of them to have all mutant genes. For a single gene, we’d expect 25% of the offspring (F2 generation) to be homozygous abnormal. I’ll leave it to the mathematically inclined to figure out what the actual percentage homozygous abnormal for all 3 would be.
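For those who don’t want to do the Mendelian arithmetic by hand, a quick sketch (assuming the three loci segregate independently, as they should on different chromosomes):

```python
from fractions import Fraction

def homozygous_fraction(n_genes):
    """Fraction of F2 offspring homozygous for the mutant allele at all
    n independently segregating genes; each heterozygote-x-heterozygote
    cross gives a 1/4 chance per gene."""
    return Fraction(1, 4) ** n_genes

print(homozygous_fraction(1))   # 1/4  -- the familiar 25% for one gene
print(homozygous_fraction(3))   # 1/64 -- about 1.6% for all three
```

So only about 1 mouse in 64 of the F2 generation has the full complement, which is why genotyping every pup matters.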

What’s the point? Well, it’s easy to measure just what genes a mouse is carrying, so it’s time to look at mice with a full complement of 5xFAD genes and no PrPC.

If these mice don’t have any plaques in their brains, it’s game, set and match. Alzheimer research will shift from ways to remove the senile plaque to ways to prevent it by inhibiting cellular uptake of the Abeta peptide.

What could go wrong? Well, there could be other mechanisms and other proteins involved in getting Abeta into cells, but these could be attacked as well.

If the experiment shows what it might, this would be the best Thanksgiving present I could imagine.

So go to it. I’m an 80+ year old retired neurologist with no academic affiliation. I’d love to see someone try it.

Neural nets

The following was not written by me, but by a friend now retired from Bell labs. It is so good that it’s worth sharing.

I asked him to explain the following paper to me, which I found incomprehensible despite reading about neural nets for years. The paper tries to figure out why neural nets work so well. The authors note that we lack a theoretical foundation for how neural nets work (or why they should!).

Here’s a link

Here’s what I got back

Interesting paper. Thanks.

I’ve had some exposure to these ideas and this particular issue, but I’m hardly an expert.

I’m not sure what aspect of the paper you find puzzling. I’ll just say a few things about what I gleaned out of the paper, which may overlap with what you’ve already figured out.

The paper, which is really a commentary on someone else’s work, focuses on the classification problem. Basically, classification is just curve fitting. The curve you want defines a function f that takes a random example x from some specified domain D and gives you the classification c of x, that is, c = f(x).

Neural networks (NNs) provide a technique for realizing this function f by way of a complex network with many parameters that can be freely adjusted. You take a (“small”) subset T of examples from D where you know the classification and you use those to “train” the NN, which means you adjust the parameters to minimize the errors that the NN makes when classifying the elements of T. You then cross your fingers and hope that the NN will show useful accuracy when classifying examples from D that it has not seen before (i.e., examples that were not in the training set T). There is lots of empirical hocus-pocus and rules-of-thumb concerning what techniques work better than others in designing and training neural networks. Research to place these issues on a firmer theoretical basis continues.

You might think that the best way to train a NN doing the classification task is simply to monitor the classifications it makes on the training set vectors and adjust the NN parameters (weights) to minimize those errors. The problem here is that classification output is very granular (discontinuous): cat/dog, good/bad, etc. You need to have a more nuanced (“gray”) view of things to get the hints you need to gradually adjust the NN weights and home in on their “best” setting. The solution is a so-called “loss” function, a continuous function that operates on the output data before it’s classified (while it is still very analog, as opposed to the digital-like classification output). The loss function should be chosen so that lower loss will generally correspond to lower classification error. Choosing it, of course, is not a trivial thing. I’ll have more to say about that later.
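A minimal numerical sketch of the loss-function point (the numbers are invented, and softmax plus cross-entropy is just one common choice of loss, not necessarily the one used in the paper):

```python
import numpy as np

def softmax(z):
    """Convert raw ("analog") network outputs into probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Two hypothetical pre-classification outputs for a 2-class task.
z_confident  = np.array([4.0, 0.0])   # strongly favors class 0
z_borderline = np.array([0.1, 0.0])   # barely favors class 0
true_class = 0

for z in (z_confident, z_borderline):
    prediction = int(np.argmax(z))            # granular output: 0 or 1
    loss = -np.log(softmax(z)[true_class])    # continuous cross-entropy loss
    print(prediction, round(float(loss), 3))
```

Both vectors classify identically, so classification error alone offers no hint about which weights to nudge; the loss distinguishes them, and that graded signal is what gradient-based training follows.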

One of the supposed truisms of NNs in the “old days” was that you shouldn’t overtrain the network. Overtraining means beating the parameters to death until you get 100% perfect classification on the training set T. Empirically, it was found that overtraining degrades performance: Your goal should be to get “good” performance on T, but not “too good.” Ex post facto, this finding was rationalized as follows: When you overtrain, you are teaching the NN to do an exact thing for an exact set T, so the moment it sees something that differs even a little from the examples in set T, the NN is confused about what to do. That explanation never made much sense to me, but a lot of workers in the field seemed to find it persuasive.

Perhaps a better analogy is the non-attentive college student who skipped lectures all semester and has gained no understanding of the course material. Facing a failing grade, he manages by chicanery to steal a copy of the final exam a week before it’s given. He cracks open the textbook (for the first time!) and, by sheer willpower, manages to ferret out of that wretched tome what he guesses are the correct, exact answers to all the questions in the exam. He doesn’t really understand any of the answers, but he commits them to memory and is now glowing with confidence that he will ace the test and get a good grade in the course.

But a few days before the final exam date the professor decides to completely rewrite the exam, throwing out all the old questions and replacing them with new ones. The non-attentive student, faced with exam questions he’s never seen before, has no clue how to answer these unfamiliar questions because he has no understanding of the underlying principles. He fails the exam badly and gets an F in the course.

Relating the analogy of the previous two paragraphs to the concept of overtraining NNs, the belief was that if you train a NN to do a “good” job on the test set T but not “too good” a job, it will incorporate (in its parameter settings) some of the background knowledge of “why” examples are classified the way they are, which will help it do a better job when it encounters “unfamiliar” examples (i.e., examples not in the test set). However, if you push the training beyond that point, the NN starts to enter the regime where its learning (embodied in its parameter settings) becomes more like the rote memorization of the non-attentive student, devoid of understanding of the underlying principles and ill prepared to answer questions it has not seen before. Like I said, I was never sure this explanation made a lot of sense, but workers in the field seemed to like it.

That brings us to “deep learning” NNs, which are really just old-fashioned NNs but with lots more layers and, therefore, lots more complexity. So instead of having just “many” parameters, you have millions. For brevity in what follows, I’ll often refer to a “deep learning NN” as simply a “NN.”

Now let’s refer to Figure 1 in the paper. It illustrates some of the things I said above. The vertical axis measures error, while the horizontal axis measures training iterations. Training involves processing a training vector from T, looking at the resulting value of the loss function, and adjusting the NN’s weights (from how you set them in the previous iteration) in a manner that’s designed to reduce the loss. You do this with each training vector in sequence, which causes the NN’s weights to gradually change to values that (you hope) will result in better overall performance. After a certain predetermined number of training iterations, you stop and measure the overall performance of the NN: the overall error on the training vectors, the overall loss, and the overall error on the test vectors. The last are vectors from D that were not part of the training set.

Figure 1 illustrates the overtraining phenomenon. Initially, more training gives lower error on the test vectors. But then you hit a minimum, with more training after that resulting in higher error on the test set. In old-style NNs, that was the end of the story. With deep-learning NNs, it was discovered that continuing the training well beyond what was previously thought wise, even into the regime where the training error is at or near zero (the so-called Terminal Phase of Training—TFT), can produce a dramatic reduction in test error. This is the great mystery that researchers are trying to understand.

You can read the four points in the paper on page 27071, which are posited as “explanations” of—or at least observations of interesting phenomena that accompany—this unexpected lowering of test error. I read points 1 and 2 as simply saying that the pre-classification portion of the NN [which executes z = h(x, theta), in their terminology] gets so fine-tuned by the training that it is basically doing the classification all by itself, with the classifier per se being left to do only a fairly trivial job (points 3 and 4).

To me, this “explanation” misses the point. Here is my two cents’ worth: I think the whole success of this method is critically dependent on the loss function. The latter has to embody, with good fidelity, the “wisdom” of what constitutes a good answer. If it does, then overtraining the deep-learning NN like crazy on that loss function will cause its millions of weights to “learn” that wisdom. That is, the NN is not just learning what the right answer is on a limited set of training vectors, but it is learning the “wisdom” of what constitutes a right answer from the loss function itself. Because of the subtlety and complexity of that latent loss function wisdom, this kind of learning became possible only with the availability of modern deep-learning NNs with their great complexity and huge number of parameters.

The prion battles continue with a historical note at the end

Now that we know proteins don’t have just one shape, and that 30% of them have unstructured parts, it’s hard to remember just how radical Prusiner’s proposal was back in the ’80s: that a particular conformation (PrPSc) of the normal prion protein (PrPC) causes other prion protein molecules to adopt it, producing disease. Actually Kurt Vonnegut got there first with Ice-9 in “Cat’s Cradle” in 1963. If you’ve never read it, do so, you’ll like it.

There was huge resistance to Prusiner’s idea, but eventually it became accepted except by Laura Manuelidis (about which more later). People kept saying the true infectious agent was a contaminant in the preparations Prusiner used to infect mice and that the prion protein (called PrPC) was irrelevant.

The convincing argument that Prusiner was right (for me at least) was PMCA (Protein Misfolding Cyclic Amplification), in which you start with a seed of PrPSc (the misfolded form of the normal prion protein PrPC) and incubate it with a 10,000-fold excess of normal PrPC, which is converted by the evil PrPSc into more of itself. Then you sonicate what you’ve got, breaking it into small fragments, and continue the process with another 10,000-fold excess of normal PrPC. Repeat this 20 times. This should certainly dilute out any infectious agent along for the ride (no living tissue is involved at any point). You still get PrPSc at the end. For details see Cell vol. 121 pp. 195 – 206 ’05.
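Back-of-the-envelope arithmetic on why the serial dilution argument is so strong (just arithmetic, no biology):

```python
# 20 rounds of PMCA, each a 10,000-fold dilution into fresh normal PrPC.
rounds = 20
fold_per_round = 10_000
total_dilution = fold_per_round ** rounds

# 10,000^20 = 10^80 -- vastly beyond Avogadro's number (~6 x 10^23),
# so not a single molecule of any hypothetical contaminant in the original
# seed could survive, yet PrPSc still comes out the other end.
print(total_dilution == 10 ** 80)   # True
```

A cumulative dilution of 10^80 exceeds the number of atoms in the observable universe, which is why PMCA made the "hidden contaminant" hypothesis so hard to sustain.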

Now comes [ Proc. Natl. Acad. Sci. vol. 117 pp. 23815 – 23822 ’20 ] which claims to be able to separate the infectivity of prions from their toxicity. Highly purified mouse prions show no signs of toxicity (neurite fragmentation, dendritic spine density changes) in cell culture, but are still infectious producing disease when injected into another mouse brain.

Even worse, treatment of brain homogenates from prion-infected mice with sodium lauroylsarcosine destroys the toxicity to cultured neurons without reducing infectivity for the next mouse.

So if this paper can be replicated it implies that the prion protein triggers some reaction in the intact brain which then kills the animals.

Manuelidis thought in 2011 that the prion protein is a reaction to infection, and that we haven’t found the culprit. I think the PMCA pretty much destroyed that idea.

So basically we’re almost back to square one with what causes prion disease. Just to keep you thinking, consider this. We can knock out the prion protein gene in mice. Guess what? The mice are normal. Moreover, injection of the abnormal prion protein (PrPSc) into their brains (which is what researchers do to transmit the disease) doesn’t cause any disease.

Historical notes: I could have gone to Yale Med when Manuelidis was there, but chose Penn instead. According to Science vol. 332 pp. 1024 – 1027 ’11, she was one of 6 women in the class, graduating in 1967, and married her professor (Manuelidis), then 48, when she was 24. By today’s rather Victorian standards of consent and the power differential between teacher and student, that would probably have gotten both of them bounced out.

So I went to Penn Med, graduating in ’66. Prusiner graduated in ’68. He and I were in the same medical fraternity (Nu Sigma Nu). Don’t think Animal House; medical fraternities were a place to get some decent food, play a piano-shaped object and very occasionally party. It’s very likely that we shared a meal, but I have no recollection.

Graduating along with me was another Nobel Laureate to be — Mike Brown, he of the statins. Obviously a smart guy, but he didn’t seem outrageously smarter than the rest of us.

How Infants learn language – V

Infants don’t learn language the way neural nets do. Unlike nets, no feedback is involved, which, amazingly, makes learning faster.

As is typical of research in psychology, the hard part is thinking of something clever to do, rather than actually carrying it out.

[ Proc. Natl. Acad. Sci. vol. 117 pp. 26548 – 26549 ’20 ] is a short interview with psychologist Richard N. Aslin. Here’s a link — hopefully not behind a paywall —

He was interested in how babies pull out words from a stream of speech.

He took a commonsense argument and ran with it.

“The learning that I studied as an undergrad was reinforcement learning—that is, you’re getting a reward for responding to certain kinds of input—but it seemed that that kind of learning, in language acquisition, didn’t make any sense. The mother is not saying, “listen to this word…no, that’s the wrong word, listen to this word,” and giving them feedback. It’s all done just by being exposed to the language without any obvious reward”

So they performed an experiment whose results surprised them. They made a ‘language’ of speech sounds which weren’t words and presented them, 4 per second, for a few minutes to 8 month old infants. There was an underlying statistical structure: certain sounds were more likely to follow a given sound, others less likely. That’s it. No training. No feedback. No nothin’, just a sequence of sounds. Then they presented sequences (from the same library of sounds) which the baby hadn’t heard before, and the baby recognized them as different. The interview didn’t say how they knew the baby was recognizing them, but my guess is that they used the mismatch negativity brain potential, which automatically arises to novel stimuli.
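Here is a sketch of the sort of statistical structure such experiments use (the three ‘words’ below are invented for illustration; the actual stimuli and probabilities in the papers differ):

```python
import random
from collections import defaultdict

random.seed(0)   # deterministic stream for illustration

# Three invented "words"; within a word each syllable predicts the next
# with probability 1, while across word boundaries the next syllable is
# unpredictable -- the only structure in the stream.
words = ["bidaku", "padoti", "golabu"]

def syllables(w):
    return [w[i:i + 2] for i in range(0, len(w), 2)]

stream = []
for _ in range(300):                 # a few minutes' worth at 4 syllables/second
    stream.extend(syllables(random.choice(words)))

# Estimate transitional probabilities P(next | current) from the stream.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(stream, stream[1:]):
    counts[a][b] += 1

def transition_prob(a, b):
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

print(transition_prob("bi", "da"))   # within a word: 1.0
print(transition_prob("ku", "pa"))   # across a word boundary: roughly 1/3
```

The infants' feat amounts to registering the difference between high within-word and low across-boundary transitional probabilities, with no feedback of any kind.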

Had you ever heard of this? I hadn’t, but the references to the author’s papers go back to 1996! Time for someone to replicate this work.

So our brains have an innate ability to measure the statistical probability of distinct events occurring. Even better, we react to the unexpected event. This may be the ‘language facility’ Chomsky was talking about half a century ago. Perhaps this innate ability is the origin of music, the most abstract of the arts.

How infants learn language is likely inherently fascinating to many, not just neurologists.

Here are links to some other posts on the subject you might be interested in.

The death of the synonymous codon – V

The coding capacity of our genome continues to amaze. The redundancy of the genetic code has been put to yet another use. Depending on how much you know, skip the following four links and read on. Otherwise all the background you need to understand the following is in them.

There really is no way around the redundancy producing synonymous codons. If you want to code for 20 different amino acids with only four choices at each position, two positions (4^2 = 16) won’t do. You need three positions, which gives you 64 possibilities (61 after the three stop codons are taken into account) and the redundancy that comes along with it. The previous links show how the redundant codons for some amino acids aren’t redundant at all, but are used to code for the speed of translation, or for exonic splicing enhancers and inhibitors. Different codons for the same amino acid can produce wildly different effects while leaving the amino acid sequence of a given protein alone.
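The combinatorics, plus the redundancy itself, can be checked in a few lines (the one-letter string is a common compact encoding of the standard genetic code, codons enumerated in TCAG order):

```python
from itertools import product
from collections import Counter

bases = "TCAG"
# The standard genetic code as a 64-character string, codons enumerated
# with the first base changing slowest and the third fastest.
aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = dict(zip(("".join(c) for c in product(bases, repeat=3)), aas))

coding = [a for a in codon_table.values() if a != "*"]    # drop the 3 stops
redundancy = Counter(coding)

print(len(codon_table))        # 64 possible codons
print(len(coding))             # 61 that code for amino acids
print(len(redundancy))         # 20 amino acids
print(redundancy["L"])         # leucine alone has 6 synonymous codons
```

Sixty-one codons spread over 20 amino acids forces redundancy: some amino acids get six codons, some only one, and that spare capacity is what the posts above show being put to work.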

The latest example — Proc. Natl. Acad. Sci. vol. 117 pp. 24936 – 24946 ’20 — is even more impressive, as it implies that our genome may be coding for way more proteins than we thought.

The work concerns Mitochondrial DNA Polymerase Gamma (POLG), which is a hotspot for mutations (with over 200 known) 4 of which cause fairly rare neurologic diseases.

Normally translation of mRNA into protein begins with an initiator codon (AUG), which codes for methionine. However, in the case of POLG, a CUG triplet (not AUG) located in the 5′ leader of the POLG messenger RNA (mRNA) initiates translation almost as efficiently (∼60 to 70%) as an AUG in optimal context. This CUG directs translation of a conserved 260-triplet-long overlapping open reading frame (ORF) called POLGARF (POLG Alternative Reading Frame — surely they could have come up with something more euphonious).

Not only that, but the reading frame is shifted down one (-1), meaning that the protein looks nothing like POLG, with a completely different amino acid composition. “We failed to find any significant similarity between POLGARF and other known or predicted proteins or any similarity with known structural motifs. It seems likely that POLGARF is an intrinsically disordered protein (IDP) with a remarkably high isoelectric point (pI = 12.05 for a human protein).” They have no idea what POLGARF does.
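A toy illustration of how an upstream CUG in a -1 frame yields a totally different protein (the sequence is made up, not the real POLG mRNA; also, an initiator CUG is actually decoded as methionine in vivo, a detail ignored here for simplicity):

```python
from itertools import product

bases = "UCAG"
# Standard genetic code, codons enumerated first-base-slowest in UCAG order.
aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
table = dict(zip(("".join(c) for c in product(bases, repeat=3)), aas))

def translate(rna, start):
    """Read triplets from `start` until a stop codon or the end."""
    protein = ""
    for i in range(start, len(rna) - 2, 3):
        residue = table[rna[i:i + 3]]
        if residue == "*":
            break
        protein += residue
    return protein

# Made-up toy mRNA: an upstream CUG opens a reading frame shifted -1
# relative to the downstream AUG frame.
rna = "CUGGAUGCACGAUUACGGAUAG"
aug = rna.find("AUG")         # canonical start, the "POLG" frame
cug = rna.find("CUG")         # non-canonical start, the "POLGARF" frame
print(translate(rna, aug))    # MHDYG
print(translate(rna, cug))    # LDARLRI -- nothing in common with the above
```

Because the triplets are regrouped one base over, not a single codon is shared between the two readings, which is why POLGARF's amino acid composition bears no resemblance to POLG's.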

Yet mammals make the protein. It gets more and more interesting, because the CUG triplet is part of something called a MIR (Mammalian-wide Interspersed Repeat) which (based on comparative genomics with a lot of different animals) entered the POLG gene 135 million years ago.

Using the teleological reasoning typical of biology, POLGARF must be doing something useful, or it would have been mutated away, long ago.

The authors note that other mutations (even from one synonymous codon to another — hence the title of this post) could cause other diseases due to changes in POLGARF amino acid coding. So while different synonymous codons might code for the same amino acid in POLG, they probably code for something wildly different in POLGARF.

So the same segment of the genome is coding for two different proteins.
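The arithmetic of overlapping reading frames is easy to see in a toy example. Here is a minimal Python sketch (using a hypothetical 12-base sequence, not the real POLG sequence) showing how a synonymous codon change in one frame alters the protein read from the overlapping shifted frame:

```python
from itertools import product

# Standard genetic code (DNA codons), built from the classic TCAG ordering.
bases = "TCAG"
aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {"".join(c): aa for c, aa in zip(product(bases, repeat=3), aas)}

def translate(seq, frame=0):
    """Translate a DNA sequence starting at the given frame offset."""
    return "".join(codon_table[seq[i:i + 3]]
                   for i in range(frame, len(seq) - 2, 3))

# Hypothetical sequence, read in two overlapping frames.
seq = "ATGCTGGCATTT"
print(translate(seq, 0))   # MLAF -- the 'POLG-like' frame
print(translate(seq, 1))   # CWH  -- the overlapping shifted frame

# A synonymous change in frame 0: CTG -> CTC (both code for leucine).
mut = "ATGCTCGCATTT"
print(translate(mut, 0))   # MLAF -- frame-0 protein unchanged
print(translate(mut, 1))   # CSH  -- shifted-frame protein altered (W -> S)
```

The "silent" mutation leaves the frame-0 protein untouched while swapping a tryptophan for a serine in the shifted frame, which is exactly why synonymous codons in POLG need not be synonymous for POLGARF.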

Is this a freak of nature? Hardly. We have an estimated 368,000 Mammalian Interspersed Repeats in our genome.

Could they be turning on translation of other proteins that we hadn’t dreamed of? Algorithms looking for protein-coding genes probably all look for AUG codons and then look for open reading frames following them.

As usual Shakespeare got there first “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.”

Certainly the paper of the year for intellectual interest and speculation.

First Debate — What did the neurologist think?

As my brother sometimes says “everyone is entitled to my opinion”.  Why should you be interested in mine?  Because I was a clinical neurologist from 1968 to 2000 seeing probably 25,000 patients over the years. Because I was board certified by the American Board of Psychiatry and Neurology.  Because later I examined candidates for certification for the same board.  Because I have an extensive experience with dementia in patients, and (unfortunately) with close friends and their kin and in our family.

The main question I had before the debates was “Is Joe Biden cognitively impaired?” given the selection of his statements and gaffes.

The short answer is no.  He held his own, and moreover did so for 90 stressful minutes.

The more nuanced answer is that there are a few things about him that are not 100%.  As time wore on, he mispronounced and slurred more words.  Also the right corner of his mouth appeared to sag a bit more (but no one has a perfectly symmetrical face).

The most unusual feature is Biden’s upper face — it doesn’t move. The masklike face is a symptom of Parkinsonism, but if so it is the only one.  I’m ashamed to admit that I didn’t notice how often his eyes blinked, but since I didn’t notice infrequent blinking (another sign of Parkinsonism) it probably wasn’t present. The prosody of his speech is normal, not diminished as it would be in Parkinsonism.  Is he on botox?  He has a remarkably unlined face for a man his age.

Biden often appeared to be looking down at something — talking points?  mini-teleprompter?

Is Trump impaired cognitively?  No sign of it.  His responses were quick, sometimes funny and often not to the point.   Both men are smart, but Trump appears (to me) to be smarter.

Although Chris Wallace is from Fox News hence suspect for many,  I thought he was a tough and impartial moderator, which is exactly what I wanted.

I did look at a C-Span segment of the audience settling down before the actual debate and was horrified.  50% not wearing masks, people shaking hands, getting far closer than 6 feet from each other.   Even if they’d all been recently tested for the virus, this was irresponsible behavior and an extremely poor model for the country.

A new way to treat neurodegeneration

It’s probably too good to be true, the work certainly needs to be replicated, and I can’t quite believe it actually works, but here it is.

Pelizaeus-Merzbacher disease is a hereditary disease affecting the cells (oligodendroglia) that make myelin, the fatty wrapping of nerve fibers (axons) in the brain.  The net effect is that there isn’t enough myelin.

The mutation affects PLP (proteolipid protein), which accounts for half the protein in myelin.  The biochemistry is fascinating, but not as much as the genetics.   The protein has 276 amino acids, and even 20 years ago some 60 point mutations were known (implying that not enough PLP is around), except that over half the cases have a duplication of the gene (implying that too much is around).  A mouse model (the jimpy mouse) is available — it has a point mutation in PLP.

Interestingly people who lack any PLP (due to mutation) have milder disease than people with the point mutations. I told you the genetics was fascinating.

Noting that people with null mutations in PLP did better, the authors of Nature vol. 585 pp. 397 – 403 ’20 tried to produce a knockout in the jimpy mice using CRISPR-Cas9.  Amazingly the animals did better, even living longer.

Then the authors did something incredible: they injected antisense oligonucleotides into the ventricles of the mice.  The oligonucleotides bound to the mRNA for PLP, inhibiting its translation and decreasing the amount of PLP around, and the mice got better and lived longer.

Now we have 1,000 times more neurons than the mouse does and our brain is even larger, so it’s a long way from applying this to the neurodegenerations which afflict us, but I find it amazing that antisense oligonucleotides were able to diffuse into the brain, get into the oligodendroglia cells and shut down PLP synthesis.

Oligonucleotides are big molecules, and they used two of them, one 20 and one 16 nucleotides long — even a single nucleotide (adenosine monophosphate) has a mass of 347 Daltons.  It is amazing that such a large molecule could get into a living cell.
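Taking the 347-Dalton AMP figure as an average monomer mass (a rough assumption, since real antisense oligonucleotides carry chemically modified backbones), a back-of-the-envelope calculation puts the two oligonucleotides in the 5 to 7 kiloDalton range:

```python
def oligo_mass_da(n, monomer_da=347.0, water_da=18.0):
    """Rough mass of an n-mer oligonucleotide: n monomer masses minus
    the water released at each of the n - 1 phosphodiester bonds.
    347 Da is the AMP figure quoted above; treat the result as a ballpark,
    not the exact mass of the modified oligonucleotides in the paper."""
    return n * monomer_da - (n - 1) * water_da

print(round(oligo_mass_da(20)))  # ~6598 Da for the 20-mer
print(round(oligo_mass_da(16)))  # ~5282 Da for the 16-mer
```

Either way, these are molecules thousands of times heavier than a water molecule, which makes their entry into living oligodendroglia all the more remarkable.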

Now a molecule doesn’t know what it is diffusing in, so it is far from clear whether administering an oligonucleotide into the human cerebral ventricles would get it far enough into the brain to reach its target.

Just that it worked in these animals, improving neurologic function and lifespan, is incredible.  As Carl Sagan said, “Extraordinary claims require extraordinary evidence,” so the work needs repeating.

Shutting down an mRNA is one thing, but I don’t see how antisense oligonucleotides could be used to correct an mRNA (unless they get into the nucleus and correct a splice junction mutation).

Amazing stuff.  You never know what the next issue of the journals will bring.  It’s like opening presents.