The staggering implications of one axon synapsing on another

It isn’t often that a single paper can change the way we think the brain works.  But such is the case for the paper described in the previous post (full copy below *** ) if the implications I draw from it are correct.

Unfortunately this post requires a deep dive into neuroanatomy, neurophysiology, neuropharmacology and cellular molecular biology.  I hope to put in enough background to make some of it comprehensible, but it is really written for the cognoscenti in these fields.

I’m pretty sure these thoughts are both original and unique.

Briefly, the paper provided excellent evidence for one axon causing another to fire an impulse (an action potential).  The firer was an axon from a neuron using acetylcholine as a neurotransmitter, and the firee was a dopamine axon going to the striatum.

Dopamine axons are special.  They go all over the brain.  The cell body of the parent neuron of the axon being synapsed on uses dopamine as a neurotransmitter.  It sits in the pars compacta of the substantia nigra, a fair piece away from the target they studied.  “Individual neurons of the pars compacta are calculated to give rise to 4.5 meters of axons once all the branches are summed” — [ Neuron vol. 96 p. 651 ’17 ].  These axons release dopamine all over the brain.  There aren’t many dopamine neurons to begin with: just 80,000, which is one millionth of the current (probably unreliable) estimate of 80,000,000,000 neurons in the brain.

Now synapses between neurons are easy to spot using electron microscopy.  The presynaptic terminal contains a bunch of small vesicles and is closely apposed (300 Angstroms — way below anything our eyes can see) to the postsynaptic neuron, which also looks different, usually having a density just under the membrane (called, logically enough, the postsynaptic density).  Embedded in the postsynaptic membrane are proteins which conduct ions such as Na+, K+ and Cl- into the postsynaptic neuron, triggering an action potential.

The dopamine axons going all over the brain have a lot of presynaptic specialization, but the postsynaptic neuron and its postsynaptic density are nowhere to be found.  This is called volume neurotransmission.

The story doesn’t end with dopamine.  There are 3 other similar systems of small numbers of neurons collected into nuclei, using different neurotransmitters, but whose axons branch and branch so they go all over the brain.

These are the locus coeruleus, which uses norepinephrine as a neurotransmitter, the dorsal raphe nucleus, which uses serotonin, and the basal nucleus of Meynert, which uses acetylcholine.

What is so remarkable about the paper is that it shows the receiving neuron can (partially) control what dopamine input it gets.

But dopamine doesn’t work at a classical synapse, and the 5 receptors for it (all of them G Protein Coupled Receptors — GPCRs) aren’t found there.  None of the GPCRs conduct ions or trigger action potentials (immediately anyway).  Instead, they produce their effects much more slowly by changing the metabolism of the interior of the cell.

Neither does norepinephrine, all of whose receptors are GPCRs.  Serotonin does have one receptor (of its 16 or so) which conducts ions, but the rest are GPCRs.

Acetylcholine does have one class of receptors (nicotinic) which conduct ions, and the paper shows this is what triggers the axon-to-axon synapse.  The other class of acetylcholine receptor (muscarinic) is a GPCR.

We do know that the norepinephrine and serotonin axons work by volume neurotransmission (not sure about those of the basal nucleus of Meynert).

Now the paper tested axon-to-axon firing in one of the four systems (dopamine) in one of the places its axons go (the striatum).  There is no question that the axons of all 4 systems ramify widely.

Suppose axon-to-axon firing is general, so that a given region can control in some way how much dopamine/serotonin/norepinephrine/acetylcholine it is getting.

Does this remind you of any system you are familiar with?  Maybe because my wife went to architecture school, it reminds me of an old apartment building, with separate systems distributing electricity, plumbing, steam heat and water to each apartment, each apartment controlling how much of each it gets.

Perhaps these four systems are basically neurological utilities, necessary for the function of the brain, but possibly irrelevant to the computations it is carrying out, like a mother heating a bottle for her baby in water on a gas stove on a cold winter night.  The nature of the steam heat, electricity, water and gas tells you very little about what is going on in her apartment.

The paper is so new (the Neuron issue of 21 September) that more implications are sure to present themselves.

Quibbles are sure to arise.  One is the fact that the gray matter of our brain doesn’t contain much in the way of neurons using acetylcholine as a neurotransmitter.  What it does have is lots of neurons using GABA, which we know can act on axons, inhibiting action potential generation.  This has been well worked out for synapses where the axon emerges from the neuron cell body (the initial segment).

The work was done in living animals, so no microscopy is available showing the synapse. Such work is sure to be done.  No classical presynaptic apparatus may be present, just two naked axons touching each other and interacting by ephaptic transmission.

So a lot of work should be done, the first of which should be replication.  As the late Carl Sagan said, “extraordinary claims require extraordinary evidence.”

Finally:

As Mark Twain said, “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”


Synapses on Axons!

Every now and then a paper comes along which shows how little we really know about the brain and how it works.  Even better, it demands a major rethink of what we thought we knew.  Such a paper is — https://www.cell.com/neuron/fulltext/S0896-6273(22)00656-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627322006560%3Fshowall%3Dtrue

which I doubt you can get unless you are a subscriber to Neuron.    What [ Neuron vol. 110 pp. 2889 – 2890 ’22 ] does is pretty much prove that an axon from one neuron can synapse on an axon of another neuron.  When one neuron is stimulated the axon of another neuron fires an impulse (an action potential) as measured by patch clamping the second axon.  This happens way too fast after stimulation to be explained by volume neurotransmission (about which more later).  Such synapses are well known on the initial segment of the axon as it leaves the cell body (the soma) of the neuron.

But these synapses occur very near the end of the axon, in the part of the brain (the striatum) the parent neuron (a midbrain dopamine neuron) innervates.  The neurotransmitter involved is acetylcholine, and the striatum has lots of neurons using acetylcholine as a neurotransmitter.  There are two basic types of acetylcholine receptor in the brain — muscarinic and nicotinic.  Muscarinic receptors are slow acting and change the internal chemistry of the neuron.  This takes time.  Nicotinic receptors are ion channels, and when they open, an action potential is nearly immediate.  Also, a drug blocking the nicotinic acetylcholine receptor blocks action potential formation after stimulation.

Why is this work so radical?  (Which of course means that it must be repeated by others.)  It implies that all sorts of computations in the brain can occur locally at the end of an axon, far away from the neuron cell body which is supposed to be in total control of it.  The computations could occur without any input from the cell body, and the spontaneous activity of the axons they studied occurs without an impulse from the cell body.  If replicated, we’re going to have to rethink our models of how the brain actually works.  The authors note that they have studied just one system, but other workers are certain to study others, to find out how general this is.

Neuropil is an old term for areas of the brain with few neuronal or glial cell bodies, but lots of neuronal and glial processes.  It never was much studied, and our brain has lots of it.  Perhaps it is actually performing computations, in which case it must be added to the 80 billion neurons we are thought to have.

Now for a bit more detail

The cell body of the parent neuron of the axon being synapsed on uses dopamine as a neurotransmitter.  It sits in the pars compacta of the substantia nigra, a fair piece away from the target they studied.  “Individual neurons of the pars compacta are calculated to give rise to 4.5 meters of axons once all the branches are summed” — [ Neuron vol. 96 p. 651 ’17 ].  These axons release dopamine all over the brain, not necessarily at a synapse with a neuron.  So when that single neuron fires, dopamine is likely to bathe every neuron in the brain.  This is called volume neurotransmission, which is important because the following neurotransmitters use it: dopamine, serotonin, acetylcholine and norepinephrine.  Each has only a small number of cells using it as a transmitter.  The ramification of these neurons is incredible.

So now you see why massive release of any of the 4 neurotransmitters mentioned (norepinephrine, serotonin, dopamine, acetylcholine) would have profound effects on brain states.  The four are vitally involved in emotional state and psychiatric disease.  The SSRIs treat depression; they prevent reuptake of released serotonin.  Cocaine has similar effects on dopamine.  The list goes on and on and on.

Axons synapsing on other axons is yet another reason to modify our rather tattered wiring diagram of the brain — https://luysii.wordpress.com/2011/04/10/would-a-wiring-diagram-of-the-brain-help-you-understand-it/

Book Review: Proving Ground, Kathy Kleiman

Proving Ground is a fascinating book about the 6 women who programmed the first programmable computer, the ENIAC (Electronic Numerical Integrator And Computer).  Prior to this, the women were computers, as the term was used in the 1940s for people who sat in front of calculating machines and performed lengthy numerical computations, solving differential equations to find the path of an artillery shell one bloody addition/subtraction/multiplication/division at a time.  When World War II started and the men were off in the army, the search was on for women with a mathematical background who could do this.

A single trajectory took a day to calculate, and each trajectory had to be separately calculated for different wind currents, air temperature, speed and weight of the shell.  The computations were largely done at the Moore School of Engineering at Penn and were way too slow (although accurate) to produce the numbers of trajectories the army needed.

Enter Dr. John Mauchley who had an idea of how to do this using vacuum tubes, and a brilliant 23 year old engineer, J. Presper Eckert, who could instantiate it. The army committed money to building the machine, which came in 42 monster boxes 8 feet tall, 2 feet wide and what looks like 4 feet deep.

6 of the best and brightest computers of trajectories were recruited to figure out how to wire the boxes together to mimic the trajectory calculations they had already been doing.  So, if you’ve ever done any programming, you’ll know that having a definite target to mimic with software makes life much easier.

Going a bit deeper, if you’ve done any programming in machine language, you know about registers, the addition and logical unit, hard wired memory, alterable memory.

Here’s what the 6 women were given by Dr. Eckert (without ever seeing the monster boxes)

1. A circuit diagram of each box, showing how this vacuum tube activated that vacuum tube etc. etc.  The 42 boxes contained 18,000 vacuum tubes.  Vacuum tubes and transistors are similar in that they conduct electricity in only one direction and can be turned on and off.

2. A block diagram — which showed how the functions of a unit or system interrelate

3. A logical diagram — places for dials, switches, plugs and cables on the front of the 42 units.

So given this, the 6 had to figure out what each unit did, and how to wire them together to mimic the trajectory calculations they had been doing.

They did it, and initially without being able to enter the room with the boxes (because they didn’t have the proper security clearance).  Eventually they got the clearance and were able to figure out how to wire the boxes together.

If that isn’t brilliant enough, because the calculations were still taking too long, they invented parallel programming.

For those of you who know computing, that should be enough to make you thirst for more detail.

The book contains a lot of sociology.  The women were treated like dirt by the higher ups (but not by Mauchley or Eckert).  When the time came to show ENIAC off to the brass (both academic and military), they were tasked with serving coffee and hanging up coats.  When Kleiman found pictures of them with ENIAC and asked who they were, she was told they were ‘refrigerator ladies’ — whose function was similar to the barely clothed models draped over high powered automobiles to sell them.

I’ll skip the book’s sociology for some sociology of my own.  The book has biographies and much fascinating detail about all 6 women.  I grew up near Philly, and know the buildings at Penn where this was done (I went to Penn Med). Two of the 6 were graduates of Chestnut Hill College, a small Catholic school west of Philly.  The girl across the street went there.  Her mother was born in County Donegal and cleaned houses.  Her father dropped out of high school at 16 to support his widowed mother.  No social services between the two world wars, wasn’t that terrible etc. etc.  Her father worked in a lumberyard, yet the two of them sent both children to college, and owned their own home (eventually free of debt).  The Chestnut Hill grad I know became an editor at Harcourt Brace, her brother became a millionaire insurance executive.  It would be impossible for two working class people to do this today where I grew up (or probably in most places).

What is dx really?

“Differential geometry is the study of properties that are invariant under change of notation” — Preface p. vii of “Introduction to Smooth Manifolds” J. M. Lee, second edition.  Lee says this is “funny primarily because it is so close to the truth”.

Having ascended to a substantial base camp for the assault on the Einstein Field equations (e.g. understanding the Riemann curvature tensor), I thought I’d take a break and follow Needham’s advice about another math book “Elementary Differential Geometry” 2nd edition (revised 2006) by Barrett O’Neill.  “First published in 1966, this trail-blazing text pioneered the use of Forms at the undergraduate level.  Today, more than a half-century later, O’Neill’s work remains, in my view, the single most clear-eyed, elegant and (ironically) modern treatment of the subject available — present company excepted! — at the undergraduate level.”

Anyone taking calculus has seen plenty of dx’s — in derivatives, in integrals etc. etc..  They’re rarely explained.  O’Neill will get you there in just the first 24 pages.  One more page and you’ll understand

df =  (partial f/partial x) * dx + (partial f/partial y) * dy + (partial f/partial z) * dz

which you’ve doubtless seen before, primarily as hieroglyphics, before you moved on.
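To see the formula in action, here is a worked example (my own, not O’Neill’s):

```latex
% A worked instance of the formula above, for a concrete function on R^3.
\[
f(x,y,z) = x^{2}y + z
\quad\Longrightarrow\quad
df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial z}\,dz
   = 2xy\,dx + x^{2}\,dy + dz .
\]
```

Applied to a tangent vector v at a point p, df(v) is just the directional derivative of f in the direction v, which is what those 24 pages of definitions are setting up.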

Is it easy?  No, not unless you read definitions and internalize them immediately.  The definitions are very clearly explained.

His definition of vector is a bit different — two points in Euclidean 3-space (written R^3, which is the only space he talks about in the first 25 pages).  His 3-space is actually a vector space in which points can be added and multiplied by scalars.

You’ll need to bone up on the chain rule from calculus 101.

A few more definitions — natural coordinate functions, tangent space to R^3 at p, vector field on R^3, pointwise principle, natural frame field, Euclidean coordinate function (written x_i, where i is in { 1, 2, 3 } ), derivative of a function in direction of vector v (e.g. directional derivative), operation of a vector on a function, tangent vector, 1-form, dual space. I had to write them down to keep them straight as they’re mixed in paragraphs containing explanatory text.

and

at long last,

differential of x_i (written dx_i)

All is not perfect.  On p. 28 you are introduced to the alternation rule

dx_i ^ dx_j = – dx_j ^ dx_i with no justification whatsoever

On p. 30 you are given the formula for the exterior derivative of a one form, again with no justification.  So it’s back to mumbling incantations and plug and chug.
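For reference, here is the standard formula for the exterior derivative of a 1-form on R^3 (I believe it is the one O’Neill states); the alternation rule is what makes the coefficients come out antisymmetric:

```latex
% Exterior derivative of a 1-form on R^3.  Each term comes from d(f dx) = df ^ dx
% followed by the alternation rule dx_i ^ dx_j = -dx_j ^ dx_i.
\[
\phi = f\,dx + g\,dy + h\,dz
\quad\Longrightarrow\quad
d\phi = \Bigl(\tfrac{\partial g}{\partial x}-\tfrac{\partial f}{\partial y}\Bigr)\,dx\wedge dy
      + \Bigl(\tfrac{\partial h}{\partial y}-\tfrac{\partial g}{\partial z}\Bigr)\,dy\wedge dz
      + \Bigl(\tfrac{\partial h}{\partial x}-\tfrac{\partial f}{\partial z}\Bigr)\,dx\wedge dz .
\]
```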

FDA Amylyx approval 7 September implies Simufilam will be FDA approved this year

On 7 September an FDA advisory board reversed itself and recommended approval for a drug for ALS — https://www.wsj.com/articles/amylyxs-als-drug-backed-by-fda-advisers-11662590651?mod=newsviewer_click.  The head of the FDA Office of Neuroscience (Billy Dunn) gave a verbal endorsement, making it likely that Amylyx’s drug would be approved.

What does this have to do with the approval of Simufilam this year?  Amylyx did a post-hoc, retrospective “responder analysis” showing that patients who did respond to drug (vs. placebo) had “an unusually strong response”, i.e., a bunch of non-responders in the general population masked the beneficial effects of the drug.  This, after the same committee in March turned the drug down due to lack of efficacy in the studied cohort as a whole.

You may recall that I thought Cassava’s results with Simufilam were better than they realized after they released the data on the first 50 patients in the open trial reaching the 9 month endpoint.  The full post, published 25 August 2021, can be found below the &&&&&.  5/50 had a greater than 50% improvement in their ADAS-Cog11 score (by more than 10 points).  Data like this in Alzheimer’s has never been seen before in any study, or in my clinical experience, so the data can not be explained by cherry-picking.  The only other explanations are (1) fraud, (2) incompetent ADAS-Cog11 measurement, (3) people without Alzheimer’s entering the study for the money, all of which I think are remote.  Also, the average decline at one year in ADAS-Cog in Alzheimer patients is 5 points.

So Cassava has data similar to Amylyx’s on the first 50 of the 200 in the open label study.  The last of the 200 will complete their full year on the drug by the end of 2022, at which point data will be released.  If the results on the 200 patients are similar to those on the first 50 (say 20/200 showing significant improvement, greater than a 50% change for the better in ADAS-Cog), Cassava will have a (strong in my opinion) argument for Simufilam approval.

Clinicians know that patients always respond variably to any sort of therapy.  We now know why.  The human genome contains 3,200,000,000 positions, and full genome sequencing of well over 100,000 people has shown that any two people will differ at one position in a thousand — that’s 3,200,000 differences — source https://www.ncbi.nlm.nih.gov/books/NBK20363/
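A two-line check of that arithmetic:

```python
# The arithmetic behind the 3,200,000 differences figure.
genome_positions = 3_200_000_000
difference_rate = 1 / 1000   # any two people differ at roughly 1 position in 1,000
print(f"{int(genome_positions * difference_rate):,} expected differences")  # 3,200,000
```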


Gentlemen, start your engines.

&&&&&

Cassava Sciences 9 month data is probably better than they realize

My own analysis of the Cassava Sciences 9 month data shows that it is probably even better than they realize.

Here is a link to what they released — keep it handy https://www.cassavasciences.com/static-files/13794384-53b3-452c-ae6c-7a09828ad389.

I was unable to listen to Lindsay Burns’s presentation at the Alzheimer’s Association International Conference in July as I wasn’t signed up.  I have been unable to find either a video or a transcript, so perhaps Lindsay did realize what I’m about to say.

Apparently today 25 August there was another bear attack on the company and its data.  I’ve not read it or even seen what the stock did.  In what follows I am assuming that everything they’ve said about their data is true and that their data is what they say it is.

So the other day I had a look at what Cassava released at the time of Lindsay’s talk.

First some background on their study.  It is a report on the first 50 patients who had received Simufilam for 9 months.  It is very important to understand how they were measuring cognition.  It is something called the ADAS-Cog11.

Here it is and how it is scored and my source — https://www.verywellhealth.com/alzheimers-disease-assessment-scale-98625

The original version of the ADAS-Cog consists of 11 items, including:

1. Word Recall Task: You are given three chances to recall as many words as possible from a list of 10 words that you were shown. This tests short-term memory.

2. Naming Objects and Fingers: Several real objects are shown to you, such as a flower, pencil and a comb, and you are asked to name them. You then have to state the name of each of the fingers on the hand, such as pinky, thumb, etc. This is similar to the Boston Naming Test in that it tests for naming ability, although the BNT uses pictures instead of real objects, to prompt a reply.

3. Following Commands: You are asked to follow a series of simple but sometimes multi-step directions, such as, “Make a fist” and “Place the pencil on top of the card.”

4. Constructional Praxis: This task involves showing you four different shapes, progressively more difficult such as overlapping rectangles, and then you will be asked to draw each one. Visuospatial abilities become impaired as dementia progresses and this task can help measure these skills.

5. Ideational Praxis: In this section, the test administrator asks you to pretend you have written a letter to yourself, fold it, place it in the envelope, seal the envelope, address it and demonstrate where to place the stamp. (While this task is still appropriate now, this could become less relevant as people write and send fewer letters through the mail.)

6. Orientation: Your orientation is measured by asking you what your first and last name are, the day of the week, date, month, year, season, time of day, and location. This will determine whether you are oriented x 1, 2, 3 or 4.

7. Word Recognition Task: In this section, you are asked to read and try to remember a list of twelve words. You are then presented with those words along with several other words and asked if each word is one that you saw earlier or not. This task is similar to the first task, with the exception that it measures your ability to recognize information, instead of recall it.

8. Remembering Test Directions: Your ability to remember directions without reminders or with a limited amount of reminders is assessed.

9. Spoken Language: The ability to use language to make yourself understood is evaluated throughout the duration of the test.

10. Comprehension: Your ability to understand the meaning of words and language over the course of the test is assessed by the test administrator.

11. Word-Finding Difficulty: Throughout the test, the test administrator assesses your word-finding ability throughout spontaneous conversation.

What the ADAS-Cog Assesses

The ADAS-Cog helps evaluate cognition and differentiates between normal cognitive functioning and impaired cognitive functioning. It is especially useful for determining the extent of cognitive decline and can help evaluate which stage of Alzheimer’s disease a person is in, based on his answers and score. The ADAS-Cog is often used in clinical trials because it can determine incremental improvements or declines in cognitive functioning.

Scoring

The test administrator adds up points for the errors in each task of the ADAS-Cog for a total score ranging from 0 to 70. The greater the dysfunction, the greater the score. A score of 70 represents the most severe impairment and 0 represents the least impairment.

The average score of the 50 individuals entering was 17 with a standard deviation of 8, meaning that about 2/3 of the group entering had scores of 9 to 25 and that about 95% had scores of 1 to 33 (but I doubt that anyone would have entered the study with a score of 1 — so I’m assuming that the lowest score on entry was 9 and the highest was 25).  Cassava Sciences has this data but I don’t know what it is.

Now follow the link to Individual Patient Changes in ADAS-Cog (N = 50) and you will see 50 dots, some red, some yellow, some green.

Look at the 5 individuals who fall between -10 and – 15 and think about what this means.  -10 means that an individual made 10 fewer errors at 9 months than on entry into the study.  Again, I have no idea what the scores of the 5 were on entry.

So assume the worst and that the 5 all had scores of 25 on entry.  The group still showed a roughly 50% improvement from baseline, as they look like they made either 12, 13, or 14 fewer errors.  If you assume that the 5 had the average impairment of 17 on entry, they were nearly normal after 9 months of treatment.  That doesn’t happen in Alzheimer’s and is a tremendous result.  Lindsay may have pointed this out in her talk, but I don’t know, although I’ve tried to find out.
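Here is the back-of-the-envelope arithmetic behind that argument.  The entry scores are my assumptions (Cassava has not released per-patient baselines); the 12 to 14 point improvements are read off the released scatter plot.

```python
# Percentage improvement for the 5 best responders under two assumed entry scores.
assumed_entry_scores = [25, 17]   # worst case vs. the cohort's average entry score
fewer_errors = [12, 13, 14]       # improvement at 9 months, read off the scatter plot

for entry in assumed_entry_scores:
    for delta in fewer_errors:
        final = entry - delta
        percent = 100 * delta / entry
        print(f"entry {entry}: {delta} fewer errors -> final score {final} ({percent:.0f}% improvement)")
```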

Is there another neurologic disease with responses like this?  Yes there is, and I’ve seen it.

I was one of the first neurologists in the USA to use L-DOPA for Parkinsonism.  All patients improved, and I actually saw one or two wheelchair bound Parkinsonians walk again (without going to Lourdes).  They were far from normal, but ever so much better.

However, mildly impaired Parkinsonians on treatment became indistinguishable from normal, to the extent that I wondered if I’d misdiagnosed them.

12 to 14 fewer errors is a big deal; an average decrease of 3 errors, not so much, but still unprecedented in Alzheimer’s disease.  Whether this is clinically meaningful is hard to tell.  However, 12 month data on the 50 will be available in the fourth quarter of ’21, and if the group as a whole continues to improve over baseline it will be a very big deal, as it will tell us a lot about Alzheimer’s.

Cassava Sciences has all sorts of data we’ve not seen (not that they are hiding it).  Each of the 50 has 4 data points (entry, 3, 6 and 9 months) and it would be interesting to see the actual scores rather than the changes between them in all 50.  Were the 5 patients with the 12 to 14 fewer errors more impaired (high ADAS-Cog11 score on entry) or less?

Was the marked improvement in the 5 slow and steady or sudden?   Ditto for the ones who deteriorated or who got much worse or who slightly improved.

Even if such dramatic improvement is confined to 10% of those receiving therapy it is worth a shot to give it to all.  Immune checkpoint blockade has dramatically helped some patients with cancer  (far from all), yet it is tried in many.

Disclaimer:  My wife and I have known Lindsay since she was a teenager and we were friendly with her parents.  However, everything in this post is on the basis of public information available to anyone (and of course my decades of experience as a clinical neurologist)


Understanding the Riemann curvature tensor is like doing a spinal tap

Back in the day when I was doing spinal taps, I spent far more time setting them up (positioning the patient so that the long axis of the spinal column was parallel to the floor and the vertical axis of the recumbent patient was perpendicular to the floor) than actually doing the tap.  Why? because then, all I had to do was have the needle parallel to the floor, with no guessing about how to angle it when the patient had rolled (usually forward into the less than firm mattress of the hospital bed).

So it is with finally seeing what the Riemann curvature tensor actually is, why it is the way it is, and why the notation describing it is such a mess.  Finally on p. 290 of Needham’s marvelous book “Visual Differential Geometry and Forms” the Riemann curvature tensor is revealed in all its glory.  Understanding it takes far less time than understanding the mathematical (and geometric) scaffolding required to describe it, a la spinal taps.

Over the years while studying relativity, I’ve seen it in one form or other (always algebraic) without understanding what the algebra was really describing.

Needham will get you there, but you have to understand a lot of other things first. Fortunately almost all of them are described visually, so you see what the algebra is trying to describe.  Along the way you will see what the Lie bracket of two vector fields is all about along with holonomy.  And you will really understand what curvature is.  And Needham will give you 3 ways to understand parallel transport (which underlies everything — thanks Ashutosh)

Needham starts off with Gauss’s definition of curvature of a surface — the angular excess of a triangle, divided by its area.

Here is why this definition is enough to show you that the surface of a sphere is curved.  Go to the equator.  Mark point one, then point two 1/4 of the way around the sphere.  Form longitudes (perpendiculars) there and extend them as great circles toward the North pole.  You now have a triangle containing 3 right angles (clearly an angular excess from Euclid, who states that the sum of the angles of a triangle is two right angles).  The reason, of course, is that the sphere is curved.
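To see that Gauss’s ratio gives a sensible number, here is the calculation for that very triangle, which covers one octant of a sphere of radius R:

```latex
% Gauss's curvature = angular excess / area, for the octant triangle just described.
\[
\text{angular excess} = 3\cdot\frac{\pi}{2} - \pi = \frac{\pi}{2},
\qquad
\text{area} = \frac{1}{8}\cdot 4\pi R^{2} = \frac{\pi R^{2}}{2},
\]
\[
\text{curvature} = \frac{\text{angular excess}}{\text{area}} = \frac{\pi/2}{\pi R^{2}/2} = \frac{1}{R^{2}}.
\]
```

The answer 1/R^2 is the same wherever you put the triangle and grows as the sphere shrinks, which is just what curvature ought to do.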

Ever since I met a classmate 12 years ago at a college reunion who was a relativist working with Hawking, I decided to try to learn relativity so I’d have something intelligent to say to him if we ever met again (COVID19 stopped all that although we’re still both alive).

Now that I understand what the math of relativity is trying to describe, I may be able to understand general relativity.

Be prepared for a lot of work, but do start with Needham’s book.  Here are some links to other things I’ve written about it.  It will take some getting used to as it is very different from any math book you’ve ever read (except Needham’s other book).

12 July 21 — https://luysii.wordpress.com/2021/07/12/a-premature-book-review-and-a-60-year-history-with-complex-variables-in-4-acts/

4 Dec 21 — https://luysii.wordpress.com/2021/12/04/a-book-worth-buying-for-the-preface-alone-or-how-to-review-a-book-you-havent-read/

7 Mar 22 — https://luysii.wordpress.com/2022/03/07/visual-differential-geometry-and-forms-q-take-3/

27 June 22 — https://luysii.wordpress.com/2022/06/27/the-chinese-room-argument-understanding-math-and-the-imposter-syndrome/

17 July 22 — https://luysii.wordpress.com/2022/07/17/a-visual-proof-of-the-the-theorem-egregium-of-gauss/

4 diseases explained at one blow said the protein chemist — part 2 — TDP43

A brilliant paper [ Science vol. 377 eabn5582 pp. 1 –> 20 ’22 ] explains how changing a single amino acid (proline) to another  can cause 4 different diseases, depending on the particular protein it is found in (and which proline of many is changed).

There is so much in this paper that it will take several posts to go over it all.  The chemistry in the paper is particularly fine.  So it’s back to Biochemistry 101 and the alpha helix and the beta sheet.

A lot of the paper concerns TDP43, a protein familiar to neurologists because it is involved in FTD-ALS (FrontoTemporal Dementia — Amyotrophic Lateral Sclerosis) and ALS itself.

I actually saw a case early in training.  I had been taught that ALS patients remained cognitively intact until the end (certainly true in my experience — think of Stephen Hawking), so here was this ALS case who was mildly demented.  My education was deficient at that time (I’d never heard of FTD-ALS), so I wrote in the chart “we’re missing something here”.  These were calmer times in the medical malpractice world.

TDP43 is a protein with a lot of different parts in its 414 amino acids.  There are two regions which bind to RNA (RNA Recognition Motifs { RRMs }), and a glycine rich low complexity domain at the carboxy terminal end.

TDP43 proteins are found in the neuronal inclusions of ALS (interestingly, these weren’t recognized when I was in training).  The low complexity domain of TDP43 aggregates and forms fibers.  Some 50 different mutations have been found here in patients.

Just this year the cryoEM structures of TDP43 aggregates from two patients with FTD-ALS were described [ Nature vol. 601 pp. 29 – 30, 139 – 143 ’22 ].  It appears to be a typical amyloid structure with all 79 amino acids (from glycine #282 to glutamine #360) in a single plane.  Here’s a link to the actual paper — https://www.nature.com/articles/s41586-021-04199-3.  It is likely behind a paywall, but if you can get it, look at figure 2 on p. 140, which has the structure.  Who would have ever thought that a protein could flatten out this much?

Both structures were from TDP-43 with none of the 24 mutations known to cause FTD-ALS.

But that’s far from the end of the story.  The same area of TDP43 can also form liquid droplets (perhaps the precursor of the fibers).  But that’s where the brilliant chemistry of [ Science vol. 377 eabn5582 pp. 1 –> 20 ’22 ] comes in.

That’s for next time.  After that, I should be finished with Needham and will have time to write about 6 or so of the interesting papers I’ve run across in the past 6 months.

We interrupt this program . . .

I’ll interrupt the series of posts on the brilliant article [ Science vol. 377 eabn5582 pp. 1 –> 20 ’22 ] to talk about working with the very frightening diazomethane 61 years ago.

I was able to convince Woodward to let me work on an idea of mine to show that carbenes were generated by photolysis of a diazo compound (this was suspected but not known at the time).

Here’s the idea

1. Condense acrylic acid with cyclopentadiene by a Diels-Alder reaction.  Because of steric effects the acid points below the ring.

2. Form the acyl chloride

3. React with diazomethane to form the diazocarbonyl (no change in the orientation of the carbonyl relative to the ring).

4. Photolyze — if a carbene is formed, it’s in perfect position to form a cyclopropane on the other side of the ring, which, if formed, would pretty much prove the point.

Diazomethane was known to be quite explosive, and I spent a lot of time tiptoeing around the lab when working with it.  Combine this with the worst lab technique in the world and I couldn’t get things to work.  Subsequently the idea was shown to be correct, and an enormous amount of work has been done on carbenes.

So why interrupt the flow of posts about the brilliant  [ Science vol. 377 eabn5582 pp. 1 –> 20 ’22 ] ?

Because Science vol. 377 pp. 649 – 654 ’22 reports a simple (and nonexplosive) way to form carbenes from aldehydes.  Here’s what they say

“Common aldehydes are readily converted (via stable α-acyloxy halide intermediates) to electronically diverse (donor or neutral) carbenes to facilitate >10 reaction classes. This strategy enables safe reactivity of nonstabilized carbenes from alkyl, aryl, and formyl aldehydes via zinc carbenoids. Earth-abundant metal salts [iron(II) chloride (FeCl2), cobalt(II) chloride (CoCl2), copper(I) chloride (CuCl)] are effective catalysts for these chemoselective carbene additions to σ and π bonds.”

How I wished I had this back then.

4 diseases explained at one blow said the protein chemist — part 1

A brilliant paper [ Science vol. 377 eabn5582 pp. 1 –> 20 ’22 ] explains how changing a single amino acid (proline) to another  can cause 4 different diseases, depending on the particular protein it is found in (and which proline of many is changed).

There is so much in this paper that it will take several posts to go over it all.  The chemistry in the paper is particularly fine.  So it’s back to Biochemistry 101 and the alpha helix and the beta sheet.

Have a look at this

https://cbm.msoe.edu/teachingResources/proteinStructure/secondary.html

If you can tell me how to get a picture like this into a WordPress post please make a comment.

The important point is that hydrogen bonds between the amide hydrogen of one amino acid and the carbonyl group of another hold the alpha helix and the beta pleated sheet together.

Enter proline: https://en.wikipedia.org/wiki/Proline.  Proline when not embedded in a protein has a hydrogen on the nitrogen atom in the ring.  When proline is joined to another amino acid by a peptide bond in a protein, the hydrogen on the nitrogen is no longer present.  So the hydrogen bond helping to hold the two structures (alpha helix and beta sheet) together is no longer present at proline, and alpha helices and beta sheets containing proline are not as stable.  Prolines after the fourth amino acid of the alpha helix (e.g. after the first turn of the helix) produce a kink.  The proline can’t adopt the alpha helical configuration of the backbone and it can’t hydrogen bond.

But it’s even worse than that (and this observation may even be original).  Instead of a hydrogen bonding to the free electrons of the oxygen in the carbonyl group you have the two electrons on the nitrogen jammed up against them.  This costs energy and further destabilizes both structures.

Being a 5 membered ring which contains the alpha carbon of the amino acid, proline in proteins isn’t as flexible as other amino acids.

This is why proline is considered to be a helix breaker, and is used all the time in alpha helices spanning cellular membranes to cause kinks, giving them more flexibility.

There is much more to come — liquid liquid phase separation, prion like domains, low complexity sequences, frontotemporal dementia with ALS, TDP43, amyloid, Charcot Marie Tooth disease and Alzheimer’s disease.

So, for the present, stare at the diagram at the link above.

Why Cassava’s 1 year results should allow compassionate use of Simufilam

Cassava reported results on 100 Alzheimer patients in an open label (i.e. no controls) trial of Simufilam for 1 year — https://finance.yahoo.com/news/cassava-sciences-reports-second-quarter-131500494.html.  The average results were unimpressive (to the uninitiated), with only a minimal average overall improvement of 1.5 points on the ADAS-Cog11.  This is probably why the stock (SAVA) dropped a point yesterday after the news.  Since everything turns on the ADAS-Cog11, here is a link to a complete description — https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5929311/.  The test takes about 45 minutes, placing it out of reach of a busy practicing clinical neurologist.

Why is even the 1.5 point improvement impressive to the initiated (me)?  Over 32 years in clinical neurology, I’d estimate that I saw at least 1 demented patient each week (roughly 1,664 in all).  Now probably only 300 or so of the 1,664 were followed for a year.  Guess what?  None of them remained stable for a year, and all got worse.  Absolutely none of them ever got better after a year.  So at least some stabilization of the disease is possible for a year.  The statistics say that Alzheimer patients lose 5 points a year on ADAS-Cog.

But that’s pretty small beer.  Who wants to keep a demented patient around, stable but still demented?  Here is the remarkable part of the Cassava results at a year.

63% of the 100 Patients Showed an Improvement in ADAS-Cog11 Scores, and This Group of Patients Improved an Average of 5.6 Points (S.D. ± 3.8). The statistics say that Alzheimer patients lose 5 points a year on ADAS-Cog.

This is unprecedented and is a strong argument for quick approval of Simufilam (or at least compassionate use).

The cynic will say that I’m just looking at the happy part of the Bell curve.  There must have been people who declined to average the improvement in the 63% down to a measly 1.5 points on the ADAS-Cog.
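Here is the arithmetic, using only the summary numbers Cassava released; it shows what the remaining 37 patients must have averaged.

```python
# 100 patients with an overall mean improvement of 1.5 ADAS-Cog11 points,
# 63 of whom improved by an average of 5.6 points (positive = improvement).
# Per-patient data are not public; this only recovers the mean for the other 37.
n_total, mean_total = 100, 1.5
n_improved, mean_improved = 63, 5.6

n_rest = n_total - n_improved
mean_rest = (n_total * mean_total - n_improved * mean_improved) / n_rest
print(f"the other {n_rest} patients changed by about {mean_rest:.1f} points on average")
# prints roughly -5.5, close to the ~5 point yearly decline typical of untreated Alzheimer's
```

So yes, the non-responders declined at about the usual rate, which is exactly what the cynic expects; the point is that nearly two thirds of the patients did not.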

This is where clinical experience comes in.  No drug helps everyone with a given disease.  “Only 20% of cancer patients respond long term to a type of immune checkpoint blockade (of PD-1)” [ Science vol. 363 p. 1377 ’19 ].  Nonetheless immune checkpoint blockade of several types was approved by the FDA, simply because there was nothing better available.

So if nearly 2/3 of Alzheimer patients will improve at one year on Simufilam, why not let the FDA offer it to them now under compassionate use?