
Minorities on course to win the Darwin awards

While the plural of anecdote is not data, two episodes this week have me very depressed about the spread of the pandemic virus in the minority community (particularly among Blacks). The first involved a very intelligent Black woman who worked in tech support at Comcast and helped us when our internet connection went down. You do not get a job like that unless you’re smart. She’s heard a lot about vaccine side effects and isn’t going to get it. The next was a National Guard woman working for AAA, who won’t get the vaccine unless it’s a military requirement.

At 3 visits to our vaccination site, in a town where 45% of the population is Puerto Rican, we saw nary a minority patient (except for the guy disinfecting the chairs). I talked to one of the nurses, who said that our experience is typical of what she sees day after day.

One way to make a dent in this is to force hospitals, when reporting COVID19 deaths, to state whether the patient was vaccinated or not. Granted, most COVID19 deaths will be in the unvaccinated at our current levels of vaccination, but if that pattern persists even as vaccination levels rise, perhaps the hesitant will be convinced (unfortunately only after a lot of unnecessary deaths).

This is not written with the old WordPress Editor, but with the new one which I hate. It doesn’t seem to let you put in tabs.

You now have to pay up for the Premium edition to install the classic editor. Although I was initially angered, I’ve been using WordPress for a decade absolutely free, and it’s time to pay up.

The past year

The past year was exactly what practicing clinical neurology from ’67 to ’00 was like: fascinating intellectual material along with impotence in the face of horrible suffering.

Force in physics is very different from the way we think of it

I’m very lucky (and honored) that a friend asked me to read and comment on the galleys of his book. He’s trying to explain some very advanced physics to laypeople (e.g. me). So he starts with force fields: gravitational, magnetic, etc. The physicist’s idea of force is very far from the way we usually think of it. Exert enough force long enough and you get tired, but the gravitational force never tires, despite moving planets, stars, and whole galaxies around.

Then there’s the idea that the force is there all the time whether or not it’s doing something a la Star Wars. Even worse is the fact that force can push things around despite going through empty space where there’s nothing to push on, action at a distance if you will.

You’re in good company if the idea bothers you. It bothered Isaac Newton, who basically invented action at a distance. Here he is in a letter to a friend.


“That gravity should be innate inherent & {essential} to matter so that one body may act upon another at a distance through a vacuum without the mediation of any thing else by & through which their action or force {may} be conveyed from one to another is to me so great an absurdity that I beleive no man who has in philosophical matters any competent faculty of thinking can ever fall into it. “

So physicists invented the ether, which was physical and allowed objects to push each other around by pushing on the ether between them.

But action at a distance without one atom pushing on the next etc. etc. is exactly what an incredible paper found [ Proc. Natl. Acad. Sci. vol. 117 pp. 25445 – 25454 ’20 ].

Allostery is an abstract concept in protein chemistry, far removed from everyday life. Far removed, except if you like to breathe, or have ever used a benzodiazepine (Valium, Librium, Halcion, Ativan, Klonopin, Xanax) for anything. Breathing? Really? Yes. Hemoglobin, the red in red blood cells, is really 4 separate proteins bound to each other. Each of the four can bind one oxygen molecule. Binding of oxygen to one of the 4 proteins produces a subtle change in the structure of the other 3, making it easier for another oxygen to bind. This produces another subtle change in the structure of the others, making it easier for a third oxygen to bind. Etc.

This is what allostery is: binding of a molecule to one part of a protein causing changes in structure all over the protein.

Neurologists are familiar with the benzodiazepines, using them to stop continuous seizure activity (status epilepticus), treat anxiety (Xanax), or seizures (Klonopin). They all work the same way, binding to a complex of 5 proteins called the GABA receptor, which when it binds Gamma Amino Butyric Acid (GABA) in one place causes negative ions to flow into the neuron, inhibiting it from firing. The benzodiazepines bind to a completely different site, making the receptor more likely to open when it binds GABA. 

The assumption about all allostery is that something binds in one place, pushing the atoms around, which push on other atoms which push on other atoms, until the desired effect is produced. This is the opposite of action at a distance, where an effect is produced without the necessity of physical contact.

The paper studied TetR, a protein containing 203 amino acids. If you’ve ever thought about it, almost all the antibiotics we have come from bacteria, which they use on other bacteria. Since we still have bacteria around, the survivors must have developed a way to resist antibiotics, and they’ve been doing this long before we appeared on the scene. 

TetR helps bacteria resist tetracycline, an antibiotic produced by other bacteria. Normally TetR sits on the bacterium’s DNA. When tetracycline binds to TetR, it causes other parts of the protein to change so that TetR lets go of the DNA, allowing the bacterium, among other things, to make a pump which moves tetracycline out of the cell. Notice that the site where tetracycline binds on TetR is not the business end where TetR binds DNA, just as the site where the benzodiazepines bind the GABA receptor is not where the ion channel is.

This post is long enough already without describing the cleverness which allowed the authors to do the following. They were able to make TetRs containing every possible single amino acid change: 3,838 different proteins. Why that number? Because we have 20 amino acids, there are 19 possible distinct changes at each position; 3,838 works out to 19 changes at 202 of TetR’s 203 positions (one position, presumably, was left alone).

Some of the mutants didn’t bind to DNA, implying they were non-functional. The 3 dimensional structure of TetR is known, and they chose 5 of the nonfunctional mutants. Interestingly, these were distributed all over the protein.

Then, for each of the 5 mutants they made another 3838 mutants, to see if a mutation in another position would make the mutant functional again. You can see what a tremendous amount of work this was. 
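To get a feel for the scale of the screen, here is a toy sketch of the counting in code (the sequence and the “dead” mutants below are invented placeholders, not the real TetR data):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def single_mutants(seq):
    """Every protein differing from seq by exactly one amino acid substitution."""
    variants = []
    for pos, wild_type in enumerate(seq):
        for aa in AMINO_ACIDS:
            if aa != wild_type:  # 19 possible changes at each position
                variants.append(seq[:pos] + aa + seq[pos + 1:])
    return variants

toy_protein = "M" + "KT" * 101  # a made-up 203-residue stand-in, NOT the real TetR sequence

singles = single_mutants(toy_protein)
print(len(singles))  # 203 positions x 19 changes = 3857 (the paper's 3,838 works out to 202 x 19)

# The suppressor screen: for each of 5 nonfunctional ("dead") single mutants,
# make every possible further point mutant and ask which restore function.
dead_mutants = singles[:5]  # placeholders for the 5 dead mutants actually chosen
double_library = {dead: single_mutants(dead) for dead in dead_mutants}
print(sum(len(v) for v in double_library.values()))  # another ~19,000 proteins to screen
```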

Here is where it gets really interesting. The restoring mutations (revertants if you want to get fancy) were all over the protein, up to 40 – 50 Angstroms away from the site of the dead mutation. Recall that 1 Angstrom is about the size of a hydrogen atom, and that a turn of the alpha helix is 5.4 Angstroms long and contains 3.6 amino acids. The revertant mutations weren’t close to the part of the protein binding tetracycline or the part binding DNA.

Even worse the authors couldn’t find a contiguous path of atom pushing atom pushing atom, to explain why TetR was able to bind DNA again. So there you have it — allosteric action at a distance.

There is much more in the paper, but after all the work they did it’s time to let the authors speak for themselves. “Several important insights emerged from these results. First, TetR exhibits a high degree of allosteric plasticity evidenced by the ease of disrupting and restoring function through several mutational paths. This suggests the functional landscape of allostery is dense with fitness peaks, unlike binding or catalysis where fitness peaks are sparse. Second, allosterically coupled residues may not lie along the shortest path linking allosteric and active sites but can occur over long distances.”

But there is still more to think about, particularly for drug development. Normally, in developing a drug for X, we have a particular site on a particular protein as a target, say the site on a neurotransmitter receptor where the neurotransmitter binds. But this work shows that sites far removed from the actual target might have the same effect.

Natural selection yes, but for what?

Groups across the political spectrum don’t like the idea that natural selection operates on us. The left because of the monstrosities produced by social Darwinism and eugenics. The devout because we have supposedly been formed by the creator in his image and further perfection is blasphemous.

Like it or not, there is excellent evidence for natural selection occurring in humans. One of the best is natural selection for the lactase gene.

People with lactose intolerance have nothing wrong with the gene for lactase, the enzyme which breaks down the sugar lactose found in milk. Babies have no problem with breast milk. The enzyme produced from the gene is quite normal in all of us; no mutations are found in the lactase protein itself. However, 10,000 years ago and earlier, cattle were not domesticated, so there was no dietary reason for a human weaned from the breast to keep making the enzyme. In fact, continuing to use energy to make an enzyme with nothing to act on is wasteful. The genomes of our ancient ancestors had figured this out. The control region (lactase enhancer) for the lactase gene is 14,000 nucleotides upstream from the gene itself, and back then it shut the gene off after age 8. After domestication of cattle 10,000 or so years ago, a mutation changing a cytosine to a thymine arose in the enhancer, so that carriers could digest milk their entire lives. It spread like wildfire because back then our ancestors were in a semi-starved state most of the time, and carriers of the mutation had better nutrition.

Well, that was the explanation until a recent paper [ Cell vol. 183 pp. 684 – 701 ’20 ]. It was thought that without the mutation you couldn’t use milk past age 8 or so. However, studies of the sites of the herdsmen of the steppes showed that they were using milk a lot (making cheese and yogurt) 8,000 years ago. Our best guess is that the mutation arose 4,000 years ago.

So possibly the reason it spread wasn’t milk digestion, but something else. The million-nucleotide segment of our genome around the mutation has barely changed since the mutation arose, which implies that it was under strong positive natural selection. But for what?

Well, a million nucleotides codes for a lot of stuff, not just the lactase enzyme. Also, there is evidence that the mutation is linked to metabolic abnormalities and diseases associated with decreased energy expenditure, such as obesity and type II diabetes, as well as abnormal blood metabolites and lipids.

The region codes for a microRNA (miR-128-1). Knocking it out in mice results in increased energy expenditure and improvement in high fat diet obesity. Glucose tolerance is also improved.

So it is quite possible that what was being selected for was the ‘thrifty gene’ miR-128-1, which would let our semi-starved ancestors expend less energy and store whatever calories they came across as fat.

In cattle a similar (syntenic) genomic region near miR-128-1 has also been under positive selection (by breeders) for feed efficiency and intramuscular fat.

So a mutation producing a selective advantage in one situation is harmful in another.

Another example — https://luysii.wordpress.com/2012/06/10/have-tibetans-illuminated-a-path-to-the-dark-matter-of-the-genome/

The mutation which allows Tibetans to adapt to high altitude causes a hereditary form of blindness (Leber’s optic atrophy) in people living at sea level. 25% of Tibetans have the mutation. Another example of natural selection operating on man.

Neural nets

The following was not written by me, but by a friend now retired from Bell labs. It is so good that it’s worth sharing.

I asked him to explain the following paper to me, which I found incomprehensible despite reading about neural nets for years. The paper tries to figure out why neural nets work so well. The authors note that we lack a theoretical foundation for how neural nets work (or why they should!).

Here’s a link

https://www.pnas.org/content/pnas/117/44/27070.full.pdf

Here’s what I got back

Interesting paper. Thanks.


I’ve had some exposure to these ideas and this particular issue, but I’m hardly an expert.


I’m not sure what aspect of the paper you find puzzling. I’ll just say a few things about what I gleaned out of the paper, which may overlap with what you’ve already figured out.


The paper, which is really a commentary on someone else’s work, focuses on the classification problem. Basically, classification is just curve fitting. The curve you want defines a function f that takes a random example x from some specified domain D and gives you the classification c of x, that is, c = f(x).


Neural networks (NNs) provide a technique for realizing this function f by way of a complex network with many parameters that can be freely adjusted. You take a (“small”) subset T of examples from D where you know the classification and you use those to “train” the NN, which means you adjust the parameters to minimize the errors that the NN makes when classifying the elements of T. You then cross your fingers and hope that the NN will show useful accuracy when classifying examples from D that it has not seen before (i.e., examples that were not in the training set T). There is lots of empirical hokus pokus and rules-of-thumb concerning what techniques work better than others in designing and training neural networks. Research to place these issues on a firmer theoretical basis continues.


You might think that the best way to train a NN doing the classification task is simply to monitor the classifications it makes on the training set vectors and adjust the NN parameters (weights) to minimize those errors. The problem here is that classification output is very granular (discontinuous): cat/dog, good/bad, etc. You need to have a more nuanced (“gray”) view of things to get the hints you need to gradually adjust the NN weights and home in on their “best” setting. The solution is a so-called “loss” function, a continuous function that operates on the output data before it’s classified (while it is still very analog, as opposed to the digital-like classification output). The loss function should be chosen so that lower loss will generally correspond to lower classification error. Choosing it, of course, is not a trivial thing. I’ll have more to say about that later.
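To make the loss-versus-classification-error point concrete, here is a toy numerical sketch (mine, not the paper’s): two weight settings that classify an example identically can still be distinguished by a continuous loss such as cross-entropy, and that difference is what gradient-based training can act on.

```python
import math

def cross_entropy(p_correct):
    """Continuous loss computed from the analog output: the probability given to the correct class."""
    return -math.log(p_correct)

def hard_error(p_correct):
    """Granular 0/1 classification error: wrong only if the correct class gets less than 50%."""
    return 0 if p_correct > 0.5 else 1

# Both outputs classify the example correctly (0/1 error is 0 in each case),
# but the loss still tells you which setting of the weights is closer to failing.
for p in (0.55, 0.99):
    print(f"p(correct) = {p:.2f}   0/1 error = {hard_error(p)}   cross-entropy loss = {cross_entropy(p):.3f}")
```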


One of the supposed truisms of NNs in the “old days” was that you shouldn’t overtrain the network. Overtraining means beating the parameters to death until you get 100% perfect classification on the training set T. Empirically, it was found that overtraining degrades performance: Your goal should be to get “good” performance on T, but not “too good.” Ex post facto, this finding was rationalized as follows: When you overtrain, you are teaching the NN to do an exact thing for an exact set T, so the moment it sees something that differs even a little from the examples in set T, the NN is confused about what to do. That explanation never made much sense to me, but a lot of workers in the field seemed to find it persuasive.


Perhaps a better analogy is the non-attentive college student who skipped lectures all semester and has gained no understanding of the course material. Facing a failing grade, he manages by chicanery to steal a copy of the final exam a week before it’s given. He cracks open the textbook (for the first time!) and, by sheer willpower, manages to ferret out of that wretched tome what he guesses are the correct, exact answers to all the questions in the exam. He doesn’t really understand any of the answers, but he commits them to memory and is now glowing with confidence that he will ace the test and get a good grade in the course.


But a few days before the final exam date the professor decides to completely rewrite the exam, throwing out all the old questions and replacing them with new ones. The non-attentive student, faced with exam questions he’s never seen before, has no clue how to answer these unfamiliar questions because he has no understanding of the underlying principles. He fails the exam badly and gets an F in the course.

Relating the analogy of the previous two paragraphs to the concept of overtraining NNs, the belief was that if you train a NN to do a “good” job on the training set T but not “too good” a job, it will incorporate (in its parameter settings) some of the background knowledge of “why” examples are classified the way they are, which will help it do a better job when it encounters “unfamiliar” examples (i.e., examples not in the training set). However, if you push the training beyond that point, the NN starts to enter the regime where its learning (embodied in its parameter settings) becomes more like the rote memorization of the non-attentive student, devoid of understanding of the underlying principles and ill prepared to answer questions it has not seen before. Like I said, I was never sure this explanation made a lot of sense, but workers in the field seemed to like it.


That brings us to “deep learning” NNs, which are really just old-fashioned NNs but with lots more layers and, therefore, lots more complexity. So instead of having just “many” parameters, you have millions. For brevity in what follows, I’ll often refer to a “deep learning NN” as simply a “NN.”

Now let’s refer to Figure 1 in the paper. It illustrates some of the things I said above. The vertical axis measures error, while the horizontal axis measures training iterations. Training involves processing a training vector from T, looking at the resulting value of the loss function, and adjusting the NN’s weights (from how you set them in the previous iteration) in a manner that’s designed to reduce the loss. You do this with each training vector in sequence, which causes the NN’s weights to gradually change to values that (you hope) will result in better overall performance. After a certain predetermined number of training iterations, you stop and measure the overall performance of the NN: the overall error on the training vectors, the overall loss, and the overall error on the test vectors. The last are vectors from D that were not part of the training set.
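Here is a runnable caricature of that procedure (my sketch: a single logistic unit on synthetic data standing in for a real NN; nothing here comes from the paper), tracking loss, training error, and test error as the iterations go by:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic domain D: points in the plane, labeled by a noisy linear rule.
X = rng.normal(size=(400, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=400) > 0).astype(float)
X_train, y_train = X[:200], y[:200]  # the training set T
X_test, y_test = X[200:], y[200:]    # examples never seen during training

w = np.zeros(2)  # the adjustable parameters ("weights") of the toy model
for iteration in range(1, 2001):
    p = sigmoid(X_train @ w)                                               # analog output on T
    loss = -np.mean(y_train * np.log(p) + (1 - y_train) * np.log(1 - p))   # cross-entropy loss
    grad = X_train.T @ (p - y_train) / len(y_train)                        # gradient of the loss
    w -= 0.5 * grad                                                        # adjust weights to reduce the loss
    if iteration % 500 == 0:
        train_err = np.mean((sigmoid(X_train @ w) > 0.5) != y_train)
        test_err = np.mean((sigmoid(X_test @ w) > 0.5) != y_test)
        print(f"iter {iteration:4d}  loss {loss:.3f}  train error {train_err:.2%}  test error {test_err:.2%}")
```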


Figure 1 illustrates the overtraining phenomenon. Initially, more training gives lower error on the test vectors. But then you hit a minimum, with more training after that resulting in higher error on the test set. In old-style NNs, that was the end of the story. With deep-learning NNs, it was discovered that continuing the training well beyond what was previously thought wise, even into the regime where the training error is at or near zero (the so-called Terminal Phase of Training, TPT), can produce a dramatic reduction in test error. This is the great mystery that researchers are trying to understand.


You can read the four points in the paper on page 27071, which are posited as “explanations” of—or at least observations of interesting phenomena that accompany—this unexpected lowering of test error. I read points 1 and 2 as simply saying that the pre-classification portion of the NN [which executes z = h(x, theta), in their terminology] gets so fine-tuned by the training that it is basically doing the classification all by itself, with the classifier per se being left to do only a fairly trivial job (points 3 and 4).

To me, I feel like this “explanation” misses the point. Here is my two-cents worth: I think the whole success of this method is critically dependent on the loss function. The latter has to embody, with good fidelity, the “wisdom” of what constitutes a good answer. If it does, then overtraining the deep-learning NN like crazy on that loss function will cause its millions of weights to “learn” that wisdom. That is, the NN is not just learning what the right answer is on a limited set of training vectors, but it is learning the “wisdom” of what constitutes a right answer from the loss function itself. Because of the subtlety and complexity of that latent loss function wisdom, this kind of learning became possible only with the availability of modern deep-learning NNs with their great complexity and huge number of parameters.

The prion battles continue with a historical note at the end

Now that we know proteins don’t have just one shape, and that 30% of them have unstructured parts, it’s hard to remember just how radical Prusiner’s proposal was back in the 80s: that a particular conformation (PrPSc) of the normal prion protein (PrPC) causes other prion proteins to adopt it, producing disease. Actually Kurt Vonnegut got there first with Ice-9 in “Cat’s Cradle” in 1963. If you’ve never read it, do so; you’ll like it.

There was huge resistance to Prusiner’s idea, but eventually it became accepted, except by Laura Manuelidis (about whom more later). People kept saying the true infectious agent was a contaminant in the preparations Prusiner used to infect mice and that the prion protein (called PrPC) was irrelevant.

The convincing argument that Prusiner was right (for me at least) was PMCA (Protein Misfolding Cyclic Amplification), in which you start with a seed of PrPSc (the misfolded form of the normal prion protein PrPC) and incubate it with a 10,000 fold excess of normal PrPC, which is converted by the evil PrPSc into more of itself. Then you sonicate what you’ve got, breaking it into small fragments, and continue the process with another 10,000 fold excess of normal PrPC. Repeat this 20 times. This should certainly dilute out any infectious agent along for the ride (no living tissue is involved at any point). You still get PrPSc at the end. For details see Cell vol. 121 pp. 195 – 206 ’05.
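Just to put a number on “certainly dilute out”: assuming each of the 20 rounds dilutes whatever came along with the original seed by the 10,000-fold excess of fresh PrPC (my simplifying assumption, not a statement about the exact protocol), the original material ends up diluted by:

```python
# Cumulative dilution of anything riding along with the original seed,
# assuming a 10,000-fold dilution per round (a simplifying assumption).
fold_per_round = 10_000
rounds = 20
total_dilution = fold_per_round ** rounds
print(f"{total_dilution:.0e}")  # 1e+80 -- no conceivable contaminant survives that
```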

Now comes [ Proc. Natl. Acad. Sci. vol. 117 pp. 23815 – 23822 ’20 ] which claims to be able to separate the infectivity of prions from their toxicity. Highly purified mouse prions show no signs of toxicity (neurite fragmentation, dendritic spine density changes) in cell culture, but are still infectious producing disease when injected into another mouse brain.

Even worse, treatment of brain homogenates from prion-infected mice with sodium lauroylsarcosine destroys the toxicity to cultured neurons without reducing infectivity for the next mouse.

So if this paper can be replicated it implies that the prion protein triggers some reaction in the intact brain which then kills the animals.

Manuelidis thought in 2011 that the prion protein is a reaction to infection, and that we haven’t found the culprit. I think the PMCA experiments pretty much destroyed that idea.

So basically we’re almost back to square one with what causes prion disease. Just to keep you thinking, consider this. We can knock out the prion protein gene in mice. Guess what? The mice are normal. However, injection of the abnormal prion protein (PrPSc) into their brains (which is what researchers do to transmit the disease) doesn’t cause any disease.

Historical notes: I could have gone to Yale Med when Manuelidis was there, but chose Penn instead. According to Science vol. 332 pp. 1024 – 1027 ’11, she was one of 6 women in the class and married her professor (Manuelidis), who was 48 when she was 24; she graduated in 1967. By today’s rather Victorian standards of consent, given the power differential between teacher and student, that would probably have gotten both of them bounced out.

So I went to Penn Med, graduating in ’66. Prusiner graduated in ’68. He and I were in the same medical fraternity (Nu Sigma Nu). Don’t think Animal House; medical fraternities were a place to get some decent food, play a piano-shaped object, and very occasionally party. It’s very likely that we shared a meal, but I have no recollection of it.

Graduating along with me was another Nobel Laureate to be — Mike Brown, he of the statins. Obviously a smart guy, but he didn’t seem outrageously smarter than the rest of us.

Cells are not bags of cytoplasm

How Ya Gonna Keep ’em Down on the Farm (After They’ve Seen Paree) is a song from 100+ years ago, when World War I had just ended. In 1920, for the first time, America was 50/50 urban/rural. Now it’s 82% urban.

What does this have to do with cellular biology? A lot. One of the first second messengers to be discovered was cyclic adenosine monophosphate (CAMP). It binds to an enzyme complex called protein kinase A (PKA), activating it and making it phosphorylate all sorts of proteins, changing their activity. But PKA doesn’t float free in the cell. We have some 47 genes for proteins (called AKAPs, for A Kinase Anchoring Proteins) which bind PKA and localize it to various places in the cell. CAMP is made by an enzyme called adenyl cyclase, of which we have 10 types, each localized to various places in the cell (because most of them are membrane embedded). We also have hundreds of G Protein Coupled Receptors (GPCRs) localized in various parts of the cell (apical, basal, primary cilia, adhesion structures etc. etc.), many of which when activated stimulate (by yet another complicated mechanism) adenyl cyclase to make CAMP.

So the cell tries to keep CAMP relatively localized where it is formed (down on the farm, if you will). Why have all these separate ways of making it if it’s going to run all over the cell?

Actually the existence of localized signaling by CAMP is rather controversial, particularly when you can measure how fast it is moving around. All studies previous to Cell vol. 182 pp. 1379 – 1381, 1519 – 1530 ’20 found free diffusion of CAMP.

This study found that CAMP (in low concentrations) was essentially immobile, remaining down on the farm where it was formed.

The authors used a fluorescent analog of CAMP, which allowed them to use fluorescence fluctuation spectroscopy; this gives the probability of an individual molecule occupying a given position in space and time (SpatioTemporal Image Correlation Spectroscopy, STICS).

Fascinating as the study is, it is light years away from physiologic — the fluorescent CAMP analog was not formed by anything resembling a physiologic mechanism (e.g. by adenyl cyclase). A precursor to the fluorescent CAMP was injected into the cell and broken down by ‘intracellular esterases’ to form the fluorescent CAMP analog.

Then the authors constructed a protein made of three parts: (1) a phosphodiesterase (PDE) which broke down the fluorescent CAMP analog, and (2) another protein — the signaler — which fluoresced when it bound the CAMP analog. The two were connected by (3) a protein linker, the ‘ruler’ of the previous post. The ruler could be made of various lengths.

Then levels of fluorescent CAMP were obtained by injecting it into the cell, or stimulating a receptor.

If the sensor was 100 Angstroms away from the PDE, it never showed signs of CAMP, implying that the PDE was destroying the CAMP before it could reach the sensor, and therefore that diffusion was quite slow. This was at low concentrations of the fluorescent CAMP analog. At high injection concentrations the CAMP overcame the sites which were binding it in the cell and moved past the sensor.

It was a lot of work but it convincingly (to me) showed that CAMP doesn’t move freely in the cell unless it is of such high concentration that it overcomes the binding sites available to it.

They made another molecule containing (1) protein kinase A, (2) a ruler, and (3) a phosphodiesterase. If the kinase and phosphodiesterase were close enough together, CAMP never got to PKA at all.

Another proof that phosphodiesterase enzymes can create a zone where there is no free CAMP (although there is still some bound to proteins).

Hard stuff (to explain) but nonetheless impressive, and shows why we must consider the cell a bunch of tiny principalities jealously guarding their turf like medieval city states.

*****

A molecular ruler

Time to cleanse your mind by leaving the contentious world of social issues and entering the realm of pure thought with some elegant chemistry. 

You are asked to construct a molecular ruler with a persistence length of 150 Angstroms. 

Hint #1: use a protein

Hint #2: use alpha helices

Spoiler alert — nature got there first. 

The ruler was constructed and used in an interesting paper on CAMP nanoDomains (about which more in the next post).

It’s been around since 2011 [ Proc. Natl. Acad. Sci. vol. 108 pp. 20467 – 20472 ’11 ] and I’m embarrassed to admit I’d never heard of it.

It’s basically a run of 4 negatively charged amino acids (glutamic acid or aspartic acid) followed by a run of 4 positively charged amino acids (lysine, arginine). This is a naturally occurring motif found in a variety of species. 

My initial (incorrect) thought was that this couldn’t work as the 4 positively charged amino acids would bend at the end and bind to the 4 negatively charged ones. This can’t work even if you make the peptide chain planar, as the positive charges would alternate sides on the planar peptide backbone.

Recall that there are 3.6 amino acids per turn of the alpha helix, meaning that between a run of 4 glutamic/aspartic acids and an adjacent run of 4 lysines/arginines, an ionic bond is certain to form between side chains (not between adjacent amino acids on the backbone, but between residues 3 or 4 positions apart).

Since a complete turn of the alpha helix is only 5.4 Angstroms long, a persistence length of 150 Angstroms means about 28 turns of the helix, using 28 * 3.6 ≈ 100 amino acids, or about 12 blocks of ++++---- charged amino acids.
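For anyone who wants to check the arithmetic, here it is in code (a sketch using the standard alpha helix numbers: 1.5 Angstroms of rise per residue, 3.6 residues and therefore 5.4 Angstroms per turn):

```python
# Back-of-the-envelope alpha helix ruler arithmetic (standard helix geometry assumed)
RISE_PER_RESIDUE = 1.5   # Angstroms of helix length per amino acid
RESIDUES_PER_TURN = 3.6  # amino acids per turn of the alpha helix
PITCH = RISE_PER_RESIDUE * RESIDUES_PER_TURN  # 5.4 Angstroms per complete turn

target_length = 150.0  # Angstroms, the desired ruler length
turns = target_length / PITCH                # ~28 turns
residues = target_length / RISE_PER_RESIDUE  # ~100 amino acids
blocks = residues / 8                        # ~12 blocks of ++++---- (8 residues each)
increment = 8 * RISE_PER_RESIDUE             # 12 Angstroms added per extra block

print(f"{turns:.1f} turns, {residues:.0f} residues, about {blocks:.0f} blocks of 8")
print(f"each extra ++++---- block lengthens the ruler by {increment:.0f} Angstroms")
```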

The beauty of the technique is that by starting with an 8 amino acid ++++---- block, you can add length to your ruler in 12 Angstrom increments. This is exactly what Cell vol. 182 pp. 1519 – 1530 ’20 did. But that’s for the next post.

247 ZeptoSeconds

247 ZeptoSeconds is not the track time of the fastest Marx brother. It is the time a wavelength of light takes to travel across a hydrogen molecule (H2) before it kicks out an electron — the photoelectric effect.

But what is a zeptosecond anyway? There are 10^21 zeptoseconds in a second. That’s a lot: over two thousand times the number of seconds since the big bang, which is only 60 x 60 x 24 x 365 x 13.8 x 10^9 = 4.35 x 10^17. Not that big a deal to a chemist though, since 10^21 is about 1/600th of the number of molecules in a mole.
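The arithmetic, for anyone who wants to check it (Avogadro’s number taken as 6.022 x 10^23):

```python
# Rough arithmetic behind the zeptosecond comparisons
zeptoseconds_per_second = 1e21

seconds_since_big_bang = 60 * 60 * 24 * 365 * 13.8e9   # ~4.35e17 seconds
ratio_to_universe = zeptoseconds_per_second / seconds_since_big_bang  # ~2300-fold

avogadro = 6.022e23
fraction_of_mole = avogadro / zeptoseconds_per_second  # ~600, i.e. 10^21 is ~1/600 of a mole

print(f"seconds since the big bang: {seconds_since_big_bang:.2e}")
print(f"10^21 is about {ratio_to_universe:.0f} times that number")
print(f"and about 1/{fraction_of_mole:.0f} of Avogadro's number")
```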

You can read all about it in Science vol. 370 pp. 339 – 341 ’20 — https://science.sciencemag.org/content/sci/370/6514/339.full.pdf if you have a subscription.

Studying photoionization tells you about the way light is absorbed by molecules, something important to any chemist. The 247 zeptoseconds is the birth time of the emitted electron. It depends on the travel time of the photon across the hydrogen molecule.

They don’t quite say trajectory of the photon, but it is implied, even though in quantum mechanics (which is what we’re dealing with here) there is no such thing as a trajectory. All we have is measurements at time t1 and time t2. We are not permitted to say what the photon is doing between these two times when we’ve done measurements. Our experience in the much larger classical physics world makes us think that there is such a thing.

It is the peculiar doublethink quantum mechanics forces on us. Chemists know this when they think about something as simple as the 2s orbital: spherically symmetric, with electron density on either side of a node. The node is where you never find the electron. Well, if you never find it there, how can the electron have a trajectory from one side of the node to the other?

Quantum mechanics is full of conundrums like that. Feynman warned us not to think about them, but thinking about them will take your mind off the pandemic (and if you’re good, off the election as well).

It’s worth reading the article in Quanta which asks if wavefunctions tunnel through a barrier at speeds faster than light — here’s a link — https://www.quantamagazine.org/quantum-tunnel-shows-particles-can-break-the-speed-of-light-20201020/. It will make your head spin.

Here’s a link to an earlier post about the doublethink quantum mechanics forces on us

https://luysii.wordpress.com/2009/12/10/doublethink-and-angular-momentum-why-chemists-must-be-adept-at-it/

Here’s the post itself

Doublethink and angular momentum — why chemists must be adept at it

Chemists really should know lots and lots about angular momentum, which is intimately involved in 3 of the 4 quantum numbers needed to describe atomic electronic structure. Despite this, I never really understood what was going on until taking the QM course and digging into chapters 10 and 11 of Giancoli’s physics book (pp. 248 – 310, 4th edition).

Quick, what is the angular momentum of a single particle (say a planet) moving around a central object (say the sun)? Well, its magnitude is the mass of the particle times its current speed times its distance from the center (times the sine of the angle between the position and velocity vectors, which is 1 for a circular orbit), but what is its direction? There must be a direction, since angular momentum is a vector. The (unintuitive to me) answer is that the angular momentum vector points upward (resp. downward) from the plane of motion of the planet around the center of mass of the sun-planet system if the planet is moving counterclockwise (resp. clockwise), according to the right hand rule. On the other hand, the momentum of a particle moving in a straight line is just its mass times its velocity vector (i.e. in the same direction).

Why the difference? This unintuitive answer makes sense if, instead of a single point mass, you consider the rotation of a solid (e.g. rigid) object around an axis. At any given time the velocity vectors of different points of the object either point in different directions or, if they point in the same direction, have different magnitudes. Since the object is solid, points farther away from the axis are moving faster. The only sensible thing to do is point the angular momentum vector along the axis of rotation (it’s the only direction which stays constant).

Mathematically, this is fairly simple to do (but only in 3 dimensions). The vector from the axis of rotation to the planet (call it r), and the vector of instantaneous linear velocity of the planet (call it v) do not point in the same direction, so they define a plane (if they do point in the same direction the planet is either hurtling into the sun or speeding directly away, hence not rotating). In 3 dimensions, there is a unique direction at 90 degrees to the plane. The vector cross product of r and v gives a vector pointing in this direction (to get a unique vector, you must use the right or the left hand rule). Nicely, the larger r and v, the larger the angular momentum vector (which makes sense). In more than 3 dimensions there isn’t a unique direction away from a plane, which is why the cross product doesn’t work there (although there are mathematical analogies to it).
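A minimal numerical sketch of the r x v picture (the numbers are roughly Earth-and-sun-like, chosen only for illustration):

```python
import numpy as np

m = 5.97e24                        # mass of an Earth-like planet, kg (illustrative)
r = np.array([1.5e11, 0.0, 0.0])   # position relative to the sun, m (along x)
v = np.array([0.0, 3.0e4, 0.0])    # instantaneous velocity, m/s (along y: counterclockwise)

L = m * np.cross(r, v)             # angular momentum vector, kg m^2 / s

print(L)             # points along +z, perpendicular to the orbital plane (right hand rule)
print(np.dot(L, v))  # 0.0 -- the angular momentum is never along the instantaneous velocity
```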

This also explains why I never understood momentum (angular or otherwise) till now. It’s very easy to conflate linear momentum with force, and I did. Get hit by a speeding bullet and you feel a force in the same direction as the bullet — actually the force you feel is the reaction to the force you exert on the bullet to change its momentum (force is basically defined as anything that changes momentum).

So the angular momentum of an object is never in the direction of its instantaneous linear velocity. But why should chemists care about angular momentum? Solid state physicists, particle physicists etc. etc. get along just fine without it pretty much, although quantum mechanics is just as crucial for them. The answer is simply because the electrons in a stable atom hang around the nucleus and do not wander off to infinity. This means that their trajectories must continually bend around the nucleus, giving each trajectory an angular momentum.

Did I say trajectory? This is where the doublethink comes in. Trajectory is a notion of the classical world we experience. Consider any atomic orbital containing a node (e.g. everything but a 1s orbital). Zeno would have had a field day with them. Nodes are surfaces in space where the electron is never to be found. They separate the various lobes of the orbital from each other. How does the electron get from one lobe to the other by a trajectory? We do know that the electron is in all the lobes because a series of measurements will find the electron in each lobe of the orbital (but only in one lobe per measurement). The electron can’t make the trip, because there is no trip possible. Goodbye to the classical notion of trajectory, and with it the classical notion of angular momentum.

But the classical notions of trajectory and angular momentum still help you think about what’s going on (assuming anything IS in fact going on down there between measurements). We know quite a lot about angular momentum in atoms. Why? Because the angular momentum operators of QM commute with the Hamiltonian operator of QM, meaning that they have a common set of eigenfunctions, so states can be labeled by energy and angular momentum at the same time. We can measure these energies (really the differences between them — that’s what a spectrum really is) and quantum mechanics predicts them better than anything else.

Further doublethink — a moving charge creates a magnetic field, and a magnetic field affects a moving charge, so placing a moving charge in a magnetic field should alter its energy. This accounts for the Zeeman effect (the splitting of spectral lines in a magnetic field). Trajectories help you understand this (even if they can’t really exist in the confines of the atom).

The death of the synonymous codon – V

The coding capacity of our genome continues to amaze. The redundancy of the genetic code has been put to yet another use. Depending on how much you know, skip the following four links and read on. Otherwise all the background you need to understand the following is in them.

https://luysii.wordpress.com/2011/05/03/the-death-of-the-synonymous-codon/

https://luysii.wordpress.com/2011/05/09/the-death-of-the-synonymous-codon-ii/

https://luysii.wordpress.com/2014/01/05/the-death-of-the-synonymous-codon-iii/

https://luysii.wordpress.com/2014/04/03/the-death-of-the-synonymous-codon-iv/

There really is no way around the redundancy producing synonymous codons. If you want to code for 20 different amino acids with only four choices at each position, two positions (4^2 = 16) won’t do. You need three positions, which gives you 64 possibilities (61 after the three stop codons are taken into account) and the redundancy that comes along with it. The previous links show how the redundant codons for some amino acids aren’t redundant at all, but are used to code for the speed of translation, or for exonic splicing enhancers and inhibitors. Different codons for the same amino acid can produce wildly different effects while leaving the amino acid sequence of a given protein alone.
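A quick enumeration makes the counting concrete (a sketch; the three standard stop codons are hard coded):

```python
from itertools import product

BASES = "ACGU"
STOP_CODONS = {"UAA", "UAG", "UGA"}  # the three standard stop codons

doublets = ["".join(p) for p in product(BASES, repeat=2)]
codons = ["".join(p) for p in product(BASES, repeat=3)]
sense_codons = [c for c in codons if c not in STOP_CODONS]

print(len(doublets))      # 16 -- not enough for 20 amino acids
print(len(codons))        # 64
print(len(sense_codons))  # 61 codons left for 20 amino acids, so redundancy is unavoidable
```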

The latest example — https://www.pnas.org/content/117/40/24936 Proc. Natl. Acad. Sci. vol. 117 pp. 24936 – 24946 ’20 — is even more impressive, as it implies that our genome may be coding for way more proteins than we thought.

The work concerns Mitochondrial DNA Polymerase Gamma (POLG), which is a hotspot for mutations (with over 200 known), 4 of which cause fairly rare neurologic diseases.

Normally translation of mRNA into protein begins with something called an initiator codon (AUG), which codes for methionine. However, in the case of POLG, a CUG triplet (not AUG) located in the 5′ leader of the POLG messenger RNA (mRNA) initiates translation almost as efficiently (∼60 to 70%) as an AUG in optimal context. This CUG directs translation of a conserved 260-triplet-long overlapping open reading frame (ORF) called POLGARF (POLG Alternative Reading Frame — surely they could have come up with something more euphonious).

Not only that, but the reading frame is shifted by one (-1), meaning that the protein looks nothing like POLG, with a completely different amino acid composition. “We failed to find any significant similarity between POLGARF and other known or predicted proteins or any similarity with known structural motifs. It seems likely that POLGARF is an intrinsically disordered protein (IDP) with a remarkably high isoelectric point (pI =12.05 for a human protein).” They have no idea what POLGARF does.
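To see why a shifted reading frame gives a completely different protein, here is a toy illustration (the sequence below is invented, not the actual POLG mRNA):

```python
# The same RNA read in two different frames gives two unrelated series of codons,
# and therefore two unrelated amino acid sequences. The sequence is made up.
rna = "AUGGCUGAAACCCUGGAUCGG"

def codons(seq, frame):
    """Split seq into complete triplets starting at the given frame offset (0, 1, or 2)."""
    s = seq[frame:]
    return [s[i:i + 3] for i in range(0, len(s) - 2, 3)]

print(codons(rna, 0))  # ['AUG', 'GCU', 'GAA', 'ACC', 'CUG', 'GAU', 'CGG']
print(codons(rna, 1))  # ['UGG', 'CUG', 'AAA', 'CCC', 'UGG', 'AUC'] -- a completely different series of codons
```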

Yet mammals make the protein. It gets more and more interesting, because the CUG triplet is part of something called a MIR (Mammalian-wide Interspersed Repeat) which (based on comparative genomics across a lot of different animals) entered the POLG gene 135 million years ago.

Using the teleological reasoning typical of biology, POLGARF must be doing something useful, or it would have been mutated away long ago.

The authors note that other mutations (even from one synonymous codon to another — hence the title of this post) could cause other diseases due to changes in POLGARF amino acid coding. So while different synonymous codons might code for the same amino acid in POLG, they probably code for something wildly different in POLGARF.

So the same segment of the genome is coding for two different proteins.

Is this a freak of nature? Hardly. We have an estimated 368,000 mammalian-wide interspersed repeats in our genome — https://en.wikipedia.org/wiki/Mammalian-wide_interspersed_repeat.

Could they be turning on the translation of other proteins that we hadn’t dreamed of? Algorithms looking for protein coding genes probably all look for AUG codons and then look for open reading frames following them.

As usual Shakespeare got there first “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.”

Certainly the paper of the year for intellectual interest and speculation.

Finishing Feynman Early

Have you ever finished what you thought of as a monumental (intellectual or otherwise) task early — finally writing up that paper, senior thesis, grant proposal, etc.? Well, I just did.

It was finishing volume II of the Feynman Lectures on Physics on electromagnetism earlier than I had thought possible.  Feynman’s clarity of writing is legendary, which doesn’t mean that what he is writing about is simple or easy to grasp.  Going through vol. II is pleasant but intense intellectual work.

The volumes are paginated rather differently than most books. Leighton and Sands wrote up each lecture as it was given, so each lecture is paginated separately, and the book isn’t paged sequentially. Always wanting to know how much farther I had to go, I added the lectures’ page counts together, keeping a running total as I read. FYI, the 42 lectures (chapters) contain 531 pages.

So I’m plowing along through chapter 37, “Magnetic Materials,” which starts at page 455 of my running total and contains 13 pages, figuring that there are still 63 intense pages to go, when I get to the last paragraph, which begins “We now close our study of electricity and magnetism.”

Say what? The rest is on fluid dynamics and elasticity (which I’m not terribly interested in). So I’m done. I feel like Wile E. Coyote chasing the Road Runner and suddenly running off a cliff.

However, the last chapter (42) is not to be missed by any of you. It is the clearest explanation of curvature and curved space you’ll ever find. All you need for the first 7 pages (which explain curvature) is high school geometry, the formula for the circumference of a circle (2 * pi * r), and the volume of a sphere ((4/3) * pi * r^3). That’s it. Suitable for your nontechie friends.
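A concrete way to play with what ‘curvature’ means in this spirit (my sketch, not Feynman’s, using the standard fact that a circle of geodesic radius r drawn on a sphere of radius R has circumference 2 * pi * R * sin(r/R)):

```python
import math

R = 6371.0  # radius of the sphere, in km -- Earth-sized, purely for illustration

def circumference_on_sphere(r, R):
    """Circumference of a circle of geodesic radius r drawn on the surface of a sphere of radius R."""
    return 2 * math.pi * R * math.sin(r / R)

for r in (1.0, 100.0, 1000.0, 5000.0):
    flat = 2 * math.pi * r                   # what flat (Euclidean) geometry predicts
    curved = circumference_on_sphere(r, R)   # what you would actually measure on the sphere
    print(f"r = {r:6.0f} km   flat: {flat:9.1f}   on the sphere: {curved:9.1f}   deficit: {flat - curved:7.2f}")
# The bigger the circle, the bigger the shortfall -- that shortfall is what 'curvature' measures.
```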

Of course later on he uses the curvature of the space we live in to explain what Einstein did in general relativity and his principle of equivalence and his theory of gravitation (in another 7 pages).  I wish I’d read it before I tried to learn general relativity.

I did make an attempt to explain the Riemann curvature tensor — but it’s nowhere near as good.

Here a link — https://luysii.wordpress.com/2020/02/03/the-reimann-curvature-tensor/

Here are links to my other posts on the Feynman Lectures on Physics

https://luysii.wordpress.com/2020/04/14/the-pleasures-of-reading-feynman-on-physics/

https://luysii.wordpress.com/2020/05/03/the-pleasures-of-reading-feynman-on-physics-ii/

https://luysii.wordpress.com/2020/05/11/the-pleasures-of-reading-feynman-on-physics-iii/

https://luysii.wordpress.com/2020/06/17/the-pleasures-of-reading-feynman-on-physics-iv/

https://luysii.wordpress.com/2020/08/12/the-pleasures-of-reading-feynman-on-physics-v/