Category Archives: Chemistry (relatively pure)

The Pleasures of Reading Feynman on Physics – IV

Chemists don’t really need to know much about electromagnetism.  Understand Coulombic forces between charges and you’re pretty much done.   You can use NMR easily without knowing much about magnetism aside from the shielding of the nucleus from a magnetic field by  charge distributions and ring currents. That’s  about it.  Of course, to really understand NMR you need the whole 9 yards.

I wonder how many chemists actually have gone this far.  I certainly haven’t.  Which brings me to volume II of the Feynman Lectures on Physics which contains over 500 pages and is all about electromagnetism.

Trying to learn relativity taught me that the way Einstein got into it was figuring out how to transform Maxwell’s equations correctly (James J. Callahan “The Geometry of Spacetime” pp. 22 – 27).  Under the Galilean transformation (which just adds velocities) an observer moving at constant velocity gets a different set of Maxwell equations, which, according to the Galilean principle of relativity (yes, Galileo got there first), shouldn’t happen.

Lorentz figured out a mathematical kludge so Maxwell’s equations transformed correctly, but it was just that,  a kludge.  Einstein derived the Lorentz transformation from first principles.
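For concreteness, here are the two transformations in their standard textbook form (one spatial dimension, relative velocity v along x; my addition, the usual notation rather than anything specific to Feynman or Callahan):

```latex
\[
\text{Galilean:}\quad x' = x - vt,\qquad t' = t
\qquad\qquad
\text{Lorentz:}\quad x' = \gamma\,(x - vt),\qquad t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right),
\qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\]
```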

Feynman back in the 60s realized that the entering 18-year-olds had heard of relativity and quantum mechanics.  He didn’t like watching them being turned off to physics by studying how blocks travel down inclined planes for 2 or more years before getting to the good stuff (e.g. relativity, quantum mechanics).  So there is special relativity (no gravity) starting in volume I lecture 15 (p. 138), including all the paradoxes, time dilation, length contraction, a very clear explanation of the Michelson-Morley experiment, etc. etc.

Which brings me to volume II, which is also crystal clear and contains all the vector calculus (in 3 dimensions anyway) you need to know.  As you probably know, a moving charge produces a magnetic field, and a magnetic field exerts a force on a moving charge.

Well and good, but on p. 144 Feynman asks you to consider two situations:

1. A stationary wire carrying a current and a moving charge outside the wire — because the charge is moving, a magnetic force is exerted on it causing the charge to move toward the wire (circle it actually)

2. A stationary charge and a moving wire carrying a current

Paradox — since the charge isn’t moving there should be no magnetic force on it, so it shouldn’t move.

Then Feynman uses relativity to show that an electric force appears on the stationary charge, so it moves.  The world does not come equipped with coordinates, and any reference frame you choose should give you the same physics.

He has to use the length (Fitzgerald) contraction of a moving object (relativistic effect #1) and the time dilation of a moving object (relativistic effect #2) to produce  an electric force on the stationary charge.
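In the usual notation (my addition, not Feynman’s equations verbatim), those two effects are:

```latex
\[
L = \frac{L_{0}}{\gamma}\ \ \text{(a moving object of rest length } L_{0}\text{ looks shorter)},\qquad
\Delta t = \gamma\,\Delta t_{0}\ \ \text{(a moving clock runs slow)},\qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\]
```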

It’s a tour de force and explains how electricity and magnetism are parts of a larger whole (electromagnetism).  Keep the charge from moving and you see only electric forces, let it move and you see only magnetic forces.  Of course there are reference frames where you see both.


The pleasures of reading Feynman on Physics – II

If you’re tired of hearing and thinking about COVID-19 24/7 even when you don’t want to, do what I did when I was a neurology resident 50+ years ago, making clever diagnoses and then standing helplessly by while patients died.  Back then I read topology, and the intense concentration required to absorb and digest the terms and relationships took me miles and miles away.  The husband of one of my interns was a mathematician, and she said he would dream about mathematics.

Presumably some of the readership are chemists with graduate degrees, meaning that part of their acculturation as such was a course in quantum mechanics.  Back in the day it was pretty much required of chemistry grad students — homage to Prof. Marty Gouterman, who taught the course to us in 1961, 3 years out from his PhD.  Definitely a great teacher.  Here he is now, a continent away — http://faculty.washington.edu/goutermn/.

So for those happy souls I strongly recommend volume III of The Feynman Lectures on Physics.  Equally strongly do I recommend getting the Millennium Edition which has been purged of the 1,100 or so errors found in the 3 volumes over the years.

“Traditionally, all courses in quantum mechanics have begun in the same way, retracing the path followed in the historical development of the subject.  One first learns a great deal about classical mechanics so that he will be able to understand how to solve the Schrodinger equation.  Then he spends a long time working out various solutions.  Only after a detailed study of this equation does he get to the advanced subject of the electron’s spin.”

The first half of volume III is about spin

Feynman doesn’t even get to the Hamiltonian until p. 88.  I’m almost halfway through volume III and there has been no sighting of the Schrodinger equation so far.  But what you will find are clear explanations of bosons and fermions and why they are different, how masers and lasers operate (they are two-state spin systems), how one electron holds two protons together, and a great explanation of covalent bonding.  Then there is great stuff beyond the ken of most chemists (at least this one), such as the Yukawa explanation of the strong nuclear force, and why neutrons and protons are really the same.  If you’ve read about Bell’s theorem proving that ‘spooky action at a distance’ must exist, you’ll see where the numbers come from quantum mechanically that are simply impossible on a classical basis.  Zeilinger’s book “Dance of the Photons” goes into this using .75 (which Feynman shows is just cos(30°)^2).
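For the record, that .75 is just the square of the cosine of 30 degrees:

```latex
\[
\cos^{2}(30^{\circ}) = \left(\frac{\sqrt{3}}{2}\right)^{2} = \frac{3}{4} = 0.75
\]
```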

Although Feynman doesn’t make much of a point about it, the essentiality of ‘imaginary’ numbers (complex numbers) to the entire project of quantum mechanics impressed me.  Without them,  wave interference is impossible.

I’m far from sure a neophyte could actually learn QM from Feynman, but having mucked about using and being exposed to QM and its extensions for 60 years, Feynman’s development of the subject is simply a joy to read.

So get the 3 volumes and plunge in.  You’ll forget all about the pandemic (for a while anyway)


A bombshell that wasn’t

Yesterday, a friend sent me the following

“Chinese Coronavirus Is a Man Made Virus According to Luc Montagnier the Man Who Discovered HIV

Contrary to the narrative that is being pushed by the mainstream that the COVID 19 virus was the result of a natural mutation and that it was transmitted to humans from bats via pangolins, Dr Luc Montagnier the man who discovered the HIV virus back in 1983 disagrees and is saying that the virus was man made.”

Pretty impressive, isn’t it?  Montagnier says that in the 30,000 nucleotide sequence of the new coronavirus SARS-CoV-2 he found sequences of the AIDS virus (HIV1).  Worse, the biolab in Wuhan was working on both HIV1 and coronaviruses.  It seems remote that a human could have been simultaneously infected with both, but these things happen all the time in the lab, intentionally or not.

It really wouldn’t take much to prove Montagnier’s point.  Matching 20 straight nucleotides from HIV1 to the Wuhan coronavirus is duck soup now that we have the sequences of both.  HIV1 has a genome with around 10,000 nucleotides, and the Wuhan coronavirus has a genome of around 30,000.  Recall that each nucleotide can be one of 4 things: A, U, G, C.  In the genome the nucleotides are ordered, and differences in the order mean different things — consider the two words united and untied.

Suppose Montagnier found a 20 nucleotide sequence from HIV1 in the new coronavirus genome. How many possibilities are there for such a sequence?  Well, for a 2 nucleotide sequence there are 4 x 4 = 4^2 = 16, for a 3 nucleotide sequence 4 x 4 x 4 = 4^3 = 64.  So for 20 nucleotides there are 4^20 = 1,099,511,627,776 different possibilities.  Out of the HIV1 genome there are roughly 10,000 such 20-nucleotide windows, and in the coronavirus genome roughly 30,000, so there are about 10,000 times 30,000 = 300,000,000 ways for a 20 nucleotide stretch of one genome to line up against the other.  Dividing 300,000,000 by the 1.1 trillion possible 20-mers gives an expected number of chance matches of about 0.0003 — well under .1%.  If you’re unsatisfied with those odds, then make the match longer.  25 nucleotides should satisfy the most skeptical.
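Here is the same arithmetic as a back-of-the-envelope Python sketch (using the rounded genome lengths above, not the exact ones):

```python
# Back-of-the-envelope: how likely is a 20-nucleotide match by pure chance?
# Uses the rounded genome lengths from the text (10,000 and 30,000 nucleotides).

hiv_length = 10_000          # approximate HIV1 genome length
cov_length = 30_000          # approximate SARS-CoV-2 genome length
k = 20                       # length of the match we demand

possible_kmers = 4 ** k                                      # 1,099,511,627,776 different 20-mers
alignments = (hiv_length - k + 1) * (cov_length - k + 1)     # ~3 x 10^8 ways to line the genomes up

# Expected number of chance matches if the sequences were random
expected_matches = alignments / possible_kmers

print(f"possible 20-mers:       {possible_kmers:,}")
print(f"pairwise alignments:    {alignments:,}")
print(f"expected chance matches: {expected_matches:.6f}")    # ~0.00027 -- well under 0.1%
```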

But there’s a rub — as Carl Sagan said, “Extraordinary claims require extraordinary evidence.”  Apparently Montagnier hasn’t published the sequence of HIV1 he claims to have found in the coronavirus.   If anyone knows what it is, please write a comment.

Then there’s the fact that Montagnier appears to have gone off his rocker. In 2009 he published a paper (in a journal he apparently founded) which concludes that “diluted DNA from pathogenic bacterial and viral species is able to emit specific radio waves” and that “these radio waves [are] associated with ‘nanostructures’ in the solution that might be able to recreate the pathogen”.

Sad.  Just as one of the greatest chemists of the 20th century will be remembered for his crackpot ideas about vitamin C (Linus Pauling), Montagnier may be remembered for this.

On second thought, there is no reason to need Montagnier and his putative sequence at all. The sequences of both genomes are known.  Matching any 20 nucleotide sequence from HIV1 against the roughly 30,000 20-nucleotide windows of the Wuhan coronavirus is a problem right out of Programming 101.  It’s a matter of a few loops, if-thens and go-to’s.  If you’re ambitious you could start with smaller sequences, say 5 – 10 nucleotides, find a match, move to the next largest size and repeat until you find the largest contiguous sequence of nucleotides in HIV1 to be found in the coronavirus; a sketch follows below.
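A minimal sketch of that Programming 101 exercise in Python rather than loops and go-to’s. The read_fasta helper and the file names are mine; you would download the two genome sequences yourself (e.g. GenBank NC_001802 for HIV1 and NC_045512 for SARS-CoV-2), and the growing-k strategy is exactly the one suggested above:

```python
# Sketch: find the longest stretch of nucleotides shared by the HIV1 and
# SARS-CoV-2 genomes.  The file names are placeholders for FASTA files
# downloaded from GenBank (e.g. NC_001802 for HIV1, NC_045512 for SARS-CoV-2).

def read_fasta(path: str) -> str:
    """Return the sequence from a single-record FASTA file as one string."""
    with open(path) as f:
        return "".join(line.strip() for line in f if not line.startswith(">"))

def longest_shared_stretch(seq1: str, seq2: str) -> str:
    """Grow k until no k-nucleotide window of seq1 appears anywhere in seq2,
    which is the strategy suggested in the text."""
    best = ""
    k = 5                                    # start small, as suggested
    while True:
        kmers1 = {seq1[i:i + k] for i in range(len(seq1) - k + 1)}
        hits = [seq2[i:i + k] for i in range(len(seq2) - k + 1)
                if seq2[i:i + k] in kmers1]
        if not hits:
            return best                      # the previous k gave the longest match
        best = hits[0]
        k += 1

# hiv1 = read_fasta("hiv1.fasta")            # hypothetical file names
# sars2 = read_fasta("sars_cov_2.fasta")
# print(longest_shared_stretch(hiv1, sars2))
```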

You can read about the Wuhan lab in an article from Nature in 2017 — https://www.nature.com/news/inside-the-chinese-lab-poised-to-study-world-s-most-dangerous-pathogens-1.21487

Philip Anderson, 1923 – 2020, R.I.P.

Phil Anderson probably never heard of Ludwig Mies van der Rohe, he of the Bauhaus and the famous dictum ‘less is more’, so he probably wasn’t riffing on it when he wrote “More Is Different” in August of 1972 [ Science vol. 177 pp. 393 – 396 ’72 ] — https://science.sciencemag.org/content/sci/177/4047/393.full.pdf.

I was just finishing residency and found it a very unusual paper for Science Magazine.  His Nobel was 5 years away, but Anderson was of sufficient stature that Science published it.  The article was a nonphilosophical attack on reductionism with lots of hard examples from solid state physics. It is definitely worth reading, if the link will let you.  The philosophic repercussions are still with us.

He notes that most scientists are reductionists.  He puts it this way: “The workings of our minds and bodies and of all the matter animate and inanimate of which we have any detailed knowledge, are assumed to be controlled by the same set of fundamental laws, which except under extreme conditions we feel we know pretty well.”

So many body physics/solid state physics obeys the laws of particle physics, chemistry obeys the laws of many body physics, molecular biology obeys the laws of chemistry, and onward and upward to psychology and the social sciences.

What he attacks is what appears to be a logical corollary of this, namely that understanding the fundamental laws allows you to derive from them the structure of the universe in which we live (including ourselves).   Chemistry really doesn’t predict molecular biology, and cellular molecular biology doesn’t really predict the existence of multicellular organisms.  This is because new phenomena arise at each level of increasing complexity, for which laws (e.g. regularities) appear which don’t have an explanation by reducing them to the next more fundamental level below.

Even though the last 48 years of molecular biology and biophysics have shown us a lot of new phenomena and explained their mechanisms, the phenomena themselves really weren’t predictable from the level below.  So they are a triumph of reductionism, and yet —

As soon as you get into biology you become impaled on the horns of the Cartesian dualism of flesh vs. spirit.  As soon as you ask what something is ‘for’ you realize that reductionism can’t help.  As an example I’ll repost an old one in which reductionism tells you exactly how something happens, but is absolutely silent on what that something is ‘for’.

The limits of chemical reductionism

“Everything in chemistry turns blue or explodes”, so sayeth a philosophy major roommate years ago.  Chemists are used to being crapped on, because it starts so early and never lets up.  However, knowing a lot of organic chemistry and molecular biology allows you to see very clearly one answer to a serious philosophical question — when and where does scientific reductionism fail?

Early on, physicists said that quantum mechanics explains all of chemistry.  Well it does explain why atoms have orbitals, and it does give a few hints as to the nature of the chemical bond between simple atoms, but no one can solve the equations exactly for systems of chemical interest.  Approximate the solution, yes, but this is hardly a pure reduction of chemistry to physics.  So we’ve failed to reduce chemistry to physics because the equations of quantum mechanics are so hard to solve, but this is hardly a failure of reductionism.

The last post “The death of the synonymous codon – II” — https://luysii.wordpress.com/2011/05/09/the-death-of-the-synonymous-codon-ii/ –puts you exactly at the nidus of the failure of chemical reductionism to bag the biggest prey of all, an understanding of the living cell and with it of life itself.  We know the chemistry of nucleotides, Watson-Crick base pairing, and enzyme kinetics quite well.  We understand why less transfer RNA for a particular codon would mean slower protein synthesis.  Chemists understand what a protein conformation is, although we can’t predict it 100% of the time from the amino acid sequence.  So we do understand exactly why the same amino acid sequence using different codons would result in slower synthesis of gamma actin than beta actin, and why the slower synthesis would allow a more leisurely exploration of conformational space allowing gamma actin to find a conformation which would be modified by linking it to another protein (ubiquitin) leading to its destruction.  Not bad.  Not bad at all.

Now ask yourself, why the cell would want to have less gamma actin around than beta actin.  There is no conceivable explanation for this in terms of chemistry.  A better understanding of protein structure won’t give it to you.  Certainly, beta and gamma actin differ slightly in amino acid sequence (4/375) so their structure won’t be exactly the same.  Studying this till the cows come home won’t answer the question, as it’s on an entirely different level than chemistry.

Cellular and organismal molecular biology is full of questions like that, but gamma and beta actin are the closest chemists have come to explaining the disparity in the abundance of two closely related proteins on a purely chemical basis.

So there you have it.  Physicality has gone as far as it can go in explaining the mechanism of the effect, but has nothing to say whatsoever about why the effect is present.  It’s the Cartesian dualism between physicality and the realm of ideas, and you’ve just seen the junction between the two live and in color, happening right now in just about every cell inside you.  So the effect is not some trivial toy model someone made up.

Whether philosophers have the intellectual cojones to master all this chemistry and molecular biology is unclear.  Probably no one has tried (please correct me if I’m wrong).  They are certainly capable of mounting intellectual effort — they write book after book about Gödel’s proof and the mathematical logic behind it. My guess is that they are attracted to such things because logic and math are so definitive, general and nonparticular.

Chemistry and molecular biology aren’t general this way.  We study a very arbitrary collection of molecules, which must simply be learned and dealt with. Amino acids are of one chirality. The alpha helix turns one way and not the other.  Our bodies use 20 particular amino acids not any of the zillions of possible amino acids chemists can make.  This sort of thing may turn off the philosophical mind which has a taste for the abstract and general (at least my roommates majoring in it were this way).

If you’re interested in how far reductionism can take us, have a look at http://wavefunction.fieldofscience.com/2011/04/dirac-bernstein-weinberg-and.html

Were my two philosopher roommates still alive, they might come up with something like “That’s how it works in practice, but how does it work in theory?”

Frameshifting

hed oga tet hec atw hoa tet her atw hob ith erp aw

Say what?  It’s a simple sentence made of 3 letter words, frameshifted by one.

the dog ate the cat who ate the rat who bit her paw
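The same trick in a few lines of Python, just to make the frameshift concrete:

```python
sentence = "the dog ate the cat who ate the rat who bit her paw"
letters = sentence.replace(" ", "")

# read in the correct frame (groups of 3 starting at position 0)
print(" ".join(letters[i:i + 3] for i in range(0, len(letters), 3)))
# the dog ate the cat who ate the rat who bit her paw

# read shifted by one letter (a +1 frameshift): gibberish
print(" ".join(letters[i:i + 3] for i in range(1, len(letters), 3)))
# hed oga tet hec atw hoa tet her atw hob ith erp aw
```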

Codons are read as groups of three nucleotides, and frameshifting has always been thought to totally destroy the meaning of a protein, as an entirely different protein is made.

Not so, says PNAS vol. 117 pp. 5907 – 5912 ’20. Normally a frameshifted protein has only 7% sequence identity with the original.  This is about what one would expect given that there are 20 amino acids, and chance coincidence would argue for 5%.  But there are more ways for proteins to be similar than to be identical.  One can classify our amino acids in several ways: charged vs. uncharged, aromatic vs. nonaromatic, hydrophilic vs. hydrophobic, etc. etc.

The authors looked at 2,900 human proteins, frameshifted each original by +1, and compared the hydrophobicity profiles of the two.  Amazingly there was a correlation of .7 between them, despite a sequence identity of only 7%.  Similarly, frameshifting didn’t disturb the propensity for intrinsic disorder.  So frameshifting is embedded in the structure of the universal genetic code, and may have actually contributed to its shaping.  Frameshifting could be an evolutionary mechanism for generating proteins with similar attributes (hydrophobicity, intrinsic order vs. disorder, etc.) but with vastly different sequences.  Evolution, aka natural selection, aka deus ex machina, aka God, could muck about with the ready-made protein and find something new for it to do.   A remarkable concept.

The gag-pol precursor p180 of the AIDS virus is derived from the gag-pol mRNA by translation involving ribosomal frameshifting within the gag-pol overlap region.  The overlap is 241 nucleotides, with pol in the -1 phase with respect to gag (that’s an amazing 80 amino acids).  I was amazed at the efficiency of coding two different proteins (one an enzyme and one structural) from the same stretch, but perhaps they aren’t that different in terms of hydrophobicity (or something else).

I’d love to see the hydropathy profile of the overlap of the two proteins, but I don’t know how to get it (though computing one is straightforward given the sequence, as the sketch below shows).
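Here’s a minimal sketch, assuming you had the overlap sequence in hand: translate a stretch of DNA-alphabet sequence in two frames, compute Kyte-Doolittle hydropathy profiles, and correlate them, roughly the comparison the PNAS authors made. The 60-nucleotide sequence below is a made-up placeholder, not the real gag-pol overlap, and the helper names are mine:

```python
# Sketch: translate a nucleotide stretch in two reading frames and compare
# the Kyte-Doolittle hydropathy profiles of the two resulting peptides.
# The sequence below is a made-up placeholder, NOT the real gag-pol overlap.

from statistics import mean

# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ... GGG
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {b1 + b2 + b3: aa
         for (b1, b2, b3), aa in zip(
             ((x, y, z) for x in BASES for y in BASES for z in BASES), AMINO)}

# Kyte-Doolittle hydropathy scale
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5}

def translate(dna: str, frame: int) -> str:
    """Translate (DNA alphabet, T not U), ignoring stop codons, from the given frame offset."""
    codons = (dna[i:i + 3] for i in range(frame, len(dna) - 2, 3))
    return "".join(CODON[c] for c in codons if CODON[c] != "*")

def hydropathy(peptide: str, window: int = 7) -> list[float]:
    """Sliding-window Kyte-Doolittle profile."""
    return [mean(KD[aa] for aa in peptide[i:i + window])
            for i in range(len(peptide) - window + 1)]

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation of the two profiles, truncated to the shorter one."""
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

overlap = "ATGGCTGGTCTGATCTTCGTGAAACCGGAAGATTGGCGTCACATGAGCTATCAGAACCTG"  # placeholder
profile0 = hydropathy(translate(overlap, 0))
profile1 = hydropathy(translate(overlap, 1))   # +1 frameshift
print(round(pearson(profile0, profile1), 2))
```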

Amyloid

Amyloid goes way back, and scientific writing about it has had various zigs and zags, starting with Virchow (1821 – 1902) who named it because he thought it was made out of sugar.  For a long time it was defined by the way it looks under the microscope, being birefringent when stained with Congo red (which came out 100 years ago), long before we knew much about protein structure (Pauling didn’t propose the alpha helix until 1951).

Birefringence itself is interesting.  Light moves at different speeds through different materials — which is why your legs look funny when you stand in shallow water.  The ratio of the speed of light in vacuum to its speed in the material is the refractive index.   Birefringent materials have two different refractive indexes, depending on the orientation (polarization) of the light passing through them.  So when amyloid is present in fixed tissue on a slide and viewed with polarized light, you see beautiful colors — for pictures and much more please see — https://onlinelibrary.wiley.com/doi/full/10.1111/iep.12330

So there has been a lot of confusion about what amyloid is and isn’t, and even the exemplary Derek Lowe got it wrong in a recent post of his:

“It needs to be noted that tau is not amyloid, and the TauRx’s drug has failed in the clinic in an Alzheimer’s trial.”

But tau fibrils are amyloid, and prions are amyloid, and the Lewy body is made of amyloid too, if you subscribe to the current definition of amyloid as something that shows a cross-beta pattern on X-ray diffraction — https://www.researchgate.net/figure/Schematic-representation-of-the-cross-b-X-ray-diffraction-pattern-typically-produced-by_fig3_293484229.

Take about 500 dishes and stack them on top of each other and that’s the rough dimension of an amyloid fibril.  Each dish is made of a beta sheet.  X-ray diffraction was used to characterize amyloid because no one could dissolve it and study it by X-ray crystallography.

Now that we have cryoEM, we’re learning much more.  I have gone on and on about how miraculous it is that proteins have one or a few shapes — https://luysii.wordpress.com/2010/08/04/why-should-a-protein-have-just-one-shape-or-any-shape-for-that-matter/

So prion strains and the fact that alpha-synuclein amyloid aggregates produce different clinical disease despite having the same amino acid sequence was no surprise to me.

But it gets better.  The prion strains etc. may not be due to different structures, but to different decorations of the same structure by protein modifications.

The same is true for the different diseases that tau amyloid fibrils produce — never mind that they’ve been called neurofibrillary tangles and not amyloid, they have the same cross-beta structure.

A great paper [ Cell vol. 180 pp. 633 – 644 ’20 ] shows how different the tau protofilament from one disease (corticobasal degeneration) is from another (Alzheimer’s disease).  Figure 3 shows the chain as it meanders around, forming one ‘dish’ in the model above.  The meander is quite different in corticobasal degeneration (CBD) and Alzheimer’s.

It’s all the stuff tacked on. Tau is modified on its lysines (some 15% of all amino acids in the beta sheet forming part) by ubiquitination, acetylation and trimethylation, and by phosphorylation on serine.

Figure 3 is worth more of a look because it shows how different the post-translational modifications are on the same amino acid stretch of the tau protein in Alzheimer’s and CBD.  Why has this not been seen before?  Because the amyloid used to be treated with pronase and other enzymes to get better pictures on cryoEM.  Isn’t that amazing?  Someone is probably looking to see if this explains prion strains.

The question arises — is the chain structure in space different because of the modifications, or are the modifications there because the chain structure in space is different?  This could go either way: we have 500+ protein kinases putting phosphate on serine and/or threonine, each recognizing a particular protein conformation around those residues so they don’t phosphorylate everything — ditto for the enzymes that put ubiquitin on proteins.

Fascinating times.  Imagine something as simple as pronase hiding all this beautiful structure.


The ubiquitin wars

Ubiquitin used to be simple.  All it had to do was form an amide between its carboxy terminal glycine and the epsilon amino group of lysine of a target protein, and bingo — the protein was targeted for degradation by the proteasome.

Before proceeding, it’s worth thinking about why this sort of thing doesn’t happen more often, by which I mean amide formation between carboxyl groups on aspartic and glutamic acid on one protein and lysines on the surface of another.  The surface is where these 3 amino acids are likely to be found, because they are charged at physiological pH, meaning it costs energy (and probably entropy) to put them into the relatively hydrophobic interior of a protein, where there isn’t a lot of water around to hide their charges.   Also, every noncyclic protein (which is just about all of them) has a carboxy terminal amino acid — why don’t they link up spontaneously to the lysines on the surface of other proteins?

Well, ubiquitin does NOT link up spontaneously; forming an amide in water is energetically uphill, which is why the process starts with E1 using ATP to activate ubiquitin’s carboxy terminal glycine.  Ubiquitin has a suite of enzymes to do the linking.  Like a double play in baseball, 3 enzymes are involved, which move ubiquitin from E1 (the shortstop) to E2 (the second baseman) to E3 (the first baseman).  We have over 600 E3 enzymes, 40 E2s and 9 E1s.  650/20,000 protein coding genes is a significant number — and the 600 E3s are likely there to provide specificity about just which protein ubiquitin gets linked to.

Addendum 21 Feb — Silly me, I should have added in the nearly 100 genes coding for proteins that remove attached ubiquitins (i.e. the deubiquitinases).

A few more fun facts and then down to business.  First, ubiquitin is so stable that boiling water doesn’t denature it [ Science vol. 365 pp. 502 – 505 ’19 ].  Second, ubiquitin can link to itself, as it contains 7 lysines, at amino acids 6, 11, 27, 29, 33, 48 and 63 of the 76 amino acids contained in the protein.

Polyubiquitin chains are often made up of multiple ubiquitin monomers, with lengths up to 10 [ Nature vol. 462 pp. 615 – 619 ’09 ], meaning that there could be a lot of different ones (7^10 = 282,475,249).  However, chains found in nature seem to use just one type of link, i.e. linking the carboxyl group of one ubiquitin to the same one of the 7 lysines over and over, forming a rather monotonous polymer.

On to the interesting paper, namely the ubiquitin wars inside a macrophage invaded by TB [ Nature vol. 577 pp. 682 – 688 ’20 ].  Ubiquitin initially was thought to be a tag marking a protein for destruction.  It’s much more complicated than that.  A host E3 ubiquitin ligase (ANAPC2, a core subunit of the anaphase promoting complex/cyclosome) promotes the attachment of lysine #11 linked ubiquitin chains to lysine #76 of the TB protein Rv0222.  In some way this helps Rv0222 to suppress the expression of proinflammatory cytokines.

We do know that the ubiquitination of Rv0222 facilitates in some way the recruitment of the protein tyrosine phosphatase SHP1 to the adaptor protein TRAF6 (Tumor necrosis factor Receptor Associated Factor 6), preventing its ubiquitination and activation.  Of interest is the fact that TRAF6 itself is an E3 ubiquitin ligase which acts on many proteins.

Now to continue and show the further complexity of what’s going on inside our cells.  Autophosphorylated IRAK leaves the TLR (Toll Like Receptor) signaling complex, forming a complex with TRAF6 and resulting in the oligomerization of TRAF6.  Somehow this activates TAK1, a member of the MAP3 kinase family, and this leads to the activation of the family of IKappaB kinases, which phosphorylate IKappaB, leading to its proteolysis.  Once IKappaB is removed from NFKappaB, translocation of NFKappaB to the nucleus occurs, where it turns on transcription of cytokines and other proinflammatory genes.

It is really amazing when you think of all the checks and balances going on down there.  How crude our weapons against inflammation are now, compared to what we might have when we know all the mechanisms behind it.

What would Woodward do — take II

“It’s no wonder that truth is stranger than fiction. Fiction has to make sense.”  Mark Twain.

The Harvard Chemistry Department chair arrested?  And for what?  For lying and hiding research work he was doing for China.

“The arrangement between Lieber and the Chinese institution spanned “significant” periods of time between at least 2012 and 2017, according to the affidavit. It says the deal called for Lieber to be paid up to $50,000 a month, in addition to $150,000 per year “for living and personal expenses.”

Who knew betraying your country could be so lucrative?  Of course these are allegations, and have to be proved in court.

What would the great Robert Burns Woodward (https://en.wikipedia.org/wiki/Robert_Burns_Woodward) say to this?  He’s already spinning in his grave over the slings and arrows heaped on pure synthetic organic chemistry.  For details see part of an old post at the end.

Interesting how the department has changed.  No Chinese there at all ’60 – ’62 (even postdocs).  There were several Japanese and Sikh postdocs along with a fair number of happy go lucky Australians.

Chemistry applications can be lucrative.  The new Princeton Chemistry Building was built thanks to Professor Ted Taylor, whose royalties on Alimta (pemetrexed), an interesting molecule with what looks like guanine, glutamic acid, benzoic acid and ethane all nicely stitched together to form an antifolate, paid for it to the tune of over 1/4 of a billion dollars.

It’s interesting to note that the Princeton undergraduate catalog for ’57 – ’58 has Dr. Taylor basically in academic slobbovia — he’s only teaching Chem 304a, a one-semester course, “Elementary Organic Chemistry for Basic Engineers” (not even advanced engineers).

For details please see  — https://luysii.wordpress.com/2011/05/16/princeton-chemistry-department-the-new-oberlin/

What would Woodward do?

Sleeper is one of the great Woody Allen movies from the 70s.  Woody plays Miles Monroe, the owner of (what else?) a health food store who through some medical mishap is frozen in nitrogen and is awakened 200 years later.  He finds that scientific research has shown that cigarettes and fats are good for you.  A McDonald’s restaurant is shown with a sign “Over 795 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 Served”

I returned from my father’s 100th birthday blowout and band camp and began attacking a giant pile of accumulated unread journals.  In the 9 August 2007 Nature (pp. 630 – 631), this Rip van Winkle was amazed to read criticism of a 64 step, 22 year synthesis of an exquisitely complex molecule (azadirachtin) — a molecule in which it is easier to count the number of optically INactive carbons than the optically active ones.  Back in the 60s we were all impressed with how Woodward got the 5 asymmetric centers in a 6 membered ring of reserpine (which was in use as an antihypertensive at the time, and whose fairly common side effect of depression was one of the clues leading to the amine theory of affect).  Rip was surprised to find that the criticism was not that the synthesis was incorrect, but that the project shouldn’t have been done at all.  Apparently a significant body of organic chemists think this way.

Political correctness has left few groups which it is safe to disparage.  With apologies to one of them (Christians) I’ve got to ask “What would Woodward do?”

Have you had your molybdenum today?

Chemists don’t usually think of the products of a chemical reaction barreling off and penetrating another structure.   Because of the equipartition of energy, the energy of a given exothermic chemical reaction quickly gets redistributed into electronic, vibrational and rotational energy and some translational energy.  It’s exactly why blasting a particular bond with exactly the right energy to break it isn’t widely used in synthetic organic chemistry — the energy redistributes over the whole molecule too rapidly.  But that’s exactly what is thought to happen in the molybdenum storage protein according to Proc. Natl. Acad. Sci. vol. 116 pp. 26497 – 26504 ’19.

Back off a bit.  Without molybdenum we’d all be dead, as it is a critical component of nitrogenase, the bacterial enzyme that breaks the nitrogen-nitrogen triple bond so nitrogen can be fixed into the biologic material of plants (and ultimately us).  It takes 225 kiloCalories/mole to break N2 apart (compared to 90 kiloCalories/mole for the carbon carbon bond in ethane).

The paper concerned discusses the molybdenum storage protein of a bacterium  (Azotobacter vinelandii).  The protein is a heterohexamer of 3 alpha and 3 beta subunits with a total molecular mass of 180 kiloDaltons.

The mechanism is cleverness itself — here’s a direct quote from the abstract of the paper. “First, we show that molybdate, ATP, and Mg2+ consecutively bind into the open ATP-binding groove of the β-subunit, which thereafter becomes tightly locked by fixing the previously disordered N-terminal arm of the α-subunit over the β-ATP. Next, we propose a nucleophilic attack of molybdate onto the γ-phosphate of β-ATP, analogous to the similar reaction of the structurally related UMP kinase. The formed instable phosphoric-molybdic anhydride becomes immediately hydrolyzed and, according to the current data, the released and accelerated molybdate is pressed through the cage wall, presumably by turning aside the Metβ149 side chain. A structural comparison between MoSto and UMP kinase provides valuable insight into how an enzyme is converted into a molecular machine during evolution. The postulated direct conversion of chemical energy into kinetic energy via an activating molybdate kinase and an exothermic pyrophosphatase reaction to overcome a proteinous barrier represents a novelty in ATP-fueled biochemistry, because normally, ATP hydrolysis initiates large-scale conformational changes to drive a distant process.”

What drives the molybdate (MoO4) away from the ADP?  Probably electrostatic repulsion between two negative charges in the very low dielectric constant environment of the storage protein (said to be around 7, versus about 80 for water), which does relatively little to shield the charges from each other.
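To get a feel for the numbers, here’s a rough sketch assuming (my assumption, not the paper’s) two single negative charges about 4 Angstroms apart:

```python
# Rough Coulomb repulsion between two unit negative charges at 4 Angstroms,
# in a protein interior (dielectric constant ~7) vs bulk water (~80).
# The 4 Angstrom separation and the use of single charges are simplifying
# assumptions of mine, just to get an order of magnitude.

from math import pi

e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, C^2 / (J m)
N_A = 6.022e23       # Avogadro's number
r = 4.0e-10          # assumed separation, m

def repulsion_kcal_per_mol(eps_r: float) -> float:
    """Coulomb energy U = e^2 / (4 pi eps0 eps_r r), converted to kcal/mol."""
    joules = e ** 2 / (4 * pi * eps0 * eps_r * r)
    return joules * N_A / 4184           # J/molecule -> kcal/mol

print(f"protein interior (eps_r ~ 7): {repulsion_kcal_per_mol(7):.1f} kcal/mol")
print(f"water (eps_r ~ 80):           {repulsion_kcal_per_mol(80):.1f} kcal/mol")
# ~12 kcal/mol vs ~1 kcal/mol -- the low-dielectric cage barely shields the charges
```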

Of course the SN2 reaction is like two billiard balls hitting each other with the leaving group barreling off at about the same velocity as the attacking group. How fast is that?

Pretty fast.  To figure out how fast any chemical entity is moving at 300 K (80 F), just divide 2735 by the square root of its mass in Daltons to get the root mean square speed in meters/second.  So when iodide barrels in to methyl bromide at 243 meters/second, the bromide leaves at around 307 meters/second.

Well, the C – Br bond length is 1.9 Angstroms, and the atomic radii of Br and C are 1.8 and .7 Angstroms, so methyl bromide is about 4.5 Angstroms long, or 4.5 x 10^-10 meters.  At 307 meters/second the bromide ion takes roughly 3 x 10^-3 seconds to go a meter, and about 1.5 x 10^-12 seconds (1.5 picoseconds) to go the length of the methyl bromide molecule.  (Of course this ignores the solvent that’s in the way impeding the bromide anion’s progress — but that’s another story).  I put this numerology in because chemists (including me) usually don’t think about reactions this way and it’s rather humbling to do so.
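The 2735 factor is just the root mean square speed formula, sqrt(3RT/M), evaluated at 300 K with the mass converted from Daltons to kg/mol; a quick sketch of the arithmetic:

```python
# Where the "divide 2735 by the square root of the mass" rule comes from:
# the root-mean-square speed v = sqrt(3RT/M), with M in kg/mol.

from math import sqrt

R = 8.314          # gas constant, J/(mol K)
T = 300.0          # temperature, K (about 80 F)

def rms_speed(mass_daltons: float) -> float:
    """RMS speed in m/s for a particle of the given mass (Daltons = g/mol)."""
    return sqrt(3 * R * T / (mass_daltons / 1000))   # convert g/mol -> kg/mol

print(sqrt(3 * R * T) * sqrt(1000))      # ~2735, the factor quoted above
print(rms_speed(127))                    # iodide, ~243 m/s
print(rms_speed(80))                     # bromide, ~306 m/s

# time for bromide to traverse the ~4.5 Angstrom length of methyl bromide
print(4.5e-10 / rms_speed(80))           # ~1.5e-12 s, i.e. about 1.5 picoseconds
```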

How a chemical measuring stick actually works

The immune system knows something is up when a foreign peptide fragment is presented to it.  Here’s the hand holding the peptide — https://www.researchgate.net/figure/Overall-structure-of-HLA-peptide-complex_fig1_26490512.

There it sits, lying on top of a bed of beta sheets, with two side rails of alpha helices.  Proteins are big, way too big to fit into the hand, so they must be chopped up into peptides no longer than 9 amino acids (see the picture of one lying in state).

So the class assignment for today is to figure out how to design a protein which takes peptides from 10 – 16 amino acids long, and shortens them to 9 amino acids.

Obviously a trick question, because the actual amino acids making up the peptide don’t really matter much.  So somehow the protein is reacting to length rather than chemistry.

Tricky no?

ERAP1 (Endoplasmic Reticulum aminopeptidase associated with Antigen Processing) has figured it out [ Proc. Natl. Acad. Sci. vol. 116 pp. 22709 – 22715 ’19 ].  It is a huge protein (948 amino acids) with four domains forming a large cavity (which it must have to accommodate a 19 amino acid peptide).  The peptide is chopped up from the amino terminus, stopping when the length reaches 9 amino acids.  The active site is at one end of the cavity, and at the other end there is a site which looks like it should cleave the carboxy terminal amino acid, but it doesn’t, because that site is catalytically inactive.  However, even catalytically inactive enzymatic sites have enough structure left to bind the substrate.

So binding of the carboxy terminal amino acid to the back site causes conformational changes, transmitted through various alpha helices to the active site at the other end.  It munches away, removing amino acid after amino acid, until the peptide gets short enough (translation: 9 amino acids) that it doesn’t push on the back site.
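A toy model of the ruler, just to make the logic concrete (this is my cartoon of the mechanism described above, not anything taken from the paper):

```python
# Toy model of ERAP1's molecular ruler: keep clipping residues from the
# amino terminus while the peptide is long enough for its carboxy terminus
# to reach the catalytically dead regulatory site at the far end of the cavity.
# The length threshold of 9 comes from the text; everything else is a cartoon.

def erap1_trim(peptide: str, threshold: int = 9) -> str:
    while len(peptide) > threshold:        # C-terminus still presses on the back site
        peptide = peptide[1:]              # active site removes one N-terminal residue
    return peptide                         # too short to engage the back site: trimming stops

print(erap1_trim("SIINFEKLAPDGSTK"))       # a made-up 15-mer, trimmed to its last 9 residues
```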

Incredibly clever, even though it hurts me as a chemist to see the enzyme essentially ignoring the chemistry of its substrate.

I far prefer this to politics, where data is ignored.  Two examples:

1. From a review of a book by Paul Krugman in the Jan/Feb 2020 Atlantic

“Krugman is substantively correct on just about every topic he addresses.”  Yes, except Peak Oil in 2010, the stock market collapse in Nov 2016, and the coming recession in an article of April 2019.

2. Former Secretary of Labor Robert Reich in the Guardian 22 Dec ’19 — “How Trump has betrayed the working class” — by employing them and raising their wages no doubt.