Category Archives: Chemistry (relatively pure)

It’s why I don’t read novels

You can’t make up stuff like this. A nephrologist whom I consulted about our daughter-in-law’s bout with pre-eclampsia asked me about her brother-in-law when she found out I’d been a neurologist. Long out of practice, I called someone in my call group still practicing, only to find out that his son (who was just a little guy when we practiced) is finishing up his PhD in Chemistry at Princeton. Put this in a novel and no one would believe it.

The reason for the post is that Princeton’s new Chemistry building, built to the tune of .25 gigaDollars, isn’t working very well. According to his son, not all the hoods are functional. There are other dysfunctionalities as well: lack of appropriate space, etc. etc. All is not lost however; the building is so beautiful (if non-functional) that it is used as a movie set from time to time. Any comments from present or past inhabitants of the new building?

Here’s the old post.

Princeton Chemistry Department — the new Oberlin

When I got to grad school in the fall of ’60, most of the other grad students were from East and West coast schools (Princeton, Bryn Mawr, Smith, Barnard, Wheaton, Cal Tech etc. etc.), but there were two guys from Oberlin (Dave Sigman, Rolf Sternglanz), which seemed strange until I looked into it. Oberlin, of course, is a great school for music, but neither of them was a musician. They told me of Charles Martin Hall, Oberlin alum and inventor of the Hall process for Aluminum — still used today. He profited greatly from his invention, founding what is today Alcoa, running and owning a lot of it. He gave tons of money to the Oberlin Chemistry department, which is why it was so good back then (and probably still is).

What does this have to do with Princeton? Princeton’s Charles Hall is emeritus prof Ted Taylor, whose royalties on Alimta (Pemetrexed), an interesting molecule with what looks like guanine, glutamic acid, benzoic acid and ethane all nicely stitched together to form an antifolate, built the new Princeton Chemistry building to the tune of over 1/4 of a billion dollars. Praise be, the money didn’t go into any of the current academic fads (you know what they are), but into good old chemistry.

An article in the 11 May “Princeton Alumni Weekly” (yes, weekly) about the new building contains several other interesting assertions. The old chemistry building is blamed for a number of sins, e.g., “no longer conducive to the pursuit of cutting-edge science in the 21st century”, “hard to recruit world-class faculty and grad students to what was essentially a rabbit warren”, etc. etc. Funny, but we thought the place was pretty good back then.

When the University president (Shirley Tilghman, a world-class molecular biologist prior to assuming the presidency — just Google imprinting) describes Princeton Chemistry as one of Princeton’s “least-strong departments,” you know there are problems. Is this really true? Maybe the readership knows.

Grad school applications are now coming from the ‘very top applicants’ — is it that easy to rate them? This is said not to have been true 10 years ago — wonder how those who entered the department back then and now have PhDs feel about this.

Then there is a picture of a young faculty member, Abby Doyle, who joined the department 6 years after graduating from Harvard in 2002. As I recall there was a lot of comment on this in the earlier incarnation of ChemBark a few years ago.

The new building is supposed to inspire collaboration because of its open space and 75 foot atrium; ‘few walls between the labs and glass is everywhere’. Probably the article was written by an architect. The implication is that all you need for good science is a good building, and that bad buildings can inhibit good science. Anyone out there whose science has blossomed once they were put in a glass cage?

It’s interesting to note that the undergraduate catalog for ’57 – ’58 has Dr. Taylor basically in academic slobbovia — he’s only teaching Chem 304a, a one-semester course, “Elementary Organic Chemistry for Basic Engineers” (not even advanced engineers).

Comments anyone?

No longer looking under the lamppost

Time flies. It’s been over 5 years since I wrote http://luysii.wordpress.com/2009/09/25/are-biochemists-looking-under-the-lamppost/, essentially a long complaint that biochemists (and by implication drug chemistry and drug discovery) were looking at the molecules they knew and loved rather than searching for hidden players in the biochemistry and physiology of the cell.

Things are much better now. Here are 3 discoveries from the recent past, some of which should lead to druggable targets.

#1 FAHFA — a possible new way to treat diabetes. Interested? Take a long chain saturated fatty acid such as stearic acid (C18:0). Now put a hydroxyl group somewhere on the chain (the body has found ways to put them at different sites) — this gives you a hydroxy fatty acid (HFA). Next esterify this hydroxyl group with another fatty acid and you have a Fatty Acid ester of a Hydroxy Fatty Acid (an FAHFA if you will). So what?

Well, fat makes them and releases them into the blood, making them yet another adipokine and further cementing fat’s status as an endocrine organ. Once released, FAHFAs stimulate insulin release and increase glucose uptake in the fat cell when they activate GPR120 (the long chain fatty acid receptor).

A variety of fatty acids can form the ester, one of which is palmitic acid (C16:0), forming Palmitic Hydroxy Stearic Acid (PAHSA), which binds to GPR120. If that weren’t enough, PAHSAs are anti-inflammatory — interested? Read more at [ Cell vol. 159 pp. 238 – 239, 318 – 332 ’14 ]. I don’t think the enzymes forming HFAs are known, and I’m willing to bet that there are other HFAs out there.

#2 Maresin1 (7S,14S-dihydroxy-docosa-4Z,8E,10E,12Z,16Z,19Z-hexaenoic acid to you) is the way you start making Specialized Proresolving Mediators (SPMs). Form an epoxide of one of the double bonds and then do an SN2 ring opening with a thiol (glutathione for one), forming what they call a sulfido-conjugate mediator. It appears to be one of the many ways that inflammation is resolved. It helps resolve E. coli infection in mice at nanoMolar concentration. SPMs limit further neutrophil recruitment and promote macrophage clearance of apoptotic cells and tissue debris. Wouldn’t you like to make a drug like that? Think of the specificity of the enzyme producing the epoxidation of just one of the 6 double bonds. Also a drug target. For details please see PNAS vol. 111 pp. E4753 – E4761 ’14.

#3 Up4A (Uridine Adenosine Tetraphosphate) — as you might expect, it’s an agonist at some purinergic receptors (P2X1, P2Y2, P2Y4), causing vasoconstriction, and at others (P2Y1), vasodilatation. It is released into the colon when enteric neurons are stimulated. Another player whose existence we had no idea about. Certainly we have all the GI and vasodilating drugs we need. If nothing else it will be a pharmacological tool. Again the enzyme making it isn’t known — yet another drug target, possibly. For details see PNAS vol. 111 pp. 15821 – 15826 ’14.

There is a lot more in these 3 papers than can be summarized here.

Who knows what else is out there, and what it is doing? Glad to see people are starting to look.

Watching electrons being pushed

Would any organic chemist like to watch electrons moving around in a molecule? Is the Pope Catholic? Attosecond laser pulses permit this [ Science vol. 346 pp. 336 – 339 ’14 ]. An attosecond is 10^-18 seconds. The characteristic vibrational motion of atoms in chemical bonds occurs at the femtosecond scale (10^-15 seconds). An electron takes 150 attoseconds to orbit a hydrogen atom [ Nature vol. 449 p. 997 ’07 ]. Of course this is macroscopic thinking at the quantum level, a particular type of doublethink indulged in by chemists all the time — http://luysii.wordpress.com/2009/12/10/doublethink-and-angular-momentum-why-chemists-must-be-adept-at-it/.

The technique involves something called pump probe spectroscopy. Here was the state of play 15 years ago [ Science vol. 283 pp. 1467 – 1468 ’99 ]: using lasers it is possible to blast in a short duration (picoseconds, 10^-12, to femtoseconds, 10^-15, seconds) pulse of energy (the pump pulse) at one frequency (usually ultraviolet, so one type of bond can be excited) and then to measure absorption at another frequency (usually infrared) a short time later (to measure vibrational energy). This allows you to monitor the formation and decay of reactive intermediates produced by the pump (as the time between pump and probe is varied systematically).

Time has marched on and we now have lasers capable of producing attosecond pulses of electromagnetic energy (e.g. light).

A single optical cycle of visible light of 6000 Angstrom wavelength lasts 2 femtoseconds. To see this, just multiply the reciprocal of the speed of light (3 * 10^8 meters/second) by the wavelength (6 * 10^3 * 10^-10 meters). To get down to the attosecond range you must use light of a shorter wavelength (e.g. the ultraviolet or vacuum ultraviolet).
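Here’s the arithmetic as a few lines of Python (the last line, extrapolating to a 100 attosecond cycle, is my own addition, not from the paper):

c = 3.0e8               # speed of light, meters/second
wavelength = 6000e-10   # 6000 Angstroms, in meters
print(wavelength / c)   # 2e-15 seconds, i.e. 2 femtoseconds per cycle
print(100e-18 * c)      # a 100 attosecond cycle needs a 3e-8 meter (30 nanoMeter, XUV) wavelength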

The paper didn’t play around with toy molecules like hydrogen. They blasted phenylalanine with UV light. Here’s what they said “Here, we present experimental evidence of ultrafast charge dynamics in the amino acid phenylalanine after prompt ionization induced by isolated attosecond pulses. A probe pulse then produced a doubly charged molecular fragment by ejection of a second electron, and charge migration manifested itself as a sub-4.5-fs oscillation in the yield of this fragment as a function of pump-probe delay. Numerical simulations of the temporal evolution of the electronic wave packet created by the attosecond pulse strongly support the interpretation of the experimental data in terms of charge migration resulting from ultrafast electron dynamics preceding nuclear rearrangement.”

OK, they didn’t actually see the electron dynamics but calculated them to explain their results. It’s the Born-Oppenheimer approximation writ large.

You are unlikely to be able to try this at home. It’s more physics than I know, but here’s the experimental setup. ” In our experiments, we used a two-color, pump-probe technique. Charge dynamics were initiated by isolated XUV sub-300-as pulses, with photon energy in the spectral range between 15 and 35 eV and probed by 4-fs, waveform-controlled visible/near infrared (VIS/NIR, central photon energy of 1.77 eV) pulses (see supplementary materials).”

Now we know why hot food tastes different

An absolutely brilliant piece of physical chemistry explained a puzzling biologic phenomenon that organic chemistry was powerless to illuminate.

First, a fair amount of background

Ion channels are proteins present in the cell membrane of all our cells; in neurons they are responsible for maintaining a potential across the membrane, which has the ability to change abruptly, causing a nerve cell to fire an impulse. Functionally, ligand-activated ion channels are pretty easy to understand. A chemical binds to them, they open, and the neuron fires (or a muscle contracts — same thing). The channels don’t let everything in, just particular ions. Thus one type of channel which binds acetylcholine lets in sodium (not potassium, not calcium), which causes the cell to fire impulses. The GABA[A] receptor (the ion channel for gamma amino butyric acid) lets in chloride ions (and little else), which inhibits the neuron carrying it from firing. (This is why the benzodiazepines and barbiturates are anticonvulsants.)

Since ion channels are full of amino acids, some of which have charged side chains, it’s easy to see how a change in electrical potential across the cell membrane could open or shut them.

By the way, the potential is huge although it doesn’t seem like much. It is usually given as 70 milliVolts (inside negatively charged, outside positively charged). Why is this a big deal? Because the electric field across our membranes is enormous. 70 milliVolts is only 70 x 10^-3 volts, but the cell membrane is quite thin — just 70 Angstroms, or 7 nanoMeters (7 x 10^-9 meters). Divide 7 x 10^-2 volts by 7 x 10^-9 meters and you get a field of 10,000,000 Volts/meter.
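In Python, for anyone who wants to check:

potential = 70e-3              # 70 milliVolts across the membrane, in volts
thickness = 7e-9               # 70 Angstroms = 7 nanoMeters, in meters
print(potential / thickness)   # 1e7, i.e. 10,000,000 Volts/meter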

Now for the main course. We easily sense hot and cold. This is because we have a bunch of different ion channels which open in response to different temperatures. All this without neurotransmitters binding to them, or changes in electric potential across the membrane.

People had searched for some particular sequence of amino acids common to the channels to no avail (this is the failure of organic chemistry).

In a brilliant paper, entropy was found to be the culprit. Chemists are used to considering entropy effects (primarily on reaction kinetics, but on equilibria as well). What happens is that in the open state a large number of hydrophobic amino acids are exposed to the extracellular space. To accommodate them (e.g. to solvate them), water around them must be more ordered, decreasing entropy. This, of course, is why oil and water don’t mix.

As all the chemists among us should remember, the equilibrium constant has components due to enthalpy (e.g. heat) and due to entropy.

The entropy term must be multiplied by the temperature, which is where the temperature sensitivity of the equilibrium constant (in this case open channel/closed channel) comes in. Remember changes in entropy and enthalpy work in opposite directions —

delta G (Gibbs free energy) = delta H (enthalpy) - T * delta S (entropy)

Here’s the paper [ Cell vol. 158 pp. 977 – 979, 1148 – 1158 ’14 ]. They note that if a large number of buried hydrophobic groups become exposed to water on a conformational change in the ion channel, an increased heat capacity should be produced, due to water ordering to solvate the hydrophobic side chains. This should confer a strong temperature dependence on the equilibrium constant for the reaction. Exposing just 20 hydrophobic side chains in a tetrameric channel should do the trick. The side chains don’t have to be localized in a particular area (which is why organic chemists and biochemists couldn’t find a stretch of amino acids conferring cold or heat sensitivity — it didn’t matter where the hydrophobic amino acids were, as long as there were enough of them, somewhere).

In some way this entwines enthalpy and entropy, making temperature-dependent activation U-shaped rather than monotonic. So such a channel is in principle both hot activated and cold activated, with the position of the U along the temperature axis determining which activation mode is seen at experimentally accessible temperatures.
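Here’s a minimal sketch in Python of how a single heat capacity term does this. The thermodynamic numbers are invented for illustration (they are NOT the values from the Cell paper), but the U shape falls right out:

import math

R = 8.314        # gas constant, Joules/(mole * Kelvin)
T0 = 298.0       # reference temperature, Kelvin
dH0 = 100e3      # enthalpy of opening at T0, Joules/mole (assumed)
dS0 = dH0 / T0   # entropy chosen so delta G = 0 at T0 (assumed)
dCp = 10e3       # heat capacity increase on opening, Joules/(mole * Kelvin) (assumed)

def open_fraction(T):
    dH = dH0 + dCp * (T - T0)           # enthalpy of opening grows with T
    dS = dS0 + dCp * math.log(T / T0)   # so does the entropy of opening
    dG = dH - T * dS
    K = math.exp(-dG / (R * T))         # open/closed equilibrium constant
    return K / (1 + K)

for T in (268, 278, 288, 298, 308, 318):
    print(T - 273, "Centigrade:", round(open_fraction(T), 2))
# prints a U shape: 0.92, 0.52, 0.33, 0.5, 0.88, 0.99 (open at both cold and hot ends)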

All very nice, but how many beautiful theories have we seen get crushed by ugly facts. If they really understood what is going on with temperature sensitivity, they should be able to change a cold activated ion channel to a heat activated one (by mutating it). If they really, really understood things, they should be able to take a run of the mill temperature INsensitive ion channel and make it temperature sensitive. Amazingly, the authors did just that.

Impressive. Read the paper.

This harks back to the days when theories of organic reaction mechanisms were tested by building molecules to test them. When you made a molecule that no one had seen before and predicted how it would react you knew you were on to something.

Thrust and Parry about memory storage outside neurons.

First the post of 23 Feb ’14 discussing the paper (between *** and &&& in case you’ve read it already)

Then some of the rather severe criticism of the paper.

Then some of the reply to the criticisms

Then a few comments of my own, followed by yet another old post about the chemical insanity neuroscience gets into when it applies concepts like concentration to very small volumes.

Enjoy
***
Are memories stored outside of neurons?

This may turn out to be a banner year for neuroscience. Work discussed in the following older post is the first convincing explanation I’ve seen of why we need sleep: https://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just where, and in what chemical form, memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories, seriously flawed in my opinion, from the past half century of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory) where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2 which excited neuron . … which excited neuron #n which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory.

#2 CaMKII — more plausible. There’s lots of it in the brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To start doing so, calcium levels inside the neuron must rise. The enzyme is complicated, being comprised of 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The other 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true, but in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the brain to the chloride ion. When it flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

We’ve concentrated so much on neurons, that we may have missed something big. In a similar vein, the function of sleep may be to wash neurons free of stuff built up during the day outside of them.

&&&

In the 5 September ’14 Science (vol. 345 p. 1130), 6 researchers from Finland, Case Western Reserve and U. California (Davis) basically say that the paper conflicts with fundamental thermodynamics so severely that “Given these theoretical objections to their interpretations, we choose not to comment here on the experimental results”.

In more detail: “If Cl− were initially in equilibrium across a membrane, then the mere introduction of immobile negative charges (a passive element) at one side of the membrane would, according to their line of thinking, cause a permanent change in the local electrochemical potential of Cl−, thereby leading to a persistent driving force for Cl− fluxes with no input of energy.” This essentially accuses the authors of inventing a perpetual motion machine.

Then in a second letter, two more researchers weigh in (same page) — “The experimental procedures and results in this study are insufficient to support these conclusions. Contradictory results previously published by these authors and other laboratories are not referred to.”

The authors of the original paper don’t take this lying down. On the same page they discuss the notion of the Donnan equilibrium and say they were misinterpreted.

The paper and the 3 letters all discuss the chloride concentration inside neurons, which they call [Cl-]i. The problem with this sort of thinking (if you can call it that) is that it extrapolates the notion of concentration to very small volumes (such as a dendritic spine), where it isn’t meaningful. This goes on all the time in neuroscience. While between any two small rational numbers there is always another, matter can be sliced only so thinly without getting down to the discrete atomic level. At this level concentration (which is basically a ratio between two very large numbers of molecules, e.g. solute and solvent) simply doesn’t apply.

Here’s a post on the topic from a few months ago. It contains a link to another post showing that even Nobelists have chemical feet of clay.

More chemical insanity from neuroscience

The current issue of PNAS contains a paper (vol. 111 pp. 8961 – 8966, 17 June ’14) which uncritically quotes some work done back in the 80’s and flatly states that synaptic vesicles http://en.wikipedia.org/wiki/Synaptic_vesicle have a pH of 5.2 – 5.7. Such a value is meaningless. Here’s why.

A pH of 5 means that there are 10^-5 Moles of H+ per liter or 6 x 10^18 actual ions/liter.

Synaptic vesicles have an ‘average diameter’ of 40 nanoMeters (400 Angstroms to the chemist). Most of them are nearly spherical. So each has a volume of

4/3 * pi * (20 * 10^-9)^3 = 33,510 * 10^-27 = 3.4 * 10^-23 liters. 20 rather than 40 because volume involves the radius.

So each vesicle contains 6 * 10^18 * 3.4 * 10^-23 = 20 * 10^-5 = .0002 ions.

This is similar to the chemical blunders on concentration in the nano domain committed by a Nobelist. For details please see — http://luysii.wordpress.com/2013/10/09/is-concentration-meaningful-in-a-nanodomain-a-nobel-is-no-guarantee-against-chemical-idiocy/

Didn’t these guys ever take Freshman Chemistry?

Addendum 24 June ’14

Didn’t I ever take it? John wrote the following this AM:

Please check the units in your volume calculation. With r = 10^-9 m, then V is in m^3, and m^3 is not equal to L. There’s 1000 L in a m^3.
Happy Anniversary by the way.

To which I responded

Ouch! You’re correct of course. However even with the correction, the result comes out to 0.2 free protons (or H3O+) per vesicle, a result that still makes no chemical sense. There are many more protons in the vesicle, but they are buffered by the proteins and the transmitters contained within.
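For anyone who wants to redo the corrected arithmetic, here it is in Python (pH 5 assumed for round numbers):

import math

avogadro = 6.022e23
radius = 20e-9                                  # 40 nanoMeter diameter vesicle, radius in meters
volume_m3 = (4.0 / 3.0) * math.pi * radius**3   # about 3.35e-23 cubic meters
volume_L = volume_m3 * 1000                     # John's point: 1 cubic meter = 1000 liters
protons_per_liter = 1e-5 * avogadro             # pH 5 means 10^-5 Moles of H+ per liter
print(protons_per_liter * volume_L)             # about 0.2 free protons per vesicle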

Breaking benzene

Industrially, to break benzene’s aromaticity in order to add an alkyl group using the Friedel-Crafts reaction requires fairly hairy conditions — http://www.chemguide.co.uk/organicprops/arenes/fc.html — e.g. pressure to keep everything liquid and temperatures of 130 – 160 Centigrade.

A remarkable paper [ Nature vol. 512 pp. 413 – 415 ’14 ] uses a titanium hydride catalyst and mild conditions (22 C — room temperature) for little over a day to form a titanium methylcyclopentenyl complex from benzene, which could be isolated and studied spectroscopically.

The catalyst itself is rather beautiful. 3 titaniums, 6 hydrides and 3 C5Me4SiMe3 groups.

Benzene is the aromaticity workhorse of introductory organic chemistry. If you hydrogenate cyclohexene, 120 kiloJoules/mole are given off. Hydrogenating benzene should give off 360 kiloJoules, but because of aromatic stabilization only 208 are given off — implying that aromaticity lowers the energy of benzene by 152 kiloJoules. Clayden uses kiloJoules; I’m used to kiloCalories. To get them divide kiloJoules by 4.19.
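Here’s the arithmetic in Python, including the kiloJoule to kiloCalorie conversion:

cyclohexene = 120                    # kiloJoules given off hydrogenating one double bond
expected = 3 * cyclohexene           # 360 kiloJoules for three isolated double bonds
observed = 208                       # kiloJoules actually given off by benzene
stabilization = expected - observed  # 152 kiloJoules
print(stabilization, stabilization / 4.19)   # 152 kiloJoules, about 36 kiloCalories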

What other magic does transition metal catalysis have in store?

A very UNtheoretical approach to cancer diagnosis

We have tons of different antibodies in our blood. Without even taking mutation into account we have 65 heavy chain genes, 27 diversity segments, and 6 joining regions for them (making 10,530 possibilities) — then there are 40 genes for the kappa light chains and 30 for the lambda light chains, or over 1,200 * 10,530 combinations. That’s without the mutations we know occur to increase antibody affinity. So the number of antibodies probably roaming around in our blood is over a billion (I doubt that anyone has counted them, just as no one has ever counted the neurons in our brain). Antibodies can bind to anything — sugars, fats — but we think of them as mostly binding to protein fragments.
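The combinatorial arithmetic in Python (using the gene counts quoted above; real V(D)J numbers vary by source, and this ignores junctional diversity entirely):

heavy = 65 * 27 * 6    # heavy chain genes * diversity segments * joining regions
print(heavy)           # 10,530 heavy chain possibilities
light = 1200           # the light chain figure quoted above
print(heavy * light)   # over 12 million combinations before any mutation at all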

We also know that cancer is characterized by mutations, particularly in the genes coding for proteins. Many of these mutations have never been seen by the immune system, so they act as neoantigens. So what [ Proc. Natl. Acad. Sci. vol. 111 pp. E3072 – E3080 ’14 ] did was make a chip containing 10,000 peptides and see which of them were bound by antibodies in the blood.

The peptides were 20 amino acids long, with 17 randomly chosen amino acids and a common 3 amino acid linker to the chip. While 10,000 seems like a lot of peptides, it is a tiny fraction (actually about 10^-18) of the 20^17 = 2^17 * 10^17 = 1.3 * 10^22 possible 17 amino acid peptides.
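To see just how tiny a sample this is:

possible = 20 ** 17        # 17 freely chosen positions, 20 amino acids each
print(possible)            # about 1.3 * 10^22
print(10_000 / possible)   # about 7.6 * 10^-19, i.e. roughly 10^-18 of peptide space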

The blood was first diluted 500x so blood proteins other than antibodies don’t bind significantly to the arrays. The assay is disease agnostic. The pattern of binding of a given person’s blood to the chip is called an immunosignature.

What did they measure? 20 samples from each of five cancer cohorts collected from multiple geographic sites, and 20 noncancer samples. A reference immunosignature was generated. Then 120 blinded samples from the same diseases gave 95% classification accuracy. To investigate the breadth of the approach and test sensitivity, the immunosignatures of 75% of over 1,500 historical samples (some over 10 years old) comprising 14 different diseases were used for training; then the other 25% were read blind with an accuracy of over 98% — not too impressive, they need to get another 1,500 samples. Once you’ve trained on 75% of the sample space, you’d pretty much expect the other 25% to look the same.

The immunosignature of a given individual consists of an overlay of the patterns from the binding signals of many of the most prominent circulating antibodies. Some are present in everyone, some are unique.

A 2002 reference (Molecular Biology of the Cell, 4th Edition) states that there are 10^9 antibodies circulating in the blood. How can you pick up a signature on 10K peptides from this? Presumably neoantigens from cancer cells elicit higher affinity antibodies than self-antigens do. High affinity monoclonals can be diluted hundreds of times without diminishing the signal.

The next version of the immunosignature peptide microArray under development contains over 300,000 peptides.

The implication is that each cancer and each disease produces either different antigens and/or different B cell responses to common antigens.

Since the peptides are random, you can’t align the peptides in the signature to natural proteomic space to find out what the antibody is reacting to.

It’s a completely atheoretical approach to diagnosis, but intriguing. I’m amazed that such a small sample of protein space can produce a significant binding pattern diagnostic of anything.

It’s worth considering just what a random peptide of 17 amino acids actually is. How would you make one up? Would you choose randomly, giving all 20 amino acids equal weight, or would you weight the probability of a choice by the percentage of that amino acid in the proteome of the tissue you are interested in? Do we have such numbers? My guess is that proline, glycine and alanine would be the most common amino acids — there is so much collagen around, and these 3 make up a high percentage of the amino acids in the various collagens we have (over 15 at least).
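Here’s a sketch of both schemes in Python. The weighting numbers are invented for illustration; real proteome frequencies would have to be looked up, and they differ by tissue:

import random

amino_acids = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard one-letter codes

def uniform_peptide(n=17):
    # every amino acid equally likely at every position
    return "".join(random.choice(amino_acids) for _ in range(n))

# hypothetical weights: bump up glycine, alanine and proline (collagen-like),
# leave the other 17 equal
weights = [3 if aa in "GAP" else 1 for aa in amino_acids]

def weighted_peptide(n=17):
    # amino acids drawn in proportion to the weights above
    return "".join(random.choices(amino_acids, weights=weights, k=n))

print(uniform_peptide())
print(weighted_peptide())   # shows more G, A and P on average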

Physics to the rescue

It’s enough to drive a medicinal chemist nuts. General anesthetics are an extremely wide ranging class of chemicals, ranging from Xenon (which has essentially no chemistry) to the steroid alfaxalone, which has 56 atoms. How can they possibly have a similar mechanism of action? It’s long been noted that anesthetic potency is proportional to lipid solubility, so that’s at least something. Other work has noted that enantiomers of some anesthetics vary in potency, implying that they are interacting with something optically active (like proteins). However, you should note that sphingosine, which is part of many cell membrane lipids (gangliosides, sulfatides etc. etc.), contains two optically active carbons.

A great paper [ Proc. Natl. Acad. Sci. vol. 111 pp. E3524 – E3533 ’14 ] notes that although Xenon has no chemistry it does have physics. It facilitates electron transfer between conductors. The present work does some quantum mechanical calculations purporting to show that Xenon can extend the highest occupied molecular orbital (HOMO) of an alpha helix so as to bridge the gap to another helix.

This paper shows that Xe, SF6, NO and chloroform cause rapid increases in the electron spin content of Drosophila. The changes are reversible. Anesthetic-resistant mutant strains (mutant in what protein isn’t said) show a different pattern of spin responses to anesthetics.

So they think general anesthetics might work by perturbing the electronic structure of proteins. It’s certainly a fresh idea.

What is carrying the anesthetic-induced increase in spin? Speculations are bruited about. They don’t think the spin changes are due to free radicals. They favor changes in the redox state of metals. Could it be due to electrons in melanin (the prevalent stable free radical in flies)? Could it be changes in spin polarization? Electrons traversing chiral materials can become spin polarized.

Why this should affect neurons isn’t known, and further speculations are given: (1) electron currents in mitochondria, (2) redox reactions where electrons are used to break a disulfide bond.

The article notes that spin changes due to general anesthetics differ in anesthesia resistant fly mutants.

Fascinating paper, and Mark Twain said it best: “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Are Van der Waals interactions holding asteroids together?

A recent post of Derek’s concerned the very weak (high Kd) but very important interactions of proteins within our cells: http://pipeline.corante.com/archives/2014/08/14/proteins_grazing_against_proteins.php

Most of this interaction is due to Van der Waals forces — http://en.wikipedia.org/wiki/Van_der_Waals_force. Shape complementarity (e.g. steric factors) and dipole-dipole interactions are also important.

Although important, Van der Waals interactions have always seemed like a lot of hand waving to me.

Well guess what, they are now hypothesized to be what is holding an asteroid together. Why are people interested in asteroids in the first place? [ Science vol. 338 p. 1521 ’12 ] “Asteroids and comets .. reflect the original chemical makeup of the solar system when it formed roughly 4.5 billion years ago.”

[ Nature vol. 512 p. 118 ’14 ] The Rosetta spacecraft reached the comet 67P/Churyumov-Gerasimenko after a 10 year journey, becoming the first spacecraft to rendezvous with a comet. It will take a lap around the sun with the comet and will watch as the comet heats up and releases ice in a halo of gas and dust. It is now flying triangles in front of the comet, staying 100 kiloMeters away. In a few weeks it will settle into a 30 kiloMeter orbit around the comet. It will attempt to place a lander (Philae) the size of a washing machine on its surface in November. The comet is 4 kiloMeters long.

[ Nature vol. 512 pp. 139 – 140, 174 – 176 ’14 ] A kiloMeter-sized near-Earth asteroid called (29075) 1950 DA (how did they get this name?) is covered with sandy regolith (heterogeneous material covering solid rock; on Earth it includes dust, soil and broken rock). The asteroid rotates every 2+ hours, and it is so small that gravity alone can’t hold the regolith to its surface. An astronaut could scoop up a sample from its surface, but would have to hold on to the asteroid to avoid being flung off by the rotation. So the asteroid must have some degree of cohesive strength. The strength required to hold the rubble together is 64 Pascals — about the pressure that a penny exerts on the palm of your hand. A Pascal is 1/101,325 of atmospheric pressure.
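A rough check on the penny comparison (the penny’s mass and diameter are my numbers, not the paper’s):

import math

mass = 2.5e-3                        # a modern US penny, kilograms
diameter = 19.05e-3                  # meters
area = math.pi * (diameter / 2)**2   # about 2.85e-4 square meters
print(mass * 9.8 / area)             # about 86 Pascals, the same ballpark as 64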

They think the strength comes from van der Waals interactions between small (1 – 10 micron) grains — making it fairy dust. It’s rather unsatisfying as no one has seen these particles.

The ultimate understanding of the large multi-protein and RNA machines (ribosome, spliceosome, RNA polymerase etc. etc. ) without which life would be impossible will involve the very weak interactions which hold them together. Along with permanent dipole dipole interactions, charge interactions and steric complementarity, the van der Waals interaction is high on anyone’s list.

Some include dipole dipole interactions as a type of van der Waals interaction. The really fascinating interaction is the London dispersion force. These are attractions seen between transient induced dipoles formed in the electron clouds surrounding each atomic nucleus.

It’s time to attempt to surmount the schizophrenia which comes from trying to see how quantum mechanics gives rise to the macroscopic interactions between molecules which our minds naturally bring to matters molecular (with a fair degree of success).

Steric interactions come to mind first — it’s clear that an electron cloud surrounding molecule 1 should repel another electron cloud surrounding molecule 2. Shape complementarity should allow two molecules to get closer to each other.

What about the London dispersion forces, which are where most of the van der Waals interaction is thought to be? We all know that quantum mechanical molecular orbitals are static distributions of electron probability. They don’t fluctuate (at least the ones I’ve read about). If something is ‘transiently inducing a dipole’ in a molecule, it must be changing the energy level of the molecule somehow. All dipoles involve separation of charge, and this always requires energy. Where does it come from? The kinetic energy of the interacting molecules? Macroscopically it’s easy to see how a collision between two molecules could change the vibrational and/or rotational energy levels of a molecule. What does a collision between molecules look like in terms of the wave functions of both? I’ve never seen this. It has to have been worked out for single particle physics in accelerators, but that’s something I’ve never studied.

One molecule inducing a transient dipole in another, which then induces a complementary dipole in the first molecule, seems like a lot of handwaving to me. It also appears to be getting something for nothing, contradicting the second law of thermodynamics.

Any thoughts from the physics mavens out there?

Old dog does new(ly discovered) tricks

One of the evolutionarily oldest enzyme classes is the aaRSs (amino acyl tRNA synthetases). Every cell has them, including bacteria. Life as we know it wouldn’t exist without them. Briefly, they load tRNA with the appropriate amino acid. If this is Greek to you, look at the first 3 articles in https://luysii.wordpress.com/category/molecular-biology-survival-guide/.

Amino acyl tRNA synthetases are enzymes of exquisite specificity, having to correctly match up 20 amino acids to some 61 different types of tRNAs. Mistakes in the selection of the correct amino acid occur once every 10,000 to 100,000 aminoacylations, and in the selection of the correct tRNA once every 1,000,000. The lower tRNA error rate is due to the fact that tRNAs are much larger than amino acids, so more contacts between enzyme and tRNA are possible.

As the tree of life was ascended from bacteria over billions of years, 13 new protein domains which have no obvious association with aminoacylation have been added to aaRS genes. More importantly, the additions have been maintained over the course of evolution (with no change in the primary function of the synthetase). Some of the new domains are appended to each of several synthetases, while others are specific to a single synthetase. The fact that they’ve been retained implies they are doing something that natural selection wants (teleology inevitably raises its ugly head with any serious discussion of molecular biology or cellular physiology — it’s impossible to avoid).

[ Science vol. 345 pp. 328 – 332 ’14 ] looked at what mRNAs some 37 different aaRS genes were transcribed into. Six different human tissues were studied this way. Amazingly, 79% of the 66 in-frame splice variants removed or disrupted the aaRS catalytic domain. The aaRS for histidine had 8 in-frame splice variants, all of which removed the catalytic domain. 60 of the 70 variants losing the catalytic domain (they call these catalytic nulls) retained at least one of the 13 domains added in higher eukaryotes. Some of the transcripts were tissue specific (e.g. present in some of the 6 tissues but not all).

Recent work has shown roles for specific AARSs in a variety of pathways — blood vessel formation, inflammation, immune response, apoptosis, tumor formation, p53 signaling. The process of producing a completely different function for a molecule is called exaptation — to contrast it with adaptation.

Up to now, when a given protein was found to have enzymatic activity, the book on what that protein did was closed (with the exception of the small GTPases). End of story. Yet here we have cells spending the metabolic energy to make an enzymatically dead protein (aaRSs are big — the one for alanine has nearly 1,000 amino acids). Teleology screams — what is it used for? It must be used for something! This is exactly where chemistry is silent. It can explain the incredible selectivity and sensitivity of the enzyme, but not what it is ‘for’. We have crossed the Cartesian dualism between flesh and spirit.

Could this sort of thing be the tip of the iceberg? We know that splice variants of many proteins are common. Could other enzymes whose function was essentially settled once substrates were found, be doing the same thing? We may have only 20,000 or so protein coding genes, but 40,000, 60,000, . . . or more protein products of them, each with a different biological function.

So aaRSs are very old molecular biological dogs, who’ve been doing new tricks all along. We just weren’t smart enough to see them (’till now).

Novels may have only 7 basic plots, but molecular biology continues to surprise and enthrall.
