The New York Times and NOAA flunk Chem 101

As soon as budding freshman chemists get into their first lab they are taught about significant figures. Thus 3/7 = .4 (not .428571, which is true numerically but not experimentally). Data should never be reported with more significant figures than the actual measurement provides.
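For the programmers in the audience, here's a minimal Python sketch of rounding to a given number of significant figures. Python's built-in round() counts decimal places, so we rescale by the number's order of magnitude first:

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    # shift the rounding position by the number's order of magnitude
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# 3/7 is 0.428571... numerically, but with one significant figure
# of experimental precision it should be reported as 0.4
print(round_sig(3 / 7, 1))  # 0.4
```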

This brings us to yesterday’s front page story (with the map colored in red), “2014 Breaks Heat Record, Challenging Global Warming Skeptics.” Well, it did if you believe that a .02 degree centigrade difference in global mean temperature is significant. The inconvenient fact that the change was this small was not mentioned until the 10th paragraph, where it was also noted that .02 C is within experimental error. Do you have a thermometer that measures temperatures that exactly? Most don’t, and I doubt that NOAA does either. Amusingly, the eastern USA was the one area which didn’t show the rise. Do you think that measurements here are less accurate than those in Africa, South America or Eurasia? Could it be the other way around?

It is far more correct to say that global warming has essentially stopped for the past 14 years, as mean global temperature has been basically the same during that time. This is not to say that we aren’t in a warm spell. Global warming skeptics (myself included) are not saying that CO2 isn’t a greenhouse gas, and they are not denying that it has been warm. However, I am extremely skeptical of models predicting a steady rise in temperature that have failed to predict the current decade and a half of stasis in global mean temperature. Why should such models be trusted to predict the future when they haven’t successfully predicted the present?

It reminds me of the central dogma of molecular biology years ago “DNA makes RNA makes Protein”, and the statements that man and chimpanzee would be regarded as the same species given the similarity of their proteins. We were far from knowing all the players in the cell and the organism back then, and we may be equally far from knowing all the climate players and how they interact now.

Framingham shows us that there is more to biology than genetics

If you have two copies of a particular variant (rs9939609) of the FTO gene (FaT mass and Obesity associated gene) you are likely to weigh 7 pounds more than if you have neither. Pretty exciting stuff for the basic scientist, given the problems obesity causes (or at least is associated with). The study involved 39,000 people [ Science vol. 316 pp. 889 – 894 ’07 ]. At the end of the post, I’ll have a lot of technical stuff about just what FTO is thought to do inside the cell, but that’s not why I’m posting this.

Framingham, Massachusetts is a town about 30 miles west of Boston. Thanks to the cooperation of its citizenry, it has taught us huge swaths of human biology since the study began nearly 70 years ago. Briefly, the Framingham Heart Study (FHS) was initiated in 1948, when 5,209 people were enrolled in the original cohort; since then, the study has come to be composed of four separate but related populations. The Framingham Offspring Study began in 1971, consisting of 5,124 individuals who represented the children of the original cohort population and their spouses. Participants in the offspring study were given physical examinations and detailed questionnaires at regular intervals starting in 1972, with a total of eight waves completed through 2008. The Body Mass Index (BMI) was calculated from measured height and weight. The offspring cohort was born over a 40-year period, with participants ranging in age from their teens to their late 50s at the time of study onset in 1971. In addition to providing survey and examination data, a large fraction of participants (73.0%, 3,742 individuals) had their DNA genotyped using the 100K Affymetrix array. Genotypes at the rs9939609 allele of FTO were extracted using PLINK from data contained in the Framingham SHARe database.

Given the same gene, its effects should be constant through time, other things being equal. The following work [ Proc. Natl. Acad. Sci. vol. 112 pp. 354 – 359 ’15 ] mined the Framingham study to see if when you were born mattered to how fat you became if you carried the fat variant. There were 8 waves of data collection from ’71 to ’08. Those born before ’42 showed less penetrance of the FTO gene.

Figure 1 p. 356 is particularly impressive. Everyone became heavier as they got older. This is because height declines with age, raising BMI even in the presence of constant weight. As far as I know, the following explanation (from an earlier post of mine) is original: “People lose height as they age, yet the BMI is quite sensitive to it (remember the denominator has height squared). The great thing about BMI is that it’s easily measured, and doesn’t rely on what people remember about their weight or their height. Well, as a high school basketball player my height was 6′ 1″+; now (at age 75) it’s 6′ 0″. So even with constant weight my BMI goes up.

Well, it’s time to do the calculation to see what a fairly common shrinkage from 73.5 inches to 72 would do to the BMI (at a constant weight). Surprisingly, it is not trivial: (72/73.5) * (72/73.5) = .9596. So the divisor is 4% less, meaning the BMI is 4% more, which is almost exactly what the low point on the curve does with each passing decade after 50 ! ! ! This might even be an original observation, and it would explain a lot.”
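The shrinkage arithmetic is easy to check. Here's a minimal Python sketch using the standard US-units BMI formula (the 190 pound weight is arbitrary, chosen only for illustration; since the weight cancels, any value gives the same percentage):

```python
# Check the claim: shrinking from 73.5 to 72 inches at constant
# weight raises BMI by about 4%, because height enters squared.
def bmi(weight_lb, height_in):
    # standard US-units BMI formula: 703 * pounds / inches^2
    return 703 * weight_lb / height_in ** 2

weight = 190  # pounds; arbitrary, cancels out of the ratio
young, old = bmi(weight, 73.5), bmi(weight, 72.0)
print(f"BMI at 73.5 in: {young:.1f}, at 72 in: {old:.1f}")
print(f"increase: {100 * (old / young - 1):.1f}%")  # about 4.2%
```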

What is impressive about figure 1, is that those born before 1942 with two copies of the risk allele weren’t much heavier than those with one or no copies of the risk allele. This was true at all ages measured (remember these people were sequentially followed). Those born after 1942 carrying two copies of the high risk allele were 2 – 4 pounds heavier (again measured at all ages).

This is as good a proof as one could hope for that environment affects gene expression, something we all assumed instinctively. There is no way to repeat the experiment, except to start a new one in the future; as this work shows, it would take place in a different environment, which should make a difference. MDs gradually woke up to the fallacy of using historical rather than concurrent controls, particularly in studies of therapies to prevent heart attack and stroke, as the rates of both dropped significantly in the past 50 years, and survival from individual heart attacks and strokes also improved.

So what does FTO actually do? Naturally anyone dealing with strokes wants to know as much as possible about one of the largest risk factors — obesity. What follows is a fairly undigested copy of my notes over the years on papers concerning FTO. I make no attempt to provide the relevant background, although most readers will have some. It’s interesting to see how our knowledge about FTO has grown over the years. Enjoy ! !

[ Science vol. 316 p. 185, 889 – 894 ’07 ] FTO was first found in type II diabetics by looking for single nucleotide polymorphisms distinguishing 1924 UK type II diabetics from 2938 UK controls (were southeast Asians included?). Subsequently, larger populations (3757 type IIs and 5346 controls) were independently studied and the findings replicated. [ Cell vol. 134 p. 714 ’08 ] — The association hasn’t held up in the Han Chinese.

The FTO gene is found on chromosome #16. 16% of white adults have two copies of the variant (46% have one copy). They are 1.67 times more likely to be obese. At this point (13 Apr ’07) no one knows what the gene does.

FTO is a gene of unknown function in an unknown pathway that was originally cloned as a result of the fused-toe mutant mouse, which results from a 1.6 megaBase deletion of mouse chromosome #8. The deletion removes some 6 genes.

[ Cell vol. 131 p. 827 ’07 ] A blurb about something to be published in Science. This work shows that FTO codes for a nucleic acid demethylase. It has the enzymatic activity of a 2 oxo-glutaric acid oxygenase. The enzyme removes methyl groups from 3 methyl thymine (in DNA) and 3 methyl uracil (in RNA). The SNPs linking FTO to obesity are in introns of the gene. In mice, the mRNA for FTO is highly enriched in the hypothalamus. Levels of FTO mRNA drop by 60% in fasting mice.

[ Science vol. 318 pp. 1469 – 1472 ’07 ] The Science paper at last. The gene product catalyzes the Fe++ and 2-oxoglutaric acid dependent demethylation of 3 methyl thymine (which may not be the relevant substrate) in single stranded DNA, with production of succinic acid, formaldehyde, and CO2. FTO is found in the nucleus in transfected cells. The mRNA for FTO is most abundant in the brain, particularly in hypothalamic nuclei governing energy balance. FTO is inhibited by Krebs cycle intermediates (isn’t 2 oxoglutarate a Krebs cycle intermediate?), particularly fumaric acid.

[ Science vol. 334 pp. 569 – 571 ’11 ] FTO removes methyl groups from 3 methylThymine and 3 methylUridine in single stranded DNA and RNA (ssDNA, ssRNA). The present work shows FTO converts 6 methylamino Adenine to adenine in RNA. FTO associates with speckles containing RNA splicing factors and RNA polymerase II.

[ Nature vol. 457 p. 1095 ’09 ] Mice lacking FTO were normal at birth, but at 6 weeks weighed 30 – 40% less than normal mice (or haploinsufficients). This was due to loss of white fat — which was nearly completely absent at 15 months. The mutants ate more (in proportion to their body weight) than normal. On a high fat diet, both groups gained less weight than normals. Mice lacking FTO use more energy while not moving much.

[ Nature vol. 458 pp. 894 – 898 ’09 ] Loss of FTO in mice leads to postnatal growth retardation and a significant reduction both in fat and in lean body mass. The leanness is due to increased energy expenditure and sympathetic activation, despite decreased spontaneous motor activity and relative hyperphagia.

[ Proc. Natl. Acad. Sci. vol. 107 pp. 8404 – 8409 ’10 ] Carriers of the fat allele of FTO have smaller brains (8% smaller in the frontal lobes, 12% smaller in the occipital lobes). The brain differences weren’t due to differences in cholesterol, hypertension or white matter hyperintensities, so FTO risk isn’t a surrogate for the metabolic changes of obesity. The study was done in 206 cognitively normal adults (average age 76). Every 1 unit increase in BMI was associated with a 1 – 1.5% reduction in brain volume in a variety of brain regions.

The highest expression of FTO is in the cerebral cortex. Whether expression in the hypothalamus changes after food deprivation is controversial.

It is known that obesity (BMI > 30) is associated with smaller brains. In this group temporal lobe atrophy was found in people with higher BMI but not in people with risk allele of FTO.

There was no effect of BMI on brain size in noncarriers of the FTO allele. So FTO status may influence the effect of BMI on the brain.

[ Cell vol. 149 pp. 1635 – 1646 ’12 ] A study of just what 6 methylamino adenine (m6A) is doing and where in the genome it is doing it. m6A is the physiologically relevant target of FTO. It is found in tRNA, rRNA and mRNA. In fact m6A is found in 7,676 different mRNAs. The modification is markedly increased throughout brain development. m6A sites are enriched near stop codons and in 3′ untranslated regions (3′ UTRs). Even more interestingly, there is an association between m6A and microRNA binding sites in the 3′ UTRs ! ! ! m6A is not enriched at splice junctions. 30% of genes are said to have microRNA binding sites, but 67% of the 3′ UTRs containing m6A have microRNA binding sites. However, the two can’t overlap in the 3′ UTR. Many features of m6A localization are the same in man and mouse.

[ Nature vol. 490 pp. 267 – 272 ’12 ] In some way the SNP rs7202116 in FTO is associated with phenotypic variability per se. No other locus causes BMI variability this way.

[ Proc. Natl. Acad. Sci. vol. 110 pp. 2557 – 2562 ’13 ] FTO is widely expressed, with highest levels in brain, particularly the hypothalamus. FTO expression in the hypothalamus is decreased after a 48 hour fast, and increased after a 10 week exposure to a high fat diet.

Carriers of the obesity promoting allele are hyperphagic and show altered (how?) macronutrient preference. This work shows that cells lacking FTO show decreased activation of the mTORC1 pathway, decreased rates of mRNA translation, and increased autophagy, all of which helps explain the stunted growth seen in humans homozygous for FTO mutations.

FTO is rapidly degraded when cells are deprived of amino acids (this decreases mTORC1 activity, making it a part of the physiological response to starvation). How this relates to the demethylase activity of FTO isn’t known (yet). The demethylase action is crucial for its ability to sustain mTORC1 activity in the face of amino acid deprivation.

[ Nature vol. 507 pp. 309 – 310, 371 – 375 ’14 ] Amazingly, the association between obesity and FTO involves another gene (IRX3) which is 500 kiloBases away. This was determined by chromosome conformation capture (CCC). The promoter of IRX3 physically interacts with the first intron of FTO; this was found in human cell lines and in other organisms. Obesity linked SNPs are associated with IRX3 expression in these samples, but not with expression of FTO. Mice lacking a functional copy of IRX3 have 25 – 30% lower body weight than controls (primarily due to loss of fat mass and an increase in BMR with browning of white fat).

There is another such case: an enhancer in an intron of LMBR1 regulates the developmental gene SHH found over a megaBase away. Mutations in the enhancer can cause limb malformations due to altered SHH expression.

Do enzymes chase their prey?

Do enzymes chase their prey? At first thought, this seems ridiculous. However, people have been measuring diffusion of substances in water for over a century; even Einstein worked on it (his paper on Brownian motion). So it’s fairly easy to measure the diffusion of an enzyme in water. Several enzymes (catalase, one of the most efficient enzymes known, and urease) diffuse faster when their substrate is present [ Nature vol. 517 pp. 149 – 150, 227 – 230 ’15 ]. The hydrolysis of urea by urease and the conversion of H2O2 to O2 and water by catalase enhance the molecular diffusion of the enzymes (this is called anomalous diffusion). If you inhibit catalase enzymatic activity using azide, the anomalous diffusion disappears (even though there’s still plenty of H2O2 around). This work also showed that the rate of diffusion of catalase, urease and 2 more enzymes correlates with the heat produced by the reaction catalyzed.

Heating the catalytic center of catalase (using a short laser pulse) produces the same anomalous diffusion. Proteins exist in a world in which Brownian motion is governed by viscous forces rather than by inertia, so coasting (a la Galileo and Newton’s law of inertia) isn’t an option — continuous force generation is required.

Heat generated from each catalytic cycle could be transmitted through the enzyme as a pressure wave. For this to happen the catalytic center must NOT be at the center of mass of the enzyme, so that the pressure wave creates differential stress at the enzyme-solvent interface (which should propel the enzyme). They call this the chemoacoustic effect.

Molecular dynamics simulations suggest that the transmission of energy through a protein can be quite fast (5 Angstroms/picoSecond) and nonuniformly distributed.

Some enzymes have a near perfect catalytic efficiency. Every time a substrate hits them, the substrate is converted to product. Examples include catalase, acetyl cholinesterase, fumarase, and carbonic anhydrase. Diffusion-limited collision rates in solution are on the order of 100 million to a billion per molar per second (rate constants of 10^8 – 10^9 M^-1 s^-1).
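To put a rough number on those collision rates, here's a back-of-envelope Smoluchowski estimate in Python. The diffusion coefficients and radii below are assumed, order-of-magnitude values (not measurements), and the answer lands at the diffusion limit; real "perfect" enzymes come in somewhat below it, since only a small patch of their surface is catalytic:

```python
import math

# Smoluchowski diffusion-limited encounter rate constant:
# k = 4*pi*(D_A + D_B)*(r_A + r_B)*N_A
N_A = 6.022e23          # Avogadro's number, per mole
D_enzyme = 1e-10        # m^2/s, typical large protein (assumed)
D_substrate = 1e-9      # m^2/s, typical small molecule (assumed)
r_enzyme = 3e-9         # m, rough protein radius (assumed)
r_substrate = 0.2e-9    # m, rough small-molecule radius (assumed)

k = 4 * math.pi * (D_enzyme + D_substrate) * (r_enzyme + r_substrate) * N_A
k_molar = k * 1000      # convert m^3/(mol*s) to L/(mol*s), i.e. M^-1 s^-1
print(f"k ~ {k_molar:.1e} M^-1 s^-1")
```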

Could this be a product of evolution (to make enzymes actively search out substrates?). Note, this won’t work if the catalytic center of the enzyme is in the center of mass.

I doubt that much catalytic efficiency is gained by having a huge protein molecule sluggishly move through the cytoplasm. Why? The molecular mass of H2O2 is 34 Daltons (vs. 18 for water), so it moves somewhat more slowly, but a water molecule at 20 C moves at about 590 meters/second. Of course it doesn’t get very far before it bumps into another water molecule and gets deflected.
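The 590 meters/second figure comes straight from kinetic theory. A short Python check using the mean thermal speed, sqrt(8 k_B T / pi m), with standard constants:

```python
import math

# Mean thermal speed of a molecule from kinetic theory:
# v_mean = sqrt(8*k_B*T / (pi*m))
k_B = 1.380649e-23      # Boltzmann constant, J/K
amu = 1.66054e-27       # kg per Dalton
T = 293.15              # 20 C in kelvin

def mean_speed(mass_da):
    return math.sqrt(8 * k_B * T / (math.pi * mass_da * amu))

print(f"H2O  (18 Da): {mean_speed(18):.0f} m/s")   # ~590 m/s
print(f"H2O2 (34 Da): {mean_speed(34):.0f} m/s")   # slower, as expected
```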

Is there an ace physical chemist out there who can put numbers on this? I couldn’t believe that I couldn’t find a simple expression for the relation between the diffusion coefficient and the mass of the diffuser, ditto for the volume of a water molecule, although I’m guessing that its radius is pretty close to the length of the H – O bond (.95 Angstroms), giving a volume of 3.6 cubic Angstroms. I wanted this so I could see how much room to roam a water molecule has.
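For what it's worth, the standard (if idealized) expression is the Stokes-Einstein relation, which ties the diffusion coefficient to the radius of the diffuser rather than its mass (so, at constant density, to mass to the 1/3 power). A sketch, with assumed radii:

```python
import math

# Stokes-Einstein relation for a sphere in a viscous fluid:
# D = k_B*T / (6*pi*eta*r)
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 293.15           # 20 C in kelvin
eta = 1.002e-3       # Pa*s, viscosity of water at 20 C

def diffusion_coeff(radius_m):
    return k_B * T / (6 * math.pi * eta * radius_m)

# Rough radii (assumed): a water-sized molecule ~0.15 nm, catalase ~5 nm
for name, r in [("small molecule", 0.15e-9), ("catalase-sized protein", 5e-9)]:
    print(f"{name}: D ~ {diffusion_coeff(r):.1e} m^2/s")
```

Note the protein, roughly 30 times larger in radius, diffuses only about 30 times more slowly, not thousands of times, which is why "sluggish" is relative here.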

Cancer as the telephone game

An interesting paper just out [ Science vol. 347 pp. 78 – 81 ’15 ] basically says that cancer is just bad luck due to copying errors of the 3.2 gigaBase genome when cells divide. It’s a version of the telephone game, in which a message is passed around a circle of people, getting progressively garbled each time.

The evidence in support of the assertion is that the variation in cancer rates between tissues is strongly related to the number of divisions of the stem cells required to maintain that tissue. For instance, the lifetime risk of being diagnosed with cancer is 7% for lung but .6% for brain (about this more later). Risk in the GI tract varies by a factor of 24 (.5% for the esophagus, 4.8% for the colon), which is proportional to the number of stem cell divisions undergone during a lifetime.

They estimate that at most 1/3 of the variation in risk among tissues is due to environmental factors or inherited predisposition. That’s certainly not to say that you should go ahead and smoke.

The idea makes a lot of sense. Even though the error rate in copying the parental genome to a child is an amazingly low 1 in 100,000,000, that still amounts to 32 mutations per generation (more from the father than the mother, and more from him the older he is; not so for the mother). For details please see

There is even better evidence for this based on my clinical experience in neurology for 35+ years. The lifetime chance of a brain tumor is stated to be .6%. However, in all these years I never saw a brain tumor made of neurons. They were all derived from glia (astrocytoma, glioblastoma) or the coverings of the brain (meningiomas). Why? Essentially, neurons in the cerebral cortex (not the deeper parts of the brain) don’t divide [ Cell vol. 153 pp. 1183 – 1185, 1219 – 1227 ’13, Science vol. 340 pp. 1180 – 1181 ’13 (Editorial) ]. Even the parts that do divide add a trivial number of neurons to the brain (700 neurons a day). Even if you live 100 years, that’s only 100 * 365 * 700 = about 26 million neurons, a trivial amount compared to the 100 billion neurons you are estimated to have (this number grows each time I read about it).
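The arithmetic above, in one short Python snippet:

```python
# 700 new neurons a day over a 100-year life, compared with
# the ~100 billion neurons the brain is estimated to contain
new_per_day = 700
days = 100 * 365
lifetime_new = new_per_day * days
print(f"{lifetime_new:,} new neurons")                 # 25,550,000 -- ~26 million
print(f"{100 * lifetime_new / 100e9:.3f}% of 100 billion")
```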

You might be interested in how we can make statements like this about new neuron formation in the brain. It’s very clever — Carbon-14 accumulated in the atmosphere between the mid 50s and early 60s as a byproduct of above ground testing of nuclear weapons. Such testing was banned by treaty in 1963 and carbon-14 levels in the atmosphere declined in the following decades to previous low background levels. Carbon-14 is used in archeologic dating because its halflife is 5730 years.

Using postmortem tissue samples of individuals born before and after the nuclear bomb tests, the integration of carbon-14 into genomic DNA was measured. This would have occurred during the cell’s last division cycle. One can calculate the birth dates of different cell types collected from various tissues including brain. The approach is accurate to within a few years. The 5730 year half life of 14-C means that whatever is in human DNA hasn’t had a chance to decay (by much) in 50 years. The amount of carbon-14 in cellular DNA therefore reflects the amount of carbon-14 in the atmosphere when the cells underwent their last division. The amount of carbon-14 in the atmosphere was determined by measuring it in the annual growth rings of pine trees in Sweden — a surrogate for atmospheric carbon-14 levels in the past 60 years. The birthdate of cells is determined as the year the C-14 in them matches those of the pine trees.
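The 5730 year half-life is what makes the method work: over the 50-odd years since the bomb-test peak, essentially none of the C-14 incorporated into DNA has decayed. A quick Python check of the fraction remaining:

```python
# Fraction of C-14 remaining after t years, given its 5730 year half-life
half_life = 5730  # years

def fraction_remaining(t_years):
    return 0.5 ** (t_years / half_life)

print(f"after 50 years: {fraction_remaining(50):.4f}")  # ~0.994, i.e. barely any decay
```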

Microexons, great new druggable targets

Some very serious new players in cellular and tissue molecular biology have just been found. They are very juicy druggable targets, not that targeting them will be easy. If you don’t know what introns, exons and alternate splicing are, it’s time to learn. Go to read and follow the links forward. It should be all you need to comprehend the following.

The work came out at the tail end of 2014 [ Cell vol. 159 pp. 1488 – 1489, 1511 – 1523 ’14 ]. Microexons are defined as exons containing 50 nucleotides or less (the paper says 3 – 27 nucleotides). They have been overlooked, partially because their short length makes them computationally difficult to find. Also, few bothered to look for them, as they were thought to be unfavorable for splicing: they seemed too short to contain exonic splicing enhancers, and so short that the splicing machinery (which is huge) couldn’t physically assemble at both the 3′ and 5′ splice sites. So much for theory; they’re out there.

What is a cell and tissue differentially regulated alternative splicing event? It’s the way a given mRNA can be spliced together one way in tissue/cell #1 and another in tissue/cell #2 producing different proteins in each. Exons subject to tissue specific alternative splicing are significantly UNDERrepresented in well folded domains in proteins. Instead they are found in regions of protein disorder more frequently than one would expect by chance. Typically these regions are on the protein surface. The paper found that the microexons code for short amino acid motifs which typically interact with other proteins and ligands. 3 – 27 nucleotides lets you only code for 1 – 9 amino acids.

One well known example of a short interaction motif is RGD (Arginine Glycine Aspartic acid in the single letter amino acid code). The sequence is found in a family of surface proteins (the integrins) with at least 26 known members. These 3 amino acids are all that is needed for the integrins to bind to a variety of extracellular molecules: collagen, fibrin, glycosaminoglycans, proteoglycans. So a 3 amino acid sequence on the surface of a protein can do quite a bit.

Among a set of analyzed neural specific exons (i.e. they were only spliced in that way in the brain) found in known disordered regions of the parent protein, 1/3 promoted or disrupted interactions with partner proteins. So regulated exon splicing might specify tissue and cell type specific protein interaction networks (translation: they might explain why tissues look different even when they express the same genes). The authors regard microExon inclusion/exclusion as protein surface microsurgery.

The paper found HUNDREDS of evolutionarily highly conserved microexons in RNA-Seq data sets from various species. Many of them impact neurogenesis and brain function. Regulation of microExons changes significantly during neuronal differentiation. Although microexons represent only 1% of the alternate splice sites seen, they constitute ‘up to’ 1/3 of all evolutionarily conserved neural-regulated alternative splicing between man and mouse.

The inclusion in the final transcript of most identified neural microExons is regulated by a brain specific factor, nSR100 (neural specific SR related protein of 100 kiloDaltons)/SRRM4, which binds to intronic enhancer UGC motifs close to the 3′ splice sites, resulting in their inclusion. They are ‘enhanced’ by tissue specific RBFox proteins. nSR100 is reduced in Autism Spectrum Disorder (really? all? some?). nSR100 is strongly coexpressed in the developing human brain in a gene network module, M2, which is enriched for rare de novo ASD associated mutations.

MicroExons are enriched for lengths which are multiples of 3 nucleotides (implying strong selection pressure to preserve reading frames). The microExons are also enriched in charged amino acids. Most microExons show high inclusion at late stages of neuronal differentiation, in genes associated with axon and synapse function. A neural specific microExon in Protrudin/Zfyve27 increases its interaction with Vesicle Associated membrane protein associated Protein (VAP) to promote neurite outgrowth. A 6 nucleotide neural microExon in Apbb1/Fe65 promotes an interaction with Kat5/Tip60. Apbb1 is an adaptor protein functioning in neurite outgrowth.

So inclusion/exclusion of microExons can alter the interactions of proteins involved in neurogenesis. Misregulation of neural specific microexons has been found in autism spectrum disorder (what hasn’t? Pardon the cynicism).

Protein interaction domains haven’t been studied to nearly the extent they need to be, and we know far less about them than we should. All the large molecular machines of the cell (ribosome, mediator, spliceosome, mitochondrial respiratory chain) involve large numbers of proteins interacting with each other, not by the covalent bonds beloved by organic chemists, but by much weaker forces (van der Waals, charge attraction, hydrophobic entropic forces, etc.).

Designing drugs to interfere with (or promote) such interactions will be tricky, yet they should have profound effects on cellular and organismal physiology. Off target effects are almost certain to occur (particularly since we know so little about the partners of a given motif). Showing how useful such a drug could be, a small molecule has been developed that inhibits the interaction of the AIDS virus capsid protein with two cellular proteins (CPSF6, TNPO3) it must interact with to get into the nucleus. (Unfortunately I’ve lost the reference.)

My cousin married a high school dropout a few years ago. Not to worry — he dropped out of high school to go to college, and has a PhD in Electrical Engineering from Berkeley and has worked at Bell labs. He was very interested in combining his math and modeling skills with my knowledge of neurology to make some models of CNS function. I demurred, as I thought we knew too little about the brain to come up with models (which I generally distrust anyway). The basic problem was that I felt we didn’t know all the players in the brain and how they fit together.

MicroExons show this in spades.

How formal tensor mathematics and the postulates of quantum mechanics give rise to entanglement

Tensors continue to amaze. I never thought I’d get a simple mathematical explanation of entanglement, but here it is. Explanation is probably too strong a word, because it relies on the postulates of quantum mechanics, which are extremely simple but which lead to extremely bizarre consequences (such as entanglement). As Feynman famously said, ‘no one understands quantum mechanics’. Despite that, it’s never made a prediction not confirmed by experiment, so the theory is correct even if we don’t understand ‘how it can be like that’. 100 years of correct predictions are not to be sneezed at.

If you’re a bit foggy on just what entanglement is, have a look at the link. Even better, read the book by Zeilinger referred to in the link (if you have the time).

Actually, you don’t even need all the postulates of quantum mechanics (as given in the book “Quantum Computation and Quantum Information” by Nielsen and Chuang). No differential equations. No Schrodinger equation. No operators. No eigenvalues. What could be nicer for those thirsting for knowledge? Such a deal ! ! ! Just 2 postulates and a little formal mathematics.

Postulate #1: “Associated to any isolated physical system is a complex vector space with inner product (that is, a Hilbert space) known as the state space of the system. The system is completely described by its state vector, which is a unit vector in the system’s state space.” If this is unsatisfying, see the explication of this on p. 80 of Nielsen and Chuang (where the postulate appears).

Because the linear algebra underlying quantum mechanics seemed to be largely ignored in the course I audited, I wrote a series of posts called Linear Algebra Survival Guide for Quantum Mechanics. The first should be all you need, but there are several more.

Even though I wrote a post on tensors, showing how they are a way of describing an object independently of the coordinates used to describe it, I didn’t even discuss another aspect of tensors, multilinearity, which is crucial here. The post itself can be viewed at

Start by thinking of a simple tensor as a vector in a vector space. The tensor product is just a way of combining vectors in vector spaces to get another (and larger) vector space. So the tensor product isn’t a product in the sense that multiplication of two objects (real numbers, complex numbers, square matrices) produces another object of exactly the same kind.

So mathematicians use a special symbol for the tensor product — a circle with an x inside. I’m going to use something similar ‘®’ because I can’t figure out how to produce the actual symbol. So let V and W be the quantum mechanical state spaces of two systems.

Their tensor product is just V ® W. Mathematicians can define things any way they want. A crucial aspect of the tensor product is that it is multilinear. So if v and v’ are elements of V, then v + v’ is also an element of V (because two vectors in a given vector space can always be added). Similarly, w + w’ is an element of W if w and w’ are. Adding to the confusion in trying to learn this stuff is the fact that all vectors are themselves tensors.

Multilinearity of the tensor product is what you’d think

(v + v’) ® (w + w’) = v ® (w + w’ ) + v’ ® (w + w’)

= v ® w + v ® w’ + v’ ® w + v’ ® w’

You get all 4 tensor products in this case.
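The four-term expansion can be checked numerically. NumPy's kron function is a concrete realization of the tensor product of vectors; the vectors below are arbitrary, chosen just for the check:

```python
import numpy as np

# Multilinearity of the tensor product:
# (v + v') (x) (w + w') = v(x)w + v(x)w' + v'(x)w + v'(x)w'
v, vp = np.array([1.0, 2.0]), np.array([0.5, -1.0])   # v, v' in V (arbitrary)
w, wp = np.array([3.0, 1.0]), np.array([-2.0, 4.0])   # w, w' in W (arbitrary)

lhs = np.kron(v + vp, w + wp)
rhs = np.kron(v, w) + np.kron(v, wp) + np.kron(vp, w) + np.kron(vp, wp)
print(np.allclose(lhs, rhs))  # True
```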

This brings us to Postulate #2 (actually #4 in the book, on p. 94; we don’t need the other two. I told you this was fairly simple).

Postulate #2: “The state space of a composite physical system is the tensor product of the state spaces of the component physical systems.”

Where does entanglement come in? Patience, we’re nearly done. One now must distinguish simple from non-simple tensors. Each of the 4 tensor products in the sum on the last line is simple, being the tensor product of two vectors.

What about v ® w’ + v’ ® w? It isn’t simple, because there is no way to write it as simple_tensor1 ® simple_tensor2, so it’s called a compound tensor. In contrast, (v + v’) ® (w + w’) is a simple tensor, because v + v’ is just another single element of V (call it v”) and w + w’ is just another single element of W (call it w”).

So the simple tensor (v + v’) ® (w + w’) can be understood as though V has state v” and W has state w”.

v ® w’ + v’ ® w can’t be understood this way. The full system can’t be understood by considering V and W in isolation; that is, the two subsystems V and W are ENTANGLED.
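This distinction can even be made computational. For two 2-dimensional state spaces, a vector in V ® W is simple exactly when its 4 amplitudes, reshaped into a 2 x 2 matrix, have rank 1 (the Schmidt rank). A NumPy sketch:

```python
import numpy as np

# A state in V (x) W is simple (unentangled) exactly when its amplitudes,
# reshaped into a matrix, form a rank-1 matrix (Schmidt rank 1).
v, vp = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # basis vectors of V
w, wp = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # basis vectors of W

simple = np.kron(v + vp, w + wp)                # (v+v') (x) (w+w'): a simple tensor
entangled = np.kron(v, wp) + np.kron(vp, w)     # v (x) w' + v' (x) w: compound

def schmidt_rank(state):
    return np.linalg.matrix_rank(state.reshape(2, 2))

print(schmidt_rank(simple))     # 1 -- factorizable
print(schmidt_rank(entangled))  # 2 -- entangled
```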

Yup, that’s all there is to entanglement (mathematically at least). The paradoxes of entanglement, including Einstein’s ‘spooky action at a distance’, are left for you to explore; again, Zeilinger’s book is a great source.

But how can it be like that, you ask? Feynman said not to start thinking these thoughts, and if he didn’t know, do you expect a retired neurologist to tell you? Please.

The Battle for the Soul of Smith College

The following letter to the Smith College newspaper “The Sophian” appeared in the current issue. Disclaimer: our niece went there, I’ve played chamber music with one of the physics profs there, I’m currently studying a math book with the emeritus Smith prof who wrote it, I’ve audited a course there, and I may take piano lessons from a retired music prof there. It’s a great institution with plenty of intelligent, articulate undergraduates. Wendy Kaminer is a Smith alumna. It will be fascinating to see how this plays out.

Chris Pyle

Mount Holyoke Professor

Thanks to The Sophian for publishing a transcript of what Wendy Kaminer actually said in New York. Now it is perfectly clear she is not a racist, but used the “n-word,” unexpurgated, to make a point about those caring souls who, in their effort to protect the sensibilities of students, violate free speech. The hyperventilating that followed Kaminer’s uncensored prose proves her point conclusively.

Imagine that Mark Twain had been invited to read some of his writings on campus, but that Kaminer’s critics discovered that he had used the “n-word” liberally in “The Adventures of Huckleberry Finn.” What should the college do? Disinvite him? Ask him to tone down his remarks because they might traumatize someone? Post “trigger warnings” all over campus?

The Sophian would publish Twain’s speech, but post warnings, like those that preceded the Kaminer transcript, declaring that “This author is guilty of ‘racism/racial slurs, sexist/misogynist slurs,’ and writes about ‘race-based violence.’” Twain’s admirers might be offended by such prissiness, but that’s too bad. The Sophian has a moral duty to give its adult readers early warning of impending isms on its pages. Otherwise they might be shocked, like little children confronted by age-inappropriate messages.

Unnoticed in last month’s kerfuffle was Kaminer’s provocative suggestion: “colleges and universities should . . . fire almost all of the student life administrators.” Why? Because they are the primary source of the patronizing idea that college students, especially women, are psychologically delicate souls, easily wounded by unvarnished prose. It is the duty of student life deans to create “safe spaces” for all students, free from words and ideas that might traumatize them (or anyone else).

These deans are direct descendants of Harriet Bowdler, the Victorian lady who persuaded her brother Thomas to sanitize the great books so that they would be suitable for the fragile sensibilities of women and servants. As a result, it wasn’t until the 1950s that professors could find an unexpurgated edition of Shakespeare’s plays to assign to their students.

Kaminer is not the only critic of these well-meaning deans. The American Association of University Professors says the “presumption that students need to be protected rather than challenged” is both “infantilizing and anti-intellectual.” The American Library Association, the Foundation for Individual Rights in Education and the American Civil Liberties Union oppose content warnings for much the same reason that Smith professors once opposed Joe McCarthy’s censors who, when they weren’t removing books from libraries, stamped them with warning labels.

“When labeling is an attempt to prejudice attitudes,” the AAUP warns, “it is a censor’s tool. … If ‘The House of Mirth’ or ‘Anna Karenina’ carried a warning about suicide, students might overlook the other questions about wealth, love, deception and existential anxiety that are what those books are actually about.” The AAUP additionally says, “Trigger warnings also signal an expected response to the content (e.g. dismay, distress, disapproval) and eliminate the element of surprise and spontaneity that can enrich the reading experience and provide critical insight.”

When President McCartney’s committee meets, it will struggle over nothing less than the soul of the college. Will Smith continue to be a liberal arts college for strong women, or will it become a therapeutic shelter for the easily offended?

Professor Chris Pyle


Paul Schleyer 1930 – 2014, A remembrance

Thanks Peter for your stories and thoughts about Dr. Schleyer (I never had the temerity to even think of him as Paul). Hopefully budding chemists will read it, so they realize that even department chairs and full profs were once cowed undergraduates.

He was a marvelous undergraduate advisor, only 7 years out from his own Princeton degree when we first came in contact with him, and a formidable physical and intellectual presence even then. His favorite opera recording, which he somehow found a way to get into the lab, was Don Giovanni’s scream as he realized he was to descend into Hell. I never had the courage to ask him if the scars on his face were from dueling.

We’d work late in the lab, then go out for pizza. In later years, I ran into a few Merck chemists who found him a marvelous consultant. However, back in the 50’s, we’d be working late, and he’d make some crack about industrial chemists being at home while we were working, the high point of their day being mowing their lawn.

I particularly enjoyed reading his papers when they came out in Science. To my mind he finally settled things about the nonclassical nature of the norbornyl cation — here it is, with the crusher being the very long C – C bond lengths:

Science vol. 341 pp. 62 – 64 ’13 contains a truly definitive answer (hopefully) along with a lot of historical background should you be interested. An X-ray crystallographic structure of a norbornyl cation (complexed with an Al2Br7- anion) at 40 Kelvin shows symmetrical disposition of the 3 carbons of the nonclassical cation. It was tricky, because the cation is so symmetric that it rotates within crystals at higher temperatures. The bond lengths between the 3 carbons are 1.78 to 1.83 Angstroms — far longer than the classic length of 1.54 Angstroms of a C – C single bond.

I earlier wrote a post on why I don’t read novels, the coincidences in my life being so extreme that if you put them in a novel, no one would believe them, and they’d throw the book away — it involves the Princeton chemistry department and my later field of neurology — here’s the link

Here’s yet another. Who would have thought that, years later, as a neurologist I’d be using a molecule Paul had synthesized to treat Parkinson’s disease. He did an incredibly elegant synthesis of adamantane using only the product of a Diels–Alder reaction, hydrogenating it with a palladium catalyst and adding AlCl3. An amazing synthesis and an amazing coincidence.

As Peter noted, he was an extremely productive chemist and theoretician. He should have been elected to the National Academy of Sciences, but never was. It has been speculated that his wars with H. C. Brown made him some powerful enemies. I’ve heard through the grapevine that it rankled him greatly. But virtue is its own reward, and he had plenty of that.

R. I. P. Dr. Schleyer

Paul Schleyer (1930 – 2014) R. I. P.

This is a guest post by Peter J. Reilly, Anson Marston Distinguished Professor Emeritus, Department of Chemical and Biological Engineering, Iowa State University, fellow Schleyer undergraduate advisee Princeton 1958 – 1960, friend, and all around good guy.

I’ll follow with my own reminiscences in another post. Obits tend to be polished and bland, ‘speak no evil of the dead’ and all that, but Peter captures the flavor of what it was actually like to be Paul’s advisee and exposed to his formidable presence.

“Following are my thoughts on our undergraduate chemistry advisor at Princeton, Paul von Ragué Schleyer, who died on November 21 of this year at 84.

Paul was an amazingly prolific chemist. He started publishing in 1956, soon after he arrived at Princeton after receiving a Ph.D. from Harvard, where he studied from 1951 to 1954 after earning an A.B. from Princeton. He was still publishing at the time of his death. In fact, he had promised to deliver a book chapter over this Thanksgiving weekend. Over his latter years at Princeton, in the early 1970’s, his annual production of papers averaged the middle 20’s. He kept up the same pace at Universität Erlangen-Nürnberg in Germany from 1976 to 1992. From 1993 to 1997, when he had appointments at both Erlangen-Nürnberg and the University of Georgia, he was in the 40’s. When fully at Georgia, after 1997, he gradually slacked off, publishing only 16 papers this year. Altogether he had 1277 publications, when a really productive chemist with ready access to students and postdoctoral fellows hopes to have 200–250 in a full career.

Another way to consider Paul’s productivity is by how often his work had been cited (partly by his own later papers but mainly by the papers of others). A 1981–1997 survey reported that he was the third most cited chemist in the world. Altogether his works were cited over 75,000 times. His h-index is 126 in the Thomson Reuters Web of Science database, meaning that he had 126 publications that were cited at least 126 times, an astounding number.
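For readers unfamiliar with the metric, the h-index is simple to compute: it is the largest h such that h of an author’s papers have at least h citations each. A minimal sketch (the function name and the toy citation counts are mine, not Schleyer’s actual record):

```python
# Computing an h-index: largest h with h papers cited at least h times each.
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:  # the rank-th most cited paper has >= rank citations
            h = rank
        else:
            break
    return h

# Toy example: 4 papers have at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

An h-index of 126 thus requires not just one or two blockbusters but a very deep bench of well-cited papers.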

I first met Paul in the fall of 1958, two years after I arrived at Princeton. I needed to find someone to supervise my junior paper, a ritual common to all Princeton undergraduates doing A.B. degrees. I had originally approached Edward Taylor, a somewhat older chemistry professor, but when I told him that I was somewhat interested in becoming a chemical engineer, he directed me to Paul. Paul was 28 at the time, but he seemed older to me (I supposed all professors did). He was tall, with dark black hair combed to the side over his forehead. He had a scar on his cheek and talked very precisely.

My father met him once and came away asking if he had been a German U-boat captain during WWII.

I must say that I spent a sizable part of the next two years being terrified of Paul. He had a laboratory in the second floor of the southwest corner of Frick Chemical Laboratory. The benches were full of glassware, to the point where it seemed hard to do any research. However, the item that spooked me the most was a cauldron full of boiling black liquid, supposedly mainly nitric acid, in which dirty glassware was submerged to be cleaned.

Paul gave me a project to research the incidence and properties of the benzyne intermediate, a short-lived benzene ring with a triple bond. This was my first exposure to research beyond short papers for classes, and I suppose that I did well enough for him to invite me to do a senior thesis with him. The topic was to determine the mechanism by which an obscure organic chemical rearranged itself. The title of the thesis that came from a year’s dogged effort was “A Study of the Cleavage Products of 2,5-Dimethyltetrahydropyran-2-Methanol”, but what I mainly made was black goop. Paul’s written comments to me started with the statement that he was sorry that the problem was so intractable, but at least he liked my writeup. I still have the thesis (and the junior paper). Back in 2007 I was contacted by the Princeton University Library, which had lost its copy. They asked if I could send them mine so that they could microfilm it, which of course I did.

I remember that at least four of us chemistry majors spent much of our senior years in a very large and empty laboratory working on our theses under Paul’s direction. I must say that the various chemicals that I worked on smelled a lot better than the ones that you dealt with. I used to take weekend dates up to the laboratory to show them where I worked, and I would open one of your very small tubules, I think containing butyl mercaptan. Its smell still permeated the room on Mondays. (Editor’s note — people used to look at their shoes when I walked into the eating club after working with n-Bu-SH or similar compounds).

Despite my lack of success on my thesis, I learned from it how to do research. My chemical engineering major professor at the University of Pennsylvania was hard to contact, so much of my doctoral dissertation was done without much supervision. Between the two experiences, I had a good foundation for my 46 years of being a chemical engineering professor, six at the University of Nebraska-Lincoln and 40 at Iowa State University after four years at DuPont in southern New Jersey.

I only saw Paul four times after leaving Princeton. The first was when I returned there for a short visit. The second time was at my 25th Princeton reunion, when one of his daughters was graduating. A third time was when he visited the Iowa State chemistry department to present a prestigious lecture. The fourth and last time was in 2005 when I visited the University of Georgia for a meeting. Paul spent about 30 minutes telling me about his latest research, of which I understood very little.

I will close with a little story. When I told Paul during my senior year that I wanted to go to graduate school in chemical engineering, he asked why I wanted to become a pipe-fitter. Probably because of my chemistry background at Princeton, my research was always chemistry- and biology-based, first in fermentations at Penn and Nebraska (with a detour to chloro- and fluorocarbons at DuPont), and then in enzymes and carbohydrates at Iowa State. I moved more and more into computation late in my career, and when Paul visited around 2002 I told him that I would be sending a manuscript to the Journal of Computational Chemistry, which he and Lou Allinger at Georgia had founded and were still editing. Being Paul, he immediately said in his deep voice that it had better be good. As it turned out, it sailed through the review process with hardly a blip, and I followed it up with a second manuscript a few years later.

So, we were fortunate to have Paul as a mentor during our formative years. He certainly wasn’t the sweetest guy, but he was brilliant, and hopefully a very small part of his brilliance rubbed off on us.”

Peter J. Reilly

How one membrane protein senses mechanical stress

Chemists (particularly organic chemists) think they’re pretty smart. So see if you can figure out how a membrane embedded ion channel opens due to mechanical stress. The answer is to be found in last week’s Nature (vol. 516 pp. 126 – 130 4 Dec ’14).

As you probably know, membrane embedded proteins get stuck there because they contain multiple alpha helices with mostly hydrophobic amino acids allowing them to snuggle up to the hydrocarbon tails of the lipids making up the lipid bilayer of the biological membrane.

The channel in question is called TRAAK, known to open in response to membrane tension. It conducts potassium ions. The voltage sensitive potassium channels have 24 transmembrane alpha helices, 6 in each of the tetramer proteins comprising it. TRAAK has only 8. As is typical of all ion channels, the helices act like staves on a barrel, shifting slightly to open the pore.

In this case, with little membrane tension, the helices separate slightly, permitting a 10-carbon tail ( CH3 – [ CH2 – CH2 – CH2 ]3 – ) to enter the barrel, occluding the pore. Tension on the membrane tends to decrease the packing of the hydrocarbon tails of the membrane lipids, pulling the plug out of the pore. Neat! This is a completely different mechanism from the voltage-sensing helix in the 24-transmembrane voltage-sensitive potassium channels, and one that no one had predicted despite all their intelligence.

Trigger warning. This paper is by MacKinnon, who won the Nobel for his work on potassium channels. He used antibodies to stabilize ion channels so they could be studied by crystallography; take them out of the membrane and they denature. Why the warning? In his Nobel work he postulated an alpha-helical hairpin paddle extending outward from the channel core into the membrane’s lipid interior. It was both hydrophobic and charged, and could move in response to transmembrane voltage changes.

This received vigorous criticism from others, who felt it was an artifact produced by the use of the antibody to stabilize the protein for crystallography.

Why the warning? Because MacKinnon also used an antibody to stabilize TRAAK.

The whole idea of membrane tension brings up the question of just how strong van der Waals forces really are. Biochemists and molecular biologists tend to think of hydrophobic forces as primarily entropic, pushing hydrophobic parts of a protein together because water would have to structure itself exquisitely to solvate them (i.e. lowering the entropy greatly). Here, however, the ‘pull’ if you wish is due to the mutual attraction of the hydrophobic lipid side chains to each other, which I would imagine is pretty weak.

I’m sure that these forces have been measured, and years ago I enjoyed reading about Langmuir’s work putting what was basically soap on a substrate, and forming a two dimensional gas which actually followed something resembling P * Area = n * R * T. So the van der Waals forces have been measured, I just don’t know what they are. Does anyone out there?

Nonetheless, some very slick (physical and organic) chemistry.

