It all depends on whose ox is being gored

The following article appeared in the New York Times on 19 October 2016. The following paragraph begins a direct, continuous, unedited quote from the start of the article. Subsequently, the article discusses other matters brought up in the debate — here’s the link to the whole thing — The times they are a’changin’, aren’t they?

“In a remarkable statement that seemed to cast doubt on American democracy, Donald J. Trump said Wednesday that he might not accept the results of next month’s election if he felt it was rigged against him — a stand that Hillary Clinton blasted as “horrifying” at their final and caustic debate on Wednesday.

Mr. Trump, under enormous pressure to halt Mrs. Clinton’s steady rise in opinion polls, came across as repeatedly frustrated as he tried to rally conservative voters with hard-line stands on illegal immigration and abortion rights. But he kept finding himself drawn onto perilous political territory by Mrs. Clinton and the debate’s moderator, Chris Wallace.

He sputtered when Mrs. Clinton charged that he would be “a puppet” of President Vladimir V. Putin of Russia if elected. He lashed out repeatedly, saying that “she’s been proven to be a liar on so many different ways” and that “she’s guilty of a very, very serious crime” over her State Department email practices. And by the end of the debate, when Mrs. Clinton needled him over Social Security, Mr. Trump snapped and said, “Such a nasty woman.”

Mrs. Clinton was repeatedly forced to defend her long service in government, which Mr. Trump charged had yielded no real accomplishments. But she was rarely rattled, and made a determined effort to rise above Mr. Trump’s taunts while making overtures to undecided voters.

She particularly sought to appeal to Republicans and independents who have doubts about Mr. Trump, arguing that she was not an opponent of the Second Amendment as he claimed, and promising to be tougher and shrewder on national security than Mr. Trump.

But it was Mr. Trump’s remark about the election results that stood out, even in a race that has been full of astonishing moments.

Every losing presidential candidate in modern times has accepted the will of the voters, even in extraordinarily close races, such as when John F. Kennedy narrowly defeated Richard M. Nixon in 1960 and George W. Bush beat Al Gore in Florida to win the presidency in 2000.

Mr. Trump insisted, without offering evidence, that the general election has been rigged against him, and he twice refused to say that he would accept its result.

“I will look at it at the time,” Mr. Trump said. “I will keep you in suspense.”

“That’s horrifying,” Mrs. Clinton replied. “Let’s be clear about what he is saying and what that means. He is denigrating — he is talking down our democracy. And I am appalled that someone who is the nominee of one of our two major parties would take that position.”

Mrs. Clinton then ticked off the number of times he had deemed a system rigged when he suffered a setback, noting he had even called the Emmy Awards fixed when his show, “The Apprentice,’’ was passed over.

“It’s funny, but it’s also really troubling,” she said. “That is not the way our democracy works.”

Mrs. Clinton also accused Mr. Trump of extreme coziness with Mr. Putin, criticizing him for failing to condemn Russian espionage against her campaign’s internal email.

When Mr. Trump responded that Mr. Putin had “no respect” for Mrs. Clinton, she shot back, in one of the toughest lines of the night: “That’s because he’d rather have a puppet as president of the United States.”

“No puppet, no puppet,” Mr. Trump sputtered. “You’re the puppet.” He quickly recovered and said, “She has been outsmarted and outplayed worse than anybody I’ve ever seen in any government, whatsoever.”

There’s more — but the above is a direct continuous unedited quote from the article

No posts for a while

Off to Manila for a wedding, and Hong Kong to see a new grandson — will be back mid-February

For a picture see —

Ring currents ride again

One of the most impressive pieces of evidence (to me at least) that we really understand what electrons are doing in organic molecules is the ring current. Recall that the pi electrons in benzene are delocalized above and below the planar ring determined by the 6 carbon atoms.

How do we know this? When a magnetic field is applied, the electrons in the ring cloud circulate to oppose the field. So what? Well, if you can place a C – H bond above the ring, the induced current will shield it. Such molecules are known, and the new edition of Clayden (p. 278) shows the NMR spectrum of [ 7 ] paracyclophane, which is benzene with a chain of 7 CH2’s linking the 1 and 4 positions, so that the hydrogens of the 4th CH2 are directly over the ring (7 CH2’s aren’t long enough for them to be anywhere else). Similarly, [ 18 ] Annulene has 6 hydrogens inside the aromatic ring — and these hydrogens are even more strongly shielded. Interestingly, building larger and larger annulenes has shown that aromaticity decreases with increasing size, vanishing for systems with more than 30 pi electrons (diameter 13 Angstroms), probably because planarity of the carbons becomes less and less possible, breaking up the cloud.

This brings us to Nature vol. 541 pp. 200 – 203 ’17, which describes a remarkable molecule with 6 porphyrins in a ring hooked together by diyne linkers. The diameter of the circle is 24 Angstroms. Benzene and [ 18 ] Annulene have all their carbons in a plane, but the picture of this molecule given in the paper shows that it does not. Each of the porphyrins is planar of course, but each porphyrin plane lies tangent to the circle they form.

Also discussed is the fact that ‘anti-aromatic’ ring currents exist, in which the electrons circulate to enhance rather than diminish the imposed magnetic field. The molecule can be switched between the aromatic and anti-aromatic states by its oxidation level. When it has 78 electrons ( ( 4 * 19 ) + 2 ) in the ring (with a charge of + 6) it is aromatic. When it has 80 electrons with a + 4 charge it is anti-aromatic — further confirmation of the Huckel rule (as if it were needed).
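The electron counting can be sketched in a few lines of Python (a toy illustration of the Huckel rule, not anything from the paper):

```python
# Huckel's rule: a planar ring of pi electrons is aromatic when the
# electron count is 4n + 2 for some nonnegative integer n, and
# anti-aromatic when the count is 4n.

def huckel(pi_electrons):
    """Classify a ring by its pi electron count."""
    if pi_electrons % 4 == 2:
        return "aromatic"       # 4n + 2
    if pi_electrons % 4 == 0:
        return "anti-aromatic"  # 4n
    return "neither"            # odd counts fit neither rule

# benzene (6), [18]annulene (18), and the porphyrin ring
# at its two oxidation levels (78 and 80)
for count in (6, 18, 78, 80):
    print(count, huckel(count))
```

Running this confirms the counting in the paper: 78 is 4(19) + 2 (aromatic), while 80 is 4(20) (anti-aromatic).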

On a historical note, reference #27 is to a 1961 paper of Marty Gouterman, who was teaching graduate students in chemistry that spring. He was an excellent teacher. Here he is at the University of Washington —

Memories are made of this?

Back in the day when information was fed into computers on punch cards, the data was in the holes in the paper, not the paper itself. A far-out (but similar) theory of how memories are stored in the brain just got a lot more support [ Neuron vol. 93 pp. 6 – 8, 132 – 146 ’17 ].

The theory says that memories are stored in the proteins and sugar polymers surrounding neurons rather than the neurons themselves. These go by the name of extracellular matrix, and memories are the holes drilled in it which allow synapses to form.

Here’s some stuff I wrote about the idea when I first ran across it two years ago.


An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just how (chemically) and where memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material needed to understand everything that follows in this blog post. Hopefully you already know some of it; the subject is simply too vast, but plug away. Here are a few theories (seriously flawed in my opinion) from the past half century of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines, in which the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3 … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory. However, RAM memory in the computers of the 70s used the localized buildup of charge to store bits and bytes. Since charge would leak away from where it was stored, it had to be refreshed constantly (e.g. at least 12 times a second) or it would be lost. Yet another reason data should always be frequently backed up.

#2 CaMKII — more plausible. There’s lots of it in the brain (2% of all protein in an area called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To start doing so, calcium levels inside the neuron must rise. The enzyme is complicated, being composed of 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphates and reset the switch (actually there must be). Also, proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane begins to resemble Grand Central Station — lots of proteins coming and going, but with a number always present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The remaining 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in the adult brain this is true; in the developing brain GABA is excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the brain to the chloride ion. When it flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.
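The underlying electrochemistry can be made concrete with the Nernst equation, which gives the potential at which chloride stops flowing; whether opening a chloride channel inhibits or excites depends on where that potential sits relative to the resting potential (about -65 milliVolts). This is my own illustration with typical textbook chloride concentrations, not numbers from the paper:

```python
import math

# Nernst potential for chloride: E = (RT/zF) * ln([out]/[in]).
# For chloride z = -1, which flips the ratio, giving
# E_Cl = (RT/F) * ln([Cl]_in / [Cl]_out).

RT_OVER_F = 26.7   # milliVolts at body temperature (37 C)

def nernst_cl(cl_in, cl_out):
    """Chloride reversal potential in mV for the given concentrations (mM)."""
    return RT_OVER_F * math.log(cl_in / cl_out)

adult = nernst_cl(cl_in=7.0, cl_out=110.0)       # low internal chloride
immature = nernst_cl(cl_in=25.0, cl_out=110.0)   # high internal chloride

print(f"adult E_Cl:    {adult:6.1f} mV (below rest, so GABA inhibits)")
print(f"immature E_Cl: {immature:6.1f} mV (above rest, so GABA can excite)")
```

With adult concentrations the chloride potential comes out near -74 mV (below rest: opening the channel hyperpolarizes), while the higher internal chloride of the developing neuron puts it near -40 mV (above rest: opening the same channel depolarizes).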

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing): “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation: that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, on the local variations in the concentration of its components, and on whether sulfate is added to or removed from them (and if so, by what and how quickly).


So how does the new work support this idea? It involves a structure that I’ve never talked about — the lysosome. It’s basically a bag of at least 40 digestive and synthetic enzymes inside the cell, which chops up anything brought to it (e.g. bacteria). Mutations in these enzymes cause all sorts of (fortunately rare) neurologic diseases — mucopolysaccharidoses, lipid storage diseases (Gaucher’s, Farber’s); the list goes on and on.

So I’ve always thought of the structure as a Pandora’s box best kept closed. I always thought of lysosomes as confined to the cell body, but according to this paper they’re also found in dendrites. Even more interesting, a rather unphysiologic treatment of neurons in culture (depolarization by high potassium) causes the lysosomes to migrate to the neuronal membrane and release their contents outside. One enzyme released is cathepsin B, a proteolytic enzyme which chops up TIMP1 outside the cell. So what? TIMP1 is an endogenous inhibitor of the Matrix MetalloProteinases (MMPs), which break down the extracellular matrix. So what?

Are neurons ever depolarized by natural events? Constantly — by synaptic transmission, by action potentials, and spontaneously. So here we have a way that neuronal activity can cause holes in the extracellular matrix — the holes in the punch cards, if you will.

Speculation? Of course. But that’s the fun of reading this stuff. As Mark Twain said, “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Tensors yet again

In the grad school course on abstract algebra I audited a decade or so ago, the instructor began the discussion of tensors by saying they were the hardest thing in mathematics. Unfortunately I had to drop that section of the course due to a family illness. I’ve written about tensors before, and about their baffling notation and nomenclature. The following is yet another way to look at them which may help with the confusing terminology.

First, this post will assume you have a significant familiarity with linear algebra. I’ve written a series of posts on the subject if you need a brush up — pretty basic — here’s a link to the first post —
All of them can be found here —

Here’s another attempt to explain them — which will give you the background on dual vectors you’ll need for this post —

To the physicist, tensors really represent a philosophical position — namely, that there are shapes and processes external to us which are real and independent of the way we choose to describe them mathematically, e.g. by locating their various parts and physical extents in some sort of coordinate system. That approach is described here —

Zee in one of his books defines tensors as something that transforms like a tensor (honest to god). Neuenschwander in his book says “What kind of a definition is that supposed to be, that doesn’t tell you what it is that is changing.”

The following approach may help — it’s from an excellent book which I’ve not completely gotten through — “An Introduction to Tensors and Group Theory for Physicists” by Nadir Jeevanjee.

He says that tensors are just functions that take a bunch of vectors and return a number (either real or complex). It’s a good idea to keep the volume tensor (which takes 3 vectors and returns a real number) in mind while reading further. The tensor function has just one other constraint — it must be multilinear (linear in each of its arguments separately). Amazingly, it turns out that this is all you need.

Tensors are named by the number of vectors (written V) and dual vectors (written V*) they massage to produce the number. This is fairly weird when you think about it. We don’t name sin(x) by x, because that wouldn’t distinguish it from the zillion other real valued functions of a single variable.

So an (r, s) tensor is named by the ordered array of its operands — (V, …, V, V*, …, V*), with r V’s first and s V*’s next in the array. The array tells you what the tensor function must be.

How can Jeevanjee get away with this? Amazingly, multilinearity is all you need. Recall that the great thing about the linearity of any function or operator on a vector space is that ALL you need to know is what the function or operator does to the basis vectors of the space. The effect on ANY vector in the vector space then follows by linearity.

Going back to the volume tensor, whose operand is (V, V, V) with the vector space for all 3 V’s being R^3, how many basis vectors are there for V x V x V? There are 3 for each V, meaning that there are 3^3 = 27 possible basis vectors. You probably remember the formula for the volume enclosed by 3 vectors (call them u, v and w). The 3 components of u are u1, u2 and u3.

The volume tensor calculates volume as ( u crossproduct v ) dot product w.
Writing the calculation out:

Volume = u1*v2*w3 – u1*v3*w2 + u2*v3*w1 – u2*v1*w3 + u3*v1*w2 – u3*v2*w1. What about the other 21 combinations of basis vectors? The tensor is zero on all of them, but they are all present in the tensor.
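The volume tensor can be written as a short Python function (a toy sketch of the formula just given, not anything from Jeevanjee’s book):

```python
def volume(u, v, w):
    """The volume tensor: the scalar triple product (u x v) . w,
    written out over its 6 nonzero basis combinations."""
    return (u[0]*v[1]*w[2] - u[0]*v[2]*w[1]
          + u[1]*v[2]*w[0] - u[1]*v[0]*w[2]
          + u[2]*v[0]*w[1] - u[2]*v[1]*w[0])

# a 1 x 2 x 3 box spanned by scaled standard basis vectors
print(volume([1, 0, 0], [0, 2, 0], [0, 0, 3]))   # 6

# multilinearity: doubling one argument doubles the (signed) volume
u, v, w = [1, 2, 3], [4, 5, 6], [7, 8, 10]
assert volume([2*x for x in u], v, w) == 2 * volume(u, v, w)
```

Feeding it the three standard basis vectors in any order returns +1 or -1 (or 0 when one is repeated) — exactly the 6 nonzero components among the 27.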

While any tensor manipulating two vectors can be expressed as a square matrix, the volume tensor with 27 components clearly cannot be. So don’t confuse tensors with matrices (as I did).

Note that the formula for volume implicitly used the usual standard orthogonal coordinates for R^3. What would it be in spherical coordinates? You’d have to use a change of basis matrix to (r, theta, phi). Actually you’d need 3 of them, as basis vectors in V x V x V are 3-place arrays. This gives rise to the horrible subscript and superscript notation of matrices by which tensors are usually defined. So rather than memorizing how tensors transform, you can derive things like

T_i’^j’ = (A^k_i’) * (B^j’_l) * T_k^l, where _ before a letter means subscript, ^ before a letter means superscript, A^k_i’ is a change of basis matrix, B^j’_l is its inverse, and the Einstein summation convention is used. Note that the change of basis formula for the components of the volume tensor would have 3 such matrices, not the two shown here.
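Here is a toy numerical check of a change of basis rule, using a (2,0) tensor (a bilinear form) so that only two matrices are needed. This is my own example, not one from the books mentioned; the point is that the transformed components, fed the transformed vector components, still produce the same number:

```python
# For a (2,0) tensor with component matrix G, a change of basis with
# matrix A (columns = new basis vectors in old coordinates) transforms
# components as G' = A^T G A, while vector components transform by the
# inverse of A. The number the tensor computes must not change.

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def inv2(X):                      # inverse of a 2 x 2 matrix
    det = X[0][0]*X[1][1] - X[0][1]*X[1][0]
    return [[ X[1][1]/det, -X[0][1]/det],
            [-X[1][0]/det,  X[0][0]/det]]

def apply(X, v):                  # matrix times column vector
    return [X[0][0]*v[0] + X[0][1]*v[1], X[1][0]*v[0] + X[1][1]*v[1]]

def bilinear(G, u, v):            # the tensor as a function: G(u, v)
    return sum(G[i][j]*u[i]*v[j] for i in range(2) for j in range(2))

G = [[2.0, 1.0], [1.0, 3.0]]      # components in the old basis
A = [[1.0, 1.0], [0.0, 1.0]]      # change of basis matrix

G_new = matmul(transpose(A), matmul(G, A))   # G' = A^T G A
u, v = [1.0, 2.0], [3.0, 4.0]
u_new, v_new = apply(inv2(A), u), apply(inv2(A), v)

print(bilinear(G, u, v), bilinear(G_new, u_new, v_new))  # both print 40.0
```

The components change, the vector components change, but the number (40.0 here) does not — which is the whole content of the transformation law.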

One further point. You can regard a dual vector as a function that takes a vector and returns a number — so a dual vector is a (1,0) tensor. Similarly, you can regard a vector as a function that takes a dual vector and returns a number, so vectors are (0,1) tensors. So vectors and dual vectors are actually tensors as well.

The distinction between describing what a tensor does (e.g. its function) and what its operands actually are caused me endless confusion. You write a tensor operating on a dual vector as a (0, 1) tensor, but a dual vector is a (1,0) considered as a function.

None of this discussion applies to the tensor product, which is an entirely different (but similar) story.

Hopefully this helps

Tidings of great joy

One of the hardest things I had to do as a doc was watch an infant girl waste away and die of infantile spinal muscular atrophy (Werdnig Hoffmann disease) over the course of a year. Something I never thought would happen (a useful treatment) may be at hand. The actual papers are not available yet, but two placebo-controlled trials with a significant number of patients (84 and 121) were stopped early because trial monitors (not in any way involved with the patients) found the treated group doing much, much better than the placebo group. A news report of the trials is available [ Science vol. 354 pp. 1359 – 1360 ’16 (16 December) ].

The drug, a modified RNA molecule (details not given), binds to another RNA which codes for the missing protein. In what follows a heavy dose of molecular biology will be administered to the reader. Hang in there; this is incredibly rational therapy based on serious molecular biological knowledge. Though daunting, therapies of this sort for other neurologic diseases (Huntington’s Chorea, FrontoTemporal Dementia) are currently under study.

If you want to start at ground zero, I’ve written a series which should tell you enough to get started. Start here —
and follow the links to the next two.

Here we go if you don’t want to plow through all three

Our genes occur in pieces. Dystrophin is the protein mutated in the commonest form of muscular dystrophy. The gene for it is 2,220,233 nucleotides long, but dystrophin contains ‘only’ 3,685 amino acids, not the 740,000+ amino acids the gene could specify. What happens? The whole gene is transcribed into an RNA of this enormous length, then 78 distinct segments of RNA (called introns) are removed by a gigantic multimegadalton machine called the spliceosome, and the 79 segments actually coding for amino acids (these are the exons) are linked together and the RNA sent on its way.
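The coding arithmetic is easy to check (a quick sketch of my own: 3 nucleotides per codon, and note that 2,220,233/3 comes to about 740,000 codons; stop codons and untranslated regions are ignored for simplicity):

```python
# The arithmetic behind the dystrophin numbers: a gene of 2,220,233
# nucleotides could in principle encode ~740,000 amino acids at 3
# nucleotides per codon, yet the protein contains only 3,685.

gene_length = 2_220_233          # nucleotides in the dystrophin gene
protein_length = 3_685           # amino acids in dystrophin

max_codons = gene_length // 3
coding_fraction = 3 * protein_length / gene_length

print(f"theoretical capacity: {max_codons:,} amino acids")
print(f"fraction of the gene that codes for protein: {coding_fraction:.2%}")
```

In other words, roughly 99.5% of this gene is intron, all of it transcribed and then thrown away by the spliceosome.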

All this was unknown in the 70s and early 80s when I was running a muscular dystrophy clinic and taking care of these kids. Looking back, it’s miraculous that more of us don’t have muscular dystrophy; there is so much that can go wrong with a gene this size, let alone transcribing and correctly splicing it to produce a functional protein.

One final complication — alternate splicing. The spliceosome removes introns and splices the exons together. But sometimes exons are skipped or one of several exons is used at a particular point in a protein. So one gene can make more than one protein. The record holder is something called the Dscam gene in the fruitfly which can make over 38,000 different proteins by alternate splicing.

There is nothing worse than watching an infant waste away and die. That’s what Werdnig Hoffmann disease is like, and I saw one or two cases during my years at the clinic. It is also called infantile spinal muscular atrophy. We all have two genes for the same crucial protein (called, unimaginatively, SMN). Kids who have the disease have mutations in one of the two genes (called SMN1). Why isn’t the other gene protective? It codes for the same sequence of amino acids (but using different synonymous codons). What goes wrong?

[ Proc. Natl. Acad. Sci. vol. 97 pp. 9618 – 9623 ’00 ] Why is SMN2 (the centromeric copy, i.e. the copy closest to the middle of the chromosome, which is normal in most patients) not protective? It has a single translationally silent nucleotide difference from SMN1 in exon 7 (i.e. the difference doesn’t change the amino acid coded for). This disrupts an exonic splicing enhancer and causes exon 7 skipping, leading to abundant production of a shorter isoform (SMN2delta7). Thus even though both genes code for the same protein, only SMN1 actually makes the full protein.

More background. The molecular machine which removes the introns is called the spliceosome. It’s huge, containing 5 RNAs (called small nuclear RNAs, aka snRNAs) along with 50 or so proteins, with a total molecular mass, again, of around 2,500,000 Daltons. Think about it, chemists. Design 50 proteins and 5 RNAs, probably 200,000+ atoms in all, so they all come together forming a machine to operate on other monster molecules — such as the mRNA for dystrophin alluded to earlier. Hard for me to believe this arose by chance, but current opinion has it that way.

Splicing out introns is a tricky process which is still being worked on. Mistakes are easy to make, and different tissues will splice the same pre-mRNA in different ways. All this happens in the nucleus before the mRNA is shipped outside where the ribosome can get at it.

The papers [ Science vol. 345 pp. 624 – 625, 688 – 693 ’14 ] describe a small molecule which acts on the spliceosome to increase the inclusion of SMN2 exon 7. It does appear to work in patient cells and mouse models of the disease, even reversing weakness.

I was extremely skeptical when I read the papers two years ago. Why? Because just about every protein we make is spliced (except histones), and any molecule altering the splicing machinery seems almost certain to produce effects on many genes, not just SMN2. If it really works, these guys should get a Nobel.

Well, I shouldn’t have been so skeptical. I can’t say much more about the chemistry of the drug (nusinersen) until the papers come out.

Fortunately, the couple (a cop and a nurse) took the 25% risk of another child with the same thing and produced a healthy infant a few years later.

A new way to study protein dynamics

“Fields of 1,000,000 Volts/centiMeter are dangerously large from a laboratory point of view” — true enough, but fields of that magnitude are only about ten times the potential difference/distance ratio found across the plasma membrane of all our cells. Here’s why, after a bit of background.

We wouldn’t exist without the membranes enclosing our cells, which are largely hydrocarbon. Chemists know that fatty acids have one end (the carboxyl group) which dissolves in water while the rest is pure hydrocarbon. The classic is stearic acid — 18 carbons in a straight chain with a carboxyl group at one end. 3 molecules of stearic acid are esterified to glycerol in beef tallow (forming a triglyceride), which the pioneers hydrolyzed to make soap. Saturated fatty acids of 18 carbons or more are solid at body temperature (soap certainly is), but cellular membranes are fairly fluid, and proteins embedded in them move around pretty quickly. Why? Because most fatty acids of more than 16 carbons found in biologic membranes have double bonds in them. Guess whether they are cis or trans. Hint: the isomer used packs less well into crystals — you’ve got it, all the double bonds found in oleic acid (18 carbons, 1 double bond) and arachidonic acid (20 carbons, 4 double bonds) are cis, and this keeps membranes fluid. The cis double bond essentially puts a kink in the hydrocarbon chain, making it much more difficult to pack into a liquid-crystal-type structure with all the hydrocarbon chains stretched out. Then there’s cholesterol, which makes up 1/5 or so of membranes by weight — it also breaks up the tendency of fatty acid hydrocarbon chains to align with each other, because it doesn’t pack with them very well. So cholesterol is another fluidizer of membranes.

How thick is the cellular membrane? If you figure the hydrocarbon chains of a saturated fatty acid stretched out as far as they can go, you get 1.54 Angstroms * cosine (30 degrees) = 1.33 Angstroms/carbon — times 16 = 21 Angstroms. Now double that because cellular membranes are lipid bilayers meaning that they are made of two layers of hydrocarbons facing each other, with the hydrophilic ends (carboxyls, phosphate groups) pointing outward. So we’re up to 42 Angstroms of thickness for the hydrocarbon part of the membrane. Add another 10 Angstroms or so for the hydrophilic ends (which include things like serine, choline etc. etc.) and you’re up to about 60 Angstroms thickness for the membrane (which is usually cited as 70 Angstroms — I don’t know why).

Because the electric field across our membranes is huge. The potential difference across our cell membranes is 70 milliVolts — 70 x 10^-3 Volts. 70 Angstroms is 7 nanoMeters — 7 x 10^-9 meters. Divide 70 x 10^-3 Volts by 7 x 10^-9 meters and you get a field of 10,000,000 Volts/meter, or 100,000 Volts/centiMeter.
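The back-of-the-envelope numbers can be verified in a few lines of Python (my own sketch of the arithmetic, using the same assumptions as above: a 16-carbon chain, the 1.33 Angstrom/carbon projection from the tetrahedral chain geometry, and roughly 10 Angstroms per set of head groups):

```python
import math

# Membrane thickness from chain geometry, then the field across it.

cc_bond = 1.54                                      # C-C bond length, Angstroms
per_carbon = cc_bond * math.cos(math.radians(30))   # ~1.33 Angstroms per carbon
leaflet = per_carbon * 16                           # one 16-carbon chain, ~21 A
bilayer = 2 * leaflet + 2 * 10                      # two leaflets + head groups

voltage = 70e-3                                     # membrane potential, Volts
thickness_m = 70e-10                                # ~70 Angstroms, in meters
field = voltage / thickness_m                       # Volts per meter

print(f"hydrocarbon leaflet: {leaflet:.1f} Angstroms")
print(f"estimated bilayer:   {bilayer:.1f} Angstroms")
print(f"membrane field:      {field:.1e} V/m")
```

The bilayer estimate lands around 60 Angstroms and the field around 10^7 Volts/meter, matching the figures in the text.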

So our membrane proteins live and function quite nicely in this intense electric field. Which brings us to [ Nature vol. 540 pp. 400 – 405 ’16 ], which zaps protein crystals with electric fields of this order of magnitude and then does Xray crystallography at various intervals to watch how the protein backbone and side chains move. The technique is called Electric Field stimulated Xray crystallography (EF-X). Unlike in solution, where the proteins are all in slightly different conformations, in a crystal the starting line is the same for every molecule, as is the finish line.

The electric pulse durations range from 50 – 500 nanoSeconds (50 – 500 * 10^-9 seconds). The Xray pulse for doing the crystallography lasts all of 100 picoSeconds (100 * 10^-12 seconds). By timing the delay between the electric pulse and the Xray pulse you can watch the protein move in time in response to the electric pulse. Hardly physiologic, but it seems likely that protein motions will follow the path of least resistance, which should tell us which conformations are closest in energy to the energy minimum found in proteins. Snapshots are collected 50, 100 and 200 nanoSeconds after pulse onset. The crystals tolerated hundreds of 100 – 500 nanoSecond megaVolt/centiMeter electric field pulses. But even 50 nanoSeconds is pretty long where protein dynamics is concerned, as bond vibrations are as fast as a few femtoSeconds (10^-15 seconds). An electric field of this strength exerts a force of 10^

The technology enabling this is fantastic, but it is quite similar in concept to what the late Nobelist Ahmed Zewail was doing. Of course his work was even faster, looking at chemical reactions on the femtoSecond time scale (10^-15 seconds). So as the year draws to a close, it’s nice to see his ideas live on, even if he didn’t.


Madonna and Child


Another amicus curiae brief

The Soros Open Society Foundation today filed an amicus curiae brief supporting the suit of the Cleveland Indians for an 8th (and hopefully final) game of the 2016 World Series. Attorney Justin Cloaca notes that just as there should be a living Constitution, the rules for baseball and the World Series should change with the times. “Why should the rules of a game established less than 30 years after the U. S. Constitution remain fixed in stone?” said Cloaca. For details of the original suit please see —

Another amicus curiae brief is said to be in the works by Rancid Fecus, attorney for the Cubs. The President elect is said to be tweeting on the subject.

Cleveland demands 8th game — World series not over

The Cleveland Indians filed a lawsuit in Federal Court today demanding an 8th game, alleging that the World Series was really not over because both teams had scored the same number of runs in the first 7. Attorney Bryce Dyspareunia stated that the result was unjust even though the Cubs had won 4 of the first 7 games. To be fair, it should be decided by the greatest number of runs scored over time, he said. The Clinton campaign has joined the suit as an amicus curiae.