Category Archives: Chemistry (relatively pure)

Paul Schleyer 1930 – 2014, A remembrance

Thanks Peter for your stories and thoughts about Dr. Schleyer (I never had the temerity to even think of him as Paul). Hopefully budding chemists will read it, so they realize that even department chairs and full profs were once cowed undergraduates.

He was a marvelous undergraduate advisor, only 7 years out from his own Princeton degree when we first came in contact with him, and a formidable physical and intellectual presence even then. His favorite opera recording, which he somehow found a way to get into the lab, was Don Giovanni’s scream as he realized he was to descend into Hell. I never had the courage to ask him if the scars on his face were from dueling.

We’d work late in the lab, then go out for pizza. In later years, I ran into a few Merck chemists who found him a marvelous consultant. However, back in the 50’s, we’d be working late, and he’d make some crack about industrial chemists being at home while we were working, the high point of their day being mowing their lawn.

I particularly enjoyed reading his papers when they came out in Science. To my mind he finally settled things about the nonclassical nature of the norbornyl cation — here it is, with the crusher being the very long C – C bond lengths.

Science vol. 341 pp. 62 – 64 ’13 contains a truly definitive answer (hopefully) along with a lot of historical background, should you be interested. An X-ray crystallographic structure of a norbornyl cation (complexed with an Al2Br7- anion) at 40 Kelvin shows symmetrical disposition of the 3 carbons of the nonclassical cation. It was tricky, because the cation is so symmetric that it rotates within the crystal at higher temperatures. The bond lengths between the 3 carbons are 1.78 to 1.83 Angstroms — far longer than the classic length of 1.54 Angstroms for a C – C single bond.

I earlier wrote a post on why I don’t read novels, the coincidences being so extreme that if you put them in a novel, no one would believe them and would throw the book away — it involves the Princeton chemistry department and my later field of neurology — here’s the link http://luysii.wordpress.com/2014/11/13/its-why-i-dont-read-novels/

Here’s yet another. Who would have thought that years later, as a neurologist, I’d be using a molecule Paul had synthesized to treat Parkinson’s disease. He did an incredibly elegant synthesis of adamantane, using only the product of a Diels-Alder reaction, hydrogenating it with a palladium catalyst and adding AlCl3. An amazing synthesis and an amazing coincidence.

As Peter noted, he was an extremely productive chemist and theoretician. He should have been elected to the National Academy of Sciences, but never was. It has been speculated that his wars with H. C. Brown made him some powerful enemies. I’ve heard through the grapevine that it rankled him greatly. But virtue is its own reward, and he had plenty of that.

R. I. P. Dr. Schleyer

Paul Schleyer (1930 – 2014) R. I. P.

This is a guest post by Peter J. Reilly, Anson Marston Distinguished Professor Emeritus, Department of Chemical and Biological Engineering, Iowa State University, fellow Schleyer undergraduate advisee Princeton 1958 – 1960, friend, and all around good guy.

I’ll follow with my own reminiscences in another post. Obits tend to be polished and bland, ‘speak no evil of the dead’ and all that, but Peter captures the flavor of what it was actually like to be Paul’s advisee and exposed to his formidable presence.

“Following are my thoughts on our undergraduate chemistry advisor at Princeton, Paul von Ragué Schleyer, who died on November 21 of this year at 84.

Paul was an amazingly prolific chemist. He started publishing in 1956, soon after he arrived at Princeton from receiving a Ph.D. at Harvard, where he studied from 1951 to 1954 after earning an A.B. from Princeton. He was still publishing at the time of his death. In fact, he had promised to deliver a book chapter over this Thanksgiving weekend. Over his latter years at Princeton, in the early 1970’s, his annual production of papers averaged in the mid-20’s. He kept up the same pace at Universität Erlangen-Nürnberg in Germany from 1976 to 1992. From 1993 to 1997, when he had appointments at both Erlangen-Nürnberg and the University of Georgia, his annual output was in the 40’s. When fully at Georgia, after 1997, he gradually slacked off, publishing only 16 papers this year. Altogether he had 1277 publications, whereas a really productive chemist with ready access to students and postdoctoral fellows hopes to have 200–250 in a full career.

Another way to consider Paul’s productivity is by how often his work has been cited (partly by his own later papers but mainly by the papers of others). A 1981–1997 survey reported that he was the third most cited chemist in the world. Altogether his works were cited over 75,000 times. His h-index is 126 in the Thomson Reuters Web of Science database, meaning that he had 126 publications that were each cited at least 126 times, an astounding number.
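For those unfamiliar with the h-index, the definition is simple enough to sketch in a few lines of code; a minimal example (the citation counts here are made up, not Schleyer’s actual record):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Hypothetical record: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers have at least 4 citations)
```

An h-index of 126 thus means that the 126th most-cited paper, by itself, has at least 126 citations — which is why the number is so hard to inflate with a few lucky hits.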

I first met Paul in the fall of 1958, two years after I arrived at Princeton. I needed to find someone to supervise my junior paper, a ritual common to all Princeton undergraduates doing A.B. degrees. I had originally approached Edward Taylor, a somewhat older chemistry professor, but when I told him that I was somewhat interested in becoming a chemical engineer, he directed me to Paul. Paul was 28 at the time, but he seemed older to me (I supposed all professors did). He was tall, with dark black hair combed to the side over his forehead. He had a scar on his cheek and talked very precisely.

My father met him once and came away asking if he had been a German U-boat captain during WWII.

I must say that I spent a sizable part of the next two years being terrified of Paul. He had a laboratory on the second floor of the southwest corner of Frick Chemical Laboratory. The benches were full of glassware, to the point where it seemed hard to do any research. However, the item that spooked me the most was a cauldron full of boiling black liquid, supposedly mainly nitric acid, in which dirty glassware was submerged to be cleaned.

Paul gave me a project to research the incidence and properties of the benzyne intermediate, a short-lived benzene ring with a triple bond. This was my first exposure to research beyond short papers for classes, and I suppose that I did well enough for him to invite me to do a senior thesis with him. The topic was to determine the mechanism by which an obscure organic chemical rearranged itself. The title of the thesis that came from a year’s dogged effort was “A Study of the Cleavage Products of 2,5-Dimethyltetrahydropyran-2-Methanol”, but what I mainly made was black goop. Paul’s written comments to me started with the statement that he was sorry that the problem was so intractable, but at least he liked my writeup. I still have the thesis (and the junior paper). Back in 2007 I was contacted by the Princeton University Library, which had lost its copy. They asked if I could send them mine so that they could microfilm it, which of course I did.

I remember that at least four of us chemistry majors spent much of our senior years in a very large and empty laboratory working on our theses under Paul’s direction. I must say that the various chemicals that I worked on smelled a lot better than the ones that you dealt with. I used to take weekend dates up to the laboratory to show them where I worked, and I would open one of your very small tubules, I think containing butyl mercaptan. Its smell still permeated the room on Mondays. (Editor’s note — people used to look at their shoes when I walked into the eating club after working with n-Bu-SH or similar compounds).

Despite my lack of success on my thesis, I learned from it how to do research. My chemical engineering major professor at the University of Pennsylvania was hard to contact, so much of my doctoral dissertation was done without much supervision. Between the two experiences, I had a good foundation for my 46 years of being a chemical engineering professor, six at the University of Nebraska-Lincoln and 40 at Iowa State University after four years at DuPont in southern New Jersey.

I only saw Paul four times after leaving Princeton. The first was when I returned there for a short visit. The second time was at my 25th Princeton reunion, when one of his daughters was graduating. A third time was when he visited the Iowa State chemistry department to present a prestigious lecture. The fourth and last time was in 2005 when I visited the University of Georgia for a meeting. Paul spent about 30 minutes telling me about his latest research, of which I understood very little.

I will close with a little story. When I told Paul during my senior year that I wanted to go to graduate school in chemical engineering, he asked why I wanted to become a pipe-fitter. Probably because of my chemistry background at Princeton, my research was always chemistry- and biology-based, first in fermentations at Penn and Nebraska (with a detour to chloro- and fluorocarbons at DuPont), and then in enzymes and carbohydrates at Iowa State. I moved more and more into computation late in my career, and when Paul visited around 2002 I told him that I would be sending a manuscript to the Journal of Computational Chemistry, which he and Lou Allinger at Georgia had founded and were still editing. Being Paul, he immediately said in his deep voice that it had better be good. As it turned out, it sailed through the review process with hardly a blip, and I followed it up with a second manuscript a few years later.

So, we were fortunate to have Paul as a mentor during our formative years. He certainly wasn’t the sweetest guy, but he was brilliant, and hopefully a very small part of his brilliance rubbed off on us.”

Peter J. Reilly

How one membrane protein senses mechanical stress

Chemists (particularly organic chemists) think they’re pretty smart. So see if you can figure out how a membrane embedded ion channel opens due to mechanical stress. The answer is to be found in last week’s Nature (vol. 516 pp. 126 – 130 4 Dec ’14).

As you probably know, membrane embedded proteins get stuck there because they contain multiple alpha helices with mostly hydrophobic amino acids allowing them to snuggle up to the hydrocarbon tails of the lipids making up the lipid bilayer of the biological membrane.

The channel in question is called TRAAK, known to open in response to membrane tension. It conducts potassium ions. The voltage-sensitive potassium channels have 24 transmembrane alpha helices, 6 in each of the four subunits comprising the tetramer. TRAAK has only 8. As is typical of all ion channels, the helices act like staves of a barrel, shifting slightly to open the pore.

In this case, with little membrane tension, the helices separate slightly, permitting a 10-carbon lipid tail ( CH3 – [ CH2 – CH2 – CH2 ]3 – ) to enter the barrel, occluding the pore. Tension on the membrane tends to decrease the packing of the hydrocarbon tails of the lipid bilayer, pulling the plug out of the pore. Neat!!!! This is a completely different mechanism from the voltage-sensing helix of the 24-transmembrane voltage-sensitive potassium channels, and one that no one predicted, despite all their intelligence.

Trigger warning. This paper is by MacKinnon who won the Nobel for his work on potassium channels. He used antibodies to stabilize ion channels so they could be studied by crystallography. Take them out of the membrane and they denature. Why the warning? In his Nobel work he postulated an alpha helical hairpin paddle extending outward from the channel core into the membrane’s lipid interior. It was both hydrophobic and charged, and could move in response to transmembrane voltage changes.

This received vigorous criticism from others, who felt it was an artifact produced by the use of the antibody to stabilize the protein for crystallography.

Why the warning? Because MacKinnon also used an antibody to stabilize TRAAK.

The whole idea of membrane tension brings up the question of just how strong van der Waals forces really are. Biochemists and molecular biologists tend to think of hydrophobic forces as primarily entropic, pushing hydrophobic parts of a protein together because water would otherwise have to exquisitely structure itself to solvate them (i.e. lowering the entropy greatly). Here, however, the ‘pull’ if you wish is due to the mutual attraction of the hydrophobic lipid side chains for each other, which I would imagine is pretty weak.

I’m sure that these forces have been measured, and years ago I enjoyed reading about Langmuir’s work putting what was basically soap on a substrate, and forming a two dimensional gas which actually followed something resembling P * Area = n * R * T. So the van der Waals forces have been measured, I just don’t know what they are. Does anyone out there?

Nonetheless, some very slick (physical and organic) chemistry.

Could Alzheimer’s disease be a problem in physics rather than chemistry?

Two seemingly unrelated recent papers could turn our attention away from chemistry and toward physics as the basic problem in Alzheimer’s disease. God knows we could use better therapy for Alzheimer’s disease than we have now. Any new way of looking at Alzheimer’s, no matter how bizarre, should be welcome. The approaches via the Abeta peptide and the enzymes producing it just haven’t worked, and they’ve really been tried — hard.

The first paper [ Proc. Natl. Acad. Sci. vol. 111 pp. 16124 – 16129 ’14 ] made surfaces with arbitrary degrees of roughness, using the microfabrication technology for making computer chips. We’re talking roughness that’s almost smooth — bumps ranging from 320 Angstroms to 800. Surfaces could be made quite regular (as in a diffraction grating) or irregular. Scanning electron microscopic pictures were given of the various degrees of roughness.

Then they plated cultured primitive neuronal cells (PC12 cells) on surfaces of varying degrees of roughness. The optimal roughness for PC12 cells to act more like neurons was an Rq of 320 Angstroms. Interestingly, this degree of roughness is identical to that found on healthy astrocytes (assuming that culturing them or getting them out of the brain doesn’t radically change them). Hippocampal neurons in contact with astrocytes of this degree of roughness also began extending neurites. It’s important to note that the roughness was made with something neurons and astrocytes never see — silica colloids of varying sizes and shapes.

Now is when it gets interesting. The plaques of Alzheimer’s disease have surface roughness of around 800 Angstroms. Roughness of the artificial surface of this degree was toxic to hippocampal neurons (lower degrees of roughness were not). Normal brain has a roughness with a median at 340 Angstroms.

So in some way neurons and astrocytes can sense the amount of roughness in the surfaces they are in contact with. How do they do this? Chemically it comes down to Piezo1 ion channels, a story in themselves [ Science vol. 330 pp. 55 – 60 ’10 ]. These are membrane proteins with between 24 and 36 transmembrane segments. They form tetramers with a huge molecular mass (1.2 megaDaltons) and 120 or more transmembrane segments. They are huge (2,100 – 4,700 amino acids). They can sense mechanical stress, and are used by endothelial cells to sense how fast blood is flowing (or not flowing) past them. Expression of these genes in mechanically insensitive cells makes them sensitive to mechanical stimuli.

The paper is somewhat ambiguous on whether expressing piezo1 is a function of neuronal health or sickness. The last paragraph appears to have it both ways.

So as we leave paper #1, we note that neurons can sense the physical characteristics of their environment, even when it’s something as un-natural as a silica colloid. Inhibiting Piezo1 activity with a spider venom toxin (GsMTx4) destroys this ability. The right degree of roughness is healthy for neurons; the wrong degree kills them. Clearly the work should be repeated with other colloids of a different chemical composition.

The next paper [ Science vol. 342 pp. 301, 316 – 317, 373 – 377 ’13 ] talks about the plumbing system of the brain, which is far more active than I’d ever imagined. The glymphatic system is a network of microscopic fluid-filled channels. Cerebrospinal fluid (CSF) bathes the brain. It flows into the substance of the brain (the parenchyma) along arteries, and the interstitial fluid between the cellular elements it exchanges with flows out of the brain along the draining veins.

This work was able to measure the amount of flow through the glymphatics by injecting tracer into the CSF and/or the brain parenchyma. The important point is that during sleep these channels expand by 60%, and beta amyloid is cleared twice as quickly. Arousal of a sleeping mouse decreases the influx of tracer by 95%. So this amazing paper finally comes up with an explanation of why we spend 1/3 of our lives asleep — to flush toxins from the brain.

If you wish to read (a lot) more about this system — see an older post from when this paper first came out — http://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

So what is the implication of these two papers for Alzheimer’s disease?

    First

The surface roughness of the plaques (800 Angstroms) may physically hurt neurons. The plaques themselves are much larger than that, or Alzheimer would never have seen them with the light microscope at his disposal.

    Second

The size of the plaques themselves may gum up the brain’s plumbing system.

The tracer work should certainly be repeated with mouse models of Alzheimer’s, far removed from human pathology though they may be.

I find this extremely appealing because it gives us a new way of thinking about this terrible disorder. In addition it might explain why cognitive decline almost invariably accompanies aging, and why Alzheimer’s disease is a disorder of the elderly.

Next, assume this is true. What would be the therapy? Getting rid of the senile plaques in and of itself might be therapeutic. It is nearly impossible for me to imagine a way that this could be done without harming the surrounding brain.

Before we all get too excited it should be noted that the correlation between senile plaque burden and cognitive function is far from perfect. Some people have a lot of plaque (there are ways to detect them antemortem) and normal cognitive function. The work also leaves out the second pathologic change seen in Alzheimer’s disease, the neurofibrillary tangle which is intracellular, not extracellular. I suppose if it caused the parts of the cell containing them to swell, it too could gum up the plumbing.

As far as I can tell, putting the two papers together conceptually might even be original. Prasad Shastri, the author of the first paper, was very helpful discussing some points about his paper by Email, but had not heard of the second and is looking at it this weekend.

It’s why I don’t read novels

You can’t make up stuff like this. A nephrologist whom I consulted about our daughter-in-law’s bout with pre-eclampsia, asked me about her brother-in-law when she found out I’d been a neurologist. Long out of practice, I called someone in my call group still practicing, only to find out that his son (who was just a little guy when we practiced) is finishing up his PhD in Chemistry from Princeton. Put this in a novel and no one would believe it.

The reason for the post is that Princeton’s new Chemistry building, built to the tune of .25 gigaDollars, isn’t working very well. According to his son, not all the hoods are functional. There are other dysfunctionalities as well: lack of appropriate space, etc. etc. All is not lost however; the building is so beautiful (if non-functional) that it is used as a movie set from time to time. Any comments from present or past inhabitants of the new building?

Here’s the old post.

Princeton Chemistry Department — the new Oberlin

When I got to grad school in the fall of ’60, most of the other grad students were from East and West coast schools (Princeton, Bryn Mawr, Smith, Barnard, Wheaton, Cal Tech etc. etc.), but there were two guys from Oberlin (Dave Sigman, Rolf Sternglanz), which seemed strange until I looked into it. Oberlin, of course, is a great school for music, but neither of them was a musician. They told me of Charles Martin Hall, Oberlin alum and inventor of the Hall process for aluminum — still used today. He profited greatly from his invention, founding what is today Alcoa, running and owning a lot of it. He gave tons of money to the Oberlin Chemistry department, which is why it was so good back then (and probably still is).

What does this have to do with Princeton? Princeton’s Charles Hall is emeritus prof Ted Taylor, whose royalties on Alimta (Pemetrexed), an interesting molecule with what looks like guanine, glutamic acid, benzoic acid and ethane all nicely stitched together to form an antifolate, built the new Princeton Chemistry building to the tune of over 1/4 of a billion dollars. Praise be, the money didn’t go into any of the current academic fads (you know what they are), but good old chemistry.

An article in the 11 May “Princeton Alumni Weekly” (yes, weekly) about the new building contains several other interesting assertions. The old chemistry building is blamed for a number of sins, e.g., “no longer conducive to the pursuit of cutting-edge science in the 21st century”, “hard to recruit world-class faculty and grad students to what was essentially a rabbit warren” etc. etc. Funny, but we thought the place was pretty good back then.

When the University president (Shirley Tilghman, a world-class molecular biologist prior to assuming the presidency — just Google imprinting) describes Princeton Chemistry as ‘one of Princeton’s “least-strong departments” you know there are problems. Is this really true? Maybe the readership knows.

Grad school applications are now coming from the ‘very top applicants’ — is it that easy to rate them? This is said not to have been true 10 years ago — wonder how those who entered the department back then and now have PhD’s feel about this.

Then there is a picture of a young faculty member “Abby Doyle” who joined the department 6 years after graduating Harvard in 2002. As I recall there was a lot of comment on this in the earlier incarnation of ChemBark a few years ago.

The new building is supposed to inspire collaboration because of its open space and 75-foot atrium, ‘few walls between the labs and glass is everywhere’. Probably the article was written by an architect. The implication is that all you need for good science is a good building, and that bad buildings can inhibit good science. Anyone out there whose science has blossomed once they were put in a glass cage?

It’s interesting to note that the undergraduate catalog for ’57 – ’58 has Dr. Taylor basically in academic slobbovia — he’s only teaching Chem 304a, a one semester course “Elementary Organic Chemistry for Basic Engineers” (not even advanced engineers)

Comments anyone?

No longer looking under the lamppost

Time flies. It’s been over 5 years since I wrote http://luysii.wordpress.com/2009/09/25/are-biochemists-looking-under-the-lamppost/, essentially a long complaint that biochemists (and by implication drug chemistry and drug discovery) were looking at the molecules they knew and loved rather than searching for hidden players in the biochemistry and physiology of the cell.

Things are much better now. Here are 3 discoveries from the recent past, some of which should lead to druggable targets.

#1 FAHFA — a possible new way to treat diabetes. Interested? Take a long chain saturated fatty acid such as stearic acid (C18:0). Now put a hydroxyl group somewhere on the chain (the body has found ways to put them at different sites) — this gives you a hydroxy fatty acid (HFA). Next esterify this hydroxyl group with another fatty acid and you have a Fatty Acid ester of a Hydroxy Fatty Acid (an FAHFA if you will). So what?

Well, fat makes them and releases them into the blood, making them yet another adipokine and further cementing fat’s status as an endocrine organ. Once released, FAHFAs stimulate insulin release, and increase glucose uptake in the fat cell when they activate GPR120 (the long chain fatty acid receptor).

A variety of fatty acids can form the ester, one of which is palmitic acid (C16:0), forming Palmitic Hydroxy Stearic Acid (PAHSA), which binds to GPR120. If that weren’t enough, PAHSAs are anti-inflammatory — interested? Read more in [ Cell vol. 159 pp. 238 – 239, 318 – 332 ’14 ]. I don’t think the enzymes forming HFAs are known, and I’m willing to bet that there are other HFAs out there.

#2 Maresin 1 (7S,14S-dihydroxy-docosa-4Z,8E,10E,12Z,16Z,19Z-hexaenoic acid to you) is the way you start making Specialized Proresolving Mediators (SPMs). Form an epoxide of one of the double bonds and then do an SN2 ring opening with a thiol (glutathione for one), forming what they call a sulfido-conjugate mediator. It appears to be one of the many ways that inflammation is resolved. It helps resolve E. coli infection in mice at nanoMolar concentrations. SPMs limit further neutrophil recruitment and promote macrophage clearance of apoptotic cells and tissue debris. Wouldn’t you like to make a drug like that? Think of the specificity of the enzyme producing the epoxidation of just one of the 6 double bonds. Also a drug target. For details please see PNAS vol. 111 pp. E4753 – E4761 ’14.

#3 Up4A (Uridine Adenosine Tetraphosphate) — as you might expect it’s an agonist at some purinergic receptors (P2X1, P2Y2, P2Y4), causing vasoconstriction, and at others (P2Y1), vasodilatation. It is released into the colon when enteric neurons are stimulated. Another player whose existence we had no idea about. Certainly we have all the GI and vasodilating drugs we need. If nothing else it will be a pharmacological tool. Again the enzyme making it isn’t known — yet another possible drug target. For details see PNAS vol. 111 pp. 15821 – 15826 ’14.

There is a lot more in these 3 papers than can be summarized here.

Who knows what else is out there, and what it is doing? Glad to see people are starting to look.

Watching electrons being pushed

Would any organic chemist like to watch electrons moving around in a molecule? Is the Pope Catholic? Attosecond laser pulses permit this [ Science vol. 346 pp. 336 – 339 ’14 ]. An attosecond is 10^-18 seconds. The characteristic vibrational motion of atoms in chemical bonds occurs at the femtosecond scale (10^-15 seconds). An electron takes 150 attoseconds to orbit a hydrogen atom [ Nature vol. 449 p. 997 ’07 ]. Of course this is macroscopic thinking at the quantum level, a particular type of doublethink indulged in by chemists all the time — http://luysii.wordpress.com/2009/12/10/doublethink-and-angular-momentum-why-chemists-must-be-adept-at-it/.

The technique involves something called pump-probe spectroscopy. Here was the state of play 15 years ago [ Science vol. 283 pp. 1467 – 1468 ’99 ]. Using lasers it is possible to blast in a short-duration (picoseconds, 10^-12, to femtoseconds, 10^-15) pulse of energy (the pump pulse) at one frequency (usually ultraviolet so one type of bond can be excited) and then to measure absorption at another frequency (usually infrared) a short time later (to measure vibrational energy). This allows you to monitor the formation and decay of reactive intermediates produced by the pump (as the time between pump and probe is varied systematically).

Time has marched on and we now have lasers capable of producing attosecond pulses of electromagnetic energy (e.g. light).

A single optical cycle of visible light of 6000 Angstrom wavelength lasts 2 femtoseconds. To see this just divide the wavelength (6 * 10^3 Angstroms * 10^-10 meters/Angstrom = 6 * 10^-7 meters) by the speed of light (3 * 10^8 meters/second). To get down to the attosecond range you must use light of a shorter wavelength (e.g. the ultraviolet or vacuum ultraviolet).
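The same arithmetic as a trivial sanity check in code:

```python
c = 3e8                  # speed of light, meters/second
wavelength = 6000e-10    # 6000 Angstroms, converted to meters
period = wavelength / c  # duration of one optical cycle, in seconds
print(period)            # about 2e-15 seconds, i.e. 2 femtoseconds
```

Halve the wavelength and you halve the cycle time, which is why attosecond work pushes into the ultraviolet and beyond.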

The paper didn’t play around with toy molecules like hydrogen. They blasted phenylalanine with UV light. Here’s what they said “Here, we present experimental evidence of ultrafast charge dynamics in the amino acid phenylalanine after prompt ionization induced by isolated attosecond pulses. A probe pulse then produced a doubly charged molecular fragment by ejection of a second electron, and charge migration manifested itself as a sub-4.5-fs oscillation in the yield of this fragment as a function of pump-probe delay. Numerical simulations of the temporal evolution of the electronic wave packet created by the attosecond pulse strongly support the interpretation of the experimental data in terms of charge migration resulting from ultrafast electron dynamics preceding nuclear rearrangement.”

OK, they didn’t actually see the electron dynamics but calculated them to explain their results. It’s the Born-Oppenheimer approximation writ large.

You are unlikely to be able to try this at home. It’s more physics than I know, but here’s the experimental setup. ” In our experiments, we used a two-color, pump-probe technique. Charge dynamics were initiated by isolated XUV sub-300-as pulses, with photon energy in the spectral range between 15 and 35 eV and probed by 4-fs, waveform-controlled visible/near infrared (VIS/NIR, central photon energy of 1.77 eV) pulses (see supplementary materials).”

Now we know why hot food tastes different

An absolutely brilliant piece of physical chemistry explained a puzzling biologic phenomenon that organic chemistry was powerless to illuminate.

First, a fair amount of background

Ion channels are proteins present in the cell membranes of all our cells, but in neurons they are responsible for maintaining an electrical potential across the membrane, which has the ability to change abruptly, causing a nerve cell to fire an impulse. Functionally, ligand-activated ion channels are pretty easy to understand. A chemical binds to them and they open and the neuron fires (or a muscle contracts — same thing). The channels don’t let everything in, just particular ions. Thus one type of channel which binds acetylcholine lets in sodium (not potassium, not calcium), which causes the cell to fire impulses. The GABA[A] receptor (the ion channel for gamma amino butyric acid) lets in chloride ions (and little else), which inhibits the neuron carrying it from firing. (This is why the benzodiazepines and barbiturates are anticonvulsants.)

Since ion channels are full of amino acids, some of which have charged side chains, it’s easy to see how a change in electrical potential across the cell membrane could open or shut them.

By the way, the potential is huge although it doesn’t seem like much. It is usually given as 70 milliVolts (inside negatively charged, outside positively charged). Why is this a big deal? Because the cell membrane is quite thin — just 70 Angstroms, which is 7 nanoMeters (7 x 10^-9 meters). Divide 7 x 10^-2 volts (70 milliVolts) by 7 x 10^-9 meters and you get a field of 10,000,000 Volts/meter. The electric field across our membranes is huge.
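The division is worth checking, since it's easy to drop a power of ten:

```python
voltage = 70e-3       # membrane potential: 70 milliVolts, in volts
thickness = 70e-10    # membrane thickness: 70 Angstroms, in meters
field = voltage / thickness
print(field)          # 10,000,000 volts per meter
```

Ten million volts per meter: air breaks down at about 3 million, so the lipid bilayer is quietly withstanding a field that would arc across a spark gap.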

Now for the main course. We easily sense hot and cold. This is because we have a bunch of different ion channels which open in response to different temperatures. All this without neurotransmitters binding to them, or changes in electric potential across the membrane.

People had searched for some particular sequence of amino acids common to the channels to no avail (this is the failure of organic chemistry).

In a brilliant paper, entropy was found to be the culprit. Chemists are used to considering entropy effects (primarily on reaction kinetics, but on equilibria as well). What happens is that in the open state a large number of hydrophobic amino acids are exposed to the extracellular space. To accommodate them (i.e. to solvate them), the water around them must become more ordered, decreasing entropy. This, of course, is why oil and water don’t mix.

As all the chemists among us should remember, the equilibrium constant has components due to enthalpy (heat) and due to entropy.

The entropy term must be multiplied by the temperature, which is where the temperature sensitivity of the equilibrium constant (in this case open channel/closed channel) comes in. Remember changes in entropy and enthalpy work in opposite directions —

delta G (Gibbs free energy) = delta H (enthalpy) - T * delta S (entropy)

Here’s the paper [ Cell vol. 158 pp. 977 – 979, 1148 – 1158 ’14 ]. They note that if a large number of buried hydrophobic groups become exposed to water on a conformational change in the ion channel, an increased heat capacity should be produced due to water ordering to solvate the hydrophobic side chains. This should confer a strong temperature dependence on the equilibrium constant for the reaction. Exposing just 20 hydrophobic side chains in a tetrameric channel should do the trick. The side chains don’t have to be localized in a particular area (which is why organic chemists and biochemists couldn’t find a stretch of amino acids conferring cold or heat sensitivity — it didn’t matter where the hydrophobic amino acids were, as long as there were enough of them, somewhere).

In some way this entwines enthalpy and entropy, making temperature-dependent activation U-shaped rather than monotonic. So such a channel is in principle both hot activated and cold activated, with the position of the U along the temperature axis determining which activation mode is seen at experimentally accessible temperatures.
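A minimal numerical sketch of how a large heat capacity change produces the U shape. The numbers (dH0, dCp, and a reference temperature where the channel is half open) are illustrative assumptions, not values fitted in the paper:

```python
import math

# Sketch of the heat-capacity mechanism for temperature-sensitive channels.
# All parameter values below are assumed for illustration only.
T0 = 298.0          # reference temperature, Kelvin
dH0 = 20e3          # enthalpy of opening at T0, J/mol (assumed)
dS0 = dH0 / T0      # entropy chosen so delta G(T0) = 0 (assumed)
dCp = 10e3          # large positive heat capacity change, J/(mol*K) (assumed)
R = 8.314           # gas constant, J/(mol*K)

def dG(T):
    # Kirchhoff's law: both delta H and delta S depend on T through dCp
    dH = dH0 + dCp * (T - T0)
    dS = dS0 + dCp * math.log(T / T0)
    return dH - T * dS

def open_fraction(T):
    K = math.exp(-dG(T) / (R * T))   # open/closed equilibrium constant
    return K / (1 + K)

# delta G is maximal near T0, so the channel opens at BOTH temperature extremes
for T in (278.0, 288.0, 298.0, 308.0, 318.0):
    print(T, round(open_fraction(T), 3))
```

With dCp positive (hydrophobic side chains newly exposed to water), delta G is concave in temperature, so the open fraction dips near T0 and rises on both sides, exactly the U-shaped activation described above.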

All very nice, but how many beautiful theories have we seen get crushed by ugly facts. If they really understood what is going on with temperature sensitivity, they should be able to change a cold activated ion channel to a heat activated one (by mutating it). If they really, really understood things, they should be able to take a run of the mill temperature INsensitive ion channel and make it temperature sensitive. Amazingly, the authors did just that.

Impressive. Read the paper.

This harks back to the days when theories of organic reaction mechanisms were tested by building molecules to test them. When you made a molecule that no one had seen before and predicted how it would react you knew you were on to something.

Thrust and Parry about memory storage outside neurons.

First the post of 23 Feb ’14 discussing the paper (between *** and &&& in case you’ve read it already)

Then some of the rather severe criticism of the paper.

Then some of the reply to the criticisms

Then a few comments of my own, followed by yet another old post about the chemical insanity neuroscience gets into when they apply concepts like concentration to very small volumes.

Enjoy
***
Are memories stored outside of neurons?

This may turn out to be a banner year for neuroscience. Work discussed in the following older post is the first convincing explanation of why we need sleep that I’ve seen. https://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just how (chemically) and where memories might be stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material needed to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories, seriously flawed in my opinion, of how and where memory has been thought to be stored in the brain over the past half century.

#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory) where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3, … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory.

#2 CaMKII — more plausible. There’s lots of it in the brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. For it to start doing so, calcium levels inside the neuron must rise. The enzyme is complicated, comprising 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The other 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true, but in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the brain to the chloride ion. When it flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

We’ve concentrated so much on neurons, that we may have missed something big. In a similar vein, the function of sleep may be to wash neurons free of stuff built up during the day outside of them.

&&&

In the 5 September ’14 Science (vol. 345 p. 1130) 6 researchers from Finland, Case Western Reserve and U. California (Davis) basically say that the paper conflicts with fundamental thermodynamics so severely that “Given these theoretical objections to their interpretations, we choose not to comment here on the experimental results”.

In more detail “If Cl− were initially in equilibrium across a membrane, then the mere introduction of immobile negative charges (a passive element) at one side of the membrane would, according to their line of thinking, cause a permanent change in the local electrochemical potential of Cl−, thereby leading to a persistent driving force for Cl− fluxes with no input of energy.” This essentially accuses the authors of inventing a perpetual motion machine.

Then in a second letter, two more researchers weigh in (same page) — “The experimental procedures and results in this study are insufficient to support these conclusions. Contradictory results previously published by these authors and other laboratories are not referred to.”

The authors of the original paper don’t take this lying down. On the same page they discuss the notion of the Donnan equilibrium and say they were misinterpreted.

The paper, and the 3 letters all discuss the chloride concentration inside neurons which they call [Cl-]i. The problem with this sort of thinking (if you can call it that) is that it extrapolates the notion of concentration to very small volumes (such as a dendritic spine) where it isn’t meaningful. It goes on all the time in neuroscience. While between any two small rational numbers there is another, matter can be sliced only so thinly without getting down to the discrete atomic level. At this level concentration (which is basically a ratio between two very large numbers of molecules e.g. solute and solvent) simply doesn’t apply.
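To see why, it helps to put numbers on a small neuronal compartment. The volume below (0.1 femtoliter, roughly the size of a dendritic spine) and the 1 microMolar concentration are illustrative assumptions, not values from any of the papers:

```python
# How many molecules does '1 microMolar' actually mean in a dendritic spine?
avogadro = 6.022e23            # molecules per mole
conc = 1e-6                    # 1 microMolar, in moles/liter
spine_volume = 0.1e-15         # 0.1 femtoliter, in liters (assumed spine volume)
molecules = conc * avogadro * spine_volume
print(molecules)               # roughly 60 molecules in the whole spine
```

With only a few dozen molecules present, the statistical averaging that makes “concentration” meaningful has largely disappeared.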

Here’s a post on the topic from a few months ago. It contains a link to another post showing that even Nobelists have chemical feet of clay.

More chemical insanity from neuroscience

The current issue of PNAS contains a paper (vol. 111 pp. 8961 – 8966, 17 June ’14) which uncritically quotes some work done back in the 80’s and flatly states that synaptic vesicles http://en.wikipedia.org/wiki/Synaptic_vesicle have a pH of 5.2 – 5.7. Such a value is meaningless. Here’s why.

A pH of 5 means that there are 10^-5 Moles of H+ per liter or 6 x 10^18 actual ions/liter.

Synaptic vesicles have an ‘average diameter’ of 40 nanoMeters (400 Angstroms to the chemist). Most of them are nearly spherical. So each has a volume of

4/3 * pi * (20 * 10^-9)^3 = 33,510 * 10^-27 = 3.4 * 10^-23 liters. 20 rather than 40 because volume involves the radius.

So each vesicle contains 6 * 10^18 * 3.4 * 10^-23 = 20 * 10^-5 = .0002 ions.

This is similar to the chemical blunders on concentration in the nano domain committed by a Nobelist. For details please see — http://luysii.wordpress.com/2013/10/09/is-concentration-meaningful-in-a-nanodomain-a-nobel-is-no-guarantee-against-chemical-idiocy/

Didn’t these guys ever take Freshman Chemistry?

Addendum 24 June ’14

Didn’t I ever take it? John wrote the following this AM:

Please check the units in your volume calculation. With r = 10^-9 m, then V is in m^3, and m^3 is not equal to L. There’s 1000 L in a m^3.
Happy Anniversary by the way.

To which I responded

Ouch! You’re correct of course. However even with the correction, the result comes out to 0.2 free protons (or H3O+) per vesicle, a result that still makes no chemical sense. There are many more protons in the vesicle, but they are buffered by the proteins and the transmitters contained within.
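Redoing the arithmetic with John’s unit correction (1 m^3 = 1000 liters) gives the 0.2 free protons per vesicle mentioned above:

```python
import math

# Free protons in a 40 nanoMeter synaptic vesicle at a nominal pH of 5.
radius = 20e-9                          # meters (half the 40 nm diameter)
volume_m3 = (4.0 / 3.0) * math.pi * radius**3
volume_L = volume_m3 * 1000.0           # the fix: 1 m^3 = 1000 liters
protons_per_liter = 1e-5 * 6.022e23     # pH 5 means 10^-5 moles H+ per liter
protons = protons_per_liter * volume_L
print(protons)                          # about 0.2 free protons per vesicle
```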

Breaking benzene

Industrially, breaking benzene’s aromaticity in order to add an alkyl group via the Friedel-Crafts reaction requires fairly hairy conditions — http://www.chemguide.co.uk/organicprops/arenes/fc.html — e.g. pressure to keep everything liquid and temperatures of 130 – 160 Centigrade.

A remarkable paper [ Nature vol. 512 pp. 413 – 415 ’14 ] uses a titanium hydride catalyst and mild conditions (22 Centigrade, room temperature, for a little over a day) to form, from benzene, a titanium methylcyclopentenyl complex which could be isolated and studied spectroscopically.

The catalyst itself is rather beautiful: 3 titaniums, 6 hydrides and 3 C5Me4SiMe3 groups.

Benzene is the aromaticity workhorse of introductory organic chemistry. If you hydrogenate cyclohexene, 120 kiloJoules is given off. Hydrogenating benzene should give off 360 kiloJoules (3 x 120), but because of aromatic stabilization only 208 is given off — implying that aromaticity lowers the energy of benzene by 152 kiloJoules. Clayden uses kiloJoules. I’m used to kiloCalories. To get them divide kiloJoules by 4.19.
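The arithmetic, with the kiloJoules-to-kiloCalories conversion thrown in:

```python
# Aromatic stabilization energy of benzene from heats of hydrogenation.
cyclohexene_kJ = 120.0                  # hydrogenating one double bond
benzene_expected = 3 * cyclohexene_kJ   # hypothetical 'cyclohexatriene'
benzene_observed = 208.0                # what is actually measured
stabilization_kJ = benzene_expected - benzene_observed
stabilization_kcal = stabilization_kJ / 4.19
print(stabilization_kJ, round(stabilization_kcal, 1))  # 152 kJ, about 36.3 kcal
```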

What other magic does transition metal catalysis have in store?
