Category Archives: Chemistry (relatively pure)

Thrust and Parry about memory storage outside neurons.

First the post of 23 Feb ’14 discussing the paper (between *** and &&& in case you’ve read it already)

Then some of the rather severe criticism of the paper.

Then some of the reply to the criticisms

Then a few comments of my own, followed by yet another old post about the chemical insanity neuroscience gets into when it applies concepts like concentration to very small volumes.

Are memories stored outside of neurons?

This may turn out to be a banner year for neuroscience. Work discussed in the following older post is the first convincing explanation of why we need sleep that I’ve seen.

An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology at the end throws out (almost as an afterthought) an interesting idea of just how chemically and where memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories, seriously flawed in my opinion, from the past half century of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines, where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3 … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory.

#2 CaMKII — more plausible. There’s lots of it in brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To first start doing so calcium levels inside the neuron must rise. The enzyme is complicated, being comprised of 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate, and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The other 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true; in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the brain to the chloride ion. When it flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.
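Whether chloride flow through a GABA-opened channel inhibits or excites comes down to the chloride equilibrium (Nernst) potential relative to the resting potential. Here is a minimal sketch in Python; the chloride concentrations are illustrative round numbers I chose, not values from the paper.

```python
import math

def nernst_mV(c_out, c_in, z=-1, T=310.0):
    """Nernst equilibrium potential in mV: E = (RT / zF) * ln(c_out / c_in)."""
    R, F = 8.314, 96485.0          # J/(mol K), C/mol
    return 1000.0 * (R * T) / (z * F) * math.log(c_out / c_in)

# Illustrative chloride concentrations in mM (assumed, not measured)
adult = nernst_mV(c_out=120.0, c_in=5.0)      # low internal Cl-, mature neuron
immature = nernst_mV(c_out=120.0, c_in=25.0)  # higher internal Cl-, developing neuron
print(f"E_Cl adult: {adult:.0f} mV, immature: {immature:.0f} mV")
```

With internal chloride low, the chloride equilibrium potential (about -85 mV here) sits below a typical resting potential of -65 mV, so opening the channel lets chloride in and inhibits; raise internal chloride and the same channel (about -42 mV here) lets chloride out and depolarizes.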

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

We’ve concentrated so much on neurons, that we may have missed something big. In a similar vein, the function of sleep may be to wash neurons free of stuff built up during the day outside of them.


In the 5 September ’14 Science (vol. 345 p. 1130) 6 researchers from Finland, Case Western Reserve and U. California (Davis) basically say that the paper conflicts with fundamental thermodynamics so severely that “Given these theoretical objections to their interpretations, we choose not to comment here on the experimental results”.

In more detail “If Cl− were initially in equilibrium across a membrane, then the mere introduction of immobile negative charges (a passive element) at one side of the membrane would, according to their line of thinking, cause a permanent change in the local electrochemical potential of Cl−, thereby leading to a persistent driving force for Cl− fluxes with no input of energy.” This essentially accuses the authors of inventing a perpetual motion machine.

Then in a second letter, two more researchers weigh in (same page) — “The experimental procedures and results in this study are insufficient to support these conclusions. Contradictory results previously published by these authors and other laboratories are not referred to.”

The authors of the original paper don’t take this lying down. On the same page they discuss the notion of the Donnan equilibrium and say they were misinterpreted.

The paper, and the 3 letters all discuss the chloride concentration inside neurons which they call [Cl-]i. The problem with this sort of thinking (if you can call it that) is that it extrapolates the notion of concentration to very small volumes (such as a dendritic spine) where it isn’t meaningful. It goes on all the time in neuroscience. While between any two small rational numbers there is another, matter can be sliced only so thinly without getting down to the discrete atomic level. At this level concentration (which is basically a ratio between two very large numbers of molecules e.g. solute and solvent) simply doesn’t apply.
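The breakdown of ‘concentration’ at small volumes is easy to see with a little arithmetic. A sketch, assuming a dendritic spine head of about half a micron across (a typical order of magnitude, not a measured value):

```python
import math

N_A = 6.022e23   # Avogadro's number

def molecules_in_volume(conc_molar, volume_liters):
    """Expected number of solute molecules at a bulk concentration in a volume."""
    return conc_molar * N_A * volume_liters

# Model a spine head as a sphere of 0.5 micron diameter (illustrative size)
r_m = 0.25e-6                                  # radius in meters
v_L = (4 / 3) * math.pi * r_m ** 3 * 1000.0    # m^3 -> liters
n_mM = molecules_in_volume(1e-3, v_L)          # at 1 millimolar
n_uM = molecules_in_volume(1e-6, v_L)          # at 1 micromolar
print(f"1 mM: {n_mM:.0f} molecules; 1 uM: {n_uM:.0f} molecules")
```

At micromolar levels there are only about 40 molecules in the whole spine, so random fluctuations swamp anything you might want to call a concentration.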

Here’s a post on the topic from a few months ago. It contains a link to another post showing that even Nobelists have chemical feet of clay.

More chemical insanity from neuroscience

The current issue of PNAS contains a paper (vol. 111 pp. 8961 – 8966, 17 June ’14) which uncritically quotes some work done back in the 80’s and flatly states that synaptic vesicles have a pH of 5.2 – 5.7. Such a value is meaningless. Here’s why.

A pH of 5 means that there are 10^-5 Moles of H+ per liter or 6 x 10^18 actual ions/liter.

Synaptic vesicles have an ‘average diameter’ of 40 nanoMeters (400 Angstroms to the chemist). Most of them are nearly spherical. So each has a volume of

4/3 * pi * (20 * 10^-9)^3 = 33,510 * 10^-27 = 3.4 * 10^-23 liters. 20 rather than 40 because volume involves the radius.

So each vesicle contains 6 * 10^18 * 3.4 * 10^-23 = 20 * 10^-5 = .0002 ions.

This is similar to the chemical blunders on concentration in the nano domain committed by a Nobelist. For details please see —

Didn’t these guys ever take Freshman Chemistry?

Addendum 24 June ’14

Didn’t I ever take it ? John wrote the following this AM

Please check the units in your volume calculation. With r = 10^-9 m, then V is in m^3, and m^3 is not equal to L. There’s 1000 L in a m^3.
Happy Anniversary by the way.

To which I responded

Ouch ! You’re correct of course. However even with the correction, the results come out to 0.2 free protons (or H3O+) per vesicle, a result that still makes no chemical sense. There are many more protons in the vesicle, but they are buffered by the proteins and the transmitters contained within.
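John’s unit correction is easy to verify. A sketch of the corrected arithmetic:

```python
import math

N_A = 6.022e23                            # ions per mole
h_per_liter = 10 ** -5.0 * N_A            # free H3O+ ions per liter at pH 5

r_m = 20e-9                               # vesicle radius: half of 40 nanoMeters
v_m3 = (4 / 3) * math.pi * r_m ** 3       # volume in cubic meters
v_L = v_m3 * 1000.0                       # the fix: 1 m^3 = 1000 liters
free_protons = h_per_liter * v_L
print(f"{v_L:.1e} liters, {free_protons:.2f} free protons per vesicle")
```

Roughly one fifth of a free proton per vesicle, which is exactly the chemically absurd result being complained about.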

Breaking benzene

Industrially, breaking benzene’s aromaticity in order to add an alkyl group using the Friedel–Crafts reaction requires fairly hairy conditions — e.g. pressure to keep everything liquid and temperatures of 130 – 160 Centigrade.

A remarkable paper [ Nature vol. 512 pp. 413 - 415 '14 ] uses a titanium hydride catalyst and mild conditions (22 C — room temperature) for a little over a day to form a titanium methylcyclopentenyl complex from benzene, which could be isolated and studied spectroscopically.

The catalyst itself is rather beautiful. 3 titaniums, 6 hydrides and 3 C5Me4SiMe3 groups.

Benzene is the aromaticity workhorse of introductory organic chemistry. If you hydrogenate cyclohexene 120 kiloJoules is given off. Hydrogenating benzene should give off 360 kiloJoules, but because of aromatic stabilization only 208 is given off — implying that aromaticity lowers the energy of benzene by 152 kiloJoules. Clayden uses kiloJoules. I’m used to kiloCalories. To get them divide kiloJoules by 4.19.
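The arithmetic is simple enough to lay out explicitly (using the slightly more precise 4.184 kJ per kcal for the conversion):

```python
KJ_PER_KCAL = 4.184                      # conversion factor

cyclohexene = 120.0                      # kJ/mol released hydrogenating one C=C
naive_benzene = 3 * cyclohexene          # "three isolated double bonds" estimate
observed_benzene = 208.0                 # kJ/mol actually released
stabilization = naive_benzene - observed_benzene

print(f"{stabilization:.0f} kJ/mol = {stabilization / KJ_PER_KCAL:.1f} kcal/mol")
```

About 36 kcal/mol of aromatic stabilization, in the kiloCalorie units I’m used to.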

What other magic does transition metal catalysis have in store?

A very UNtheoretical approach to cancer diagnosis

We have tons of different antibodies in our blood. Without even taking mutation into account we have 65 heavy chain genes, 27 diversity segments, and 6 joining regions for them (making 10,530 possibilities) — then there are 40 genes for the kappa light chains and 30 for the lambda light chains, or over 1,200 * 10,530. That’s without the mutations we know occur to increase antibody affinity. So the number of antibodies probably roaming around in our blood is over a billion (I doubt that anyone has counted them, just as no one has ever counted the neurons in our brain). Antibodies can bind to anything — sugars, fats — but we think of them as mostly binding to protein fragments.
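The combinatorics multiply out as follows (taking the “over 1,200” light chain figure above at face value):

```python
# Heavy chain combinations from the germline segment counts quoted above
heavy = 65 * 27 * 6                  # V genes x diversity segments x joining regions
light = 1200                         # the "over 1,200" light chain combinations
total = heavy * light
print(f"{heavy:,} heavy x {light:,} light = {total:,} antibodies")
# over 10 million before somatic hypermutation, which takes the
# real repertoire past a billion
```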

We also know that cancer is characterized by mutations, particularly in the genes coding for proteins. Many of these mutations have never been seen by the immune system, so they act as neoantigens. So what [ Proc. Natl. Acad. Sci. vol. 111 pp. E3072 - E3080 '14 ] did was make a chip containing 10,000 peptides, and see which of them were bound by antibodies in the blood.

The peptides were 20 amino acids long, with 17 randomly chosen amino acids, and a common 3 amino acid linker to the chip. While 10,000 seems like a lot of peptides, it is a tiny fraction (actually about 10^-18) of the 20^17 = 2^17 * 10^17 = 1.3 * 10^22 possible 17 amino acid peptides.
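The fraction of 17-mer sequence space the chip covers works out like this:

```python
on_chip = 10_000
space = 20 ** 17                   # 20 amino acids at each of 17 random positions
fraction = on_chip / space
print(f"space = {space:.2e} peptides, chip covers {fraction:.1e} of it")
```

About 8 parts in 10^19, i.e. roughly 10^-18 of the possibilities.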

The blood was first diluted 500x so blood proteins other than antibodies don’t bind significantly to the arrays. The assay is disease agnostic. The pattern of binding of a given person’s blood to the chip is called an immunosignature.

What did they measure? 20 samples from each of five cancer cohorts collected from multiple geographic sites and 20 noncancer samples. A reference immunosignature was generated. Then 120 blinded samples from the same diseases gave 95% classification accuracy. To investigate the breadth of the approach and test sensitivity, immunosignatures from 75% of over 1,500 historical samples (some over 10 years old) comprising 14 different diseases were used as training, then the other 25% were read blind with an accuracy of over 98% — not too impressive, they need to get another 1,500 samples. Once you’ve trained on 75% of the sample space, you’d pretty much expect the other 25% to look the same.

The immunosignature of a given individual consists of an overlay of the patterns from the binding signals of many of the most prominent circulating antibodies. Some are present in everyone, some are unique.

A 2002 reference (Molecular Biology of the Cell, 4th Edition) states that there are 10^9 antibodies circulating in the blood. How can you pick up a signature on 10K peptides from this? Presumably neoantigens from cancer cells elicit higher affinity antibodies than self-antigens. High affinity monoclonals can be diluted hundreds of times without diminishing the signal.

The next version of the immunosignature peptide microArray under development contains over 300,000 peptides.

The implication is that each cancer and each disease produces either different antigens and/or different B cell responses to common antigens.

Since the peptides are random, you can’t align the peptides in the signature to the natural proteomic space to find out what the antibody is reacting to.

It’s a completely atheoretical approach to diagnosis, but intriguing. I’m amazed that such a small sample of protein space can produce a significant binding pattern diagnostic of anything.

It’s worth considering just what a random peptide of 17 amino acids actually is. How would you make one up? Would you choose randomly, giving all 20 amino acids equal weight, or would you weight the probability of a choice by the percentage of that amino acid in the proteome of the tissue you are interested in? Do we have such numbers? My guess is that proline, glycine and alanine would be the most common amino acids — there is so much collagen around, and these 3 make up a high percentage of the amino acids in the various collagens we have (over 15 at least).

Physics to the rescue

It’s enough to drive a medicinal chemist nuts. General anesthetics are an extremely wide ranging class of chemicals, ranging from Xenon (which has essentially no chemistry) to a steroid, alfaxalone, with 56 atoms. How can they possibly have a similar mechanism of action? It’s long been noted that anesthetic potency is proportional to lipid solubility, so that’s at least something. Other work has noted that enantiomers of some anesthetics vary in potency, implying that they are interacting with something optically active (like proteins). However, you should note that sphingosine, which is part of many cell membrane lipids (gangliosides, sulfatides etc. etc.), contains two optically active carbons.

A great paper [ Proc. Natl. Acad. Sci. vol. 111 pp. E3524 - E3533 '14 ] notes that although Xenon has no chemistry it does have physics. It facilitates electron transfer between conductors. The present work does some quantum mechanical calculations purporting to show that Xenon can extend the highest occupied molecular orbital (HOMO) of an alpha helix so as to bridge the gap to another helix.

This paper shows that Xe, SF6, NO and chloroform cause rapid increases in the electron spin content of Drosophila. The changes are reversible. Anesthetic resistant mutant strains (mutant in what protein?) show a different pattern of spin responses to anesthetics.

So they think general anesthetics might work by perturbing the electronic structure of proteins. It’s certainly a fresh idea.

What is carrying the anesthetic induced increase in spin? Speculations are bruited about. They don’t think the spin changes are due to free radicals. They favor changes in the redox state of metals. Could it be due to electrons in melanin (the prevalent stable free radical in flies)? Could it be changes in spin polarization? Electrons traversing chiral materials can become spin polarized.

Why this should affect neurons isn’t known, and further speculations are given (1) electron currents in mitochondria, (2) redox reactions where electrons are used to break a disulfide bond.

The article notes that spin changes due to general anesthetics differ in anesthesia resistant fly mutants.

Fascinating paper, and Mark Twain said it best: “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Are Van der Waals interactions holding asteroids together?

A recent post of Derek’s concerned the very weak (high Kd) but very important interactions of proteins within our cells.

Most of this interaction is due to Van der Waals forces. Shape complementarity (e.g. steric factors) and dipole dipole interactions are also important.

Although important, Van der Waals interactions have always seemed like a lot of hand waving to me.

Well guess what, they are now hypothesized to be what is holding an asteroid together. Why are people interested in asteroids in the first place? [ Science vol. 338 p. 1521 '12 ] “Asteroids and comets .. reflect the original chemical makeup of the solar system when it formed roughly 4.5 billion years ago.”

[ Nature vol. 512 p. 118 '14 ] The Rosetta spacecraft reached the comet 67P/Churyumov-Gerasimenko after a 10 year journey, becoming the first spacecraft to rendezvous with a comet. It will take a lap around the sun with the comet and will watch as the comet heats up and releases ice in a halo of gas and dust. It is now flying triangles in front of the comet, staying 100 kiloMeters away. In a few weeks it will settle into a 30 kiloMeter orbit around the comet. It will attempt to place a lander (Philae) the size of a washing machine on its surface in November. The comet is 4 kiloMeters long.

[ Nature vol. 512 pp. 139 - 140, 174 - 176 '14 ] A kiloMeter sized near Earth asteroid called (29075) 1950 DA (how did they get this name?) is covered with sandy regolith (heterogeneous material covering solid rock; on earth it includes dust, soil, and broken rock). The asteroid rotates every 2+ hours, and it is so small that gravity alone can’t hold the regolith to its surface. An astronaut could scoop up a sample from its surface, but would have to hold on to the asteroid to avoid being flung off by the rotation. So the asteroid must have some degree of cohesive strength. The strength required to hold the rubble together is 64 pascals — about the pressure that a penny exerts on the palm of your hand. A Pascal is 1/101,325 of atmospheric pressure.
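The penny comparison roughly checks out. A sketch, where the penny’s mass and diameter are assumed figures (about 2.5 grams and 19 mm for a modern US cent):

```python
import math

g = 9.81                         # m/s^2
mass = 2.5e-3                    # kg, assumed mass of a modern US cent
diameter = 19.05e-3              # m, assumed diameter
area = math.pi * (diameter / 2) ** 2
penny_pressure = mass * g / area           # pascals
needed = 64.0                              # pascals, from the paper
print(f"penny: {penny_pressure:.0f} Pa vs {needed:.0f} Pa needed "
      f"({needed / 101325:.1e} atm)")
```

Same order of magnitude: a penny lying flat presses with a bit under 100 pascals.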

They think the strength comes from van der Waals interactions between small (1 – 10 micron) grains — making it fairy dust. It’s rather unsatisfying as no one has seen these particles.

The ultimate understanding of the large multi-protein and RNA machines (ribosome, spliceosome, RNA polymerase etc. etc. ) without which life would be impossible will involve the very weak interactions which hold them together. Along with permanent dipole dipole interactions, charge interactions and steric complementarity, the van der Waals interaction is high on anyone’s list.

Some include dipole dipole interactions as a type of van der Waals interaction. The really fascinating interaction is the London dispersion force. These are attractions seen between transient induced dipoles formed in the electron clouds surrounding each atomic nucleus.
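In practice London dispersion is usually modeled phenomenologically rather than derived: it shows up as the attractive r^-6 term of the Lennard-Jones potential. A sketch using commonly quoted argon parameters (the numbers are assumptions of mine, not from the papers above):

```python
def lennard_jones(r, epsilon=0.996, sigma=3.40):
    """12-6 Lennard-Jones potential in kJ/mol. The -(sigma/r)^6 term models
    the London dispersion attraction; the (sigma/r)^12 term the repulsion
    of overlapping electron clouds. Defaults are common argon parameters."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_min = 2 ** (1 / 6) * 3.40       # distance of the energy minimum, in Angstroms
depth = lennard_jones(r_min)      # equals -epsilon at the minimum
print(f"well at {r_min:.2f} Angstroms, depth {depth:.3f} kJ/mol")
```

Note how shallow the well is (about 1 kJ/mol, versus the 152 kJ/mol of aromatic stabilization discussed above), which is why these interactions only dominate when many of them act together.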

It’s time to attempt to surmount the schizophrenia which comes from trying to see how quantum mechanics gives rise to the macroscopic interactions between molecules which our minds naturally bring to matters molecular (with a fair degree of success).

Steric interactions come to mind first — it’s clear that an electron cloud surrounding molecule 1 should repel another electron cloud surrounding molecule 2. Shape complementarity should allow two molecules to get closer to each other.

What about the London dispersion forces, which are where most of the van der Waals interaction is thought to be? We all know that quantum mechanical molecular orbitals are static distributions of electron probability. They don’t fluctuate (at least the ones I’ve read about). If something is ‘transiently inducing a dipole’ in a molecule, it must be changing the energy level of the molecule somehow. All dipoles involve separation of charge, and this always requires energy. Where does it come from? The kinetic energy of the interacting molecules? Macroscopically it’s easy to see how a collision between two molecules could change the vibrational and/or rotational energy levels of a molecule. What does a collision between molecules look like in terms of the wave functions of both? I’ve never seen this. It has to have been worked out for single particle physics in accelerators, but that’s something I’ve never studied.

One molecule inducing a transient dipole in another, which then induces a complementary dipole in the first molecule, seems like a lot of handwaving to me. It also appears to be getting something for nothing contradicting the second law of thermodynamics.

Any thoughts from the physics mavens out there?

Old dog does new(ly discovered) tricks

One of the evolutionarily oldest enzyme classes is aaRS (for amino acyl tRNA synthetase). Every cell has them, including bacteria. Life as we know it wouldn’t exist without them. Briefly, they load tRNA with the appropriate amino acid. If this is Greek to you, look at the first 3 articles in

Amino acyl tRNA synthetases are enzymes of exquisite specificity, having to correctly match up 20 amino acids to some 61 different types of tRNAs. Mistakes in the selection of the correct amino acid occur about once every 10,000 to 100,000 times, and in the selection of the correct tRNA about once every 1,000,000. The lower tRNA error rate is due to the fact that tRNAs are much larger than amino acids, so more contacts between enzyme and tRNA are possible.
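Those error rates have a concrete implication for whole proteins. A sketch, assuming (as an approximation) that mischarging is independent at each residue:

```python
aa_error = 1e-4                        # ~1/10,000 wrong amino acid per charging
n_residues = 400                       # a mid-sized protein, for illustration
p_perfect = (1 - aa_error) ** n_residues
print(f"chance a {n_residues}-residue protein has no mischarged "
      f"residue: {p_perfect:.3f}")
```

Even at 1/10,000, roughly 1 in 25 copies of a 400-residue protein carries at least one wrong amino acid, which is why the proofreading matters.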

As the tree of life was ascended from bacteria over billions of years, 13 new protein domains which have no obvious association with aminoacylation have been added to AARS genes. More importantly, the additions have been maintained over the course of evolution (with no change in the primary function of the synthetase). Some of the new domains are appended to each of several synthetases, while others are specific to a single synthetase. The fact that they’ve been retained implies they are doing something that natural selection wants (teleology inevitably raises its ugly head with any serious discussion of molecular biology or cellular physiology — it’s impossible to avoid).

[ Science vol. 345 pp. 328 - 332 '14 ] looked at what mRNAs some 37 different AARS genes were transcribed into. Six different human tissues were studied this way. Amazingly, 79% of the 66 in-frame splice variants removed or disrupted the aaRS catalytic domain. The AARS for histidine had 8 in-frame splice variants, all of which removed the catalytic domain. 60/70 variants losing the catalytic domain (they call these catalytic nulls) retained at least one of the 13 domains added in higher eukaryotes. Some of the transcripts were tissue specific (e.g. present in some of the 6 tissues but not all).

Recent work has shown roles for specific AARSs in a variety of pathways — blood vessel formation, inflammation, immune response, apoptosis, tumor formation, p53 signaling. The process of producing a completely different function for a molecule is called exaptation — to contrast it with adaptation.

Up to now, when a given protein was found to have enzymatic activity, the book on what that protein did was closed (with the exception of the small GTPases). End of story. Yet here we have cells spending the metabolic energy to make an enzymatically dead protein (aaRSs are big — the one for alanine has nearly 1,000 amino acids). Teleology screams — what is it used for? It must be used for something! This is exactly where chemistry is silent. It can explain the incredible selectivity and sensitivity of the enzyme but not what it is ‘for’. We have crossed the Cartesian dualism between flesh and spirit.

Could this sort of thing be the tip of the iceberg? We know that splice variants of many proteins are common. Could other enzymes whose function was essentially settled once substrates were found, be doing the same thing? We may have only 20,000 or so protein coding genes, but 40,000, 60,000, . . . or more protein products of them, each with a different biological function.

So aaRSs are very old molecular biological dogs, who’ve been doing new tricks all along. We just weren’t smart enough to see them (’till now).

Novels may have only 7 basic plots, but molecular biology continues to surprise and enthrall.

Keep on truckin’ Dr. Schleyer

My undergraduate advisor (Paul Schleyer) has a new paper out in the 15 July ’14 PNAS pp. 10067 – 10072 at age 84+. Bravo ! He upends what we were always taught about electrophilic aromatic addition of halogens. The Arenium ion is out (at least in this example). Anyone with a smattering of physical organic chemistry can easily follow his mechanistic arguments for a different mechanism.

However, I wonder if any but the hardiest computational chemistry jock can understand the following (which is how he got his results) and decide if the conclusions follow.

Our Gaussian 09 (54) computations used the 6-311+G(2d,2p) basis set (55, 56) with the B3LYP hybrid functional (57–59) and the Perdew–Burke–Ernzerhof (PBE) functional (60, 61) augmented with Grimme et al.’s (62) D3 dispersion corrections (DFT-D3). Single-point energies of all optimized structures were obtained with the B2-PLYP [double-hybrid density functional of Grimme (63)] and applying the D3 dispersion corrections.

This may be similar to what happened with functional MRI in neuroscience, where you never saw the raw data, just the end product of the manipulations on the data (e.g. how the matrix was inverted and what manipulations of the inverted matrix were required to produce the pretty pictures shown). At least here, you have the tools used laid out explicitly.

Why marihuana scares me

There’s an editorial in the current Science concerning how very little we know about the effects of marihuana on the developing adolescent brain [ Science vol. 344 p. 557 '14 ]. We know all sorts of wonderful neuropharmacology and neurophysiology about delta-9 tetrahydrocannabinol (d9-THC). The point of the authors (the current head of the American Psychiatric Association, and the first director of the National (US) Institute of Drug Abuse) is that there are no significant studies of what happens to adolescent humans (as opposed to rodents) taking the stuff.

Marihuana would be the first mind-altering substance NOT to have serious side effects in a subpopulation of people using the drug — or just about any drug in medical use for that matter.

Any organic chemist looking at the structure of d9-THC (see the link) has to be impressed with what a lipid it is — 21 carbons, only 1 hydroxyl group, and an ether moiety. Everything else is hydrogen. Like most neuroactive drugs produced by plants, it is quite potent. A joint has only 9 milliGrams, and smoking undoubtedly destroys some of it. Consider alcohol, another lipid soluble drug. A 12 ounce beer with 3.2% alcohol content has 12 * 28.3 * .032 = 10.9 grams of alcohol — molecular mass 46 grams — so the dose is 11/46 moles. To get drunk you need more than one beer. Compare that to a dose of .009/300 moles of d9-THC.
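The molar comparison is worth making explicit. A sketch using ethanol’s molecular mass of 46 g/mol and d9-THC’s of about 314 g/mol:

```python
ethanol_g = 12 * 28.3 * 0.032      # grams of ethanol in a 12 oz, 3.2% beer
ethanol_mw = 46.07                 # g/mol for C2H5OH
thc_g = 0.009                      # ~9 milliGrams of d9-THC in a joint
thc_mw = 314.5                     # g/mol for C21H30O2

ethanol_mol = ethanol_g / ethanol_mw
thc_mol = thc_g / thc_mw
ratio = ethanol_mol / thc_mol
print(f"ethanol {ethanol_mol:.2f} mol vs THC {thc_mol:.1e} mol: {ratio:.0f}x")
```

Mole for mole, a single weak beer delivers several thousand times more drug molecules than a joint, which is one way of seeing how potent d9-THC’s receptor binding must be.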

As we’ve found out — d9-THC is so potent because it binds to receptors for it. Unlike ethanol which can be a product of intermediary metabolism, there aren’t enzymes specifically devoted to breaking down d9-THC. In contrast, fatty acid amide hydrolase (FAAH) is devoted to breaking down anandamide, one of the endogenous compounds d9-THC is mimicking.

What really concerns me about this class of drugs, is how long they must hang around. Teaching neuropharmacology in the 70s and 80s was great fun. Every year a new receptor for neurotransmitters seemed to be found. In some cases mind benders bound to them (e.g. LSD and a serotonin receptor). In other cases the endogenous transmitters being mimicked by a plant substance were found (the endogenous opiates and their receptors). Years passed, but the receptor for d9-thc wasn’t found. The reason it wasn’t is exactly why I’m scared of the drug.

How were the various receptors for mind benders found? You throw a radioactively labelled drug (say morphine) at a brain homogenate, and purify what it is binding to. That’s how the opiate receptors etc. etc. were found. Why did it take so long to find the cannabinoid receptors? Because they bind strongly to all the fats in the brain being so incredibly lipid soluble. So the vast majority of stuff bound wasn’t protein at all, but fat. The brain has the highest percentage of fat of any organ in the body — 60%, unless you considered dispersed fatty tissue an organ (which it actually is from an endocrine point of view).

This has to mean that the stuff hangs around for a long time, without any specific enzymes to clear it.

It’s obvious to all that cognitive capacity changes from childhood to adult life. All sorts of studies with large numbers of people have done serial MRIs of children and adolescents as they develop and age. Here are a few references to get you started [ Neuron vol. 72 pp. 873 - 884 '11, Proc. Natl. Acad. Sci. vol. 107 pp. 16988 - 16993 '10, vol. 111 pp. 6774 - 6779 '14 ]. If you don’t know the answer, think about the change in thickness of the cerebral cortex from age 9 to 20. Surprisingly, it gets thinner, not thicker. The effect happens later in the association areas thought to be important in higher cognitive function than in the primary motor or sensory areas. Paradoxical isn’t it? Based on animal work this is thought to be due to pruning of synapses.

So throw a long-lasting retrograde neurotransmitter mimic like d9-THC at the dynamically changing adolescent brain and hope for the best. That’s what the cited editorialists are concerned about. We simply don’t know and we should.

Having been in Cambridge when Leary was just getting started in the early 60’s, I must say that the idea of tune in, turn on and drop out never appealed to me. Most of the heavy marihuana users I’ve known (and treated for other things) were happy, but rather vague and frankly rather dull.

Unfortunately as a neurologist, I had to evaluate physician colleagues who got in trouble with drugs (mostly with alcohol). One very intelligent polydrug user MD, put it to me this way — “The problem is that you like reality, and I don’t”.

Further (physical) chemical elegance

If the chemical name phosphatidyl serine (PS) draws a blank, read the verbatim copy of a previous post under the *** to find out why it is so important to our existence. It is an ‘eat me’ signal when there is lots of it around, telling professional scavenger cells to engulf the cell showing lots of PS on its surface.

Life, as usual, is more complicated. There are a variety of proteins exposed on cell surfaces which bind to phosphatidyl serine. Not only that, but exposing just a little PS on the surface of a cell can trigger a protective immune response. Immune cells binding to just a little PS on the surface of another cell proliferate rather than eat the cell expressing the PS. This brings us to Proc. Natl. Acad. Sci. vol. 111 pp. 5526 – 5531 ’14, which explains how a given PS receptor (called TIM4) acts differently depending on how much PS is present.

Some PS receptors such as Annexin V have an essentially all or none response to PS: if they bind at all, they trigger a response in the cell carrying them. Not so for TIM4, which only reacts if there is a lot of PS around, leaving cells which express less PS alone. This allows those cells to function in the protective immune response.

So how does TIM4 do this? See if you can think of a mechanism before reading the rest.

In addition to the PS binding pocket, TIM4 has 4 peripheral basic residues in separate places. The basic residues are positively charged at physiologic pH and bind to the negatively charged phosphate group of phosphatidyl serine or to its carboxylate anion. The paper doesn’t explain how these basic residues avoid binding to the other phospholipids of the cell surface (such as phosphatidyl choline or sphingomyelin). It is conceivable that the basic side chains (arginine, lysine etc.) are so set up that they only bind to carboxylate anions and not phosphate anions (but this is a stretch). That would at least give them specificity for phosphatidyl serine as opposed to the other phospholipids present in both leaflets of the cell membrane. In any event, TIM4 will be triggered only if these groups also bind PS, leaving cells which show relatively little PS alone. Clever no?

For the cognoscenti, the Hill coefficient of TIM4 is 2 while that of Annexin V is 8 (describing more than explaining the all or none character of Annexin V binding).
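To see why a Hill coefficient of 8 behaves like a switch while a coefficient of 2 is a graded response, plug the two values into the Hill equation, theta = L^n / (K^n + L^n), where theta is fractional occupancy, L is the ligand (PS) concentration, and K is the half-saturation concentration. A minimal sketch (K and the concentrations chosen arbitrarily for illustration; only the Hill coefficients 2 and 8 come from the text):

```python
# Hill equation: fractional occupancy theta = L**n / (K**n + L**n)
# n is the Hill coefficient; higher n means a sharper, more
# switch-like transition around L = K.

def hill(L, K, n):
    """Fraction of receptor bound at ligand concentration L."""
    return L**n / (K**n + L**n)

K = 1.0  # arbitrary units; half-maximal binding at L = K for any n

# Compare responses just below and just above the midpoint
for n in (2, 8):
    low = hill(0.5 * K, K, n)   # half the midpoint concentration
    high = hill(2.0 * K, K, n)  # twice the midpoint concentration
    print(f"n={n}: theta(0.5K)={low:.3f}, theta(2K)={high:.3f}")
```

With n = 2 the occupancy slides from 0.2 to 0.8 over that fourfold concentration range; with n = 8 it jumps from under 0.004 to over 0.996 — effectively all or none, as described for Annexin V.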

Flippase. Eat me signals. Dragging their tails behind them. Have cellular biologists and structural biochemists gone over to the dark side? It’s all quite innocuous as the old nursery rhyme will show

Little Bo Peep has lost her sheep
and doesn’t know where to find them
Leave them alone, and they’ll come home
wagging their tails behind them.

First, some cellular biochemistry. The lipid bilayer encasing all our cells is made of two leaflets, inner and outer. The composition of the two is different (unlike the soap bubble). On the inside we find phosphatidylethanolamine (PE) and phosphatidylserine (PS). The outer leaflet contains phosphatidylcholine (PC) and sphingomyelin (SM) and almost no PE or PS. This is clearly a low entropy situation compared to having all 4 randomly dispersed between the 2 leaflets.

What is the possible use of this (notice how teleology invariably creeps into cellular biology)? Chemistry is powerless to explain such things. Much as I love chemistry, such truths must be faced.

It takes energy to maintain this peculiar distribution. The enzyme moving PE and PS back inside the cell is the flippase. It requires energy in the form of ATP to operate. When a cell is dying ATP drops, and entropy takes its course moving PE and PS to the cell surface. Specialized cells (macrophages) exist to scoop up the dying or dead cells, without causing inflammation. They recognize PE and PS by a variety of receptors and munch up cells exposing them on the surface. So PE and PS are eat me signals which appear when there isn’t enough ATP around for flippase to use to haul PE and PS back inside. Clever no?

Now for some juicy chemistry (assuming that you consider transport of a molecule across a lipid bilayer actual chemistry — no covalent bonds to the transferred molecule are formed or removed, although they are to the transporter). Well it certainly is physical chemistry isn’t it?

Here are the structures of PE, PS, PC, SM

There are a few things to notice. Like just about every lipid found in our membranes, they are amphipathic — they have a very lipid soluble part (look at the long hydrocarbon chains hanging below them) and a very water soluble part — the head groups containing the phosphate.

This brings us to [ Proc. Natl. Acad. Sci. vol. 111 pp. E1334 - E1343 '14 ], which describes ATP8A2 (aka the flippase). Interestingly, the protein, with at least 10 alpha helices spanning the membrane and 3 cytoplasmic domains, closely resembles the classic sodium pump beloved of neurophysiologists everywhere, which pumps sodium ions out of neurons and pumps potassium ions in, producing the equally beloved membrane potential of neurons.

Look at those structures again. While there are charges on PE, PS (on the phosphate group), these molecules are far larger than the sodium or the potassium ion (easily by a factor of 10). This has long been recognized and is called the ‘giant substrate problem’.

The paper solved the structure of ATP8A2 and used molecular dynamics simulations to try to understand how it works. What they found is that transmembrane alpha helices 1, 2, 4 and 6 (out of 10) form a water filled cavity, which dissolves the negatively charged phosphate of the head group. What happens to those long hydrocarbon tails? They are left outside the helices in the lipid core of the membrane. It is the charged head groups that are dragged through by the flippase, with the tails wagging along behind them, just like little Bo Peep’s sheep.

There’s a lot more great chemistry in the paper, particularly how Isoleucine #364 directs the sequential formation and annihilation of the water filled cavities between alpha helices 1, 2, 4 and 6, and how a particular aspartic acid is phosphorylated (by ATP, explaining why the enzyme no longer works in energetically dying cells) changing conformation of all 10 transmembrane helices, so that only one half of the channel is open at a time (either to the inside or the outside).

Go read and enjoy. It’s sad that people who don’t know organic chemistry are cut off from appreciating such elegance. There is more to esthetics than esthetics.

