Category Archives: Neurology & Psychiatry

Could Alzheimer’s disease be a problem in physics rather than chemistry?

Two seemingly unrelated recent papers could turn our attention away from chemistry and toward physics as the basic problem in Alzheimer’s disease. God knows we could use better therapy for Alzheimer’s disease than we have now. Any new way of looking at Alzheimer’s, no matter how bizarre, should be welcome. The approaches via the aBeta peptide and the enzymes producing it just haven’t worked, and they’ve really been tried — hard.

The first paper [ Proc. Natl. Acad. Sci. vol. 111 pp. 16124 – 16129 ’14 ] made surfaces with arbitrary degrees of roughness, using the microfabrication technology for making computer chips. We’re talking roughness that’s almost smooth — bumps ranging from 320 Angstroms to 800. Surfaces could be made quite regular (as in a diffraction grating) or irregular. Scanning electron microscopic pictures were given of the various degrees of roughness.

Then they plated cultured primitive neuronal cells (PC12 cells) on surfaces of varying degrees of roughness. The optimal roughness for PC12 cells to act more like neurons was an Rq of 320 Angstroms. Interestingly, this degree of roughness is identical to that found on healthy astrocytes (assuming that culturing them or getting them out of the brain doesn’t radically change them). Hippocampal neurons in contact with astrocytes of this degree of roughness also began extending neurites. It’s important to note that the roughness was made with something neurons and astrocytes never see — silica colloids of varying sizes and shapes.
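For readers unfamiliar with the notation: Rq is the root-mean-square roughness of a surface, i.e. the RMS deviation of the height profile from its mean line. A minimal sketch (the height profile below is made up purely for illustration):

```python
import math

def rms_roughness(heights_angstroms):
    """Rq: root-mean-square deviation of a surface height profile
    from its mean line (heights in Angstroms)."""
    n = len(heights_angstroms)
    mean = sum(heights_angstroms) / n
    return math.sqrt(sum((h - mean) ** 2 for h in heights_angstroms) / n)

# Hypothetical profile with excursions on the order of the paper's
# "healthy" 320 Angstrom scale.
profile = [0, 320, -320, 320, -320, 0]
print(round(rms_roughness(profile), 1))  # ≈ 261.3
```

A perfectly smooth surface (constant height) gives Rq = 0; the 320 vs. 800 Angstrom figures in the paper are this statistic computed over the measured surface.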

Now is when it gets interesting. The plaques of Alzheimer’s disease have surface roughness of around 800 Angstroms. Roughness of the artificial surface of this degree was toxic to hippocampal neurons (lower degrees of roughness were not). Normal brain has a roughness with a median at 340 Angstroms.

So in some way neurons and astrocytes can sense the amount of roughness in surfaces they are in contact with. How do they do this? Chemically it comes down to Piezo1 ion channels, a story in themselves [ Science vol. 330 pp. 55 – 60 ’10 ]. These are huge membrane proteins (2,100 – 4,700 amino acids) with between 24 and 36 transmembrane segments. They form tetramers with an enormous molecular mass (1.2 megaDaltons) and 120 or more transmembrane segments. They can sense mechanical stress, and are used by endothelial cells to sense how fast blood is flowing (or not flowing) past them. Expression of these genes in mechanically insensitive cells makes them sensitive to mechanical stimuli.
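As a sanity check on those numbers (not from either paper, just arithmetic with the usual ~110 Dalton average residue mass): monomers of 2,100 – 4,700 amino acids come out around 230 – 520 kiloDaltons, and four of them bracket the quoted 1.2 megaDaltons nicely.

```python
AVG_RESIDUE_DA = 110  # rough average mass of an amino acid residue, Daltons

def protein_mass_kda(n_residues):
    """Back-of-envelope protein mass from residue count."""
    return n_residues * AVG_RESIDUE_DA / 1000

lo, hi = protein_mass_kda(2100), protein_mass_kda(4700)
print(f"monomer: {lo:.0f} - {hi:.0f} kDa")                         # 231 - 517 kDa
print(f"tetramer: {4 * lo / 1000:.1f} - {4 * hi / 1000:.1f} MDa")  # 0.9 - 2.1 MDa
```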

The paper is somewhat ambiguous on whether expressing piezo1 is a function of neuronal health or sickness. The last paragraph appears to have it both ways.

So as we leave paper #1, we note that neurons can sense the physical characteristics of their environment, even when it’s something as un-natural as a silica colloid. Inhibiting Piezo1 activity by a spider venom toxin (GsMTx4) destroys this ability. The right degree of roughness is healthy for neurons, the wrong degree kills them. Clearly the work should be repeated with other colloids of a different chemical composition.

The next paper [ Science vol. 342 pp. 301, 316 – 317, 373 – 377 ’13 ] talks about the plumbing system of the brain, which is far more active than I’d ever imagined. The glymphatic system is a network of microscopic fluid filled channels. Cerebrospinal fluid (CSF) bathes the brain. It flows into the substance of the brain (the parenchyma) along arteries, and the interstitial fluid it exchanges with (the fluid between the cellular elements) flows out of the brain along the draining veins.

This work was able to measure the amount of flow through the glymphatics by injecting tracer into the CSF and/or the brain parenchyma. The important point is that during sleep these channels expand by 60%, and beta amyloid is cleared twice as quickly. Arousal of a sleeping mouse decreases the influx of tracer by 95%. So this amazing paper finally comes up with an explanation of why we spend 1/3 of our lives asleep — to flush toxins from the brain.

If you wish to read (a lot) more about this system — see an older post from when this paper first came out — http://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

So what is the implication of these two papers for Alzheimer’s disease?

    First

The surface roughness of the plaques (800 Angstroms) may physically hurt neurons. The plaques themselves are much larger than this, or Alzheimer would never have seen them with the light microscopy at his disposal.

    Second

The size of the plaques themselves may gum up the brain’s plumbing system.

The tracer work should certainly be repeated with mouse models of Alzheimer’s, far removed from human pathology though they may be.

I find this extremely appealing because it gives us a new way of thinking about this terrible disorder. In addition it might explain why cognitive decline almost invariably accompanies aging, and why Alzheimer’s disease is a disorder of the elderly.

Next, assume this is true. What would the therapy be? Getting rid of the senile plaques might in and of itself be therapeutic. But it is nearly impossible for me to imagine a way this could be done without harming the surrounding brain.

Before we all get too excited it should be noted that the correlation between senile plaque burden and cognitive function is far from perfect. Some people have a lot of plaque (there are ways to detect them antemortem) and normal cognitive function. The work also leaves out the second pathologic change seen in Alzheimer’s disease, the neurofibrillary tangle which is intracellular, not extracellular. I suppose if it caused the parts of the cell containing them to swell, it too could gum up the plumbing.

As far as I can tell, putting the two papers together conceptually might even be original. Prasad Shastri, the author of the first paper, was very helpful discussing some points about his paper by Email, but had not heard of the second and is looking at it this weekend.

Coca-Cola

For some readers, this might be the most useful post I’ve ever written. But first; some history. Back in grad school, I was dating a Cliffie. We were out to dinner at a nice (and cheap) restaurant in Cambridge. I’d had the flu and probably should have canceled, but in your early 20s, libido conquers all. So we’re sitting there, and I began to feel really nauseous and said we should pack it in, and I should go home.

She said “Let me try this, my father’s a General Practitioner”. So she ordered a can of coke, opened it and let it sit for a while till it warmed up and the fizz was gone. Then she told me to drink it in slow, small sips. It worked ! The nausea vanished and we continued on.

Fast forward to last night and probable food poisoning (or severe flu). No Coke in the house, but as soon as my wife got some from a store opening at 7 this AM, it worked again — the nausea and stomach distress were gone within a few minutes (I’d vomited at least 5 times over the course of the night).

Could this have been a placebo effect, because it had worked in the past and I wanted it to work so desperately? Possibly, but I was generally miserable for a period of 10 hours, and the Coke settled my stomach very quickly. Coke is not an anti-diarrheal, but 10 hours into the illness there was nothing left.

Placebos and Nocebos are very complicated entities and a huge review in Neuron will tell you why. It’s very much worth reading – Neuron vol 84 pp. 623 – 637 ’14 — “Placebo Effects: From the Neurobiological Paradigm to Translational Implications”. The article contains references to studies showing that placebo is as effective as morphine on the third day post-op. In med school I’d heard stories to the effect that in Korea and WWII when they ran out of morphine on the battlefield, saline worked just as well. So probably these aren’t myths. It didn’t happen in Vietnam when I was in the service, as the country is long and thin, and no wounded soldier was more than 20 minutes away by chopper from a fully equipped field hospital (once they got him).

The ingredients in Coke are and were a closely held secret, but most think that in the 1880s and 1890s, when it was sold as a medication, Coke contained cocaine, hence the name. Back then, no one knew the potential of cocaine for addiction. Halsted, the great Baltimore surgeon, got into it because cocaine is also a local anesthetic. Freud actually used cocaine to treat morphine addiction. Neither was malevolent, just ignorant.

The way it ought to be

A recent paper described the use of sulforaphane in treating autism [ Proc. Natl. Acad. Sci. vol. 111 pp. 15550 – 15555 ’14 ]. In a double blind, randomized trial, 44 men aged 13 – 27 with moderate to severe Autism Spectrum Disorder received sulforaphane (50 – 150 microMoles) for 18 weeks followed by 4 weeks without treatment. There was no change in the 15 placebo patients, while there was a 33% decline in the Aberrant Behavior Checklist scores of the treated group. When the sulforaphane was stopped, total scores rose toward pretreatment levels.

I had posted on sulforaphane before — see the end of this post. I wrote the lead author asking if some of the therapeutic effects could be due to the anti-androgen activity of sulforaphane. He wrote back in a few days.

“Sorry, I missed your email. Absolutely possible. We did not measure androgen levels, but will do so in the future.
Thank you so much.”

Contrast this to the absent responses on whether the subjects in two functional MRI studies of the default network were asleep. See http://luysii.wordpress.com/2014/10/11/the-silence-is-deafening/

Vegetarians are wimps: Science now tells us why

Oh, it started innocently enough. Population studies had shown that men who ate lots of cruciferous vegetables (collard greens, cabbage, brussels sprouts, broccoli, cauliflower, bok choy etc. etc.) had less prostate cancer. Some folks in Oregon decided to find out why [ Proc. Natl. Acad. Sci. vol. 106 pp. 16663 – 16668 ’09 ]. One of the compounds found in all these veggies is sulforaphane. There are all sorts of places to be found on the web that will sell it to you for your health. Sulforaphane is said to fight cancer, improve diabetes and kill bacteria (if you believe Wikipedia). Hosanna.

Prostate cancer is made worse by male hormones (androgens). They produce their effects in cells by binding to a protein (the androgen receptor) which then goes into the nucleus of the cell and turns on the genes which make males male. If there’s no androgen around the receptor just sits there outside the nucleus (e.g. in the cytoplasm), doing nothing. Some forms of prostate cancer have mutations in the receptor which turn it on whether androgen is present or not. This makes the cancer even worse. So one of the mainstays of prostate cancer therapy is lowering androgen levels by a variety of means, none of them pleasant — such as castration and various pills.

The Oregon work shows that sulforaphane decreases the amount of androgen receptor around, resulting in fewer androgenic effects and presumably less prostate cancer in the long run. How this is thought to occur is pretty interesting, highly technical, and is to be found in subsequent paragraphs. It also explains why vegetarians are such wimps.

The androgen receptor sits in the cytoplasm bound to a protein called HSP90 (heat shock protein of 90 kiloDaltons). This protects the androgen receptor from being destroyed. Sulforaphane is a fairly simple molecule — a straight 4 carbon chain with a methyl sulfoxide group at one end and an isothiocyanate (-N=C=S ) group at the other. It should be pretty lipid soluble, meaning it can go everywhere in the body without much trouble. The authors showed that sulforaphane inhibits an enzyme called histone deacetylase 2 (HDAC2). This results in more acetylation of HSP90 on lysine, inhibiting the association of HSP90 with the androgen receptor, leading to increased destruction of the receptor and less androgenic effects in the cell.

The active site of one histone deacetylase that we know about is a tubular pocket containing a zinc binding site and two aspartic acid-histidine charge relay systems. My guess is that the business end of sulforaphane is the isothiocyanate, which could react by nucleophilic attack of either the histidine nitrogen or the aspartic acid oxygen on the carbon of the -N=C=S group. Perhaps one of my readers knows how it works.

Histone deacetylase inhibitors are presently very ‘hot’ and one of them, SAHA (vorinostat), was approved by the FDA for the treatment of cutaneous T cell lymphoma in 2006; many others are under active investigation. It’s important to remember that although this class of enzymes was discovered by its ability to remove acetyl groups from histones, these enzymes also remove acetyl groups from proteins which are not histones (e.g. HSP90).

So veggies are a two-edged sword.

The Silence is Deafening

A while back I wrote a post concerning a devastating paper which said that papers concerning the default mode of brain activity (as seen by functional magnetic resonance imaging { fMRI } ) had failed to make sure that the subjects were actually awake during the study (and most of them weren’t). The post is copied here after the ****

Here’s a paper from July ’14 [ Proc. Natl. Acad. Sci. vol. 111 pp. 10341 – 10346 ’14 ]. Functional brain networks are typically mapped in a time averaged sense, based on the assumption that functional connections remain stationary in the resting brain. Typically resting state fMRI (the default network, rsfMRI) is sampled at a resolution of 2 seconds or slower.

However the human connectome project (HCP) has high-quality rsfMRI data at subsecond resolution (using multiband accelerated echo planar imaging). This work used a sliding window approach, mapping the evolution of functional brain networks over a continuous 15 minute interval at subsecond resolution in 10 people. I wrote the lead author 21 July ’14 to ask how he knew the subjects weren’t asleep during this time.

No response. The silence is deafening.

Another more recent paper [ Proc. Natl. Acad. Sci. vol. 111 pp. 14259–14264 ’14 ] had interesting things to say about brain maturation in attention deficit disorder/ hyperactivity — here’s the summary

It was proposed that individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit delays in brain maturation. In the last decade, resting state functional imaging has enabled detailed investigation of neural connectivity patterns and has revealed that the human brain is functionally organized into large-scale connectivity networks. In this study, we demonstrate that the developing relationships between default mode network (DMN) and task positive networks (TPNs) exhibit significant and specific maturational lag in ADHD. Previous research has found that individuals with ADHD exhibit abnormalities in DMN–TPN relationships. Our results provide strong initial evidence that these alterations arise from delays in typical maturational patterns. Our results invite further investigation into the neurobiological mechanisms in ADHD that produce delays in development of large-scale networks.

I wrote the lead author a few days ago to ask how he knew the subjects weren’t asleep during this time.

No response. The silence is deafening.

***

Addendum 22 Nov ’14 — In a huge review of resting state MRI (Neuron vol. 84 pp. 681 – 696 ’14) the following appeared:
“The issue of inadvertent sleep has only recently gained prominence, and the field has not yet developed consensus on how to deal with this issue.” Well, silence is no longer an option.

If you Google “default mode network” you get 32 million hits in under a second. This is what the brain is doing when we’re sitting quietly, not carrying out some task. If you don’t know how we measure it using functional MRI, skip to the #### and then come back. I’m not a fan of functional MRI (fMRI); the pictures it produces are beautiful and seductive, but unfortunately not terribly repeatable.

If [ Neuron vol. 82 pp. 695 – 705 ’14 ] is true, then all the work on the default network should be repeated.

Why?

Because they found that less than half of the 71 subjects studied were stably awake after 5 minutes in the scanner; i.e., they were actually asleep part of the time.

How can they say this?

They used polysomnography — which simultaneously measures tons of things (eye movements, oxygen saturation, EEG, muscle tone, respiration, pulse) and is the gold standard for sleep studies — on the subjects while they were in the MRI scanner.

You don’t have to be a neuroscientist to know that cognition is rather different in wake and sleep.

Pathetic.

####

There are now noninvasive methods to study brain activity in man. The most prominent one is called BOLD, and is based on the fact that blood flow increases way past what is needed with increased brain activity. This was actually noted by Wilder Penfield operating on the brain for epilepsy in the 30s. When the patient had a seizure on the operating table (they could keep things under control by partially paralyzing the patient with curare) the veins in the area producing the seizure turned red. Recall that oxygenated blood is red while the deoxygenated blood in veins is darker and somewhat blue. This implied that more blood was getting to the convulsing area than it could use.

BOLD depends on slight differences in the way oxygenated hemoglobin and deoxygenated hemoglobin interact with the magnetic field used in magnetic resonance imaging (MRI). The technique has had a rather checkered history, because very small differences must be measured, and there is lots of manipulation of the raw data (never seen in papers) to be done. 10 years ago functional magnetic resonance imaging (fMRI) was called pseudocolor phrenology.

Some sort of task or sensory stimulus is given and the parts of the brain showing increased hemoglobin + oxygen are mapped out. As a neurologist, I was naturally interested in this work. Very quickly, I smelled a rat. The authors of all the papers always seemed to confirm their initial hunch about which areas of the brain were involved in whatever they were studying. Science just isn’t like that. Look at any issue of Nature or Science and see how many results were unexpected. Results were largely unreproducible. It got so bad that an article in Science 2 August ’02 p. 749 stated that neuroimaging (e.g. functional MRI) has a reputation for producing “pretty pictures” but not replicable data. It has been characterized as pseudocolor phrenology (or words to that effect).

What was going on? The data was never actually shown, just the authors’ manipulation of it. Acquiring the data is quite tricky — the slightest head movement alters the MRI pattern. Also the difference in NMR signal between hemoglobin without oxygen and hemoglobin with oxygen is small (only 1 – 2%). Since the technique involves subtracting two noisy data sets for the same brain region, the noise compounds (the variance of the difference is the sum of the two variances).
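A toy simulation of that last point, assuming independent Gaussian noise on each data set (all numbers made up for illustration): subtracting two equally noisy measurements leaves the tiny BOLD signal buried under noise whose variance is the sum of the two variances.

```python
import random
import statistics

random.seed(0)
N = 100_000
sigma = 1.0  # per-measurement noise, arbitrary units

# Two independent noisy measurements of the same region; the BOLD
# signal difference (~1%) is tiny compared with the noise.
diff = [random.gauss(1.01, sigma) - random.gauss(1.00, sigma) for _ in range(N)]

# Independent noise adds in quadrature: the standard deviation of the
# difference is sqrt(2) * sigma, and its variance is 2 * sigma^2.
print(statistics.stdev(diff))  # ~ 1.414
```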

Now we know why hot food tastes different

An absolutely brilliant piece of physical chemistry explained a puzzling biologic phenomenon that organic chemistry was powerless to illuminate.

First, a fair amount of background

Ion channels are proteins present in the cell membrane of all our cells; in neurons they are responsible for maintaining a membrane potential across the membrane, which can change abruptly, causing a nerve cell to fire an impulse. Functionally, ligand activated ion channels are pretty easy to understand. A chemical binds to them, they open, and the neuron fires (or a muscle contracts — same thing). The channels don’t let everything in, just particular ions. Thus one type of channel which binds acetylcholine lets in sodium (not potassium, not calcium), which causes the cell to fire impulses. The GABA[A] receptor (the ion channel for gamma amino butyric acid) lets in chloride ions (and little else), which inhibits the neuron carrying it from firing. (This is why the benzodiazepines and barbiturates are anticonvulsants.)

Since ion channels are full of amino acids, some of which have charged side chains, it’s easy to see how a change in electrical potential across the cell membrane could open or shut them.

By the way, the potential is huge, although it doesn’t seem like much. It is usually given as 70 milliVolts (inside negatively charged, outside positively charged). Why is this a big deal? Because the electric field across our membranes is enormous. 70 milliVolts is 7 x 10^-2 volts. The cell membrane is quite thin — just 70 Angstroms, which is 7 nanoMeters (7 x 10^-9 meters). Divide 7 x 10^-2 volts by 7 x 10^-9 meters and you get a field of 10,000,000 Volts/meter.
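The arithmetic, spelled out (nothing here beyond unit conversion):

```python
# The transmembrane electric field: ~70 mV across a ~7 nm membrane.
potential_v = 70e-3   # 70 milliVolts, in Volts
thickness_m = 7e-9    # 70 Angstroms = 7 nanoMeters, in meters

field_v_per_m = potential_v / thickness_m
print(f"{field_v_per_m:.0e} V/m")  # 1e+07 V/m, i.e. ten million Volts per meter
```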

Now for the main course. We easily sense hot and cold. This is because we have a bunch of different ion channels which open in response to different temperatures. All this without neurotransmitters binding to them, or changes in electric potential across the membrane.

People had searched for some particular sequence of amino acids common to the channels to no avail (this is the failure of organic chemistry).

In a brilliant paper, entropy was found to be the culprit. Chemists are used to considering entropy effects (primarily on reaction kinetics, but on equilibria as well). What happens is that in the open state a large number of hydrophobic amino acids are exposed to the extracellular space. To accommodate them (e.g. to solvate them), water around them must be more ordered, decreasing entropy. This, of course, is why oil and water don’t mix.

As all the chemists among us should remember, the equilibrium constant has components due to heat content (enthalpy) and due to entropy.

The entropy term must be multiplied by the temperature, which is where the temperature sensitivity of the equilibrium constant (in this case open channel/closed channel) comes in. Remember changes in entropy and enthalpy work in opposite directions —

delta G (Gibbs free energy) = delta H (enthalpy) − T * delta S (entropy)

Here’s the paper [ Cell vol. 158 pp. 977 – 979, 1148 – 1158 ’14 ]. They note that if a large number of buried hydrophobic groups become exposed to water on a conformational change in the ion channel, an increased heat capacity should be produced due to water ordering to solvate the hydrophobic side chains. This should confer a strong temperature dependence on the equilibrium constant for the reaction. Exposing just 20 hydrophobic side chains in a tetrameric channel should do the trick. The side chains don’t have to be localized in a particular area (which is why organic chemists and biochemists couldn’t find a stretch of amino acids conferring cold or heat sensitivity — it didn’t matter where the hydrophobic amino acids were, as long as there were enough of them, somewhere).

In some way this entwines enthalpy and entropy making temperature dependent activation U shaped rather than monotonic. So such a channel is in principle both hot activated and cold activated, with the position of the U along the temperature axis determining which activation mode is seen at experimentally accessible temperatures.
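A sketch of why a large heat capacity change produces the U shape. With a constant delta Cp, both delta H and delta S become functions of temperature, and delta G picks up curvature. The numbers below are illustrative only, not taken from the Cell paper:

```python
import math

def delta_G(T, dH0, dS0, dCp, T0=298.0):
    """Free energy of channel opening at temperature T (Kelvin) when a
    large heat capacity change dCp accompanies opening (hydrophobic
    side chains exposed to water).  Units: J/mol and J/(mol*K)."""
    dH = dH0 + dCp * (T - T0)          # enthalpy drifts linearly with T
    dS = dS0 + dCp * math.log(T / T0)  # entropy drifts logarithmically
    return dH - T * dS

# Made-up numbers: with dH0 = dS0 = 0 at T0, delta_G is zero at T0 and
# goes negative on BOTH sides of it, so opening is favored at both low
# and high temperature (the "U").
for T in (278.0, 298.0, 318.0):
    print(T, round(delta_G(T, dH0=0.0, dS0=0.0, dCp=10_000.0)))
```

This is why, in principle, the same channel can be both cold activated and hot activated; where the bottom of the U sits decides which mode you actually see.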

All very nice, but how many beautiful theories have we seen get crushed by ugly facts. If they really understood what is going on with temperature sensitivity, they should be able to change a cold activated ion channel to a heat activated one (by mutating it). If they really, really understood things, they should be able to take a run of the mill temperature INsensitive ion channel and make it temperature sensitive. Amazingly, the authors did just that.

Impressive. Read the paper.

This harks back to the days when theories of organic reaction mechanisms were tested by building molecules to test them. When you made a molecule that no one had seen before and predicted how it would react you knew you were on to something.

Thrust and Parry about memory storage outside neurons.

First the post of 23 Feb ’14 discussing the paper (between *** and &&& in case you’ve read it already)

Then some of the rather severe criticism of the paper.

Then some of the reply to the criticisms

Then a few comments of my own, followed by yet another old post about the chemical insanity neuroscience gets into when they apply concepts like concentration to very small volumes.

Enjoy
***
Are memories stored outside of neurons?

This may turn out to be a banner year for neuroscience. Work discussed in the following older post is the first convincing explanation of why we need sleep that I’ve seen — https://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just how, chemically, and where memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories from the past half century, seriously flawed in my opinion, of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory) where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory.

#2 CaMKII — more plausible. There’s lots of it in brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To start doing so, calcium levels inside the neuron must rise. The enzyme is complicated, comprising 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The other 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true, but in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the brain to the chloride ion. When it flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.
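For the quantitatively inclined: the textbook way to see which way chloride will push a neuron is the Nernst equation for the chloride reversal potential. The concentrations below are typical textbook values, assumed purely for illustration; they are not from the Science paper:

```python
import math

R, F = 8.314, 96485.0  # gas constant J/(mol K), Faraday constant C/mol
T = 310.0              # body temperature, Kelvin

def nernst_cl(cl_out_mM, cl_in_mM):
    """Reversal potential for Cl- (valence -1), in milliVolts."""
    return -(R * T / F) * math.log(cl_out_mM / cl_in_mM) * 1000

# Adult neuron: low internal Cl-, E_Cl below resting potential,
# so opening GABA[A] channels is inhibitory.
print(round(nernst_cl(120, 7)))   # ≈ -76 mV
# Immature neuron: high internal Cl-, E_Cl above resting potential,
# so the same channels are excitatory.
print(round(nernst_cl(120, 30)))  # ≈ -37 mV
```

The paper's point is that immobile anions (proteins inside, sulfated proteoglycans outside) shift the effective local chloride levels, and hence where this reversal potential sits, from place to place.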

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

We’ve concentrated so much on neurons, that we may have missed something big. In a similar vein, the function of sleep may be to wash neurons free of stuff built up during the day outside of them.

&&&

In the 5 September ’14 Science (vol. 345 p. 1130) 6 researchers from Finland, Case Western Reserve and U. California (Davis) basically say that the paper conflicts with fundamental thermodynamics so severely that “Given these theoretical objections to their interpretations, we choose not to comment here on the experimental results”.

In more detail: “If Cl− were initially in equilibrium across a membrane, then the mere introduction of immobile negative charges (a passive element) at one side of the membrane would, according to their line of thinking, cause a permanent change in the local electrochemical potential of Cl−, thereby leading to a persistent driving force for Cl− fluxes with no input of energy.” This essentially accuses the authors of inventing a perpetual motion machine.

Then in a second letter, two more researchers weigh in (same page) — “The experimental procedures and results in this study are insufficient to support these conclusions. Contradictory results previously published by these authors and other laboratories are not referred to.”

The authors of the original paper don’t take this lying down. On the same page they discuss the notion of the Donnan equilibrium and say they were misinterpreted.

The paper, and the 3 letters all discuss the chloride concentration inside neurons which they call [Cl-]i. The problem with this sort of thinking (if you can call it that) is that it extrapolates the notion of concentration to very small volumes (such as a dendritic spine) where it isn’t meaningful. It goes on all the time in neuroscience. While between any two small rational numbers there is another, matter can be sliced only so thinly without getting down to the discrete atomic level. At this level concentration (which is basically a ratio between two very large numbers of molecules e.g. solute and solvent) simply doesn’t apply.

Here’s a post on the topic from a few months ago. It contains a link to another post showing that even Nobelists have chemical feet of clay.

More chemical insanity from neuroscience

The current issue of PNAS contains a paper (vol. 111 pp. 8961 – 8966, 17 June ’14) which uncritically quotes some work done back in the 80’s and flatly states that synaptic vesicles http://en.wikipedia.org/wiki/Synaptic_vesicle have a pH of 5.2 – 5.7. Such a value is meaningless. Here’s why.

A pH of 5 means that there are 10^-5 Moles of H+ per liter or 6 x 10^18 actual ions/liter.

Synaptic vesicles have an ‘average diameter’ of 40 nanoMeters (400 Angstroms to the chemist). Most of them are nearly spherical. So each has a volume of

4/3 * pi * (20 * 10^-9)^3 = 33,510 * 10^-27 = 3.4 * 10^-23 liters. 20 rather than 40 because volume involves the radius.

So each vesicle contains 6 * 10^18 * 3.4 * 10^-23 = 20 * 10^-5 = .0002 ions.

This is similar to the chemical blunders on concentration in the nano domain committed by a Nobelist. For details please see — http://luysii.wordpress.com/2013/10/09/is-concentration-meaningful-in-a-nanodomain-a-nobel-is-no-guarantee-against-chemical-idiocy/

Didn’t these guys ever take Freshman Chemistry?

Addendum 24 June ’14

Didn’t I ever take it? John wrote the following this AM

Please check the units in your volume calculation. With r = 10^-9 m, then V is in m^3, and m^3 is not equal to L. There’s 1000 L in a m^3.
Happy Anniversary by the way.

To which I responded

Ouch! You’re correct of course. However, even with the correction, the result comes out to 0.2 free protons (or H3O+) per vesicle, a result that still makes no chemical sense. There are many more protons in the vesicle, but they are buffered by the proteins and the transmitters contained within.
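The corrected arithmetic is easy to check. Here is a back-of-the-envelope sketch using the same numbers as above (40 nanoMeter diameter, pH 5):

```python
import math

N_A = 6.022e23          # Avogadro's number, ions per mole
radius_m = 20e-9        # 40 nm diameter vesicle -> 20 nm radius, in meters

volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3   # ~3.35e-23 cubic meters
volume_L = volume_m3 * 1000.0                        # 1 m^3 = 1000 L -> ~3.35e-20 liters

h_ions_per_L = 10 ** -5 * N_A    # pH 5 -> 1e-5 mol H+ per liter -> ~6e18 ions per liter
protons_per_vesicle = h_ions_per_L * volume_L

print(round(protons_per_vesicle, 2))  # ~0.2 free protons per vesicle
```

Two-tenths of a free proton per vesicle is exactly the point: quoting a pH of 5.2 – 5.7 for a volume this small can only be a statement about time-averaged behavior, not a concentration in any ordinary chemical sense.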

Physics to the rescue

It’s enough to drive a medicinal chemist nuts. General anesthetics are an extremely wide-ranging class of chemicals, from Xenon (which has essentially no chemistry) to the steroid alfaxalone (56 atoms in all). How can they possibly have a similar mechanism of action? It’s long been noted that anesthetic potency is proportional to lipid solubility, so that’s at least something. Other work has noted that enantiomers of some anesthetics vary in potency, implying that they are interacting with something optically active (like proteins). However, you should note that sphingosine, which is part of many cell membrane lipids (gangliosides, sulfatides etc. etc.), contains two optically active carbons.

A great paper [ Proc. Natl. Acad. Sci. vol. 111 pp. E3524 – E3533 ’14 ] notes that although Xenon has no chemistry, it does have physics. It facilitates electron transfer between conductors. The present work does some quantum mechanical calculations purporting to show that Xenon can extend the highest occupied molecular orbital (HOMO) of an alpha helix so as to bridge the gap to another helix.

This paper shows that Xe, SF6, NO and chloroform cause rapid, reversible increases in the electron spin content of Drosophila. Anesthetic-resistant mutant strains (the mutated protein isn’t specified) show a different pattern of spin responses to anesthetics.

So they think general anesthetics might work by perturbing the electronic structure of proteins. It’s certainly a fresh idea.

What is carrying the anesthetic-induced increase in spin? Speculations are bruited about. They don’t think the spin changes are due to free radicals. They favor changes in the redox state of metals. Could it be due to electrons in melanin (the prevalent stable free radical in flies)? Could it be changes in spin polarization? Electrons traversing chiral materials can become spin polarized.

Why this should affect neurons isn’t known, and further speculations are given (1) electron currents in mitochondria, (2) redox reactions where electrons are used to break a disulfide bond.

The article notes that spin changes due to general anesthetics differ in anesthesia resistant fly mutants.

Fascinating paper, and Mark Twain said it best: “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Tolstoy rides again — Schizophrenia

“A field plagued by inconsistency, and perhaps even a degree of charlatanism” — strong stuff indeed [ Neuron vol. 83 pp. 760 – 763 ’14 ]. They are talking about studies attempting to find the genetic causes of schizophrenia.

This was the state of play four and a half years ago (in a post of April 2010)

“Happy families are all alike; every unhappy family is unhappy in its own way.” Thus beginneth Anna Karenina. That wasn’t supposed to happen with hereditary disease. The examples we had before large-scale DNA sequencing became cheap were basically one gene causing one disease. Two of the best known cases were sickle cell anemia and cystic fibrosis. In the former, a change in a single position (nucleotide) of DNA caused a switch of one amino acid (valine for glutamic acid) at position #6 in beta hemoglobin. In the latter, all mutations have been found in a single gene called CFTR. 85% of known mutations involve the loss of one amino acid. But by 2003 over 600 different mutations accounted for only part of the other 15%. There is plenty of room for mutation, as CFTR has 1,480 amino acids. The kids I took care of in the muscular dystrophy clinic all turned out to have mutations in the genes for proteins found in muscle.

Why not look for the gene causing schizophrenia? It’s a terrible disease (the post “What is Schizophrenia really Like?” is included after the ****) with a strong hereditary component. There was an awful era in psychiatry when it was thought to be environmental (e.g. the family was blamed). Deciding what is hereditary and what is environmental can be tricky. TB was thought (for a time) to be hereditary because it also ran in families. So why couldn’t schizophrenia be environmental? Well, if you are an identical twin and the other twin has it, your chance of having schizophrenia is 45%. If you are a fraternal twin, your chance is three times lower (15%). This couldn’t be due to the environment.

It’s time to speak of SNPs (single nucleotide polymorphisms). Our genome has 3.2 gigaBases of DNA. Most people have the same standard nucleotide (one of A, T, G, or C) at any given position. If 5% of the population have one of the other 3 at that position, you have a SNP. By 2004 some 7 MILLION SNPs had been found and mapped to the human genome. So to find ‘the gene’ for schizophrenia, just take schizophrenics as a group (there are lots of them — about 1% of the population), look at their SNPs, and see if they have any SNPs in common.

Study after study found suspect SNPs (which can be localized exactly in the genome) for schizophrenia. The area of the genome containing each SNP was then searched for protein-coding genes to find the cause of the disease. Unfortunately each study implicated a different bunch of SNPs (in different areas of the genome). A study of 750 schizophrenics and an equal number of controls from North Carolina used 500,000 SNPs. None of the previous candidate genes held up. Not a single one [ Nature vol. 454 pp. 154 – 157 ’08 ].

As of 2009 there were 3,000 diseases showing simple inheritance in which a causative gene hadn’t been found. This is the ‘dark matter’ of the genome. We are sure it exists (because the diseases are hereditary) but we simply can’t see it.

There is presently a large (and expensive) industry called GWAS (Genome Wide Association Studies) which uses SNPs to look for genetic causes of diseases with a known hereditary component. One study on coronary heart disease had 23,000 participants. In 2007 the Wellcome Trust committed 45 million (pounds? dollars?) for studies of 27 diseases in 120,000 people. This is big time science. GWAS studies have found areas of the genome associated with various disorders. However, in all GWAS studies so far, what they’ve picked up explains less than 5% of the heritability. An example is height (not a disease). Its heritability is 80% yet the top 20 candidate genetic variants identified explain only 3% of the variance. People have called for larger and larger samples to improve matters.

What’s going on?

It’s time for you to read “Genetic Heterogeneity in Human Disease” [ Cell vol. 141 pp. 210 – 217 ’10 ]. It may destroy GWAS. Basically, they argue that most SNPs are irrelevant, don’t produce any functional change, and have arisen by random mutation. They are evolutionary chaff if you will. A 12 year followup study of 19,000 women looked at the 101 SNPs found by GWAS as risk variants for cardiovascular disease — not one of them predicted outcome [ J. Am. Med. Assoc. vol. 303 pp. 631 – 637 ’10 ]. The SNPs haven’t been eliminated by natural selection, because they aren’t causing trouble and because the human population has grown exponentially.

There’s a lot more in this article, which is worth reading carefully. It looks like what we’re calling a given disease with a known hereditary component (schizophrenia for example) is the result of a large number of different (and rather rare) mutations. A given SNP study may pick up one or two rare mutations, but they won’t be found in the next. It certainly has been disheartening to follow this literature over the years, in the hopes that the cause of disease X, Y or Z would finally be found, and that we would have a logical point of attack (but see an old post titled “Some Humility is in Order”).

Is there an analogy?

200 years ago (before Pasteur) physicians classified a variety of diseases called fevers. They knew they were somewhat different from each other (quotidian fever, puerperal fever, etc. etc.). But fever was the common denominator and clinically they looked pretty much the same (just as dogs look pretty much the same). Now we know that infectious fever has hundreds of different causes. The Cell article argues that, given what GWAS has turned up so far, this is likely to be the case for many hereditary disorders.

Tolstoy was right.

Fast forward to the present [ Nature vol. 511 pp. 412 – 413, 421 – 427 ’14 ]. This is a paper from the Schizophrenia Working Group of the Psychiatric Genomics Consortium (PGC), which analyzed some 36,989 cases and 113,075 controls. They found 128 independent associations spanning 108 conservatively defined loci meeting genome-wide significance; 83 of the 128 hadn’t been previously reported. The associations were enriched among genes expressed in brain. Prior to this work, some 30 schizophrenia-associated loci had been found through genome-wide association studies (GWAS).

Interestingly 3/4 of the 108 loci include protein coding genes (of which 40% represent a single gene and another 8% are within 20 kiloBases of a gene).

The editorial noted that there have been 800 associations ‘of dubious value’.

The present risk variants are common, and contribute in most (if not all) cases. One such association is with the D2 dopamine receptor, but not with COMT (which metabolizes dopamine). The most significant association is within the major histocompatibility complex (MHC).

The paper in Neuron cited above notes that schizophrenia genetics is a “field plagued by inconsistency, and perhaps even a degree of charlatanism” and that there have been 800 associations ‘of dubious value’. The statistical sins of earlier work are described, resulting in many HUNDREDS of variant associations with schizophrenia and scores of falsely implicated genes. Standards have been developed to eliminate them [ Nat. Rev. Genet. vol. 9 pp. 356 – 369 ’08 ].

Here is their description of the statistical sins prior to GWAS: “Before GWAS, the standard practice for investigating schizophrenia genetics (as well as many other areas) was to pick a candidate gene (usually based on dopamine or glutamate pathways or linkage studies) and compare the frequency of genetic variants in cases and controls. Any difference with a p value < 0.05 […] (p > 0.05) p values, and associations seen in partitions of a data set. Beyond all of these obvious statistical transgressions, these studies often entirely ignored well-established causes of spurious associations such as population stratification. Labs would churn out separate papers for gene after gene with no correction for multiple testing, and, on top of all of that, there was a publication bias against negative findings.”
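The scale of the multiple-testing sin is easy to quantify. A quick sketch (the 500,000-SNP figure is from the North Carolina study above; treating the genome as roughly a million independent tests is the usual rationale behind the conventional genome-wide significance threshold):

```python
# How many chance 'associations' the naive p < 0.05 cutoff hands you
n_snps = 500_000
alpha = 0.05
expected_false_positives = n_snps * alpha
print(int(expected_false_positives))   # 25000 spurious hits expected by chance alone

# Bonferroni correction for ~1 million independent tests gives the
# conventional genome-wide significance threshold
genome_wide_threshold = alpha / 1_000_000
print(genome_wide_threshold)           # 5e-08
```

Which is why a single-gene candidate study reporting p = 0.04 told you essentially nothing, and why the PGC paper’s 5 * 10^-8 threshold is not an arbitrary piece of pedantry.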

There is a hockey-stick model in which few real associations are found until a particular sample size is reached. This was the case for hypertension.

GWAS identifies genomic regions, not precise risk factors. It is estimated that the 108 loci implicate 350 genes. However, the Major Histocompatibility Complex (MHC) counts as one locus, and it has tons of genes.

The NHGRI website tracks independent GWAS signals for common diseases and traits, and currently records 7,300 associations with a p value under 5 * 10^-8. Only about 20 have been traced to causal variants (depending on criteria).

The number of genes implicated will only grow as the PGC continues to increase the sample size to capture smaller and smaller effect sizes — how long will it be until the whole genome is involved? There are some significant philosophical issues involved, but this post is long enough already.

*****
What Schizophrenia is really Like

“I feel that writing to you there I am writing to the source of a ray of light from within a pit of semi-darkness. It is a strange place where you live, where administration is heaped upon administration, and all tremble with fear or abhorrence (in spite of pious phrases) at symptoms of actual non-local thinking. Up the river, slightly better, but still very strange in a certain area with which we are both familiar. And yet, to see this strangeness, the viewer must be strange.”

“I observed the local Romans show a considerable interest in getting into telephone booths and talking on the telephone and one of their favorite words was pronto. So it’s like ping-pong, pinging back again the bell pinged to me.”

Could you paraphrase this? Neither can I, and when, as a neurologist, I had occasion to see schizophrenics, the only way to capture their speech was to transcribe it verbatim. It can’t be paraphrased, because it makes no sense, even though it’s reasonably grammatical.

What is a neurologist doing seeing schizophrenics? That’s for shrinks isn’t it? Sometimes in the early stages, the symptoms suggest something neurological. Epilepsy for example. One lady with funny spells was sent to me with her husband. Family history is important in just about all neurological disorders, particularly epilepsy. I asked if anyone in her family had epilepsy. She thought her nephew might have it. Her husband looked puzzled and asked her why. She said she thought so because they had the same birthday.

It’s time for a little history. The board which certifies neurologists is called the American Board of Psychiatry and Neurology. This is not an accident, as the two fields are joined at the hip. Freud himself started out as a neurologist, wrote papers on cerebral palsy, and studied with the great neurologist of the time, Charcot, at la Salpetriere in Paris. Six months of my 3-year residency were spent in Psychiatry, just as psychiatrists spend time learning neurology (and are tested on it when they take their Boards).

Once a month, a psychiatrist friend and I would go to lunch, discussing cases that were neither psychiatric nor neurologic but a mixture of both. We never lacked for new material.

Mental illness is scary as hell. Society deals with it the same way that kids deal with their fears, by romanticizing it, making it somehow more human and less horrible in the process. My kids were always talking about good monsters and bad monsters when they were little. Look at Sesame street. There are some fairly horrible looking characters on it which turn out actually to be pretty nice. Adults have books like “One flew over the Cuckoo’s nest” etc. etc.

The first quote above is from a letter John Nash wrote to Norbert Wiener in 1959. All this, and much much more, can be found in “A Beautiful Mind” by Sylvia Nasar. It is absolutely the best description of schizophrenia I’ve ever come across. No, I haven’t seen the movie, but there’s no way it can be more accurate than the book.

Unfortunately, the book is about a mathematician, which immediately turns off 95% of the populace. But that is exactly its strength. Nash became ill much later than most schizophrenics — around 30 when he had already done great work. So people saved what he wrote, and could describe what went on decades later. Even better, the mathematicians had no theoretical axe to grind (Freudian or otherwise). So there’s no ego, id, superego or penis envy in the book, just page after page of description from well over 100 people interviewed for the book, who just talked about what they saw. The description of Nash at his sickest covers 120 pages or so in the middle of the book. It’s extremely depressing reading, but you’ll never find a better description of what schizophrenia is actually like — e.g. (p. 242) She recalled that “he kept shifting from station to station. We thought he was just being pesky. But he thought that they were broadcasting messages to him. The things he did were mad, but we didn’t really know it.”

Because of his previous mathematical achievements, people saved what he wrote — the second quote above being from a letter written in 1971 and kept by the recipient for decades, the first from a letter written 12 years before that.

There are a few heartening aspects of the book. His wife Alicia is a true saint, and stood by him and tried to help as best she could. The mathematicians also come off very well, in their attempts to shelter him and to get him treatment (they even took up a collection for this at one point).

I was also very pleased to see rather sympathetic portraits of the docs who took care of him. No 20/20 hindsight is to be found. They are described as doing the best for him that they could given the limited knowledge (and therapies) of the time. This is the way medicine has been and always will be practiced — we never really know enough about the diseases we’re treating, and the therapies are almost never optimal. We just try to do our best with what we know and what we have.

I actually ran into Nash shortly after the book came out. The Princeton University Store had a fabulous collection of math books back then — several hundred at least, most of them over $50, so it was a great place to browse, which I did whenever I was in the area. Afterwards, I stopped in a coffee shop in Nassau Square and there he was, carrying a large disheveled bunch of papers with what appeared to be scribbling on them. I couldn’t bring myself to speak to him. He had the eyes of a hunted animal.

I sincerely hope it works, but I’m very doubtful

A fascinating series of papers offers hope (in the form of a small molecule) for the truly horrible Werdnig-Hoffmann disease, which basically kills infants by destroying neurons in their spinal cord. For why this is especially poignant for me, see the end of the post.

First some background:

Our genes occur in pieces. Dystrophin is the protein mutated in the commonest form of muscular dystrophy. The gene for it is 2,220,233 nucleotides long, but dystrophin contains ‘only’ 3,685 amino acids, not the 740,000 the gene could in principle specify. What happens? The whole gene is transcribed into an RNA of this enormous length; then 78 distinct segments of RNA (called introns) are removed by a gigantic multimegadalton machine called the spliceosome, and the 79 segments actually coding for amino acids (these are the exons) are linked together and the RNA sent on its way.

All this was unknown in the 70s and early 80s when I was running a muscular dystrophy clinic and taking care of these kids. Looking back, it’s miraculous that more of us don’t have muscular dystrophy; there is so much that can go wrong with a gene this size, let alone transcribing and correctly splicing it to produce a functional protein.

One final complication — alternate splicing. The spliceosome removes introns and splices the exons together. But sometimes exons are skipped or one of several exons is used at a particular point in a protein. So one gene can make more than one protein. The record holder is something called the Dscam gene in the fruitfly which can make over 38,000 different proteins by alternate splicing.
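The Dscam figure isn’t magic; it falls straight out of combinatorics. The transcript has four variable positions, each choosing one exon from a cluster of mutually exclusive alternatives, so the isoform count is simply the product of the cluster sizes:

```python
from math import prod

# Mutually exclusive alternative exon clusters in the Drosophila Dscam gene
# (exon 4: 12 variants, exon 6: 48, exon 9: 33, exon 17: 2)
exon_choices = [12, 48, 33, 2]
isoforms = prod(exon_choices)
print(isoforms)  # 38016 possible proteins from a single gene
```

12 * 48 * 33 * 2 = 38,016, hence the “over 38,000 different proteins” from one gene.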

There is nothing worse than watching an infant waste away and die. That’s what Werdnig-Hoffmann disease is like, and I saw one or two cases during my years at the clinic. It is also called infantile spinal muscular atrophy. We all have two genes for the same crucial protein (called, unimaginatively, SMN). Kids who have the disease have mutations in one of the two genes (called SMN1). Why isn’t the other gene protective? It codes for the same sequence of amino acids (but using different synonymous codons). What goes wrong?

[ Proc. Natl. Acad. Sci. vol. 97 pp. 9618 – 9623 ’00 ] Why is SMN2 (the centromeric copy, i.e. the one closest to the middle of the chromosome, which is normal in most patients) not protective? It has a single translationally silent nucleotide difference from SMN1 in exon 7 (i.e. the difference doesn’t change the amino acid coded for). This disrupts an exonic splicing enhancer and causes exon 7 skipping, leading to abundant production of a shorter isoform (SMN2delta7). Thus even though both genes code for the same protein, only SMN1 actually makes the full protein.

Intellectually fascinating but ghastly to watch.

This brings us to the current papers [ Science vol. 345 pp. 624 – 625, 688 – 693 ’14 ].

More background. The molecular machine which removes the introns is called the spliceosome. It’s huge, containing 5 RNAs (called small nuclear RNAs, aka snRNAs) along with 50 or so proteins, with a total molecular mass again of around 2,500 kiloDaltons (2.5 megaDaltons). Think about it, chemists. Design 50 proteins and 5 RNAs, with probably 200,000+ atoms, so they all come together forming a machine to operate on other monster molecules — such as the mRNA for dystrophin alluded to earlier. Hard for me to believe this arose by chance, but current opinion has it that way.

Splicing out introns is a tricky process which is still being worked on. Mistakes are easy to make, and different tissues will splice the same pre-mRNA in different ways. All this happens in the nucleus before the mRNA is shipped outside where the ribosome can get at it.

The papers describe a small molecule which acts on the spliceosome to increase the inclusion of SMN2 exon 7. It does appear to work in patient cells and mouse models of the disease, even reversing weakness.

Why am I skeptical? Because just about every protein we make is spliced (except histones), and any molecule altering the splicing machinery seems almost certain to produce effects on many genes, not just SMN2. If it really works, these guys should get a Nobel.

Why does the paper grip me so? I watched the beautiful infant daughter of a cop and a nurse die of it 30 – 40 years ago. Even with all the degrees, all the training, I was no better for the baby than my immigrant grandmother dispensing emotional chicken soup from her dry goods store (she only had a 4th grade education). Fortunately, the couple took the 25% risk of another child with WH and produced a healthy infant a few years later.

A second reason — a beautiful baby granddaughter came into our world 24 hours ago.

Poets and religious types may intuit how miraculous our existence is, but the study of molecular biology proves it (to me at least).

As if the job shortage for organic/medicinal chemists wasn’t bad enough

Will synthetic organic chemists be replaced by a machine? Today’s (7 August ’14) Nature (vol. 512 pp. 20 – 22) describes RoboChemist. As usual, the instrument of job destruction is being built by the very species about to be destroyed. Nothing new here — “The Capitalists will sell us the rope with which we will hang them.” — Lenin. “I would consider it entirely feasible to build a synthesis machine which could make any one of a billion defined small molecules on demand,” says one organic chemist.

The design of the machine is already being studied, but with a rather paltry grant (1.2 million dollars). Even worse, for the thinking chemist, the choice of reactants and reactions to build the desired molecule will be made by the machine (given a knowledge base, and the algorithms that experienced chemists use, assuming they can be captured by a set of rules). E. J. Corey tried to do this automatically years ago with a program called LHASA (Logic and Heuristics Applied to Synthetic Analysis), but it never took off. Corey formalized what chemists had been doing all along — see http://luysii.wordpress.com/2010/06/20/retrosynthetic-analysis-and-moliere/

Another attempt along these lines is Chematica, which recently has had some success. A problem with using the chemical literature is that only the conditions for a successful reaction are published. A synthetic program needs to know what doesn’t work as much as it needs to know what does. This is an important problem in the medical/drug literature as well, where only studies showing a positive effect are published. There’s a great chapter in “How Not to Be Wrong” concerning the “International Journal of Haruspicy,” which publishes only statistically significant results for predicting the future by reading sheep entrails. They publish a lot of stuff because some 400 haruspicists in different labs are busy performing multiple experiments, 5% of which reach statistical significance by chance. Previously, drug companies published only their successful clinical trials. Now all trials will be going into a database regardless of outcome.

Automated machinery for making polynucleotides and polypeptides already exists, but here the reactions are limited. Still, the problem of getting the same reaction to work over and over with different molecules of the same class (amino acids, nucleotides) has been solved.

The last sentence is the most chilling “And with a large workforce of graduate students to draw on, academic labs often have little incentive to automate.” Academics — the last Feudal system left standing.

However, telephone operators faced the same fate years ago, due to automatic switching machinery. Given the explosion of telephone volume 50 years ago, there would have come a point where every woman in the USA would have had to work for the phone company just to handle it.

A similar moment of terror occurred in my field (clinical neurology) years ago with the invention of computerized axial tomography (CAT scans). All our diagnostic and examination skills (based on detecting slight deviations from normal function) would be out the window once the CAT scan showed what was structurally wrong with the brain, or so we feared, on the assumption that abnormalities in structure would show up before abnormalities in function. Didn’t happen. We’d get calls – we found this thing on the CAT scan. What does it mean?

Even this wonderful machine which can make any molecule you wish, will not tell you what cellular entity to attack, what the target does, and how attacking it will produce a therapeutically useful result.
