Category Archives: Medicine in general

Thrust and Parry about memory storage outside neurons.

First the post of 23 Feb ’14 discussing the paper (between *** and &&& in case you’ve read it already)

Then some of the rather severe criticism of the paper.

Then some of the reply to the criticisms

Then a few comments of my own, followed by yet another old post about the chemical insanity neuroscience gets into when concepts like concentration are applied to very small volumes.

Enjoy
***
Are memories stored outside of neurons?

This may turn out to be a banner year for neuroscience. Work discussed in the following older post is the first convincing explanation of why we need sleep that I’ve seen. https://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just how, chemically, and just where memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories from the past half century (seriously flawed, in my opinion) of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory) where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3, … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory.

#2 CaMKII — more plausible. There’s lots of it in brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To start doing so, calcium levels inside the neuron must rise. The enzyme is complicated, comprising 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day). A toy simulation of the switch idea follows #3 below.

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.
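Returning to #2 for a moment, here is the promised toy simulation of the CaMKII switch — my own illustrative sketch with made-up rate constants, not anything taken from the literature:

```python
import random

# Toy model of the CaMKII 'molecular switch' described in #2 above. One holoenzyme with
# 12 subunits; a brief calcium pulse starts phosphorylation, autophosphorylation then
# keeps the enzyme 'on' without calcium, and phosphatases slowly remove phosphates.
# All rate parameters below are invented for illustration.

N_SUBUNITS = 12
P_CALCIUM = 0.3        # per-step phosphorylation chance while calcium is high
P_AUTONOMOUS = 0.2     # per-step chance once >= 3 subunits are already phosphorylated
P_PHOSPHATASE = 0.04   # per-step chance a phosphate is removed

def simulate(calcium_steps=10, total_steps=100, seed=0):
    random.seed(seed)
    on = 0                                  # number of phosphorylated subunits
    trace = []
    for step in range(total_steps):
        calcium = step < calcium_steps
        for _ in range(N_SUBUNITS - on):    # try to phosphorylate each free subunit
            if (calcium and random.random() < P_CALCIUM) or \
               (on >= 3 and random.random() < P_AUTONOMOUS):
                on += 1
        for _ in range(on):                 # phosphatases chip away at the phosphates
            if random.random() < P_PHOSPHATASE:
                on -= 1
        trace.append(on)
    return trace

trace = simulate()
print("during the calcium pulse:", trace[:10])
print("long after the pulse    :", trace[-10:])  # stays high: the switch is still 'on'
```

Run it and the enzyme stays phosphorylated long after the calcium pulse ends, which is the attraction of the idea; add protein turnover (replace the holoenzyme with a fresh, unphosphorylated one every so often) and the switch eventually resets, which is the objection raised above.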

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The remaining 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true, but in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the neuronal membrane to the chloride ion. When chloride flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that the effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.
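A back-of-the-envelope Nernst calculation shows how shifting intracellular chloride (and hence the balance against the impermeant anions) flips GABA between inhibitory and excitatory. The concentrations and resting potential below are textbook ballpark figures chosen for illustration, not numbers from the paper:

```python
import math

# Nernst potential for chloride: E_Cl = (RT/zF) * ln([Cl]out / [Cl]in), with z = -1.
RT_OVER_F_mV = 26.7          # RT/F at body temperature (~310 K), in millivolts
CL_OUT_mM = 125.0            # extracellular chloride, ballpark

def chloride_reversal_mV(cl_in_mM):
    return -RT_OVER_F_mV * math.log(CL_OUT_mM / cl_in_mM)

RESTING_mV = -65.0           # typical neuronal resting potential, ballpark

for label, cl_in in (("adult neuron   ", 7.0), ("immature neuron", 25.0)):
    e_cl = chloride_reversal_mV(cl_in)
    effect = "inhibitory (hyperpolarizing)" if e_cl < RESTING_mV else "excitatory (depolarizing)"
    print(f"{label}: [Cl-]i = {cl_in:4.1f} mM -> E_Cl = {e_cl:5.0f} mV, opening Cl- channels is {effect}")
```

With low intracellular chloride the chloride reversal potential sits below the resting potential and opening GABA channels hyperpolarizes the cell; with the higher chloride of immature neurons it sits above, and the same channels depolarize it.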

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

We’ve concentrated so much on neurons, that we may have missed something big. In a similar vein, the function of sleep may be to wash neurons free of stuff built up during the day outside of them.

&&&

In the 5 September ’14 Science (vol. 345 p. 1130) 6 researchers from Finland, Case Western Reserve and U. California (Davis) basically say that the paper conflicts with fundamental thermodynamics so severely that “Given these theoretical objections to their interpretations, we choose not to comment here on the experimental results”.

In more detail “If Cl− were initially in equilibrium across a membrane, then the mere introduction of immobile negative charges (a passive element) at one side of the membrane would, according to their line of thinking, cause a permanent change in the local electrochemical potential of Cl−, thereby leading to a persistent driving force for Cl− fluxes with no input of energy.” This essentially accuses the authors of inventing a perpetual motion machine.

Then in a second letter, two more researchers weigh in (same page) — “The experimental procedures and results in this study are insufficient to support these conclusions. Contradictory results previously published by these authors and other laboratories are not referred to.”

The authors of the original paper don’t take this lying down. On the same page they discuss the notion of the Donnan equilibrium and say they were misinterpreted.

The paper, and the 3 letters all discuss the chloride concentration inside neurons which they call [Cl-]i. The problem with this sort of thinking (if you can call it that) is that it extrapolates the notion of concentration to very small volumes (such as a dendritic spine) where it isn’t meaningful. It goes on all the time in neuroscience. While between any two small rational numbers there is another, matter can be sliced only so thinly without getting down to the discrete atomic level. At this level concentration (which is basically a ratio between two very large numbers of molecules e.g. solute and solvent) simply doesn’t apply.

Here’s a post on the topic from a few months ago. It contains a link to another post showing that even Nobelists have chemical feet of clay.

More chemical insanity from neuroscience

The current issue of PNAS contains a paper (vol. 111 pp. 8961 – 8966, 17 June ’14) which uncritically quotes some work done back in the 80’s and flatly states that synaptic vesicles http://en.wikipedia.org/wiki/Synaptic_vesicle have a pH of 5.2 – 5.7. Such a value is meaningless. Here’s why.

A pH of 5 means that there are 10^-5 Moles of H+ per liter or 6 x 10^18 actual ions/liter.

Synaptic vesicles have an ‘average diameter’ of 40 nanoMeters (400 Angstroms to the chemist). Most of them are nearly spherical. So each has a volume of

4/3 * pi * (20 * 10^-9)^3 = 33,510 * 10^-27 = 3.4 * 10^-23 liters. 20 rather than 40 because volume involves the radius.

So each vesicle contains 6 * 10^18 * 3.4 * 10^-23 = 20 * 10^-5 = .0002 ions.

This is similar to the chemical blunders on concentration in the nano domain committed by a Nobelist. For details please see — http://luysii.wordpress.com/2013/10/09/is-concentration-meaningful-in-a-nanodomain-a-nobel-is-no-guarantee-against-chemical-idiocy/

Didn’t these guys ever take Freshman Chemistry?

Addendum 24 June ’14

Didn’t I ever take it? John wrote the following this AM:

Please check the units in your volume calculation. With r = 10^-9 m, then V is in m^3, and m^3 is not equal to L. There’s 1000 L in a m^3.
Happy Anniversary by the way.

To which I responded

Ouch! You’re correct of course. However even with the correction, the results come out to .2 free protons (or H3O+) per vesicle, a result that still makes no chemical sense. There are many more protons in the vesicle, but they are buffered by the proteins and the transmitters contained within.
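For anyone who wants to redo the arithmetic with the units straightened out, here is the whole calculation in a few lines (nothing assumed beyond the numbers quoted above):

```python
import math

# Free protons in a 40 nm synaptic vesicle at pH 5, with the m^3 -> liter correction.
AVOGADRO = 6.022e23                        # ions per mole
radius_m = 20e-9                           # 40 nm diameter -> 20 nm radius
volume_m3 = (4.0 / 3.0) * math.pi * radius_m**3
volume_L = volume_m3 * 1000.0              # 1 m^3 = 1000 liters (John's correction)

pH = 5.0
protons_per_liter = 10**(-pH) * AVOGADRO   # ~6 x 10^18 ions per liter
protons_per_vesicle = protons_per_liter * volume_L

print(f"vesicle volume           = {volume_L:.2e} liters")       # ~3.4e-20 L
print(f"free protons per vesicle = {protons_per_vesicle:.2f}")   # ~0.2
```

Either way the answer is a fraction of a free proton per vesicle, which is the point: quoting a pH for something this small describes a time average over buffered protons coming and going, not a concentration in the Freshman Chemistry sense.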

A very UNtheoretical approach to cancer diagnosis

We have tons of different antibodies in our blood. Without even taking mutation into account we have 65 heavy chain genes, 27 diversity segments, and 6 joining regions for them (making 10,530 possibilities) — then there are 40 genes for the kappa light chains and 30 for the lambda light chains, or over 1,200 * 10,530. That’s without the mutations we know occur to increase antibody affinity. So the number of antibodies probably ramming around in our blood is over a billion (I doubt that anyone has counted them, just as no one has ever counted the neurons in our brain). Antibodies can bind to anything — sugars, fats — but we think of them as mostly binding to protein fragments.
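Here is the arithmetic above written out, using only the gene-segment counts quoted in this paragraph (a sketch; it ignores junctional diversity and somatic hypermutation, which push the real number far higher):

```python
# Combinatorial antibody diversity from the segment counts quoted above.
heavy_V, heavy_D, heavy_J = 65, 27, 6
heavy_chains = heavy_V * heavy_D * heavy_J       # 10,530 heavy chain possibilities

kappa_V, lambda_V = 40, 30
light_chain_factor = kappa_V * lambda_V          # the 'over 1,200' figure used above

combinations = heavy_chains * light_chain_factor
print(f"heavy chain possibilities : {heavy_chains:,}")     # 10,530
print(f"heavy x light combinations: {combinations:,}")     # ~12.6 million, before mutation
```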

We also know that cancer is characterized by mutations, particularly in the genes coding for proteins. Many of these mutations have never been seen by the immune system, so they act as neoantigens. So what [ Proc. Natl. Acad. Sci. vol. 111 pp. E3072 - E3080 '14 ] did was make a chip containing 10,000 peptides and see which of them were bound by antibodies in the blood.

The peptides were 20 amino acids long, with 17 randomly chosen amino acids and a common 3 amino acid linker to the chip. While 10,000 seems like a lot of peptides, it is a tiny fraction (roughly 10^-18) of the 20^17 = 1.3 * 10^22 possible 17 amino acid peptides.

The blood was first diluted 500x so blood proteins other than antibodies don’t bind significantly to the arrays. The assay is disease agnostic. The pattern of binding of a given person’s blood to the chip is called an immunosignature.

What did they measure? 20 samples from each of five cancer cohorts collected from multiple geographic sites and 20 noncancer samples. A reference immunosignature was generated. Then 120 blinded samples from the same diseases gave 95% classification accuracy. To investigate the breadth of the approach and test sensitivity, immunosignatures from 75% of over 1,500 historical samples (some over 10 years old) comprising 14 different diseases were used for training; then the other 25% were read blind with an accuracy of over 98% — not too impressive, they need to get another 1,500 samples. Once you’ve trained on 75% of the sample space, you’d pretty much expect the other 25% to look the same.

The immunosignature of a given individual consists of an overlay of the patterns from the binding signals of many of the most prominent circulating antibodies. Some are present in everyone, some are unique.

A 2002 reference (Molecular Biology of the Cell, 4th Edition) states that there are 10^9 antibodies circulating in the blood. How can you pick up a signature on 10K peptides from this? Presumably neoantigens from cancer cells elicit higher affinity antibodies than self-antigens do. High affinity monoclonals can be diluted hundreds of times without diminishing the signal.

The next version of the immunosignature peptide microArray under development contains over 300,000 peptides.

The implication is that each cancer and each disease produces different antigens and/or different B cell responses to common antigens.

Since the peptides are random, you can’t align the peptides in the signature to the natural proteomic space to find out what the antibody is reacting to.

It’s a completely atheoretical approach to diagnosis, but intriguing. I’m amazed that such a small sample of protein space can produce a significant binding pattern diagnostic of anything.

It’s worth considering just what a random peptide of 17 amino acids actually is. How would you make one up? Would you choose randomly, giving all 20 amino acids equal weight, or would you weight the probability of a choice by the percentage of that amino acid in the proteome of the tissue you are interested in? Do we have such numbers? My guess is that proline, glycine and alanine would be the most common amino acids — there is so much collagen around, and these 3 make up a high percentage of the amino acids in the various collagens we have (over 15 at least).
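Here is one way to make up such a peptide, either with all 20 amino acids weighted equally or with the collagen-heavy bias guessed at above. The weights are invented placeholders purely for illustration; real proteome-wide frequencies would have to be looked up:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids, one-letter codes

def random_peptide_uniform(length=17):
    """Every amino acid equally likely at every position."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

# Hypothetical weighting: bump up glycine, alanine and proline (a collagen-like bias),
# leave everything else at 1. These numbers are made up for illustration only.
WEIGHTS = {aa: 1.0 for aa in AMINO_ACIDS}
WEIGHTS.update({"G": 3.0, "A": 2.5, "P": 2.5})

def random_peptide_weighted(length=17):
    """Amino acids drawn in proportion to the (assumed) weights above."""
    return "".join(random.choices(list(WEIGHTS), weights=list(WEIGHTS.values()), k=length))

print("uniform :", random_peptide_uniform())
print("weighted:", random_peptide_weighted())
```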

Bad news on the cancer front

[ Nature vol. 512 pp. 143 - 144, 155 - 160 '14 ] Nuc-seq is an innovative sequencing method which achieves almost complete sequencing of whole genomes in single cells. It sequences DNA from cells about to divide (the G2/M stage of the cell cycle, which has twice the DNA content of the usual cell). The authors sequenced the genomes of multiple single cells from two types of human breast cancer (estrogen receptor positive and triple negative — the latter much more aggressive) and found that no two genomes of individual tumor cells were identical. Many cells had new mutations unique to them.

This brings into question what we actually mean by a cancer cell clone. They validated some of the single cell mutations by deep sequencing of a single molecule (not really sure what this is).

Large scale structural changes in DNA (amplification and deletion of large blocks of DNA) occurred early in tumor development. They remained stable as clonal expansion of the tumor occurred (e.g. they were found in all the cancer cells whose genome was sequenced). Point mutations accumulated more gradually, generating extensive subclonal diversity. Many of the mutations occur in less than 10% of the tumor mass. Triple negative breast cancers (aggressive) have mutation rates 13 times greater than the slower growing estrogen receptor positive breast cancer cells.

This implies that the mutations are there BEFORE chemotherapy. This has always been a question as most types of chemotherapy attack DNA replication and are inherently mutagenic. It also implies that slamming cancer with chemotherapy early before it has extensively mutated is locking the barn door after the horse has been stolen. It still might help in preventing metastasis, so the approach remains viable.

However nuc-seq may only be useful for cancer cells without aneuploidy http://en.wikipedia.org/wiki/Aneuploidy which is extremely common in cancer cells.

Why is this such bad news? It means that before chemotherapy even starts there is a high degree of genetic diversity present in the tumor cell population. This means that natural selection (in the form of chemotherapy) has a diverse population to work on at the get go, making resistance far more likely to occur.

Had enough? Here’s more — [ Nature vol. 511 pp. 543 - 550 '14 ] A report of 230 resected lung adenocarcinomas using mRNA, microRNA and DNA sequencing found an incredible 8.8 mutations/megabase — e.g. 8.8 * 3.2 * 1,000 ≈ 28,000 mutations over the roughly 3.2 gigabase genome. Aberrations in NF1, MET, ERBB2 and RIT1 occurred in 13% and were enriched in samples otherwise lacking an activated oncogene. Even when not mutated, mRNA splicing was different in tumors. As far as oncogenic pathways, multiple pathways were involved — p53 in 63%, PI3K/mTOR in 25%, receptor tyrosine kinase in 76%, cell cycle regulators in 64%.

This is the opposite side of the coin from the first paper, where the genomes of single tumor cells were sequenced. It is doubtful that all cells have the 28,000 mutations, which probably result from each cell having a subset. The first paper didn’t count how many mutations a single cell had (as far as I could see).

So oncologists are attacking a hydra-headed monster.

The perfect aphrodisiac ?

We’re off to London for a few weeks to celebrate our 50th Wedding Anniversary. As a parting gift to all you lovelorn organic chemists out there, here’s a drug target for a new aphrodisiac.

Yes, it’s yet another G Protein Coupled Receptor (GPCR) of which we have 800+ in our genome, and which some 30% of drugs usable in man target (but not this one).

You can read all about it in a leisurely review of “Affective Touch” in Neuron vol. 82 pp. 737 – 755 ’14, and Nature vol. 493 pp. 669 – 673 ’13. The receptor (if the physiological ligand is known, the papers are silent about it) is found on a type of nerve going to hairy skin. It’s called MRGPRB4.

The following has been done in people. Needles were put in a cutaneous nerve, and skin was lightly stroked at rates between 1 and 10 centimeters/second. Some of the nerves respond at very high frequency 50 – 100 impulses/second (50 – 100 Hertz) to this stimulus. Individuals were asked to rate the pleasantness of the sensation produced. The most pleasant sensations produced the highest frequency responses of these nerves.

MRGPRB4 is found on nerves which respond like this (and almost nowhere else as far as is known), so a ligand for it should produce feelings of pleasure. The whole subject of proteins which produce effects when the cell carrying them is mechanically stimulated is fascinating. Much of the work has been done with the hair cells of the ear, which discharge when the hairs are displaced by sound waves. Proteins embedded in the hairs trigger an action potential when disturbed.

Perhaps there is no chemical stimulus for MRGPRB4, just as there isn’t for the hair cells, but even so it’s worth looking for some chemical which does turn on MRGPRB4. Perhaps a natural product already does this, and is in one of the many oils and lotions people apply to themselves. Think of the chemoattractants for bees and other insects.

If you’re the lucky soul who finds such a drug, fame and fortune (and perhaps more) is sure to be yours.

Happy hunting

Back in a few weeks

A huge amount of work will need to be redone

The previous post is reprinted below the —- if you haven’t read it, you should do so now before proceeding.

Briefly, no one had ever bothered to check if subjects were asleep while studying the default mode of brain activity. The paper discussed in the previous post appeared in the 7 May ’14 issue of Neuron.

In the 13 May ’14 issue of PNAS [ Proc. Natl. Acad. Sci. vol. 111 pp. E2066 - E2075 '14 ] a paper appeared on genetic links to default mode abnormalities in schizophrenia and bipolar disorder.

From the abstract “Study subjects (n = 1,305) underwent a resting-state functional MRI scan and were analyzed by a two-stage approach. The initial analysis used independent component analysis (ICA) in 324 healthy controls, 296 schizophrenic probands, 300 psychotic bipolar disorder probands, 179 unaffected first-degree relatives of schizophrenic probands, and 206 unaffected first-degree relatives of psychotic bipolar disorder probands to identify default mode networks and to test their biomarker and/or endophenotype status. A subset of controls and probands (n = 549) then was subjected to a parallel ICA (para-ICA) to identify imaging–genetic relationships. ICA identified three default mode networks.” The paper represents a tremendous amount of work (and expense).

No psychiatric disorder known to man has normal sleep. The abnormalities found in the PNAS study may not be abnormalities of the default mode network, but of the way these people were sleeping. So this huge amount of work needs to be repeated. And this is just one paper. As mentioned, a Google search on “default mode network” garnered 32,000,000 hits.

Very sad.

____

How badly are thy researchers, O default mode network

If you Google “default mode network” you get 32 million hits in under a second. This is what the brain is doing when we’re sitting quietly not carrying out some task. If you don’t know how we measure it using functional MRI, skip to the **** and then come back. I’m not a fan of functional MRI (fMRI); the pictures it produces are beautiful and seductive, and unfortunately not terribly repeatable.

If [ Neuron vol. 82 pp. 695 - 705 '14 ] is true, then all the work on the default network should be repeated.

Why?

Because they found that less than half of 71 subjects studied were stably awake after 5 minutes in the scanner. I.e. they were actually asleep part of the time.

How can they say this?

They used polysomnography — which simultaneously measures tons of things (eye movements, oxygen saturation, EEG, muscle tone, respiration, pulse) and is the gold standard for sleep studies — on the subjects while in the MRI scanner.

You don’t have to be a neuroscientist to know that cognition is rather different in wake and sleep.

Pathetic.

****

There are now noninvasive methods to study brain activity in man. The most prominent one is called BOLD, and is based on the fact that blood flow increases way past what is needed with increased brain activity. This was actually noted by Wilder Penfield operating on the brain for epilepsy in the 30s. When the patient had a seizure on the operating table (they could keep things under control by partially paralyzing the patient with curare) the veins in the area producing the seizure turned red. Recall that oxygenated blood is red while the deoxygenated blood in veins is darker and somewhat blue. This implied that more blood was getting to the convulsing area than it could use.

BOLD depends on slight differences in the way oxygenated hemoglobin and deoxygenated hemoglobin interact with the magnetic field used in magnetic resonance imaging (MRI). The technique has had a rather checkered history, because very small differences must be measured, and there is lots of manipulation of the raw data (never seen in papers) to be done. 10 years ago functional magnetic resonance imaging (fMRI) was called pseudocolor phrenology.

Some sort of task or sensory stimulus is given and the parts of the brain showing increased hemoglobin + oxygen are mapped out. As a neurologist, I was naturally interested in this work. Very quickly, I smelled a rat. The authors of all the papers always seemed to confirm their initial hunch about which areas of the brain were involved in whatever they were studying. Science just isn’t like that. Look at any issue of Nature or Science and see how many results were unexpected. Results were largely unreproducible. It got so bad that an article in Science 2 August ’02 p. 749 stated that neuroimaging (e.g. functional MRI) has a reputation for producing “pretty pictures” but not replicable data. It has been characterized as pseudocolor phrenology (or words to that effect).

What was going on? The data was never actually shown, just the authors’ manipulation of it. Acquiring the data is quite tricky — the slightest head movement alters the MRI pattern. Also the difference in NMR signal between hemoglobin without oxygen and hemoglobin with oxygen is small (only 1 – 2%). Since the technique involves subtracting two data sets for the same brain region, this doubles the error.
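A tiny simulation shows what the subtraction does to the noise. With independent noise on the two measurements the variance of the difference is the sum of the two variances — the error is doubled in variance terms, i.e. the noise standard deviation grows by about 40% — while the signal being hunted stays a 1 – 2% change (made-up numbers, not real fMRI data):

```python
import random
import statistics

# Two independent measurements of the same region: 'task' carries a 1% BOLD change,
# 'rest' does not. Both carry the same measurement noise.
random.seed(1)
TRUE_SIGNAL = 100.0      # arbitrary units
NOISE_SD = 1.5           # noise comparable to the 1 - 2% effect being sought

task = [TRUE_SIGNAL * 1.01 + random.gauss(0, NOISE_SD) for _ in range(10_000)]
rest = [TRUE_SIGNAL + random.gauss(0, NOISE_SD) for _ in range(10_000)]
diff = [t - r for t, r in zip(task, rest)]

print(f"noise sd of one measurement: {statistics.stdev(rest):.2f}")
print(f"noise sd after subtraction : {statistics.stdev(diff):.2f}")  # ~1.4x larger
```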


Why marihuana scares me

There’s an editorial in the current Science concerning how very little we know about the effects of marihuana on the developing adolescent brain [ Science vol. 344 p. 557 '14 ]. We know all sorts of wonderful neuropharmacology and neurophysiology about delta-9 tetrahydrocannabinol (d9-THC) — http://en.wikipedia.org/wiki/Tetrahydrocannabinol The point of the authors (the current head of the American Psychiatric Association, and the first director of the National (US) Institute of Drug Abuse) is that there are no significant studies of what happens to adolescent humans (as opposed to rodents) taking the stuff.

Marihuana would be the first mind-altering substance NOT to have serious side effects in a subpopulation of people using the drug — or just about any drug in medical use for that matter.

Any organic chemist looking at the structure of d9-THC (see the link) has to be impressed with what a lipid it is — 21 carbons, only 1 hydroxyl group, and an ether moiety. Everything else is hydrogen. Like most neuroactive drugs produced by plants, it is quite potent. A joint has only 9 milliGrams, and smoking undoubtedly destroys some of it. Consider alcohol, another lipid soluble drug. A 12 ounce beer with 3.2% alcohol content has 12 * 28.3 * .032 = 10.8 grams of alcohol — molecular mass 46 grams/mole — so the dose is roughly 11/46 = 0.24 moles. To get drunk you need more than one beer. Compare that to a dose of .009/300 moles (3 * 10^-5 moles) of d9-THC.
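Writing the molar comparison out (using 46 grams/mole for ethanol and the rounded 300 grams/mole for d9-THC used above; the exact value is closer to 314):

```python
# Rough molar dose comparison: one 3.2% beer vs. one 9 mg joint of d9-THC.
ETHANOL_MW = 46.07     # grams/mole for ethanol (C2H5OH)
THC_MW = 300.0         # grams/mole for d9-THC as rounded above (C21H30O2 is ~314.5)

beer_grams_ethanol = 12 * 28.3 * 0.032   # 12 oz x ~28.3 g/oz x 3.2% alcohol ~ 10.8 g
joint_grams_thc = 0.009                  # 9 milligrams, some destroyed by smoking

beer_moles = beer_grams_ethanol / ETHANOL_MW
thc_moles = joint_grams_thc / THC_MW

print(f"ethanol in one beer : {beer_moles:.2f} moles")              # ~0.24
print(f"d9-THC in one joint : {thc_moles:.1e} moles")               # ~3e-5
print(f"molar ratio         : {beer_moles / thc_moles:,.0f} to 1")  # several thousand fold
```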

As we’ve found out — d9-THC is so potent because it binds to receptors for it. Unlike ethanol which can be a product of intermediary metabolism, there aren’t enzymes specifically devoted to breaking down d9-THC. In contrast, fatty acid amide hydrolase (FAAH) is devoted to breaking down anandamide, one of the endogenous compounds d9-THC is mimicking.

What really concerns me about this class of drugs, is how long they must hang around. Teaching neuropharmacology in the 70s and 80s was great fun. Every year a new receptor for neurotransmitters seemed to be found. In some cases mind benders bound to them (e.g. LSD and a serotonin receptor). In other cases the endogenous transmitters being mimicked by a plant substance were found (the endogenous opiates and their receptors). Years passed, but the receptor for d9-thc wasn’t found. The reason it wasn’t is exactly why I’m scared of the drug.

How were the various receptors for mind benders found? You throw a radioactively labelled drug (say morphine) at a brain homogenate, and purify what it is binding to. That’s how the opiate receptors etc. etc. were found. Why did it take so long to find the cannabinoid receptors? Because they bind strongly to all the fats in the brain being so incredibly lipid soluble. So the vast majority of stuff bound wasn’t protein at all, but fat. The brain has the highest percentage of fat of any organ in the body — 60%, unless you considered dispersed fatty tissue an organ (which it actually is from an endocrine point of view).

This has to mean that the stuff hangs around for a long time, without any specific enzymes to clear it.

It’s obvious to all that cognitive capacity changes from childhood to adult life. All sorts of studies with large numbers of people have done serial MRIs on children and adolescents as they develop and age. Here are a few references to get you started [ Neuron vol. 72 pp. 873 - 884 '11, Proc. Natl. Acad. Sci. vol. 107 pp. 16988 - 16993 '10, vol. 111 pp. 6774 - 6779 '14 ]. If you don’t know the answer, think about the change in thickness of the cerebral cortex from age 9 to 20. Surprisingly, it gets thinner, not thicker. The effect happens later in the association areas thought to be important in higher cognitive function than in the primary motor or sensory areas. Paradoxical isn’t it? Based on animal work this is thought to be due to pruning of synapses.

So throw a long-lasting retrograde neurotransmitter mimic like d9-THC at the dynamically changing adolescent brain and hope for the best. That’s what the cited editorialists are concerned about. We simply don’t know and we should.

Having been in Cambridge when Leary was just getting started in the early 60’s, I must say that the idea of ‘tune in, turn on, and drop out’ never appealed to me. Most of the heavy marihuana users I’ve known (and treated for other things) were happy, but rather vague and frankly rather dull.

Unfortunately as a neurologist, I had to evaluate physician colleagues who got in trouble with drugs (mostly with alcohol). One very intelligent polydrug user MD, put it to me this way — “The problem is that you like reality, and I don’t”.

Is a sea change taking place at the New York Times ?

The little kid started crying as I approached him with the syringe filled with yellow fluid. He knew that after he was held down and I injected him he would be violently sick and vomit repeatedly.

It was 1964 and this happened at the Children’s Hospital of Philadelphia (CHOP) and the kid had acute lymphatic leukemia, and the syringe was full of methotrexate, the antifolate drug in use at the time. I was a third year med student. Although Stanley Milgram had begun his “Obedience to Authority” experiments in 1961 http://en.wikipedia.org/wiki/Milgram_experiment, I was hardly a happy or willing participant in the proceedings. I had nightmares about it.

Like all the kids with leukemia at CHOP, the little boy was part of a ‘study’ run by an oncologist, with an accent right out of Boris Karloff. I thought he was a monster. He was so happy that the kids in his branch of the study survived a horrible 21 months, vs. the previous record of 18. I thought that the kids were being kept alive and suffering when they shouldn’t have been, in order to set a new survival record. The study randomized the kids between the new regimen and the current regimen showing the best survival.

Well, I was terribly wrong, and the oncologist was a hero not a monster. Presently the cure rate (not survival) of childhood leukemia is over 90%. We now worry about the long term side effects of the drugs (and radiation) used to cure it — cognitive problems, fertility problems. It was precisely because the new treatment was compared to the best previous treatment that we are where we are today.

What in the world does this have to do with the New York Times?

Simply this, on Monday 21 April the front page of the New York Times contained an article titled “50 Years Later, Hardship Hits Back, Poorest Counties Are Still Losing in War on Want”. They don’t call it the “War on Poverty” until the 5th paragraph. Nonetheless, the article (without explicitly saying so) documents just what a failure it has been. Nowhere in the article is there any mention of why it failed, but it’s clear that only more of the same has been tried — more food stamps, more medicaid, more free school lunches, etc. etc. It is claimed in the article that this lifted tens of thousands above a subsistence standard of living, yet 15% of the populace is still living in poverty and 47/300 million of us are on food stamps.

At least the Times is no longer pretending that the War on Poverty (started in 1964 when I was pushing methotrexate) is a success.

Another sign of a sea change at the Times appeared the day before on the Op-Ed page in an article titled “From Rags to Riches to Rags” in which the notion of a static top 1% in income was debunked. A study of 44 years of longitudinal data of people from 25 to 60 showed that 12% of all of them would be in the top 1% of income for at least one year, and that 39% will be in the top 5% of income for at least 1 year.

A third appeared on the 22nd in a front page article concerning a near lynching by Blacks in Detroit of a white man who hit a child with his car.

In recent years, I’ve thought that I’ve had to read the Times much as the Russians read Pravda during cold war I (and perhaps today). A friend has called it ‘advocacy driven journalism’. Perhaps there will be a shift in orientation from left to right, but, even so, I’m not a fan of having articles #1 and #3 any place other than the Op-Ed page. Advocacy journalism is advocacy journalism whether it agrees with your political orientation or not. The 3 articles cited really aren’t news. That’s what the opinion page is for — opinion and background.

80+ years ago my future parents discovered that one of the first things they had in common was that they both read the Times. I grew up with it, and hopefully it will become a great newspaper again.

The failure to try anything new against poverty is a manifestation of the arrogance of the intelligent, about which there will be another post.

Further (physical) chemical elegance

If the chemical name phosphatidyl serine (PS) draws a blank, read the verbatim copy of a previous post under the *** to find out why it is so important to our existence. It is an ‘eat me’ signal when there is lots of it around, telling professional scavenger cells to engulf the cell showing lots of PS on its surface.

Life, as usual, is more complicated. There are a variety of proteins exposed on cell surfaces which bind to phosphatidylserine. Not only that, but exposing just a little PS on the surface of a cell can trigger a protective immune response. Immune cells binding to just a little PS on the surface of another cell proliferate rather than eat the cell expressing the PS. This brings us to Proc. Natl. Acad. Sci. vol. 111 pp. 5526 – 5531 ’14, which explains how a given PS receptor (called TIM4) acts differently depending on how much PS is present.

Some PS receptors such as Annexin V have essentially an all or none response to PS, if they bind at all, they trigger a response in the cell carrying them. Not so for TIM4 which only reacts if there is a lot of PS around, leaving cells which express less PS alone. This allows these cells to function in the protective immune response.

So how does TIM4 do this? See if you can think of a mechanism before reading the rest.

In addition to the PS binding pocket TIM4 has 4 peripheral basic residues in separate places. The basic residues are positively charged at physiologic pH and bind to the negatively charged phosphate group of phosphatidyl serine or to its carboxylate anion. The paper doesn’t explain how these basic residues avoid binding to the other phospholipids of the cell surface (such as phosphatidyl choline or sphingomyelin). It is conceivable that the basic side chains (arginine, lysine etc.) are so set up that they only bind to carboxylate anions and not phosphate anions (but this is a stretch). That would at least give them specificity for phosphatidyl serine as opposed to the other phospholipids present in both leaflets of the cell membrane. In any event TIM4 will be triggered only if these groups also bind PS, leaving cells which show relatively little PS alone. Clever no?

For the cognoscenti, the Hill coefficient of TIM4 is 2 while that of Annexin V is 8 (describing more than explaining the all or none character of Annexin V binding).
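For anyone who wants to see what those Hill coefficients mean in practice, here is the Hill equation evaluated at a few PS levels. The K values are arbitrary (set to 1 in the units below); only the exponents 2 and 8 come from the discussion above:

```python
def hill(ligand, K=1.0, n=1.0):
    """Fractional response from the Hill equation: L^n / (K^n + L^n)."""
    return ligand**n / (K**n + ligand**n)

# Compare a graded response (n = 2, TIM4-like) with a steep, nearly all-or-none
# response (n = 8, Annexin V-like). PS level is in arbitrary units relative to K.
for ps in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"PS = {ps:4.2f} x K   n=2 (TIM4-like): {hill(ps, n=2):.2f}   "
          f"n=8 (Annexin V-like): {hill(ps, n=8):.2f}")
```

With n = 8 the response jumps from essentially zero to essentially complete over a narrow range of PS — the all-or-none behavior of Annexin V — while the n = 2 curve rises gradually with increasing PS.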

****
Flippase. Eat me signals. Dragging their tails behind them. Have cellular biologists and structural biochemists gone over to the dark side? It’s all quite innocuous as the old nursery rhyme will show

Little Bo Peep has lost her sheep
and doesn’t know where to find them
Leave them alone, and they’ll come home
wagging their tails behind them.

First, some cellular biochemistry. The lipid bilayer encasing all our cells is made of two leaflets, inner and outer. The composition of the two is different (unlike the soap bubble). On the inside we find phosphatidylethanolamine (PE), phosphatidylserine (PS). The outer leaflet contains phosphatidylcholine (PC) and sphingomyelin (SM) and almost no PE or PS. This is clearly a low entropy situation compared to having all 4 randomly dispersed between the 2 leaflets.

What is the possible use of this (notice how teleology invariably creeps into cellular biology)? Chemistry is powerless to explain such things. Much as I love chemistry, such truths must be faced.

It takes energy to maintain this peculiar distribution. The enzyme moving PE and PS back inside the cell is the flippase. It requires energy in the form of ATP to operate. When a cell is dying ATP drops, and entropy takes its course moving PE and PS to the cell surface. Specialized cells (macrophages) exist to scoop up the dying or dead cells, without causing inflammation. They recognize PE and PS by a variety of receptors and munch up cells exposing them on the surface. So PE and PS are eat me signals which appear when there isn’t enough ATP around for flippase to use to haul PE and PS back inside. Clever no?

Now for some juicy chemistry (assuming that you consider transport of a molecule across a lipid bilayer actual chemistry — no covalent bonds to the transferred molecule are formed or removed, although they are to the transporter). Well, it certainly is physical chemistry, isn’t it?

Here are the structures of PE, PS, PC, SM http://www.google.com/search?q=phosphatidylserine&client=safari&rls=en&tbm=isch&tbo=u&source=univ&sa=X&ei=bDRLU5yfHOPLsQSOnoG4BA&ved=0CPABEIke&biw=1540&bih=887#facrc=_&imgdii=_&imgrc=qrLByG2vmhWdwM%253A%3BwAtgsTPwCxeZXM%3Bhttp%253A%252F%252Fscience.csumb.edu%252F~hkibak%252F241_web%252Fimg%252Fpng%252FCommon_Phospholipids.png%3Bhttp%253A%252F%252Fscience.csumb.edu%252F~hkibak%252F241_web%252Fcoursework_pages%252F2012_02_2.html%3B1297%3B934.

There are a few things to notice. Like just about every lipid found in our membranes, they are amphipathic — they have a very lipid soluble part (look at the long hydrocarbon chains hanging below them) and a very water soluble part — the head groups containing the phosphate.

This brings us to [ Proc. Natl. Acad. Sci. vol. 111 pp. E1334 - E1343 '14 ], which describes ATP8A2 (aka the flippase). Interestingly, the protein, with at least 10 alpha helices spanning the membrane, and 3 cytoplasmic domains, closely resembles the classic sodium pump beloved of neurophysiologists everywhere, which pumps sodium ions out of neurons and pumps potassium ions inside, producing the equally beloved membrane potential of neurons.

Look at those structures again. While there are charges on PE, PS (on the phosphate group), these molecules are far larger than the sodium or the potassium ion (easily by a factor of 10). This has long been recognized and is called the ‘giant substrate problem’.

The paper solved the structure of ATP8A2 and used molecular dynamics simulations to try to understand how it works. What they found is that transmembrane alpha helices 1, 2, 4 and 6 (out of 10) form a water filled cavity, which dissolves the negatively charged phosphate of the head group. What happens to those long hydrocarbon tails? They are left outside the helices in the lipid core of the membrane. It is the charged head groups that are dragged through by the flippase, with the tails wagging along behind them, just like little Bo Peep’s sheep.

There’s a lot more great chemistry in the paper, particularly how Isoleucine #364 directs the sequential formation and annihilation of the water filled cavities between alpha helices 1, 2, 4 and 6, and how a particular aspartic acid is phosphorylated (by ATP, explaining why the enzyme no longer works in energetically dying cells) changing conformation of all 10 transmembrane helices, so that only one half of the channel is open at a time (either to the inside or the outside).

Go read and enjoy. It’s sad that people who don’t know organic chemistry are cut off from appreciating such elegance. There is more to esthetics than esthetics.

Just when you thought you understood neurotransmission

Back in the day, the discovery of neurotransmission allowed us to think we understood how the brain worked. I remember explaining to medical students in the early 70s, that the one way flow of information from the presynaptic neuron to the post-synaptic one was just like the flow of current in a vacuum tube — yes a vacuum tube, assuming anyone reading knows what one is. Later I changed this to transistor when integrated circuits became available.

Also the Dale hypothesis as it was taught to me, was that a given neuron released the same neurotransmitter at all its endings. As it was taught back in the 60s this meant that just one transmitter was released by a given neuron.

Retrograde transmission was just a glimmer in the mind’s eye back then. We now know that the post-synaptic neuron releases compounds which affect the presynaptic neuron, the supposed controller of the postsynaptic neuron. Among them are carbon monoxide, and the endocannabinoids (e. g. what marihuana is trying to mimic).

In addition there are neurotransmitter receptors on the presynaptic neuron, which respond to what it and other neurons are releasing to control its activity. These are outside the synapse itself. These events occur more slowly than the millisecond responses in the synapse to the main excitatory neurotransmitter of the brain (glutamic acid) and the main inhibitory neurotransmitter (gamma amino butyric acid — aka GABA). Receptors on the presynaptic neuron for the transmitter it’s releasing are called autoreceptors, but the presynaptic terminal also contains receptors for other neurotransmitters.

Well at least, neurotransmitters aren’t released by the presynaptic neuron without an action potential which depolarizes the presynaptic terminal, or so we thought until [ Neuron vol. 82 pp. 63 - 70 '14 ]. The report involves a structure near and dear to the neurologist, the striatum (caudate and putamen — which is striated because the myelinated axons of the internal capsule go through its anterior end giving it a striated appearance).

It is the death of the dopamine-containing neurons in the substantia nigra which causes Parkinsonism. They project some of their axons to the striatum. The striatum gets input from elsewhere (from the cortex using glutamic acid) and from neurons intrinsic to itself (some of which use acetyl choline as their neurotransmitter — these are called cholinergic interneurons).

The paper makes the claim that the dopamine neurons projecting to the striatum also contain the inhibitory neurotransmitter GABA.

The paper also says that the cholinergic interneurons cause release of GABA by the dopamine neurons — they bind to a type of acetyl choline receptor called nicotinic (similar but not identical to the nicotinic receptors which allow our muscles to contract) in the presynaptic terminals (residing in the striatum) of the dopamine neurons of the substantia nigra. Isn’t medicine and neuroanatomy a festival of terms? It’s why you need a good memory to survive medical school.

They used optogenetics (something I don’t have time to explain — but see http://en.wikipedia.org/wiki/Optogenetics ) to selectively stimulate the 1 – 2% of striatal neurons which use acetyl choline as a neurotransmitter. What they found was that only GABA (and not dopamine) was released by the dopamine neurons in response to stimulating this small subset of neurons. Even more amazing, the GABA release occurred without an action potential depolarizing the presynaptic terminal.

This literally stands everything I thought I knew about neurotransmission on its ear. How widespread this phenomenon actually is, isn’t known at this point. Clearly, the work needs to be replicated — extreme claims require extreme evidence.

Unfortunately I’ve never provided much background on neurotransmission for the hapless chemists and medicinal chemists reading this (if there are any), but medicinal chemists must at least have a smattering of knowledge about this, since neurotransmission is involved in how large classes of CNS active drugs work — antidepressants, antipsychotics, anticonvulsants, migraine therapy. There is some background on this here — http://luysii.wordpress.com/2010/08/29/some-basic-pharmacology-for-the-college-student/
