Category Archives: Neurology & Psychiatry

20,000 NanoSensors under the Cell (apologies to Jules Verne)

Too bad Jules Verne isn’t around to read PNAS vol. 114 pp. 1789 – 1794 ’17 where 20,000 fluorescent nanoSensors were placed under a single PC12 cell. PC12 cells are derived from a pheochromocytoma, a tumor which secretes catecholamines like dopamine and norepinephrine. So they’re almost neurons, and they contain vesicles containing dopamine, just like neurons, but they don’t form synapses.

The pictures they show of the cells, sitting on a slide, put the cell bodies at about 100 microns in diameter, with multiple protrusions. So how are you going to get 20,000 sensors underneath them? Assuming the cells to be circular, that’s about 3 per square micron. A micron is 10,000 Angstroms. The authors used Single Walled Carbon NanoTubes (SWCNTs) — e.g. rolled-up graphene. They have a diameter of 5 – 20 Angstroms, so there’s plenty of room for many in a square micron.
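To check the arithmetic (mine, not the paper’s), a circle 100 microns across has an area of about 7,850 square microns, so 20,000 sensors works out to roughly 2.5 per square micron:

```python
import math

cell_diameter_um = 100.0   # approximate cell body diameter from the paper's images
n_sensors = 20_000

# area of a circular footprint, in square microns
area_um2 = math.pi * (cell_diameter_um / 2) ** 2

density = n_sensors / area_um2          # sensors per square micron
print(round(area_um2), round(density, 1))   # ~7854 um^2, ~2.5 sensors per um^2

# a micron is 10,000 Angstroms, so a ~10 Angstrom nanotube is tiny by comparison
nanotube_diameter_A = 10
print(10_000 // nanotube_diameter_A)    # 1000 nanotube diameters fit across one micron
```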

Here’s what they did. “Previously we found that the corona phase around SWCNTs can be engineered to recognize certain small analytes — a phenomenon we termed Corona Phase Molecular Recognition (CoPhMoRe) (7, 25, 26). Specifically, DNA-wrapped SWCNTs were found to increase their near InfraRed fluorescence in the presence of catecholamines. Here, we synthesized and characterized different DNA/SWCNT complexes and identified the best candidates for dopamine detection.”

What they found is less remarkable than having the guts to try something like this. They could stimulate the cells to release dopamine using potassium (maddeningly, I couldn’t find the concentration anywhere). Then, with that density of sensors, they could find out where it was released (the edges of the cell) with a time resolution of 0.1 second. It wasn’t released uniformly, but in hotspots — what you’d expect if it were being released by vesicles containing dopamine fusing with the cell membrane.

Remarkable — hard to see how they’re going to get this sort of array into a living organism, but its use in the study of brain slices can’t be far away.

Norbert Wiener

In the Cambridge Mass of the early 60’s the name Norbert Wiener was spoken in hushed tones. He was widely regarded as a genius by all the assembled genii of Cambridge; that, and the fact that he got a bachelor’s degree in math from Tufts at age 14, was all I knew about him. As a high school student I tried to read Cybernetics, a widely respected book he wrote in 1948, and found it incomprehensible.

Surprisingly, his name never came up again: not in undergraduate math courses, graduate chemistry and physics courses, or extensive reading on programming and computation (until now).

From PNAS vol. 114 pp. 1281 – 1286 ’17 –“In their seminal theoretical work, Norbert Wiener and Arturo Rosenblueth showed in 1946 that the self-sustained activity in the cardiac muscle can be associated with an excitation wave rotating around an obstacle. This mechanism has since been very successfully applied to the understanding of the generation and control of malignant electrical activity in the heart. It is also well known that self-sustained excitation waves, called spirals, can exist in homogeneous excitable media. It has been demonstrated that spirals rotating within a homogeneous medium or anchored at an obstacle are generically expected for any excitable medium.”

That sounds a lot like atrial fibrillation, a serious risk factor for strokes, and something I dealt with all the time as a neurologist. Any theoretical input about what to do for it would be welcome.

A technique has been developed to cure the arrhythmia. Here it is. “Recently, an atrial defibrillation procedure was clinically introduced that locates the spiral core region by detecting the phase-change point trajectories of the electrophysiological wave field and then, by ablating that region, restores sinus rhythm.” The technique is now widely used, and one university hospital (Ohio State) says that they are doing over 600 per year.

“This is clearly at odds with the Wiener–Rosenblueth mechanism, because a further destruction of the tissue near the spiral core should not improve the situation.” It’s worse than that, because the summary says “In the case of a functionally determined core, an ablation procedure should even further stabilize the rotating wave.”

So theory was happily (for the patients) ignored. Theorists never give up and the paper goes on to propose a mechanism explaining why the Ohio State procedure should work. Here’s what they say.

“Here, we show theoretically that fundamentally in any excitable medium a region with a propagation velocity faster than its surrounding can act as a nucleation center for reentry and can anchor an induced spiral wave. Our findings demonstrate a mechanistic underpinning for the recently developed ablation procedure.”

It certainly has the ring of post hoc ergo propter hoc about it.

The Rorschach test

Despite my spending 6 months of a 3-year neurology residency on the psychiatry service (as was typical in those days), the Rorschach test never came up. Of course, it was well known in the wider world, primarily through a joke.

For those who don’t know, the Rorschach test is a series of 10 inkblots; subjects are asked to tell the examiner what they bring to mind.  To learn more about the test see —

The joke:  The response to all 10 by one frisky subject was that they reminded him of sex. The examiner asked him why he was so obsessed with sex. The subject asked the examiner why he was showing him dirty pictures.

There is a very interesting review of a book about Dr. Rorschach in the current issue of Science (vol. 355 p.588 ’17). The reviewer is at the Department of Translational Science and Molecular Medicine, Michigan State University, Grand Rapids, MI 49503, USA. Email:

Here is the first part — unfortunately I can’t reproduce it all, as you must be a subscriber to Science —
“We’re all familiar with the inkblots that make up the Rorschach test: black and white, bilaterally symmetrical figures that hover close to familiarity. Or, at least, we think we are. In modern times, the term “Rorschach test” often serves as a metaphor for our divisiveness, as shorthand for an encoded message, or as a warning that appearances can be deceiving. But we may not know as much as we think we do about this classic psychological tool or the man behind it, argues Damion Searls in The Inkblots: Hermann Rorschach, His Iconic Test, and The Power of Seeing.

Inkblots were used in psychology to gauge a person’s imagination for nearly two decades before Rorschach developed his version. Rorschach’s contribution was born of his desire to detect the differences in perceptual processes that explained seemingly nonsensical delusions and neuroses.

In tracing the story of the inkblots, Searls sets out to restore two vital stipulations of the Rorschach test: that there are good answers and bad answers and that the test is a measure of perception, not of imagination or projection. The book addresses many questions fundamental to understanding the genesis and effectiveness of Rorschach’s eponymous test as well as the life of the man himself.

Hoping to create images that were suggestive of shape and movement, Rorschach hand-painted each of his 10 eponymous inkblots.”

It always seemed incredibly subjective to me (typical of much of psychoanalysis IMHO).

Not so.

I asked two friends long in the field, whose experience, intelligence, and hardheadedness aren’t open to question.

The psychiatrist’s response

As a psychiatrist I was never trained in the Rorschach as psychologists are, but I have generally found its results very helpful. In fact, I took one myself back in residency and had the psychologist interpret the results, which at the time left me feeling naked, i.e., all my defenses stripped away.

My office mate doesn’t favor it, largely for the reasons in the article: the lack of a scientific basis. Since he is a forensic psychiatrist, this drawback is even worse, since one might potentially have to present the results to a jury, which is almost universally likely to view it as hocus pocus even if there were more scientific basis.
There is a technology for interpreting the results, but I think an experienced clinician is also key to its results being helpful. It gives a much deeper dimension to the findings, somewhat similar to other projective tests and relative to more scientifically based tests such as the MMPI.

Interesting article; thanks for sending.

The Psychiatric Nurse’s response

I actually did use the Rorschach test when leading groups on the inpatient psychiatric unit at — a prominent Boston Hospital (1975-1980). It was always a challenge to get depressed, withdrawn, and psychotic people to express themselves. Trying to be creative and engaging, I would hold up the ink blots and get anywhere from 1- to 100-word responses … dependent upon their diagnosis! OF COURSE the bipolar manics, with pressured speech, had to be interrupted for the sake of time!

Then, the artist in me would come out. I had people make their own Rorschachs with paint. It helped engage the withdrawn members in a different format. The result was that those with paucity of speech were able to express themselves in a nonverbal way. There was always more discussion stimulated by their own creations.

Memories are made of this?

Back in the day when information was fed into computers on punch cards, the data was the holes in the paper not the paper itself. A far out (but similar) theory of how memories are stored in the brain just got a lot more support [ Neuron vol. 93 pp. 6 -8, 132 – 146 ’17 ].

The theory says that memories are stored in the proteins and sugar polymers surrounding neurons rather than the neurons themselves. These go by the name of extracellular matrix, and memories are the holes drilled in it which allow synapses to form.

Here’s some stuff I wrote about the idea when I first ran across it two years ago.


An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just how, chemically, and where memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories, seriously flawed in my opinion, from the past half century of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines, where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3, … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory. However, the dynamic RAM in the computers of the 70s used the localized buildup of charge to store bits and bytes. Since charge would leak away from where it was stored, it had to be refreshed constantly — e.g. at least 12 times a second — or it would be lost. Yet another reason data should always be frequently backed up.
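The delay-line idea is easy to caricature in code. This is a toy sketch of my own (nothing from the paper): bits survive only because they keep recirculating, just as the reverberating-circuit theory required impulses to keep ricocheting.

```python
from collections import deque

# Toy delay-line memory: bits circulate through a fixed-length loop and
# reappear at the read head once per revolution -- the 'reverberating
# circuit' idea applied to hardware rather than neurons.
class DelayLine:
    def __init__(self, length):
        self.loop = deque([0] * length)

    def tick(self):
        # one time step: the bit leaving the end re-enters the front (the refresh)
        self.loop.appendleft(self.loop.pop())

    def write(self, bits):
        for i, b in enumerate(bits):
            self.loop[i] = b

    def read(self, n):
        return list(self.loop)[:n]

dl = DelayLine(8)
dl.write([1, 0, 1, 1])
for _ in range(8):      # one full revolution of the loop
    dl.tick()
print(dl.read(4))       # the pattern survives: [1, 0, 1, 1]
```

Stop the recirculation (stop calling `tick`, or anesthetize the loop) and nothing is stored, which is exactly why the theory fails for brains.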

#2 CaMKII — more plausible. There’s lots of it in brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To first start doing so calcium levels inside the neuron must rise. The enzyme is complicated, being comprised of 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate, and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).
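A cartoon of the switch idea (my numbers, not real kinetics; the threshold of 3 phosphates for calcium independence is an assumption): a brief calcium pulse flips the enzyme ‘on’, it stays on after calcium is gone, and phosphatase activity resets it, which is exactly the objection raised above.

```python
# Toy two-state switch inspired by CaMKII autophosphorylation.
# Each of the 12 subunits can carry a phosphate; calcium is needed to add the
# first few, but past a threshold the holoenzyme phosphorylates itself.
THRESHOLD = 3   # assumed: 'a few phosphates' before calcium independence

def step(phosphates, calcium_high, phosphatase_active):
    if calcium_high or phosphates >= THRESHOLD:
        phosphates = min(12, phosphates + 1)    # kinase adds a phosphate
    if phosphatase_active:
        phosphates = max(0, phosphates - 2)     # phosphatase strips phosphates
    return phosphates

p = 0
for _ in range(4):      # a brief calcium pulse flips the switch...
    p = step(p, calcium_high=True, phosphatase_active=False)
for _ in range(20):     # ...and it stays on with calcium gone
    p = step(p, calcium_high=False, phosphatase_active=False)
print(p)                # 12 -- fully phosphorylated, 'on'
for _ in range(20):     # sustained phosphatase activity erases it
    p = step(p, calcium_high=False, phosphatase_active=True)
print(p)                # 0 -- switch reset, the 'memory' is gone
```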

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The remaining 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true; in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the membrane to the chloride ion. When chloride flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either case), and just how excitatory or inhibitory it is. The response is graded.
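The direction chloride moves is set by its equilibrium (Nernst) potential, which depends on the concentrations on the two sides of the membrane. A sketch with invented but physiologically plausible concentrations (the exact numbers are my assumption, not the paper’s):

```python
import math

# Nernst potential for chloride: E_Cl = (RT / zF) * ln([Cl-]out / [Cl-]in), z = -1.
# Whether GABA (which opens chloride channels) inhibits or excites depends on
# where E_Cl sits relative to the resting potential (roughly -65 mV).
R, T, F, z = 8.314, 310.0, 96485.0, -1   # gas constant, body temp (K), Faraday, Cl valence

def nernst_mV(cl_out_mM, cl_in_mM):
    return 1000 * (R * T) / (z * F) * math.log(cl_out_mM / cl_in_mM)

# adult neuron: low internal chloride -> E_Cl near/below rest -> chloride flows in -> inhibitory
print(round(nernst_mV(120, 10)))   # about -66 mV
# immature neuron: high internal chloride -> E_Cl above rest -> chloride flows out -> excitatory
print(round(nernst_mV(120, 30)))   # about -37 mV
```

Same transmitter, same channel; only the chloride gradient (set partly by those fixed negative ions) decides the sign of the response.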

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.


So how does the new work support this idea? It involves a structure that I’ve never talked about — the lysosome. It’s basically a bag of at least 40 digestive and synthetic enzymes inside the cell, which chops up anything brought to it (e.g. bacteria). Mutations in the enzymes cause all sorts of (fortunately rare) neurologic diseases — mucopolysaccharidoses, lipid storage diseases (Gaucher’s, Farber’s); the list goes on and on.

So I’ve always thought of the structure as a Pandora’s box best kept closed. I always thought of lysosomes as confined to the cell body, but they’re also found in dendrites according to this paper. Even more interesting, a rather unphysiologic treatment of neurons in culture (depolarization by high potassium) causes the lysosomes to migrate to the neuronal membrane and release their contents outside. One enzyme released is cathepsin B, a proteolytic enzyme which chops up TIMP1 outside the cell. So what? TIMP1 is an endogenous inhibitor of the Matrix MetalloProteinases (MMPs) which break down the extracellular matrix. So what?

Are neurons ever depolarized by natural events? Constantly: by synaptic transmission, by action potentials, and spontaneously. So here we have a way that neuronal activity can cause holes in the extracellular matrix, the holes in the punch cards if you will.

Speculation? Of course. But that’s the fun of reading this stuff. As Mark Twain said, “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Tidings of great joy

One of the hardest things I had to do as a doc was watch an infant girl waste away and die of infantile spinal muscular atrophy (Werdnig Hoffmann disease) over the course of a year. Something I never thought would happen (a useful treatment) may be at hand. The actual papers are not available yet, but two placebo-controlled trials with a significant number of patients (84 and 121) were stopped early because trial monitors (not in any way involved with the patients) found the treated groups were doing much, much better than the placebo groups. A news report of the trials is available [ Science vol. 354 pp. 1359 – 1360 ’16 (16 December) ].

The drug, a modified RNA molecule, (details not given) binds to another RNA which codes for the missing protein. In what follows a heavy dose of molecular biology will be administered to the reader. Hang in there, this is incredibly rational therapy based on serious molecular biological knowledge. Although daunting, other therapies of this sort for other neurologic diseases (Huntington’s Chorea, FrontoTemporal Dementia) are currently under study.

If you want to start at ground zero, I’ve written a series which should tell you enough to get started. Start here —
and follow the links to the next two.

Here we go if you don’t want to plow through all three

Our genes occur in pieces. Dystrophin is the protein mutated in the commonest form of muscular dystrophy. The gene for it is 2,220,233 nucleotides long, but dystrophin contains ‘only’ 3,685 amino acids, not the 740,000 or so amino acids the gene could in principle specify. What happens? The whole gene is transcribed into an RNA of this enormous length, then 78 distinct segments of RNA (called introns) are removed by a gigantic multimegadalton machine called the spliceosome, and the 79 segments actually coding for amino acids (these are the exons) are linked together and the RNA sent on its way.
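The numbers are worth checking (my arithmetic; the toy pre-mRNA below is invented, with lowercase marking introns):

```python
import re

# Coding capacity of the raw dystrophin gene vs. the actual protein
gene_length_nt = 2_220_233
protein_length_aa = 3_685
print(gene_length_nt // 3)                                     # ~740,000 possible codons
print(round(3 * protein_length_aa / gene_length_nt * 100, 1))  # ~0.5% of the gene codes for protein

# What the spliceosome does, in cartoon form: drop the (lowercase) introns,
# join the (uppercase) exons
pre_mRNA = "ATGGCCgtaagtttttagGAAATTgtaagccccagTGGTAA"
mature = "".join(re.findall(r"[A-Z]+", pre_mRNA))
print(mature)   # ATGGCCGAAATTTGGTAA
```

So less than 1% of the transcribed RNA survives splicing, and 78 separate excisions all have to be exactly right.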

All this was unknown in the 70s and early 80s when I was running a muscular dystrophy clinic and taking care of these kids. Looking back, it’s miraculous that more of us don’t have muscular dystrophy; there is so much that can go wrong with a gene this size, let alone transcribing and correctly splicing it to produce a functional protein.

One final complication — alternate splicing. The spliceosome removes introns and splices the exons together. But sometimes exons are skipped or one of several exons is used at a particular point in a protein. So one gene can make more than one protein. The record holder is something called the Dscam gene in the fruitfly which can make over 38,000 different proteins by alternate splicing.
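The Dscam count comes from straight combinatorics: four clusters of mutually exclusive exons, one pick from each. The cluster sizes below are the commonly quoted ones; treat them as my addition rather than something from this post’s sources.

```python
# Dscam alternate splicing: four variable exon clusters, one choice from each;
# the familiar isoform count is just the product of the cluster sizes.
cluster_choices = {"exon4": 12, "exon6": 48, "exon9": 33, "exon17": 2}

isoforms = 1
for n in cluster_choices.values():
    isoforms *= n
print(isoforms)   # 38016 -- 'over 38,000' distinct proteins from one gene
```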

There is nothing worse than watching an infant waste away and die. That’s what Werdnig Hoffmann disease is like, and I saw one or two cases during my years at the clinic. It is also called infantile spinal muscular atrophy. We all have two genes for the same crucial protein (called, unimaginatively, SMN). Kids who have the disease have mutations in one of the two genes (called SMN1). Why isn’t the other gene protective? It codes for the same sequence of amino acids (but using different synonymous codons). What goes wrong?

[ Proc. Natl. Acad. Sci. vol. 97 pp. 9618 – 9623 ’00 ] Why is SMN2 (the centromeric copy, i.e. the copy closest to the middle of the chromosome, which is normal in most patients) not protective? It has a single translationally silent nucleotide difference from SMN1 in exon 7 (i.e. the difference doesn’t change the amino acid coded for). This disrupts an exonic splicing enhancer and causes exon 7 skipping, leading to abundant production of a shorter isoform (SMN2delta7). Thus even though both genes code for the same protein, only SMN1 actually makes the full protein.
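The point that two different RNAs can encode an identical protein is easy to demonstrate. The codons below are illustrative (glutamate, not the actual SMN exon 7 position):

```python
# A 'translationally silent' difference: two codons, one amino acid.
# Codons here are illustrative, not the real SMN sequence.
codon_table = {"GAA": "Glu", "GAG": "Glu"}   # synonymous codons for glutamate

smn1_like, smn2_like = "GAA", "GAG"
print(codon_table[smn1_like], codon_table[smn2_like])   # Glu Glu -- same protein
print(smn1_like == smn2_like)                           # False -- different RNA
# The spliceosome reads the RNA itself, not the protein, so a silent change
# can still wreck a splicing enhancer and cause exon skipping.
```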

More background. The molecular machine which removes the introns is called the spliceosome. It’s huge, containing 5 RNAs (called small nuclear RNAs, aka snRNAs), along with 50 or so proteins, with, again, a total molecular mass of around 2,500,000 Daltons (2.5 megaDaltons). Think about it, chemists. Design 50 proteins and 5 RNAs, probably 200,000+ atoms in all, so they all come together forming a machine to operate on other monster molecules — such as the mRNA for dystrophin alluded to earlier. Hard for me to believe this arose by chance, but current opinion has it that way.

Splicing out introns is a tricky process which is still being worked on. Mistakes are easy to make, and different tissues will splice the same pre-mRNA in different ways. All this happens in the nucleus before the mRNA is shipped outside where the ribosome can get at it.

The papers [ Science vol. 345 pp. 624 – 625, 688 – 693 ’14 ] describe a small molecule which acts on the spliceosome to increase the inclusion of SMN2 exon 7. It does appear to work in patient cells and mouse models of the disease, even reversing weakness.

I was extremely skeptical when I read the papers two years ago. Why? Because just about every protein we make is spliced (except histones), and any molecule altering the splicing machinery seems almost certain to produce effects on many genes, not just SMN2. If it really works, these guys should get a Nobel.

Well, I shouldn’t have been so skeptical. I can’t say much more about the chemistry of the drug (nusinersen) until the papers come out.

Fortunately, the couple (a cop and a nurse) took the 25% risk of another child with the same thing and produced a healthy infant a few years later.

Will flickering light treat Alzheimer’s disease ?

Big pharma has spent zillions trying to rid the brain of senile plaques, to no avail. A recent paper shows that light flickering at 40 cycles/second (40 Hertz) can do it — this is not a misprint [ Nature vol. 540 pp. 207 – 208, 230 – 235 ’16 ]. As most know the main component of the senile plaque of Alzheimer’s disease is a fragment (called the aBeta peptide) of the amyloid precursor protein (APP).

The most interesting part of the paper showed that just an hour or so of light flickering at 40 Hertz temporarily reduced the amount of Abeta peptide in visual cortex of aged mice. Nothing invasive about that.
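The stimulus itself is trivially simple, which is part of the appeal. A sketch of the timing (the 50% duty cycle is my assumption; the paper may specify otherwise):

```python
# Timing of a 40 Hz flicker stimulus, as used (in spirit) in the paper.
freq_hz = 40
period_ms = 1000 / freq_hz
on_ms = off_ms = period_ms / 2     # assumed 50% duty cycle
print(period_ms, on_ms)            # 25.0 ms period, 12.5 ms light-on

cycles_per_hour = freq_hz * 3600
print(cycles_per_hour)             # 144000 on/off cycles in an hour-long exposure
```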

Should we try this in people? How harmful could it be? Unfortunately the visual cortex is relatively unaffected in Alzheimer’s disease — the disease starts deep inside the head in the medial temporal lobe, particularly the hippocampus — the link shows just how deep it is -

You might be able to do this through the squamous portion of the temporal bone which is just in front of and above the ear. It’s very thin, and ultrasound probes placed here can ‘see’ blood flowing in arteries in this region. Another way to do it might be a light source placed in the mouth.

The technical aspects of the paper are fascinating and will be described later.

First, what could go wrong?

The work shows that the flickering light activates the scavenger cells of the brain (microglia), which then eat the extracellular plaques. However, that may not be a good thing, as activated microglia could attack normal cells. In particular they are important in the remodeling of the dendritic tree (notably dendritic spines) that occurs during experience and learning.

Second, why wouldn’t it work? So much has been spent on trying to remove Abeta that serious doubt exists as to whether excessive extracellular Abeta causes Alzheimer’s, and, even if it does, whether removing it would be helpful.

Now for some fascinating detail on the paper (for the cognoscenti)

They used a mouse model of Alzheimer’s disease (the 5XFAD mouse). This poor creature has 3 different mutations associated with Alzheimer’s disease in the amyloid precursor protein (APP) — the Swedish (K670N/M671L), Florida (I716V) and London (V717I) mutations. If that weren’t enough, there are two Alzheimer-associated mutations in one of the enzymes that processes APP into Abeta (M146L, L286V) — all given in the single-letter amino acid code. Then the whole mess is put under the control of a promoter particularly active in neurons (the Thy1 promoter). This results in high expression of the two mutant proteins.

So the poor mice get lots of senile plaques (particularly in the hippocampus) at an early age.

The first experiment was even more complicated, as a way was found to put channelrhodopsin into a set of hippocampal interneurons (this is optogenetics, and hardly simple). Exposing the channel to light causes it to open, the membrane to depolarize, and the neuron to fire. Then fiberoptics were used to stimulate these neurons at 40 Hertz, and the effects on the plaques were noted. Clearly a lot of work, and the authors (and grad students) deserve our thanks.

Light at 8 Hertz did nothing to the plaques. I couldn’t find what other stimulation frequencies were used (assuming they were tried).

It would be wonderful if something so simple could help these people.

For other ideas about Alzheimer’s using physics rather than chemistry please see —

Very sad

The failure of Lilly’s antibody against the aBeta protein is very sad on several levels. My year started out going to a memorial service for a college classmate, fellow doc and friend who died of Alzheimer’s disease. He had some 50 papers to his credit mostly involving clinical evaluation of drugs such as captopril. Even so it was an uplifting experience — here’s a link –

There is a large body of theory that says it should have worked. Derek Lowe’s blog “In the Pipeline” has much more — and the 80 or so comments on his post will expose you to many different points of view on Abeta — here’s the link.

It’s time to ‘let 100 flowers bloom’ in Alzheimer’s research, i.e. time to look at some far-out possibilities. We know that most will be wrong and that they will be crushed, as Mao crushed all the flowers. Even so, it’s worth doing.

So to buck up your spirits, here’s an old post (not a link) raising the possibility that Alzheimer’s might be a problem in physics rather than chemistry. If that isn’t enough another post follows that one on Lopid (Gemfibrozil).

Could Alzheimer’s disease be a problem in physics rather than chemistry?

Two seemingly unrelated recent papers could turn our attention away from chemistry and toward physics as the basic problem in Alzheimer’s disease. God knows we could use better therapy for Alzheimer’s disease than we have now. Any new way of looking at Alzheimer’s, no matter how bizarre, should be welcome. The approaches via the aBeta peptide and the enzymes producing it just haven’t worked, and they’ve really been tried — hard.

The first paper [ Proc. Natl. Acad. Sci. vol. 111 pp. 16124 – 16129 ’14 ] made surfaces with arbitrary degrees of roughness, using the microfabrication technology for making computer chips. We’re talking roughness that’s almost smooth — bumps ranging from 320 to 800 Angstroms. Surfaces could be made quite regular (as in a diffraction grating) or irregular. Scanning electron microscope pictures were given of the various degrees of roughness.

Then they plated cultured primitive neuronal cells (PC12 cells) on surfaces of varying degrees of roughness. The optimal roughness for PC12 cells to act more like neurons was an Rq of 320 Angstroms. Interestingly, this degree of roughness is identical to that found on healthy astrocytes (assuming that culturing them or getting them out of the brain doesn’t radically change them). Hippocampal neurons in contact with astrocytes of this degree of roughness also began extending neurites. It’s important to note that the roughness was made with something neurons and astrocytes never see — silica colloids of varying sizes and shapes.
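Rq here is root-mean-square roughness: the RMS deviation of surface height from its mean. A sketch with invented height profiles (in Angstroms; the numbers are mine, chosen only to land near the paper’s 800 Angstrom figure):

```python
import math

# Rq (RMS roughness): root-mean-square deviation of surface height from its mean.
def rq(heights):
    mean = sum(heights) / len(heights)
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))

smooth_ish = [300, 340, 320, 300, 340, 320]   # gentle bumps, invented
rough      = [0, 1600, 0, 1600, 0, 1600]      # exaggerated square-wave bumps, invented
print(round(rq(smooth_ish)))   # 16 -- small Rq
print(round(rq(rough)))        # 800 -- the scale the paper found toxic to neurons
```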

Now is when it gets interesting. The plaques of Alzheimer’s disease have surface roughness of around 800 Angstroms. Roughness of the artificial surface of this degree was toxic to hippocampal neurons (lower degrees of roughness were not). Normal brain has a roughness with a median at 340 Angstroms.

So in some way neurons and astrocytes can sense the amount of roughness in the surfaces they are in contact with. How do they do this? Chemically it comes down to Piezo1 ion channels, a story in themselves [ Science vol. 330 pp. 55 – 60 ’10 ]. These are membrane proteins with between 24 and 36 transmembrane segments. They form tetramers with a huge molecular mass (1.2 megaDaltons) and 120 or more transmembrane segments; they are huge proteins (2,100 – 4,700 amino acids). They can sense mechanical stress, and are used by endothelial cells to sense how fast blood is flowing (or not flowing) past them. Expression of these genes in mechanically insensitive cells makes them sensitive to mechanical stimuli.

The paper is somewhat ambiguous on whether expressing piezo1 is a function of neuronal health or sickness. The last paragraph appears to have it both ways.

So as we leave paper #1, we note that neurons can sense the physical characteristics of their environment, even when it’s something as unnatural as a silica colloid. Inhibiting Piezo1 activity with a spider venom toxin (GsMTx4) destroys this ability. The right degree of roughness is healthy for neurons; the wrong degree kills them. Clearly the work should be repeated with other colloids of a different chemical composition.

The next paper [ Science vol. 342 pp. 301, 316 – 317, 373 – 377 ’13 ] talks about the plumbing system of the brain, which is far more active than I’d ever imagined. The glymphatic system is a network of microscopic fluid-filled channels. Cerebrospinal fluid (CSF) bathes the brain. It flows into the substance of the brain (the parenchyma) along arteries, and the interstitial fluid between the cellular elements it exchanges with flows out of the brain along the draining veins.

This work was able to measure the amount of flow through these channels by injecting tracer into the CSF and/or the brain parenchyma. The important point is that during sleep these channels expand by 60%, and beta amyloid is cleared twice as quickly. Arousal of a sleeping mouse decreases the influx of tracer by 95%. So this amazing paper finally comes up with an explanation of why we spend 1/3 of our lives asleep — to flush toxins from the brain.
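If clearance is roughly first-order, ‘twice as quickly’ has a concrete meaning: the rate constant doubles, so the half-life halves. A sketch with an invented awake clearance rate (the 0.1/hour figure is my assumption, purely illustrative):

```python
import math

# First-order clearance: fraction remaining = exp(-k * t).
# Doubling k (sleep) clears tracer much faster over the same interval.
def remaining(fraction_start, k_per_hr, hours):
    return fraction_start * math.exp(-k_per_hr * hours)

k_awake = 0.1   # assumed clearance rate while awake, per hour
print(round(remaining(1.0, k_awake, 8), 2))        # 0.45 left after 8 h awake
print(round(remaining(1.0, 2 * k_awake, 8), 2))    # 0.2 left after 8 h asleep
```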

If you wish to read (a lot) more about this system — see an older post from when this paper first came out —

So what is the implication of these two papers for Alzheimer’s disease?

The surface roughness of the plaques (800 Angstroms) may physically hurt neurons. The plaques themselves are much larger than that, or Alzheimer would never have seen them with the light microscope at his disposal.

The size of the plaques themselves may gum up the brain’s plumbing system.

The tracer work should certainly be repeated with mouse models of Alzheimer’s, far removed from human pathology though they may be.

I find this extremely appealing because it gives us a new way of thinking about this terrible disorder. In addition it might explain why cognitive decline almost invariably accompanies aging, and why Alzheimer’s disease is a disorder of the elderly.

Next, assume this is true. What would be the therapy? Getting rid of the senile plaques in and of itself might be therapeutic, but it is nearly impossible for me to imagine a way this could be done without harming the surrounding brain.

Before we all get too excited it should be noted that the correlation between senile plaque burden and cognitive function is far from perfect. Some people have a lot of plaque (there are ways to detect them antemortem) and normal cognitive function. The work also leaves out the second pathologic change seen in Alzheimer’s disease, the neurofibrillary tangle which is intracellular, not extracellular. I suppose if it caused the parts of the cell containing them to swell, it too could gum up the plumbing.

As far as I can tell, putting the two papers together conceptually might even be original. Prasad Shastri, the author of the first paper, was very helpful discussing some points about his paper by Email, but had not heard of the second and is looking at it this weekend.

Also a trial of Lopid (Gemfibrozil) as something which might be beneficial is in progress — there is some interesting theory behind this. The trial has about another year to go. Here’s that post and happy hunting

Takes me right back to grad school

How many times in grad school did you or your friends come up with a good idea, only to see it appear in the literature a few months later by someone who’d been working on it for much longer. We’d console ourselves with the knowledge that at least we were thinking well and move on.

Exactly that happened to what I thought was an original idea in my last post — namely, that Gemfibrozil (Lopid) might slow down (or even treat) Alzheimer’s disease. I considered the post the most significant one I’d ever written, and didn’t post anything else for a week or two, so anyone coming to the blog for any reason would see it first.

A commenter on the first post gave me a name to contact to try out the idea, but I’ve been unable to reach her. Derek Lowe was quite helpful in letting me link to the post, so presently the post has had over 200 hits. Today I wrote an Alzheimer’s researcher at Yale about it. He responded nearly immediately with a link to an ongoing clinical study in progress in Kentucky

On Aug 3, 2015, at 3:04 PM, Christopher van Dyck wrote:

Dear Dr. xxxxx

Thanks for your email. I agree that this is a promising mechanism.
My colleague Greg Jicha at U.Kentucky is already working on this:

Our current efforts at Yale are on other mechanisms:

We can’t all test every mechanism, but hopefully we can collectively test the important ones.

-best regards,
Christopher H. van Dyck, MD
Professor of Psychiatry, Neurology, and Neurobiology
Director, Alzheimers Disease Research Unit

Am I unhappy about losing fame and glory being the first to think of it? Not in the slightest. Alzheimer’s is a terrible disease and it’s great to see the idea being tested.

Even more interestingly, a look at the website for the study shows that somehow they got to Gemfibrozil by a different mechanism — microRNAs rather than PPARalpha.

I plan to get in touch with Dr. Jicha to see how he found his way to Gemfibrozil. The study is only 1 year in duration, and hopefully is well enough powered to find an effect. These studies are incredibly expensive (and an excellent use of my taxes). I’ve never been involved in anything like this, but data mining existing HMO data simply has to be cheaper. How much cheaper I don’t know.

Here’s the previous post —

Could Gemfibrozil (Lopid) be used to slow down (or even treat) Alzheimer’s disease?

Is a treatment of Alzheimer’s disease at hand with a drug in clinical use for nearly 40 years? A paper in this week’s PNAS implies that it might (vol. 112 pp. 8445 – 8450 ’15 7 July ’15). First a lot more background than I usually provide, because some family members of the afflicted read everything they can get their hands on, and few of them have medical or biochemical training. The cognoscenti can skip past this to the text marked ***

One of the two pathologic hallmarks of Alzheimer’s disease is the senile plaque (the other is the neurofibrillary tangle). The major component of the plaque is a fragment of a protein called APP (Amyloid Precursor Protein). Normally it sits in the cellular membrane of nerve cells (neurons) with part sticking outside the cell and another part sticking inside. The protein as made by the cell contains anywhere from 563 to 770 amino acids linked together in a long chain. The fragment destined to make up the senile plaque (called the Abeta peptide) is much smaller (39 to 42 amino acids) and is found in the parts of APP embedded in the membrane and sticking outside the cell.

No protein lives forever in the cell, and APP is no exception. There are a variety of ways to chop it up, so its amino acids can be used for other things. One such chopper is called ADAM10 (aka Kuzbanian). ADAM10 breaks down APP in such a way that Abeta isn’t formed. The paper essentially found that Gemfibrozil (commercial name Lopid) increases the amount of ADAM10 around. If you take a mouse genetically modified so that it will get senile plaques and decrease ADAM10, you get a lot more plaques.

The authors didn’t artificially increase the amount of ADAM10 to see if the animals got fewer plaques (that’s probably their next paper).

So there you have it. Should your loved one get Gemfibrozil? It’s a very long shot and the drug has significant side effects. For just how long a shot and the chain of inferences why this is so look at the text marked @@@@


How does Gemfibrozil increase the amount of ADAM10 around? It binds to a protein called PPARalpha, which is a type of nuclear hormone receptor. PPARalpha binds to another protein called RXR, and together they turn on the transcription of a variety of genes, most of which are related to lipid metabolism. One of the genes turned on is ADAM10, which has really never been mentioned in the context of lipid metabolism. In any event Gemfibrozil binds to PPARalpha, which binds more effectively to RXR, which binds more effectively to the promoter of the ADAM10 gene, which makes more ADAM10, which chops up APP in such fashion that Abeta isn’t made.

How in the world the authors got to PPARalpha from ADAM10 is unknown — but I’ve written the following to the lead author just before writing this post.

Dr. Pahan;

Great paper. People have been focused on ADAM10 for years. It isn’t clear to me how you were led to PPARgamma from reading your paper. I’m not sure how many people are still on Gemfibrozil. Probably most of them have some form of vascular disease, which increases the risk of dementia of all sorts (including Alzheimer’s). Nonetheless large HMOs have prescription data which can be mined to see if the incidence of Alzheimer’s is less on Gemfibrozil than in those taking other lipid lowering agents, or the population at large. One such example (involving another class of drugs) is JAMA Intern Med. 2015;175(3):401-407, where the prescriptions of 3,434 individuals 65 years or older in Group Health, an integrated health care delivery system in Seattle, Washington, were examined. I thought the conclusions were totally unwarranted, but it shows what can be done with data already out there. Did you look at other fibrates (such as Atromid)?

Update: 22 July ’15

I received the following back from the author

Dear Dr.

Wonderful suggestion. However, here, we have focused on the basic science part because the NIH supports basic science discovery. It is very difficult to compete for NIH R01 grants using data mining approach.

It is PPARα, but not PPARγ, that is involved in the regulation of ADAM10. We searched ADAM10 gene promoter and found a site where PPAR can bind. Then using knockout cells and ChIP assay, we confirmed the participation of PPARα, the protein that controls fatty acid metabolism in the liver, suggesting that plaque formation is controlled by a lipid-lowering protein. Therefore, many colleagues are sending kudos for this publication.

Thank you.

Kalipada Pahan, Ph.D.

The Floyd A. Davis, M.D., Endowed Chair of Neurology


Departments of Neurological Sciences, Biochemistry and Pharmacology

So there you have it. An idea worth pursuing according to Dr. Pahan, but one which he can’t (or won’t). So, dear reader, take it upon yourself (if you can) to mine the data on people given Gemfibrozil to see if their risk of Alzheimer’s is lower. I won’t stand in your way or compete with you as I’m a retired clinical neurologist with no academic affiliation. The data is certainly out there, just as it was for the JAMA Intern Med. 2015;175(3):401-407 study. Bon voyage.
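For anyone tempted to take up that data mining, the crudest first pass is just an incidence comparison across exposure groups. Here is a minimal sketch in Python; every record, drug name, and count below is hypothetical, and a real analysis would need age matching, confounder adjustment, and survival methods rather than raw proportions:

```python
from collections import defaultdict

# Hypothetical records: (drug exposure, whether an Alzheimer's diagnosis
# was ever coded).  These rows are invented purely for illustration.
records = [
    ("gemfibrozil", False), ("gemfibrozil", True), ("gemfibrozil", False),
    ("other_fibrate", True), ("statin", True), ("statin", False),
    ("none", True), ("none", True), ("none", False),
]

counts = defaultdict(lambda: [0, 0])      # drug -> [cases, total]
for drug, alz in records:
    counts[drug][0] += alz                # True counts as 1
    counts[drug][1] += 1

# Crude incidence of an Alzheimer's code per exposure group -- the first
# pass one would take before any matching or adjustment.
incidence = {drug: cases / total for drug, (cases, total) in counts.items()}
print(incidence)
```

With real claims data the interesting comparison would be gemfibrozil users against users of other lipid-lowering agents, since both groups carry similar vascular risk.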


There are side effects, one of which is a severe muscle disease, and as a neurologist I saw someone so severely weakened by drugs of this class that they were on a respirator being too weak to breathe (they recovered). The use of Gemfibrozil rests on the assumption that the senile plaque and Abeta peptide are causative of Alzheimer’s. A huge amount of money has been spent and lost on drugs (antibodies mostly) trying to get rid of the plaques. None have helped clinically. It is possible that the plaque is the last gasp of a neuron dying of something else (e.g. a tombstone rather than a smoking gun). It is also possible that the plaque is actually a way the neuron was defending itself against what was trying to kill it (e.g. the plaque as a pile of spent bullets).

One good thing about Trump’s election (maybe two)

Two comments on the election then back to some neuropharmacology and neuropsychiatry which will likely affect many of you (because of some state ballot initiatives).

First: Over the years I’ve thought the mainstream press has become increasingly biased toward the left (not on the editorial page which is fine) but in supposedly objective reporting. Here are just two post election examples

#1 Front page of the New York Times 9 Nov — the first sentence from something they characterize as ‘News Analysis’

“Donald John Trump was elected the 45th president of the United States on Tuesday in a stunning culmination of an explosive, populist and polarizing campaign that took relentless aim at the institutions and long-held ideals of American democracy.”

#2 Front page of the New York Times 10 Nov — more ‘News Analysis’ — Here’s the lead “Populist Fury may Backfire”. Don’t they wish.

I’ll never complain about this sort of thing again (well at least not for four years). Why? Because I’ve been reading the Wall Street Journal, The New York Times, The Nation and the National Review for probably 50 years, and Trump as the antiChrist is the first thing I’ve ever seen all four agree on. This biased coverage simply no longer matters. If it did, Trump would have lost and lost big. This just confirms the marked loss of credibility that the mainstream media has suffered.  People aren’t as dumb as the elites think they are.

Second: Political correctness and attempts to control speech so as not to offend lost big. That’s very good for us all, right and left (although the impetus for speech control has switched to the left from the right over the past 56 years) — see

What do the state ballot initiatives have to do with neuropharmacology? Just this. Voters in California, Massachusetts and Nevada approved recreational marijuana initiatives Tuesday night, and several other states passed medical marijuana provisions.

I don’t think this is good. One of the arguments in its favor is that marihuana isn’t as bad as alcohol, which may be true, but if marihuana isn’t all good why add it to the mix? We don’t have a good handle on marihuana use, but it is likely to increase if it’s legal.

Why do I think this is bad (particularly for adolescents)? It is likely that inhibiting synaptic feedback isn’t a good thing for a brain which is pruning a lot of synapses (which happens in normal adolescence as the thickness of the cerebral cortex shrinks).

There have been many explanations for the decline in College Board Scores over the years. This has led to their normalization (so all our children are above average). If you’re a correlation equals causation fan, plot the decline vs. time of atmospheric lead. It is similar to the board scores decline. Or you can plot 1/adolescent marihuana use vs. time and get a similar curve. The problem, of course, is that we have no accurate figures for use.

Here’s the science — it’s an old post, but little has happened since it was written to change the science behind it

Why marihuana scares me

There’s an editorial in the current Science concerning how very little we know about the effects of marihuana on the developing adolescent brain [ Science vol. 344 p. 557 ’14 ]. We know all sorts of wonderful neuropharmacology and neurophysiology about delta-9 tetrahydrocannabinol (d9-THC). The point of the authors (the current head of the American Psychiatric Association, and the first director of the National (US) Institute of Drug Abuse) is that there are no significant studies of what happens to adolescent humans (as opposed to rodents) taking the stuff.

Marihuana would be the first mind-altering substance NOT to have serious side effects in a subpopulation of people using it — something true of just about any drug in medical use, for that matter.

Any organic chemist looking at the structure of d9-THC (see the link) has to be impressed with what a lipid it is — 21 carbons, only 1 hydroxyl group, and an ether moiety. Everything else is hydrogen. Like most neuroactive drugs produced by plants, it is quite potent. A joint has only 9 milliGrams, and smoking undoubtedly destroys some of it. Consider alcohol, another lipid soluble drug. A 12 ounce beer with 3.2% alcohol content has 12 * 28.3 * .032 = 10.9 grams of alcohol — molecular mass 46 grams — so the dose is 10.9/46, or about 0.24 moles. To get drunk you need more than one beer. Compare that to a dose of .009/314 (about 3 × 10^-5) moles of d9-THC.
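The dose arithmetic above is easy to check in a few lines of Python (taking ethanol’s molecular mass as about 46 g/mol and d9-THC’s as about 314 g/mol):

```python
# Rough molar dose comparison: one beer's ethanol vs. one joint's d9-THC.
OZ_TO_G = 28.3                           # grams of fluid per ounce (approximate)

beer_ethanol_g = 12 * OZ_TO_G * 0.032    # 12 oz beer at 3.2% alcohol
ethanol_mol = beer_ethanol_g / 46.07     # ethanol molecular mass ~46 g/mol

thc_g = 0.009                            # ~9 mg of d9-THC per joint
thc_mol = thc_g / 314.5                  # d9-THC molecular mass ~314 g/mol

print(f"ethanol: {ethanol_mol:.3f} mol, THC: {thc_mol:.2e} mol")
print(f"molar ratio roughly {ethanol_mol / thc_mol:,.0f} : 1")
```

On a molar basis a single beer delivers several thousand times as many molecules of its active drug as a joint does, which is the point about d9-THC’s potency.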

As we’ve found out — d9-THC is so potent because it binds to receptors for it. Unlike ethanol which can be a product of intermediary metabolism, there aren’t enzymes specifically devoted to breaking down d9-THC. In contrast, fatty acid amide hydrolase (FAAH) is devoted to breaking down anandamide, one of the endogenous compounds d9-THC is mimicking.

What really concerns me about this class of drugs is how long they must hang around. Teaching neuropharmacology in the 70s and 80s was great fun. Every year a new receptor for neurotransmitters seemed to be found. In some cases mind benders bound to them (e.g. LSD and a serotonin receptor). In other cases the endogenous transmitters being mimicked by a plant substance were found (the endogenous opiates and their receptors). Years passed, but the receptor for d9-THC wasn’t found. The reason it wasn’t is exactly why I’m scared of the drug.

How were the various receptors for mind benders found? You throw a radioactively labelled drug (say morphine) at a brain homogenate, and purify what it is binding to. That’s how the opiate receptors etc. etc. were found. Why did it take so long to find the cannabinoid receptors? Because they bind strongly to all the fats in the brain being so incredibly lipid soluble. So the vast majority of stuff bound wasn’t protein at all, but fat. The brain has the highest percentage of fat of any organ in the body — 60%, unless you considered dispersed fatty tissue an organ (which it actually is from an endocrine point of view).

This has to mean that the stuff hangs around for a long time, without any specific enzymes to clear it.

It’s obvious to all that cognitive capacity changes from childhood to adult life. All sorts of studies with large numbers of people have done serial MRIs on children and adolescents as they develop and age. Here are a few references to get you started [ Neuron vol. 72 pp. 873 – 884 ’11, Proc. Natl. Acad. Sci. vol. 107 pp. 16988 – 16993 ’10, vol. 111 pp. 6774 – 6779 ’14 ]. If you don’t know the answer, think about the change in thickness of the cerebral cortex from age 9 to 20. Surprisingly, it gets thinner, not thicker. The effect happens later in the association areas thought to be important in higher cognitive function than in the primary motor or sensory areas. Paradoxical, isn’t it? Based on animal work this is thought to be due to pruning of synapses.

So throw a long-lasting retrograde neurotransmitter mimic like d9-THC at the dynamically changing adolescent brain and hope for the best. That’s what the cited editorialists are concerned about. We simply don’t know and we should.

Addendum 11 Nov ’16:  From an emerita nonscientific professor friend of my wife. “Much of the chemistry/pharmacology etc. is way beyond me, but I did get the drift of the conversation about marihuana and am glad to now have even a simplified concept of what it does to the brain. Having spent the last 20 years working with undergraduate and graduate students, I’ve seen first hand the decline in cognitive ability.” 

Scary ! ! ! !

Having been in Cambridge when Leary was just getting started in the early 60’s, I must say that the idea of tune in turn on and drop out never appealed to me. Most of the heavy marihuana users I’ve known (and treated for other things) were happy, but rather vague and frankly rather dull.

Unfortunately as a neurologist, I had to evaluate physician colleagues who got in trouble with drugs (mostly with alcohol). One very intelligent polydrug user MD, put it to me this way — “The problem is that you like reality, and I don’t”.

Two disconcerting papers

We all know that mutations cause cancer and that MRI lesions cause disability in multiple sclerosis. We do, don’t we? Maybe we don’t, say two papers out this October.

First: cancer. The number of mutations in stem cells from 3 organs (liver, colon, small intestine) was determined in biopsy samples from 19 people ranging in age from 3 to 87 [ Nature vol. 538 pp. 260 – 264 ’16 ]. How did they get stem cells? An in vitro system was used to expand single stem cells into epithelial organoids, and then the whole genome of each was sequenced. Some 45 organoids were used, and some 79,790 heterozygous clonal mutations were found. They then plotted the number of mutations vs. the age of the patient. When you have a spread in patient ages (which they did) you can calculate a tissue mutation rate for its stem cells. What is remarkable is that all 3 tissues had the same mutation rate — about 40 mutations per year. Not bad. That’s only 4,000 if you live to 100, in your 3.2 BILLION nucleotide genome.
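The rate comes from regressing mutation count on donor age: the slope of the fitted line is the mutations-per-year figure. A minimal sketch of that arithmetic (the ages and counts below are invented for illustration, not the paper’s data):

```python
import numpy as np

# Hypothetical (donor age, mutation count) pairs standing in for the
# organoid data.  The paper's point is that the slope -- mutations per
# year -- comes out about the same for liver, colon and small intestine.
age = np.array([3, 10, 25, 40, 55, 70, 87], dtype=float)
mutations = np.array([150, 420, 1020, 1610, 2180, 2820, 3500], dtype=float)

# Degree-1 least-squares fit; polyfit returns (slope, intercept).
slope, intercept = np.polyfit(age, mutations, 1)
print(f"estimated mutation rate: {slope:.1f} mutations/year")
```

A wide age range matters here: with only 9 liver organoids spanning 25 years, the fitted slope carries a much larger uncertainty than the colon and small intestine estimates.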

This is so remarkable because the incidence of cancer is wildly different in the 3 tissues; if mutations occurring randomly caused cancer, all 3 tissues should have the same cancer incidence (and there is much less liver cancer than gut cancer).

Of course there’s a hooker. The numbers are quite small: only 9 organoids from liver, with a relatively small age range spanning only 25 years. There were more organoids from colon and small intestine, and the age ranges were wider, but clearly the work needs to be replicated with a lot more samples. However, a look at figure 1 shows that the slope of the plot of mutation number vs. age is quite similar.

Second: Multiple sclerosis. First, some ancient history. I started in neurology before there were CAT scans and MRIs. All we had to evaluate the MS patient was the neurologic exam. So we’d see if new neurologic signs had developed, or the old ones worsened. There were all sorts of clinical staging scores and indices. Not terribly objective, but at least they measured function which is what physician and patient cared about the most.

The MRI revolutionized both diagnosis and our understanding of MS. We quickly found that even when the exam remained constant, that new lesions appeared and disappeared on the MRI totally silent to both patient and physician. I used to say that prior to MRI neurologists managed patients the way a hematologist would manage leukemics without blood counts, by looking at them to see how pale they were.

In general the more lesions that remained fixed, the worse shape the patient was in. So new drugs against MS could easily be evaluated without waiting years for the clinical exam to change, if a given drug just stopped lesions from appearing — stability was assumed to ensue (or at least it was when I retired almost exactly 4 presidential elections ago).

Enter Laquinimod [ Proc. Natl. Acad. Sci. vol. 113 pp. E6145 – E6152 ’16 ] which has a much greater beneficial effect on disability progression (e.g. less) than it does on clinical relapse rate (also less) and lesion appearance rate on MRI (also less). So again there is a dissociation between the MRI findings and the patient’s clinical status. Here are references to relevant papers — which I’ve not read —
Comi G, et al.; ALLEGRO Study Group (2012) Placebo-controlled trial of oral laquinimod for multiple sclerosis. N Engl J Med 366(11):1000–1009.

Filippi M, et al.; ALLEGRO Study Group (2014) Placebo-controlled trial of oral laquinimod in multiple sclerosis: MRI evidence of an effect on brain tissue damage. J Neurol Neurosurg Psychiatry 85(8):851–858.

Vollmer TL, et al.; BRAVO Study Group (2014) A randomized placebo-controlled phase III trial of oral laquinimod for multiple sclerosis. J Neurol 261(4):773–783.

It is well known that there are different kinds of lesions in MS (some destroying axons, others just stripping off their myelin). Since I’ve left the field, I don’t know if MRI can distinguish the two types, and whether this was looked at.

The disconcerting thing about this paper is that we may have given up on drugs which would have clinically helped patients because they didn’t help the MRI (a biological marker) ! ! !

Book Review — The Kingdom of Speech — Part III

The last half of Wolfe’s book is concerned with Chomsky and Linguistics. Neurologists still think they have something to say about how the brain produces language, something roundly ignored by the professional linguistics field. Almost at the beginning of the specialty, various types of loss of speech (aphasias) were catalogued and correlated with where in the brain the problem was. Some people could understand but not speak (motor aphasia). Most turned out to have lesions in the left frontal lobe. Others could speak but not understand what was said to them (receptive aphasia). They usually had lesions in the left temporal lobe (e.g. just behind the ear amazingly enough).

Back in the day this approach was justifiably criticized as follows — yes you can turn off a lightbulb by flicking a switch, but the switch isn’t producing the light, but is just something necessary for its production. Nowadays not so much, because we see these areas lighting up with increased blood  flow (by functional MRI) when speech is produced or listened to.

I first met Chomsky’s ideas, not about linguistics, but when I was trying to understand how a compiler of a high level computer language worked. This was so long ago that Basic and Pascal were considered high level languages. Compilers worked with formal rules, and Chomsky categorized them into a hierarchy which you can read about here —

The book describes the rise of Chomsky as the enfant terrible, the adult terrible, then the eminence grise of linguistics. Wolfe has great fun skewering him, particularly for his left wing posturing (something he did at length in “Radical Chic”). I think most of the description is accurate, but if you have the time and the interest, there’s a much better book — “The Linguistics Wars” by Randy Allen Harris — although it’s old (1993), Chomsky and linguistics had enough history even then that the book contains 356 pages (including index).

Chomsky actually did use the term language organ, meaning a facility of the human brain responsible for our production of speech and language. Neuroscience never uses such a term, and Chomsky never tried to localize it in the brain, but work on the aphasias made this at least plausible. If you’ve never heard of ‘universal grammar’, ‘language acquisition device’, or ‘deep structure of language’, the book is a reasonably accurate (and very snarky) introduction.

As the years passed, for everything that Chomsky claimed was a universal of all languages, a language was found that didn’t have it. The last universal left standing was recursion (the ability to pack phrase within phrase — the example given: “He assumed that now that her bulbs had burned out, he could shine and achieve the celebrity he had always longed for” — thought within thought within thought).
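Recursion in the linguistic sense is exactly the device programmers use: a structure that contains a smaller copy of itself. A toy sketch makes the point (the sentence fragments are my own, not the book’s):

```python
# A clause may embed another clause of the same kind, to any depth --
# the structural property Chomsky's camp took to be the essence of
# human language.
def clause(depth):
    if depth == 0:
        return "her bulbs had burned out"
    return f"he assumed that {clause(depth - 1)}"

print(clause(3))
# each level of the call nests one more "he assumed that ..." around the last
```

A language without recursion would be one whose grammar never allows this self-embedding, which is what Everett claimed for Piraha.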

Then a missionary turned linguist (Daniel Everett) found a tribe in the Amazon (the Piraha) with a language which not only lacked recursion, but tenses as well. It makes fascinating reading, including the linguist W. Tecumseh Fitch (yes, Tecumseh), who travelled up the Amazon to prove that they did have recursion (especially as he had collaborated with Chomsky and the (now disgraced) Marc Hauser on a 2002 article saying that recursion was the true essence of human language). How’s this horrible sentence for recursion?

The book ends with a discussion of the quote Wolfe began the book with — “Understanding the evolution of language requires evidence regarding origins and processes that led to change. In the last 40 years, there has been an explosion of research on this problem as well as a sense that considerable progress has been made. We argue instead that the richness of ideas is accompanied by a poverty of evidence, with essentially no explanation of how and why our linguistic computations and representations evolved. We show that, to date, (1) studies of nonhuman animals provide virtually no relevant parallels to human linguistic communication, and none to the underlying biological capacity; (2) the fossil and archaeological evidence does not inform our understanding of the computations and representations of our earliest ancestors, leaving details of origins and selective pressure unresolved; (3) our understanding of the genetics of language is so impoverished that there is little hope of connecting genes to linguistic processes any time soon; (4) all modeling attempts have made unfounded assumptions, and have provided no empirical tests, thus leaving any insights into language’s origins unverifiable. Based on the current state of evidence, we submit that the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever, with considerable uncertainty about the discovery of either relevant or conclusive evidence that can adjudicate among the many open hypotheses. We conclude by presenting some suggestions about possible paths forward.”

One of the authors is Chomsky himself.

You can read the whole article at

I think that Wolfe is right — language is just a tool (like the wheel or the axe) which humans developed to help them. That our brain is at least 3 times the size of our nearest evolutionary cousin’s (the chimpanzee’s) probably had something to do with it. If language is a tool, then, like the axe, it didn’t have to evolve from anything.

All in all a fascinating and enjoyable book. There’s much more in it than I’ve had time to cover.  The prose will pick you up and carry you along.