Category Archives: Neurology & Psychiatry

A new way to treat neurodegeneration

It’s probably too good to be true, the work certainly needs to be replicated, and I can hardly believe it actually works, but here it is.

Pelizaeus-Merzbacher disease is a hereditary disease affecting the cells (oligodendroglia) that make myelin (the fatty wrapping of the nerve fibers (axons) in the brain).  The net effect is that there isn’t enough myelin.

The mutation affects PLP (proteolipid protein), which accounts for half the protein in myelin.  The biochemistry is fascinating, but not as much as the genetics.  The protein has 276 amino acids, and even 20 years ago some 60 point mutations were known (implying that not enough functional PLP is around), and yet over half the cases have a duplication of the gene (implying that too much is around).  A mouse model (the jimpy mouse) is available; it has a point mutation in PLP.

Interestingly, people who lack PLP entirely (due to mutation) have milder disease than people with the point mutations.  I told you the genetics was fascinating.

Noting that null mutations in PLP did better, the authors of Nature vol. 585 pp. 397 – 403 ’20 used CRISPR-Cas9 to produce a PLP knockout in jimpy mice.  Amazingly, the animals did better, even living longer.

Then the authors did something incredible: they injected antisense oligonucleotides into the ventricles of the mice.  The oligonucleotides bound to the mRNA for PLP, inhibiting its translation and decreasing the amount of PLP around.  The mice got better and lived longer.

Now we have roughly 1,000 times more neurons than the mouse does, and our brain is far larger, so it’s a long way from applying this to the neurodegenerations which afflict us.  Still, I find it amazing that antisense oligonucleotides were able to diffuse into the brain, get into the oligodendroglia and shut down PLP synthesis.

Oligonucleotides are big molecules; they used two of them, one 20 and one 16 nucleotides long.  Even a single nucleotide (adenosine monophosphate) has a mass of 347 Daltons.  It is amazing that such a large molecule could get into a living cell.
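To put rough numbers on “big,” here is a back-of-the-envelope calculation using the 347 Dalton figure as the per-nucleotide mass (a slight overestimate, since each phosphodiester bond loses about 18 Daltons of water):

```python
# Rough masses of the two antisense oligonucleotides, treating each
# nucleotide as ~347 Daltons (the mass of adenosine monophosphate).
NUCLEOTIDE_MASS_DA = 347

for length in (20, 16):
    print(f"{length}-mer: roughly {length * NUCLEOTIDE_MASS_DA:,} Daltons")
# 20-mer: roughly 6,940 Daltons
# 16-mer: roughly 5,552 Daltons
```

So each oligonucleotide weighs in at several thousand Daltons, an order of magnitude larger than a typical small-molecule drug of a few hundred Daltons.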

Now a molecule doesn’t know what it is diffusing in, so whether administering an oligonucleotide into the human cerebral ventricles would get it far enough into the brain to reach its target is far from clear.

Just that it worked in these animals, improving neurologic function and lifespan, is incredible.  As Carl Sagan said, “Extraordinary claims require extraordinary evidence,” so the work needs repeating.

Shutting down an mRNA is one thing, but I don’t see how antisense oligonucleotides could be used to correct an mRNA (unless they are getting into the nucleus and correcting a splice junction mutation).

Amazing stuff.  You never know what the next issue of the journals will bring.  It’s like opening presents.

A clever way to attack autoimmune disease

The more we study the immune system, the more complicated it becomes.  Take multiple sclerosis.  A recent study looked at just about every immune parameter in blood they could think of in a collection of 42 monozygotic (identical) twin pairs, one of whom had MS while the other didn’t.  They came up with nothing [ Proc. Natl. Acad. Sci. vol. 117 pp. 21546 – 21556 ’20 ].

Classification of anything (particularly diseases) is always a battle between the lumpers and the splitters.  The initial split in the immune system came between B cells and T cells.  The letters have nothing to do with their function, but rather where they were first found (Bursa of Fabricius, Thymus).

B cells are lymphocytes which secrete immunoglobulin antibodies.  Malignancies of them account for 90 – 95% of leukemias and lymphomas.

T cells are involved in the recognition of antigens.  Some stimulate (or repress) B cells; others kill other cells.  There are 2,000,000,000,000 of them in our bodies, making them comparable in mass to the brain.

T cells have been subdivided into helper T cells (which express the CD4 antigen) and cytotoxic/suppressor cells (which express the CD8 antigen).  Splitting didn’t stop there.  There are two types of helper T cells (Th1 and Th2), but the new kid on the block is the Th17 cell, which, Janus-like, provides protection from bacterial and fungal infections at mucosal surfaces (e.g. gut, bladder) but can also induce autoimmune disease.

How do you stop the second without causing death from infection?  A very clever way was found in Cell vol. 182 pp. 641 – 654 ’20.  Areas of inflammation usually have low oxygen.  The Bacteria and Archaea from which we are descended did just fine without oxygen, using something called glycolysis to burn glucose without it, so deep within our cells is the ability to fall back on glycolysis when the going gets tough (e.g. hypoxic).

What the authors did was knock out one enzyme involved in glycolysis (glucose phosphate isomerase, aka Gpi1), which changes glucose 6-phosphate to fructose 6-phosphate.  This kills Th17 cells living in hypoxia.  What about the good Th17 cells protecting us?  They can use a pathway I’d long forgotten about, the pentose phosphate shunt, along with oxidative phosphorylation.
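The logic can be summarized in a toy model (my sketch of the paper’s reasoning, not the authors’ code): a Th17 cell survives if at least one ATP-generating route remains open to it.

```python
# Toy model: a Th17 cell survives if it can still make ATP.
# Glycolysis requires Gpi1; the pentose phosphate shunt feeding
# oxidative phosphorylation requires oxygen.
def th17_survives(gpi1_present: bool, oxygen_available: bool) -> bool:
    glycolysis_works = gpi1_present
    oxphos_works = oxygen_available
    return glycolysis_works or oxphos_works

# Pathogenic Th17 cell in a hypoxic, inflamed lesion: dies without Gpi1.
print(th17_survives(gpi1_present=False, oxygen_available=False))  # False
# Protective Th17 cell at a well-oxygenated mucosal surface: survives.
print(th17_survives(gpi1_present=False, oxygen_available=True))   # True
```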

Well, did it work?  Actually it did, in an animal model of multiple sclerosis called EAE.  EAE was harder to induce when Gpi1 was knocked down, yet the animals didn’t get a bunch of infections, as they would have if the protective role of Th17 cells had been lost.

How flat can a 100 amino acid protein be?

Alpha-synuclein is of interest to the neurologist because several mutations cause Parkinson’s disease or Lewy Body dementia.  The protein accumulates in the Lewy Bodies of these diseases.  These are concentric hyaline inclusions over 15 microns in diameter found in pigmented brain stem nuclei (substantia nigra, locus coeruleus).

The protein contains 140 amino acids.  It is ‘natively unfolded’ meaning that it has no ordered secondary structure (alpha helix, beta sheet).  No one is sure what it does.  Mouse knockouts are normal, so the mutations must produce something new.

Alpha-synuclein can form amyloid fibrils, which are basically stacks of pancakes made of flattened segments of proteins one on top of the other.

Would you believe that the 100 amino-terminal amino acids of alpha-synuclein can form an absolutely flat structure?  Well, it does, and there are pictures to prove it in PNAS vol. 117 pp. 20305 – 20315 ’20.  Here’s a link if you or your institution has a subscription —

This isn’t the usual alpha-synuclein, as it was chemically synthesized with phosphorylated tyrosine at amino acid #39.  Who would have ever predicted that 100 amino acids could form a structure like this?  I wouldn’t. The structure was determined by cryoEM and all the work was done in China.  Very state of the art world class work.  Bravo.

Shocking news — the cerebellum is bigger than we thought

Cerebellum is Latin for little brain.  Not so fast, says Proc. Natl. Acad. Sci. vol. 117 pp. 19538 – 19543 ’20.  Wikipedia has some nice pictures, particularly of cerebellar size relative to the rest of the brain.  It looks to have a volume only 10% of the rest of the brain.

Using MRI it is possible to scan the cerebral cortex (which is folded like a walnut), open it out using software and compute the total surface area.

Well the cerebellum is folded too, but the folds (called folia) are much smaller (and much more numerous).  When the authors did this they found that the surface area of the cerebellar cortex was 78% of the cerebral cortex.

They did the same thing to the brain of a monkey (the rhesus macaque) and found that its cerebellum, when stretched out, had only 33% of the surface area of its cortex.  Now our brains are much larger than the macaque’s, both cerebral cortex and cerebellum, but what this shows is that in the evolutionary expansion of the human brain, the cerebellum expanded more than the cortex.

A lot has been made of the role of the cerebellum in cognition, but I never bought it.  Disorders of the cerebellum produce incoordination (including incoordination of speech — e.g. dysarthria), so people with cerebellar disease sound drunk.  But I didn’t find much in the way of cognitive problems with cerebellar disease (particularly strokes).

There is even the following wacky statement, “The human cerebellum contains 80% of all brain neurons,” found in Science vol. 366 pp. 454 – 460 ’19.  There are a lot of granule cells in it.

Thank your inner retrovirus for your existence

When the human genome project was first rolled out 20 years ago, it came as a shock to find out that 8% of our 3,200,000,000 nucleotide genome was made of retrovirus relics.  They are the perfect example of selfish DNA: they don’t do anything other than ensure their transmission to the next generation.  They are the perfect parasite, infecting the host without killing it.  Since they don’t have to do anything, mutations rapidly accumulate in them, and none of them can make a functioning virus.

As most know, retroviruses have genomes made of RNA, which an enzyme they contain reverse transcribes into a DNA copy (cDNA), which is then inserted into the genome of the host.  HIV1, the virus of AIDS, is one such retrovirus.  Fortunately HIV1 hasn’t entered the genome of eggs or sperm, so it hasn’t become an endogenous retrovirus, but it is all over the DNA of immune cells of those infected.

What is even more interesting (and totally unexpected) is that the host can repurpose these retroviral relics to do something useful.

In fact they’ve become so useful that we couldn’t reproduce without them.  The syncytiotrophoblast layer of the placenta is at the maternal fetal interface.  It is a continuous structure, one cell deep formed by fusion of the constituent trophoblast cells.  The layer has microvillar surfaces which facilitate exchanges of nutrients and waste products between mother and fetus.

Syncytin1 is a protein expressed here.  It is produced from the env gene of a Human Endogenous RetroVirus (HERV) called HERV-W.  Adding the protein to culture systems leads to syncytium formation.  Mice in which the gene has been knocked out die in utero, due to failure of trophoblast cells to fuse.

Well that’s pretty spectacular and not much commented on although it’s been known for 20 years.

It turns out that the envelope protein of another retrovirus (HERV-K, subtype HML-2) is expressed at high levels on human pluripotent stem cells.  Not only that, it keeps them from differentiating, something important for our longevity, so we always have a few pluripotent stem cells around.

As a neurologist I find it fascinating that knocking down the env protein causes the stem cells to differentiate into neurons.  Don’t get too excited that we’ve found the fount of neuronal youth, as forced expression of the env protein in terminally differentiated neurons kills them.

Why do some socially isolate and some don’t?

The current flare in US cases (and deaths) is likely due to a failure of social distancing, rather than a loosening of restrictions on activity.  Georgia loosened its restrictions back in April.  Following this, new cases dropped for two months, and deaths dropped for nearly 3 months, before rising again to pre-lockdown levels and above.  The number of new ‘cases’ can partially be attributed to more testing, but the number of deaths can not.  For links and the exact numbers see the copy of the previous post after the ***

I think the rise is partially explainable by a failure of social distancing. Have a look at this   It may not be a COVID party in name, but it is in fact.

That being the case, wouldn’t it be nice to know why some people socially distance and others do not?

Incredibly, a paper just came out looking at exactly that: Proc. Natl. Acad. Sci. vol. 117 pp. 17667 – 17674 ’20 (28 July).  It’s likely behind a paywall, so let me explain what they did.

The work was conducted in the first two weeks after the 13 March declaration of a national emergency.  Some 850 participants from the USA had their working memory tested using Amazon’s Mechanical Turk; essentially they are volunteers.  I leave it to you to decide how representative of the general population these people are.  My guess is that they aren’t.

The 850 were subsequently asked how much they had complied with social distancing.

But first, a brief discussion of working memory — more is available at

Working memory is tested by a variety of methods, but it basically measures how many objects you can temporarily hold in your head at one time.  One way to test it is to give you a series of digits, and then ask you to repeat them backwards (after a lag of a second or so).  Here’s what the authors used —

“Participants performed an online visual working memory task, in which they tried to memorize a set of briefly presented color squares for  half a second and after a 1 second delay tried to identify a changed color in the test display by clicking on it using a computer mouse.”

The more you can hold in your head for a short period of time, the more working memory you have.  There is a lot of contention about just what intelligence is and how to measure it, but study after study shows that the greater your working memory, the more intelligent you are.

To cut to the chase — here are their results.

The greater their working memory, the greater the degree of compliance with social distancing.

Here is the authors’ explanation; what’s yours?

“We find that working memory capacity contributes unique variance to individual differences in social distancing compliance, which may be partially attributed to the relationship between working memory capacity and one’s ability to evaluate the true merits of the recommended social distancing guidelines. This association remains robust after taking into account individual differences in age, gender, education, socioeconomic status, personality, mood-related conditions, and fluid intelligence.”

Talk about currency and relevance!  If failure of social distancing explains the rise in cases, studies like this will help us attack it.

Here is the older post with numbers and links


The News is Bad from Georgia

This is an update on a series of posts about Georgia and the effect of relaxing restrictions on activity.  If you’ve been following the story, this post is somewhat repetitive, but I’d rather not leave newcomers behind.  As of 14 July Georgia seemed to be bucking the trend of increasing deaths (but not of increasing ‘cases’, however defined).  No longer.  Page down past the map to the chart with 3 tabs: cases (which means daily newly diagnosed cases), cumulative cases, and deaths.  Clicking on the tabs will move you back and forth (or better, if your screen is big enough, open the link twice and compare cases vs. deaths).

Georgia has changed the way it reports cases, no longer waiting 14 days before results are secure.  I also think they changed some of the older numbers.  I don’t recall seeing over 70 deaths in a day in May and June, yet the current chart shows 4 such days.  There is no way to get the old reports from the Georgia department of health by clicking on the links in the old posts on the subject.  They all take you to the current one.

The 7 day average of deaths back on 25 April was 35 and new cases 740, based on detection of the viral genome or antibodies to it, not on sick people.

Sadly, the 7 day average of deaths is now 45 and new cases 3,700.

The charts allow you to see when both new cases and deaths began to rise.  The number of new cases began to spike 16 June and the number of deaths began to increase 19 July (eyeball the charts and you’ll see that these are not precise numbers).  So there was about a 1 month lag between the increases.

So were the doom and gloom sayers correct?  Here they are again to refresh your memory.

From The Atlantic — “Georgia’s Experiment in Human Sacrifice — The state is about to find out how many people need to lose their lives to shore up the economy.” —

Possibly they were right, but deaths actually decreased for a month or two after 25 April hitting a low of 13 daily deaths on 2 July.   I don’t think any of them predicted a lag of 2 months before the apocalypse.

Most likely it was a change in behavior.  Have a look at this   It may not be a COVID party in name, but it is in fact.

At first glance it appears that they are trying for a Darwin award, but on second glance, based purely on a cost-benefit analysis (to them only), the chances of a healthy 18 – 20 year old dying from COVID19 are less than 1 in a thousand.  Libido is incredibly intense at that age.  I’m not sure what I would have done in their shoes.  Here are some statistics from Florida with numbers large enough to be significant.

Here is some older data from Florida  (from their department of health) —

Age Range    Number of Cases    Number of Hospitalizations    Deaths
14 – 24           54,815                      503                   12
25 – 34           70,030                   1,315                   34

If you are a ‘case’ (however defined), that’s a risk of death of less than 1 in 2,000.
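A quick sanity check of that figure from the Florida table above:

```python
# Death risk per 'case' from the Florida table: (cases, deaths) by age range.
florida = {
    "14 - 24": (54_815, 12),
    "25 - 34": (70_030, 34),
}

for age_range, (cases, deaths) in florida.items():
    print(f"{age_range}: about 1 death per {cases // deaths:,} cases")

total_cases = sum(cases for cases, _ in florida.values())
total_deaths = sum(deaths for _, deaths in florida.values())
print(f"combined: about 1 death per {total_cases // total_deaths:,} cases")
```

The combined figure works out to roughly 1 death per 2,700 cases, consistent with the “less than 1 in 2,000” claim.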

This is the age range of most of the folks in the video.  Further recent examples are lifeguards in NY and on Cape Cod.

Think of all the gay men who knew full well how AIDS was transmitted, still got it and died.  Libido is powerful.  The classic example is Randy Shilts who wrote the magnificent “And the Band Played On” in 1987 about the early days of the epidemic.  He knew everything there was to know about the way the AIDS virus (HIV1) was transmitted yet he himself died of AIDS.



Book review: The Biggest Bluff

Here’s a well-written book about (1) Poker (2) The Russian emigre Experience (3) Psychology (4) Chance and Luck.  What’s not to like?

I speak of “The Biggest Bluff” by Maria Konnikova.  Then there are the remarkable personal connections.  First, she went to the same high school, Acton-Boxborough High in a Boston suburb, as my cousin’s sons.  The high school is a little UN; when we went to their graduations, the graduates welcomed us in 12 different languages, each spoken by a native speaker.  Second, the parallels between Konnikova and our nephew’s wife are striking.  They’re both 36, both arriving from Russia (at ages 6 and 9) speaking no English.  College: Harvard for Maria, Princeton for the other.  Grad school: Columbia for a PhD in psychology for Maria, Columbia Law for the other.  Third, another nephew is in the process of getting a PhD in psychology from Vanderbilt.

I played poker for a year or so in a rather unusual venue: with cops in the on-call room for the ER intern in a ghetto hospital in Philly in the 60’s.  When on call we knew better than to go to bed before 3 a.m., an hour after the bars closed at 2, when the carnage which was going to happen had happened.  The cops would bring the injured in, and the surgical interns and residents would hang out waiting for the OR to be ready.  The cops would hang around to see if they had to take the injured to jail or whether they’d be admitted.  No one could leave, so the cops and the docs had a floating poker game, the only solid rule being that, if called, you cashed out immediately (even in the middle of a hand) and left.

The carnage in the ghetto back then was incredible.  It still is.  Sadly, despite Head Start, the War on Poverty, Affirmative Action and Anti-Racism, not much has changed (64 shootings, 13 deaths).

The book concerns the author’s journey from not knowing how many cards there are in a deck to playing professional poker in just under a year.  It’s a fascinating story, but of more interest to me are the tidbits tucked in.

For instance, von Neumann was interested in poker because the best hand didn’t always win, because of the element of chance, and, most importantly, because of the betting.  By chance he met his future wife (who was another man’s wife at the time) in Monte Carlo, having lost his shirt with his system for beating roulette.

Here’s Immanuel Kant providing an (unintentional) explanation of why the betting in poker is so important — “It frequently happens that a man delivers his opinions with such boldness and assurance that he appears to be under no apprehension as to the possibility of his being in error.  The offer of a bet startles him, and makes him pause.  Sometimes it turns out that his persuasion may be valued at a ducat but not at ten.”

Well with von Neumann and Kant on board you know you are in for a wild ride.

The book contains all sorts of succinct summaries of great psychological experiments: the Dunning-Kruger effect, Kahneman’s work (which appears 3 times), Langer and the illusion of control, etc., etc.

One of the more interesting passages to me occurs when she talks about what a gamble an academic career is.  She studied with Walter Mischel at Columbia, who didn’t believe in something called the Big Five personality traits.  “Good luck to me getting a job in any psychology department where the Big Five personality traits are still big — Walter Mischel and the Big Five are not on speaking terms.  If I were to go against the head of department or hiring committee . . .  Bye-bye job prospects.”

I find this incredibly sad, as must most of the hard science types who read this blog.  It doesn’t matter if your research was any good; did it conform to the dominant narrative?  In contrast, a guy was plucked out of making sandwiches at a Subway and made a professor of mathematics, because his paper was so astounding —

It reminds me of Voltaire’s crack about sects.

“EVERY sect, in whatever sphere, is the rallying-point of doubt and error.  Scotist, Thomist, Realist, Nominalist, Papist, Calvinist, Molinist, Jansenist, are only pseudonyms.  There are no sects in geometry; one does not speak of a Euclidian, an Archimedean.  When the truth is evident, it is impossible for parties and factions to arise.  Never has there been a dispute as to whether there is daylight at noon.  The branch of astronomy which determines the course of the stars and the return of eclipses being once known, there is no more dispute among astronomers.  In England one does not say ‘I am a Newtonian, a Lockian, a Halleyan.’  Why?  Those who have read cannot refuse their assent to the truths taught by these three great men.  The more Newton is revered, the less do people style themselves Newtonians; this word supposes that there are anti-Newtonians in England.”

But your career in academic psychology can live or die depending on whether you subscribe to the Big Five.

Addendum 13 July — Peter Shenkin has a fascinating comment about why Bohm didn’t get tenure at Princeton — it was not because of his politics — it’s in the comment section.

Finally – sex.  The book describes a lot of the mostly verbal (but one time physical) abuse she took from other players.

In one of the Sherlock Holmes stories the following dialog appears

Gregory (Scotland Yard): “Is there any other point to which you would wish to draw my attention?”
Holmes: “To the curious incident of the dog in the night-time.”
Gregory: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”

There is a very curious omission in the book.  Konnikova describes the physical appearance of the other players at length. She talks about the way players try to psych each other out.  The jacket photo shows a rather sultry attractive woman.

What doesn’t Konnikova talk about?  She doesn’t mention whether she uses her sex at the table to confuse the opposition.  Did she act seductively toward a particular opponent?  What about makeup, perfume, decolletage?  Not a word.  Did she make a run at her teacher Erik Seidel?  She clearly greatly admires everything about him.  It’s on every page.

A great book, and about far more than poker.

The four hour cure for depression: what is Ketamine doing?

It is a sad state of affairs when you look forward to writing a post on depression.

From Nature 2 July — “G4 a type of swine flu virus from China can proliferate in human airway cells.  34/338 pig farm workers in China have antibodies to it.  In ferrets G4 causes lung inflammation and coughing.”

Well that’s enough reason to flee to the solace of the basic neuroscience of depression.



The drugs we use for depression aren’t great.  They don’t help at least a third of the patients, and they usually take several weeks to work for endogenous depression.  They seemed to work faster in my MS patients who had a relapse and were quite naturally depressed by an exogenous event completely out of their control.

Enter Ketamine which, when given IV, can transiently lift depression within a few hours.  You can find more details and references in an article in Neuron (vol. 101 pp. 774 – 778 ’19) written by the guys at Yale who did some of the original work.  Here’s the gist of the article: a single dose of ketamine produced antidepressant effects that began within hours, peaked in 24 – 72 hours, and dissipated within 2 weeks (if the ketamine wasn’t repeated).  This occurred in 50 – 75% of people with treatment-resistant depression.  Remarkably, one third of treated patients went into remission.

This simply has to be telling us something very important about the neurochemistry of depression.

Naturally there has been a lot of work on the neurochemical changes produced by ketamine, none of which I’ve found convincing ( see ) until the following paper [ Neuron  vol. 106 pp. 715 – 726 ’20 ].

In what follows you have to have some basic knowledge of synaptic structure, but here’s a probably inadequate elevator pitch.  Synapses have two sides, pre- and post-.  On the presynaptic side, neurotransmitters are enclosed in synaptic vesicles.  Their contents are released into the synaptic cleft when an action potential arrives from elsewhere in the neuron.  The neurotransmitters flow across the very narrow synapse to bind to receptors on the postsynaptic side, triggering (or not) a response of the postsynaptic neuron.  Presynaptic terminals vary in the number of vesicles they contain.

Synapses are able to change their strength (how likely an action potential is to produce a postsynaptic response).  Otherwise our brains wouldn’t be able to change and learn anything.  This is called synaptic plasticity.

One way to change the strength of a synapse is to adjust the number of synaptic vesicles found on the presynaptic side.   Presynaptic neurons form synapses with many different neurons.  The average neuron in the cerebral cortex is post-synaptic to thousands of neurons.

We think that synaptic plasticity involves changes at particular synapses but not at all of them.

Not so with ketamine, according to the paper.  It changes the number of presynaptic vesicles at all synapses of a given neuron by the same percentage; this is called synaptic scaling.  Given 3 synapses containing 60, 50 and 40 vesicles, upward synaptic scaling by 20% would add 12 vesicles to the first, 10 to the second and 8 to the third.  The paper states that this is exactly what ketamine does to neurons using glutamic acid (the major excitatory neurotransmitter found in brain).  Even more interesting is the fact that lithium, which treats mania, has the opposite effect, decreasing the number of vesicles at each synapse by the same percentage.
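The multiplicative nature of the scaling is the key point: every synapse changes by the same percentage, so the relative strengths of a neuron’s synapses are preserved.  A minimal sketch of the arithmetic (keeping the vesicle framing used above):

```python
# Multiplicative synaptic scaling: every synapse of a neuron changes
# by the same percentage, preserving their relative strengths.
def scale_synapses(vesicle_counts, factor):
    return [round(n * factor) for n in vesicle_counts]

before = [60, 50, 40]
print(scale_synapses(before, 1.20))  # upward (ketamine-like): [72, 60, 48]
print(scale_synapses(before, 0.80))  # downward (lithium-like): [48, 40, 32]
```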

I found this rather depressing when I first read it, as I realized that there was no chemical process intrinsic to a neuron which could possibly work quickly enough to change all the synapses at once.  To do this you need a drug which goes everywhere at once.

But you don’t.  There are certain brain nuclei which send their processes everywhere in the brain.  Not only that, but their processes contain varicosities which release their neurotransmitter even where there is no post-synaptic apparatus.  One such nucleus (the pars compacta of the substantia nigra) has extensively ramified processes, so much so that “Individual neurons of the pars compacta are calculated to give rise to 4.5 meters of axons once all the branches are summed” [ Neuron vol. 96 p. 651 ’17 ].  So when that single neuron fires, dopamine is likely to bathe every neuron in the brain.  We think that something similar occurs in the locus coeruleus of the lower brainstem, which has only 15,000 neurons and releases norepinephrine, and also in the raphe nuclei of the brainstem, which release serotonin.

It should be less than a surprise that drugs which alter neurotransmission by these neurotransmitters are used to treat various psychiatric diseases.  Some drugs of abuse alter them as well (Cocaine and speed release norepinephrine, LSD binds one of the serotonin receptors etc, etc.)

The substantia nigra contains only 450,000 neurons at birth, so you don’t need a big nucleus to affect our 80 billion neuron brains.

So the question before the house, is have we missed other nuclei in the brain which control volume neurotransmission by glutamic acid?   If they exist, could their malfunction be a cause of mania and/or depression?  There is plenty of room for 10,000 to 100,000 neurons to hide in an 80 billion neuron brain.

Time to think outside the box people. Here is an example:  Since ketamine blocks activation of one receptor for glutamic acid, could there be a system using volume neurotransmission which releases a receptor inhibitor?

Addendum 7 July — I sent a copy of the post to the authors and received this back from one of them. “Thank you very much for your kind words and interest in our work. Your explanation is quite accurate (my only suggestion would be to replace “vesicles” with “receptors”, as the changes we propose are postsynaptic). Reading your blog reassures us that our review article accomplished its main goal of reaching beyond the immediate neuroscience community to a wider audience like yourself.”


Functional MRI research is a scientific sewer — take 2

You’ve heard of P-hacking: slicing and dicing your data until you get a statistically significant result.  I wrote a post about null-hacking.  Welcome to the world of pipeline hacking.  Here is a brief explanation of the highly technical field of functional magnetic resonance imaging (fMRI).  Skip to the **** if you know this already.

Chemists use MRI all the time, but they call it Nuclear Magnetic Resonance. Docs and researchers quickly changed the name to MRI because no one would put their head in something with Nuclear in the name.

There are now noninvasive methods to study brain activity in man. The most prominent one is called BOLD (Blood Oxygen Level Dependent), and is based on the fact that blood flow increases way past what is needed with increased brain activity. This was actually noted by Wilder Penfield operating on the brain for epilepsy in the 1930s. When a patient had a seizure on the operating table (they could keep things under control by partially paralyzing the patient with curare) the veins in the area producing the seizure turned red. Recall that oxygenated blood is red while the deoxygenated blood in veins is darker and somewhat blue. This implied that more blood was getting to the convulsing area than it could use.

BOLD depends on slight differences in the way oxygenated hemoglobin and deoxygenated hemoglobin interact with the magnetic field used in magnetic resonance imaging (MRI). The technique has had a rather checkered history, because very small differences must be measured, and there is lots of manipulation of the raw data (never seen in papers) to be done. 10 years ago functional magnetic imaging (fMRI) was called pseudocolor phrenology.

Some sort of task or sensory stimulus is given and the parts of the brain showing increased hemoglobin + oxygen are mapped out. As a neurologist as far back as the 90s, I was naturally interested in this work. Very quickly, I smelled a rat. The authors of all the papers always seemed to confirm their initial hunch about which areas of the brain were involved in whatever they were studying.


Well now we know why.  The data produced by an fMRI study is so extensive and complex that computer programs (pipelines) must be used to make those pretty pictures.  The brain has a volume of about 1,200 cubic centimeters (1,200,000 cubic millimeters).  Each voxel of an MRI (analogous to a pixel on your screen) is about 1 cubic millimeter, and basically gives you a number for how much energy is absorbed in that voxel.  The pipeline’s job is to turn over a million such numbers into a statistical map.
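To make the pipeline problem concrete, here is a toy sketch in Python. This is my own illustration, nothing like a real SPM or FSL workflow, and every number in it is made up: the same raw voxel data, pushed through two pipelines that differ only in smoothing width and statistical threshold, disagree about which voxels count as “active.”

```python
import numpy as np

# Toy illustration (not any real fMRI package): one row of "voxels",
# mostly noise, with a modest real effect buried in the middle.
rng = np.random.default_rng(0)
signal = rng.normal(0, 1, size=1000)   # pure-noise voxels
signal[450:470] += 1.5                 # a modest real effect

def pipeline(data, smooth_width, z_threshold):
    # crude boxcar smoothing followed by a z-score threshold
    kernel = np.ones(smooth_width) / smooth_width
    smoothed = np.convolve(data, kernel, mode="same")
    z = (smoothed - smoothed.mean()) / smoothed.std()
    return z > z_threshold             # boolean "activation map"

# Two defensible-looking pipelines, same raw data
map_a = pipeline(signal, smooth_width=5,  z_threshold=2.0)
map_b = pipeline(signal, smooth_width=15, z_threshold=3.0)

print("pipeline A active voxels:", map_a.sum())
print("pipeline B active voxels:", map_b.sum())
print("voxels where the pipelines disagree:", (map_a != map_b).sum())
```

Neither choice of smoothing or threshold is obviously wrong, which is exactly the problem: a researcher free to pick among them after the fact can shop for the map that fits the hypothesis.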

Enter Nature vol. 582 pp. 36 – 37, 84 – 88 ’20 and the Neuroimaging Analysis Replication and Prediction Study (NARPS).  70 different teams were given the raw data from 108 people, each of whom performed one of two versions of a task designed to study decision making under risk.  The teams were asked to analyze the data to test 9 different hypotheses about what part of the brain should light up in relation to a specific feature of the task.

Now when a doc orders a hemoglobin from the lab, he’s pretty sure that any lab will give the same result, because they all determine hemoglobin by the same method.  Not so for functional MRI.  All 70 teams analyzed the data using different pipelines and workflows.

Was there agreement?  On average, 20% of the teams reported a result different from the majority (random would be 50%).  Remember, they all got the same raw data.

From the News and Views commentary on the paper:

“It is unfortunately common for researchers to explore various pipelines to find the ver­sion that yields the ‘best’ results, ultimately reporting only that pipeline and ignoring the others.”

This explains why I smelled a rat 30 years ago.  I call this pipeline hacking.

Further infelicities in the field can be found in the following posts

1. It was shown in 2014 that 70% of people having functional MRIs (fMRIs) were asleep during the test, and that until then fMRI researchers hadn’t checked for it. For details please see You don’t have to go to med school to know that the brain functions quite differently in wake and sleep.

2. A devastating report in [ Proc. Natl. Acad. Sci. vol. 113 pp. 7699 – 7700, 7900 – 7905 ’16 ] showed that certain common settings in 3 software packages (SPM, FSL, AFNI) used to analyze fMRI data gave false positive results ‘up to’ 70% of the time. Some 3,500 of the 40,000 fMRI studies in the literature over the past 20 years used these settings. The paper also notes that a bug (now corrected after being used for 15 years) in one of them also led to false positive results.  For details see —
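The statistics behind such false positives are easy to demonstrate. Here is a hypothetical toy simulation of my own, far cruder than the cluster-level analysis the PNAS paper actually examined: scan tens of thousands of pure-noise voxels at a lenient uncorrected threshold, and essentially every “study” of a completely inactive brain shows activation somewhere.

```python
import numpy as np

# Toy multiple-comparisons demo: with 50,000 independent noise voxels
# tested at an uncorrected one-sided p ~ 0.01 (z > 2.33), some voxel
# crosses the threshold in virtually every simulated "study".
rng = np.random.default_rng(1)
n_studies, n_voxels = 200, 50_000
z_uncorrected = 2.33

false_positive_studies = 0
for _ in range(n_studies):
    noise = rng.normal(size=n_voxels)      # a brain doing nothing at all
    if (noise > z_uncorrected).any():      # does any voxel look "active"?
        false_positive_studies += 1

print(f"{false_positive_studies}/{n_studies} pure-noise studies "
      f"showed at least one 'active' voxel")
```

The real packages correct for multiple comparisons, of course; the PNAS paper’s point was that the particular parametric corrections in common use were far less stringent than advertised.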

In fairness to the field, the new work and #1 and #2 represent attempts by workers in fMRI to clean it up.   They’ve got a lot of work to do.

The brain is far more wired up than we thought

The hardest thing in experimental psychology is being smart enough to think of something truly clever.  Here is a beauty, showing that the brain is far more wired together than we ever thought.

First some background.  You’ve probably heard of the blind spot (although you’ve never seen it).  It’s the part of your eye where all the nerve fibers from the sensory part of the eye (the retina) are collected together, forming the optic nerve.  Through an ophthalmoscope it appears as a white oval (largest diameter under 2 millimeters).  It’s white because it’s all nerve fibers (1,000,000 of them) with no sensory retina overlying it.  So if you shine a very narrow light on it, you’ll see nothing.   That’s the blind spot.

Have a look at Both eyes project to both halves of the brain.  Because the blind spot is off to one side in your visual field, the other eye maps a different part of its retina to the same area of the brain.  But if you patch that other eye, the cortical representation of the blind spot gets no input at all.


 In the healthy visual system, the cortical representation of the blind spot (BS) of the right eye receives information from the left eye only (and vice versa). Therefore, if the left eye is patched, the cortex corresponding to the BS of the right eye is deprived of its normal bottom-up input.

Proc. Natl. Acad. Sci. vol. 117 pp. 11059 – 11067 ’20

Hopefully you’ll be able to follow the link and look at figure 1 p. 11060 which will explain things.

Patching the left eye deprives that area of visual cortex of any input at all.

Here comes the crux of the paper — within minutes of patching the left eye, the cortical representation of the right eye’s blind spot begins to widen.  It starts responding to stimuli from areas outside its usual receptive field.

Nerves just don’t grow that fast, so the connections have to have been there to begin with.   So the brain is more wired together than we thought.  Perhaps this is just true of the visual system.

If not, the work has profound implications for neurologic rehabilitation.

I do apologize for not being able to explain this better, but the work is sufficiently important that you should know about it.

Addendum 4 June — here’s another shot at explaining things.

    • As you look straight ahead, light falls on the part of your retina with the highest spatial resolution (the macula). The blind spot due to the optic nerve is found closer to your nose, which means that in the right eye, the retina surrounding the blind spot ‘sees’ light coming from toward your ear. Light from the same direction (your right ear) will NOT fall on the optic nerve of your left eye (which is toward your nose), so information from that area gets back to the brain (which is why you don’t see your blind spot).

      Now visual space (say looking toward the right) is sent back to the brain coherently, so that areas of visual space transmitted by either eye go to the same place in the brain.

      So if you now cover your left eye, there is an area of the brain (corresponding to the blind spot of the right eye) which is getting no information from the retina at all. So it is effectively blind. Technology permits us to actively stimulate the retina anywhere we want. We also have ways to measure activity of the brain in any small area (functional MRI). Activity increases with visual input.

      Now with the left eye patched, stimulate with light directed at the right eye’s blind spot. Nothing happens (no increase in activity) in the cortical area representing that part of the visual field. It isn’t getting any input. So it is possible to accurately map the representation of the right eye’s blind spot in the brain in terms of the brain areas responding to it.

      Next, visually stimulate the right eye with light hitting the retina adjacent to the right eye’s blind spot. Initially the blind spot area of the brain shows no activity. After just a few minutes, it begins to respond to stimuli it never responded to initially. This implies that those two areas of the brain have connections between them that were always there, as new nerve processes just don’t grow that fast.

      To be clever enough to think of a way to show this is truly brilliant. Bravo to the authors.