How do neural nets do what they do?

Isn’t it enough that neural nets beat humans at chess, go and checkers, drive cars, recognize faces, and figure out which plays Shakespeare did and didn’t write?  Not at all.  Figuring out how they do what they do may allow us to figure out how the brain does what it does.

Science recently had a great bunch of articles on neural nets, deep learning [ Science vol. 356 pp. 16 – 30 ’17 ].  Chemists will be interested in p. 27 “Neural networks learn the art of chemical synthesis”.  The articles are quite accessible to the scientific layman.

To this retired neurologist, the most interesting of the bunch was the article (pp. 22 – 27) describing attempts to figure out how neural nets do what they do. Welcome to the world of the neuroscientist, where a similar problem has engaged us for centuries.  DARPA is spending $70 million on exactly this, according to the article.

If you are a little shaky on all this, I’ve copied a previous post on the subject (along with a few comments it inspired) below the ***.

Here are four techniques currently in use:

  1. Counterfactual probes — the classic black box technique — vary the input (text, images, sound, etc.) and watch how it affects the output.  It goes by the fancy name of Local Interpretable Model-agnostic Explanations (LIME).  This identifies the parts of the input most important in the net’s original judgment.
  2. Start with a black image or a zeroed out array of text and transition step by step toward the example being tested.  Then you watch the jumps in certainty the net makes, and you can figure out what it thinks is important.
  3. The Generalized Additive Model (GAM) is a statistical technique based on linear regression.  It operates on the data to massage it.  The net is then presented with a variety of GAM operations and studied to see which are best at massaging the data so the machine can make a correct decision.
  4. Glass Box wires monotonic relationships (e.g. the price of a house goes up with the number of square feet) INTO the neural net, allowing better control of what it does.
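
Technique #1 can be sketched in a few lines.  This is a toy version of the LIME idea, not the real LIME library: perturb the input to a black box, watch the output, and fit a local linear surrogate whose coefficients rank the input features.  The “net” and all the numbers here are invented for illustration.

```python
import numpy as np

def lime_style_probe(black_box, x, n_samples=500, seed=0):
    """Perturb x with random on/off masks, query the black box, and fit
    a local linear surrogate; a big |coefficient| flags an important feature."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, x.size))   # zero out random features
    outputs = np.array([black_box(m * x) for m in masks])
    w, *_ = np.linalg.lstsq(masks.astype(float), outputs, rcond=None)
    return w

# Toy stand-in for a trained net: only features 0 and 3 actually matter
black_box = lambda v: 3.0 * v[0] - 2.0 * v[3]
x = np.ones(6)
w = lime_style_probe(black_box, x)
print(np.argsort(-np.abs(w))[:2])   # features 0 and 3 rank highest
```

The probe never looks inside the black box; it recovers what matters purely from input–output behavior, which is the whole point of the technique.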

The articles don’t appear to be behind a paywall, so have at it.

***

NonAlgorithmic Intelligence

Penrose was right. Human intelligence is nonAlgorithmic. But that doesn’t mean that our physical brains produce consciousness and intelligence using quantum mechanics (although all matter is what it is because of quantum mechanics). The parts (even small ones like neurotubules) contain so much mass that their associated de Broglie wavelength is too short to exhibit quantum mechanical effects. Here Penrose got roped in by Hameroff, thinking that neurotubules were the carriers of the quantum mechanical indeterminacy. They aren’t; they are just too big. The dimer of alpha and beta tubulin contains 900 amino acids — a mass of around 90,000 Daltons (or 90,000 hydrogen atoms, which are small enough to show quantum mechanical effects).

So why was Penrose right? Because neural nets which are inherently nonAlgorithmic are showing intelligent behavior. AlphaGo which beat the world champion is the most recent example, but others include facial recognition and image classification [ Nature vol. 529 pp. 484 – 489 ’16 ].

Nets are trained on real world images and told whether they are right or wrong. I suppose this is programming of a sort, but it is certainly nonAlgorithmic. As the net learns from experience it adjusts the strength of the connections between its neurons (synapses if you will).
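
The “adjusts the strength of the connections” idea fits in a few lines of code.  Here is a one-neuron toy (nothing remotely like AlphaGo’s scale) learning the AND function purely from being told right or wrong; every number is invented for illustration.

```python
import numpy as np

# A single artificial neuron learns AND from right/wrong feedback,
# adjusting its connection strengths (weights) and threshold (bias)
# after each mistake -- the perceptron learning rule.
rng = np.random.default_rng(1)
w = rng.normal(size=2)                    # initial synaptic strengths
b = 0.0                                   # threshold
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for epoch in range(100):
    for x, target in data:
        x = np.asarray(x, dtype=float)
        out = 1 if w @ x + b > 0 else 0
        err = target - out                # the "told it was wrong" signal
        w += 0.1 * err * x                # strengthen / weaken synapses
        b += 0.1 * err

preds = [1 if w @ np.asarray(x, float) + b > 0 else 0 for x, _ in data]
print(preds)                              # [0, 0, 0, 1] -- AND learned
```

Nobody wrote an algorithm *for AND*; the rule only says “nudge the synapses when wrong,” which is the sense in which the training is nonAlgorithmic.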

So it should be a simple matter to find out just how AlphaGo did it — just get a list of the neurons it contains, and the number and strengths of the synapses between them. I can’t find out just how many neurons and connections there are, but I do know that thousands of CPUs and graphics processors were used. I doubt that there were 80 billion neurons or a trillion connections between them (which is what our brains are currently thought to have).

Just print out the above list (assuming you have enough paper) and look at it. Will you understand how AlphaGo won? I seriously doubt it. You will understand it less well than looking at a list of the positions and momenta of 80 billion gas molecules will tell you its pressure and temperature. Why? Because in statistical mechanics you assume that the particles making up an ideal gas are featureless, identical and do not interact with each other. This isn’t true for neural nets.
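
The gas analogy can be made concrete.  Precisely *because* ideal gas molecules are assumed identical and non-interacting, a giant list of momenta collapses to one number via equipartition.  A sketch, with physically standard constants but an invented sample of molecules:

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
m  = 4.65e-26              # mass of one N2 molecule, kg

# Fake "list of momenta": Maxwell-Boltzmann velocities at 300 K,
# each component drawn from N(0, sqrt(kB*T/m))
rng = np.random.default_rng(0)
T_true = 300.0
v = rng.normal(0.0, np.sqrt(kB * T_true / m), size=(1_000_000, 3))

# Equipartition: (3/2) kB T = (1/2) m <v^2> -- the million-row list
# collapses to a single mean squared speed
T_est = m * np.mean(np.sum(v**2, axis=1)) / (3 * kB)
print(f"estimated T = {T_est:.1f} K")     # close to 300
```

No such collapse is available for a neural net, whose “particles” are all different and all interacting — which is the point of the paragraph above.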

It also isn’t true for the brain. Efforts are underway to find a wiring diagram of a small area of the cerebral cortex. The following will get you started — https://www.quantamagazine.org/20160406-brain-maps-micron-program-iarpa/

Here’s a quote from the article to whet your appetite.

“By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.”
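
The quoted numbers invite some quick arithmetic (the 80 billion figure is the one used elsewhere in this post):

```python
# Back-of-the-envelope arithmetic on the Microns numbers quoted above
neurons_per_mm3 = 100_000
synapses_per_mm3 = (3e6, 15e6)             # the quoted 3 to 15 million

ratios = [s / neurons_per_mm3 for s in synapses_per_mm3]
print(ratios)                              # 30 to 150 synapses per neuron

# Scaling to ~80 billion neurons in a whole brain shows how far one
# cubic millimeter is from the real thing
scale = 80e9 / neurons_per_mm3
print(f"{scale:,.0f} cubes' worth of neurons in a whole brain")
```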

I don’t think this will help us understand how the brain works any more than the above list of neurons and connections from AlphaGo. There are even more problems with such a list. Connections (synapses) between neurons come and go (and they increase and decrease in strength, as in the neural net). Some connections turn on the receiving neuron, some turn it off. I don’t think there is a good way to tell what a given connection is doing just by looking at a slice of it under the electron microscope. Lastly, some of our most human attributes (emotion) are due not to connections between neurons but to release of neurotransmitters generally into the brain, not at the very localized synapse, so they won’t show up on a wiring diagram. This is called volume neurotransmission, and the transmitters are serotonin, norepinephrine and dopamine. Not convinced? Among agents modifying volume neurotransmission are cocaine, amphetamine, antidepressants and antipsychotics. Fairly important.

So I don’t think we’ll ever truly understand how the neural net inside our head does what it does.

Here are a few of the comments:

“So why was Penrose right? Because neural nets which are inherently nonAlgorithmic are showing intelligent behavior. ”

I picked up a re-post of this comment on Quanta and thought it best to reply to you directly. Though this appears to be your private blog I can’t seem to find a biography, otherwise I’d address you by name.

My background is computer science generally and neural networks (with the requisite exposure to statistical mechanics) in particular and I must correct the assertion you’ve made here; neural nets are in fact both algorithmic and even repeatable in their performance.

I think what you’re trying to say is the structure of a trained network isn’t known by the programmer in advance; rather than build a trained intelligence, the programmer builds an intelligence that may be trained. The method is entirely mathematical though, mostly based on the early work of Boltzmann and explorations of the Monte-Carlo algorithms used to model non-linear thermodynamic systems.

For a good overview of the foundations, I suggest J.J. Hopfield’s “Neural networks and physical systems with emergent collective computational abilities”, http://www.pnas.org/content/79/8/2554.abstract

Regards,
Scott.

Scott — thanks for your reply. I blog anonymously because of the craziness on the internet. I’m a retired neurologist, long interested in brain function both professionally and esthetically. I’ve been following AI for 50 years.

I even played around with LISP, which was the language of AI in the early days, read Minsky on Perceptrons, worried about the big Japanese push in AI in the 80’s when they were going to eat our lunch etc. etc.

I think you’d agree that compared to LISP and other AI of that era, neural nets are nonAlgorithmic. Of course setting up virtual neurons DOES involve programming.

The analogy with the brain is near perfect. You can regard our brains as the output of the embryologic programming with DNA as the tape.

But that’s hardly a place to stop. How the brain and neural nets do what they do remains to be determined. A wiring diagram of the net is available but really doesn’t tell us much.

Again thanks for responding.

Scott

Honestly it would be hard for me to accept that the nets I worked on weren’t algorithmic, since they were literally based on formal algorithms derived directly from statistical mechanics, most of which was based on Boltzmann’s work back in the 19th century. Hopfield, who I truly consider the “father” of modern neural computing, is a physicist (now at Princeton I believe). Most think he was a computer scientist back in the ’70s when he did his basic work at Cal Tech, but that’s not really the case.

I understand what you’re trying to say, that the actual training portion of NN development isn’t algorithmic, but the NN software itself is and it’s extremely precise in its structure, much more so than say, for instance, a bubble sort. It’s pretty edgy stuff even now.

I began working on NNs in ’82 after reading Hopfield’s seminal paper. I was developing an AI aimed at self-diagnosing computer systems for a large computer manufacturer now known as Hewlett-Packard (at the time we were a much smaller R&D company that was later acquired). We also explored expert systems and ultimately deployed a solution based on KRL, which is a LISP development environment built by a small Stanford AI spinoff. It ended up being a dead end; that was an argument I lost (I advocated the NN direction as much more promising but lost mostly for political reasons). Now I take great pleasure in gloating 🙂 even though I’m no longer commercially involved with either the technology or that particular company.

Luysii

I thought we were probably in agreement about what I said. Any idea on how to find out just how many ‘neurons’ (and how many levels) there are in AlphaGo? It would be interesting to compare the numbers with our current thinking about the numbers of cortical neurons and synapses (which grows ever larger year by year).

Who is the other Scott — Aaronson? 100 years ago he would have been a Talmudic scholar (as he implies by the title of his blog).

Yes neural nets are still edgy, and my son is currently biting his fingernails in a startup hoping to be acquired which is heavily into an application of neural nets (multiple patents etc. etc.)

A possible new player

Drug development is very hard because we don’t know all the players inside the cell. A recent paper describes an entirely new class of player — circular DNA derived from an ancient virus.  The authoress is Laura Manuelidis, who would have been a med school classmate had I chosen to go to Yale med instead of Penn.   She is the last scientist standing who doesn’t believe Prusiner’s prion hypothesis.  Being female, she couldn’t marry the boss’s daughter, so she married the boss instead: Elias Manuelidis, a Yale neuropathologist who would be 99 today had he not passed away at 72 in 1992.

The circular DNAs go by the name of SPHINX, an acronym for Slow Progressive Hidden INfections of X origin.  They have no sequences in common with bacterial or eukaryotic DNA, but there is some homology to a virus infecting Acinetobacter, a wound pathogen common in soil and water.

How did she find them?  By doggedly pursuing the idea that neurodegenerative diseases such as Creutzfeldt-Jakob Disease (CJD) and scrapie were due to an infectious agent triggering aggregation of the prion protein.

As she says:  “The cytoplasm of CJD and scrapie-infected cells, but not control cells, also contains virus-like particle arrays and because we were able to isolate these nuclease-protected particles with quantitative recovery of infectivity, but with little or no detectable PrP (Prion Protein), we began to analyze protected nucleic acids. Using Φ29 rolling circle amplification, several circular DNA sequences of <5 kb (kilobases) with ORFs (Open Reading Frames) were thereby discovered in brain and cultured neuronal cell lines. These circular DNA sequences were named SPHINX elements for their initial association with slow progressive hidden infections of X origin."

SPHINX itself codes for a 324 amino acid protein, which is found in human brain, concentrated in synaptic boutons.  Strangely, even though the DNAs are presumably virally derived, they contain intervening sequences which don’t code for protein.

The use of rolling circle amplification is quite clever, as it will copy only circular DNA.
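
Why rolling circle amplification selects for circles can be seen in a toy simulation: a polymerase that falls off the end of a linear template but keeps going around a circular one.  The 4-base “genome” is obviously invented.

```python
# Toy illustration of why rolling circle amplification copies only
# circular DNA: on a circle the polymerase keeps going around, spooling
# out tandem repeats of the complement; on a linear template it simply
# runs off the end.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def amplify(template, circular, n_bases=30):
    product = []
    i = 0
    while len(product) < n_bases:
        if i == len(template):
            if not circular:
                break        # linear template: polymerase falls off the end
            i = 0            # circular template: go around again
        product.append(COMPLEMENT[template[i]])
        i += 1
    return "".join(product)

print(amplify("ATGC", circular=True))    # a 30-base tandem-repeat concatemer
print(amplify("ATGC", circular=False))   # stops after one 4-base pass
```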

Stanley Prusiner is sure to weigh in.  Remarkably, Prusiner was at Penn Med when I was, and was even in my med school fraternity (Nu Sigma Nu), primarily a place to eat lunch and dinner.  I probably ate with him, but have no recollection of him whatsoever.

Circular DNAs outside chromosomes are called plasmids. Bacteria are full of them. The best known eukaryote containing plasmids is yeast. Perhaps we have them as well. Manuelidis may be the first person to look.

Should you take aspirin after you exercise?

I just got back from a beautiful four and a half mile walk around a reservoir behind my house.  I always take 2 adult aspirin after outings like this.  A recent paper implies that perhaps I should not [ Proc. Natl. Acad. Sci. vol. 114 pp. 6675 – 6684 ’17 ].  Here’s why.

Muscle has a set of stem cells all its own, called satellite cells.  After injury they proliferate and make new muscle. One of the triggers for this is a prostaglandin known as PGE2 — https://en.wikipedia.org/wiki/Prostaglandin_E2 — clearly a delightful structure for the organic chemist to make.  It binds to a receptor on the satellite cell (called EP4R), following which all sorts of things happen, which will make sense to you if you know some cellular biochemistry.  Activation of EP4R triggers activation of the cyclic AMP (cAMP) phosphoCREB pathway.  This activates Nurr1, a transcription factor which causes cellular proliferation.

Why no aspirin? Because it inhibits cyclo-oxygenase which forms the 5 membered ring of PGE2.

I think you should still take aspirin afterwards, as the injury produced in the paper was pretty severe (muscle toxins, cold injury etc. etc.). Probably the weekend warriors among you don’t damage your muscles that much.

A few further points about aspirin and the NSAIDs

Now aspirin is an NSAID (NonSteroidal AntiInflammatory Drug) — along with a zillion others (Advil, Anaprox, Ansaid, Clinoril, Daypro, Dolobid, Feldene, Indocin — etc. etc., a whole alphabet’s worth). It is rather different in that it has an acetyl group on the benzene ring.  Could it be an acetylating agent for things like histones and transcription factors, producing far more widespread effects than those attributable to cyclo-oxygenase inhibition?   I’ve looked at the structures of a few of them — some have CH2-COOH moieties in them, which might be metabolized to an acetyl group, but I doubt it.  Naproxen (Anaprox, Naprosyn) does have an acetyl group — but the other 13 structures I looked at do not.

Another possible negative of aspirin after exercise is the fact that inhibition of platelet cyclo-oxygenase makes it harder for platelets to stick together and form clots (this is why it is used to prevent heart attack and stroke). So aspirin might result in more extensive micro-hemorrhages in muscle after exercise (if such things exist).

Götterdämmerung — The Twilight of the GWAS

Life may be like a well, but cellular biochemistry and gene function is like a mattress.  Push on it anywhere and everything changes, because it’s all hooked together.  That’s the only conclusion possible if a review of genome wide association studies (GWAS) is correct [ Cell vol. 169 pp. 1177 – 1186 ’17 ].

It’s been a scandal for years that GWAS studies, as they grow larger and larger, are still missing large amounts of the heritability of known very heritable conditions (e.g. schizophrenia, height).  This has been called the dark matter of the genome (we know it’s there, but we don’t know what it is).

If you’re a little shaky about how GWAS works have a look at https://luysii.wordpress.com/2014/08/24/tolstoy-rides-again-schizophrenia/ — it will come up again later in this post.

We do know that less than 10% of the SNPs found by GWAS lie in protein coding genes — this means either that they are randomly distributed, or that they are in regions controlling gene expression.  Arguing for randomness, the review states that the heritability contributed by each chromosome tends to be closely proportional to chromosome length.  Schizophrenia is known to be quite heritable, and monozygotic twins have a concordance rate of 40%.  Yet an amazing study (which is quoted but which I have not read) estimates that nearly 100% of all 1 megabase windows in the human genome contribute to schizophrenia heritability (Nature Genet. vol. 47 pp. 1385 – 1392 ’15). Given the 3.2 gigabase size of our genome, that’s 3,200 such windows.

Another example is the GIANT study of the heritability of height.  The study was based on 250,000 people, and some 697 genome-wide significant loci were found.  In aggregate they explain a mere SIXTEEN PERCENT.
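
The GIANT result can be mimicked with a toy simulation of an “omnigenic” trait: thousands of loci, each with a tiny effect, half the variance environmental.  Every number below is invented, but the qualitative point survives: even the strongest several hundred loci capture only a fraction of the trait variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_loci, n_top = 2_000, 3_200, 697

# Every locus carries a small random effect (the "mattress" picture)
genotypes = rng.binomial(2, 0.5, size=(n_people, n_loci)).astype(float)
effects = rng.normal(0.0, 1.0, size=n_loci)
genetic = genotypes @ effects
environment = rng.normal(0.0, genetic.std(), size=n_people)  # ~50% heritable
trait = genetic + environment

# Variance explained by the 697 loci with the largest true effects
top = np.argsort(-np.abs(effects))[:n_top]
prediction = genotypes[:, top] @ effects[top]
r2 = np.corrcoef(prediction, trait)[0, 1] ** 2
print(f"top {n_top} of {n_loci} loci explain {r2:.0%} of trait variance")
```

Even with the *true* effects known (which GWAS never has), the top hits fall well short of the full heritability; with estimated effects and significance thresholds the shortfall is worse.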

So what is going on?

It gets back to the link posted earlier. The title — “Tolstoy rides again” — isn’t a joke.  It refers to the opening sentence of Anna Karenina: “Happy families are all alike; every unhappy family is unhappy in its own way”.  So there are many routes to schizophrenia (and they are spread all over the genome).

The authors of the review think that larger and larger GWAS studies (some are planned with over a million participants) are not going to help and are probably a waste of money.  Whether the review is Götterdämmerung for GWAS isn’t clear, but the review is provocative. The review is new, and it will be interesting to see the response by the GWAS people.

So what do they think is going on?  They think that everything in organismal and cellular biochemistry, genetics and physiology is related to everything else.  Push on it in one place and, like a box spring mattress, everything changes.  The SNPs found outside the DNA coding for proteins are probably changing the control of protein synthesis of all the genes.

The dark matter of the genome is ‘the plan’ which makes the difference between animate and inanimate matter.   For more on this please see — https://luysii.wordpress.com/2015/12/15/it-aint-the-bricks-its-the-plan-take-ii/

Fascinating and enjoyable to be alive at such a time in genetics, biochemistry and molecular biology.

Happy Fourth of July

Only immigrants truly appreciate this country.  So it’s worth repeating an earlier post about them. Happy fourth of July.

Hitler’s gifts (and Russia’s gift)

In the summer of 1984 Barack Obama was at Harvard Law, his future wife was a Princeton undergraduate, and Edward Frenkel a 16 year old mathematical prodigy was being examined for admission to Moscow State University. He didn’t get in because he was Jewish. His blow by blow description of the 5 hour exam on pp. 28 – 38 of his book “Love & Math” is as painful to read as it must have been for him to write.

A year earlier the left in Europe had mobilized against the placement of Pershing missiles in Europe by president Reagan, already known there as a crude and witless former actor, but, unfortunately possessed of nuclear weapons. Tens of thousands marched. He had even called the Soviet Union an Evil Empire that year. Leftists the world over were outraged. How unsophisticated to even admit the possibility of evil. Articles such as “Reagan’s image in Europe does not help Allies in deploying American missiles” appeared in the liberal press.

The hatred of America is nothing new for the left.

Reset the clock to ’60 – ’62 when I was a grad student in the Harvard Chemistry department. The best place to meet women was the International house. It had a piano, and a Polish guy who played Chopin better than I did. It had a ping pong table, and another Polish guy who beat me regularly. The zeitgeist at Harvard back then, was that America was rather crude (the Ugly American was quite popular), boorish and unappreciative of the arts, culture etc. etc.

One woman I met was going on and on about this, particularly the condition of the artist in America, and how much better things were in Europe. I brought up Solzhenitsyn and the imprisonment of dissidents over there. Without missing a beat, she replied that this just showed how important the Russian government thought writers and artists were. This was long before Vietnam.

It was definitely a Saul on the road to Damascus moment for me. When the left began spelling America, Amerika in the 60s and 70s, I just ignored it.

Fast forward to this fall, and the Nobels. The 7th Chemistry Nobel bestowed on a department member when I was there went to Marty Karplus. The others were Woodward, Corey, Lipscomb, Gilbert, Hoffmann, Bloch. While Bill Lipscomb was a Kentucky gentleman to a T (and a great guy), Hoffmann spent World War II hiding out in an attic, his father being in a concentration camp (guess why). Konrad Bloch (who looked as teutonic as they come) also got out of Europe due to his birth. Lastly Karplus got out of Europe as a child for the same reason. Don Voet, a fellow grad student, whose parents got out of Europe for (I’ll make you guess), used to say that the Universal Scientific Language was — broken English.

So 3/7 of the Harvard Chemistry Nobels are Hitler and Europe’s gifts to America.

Russia, not to be outdone, gave us Frenkel. Harvard recognized his talent, made him a visiting professor at age 21, and later enrolled him in grad school so he could get a PhD. He’s now a Berkeley prof.

So the next time, someone touts the “European model” of anything, ask them about Kosovo, or any of this.

***

Those of you in training should consider the following. You really won’t know how good what you are getting is until 50 years or so have passed. That’s not to say Harvard Chemistry’s reputation wasn’t very good back then. Schleyer said ‘now you’re going to Mecca’ when he heard I’d gotten in.

Also to be noted is that all 7 future Nobelists in the early 60s weren’t resting on their laurels, but actively creating them. The Nobels all came later.

Remember entropy? — Take II

Organic chemists have a far better intuitive feel for entropy than most chemists. Condensations such as the Diels Alder reaction decrease it, as does ring closure. However, when you get to small ligands binding proteins, everything seems to be about enthalpy. Although binding energy is always talked about, mentally it appears to be enthalpy (H) rather than Gibbs free energy (G).

A recent fascinating editorial and paper [ Proc. Natl. Acad. Sci. vol. 114 pp. 4278 – 4280, 4424 – 4429 ’17 ] show how evolution has used entropy to determine when a protein (CzrA) binds to DNA and when it doesn’t. As usual, advances in technology permit us to see this (e.g. multidimensional heteronuclear nuclear magnetic resonance), which allows us to determine the motion of side chains (methyl groups), backbones etc. etc. When CzrA binds to DNA, methyl side chains on the protein move more, increasing entropy (deltaS), and as we all know the Gibbs free energy of reaction (deltaG) isn’t just enthalpy (deltaH) but deltaH – TdeltaS, so an increase in deltaS pushes deltaG lower, meaning the reaction proceeds in that direction.
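
In numbers (made up for illustration, not taken from the CzrA paper): even an unfavorable, positive deltaH is overcome by a large enough gain in entropy.

```python
# deltaG = deltaH - T*deltaS: entropy alone can make binding favorable.
# The numbers below are invented for illustration.
def delta_G(dH_kJ, dS_J_per_K, T=298.0):
    """Free energy change in kJ/mol (deltaS given in J/mol/K)."""
    return dH_kJ - T * dS_J_per_K / 1000.0

# Unfavorable enthalpy (+10 kJ/mol) plus a 100 J/mol/K entropy gain:
print(round(delta_G(10.0, 100.0), 1))   # -19.8 kJ/mol: binding proceeds
```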

Binding of Zinc redistributes these side chain motions so that entropy decreases, and the protein moves off DNA. The authors call this dynamics-driven allostery. The fascinating thing is that this may happen without any conformational change of CzrA.

I’m not sure that molecular dynamics simulations are good enough to pick this up. Fortunately newer NMR techniques can measure it. Just another complication for the hapless drug chemist thinking about protein ligand interactions.

A recent paper [ Proc. Natl. Acad. Sci. vol. 114 pp. 6563 – 6568 ’17 ] went into more detail about measuring side chain motions as a surrogate for conformational entropy, which can now be measured by NMR.  They define complete restriction of the methyl group symmetry axis as 1 and complete disorder as 0, and state that ‘a variety of models’ imply that the value is LINEARLY related to conformational entropy, making it an ‘entropy meter’.  They state that measurement of fast internal side chain motion is largely restricted to the methyl group — this makes me worry that other side chains (which they can’t measure) are moving as well and contributing to entropy.

The authors studied some 28 protein/ligand systems, and found that the contribution of conformational entropy to ligand binding can be favorable, negligible or unfavorable.

What is bothersome to the authors (and to me) is that there were no obvious structural correlates between the degree of conformational entropy and protein structure.  So it’s something you measure, not something you predict, making life even more difficult for the computational chemist studying protein ligand interactions.

Correctly taken to task by two readers and some breaking news

I should have amended the previous post to say I mistrust unverified models.  Here are two comments:

#1 Andyextance

  • “Leaving aside the questions of the reliability of models in different subjects, and whether all of your six reasons truly relate to models, I have one core question: Without models, how can we have any idea about what the future might hold? Models may not always be right – but as long as they have some level of predictive skill they can often at least be a guide.”

    Absolutely correct — it’s all about prediction, not plausibility.

#2 Former Bell Labs denizen

“And yet you board a commercial airliner without hesitation, freely trusting your life to the models of aerodynamics, materials science, control system theory, electronics, etc. that were used in designing the aircraft. Similar comments apply to entering a modern skyscraper, or even pushing the brake pedal on your automobile.
Perhaps what you are really saying is that you don’t trust models until their correctness is demonstrated by experience; after that, you trust them. Hey, nothing to disagree with there.”
Correct again.
Breaking news
This just in — too late for yesterday’s post — the climate models have overestimated the amount of warming to be expected this century. The source is an article in
Nature Geoscience (2017) doi:10.1038/ngeo2973 — behind a paywall — but here’s the abstract:
In the early twenty-first century, satellite-derived tropospheric warming trends were generally smaller than trends estimated from a large multi-model ensemble. Because observations and coupled model simulations do not have the same phasing of natural internal variability, such decadal differences in simulated and observed warming rates invariably occur. Here we analyse global-mean tropospheric temperatures from satellites and climate model simulations to examine whether warming rate differences over the satellite era can be explained by internal climate variability alone. We find that in the last two decades of the twentieth century, differences between modelled and observed tropospheric temperature trends are broadly consistent with internal variability. Over most of the early twenty-first century, however, model tropospheric warming is substantially larger than observed; warming rate differences are generally outside the range of trends arising from internal variability. The probability that multi-decadal internal variability fully explains the asymmetry between the late twentieth and early twenty-first century results is low (between zero and about 9%). It is also unlikely that this asymmetry is due to the combined effects of internal variability and a model error in climate sensitivity. We conclude that model overestimation of tropospheric warming in the early twenty-first century is partly due to systematic deficiencies in some of the post-2000 external forcings used in the model simulations.
 
Unfortunately the abstract doesn’t quantify ‘generally smaller’.
 
Models whose predictions are falsified by data are not to be trusted.
 
Yet another reason Trump was correct to get the US out of the Paris accords — in addition to the reasons he used: no method of verification, no penalties for failure to reduce CO2, etc. etc.  The US would tie itself in economic knots trying to live up to it, while other countries would emit pious goals for reduction and do very little.
In addition, I find it rather intriguing that the article was not published in Nature Climate Change — http://www.nature.com/nclimate/index.html — which would seem to be the appropriate place.  Perhaps it’s just too painful for them.

I mistrust models.

I have no special mistrust of climate models, I mistrust all models of complex systems.  Here are six reasons why.

Reason #1: My cousin runs an advisory service for institutional investors (hedge funds, retirement funds, stock market funds etc. etc.).  Here is the beginning of his latest post, 16 June ’17:

There were 3 great reads yesterday. First was Neil Irwin’s article in the NY Times “Janet Yellen, the Fed and the Case of the Missing Inflation.”  He points out that Yellen is a labor market scholar who anticipated the sharp decline in the unemployment rate. However the models on which the Fed has relied anticipate higher levels of inflation. Yet every inflation measure that the Fed uses has fallen well short of the Fed’s 2% stability rate. If they continue raising short-term rates in the face of low inflation, then “real” rates could restrain future economic growth.

Second was Greg Ip’s article “Lousy Raise? It Might Not Get Better.” Greg makes the point that tight labor markets are a global phenomenon in many industrialized countries, yet wage inflation remains muted. Writes Greg “If a labor market this tight can’t generate better pay, quite possibly it never will in Germany & Japan.”

Third was an article by Glenn Hubbard (Dean of Columbia Business School & former chairman of the Council of Economic Advisors under George W. Bush). His Wall Street Journal op-ed was titled “How to Keep the Fed from Following its Models off a Cliff.”  Hubbard suggests that Fed officials should interact more with market participants and business people. And Fed governors should be selected because of their varied life experiences, and they should encourage a healthy skepticism of prevailing economic models.

Serious money was spent developing these models.  Do you think that climate is in some way simpler than the US economy, so that climate models are more likely to be accurate?  I do not.

Reason #2: Americans are getting fatter yet living longer, contradicting the model that being mildly overweight is bad for you.  It is far too long to go into so here’s the link — https://luysii.wordpress.com/2013/05/30/something-is-wrong-with-the-model-take-2/.

The first part is particularly fascinating, in that data showed that overweight (not obese) people tended to live longer.  The article describes how people who had spent their research careers telling the public that being overweight was bad tried to discount the data. The best quote in the article is the following: “We’re scientists. We pay attention to data, we don’t try to un-explain them.”

Reason #3: The economic predictions of the Congressional Budget Office on just about anything –inflation, gross national product, economic growth, the deficit — are consistently wrong — http://www.ncpa.org/sub/dpd/?Article_ID=21516.

Addendum 28 June: “White House economists overestimated annual economic growth by about 80 percent on average for a six year stretch during Barack Obama’s presidency, according to FreedomWorks economic consultant Stephen Moore.

Economists predicted growth between 3.2 to 4.6 percent for the years 2010 through 2015. Actual economic growth never hit above 2.6 percent.”

Reason #4:  Animal models of stroke.  There were at least 60 in which some therapy or other was of benefit.  None of them worked in people. It got so bad I stopped reading the literature about it.  We still have no useful treatment for garden variety strokes.

Reason #5:  The Club of Rome — a dire prediction based on a computer model which got a lot of play in the 70s.  For details see https://luysii.wordpress.com/2017/06/01/a-bit-of-history/.  The post also has a lot about “The Population Bomb” and its failed predictions, and a review of a book about “The Bet” between Paul Ehrlich and Julian Simon.

Reason #6: Live by the model, die by the model. A fascinating book, “Shattered”, about the Hillary Clinton campaign explains why the campaign did no polling in the final 3 weeks. The man running the ‘data analytics’ (translation: model), Robby Mook, thought the analytics were better and more accurate (p. 367).

 

The best laid plans of mice and men

I sent a copy of the previous post (reprinted below) about an idea to diagnose and treat chronic fatigue syndrome to Dr. Norman Sharpless, the author of the Cell review on cellular senescence.  He thought the idea was “great”; and, even better, he ran the lab which did the test I wanted to try.  I also sent a copy to a patient group, the “Solve ME/CFS Initiative”, and they want to use the post on their website.

Sharpless noted that the problem with ideas like this is accumulating patients, something the patient group could probably provide.  So all went well until 8 days ago, when Dr. Sharpless was named by President Trump to head the National Cancer Institute, with its 4.5 billion dollar budget.  Being a full prof at the University of North Carolina Medical School, he would have been the ideal individual to run the study (or find someone to do it), but he now has far bigger fish to fry.

After I wrote to congratulate him, he wrote back reiterating that the idea was good, but he said he had to sever all connections with the lab he founded due to conflict of interest considerations.  He did give me the name of someone to contact there, which is where the matter stands presently.

Since the idea is based on the correlation between the amount of fatigue after chemotherapy with the level of a white cell protein (p16^INK4a), he would have had no problem accumulating chemotherapy patients as head of NCI, but again the spectre of conflict of interest rears its ugly head.  Repeating the chemotherapy study to make sure the results are in fact real is the first order of business.

So there you have a research idea, endorsed by the new head of the NCI.  I am a retired neurologist, who no longer has a license to practice medicine (but who doesn’t need a license to think).

If you’re an academic out there, looking for something to do, write up a grant proposal.  The current treatments do help people live with chronic fatigue syndrome, but they are in no sense treatments of the underlying problem.

Here is the original post

How to (possibly) diagnose and treat chronic fatigue syndrome (myalgic encephalomyelitis)

As a neurologist I saw a lot of people who were chronically tired and fatigued, because neurologists deal with muscle weakness and diseases like myasthenia gravis which are associated with fatigue.  Once I ruled out neuromuscular disease as a cause, I had nothing to offer then (nor did medicine).  Some were undoubtedly neurotic, but there was little question in my mind that some of them had something wrong that medicine just hadn’t figured out.  Not that it hasn’t been trying.

Infections of almost any sort are associated with fatigue, probably because components of the inflammatory response cause it.  Anyone who’s gone through mononucleosis knows this.    The long search for an infectious cause of chronic fatigue syndrome (CFS) has had its ups and downs — particularly downs — see https://luysii.wordpress.com/2011/03/25/evil-scientists-create-virus-causing-chronic-fatigue-syndrome-in-lab/

At worst many people with these symptoms are written off as crazy; at best, depressed  and given antidepressants.  The fact that many of those given antidepressants feel better is far from conclusive, since most patients with chronic illnesses are somewhat depressed.

Even if we didn’t have a treatment, just having a test which separated sufferers from normal people would at least be of some psychological help, by telling them that they weren’t nuts.

Two recent papers may actually have the answer. Although neither paper dealt with chronic fatigue syndrome directly, and I can find no studies in the literature linking what I’m about to describe to CFS, they at least imply that there could be a diagnostic test for CFS, and a possible treatment as well.

Because I expect that many people with minimal biological background will be reading this, I’ll start by describing the basic biology of cellular senescence and death.

Background:  Most cells in our bodies are destined to die long before we do. Neurons are the longest lasting (they last essentially as long as we do).  The lining of the intestines is renewed weekly.  No circulating blood cell lasts more than half a year.

Cells die in a variety of ways.  Some are killed (by infections, heat, toxins).  This is called necrosis. Others voluntarily commit suicide (this is called apoptosis).   Sometimes a cell under stress undergoes cellular senescence, a state in which it doesn’t die, but doesn’t reproduce either.  Such cells have a variety of biochemical characteristics — they are resistant to apoptosis, they express molecules which prevent them from proliferating and most importantly, they secrete proinflammatory molecules (this is called the Senescence Associated Secretory Phenotype — SASP).

At first the very existence of the senescent state was questioned, but exist it does.  What is it good for?  Theories abound, one being that mutation is one cause of stress, and stopping mutated cells from proliferating prevents cancer. However, senescent cells are found during fetal life; and they are almost certainly important in wound healing.  They are known to accumulate the older you get and some think they cause aging.

Many stresses induce cellular senescence.  The one of interest to us is chemotherapy for cancer, something obviously good as a cancer cell turned senescent has stopped proliferating.   If you know anyone who has undergone chemotherapy, you know that fatigue is almost invariable.

One biochemical characteristic of the senescent cell is increased levels of a protein called p16^INK4a, which helps stop cellular proliferation.  While p16^INK4a can easily be measured in tissue biopsies, tissue biopsies are inherently not easy. Fortunately it can also be measured in circulating blood cells.

The following study [ Cancer Discov. vol. 7 pp. 165 – 176 ’17 ] looked at 89 women with breast cancer undergoing chemotherapy. They correlated the amount of fatigue experienced with the levels of p16^INK4a in a type of circulating white blood cell (T lymphocyte).  There was a 44% incidence of fatigue in the highest quartile of p16^INK4a levels, vs. a 5% incidence in the lowest. The cited paper didn’t mention CFS, nor did the highly technical but excellent review on which much of the above is based [ Cell vol. 169 pp. 1000 – 1011 ’17 ].
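To get a feel for how large that difference is, here is a toy relative-risk calculation. The per-quartile counts are assumed (89 women is roughly 22 per quartile) and chosen only to approximate the reported percentages; the paper's exact counts are not given here:

```python
# Toy relative-risk calculation for the quartile comparison above.
# The counts are ASSUMED (~22 women per quartile of 89), picked to
# approximate the reported 44% and 5% fatigue incidences.
def incidence(n_fatigued, n_total):
    """Fraction of a quartile reporting fatigue."""
    return n_fatigued / n_total

top = incidence(10, 22)      # assumed 10 of 22 fatigued (~45%)
bottom = incidence(1, 22)    # assumed 1 of 22 fatigued (~5%)

relative_risk = top / bottom
print(f"top quartile: {top:.0%}, bottom quartile: {bottom:.0%}, "
      f"relative risk: {relative_risk:.0f}x")
```

Under these assumed counts, women in the top quartile were roughly ten times as likely to report fatigue as those in the bottom quartile.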

But it is definitely time to measure p16^INK4a levels in patients with chronic fatigue and compare them to people without it.  This may be the definitive diagnostic test, if people with CFS show higher levels of p16^INK4a.

If this turns out to be the case, then there is a logical therapy for chronic fatigue syndrome.  As mentioned above, senescent cells are resistant to apoptosis (voluntary suicide).  What stops these cells from suicide? Naturally occurring cellular suicide inhibitors (with names like BCL2, BCL-XL, BCL-W) do so.  Drugs called senolytics already exist to target these inhibitors, causing senescent cells to commit suicide.

So if excessive senescent cells are the cause of CFS, then killing them should make things better. Senolytics do exist, but there are problems: one couldn’t be used because of side effects.  Others exist (one such is venetoclax, which the FDA has approved for leukemia), but it isn’t as potent.

So there is potentially both a diagnostic test and a treatment for CFS.

The initial experiment should be fairly easy for researchers to do — just corral some CFS patients and controls and run a test for p16^INK4a levels in their blood cells. It is also easy on the patients, as only a blood draw is involved.
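A minimal sketch of how that first comparison might look, with simulated numbers (no such CFS dataset exists yet) and a hand-rolled Welch t statistic so nothing beyond the Python standard library is assumed:

```python
# Hypothetical sketch of the proposed experiment: compare blood-cell
# p16^INK4a levels in CFS patients vs. healthy controls.
# All values are SIMULATED -- no real patient data appears here.
import random
import statistics

random.seed(42)  # reproducible simulation

# Simulate levels under the hypothesis that CFS patients carry more
# senescent cells (and hence higher p16^INK4a).
cfs_patients = [random.gauss(2.0, 0.5) for _ in range(30)]
controls = [random.gauss(1.5, 0.5) for _ in range(30)]

def welch_t(a, b):
    """Welch t statistic for two independent samples (unequal variances)."""
    mean_diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return mean_diff / se

t = welch_t(cfs_patients, controls)
print(f"CFS mean {statistics.mean(cfs_patients):.2f}, "
      f"control mean {statistics.mean(controls):.2f}, t = {t:.2f}")
```

A large positive t would support the hypothesis; a real study would of course compute a proper p value and pre-register the analysis.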

This, in itself, would be great, but there is far more to think about. 

If CFS patients have too many senescent cells, getting rid of them — although (hopefully) symptomatically beneficial — will not get rid of what caused the senescent cells to accumulate in the first place. In addition, getting rid of all of them at once would probably cause huge problems, something similar to the tumor lysis syndrome — https://en.wikipedia.org/wiki/Tumor_lysis_syndrome.

But these are problems CFS patients and their physicians would love to have.