Category Archives: Neurology & Psychiatry

The Silence is Deafening

A while back I wrote a post about a devastating paper which said that studies of the default mode of brain activity (as seen by functional magnetic resonance imaging, fMRI) had failed to make sure that the subjects were actually awake during the study (and most of them weren’t). The post is copied here after the ***

Here’s a paper from July ’14 [ Proc. Natl. Acad. Sci. vol. 111 pp. 10341 - 10346 '14 ]. Functional brain networks are typically mapped in a time averaged sense, based on the assumption that functional connections remain stationary in the resting brain. Typically, resting state fMRI (rsfMRI — which is what shows the default network) is sampled at a resolution of 2 seconds or slower.

However the human connectome project (HCP) has high-quality rsfMRI data at subsecond resolution (using multiband accelerated echo planar imaging). This work used a sliding window approach, mapping the evolution of functional brain networks over a continuous 15 minute interval at subsecond resolution in 10 people. I wrote the lead author 21 July ’14 to ask how he knew the subjects weren’t asleep during this time.

No response. The silence is deafening.

Another more recent paper [ Proc. Natl. Acad. Sci. vol. 111 pp. 14259 - 14264 '14 ] had interesting things to say about brain maturation in attention deficit/hyperactivity disorder — here’s the summary:

It was proposed that individuals with attention-deficit/hyperactivity disorder (ADHD) exhibit delays in brain maturation. In the last decade, resting state functional imaging has enabled detailed investigation of neural connectivity patterns and has revealed that the human brain is functionally organized into large-scale connectivity networks. In this study, we demonstrate that the developing relationships between default mode network (DMN) and task positive networks (TPNs) exhibit significant and specific maturational lag in ADHD. Previous research has found that individuals with ADHD exhibit abnormalities in DMN–TPN relationships. Our results provide strong initial evidence that these alterations arise from delays in typical maturational patterns. Our results invite further investigation into the neurobiological mechanisms in ADHD that produce delays in development of large-scale networks.

I wrote the lead author a few days ago to ask how he knew the subjects weren’t asleep during this time.

No response. The silence is deafening.

***

If you Google “default mode network” you get 32 million hits in under a second. This is what the brain is doing when we’re sitting quietly, not carrying out some task. If you don’t know how we measure it using functional MRI, skip to the #### and then come back. I’m not a fan of functional MRI (fMRI): the pictures it produces are beautiful and seductive, but unfortunately not terribly repeatable.

If [ Neuron vol. 82 pp. 695 - 705 '14 ] is true, then all the work on the default network should be repeated.

Why?

Because they found that less than half of 71 subjects studied were stably awake after 5 minutes in the scanner. I.e., they were actually asleep part of the time.

How can they say this?

They used polysomnography — which simultaneously measures tons of things (eye movements, oxygen saturation, EEG, muscle tone, respiration, pulse) and is the gold standard for sleep studies — on the subjects while they were in the MRI scanner.

You don’t have to be a neuroscientist to know that cognition is rather different in wake and sleep.

Pathetic.

####

There are now noninvasive methods to study brain activity in man. The most prominent one is called BOLD, and is based on the fact that blood flow increases way past what is needed with increased brain activity. This was actually noted by Wilder Penfield operating on the brain for epilepsy in the 30s. When the patient had a seizure on the operating table (they could keep things under control by partially paralyzing the patient with curare) the veins in the area producing the seizure turned red. Recall that oxygenated blood is red while the deoxygenated blood in veins is darker and somewhat blue. This implied that more blood was getting to the convulsing area than it could use.

BOLD depends on slight differences in the way oxygenated hemoglobin and deoxygenated hemoglobin interact with the magnetic field used in magnetic resonance imaging (MRI). The technique has had a rather checkered history, because very small differences must be measured, and there is lots of manipulation of the raw data (never seen in papers) to be done. 10 years ago functional magnetic resonance imaging (fMRI) was called pseudocolor phrenology.

Some sort of task or sensory stimulus is given and the parts of the brain showing increased hemoglobin + oxygen are mapped out. As a neurologist, I was naturally interested in this work. Very quickly, I smelled a rat. The authors of all the papers always seemed to confirm their initial hunch about which areas of the brain were involved in whatever they were studying. Science just isn’t like that. Look at any issue of Nature or Science and see how many results were unexpected. Results were largely unreproducible. It got so bad that an article in Science 2 August ’02 p. 749 stated that neuroimaging (e.g. functional MRI) has a reputation for producing “pretty pictures” but not replicable data. It has been characterized as pseudocolor phrenology (or words to that effect).

What was going on? The data was never actually shown, just the authors’ manipulation of it. Acquiring the data is quite tricky — the slightest head movement alters the MRI pattern. Also the difference in NMR signal between hemoglobin without oxygen and hemoglobin with oxygen is small (only 1 – 2%). Since the technique involves subtracting two data sets for the same brain region, the noise in the two sets compounds (their variances add), making the tiny difference even harder to see.
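To get a feeling for just how unforgiving the subtraction is, here’s a toy simulation (all numbers invented for illustration — in the right ballpark, but not taken from any particular fMRI paper):

```python
import numpy as np

# Toy model: a 1.5% BOLD signal change buried in 1% scanner noise,
# detected by subtracting a 'rest' image from a 'task' image.
# Subtracting two noisy data sets adds their variances, so the noise
# in the difference is sqrt(2) times the noise in either image alone.
rng = np.random.default_rng(0)
baseline = 1000.0        # arbitrary MR signal units
signal_change = 0.015    # a 1.5% BOLD effect
noise_sd = 10.0          # 1% noise per image

n_voxels = 100_000
rest = baseline + rng.normal(0.0, noise_sd, n_voxels)
task = baseline * (1.0 + signal_change) + rng.normal(0.0, noise_sd, n_voxels)
diff = task - rest

print(f"true effect:            {baseline * signal_change:.1f}")
print(f"mean of difference:     {diff.mean():.1f}")
print(f"noise sd of difference: {diff.std():.1f} (vs. {noise_sd} per image)")
```

The effect is about the same size as the noise in any single voxel, which is why so much averaging and massaging of the raw data is needed before the pretty pictures appear.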

Now we know why hot food tastes different

An absolutely brilliant piece of physical chemistry explained a puzzling biologic phenomenon that organic chemistry was powerless to illuminate.

First, a fair amount of background

Ion channels are proteins present in the cell membrane of all our cells. In neurons they are responsible for maintaining an electrical potential across the membrane, which can change abruptly, causing a nerve cell to fire an impulse. Functionally, ligand activated ion channels are pretty easy to understand. A chemical binds to them, they open, and the neuron fires (or a muscle contracts — same thing). The channels don’t let everything in, just particular ions. Thus one type of channel which binds acetylcholine lets in sodium (not potassium, not calcium), which causes the cell to fire impulses. The GABA[A] receptor (the ion channel for gamma-aminobutyric acid) lets in chloride ions (and little else), which inhibits the neuron carrying it from firing. (This is why the benzodiazepines and barbiturates are anticonvulsants.)

Since ion channels are full of amino acids, some of which have charged side chains, it’s easy to see how a change in electrical potential across the cell membrane could open or shut them.

By the way, the potential is huge although it doesn’t seem like much. It is usually given as 70 milliVolts (inside negatively charged, outside positively charged). Why is this a big deal? Because the electric field across our membranes is enormous. 70 milliVolts is 7 x 10^-2 volts. The cell membrane is quite thin — just 70 Angstroms, which is 7 nanoMeters (7 x 10^-9 meters). Divide 7 x 10^-2 volts by 7 x 10^-9 meters and you get a field of 10,000,000 Volts/meter.
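For anyone who wants to check the arithmetic, here it is spelled out:

```python
# The membrane electric field, exactly as computed above.
membrane_potential = 70e-3   # Volts (70 milliVolts)
membrane_thickness = 7e-9    # meters (70 Angstroms = 7 nanoMeters)

field = membrane_potential / membrane_thickness
print(f"{field:,.0f} Volts/meter")   # 10,000,000 Volts/meter
```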

Now for the main course. We easily sense hot and cold. This is because we have a bunch of different ion channels which open in response to different temperatures. All this without neurotransmitters binding to them, or changes in electric potential across the membrane.

People had searched for some particular sequence of amino acids common to the channels to no avail (this is the failure of organic chemistry).

In a brilliant paper, entropy was found to be the culprit. Chemists are used to considering entropy effects (primarily on reaction kinetics, but on equilibria as well). What happens is that in the open state a large number of hydrophobic amino acids are exposed to the extracellular space. To accommodate them (e.g. to solvate them), water around them must be more ordered, decreasing entropy. This, of course, is why oil and water don’t mix.

As all the chemists among us should remember, the equilibrium constant has components due to kinetic energy (e.g. heat, e.g. enthalpy) and due to entropy.

The entropy term must be multiplied by the temperature, which is where the temperature sensitivity of the equilibrium constant (in this case open channel/closed channel) comes in. Remember changes in entropy and enthalpy work in opposite directions —

delta G (Gibbs free energy) = delta H (enthalpy) - T * delta S (entropy)

Here’s the paper [ Cell vol. 158 pp. 977 - 979, 1148 - 1158 '14 ]. They note that if a large number of buried hydrophobic groups become exposed to water on a conformational change in the ion channel, an increased heat capacity should be produced, due to water ordering to solvate the hydrophobic side chains. This should confer a strong temperature dependence on the equilibrium constant for the reaction. Exposing just 20 hydrophobic side chains in a tetrameric channel should do the trick. The side chains don’t have to be localized in a particular area (which is why organic chemists and biochemists couldn’t find a stretch of amino acids conferring cold or heat sensitivity — it didn’t matter where the hydrophobic amino acids were, as long as there were enough of them, somewhere).

In some way this entwines enthalpy and entropy making temperature dependent activation U shaped rather than monotonic. So such a channel is in principle both hot activated and cold activated, with the position of the U along the temperature axis determining which activation mode is seen at experimentally accessible temperatures.
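Here’s a minimal sketch of the mechanism in code, using made-up (but thermodynamically plausible) parameters rather than the values from the Cell paper. The only point is to show how a large heat capacity change produces a U-shaped open probability:

```python
import math

# Exposing buried hydrophobic side chains on opening adds a large heat
# capacity change dCp, which makes both enthalpy and entropy functions
# of temperature:
#   dH(T) = dH0 + dCp*(T - T0)
#   dS(T) = dS0 + dCp*ln(T/T0)
# so dG(T) = dH(T) - T*dS(T) is curved in T instead of linear.
R = 8.314e-3   # gas constant, kJ/(mol*K)
T0 = 300.0     # reference temperature, K
dH0 = 20.0     # kJ/mol: channel mostly closed at T0 (invented value)
dS0 = 0.0      # kJ/(mol*K): chosen to put the dG maximum at T0
dCp = 10.0     # kJ/(mol*K): invented stand-in for ~20 exposed side chains

def p_open(T):
    dG = (dH0 + dCp * (T - T0)) - T * (dS0 + dCp * math.log(T / T0))
    return 1.0 / (1.0 + math.exp(dG / (R * T)))

for T in (260, 280, 300, 320, 340):
    print(f"{T} K: open probability {p_open(T):.3f}")
# The channel is open at BOTH ends of the range and closed in the
# middle -- hot activated and cold activated, just as described above.
```

Shift dH0 and dS0 and the bottom of the U slides along the temperature axis, which is why one mechanism can yield either a ‘cold’ or a ‘hot’ channel at experimentally accessible temperatures.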

All very nice, but how many beautiful theories have we seen get crushed by ugly facts. If they really understood what is going on with temperature sensitivity, they should be able to change a cold activated ion channel to a heat activated one (by mutating it). If they really, really understood things, they should be able to take a run of the mill temperature INsensitive ion channel and make it temperature sensitive. Amazingly, the authors did just that.

Impressive. Read the paper.

This harks back to the days when theories of organic reaction mechanisms were tested by building molecules to test them. When you made a molecule that no one had seen before and predicted how it would react you knew you were on to something.

Thrust and Parry about memory storage outside neurons.

First the post of 23 Feb ’14 discussing the paper (between *** and &&& in case you’ve read it already)

Then some of the rather severe criticism of the paper.

Then some of the reply to the criticisms

Then a few comments of my own, followed by yet another old post about the chemical insanity neuroscience gets into when they apply concepts like concentration to very small volumes.

Enjoy
***
Are memories stored outside of neurons?

This may turn out to be a banner year for neuroscience. Work discussed in the following older post is the first convincing explanation of why we need sleep that I’ve seen: https://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

An article in Science (vol. 343 pp. 670 - 675 '14) on some fairly obscure neurophysiology at the end throws out (almost as an afterthought) an interesting idea of just where, and in what chemical form, memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material needed to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories from the past half century, seriously flawed in my opinion, of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory) where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3, … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory.
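For the curious, a delay line is trivial to mimic in code — the ‘memory’ exists only as long as the pattern keeps circulating, which is exactly why general anesthesia is such a problem for the theory:

```python
from collections import deque

# A toy delay line: the bit pattern survives only because it keeps
# being re-injected at the far end, like a reverberating circuit.
line = deque([1, 0, 1, 1, 0, 0, 1, 0])   # the stored 'memory'

def tick(line):
    line.append(line.popleft())   # bit emerges at one end, re-enters the other

for _ in range(3 * len(line)):    # three full circulations
    tick(line)
print(list(line))
# Pattern intact after circulating.  In a real delay line the bits
# existed only as pulses in flight, so interrupting the loop (read:
# anesthesia) erased everything.
```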

#2 CaMKII — more plausible. There’s lots of it in brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To first start doing so calcium levels inside the neuron must rise. The enzyme is complicated, being comprised of 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate, and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The other 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true, but in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the brain to the chloride ion. When it flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

We’ve concentrated so much on neurons, that we may have missed something big. In a similar vein, the function of sleep may be to wash neurons free of stuff built up during the day outside of them.

&&&

In the 5 September ’14 Science (vol. 345 p. 1130) 6 researchers from Finland, Case Western Reserve and U. California (Davis) basically say that the paper conflicts with fundamental thermodynamics so severely that “Given these theoretical objections to their interpretations, we choose not to comment here on the experimental results”.

In more detail: “If Cl− were initially in equilibrium across a membrane, then the mere introduction of immobile negative charges (a passive element) at one side of the membrane would, according to their line of thinking, cause a permanent change in the local electrochemical potential of Cl−, thereby leading to a persistent driving force for Cl− fluxes with no input of energy.” This essentially accuses the authors of inventing a perpetual motion machine.

Then in a second letter, two more researchers weigh in (same page) — “The experimental procedures and results in this study are insufficient to support these conclusions. Contradictory results previously published by these authors and other laboratories are not referred to.”

The authors of the original paper don’t take this lying down. On the same page they discuss the notion of the Donnan equilibrium and say they were misinterpreted.

The paper and the 3 letters all discuss the chloride concentration inside neurons, which they call [Cl-]i. The problem with this sort of thinking (if you can call it that) is that it extrapolates the notion of concentration to very small volumes (such as a dendritic spine) where it isn’t meaningful. This goes on all the time in neuroscience. While between any two small rational numbers there is another, matter can be sliced only so thinly without getting down to the discrete atomic level. At this level concentration (which is basically a ratio between two very large numbers of molecules, e.g. solute and solvent) simply doesn’t apply.

Here’s a post on the topic from a few months ago. It contains a link to another post showing that even Nobelists have chemical feet of clay.

More chemical insanity from neuroscience

The current issue of PNAS contains a paper (vol. 111 pp. 8961 – 8966, 17 June ’14) which uncritically quotes some work done back in the 80’s and flatly states that synaptic vesicles http://en.wikipedia.org/wiki/Synaptic_vesicle have a pH of 5.2 – 5.7. Such a value is meaningless. Here’s why.

A pH of 5 means that there are 10^-5 Moles of H+ per liter or 6 x 10^18 actual ions/liter.

Synaptic vesicles have an ‘average diameter’ of 40 nanoMeters (400 Angstroms to the chemist). Most of them are nearly spherical. So each has a volume of

4/3 * pi * (20 * 10^-9)^3 = 33,510 * 10^-27 = 3.4 * 10^-23 liters. 20 rather than 40 because volume involves the radius.

So each vesicle contains 6 * 10^18 * 3.4 * 10^-23 = 20 * 10^-5 = .0002 ions.

This is similar to the chemical blunders on concentration in the nano domain committed by a Nobelist. For details please see — http://luysii.wordpress.com/2013/10/09/is-concentration-meaningful-in-a-nanodomain-a-nobel-is-no-guarantee-against-chemical-idiocy/

Didn’t these guys ever take Freshman Chemistry?

Addendum 24 June ’14

Didn’t I ever take it? John wrote the following this AM:

Please check the units in your volume calculation. With r = 10^-9 m, then V is in m^3, and m^3 is not equal to L. There’s 1000 L in a m^3.
Happy Anniversary by the way.

To which I responded

Ouch! You’re correct of course. However even with the correction, the results come out to .2 free protons (or H3O+) per vesicle, a result that still makes no chemical sense. There are many more protons in the vesicle, but they are buffered by the proteins and the transmitters contained within.
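Here’s the whole corrected calculation in one place, for anyone who wants to fiddle with the numbers:

```python
import math

# Free protons in one synaptic vesicle at pH 5 (units corrected).
avogadro = 6.022e23
radius = 20e-9                            # meters: a 40 nm diameter vesicle
volume_m3 = (4.0 / 3.0) * math.pi * radius**3
volume_L = volume_m3 * 1000.0             # John's point: 1 m^3 = 1000 L

protons_per_L = 10.0**-5 * avogadro       # pH 5 means 10^-5 moles H+ per liter
protons_per_vesicle = protons_per_L * volume_L
print(f"{protons_per_vesicle:.2f} free protons per vesicle")   # about 0.2
```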

Physics to the rescue

It’s enough to drive a medicinal chemist nuts. General anesthetics are an extremely wide ranging class of chemicals, from Xenon (which has essentially no chemistry) to the steroid alfaxalone (56 atoms). How can they possibly have a similar mechanism of action? It’s long been noted that anesthetic potency is proportional to lipid solubility, so that’s at least something. Other work has noted that enantiomers of some anesthetics vary in potency, implying that they are interacting with something optically active (like proteins). However, you should note that sphingosine, which is part of many cell membrane lipids (gangliosides, sulfatides etc. etc.), contains two optically active carbons.

A great paper [ Proc. Natl. Acad. Sci. vol. 111 pp. E3524 - E3533 '14 ] notes that although Xenon has no chemistry it does have physics. It facilitates electron transfer between conductors. The present work does some quantum mechanical calculations purporting to show that Xenon can extend the highest occupied molecular orbital (HOMO) of an alpha helix so as to bridge the gap to another helix.

This paper shows that Xe, SF6, NO and chloroform cause rapid increases in the electron spin content of Drosophila. The changes are reversible. Anesthetic resistant mutant strains (mutant in what protein?) show a different pattern of spin responses to anesthetics.

So they think general anesthetics might work by perturbing the electronic structure of proteins. It’s certainly a fresh idea.

What is carrying the anesthetic induced increase in spin? Speculations are bruited about. They don’t think the spin changes are due to free radicals. They favor changes in the redox state of metals. Could it be due to electrons in melanin (the prevalent stable free radical in flies)? Could it be changes in spin polarization? Electrons traversing chiral materials can become spin polarized.

Why this should affect neurons isn’t known, and further speculations are given (1) electron currents in mitochondria, (2) redox reactions where electrons are used to break a disulfide bond.

The article notes that spin changes due to general anesthetics differ in anesthesia resistant fly mutants.

Fascinating paper, and Mark Twain said it best: “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Tolstoy rides again — Schizophrenia

“A field plagued by inconsistency, and perhaps even a degree of charlatanism” — strong stuff indeed [ Neuron vol. 83 pp. 760 - 763 '14 ]. They are talking about studies attempting to find the genetic causes of schizophrenia.

This was the state of play four and a half years ago (in a post of April 2010)

“Happy families are all alike; every unhappy family is unhappy in its own way.” Thus beginneth Anna Karenina. That wasn’t supposed to happen with hereditary disease. The examples we had before large scale DNA sequencing became cheap were basically one gene causing one disease. Two of the best known cases were sickle cell anemia and cystic fibrosis. In the former, a change in a single position (nucleotide) of DNA caused a switch of one amino acid (valine for glutamic acid) at position #6 of beta hemoglobin. In the latter, all mutations have been found in a single gene called CFTR. 85% of known mutations involve the loss of one amino acid. But by 2003 over 600 different mutations accounted for only part of the other 15%. There is plenty of room for mutation, as CFTR has 1,480 amino acids. The kids I took care of in the muscular dystrophy clinic all turned out to have mutations in the genes for proteins found in muscle.

Why not look for the gene causing schizophrenia? It’s a terrible disease (the post “What is Schizophrenia really Like?” is included after the *****) with a strong hereditary component. There was an awful era in psychiatry when it was thought to be environmental (e.g. the family was blamed). Deciding what is hereditary and what is environmental can be tricky. TB was thought to be hereditary (for a time) because it also ran in families. So why couldn’t schizophrenia be environmental? Well, if you are an identical twin and the other twin has it, your chance of having schizophrenia is 45%. If you are a fraternal twin your chance is 3 times less (15%). This couldn’t be due to the environment.

It’s time to speak of SNPs (single nucleotide polymorphisms). Our genome has 3.2 gigaBases of DNA. Each position has a standard nucleotide (one of A, T, G, or C). If 5% of the population have one of the other 3 at a given position, you have a SNP. By 2004 some 7 MILLION SNPs had been found and mapped to the human genome. So to find ‘the gene’ for schizophrenia, just take schizophrenics as a group (there are lots of them — about 1% of the population), look at their SNPs, and see if they have any SNPs in common.
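For concreteness, here’s what the core comparison amounts to in code — a 2 x 2 chi-square test on allele counts (the counts below are invented for illustration, not real data):

```python
from scipy.stats import chi2_contingency

# One SNP, cases vs. controls: does the variant allele turn up more
# often in schizophrenics?  750 people per group = 1500 alleles each.
#                variant  standard
cases    = [      230,     1270   ]
controls = [      180,     1320   ]

chi2, p, dof, expected = chi2_contingency([cases, controls])
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# Do this for hundreds of thousands of SNPs and the multiple testing
# problem discussed below becomes obvious.
```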

Study after study found suspect SNPs (which can be localized exactly in the genome) for schizophrenia. The area of the genome containing the SNP was then searched for protein coding genes to find the cause of the disease. Unfortunately each study implicated a different bunch of SNPs (in different areas of the genome). A study of 750 schizophrenics and an equal number of controls from North Carolina used 500,000 SNPs. None of the previous candidate genes held up. Not a single one [ Nature vol. 454 pp. 154 - 157 '08 ].

As of 2009 there are 3,000 diseases showing simple inheritance in which a causative gene hasn’t been found. This is the ‘dark matter’ of the genome. We are sure it exists (because the diseases are hereditary) but we simply can’t see it.

There is presently a large (and expensive) industry called GWAS (Genome Wide Association Studies) which uses SNPs to look for genetic causes of diseases with a known hereditary component. One study on coronary heart disease had 23,000 participants. In 2007 the Wellcome Trust committed 45 million (pounds? dollars?) for studies of 27 diseases in 120,000 people. This is big time science. GWAS studies have found areas of the genome associated with various disorders. However, in all GWAS studies so far, what they’ve picked up explains less than 5% of the heritability. An example is height (not a disease). Its heritability is 80% yet the top 20 candidate genetic variants identified explain only 3% of the variance. People have called for larger and larger samples to improve matters.

What’s going on?

It’s time for you to read “Genetic Heterogeneity in Human Disease” [ Cell vol. 141 pp. 210 - 217 '10 ]. It may destroy GWAS. Basically, they argue that most SNPs are irrelevant, don’t produce any functional change, and have arisen by random mutation. They are evolutionary chaff if you will. A 12 year followup study of 19,000 women looked at the 101 SNPs found by GWAS as risk variants for cardiovascular disease — not one of them predicted outcome [ J. Am. Med. Assoc. vol. 303 pp. 631 - 637 '10 ]. The SNPs haven’t been eliminated by natural selection, because they aren’t causing trouble and because the human population has grown exponentially.

There’s a lot more in this article, which is worth reading carefully. It looks like what we’re calling a given disease with a known hereditary component (schizophrenia for example) is the result of a large number of different (and rather rare) mutations. A given SNP study may pick up one or two rare mutations, but they won’t be found in the next. It certainly has been disheartening to follow this literature over the years, in the hopes that the cause of disease X, Y or Z would finally be found, and that we would have a logical point of attack (but see an old post titled “Some Humility is in Order”).

Is there an analogy?

200 years ago (before Pasteur) physicians classified a variety of diseases called fevers. They knew they were somewhat different from each other (quotidian fever, puerperal fever, etc. etc.). But fever was the common denominator and clinically they looked pretty much the same (just as dogs look pretty much the same). Now we know that infectious fever has hundreds of different causes. The Cell article argues that, given what GWAS has turned up so far, this is likely to be the case for many hereditary disorders.

Tolstoy was right.

Fast forward to the present [ Nature vol. 511 pp. 412 - 413, 421 - 427 '14 ]. This is a paper from the Schizophrenia Working Group of the Psychiatric Genomics Consortium (PGC), which analyzed some 36,989 cases and 113,075 controls. They found 128 independent associations spanning 108 conservatively defined loci, meeting genome-wide significance. 83/128 hadn’t been previously reported. The associations were enriched among genes expressed in brain. Prior to this work, some 30 schizophrenia associated loci had been found through genome wide association studies (GWAS).

Interestingly 3/4 of the 108 loci include protein coding genes (of which 40% represent a single gene and another 8% are within 20 kiloBases of a gene).

The editorial noted that there have been 800 associations ‘of dubious value’.

The present risk variants are common, and contribute in most (if not all) cases. One such association is with the D2 dopamine receptor, but not with COMT (which metabolizes dopamine). The most significant association is within the major histocompatibility complex (MHC).

The paper in Neuron cited above notes that schizophrenia genetics is a “field plagued by inconsistency, and perhaps even a degree of charlatanism” and that there have been 800 associations ‘of dubious value’. The statistical sins of earlier work are described, resulting in many HUNDREDS of variant associations with schizophrenia, and scores of falsely implicated genes. Standards have been developed to eliminate them [ Nat. Rev. Genet. vol. 9 pp. 356 - 369 '08 ].

Here is their description of the statistical sins prior to GWAS: “Before GWAS, the standard practice for investigating schizophrenia genetics (as well as many other areas) was to pick a candidate gene (usually based on dopamine or glutamate pathways or linkage studies) and compare the frequency of genetic variants in cases and controls. Any difference with a p value […] 0.05) p values, and associations seen in partitions of a data set. Beyond all of these obvious statistical transgressions, these studies often entirely ignored well-established causes of spurious associations such as population stratification. Labs would churn out separate papers for gene after gene with no correction for multiple testing, and, on top of all of that, there was a publication bias against negative findings.”
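It’s easy to show why testing candidate genes at p < 0.05 with no correction filled the literature with noise. Here’s a simulation of SNPs with no real effect at all (pure illustration, no real data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# 1000 SNPs, each with the SAME allele frequency in cases and controls
# (i.e. no true association anywhere).  How many 'findings' at p < 0.05?
rng = np.random.default_rng(1)
n_snps, n_alleles, freq = 1000, 1500, 0.15

hits = 0
for _ in range(n_snps):
    case_var = rng.binomial(n_alleles, freq)
    ctrl_var = rng.binomial(n_alleles, freq)
    table = [[case_var, n_alleles - case_var],
             [ctrl_var, n_alleles - ctrl_var]]
    _, p, _, _ = chi2_contingency(table)
    hits += p < 0.05
print(f"{hits} 'significant' associations out of {n_snps} null SNPs")
# Expect on the order of 5% false positives at p < 0.05; essentially
# none would survive the genome-wide threshold of 5 * 10^-8.
```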

There is a hockey stick model in which few real associations are found until a particular sample size is breached. This works for hypertension.

GWAS identifies genomic regions, not precise risk factors. It is estimated that the 108 loci implicate 350 genes. However the Major Histocompatibility Complex (MHC) counts as one locus, and it has tons of genes.

The NHGRI website tracks independent GWAS signals for common diseases and traits, and currently records 7,300 associations with a p value under 5 * 10^-8. Only 20 have been tracked to causal variants (depending on criteria).

The number of genes implicated will only grow as the PGC continues to increase the sample size to capture smaller and smaller effect sizes — how long will it be until the whole genome is involved? There are some significant philosophical issues involved, but this post is long enough already.

*****
What Schizophrenia is really Like

“I feel that writing to you there I am writing to the source of a ray of light from within a pit of semi-darkness. It is a strange place where you live, where administration is heaped upon administration, and all tremble with fear or abhorrence (in spite of pious phrases) at symptoms of actual non-local thinking. Up the river, slightly better, but still very strange in a certain area with which we are both familiar. And yet, to see this strangeness, the viewer must be strange.”

“I observed the local Romans show a considerable interest in getting into telephone booths and talking on the telephone and one of their favorite words was pronto. So it’s like ping-pong, pinging back again the bell pinged to me.”

Could you paraphrase this? Neither can I, and when, as a neurologist, I had occasion to see schizophrenics, the only way to capture their speech was to transcribe it verbatim. It can’t be paraphrased, because it makes no sense, even though it’s reasonably grammatical.

What is a neurologist doing seeing schizophrenics? That’s for shrinks isn’t it? Sometimes in the early stages, the symptoms suggest something neurological. Epilepsy for example. One lady with funny spells was sent to me with her husband. Family history is important in just about all neurological disorders, particularly epilepsy. I asked if anyone in her family had epilepsy. She thought her nephew might have it. Her husband looked puzzled and asked her why. She said she thought so because they had the same birthday.

It’s time for a little history. The board which certifies neurologists is called the American Board of Psychiatry and Neurology. This is not an accident, as the two fields are joined at the hip. Freud himself started out as a neurologist, wrote papers on cerebral palsy, and studied with a great neurologist of the time, Charcot, at la Salpetriere in Paris. 6 months of my 3 year residency were spent in Psychiatry, just as psychiatrists spend time learning neurology (and are tested on it when they take their Boards).

Once a month, a psychiatrist friend and I would go to lunch, discussing cases that were neither psychiatric nor neurologic but a mixture of both. We never lacked for new material.

Mental illness is scary as hell. Society deals with it the same way that kids deal with their fears: by romanticizing it, making it somehow more human and less horrible in the process. My kids were always talking about good monsters and bad monsters when they were little. Look at Sesame Street. There are some fairly horrible looking characters on it which turn out actually to be pretty nice. Adults have books like “One Flew Over the Cuckoo’s Nest” etc. etc.

The first quote above is from a letter John Nash wrote to Norbert Wiener in 1959. All this, and much much more, can be found in “A Beautiful Mind” by Sylvia Nasar. It is absolutely the best description of schizophrenia I’ve ever come across. No, I haven’t seen the movie, but there’s no way it can be more accurate than the book.

Unfortunately, the book is about a mathematician, which immediately turns off 95% of the populace. But that is exactly its strength. Nash became ill much later than most schizophrenics — around 30 when he had already done great work. So people saved what he wrote, and could describe what went on decades later. Even better, the mathematicians had no theoretical axe to grind (Freudian or otherwise). So there’s no ego, id, superego or penis envy in the book, just page after page of description from well over 100 people interviewed for the book, who just talked about what they saw. The description of Nash at his sickest covers 120 pages or so in the middle of the book. It’s extremely depressing reading, but you’ll never find a better description of what schizophrenia is actually like — e.g. (p. 242) She recalled that “he kept shifting from station to station. We thought he was just being pesky. But he thought that they were broadcasting messages to him. The things he did were mad, but we didn’t really know it.”

Because of his previous mathematical achievements, people saved what he wrote — the second quote above being from a letter written in 1971 and kept by the recipient for decades, the first from a letter written 12 years before that.

There are a few heartening aspects of the book. His wife Alicia is a true saint, and stood by him and tried to help as best she could. The mathematicians also come off very well, in their attempts to shelter him and to get him treatment (they even took up a collection for this at one point).

I was also very pleased to see rather sympathetic portraits of the docs who took care of him. No 20/20 hindsight is to be found. They are described as doing the best for him that they could given the limited knowledge (and therapies) of the time. This is the way medicine has been and always will be practiced — we never really know enough about the diseases we’re treating, and the therapies are almost never optimal. We just try to do our best with what we know and what we have.

I actually ran into Nash shortly after the book came out. The Princeton University Store had a fabulous collection of math books back then — several hundred at least, most of them over $50, so it was a great place to browse, which I did whenever I was in the area. Afterwards, I stopped in a coffee shop in Nassau Square and there he was, carrying a large disheveled bunch of papers with what appeared to be scribbling on them. I couldn’t bring myself to speak to him. He had the eyes of a hunted animal.

I sincerely hope it works, but I’m very doubtful

A fascinating series of papers offers hope (in the form of a small molecule) for the truly horrible Werdnig-Hoffmann disease, which basically kills infants by destroying neurons in their spinal cord. For why this is especially poignant for me, see the end of the post.

First some background:

Our genes occur in pieces. Dystrophin is the protein mutated in the commonest form of muscular dystrophy. The gene for it is 2,220,233 nucleotides long, but dystrophin contains ‘only’ 3,685 amino acids, not the 740,000 or so the gene could specify if all of it were coding. What happens? The whole gene is transcribed into an RNA of this enormous length, then 78 distinct segments of RNA (called introns) are removed by a gigantic multimegadalton machine called the spliceosome, and the 79 segments actually coding for amino acids (these are the exons) are linked together and the RNA sent on its way.
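The arithmetic is worth seeing spelled out — the coding part of the dystrophin gene is a sliver of the whole:

```python
# How much of the 2,220,233 nucleotide dystrophin gene codes protein?
gene_length = 2_220_233        # nucleotides
amino_acids = 3_685
coding_nt = amino_acids * 3    # 3 nucleotides per codon

print(f"coding nucleotides: {coding_nt:,}")                  # 11,055
print(f"coding fraction:    {coding_nt / gene_length:.2%}")  # about 0.5%
```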

All this was unknown in the 70s and early 80s when I was running a muscular dystrophy clinic and taking care of these kids. Looking back, it’s miraculous that more of us don’t have muscular dystrophy; there is so much that can go wrong with a gene this size, let alone transcribing and correctly splicing it to produce a functional protein.

One final complication — alternate splicing. The spliceosome removes introns and splices the exons together. But sometimes exons are skipped or one of several exons is used at a particular point in a protein. So one gene can make more than one protein. The record holder is something called the Dscam gene in the fruitfly which can make over 38,000 different proteins by alternate splicing.
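The usually quoted exon clusters for Dscam (12, 48 and 33 mutually exclusive alternatives, plus a choice of 2 transmembrane exons) show where the number comes from — the alternatives simply multiply:

```python
# Dscam isoform count from its alternatively spliced exon clusters
# (the standard figures quoted for the fly gene).
alternatives = [12, 48, 33, 2]

isoforms = 1
for n in alternatives:
    isoforms *= n
print(f"{isoforms:,} possible proteins from one gene")   # 38,016
```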

There is nothing worse than watching an infant waste away and die. That’s what Werdnig-Hoffmann disease is like, and I saw one or two cases during my years at the clinic. It is also called infantile spinal muscular atrophy. We all have two genes for the same crucial protein (called, unimaginatively, SMN). Kids who have the disease have mutations in one of the two genes (called SMN1). Why isn’t the other gene protective? It codes for the same sequence of amino acids (but using different synonymous codons). What goes wrong?

[ Proc. Natl. Acad. Sci. vol. 97 pp. 9618 - 9623 '00 ] Why is SMN2 (the centromeric copy — e.g. the copy closest to the middle of the chromosome — which is normal in most patients) not protective? It has a single translationally silent nucleotide difference from SMN1 in exon 7 (e.g. the difference doesn’t change the amino acid coded for). This disrupts an exonic splicing enhancer and causes exon 7 skipping, leading to abundant production of a shorter isoform (SMN2delta7). Thus even though both genes code for the same protein, only SMN1 actually makes the full protein.

Intellectually fascinating but ghastly to watch.

This brings us to the current papers [ Science vol. 345 pp. 624 - 625, 688 - 693 '14 ].

More background. The molecular machine which removes the introns is called the spliceosome. It’s huge, containing 5 RNAs (called small nuclear RNAs, aka snRNAs), along with 50 or so proteins, for a total molecular mass of around 2,500 kiloDaltons — the multimegadalton machine mentioned earlier. Think about it, chemists. Design 50 proteins and 5 RNAs, probably 200,000+ atoms in all, so they all come together forming a machine to operate on other monster molecules — such as the mRNA for dystrophin alluded to earlier. Hard for me to believe this arose by chance, but current opinion has it that way.

Splicing out introns is a tricky process which is still being worked on. Mistakes are easy to make, and different tissues will splice the same pre-mRNA in different ways. All this happens in the nucleus before the mRNA is shipped outside where the ribosome can get at it.

The papers describe a small molecule which acts on the spliceosome to increase the inclusion of SMN2 exon 7. It does appear to work in patient cells and mouse models of the disease, even reversing weakness.

Why am I skeptical? Because just about every protein we make is spliced (except histones), and any molecule altering the splicing machinery seems almost certain to produce effects on many genes, not just SMN2. If it really works, these guys should get a Nobel.

Why does the paper grip me so? I watched the beautiful infant daughter of a cop and a nurse die of it 30 – 40 years ago. Even with all the degrees, all the training, I was no better for the baby than my immigrant grandmother dispensing emotional chicken soup from her dry goods store (she only had a 4th grade education). Fortunately, the couple took the 25% risk of another child with WH and produced a healthy infant a few years later.

A second reason — a beautiful baby granddaughter came into our world 24 hours ago.

Poets and religious types may intuit how miraculous our existence is, but the study of molecular biology proves it (to me at least).

As if the job shortage for organic/medicinal chemists wasn’t bad enough

Will synthetic organic chemists be replaced by a machine? Today’s (7 August ’14) Nature (vol. 512 pp. 20 - 22) describes RoboChemist. As usual, the job destruction is the handiwork of the very species being destroyed. Nothing new here — “The Capitalists will sell us the rope with which we will hang them.” — Lenin. “I would consider it entirely feasible to build a synthesis machine which could make any one of a billion defined small molecules on demand,” says one organic chemist.

The design of the machine is already being studied, but with a rather paltry grant (1.2 million dollars). Even worse, for the thinking chemist, the choice of reactants and reactions to build the desired molecule will be made by the machine (given a knowledge base, and the algorithms that experienced chemists use, assuming they can be captured by a set of rules). E. J. Corey tried to do this automatically years ago with a program called LHASA (Logic and Heuristics Applied to Synthetic Analysis), but it never took off. Corey formalized what chemists had been doing all along — see http://luysii.wordpress.com/2010/06/20/retrosynthetic-analysis-and-moliere/

Another attempt along these lines is Chematica, which recently has had some success. A problem with using the chemical literature is that only the conditions for a successful reaction are published. A synthetic program needs to know what doesn’t work as much as it needs to know what does. This is an important problem in the medical/drug literature, where only studies showing a positive effect are published. There’s a great chapter in “How Not to Be Wrong” concerning the “International Journal of Haruspicy,” which publishes only statistically significant results for predicting the future by reading sheep entrails. They publish a lot of stuff, because some 400 haruspicists in different labs are busy performing multiple experiments, 5% of which reach statistical significance. Previously, drug companies published only successful clinical trials. Now the trials will be going into a database regardless of outcome.

Automated machinery for making polynucleotides and polypeptides already exists, but here the reactions are limited. Still, the problem of getting the same reaction to work over and over with different molecules of the same class (amino acids, nucleotides) has been solved.

The last sentence is the most chilling “And with a large workforce of graduate students to draw on, academic labs often have little incentive to automate.” Academics — the last Feudal system left standing.

However, telephone operators faced the same fate years ago, due to automatic switching machinery. Given the explosion of telephone volume 50 years ago, there came a point where every woman in the USA would have had to work for the phone company to handle it all manually.

A similar moment of terror occurred in my field (clinical neurology) years ago with the invention of computerized axial tomography (CAT scans). All our diagnostic and examination skills (based on detecting slight deviations from normal function) would be out the window once the CAT scan showed what was structurally wrong with the brain — the assumption being that abnormalities in structure would invariably show up earlier than abnormalities in function. Didn’t happen. We’d get calls — we found this thing on the CAT scan. What does it mean?

Even this wonderful machine which can make any molecule you wish, will not tell you what cellular entity to attack, what the target does, and how attacking it will produce a therapeutically useful result.

A Troublesome Inheritance – IV — Chapter 3

Chapter III of “A Troublesome Inheritance” contains a lot of very solid molecular genetics, and a lot of unfounded speculation. I can see why the book has driven some otherwise rational people bonkers. Just because Wade knows what he’s talking about in one field doesn’t imply he’s competent in another.

Several examples: p. 41 — “Nonetheless, it is reasonable to assume that if traits like skin color have evolved in a population, the same may be true of its social behavior.” Consider yes, assume no.

p. 42 — “The society of living chimps can thus with reasonable accuracy stand as a surrogate for the joint ancestor” (of humans and chimps — thought to have lived about 7 megaYears ago) “and hence describe the baseline from which human social behavior evolved.” I doubt this.

The chapter contains many just so stories about the evolution of chimp and human societies (post hoc ergo propter hoc). Plausible, but not testable.

Then follows some very solid stuff about the effects of the hormone oxytocin (which causes lactation in nursing women) on human social interaction. Then some speculation on the ways natural selection could work on the oxytocin system to make people more or less trusting. He lists several potential mechanisms for this (1) changes in the amount of oxytocin made (2) increasing the number of protein receptors for oxytocin (3) making each receptor bind oxytocin more tightly. This shows that Wade has solid molecular biological (and biological) chops.

He quotes a Dutch psychologist on his results with oxytocin and sociality — unfortunately, there have been too many scandals involving Dutch psychologists and sociologists to believe what he says until it’s replicated (Google Diederik Stapel, Don Poldermans, Jens Forster, Markus Denzler if you don’t believe me). It’s sad that this probably honest individual is tarred with that brush, but he is.

p. 59 — He notes that the idea that human behavior is solely the result of social conditions with no genetic influence is appealing to Marxists, who hoped to make humanity behave better by designing better social conditions. Certainly, much of the vitriol heaped on the book has come from the left. A communist uncle would always say ‘it’s the system’ to which my father would reply ‘people will corrupt any system’.

p. 61 — the effect of mutations for lactose tolerance on survival is noted — people herding cattle and drinking milk survive better if their gene to digest lactose (the main sugar in milk) isn’t turned off after childhood. If your society doesn’t herd animals, there is no reason for anyone to digest milk after weaning from the breast. The mutations aren’t in the enzyme digesting lactose, but in the DNA that turns on expression of the gene for the enzyme (e.g. the promoter). Interestingly, 3 separate mutations doing this have been found in African herders, each different from the one that arose in the Funnel Beaker Culture of Scandinavia 6,000 years ago. This is a classic example of natural selection producing the same phenotypic effect by separate mutations.

There is a much bigger biological fish to be fried here, which Wade doesn’t discuss. It takes energy to make any protein, and there is no reason to make a protein to help you digest milk if you aren’t nursing — and one very good reason not to: it wastes metabolic energy, something in short supply for humans as they lived until about 15,000 years ago. So humans evolved a way not to make the protein in adult life. The genetic change is in the DNA controlling protein production, not the protein itself.

You may have heard it said that we are 98% Chimpanzee. This is true in the sense that our 20,000 or so proteins are that similar to the chimp. That’s far from the whole story. This is like saying Monticello and Independence Hall are just the same because they’re both made out of bricks. One could chemically identify Monticello bricks as coming from the Virginia piedmont, and Independence Hall bricks coming from the red clay of New Jersey, but the real difference between the buildings is the plan.

It’s not the proteins, but where and when and how much of them are made. The control for this (plan if you will) lies outside the genes for the proteins themselves, in the rest of the genome. The control elements have as much right to be called genes, as the parts of the genome coding for amino acids. Granted, it’s easier to study genes coding for proteins, because we’ve identified them and know so much about them. It’s like the drunk looking for his keys under the lamppost because that’s where the light is.

p. 62 — There follows a description of the changes of human society from hunter gathering, to agrarian, to the rise of city states. Whether adaptation to different social organizations produced genetic changes, or genetic changes permitted the social adaptations, isn’t clear. Wade says “changes in social behavior have most probably been molded by evolution, though the underlying genetic changes have yet to be identified.” This assumes a lot, e.g. that genetic changes are involved. I’m far from sure, but the idea is not far fetched. Stating that genetic changes have never, and will never, shape society is without any scientific basis, and just as fanciful as many of Wade’s statements in this chapter. It’s an open question, which is really all Wade is saying.

In defense of Wade’s idea, think about animal breeding, as Darwin did extensively. The Origin of Species (worth a read if you haven’t already read it) is full of interchanges with all sorts of breeders (pigeons, cattle). The best example we have presently is the breeds of dogs. They have very different personalities — and have been bred for them: sheep dogs, mastiffs, etc. etc. Have a look at [ Science vol. 306 p. 2172 '04, Proc. Natl. Acad. Sci. vol. 101 pp. 18058 - 18063 '04 ], where the DNA of a variety of dog breeds was studied to determine which changes determined the way they look. The length of a breed’s snout correlated directly with the number of repeats in a particular protein (Runx-2). The paper is a decade old and I’m sure that they’re starting to look at behavior.

More to the point about selection for behavioral characteristics, consider the domestication of the modern dog from the wolf. Contrast the dog with the chimp (which hasn’t been bred).

[ Science vol. 298 pp. 1634 - 1636 '02 ] Chimps are terrible at picking up human cues as to where food is hidden. Cues would be something as obvious as looking at the container, pointing at the container, or even touching it. Even those who eventually perform well take dozens of trials or more to learn it. When tested in more difficult tests requiring them to show flexible use of social cues, they don’t.

This paper shows that puppies (raised with no contact with humans) do much better at reading humans than chimps. However wolf cubs do not do better than the chimps. Even more impressively, wolf cubs raised by humans don’t show the same skills. This implies that during the process of domestication, dogs have been selected for a set of social cognitive abilities that allow them to communicate with humans in unique ways. Dogs and wolves do not perform differently in a non-social memory task, ruling out the possibility that dogs outperform wolves in all human guided tasks.

All in all, a fascinating book with lots to think about, argue with, propose counterarguments, propose other arguments in support (as I’ve just done), etc. etc. Definitely a book for those who like to think, whether you agree with it all or not.

Do axons burp out mitochondria?

People have been looking at microscope slides of the brain almost since there were microscopes (Alzheimer’s paper on his disease came out in 1906). Amazingly, something new has just been found [ Proc. Natl. Acad. Sci. vol. 111 pp. 9633 - 9638 '14 ]

To a first approximation, the axon of a neuron is the long process which carries impulses to other neurons far away. Axons have always been considered to be quite delicate (particularly in the brain itself; in the limbs they are sheathed in tough connective tissue). After an axon is severed in the limbs, all sorts of hell breaks loose. The part of the axon no longer in contact with the neuron degenerates (Wallerian degeneration), and the neuron cell body still attached to the remaining axon changes markedly (central chromatolysis). At least the axons making up peripheral nerves do grow back (but maddeningly slowly). In the brain, they don’t, yet another reason neurologic disease is so devastating. Huge research efforts have been made to find out why. All sorts of proteins have been found which hinder axonal regrowth in the brain (and the spinal cord). Hopefully, at some point blocking them will lead to treatment.

The PNAS paper found that axons in the optic nerve of the mouse (which arise from neurons in the retina) burp out mitochondria. Large protrusions form containing mitochondria, which are then shed, somehow leaving the remaining axon intact (remarkable when you think of it). Once shed, the decaying mitochondria are found in the cells supporting the axons (astrocytes). Naturally, the authors made up a horrible name to describe the process and sound impressive (transmitophagy).

This probably occurs elsewhere in the brain, because accumulation of degrading mitochondria along nerve processes in the superficial layers of the cerebral cortex (the gray matter on the surface of the brain) have been seen. People are sure to start looking for this everywhere in the brain, and perhaps outside as well.

Where else does this sort of thing occur? In the fertilized egg, that’s where. Sperm mitochondria are degraded in the egg (which is why you get your mitochondria from mommy).

Trouble in River City (aka Brain City)

300 European neuroscientists are unhappy. If 50,000,000 Frenchmen can’t be wrong, can the neuroscientists be off base? They don’t like that way things are going in a Billion Euro project to computationally model the brain (Science vol. 345 p. 127 ’14 11 July, Nature vol. 511 pp. 133 – 134 ’14 10 July). What has them particularly unhappy is that one of the sections involving cognitive neuroscience has been eliminated.

A very intelligent Op-Ed in the New York Times of 12 July by a psychology professor notes that we have no theory of how to go from neurons, their connections and their firing of impulses, to how the brain produces thought. Even better, he notes that we have no idea of what such a theory would look like, or what we would accept as an explanation of brain function in terms of neurons.

While going from the gene structure in our DNA to cellular function, from there to function at the level of the organ, and from the organ to the function of the organism is more than hard (see https://luysii.wordpress.com/2014/07/09/heres-a-drug-target-for-schizophrenia-and-other-psychiatric-diseases/) at least we have a road map to guide us. None is available to take us from neurons to thought, and the 300 argue that concentrating only on neurons, while producing knowledge, won’t give us the explanation we seek. The 300 argue that we should progress on all fronts, which the people running the project reject as too diffuse.

I’ve posted on this problem before — I don’t think a wiring diagram of the brain (while interesting) will tell us what we want to know. Here’s part of an earlier post — with a few additions and subtractions.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every one of the 10^23 or so molecules in a mole of gas in a container is brought home. Instead, we need only the average energy of the molecules and the volume they are confined to, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain. They want a ‘wiring diagram’ of the brain, i.e., a list of every neuron, and, for each neuron, a second list of the neurons it receives connections from, and a third list of the neurons it sends connections to. For the non-neuroscientist: the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana). This is why you need the second and third lists.
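
To make the bookkeeping concrete, here is a minimal sketch in Python (mine, not from any of the papers; the neuron names are invented). Because synapses are one-way, each neuron carries both a ‘receives from’ list and a ‘sends to’ list:

# Toy wiring diagram: every neuron, plus its incoming and outgoing lists.
# Neuron names are invented for illustration.
wiring = {
    "A": {"receives_from": ["C"], "sends_to": ["B", "C"]},
    "B": {"receives_from": ["A"], "sends_to": []},
    "C": {"receives_from": ["A"], "sends_to": ["A"]},  # A->C and C->A are distinct one-way synapses
}
# Sanity check: every 'sends_to' entry should appear as a matching 'receives_from'.
for pre, lists in wiring.items():
    for post in lists["sends_to"]:
        assert pre in wiring[post]["receives_from"]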

Clearly this is a monumental undertaking, and one which grows more monumental with the passage of time. When I was starting out in the ’60s, the estimate was that we had about a billion neurons (no one could possibly count each of them). This is where the neurological urban myth of the loss of 10,000 neurons each day came from. For details see http://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 '11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses. Well, that’s not a mole of synapses, but it is about a quarter of a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other, to at least get a schematic diagram.
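
For those who want to check that remark, a quick back-of-the-envelope calculation (mine):

# 150 trillion synapses vs. Avogadro's number
synapses = 150e12            # 1.5 x 10^14 synapses
avogadro = 6.022e23          # particles per mole
print(synapses / avogadro)   # ~2.5e-10 mole, i.e. about a quarter of a nanoMole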

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it. I strongly recommend looking at the pictures in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnections between neurons and just how many there are. Figure 2 (p. 179) is particularly revealing, showing a 3-dimensional reconstruction using the high resolution obtainable with the electron microscope. Stare at figure 2.f a while and try to figure out what’s going on. It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing. There are at least 6 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses). A wiring diagram alone won’t tell you which is which.

2. When I was starting out, the following statement would have seemed impossible: it is now possible to watch synapses in the living brain of an awake animal for extended periods of time. And we now know that synapses come and go in the brain. The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early days. Here are a few references [ Neuron vol. 69 pp. 1039 - 1041 '11; ibid. vol. 49 pp. 780 - 783, 877 - 887 '06 ]. So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses. Certain neurotransmitters are released diffusely into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses at all (it’s called volume neurotransmission). Their importance in psychiatry and drug addiction is unparalleled. Examples of such volume transmitters include serotonin, dopamine and norepinephrine. Drugs of abuse affecting their action include cocaine and amphetamine. Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

4. (new addition) A given neuron doesn’t contact another neuron just once, as far as we know. So how do you account for this with a graph (which I think allows only one connection between any two nodes)?

5. (new addition) All connections (synapses) aren’t created equal. Some are far, far away from the part of the neuron (the axon) which actually fires impulses, so many of them have to be turned on at once for firing to occur. So in addition to the excitatory/inhibitory dichotomy, you’d have to put another number on each link in the graph for the probability of a given synapse producing an effect. In general this isn’t known for most synapses.

6. (new addition) Some connections never directly cause a neuron to fire or not fire. They just increase or decrease the probability that a neuron will fire in response to impulses at other synapses. The transmitters involved are called neuromodulators, and the brain has tons of different ones. (A sketch of how a graph might carry the extra information in points 1, 4, 5 and 6 follows below.)
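
Here’s the promised sketch (mine; all names and numbers invented) of how a wiring diagram could be annotated to carry this extra information: a sign on each synapse, multiple synapses per neuron pair, a probability of effect, and a flag for modulatory connections. Point 2 would mean rebuilding such a structure over and over as synapses come and go.

from dataclasses import dataclass, field

@dataclass
class Synapse:
    target: str               # neuron on the receiving end
    sign: int                 # +1 excitatory, -1 inhibitory (point 1)
    p_effect: float           # probability of producing an effect (point 5)
    modulatory: bool = False  # shifts firing probability rather than causing firing (point 6)

@dataclass
class Neuron:
    name: str
    outputs: list = field(default_factory=list)  # synapses this neuron makes
    inputs: list = field(default_factory=list)   # synapses impinging on it

def connect(pre, post, sign, p_effect, modulatory=False):
    # Calling this twice for the same (pre, post) pair gives a multigraph (point 4).
    s = Synapse(post.name, sign, p_effect, modulatory)
    pre.outputs.append(s)
    post.inputs.append(s)

a, b = Neuron("A"), Neuron("B")
connect(a, b, sign=+1, p_effect=0.7)                   # a strong excitatory contact
connect(a, b, sign=+1, p_effect=0.1)                   # a second, weaker contact on the same pair
connect(a, b, sign=-1, p_effect=0.3, modulatory=True)  # a modulatory influence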

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg. This is a picture of the cerebral cortex; neurons are fairly creepy-looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles. There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse (at 2 bits per nucleotide, the genome holds roughly 6 x 10^9 bits, orders of magnitude too few to specify 1.5 x 10^14 synapses). The construction of the brain from the fertilized egg must be in some sense statistical. Remarkable that it happens at all. Embryologists are working intensively on how this happens; thousands of papers on the subject appear each year.

(Addendum 17 July ’14) I’m fortunate enough to have a family member who worked at Bell Labs (when it existed) and who knows much more about graph theory than I do. Here are his points, with a few comments of mine in reply.

Seventh paragraph: Still don’t understand the purpose of the three lists, or what that buys that you don’t get with a graph model. See my comments later in this email.

“nobody’s brain is strong enough to comprehend it”: At some level, this is true of virtually every phenomenon of Nature or science. We only begin to believe that we “comprehend” something when some clever person devises a model for the phenomenon that is phrased in terms of things we think we already understand, and then provides evidence (through analysis and perhaps simulation) that the model gives good predictions of observed data. As an example, nobody comprehended what caused the motion of the planets in the sky until science developed the heliocentric model of the solar system and Newton et al. developed calculus, with which he was able to show (assuming an inverse-square behavior of the force of gravity) that the heliocentric model explained observed data. On a more pedestrian level, if a stone-age human were handed a personal computer, his brain couldn’t even begin to comprehend how the thing does what it does; he probably would not even understand exactly what it is doing anyway, or why. Yet we modern humans, at least we engineers and computer scientists, think we have a pretty good understanding of what the personal computer does, how it does it, and where it fits in the scheme of things that modern humans want to do.

Of course we do; that’s because we built it.

On another level, though, even computer scientists and engineers don’t “really” understand how a personal computer works, since many of the components depend for their operation on quantum mechanics, and even Feynman supposedly said that nobody understands quantum mechanics: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Penrose actually did think the brain worked by quantum mechanics, because what it does is nonalgorithmic. That’s been pretty much shot down.

As for your six points:

Point 1 I disagree with. It is quite easy to express excitatory or inhibitory behavior in a wiring diagram (graph). In fact, this is done all the time in neural network research!

Point 2: Updating a graph is not necessarily a big deal. In fact, many systems that we analyze with graph theory require constant updating of the graph. For example, those who analyze, monitor, and control the Internet have to deal with graphs that are constantly changing.

Point 3: Can be handled with a graph model, too. You will have to throw in additional edges that don’t represent synapses, but instead represent the effects of neurotransmitters. Will this get to be a complicated graph? You bet. But nobody ever promised an uncomplicated model. (Although uncomplicated, i.e. simple, models are certainly to be preferred over complicated ones.)

Point 4: This can be easily accounted for in a graph. Simply place additional edges in the graph to account for the additional connections. This adds complexity, but nobody ever promised the model would be simple. Another alternative, as I mentioned in an earlier email, is to use a hypergraph.
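
For readers who haven’t met hypergraphs: a hyperedge connects any number of nodes at once, not just two. A minimal sketch (mine; all names invented) of how repeated contacts, and even volume transmission, might be encoded this way:

# A hypergraph as a list of hyperedges, each touching any number of neurons.
# A repeated contact is just another hyperedge on the same pair (cf. point 4),
# and a dopamine 'edge' can bathe several neurons at once (cf. point 3).
hyperedges = [
    {"kind": "synapse", "members": ("A", "B")},
    {"kind": "synapse", "members": ("A", "B")},             # second contact, same pair
    {"kind": "dopamine", "members": ("A", "B", "C", "D")},  # volume transmission
]

def neighbors(node):
    # Every neuron sharing at least one hyperedge with `node`.
    out = set()
    for e in hyperedges:
        if node in e["members"]:
            out.update(e["members"])
    out.discard(node)
    return out

print(neighbors("C"))  # {'A', 'B', 'D'}, reached only via the dopamine hyperedge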

Point 5: Not sure what your point is here. Certainly there is a huge body of research literature on probabilistic graphs (e.g., Markov Chain models), so there is nothing you are saying here that is alien to what graph theory researchers have been doing for generations. If you are unhappy that we don’t know some probabilistic parameters for synapses, you can’t be implying that scientists must not even attempt to discover what these parameters might be. Finding these parameters certainly sounds like a challenge, but nobody ever claimed this was an un-challenging line of research.

In addition to not knowing the parameters, you’d need a ton of them, as it’s been stated frequently in the literature that the ‘average’ neuron has 10,000 synapses impinging on it. I’ve never been able to track this one down. It may be neuromythology, like the 10,000 neurons we’re said to lose every day. With 10,000 adjustable parameters you could make a neuron sing the Star Spangled Banner. Perhaps this is why we can sing the Star Spangled Banner.

Point 6: See my comments on Point 5. Probabilistic graph models have been well-studied for generations. Nothing new or frightening in this concept.
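
To show what such a probabilistic graph model might look like in practice, here is a toy simulation (entirely mine, with invented neurons and numbers): each synapse carries a signed weight and a transmission probability, and each time step is a random function of the previous one, Markov-chain style.

import random

# Toy probabilistic wiring diagram: (pre, post, signed weight, transmission probability).
edges = [
    ("A", "B", +1.0, 0.9),  # strong, reliable excitatory synapse
    ("A", "B", +0.5, 0.2),  # a second, flakier contact on the same pair
    ("C", "B", -1.0, 0.7),  # inhibitory synapse
]

def step(firing, threshold=0.5):
    # Each synapse from a firing neuron transmits with its own probability;
    # a neuron fires next step if its summed drive exceeds the threshold.
    drive = {}
    for pre, post, weight, p in edges:
        if pre in firing and random.random() < p:
            drive[post] = drive.get(post, 0.0) + weight
    return {n for n, d in drive.items() if d > threshold}

random.seed(0)
state = {"A", "C"}
for t in range(3):
    state = step(state)
    print(t, sorted(state))

Whether anything this simple captures real neurons is, of course, exactly the question my six points raise.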
