Category Archives: Neurology & Psychiatry

A Troublesome Inheritance – IV — Chapter 3

Chapter III of “A Troublesome Inheritance” contains a lot of very solid molecular genetics, and a lot of unfounded speculation. I can see why the book has driven some otherwise rational people bonkers. Just because Wade knows what he’s talking about in one field, doesn’t imply he’s competent in another.

Several examples: p. 41 — “Nonetheless, it is reasonable to assume that if traits like skin color have evolved in a population, the same may be true of its social behavior.” Consider yes, assume no.

p. 42 — “The society of living chimps can thus with reasonable accuracy stand as a surrogate for the joint ancestor” (of humans and chimps — thought to have lived about 7 megaYears ago) “and hence describe the baseline from which human social behavior evolved.” I doubt this.

The chapter contains many just-so stories about the evolution of chimp and human societies (post hoc ergo propter hoc). Plausible, but not testable.

Then follows some very solid stuff about the effects of the hormone oxytocin (which causes lactation in nursing women) on human social interaction. Then some speculation on the ways natural selection could work on the oxytocin system to make people more or less trusting. He lists several potential mechanisms for this: (1) changes in the amount of oxytocin made, (2) increasing the number of protein receptors for oxytocin, (3) making each receptor bind oxytocin more tightly. This shows that Wade has solid molecular biological (and biological) chops.
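For those who want the quantitative flavor of those three mechanisms, textbook receptor pharmacology (standard single-site binding, not anything specific to Wade or to oxytocin) gives the fraction of receptors occupied as

$$\theta = \frac{[\text{oxytocin}]}{[\text{oxytocin}] + K_d}$$

and the downstream signal scales roughly with (number of receptors) × θ, so making more oxytocin raises the concentration, making more receptors raises the multiplier, and tighter binding lowers K_d: the three knobs Wade lists.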

He quotes a Dutch psychologist on his results with oxytocin and sociality — unfortunately, there have been too many scandals involving Dutch psychologists and sociologists to believe what he says until it’s replicated (Google Diederik Stapel, Don Poldermans, Jens Forster, Markus Denzler if you don’t believe me). It’s sad that this probably honest individual is tarred with that brush, but he is.

p. 59 — He notes that the idea that human behavior is solely the result of social conditions with no genetic influence is appealing to Marxists, who hoped to make humanity behave better by designing better social conditions. Certainly, much of the vitriol heaped on the book has come from the left. A communist uncle would always say ‘it’s the system’ to which my father would reply ‘people will corrupt any system’.

p. 61 — the effect of mutations for lactose tolerance on survival is noted — people herding cattle and drinking milk survive better if their gene to digest lactose (the main sugar in milk) isn’t turned off after childhood. If your society doesn’t herd animals, there is no reason for anyone to digest milk after weaning from the breast. The mutations aren’t in the enzyme digesting lactose, but in the DNA that turns on expression of the gene for the enzyme (e.g. the promoter). Interestingly, 3 separate mutations doing this have been found in African herders, each different from the one that arose in the Funnel Beaker Culture of Scandinavia 6,000 years ago. This is a classic example of natural selection producing the same phenotypic effect by separate mutations.

There is a much bigger biological fish to be fried here, which Wade doesn’t discuss. It takes energy to make any protein, and there is no reason to make a protein to help you digest milk if you aren’t nursing, and one very good reason not to — it wastes metabolic energy, something in short supply for humans living as they did until about 15,000 years ago. So humans evolved a way not to make the protein in adult life. The genetic change is in the DNA controlling protein production, not the protein itself.

You may have heard it said that we are 98% Chimpanzee. This is true in the sense that our 20,000 or so proteins are that similar to the chimp. That’s far from the whole story. This is like saying Monticello and Independence Hall are just the same because they’re both made out of bricks. One could chemically identify Monticello bricks as coming from the Virginia piedmont, and Independence Hall bricks coming from the red clay of New Jersey, but the real difference between the buildings is the plan.

It’s not the proteins, but where and when and how much of them are made. The control for this (the plan, if you will) lies outside the genes for the proteins themselves, in the rest of the genome. The control elements have as much right to be called genes as the parts of the genome coding for amino acids. Granted, it’s easier to study genes coding for proteins, because we’ve identified them and know so much about them. It’s like the drunk looking for his keys under the lamppost because that’s where the light is.

p. 62 — There follows a description of the changes in human society from hunter-gatherer, to agrarian, to the rise of city states. Whether adaptation to different social organizations produced genetic changes permitting social adaptation, or was the cause of it, isn’t clear. Wade says that changes in social behavior have “most probably been molded by evolution, though the underlying genetic changes have yet to be identified.” This assumes a lot, e.g. that genetic changes are involved. I’m far from sure, but the idea is not far fetched. Stating that genetic changes have never, and will never, shape society is without any scientific basis, and just as fanciful as many of Wade’s statements in this chapter. It’s an open question, which is really all Wade is saying.

In defense of Wade’s idea, think about animal breeding, as Darwin did extensively. The Origin of Species (worth a read if you haven’t already read it) is full of interchanges with all sorts of breeders (pigeons, cattle). The best examples we have presently are the breeds of dogs. They have very different personalities — and have been bred for them: sheepdogs, mastiffs, etc. Have a look at [ Science vol. 306 p. 2172 '04, Proc. Natl. Acad. Sci. vol. 101 pp. 18058 - 18063 '04 ] where the DNA of a variety of dog breeds was studied to determine which changes determined the way they look. The length of a breed’s snout correlated directly with the number of repeats in a particular protein (Runx-2). The paper is a decade old and I’m sure that they’re starting to look at behavior.

More to the point about selection for behavioral characteristics, consider the domestication of the modern dog from the wolf. Contrast the dog with the chimp (which hasn’t been bred).

[ Science vol. 298 pp. 1634 - 1636 '02 ] Chimps are terrible at picking up human cues as to where food is hidden. Cues would be something as obvious as looking at the container, pointing at the container, or even touching it. Even those who eventually perform well take dozens of trials or more to learn it. When tested in more difficult tasks requiring them to show flexible use of social cues, they don’t do well.

This paper shows that puppies (raised with no contact with humans) do much better at reading humans than chimps. However wolf cubs do not do better than the chimps. Even more impressively, wolf cubs raised by humans don’t show the same skills. This implies that during the process of domestication, dogs have been selected for a set of social cognitive abilities that allow them to communicate with humans in unique ways. Dogs and wolves do not perform differently in a non-social memory task, ruling out the possibility that dogs outperform wolves in all human guided tasks.

All in all, a fascinating book with lots to think about, argue with, propose counterarguments, propose other arguments in support (as I’ve just done), etc. etc. Definitely a book for those who like to think, whether you agree with it all or not.

Do axons burp out mitochondria?

People have been looking at microscope slides of the brain almost since there were microscopes (Alzheimer’s paper on his disease came out in 1906). Amazingly, something new has just been found [ Proc. Natl. Acad. Sci. vol. 111 pp. 9633 - 9638 '14 ]

To a first approximation, the axon of a neuron is the long process which carries impulses to other neurons far away. They have always been considered to be quite delicate (particularly in the brain itself; in the limbs they are sheathed in tough connective tissue). After an axon is severed in the limbs, all sorts of hell break loose. The part of the axon no longer in contact with the neuron degenerates (Wallerian degeneration), and the neuron cell body still attached to the remaining axon changes markedly (central chromatolysis). At least the axons making up peripheral nerves do grow back (but maddeningly slowly). In the brain, they don’t, yet another reason neurologic disease is so devastating. Huge research efforts have been made to find out why. All sorts of proteins have been found which hinder axonal regrowth in the brain (and the spinal cord). Hopefully, at some point blocking them will lead to treatment.

The PNAS paper found that axons in the optic nerve of the mouse (which arise from neurons in the retina) burp out mitochondria. Large protrusions form containing mitochondria, which are then shed, somehow leaving the remaining axon intact (remarkable when you think of it). Once shed, the decaying mitochondria are found in the cells supporting the axons (astrocytes). Naturally, the authors made up a horrible name to describe the process and sound impressive (transmitophagy).

This probably occurs elsewhere in the brain, because accumulation of degrading mitochondria along nerve processes in the superficial layers of the cerebral cortex (the gray matter on the surface of the brain) has been seen. People are sure to start looking for this everywhere in the brain, and perhaps outside as well.

Where else does this sort of thing occur? In the fertilized egg, that’s where. Sperm mitochondria are degraded in the egg (which is why you get your mitochondria from mommy).

Trouble in River City (aka Brain City)

300 European neuroscientists are unhappy. If 50,000,000 Frenchmen can’t be wrong, can the neuroscientists be off base? They don’t like the way things are going in a billion-Euro project to computationally model the brain (Science vol. 345 p. 127 ’14 11 July, Nature vol. 511 pp. 133 – 134 ’14 10 July). What has them particularly unhappy is that one of the sections involving cognitive neuroscience has been eliminated.

A very intelligent Op-Ed in the New York Times of 12 July by a psychology professor notes that we have no theory of how to go from neurons, their connections and their firing of impulses, to how the brain produces thought. Even better, he notes that we have no idea what such a theory would look like, or what we would accept as an explanation of brain function in terms of neurons.

While going from the gene structure in our DNA to cellular function, from there to function at the level of the organ, and from the organ to the function of the organism is more than hard (see https://luysii.wordpress.com/2014/07/09/heres-a-drug-target-for-schizophrenia-and-other-psychiatric-diseases/) at least we have a road map to guide us. None is available to take us from neurons to thought, and the 300 argue that concentrating only on neurons, while producing knowledge, won’t give us the explanation we seek. The 300 argue that we should progress on all fronts, which the people running the project reject as too diffuse.

I’ve posted on this problem before — I don’t think a wiring diagram of the brain (while interesting) will tell us what we want to know. Here’s part of an earlier post — with a few additions and subtractions.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every of the 10^23 molecules of a mole or so of gas in a container is brought home. Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain. They want a ‘wiring diagram’ of the brain, e.g. a list of every neuron, and for each neuron a second list of the neurons sending synapses to it, and a third list of the neurons it sends synapses to. For the non-neuroscientist — the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana). This is why you need the second and third lists.
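To make the ‘three lists’ concrete, here is a minimal toy sketch (mine, not any connectome project’s format) of a directed wiring diagram in Python; the neuron labels and connections are invented:

```python
from collections import defaultdict

class WiringDiagram:
    """Toy directed wiring diagram: for each neuron, who synapses onto it and who it synapses onto."""
    def __init__(self):
        self.inputs = defaultdict(set)   # neuron -> neurons that send synapses to it (second list)
        self.outputs = defaultdict(set)  # neuron -> neurons it sends synapses to (third list)

    def add_synapse(self, pre, post):
        # synapses communicate (to a first approximation) one way: pre -> post
        self.outputs[pre].add(post)
        self.inputs[post].add(pre)

# invented example
w = WiringDiagram()
w.add_synapse("A", "B")
w.add_synapse("B", "C")
w.add_synapse("C", "A")   # loops are allowed
print(w.inputs["B"], w.outputs["B"])   # {'A'} {'C'}
```

The one-way nature of the edges is exactly why a single undirected list of ‘connections’ won’t do, and why you need the second and third lists.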

Clearly a monumental undertaking and one which grows more monumental with the passage of time. Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them). This is where the neurological urban myth of the loss of 10,000 neurons each day came from. For details see http://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 '11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses. Well, that’s not a mole of synapses but it is a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other to at least get a schematic diagram.
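A quick check of that remark with Avogadro’s number:

$$\frac{1.5 \times 10^{14}\ \text{synapses}}{6.02 \times 10^{23}\ \text{mol}^{-1}} \approx 2.5 \times 10^{-10}\ \text{mol} \approx 0.25\ \text{nanoMole}$$

so roughly a quarter of a nanoMole of synapses; the point about the sheer scale stands either way.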

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it. I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnection between neurons and just how many there are. Figure 2 (p. 179) is particularly revealing showing a 3 dimensional reconstruction using the high resolutions obtainable by the electron microscope. Stare at figure 2.f. a while and try to figure out what’s going on. It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing. There are at least 6 reasons.

1. Synapses, to a first approximation, are excitatory (turn on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses). A wiring diagram alone won’t tell you this.

2. When I was starting out, the following statement would have seemed impossible. It is now possible to watch synapses in the living brain of an awake animal for extended periods of time. But we now know that synapses come and go in the brain. The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times. Here are a few references [ Neuron vol. 69 pp. 1039 - 1041 '11, ibid vol. 49 pp. 780 - 783, 877 - 887 '06 ]. So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses. Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses for them (it’s called volume neurotransmission). Their importance in psychiatry and drug addiction is unparalleled. Examples of such volume transmitters include serotonin, dopamine and norepinephrine. Drugs of abuse affecting their action include cocaine and amphetamine. Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

4. (new addition) A given neuron doesn’t contact another neuron just once, as far as we know. So how do you account for this with a graph (which I think allows only one connection between any two nodes)?

5. (new addition) All connections (synapses) aren’t created equal. Some are far, far away from the part of the neuron (the axon) which actually fires impulses, so many of them have to be turned on at once for firing to occur. So in addition to the excitatory/inhibitory dichotomy, you’d have to put another number on each link in the graph, giving the probability of a given synapse producing an effect. In general this isn’t known for most synapses.

6. (new addition) Some connections never directly cause a neuron to fire or not fire. They just increase or decrease the probability that a neuron will fire or not fire with impulses at other synapses. These are called neuromodulators, and the brain has tons of different ones.

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg. This is a picture of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles. There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse. The construction of the brain from the fertilized egg must be in some sense statistical. Remarkable that it happens at all. Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.

(Addendum 17 July ’14) I’m fortunate enough to have a family member who worked at Bell labs (when it existed) and who knows much more about graph theory than I do. Here are his points and a few comments back

Seventh paragraph: Still don’t understand the purpose of the three lists, or what that buys that you don’t get with a graph model. See my comments later in this email.

“nobody’s brain is strong enough to comprehend it”: At some level, this is true of virtually every phenomenon of Nature or science. We only begin to believe that we “comprehend” something when some clever person devises a model for the phenomenon that is phrased in terms of things we think we already understand, and then provides evidence (through analysis and perhaps simulation) that the model gives good predictions of observed data. As an example, nobody comprehended what caused the motion of the planets in the sky until science developed the heliocentric model of the solar system and Newton et al. developed calculus, with which he was able to show (assuming an inverse-square behavior of the force of gravity) that the heliocentric model explained observed data. On a more pedestrian level, if a stone-age human was handed a personal computer, his brain couldn’t even begin to comprehend how the thing does what it does — and he probably would not even understand exactly what it is doing anyway, or why. Yet we modern humans, at least us engineers and computer scientists, think we have a pretty good understanding of what the personal computer does, how it does it, and where it fits in the scheme of things that modern humans want to do.

Of course we do, that’s because we built it.

On another level, though, even computer scientists and engineers don’t “really” understand how a personal computer works, since many of the components depend for their operation on quantum mechanics, and even Feynman supposedly said that nobody understands quantum mechanics: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Penrose actually did think the brain worked by quantum mechanics, because what it does is non-algorithmic. That’s been pretty much shot down.

As for your six points:

Point 1 I disagree with. It is quite easy to express excitatory or inhibitory behavior in a wiring diagram (graph). In fact, this is done all the time in neural network research!

Point 2: Updating a graph is not necessarily a big deal. In fact, many systems that we analyze with graph theory require constant updating of the graph. For example, those who analyze, monitor, and control the Internet have to deal with graphs that are constantly changing.

Point 3: Can be handled with a graph model, too. You will have to throw in additional edges that don’t represent synapses, but instead represent the effects of neurotransmitters. Will this get to be a complicated graph? You bet. But nobody ever promised an uncomplicated model. (Although uncomplicated — simple — models are certainly to be preferred over complicated ones.)

Point 4: This can be easily accounted for in a graph. Simply place additional edges in the graph to account for the additional connections. This adds complexity, but nobody ever promised the model would be simple. Another alternative, as I mentioned in an earlier email, is to use a hypergraph.

Point 5: Not sure what your point is here. Certainly there is a huge body of research literature on probabilistic graphs (e.g., Markov Chain models), so there is nothing you are saying here that is alien to what graph theory researchers have been doing for generations. If you are unhappy that we don’t know some probabilistic parameters for synapses, you can’t be implying that scientists must not even attempt to discover what these parameters might be. Finding these parameters certainly sounds like a challenge, but nobody ever claimed this was an un-challenging line of research.

In addition to not knowing the parameters, you’d need a ton of them, as it’s been stated frequently in the literature that the ‘average’ neuron has 10,000 synapses impinging on it. I’ve never been able to track this one down. It may be neuromythology, like the 10,000 neurons we’re said to lose every day. With 10,000 adjustable parameters you could make a neuron sing the Star Spangled Banner. Perhaps this is why we can sing the Star Spangled Banner.

Point 6: See my comments on Point 5. Probabilistic graph models have been well-studied for generations. Nothing new or frightening in this concept.
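As a minimal sketch (mine alone, with invented numbers) of how points 1, 4, 5 and 6 might be folded into the kind of graph model being discussed, here is a toy representation in which each edge carries a sign, a strength, and a probability of mattering, and parallel edges are allowed between the same pair of neurons:

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    pre: str          # presynaptic neuron
    post: str         # postsynaptic neuron
    sign: int         # +1 excitatory, -1 inhibitory (point 1)
    weight: float     # relative strength, e.g. how close it sits to the axon (point 5)
    p_effect: float   # probability it actually influences firing (points 5 and 6)

@dataclass
class MultiGraph:
    synapses: list = field(default_factory=list)

    def add(self, syn: Synapse):
        # parallel edges between the same two neurons are allowed (point 4)
        self.synapses.append(syn)

    def net_drive(self, neuron: str) -> float:
        # crude expected net input onto one neuron
        return sum(s.sign * s.weight * s.p_effect
                   for s in self.synapses if s.post == neuron)

g = MultiGraph()
g.add(Synapse("A", "B", +1, 0.8, 0.6))
g.add(Synapse("A", "B", +1, 0.3, 0.9))   # a second contact between the same pair
g.add(Synapse("C", "B", -1, 1.0, 0.5))
print(round(g.net_drive("B"), 2))        # 0.25
```

Representing the points is the easy part; as the exchange above makes clear, the hard part is measuring the hundreds of trillions of parameters such a model would need.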

Here’s a drug target for schizophrenia and other psychiatric diseases

All agree that any drug getting schizophrenics back to normal would be a blockbuster. The more we study its genetics and biochemistry the harder the task becomes. Here’s one target — neuregulin1, one variant of which is strongly associated with schizophrenia (in Iceland).

Now that we know that neuregulin1 is a potential target, why should discovering a drug to treat schizophrenia be so hard? The gene stretches over 1.2 megaBases and the protein contains some 640 amino acids. Cells make some 30 different isoforms by alternative splicing of the gene. Since the gene is so large one would expect to find a lot of single nucleotide polymorphisms (SNPs) in the gene. Here’s some SNP background.

Our genome has 3.2 gigaBases of DNA. Sequencing being what it is, each position has a standard (most common) nucleotide (one of A, T, G, or C). If 5% of the population have any one of the other 3 at this position you have a SNP. By 2004 some 7 MILLION SNPs had been found and mapped to the human genome.

Well it’s 10 years later, and a mere 23,094 SNPs have been found in the neuregulin gene, of which 40 have been associated with schizophrenia. Unfortunately most of them aren’t in regions of the gene which code for amino acids (which is to be expected as 640 * 3 = 1920 nucleotides are all you need for coding out of the 1,200,000 nucleotides making up the gene). These SNPs probably alter the amount of the protein expressed but as of now very little is known (even whether they increase or decrease neuregulin1 protein levels).
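The coding-fraction arithmetic spelled out (my rounding, ignoring splicing subtleties):

$$\frac{640 \times 3}{1.2 \times 10^{6}} = \frac{1920}{1{,}200{,}000} \approx 0.16\%$$

so a SNP landing at random in the gene has well under a 1% chance of changing an amino acid, which is consistent with most of the 40 schizophrenia-associated SNPs falling in presumably regulatory, non-coding sequence.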

An excellent review of Neuregulin1 and schizophrenia is available [ Neuron vol. 83 pp. 27 - 49 '14 ]. You’ll need a fairly substantial background in neuroanatomy, neuroembryology, molecular biology and neurophysiology to understand all of it. Included is some fascinating (but probably incomprehensible to the medicinal chemist) material on the different neurophysiologic abnormalities associated with different SNPs in the gene.

Here are a few of the high points (or depressing points for drug discovery) of the review. Neuregulin1 is a member of a 6 gene family, all fairly similar and most expressed in the brain. All of them have multiple splicing isoforms, so drug selectivity between them will be tricky. Also, SNPs associated with increased risk of schizophrenia have been found in family members 2, 3 and 6 as well, so neuregulin1 may not be the actual target you want to hit.

It gets worse. The neuregulins bind to a family of receptors (the ERBBs) having 4 members. Tending to confirm the utility of the neuregulins as a drug target is the fact that SNPs in the ERBBs are also associated with schizophrenia. So which isoform of which neuregulin binding to which isoform of which ERBB is the real target? Knowledge isn’t always power.

A large part of the paper is concerned with the function of the neuregulins in embryonic development of the brain, leading to the rather depressing thought that the schizophrenic never had a chance, having an abnormal brain to begin with. A drug to reverse such problems seems only a hope.

The neuregulin/ERBB system is only one of many genes which have been linked to schizophrenia. So it looks like a post from 4 years ago on schizophrenia is largely correct — http://luysii.wordpress.com/2010/04/25/tolstoy-was-right-about-hereditary-diseases-imagine-that/

Happy hunting. It’s a horrible disease and well worth the effort. We’re just beginning to find out how complex it really is. Hopefully we’ll luck out, as we did with the phenothiazines, the first useful antipsychotics.

The perfect aphrodisiac?

We’re off to London for a few weeks to celebrate our 50th Wedding Anniversary. As a parting gift to all you lovelorn organic chemists out there, here’s a drug target for a new aphrodisiac.

Yes, it’s yet another G Protein Coupled Receptor (GPCR) of which we have 800+ in our genome, and which some 30% of drugs usable in man target (but not this one).

You can read all about it in a leisurely review of “Affective Touch” in Neuron vol. 82 pp. 737 – 755 ’14, and Nature vol. 493 pp. 669 – 673 ’13. The receptor (if the physiological ligand is known, the papers are silent about it) is found on a type of nerve going to hairy skin. It’s called MRGPRB4.

The following has been done in people. Needles were put in a cutaneous nerve, and skin was lightly stroked at rates between 1 and 10 centimeters/second. Some of the nerves respond at very high frequency 50 – 100 impulses/second (50 – 100 Hertz) to this stimulus. Individuals were asked to rate the pleasantness of the sensation produced. The most pleasant sensations produced the highest frequency responses of these nerves.

MRGPRB4 is found on nerves which respond like this (and almost nowhere else as far as is known), so a ligand for it should produce feelings of pleasure. The whole subject of proteins which produce effects when the cell carrying them is mechanically stimulated is fascinating. Much of the work has been done with the hair cells of the ear, which discharge when the hairs are displaced by sound waves. Proteins embedded in the hairs trigger an action potential when disturbed.

Perhaps there is no chemical stimulus for MRGPRB4, just as there isn’t for the hair cells, but even so it’s worth looking for some chemical which does turn on MRGPRB4. Perhaps a natural product already does this, and is in one of the many oils and lotions people apply to themselves. Think of the chemoattractants for bees and other insects.

If you’re the lucky soul who finds such a drug, fame and fortune (and perhaps more) is sure to be yours.

Happy hunting

Back in a few weeks

A huge amount of work will need to be redone

The previous post is reprinted below the —- if you haven’t read it, you should do so now before proceeding.

Briefly, no one had ever bothered to check if subjects were asleep while studying the default mode of brain activity. The paper discussed in the previous post appeared in the 7 May ’14 issue of Neuron.

In the 13 May ’14 issue of PNAS [ Proc. Natl. Acad. Sci. vol. 111 pp. E2066 - E2075 '14 ] a paper appeared on genetic links to default mode abnormalities in schizophrenia and bipolar disorder.

From the abstract: “Study subjects (n = 1,305) underwent a resting-state functional MRI scan and were analyzed by a two-stage approach. The initial analysis used independent component analysis (ICA) in 324 healthy controls, 296 Schizophrenic probands, 300 psychotic bipolar disorder probands, 179 unaffected first-degree relatives of schizophrenic probands, and 206 unaffected first-degree relatives of psychotic bipolar disorder probands to identify default mode networks and to test their biomarker and/or endophenotype status. A subset of controls and probands (n = 549) then was subjected to a parallel ICA (para-ICA) to identify imaging–genetic relationships. ICA identified three default mode networks.” The paper represents a tremendous amount of work (and expense).

No psychiatric disorder known to man has normal sleep. The abnormalities found in the PNAS study may not be in the default mode network at all, but in the way these people were sleeping. So this huge amount of work needs to be repeated. And this is just one paper. As mentioned, a Google search on “default mode network” garnered 32,000,000 hits.

Very sad.

____

How badly are thy researchers, O default mode network

If you Google “default mode network” you get 32 million hits in under a second. This is what the brain is doing when we’re sitting quietly, not carrying out some task. If you don’t know how we measure it using functional MRI, skip to the **** and then come back. I’m not a fan of functional MRI (fMRI); the pictures it produces are beautiful and seductive, and unfortunately not terribly repeatable.

If [ Neuron vol. 82 pp. 695 - 705 '14 ] is true, then all the work on the default network should be repeated.

Why?

Because they found that less than half of the 71 subjects studied were stably awake after 5 minutes in the scanner. I.e., they were actually asleep part of the time.

How can they say this?

They used polysomnography — which simultaneously measures tons of things (eye movements, oxygen saturation, EEG, muscle tone, respiration, pulse) and is the gold standard for sleep studies — on the patients while they were in the MRI scanner.

You don’t have to be a neuroscientist to know that cognition is rather different in wake and sleep.

Pathetic.

****

There are now noninvasive methods to study brain activity in man. The most prominent one is called BOLD, and is based on the fact that blood flow increases way past what is needed with increased brain activity. This was actually noted by Wilder Penfield operating on the brain for epilepsy in the 30s. When the patient had a seizure on the operating table (they could keep things under control by partially paralyzing the patient with curare) the veins in the area producing the seizure turned red. Recall that oxygenated blood is red while the deoxygenated blood in veins is darker and somewhat blue. This implied that more blood was getting to the convulsing area than it could use.

BOLD depends on slight differences in the way oxygenated hemoglobin and deoxygenated hemoglobin interact with the magnetic field used in magnetic resonance imaging (MRI). The technique has had a rather checkered history, because very small differences must be measured, and there is lots of manipulation of the raw data (never seen in papers) to be done. 10 years ago functional magnetic imaging (fMRI) was called pseudocolor phrenology.

Some sort of task or sensory stimulus is given and the parts of the brain showing increased hemoglobin + oxygen are mapped out. As a neurologist, I was naturally interested in this work. Very quickly, I smelled a rat. The authors of all the papers always seemed to confirm their initial hunch about which areas of the brain were involved in whatever they were studying. Science just isn’t like that. Look at any issue of Nature or Science and see how many results were unexpected. Results were largely unreproducible. It got so bad that an article in Science 2 August ’02 p. 749 stated that neuroimaging (e.g. functional MRI) has a reputation for producing “pretty pictures” but not replicable data. It has been characterized as pseudocolor phrenology (or words to that effect).

What was going on? The data was never actually shown, just the authors’ manipulation of it. Acquiring the data is quite tricky — the slightest head movement alters the MRI pattern. Also the difference in NMR signal between hemoglobin without oxygen and hemoglobin with oxygen is small (only 1 – 2%). Since the technique involves subtracting two data sets for the same brain region, this doubles the error.
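A rough way to see why the subtraction hurts (my back-of-the-envelope, not from the cited papers): if each of the two data sets carries independent noise of size σ, their difference carries noise of roughly σ√2, while the signal of interest is only 1 – 2% of the raw value S, so

$$\text{relative error of the difference} \approx \frac{\sqrt{2}\,\sigma}{(0.01 \text{ to } 0.02)\,S} \approx (70 \text{ to } 140) \times \frac{\sigma}{S}$$

which is the spirit of the remark above: the noise grows while the quantity actually being measured is tiny, so the relative error balloons.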

Why marihuana scares me

There’s an editorial in the current Science concerning how very little we know about the effects of marihuana on the developing adolescent brain [ Science vol. 344 p. 557 '14 ]. We know all sorts of wonderful neuropharmacology and neurophysiology about delta-9 tetrahydrocannabinol (d9-THC) — http://en.wikipedia.org/wiki/Tetrahydrocannabinol The point of the authors (the current head of the American Psychiatric Association, and the first director of the National (US) Institute of Drug Abuse) is that there are no significant studies of what happens to adolescent humans (as opposed to rodents) taking the stuff.

Marihuana would be the first mind-altering substance NOT to have serious side effects in a subpopulation of people using the drug — something true of just about any drug in medical use, for that matter.

Any organic chemist looking at the structure of d9-THC (see the link) has to be impressed with what a lipid it is — 21 carbons, only 1 hydroxyl group, and an ether moiety. Everything else is hydrogen. Like most neuroactive drugs produced by plants, it is quite potent. A joint has only 9 milliGrams, and smoking undoubtedly destroys some of it. Consider alcohol, another lipid soluble drug. A 12 ounce beer with 3.2% alcohol content has 12 * 28.3 * .032 ≈ 10.8 grams of alcohol — molecular mass of ethanol 46 grams — so the dose is roughly 11/46 of a mole. To get drunk you need more than one beer. Compare that to a dose of .009/300 of a mole of d9-THC.
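Making that comparison explicit (a rough back-of-the-envelope of mine, taking ethanol at 46 grams/mole and d9-THC at the round figure of 300 grams/mole used above):

$$\text{ethanol: } \frac{10.8\ \text{g}}{46\ \text{g/mol}} \approx 0.23\ \text{mol} \qquad\qquad \text{d9-THC: } \frac{0.009\ \text{g}}{300\ \text{g/mol}} = 3 \times 10^{-5}\ \text{mol}$$

a molar dose ratio on the order of 8,000 to 1, before even counting the extra beers needed to get drunk.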

As we’ve found out — d9-THC is so potent because it binds to receptors for it. Unlike ethanol which can be a product of intermediary metabolism, there aren’t enzymes specifically devoted to breaking down d9-THC. In contrast, fatty acid amide hydrolase (FAAH) is devoted to breaking down anandamide, one of the endogenous compounds d9-THC is mimicking.

What really concerns me about this class of drugs is how long they must hang around. Teaching neuropharmacology in the 70s and 80s was great fun. Every year a new receptor for neurotransmitters seemed to be found. In some cases mind benders bound to them (e.g. LSD and a serotonin receptor). In other cases the endogenous transmitters being mimicked by a plant substance were found (the endogenous opiates and their receptors). Years passed, but the receptor for d9-THC wasn’t found. The reason it wasn’t is exactly why I’m scared of the drug.

How were the various receptors for mind benders found? You throw a radioactively labelled drug (say morphine) at a brain homogenate, and purify what it is binding to. That’s how the opiate receptors etc. etc. were found. Why did it take so long to find the cannabinoid receptors? Because they bind strongly to all the fats in the brain, being so incredibly lipid soluble. So the vast majority of stuff bound wasn’t protein at all, but fat. The brain has the highest percentage of fat of any organ in the body — 60%, unless you consider dispersed fatty tissue an organ (which it actually is from an endocrine point of view).

This has to mean that the stuff hangs around for a long time, without any specific enzymes to clear it.

It’s obvious to all that cognitive capacity changes from childhood to adult life. All sorts of studies with large numbers of people have done serial MRIs of children and adolescents as they develop and age. Here are a few references to get you started [ Neuron vol. 72 pp. 873 - 884 '11, Proc. Natl. Acad. Sci. vol. 107 pp. 16988 - 16993 '10, vol. 111 pp. 6774 - 6779 '14 ]. If you don’t know the answer, think about the change in thickness of the cerebral cortex from age 9 to 20. Surprisingly, it gets thinner, not thicker. The effect happens later in the association areas thought to be important in higher cognitive function than in the primary motor or sensory areas. Paradoxical isn’t it? Based on animal work this is thought to be due to pruning of synapses.

So throw a long-lasting retrograde neurotransmitter mimic like d9-THC at the dynamically changing adolescent brain and hope for the best. That’s what the cited editorialists are concerned about. We simply don’t know and we should.

Having been in Cambridge when Leary was just getting started in the early 60’s, I must say that the idea of tune in turn on and drop out never appealed to me. Most of the heavy marihuana users I’ve known (and treated for other things) were happy, but rather vague and frankly rather dull.

Unfortunately as a neurologist, I had to evaluate physician colleagues who got in trouble with drugs (mostly with alcohol). One very intelligent polydrug user MD, put it to me this way — “The problem is that you like reality, and I don’t”.

Just when you thought you understood neurotransmission

Back in the day, the discovery of neurotransmission allowed us to think we understood how the brain worked. I remember explaining to medical students in the early 70s, that the one way flow of information from the presynaptic neuron to the post-synaptic one was just like the flow of current in a vacuum tube — yes a vacuum tube, assuming anyone reading knows what one is. Later I changed this to transistor when integrated circuits became available.

Also the Dale hypothesis as it was taught to me, was that a given neuron released the same neurotransmitter at all its endings. As it was taught back in the 60s this meant that just one transmitter was released by a given neuron.

Retrograde transmission was just a glimmer in the mind’s eye back then. We now know that the post-synaptic neuron releases compounds which affect the presynaptic neuron, the supposed controller of the postsynaptic neuron. Among them are carbon monoxide and the endocannabinoids (i.e. what marihuana is trying to mimic).

In addition there are neurotransmitter receptors on the presynaptic neuron, which respond to what it and other neurons are releasing to control its activity. These are outside the synapse itself. These events occur more slowly than the millisecond responses in the synapse to the main excitatory neurotransmitter of the brain (glutamic acid) and the main inhibitory neurotransmitter (gamma amino butyric acid — aka GABA). Receptors on the presynaptic neuron for the transmitter it’s releasing are called autoreceptors, but the presynaptic terminal also contains receptors for other neurotransmitters.

Well at least, neurotransmitters aren’t released by the presynaptic neuron without an action potential which depolarizes the presynaptic terminal, or so we thought until [ Neuron vol. 82 pp. 63 - 70 '14 ]. The report involves a structure near and dear to the neurologist, the striatum (caudate and putamen — which is striated because the myelinated axons of the internal capsule go through its anterior end giving it a striated appearance).

It is the death of the dopamine containing neurons in the substantia nigra which causes Parkinsonism. They project some of their axons to the striatum. The striatum also gets input from elsewhere (from the cortex, using glutamic acid) and from neurons intrinsic to itself (some of which use acetyl choline as their neurotransmitter — these are called cholinergic interneurons).

The paper makes the claim that the dopamine neurons projecting to the striatum also contain the inhibitory neurotransmitter GABA.

The paper also says that the cholinergic interneurons cause release of GABA by the dopamine neurons — the acetyl choline they release binds to a type of receptor called nicotinic (similar but not identical to the nicotinic receptors which allow our muscles to contract) on the presynaptic terminals of the dopamine neurons of the substantia nigra residing in the striatum. Isn’t medicine and neuroanatomy a festival of terms? It’s why you need a good memory to survive medical school.

The authors used optogenetics (something I don’t have time to explain — but see http://en.wikipedia.org/wiki/Optogenetics ) to selectively stimulate the 1 – 2% of striatal neurons which use acetyl choline as a neurotransmitter. What they found was that only GABA (and not dopamine) was released by the dopamine neurons in response to stimulating this small subset of neurons. Even more amazing, the GABA release occurred without an action potential depolarizing the presynaptic terminal.

This literally stands everything I thought I knew about neurotransmission on its ear. How widespread this phenomenon actually is, isn’t known at this point. Clearly, the work needs to be replicated — extreme claims require extreme evidence.

Unfortunately I’ve never provided much background on neurotransmission for the hapless chemists and medicinal chemists reading this (if there are any), but medicinal chemists must at least have a smattering of knowledge about this, since neurotransmission is involved in how large classes of CNS active drugs work — antidepressants, antipsychotics, anticonvulsants, migraine therapy. There is some background on this here — http://luysii.wordpress.com/2010/08/29/some-basic-pharmacology-for-the-college-student/

The prions within us

Head for the hills. All of us have prions within us, sayeth [ Cell vol. 156 pp. 1127 - 1129, 1193 - 1206, 1206 - 1222 '14 ]. They are part of the innate immune system and help us fight infection. But aren’t all sorts of horrible diseases (Bovine Spongiform Encephalopathy aka BSE, Jakob Creutzfeldt disease aka JC disease, Fatal Familial Insomnia etc. etc.) due to prions? Yes they are.

If you’re a bit shaky on just what a prion is see the previous post which should get you up to speed — https://luysii.wordpress.com/2014/03/30/a-primer-on-prions/.

Initially there was an enormous amount of contention when Stanley Prusiner proposed that Jakob Creutzfeldt disease was due to a protein forming an unusual conformation, which made other copies of the same protein adopt it. It was heredity without DNA or RNA (although this was hotly contended at the time), but the evidence accumulating over the years has convinced pretty much everyone except Laura Manuelidis (about whom more later). It convinced the Nobel Prize committee at any rate.

JC disease is a rapidly progressive dementia which kills people within a year. Fortunately rare (attack rate 1 per million per year), it is due to a misfolded protein called PrP (unfortunately initially called ‘the’ prion protein, although we now know of many more). Trust me, the few cases I saw over the years were horrible. Despite decades of study, we have no idea what PrP does, and mice totally lacking a functional PrP gene are normal. It is found on the surface of neurons. Bovine Spongiform Encephalopathy was a real scare for a time, because it was feared that you could get it from eating meat from a cow which had it. Fortunately there have been under 200 cases, and none recently.

If you cut your teeth on the immune system being made of antibodies and white cells and little else, you’re seriously out of date. The innate immune system is really the front line against infection by viruses and bacteria, long before antibodies against them can be made. There are all sorts of receptors inside and outside the cell for chemicals found in bacteria and viruses but not in us. Once the receptors have found something suspicious inside the cell, a large protein aggregate (called the inflammasome) forms which activates an enzyme called caspase1, which cleaves the precursor of a protein called interleukin 1 beta, which is then released from some immune cells (no one ever thought the immune system would be simple given all that it has to do). Interleukin 1 beta acts on all sorts of cells to cause inflammation.

There are different types of inflammasomes and the nomenclature of their components is maddening. Two of the sensors for bacterial products (AIM, NLRP2) induce a polymerization of an inflammasome adaptor protein called ASC producing a platform for the rest of the inflammasome, which contains other proteins bound to it, along with caspase1 whose binding to the other proteins activates it. (Terrible sentence, but things really are that complicated).

ASC, like most platform proteins (scaffold proteins), is made of many different modules. One module in particular is called pyrin (because one of the cardinal signs of inflammation is fever). Here’s where it gets really interesting — the human pyrin domain in ASC can replace the prion domain of the first yeast prion to be discovered (Sup35 aka [ PSI+ ] — see the above link if you don’t know what these are) and still have it function as a prion in yeast. Even more amazing is the fact that the yeast prion domain can functionally replace ASC modules in our inflammasomes and have them work (read the references above if you don’t believe this — I agree that it’s paradigm destroying). Evidence for human prions just doesn’t get any better than this. Fortunately, our inflammasome prions are totally unrelated to PrP, which can cause such havoc with the nervous system.

Historical note: Stanley Prusiner was a year behind me at Penn Med graduating in ’67. Even worse, he was a member of my med school fraternity (which was more a place to get a decent meal than a social organization). Although I doubtless ate lunch and dinner with him before marrying in my Junior year, I have absolutely no recollection of him. I do remember our class’s medical Nobel — Mike Brown. Had I gone to Yale med instead of Penn, Laura Manuelidis would have been my classmate. Small world
