Category Archives: Neurology & Psychiatry

The four hour cure for depression: what is Ketamine doing?

It is a sad state of affairs when you look forward to writing a post on depression.

https://www.fox32chicago.com/news/79-shot-15-fatally-over-fourth-of-july-weekend-in-chicago

https://www.straitstimes.com/asia/south-asia/indian-grooms-wedding-funeral-leave-more-than-100-infected-with-coronavirus

From Nature, 2 July — “G4, a type of swine flu virus from China, can proliferate in human airway cells.  34/338 pig farm workers in China have antibodies to it.  In ferrets G4 causes lung inflammation and coughing.”

Well that’s enough reason to flee to the solace of the basic neuroscience of depression.


The drugs we use for depression aren’t great.  They don’t help at least a third of the patients, and they usually take several weeks to work for endogenous depression.  They seemed to work faster in my MS patients who had a relapse and were quite naturally depressed by an exogenous event completely out of their control.

Enter ketamine which, when given IV, can transiently lift depression within a few hours.  You can find more details and references in an article in Neuron (vol. 101 pp. 774 – 778 ’19) written by the guys at Yale who did some of the original work.  Here’s the gist of the article.  A single dose of ketamine produced antidepressant effects that began within hours, peaked at 24 – 72 hours, and dissipated within 2 weeks (if ketamine wasn’t repeated).  This occurred in 50 – 75% of people with treatment-resistant depression.  Remarkably, one third of treated patients went into remission.

This simply has to be telling us something very important about the neurochemistry of depression.

Naturally there has been a lot of work on the neurochemical changes produced by ketamine, none of which I’ve found convincing ( see https://luysii.wordpress.com/2019/10/27/how-does-ketamine-lift-depression/ ) until the following paper [ Neuron  vol. 106 pp. 715 – 726 ’20 ].

In what follows you need some basic knowledge of synaptic structure, but here’s a probably inadequate elevator pitch.  Synapses have two sides, pre- and post-.  On the presynaptic side, neurotransmitters are enclosed in synaptic vesicles.  Their contents are released into the synaptic cleft when an action potential arrives from elsewhere in the neuron.  The neurotransmitters flow across the very narrow cleft and bind to receptors on the postsynaptic side, triggering (or not) a response in the postsynaptic neuron.  Presynaptic terminals vary in the number of vesicles they contain.

Synapses are able to change their strength (how likely an action potential is to produce a postsynaptic response).  Otherwise our brains wouldn’t be able to change and learn anything.  This is called synaptic plasticity.

One way to change the strength of a synapse is to adjust the number of synaptic vesicles found on the presynaptic side.   Presynaptic neurons form synapses with many different neurons.  The average neuron in the cerebral cortex is post-synaptic to thousands of neurons.

We think that synaptic plasticity involves changes at particular synapses but not at all of them.

Not so with ketamine, according to the paper.  It changes the number of presynaptic vesicles at all synapses of a given neuron by the same percentage — this is called synaptic scaling.  Given 3 synapses containing 60, 50 and 40 vesicles, upward synaptic scaling by 20% would add 12 vesicles to the first, 10 to the second and 8 to the third.  The paper states that this is exactly what ketamine does to neurons using glutamic acid (the major excitatory neurotransmitter found in brain).  Even more interesting is the fact that lithium, which treats mania, has the opposite effect, decreasing the number of vesicles at each synapse by the same percentage.
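To make the arithmetic concrete, here is a minimal sketch in Python (the numbers are from the example above; this just illustrates multiplicative scaling and is not anything from the paper):

```python
# Multiplicative synaptic scaling: every synapse of the neuron changes by the
# same percentage, so the relative strengths of the synapses are preserved.
def scale_synapses(vesicle_counts, percent):
    factor = 1 + percent / 100.0
    return [round(n * factor) for n in vesicle_counts]

before = [60, 50, 40]
after = scale_synapses(before, 20)
print(after)                                   # [72, 60, 48]
print([a - b for a, b in zip(after, before)])  # [12, 10, 8] vesicles added
```

Note that scaling preserves the ratios between synapses (72:60:48 = 60:50:40), which is presumably why it can shift a neuron’s overall excitability up or down without erasing information stored in relative synaptic weights.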

I found this rather depressing when I first read it, as I realized that there was no chemical process intrinsic to a neuron which could possibly work quickly enough to change all the synapses at once.  To do this you need a drug which goes everywhere at once.

But you don’t. There are certain brain nuclei which send their processes everywhere in the brain.  Not only that, but their processes contain varicosities which release their neurotransmitter even where there is no post-synaptic apparatus.  One such nucleus (the pars compacta of the substantia nigra) has such extensively ramified processes that “Individual neurons of the pars compacta are calculated to give rise to 4.5 meters of axons once all the branches are summed”  — [ Neuron vol. 96 p. 651 ’17 ].  So when that single neuron fires, dopamine is likely to bathe every neuron in the brain.  We think that something similar occurs in the locus coeruleus of the brainstem, which has only 15,000 neurons and releases norepinephrine, and also in the raphe nuclei of the brainstem, which release serotonin.

It should come as no surprise that drugs which alter neurotransmission by these neurotransmitters are used to treat various psychiatric diseases.  Some drugs of abuse alter them as well (cocaine and speed release norepinephrine, LSD binds one of the serotonin receptors, etc., etc.)

The substantia nigra contains only 450,000 neurons at birth, so you don’t need a big nucleus to affect our 80 billion neuron brains.

So the question before the house is: have we missed other nuclei in the brain which control volume neurotransmission by glutamic acid?   If they exist, could their malfunction be a cause of mania and/or depression?  There is plenty of room for 10,000 to 100,000 neurons to hide in an 80 billion neuron brain.

Time to think outside the box, people. Here is an example: since ketamine blocks activation of one receptor for glutamic acid, could there be a system using volume neurotransmission which releases a receptor inhibitor?

Addendum 7 July — I sent a copy of the post to the authors and received this back from one of them. “Thank you very much for your kind words and interest in our work. Your explanation is quite accurate (my only suggestion would be to replace “vesicles” with “receptors”, as the changes we propose are postsynaptic). Reading your blog reassures us that our review article accomplished its main goal of reaching beyond the immediate neuroscience community to a wider audience like yourself.”


Functional MRI research is a scientific sewer — take 2

You’ve heard of P-hacking — slicing and dicing your data until you get a statistically significant result.  I wrote a post about null-hacking — https://luysii.wordpress.com/2019/12/22/null-hacking-reproducibility-and-its-discontents-take-ii/.  Welcome to the world of pipeline hacking.  Here is a brief explanation of the highly technical field of functional magnetic resonance imaging (fMRI).   Skip to the **** if you know this already.

Chemists use MRI all the time, but they call it Nuclear Magnetic Resonance. Docs and researchers quickly changed the name to MRI because no one would put their head in something with Nuclear in the name.

There are now noninvasive methods to study brain activity in man. The most prominent one is called BOLD (Blood Oxygen Level Dependent), and is based on the fact that blood flow increases way past what is needed with increased brain activity. This was actually noted by Wilder Penfield operating on the brain for epilepsy in the 1930s. When a patient had a seizure on the operating table (they could keep things under control by partially paralyzing the patient with curare) the veins in the area producing the seizure turned red. Recall that oxygenated blood is red while the deoxygenated blood in veins is darker and somewhat blue. This implied that more blood was getting to the convulsing area than it could use.

BOLD depends on slight differences in the way oxygenated hemoglobin and deoxygenated hemoglobin interact with the magnetic field used in magnetic resonance imaging (MRI). The technique has had a rather checkered history, because very small differences must be measured, and there is lots of manipulation of the raw data (never seen in papers) to be done. Ten years ago fMRI was called pseudocolor phrenology.

Some sort of task or sensory stimulus is given and the parts of the brain showing increased hemoglobin + oxygen are mapped out. As a neurologist as far back as the 90s, I was naturally interested in this work. Very quickly, I smelled a rat. The authors of all the papers always seemed to confirm their initial hunch about which areas of the brain were involved in whatever they were studying.

****

Well now we know why.  The brain has a volume of about 1,200 cubic centimeters (1,200,000 cubic millimeters).  Each voxel of an MRI (like a pixel on your screen) is about 1 cubic millimeter, and basically gives you a number for how much energy is absorbed in that voxel.  The data produced by an MRI are so extensive and complex that computer programs (called pipelines) must be used to process them into those pretty pictures you see.

Enter Nature vol. 582 pp. 36 – 37, 84 – 88 ’20 and the Neuroimaging Analysis Replication and Prediction Study (NARPS).  70 different teams were given the raw data from 108 people, each of whom performed one of two versions of a task used to study decision making under risk.  The teams were asked to analyze the data to test 9 different hypotheses about which part of the brain should light up in relation to a specific feature of the task.

Now when a doc orders a hemoglobin from the lab, he’s pretty sure the result will be the same wherever it’s done, because every lab determines hemoglobin by the same method.  Not so for functional MRI.  All 70 teams analyzed the data using different pipelines and workflows.

Was there agreement?  On average, 20% of the teams reported a result different from the majority (random would be 50%).  Remember, they all got the same raw data.

From the News and Views commentary on the paper:

“It is unfortunately common for researchers to explore various pipelines to find the version that yields the ‘best’ results, ultimately reporting only that pipeline and ignoring the others.”

This explains why I smelled a rat 30 years ago.  I call this pipeline hacking.
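To see how much room pipeline choices leave for mischief, here is a toy sketch (pure noise, no real fMRI data, and nothing like a real analysis pipeline): even when the data contain no signal at all, an arbitrary threshold setting determines how many voxels ‘light up’.

```python
# Toy demonstration: pure-noise 'voxel' data. The number of 'active' voxels
# is determined entirely by an arbitrary pipeline setting (the threshold).
import numpy as np

rng = np.random.default_rng(0)
voxels = rng.normal(0.0, 1.0, size=1_200_000)  # ~1.2 million voxels, as above

for threshold in (2.0, 2.5, 3.0):              # three equally 'reasonable' choices
    n_active = int(np.sum(voxels > threshold))
    print(f"threshold {threshold}: {n_active:,} voxels 'active'")
```

Run it and pure noise yields anywhere from roughly 1,600 to 27,000 ‘active’ voxels depending on the threshold alone, and a real pipeline has dozens of such knobs (smoothing, motion correction, registration, the statistical model), which is exactly the freedom NARPS exposed.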

Further infelicities in the field can be found in the following posts:

1. It was shown in 2014 that 70% of people having functional MRIs (fMRIs) were asleep during the test, and that until then fMRI researchers hadn’t checked for it. For details please see
https://luysii.wordpress.com/2014/05/18/how-badly-are-thy-researchers-o-default-mode-network/. You don’t have to go to med school to know that the brain functions quite differently in wake and sleep.

2. A devastating report in [ Proc. Natl. Acad. Sci. vol. 113 pp. 7699 – 7700, 7900 – 7905 ’16 ] showed that certain common settings in 3 software packages (SPM, FSL, AFNI) used to analyze fMRI data gave false positive results ‘up to’ 70% of the time. Some 3,500 of the 40,000 fMRI studies in the literature over the past 20 years used these settings. The paper also notes that a bug (now corrected after being used for 15 years) in one of them also led to false positive results.  For details see — https://luysii.wordpress.com/2016/07/17/functional-mri-research-is-a-scientific-sewer/

In fairness to the field, the new work and #1 and #2 represent attempts by workers in fMRI to clean it up.   They’ve got a lot of work to do.

The brain is far more wired up than we thought

The hardest thing in experimental psychology is being smart enough to think of something truly clever.  Here is a beauty, showing that the brain is far more wired together than we ever thought.

First some background.  You’ve probably heard of the blind spot (although you’ve never seen it).  It’s the part of your eye where all the nerve fibers from the sensory part of the eye (the retina) are collected together, forming the optic nerve.  Through an ophthalmoscope it appears as a white oval (largest diameter under 2 millimeters).  It’s white because it’s all nerve fibers (1,000,000 of them) with no sensory retina overlying it.  So if you shine a very narrow light on it, you’ll see nothing.   That’s the blind spot.

Have a look at https://en.wikipedia.org/wiki/Visual_system. Both eyes project to both halves of the brain.  Because the blind spot is off to one side in your visual field, the other eye maps a different part of its retina to the same area of the brain.  But if you patch that eye, the cortical representation of the blind spot on one side of the brain gets no input.


 In the healthy visual system, the cortical representation of the blind spot (BS) of the right eye receives information from the left eye only (and vice versa). Therefore, if the left eye is patched, the cortex corresponding to the BS of the right eye is deprived of its normal bottom-up input.

Proc. Natl. Acad. Sci. vol. 117 pp. 11059 – 11067 ’20 https://www.pnas.org/content/pnas/117/20/11059.full.pdf

Hopefully you’ll be able to follow the link and look at figure 1 p. 11060 which will explain things.

Patching the left eye deprives that area of visual cortex of any input at all.

Here comes the crux of the paper — within minutes of patching the left eye, the cortical representation of that spot begins to widen.  It starts responding to stimuli from areas outside its usual receptive field.

Nerves just don’t grow that fast, so the connections have to have been there to begin with.   So the brain is more wired together than we thought.  Perhaps this is just true of the visual system.

If not, the work has profound implications for neurologic rehabilitation.

I do apologize for not being able to explain this better, but the work is sufficiently important that you should know about it.

Addendum 4 June — here’s another shot at explaining things.

    • As you look straight ahead, light falls on the part of your retina with the highest spatial resolution (the macula). The blind spot due to the optic nerve lies closer to your nose, which means that in the right eye, the retina surrounding the blind spot ‘sees’ light coming from toward your ear. Light from the same direction (your right ear) will NOT fall on the optic nerve of your left eye (which is toward your nose), so information from that area gets back to the brain (which is why you don’t see your blind spot).

      Now visual space (say looking toward the right) is sent back to the brain coherently, so that areas of visual space transmitted by either eye go to the same place in the brain.

      So if you now cover your left eye, there is an area of the brain (corresponding to the blind spot of the right eye) which is getting no information from the retina at all. So it is effectively blind. Technology permits us to actively stimulate the retina anywhere we want. We also have ways to measure the activity of the brain in any small area (functional MRI). Activity increases with visual input.

      Now with the left eye patched, stimulate with light directed at the right eye’s blind spot. Nothing happens (no increase in activity) in the cortical area representing that part of the visual field. It isn’t getting any input. So it is possible to accurately map the representation of the right eye’s blind spot in the brain in terms of the brain areas responding to it.

      Next, visually stimulate the right eye with light hitting the retina adjacent to the right eye’s blind spot. Initially the blind spot area of the brain shows no activity. After just a few minutes, the area of the brain for the right eye’s blind spot begins to respond to stimuli it never responded to initially. This implies that those two areas of the brain have connections between them that were always there, as new nerve processes just don’t grow that fast.

      To be clever enough to think of a way to show this is truly brilliant. Bravo to the authors.


Do glia think?

Do glia think, Dr. Gonatas?  This was part of an exchange between G. Milton Shy, head of neurology at Penn, and Nick Gonatas, a brilliant neuropathologist who worked with Shy as the two of them described new disease after new disease in the 60s (myotubular (centronuclear) myopathy, nemaline myopathy, mitochondrial myopathy and oculopharyngeal muscular dystrophy).

Gonatas was claiming that a small glial tumor caused a marked behavioral disturbance, and Shy was demurring.  Just after I graduated, the Texas Tower shooting brought the question back up in force — https://en.wikipedia.org/wiki/University_of_Texas_tower_shooting.

A recent paper [ Neuron vol. 105 pp. 954 – 956, 1036 – 1047 ’20] gives good evidence that glia are more than the janitors and the maintenance crew of the brain.

Glia cover most synapses (so neurotransmitter there doesn’t leak out, I thought) giving rise to the term tripartite synapse (presynaptic terminal + postsynaptic membrane + glial covering).

Here’s what they studied.  The cerebral cortex projects some of its axons (which use glutamic acid as a neurotransmitter) to a much studied nucleus in animals (the nucleus accumbens).  This is synapse #1. The same nucleus gets a projection of axons from the brainstem ventral tegmental area (VTA), which uses dopamine as a neurotransmitter.  However, the astrocytes (a type of glia) covering synapse #1 have the D1 dopamine receptor (there are 5 different dopamine receptors) on them.  It isn’t clear if the dopamine neurons actually synapse (synapse #2) on the astrocytes, or whether the dopamine just leaks out of the synaptic cleft to the covering glia.

Optogenetic stimulation of the VTA dopamine neurons results in an elevation of calcium in the astrocytes (a sign of stimulation). Chemogenetic activation of these astrocytes depresses the presynaptic terminals of the neurons projecting to the nucleus accumbens from the cerebral cortex.  How does this work?  Stimulated astrocytes release ATP or its breakdown product adenosine, which binds to the A1 purinergic receptor on the presynaptic terminal of the cortical projection.

So what?

The following sure sounds like the astrocyte here is critical to brain function.  Activation of the astrocyte D1 receptor contributes to the locomotor hyperactivity seen after an injection of amphetamine.

Dopamine is intimately involved in reward, psychosis, learning and other processes (antipsychotics and drugs for hyperactivity manipulate it).  That the humble astrocyte is involved in dopamine action takes it out of the maintenance crew and puts it in to management.

A final note about Dr. Shy.  He was a brilliant and compelling teacher, and instead of the usual 1% of a medical school class going into neurology, some 5% of ours did.  In 1967 he ascended to the pinnacle of American neurology at the time, the chair at Columbia University.  Sadly, he died the month he assumed the chair.  Scuttlebutt has it that he misdiagnosed his own heart attack as ‘indigestion’ and was found dead in his chair.

Do orphan G Protein Coupled Receptors self-stimulate?

Self-stimulation is frowned on in the Bible (Genesis 38:8 – 10), but one important G Protein Coupled Receptor (GPCR) may actually do it.  At least 1/3 of the drugs in clinical use manipulate GPCRs, and we have lots of them (at least 826 of our 20,000 protein coding genes according to PNAS vol. 115 p. 12733 ’18).  However, only 360 or so are not involved in smell, and for one third of those we have no idea what the natural ligand actually is (Cell vol. 177 p. 1933 ’19).  These are the orphan GPCRs, and they make a juicy target for drug discovery (if only we knew what they did).

One orphan GPCR goes by the name of GPR52. It is found on neurons carrying the D2 dopamine receptor.  GPR52 binds to the G(s) family of G proteins, stimulating the production of cAMP (which would antagonize dopamine signaling), enough to stimulate (if not self-stimulate) any neuropharmacologist.

Which brings us to the peculiar behavior of GPR52, as shown by Nature vol. 579 pp. 142 – 147 ’20.  The second extracellular loop (ECL2) folds into what would normally be the binding site for an exogenous ligand (the orthosteric site).  Well, it could be protecting the site from inappropriate ligands.  But it isn’t, as removing or mutating ECL2 decreases the activity of GPR52 (e.g. less cAMP is produced).  Pharmacologists have produced a synthetic GPR52 agonist (called c17).  However, it binds to a side pocket in the 7 transmembrane region of the GPCR.   This is interesting in itself, as no such site is known in any of the other GPCRs studied.

Most GPCRs have some basal (constitutive) activity, spontaneously coupling to their G proteins, but the constitutive activity of GPR52 is quite high, so c17 only slightly increases the rise in cAMP that GPR52 normally produces.

This might be an explanation for other orphan GPCRs — like a hermaphrodite they could be self-fertilizing.

Musical dyslexia

Back in the day, we were all shocked that the worst reader in our class, a girl who’d been left back, picked up a spelling mistake in our high school yearbook “The Lighththouse”.

Which brings me to the Ravel Piano Trio where I’m having the same problem she did. It’s probably one of the hardest works for piano trio in existence, and even with an hour a day on the first two movements I’m only making slow headway at best.  Even the violinist, a conservatory graduate, finds it difficult.

Normally when a pianist  looks at a score, chords and scales leap out, and you don’t have to look at every note in a Beethoven sonata to know what key he’s writing in.  Not so with Ravel.

In the second movement there is a sequence of 15 chords in which the right hand plays 4 notes, the left 2 or 3.  Here’s the first chord:

left hand: F double sharp, C sharp, E — right hand: F double sharp, A sharp, D sharp, F sharp (the F sharp is about an octave higher than the lower F double sharp).

You can’t look at that and know what to play — you’ve got to finger every note in order and hope that your brain will remember it the second time around.

All 15 of these chords must be played in about 10 seconds or less.

This is what life must have been like for the dyslexic girl back then (the diagnosis didn’t exist in the 50s).  No wonder she was left back, if she had to figure out every word letter by letter.

Like me with the Ravel, after figuring out what one word (chord) was, she probably forgot what the multiple words of the sentence (the music in the chord sequence) were trying to say.

You did notice the misspelling, didn’t you? If not, here it is — lightHThouse.  Most readers just look at the first few letters, recognize the word and move on, just like a pianist playing Mozart or Beethoven.  Not the poor girl back then, or me with the Ravel.

How can it be like that?

The following quote is from an old book on LISP programming (Let’s Talk LISP) by Laurent Siklossy: “Remember, if you don’t understand it right away, don’t worry. You never learn anything, you only get used to it.”

Unlike quantum mechanics, where Feynman warned never to ask ‘how can it be like that’, those of us in any area of biology should always  be asking ourselves that question.  Despite studying the brain and its neurons for years and years and years, here’s a question I should have asked myself (but didn’t, and as far as I can tell no one has until this paper [ Proc. Natl. Acad. Sci. vol. 117 pp. 4368 – 4374 ’20 ] ).

It’s a simple enough question.  How does a neuron know which receptor to put at a given synapse, given that all neurons in the CNS have both excitatory and inhibitory synapses on them?  Had you ever thought about that?  I hadn’t.

Remember many synapses are far away from the cell body.  Putting a GABA receptor at a glutamic acid synapse would be less than useful.

The paper used a rather bizarre system to at least try to answer the question.  Vertebrate muscle cells respond to acetylcholine.  The authors bathed embryonic skeletal muscle cells (before innervation) with glutamic acid, and sure enough, glutamic acid receptors appeared.

There’s a lot in the paper about transcription factors and mechanism, which is probably irrelevant to the CNS (muscle nuclei underlie the neuromuscular junction).   Even if you send receptors for many different neurotransmitters everywhere in a neuron, how is the correct one inserted at a given synapse and the rest kept out?

I’d never thought of this.  Had you?


Amyloid

Amyloid goes way back, and scientific writing about it has had various zigs and zags, starting with Virchow (1821 – 1902), who named it because he thought it was made out of sugar.  For a long time it was defined by the way it looks under the microscope: birefringent when stained with Congo red (which came out 100 years ago), long before we knew much about protein structure (Pauling didn’t propose the alpha helix until 1951).

Birefringence itself is interesting.  Light moves at different speeds through different materials — which is why your legs look funny when you stand in shallow water.  The ratio of the speed of light in vacuum to its speed in the material is the refractive index.   Birefringent materials have two different refractive indices, depending on the orientation (polarization) of the light passing through them.  So when amyloid is present in fixed tissue on a slide, you see beautiful colors — for pictures and much more please see — https://onlinelibrary.wiley.com/doi/full/10.1111/iep.12330
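In symbols (these are textbook optics definitions, nothing specific to amyloid): if c is the speed of light in vacuum and v its speed in the material, then

\[
n = \frac{c}{v}, \qquad \Delta n = n_1 - n_2
\]

where n_1 and n_2 are the indices seen by the two polarizations, and the difference Δn is the birefringence.  For Congo red stained amyloid, the alignment of the dye molecules along the regularly stacked fibrils is thought to be what produces the two different indices.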

So there has been a lot of confusion about what amyloid is and isn’t, and even the exemplary Derek Lowe got it wrong in a recent post of his:

“It needs to be noted that tau is not amyloid, and the TauRx’s drug has failed in the clinic in an Alzheimer’s trial.”

But Tau fibrils are amyloid, and prions are amyloid and the Lewy body is made of amyloid too, if you subscribe to the current definition of amyloid as something that shows a cross-beta pattern on Xray diffraction — https://www.researchgate.net/figure/Schematic-representation-of-the-cross-b-X-ray-diffraction-pattern-typically-produced-by_fig3_293484229.

Take about 500 dishes and stack them on top of each other: that’s the rough dimension of an amyloid fibril.  Each dish is made of a beta sheet.  Xray diffraction was used to characterize amyloid because no one could dissolve it and study it by Xray crystallography.

Now that we have cryoEM, we’re learning much more.  I have gone on and on about how miraculous it is that proteins have one or a few shapes — https://luysii.wordpress.com/2010/08/04/why-should-a-protein-have-just-one-shape-or-any-shape-for-that-matter/

So prion strains and the fact that alpha-synuclein amyloid aggregates produce different clinical disease despite having the same amino acid sequence was no surprise to me.

But it gets better.  The prion strains etc. may not be due to different structures but to different decorations of the same structure by protein modifications.

The same is true for the different diseases that tau amyloid fibrils produce — never mind that they’ve been called neurofibrillary tangles and not amyloid, they have the same cross-beta structure.

A great paper [ Cell vol. 180 pp. 633 – 644 ’20 ] shows how different the tau protofilament of one disease (corticobasal degeneration) is from that of another (Alzheimer’s disease).  Figure 3 shows the chain as it meanders around, forming one ‘dish’ of the model above.  The meander is quite different in corticobasal degeneration (CBD) and Alzheimer’s.

It’s all the stuff tacked on. Tau is modified on its lysines (some 15% of all amino acids in the beta-sheet-forming part) by ubiquitination, acetylation and trimethylation, and on serines by phosphorylation.

Figure 3 is worth more of a look, because it shows how different the post-translational modifications of the same amino acid stretch of the tau protein are in Alzheimer’s and CBD.  Why has this not been seen before?  Because the amyloid used to be treated with pronase and other enzymes to get better pictures on cryoEM.  Isn’t that amazing?  Someone is probably looking to see if this explains prion strains.

The question arises: is the chain structure in space different because of the modifications, or are the modifications there because the chain structure in space is different?  This could go either way.  We have 500+ enzymes (protein kinases) putting phosphate on serine and/or threonine, each recognizing a particular protein conformation around those residues so they don’t phosphorylate everything — ditto for the enzymes that put ubiquitin on proteins.

Fascinating times.  Imagine something as simple as pronase hiding all this beautiful structure.


4 Interesting papers

Here are brief summaries of 4 recent very interesting papers, each of which may be the subject of a future post (now that I’m not as worried about the effects of the Wuhan flu on family members over in Hong Kong).  They are likely behind a paywall, unfortunately.

1. Watching an extruded endoplasmic reticulum tubule cut a P-body in half. Very significant as we begin to appreciate the phase transitions going on in our cells — for an overview of this see — https://luysii.wordpress.com/2018/12/16/bye-bye-stoichiometry/.

The paper(s) itself [ Science vol. 367 pp. 507 – 508, 537, eaay7108 ’20 ]

2. Watching microglia caress the cell body (soma) of neurons [ Science vol. 367 pp. 510 – 511, 528 – 537 ’20 ].  They’re actually rather creepy, extending processes and feeling up neurons, removing synapses from processes.  They use receptors for ATP and ADP to detect when a neuron is in trouble.  A new cellular specialization is described — Somatic Purinergic Junctions — a combination of mitochondria, reticular membrane structures, vesicle-like membrane structures and clusters of a particular voltage gated potassium channel (Kv2.1)

3. The ubiquitin wars inside a macrophage invaded by TB [ Nature vol. 577 pp. 682 – 688 ’20 ].  Ubiquitin was initially thought to be a tag marking a protein for destruction.  It’s much more complicated than that.  A host E3 ubiquitin ligase (ANAPC2, a core subunit of the anaphase promoting complex/cyclosome) promotes the attachment of lysine #11 linked ubiquitin chains to lysine #76 of the TB protein Rv0222.  In some way this helps Rv0222 suppress the expression of proinflammatory cytokines.

4. FACT (FAcilitates Chromatin Transcription) is a heterodimer of two proteins [ Nature vol. 577 pp. 426 – 431 ’20 ].  If you’ve ever wondered how the monstrously large holoenzyme of RNA polymerase II manages to work its way around the nucleosome while copying one strand, you need to know about FACT, which basically grabs the disclike nucleosome (with DNA wrapped around it twice), grabs both H2A-H2B dimers, and holds them outside while pol II passes.  You have to wonder which came first, the nucleosome or FACT. Neither would be of much use by itself.  Probably they grew up together, but it’s hard to envision the intermediate stages.

Should your teen use marihuana?

Is marihuana bad for teen brain development?  The short answer is no one knows.  The long answer can be found here — https://www.pnas.org/content/117/1/7.  It’s probably the best thing out there on the question [ Proc. Natl. Acad. Sci. vol. 117 pp. 7 – 11 ’20 ].  The article basically says we don’t know, but lays out the cons (of which there are many) and the pros (of which there are equally many).

If you’re not a doc, reading the article with its conflicting arguments harmful vs. nonharmful, and then deciding what to tell your kid is very close to what practicing medicine is like.  Important decisions are to be made, based on very conflicting data, and yet the decisions can’t be put off.  Rote memory is of no use and it’s time to think and think hard.

Assuming you don’t have a PNAS subscription, or you can’t follow the link here are a few points the article makes.

It starts off with work on rats. “Tseng, based at the University of Illinois in Chicago, investigates how rats respond to THC (tetrahydrocannabinol), the main psychoactive ingredient in cannabis. He’s found that exposure to THC or similar molecules during a specific window of adolescence delays maturation of the prefrontal cortex (PFC), a region involved in complex behaviors and decision making”

Pretty impressive, but not if you’ve spent decades watching various treatments for stroke which worked in rodents crash and burn when applied to people (there are at least 50 such studies).  What separates us from rodents physically (if not morally) is our brains.  Animal studies, with all their defects of applicability to man, are one of the two approaches we have — no one is going to randomize a bunch of 13 year olds to receive marihuana or not and watch what happens.

== Addendum 9 Jan ’20 — too good to pass up — Science vol. 367 pp. 83 – 87 ’20 shows just how different we are from rodents.  In addition to our cerebral cortex being 3 times thicker, human cortical neurons show something not found in any other mammal: graded action potentials in apical dendrites, important because they allow single neurons to compute XORs (true when exactly one of a and b is true), something previously thought possible only for ensembles of neurons.  XORs are important in Boolean algebra, hence in computation. ==
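If Boolean functions aren’t your thing, here is the XOR truth table spelled out in a few lines of Python (just the logic, nothing about dendrites):

```python
# XOR: true when exactly one of the two inputs is true.
for a in (False, True):
    for b in (False, True):
        print(f"a={a}, b={b} -> a XOR b = {a != b}")
```

Only the (False, True) and (True, False) rows come out True: either input, but not both.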

The other approach is observational studies on people, which have led us down the garden path many times — see the disaster the women’s health study avoided here — https://luysii.wordpress.com/2016/08/23/the-plural-of-anecdote-is-not-data-in-medicine-at-least/.

A classic example: 45,000 Swedish military conscripts were examined at conscription (age 19) and 15 years later.  Those who had used cannabis over 50 times before conscription were 6 times as likely to be diagnosed with schizophrenia.

Against that, is the fact that cannabis use has exploded since the 60s but schizophrenia has not (remaining at a very unfortunate 1% of the population).

In the Dunedin study, cannabis use by age 15 was associated with a fourfold risk of schizophrenia at age 26 (but not if cannabis use started after age 16) — https://en.wikipedia.org/wiki/Dunedin_Multidisciplinary_Health_and_Development_Study.

You can take the position that all the drugs we use to alter mental state (coffee, cigarettes, booze, marihuana, cocaine, heroin) are medicating underlying conditions which we don’t like.  Perhaps marihuana use is just a marker for people susceptible to schizophrenia.  In Mol. Psychiat. vol. 19 pp. 1201 – 1204 ’14, 2,000 healthy adults were studied, looking at genome variants known to increase the risk of schizophrenia.  Those with high risk variants were ‘more likely’ to use marihuana — not having read the actual paper, I don’t know how much more.

There is a lot more in the article about the effects of cannabis on cognition and cognitive development — the authors note that ‘they have not replicated well’.  You’ll have to read the text (which you can get by following the link) for this.

One hope for the future is the ABCD study (Adolescent Brain Cognitive Development study).  By 2018 it reached its goal of enrolling 10,000 kids between the ages of 9 and 10.  They will be followed for a decade (probably longer if the results are interesting).  It’s the hope for the future — but that doesn’t tell you what to say to your kid now.  Read the article, use your best judgement, and welcome to the world of the physician.

What is sad, is how little the field has advanced, since I wrote the (rather technical) post on marihuana in 2014.

Here it is below

Why marihuana scares me

There’s an editorial in the current Science concerning how very little we know about the effects of marihuana on the developing adolescent brain [ Science vol. 344 p. 557 ’14 ]. We know all sorts of wonderful neuropharmacology and neurophysiology about delta-9 tetrahydrocannabinol (d9-THC) — http://en.wikipedia.org/wiki/Tetrahydrocannabinol. The point of the authors (the current head of the American Psychiatric Association and the first director of the National (US) Institute of Drug Abuse) is that there are no significant studies of what happens to adolescent humans (as opposed to rodents) taking the stuff.

Marihuana would be the first mind-altering substance NOT to have serious side effects in a subpopulation of people using the drug — or just about any drug in medical use, for that matter.

Any organic chemist looking at the structure of d9-THC (see the link) has to be impressed with what a lipid it is — 21 carbons, only 1 hydroxyl group, and an ether moiety. Everything else is hydrogen. Like most neuroactive drugs produced by plants, it is quite potent. A joint has only 9 milligrams, and smoking undoubtedly destroys some of it. Consider alcohol, another lipid-soluble drug. A 12 ounce beer with 3.2% alcohol content has 12 * 28.3 * .032 = 10.8 grams of alcohol; with a molecular mass of 46 grams/mole, the dose is about 10.8/46 = 0.23 moles. To get drunk you need more than one beer. Compare that to a dose of .009/314 = 0.00003 moles of d9-THC.
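Here is that back-of-the-envelope arithmetic as a few lines of Python (the molecular masses are the standard values; the doses are the rough figures above):

```python
# Rough molar dose comparison: ethanol in one beer vs. d9-THC in one joint.
OZ_GRAMS   = 28.3     # grams per ounce
ETHANOL_MW = 46.07    # g/mol, C2H5OH
THC_MW     = 314.5    # g/mol, delta-9-tetrahydrocannabinol

ethanol_g   = 12 * OZ_GRAMS * 0.032   # 12 oz beer at 3.2% -> ~10.9 g
ethanol_mol = ethanol_g / ETHANOL_MW  # ~0.24 mol

thc_g   = 0.009                       # ~9 mg in a joint
thc_mol = thc_g / THC_MW              # ~2.9e-5 mol

print(f"ethanol: {ethanol_mol:.2f} mol, THC: {thc_mol:.1e} mol")
print(f"ratio: about {ethanol_mol / thc_mol:,.0f} to 1")  # ~8,000-fold
```

On a molar basis the THC dose is roughly four orders of magnitude smaller than the ethanol dose, which is what ‘quite potent’ means.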

As we’ve found out, d9-THC is so potent because it binds to receptors for it. Unlike ethanol, which can be a product of intermediary metabolism and is handled by existing enzymes, d9-THC has no enzymes specifically devoted to breaking it down. In contrast, fatty acid amide hydrolase (FAAH) is devoted to breaking down anandamide, one of the endogenous compounds d9-THC is mimicking.

What really concerns me about this class of drugs is how long they must hang around. Teaching neuropharmacology in the 70s and 80s was great fun. Every year a new receptor for neurotransmitters seemed to be found. In some cases mind benders bound to them (e.g. LSD and a serotonin receptor). In other cases the endogenous transmitters being mimicked by a plant substance were found (the endogenous opiates and their receptors). Years passed, but the receptor for d9-THC wasn’t found. The reason it wasn’t is exactly why I’m scared of the drug.

How were the various receptors for mind benders found? You throw a radioactively labelled drug (say morphine) at a brain homogenate, and purify what it is binding to. That’s how the opiate receptors etc. were found. Why did it take so long to find the cannabinoid receptors? Because d9-THC, being so incredibly lipid soluble, binds strongly to all the fats in the brain. So the vast majority of bound label wasn’t on protein at all, but on fat. The brain has the highest percentage of fat of any organ in the body — 60% — unless you consider dispersed fatty tissue an organ (which it actually is, from an endocrine point of view).

This has to mean that the stuff hangs around for a long time, without any specific enzymes to clear it.

It’s obvious to all that cognitive capacity changes from childhood to adult life. All sorts of studies with large numbers of people have done serial MRIs of children and adolescents as they develop and age. Here are a few references to get you started [ Neuron vol. 72 pp. 873 – 884 ’11, Proc. Natl. Acad. Sci. vol. 107 pp. 16988 – 16993 ’10, vol. 111 pp. 6774 – 6779 ’14 ]. If you don’t know the answer, think about the change in thickness of the cerebral cortex from age 9 to 20. Surprisingly, it gets thinner, not thicker. The effect happens later in the association areas thought to be important in higher cognitive function than in the primary motor or sensory areas. Paradoxical, isn’t it? Based on animal work, this is thought to be due to pruning of synapses.

So throw a long-lasting retrograde neurotransmitter mimic like d9-THC at the dynamically changing adolescent brain and hope for the best. That’s what the cited editorialists are concerned about. We simply don’t know and we should.

Having been in Cambridge when Leary was just getting started in the early 60s, I must say that the idea of ‘turn on, tune in, drop out’ never appealed to me. Most of the heavy marihuana users I’ve known (and treated for other things) were happy, but rather vague and frankly rather dull.

Unfortunately as a neurologist, I had to evaluate physician colleagues who got in trouble with drugs (mostly with alcohol). One very intelligent polydrug user MD, put it to me this way — “The problem is that you like reality, and I don’t”.