Category Archives: Medicine in general

An experiment of nature

Yesterday’s post https://luysii.wordpress.com/2014/10/15/ebola/ concerned the fact that 2 nurses taking care of a patient in Texas had been infected (presumably even after taking all the recommended precautions). Given that, I was concerned about the possibility of airborne spread.

Bryan wrote in to say the following:

“It seems doubtful airborne spread was involved. Remember, the Texas patient was initially sent home after showing symptoms, yet none of his family members were infected. Only those health workers directly involved in his care (and thus exposed to infected bodily fluids) have been infected, consistent with the idea that the disease can be transmitted only through contact with infected bodily fluids.”

I certainly hope he is right.

In something right out of a novel, the possibility of airborne spread is now going to be empirically tested, as one of the two infected nurses flew to Cleveland, and then back to Texas in the 24 hours prior to her diagnosis. She apparently had a slight fever on boarding. So 100+ people were in a confined space with her for a few hours.

It’s why I don’t read fiction — reality is far more fantastic than anything writers can produce.

One more bizarre development. Here in Massachusetts, legislators today are scheduled to hear about the readiness of the state’s hospitals to handle Ebola. Amazingly, they will only get input from hospital CEOs. No nurses, thank you. Naturally the nurses are pissed as they should be (and so should you if you live in the state). If there were ever a time to hear from boots on the ground about Ebola readiness, it is now.

Addendum 17 Oct ’14

The Obama administration has just appointed a former chief of staff for former vice-president Gore and present vice-president Biden as the “Ebola czar”. Presumably, not for his medical expertise but for his ability to coordinate various governmental agencies, which was hardly the problem in the CDC’s response to the Texas cases. Hopefully, this will not be another case of “Brownie, you’re doing a heck of a job,” but I’m not optimistic — http://en.wikipedia.org/wiki/Michael_D._Brown

Now for some molecular biology. The genome of Ebola is RNA which mutates much more rapidly than DNA genomes. It does this so quickly that at death from AIDS (another RNA virus), there are so many viral variants present that the infecting ensemble is called a quasiSpecies. With a large population infected in Africa there is more Ebola virus extant than at any time in the past. There is some reason to hope that natural selection for a more transmissible form of Ebola in the large infected human population will not occur (the AIDS virus hasn’t become more infectious over the years). This is only a hope.

Ebola

This morning (15 October) it was announced that a second health care worker at the Texas hospital where an Ebola patient died has ‘tested positive’ for it. If Ebola can spread in a hospital environment where presumably precautions were taken, once it gets out into the populace at large it can spread much faster. This had to be human to human transmission — no other animal vector is involved here (as there probably is in Africa).

How does it spread? We don’t know, but the two Texas cases probably imply that airborne spread is possible.

What to do?

In our case it means not getting into a confined space with over 100 people we don’t know from all over the world for an 8 – 16 hour period (e.g. an international flight). Have you ever been on a flight where no one had a cold?

For the USA, it should mean banning all flights from endemic countries. This has been the case in the past. My cousin’s wife has a lot of relatives in Brazil because, over 100 years ago, the people on the boat carrying her family had lots of pink eye and the boat was simply turned away.

It should mean caring for Ebola patients in specialized facilities where only they are cared for — e.g. not in a general hospital, since we don’t know how it spreads.

The greatest way to spread the disease (the Hajj — millions of people from all over the world crowded together for days followed by worldwide dispersal) has mercifully just ended before the disease escaped Africa to any extent.

Will ISIS or Al-Qaeda try to bring Ebola to the USA? Of course.

We live in a society where children have supervised play dates, and where walking unattended to school is almost considered child abuse. What will happen to such a risk-averse society when there is actual risk to going out to (the mall, the school, to work)?

The thermodynamic subtlety of cholera

Who knew that the cholera organism passed a thermodynamics course with flying colors? Consider that it has to function at widely different temperatures (37 C when it infects us, and 20 – 30 C when it’s out in the world). When it infects us it needs to make toxins and build a secretion system to export them. This costs a lot of metabolic money (ATP). Clearly there’s no point in doing this at temperatures outside the body, and there are a lot of reasons not to — turning on toxin production and building the secretion system involves synthesizing at least 60 different proteins.

If some of the following terms are unfamiliar have a look at https://luysii.wordpress.com/2010/07/07/molecular-biology-survival-guide-for-chemists-i-dna-and-protein-coding-gene-structure/ and follow the links.

How does thermodynamics help the organism turn on these genes at body temperature (37 C in us)? ToxT is a protein which turns on production of the 60 proteins. The mRNA for ToxT is only translated into protein by the ribosome at 37 C.

[ Proc. Natl. Acad. Sci. vol. 111 pp. 14241 - 14246 '14 ] The mRNA for ToxT has what the authors call an RNA thermometer in its untranslated region. It is just a sequence of nucleotides which pairs with the Shine Dalgarno (SD) element (http://en.wikipedia.org/wiki/Shine-Dalgarno_sequence) of the ToxT mRNA, tying it up so the SD element can’t bind to the ribosome, meaning the mRNA for ToxT can’t be translated into protein. Guess what? The thermometer only binds to the SD element at low temperatures; at higher temperatures the binding is unstable, leaving the SD sequence free and turning on synthesis of ToxT, which then turns on the 60 proteins involved in toxin production. Clever, no?
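For those who like numbers, here is a minimal sketch (in Python) of how such a thermometer behaves as a switch, treating the pairing that sequesters the SD element as a simple two-state (paired/melted) structure. The ΔH and ΔS values below are invented for illustration — chosen so the melting midpoint lands near 37 C — and are not measured parameters for the ToxT thermometer.

import math

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical pairing parameters (illustrative only): enthalpy and entropy
# of hairpin formation chosen so the midpoint dH/dS falls near 310 K (37 C).
DH_FOLD = -300e3   # J/mol, pairing is exothermic
DS_FOLD = -968.0   # J/(mol*K), pairing is ordering

def fraction_sd_free(T_kelvin):
    """Fraction of mRNAs whose SD element is unpaired (two-state model)."""
    dG_open = -(DH_FOLD - T_kelvin * DS_FOLD)  # free energy cost of melting
    K = math.exp(-dG_open / (R * T_kelvin))    # [open]/[closed]
    return K / (1.0 + K)

for celsius in (20, 25, 30, 37, 40):
    f = fraction_sd_free(celsius + 273.15)
    print(f"{celsius:2d} C: {100*f:5.1f}% of SD elements free for the ribosome")

The point is the steepness: going from 25 C to 37 C flips the population from about 1% translatable to about 50% — a respectable molecular thermostat.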

Cholera is a terrible disease, afflicting less developed countries and causing terrible infant mortality. I can’t resist mentioning a completely avoidable epidemic inflicted in the name of risk reduction years ago.

[ Nature vol. 354 p. 255 '91 ] An amazing article places the blame for the cholera epidemic sweeping South America starting in Peru on a misguided application of an Environmental Protection Agency (EPA) study implicating water chlorination as a cause of cancer. During the 80’s Peruvian officials, citing the EPA study, stopped chlorinating many of the wells in Lima. However, others say that the decision might have been based more on economics than on data from the EPA.

It is comforting to know that the 3516 who have died so far have been spared a long bout with cancer.

9 Oct ’14 — Emo wrote the following comment today

The story of Peruvian officials stopping chlorination of the water supply based on the EPA study was debunked in a study published in the Lancet one year after the Nature news story: Swerdlow et al., “Waterborne Transmission of Epidemic Cholera in Trujillo, Peru: Lessons for a Continent at Risk,” Lancet Vol. 340 No. 8810 (July 4, 1992), pgs. 28-33. They never chlorinated the water in Trujillo, the second largest city in the country, because they didn’t believe deep well water needed disinfection and the cost of a chlorinator and chlorine was too much.

Thanks Emo

Can losing one gene do all that? Yes it can — there’s still hope

The Cancer Genome Atlas has dashed our hopes of finding ‘the’ cause of cancer. It has sequenced the genomes of a large number of cancers — the following paper looked at 21 tumor types, sequencing the protein coding parts (exomes) of 4,742 specimens along with those of normal tissues [ Nature vol. 505 pp. 495 - 501 '14 ].

The problem is that lots of mutations have been found in every type of cancer studied this way.

The following is typical — 178 cases of lung cancer (squamous cell variety) were studied. Some 360 mutations in exons, 165 genomic rearrangements, and 323 copy number alterations were found — but this doesn’t represent the results for the 178 cases as a whole. This was the average amount of genomic mayhem seen in each individual tumor. How do you find ‘the’ cause of the cancer in this mess? One way might be to find a gene mutated in all 178 cases (i.e. recurrent mutations). This would be the holy grail — the mutation driving cancer formation, the rest being the chaff of the well known genomic instability due to the high mutation rate of cancer cells. They found 11 such recurrently mutated genes, but none of them was mutated in all cases. Pretty depressing isn’t it?

A recent paper [ Proc. Natl. Acad. Sci. vol. 111 pp. 14009 - 14010, E4066 - E4075 '14 ] gave an example of a huge number of changes in the clinical activity of a cancer cell line due to the functional loss of just one gene (called COSMC). Here’s what happened. In a pancreatic cancer cell line, COSMC knockout produced malignant xenografts (i.e. the cells were placed in an immunodeficient animal to see what happens), which could be reversed by reintroduction of COSMC. The changes include (1) increased proliferation, (2) loss of contact inhibition of growth, (3) loss of tissue architecture, (4) less basement membrane adhesion and (5) invasive growth — remarkable that knocking out just one gene could do so much. Perhaps not a driver mutation, but certainly a delicious drug target. Before getting too excited, remember that this occurred in a cell line which was cancerous to begin with.

The quick and dirty explanation of what is going on is that COSMC is a protein chaperone for an enzyme adding a sugar to proteins destined either for secretion or for insertion into the cell membrane. Lose COSMC and the whole pattern of sugar attachments to these proteins changes. There are a lot of proteins modified by adding sugars (glycosylated proteins), actually 446 of them, with 1,471 sites for this to happen.

The rest of the post is for the cognoscenti and concerns the gory details.

From the paper itself — “Neoplastic transformation of human cells is virtually always associated with aberrant glycosylation of proteins and lipids.” The most frequently seen glycophenotypes are the Tn and STn carbohydrate epitopes of epithelial cell cancers. They arise when mucin-type O-linked glycans (normally more complex) are truncated so that only a single N-acetylgalactosamine (Tn) or N-acetylgalactosamine modified with sialic acid (STn) remains attached to the protein at a serine or a threonine. There are ‘up to’ 20 GalNAc transferases adding GalNAc to serine or threonine. Overall there are some 200 glycosyltransferases found in the secretory pathway. In most cases the GalNAc is modified with beta 1 –> 3 galactose by a single enzyme (called C1GalT1). This reaction is dependent on COSMC, a protein chaperone.

Although there weren’t mutations in the glycosyltransferases studied in 46 cases of pancreatic cancer, 40% of them showed hypermethylation of COSMC (i.e. methylated cytosines in its promoter region, which shuts down transcription of COSMC). This correlated with expression of truncated O-glycans (i.e. the Tn and STn antigens) and loss of C1GalT1 expression.

Thrust and Parry about memory storage outside neurons

First the post of 23 Feb ’14 discussing the paper (between *** and &&& in case you’ve read it already)

Then some of the rather severe criticism of the paper.

Then some of the reply to the criticisms

Then a few comments of my own, followed by yet another old post about the chemical insanity neuroscience gets into when concepts like concentration are applied to very small volumes.

Enjoy
***
Are memories stored outside of neurons?

This may turn out to be a banner year for neuroscience. Work discussed in the following older post is the first convincing explanation of why we need sleep that I’ve seen: https://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just how, chemically, and where memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories from the past half century — seriously flawed in my opinion — of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory) where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3, … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory.

#2 CaMKII — more plausible. There’s lots of it in brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To first start doing so, calcium levels inside the neuron must rise. The enzyme is complicated, being comprised of 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphates and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).
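The switch-like behavior is worth a toy model (a sketch in Python; all rate constants are invented for illustration, not measured CaMKII kinetics). Phosphorylated subunits help phosphorylate the rest (cooperative autophosphorylation), while a phosphatase steadily removes phosphates; the result is two stable states.

def dP_dt(P, k_auto=1.0, K=0.4, n=4, k_phosphatase=0.5):
    """Rate of change of the phosphorylated fraction P (between 0 and 1).

    Autophosphorylation is cooperative (Hill term); dephosphorylation is linear.
    """
    return k_auto * P**n / (K**n + P**n) * (1 - P) - k_phosphatase * P

# Crude Euler integration from two different starting points.
for P0, label in ((0.05, "brief calcium pulse"), (0.60, "strong calcium pulse")):
    P = P0
    for _ in range(10_000):
        P += 0.01 * dP_dt(P)
    print(f"{label}: starts at {P0:.2f}, settles at {P:.2f}")

A brief pulse decays back to the unphosphorylated state; a strong one pushes the enzyme past a threshold, after which it holds itself on — a memory, at least until the phosphatases or protein turnover win.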

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The other 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true, but in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the brain to the chloride ion. When it flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.
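The Nernst equation shows why the chloride balance sets the sign of GABA’s effect (a sketch in Python; the concentrations below are typical textbook-style values, not numbers from the paper).

import math

R, F = 8.314, 96485.0   # gas constant J/(mol*K), Faraday constant C/mol
T = 310.15              # 37 C in Kelvin
Z_CL = -1               # chloride's charge

def nernst_mV(c_out_mM, c_in_mM, z):
    """Equilibrium (reversal) potential for an ion, in millivolts."""
    return 1000 * (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

V_REST = -65.0  # typical resting potential, mV
for label, cl_in in (("adult neuron", 7.0), ("immature neuron", 30.0)):
    E_Cl = nernst_mV(120.0, cl_in, Z_CL)
    effect = "inhibitory (hyperpolarizing)" if E_Cl < V_REST else "excitatory (depolarizing)"
    print(f"{label}: E_Cl = {E_Cl:6.1f} mV — opening Cl- channels is {effect}")

With low intracellular chloride (the adult situation), opening GABA’s channels drives the membrane below the resting potential; with the higher intracellular chloride of the developing brain, the same channels depolarize.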

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

We’ve concentrated so much on neurons, that we may have missed something big. In a similar vein, the function of sleep may be to wash neurons free of stuff built up during the day outside of them.

&&&

In the 5 September ’14 Science (vol. 345 p. 1130) 6 researchers from Finland, Case Western Reserve and U. California (Davis) basically say that the paper conflicts with fundamental thermodynamics so severely that “Given these theoretical objections to their interpretations, we choose not to comment here on the experimental results”.

In more detail — “If Cl− were initially in equilibrium across a membrane, then the mere introduction of immobile negative charges (a passive element) at one side of the membrane would, according to their line of thinking, cause a permanent change in the local electrochemical potential of Cl−, thereby leading to a persistent driving force for Cl− fluxes with no input of energy.” This essentially accuses the authors of inventing a perpetual motion machine.

Then in a second letter, two more researchers weigh in (same page) — “The experimental procedures and results in this study are insufficient to support these conclusions. Contradictory results previously published by these authors and other laboratories are not referred to.”

The authors of the original paper don’t take this lying down. On the same page they discuss the notion of the Donnan equilibrium and say they were misinterpreted.

The paper and the 3 letters all discuss the chloride concentration inside neurons, which they call [Cl-]i. The problem with this sort of thinking (if you can call it that) is that it extrapolates the notion of concentration to very small volumes (such as a dendritic spine) where it isn’t meaningful. This goes on all the time in neuroscience. While between any two small rational numbers there is another, matter can be sliced only so thinly without getting down to the discrete atomic level. At this level concentration (which is basically a ratio between two very large numbers of molecules, e.g. solute and solvent) simply doesn’t apply.
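To see how fast concentration loses its meaning, just count molecules in a dendritic spine (a sketch in Python; the 0.1 cubic micron volume is an order-of-magnitude figure for illustration, not a measurement).

AVOGADRO = 6.022e23

SPINE_VOLUME_L = 0.1 * 1e-15   # 0.1 cubic micron = 0.1 femtoliter = 1e-16 liters
for molar, label in ((1e-3, "1 mM"), (1e-6, "1 uM"), (1e-7, "100 nM")):
    n = molar * AVOGADRO * SPINE_VOLUME_L
    print(f"{label:>7} in a 0.1 cubic micron spine: {n:10.1f} molecules")

At 1 mM you have some 60,000 molecules and the notion of concentration survives; at 100 nM you have about 6, and speaking of a concentration there is numerology, not chemistry.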

Here’s a post on the topic from a few months ago. It contains a link to another post showing that even Nobelists have chemical feet of clay.

More chemical insanity from neuroscience

The current issue of PNAS contains a paper (vol. 111 pp. 8961 – 8966, 17 June ’14) which uncritically quotes some work done back in the 80’s and flatly states that synaptic vesicles http://en.wikipedia.org/wiki/Synaptic_vesicle have a pH of 5.2 – 5.7. Such a value is meaningless. Here’s why.

A pH of 5 means that there are 10^-5 Moles of H+ per liter or 6 x 10^18 actual ions/liter.

Synaptic vesicles have an ‘average diameter’ of 40 nanoMeters (400 Angstroms to the chemist). Most of them are nearly spherical. So each has a volume of

4/3 * pi * (20 * 10^-9)^3 = 33,510 * 10^-27 = 3.4 * 10^-23 liters. 20 rather than 40 because volume involves the radius.

So each vesicle contains 6 * 10^18 * 3.4 * 10^-23 = 20 * 10^-5 = .0002 ions.

This is similar to the chemical blunders on concentration in the nano domain committed by a Nobelist. For details please see — http://luysii.wordpress.com/2013/10/09/is-concentration-meaningful-in-a-nanodomain-a-nobel-is-no-guarantee-against-chemical-idiocy/

Didn’t these guys ever take Freshman Chemistry?

Addendum 24 June ’14

Didn’t I ever take it? John wrote the following this AM:

Please check the units in your volume calculation. With r = 10^-9 m, then V is in m^3, and m^3 is not equal to L. There’s 1000 L in a m^3.
Happy Anniversary by the way.

To which I responded

Ouch! You’re correct of course. However even with the correction, the results come out to 0.2 free protons (or H3O+) per vesicle, a result that still makes no chemical sense. There are many more protons in the vesicle, but they are buffered by the proteins and the transmitters contained within.
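For concreteness, here is the corrected arithmetic as a short Python script; pH 5 reproduces the 0.2 free protons per vesicle, and the paper’s quoted range of 5.2 – 5.7 gives even fewer.

import math

AVOGADRO = 6.022e23
RADIUS_M = 20e-9   # 40 nanoMeter diameter vesicle -> 20 nanoMeter radius

volume_m3 = (4/3) * math.pi * RADIUS_M**3   # ~3.35e-23 cubic meters
volume_L = volume_m3 * 1000                 # 1 cubic meter = 1000 liters (John's point)

for pH in (5.0, 5.2, 5.7):
    protons_per_L = 10**(-pH) * AVOGADRO
    print(f"pH {pH}: {protons_per_L * volume_L:.2f} free protons per vesicle")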

A very UNtheoretical approach to cancer diagnosis

We have tons of different antibodies in our blood. Without even taking mutation into account we have 65 heavy chain genes, 27 diversity segments, and 6 joining regions for them (making 10,530 possibilities) — then there are 40 genes for the kappa light chains and 30 for the lambda light chains, or over 1,200 * 10,530 combinations. That’s without the mutations we know occur to increase antibody affinity. So the number of antibodies probably rattling around in our blood is over a billion (I doubt that anyone has counted them, just as no one has ever counted the neurons in our brain). Antibodies can bind to anything — sugars, fats — but we think of them as mostly binding to protein fragments.
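Using the post’s own counts (the 1,200 light chain figure is the one quoted above, not an independently derived number), the arithmetic looks like this in Python:

heavy = 65 * 27 * 6   # V x D x J segments = 10,530 heavy chain possibilities
light = 1_200         # the light chain figure quoted above
repertoire = heavy * light
print(f"heavy chains: {heavy:,}; combinatorial repertoire ~ {repertoire:.1e}")  # ~1.3e7
# Somatic hypermutation then multiplies this combinatorial base,
# pushing the circulating repertoire past the billion mark.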

We also know that cancer is characterized by mutations, particularly in the genes coding for proteins. Many of these mutations have never been seen by the immune system, so they act as neoantigens. So what [ Proc. Natl. Acad. Sci. vol. 111 pp. E3072 - E3080 '14 ] did was make a chip containing 10,000 peptides, and see which of them were bound by antibodies in the blood.

The peptides were 20 amino acids long, with 17 randomly chosen amino acids and a common 3 amino acid linker to the chip. While 10,000 seems like a lot of peptides, it is a tiny fraction (actually about 10^-18) of the 20^17 = 2^17 * 10^17 ≈ 1.3 * 10^22 possible 17 amino acid peptides.
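The arithmetic, for anyone who wants to check it, in Python:

POSITIONS = 17
peptide_space = 20 ** POSITIONS   # 20 amino acids possible at each position
chip_peptides = 10_000

print(f"20^17 = {peptide_space:.2e} possible 17-mers")                # ~1.31e22
print(f"fraction on the chip = {chip_peptides / peptide_space:.1e}")  # ~7.6e-19, i.e. ~10^-18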

The blood was first diluted 500x so blood proteins other than antibodies don’t bind significantly to the arrays. The assay is disease agnostic. The pattern of binding of a given person’s blood to the chip is called an immunosignature.

What did they measure? 20 samples from each of five cancer cohorts collected from multiple geographic sites, and 20 noncancer samples. A reference immunosignature was generated. Then 120 blinded samples from the same diseases gave 95% classification accuracy. To investigate the breadth of the approach and test sensitivity, the immunosignatures of 75% of over 1,500 historical samples (some over 10 years old) comprising 14 different diseases were used for training, then the other 25% were read blind with an accuracy of over 98% — not too impressive; they need to get another 1,500 samples. Once you’ve trained on 75% of the sample space, you’d pretty much expect the other 25% to look the same.

The immunosignature of a given individual consists of an overlay of the patterns from the binding signals of many of the most prominent circulating antibodies. Some are present in everyone, some are unique.

A 2002 reference (Molecular Biology of the Cell 4th Edition) states that there are 10^9 antibodies circulating in the blood. How can you pick up a signature on 10K peptides from this? Presumably neoAntigens from cancer cells elicit higher affinity antibodies than self-antigens do. High affinity monoclonals can be diluted hundreds of times without diminishing the signal.

The next version of the immunosignature peptide microArray under development contains over 300,000 peptides.

The implication is that each cancer and each disease produces different antigens and/or different B cell responses to common antigens.

Since the peptides are random, you can’t align the peptides in the signature to the natural proteomic space to find out what the antibody is reacting to.

It’s a completely atheoretical approach to diagnosis, but intriguing. I’m amazed that such a small sample of protein space can produce a significant binding pattern diagnostic of anything.

It’s worth considering just what a random peptide of 17 amino acids actually is. How would you make one up? Would you choose randomly, giving all 20 amino acids equal weight, or would you weight the probability of a choice by the percentage of that amino acid in the proteome of the tissue you are interested in? Do we have such numbers? My guess is that proline, glycine and alanine would be the most common amino acids — there is so much collagen around, and these 3 make up a high percentage of the amino acids in the various collagens we have (over 15 at least).
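Here is a sketch in Python of the two sampling schemes; the 3x weighting for glycine, alanine and proline is a made-up illustrative bias, not a measured proteome frequency.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # one-letter codes for the 20 amino acids

def uniform_peptide(length=17):
    """Every amino acid equally likely at every position."""
    return "".join(random.choices(AMINO_ACIDS, k=length))

# Hypothetical weights: bump glycine, alanine and proline (a collagen-like
# bias), leave the rest equal. Real proteome frequencies would replace these.
WEIGHTS = [3.0 if aa in "GAP" else 1.0 for aa in AMINO_ACIDS]

def weighted_peptide(length=17):
    """Amino acids drawn in proportion to their (assumed) abundance."""
    return "".join(random.choices(AMINO_ACIDS, weights=WEIGHTS, k=length))

print(uniform_peptide())    # e.g. a peptide with no compositional bias
print(weighted_peptide())   # Gly/Ala/Pro show up about 3 times as often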

Bad news on the cancer front

[ Nature vol. 512 pp. 143 - 144, 155 - 160 '14 ] Nuc-seq is an innovative sequencing method which achieves almost complete sequencing of whole genomes in single cells. It sequences DNA from cells about to divide (the G2/M stage of the cell cycle, which has twice the DNA content of the usual cell). The authors sequenced genomes of multiple single cells from two types of human breast cancer (estrogen receptor positive and triple negative — the latter much more aggressive) and found that no two genomes of individual tumor cells were identical. Many cells had new mutations unique to them.

This brings into question what we actually mean by a cancer cell clone. They validated some of the single cell mutations by deep sequencing of a single molecule (not really sure what this is).

Large scale structural changes in DNA (amplification and deletion of large blocks of DNA) occurred early in tumor development. They remained stable as clonal expansion of the tumor occurred (i.e. they were found in all the cancer cells whose genome was sequenced). Point mutations accumulated more gradually, generating extensive subclonal diversity. Many of the mutations occur in less than 10% of the tumor mass. Triple negative breast cancers (aggressive) have mutation rates 13 times greater than the slower growing estrogen receptor positive breast cancer cells.

This implies that the mutations are there BEFORE chemotherapy. This has always been a question, as most types of chemotherapy attack DNA replication and are inherently mutagenic. It also implies that slamming cancer with chemotherapy early, hoping to catch it before it has extensively mutated, is locking the barn door after the horse has been stolen — the diversity is already there at diagnosis. It still might help in preventing metastasis, so the approach remains viable.

However nuc-seq may only be useful for cancer cells without aneuploidy http://en.wikipedia.org/wiki/Aneuploidy which is extremely common in cancer cells.

Why is this such bad news? It means that before chemotherapy even starts there is a high degree of genetic diversity present in the tumor cell population. This means that natural selection (in the form of chemotherapy) has a diverse population to work on at the get go, making resistance far more likely to occur.

Had enough? Here’s more — [ Nature vol. 511 pp. 543 - 550 '14 ] A report of 230 resected lung adenocarcinomas using mRNA, microRNA and DNA sequencing found an incredible 8.8 mutations/megaBase — e.g. 8.8 * 3,200 megaBases (the 3.2 gigaBase genome) ≈ 28,000 mutations. Aberrations in NF1, MET, ERBB2 and RIT1 occurred in 13% and were enriched in samples otherwise lacking an activated oncogene. Even when not mutated, mRNA splicing was different in tumors. As far as oncogenic pathways go, multiple pathways were involved — p53 in 63%, PI3K/mTOR in 25%, receptor tyrosine kinases in 76%, cell cycle regulators in 64%.

This is the opposite side of the coin from the first paper, where the genomes of single tumor cells were sequenced. It is doubtful that all cells have the 28,000 mutations, which probably result from each cell having a subset. The first paper didn’t count how many mutations a single cell had (as far as I could see).

So oncologists are attacking a hydra-headed monster.

The perfect aphrodisiac ?

We’re off to London for a few weeks to celebrate our 50th Wedding Anniversary. As a parting gift to all you lovelorn organic chemists out there, here’s a drug target for a new aphrodisiac.

Yes, it’s yet another G Protein Coupled Receptor (GPCR) of which we have 800+ in our genome, and which some 30% of drugs usable in man target (but not this one).

You can read all about it in a leisurely review of “Affective Touch” in Neuron vol. 82 pp. 737 – 755 ’14, and Nature vol. 493 pp. 669 – 673 ’13. The receptor (if the physiological ligand is known, the papers are silent about it) is found on a type of nerve going to hairy skin. It’s called MRGPRB4.

The following has been done in people. Needles were put in a cutaneous nerve, and the skin was lightly stroked at rates between 1 and 10 centimeters/second. Some of the nerves respond to this stimulus at very high frequency — 50 – 100 impulses/second (50 – 100 Hertz). Individuals were asked to rate the pleasantness of the sensation produced. The most pleasant sensations produced the highest frequency responses of these nerves.

MRGPRB4 is found on nerves which respond like this (and almost nowhere else as far as is known), so a ligand for it should produce feelings of pleasure. The whole subject of proteins which produce effects when the cell carrying them is mechanically stimulated is fascinating. Much of the work has been done with the hair cells of the ear, which discharge when the hairs are displaced by sound waves. Proteins embedded in the hairs trigger an action potential when disturbed.

Perhaps there is no chemical stimulus for MRGPRB4, just as there isn’t for the hair cells, but even so it’s worth looking for some chemical which does turn on MRGPRB4. Perhaps a natural product already does this, and is in one of the many oils and lotions people apply to themselves. Think of the chemoattractants for bees and other insects.

If you’re the lucky soul who finds such a drug, fame and fortune (and perhaps more) is sure to be yours.

Happy hunting

Back in a few weeks

A huge amount of work will need to be redone

The previous post is reprinted below the —- if you haven’t read it, you should do so now before proceeding.

Briefly, no one studying the default mode of brain activity had ever bothered to check whether their subjects were asleep. The paper discussed in the previous post appeared in the 7 May ’14 issue of Neuron.

In the 13 May ’14 issue of PNAS [ Proc. Natl. Acad. Sci. vol. 111 pp. E2066 - E2075 '14 ] a paper appeared on genetic links to default mode abnormalities in schizophrenia and bipolar disorder.

From the abstract “Study subjects (n = 1,305) underwent a resting-state functional MRI scan and were analyzed by a two-stage approach. The initial analysis used independent component analysis (ICA) in 324 healthy controls, 296 schizophrenia probands, 300 psychotic bipolar disorder probands, 179 unaffected first-degree relatives of schizophrenia probands, and 206 unaffected first-degree relatives of psychotic bipolar disorder probands to identify default mode networks and to test their biomarker and/or endophenotype status. A subset of controls and probands (n = 549) then was subjected to a parallel ICA (para-ICA) to identify imaging–genetic relationships. ICA identified three default mode networks.” The paper represents a tremendous amount of work (and expense).

No psychiatric disorder known to man has normal sleep. The abnormalities found in the PNAS study may not be abnormalities of the default mode network, but of the way these people were sleeping. So this huge amount of work needs to be repeated. And this is just one paper. As mentioned, a Google search on Default Networks garnered 32,000,000 hits.

Very sad.

____

How badly are thy researchers, O default mode network

If you Google “default mode network” you get 32 million hits in under a second. This is what the brain is doing when we’re sitting quietly not carrying out some task. If you don’t know how we measure it using functional MRI, skip to the **** and then come back. I’m not a fan of functional MRI (fMRI); the pictures it produces are beautiful and seductive, and unfortunately not terribly repeatable.

If [ Neuron vol. 82 pp. 695 - 705 '14 ] is true, then all the work on the default network should be repeated.

Why?

Because they found that less than half of 71 subjects studied were stably awake after 5 minutes in the scanner. I.e., they were actually asleep part of the time.

How can they say this?

They used polysomnography — the gold standard for sleep studies, which simultaneously measures tons of things (eye movements, oxygen saturation, EEG, muscle tone, respiration, pulse) — on the subjects while they were in the MRI scanner.

You don’t have to be a neuroscientist to know that cognition is rather different in wake and sleep.

Pathetic.

****

There are now noninvasive methods to study brain activity in man. The most prominent one is called BOLD, and is based on the fact that blood flow increases way past what is needed with increased brain activity. This was actually noted by Wilder Penfield operating on the brain for epilepsy in the 30s. When the patient had a seizure on the operating table (they could keep things under control by partially paralyzing the patient with curare) the veins in the area producing the seizure turned red. Recall that oxygenated blood is red while the deoxygenated blood in veins is darker and somewhat blue. This implied that more blood was getting to the convulsing area than it could use.

BOLD depends on slight differences in the way oxygenated hemoglobin and deoxygenated hemoglobin interact with the magnetic field used in magnetic resonance imaging (MRI). The technique has had a rather checkered history, because very small differences must be measured, and there is lots of manipulation of the raw data (never seen in papers) to be done. 10 years ago functional magnetic resonance imaging (fMRI) was called pseudocolor phrenology.

Some sort of task or sensory stimulus is given and the parts of the brain showing increased hemoglobin + oxygen are mapped out. As a neurologist, I was naturally interested in this work. Very quickly, I smelled a rat. The authors of all the papers always seemed to confirm their initial hunch about which areas of the brain were involved in whatever they were studying. Science just isn’t like that. Look at any issue of Nature or Science and see how many results were unexpected. Results were largely unreproducible. It got so bad that an article in Science 2 August ’02 p. 749 stated that neuroimaging (e.g. functional MRI) has a reputation for producing “pretty pictures” but not replicable data. It has been characterized as pseudocolor phrenology (or words to that effect).

What was going on? The data was never actually shown, just the authors’ manipulation of it. Acquiring the data is quite tricky — the slightest head movement alters the MRI pattern. Also the difference in NMR signal between hemoglobin without oxygen and hemoglobin with oxygen is small (only 1 – 2%). Since the technique involves subtracting two data sets for the same brain region, the noise of both sets ends up in the difference (a factor of √2 for independent noise, up to double in the worst case) — on an effect of only 1 – 2%.
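A toy simulation in Python makes the point about subtraction and noise; the 1.5% effect size and the noise level are illustrative, not taken from any actual scanner.

import random
import statistics

random.seed(0)

def scan(signal=100.0, effect=0.0, noise_sd=1.0, n=200):
    """Simulate n measurements of one region: baseline * (1 + effect) + noise."""
    return [signal * (1 + effect) + random.gauss(0, noise_sd) for _ in range(n)]

task = scan(effect=0.015)   # a 1.5% BOLD-like signal change
rest = scan(effect=0.0)
diffs = [t - r for t, r in zip(task, rest)]

print(f"true effect: 1.50   measured: {statistics.mean(diffs):.2f}"
      f" +/- {statistics.stdev(diffs):.2f}")
# The difference inherits noise from BOTH scans (sqrt(2) times a single
# scan's noise when the noise is independent), on an effect of 1.5 in 100.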

