
Trouble in River City (aka Brain City)

300 European neuroscientists are unhappy. If 50,000,000 Frenchmen can’t be wrong, can the neuroscientists be off base? They don’t like the way things are going in a billion-euro project to computationally model the brain (Science vol. 345 p. 127 ’14 11 July, Nature vol. 511 pp. 133 – 134 ’14 10 July). What has them particularly unhappy is that one of the sections, involving cognitive neuroscience, has been eliminated.

A very intelligent Op-Ed in the New York Times of 12 July by a psychology professor notes that we have no theory of how to go from neurons, their connections and their firing of impulses, to how the brain produces thought. Even better, he notes that we have no idea of what such a theory would look like, or what we would accept as an explanation of brain function in terms of neurons.

While going from the gene structure in our DNA to cellular function, from there to function at the level of the organ, and from the organ to the function of the organism is more than hard (see https://luysii.wordpress.com/2014/07/09/heres-a-drug-target-for-schizophrenia-and-other-psychiatric-diseases/), at least we have a road map to guide us. None is available to take us from neurons to thought. The 300 argue that concentrating only on neurons, while producing knowledge, won’t give us the explanation we seek, and that we should progress on all fronts — which the people running the project reject as too diffuse.

I’ve posted on this problem before — I don’t think a wiring diagram of the brain (while interesting) will tell us what we want to know. Here’s part of an earlier post — with a few additions and subtractions.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every one of the ~10^23 molecules of a mole or so of gas in a container is brought home. Instead we need only the average energy of the molecules and the volume they are confined to, to get the pressure and the temperature.
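To make the point concrete, here is a toy calculation (all numbers invented, and far fewer than 10^23 particles): the pressure falls out of a single average, with no need to keep track of any individual molecule. The kinetic-theory relation P = (2/3) N &lt;KE&gt; / V for an ideal gas is the only outside ingredient.

```python
import random

# Toy illustration of the statistical mechanics point: the pressure needs
# only the AVERAGE kinetic energy and the volume, never the individual
# velocities. Kinetic theory: P = (2/3) * N * <KE> / V for an ideal gas.
random.seed(0)
N = 100_000                 # a toy "gas" -- a real mole has ~6e23 molecules
volume = 1.0                # arbitrary units
mass = 1.0
speeds = [random.gauss(0.0, 1.0) for _ in range(N)]  # one velocity component

avg_ke = sum(0.5 * mass * v * v for v in speeds) / N  # a single number
pressure = (2.0 / 3.0) * N * avg_ke / volume
print(avg_ke, pressure)
```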

However, people are taking the first approach in an attempt to understand the brain. They want a ‘wiring diagram’ of the brain, e.g. a list of every neuron, and for each neuron a second list of the neurons it receives input from, and a third list of the neurons it sends output to. For the non-neuroscientist — the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marijuana). This is why you need both the second and third lists.
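For the programmers out there, the three lists amount to a directed graph stored as adjacency lists. A minimal sketch (the neuron names are made up):

```python
# The "wiring diagram" as three lists: each synapse is an ordered
# (presynaptic, postsynaptic) pair, and from those we build, for every
# neuron, the list of its inputs and the list of its outputs.
# Neuron names are hypothetical.
synapses = [("A", "B"), ("A", "C"), ("B", "C")]

neurons = sorted({n for pair in synapses for n in pair})  # list one
outputs = {n: [] for n in neurons}  # list three: whom each neuron talks to
inputs = {n: [] for n in neurons}   # list two: who talks to each neuron
for pre, post in synapses:
    outputs[pre].append(post)
    inputs[post].append(pre)

print(outputs["A"], inputs["C"])  # ['B', 'C'] ['A', 'B']
```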

Clearly a monumental undertaking, and one which grows more monumental with the passage of time. When I was starting out in the 60s, the estimate was that we had about a billion neurons (no one could possibly count each of them). This is where the neurological urban myth of the loss of 10,000 neurons each day came from. For details see http://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 '11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses. Well, that’s not a mole of synapses, but it is about a quarter of a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other to at least get a schematic diagram.
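The arithmetic, for the curious (the synapse count is from the text; Avogadro's number is the only outside ingredient):

```python
# Back-of-envelope check on the synapse count quoted above.
AVOGADRO = 6.022e23
synapses = 150e12                       # 150 trillion
nanomoles = synapses / AVOGADRO * 1e9
print(round(nanomoles, 2))              # 0.25 -- a quarter of a nanoMole
```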

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it. I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnection between neurons and just how many there are. Figure 2 (p. 179) is particularly revealing, showing a 3-dimensional reconstruction using the high resolution obtainable with the electron microscope. Stare at figure 2.f for a while and try to figure out what’s going on. It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing. There are at least 6 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses). A wiring diagram alone won’t tell you which is which.

2. When I was starting out, the following statement would have seemed impossible. It is now possible to watch synapses in the living brain of an awake animal for extended periods of time. But we now know that synapses come and go in the brain. The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early days. Here are a few references [ Neuron vol. 69 pp. 1039 - 1041 '11, ibid vol. 49 pp. 780 - 783, 877 - 887 '06 ]. So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses. Certain neurotransmitters are released diffusely into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses being involved (it’s called volume neurotransmission). Their importance in psychiatry and drug addiction is unparalleled. Examples of such volume transmitters include serotonin, dopamine and norepinephrine. Drugs of abuse affecting their action include cocaine and amphetamine. Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

4. (new addition) As far as we know, a given neuron doesn’t contact another neuron just once. So how do you account for this with a graph (a simple graph allows only one edge between any two nodes)?

5. (new addition) All connections (synapses) aren’t created equal. Some are far, far away from the part of the neuron (the axon) which actually fires impulses, so many of them have to be turned on at once for firing to occur. So in addition to the excitatory/inhibitory dichotomy, you’d have to put another number on each link in the graph — the probability of a given synapse producing an effect. In general this isn’t known for most synapses.

6. (new addition) Some connections never directly cause a neuron to fire or not fire. They just increase or decrease the probability that the neuron will fire in response to impulses at other synapses. These are called neuromodulators, and the brain has tons of different ones.
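For what it's worth, points 1, 4 and 5 can at least be written down in a graph model by hanging attributes on the edges. A toy sketch (all numbers invented — measuring them is the hard part):

```python
from collections import defaultdict

# Each synapse is its own edge, carrying a sign (+1 excitatory,
# -1 inhibitory) and an efficacy; parallel edges handle multiple
# contacts between the same pair of neurons (a multigraph).
edges = defaultdict(list)  # (pre, post) -> list of synapses
edges[("A", "B")].append({"sign": +1, "efficacy": 0.9})
edges[("A", "B")].append({"sign": +1, "efficacy": 0.1})  # a second contact
edges[("C", "B")].append({"sign": -1, "efficacy": 0.5})  # inhibitory

def net_drive(pre, post):
    """Summed signed efficacy of all synapses from pre onto post."""
    return sum(s["sign"] * s["efficacy"] for s in edges[(pre, post)])

print(net_drive("A", "B"), net_drive("C", "B"))
```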

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg. This is a picture of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles. There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse. The construction of the brain from the fertilized egg must be in some sense statistical. Remarkable that it happens at all. Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.

(Addendum 17 July ’14) I’m fortunate enough to have a family member who worked at Bell Labs (when it existed) and who knows much more about graph theory than I do. Here are his points, with a few comments back from me.

Seventh paragraph: Still don’t understand the purpose of the three lists, or what that buys that you don’t get with a graph model. See my comments later in this email.

“nobody’s brain is strong enough to comprehend it”: At some level, this is true of virtually every phenomenon of Nature or science. We only begin to believe that we “comprehend” something when some clever person devises a model for the phenomenon that is phrased in terms of things we think we already understand, and then provides evidence (through analysis and perhaps simulation) that the model gives good predictions of observed data. As an example, nobody comprehended what caused the motion of the planets in the sky until science developed the heliocentric model of the solar system and Newton et al. developed calculus, with which he was able to show (assuming an inverse-square behavior of the force of gravity) that the heliocentric model explained observed data. On a more pedestrian level, if a stone-age human was handed a personal computer, his brain couldn’t even begin to comprehend how the thing does what it does — and he probably would not even understand exactly what it is doing anyway, or why. Yet we modern humans, at least us engineers and computer scientists, think we have a pretty good understanding of what the personal computer does, how it does it, and where it fits in the scheme of things that modern humans want to do.

Of course we do, that’s because we built it.

On another level, though, even computer scientists and engineers don’t “really” understand how a personal computer works, since many of the components depend for their operation on quantum mechanics, and even Feynman supposedly said that nobody understands quantum mechanics: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Penrose actually did think the brain worked by quantum mechanics, because what it does is nonalgorithmic. That idea has been pretty much shot down.

As for your six points:

Point 1 I disagree with. It is quite easy to express excitatory or inhibitory behavior in a wiring diagram (graph). In fact, this is done all the time in neural network research!

Point 2: Updating a graph is not necessarily a big deal. In fact, many systems that we analyze with graph theory require constant updating of the graph. For example, those who analyze, monitor, and control the Internet have to deal with graphs that are constantly changing.

Point 3: Can be handled with a graph model, too. You will have to throw in additional edges that don’t represent synapses, but instead represent the effects of neurotransmitters. Will this get to be a complicated graph? You bet. But nobody ever promised an uncomplicated model. (Although uncomplicated — simple — models are certainly to be preferred over complicated ones.)

Point 4: This can be easily accounted for in a graph. Simply place additional edges in the graph to account for the additional connections. This adds complexity, but nobody ever promised the model would be simple. Another alternative, as I mentioned in an earlier email, is to use a hypergraph.

Point 5: Not sure what your point is here. Certainly there is a huge body of research literature on probabilistic graphs (e.g., Markov Chain models), so there is nothing you are saying here that is alien to what graph theory researchers have been doing for generations. If you are unhappy that we don’t know some probabilistic parameters for synapses, you can’t be implying that scientists must not even attempt to discover what these parameters might be. Finding these parameters certainly sounds like a challenge, but nobody ever claimed this was an un-challenging line of research.

In addition to not knowing the parameters, you’d need a ton of them, as it’s been stated frequently in the literature that the ‘average’ neuron has 10,000 synapses impinging on it. I’ve never been able to track this one down. It may be neuromythology, like the 10,000 neurons we’re said to lose every day. With 10,000 adjustable parameters you could make a neuron sing the Star Spangled Banner. Perhaps this is why we can sing the Star Spangled Banner.

Point 6: See my comments on Point 5. Probabilistic graph models have been well-studied for generations. Nothing new or frightening in this concept.

“A Troublesome Inheritance” – II – Four Anthropological disasters of the past 100 years

Page 5 of Wade’s book contains two recent pronouncements from the American Anthropological Association stating that “Race is about culture not biology”. It’s time to look at anthropology’s record of the past 100 years. It isn’t pretty.

Start with the influential Franz Boas (1858 – 1942) who taught at Columbia for decades. His most famous student was Margaret Mead.

He, along with his students, felt that the environment was everything for human culture and that heredity had minimal influence. Here’s what Boas did over 100 years ago.

[ Proc. Natl. Acad. Sci. vol. 99 pp. 14622 - 14623, 14436 - 14439 '02 ] Retzius invented the cephalic index in the 1890s. It is just the widest breadth of the skull divided by its front-to-back length. One can be mesocephalic, dolichocephalic or brachycephalic. From this index one could differentiate Europeans by location. Anthropologists continue to take such measurements. In 1910 – 1913 Franz Boas said that the USA-born offspring of immigrants showed a ‘significant’ difference from their immigrant parents in their cephalic index. This was used to reinforce the idea that environment was everything.
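For concreteness, here is the index as a calculation. The category cutoffs used below (75 and 80) are the usual textbook values — an assumption, since the post doesn't give them — and the measurements are made up:

```python
# The cephalic index: widest skull breadth over front-to-back length,
# conventionally reported as a percentage. Cutoffs of 75 and 80 are the
# standard textbook values (an assumption, not from the post).
def cephalic_index(breadth_mm, length_mm):
    return 100.0 * breadth_mm / length_mm

def classify(index):
    if index < 75:
        return "dolichocephalic"  # long-headed
    if index <= 80:
        return "mesocephalic"
    return "brachycephalic"       # broad-headed

ci = cephalic_index(140, 180)      # made-up measurements
print(round(ci, 1), classify(ci))  # 77.8 mesocephalic
```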

Boas made some 13,000 measurements. The paper is a reanalysis of his data showing that he seriously misinterpreted it; some 8,500 of his 13,000 cases were reanalyzed. The genetic component of the variability was far stronger than the environmental one. In a later paper Boas stated that he never claimed that there were NO genetic components to head shape, but his students and colleagues took the ball and ran with it, and Boas never (publicly) corrected them. The heritability was high in the family data, and the differences between ethnic groups persisted in the American environment.

One of Boas’ students wrote that “Heredity cannot be allowed to have acted any part in history.” The chain of events shaping a people “involves the absolute conditioning of historical events by other historical events.” Hardly scientific statements.

On to his most famous student, Margaret Mead (1901 – 1978), who later became head of the American Association for the Advancement of Science (1960). In 1928 she published “Coming of Age in Samoa”, about the sexual freedom of Samoan adolescents. It got a big play, and I was very interested in such matters as a pimply adolescent. It fit the idea that “We are forced to conclude that human nature is almost unbelievably malleable, responding accurately and contrastingly to contrasting cultural conditions.” This certainly fit nicely with the idea that mankind could be reshaped by changing the system — see http://en.wikipedia.org/wiki/New_Soviet_man — one of the many fantasies of the left promoted by academia.

Subsequently, an anthropologist (Freeman) went back to Samoa and concluded that Mead had been hoaxed. He found that Samoans may beat or kill their daughters if they are not virgins on their wedding night. A young man who cannot woo a virgin may rape one to extort her into eloping. The family of a cuckolded husband may attack and kill the adulterer. For more details see Pinker, “The Blank Slate”, pp. 56 and following.

The older among you may remember reading about “the gentle Tasaday” of the Philippines, a Stone Age people who had no word for war. They were featured in the NY Times in the 70s — the noble savages of Rousseau in the 20th century. The 1970 “discovery” of the Tasaday as a “Stone Age” tribe was widely heralded in newspapers, shown on national television in a National Geographic Society program and an NBC special documentary, and further publicized in “The Gentle Tasaday: A Stone Age People in the Philippine Rain Forest” by the American journalist John Nance.

In all, Manuel Elizalde Jr., the son of a rich Filipino family, was depicted as the savior of the Tasaday through his creation of Panamin (from Presidential Assistant on National Minorities), a cabinet-level office to protect the Tasaday and other “minorities” from corrosive modern influences and from environmentally destructive logging companies. It appears that Elizalde hoodwinked almost everybody by paying neighboring T’boli people to take off their clothes and pose as a “Stone Age” tribe living in a cave. He then used the avalanche of international interest and concern for his Tasaday creation to gain control, through Panamin, over “tribal minority” lands and resources, and ultimately to make deals with logging and mining companies.

Last but not least is “The Mismeasure of Man” (1981), in which Stephen Jay Gould tore apart the work of Samuel Morton, a 19th century anthropologist who measured skulls. He accused Morton of (consciously or unconsciously) manipulating the data to come up with the conclusions he desired.

Well, guess what. Someone went back and looked at Morton’s figures, and remeasured some of his skulls (which are still at Penn) and found that the manipulation was all Gould’s not Morton’s. I posted about this when it came out 3 years ago — here’s the link http://luysii.wordpress.com/2011/06/26/hoisting-steven-j-gould-by-his-own-petard/.

Here is the relevant part of that post — A group of anthropologists [ PLoS Biol. doi:10.1371/journal.pbio.1001071;2011 ] went back to Penn (where the skulls in question reside) and remeasured some 300 of them, blinding themselves to the skulls’ ethnic origins as they did. Morton’s measurements were correct. They also had the temerity to actually look at Morton’s papers. They found that, contrary to Gould, Morton did report average cranial capacities for subgroups of both populations, sometimes on the same page as, or on pages near to, figures that Gould quotes — and therefore must have seen. Even worse (see Nature vol. 474 p. 419 ’11), they claim that “Gould misidentified the Native American samples, falsely inflating the average he calculated for that population”. Gould had claimed that Morton’s averages were incorrect.

Perhaps anthropology has gotten its act together now, but given this history, any pronouncements they make should be taken with a lot of salt. In fairness to the field, it should be noted that the debunkers of Boas, Mead and Gould were all anthropologists. They have a heavy load to carry.

“A Troublesome Inheritance” – I

One of the joys of a deep understanding of chemistry is the appreciation of the ways in which life is constructed from the most transient of materials. Presumably the characteristics of living things that we can see (the phenotype) will someday be traceable back to the proteins, nucleic acids, and small metabolites (lipids, sugars, etc.) making us up.

For the time being we must content ourselves with understanding the code (our genes) and how it instructs the development of a trillion-celled organism from a fertilized egg. This brings us to Wade’s book, which has been attacked as racist by anthropologists, sociologists and other lower forms of animal life.

Their position is that races are a social, not a biological, construct and that differences between societies are due to the way they are structured, not to differences in the relative frequency of gene variants (alleles) in the populations making them up. Essentially they are saying that evolution, and its mechanism of descent with modification under natural selection, has not applied to humanity in the 50,000 years since the first modern humans left Africa.

Wade disagrees. His book is very rich in biologic detail, and one post discussing it all would try anyone’s attention span. So I’m going to go through it page by page, commenting on the material within (the way I’ve done for some chemistry textbooks), breaking it up into digestible chunks.

As might be expected, there will be a lot of molecular biology involved. For some background see the posts in https://luysii.wordpress.com/category/molecular-biology-survival-guide/. Start with http://luysii.wordpress.com/2010/07/07/molecular-biology-survival-guide-for-chemists-i-dna-and-protein-coding-gene-structure/ and follow the links forward.

Wade won me over very quickly (on page 3) by his accurate citations to the current literature. He talks about how selection on a mitochondrial protein helped Tibetans live at high altitude (while the same mutation in those living at low altitudes leads to blindness). Some 25% of Tibetans have the mutation, while it is rare among those living at low altitudes.
Here’s my post of 10 June 2012 on the matter. That’s all for now.

Have Tibetans illuminated a path to the dark matter (of the genome)?

I speak not of the Dalai Lama’s path to enlightenment (despite the title). Tall people tend to have tall kids. Eye color and hair color are also hereditary to some extent. Pitched battles have been fought over just how much of intelligence (assuming one can measure it) is heritable. Now that genome sequencing is approaching a price of $1,000/genome, people have started to look at variants in the genome to help them find the genetic contribution to various diseases, in the hope of understanding and treating them better.

Frankly, it’s been pretty much a bust. Height is 80% heritable, yet the 20 leading candidate variants picked up by genome-wide association studies (GWAS) account for only 3% of the variance [ Nature vol. 461 pp. 458 - 459 '09 ]. This has happened again and again, particularly with diseases. A candidate gene (or region of the genome), say for schizophrenia or autism, is described in one study, only to be shot down by the next. This is likely because many different genetic defects can be associated with schizophrenia — there are a lot of ways the brain can fail to work well. For details see http://luysii.wordpress.com/2010/04/25/tolstoy-was-right-about-hereditary-diseases-imagine-that/ or http://luysii.wordpress.com/2010/07/29/tolstoy-rides-again-autism-spectrum-disorder/.

Typically, even when an association of a disease with a genetic variant is found, the variant only increases the risk of the disorder by 2% or less. The bad thing is that when you lump all the variants you’ve discovered together (for something like height) and add up the risk, you never account for more than 50% of the heritability. It isn’t for want of looking, as by 2010 some 600 human GWAS had been published [ Neuron vol. 68 p. 182 '10 ]. Yet lots of the studies have shown various diseases to have a high degree of heritability (particularly schizophrenia). The fact that we’ve been unable to find the DNA variants causing the heritability was totally unexpected. Like the dark matter in galaxies, which we know is there by the way the stars spin around the galactic center, this missing heritability has been called the dark matter of the genome.

Which brings us to Proc. Natl. Acad. Sci. vol. 109 pp. 7391 – 7396 ’12. It concerns an awful disease causing blindness in kids called Leber’s hereditary optic neuropathy. The ’cause’ has been found: a change of one base from thymine to cytosine in the gene for a protein (NADH dehydrogenase subunit 1), causing a change at amino acid #30 from tyrosine to histidine. The mutation is found in mitochondrial DNA, not nuclear DNA, making it easier to find (it occurs at position 3394 of the 16,569-nucleotide mitochondrial DNA).
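The codon arithmetic checks out: a T to C change at the first codon position turns tyrosine into histidine. A minimal sketch (the particular codon shown is an assumption for illustration, not taken from the paper):

```python
# A single T -> C change at the first codon position converts a tyrosine
# codon into a histidine codon (TAT -> CAT). The wild-type codon used
# here is an assumption for illustration, not taken from the paper.
CODON_TABLE = {"TAT": "Tyr", "TAC": "Tyr", "CAT": "His", "CAC": "His"}

wild_type = "TAT"
mutant = "C" + wild_type[1:]  # thymine -> cytosine at position one
print(CODON_TABLE[wild_type], "->", CODON_TABLE[mutant])  # Tyr -> His
```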

Mitochondria in animal cells, and chloroplasts in plant cells, are remnants of bacteria which moved inside cells as we know them today (rest in peace Lynn Margulis).

Some 25% of Tibetans have the 3394 T–>C mutation, but they see just fine. It appears to be an adaptation to altitude, because the same mutation is found in nonTibetans on the Indian subcontinent living above about 1500 meters (about as high as Denver). However, if you have the same genetic change and live below this altitude, you get Leber’s.

This is a spectacular demonstration of the influence of environment on heredity. Granted, the altitude you live at is a fairly impressive environmental variable, but it’s at least possible that more subtle changes (temperature, humidity, air quality, etc.) might also influence disease susceptibility to the same genetic variant. This certainly is one possible explanation for the failure of GWAS to turn up much. The authors make no mention of this in their paper, so these ideas may actually be (drumroll please) original.

If such environmental influences on the phenotypic expression of genetic changes are common, it might be yet another explanation for why drug discovery is so hard. Consider CETP (Cholesterol Ester Transfer Protein) and the very expensive failure of drugs inhibiting it. Torcetrapib was associated with increased deaths in a trial of 15,000 people over 18 – 20 months. Perhaps those dying somehow lived in a different environment. Perhaps others were actually helped by the drug.

Never stop thinking, never stop looking for an angle

Derek Lowe may soon be a very rich man if he owns some Vertex stock. An incredible pair of papers in the current Nature (vol. 505 pp. 492 – 493, 509 – 514 ’14) and Science (vol. 343 pp. 38 – 384, 428 – 432 ’14) has come up with a completely new way of possibly treating AIDS. Instead of attacking the virus, attack the cells it infects, and let them live (or at least die differently).

Now for some background. Cells within us are dying all the time. Red cells die within about four months; the cells lining your gut die within a week and are replaced. None of this causes inflammation — the cells die very quietly and are munched up by white cells. They even send out a signal to the white cells, called an ‘eat me’ signal. The process is called apoptosis. It occurs big time during embryonic development, particularly in the nervous system. Neurons failing to make strong enough contacts effectively kill themselves.

Apoptosis is also called programmed cell death — the cell literally kills itself using enzymes called caspases to break down proteins, and other proteins to break down DNA.

We have evolved other ways for cell death to occur. Consider a cell infected by a bacterium or a virus. We don’t want it to go quietly. We want a lot of inflammatory white cells to get near it and mop up any organisms around. This type of cell death is called pyroptosis. It also uses caspases, but a different set.

You just can’t get away from teleological thinking in biology. We are always asking ‘what’s it for?’ Chemistry and physics can never answer questions like this. We’re back at the Cartesian dichotomy.

Which brings us to an unprecedented way to treat AIDS (or even prevent it).

As anyone conscious for the past 30 years knows, the AIDS virus (aka Human Immunodeficiency Virus 1 aka HIV1) destroys the immune system. It does so in many ways, but the major brunt of the disease falls on a type of white cell called a helper T cell. These cells carry a protein called CD4 on their surface, so for years docs have been counting their number as a prognostic sign, and, in earlier days, to tell them when to start treatment.

We know HIV1 infects CD4 positive (CD4+) T cells and kills them. What the papers show is that this isn’t the way most CD4+ cells die. Most (the papers estimate 95%) CD4+ cells die of an abortive HIV1 infection — the virus gets into the cell, starts making some of its DNA, and then the pyroptosis response occurs, causing inflammation and attracting more and more immune cells, which then get infected.

This provides a rather satisfying explanation of the chronic inflammation seen in AIDS in lymph nodes.

Vertex has a drug, VX-765, which inhibits the caspase responsible for pyroptosis but not those responsible for apoptosis. The structure is available (http://www.medkoo.com/Anticancer-trials/VX-765.html), and it looks like a protease inhibitor. Even better, VX-765 has been used in humans (in phase II trials for something entirely different). It was well tolerated, for 6 weeks anyway. Clearly, a lot more needs to be done before it’s brought to the FDA — how safe is it after a year, what are the long-term side effects? But imagine that you could give this to someone newly infected, with an essentially normal CD4+ count, to literally prevent the immunodeficiency, even if you weren’t getting rid of the virus.

Possibly a great advance. I love the deviousness of it all. Don’t attack the virus, but prevent cells it infects from dying in a particular way.

Never stop thinking. Hats off to those who thought of it.

The death of the synonymous codon – III

The coding capacity of our genome continues to amaze. The redundancy of the genetic code has been put to yet another use. Depending on how much you know, skip the following two links and read on. Otherwise all the background to understand the following is in them.

http://luysii.wordpress.com/2011/05/03/the-death-of-the-synonymous-codon/

http://luysii.wordpress.com/2011/05/09/the-death-of-the-synonymous-codon-ii/

There really was no way around it. If you want to code for 20 different amino acids with only four choices at each position, two positions (4^2 = 16) won’t do. You need three positions, which gives you 64 possibilities (61 after the three stop codons are taken into account), and the redundancy that comes with it. The previous links show how the redundant codons for some amino acids aren’t redundant at all, but are used to code for the speed of translation, or for exonic splicing enhancers and inhibitors. Different codons for the same amino acid can produce wildly different effects while leaving the amino acid sequence of a given protein unchanged.
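The counting argument can be spelled out in a few lines:

```python
from itertools import product

# With 4 bases, two positions give only 4**2 = 16 codons (too few for
# 20 amino acids); three positions give 64, of which 61 remain once the
# three stop codons are removed.
bases = "ACGT"
doublets = len(list(product(bases, repeat=2)))
codons = ["".join(c) for c in product(bases, repeat=3)]
stops = {"TAA", "TAG", "TGA"}
sense = [c for c in codons if c not in stops]
print(doublets, len(codons), len(sense))  # 16 64 61
```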

The following recent work [ Science vol. 342 pp. 1325 - 1326, 1367 - 1367 '13 ] showed that transcription factors bind to the coding sequences of proteins, not just the promoters and enhancers found outside them as we had thought.

The principle behind the DNAaseI protection assay is pretty simple. Any protein binding to DNA protects it against DNAaseI, which chops up unprotected DNA. Then clone and sequence what’s left to see where proteins have bound. These regions are called footprints. They must have removed the histones first, I imagine.

The work performed DNAaseI protection assays on a truly massive scale. They looked at 81 different cell types at nucleotide resolution. They found 11,000,000 footprints all together, about 1,000,000 per cell type. In a given cell type 25,000 were completely localized within exons (the parts of the gene actually specifying amino acids). When all the codons of the genome are looked at as a group, 1/7 of them are found in a footprint in one of the cell types.

The results wouldn’t have been that spectacular had they just looked at a few cell types. How do we know the binding sites contain transcription factors? Because the footprints match transcription factor recognition sequences.

We know that sequences around splice sites are used to code for splicing enhancers and inhibitors. Interestingly, the splice sites are generally depleted of DNAaseI footprints. Remember that splicing occurs after the gene has been transcribed.

At this point it isn’t clear how binding of a transcription factor in a protein coding region influences gene expression.

Just like a work of art, there is more than one way that DNA can mean. Remarkable !

A new parameter for ladies to measure before choosing a mate — testicular volume

I’m amazed that they actually did this work [ Proc. Natl. Acad. Sci. vol. 110 pp. 15746 - 15751 '13 ], but they did — from Atlanta, Georgia, the home of the Southern Gentleman. You do have to wonder what sort of wimps would permit this type of work. 70 such individuals were found, still cohabiting with the mother. Clearly a skewed distribution, as 65/70 were actually married. No mention of any effect of the sex of the offspring on what they found.

Here’s what they did

Testosterone levels and testicular volume predicted how much parenting a male actually did (based on self-reports from the two parents). Functional MRI on viewing a picture of the offspring also predicted the degree of male parenting.

So which way do you think it went?

The bigger the testicles and the higher the testosterone, the less parenting the father did. Similarly, the less activation of one area of the brain in response to a picture of their child, the less parenting.

So ladies, you may get a macho dude for a mate, but don’t expect much help.

Your fetus can hear you

There have been intimations of this. Ten years ago [ Proc. Natl. Acad. Sci. vol. 100 pp. 11702 - 11705 '03 ] full-term infants were shown to respond more to human speech played forward than to the same tape played in reverse. A clever technique called optical topography was used; it is noninvasive and relies on the thinness of the neonatal skull.

Now comes [ Proc. Natl. Acad. Sci. vol. 110 pp. 15145 - 15150 '13 ]. The authors presented variants of words to fetuses; unlike infants with no exposure to these stimuli, the exposed fetuses showed enhanced brain activity (mismatch responses) to pitch changes in the trained variants after birth. (What this means is that there is, again, a noninvasive way of measuring brain activity, and there is greater activity when an unexpected variant is presented.) We are novelty seekers from the get-go.

Also there was a significant correlation between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure. Even more impressive, the learning effect was generalized to other types of similar speech sounds not included in the training material.

We now know that the infant brain gets tuned to the language it hears. Japanese infants can hear the r sound at birth (again, brain electrical activity responds to it), but after 6 to 9 months of listening to a language without the sound, their brains become tuned so they can’t.

So maybe pregnant ladies should listen to Mozart.

The most interesting paper I’ve read in the past 5 years — finale

Recall from https://luysii.wordpress.com/2013/06/13/the-most-interesting-paper-ive-read-in-the-past-5-years-introduction-and-allegro/ that if you knew the ones and zeroes coding for the instruction your computer was currently working on, you’d know exactly what it would do. Similarly, it has long been thought that if you knew the sequence of the 4 letters of the genetic code (A, T, G, C) coding for a protein, you’d know exactly what would happen. The cellular machinery (the ribosome) producing output (a protein in this case) was thought to be an automaton similar to a computer, blindly carrying out instructions. Assuming the machinery is intact, the cellular environment should have nothing to do with the protein produced. Not so. In what follows, I attempt to provide an abbreviated summary of the background you need to understand what goes wrong, and how, even here, environment rears its head.

If you find the following a bit terse, have a look at https://luysii.wordpress.com/category/molecular-biology-survival-guide/. In particular, the earliest 3 articles (Roman numerals I, II and III) should be all you need.

We’ve learned that our DNA codes for lots of stuff that isn’t protein. In fact only 2% of it codes for the amino acids comprising our 20,000 proteins. Proteins are made of sequences of 20 different amino acids, and each amino acid is coded for by a sequence of 3 genetic-code letters (a codon). However, there are 64 possible codons (4 * 4 * 4), and 3 of them tell the machinery to quit (they don’t code for an amino acid). So some amino acids have as many as 6 codons (synonymous codons): leucine (L) has 6 different codons while methionine (M) has but 1. The other 18 amino acids fall somewhere in between.
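The arithmetic of the codon table is easy to check for yourself. Here is a minimal Python sketch; only the leucine, methionine and stop entries of the standard genetic code are written out, since those are the ones mentioned above.

```python
from itertools import product

# Every codon is 3 letters drawn from the 4-letter alphabet, so 4**3 = 64.
codons = [''.join(triplet) for triplet in product('ACGT', repeat=3)]

# A few entries from the standard genetic code (not the full table):
leucine    = {'TTA', 'TTG', 'CTT', 'CTC', 'CTA', 'CTG'}  # 6 synonymous codons
methionine = {'ATG'}                                     # just 1
stop       = {'TAA', 'TAG', 'TGA'}                       # 3 'quit' signals

print(len(codons))              # 64 possible codons
print(len(codons) - len(stop))  # 61 codons left to specify 20 amino acids
```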

The cellular machine making proteins (the ribosome) uses the transcribed genetic code (mRNA) and a (relatively small) adaptor called transfer RNA (tRNA). There are 61 different tRNAs, one for each codon specifying an amino acid (the 3 stop codons are recognized by protein release factors rather than by tRNAs). Each tRNA contains a sequence of 3 letters (the anticodon) which exactly pairs with the codon sequence in the mRNA, the same way the letters (bases if you’re a chemist) in the two strands of DNA pair with each other. Hanging off the opposite end of each tRNA is the amino acid its anticodon refers to. The ribosome basically stitches together the amino acids from two adjacent tRNAs and then gets rid of one tRNA.

So which particular synonymous codon is found in the mRNA shouldn’t make any difference to the final product of the ribosome. That’s what the computer model of the cell tells us.

Since most cells are making protein all the time, there is lots of tRNA around. We need so much tRNA that instead of one gene per tRNA, we have some 500 tRNA genes in our genome, so multiple different genes code for each tRNA. I can’t find out how many of each we have (which would be very nice to know in what follows). The amount of each type of tRNA is roughly proportional to the number of genes coding for it (the gene copy number), according to the papers cited below.

This brings us to codon usage. There are 6 different codons (synonymous codons) for leucine. Are they all used equally (when you look at every codon in the genome which codes for leucine)? They are not. Here are the percentages for the usage of the 6 distinct leucine codons in human DNA: 7, 7, 13, 13, 20, 40. For random use they should each be around 17. The most frequently used codon occurs as often as the 4 least frequently used ones combined.
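The skew in those percentages can be checked in a couple of lines of Python (the numbers are simply the ones quoted above):

```python
# Usage percentages of the 6 synonymous leucine codons in human DNA (as quoted).
usage = [7, 7, 13, 13, 20, 40]

# If usage were random, each codon would account for about 100/6 of the total.
uniform_share = 100 / len(usage)
print(round(uniform_share, 1))  # 16.7

# The single most-used codon matches the 4 least-used ones combined.
print(max(usage), sum(sorted(usage)[:4]))  # 40 40
```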

It turns out that the most used synonymous codons are the ones with the highest number of genes for the corresponding tRNA. Makes sense, as there should be more of that tRNA around (at least in most cases). This is called codon bias, but I can’t seem to find the actual numbers.

This brings us (at last) to the actual paper [ Nature vol. 495 pp. 111 - 115 '13 ] and the accompanying editorial (ibid. pp. 57 – 58). The paper says “codon-usage bias has been observed in almost all genomes and is thought to result from selection for efficient and accurate translation (into protein) of highly expressed genes” (3 references given). Essentially this says that the more tRNA around matching a particular codon, the faster the ribosome will find it (le Chatelier’s principle in action).

An analogy at this point might help. When I was a kid, I hung around a print shop. In addition to high-speed printing, there was also a printing press, where individual characters were selected from boxes of characters, placed in a line (the lead strips between lines are where the typographic term leading comes from), and baked into place using some scary-smelling stuff, so that the same constellation of characters could be used over and over. For details see http://en.wikipedia.org/wiki/Printing_press. You can regard the 6 different tRNAs for leucine as 6 different fonts for the letter L. To set things right, the correct font must be chosen (by the printer or the ribosome). Obviously if a rare font is used, the printer will have to fumble more in the L box to come up with the right one. This is exactly le Chatelier’s principle.

The papers concern a protein (FRQ) used in the circadian clock of a fungus — evolutionarily far from us to be sure, but hang in there. Paradoxically, the FRQ gene uses a lot of ‘rare’ synonymous codons. Given the technology we have presently, the authors were able to switch the ‘rare’ synonymous codons to the most common ones. As expected, the organism made a lot more FRQ using the modified gene.
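To make the manipulation concrete, here is a hypothetical sketch of what switching rare synonymous codons to common ones amounts to. The mapping covers only leucine (CTG is the most-used human leucine codon) and is purely illustrative; it is not the table the authors actually used for the fungal FRQ gene.

```python
# Illustrative only: map each synonymous leucine codon to the most-used one.
PREFERRED = {c: 'CTG' for c in ('TTA', 'TTG', 'CTT', 'CTC', 'CTA', 'CTG')}

def optimize_codons(seq):
    """Replace each codon with its preferred synonym (unchanged if unmapped)."""
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return ''.join(PREFERRED.get(c, c) for c in codons)

# The DNA changes, but the encoded amino acids (Leu-Leu-Met) do not.
print(optimize_codons('TTACTTATG'))  # CTGCTGATG
```

The point of the paper is precisely that this seemingly harmless synonymous swap is not harmless in the cell.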

The fascinating point (to me at least) is that the protein, with exactly the same amino acid sequence, did not fulfill its function in the circadian clock. As expected, there was more of the protein around (it was easier for the ribosomal machinery to make).

Now I’ve always been amazed that the proteins making us up have just a few shapes, something I’d guess happens extremely rarely. For details see http://luysii.wordpress.com/2010/10/24/the-essential-strangeness-of-the-proteins-that-make-us-up/.

Well, as we know, proteins are just linear strings of amino acids, which have to fold into their final shapes. The protein made by codon optimization must not have had the proper shape. Why do we think so? For one thing, the protein is broken down faster. For another, it is less stable after freeze-thaw cycles. For yet another, it just didn’t work correctly in the cell.

What does this mean? Most likely it means that the protein made from codon-optimized mRNA has a different shape. The organism must make the protein more slowly so that it folds into the correct shape. Recall that the amino acid chain is extruded one residue at a time from the ribosome, like sausage from a sausage-making machine. As it’s extruded, the chain (often with help from other proteins called chaperones) flops around and finds its final shape.

Why is this so fascinating (to me at least)? Because here, in the very uterus of biologic determinism, the environment (how much of each type of synonymous tRNA is around) rears its head. Forests have been felled for papers on the heredity vs. environment question. Just as American GIs wrote “Kilroy was here” everywhere they went in WWII, here’s the environment popping up where no one thought it would.

In addition, the implications for protein function, if this is a widespread phenomenon, are simply staggering.

The DSM again

The Diagnostic and Statistical Manual of the American Psychiatric Association (DSM-V) is in the news. The press has not been favorable, nor have two new books concerning it. Here are some links

1. A review of a book on it from today’s Nature (2 May ’13): http://www.nature.com/nature/journal/v497/n7447/full/497036a.html
2. An article in the New York Times today concerning the Nature book and one other (neither favorable): http://www.nytimes.com/2013/05/02/books/greenbergs-book-of-woe-and-francess-saving-normal.html?ref=todayspaper&_r=0

Added 8 May ’13 The US National Institute of Mental Health (NIMH) will no longer use the Diagnostic and Statistical Manual of Mental Disorders (DSM) to guide psychiatric research, NIMH director Thomas Insel announced on 30 April. The manual has long been used as a gold standard for defining mental disorders. Insel described the DSM as ill-suited to scientific studies, and said the NIMH will now support studies that cut across DSM-defined disease categories.

But, as Theodosius Dobzhansky once said, nothing in biology makes sense except in the light of evolution. Keeping that thought in mind, what I wrote a few years ago is relevant today. Here’s the post. Although it starts off in mathematics, it gives some history which helps explain why the DSM is the way it is.

Even so, psychiatric wisdom should be taken with a good deal of salt. A psychiatrist in my medical school class (1966) knew people who were thrown out of their psychiatric residencies because they were gay, and back then homosexuality was a psychiatric disease.

Here’s the post of 3 years ago

Reification in mathematics and medicine

Can you bring an object into existence just by naming and describing it? Well, no one has created a unicorn yet, but mathematicians and docs do it all the time. Let’s start with mathematicians, most of whom are Platonists. They don’t think they’re inventing anything; they’re just describing an external reality that is ‘out there’ but isn’t physical. A language is also a nonphysical reality of sorts, but when the last person who knows a language dies, so does the language. It will never reappear, even though people do invent new languages, as the experience with deaf Nicaraguan children has shown [ Science vol. 293 pp. 1758 - 1759 '01 ]. Mathematics, by contrast, has been developed independently multiple times all over the world, and it’s always the same. The subject matter is out there, and not just a social construct as some say.

A fascinating book, “Naming Infinity” describes a Russian school of mathematicians who extended set theory beyond the work of the French and Germans. They literally believed that describing a mathematical object and its properties implied that the object existed (assuming the properties were consistent). The mathematicians involved were also very devout mystical Christians, who were called “Name Worshippers”. They thought that repeatedly invoking the name of Jesus would allow them to reach an ecstatic state. The rather contentious theory of the book is that their religious stance allowed them to imbue all names with powerful properties which could bring what they named into existence and this led to their extensions of set theory. Naturally the Communists hated them, and exterminated many (see p. 126). People possessed of all absolute truths dislike those possessed of a different set.

Docs bring diseases into existence all the time simply by naming them. This is why the new DSM-V (Diagnostic and Statistical Manual of Mental Disorders) of the American Psychiatric Association (APA) is so important. Is homosexuality a disease? Years ago the APA thought it was. If your teenager won’t do what you want, is this “Adolescent Defiant Disorder”? Is it a disease? It will be if the DSM-V says it is.

There are a lot of things wrong with what the DSM has become (297 disorders in 886 pages in DSM-IV), but the original impetus for the major shift that occurred with DSM-III in the 70s was excellent. So it’s time for a bit of history. Prior to that time, it was quite possible for the same individual to go to 3 psychiatric teaching hospitals in New York City and get 3 different diagnoses. Why? Because diagnosis was based on the reconstruction of the psychodynamics of the case. Just as there is no single way to interpret “Stopping by Woods on a Snowy Evening” (see the previous post), there isn’t one for a case history. Freud’s case studies are great literature, but someone else would write up the case differently.

The authors of the DSM-III decided to be more like medical docs than shrinks. In our usual state of ignorance, we docs define diseases by how they act — the symptoms, the physical signs, the clinical course. So the DSM-III abandoned the literary approach of psychodynamics and started asking what psychiatric patients looked like — were they hallucinating, did they take no pleasure in things, was there sleep disturbance, were they delusional etc. etc. As you can imagine, there was a huge uproar from the psychoanalysts.

Now no individual fits any disease exactly. There are always parts missing, and there are always additional symptoms and signs present to confuse matters. The net result was that psychiatric diagnosis became like choosing from a menu in a Chinese restaurant, so many symptoms and findings from column A, so many from column B. (Update 2013 — Having been to China for 3 weeks this year, restaurant menus over there aren’t like that).

This led to a rather atheoretical approach, but psychiatric diagnosis became far more consistent. Docs have always been doing this sort of thing and still do (look at the multiple confusing initial manifestations of what turned out to be AIDS back in the 80s). Different infections were classified by how they acted, long before Pasteur proved that they were caused by micro-organisms. Back when I was running a muscular dystrophy clinic, we saw something called limb girdle muscular dystrophy, in which the patients were weak primarily in muscles about the shoulders and hips. Now we know that there are at least 13 different genetic causes of the disorder, so there are many distinct causes of the same clinical picture. This is similar to the many different genetic causes of Parkinson’s disease I talked about 2 and 3 posts earlier. At least with limb girdle muscular dystrophy it is much easier to see how the genetic defects cause muscle weakness: all of the known genetic causes involve proteins found in muscle.

Where DSM-IV (and probably DSM-V — it’s coming out later this month) went off the rails, IMHO, is the multiplicity of diagnoses they have reified. Do you really think there are 297 psychiatric disorders? Not only that, many of them are treated the same way — with an SSRI (Selective Serotonin Reuptake Inhibitor). You don’t treat all infections with the same antibiotic. This makes me wonder just how ‘real’ these diagnoses are. However in defense of them, you do treat classic Parkinsonism pretty much the same way regardless of the genetic defect causing it (and at this point we know of genetic causes of less than 10% of cases).

There is a fascinating series of articles in Science, starting 12 Feb ’10, about the new DSM-V. The first is on pp. 770 – 771. One of the most interesting points is that 40% of academic inpatients receive a diagnosis of NOS (Not Otherwise Specified, i.e., not in DSM-IV). Clearly even 297 diagnoses are missing quite a bit.

But insurance companies and the government treat this stuff as holy writ. Would you really like your frisky adolescent labeled with “prepsychotic risk syndrome,” which is proposed for DSM-V? Also casting doubt on the whole enterprise are the radical changes the DSM has undergone since its inception nearly 60 years ago. We’ve learned a lot about all sorts of medical diseases since then, but strokes and heart attacks back then are still strokes and heart attacks today, and TB is still TB. Do these guys really know what they’re talking about, and should we allow them to reify things?

That being said, cut psychiatry some slack. Regardless of theory, there are plenty of mentally ill people out there who need help. They aren’t going to go away (or get better) any time soon. Psychiatrists (like all docs) are doing the best they can with what they know.

That’s why it’s nice to be retired and reading stuff that it is at least possible to understand, like math, physics, organic chemistry and molecular biology. But never forget that it is trivial compared to human suffering. That’s why the carnage in the drug discovery industry is so sad: there goes our only hope of making things better (written in 2010, but still true in 2013).

Retinal physiology and the demise of the pure percept

Rooming with 2 philosophy majors warps the mind even if it was 50 years ago.  Conundrums raised back then still hang around.  It was the heyday of Bertrand Russell before he became a crank.  One idea being bandied about back then was the ‘pure percept’ — a sensation produced by the periphery  before the brain got to mucking about with it.   My memory about the concept was a bit foggy so who better to ask than two philosophers I knew.

The first was my nephew, a Rhodes in philosophy, now an attorney with a Yale degree.  I got this back when I asked –

I would be delighted to be able to tell you that my two bachelors’ degrees in philosophy — from the leading faculties on either side of the Atlantic — leave me more than prepared to answer your question. Unfortunately, it would appear I wasn’t that diligent. I focused on moral and political philosophy, and although the idea of a “pure precept” rings a bell, I can’t claim to have a concrete grasp on what that phrase means, much less a commanding one.

 Just shows what a Yale degree does to the mind.

So I asked a classmate, now an emeritus prof. of philosophy and got this back
This pp nonsense was concocted because Empiricists [Es]–inc. Russell, in his more empiricistic moods–believed that the existence of pp was a necessary condition for empirical knowledge. /Why? –>
1. From Plato to Descartes, philosophers often held that genuine Knowledge [K] requires beliefs that are “indubitable” [=beyond any possible doubt]; that is, a belief counts as K only if it [or at least its ultimate source] is beyond doubt. If there were no such indubitable source for belief, skepticism would win: no genuine K, because no beliefs are beyond doubt. “Pure percepts” were supposed to provide the indubitable source for empirical K.
2. Empirical K must originate in sensory data [=percepts] that can’t be wrong, because they simply copy external reality w/o any cognitive “shopping” [as in Photoshop]. In order to avoid any possible ‘error’, percepts must be pure in that they involve no interpretation [= error-prone cognitive manipulation].
{Those Es who contend  that all K derives from our senses tend to ignore mathematical and other allegedly a priori K, which does not “copy” the sensible world.} In sum, pp are sensory data prior to [=unmediated by] any cognitive processing.

So it seems as though the concept is no longer taken seriously.  To drive a stake through its heart it’s time to talk about the retina.

It lies in the back of our eyes, and is organized rather counter-intuitively.  The photoreceptors (the pixels of the camera if you wish) are the last retinal elements to be hit by light, which must pass through the many other layers of the retina to get to them.

We have a lot of them: at least 100,000,000 of one type (rods). The nerve cells sending impulses back to the brain are called ganglion cells, and there are about 1,000,000 in each eye. Between them are bipolar cells and amacrine cells, which organize the information falling on the photoreceptors.

All this happens in something only 0.2 millimeters thick.

The organization of information results in retinal ganglion cells responding to different types of stimuli.  How do we know?  Impale the ganglion cell with an electrode while still in the retina, and try out various visual stimuli to see what it responds to.

Various authorities put the number of retinal ganglion cell types in the mouse at 11, 12, 14, 19 and 22.  Each responds to a given type of stimulus. Here are a few examples:

The X-type ganglion cell responds linearly to brightness

Y cells respond to movement in a particular direction.

Blue-ON transmits the mean spectral luminance (color distribution) along the spectrum from blue to green.

From an evolutionary point of view, it would be very useful to detect motion. Some retinal ganglion cells begin responding before they should. How do we know this? It’s easy (but tedious) to map the area of visual space a ganglion cell responds to; this is called its receptive field. The responses of some cells anticipate the incursion of a moving stimulus; clearly this must come from the way they are hooked to photoreceptors via the intermediate cells.

Just think about the way photoreceptors at the back of the spherical eye are excited by something moving in a straight line in visual space. Its path certainly isn’t a straight line on the retinal surface. Somehow the elements of the retina are performing this calculation and predicting where something moving in a straight line will be next. Why couldn’t the brain be doing this? Because the anticipation can be seen in isolated retinas with no brain attached.

Now for something even more amazing.  Each type of ganglion cell (and I’ve just discussed a few) tiles the retina. This means that every patch of the retina has a ganglion cell responding to each type of visual stimulus.  So everything hitting every area of the retina is being analyzed 11, 12, 14, 19 or 22 different ways simultaneously.

So much for the pure percept: it works for a digital camera, but not for the retina. There is an immense amount of computation of the visual input going on right there, before anything gets back to the brain.

If you wish to read more about this, an excellent review is available, but it’s quite technical and not for someone coming to neuroanatomy and neurophysiology for the first time [ Neuron vol. 76 pp. 266 - 280 '12 ].
