Old dog does new(ly discovered) tricks

One of the evolutionarily oldest enzyme classes is the aaRSs (for aminoacyl tRNA synthetase). Every cell has them, including bacteria. Life as we know it wouldn't exist without them. Briefly, they load tRNAs with the appropriate amino acid. If this is Greek to you, look at the first 3 articles in https://luysii.wordpress.com/category/molecular-biology-survival-guide/.

Aminoacyl tRNA synthetases are enzymes of exquisite specificity, having to correctly match 20 amino acids to some 61 different types of tRNAs. Mistakes in selecting the correct amino acid occur once every 10,000 to 100,000 loadings, and in selecting the correct tRNA once every 1,000,000. The lower tRNA error rate is due to the fact that tRNAs are much larger than amino acids, so more contacts between enzyme and tRNA are possible.
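To get a feel for what those error rates mean at the level of a whole protein, here is a back-of-the-envelope calculation (my own arithmetic, not from any paper; the 400-residue length is just an illustrative average protein size):

```python
# Probability that a protein of n residues is built with no mischarging
# errors, assuming errors at each position are independent.
def fraction_error_free(n_residues, error_rate):
    return (1.0 - error_rate) ** n_residues

# the amino acid selection error rates quoted in the post
for rate in (1e-4, 1e-5):
    p = fraction_error_free(400, rate)
    print(f"per-residue error rate {rate:g}: {p:.1%} of copies perfect")
```

Even at the worse rate, roughly 96% of copies of a 400-residue protein come out perfect, which is presumably why this level of fidelity suffices.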

As the tree of life ascended from bacteria over billions of years, 13 new protein domains which have no obvious association with aminoacylation were added to aaRS genes. More importantly, the additions have been maintained over the course of evolution (with no change in the primary function of the synthetase). Some of the new domains are appended to each of several synthetases, while others are specific to a single synthetase. The fact that they've been retained implies they are doing something that natural selection wants (teleology inevitably raises its ugly head in any serious discussion of molecular biology or cellular physiology — it's impossible to avoid).

[ Science vol. 345 pp. 328 – 332 '14 ] looked at the mRNAs that some 37 different aaRS genes were transcribed into, in six different human tissues. Amazingly, 79% of the 66 in-frame splice variants removed or disrupted the aaRS catalytic domain. The aaRS for histidine had 8 in-frame splice variants, all of which removed the catalytic domain. 60/70 variants losing the catalytic domain (they call these catalytic nulls) retained at least one of the 13 domains added in higher eukaryotes. Some of the transcripts were tissue specific (e.g. present in some of the 6 tissues but not all).

Recent work has shown roles for specific AARSs in a variety of pathways — blood vessel formation, inflammation, immune response, apoptosis, tumor formation, p53 signaling. The process of producing a completely different function for a molecule is called exaptation — to contrast it with adaptation.

Up to now, when a given protein was found to have enzymatic activity, the book on what that protein did was closed (with the exception of the small GTPases). End of story. Yet here we have cells spending the metabolic energy to make an enzymatically dead protein (aaRSs are big — the one for alanine has nearly 1,000 amino acids). Teleology screams — what is it used for? It must be used for something! This is exactly where chemistry is silent. It can explain the incredible selectivity and sensitivity of the enzyme but not what it is ‘for’. We have crossed the Cartesian dualism between flesh and spirit.

Could this sort of thing be the tip of the iceberg? We know that splice variants of many proteins are common. Could other enzymes whose function was essentially settled once substrates were found, be doing the same thing? We may have only 20,000 or so protein coding genes, but 40,000, 60,000, . . . or more protein products of them, each with a different biological function.

So aaRSs are very old molecular biological dogs, who’ve been doing new tricks all along. We just weren’t smart enough to see them (’till now).

Novels may have only 7 basic plots, but molecular biology continues to surprise and enthrall.

Two math tips

Two of the most important theorems in differential geometry are Gauss's Theorema Egregium and the inverse function theorem. Basically the Theorema Egregium says that you don't need to look at the shape of a two dimensional surface (say the surface of a walnut) from outside (e.g. from the way it sits in 3 dimensional space) to understand its shape. All the information is contained in the surface itself.

The inverse function theorem (InFT) is used over and over. If you have a continuously differentiable function from an open subset U of Euclidean space of finite dimension n to Euclidean space V of the same dimension, and its derivative is invertible at a point x of U, then there exists another function, defined near f(x), to get you back from space V to U.
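Stated compactly (in standard notation, not Callahan's exact symbols):

```latex
\textbf{Inverse Function Theorem.} Let $f\colon U \subset \mathbb{R}^n \to \mathbb{R}^n$
be continuously differentiable, and suppose the derivative $df_a$ is invertible at a
point $a \in U$. Then there are open sets $W_a \ni a$ and $W_{f(a)} \ni f(a)$ such
that $f\colon W_a \to W_{f(a)}$ is a bijection whose inverse $f^{-1}$ is continuously
differentiable, with
\[
  d(f^{-1})_{f(a)} = (df_a)^{-1}.
\]
```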

Even better, once you've proved the inverse function theorem, the proof of another important theorem (the implicit function theorem, aka the ImFT) is quite simple. Given a real valued function f(x, y, . . .) –> R and an equation f(x, y, . . .) = constant, the ImFT tells you whether you can express one variable (say x) in terms of the others. Sometimes it's quite difficult to solve such an equation for x in terms of y — consider arctan(e^(x + y^2) * sin(xy) + ln x). What is important to know in this case is whether it's even possible.

The proofs of both are tricky. In particular, the proof of the inverse function theorem is an existence proof. You may not be able to write down the function from V to U even though you've just proved that it exists. So using the InFT to prove the implicit function theorem is also nonconstructive.
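Nonconstructive in general, but when you can actually evaluate f and its derivative, the same contraction idea behind the proof lets you compute values of f^-1 numerically. Here is a one-dimensional sketch (the function is made up for illustration):

```python
import math

def invert(f, df, y, x0=0.0, tol=1e-12, max_iter=100):
    """Approximate the x with f(x) = y by Newton iteration.
    Like the proof itself, this only works near points where
    df is invertible (nonzero, in one dimension)."""
    x = x0
    for _ in range(max_iter):
        step = (f(x) - y) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# f(x) = x + sin(x)/2 has f'(x) = 1 + cos(x)/2 > 0 everywhere,
# so an inverse exists even though it has no closed form
f  = lambda x: x + math.sin(x) / 2
df = lambda x: 1 + math.cos(x) / 2

x = invert(f, df, 1.0)
print(f(x))   # recovers 1.0 to machine precision
```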

At some point in your mathematical adolescence, you should sit down and follow these proofs. They aren’t easy and they aren’t short.

Here's where to go. Both can be found in books by James J. Callahan, emeritus professor of mathematics at Smith College in Northampton, Mass. The proof of the InFT is to be found on pages 169 – 174 of his "Advanced Calculus, A Geometric View", which is geometric, with lots of pictures. What's good about this proof is that it's broken down into some 13 steps. Be prepared to meet a lot of functions and variables.

Just the statement of the InFT involves functions f, f^-1, df, df^-1, spaces U^n, R^n, and variables a, q, B.

The proof of the InFT involves functions g, phi, dphi, h, dh, N, most of which are vector valued (N is real valued).

Then there are the geometric objects U^n, R^n, Wa, Wfa, Br, Br/2

Vectors a, x, u, delta x, delta u, delta v, delta w

Real number r

That’s just to get you through step 8 of the 13 step proof, which proves the existence of the inverse function (aka f^-1). The rest involves proving properties of f^-1 such as continuity and differentiability. I must confess that just proving existence of f^-1 was enough for me.

The proof of the implicit function theorem for two variables — e.g. f(x, y) = k takes less than a page (190).

The proof of the Theorema Egregium is to be found in his book "The Geometry of Spacetime", pp. 258 – 262, in 9 steps. Be prepared for fewer functions, but many more symbols.

As to why I’m doing this please see http://luysii.wordpress.com/2011/12/31/some-new-years-resolutions/

Keep on truckin’ Dr. Schleyer

My undergraduate advisor (Paul Schleyer) has a new paper out in the 15 July '14 PNAS (pp. 10067 – 10072) at age 84+. Bravo! He upends what we were always taught about electrophilic aromatic addition of halogens. The arenium ion is out (at least in this example). Anyone with a smattering of physical organic chemistry can easily follow his mechanistic arguments for a different mechanism.

However, I wonder if any but the hardiest computational chemistry jock can understand the following (which is how he got his results) and decide if the conclusions follow.

Our Gaussian 09 (54) computations used the 6-311+G(2d,2p) basis set (55, 56) with the B3LYP hybrid functional (57⇓–59) and the Perdew–Burke–Ernzerhof (PBE) functional (60, 61) augmented with Grimme et al.’s (62) density functional theory with added Grimme’s D3 dispersion corrections (DFT-D3). Single-point energies of all optimized structures were obtained with the B2-PLYP [double-hybrid density functional of Grimme (63)] and applying the D3 dispersion corrections.

This may be similar to what happened with functional MRI in neuroscience, where you never saw the raw data, just the end product of the manipulations on the data (e.g. how the matrix was inverted and what manipulations of the inverted matrix were required to produce the pretty pictures shown). At least here, you have the tools used laid out explicitly.

Do axons burp out mitochondria?

People have been looking at microscope slides of the brain almost since there were microscopes (Alzheimer’s paper on his disease came out in 1906). Amazingly, something new has just been found [ Proc. Natl. Acad. Sci. vol. 111 pp. 9633 - 9638 '14 ]

To a first approximation, the axon of a neuron is the long process which carries impulses to other neurons far away. Axons have always been considered quite delicate (particularly in the brain itself; in the limbs they are sheathed in tough connective tissue). After an axon is severed in the limbs, all sorts of hell breaks loose. The part of the axon no longer in contact with the neuron degenerates (Wallerian degeneration), and the neuron cell body still attached to the remaining axon changes markedly (central chromatolysis). At least the axons making up peripheral nerves do grow back (but maddeningly slowly). In the brain, they don't, yet another reason neurologic disease is so devastating. Huge research efforts have been made to find out why. All sorts of proteins have been found which hinder axonal regrowth in the brain (and the spinal cord). Hopefully, at some point blocking them will lead to treatment.

The PNAS paper found that axons in the optic nerve of the mouse (which arise from neurons in the retina) burp out mitochondria. Large protrusions form containing mitochondria, which are then shed, somehow leaving the remaining axon intact (remarkable when you think of it). Once shed, the decaying mitochondria are found in the cells supporting the axons (astrocytes). Naturally, the authors made up a horrible name to describe the process and sound impressive (transmitophagy).

This probably occurs elsewhere in the brain, because accumulation of degrading mitochondria along nerve processes in the superficial layers of the cerebral cortex (the gray matter on the surface of the brain) has been seen. People are sure to start looking for this everywhere in the brain, and perhaps outside it as well.

Where else does this sort of thing occur? In the fertilized egg, that's where. Sperm mitochondria are destroyed in the egg (which is why you get your mitochondria from Mommy).

Trouble in River City (aka Brain City)

300 European neuroscientists are unhappy. If 50,000,000 Frenchmen can't be wrong, can the neuroscientists be off base? They don't like the way things are going in a billion Euro project to computationally model the brain (Science vol. 345 p. 127 '14 11 July, Nature vol. 511 pp. 133 – 134 '14 10 July). What has them particularly unhappy is that one of the sections, involving cognitive neuroscience, has been eliminated.

A very intelligent Op-Ed in the New York Times of 12 July, by a psychology professor, notes that we have no theory of how to go from neurons, their connections, and their firing of impulses to how the brain produces thought. Even better, he notes that we have no idea what such a theory would look like, or what we would accept as an explanation of brain function in terms of neurons.

While going from the gene structure in our DNA to cellular function, from there to function at the level of the organ, and from the organ to the function of the organism is more than hard (see https://luysii.wordpress.com/2014/07/09/heres-a-drug-target-for-schizophrenia-and-other-psychiatric-diseases/) at least we have a road map to guide us. None is available to take us from neurons to thought, and the 300 argue that concentrating only on neurons, while producing knowledge, won’t give us the explanation we seek. The 300 argue that we should progress on all fronts, which the people running the project reject as too diffuse.

I’ve posted on this problem before — I don’t think a wiring diagram of the brain (while interesting) will tell us what we want to know. Here’s part of an earlier post — with a few additions and subtractions.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every of the 10^23 molecules of a mole or so of gas in a container is brought home. Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain. They want a 'wiring diagram' of the brain, i.e. a list of every neuron, a second list for each neuron of the neurons sending synapses to it, and a third list for each neuron of the neurons it sends synapses to. For the non-neuroscientist — the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the 'other way' transmitters being endogenous marihuana). This is why you need both the second and third lists.
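The three lists amount to a directed graph stored as adjacency lists. A minimal sketch (the neuron names here are invented for illustration):

```python
# A directed graph as described above: for each neuron, the neurons it
# sends synapses to (out-edges) and the neurons it receives synapses
# from (in-edges). Neuron IDs are made up.
from collections import defaultdict

class WiringDiagram:
    def __init__(self):
        self.out_edges = defaultdict(set)  # neuron -> neurons it synapses onto
        self.in_edges  = defaultdict(set)  # neuron -> neurons synapsing onto it

    def add_synapse(self, pre, post):
        """Record a one-way connection from neuron `pre` to neuron `post`."""
        self.out_edges[pre].add(post)
        self.in_edges[post].add(pre)

w = WiringDiagram()
w.add_synapse("retinal_ganglion_1", "lgn_relay_7")
w.add_synapse("lgn_relay_7", "v1_pyramidal_42")
print(w.out_edges["lgn_relay_7"], w.in_edges["lgn_relay_7"])
```

Because synapses are one-directional, the in-edge and out-edge lists really are different, which is why a single undirected list of "connections" won't do.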

Clearly a monumental undertaking and one which grows more monumental with the passage of time. Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them). This is where the neurological urban myth of the loss of 10,000 neurons each day came from. For details see http://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 '11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses. Well, that's not a mole of synapses, but it is about a quarter of a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other, to at least get a schematic diagram.

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it. I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnection between neurons and just how many there are. Figure 2 (p. 179) is particularly revealing showing a 3 dimensional reconstruction using the high resolutions obtainable by the electron microscope. Stare at figure 2.f. a while and try to figure out what’s going on. It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing. There are at least 6 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses). A wiring diagram alone won't tell you this.

2. When I was starting out, the following statement would have seemed impossible: it is now possible to watch synapses in the living brain of an awake animal for extended periods of time. And we now know that synapses come and go in the brain. The various papers don't all agree on just what fraction of synapses lasts more than a few months, but it's early times. Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 '11, ibid. vol. 49 pp. 780 – 783, 877 – 887 '06 ]. So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses. Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses at all (it's called volume neurotransmission). Their importance in psychiatry and drug addiction is unparalleled. Examples of such volume transmitters include serotonin, dopamine and norepinephrine. Drugs of abuse affecting their action include cocaine and amphetamine. Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

4. (new addition) As far as we know, a given neuron doesn't contact another neuron just once. So how do you account for this with a graph (which I think allows only one connection between any two nodes)?

5. (new addition) All connections (synapses) aren't created equal. Some are far, far away from the part of the neuron (the axon) which actually fires impulses, so many of them have to be turned on at once for firing to occur. So in addition to the excitatory/inhibitory dichotomy, you'd have to put another number on each link in the graph, giving the probability of a given synapse producing an effect. In general this isn't known for most synapses.

6. (new addition) Some connections never directly cause a neuron to fire or not fire. They just increase or decrease the probability that the neuron will fire or not fire in response to impulses at other synapses. These are called neuromodulators, and the brain has tons of different ones.

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg. This is of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles. There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse. The construction of the brain from the fertilized egg must be in some sense statistical. Remarkable that it happens at all. Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.
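Back-of-the-envelope arithmetic (mine, not from the Science paper) makes the shortfall concrete. Even allotting just one bit per synapse, far less than it takes to actually specify one, the genome falls short by a factor of tens of thousands:

```python
# Rough comparison: information needed merely to enumerate every synapse
# vs. information in the genome. Figures are those quoted in the post;
# "one bit per synapse" is a deliberate underestimate.
synapses    = 150e12   # 150 trillion synapses
genome_nt   = 3.2e9    # 3.2 billion nucleotides
bits_per_nt = 2        # 4 possible bases = 2 bits each

genome_bits  = genome_nt * bits_per_nt
synapse_bits = synapses * 1   # at least 1 bit just to say a synapse exists

print(f"genome: {genome_bits:.2e} bits; synapses: {synapse_bits:.2e} bits")
print(f"shortfall: factor of {synapse_bits / genome_bits:,.0f}")
```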

(Addendum 17 July '14) I'm fortunate enough to have a family member who worked at Bell Labs (when it existed) and who knows much more about graph theory than I do. Here are his points, and a few comments back.

Seventh paragraph: Still don’t understand the purpose of the three lists, or what that buys that you don’t get with a graph model. See my comments later in this email.

“nobody’s brain is strong enough to comprehend it”: At some level, this is true of virtually every phenomenon of Nature or science. We only begin to believe that we “comprehend” something when some clever person devises a model for the phenomenon that is phrased in terms of things we think we already understand, and then provides evidence (through analysis and perhaps simulation) that the model gives good predictions of observed data. As an example, nobody comprehended what caused the motion of the planets in the sky until science developed the heliocentric model of the solar system and Newton et al. developed calculus, with which he was able to show (assuming an inverse-square behavior of the force of gravity) that the heliocentric model explained observed data. On a more pedestrian level, if a stone-age human was handed a personal computer, his brain couldn’t even begin to comprehend how the thing does what it does — and he probably would not even understand exactly what it is doing anyway, or why. Yet we modern humans, at least us engineers and computer scientists, think we have a pretty good understanding of what the personal computer does, how it does it, and where it fits in the scheme of things that modern humans want to do.

Of course we do, that’s because we built it.

On another level, though, even computer scientists and engineers don’t “really” understand how a personal computer works, since many of the components depend for their operation on quantum mechanics, and even Feynman supposedly said that nobody understands quantum mechanics: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Penrose actually did think the brain worked by quantum mechanics, because what it does is nonAlgorithmic. That’s been pretty much shot down.

As for your six points:

Point 1 I disagree with. It is quite easy to express excitatory or inhibitory behavior in a wiring diagram (graph). In fact, this is done all the time in neural network research!

Point 2: Updating a graph is not necessarily a big deal. In fact, many systems that we analyze with graph theory require constant updating of the graph. For example, those who analyze, monitor, and control the Internet have to deal with graphs that are constantly changing.

Point 3: Can be handled with a graph model, too. You will have to throw in additional edges that don't represent synapses, but instead represent the effects of neurotransmitters. Will this get to be a complicated graph? You bet. But nobody ever promised an uncomplicated model. (Although uncomplicated — simple — models are certainly to be preferred over complicated ones.)

Point 4: This can be easily accounted for in a graph. Simply place additional edges in the graph to account for the additional connections. This adds complexity, but nobody ever promised the model would be simple. Another alternative, as I mentioned in an earlier email, is to use a hypergraph.

Point 5: Not sure what your point is here. Certainly there is a huge body of research literature on probabilistic graphs (e.g., Markov Chain models), so there is nothing you are saying here that is alien to what graph theory researchers have been doing for generations. If you are unhappy that we don’t know some probabilistic parameters for synapses, you can’t be implying that scientists must not even attempt to discover what these parameters might be. Finding these parameters certainly sounds like a challenge, but nobody ever claimed this was an un-challenging line of research.

In addition to not knowing the parameters, you'd need a ton of them, as it's been stated frequently in the literature that the 'average' neuron has 10,000 synapses impinging on it. I've never been able to track this one down. It may be neuromythology, like the 10,000 neurons we're said to lose every day. With 10,000 adjustable parameters you could make a neuron sing the Star Spangled Banner. Perhaps this is why we can sing the Star Spangled Banner.

Point 6: See my comments on Point 5. Probabilistic graph models have been well-studied for generations. Nothing new or frightening in this concept.
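For what it's worth, the graph-theoretic machinery the email invokes (signed edges for point 1, parallel edges for point 4, per-edge weights for point 5) is easy to sketch. All numbers here are invented for illustration:

```python
# Edge-attributed multigraph: parallel edges are allowed between the
# same pair of neurons, and each edge carries a sign (excitatory = +1,
# inhibitory = -1) and a weight (a crude stand-in for synaptic strength).
from collections import defaultdict

edges = defaultdict(list)   # (pre, post) -> list of synapse records

def add_synapse(pre, post, sign, weight):
    edges[(pre, post)].append({"sign": sign, "weight": weight})

# a neuron contacting another neuron at two separate synapses (point 4)
add_synapse("A", "B", sign=+1, weight=0.2)
add_synapse("A", "B", sign=+1, weight=0.5)
add_synapse("C", "B", sign=-1, weight=0.9)

# net drive onto B = sum over all incoming synapses of sign * weight
net = sum(s["sign"] * s["weight"]
          for (pre, post), syns in edges.items() if post == "B"
          for s in syns)
print(round(net, 3))   # 0.2 + 0.5 - 0.9 = -0.2
```

None of this settles the real objection, of course, which is that the parameters themselves are unknown for almost all synapses.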

“A Troublesome Inheritance” – III — the first two chapters

Most scientific types I've known aren't terribly interested in history (even of their own fields). The first two chapters of Wade's book (to p. 38) are mostly about the history of the concept of race, and worth reading. I doubt that any open minded reader will come away thinking Wade admires the fruits of racism past. If your definition of a racist is someone who believes that races of man exist, then Wade is one.

All sorts of fascinating tidbits are to be found here, such as the fact that Marx wanted to dedicate Das Kapital to Darwin (he refused), and that the originator of the term Caucasian (Blumenbach, 1795) meant it to apply to the peoples of Europe, North Africa and the Indian subcontinent. Trouble started early, with Gobineau's book "An Essay on the Inequality of Human Races" (1853). Darwin himself was against the idea of race, and incidentally didn't originate "the survival of the fittest", which was due to Herbert Spencer. But he did use it.

There then follows (pp. 28 – 38) the very sad history of race, eugenics, and its perversion, racism. Read these pages to understand why the whole concept of racism arouses such visceral loathing in civilized people. Classmate Dan Kevles' book (which I'm embarrassed to say I haven't read), "In the Name of Eugenics: Genetics and the Uses of Heredity", is cited.

I haven't read many of the reviews of Wade's book, but most of his severest critics probably didn't read the conclusion of chapter 1: "Readers should be fully aware that in chapters 6 through 10 they are leaving the world of hard science and entering into a much more speculative arena at the interface of economics and evolution." I suppose he could have prefaced each of his chapters with this, since few will read a book like this from start to finish, particularly those pointed to particular passages by reviews.

However Wade clearly reveals his political orientation (p. 27) “Intellectuals as a class are notoriously prone to fine-sounding theoretical schemes that lead to catastrophe, such as Social Darwinism, Marxism or indeed Eugenics.” As a med school classmate from the University of Chicago would often say –”OK. That’s how it works in practice, but how does it work in theory?”

Here’s a drug target for schizophrenia and other psychiatric diseases

All agree that any drug getting schizophrenics back to normal would be a blockbuster. The more we study its genetics and biochemistry the harder the task becomes. Here’s one target — neuregulin1, one variant of which is strongly associated with schizophrenia (in Iceland).

Now that we know that neuregulin1 is a potential target, why should discovering a drug to treat schizophrenia be so hard? The gene stretches over 1.2 megaBases and the protein contains some 640 amino acids. Cells make some 30 different isoforms by alternative splicing of the gene. Since the gene is so large one would expect to find a lot of single nucleotide polymorphisms (SNPs) in the gene. Here’s some SNP background.

Our genome has 3.2 gigaBases of DNA. Each position has a standard nucleotide (one of A, T, G, or C). If 5% of the population has any one of the other 3 at a given position, you have a SNP. By 2004 some 7 MILLION SNPs had been found and mapped to the human genome.

Well, it's 10 years later, and a mere 23,094 SNPs have been found in the neuregulin1 gene, of which 40 have been associated with schizophrenia. Unfortunately most of them aren't in regions of the gene which code for amino acids (which is to be expected, as 640 * 3 = 1,920 nucleotides are all you need for coding, out of the 1,200,000 nucleotides making up the gene). These SNPs probably alter the amount of the protein expressed, but as of now very little is known (even whether they increase or decrease neuregulin1 protein levels).
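The arithmetic in that parenthesis, spelled out (numbers from the post):

```python
# Fraction of the neuregulin1 gene that actually codes for protein.
amino_acids = 640
coding_nt   = amino_acids * 3      # 3 nucleotides per codon -> 1,920
gene_nt     = 1_200_000            # the gene spans ~1.2 megabases
print(f"coding fraction: {coding_nt / gene_nt:.2%}")   # 0.16%
```

So better than 99.8% of the gene is noncoding, and that's where most of the schizophrenia-associated SNPs sit.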

An excellent review of neuregulin1 and schizophrenia is available [ Neuron vol. 83 pp. 27 – 49 '14 ]. You'll need a fairly substantial background in neuroanatomy, neuroembryology, molecular biology and neurophysiology to understand all of it. Included is some fascinating (but probably incomprehensible to the medicinal chemist) material on the different neurophysiologic abnormalities associated with different SNPs in the gene.

Here are a few of the high points (or depressing points, for drug discovery) of the review. Neuregulin1 is a member of a 6 gene family, all fairly similar and most expressed in the brain. All of them have multiple splicing isoforms, so drug selectivity between them will be tricky. Also, SNPs associated with increased risk of schizophrenia have been found in family members 2, 3 and 6 as well, so neuregulin1 may not be the actual target you want to hit.

It gets worse. The neuregulins bind to a family of receptors (the ERBBs) having 4 members. Tending to confirm the utility of the neuregulins as a drug target is the fact that SNPs in the ERBBs are also associated with schizophrenia. So which isoform of which neuregulin binding to which isoform of which ERBB is the real target? Knowledge isn't always power.

A large part of the paper is concerned with the function of the neuregulins in embryonic development of the brain, leading to the rather depressing thought that the schizophrenic never had a chance, having an abnormal brain to begin with. A drug to reverse such problems seems only a hope.

The neuregulin/ERBB system is only one of many linked to schizophrenia. So it looks like a post of 4 years ago on schizophrenia is largely correct — http://luysii.wordpress.com/2010/04/25/tolstoy-was-right-about-hereditary-diseases-imagine-that/

Happy hunting. It’s a horrible disease and well worth the effort. We’re just beginning to find out how complex it really is. Hopefully we’ll luck out, as we did with the phenothiazines, the first useful antipsychotics.

Happy 4th of July

Having spent our 50th anniversary in London, a few Independence Day thoughts are in order.

First, while watching the changing of the guard at Buckingham Palace, with all the pomp and rigidity of the occasion, I found it amazing that democracy originated out of this. But it did and the world owes them.

Second, the security surrounding the royals is intense and thorough. Guys with submachine guns with fixed bayonets etc. etc. I haven’t seen things like that since NY State penitentiary denizens were brought to my office for neurologic evaluations. I wouldn’t want to live like that.

Third, I can begin to see why 50+ years ago in grad school at Harvard, the US was regarded as somewhat crude, slow and inelegant. It was the era of the ugly American etc. etc. This, despite Don Voet’s observation that the Universal Scientific Language was broken English.

Going through London's excellent museums, one can see why people who'd been to Europe back then might have thought this way. But the museums are all about the past (except for an incredible exhibit at the Natural History Museum on epigenetics, complete with a research professor and two graduate students). What did the next 50 years bring? They're all carrying cell phones over there, and iPads, and using Google and of course the internet, all of which originated in the USA. Compare the science the USA has produced during that time to that of Europe: at worst, equal.

Never mind that we did it with European castoffs (4 of the 7 Nobels in the Harvard Chemistry department during this time, were Jewish refugees or their children). That’s the great strength of America, they’re as American as anyone else, just like Sergey Brin the cofounder of Google, a Russian Jew by birth. Or Andrew Grove, etc. etc.

Even back in the 60s, I never thought Europe was so wonderful. Two world wars, the concentration camps, Stalin and the Gulags to atone for. So I never regarded them as particularly civilized, something only strengthened in the 90s, with their atrocious handling of genocide in Kosovo.

Lest you think this is all in the past, my cousin the month we were in London was on some sort of river cruise down the Danube, and their tour of Vienna had to be rerouted because of a NeoNazi rally. They appear to have learned nothing from their awful history.

So happy 4th of July. Glad to be back in the good ol’ USA.

“A Troublesome Inheritance” – II – Four Anthropological disasters of the past 100 years

Page 5 of Wade’s book contains two recent pronouncements from the American Anthropological Association stating that “Race is about culture not biology”. It’s time to look at anthropology’s record of the past 100 years. It isn’t pretty.

Start with the influential Franz Boas (1858 – 1942) who taught at Columbia for decades. His most famous student was Margaret Mead.

He, along with his students, felt that the environment was everything for human culture and that heredity had minimal influence. Here’s what Boas did over 100 years ago.

[ Proc. Natl. Acad. Sci. vol. 99 pp. 14622 – 14623, 14436 – 14439 '02 ] Retzius invented the cephalic index in the 1890s. It is just the widest breadth of the skull divided by the front to back length. One can be mesocephalic, dolichocephalic or brachycephalic. From this index one could differentiate Europeans by location. Anthropologists continue to take such measurements. Franz Boas in 1910 – 1913 said that the USA-born offspring of immigrants showed a 'significant' difference from their immigrant parents in their cephalic index. This was used to reinforce the idea that environment was everything.

Boas made some 13,000 measurements. The PNAS papers are a reanalysis of his data showing that he seriously misinterpreted it: the genetic component of the variability was far stronger than the environmental. Some 8,500 of his 13,000 cases were reanalyzed. In a later paper Boas stated that he never claimed that there were NO genetic components to head shape, but his students and colleagues took the ball and ran with it, and Boas never (publicly) corrected them. The heritability of head shape was high both within families and between ethnic groups, and the differences between groups persisted in the American environment.

One of Boas’ students wrote that “Heredity cannot be allowed to have acted any part in history.” The chain of events shaping a people “involves the absolute conditioning of historical events by other historical events.” Hardly scientific statements.

On to his most famous student, Margaret Mead (1901 – 1978), who later became the head of the American Association for the Advancement of Science (1960). In 1928 she published “Coming of Age in Samoa”, about the sexual freedom of Samoan adolescents. It got a big play, and I was very interested in such matters as a pimply adolescent. It fit the idea that “We are forced to conclude that human nature is almost unbelievably malleable, responding accurately and contrastingly to contrasting cultural conditions.” This certainly fit nicely with the idea that mankind could be reshaped by changing the system — see http://en.wikipedia.org/wiki/New_Soviet_man, one of the many fantasies of the left promoted by academia.

Subsequently, an anthropologist (Freeman) went back to Samoa and concluded that Mead had been hoaxed. He found that Samoans may beat or kill their daughters if they are not virgins on their wedding night; a young man who cannot woo a virgin may rape one to extort her into eloping; and the family of a cuckolded husband may attack and kill the adulterer. For more details see Pinker, “The Blank Slate”, pp. 56 and following.

The older among you may remember reading about “the gentle Tasaday” of the Philippines, a Stone Age people said to have no word for war. They were featured in the NY Times in the 70s, the noble savages of Rousseau resurrected in the 20th century. The 1970 “discovery” of the Tasaday as a “Stone Age” tribe was widely heralded in newspapers, shown on national television in a National Geographic Society program and an NBC special documentary, and further publicized in “The Gentle Tasaday: A Stone Age People in the Philippine Rain Forest” by the American journalist John Nance.

All along, Manuel Elizalde Jr., the son of a rich Filipino family, was depicted as the savior of the Tasaday through his creation of Panamin (from Presidential Assistant for National Minorities), a cabinet-level office to protect the Tasaday and other “minorities” from corrosive modern influences and from environmentally destructive logging companies. It appears that Elizalde hoodwinked almost everybody by paying neighboring T’boli people to take off their clothes and pose as a “Stone Age” tribe living in a cave. He then used the avalanche of international interest and concern for his Tasaday creation to gain control over “tribal minority” lands and resources, ultimately cutting deals with logging and mining companies.

Last but not least is “The Mismeasure of Man” (1981), in which Stephen Jay Gould tore apart the work of Samuel Morton, a 19th-century anthropologist who measured skulls. Gould accused Morton of (consciously or unconsciously) manipulating the data to come up with the conclusions he desired.

Well, guess what. Someone went back and looked at Morton’s figures, and remeasured some of his skulls (which are still at Penn), and found that the manipulation was all Gould’s, not Morton’s. I posted about this when it came out three years ago — here’s the link: http://luysii.wordpress.com/2011/06/26/hoisting-steven-j-gould-by-his-own-petard/.

Here is the relevant part of that post — An anthropologist [ PLoS Biol. doi:10.1371/journal.pbio.1001071; 2011 ] went back to Penn (where the skulls in question reside) and remeasured some 300 of them, blinded to their ethnic origins. Morton’s measurements were correct. They also had the temerity to actually look at Morton’s papers. They found that, contrary to Gould, Morton did report average cranial capacities for subgroups of both populations, sometimes on the same page as, or on pages near, figures that Gould quotes and therefore must have seen. Even worse (see Nature vol. 474 p. 419 ’11), they claim that “Gould misidentified the Native American samples, falsely inflating the average he calculated for that population”. Gould had claimed that Morton’s averages were incorrect.

Perhaps anthropology has gotten its act together now, but given this history, any pronouncements they make should be taken with a lot of salt. In fairness to the field, it should be noted that the debunkers of Boas, Mead and Gould were all anthropologists. They have a heavy load to carry.

“A Troublesome Inheritance” – I

One of the joys of a deep understanding of chemistry is the appreciation of the ways in which life is constructed from the most transient of materials. Presumably the characteristics of living things that we can see (the phenotype) will someday be traceable back to the proteins, nucleic acids, and small metabolites (lipids, sugars, etc.) making us up.

For the time being we must content ourselves with understanding the code (our genes) and how it instructs the development of a trillion-celled organism from a fertilized egg. This brings us to Wade’s book, which has been attacked as racist by anthropologists, sociologists and other lower forms of animal life.

Their position is that races are a social, not a biological, construct and that differences between societies are due to the way they are structured, not to differences in the relative frequency of gene variants (alleles) in the populations making them up. Essentially they are saying that evolution, and its mechanism of descent with modification under natural selection, has not applied to humanity in the 50,000 years since the first modern humans left Africa.

Wade disagrees. His book is so rich in biologic detail that a single post discussing it all would try anyone’s attention span. So I’m going to go through it page by page, commenting on the material within (the way I’ve done for some chemistry textbooks), breaking it up into digestible chunks.

As might be expected, there will be a lot of molecular biology involved. For some background see the posts in https://luysii.wordpress.com/category/molecular-biology-survival-guide/. Start with http://luysii.wordpress.com/2010/07/07/molecular-biology-survival-guide-for-chemists-i-dna-and-protein-coding-gene-structure/ and follow the links forward.

Wade won me over very quickly (on page 3) by his accurate citations to the current literature. He talks about how selection on a mitochondrial protein helped Tibetans live at high altitude (while the same mutation in those living at low altitudes leads to blindness). Some 25% of Tibetans have the mutation, while it is rare among those living at low altitudes.
Here’s my post of 10 June 2012 on the matter. That’s all for now.

Have Tibetans illuminated a path to the dark matter (of the genome)?

I speak not of the Dalai Lama’s path to enlightenment (despite the title). Tall people tend to have tall kids. Eye color and hair color are also hereditary to some extent. Pitched battles have been fought over just how much of intelligence (assuming one can measure it) is heritable. Now that genome sequencing is approaching a price of $1,000/genome, people have started to look at variants in the genome to help them find the genetic contribution to various diseases, in the hope of understanding and treating them better.

Frankly, it’s been pretty much a bust. Height is 80% heritable, yet the 20 leading candidate variants picked up by genome-wide association studies (GWAS) account for only 3% of the variance [ Nature vol. 461 pp. 458 – 459 '09 ]. This has happened again and again, particularly with diseases. A candidate gene (or region of the genome), say for schizophrenia or autism, is described in one study, only to be shot down by the next. This is likely because many different genetic defects can be associated with schizophrenia — there are a lot of ways the brain cannot work well. For details see http://luysii.wordpress.com/2010/04/25/tolstoy-was-right-about-hereditary-diseases-imagine-that/ or http://luysii.wordpress.com/2010/07/29/tolstoy-rides-again-autism-spectrum-disorder/.

Typically, even when an association of a disease with a genetic variant is found, the variant increases the risk of the disorder by only 2% or less. The bad thing is that when you lump together all of the variants you’ve discovered (for something like height) and add up their effects, you still account for less than 50% of the heritability. It isn’t for want of looking, as by 2010 some 600 human GWAS had been published [ Neuron vol. 68 p. 182 '10 ]. Yet lots of studies have shown various diseases to have a substantial degree of heritability (particularly schizophrenia). The fact that we’ve been unable to find the DNA variants causing the heritability was totally unexpected. Like the dark matter in galaxies, which we know is there by the way the stars spin around the galactic center, this missing heritability has been called the dark matter of the genome.
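The arithmetic behind the “dark matter” label is worth making explicit. Using the height numbers quoted above (80% heritability, 3% of variance explained by the top GWAS variants), a back-of-the-envelope sketch:

```python
# Missing heritability, using the height figures from the text:
# height is ~80% heritable, yet the 20 leading GWAS variants together
# explain only ~3% of the phenotypic variance.

heritability = 0.80      # fraction of phenotypic variance that is genetic
gwas_explained = 0.03    # fraction explained by the top 20 variants

# What fraction of the KNOWN genetic variance have the variants located?
fraction_of_genetic_found = gwas_explained / heritability   # 0.0375

# How much genetic variance remains unaccounted for ("dark matter")?
missing = heritability - gwas_explained                     # 0.77

print(f"variants locate {fraction_of_genetic_found * 100:.2f}% of the genetic variance")
print(f"{missing * 100:.0f}% of phenotypic variance is genetic but unlocated")
```

So even under optimistic assumptions, well over nine-tenths of the genetic contribution to height remains unlocated.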

Which brings us to Proc. Natl. Acad. Sci. vol. 109 pp. 7391 – 7396 ’12. It concerns an awful disease causing blindness in kids called Leber’s hereditary optic neuropathy. The ’cause’ has been found: a change of one base from thymine to cytosine in the gene for a protein (NADH dehydrogenase subunit 1), changing amino acid #30 from tyrosine to histidine. The mutation is found in mitochondrial DNA, not nuclear DNA, making it easier to find (it occurs at position 3394 of the 16,569-nucleotide mitochondrial DNA).
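The tyrosine-to-histidine change follows directly from the standard genetic code: a T –> C substitution at the first position of either tyrosine codon yields a histidine codon. A minimal sketch (the two-entry codon table covers only the codons relevant here):

```python
# The standard genetic code explains the amino acid change in the text:
# a single T -> C substitution at the first position of a tyrosine codon
# (TAT or TAC) produces a histidine codon (CAT or CAC).

CODON_TABLE = {
    "TAT": "Tyr", "TAC": "Tyr",   # tyrosine codons
    "CAT": "His", "CAC": "His",   # histidine codons
}

def mutate_first_base(codon: str, new_base: str) -> str:
    """Return the codon with its first base replaced."""
    return new_base + codon[1:]

for codon in ("TAT", "TAC"):
    mutant = mutate_first_base(codon, "C")
    print(f"{codon} ({CODON_TABLE[codon]}) -> {mutant} ({CODON_TABLE[mutant]})")
# TAT (Tyr) -> CAT (His)
# TAC (Tyr) -> CAC (His)
```

Either way, one base change at the DNA level suffices to swap the amino acid, which is why a single-nucleotide variant in mitochondrial DNA can have such drastic phenotypic consequences.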

Mitochondria in animal cells, and chloroplasts in plant cells, are remnants of bacteria which moved inside cells as we know them today (rest in peace Lynn Margulis).

Some 25% of Tibetans have the 3394 T –> C mutation, but they see just fine. It appears to be an adaptation to altitude, because the same mutation is found in non-Tibetans on the Indian subcontinent living above about 1,500 meters (about as high as Denver). However, if you have the same genetic change and live below this altitude, you get Leber’s.

This is a spectacular demonstration of the influence of environment on heredity. Granted, the altitude at which you live is a fairly impressive environmental variable, but it’s at least possible that more subtle changes (temperature, humidity, air quality, etc.) might also influence disease susceptibility to the same genetic variant. This is certainly one possible explanation for the failure of GWAS to turn up much. The authors make no mention of this in their paper, so these ideas may actually be (drumroll please) original.

If such environmental influences on the phenotypic expression of genetic changes are common, it might be yet another explanation for why drug discovery is so hard. Consider CETP (Cholesterol Ester Transfer Protein) and the very expensive failure of drugs inhibiting it. Torcetrapib was associated with increased deaths in a trial of 15,000 people lasting 18 – 20 months. Perhaps those who died somehow lived in a different environment. Perhaps others were actually helped by the drug.
