Category Archives: Philosophical issues raised

Why we imperfectly understand randomness the way we do.

The cognoscenti think the average individual is pretty dumb when it comes to probability and randomness. Not so, says a fascinating recent paper [ Proc. Natl. Acad. Sci. vol. 112 pp. 3788 – 3792 ’15 ]. The average Joe (this may mean you), when asked to write down a random series of fifty or so heads and tails, never puts in enough runs of heads or runs of tails. This leads to the gambler’s fallacy: the belief that if an honest coin gives a run of, say, 5 heads, the next toss is more likely to be tails.

There is a surprising amount of structure lurking within purely random sequences such as the tosses of a fair coin, where the probability of heads is exactly 50%. Even so, the waiting time for two heads (HH) or two tails (TT) to first appear is significantly longer than for an alternation (HT or TH). On average 6 tosses are required for HH or TT to appear, while only an average of 4 are needed for HT or TH.
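
To see this for yourself, here is a minimal simulation (mine, not from the paper) of the waiting times:

```python
# Toss a fair coin until a two-toss pattern first appears, and average
# the number of tosses required over many trials.
import random

def first_occurrence(pattern, rng):
    seq = ""
    while seq[-2:] != pattern:   # keep tossing until the last two tosses match
        seq += rng.choice("HT")
    return len(seq)              # tosses needed for the first appearance

rng = random.Random(0)
for pattern in ("HH", "HT"):
    waits = [first_occurrence(pattern, rng) for _ in range(100_000)]
    print(pattern, sum(waits) / len(waits))  # ~6.0 for HH, ~4.0 for HT
```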

This is why Joe SixPack never puts in enough runs of Hs or Ts.

Why should the wait be longer for HH or TT when every toss gives H or T with equal probability? The mean time between successive occurrences of HH or TT in a long series is actually the same as for HT or TH (4 tosses for every pair). The variance is different: the occurrences of HH and TT are bunched in time, while those of HT and TH are spread evenly.
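
A sketch (again mine) checking both claims over a long run of simulated tosses:

```python
# In a long series the mean spacing between occurrences is ~4 for every
# two-toss pattern, but the spacings for HH have a larger spread (bunching).
import random
from statistics import mean, pstdev

rng = random.Random(0)
tosses = "".join(rng.choice("HT") for _ in range(1_000_000))

for pattern in ("HH", "HT"):
    starts = [i for i in range(len(tosses) - 1) if tosses[i:i + 2] == pattern]
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    print(pattern, mean(gaps), pstdev(gaps))  # means ~4 for both; HH varies more
```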

It gets worse for longer repetitions, because repetitions can build on each other: HHH contains two instances of HH, while alternations do not overlap this way. So repetitions bunch together, as noted earlier. We are very good at perceiving waiting times, and this is probably why we think repetitions are less likely and soon to break up.

The paper goes a lot further, constructing a neural model based on the way our brains integrate information over time when processing sequences of events. It takes into consideration our perceptions of mean times AND waiting times; we average the two. This produces the best-fitting bias gain parameter for an existing Bayesian model of randomness.

See, you’re not as dumb as they thought you were.

Another reason for our behavior comes from neuropsychology and physiological psychology. We have ways to watch the electrical activity of the brain and find out when it perceives something as different. One such signal is called mismatch negativity (see http://en.wikipedia.org/wiki/Mismatch_negativity for more detail). It is a brain potential peaking .1 – .25 seconds after a deviant tone or syllable (the better-known P300 is a later, related potential).

Play 5 middle C’s in a row followed by a D, then C’s again. The potential doesn’t occur after any of the C’s, just after the D. This has been applied to the study of infant perception long before infants can speak.

It has shown us that Asian and Western newborn infants both hear ‘r’ and ‘l’ quite well (showing mismatch negativity to a sudden ‘r’ or ‘l’ in a sequence of other sounds). If the Asian infant never hears people speaking words with r and l in them for 6 months, it loses the mismatch negativity to them (and the ability to perceive them). So our brains are literally ‘tuned’ to understand the language we hear.

So we are more likely to notice the T after a run of H’s, or an H after a run of T’s. We are also likely to notice just how long it has been since it last occurred.

This is part of a more general phenomenon — the ability of our brains to pick up and focus on changes in stimuli. Exactly the same phenomenon explains why we see edges of objects so well — at least here we have a solid physiologic explanation — surround inhibition (for details see http://en.wikipedia.org/wiki/Lateral_inhibition). It happens in the complicated circuitry of the retina, before the brain is involved.

Philosophers should note that this destroys the concept of the pure (e.g. uninterpreted) sensory percept — information is being processed within our eyes before it ever gets to the brain.

Update 31 Mar — I wrote the following to the lead author

“Dr. Sun:

Fascinating paper. I greatly enjoyed it.

You might be interested in a post from my blog (particularly the last few paragraphs). I didn’t read your paper carefully enough to see if you mention mismatch negativity, P300 and surround inhibition. If not, you should find this quite interesting.

Luysii”

And received the following back in an hour or two

“Hi, Luysii- Thanks for your interest in our paper. I read your post, and find it very interesting, and your interpretation of our findings is very accurate. I completely agree with you making connections to the phenomenon of change detection and surround inhibition. We did not spell it out in the paper, but in the supplementary material, you may find some relevant references. For example, the inhibitory competition between HH and HT detectors is a key factor for the unsupervised pattern association we found in the neural model.

Yanlong”

Nice ! ! !

How formal tensor mathematics and the postulates of quantum mechanics give rise to entanglement

Tensors continue to amaze. I never thought I’d get a simple mathematical explanation of entanglement, but here it is. Explanation is probably too strong a word, because it relies on the postulates of quantum mechanics, which are extremely simple but which lead to extremely bizarre consequences (such as entanglement). As Feynman famously said, ‘no one understands quantum mechanics’. Despite that, the theory has never made a prediction not confirmed by experiment, so it is correct even if we don’t understand ‘how it can be like that’. 100 years of correct predictions are not to be sneezed at.

If you’re a bit foggy on just what entanglement is — have a look at https://luysii.wordpress.com/2010/12/13/bells-inequality-entanglement-and-the-demise-of-local-reality-i/. Even better, read the book by Zeilinger referred to in the link (if you have the time).

Actually you don’t even need all the postulates of quantum mechanics (as given in the book “Quantum Computation and Quantum Information” by Nielsen and Chuang). No differential equations. No Schrodinger equation. No operators. No eigenvalues. What could be nicer for those thirsting for knowledge? Such a deal ! ! ! Just 2 postulates and a little formal mathematics.

Postulate #1 “Associated to any isolated physical system, is a complex vector space with inner product (that is a Hilbert space) known as the state space of the system. The system is completely described by its state vector which is a unit vector in the system’s state space”. If this is unsatisfying, see the explication of this on p. 80 of Nielsen and Chuang (where the postulate appears)

Because the linear algebra underlying quantum mechanics seemed to be largely ignored in the course I audited, I wrote a series of posts called Linear Algebra Survival Guide for Quantum Mechanics. The first should be all you need. https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/ but there are several more.

Even though I wrote a post on tensors, showing how they are a way of describing an object independently of the coordinates used to describe it, I didn’t even discuss another aspect of tensors — multilinearity — which is crucial here. The post itself can be viewed at https://luysii.wordpress.com/2014/12/08/tensors/

Start by thinking of a simple tensor as a vector in a vector space. The tensor product is just a way of combining vectors in vector spaces to get another (and larger) vector space. So the tensor product isn’t a product in the sense that multiplication of two objects (real numbers, complex numbers, square matrices) produces another object of exactly the same kind.

So mathematicians use a special symbol for the tensor product — a circle with an x inside. I’m going to use something similar ‘®’ because I can’t figure out how to produce the actual symbol. So let V and W be the quantum mechanical state spaces of two systems.

Their tensor product is just V ® W. Mathematicians can define things any way they want. A crucial aspect of the tensor product is that it is multilinear. So if v and v’ are elements of V, then v + v’ is also an element of V (because two vectors in a given vector space can always be added). Similarly w + w’ is an element of W if w and w’ are. Adding to the confusion in trying to learn this stuff is the fact that all vectors are themselves tensors.

Multilinearity of the tensor product is what you’d think

(v + v’) ® (w + w’) = v ® (w + w’ ) + v’ ® (w + w’)

= v ® w + v ® w’ + v’ ® w + v’ ® w’

You get all 4 tensor products in this case.
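
If you want to check the expansion numerically, here is a small sketch (my own, not from any book) using NumPy’s Kronecker product as the tensor product of coordinate vectors:

```python
# Verify (v + v') (x) (w + w') = v(x)w + v(x)w' + v'(x)w + v'(x)w'
# for concrete coordinate vectors, with np.kron standing in for (x).
import numpy as np

v, vp = np.array([1.0, 2.0]), np.array([0.5, -1.0])   # v and v' in V
w, wp = np.array([3.0, 0.0]), np.array([1.0, 1.0])    # w and w' in W

lhs = np.kron(v + vp, w + wp)
rhs = np.kron(v, w) + np.kron(v, wp) + np.kron(vp, w) + np.kron(vp, wp)
print(np.allclose(lhs, rhs))  # True: all 4 simple tensor products appear
```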

This brings us to Postulate #2 (actually #4 in the book, on p. 94 — we don’t need the other two — I told you this was fairly simple)

Postulate #2 “The state space of a composite physical system is the tensor product of the state spaces of the component physical systems.”

(For a formal definition of a simple tensor, see http://planetmath.org/simpletensor)

Where does entanglement come in? Patience, we’re nearly done. One now must distinguish simple and non-simple tensors. Each of the 4 tensor products in the sum on the last line is simple, being the tensor product of two vectors.

What about v ® w’ + v’ ® w ?? It isn’t simple because there is no way to get it by itself as simple_tensor1 ® simple_tensor2. So it’s called a compound tensor. (v + v’) ® (w + w’) is a simple tensor because v + v’ is just another single element of V (call it v”) and w + w’ is just another single element of W (call it w”).

So for the tensor product (v + v’) ® (w + w’), the elements of the two state spaces can be understood separately, as though V has state v” and W has state w”.

v ® w’ + v’ ® w can’t be understood this way. The full system can’t be understood by considering V and W in isolation, e.g. the two subsystems V and W are ENTANGLED.
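
For the calculationally inclined, here is a sketch (conventions and example numbers are my assumptions) of the standard test for two 2-dimensional state spaces: reshape the 4 amplitudes of the combined state into a 2 x 2 matrix; the state is simple exactly when that matrix has only one nonzero singular value:

```python
# The number of nonzero singular values (the Schmidt rank) distinguishes
# simple tensors (rank 1) from compound, entangled ones (rank > 1).
import numpy as np

def schmidt_rank(state):
    singular_values = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(singular_values > 1e-12))

product = np.kron([1.0, 0.0], [0.6, 0.8])            # v (x) w : a simple tensor
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # of the form v(x)w + v'(x)w'
print(schmidt_rank(product), schmidt_rank(bell))     # 1 (separable), 2 (entangled)
```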

Yup, that’s all there is to entanglement (mathematically at least). The paradoxes of entanglement, including Einstein’s ‘spooky action at a distance’, are left for you to explore — again Zeilinger’s book is a great source.

But how can it be like that, you ask? Feynman said not to start thinking these thoughts, and if he didn’t know, do you expect a retired neurologist to tell you? Please.

Tensors

Anyone wanting to understand the language of general relativity must eventually tackle tensors. The following is what I wished I’d known about them before I started studying them on my own.

First, mathematicians and physicists describe tensors so differently that it’s hard to even see that they’re talking about the same thing (one math book of mine says exactly that). Also, mathematicians basically dump on the physicists’ way of doing tensors.

My first experience with tensors was years ago when auditing a graduate abstract algebra course. The instructor prefaced his first lecture by saying that tensors were the hardest thing in mathematics. Unfortunately right at that time my father became ill and I had to leave the area.

I’ll write a bit more about the mathematical approach at the end.

The physicist’s way of looking at tensors actually is a philosophical position. It basically says that there is something out there, that two people viewing it from different perspectives are seeing the same thing, and that how they numerically describe it, while important, is irrelevant to the thing itself (the ding an sich if you want to get fancy). What a tensor tries to capture is how one view of the object can be transformed into another without losing the object in the process.

This is a bit more subtle than using different measuring scales (Fahrenheit vs. centigrade). That salt shaker sitting there looks a bit different to everyone present at the table. Relative to themselves they’d all use different numbers to describe its location, height and width. Depending on distance it would subtend different visual angles. But it’s out there, it has but one height, and no one around the table would disagree.

You’re tall and see it from above, while your child sees it at eye level. You measure the distances from your eye to its top and to its bottom, subtract them and get the height. So does your child. You get the same number.

The two of you have actually used two distinct vectors in two different coordinate systems. To transform your view into that of your child’s you have to transform your coordinate system (whose origin is your eye) to the child’s. The distance numbers to the shaker from the eye are the coordinates of the shaker in each system.

So the position of the bottom of the shaker (e.g. the vector describing it) actually has two parts:
1. The coordinate system of the viewer
2. The distances measured by each viewer (the components or the coefficients of the vector).

To shift from your view of the salt shaker to that of your child’s you must change both the coordinate system and the distances measured in each. This is what tensors are all about. So the vector from the top to the bottom of the salt shaker is what you want to keep constant. To do this the coordinate system and the components must change in opposite ways. This is where the terms covariant and contravariant and all the indices come in.

What is taken as the basic change is that of the coordinate system (the basis vectors if you know what they are). In the case of the vector to the salt shaker the components transform the opposite way (as they must to keep the height of the salt shaker the same). That’s why they are called contravariant.

The use of the term contravariant vector is terribly confusing, because every vector has two parts (the coefficients and the basis) which transform oppositely. There are mathematical objects whose components (coefficients) transform the same way as the original basis vectors — these are called covariant (the most familiar is the metric, a bilinear symmetric function which takes two vectors and produces a real number). Remember it’s the way the coefficients of the mathematical object transform which determines whether they are covariant or contravariant. To make things a bit easier to remember, contRavariant coefficients have their indices above the letter (R for roof), while covariant coefficients have their indices below the letter. The basis vectors (when written in) always have the opposite position of their indices.

Another trap — the usual notation for a vector skips the basis vectors entirely, so the most familiar example (x, y, z) or (x^1, x^2, x^3) is really
x^1 * e_1 + x^2 * e_2 + x^3 * e_3, where e_1 is (1,0,0), etc.

So the crucial thing about tensors is the way they transform from one coordinate system to another.
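
A concrete numerical sketch (the example numbers are mine) of this opposite transformation:

```python
# Change the basis with a matrix A and the components with A's inverse;
# the vector itself (the object out there) is unchanged.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 1.0]])            # columns of A are the new basis vectors
E_old = np.eye(2)                     # old basis: the standard one
E_new = E_old @ A                     # basis transforms WITH A

x_old = np.array([3.0, 4.0])          # components in the old basis
x_new = np.linalg.inv(A) @ x_old      # components transform AGAINST A ('contravariant')

print(E_old @ x_old, E_new @ x_new)   # same point both times: the object survives
```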

There is a far more abstract way to define tensors — by the universal property that multilinear maps on vector spaces factor through the tensor product. I don’t think you need it for relativity (I hope not). If you want to see a very concrete approach to this admittedly abstract business, I recommend “Differential Geometry of Manifolds” by Stephen Lovett pp. 381 – 383.

An even more abstract definition of tensors (seen in the graduate math course) is to define them on modules, not vector spaces. Modules are just like vector spaces except that their scalars come from rings, rather than fields like the real or the complex numbers. The difference is that, unlike fields, the nonzero elements of a ring don’t necessarily have inverses.

I hope this is helpful to some of you.

The incredible information economy of frameshifting

Her fox and dog ate our pet rat

H erf oxa ndd oga teo urp etr at

He rfo xan ddo gat eou rpe tra t

The last two lines make no sense at all, but (neglecting the spaces) they have identical letter sequences.

Here are similar sequences of nucleotides making up the genetic code as transcribed into RNA

ATG CAT TAG CCG TAA GCC GTA GGA

TGC ATT AGC CGT AAG CCG TAG GA.

GCA TTA GCC GTA AGC CGT AGG A ..

Again, in our genome there are no spaces between the triplets. But all the triplets you see are meaningful in the sense that they each code for one of the twenty amino acids (except for TAA and TAG, which say stop). ATG codes for methionine (the purists will note that all the T’s should be U). I’m too lazy to look the rest up, but the ribosome doesn’t care, and will happily translate all 3 sequences into the sequential amino acids of a protein.

Both sets of sequences have undergone (reading) frame shifts.
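
A few lines of Python (the three stop codons are the standard ones; everything else is just string slicing) reproduce the three readings above:

```python
# Split one DNA sequence into codons in each of its three reading frames,
# flagging the stop codons with a '*'.
seq = "ATGCATTAGCCGTAAGCCGTAGGA"
STOPS = {"TAA", "TAG", "TGA"}

for frame in range(3):
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    print(frame, " ".join("*" + c if c in STOPS else c for c in codons))
```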

A previous post https://luysii.wordpress.com/2014/10/13/the-bach-fugue-of-the-genome/ marveled about how something too small even to be called a virus coded for a protein whose amino acids were read in two different frames.

Frameshifting is used by viruses to get more mileage out of their genomes. Why? There is only so much DNA you can pack into the protein coat (capsids) of a virus.

[ Proc. Natl. Acad. Sci. vol. 111 pp. 14675 – 14680 ’14 ] Usually DNA density in cell nuclei or bacteria is 5 – 10% of volume. However, in viral capsids it is 55% of volume. The pressure inside the viral capsid can reach ten atmospheres. Ejection is therefore rapid (60,000 basepairs/second).

The AIDS virus (HIV1) relies on frameshifting of its genome to produce viable virus. The genes for two important proteins (gag and pol) have 240 nucleotides (80 amino acids) in common. Frameshifting allows the 240 nucleotides to be read by the cell’s ribosomes in two different frames (not at once). Granted, there are 61 three-nucleotide codons for only 20 amino acids, so some redundancy is built in, but the 80 amino acids coded by the two frames are usually quite different.

That the gag and pol proteins function at all is miraculous.

The phenomenon is turning out to be more widespread. [ Proc. Natl. Acad. Sci. vol. 111 pp. E4342 – E4349 ’14 ] KSHV (Kaposi’s Sarcoma HerpesVirus) causes (what else?) Kaposi’s sarcoma, a tumor quite rare until people with AIDS started developing it (due to their lousy immune system being unable to contend with the virus). Open reading frame 73 (ORF73) codes for the major latency associated nuclear antigen 1 (LANA1). It has 3 domains: a basic amino terminal region, an acidic central repeat region (divisible into CR1, CR2 and CR3) and another basic carboxy terminal region. LANA1 is involved in maintaining KSHV episomes, regulation of viral latency, and transcriptional regulation of viral and cellular genes.

LANA1 is made of multiple high and lower molecular weight isoforms — e.g. a LANA ladder band pattern seen in immunoblotting.

This work shows that LANA1 (and also Epstein Barr Nuclear Antigen 1) undergoes highly efficient +1 and -2 programmed frameshifting, generating previously undescribed alternative reading frame proteins in their repeat regions. Programmed frameshifting to generate multiple proteins from one RNA sequence can increase coding capacity without increasing the size of the viral capsid.

The presence of similar repeat sequences in human genes (such as huntingtin — the defective gene in Huntington’s chorea) implies that we should look for frameshifting translation in ourselves as well as in viruses. In the case of mutant huntingtin, frameshifting in the abnormally expanded CAG tracts produces proteins containing polyAlanine or polySerineArginine tracts.

Well, G, A, T and C are the 1’s and 0’s of the way genetic information is stored in our genomic computer. It really isn’t surprising that the genome can be read in alternate frames. In the old days of serial data transmission, start and stop bits made sure the 1’s and 0’s were read in the correct frame (and parity bits caught misread bits). There is nothing like that in our genome (except for the 3 stop codons).

What is truly surprising is that reading in an alternate frame produces ‘meaningful’ proteins. This gets us into philosophical waters. Clearly

Erf oxa ndd oga teo urp etr at

Rfo xan ddo gat eou rpe tra t

aren’t meaningful to us. Yet gag and pol are quite meaningful (even life and death meaningful) to the AIDS virus. So meaningful in the biologic sense, means able to function in the larger context of the cell. That really is the case for linguistic meaning. You have to know a lot about the world (and speak English) for the word cat to be meaningful to you. So meaning can never be defined by the word itself. Probably the same is true for concepts as well, but I’ll leave that to the philosophers, or any who choose to comment on this.

The Bach Fugue of the Genome

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.
– Hamlet (1.5.167-8), Hamlet to Horatio

Just when you thought we’d figured out what genomes could do, the virusoid of rice yellow mottle virus performs a feat of dense coding I’d have thought impossible. The following requires a fairly sophisticated understanding of molecular biology, for which the articles in “Molecular Biology Survival Guide for Chemists” might provide the background. Give it a shot. This is fascinating stuff. If the following seems incomprehensible, start with https://luysii.wordpress.com/2010/07/07/molecular-biology-survival-guide-for-chemists-i-dna-and-protein-coding-gene-structure/ and then follow the links forward.

Virusoids are single stranded circular RNAs which depend on a virus for replication. They are distinct from viroids, which need nothing else to replicate. Neither the virusoid nor the viroid was thought to code for protein (until now). They are usually found inside the protein shells of plant viruses.

[ Proc. Natl. Acad. Sci. vol. 111 pp. 14542 – 14547 ’14 ] Viroids and virusoids (viroid like satellite RNAs) are small (220 – 450 nucleotide) covalently closed circular RNAs. They are the smallest known replicating circular RNA pathogens. They replicate via a rolling circle mechanism to produce larger concatemers which are then processed into monomeric forms by a self-splicing hammerhead ribozyme, or by cellular enzymes.

The rice yellow mottle virus (RYMV) contains a virusoid which is a covalently closed circular RNA of a mere 220 nucleotides. A 16 kiloDalton basic protein is made from it. How can this be? Figure the average molecular mass of an amino acid at 100 Daltons, and 3 nucleotides per amino acid codon. This means that 220 nucleotides can code for 73 amino acids at most (e.g. for a 7 – 8 kiloDalton protein).

So far the RYMV virusoid is the only one of the viroids and virusoids which actually codes for a protein. The virusoid sequence contains an internal ribosome entry site (IRES) of the following form: UGAUGA. Initiation starts at the AUG, and since 220 isn’t an integral multiple of 3 (the size of amino acid codons), the ribosome continues translating around the circle in a new reading frame on each pass, until it finally reaches one of the UGAs (termination codons) packed into UGAUGA. Termination codons can be ignored (leaky codons) to obtain larger readthrough proteins. So this virusoid is a circular RNA with NO noncoding sequences, which codes for a protein in 2 or 3 of the 3 possible reading frames. Notice how tightly UGAUGA packs the AUG start and two overlapping UGA stops together ! So it is likely that the same nucleotide is being read 2 or 3 ways. Amazing ! ! !
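
Here is a toy sketch of the trick. The 10-nucleotide circle below is hypothetical (NOT the actual RYMV sequence), but like the virusoid its length isn’t a multiple of 3 (10 mod 3 = 1, just as 220 mod 3 = 1) and it carries the overlapping AUG/UGA motif, so the ribosome only meets a stop codon after shifting frames on later passes:

```python
# Translate a toy circular RNA: start at the AUG inside UGAUGA, keep reading
# triplets around the circle (wrapping with modular arithmetic) until a stop.
circle = "UGAUGACCCC"             # hypothetical 10-nt circle, NOT the real virusoid
STOPS = {"UAA", "UAG", "UGA"}

pos = circle.find("AUG")          # initiation at the AUG overlapping the UGAs
codons = []
while True:
    codon = "".join(circle[(pos + k) % len(circle)] for k in range(3))
    if codon in STOPS:
        break                     # a UGA is finally met, but only in a shifted frame
    codons.append(codon)
    pos = (pos + 3) % len(circle)

print(codons)  # 6 codons from a 10-nt circle: nucleotides reused in new frames
```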

It isn’t clear what function the virusoid protein performs for the virus when the virus has infected a cell. Perhaps there isn’t one, and the only function of the protein is to help the virusoid continue its existence inside the virus.

Talk about information density. The RYMV virusoid is the Bach Fugue of the genome. Bach sometimes inverts the fugue theme, and sometimes plays it backwards (a musical palindrome if you will).

It is unfortunate that more people don’t understand the details of molecular biology so they can appreciate mechanisms of this elegance. Whether you think understanding it is an esthetic experience, is up to you. I do. To me, this resembles the esthetic experience that mathematics offers.

A while back I wrote a post, wondering if the USA was acquiring brains from the MidEast upheavals, the way we did from Europe because of WWII. Here’s the link https://luysii.wordpress.com/2014/09/28/maryam-mirzakhani/.

Clearly Canada has done just that. Here are the authors of the PNAS paper above and their affiliations. Way to go Canada !

Mounir Georges AbouHaidar — Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada M5S 3B2
Srividhya Venkataraman — Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada M5S 3B2
Ashkan Golshani — Biology Department, Carleton University, Ottawa, ON, Canada K1S 5B6
Bolin Liu — Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada M5S 3B2
Tauqeer Ahmad — Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada M5S 3B2

A Troublesome Inheritance – IV — Chapter 3

Chapter III of “A Troublesome Inheritance” contains a lot of very solid molecular genetics, and a lot of unfounded speculation. I can see why the book has driven some otherwise rational people bonkers. Just because Wade knows what he’s talking about in one field, doesn’t imply he’s competent in another.

Several examples: p. 41 “Nonetheless, it is reasonable to assume that if traits like skin color have evolved in a population, the same may be true of its social behavior.” Consider yes, assume no.

p. 42 “The society of living chimps can thus with reasonable accuracy stand as a surrogate for the joint ancestor” (of humans and chimps — thought to have lived about 7 megaYears ago) “and hence describe the baseline from which human social behavior evolved.” I doubt this.

The chapter contains many just-so stories about the evolution of chimp and human societies (post hoc ergo propter hoc). Plausible, but not testable.

Then follows some very solid stuff about the effects of the hormone oxytocin (which triggers milk letdown in nursing women) on human social interaction. Then some speculation on the ways natural selection could work on the oxytocin system to make people more or less trusting. He lists several potential mechanisms: (1) changes in the amount of oxytocin made, (2) increasing the number of protein receptors for oxytocin, (3) making each receptor bind oxytocin more tightly. This shows that Wade has solid molecular biological (and biological) chops.

He quotes a Dutch psychologist on his results with oxytocin and sociality — unfortunately, there have been too many scandals involving Dutch psychologists and sociologists to believe what he says until it’s replicated (Google Diederik Stapel, Don Poldermans, Jens Forster, Markus Denzler if you don’t believe me). It’s sad that this probably honest individual is tarred with that brush, but he is.

p. 59 — He notes that the idea that human behavior is solely the result of social conditions with no genetic influence is appealing to Marxists, who hoped to make humanity behave better by designing better social conditions. Certainly, much of the vitriol heaped on the book has come from the left. A communist uncle would always say ‘it’s the system’ to which my father would reply ‘people will corrupt any system’.

p. 61 — the effects on survival of mutations producing lactose tolerance are noted — people herding cattle and drinking milk survive better if their gene to digest lactose (the main sugar in milk) isn’t turned off after childhood. If your society doesn’t herd animals, there is no reason for anyone to digest milk after weaning from the breast. The mutations aren’t in the enzyme digesting lactose, but in the DNA that turns on expression of the gene for the enzyme (e.g. the promoter). Interestingly, 3 separate mutations doing this have been found in African herders, each different from the one that arose in the Funnel Beaker Culture of Scandinavia 6,000 years ago. This is a classic example of natural selection producing the same phenotypic effect by separate mutations.

There is a much bigger biological fish to be fried here, which Wade doesn’t discuss. It takes energy to make any protein, and there is no reason to make a protein to help you digest milk if you aren’t nursing, and one very good reason not to — it wastes metabolic energy, something in short supply for humans as they lived until about 15,000 years ago. So humans evolved a way not to make the protein in adult life. The genetic change is in the DNA controlling protein production, not the protein itself.

You may have heard it said that we are 98% Chimpanzee. This is true in the sense that our 20,000 or so proteins are that similar to the chimp. That’s far from the whole story. This is like saying Monticello and Independence Hall are just the same because they’re both made out of bricks. One could chemically identify Monticello bricks as coming from the Virginia piedmont, and Independence Hall bricks coming from the red clay of New Jersey, but the real difference between the buildings is the plan.

It’s not the proteins, but where and when and how much of them are made. The control for this (plan if you will) lies outside the genes for the proteins themselves, in the rest of the genome. The control elements have as much right to be called genes, as the parts of the genome coding for amino acids. Granted, it’s easier to study genes coding for proteins, because we’ve identified them and know so much about them. It’s like the drunk looking for his keys under the lamppost because that’s where the light is.

p. 62 — There follows some description of the changes of human society from hunter-gatherer, to agrarian, to the rise of city states. Whether adaptation to different social organizations produced genetic changes, or genetic changes permitted the social adaptations, isn’t clear. Wade says “changes in social behavior, has most probably been molded by evolution, though the underlying genetic changes have yet to be identified.” This assumes a lot, e.g. that genetic changes are involved. I’m far from sure, but the idea is not far-fetched. Stating that genetic changes have never shaped, and will never shape, society is without any scientific basis, and just as fanciful as many of Wade’s statements in this chapter. It’s an open question, which is really all Wade is saying.

In defense of Wade’s idea, think about animal breeding, as Darwin did extensively. The Origin of Species (worth a read if you haven’t already read it) is full of interchanges with all sorts of breeders (pigeons, cattle). The best example we have presently is the breeds of dogs. They have very different personalities — and have been bred for them: sheep dogs, mastiffs, etc. Have a look at [ Science vol. 306 p. 2172 ’04, Proc. Natl. Acad. Sci. vol. 101 pp. 18058 – 18063 ’04 ], where the DNA of a variety of dog breeds was studied to determine which changes determined the way they look. The length of a breed’s snout correlated directly with the number of repeats in a particular protein (Runx-2). The paper is a decade old and I’m sure that they’re starting to look at behavior.

More to the point about selection for behavioral characteristics, consider the domestication of the modern dog from the wolf. Contrast the dog with the chimp (which hasn’t been bred).

[ Science vol. 298 pp. 1634 – 1636 ’02 ] Chimps are terrible at picking up human cues as to where food is hidden. Cues would be something as obvious as looking at the container, pointing at the container or even touching it. Even those who eventually perform well take dozens of trials or more to learn it. When tested in more difficult tests requiring them to show flexible use of social cues, they don’t.

This paper shows that puppies (raised with no contact with humans) do much better at reading humans than chimps. However wolf cubs do not do better than the chimps. Even more impressively, wolf cubs raised by humans don’t show the same skills. This implies that during the process of domestication, dogs have been selected for a set of social cognitive abilities that allow them to communicate with humans in unique ways. Dogs and wolves do not perform differently in a non-social memory task, ruling out the possibility that dogs outperform wolves in all human guided tasks.

All in all, a fascinating book with lots to think about, argue with, propose counterarguments to, and propose other arguments in support of (as I’ve just done). Definitely a book for those who like to think, whether you agree with it all or not.

Old dog does new(ly discovered) tricks

One of the evolutionarily oldest enzyme classes is the aaRSs (amino acyl tRNA synthetases). Every cell has them, including bacteria. Life as we know it wouldn’t exist without them. Briefly, they load tRNAs with the appropriate amino acid. If this is Greek to you, look at the first 3 articles in https://luysii.wordpress.com/category/molecular-biology-survival-guide/.

Amino acyl tRNA synthetases are enzymes of exquisite specificity, having to correctly match up 20 amino acids to some 61 different types of tRNAs. Mistakes in the selection of the correct amino acid occur once every 10,000 to 100,000 couplings, and in the selection of the correct tRNA once every 1,000,000. The lower tRNA error rate is due to the fact that tRNAs are much larger than amino acids, so more contacts between enzyme and tRNA are possible.

As the tree of life ascended from bacteria over billions of years, 13 new protein domains which have no obvious association with aminoacylation have been added to aaRS genes. More importantly, the additions have been maintained over the course of evolution (with no change in the primary function of the synthetase). Some of the new domains are appended to each of several synthetases, while others are specific to a single synthetase. The fact that they’ve been retained implies they are doing something that natural selection wants (teleology inevitably raises its ugly head in any serious discussion of molecular biology or cellular physiology — it’s impossible to avoid).

[ Science vol. 345 pp. 328 – 332 ’14 ] looked at what mRNAs some 37 different aaRS genes were transcribed into. Six different human tissues were studied this way. Amazingly, 79% of the 66 in-frame splice variants removed or disrupted the aaRS catalytic domain. The aaRS for histidine had 8 in-frame splice variants, all of which removed the catalytic domain. 60 of the 70 variants losing the catalytic domain (they call these catalytic nulls) retained at least one of the 13 domains added in higher eukaryotes. Some of the transcripts were tissue specific (e.g. present in some of the 6 tissues but not all).

Recent work has shown roles for specific AARSs in a variety of pathways — blood vessel formation, inflammation, immune response, apoptosis, tumor formation, p53 signaling. The process of producing a completely different function for a molecule is called exaptation — to contrast it with adaptation.

Up to now, when a given protein was found to have enzymatic activity, the book on what that protein did was closed (with the exception of the small GTPases). End of story. Yet here we have cells spending the metabolic energy to make an enzymatically dead protein (aaRSs are big — the one for alanine has nearly 1,000 amino acids). Teleology screams — what is it used for? It must be used for something! This is exactly where chemistry is silent. It can explain the incredible selectivity and sensitivity of the enzyme but not what it is ‘for’. We have crossed the Cartesian dualism between flesh and spirit.

Could this sort of thing be the tip of the iceberg? We know that splice variants of many proteins are common. Could other enzymes whose function was essentially settled once substrates were found, be doing the same thing? We may have only 20,000 or so protein coding genes, but 40,000, 60,000, . . . or more protein products of them, each with a different biological function.

So aaRSs are very old molecular biological dogs, who’ve been doing new tricks all along. We just weren’t smart enough to see them (’till now).

Novels may have only 7 basic plots, but molecular biology continues to surprise and enthrall.

Trouble in River City (aka Brain City)

300 European neuroscientists are unhappy. If 50,000,000 Frenchmen can’t be wrong, can the neuroscientists be off base? They don’t like the way things are going in a Billion Euro project to computationally model the brain (Science vol. 345 p. 127 ’14 11 July, Nature vol. 511 pp. 133 – 134 ’14 10 July). What has them particularly unhappy is that one of the sections, involving cognitive neuroscience, has been eliminated.

A very intelligent Op-Ed in the New York Times of 12 July by a psychology professor notes that we have no theory of how to go from neurons, their connections and their firing of impulses, to how the brain produces thought. Even better, he notes that we have no idea of what such a theory would look like, or what we would accept as an explanation of brain function in terms of neurons.

While going from the gene structure in our DNA to cellular function, from there to function at the level of the organ, and from the organ to the function of the organism is more than hard (see https://luysii.wordpress.com/2014/07/09/heres-a-drug-target-for-schizophrenia-and-other-psychiatric-diseases/) at least we have a road map to guide us. None is available to take us from neurons to thought, and the 300 argue that concentrating only on neurons, while producing knowledge, won’t give us the explanation we seek. The 300 argue that we should progress on all fronts, which the people running the project reject as too diffuse.

I’ve posted on this problem before — I don’t think a wiring diagram of the brain (while interesting) will tell us what we want to know. Here’s part of an earlier post — with a few additions and subtractions.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every of the 10^23 molecules of a mole or so of gas in a container is brought home. Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain. They want a ‘wiring diagram’ of the brain, e.g. a list of every neuron, plus, for each neuron, a list of the neurons it receives connections from, and a third list of the neurons it sends connections to. For the non-neuroscientist — the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana). This is why you need the second and third lists.
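
In programming terms the three lists amount to something like the following sketch (the structure and names are mine, and each synapse gets its own edge, so that two neurons may be connected more than once):

```python
# A directed multigraph of neurons: every synapse is a separate edge,
# with an input list and an output list maintained per neuron.
from collections import defaultdict

synapses = []               # every synapse as a (pre, post, kind) triple
inputs = defaultdict(list)  # for each neuron, the neurons contacting it
outputs = defaultdict(list) # for each neuron, the neurons it contacts

def add_synapse(pre, post, kind):
    synapses.append((pre, post, kind))
    inputs[post].append(pre)
    outputs[pre].append(post)

add_synapse("A", "B", "excitatory")
add_synapse("A", "B", "inhibitory")   # a second, different contact A -> B
print(outputs["A"], inputs["B"])      # ['B', 'B'] ['A', 'A']
```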

Clearly a monumental undertaking and one which grows more monumental with the passage of time. Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them). This is where the neurological urban myth of the loss of 10,000 neurons each day came from. For details see https://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 ’11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses. Well, that’s not a mole of synapses but it is a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other to at least get a schematic diagram.

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it. I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnection between neurons and just how many there are. Figure 2 (p. 179) is particularly revealing showing a 3 dimensional reconstruction using the high resolutions obtainable by the electron microscope. Stare at figure 2.f. a while and try to figure out what’s going on. It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing. There are at least 6 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses). A wiring diagram alone won’t tell you this.

2. When I was starting out, the following statement would have seemed impossible: it is now possible to watch synapses in the living brain of an awake animal for extended periods of time. We now know that synapses come and go in the brain. The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times. Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ]. So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses. Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses (it’s called volume neurotransmission). Their importance in psychiatry and drug addiction is unparalleled. Examples of such volume transmitters include serotonin, dopamine and norepinephrine. Drugs of abuse affecting their action include cocaine and amphetamine. Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

4. (new addition) A given neuron doesn’t contact another neuron just once, as far as we know. So how do you account for this with a graph (which I think allows only one connection between any two nodes)?

5. (new addition) All connections (synapses) aren’t created equal. Some are far, far away from the part of the neuron (the axon) which actually fires impulses, so many of them have to be turned on at once for firing to occur. So in addition to the excitatory/inhibitory dichotomy, you’d have to put another number on each link in the graph, for the probability of a given synapse producing an effect. In general this isn’t known for most synapses.

6. (new addition) Some connections never directly cause a neuron to fire or not fire. They just increase or decrease the probability that a neuron will fire or not fire with impulses at other synapses. These are called neuromodulators, and the brain has tons of different ones.

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg. This is of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles. There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse. The construction of the brain from the fertilized egg must be in some sense statistical. Remarkable that it happens at all. Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.

(Addendum 17 July ’14) I’m fortunate enough to have a family member who worked at Bell labs (when it existed) and who knows much more about graph theory than I do. Here are his points and a few comments back

Seventh paragraph: Still don’t understand the purpose of the three lists, or what that buys that you don’t get with a graph model. See my comments later in this email.

“nobody’s brain is strong enough to comprehend it”: At some level, this is true of virtually every phenomenon of Nature or science. We only begin to believe that we “comprehend” something when some clever person devises a model for the phenomenon that is phrased in terms of things we think we already understand, and then provides evidence (through analysis and perhaps simulation) that the model gives good predictions of observed data. As an example, nobody comprehended what caused the motion of the planets in the sky until science developed the heliocentric model of the solar system and Newton et al. developed calculus, with which he was able to show (assuming an inverse-square behavior of the force of gravity) that the heliocentric model explained observed data. On a more pedestrian level, if a stone-age human was handed a personal computer, his brain couldn’t even begin to comprehend how the thing does what it does — and he probably would not even understand exactly what it is doing anyway, or why. Yet we modern humans, at least us engineers and computer scientists, think we have a pretty good understanding of what the personal computer does, how it does it, and where it fits in the scheme of things that modern humans want to do.

Of course we do, that’s because we built it.

On another level, though, even computer scientists and engineers don’t “really” understand how a personal computer works, since many of the components depend for their operation on quantum mechanics, and even Feynman supposedly said that nobody understands quantum mechanics: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Penrose actually did think the brain worked by quantum mechanics, because what it does is nonAlgorithmic. That’s been pretty much shot down.

As for your six points:

Point 1 I disagree with. It is quite easy to express excitatory or inhibitory behavior in a wiring diagram (graph). In fact, this is done all the time in neural network research!

Point 2: Updating a graph is not necessarily a big deal. In fact, many systems that we analyze with graph theory require constant updating of the graph. For example, those who analyze, monitor, and control the Internet have to deal with graphs that are constantly changing.

Point 3: Can be handled with a graph model, too. You will have to throw in additional edges that don’t represent synapses, but instead represent the effects of neurotransmitters. Will this get to be a complicated graph? You bet. But nobody ever promised an uncomplicated model. (Although uncomplicated — simple — models are certainly to be preferred over complicated ones.)

Point 4: This can be easily accounted for in a graph. Simply place additional edges in the graph to account for the additional connections. This adds complexity, but nobody ever promised the model would be simple. Another alternative, as I mentioned in an earlier email, is to use a hypergraph.

Point 5: Not sure what your point is here. Certainly there is a huge body of research literature on probabilistic graphs (e.g., Markov Chain models), so there is nothing you are saying here that is alien to what graph theory researchers have been doing for generations. If you are unhappy that we don’t know some probabilistic parameters for synapses, you can’t be implying that scientists must not even attempt to discover what these parameters might be. Finding these parameters certainly sounds like a challenge, but nobody ever claimed this was an un-challenging line of research.

In addition to not knowing the parameters, you’d need a ton of them, as it’s been stated frequently in the literature that the ‘average’ neuron has 10,000 synapses impinging on it. I’ve never been able to track this one down. It may be neuromythology, like the 10,000 neurons we’re said to lose every day. With 10,000 adjustable parameters you could make a neuron sing the Star Spangled Banner. Perhaps this is why we can sing the Star Spangled Banner.

Point 6: See my comments on Point 5. Probabilistic graph models have been well-studied for generations. Nothing new or frightening in this concept.

“A Troublesome Inheritance” – II – Four Anthropological disasters of the past 100 years

Page 5 of Wade’s book contains two recent pronouncements from the American Anthropological Association stating that “Race is about culture not biology”. It’s time to look at anthropology’s record of the past 100 years. It isn’t pretty.

Start with the influential Franz Boas (1858 – 1942) who taught at Columbia for decades. His most famous student was Margaret Mead.

He, along with his students, felt that the environment was everything for human culture and that heredity had minimal influence. Here’s what Boas did over 100 years ago.

[ Proc. Natl. Acad. Sci. vol. 99 pp. 14622 – 14623, 14436 – 14439 ’02 ] Retzius invented the cephalic index in the 1890s. It is just the widest breadth of the skull divided by the front-to-back length. One can be mesocephalic, dolichocephalic or brachycephalic. From this index one could differentiate Europeans by location. Anthropologists continue to take such measurements. Franz Boas in 1910 – 1913 said that the USA-born offspring of immigrants showed a ‘significant’ difference from their immigrant parents in their cephalic index. This was used to reinforce the idea that environment was everything.

Boas made some 13,000 measurements. This is a reanalysis of his data, showing that he seriously misinterpreted it; some 8,500 of his 13,000 cases were reanalyzed, and the genetic component of the variability was far stronger than the environmental. In a later paper Boas stated that he never claimed that there were NO genetic components to head shape, but his students and colleagues took the ball and ran with it, and Boas never (publicly) corrected them. The heritability was high in the family data, and the differences between ethnic groups persisted in the American environment.

One of Boas’ students wrote that “Heredity cannot be allowed to have acted any part in history.” The chain of events shaping a people “involves the absolute conditioning of historical events by other historical events.” Hardly scientific statements.

On to his most famous student, Margaret Mead (1901 – 1978), who later became head of the American Association for the Advancement of Science (1960). In 1928 she published “Coming of Age in Samoa”, about the sexual freedom of Samoan adolescents. It got a big play, and I was very interested in such matters as a pimply adolescent. It fit the idea that “We are forced to conclude that human nature is almost unbelievably malleable, responding accurately and contrastingly to contrasting cultural conditions.” This certainly fit nicely with the idea that mankind could be reshaped by changing the system — see http://en.wikipedia.org/wiki/New_Soviet_man — one of the many fantasies of the left promoted by academia.

Subsequently, an anthropologist (Freeman) went back to Samoa and concluded that Mead had been hoaxed. He found that Samoans may beat or kill their daughters if they are not virgins on their wedding night. A young man who cannot woo a virgin may rape one to extort her into eloping. The family of a cuckolded husband may attack and kill the adulterer. For more details see Pinker, “The Blank Slate”, pp. 56 and following.

The older among you may remember reading about “the gentle Tasaday” of the Philippines, a Stone Age people who had no word for war, featured in the NY Times in the 70s. They were the noble savages of Rousseau in the 20th century. The 1970 “discovery” of the Tasaday as a “Stone Age” tribe was widely heralded in newspapers, shown on national television in a National Geographic Society program and an NBC special documentary, and further publicized in “The Gentle Tasaday: A Stone Age People in the Philippine Rain Forest” by the American journalist John Nance.

In all, Manuel Elizalde Jr., the son of a rich Filipino family, was depicted as the savior of the Tasaday through his creation of Panamin (from presidential assistant for national minorities), a cabinet-level office to protect the Tasaday and other “minorities” from corrosive modern influences and from environmentally destructive logging companies. It appears that Elizalde hoodwinked almost everybody by paying neighboring T’boli people to take off their clothes and pose as a “Stone Age” tribe living in a cave. He then used the avalanche of international interest and concern for his Tasaday creation to give the Panamin organization control over “tribal minority” lands and resources, and ultimately to make deals with logging and mining companies.

Last but not least is “The Mismeasure of Man” (1981), in which Stephen Jay Gould tore apart the work of Samuel Morton, a 19th century anthropologist who measured skulls. He accused Morton of (consciously or unconsciously) manipulating the data to come up with the conclusions he desired.

Well, guess what. Someone went back and looked at Morton’s figures, and remeasured some of his skulls (which are still at Penn) and found that the manipulation was all Gould’s not Morton’s. I posted about this when it came out 3 years ago — here’s the link https://luysii.wordpress.com/2011/06/26/hoisting-steven-j-gould-by-his-own-petard/.

Here is the relevant part of that post — An anthropologist [ PLoS Biol. doi:10.1371/journal.pbio.1001071;2011 ] went back to Penn (where the skulls in question reside), and remeasured some 300 of them, blinding themselves to their ethnic origins as they did. Morton’s measurements were correct. They also had the temerity to actually look at Morton’s papers. They found that, contrary to Gould, Morton did report average cranial capacities for subgroups of both populations, sometimes on the same page or on pages near to figures that Gould quotes, and therefore must have seen. Even worse (see Nature vol. 474 p. 419 ’11 ) they claim that “Gould misidentified the Native American samples, falsely inflating the average he calculated for that population”. Gould had claimed that Morton’s averages were incorrect.

Perhaps anthropology has gotten its act together now, but given this history, any pronouncements they make should be taken with a lot of salt. In fairness to the field, it should be noted that the debunkers of Boas, Mead and Gould were all anthropologists. They have a heavy load to carry.

“A Troublesome Inheritance” – I

One of the joys of a deep understanding of chemistry is the appreciation of the ways in which life is constructed from the most transient of materials. Presumably the characteristics of living things that we can see (the phenotype) will someday be traceable back to the proteins, nucleic acids, and small metabolites (lipids, sugars, etc.) making us up.

For the time being we must content ourselves with understanding the code (our genes) and how it instructs the development of a trillion celled organism from a fertilized egg. This brings us to Wade’s book, which has been attacked as racist, by anthropologists, sociologists and other lower forms of animal life.

Their position is that races are a social, not a biological construct, and that differences between societies are due to the way they are structured, not to differences in the relative frequency of the gene variants (alleles) in the populations making them up. Essentially they are saying that evolution and its mechanism, descent with modification under natural selection, do not apply to humanity in the 50,000 years since the first modern humans left Africa.

Wade disagrees. His book is very rich in biologic detail, and one post discussing it all would try anyone’s attention span. So I’m going to go through it page by page, commenting on the material within (the way I’ve done for some chemistry textbooks), breaking it up into digestible chunks.

As might be expected, there will be a lot of molecular biology involved. For some background see the posts in https://luysii.wordpress.com/category/molecular-biology-survival-guide/. Start with https://luysii.wordpress.com/2010/07/07/molecular-biology-survival-guide-for-chemists-i-dna-and-protein-coding-gene-structure/ and follow the links forward.

Wade won me over very quickly (on page 3) by his accurate citations of the current literature. He talks about how selection on a mitochondrial protein helped Tibetans to live at high altitude (while the same mutation in those living at low altitudes leads to blindness). Some 25% of Tibetans have the mutation, while it is rare among those living at low altitudes.
Here’s my post of 10 June 2012 on the matter. That’s all for now.

Have Tibetans illuminated a path to the dark matter (of the genome)?

I speak not of the Dalai Lama’s path to enlightenment (despite the title). Tall people tend to have tall kids. Eye color and hair color are also hereditary to some extent. Pitched battles have been fought over just how much of intelligence (assuming one can measure it) is heritable. Now that genome sequencing is approaching a price of $1,000/genome, people have started to look at variants in the genome to help them find the genetic contribution to various diseases, in the hopes of understanding and treating them better.

Frankly, it’s been pretty much of a bust. Height is something which is 80% heritable, yet the 20 leading candidate variants picked up by genome wide association studies (GWAS) account for 3% of the variance [ Nature vol. 461 pp. 458 – 459 ’09 ]. This has happened again and again particularly with diseases. A candidate gene (or region of the genome), say for schizophrenia, or autism, is described in one study, only to be shot down by the next. This is likely due to the fact that many different genetic defects can be associated with schizophrenia — there are a lot of ways the brain cannot work well. For details — see https://luysii.wordpress.com/2010/04/25/tolstoy-was-right-about-hereditary-diseases-imagine-that/. or see https://luysii.wordpress.com/2010/07/29/tolstoy-rides-again-autism-spectrum-disorder/.

Typically, even when an association of a disease with a genetic variant is found, the variant only increases the risk of the disorder by 2% or less. The bad thing is that when you lump together all of the variants you’ve discovered (for something like height) and add up the risk, you never account for even 50% of the heredity. It isn’t for want of looking, as by 2010 some 600 human GWAS studies had been published [ Neuron vol. 68 p. 182 ’10 ]. Yet lots of the studies have shown various diseases to have a degree of heritability (particularly schizophrenia). The fact that we’ve been unable to find the DNA variants causing the heritability was totally unexpected. Like the dark matter in galaxies, which we know is there by the way the stars spin around the galactic center, this missing heritability has been called the dark matter of the genome.

Which brings us to Proc. Natl. Acad. Sci. vol. 109 pp. 7391 – 7396 ’12. It concerns an awful disease causing blindness in kids called Leber’s hereditary optic neuropathy. The ’cause’ has been found. It is a change of 1 base from thymine to cytosine in the gene for a protein (NADH dehydrogenase subunit 1) causing a change at amino acid #30 from tyrosine to histidine. The mutation is found in mitochondrial DNA not nuclear DNA, making it easier to find (it occurs at position 3394 of the 16,569 nucleotide mitochondrial DNA).

Mitochondria in animal cells, and chloroplasts in plant cells, are remnants of bacteria which moved inside cells as we know them today (rest in peace Lynn Margulis).

Some 25% of Tibetans have the 3394 T–>C mutation, but they see just fine. It appears to be an adaptation to altitude, because the same mutation is found in nonTibetans on the Indian subcontinent living above 1500 meters (about as high as Denver). However, if you have the same genetic change and live below this altitude, you get Leber’s.

This is a spectacular demonstration of the influence of environment on heredity. Granted that the altitude you live at is a fairly impressive environmental change, but it’s at least possible that more subtle changes (temperature, humidity, air conditions etc. etc.) might also influence disease susceptibility to the same genetic variant. This certainly is one possible explanation for the failure of GWAS to turn up much. The authors make no mention of this in their paper, so these ideas may actually be (drumroll please) original.

If such environmental influences on the phenotypic expression of genetic changes are common, it might be yet another explanation for why drug discovery is so hard. Consider CETP (Cholesterol Ester Transfer Protein) and the very expensive failure of drugs inhibiting it. Torcetrapib was associated with increased deaths in a trial of 15,000 people followed for 18 – 20 months. Perhaps those dying somehow lived in a different environment. Perhaps others were actually helped by the drug.
