Category Archives: Philosophical issues raised

The strangeness of mathematical proof

I’ve written about Urysohn’s Lemma before, and a copy of that post will be found at the end. I decided to plow through the proof, since coming up with it is regarded by Munkres (the author of a widely used book on topology) as very creative. Here’s how he introduces it:

“Now we come to the first deep theorem of the book, a theorem that is commonly called the “Urysohn lemma”. . . . It is the crucial tool used in proving a number of important theorems. . . . Why do we call the Urysohn lemma a ‘deep’ theorem? Because its proof involves a really original idea, which the previous proofs did not. Perhaps we can explain what we mean this way: By and large, one would expect that if one went through this book and deleted all the proofs we have given up to now and then handed the book to a bright student who had not studied topology, that student ought to be able to go through the book and work out the proofs independently. (It would take a good deal of time and effort, of course, and one would not expect the student to handle the trickier examples.) But the Urysohn lemma is on a different level. It would take considerably more originality than most of us possess to prove this lemma.”

I’m not going to present the proof, just comment on one of the tools used to prove it: a list of all the rational numbers in the interval from 0 to 1, with no repeats.

Munkres gives the start of the list, and you can see why it would eventually include every rational number. Here it is:

0, 1, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5 . . .

Note that 2/4 is missing (because it equals 1/2, which is already on the list). It would be fairly easy to write a program to produce the list, but a computer running the program would never stop. In addition it would be slow: given a denominator n, it can always include 1/n and (n-1)/n, but to rule out repeats among the other numerators it has to check divisibility, costing on the order of n-2 divisions. If it had a way of knowing that n was prime, it could just put in 1/n, 2/n, . . . (n-1)/n without the divisions. But although there are tables of primes for small integers, there is no simple formula that generates them, so brute force is required. So for a denominator around 10^n, that means on the order of 10^n – 2 divisions. Once the numbers get truly large, there isn’t enough matter in the universe to represent them, nor has there been enough time since the big bang to do the calculations.
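Here’s a minimal sketch of such a program (my own illustration, not from Munkres). It sidesteps the repeated trial divisions by using the greatest common divisor: k/n is in lowest terms exactly when gcd(k, n) = 1.

```python
from itertools import islice
from math import gcd

def rationals():
    """Yield the rationals in [0, 1] in Munkres's order, without repeats.

    After the special cases 0 and 1, for each denominator n >= 2 yield
    k/n for every numerator k with gcd(k, n) == 1, so fractions already
    listed in lower terms (like 2/4 = 1/2) are skipped.
    """
    yield (0, 1)
    yield (1, 1)
    n = 2
    while True:            # the program never stops, as noted above
        for k in range(1, n):
            if gcd(k, n) == 1:
                yield (k, n)
        n += 1

# The first eight terms match the list above: 0, 1, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5
print(list(islice(rationals(), 8)))
```

The generator really does run forever; `islice` just peels off the first few terms.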

However, the proof proceeds blithely on after showing the list — this is where the strangeness comes in. It basically uses the complete list of rational numbers as indexes for the infinite collection of open sets to be found in a normal topological space. The proof relies on the assumption of infinite divisibility of space (inherent in the definition of a normal topological space), something totally impossible physically.

So we’re in the never to be seen land of completed infinities (of time, space, numbers of operations). It’s remarkable that this stuff applies to the world we inhabit, but it does, and anyone wishing to understand physics at a deep level must come to grips with mathematics at this level.

Here’s the old post

Urysohn’s Lemma

The above quote is from one of the standard topology texts for undergraduates (or perhaps the standard text) by James R. Munkres of MIT. It appears on page 207 of 514 pages of text. Lee’s textbook on Topological Manifolds gets to it on p. 112 (of 405). For why I’m reading Lee see

Well it is a great theorem, and the proof is ingenious, and understanding it gives you a sense of triumph that you actually did it, and a sense of awe about Urysohn, a Russian mathematician who died at 26. Understanding Urysohn is an esthetic experience, like a Dvorak trio or a clever organic synthesis [ Nature vol. 489 pp. 278 – 281 ’12 ].

Clearly, you have to have a fair amount of topology under your belt before you can even tackle it, but I’m not even going to state or prove the theorem. It does bring up some general philosophical points about math and its relation to reality (e.g. the physical world we live in and what we currently know about it).

I’ve talked about the large number of extremely precise definitions to be found in math (particularly topology). What topology is actually about is space, and what it means for objects to be near each other in space. Well, physics does that too, but it uses numbers — topology tries to get beyond numbers, and although precise, the 202 definitions I’ve written down as I’ve gone through Lee to this point mostly don’t mention them.

Essentially topology reasons about our concept of space qualitatively, rather than quantitatively. In this, it resembles philosophy which uses a similar sort of qualitative reasoning to get at what are basically rather nebulous concepts — knowledge, truth, reality. As a neurologist, I can tell you that half the cranial nerves, and probably half our brains are involved with vision, so we automatically have a concept of space (and a very sophisticated one at that). Topologists are mental Lilliputians trying to tack down the giant Gulliver which is our conception of space with definitions, theorems, lemmas etc. etc.

Well one form of space anyway. Urysohn talks about normal spaces. Just think of a closed set as a Russian Doll with a bright shiny surface. Remove the surface, and you have a rather beat up Russian doll — this is an open set. When you open a Russian doll, there’s another one inside (smaller but still a Russian doll). What a normal space permits you to do (by its very definition), is insert a complete Russian doll of intermediate size, between any two Dolls.

This all sounds quite innocent until you realize that between any two Russian dolls an infinite number of concentric Russian dolls can be inserted. Where did they get a weird idea like this? From the number system of course. Between any two distinct rational numbers p/q and r/s where p, q, r and s are whole numbers, you can always insert a new one halfway between. This is where the infinite regress comes from.
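A two-line check of that claim — the average of two rationals is always another rational strictly between them (Python’s `fractions` module keeps everything exact; the particular fractions are just an example of mine):

```python
from fractions import Fraction

a, b = Fraction(1, 3), Fraction(1, 2)   # any two distinct rationals
mid = (a + b) / 2                        # their average is again a rational
assert a < mid < b
print(mid)  # 5/12 — and the insertion can be repeated forever
```

Repeating the step with `a` and `mid` gives the infinite regress of ever-smaller Russian dolls.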

For mathematics (and particularly for calculus) even this isn’t enough. The square root of two isn’t a rational number (one of the great Euclid proofs), but you can get as close to it as you wish using rational numbers. So there are an infinite number of irrational numbers between any two rational numbers. In fact that’s how the real numbers are defined — essentially by fiat, that any set of numbers bounded above has a least upper bound (think 1, 1.4, 1.41, 1.414, . . . defining the square root of 2).
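The sequence 1, 1.4, 1.41, 1.414, . . . can be generated mechanically: at each stage keep the largest number with that many decimal places whose square is still at most 2 (a sketch of my own, not from the text):

```python
from fractions import Fraction

# Largest rational with the given number of decimal digits whose square
# is at most 2 — a bounded, increasing sequence of rationals whose
# least upper bound is the (irrational) square root of 2.
approximations = []
x = Fraction(0)
for digits in range(6):
    step = Fraction(1, 10 ** digits)
    while (x + step) ** 2 <= 2:
        x += step
    approximations.append(x)

print(approximations)  # [1, 7/5, 141/100, 707/500, 7071/5000, 141421/100000]
```

Every term is rational and every square stays below 2, yet the sequence closes in on a number that no fraction can reach.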

What does this skullduggery have to do with space? It says essentially that space is infinitely divisible, and that you can always slice and dice it as finely as you wish. This is the calculus of Newton and the relativity of Einstein. It clearly is right, or we wouldn’t have GPS systems (which actually require a relativistic correction).

But it’s clearly wrong, as any chemist knows. Matter isn’t infinitely divisible. Just go down 10 orders of magnitude from the visible and you get the hydrogen atom, which can’t be split into smaller and smaller hydrogen atoms (although it can be split).

It’s also clearly wrong as far as quantum mechanics goes — while space might not be quantized, there is no reasonable way to keep chopping it up once you get down to the elementary particle level. You can’t know where they are and where they are going exactly at the same time.

This is exactly one of the great unsolved problems of physics — bringing relativity, with its infinitely divisible space, together with quantum mechanics, where the very meaning of space becomes somewhat blurry (if you can’t know exactly where anything is).

Interesting isn’t it?

Book Review — The Kingdom of Speech — Part III

The last half of Wolfe’s book is concerned with Chomsky and linguistics. Neurologists still think they have something to say about how the brain produces language, something roundly ignored by the professional linguistics field. Almost from the beginning of the specialty, various types of loss of speech (aphasias) were catalogued and correlated with where in the brain the problem was. Some people could understand but not speak (motor aphasia); most turned out to have lesions in the left frontal lobe. Others could speak but not understand what was said to them (receptive aphasia); they usually had lesions in the left temporal lobe (just behind the ear, amazingly enough).

Back in the day this approach was justifiably criticized as follows — yes, you can turn off a lightbulb by flicking a switch, but the switch isn’t producing the light; it is just something necessary for its production. Nowadays not so much, because we see these areas lighting up with increased blood flow (by functional MRI) when speech is produced or listened to.

I first met Chomsky’s ideas, not about linguistics, but when I was trying to understand how a compiler of a high level computer language worked. This was so long ago that Basic and Pascal were considered high level languages. Compilers worked with formal rules, and Chomsky categorized them into a hierarchy which you can read about here —

The book describes the rise of Chomsky as the enfant terrible, the adult terrible, then the eminence grise of linguistics. Wolfe has great fun skewering him, particularly for his left wing posturing (something he did at length in “Radical Chic”). I think most of the description is accurate, but if you have the time and the interest, there’s a much better book — “The Linguistics Wars” by Randy Allen Harris — although it’s old (1993), Chomsky and linguistics had enough history even then that the book contains 356 pages (including index).

Chomsky actually did use the term language organ, meaning a faculty of the human brain responsible for our production of speech. Neuroscience never uses such a term, and Chomsky never tried to localize it in the brain, but work on the aphasias made this at least plausible. If you’ve never heard of ‘universal grammar’, ‘language acquisition device’, or ‘deep structure of language’, the book is a reasonably accurate (and very snarky) introduction.

As the years passed, for everything that Chomsky claimed was a universal of all languages, a language was found that didn’t have it. The last universal left standing was recursion (e.g. the ability to pack phrase within phrase — the example given is “He assumed that now that her bulbs had burned out, he could shine and achieve the celebrity he had always longed for” — thought within thought within thought).

Then a missionary turned linguist (Daniel Everett) found a tribe in the Amazon (the Piraha) with a language which not only lacked recursion, but tenses as well. It makes fascinating reading, including the linguist W. Tecumseh Fitch (yes, Tecumseh), who travelled up the Amazon to prove that they did have recursion — especially as he had collaborated with Chomsky and the (now disgraced) Marc Hauser on an article in 2002 saying that recursion was the true essence of human language. How’s this horrible sentence for recursion?

The book ends with a discussion of the quote Wolfe began the book with — “Understanding the evolution of language requires evidence regarding origins and processes that led to change. In the last 40 years, there has been an explosion of research on this problem as well as a sense that considerable progress has been made. We argue instead that the richness of ideas is accompanied by a poverty of evidence, with essentially no explanation of how and why our linguistic computations and representations evolved. We show that, to date, (1) studies of nonhuman animals provide virtually no relevant parallels to human linguistic communication, and none to the underlying biological capacity; (2) the fossil and archaeological evidence does not inform our understanding of the computations and representations of our earliest ancestors, leaving details of origins and selective pressure unresolved; (3) our understanding of the genetics of language is so impoverished that there is little hope of connecting genes to linguistic processes any time soon; (4) all modeling attempts have made unfounded assumptions, and have provided no empirical tests, thus leaving any insights into language’s origins unverifiable. Based on the current state of evidence, we submit that the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever, with considerable uncertainty about the discovery of either relevant or conclusive evidence that can adjudicate among the many open hypotheses. We conclude by presenting some suggestions about possible paths forward.”

One of the authors is Chomsky himself.

You can read the whole article at

I think that Wolfe is right — language is just a tool (like the wheel or the axe) which humans developed to help them. That our brains are at least 3 times the size of those of our nearest evolutionary cousin (the chimpanzee) probably had something to do with it. If language is a tool, then, like the axe, it didn’t have to evolve from anything.

All in all a fascinating and enjoyable book. There’s much more in it than I’ve had time to cover.  The prose will pick you up and carry you along.

Book Review — The Kingdom of Speech — Part II

Although Darwin held off writing up his ideas for 20 years, fearing the reaction he knew would come from the church, the criticisms that really bothered him the most were those of fellow intellectuals about the evolution of language. They began immediately after the Origin of Species came out in 1859, first from linguists and later from Wallace himself. Even worse, the critics mocked him: the idea that language evolved from animal sounds was called the bow wow theory, and the idea that language arose from the sounds that things make was called the ding dong theory.

This is all detailed in pp. 54 – 87 of The Kingdom of Speech, about which I knew very little. If any real experts on the early history of evolutionary theory are out there and reading this and disagree, please post a comment. I am assuming that the facts as given by Wolfe are correct (I’ve already disagreed with him about his interpretation of some of them —

The real attack on Darwin’s ideas is that man’s mental capacities were so far above those of animals that there was no missing link (particularly since there were lots of primates still around). By this critique man was so special that a special act of creation (not evolution) was called for. It’s theology getting in the back door, but of course this is essentially the claim of all theologies — special creation by a superior being(s).

In his later book “The Descent of Man” (1871, which I’ve not read), according to Wolfe, Darwin made up all sorts of stories (many involving his beloved dog) to show the antecedents in animal behavior of all sorts of things — Darwin actually said that language originated with the songs birds sang during mating, and that female protolanguage persists today in mothers cooing to their babies. Darwin spent a lot of time discussing his dog — how it recognized other dogs as a sign of intelligence. Religion came from the love of a dog for his master (Wolfe claims that Darwin said this in the book — again, I haven’t read The Descent of Man).

Darwin’s second book didn’t get much response. Positive reviews avoided his reasoning, and negative reviews said it was thin. In 1872 the Philological Society of London gave up on trying to find out the origin of language, and wouldn’t accept papers about it. The Linguistic Society of Paris had done this even earlier (1866).

Evolutionists basically stopped talking about language from 1872 to 1949.

As soon as Mendel’s work on genetics was rediscovered, evolution went into scientific eclipse. Here was something that wasn’t just armchair speculation about things happening in the remote past — something on which experiments could be done.
Mendel’s experiments with green peas took 9 years and involved 28,000 plants.

In a fascinating aside, Wolfe notes that Mendel actually sent his work to Darwin. Tragically it was found unread with its pages uncut in Darwin’s papers after his death. In all fairness to Darwin, he and his peers had no idea how heredity worked and there are parts in The Origin of Species in which Darwin appears to accept the inheritance of acquired characteristics (the blacksmith’s large muscles passed on to his son etc. etc.). I don’t think you can read the Origin without being impressed by the tremendous power of Darwin’s mind, and how much work he put in and how far he got with how little he had to go on.

Wolfe says Darwin’s ideas about the origin of language were mocked by Gould one hundred years later (1972) as “Just So Stories” — fantastic, bizarre explanations for why animals are the way they are. I’m not so sure; the citation for this points to an article on Sociobiology, which Gould and Lewontin (see later) relentlessly attacked. Gould himself saw what he wanted to see in his book “The Mismeasure of Man” — for details see —

As you can see, The Kingdom of Speech is full of all sorts of interesting stuff, and I’m not even halfway through talking about it.

Next up, linguistics, including Noam Chomsky and his admission that he doesn’t understand language or where it came from.

Book Review — The Kingdom of Speech — Part I

If you’re interested in evolution, its history, English social and intellectual history, language, Chomsky and the origins of the journal Nature, then Tom Wolfe’s “The Kingdom of Speech” is the book for you. Fellow blogistas will be awed by the clarity and elegance of his writing, and how easily he carries the reader along. It’s very funny and sardonic as well. The review will be split into several parts because there’s so much in the book.

One caveat: I’ve made no attempt to check any of the historical statements in the book. Hopefully they are all true. If you think any of it is incorrect, please post a comment.

Although the book has a lot to say about language, it doesn’t get into this until nearly 1/3 of the way through. It starts with Alfred Russel Wallace in 1858, lying in a sickbed with malaria in the Malay archipelago, coming up with the idea of natural selection, survival of the fittest (his term) and the origin of species. He writes an essay of 20+ pages and sends it off to Darwin, in the hopes that Darwin will pass it to Sir Charles Lyell (whom Wallace didn’t know), who might find it worthy enough to publish.

Darwin gets it in June and is floored. The ideas that he’s been working on since 1838 (in silence, for fear of what the religious establishment would say) are all laid out by what was called a ‘flycatcher’ — someone making his living by going off to the colonies and sending back exotica for British gentlemen back home.

Tom Wolfe has always been fascinated by social class and distinctions between them (about this much more in part II).

British gentlemen were landed gentry, who inherited land and wealth (if not noble titles). Darwin’s history went back to Erasmus Earle, who was an attorney for Cromwell in the mid 1600s. He made so much money that no one in the succeeding EIGHT generations had to work. Robert Darwin (Charles’s father) nonetheless did — he was an M.D. but was more a businessman. He acquired even more money by marrying Wedgwood’s daughter.

Fortunately Robert had lots of money, as Charles was something of a slacker. He started by studying medicine at Edinburgh, but dropped out. He then went to Christ’s College Cambridge to become a clergyman — he dropped this as well, eventually graduating from Cambridge without honors to his name. So Robert paid to have Charles go on a 5 year voyage of exploration on the Beagle. On his return, Robert bought Charles a small pied-à-terre in the country (Down House) with 8 – 9 servants. (Did you know any of this?)

The idea of species change was not new. Erasmus Darwin (Darwin’s grandfather) in 1794 and Lamarck in 1800 thought present day species had evolved from earlier ones.

Lamarck’s rather blasphemous thinking was saved by his heroics in battle at age 17 (as a private). His unit was decimated and all its officers killed; Lamarck somehow took command and held their position until reinforcements arrived.

There’s a lot in the book about how Darwin, Lyell and Hooker screwed Wallace out of the priority for thinking of evolution and natural selection first. Here Wolfe gets things seriously wrong: while Wallace was first into print, his thinking lagged Darwin’s by 20 years. However, Darwin, not wishing to be attacked by the clergy, kept things to himself, only telling Lyell about it in 1856.

Most of the readership is probably fully engaged with work, family and career, and doesn’t have time to actually read “The Origin of Species”. In retirement, I did, and the power of Darwin’s mind is simply staggering. He did so much with what little information he had. There was no clear idea of how heredity worked, and at several points he’s a Lamarckian — inheritance of acquired characteristics. If you do have the time, I suggest that you read the 1859 book chapter by chapter along with a very interesting book — Darwin’s Ghost by Steve Jones (published in 1999), which updates Darwin’s book to contemporary thinking chapter by chapter.

Wolfe also gets evolution wrong, saying there is no evidence for it. E.g. no one has seen a species change, etc. etc.  Perhaps, but the biochemical evidence is incontrovertible for descent with modification, otherwise you couldn’t replace a vital yeast protein gene with the human homolog and have it work.

Do you know what the X club is? It was a group of 9 naturalists (including Thomas Huxley and Hooker) who met monthly to defend Darwin’s ideas. They also created the journal we know today as Nature.

This actually explains a lot of stuff I’ve read there over the years — the correct interpretation of evolutionary doctrine receives a great deal of space: punctuated evolution, group selection, kin selection, what the proper unit of selection is, etc. etc.

The attacks that bothered Darwin the most were those about language. That’s the subject of the next part of the review.

You are alive because the lipid bilayer of your plasma membrane is asymmetric

You are an organism with trillions of cells. A mosquito bit you, depositing millions of viruses in your tissues. The virus can reproduce only within one of your cells, and it has exploited all sorts of protein-protein chemistry to get in. Antibodies (if you are fortunate enough to have them) can get rid of the extracellular critters. However, 500,000 have made it into the same number of your cells, and are merrily trying to reproduce.

How does the asymmetry of the lipid bilayer of your plasma membrane help you survive? If each virus-infected cell killed itself before the virus reproduced, you’d survive. Although 500,000 is a large number, it is less than 1 millionth of your cell total.

Well, you do have intracellular defenses against viruses, called the innate immune system. One of them is a protein with the ugly name of gasdermin D. The activated innate immune system (in the form of inflammatory caspases) cleaves gasdermin. This relieves the inhibition of the amino terminal part of gasdermin by the carboxy terminal part, giving a fragment which binds to one particular membrane component (phosphatidyl serine), which makes up 20% of the inner leaflet of the cell membrane. It then forms a large diameter pore (to a cell, 140 Angstroms is quite large) in the cell membrane. No cell can survive this, so it dies, releasing cellular contents (probably including some viral components, but no fully formed viruses). For details see [ Nature vol. 535 pp. 111 – 116, 153 – 158 ’16 ]

Wait a minute. The toxic gasdermin fragment is also released. So how come it doesn’t kill everything in sight? Because our cellular membranes keep phosphatidyl serine confined to the inner leaflet, normal cells don’t show it on their exterior, so they can be bathed in gasdermin with no ill effect. What is responsible for this asymmetry? Believe it or not, an ATP-consuming enzyme called flippase (about this more later), which takes any phosphatidyl serine it finds on the outer leaflet and schleps it back inside the cell.

There is all sorts of elegant chemistry which explains just how gasdermin binds to phosphatidyl serine and none of the many other phospholipids found on the inner leaflet. There is more elegant chemistry explaining how flippase works (see later).

What chemistry cannot explain, is why organisms would ‘want’ an asymmetric membrane. As soon as you get into the function of a particular compound in an organism, chemistry is powerless to tell you why. Nothing else can explain how a given molecule does what it does on the molecular level but that is not enough for a satisfying explanation.

One further point before some hard core cellular biochemistry follows (after ***). Our cells are dying all the time. The lining of your gut is replaced every 5 days. Even the longest lasting element of your blood is gone after half a year, and most other elements are turned over at least once a month. When these cells die, they must be cleaned up without undue fuss (such as inflammation). The cleaners are cells called macrophages. A dying cell releases chemical signals, actually called ‘eat me’ signals, one of which is phosphatidyl serine found on the membrane fragments of a dead cell. The fact that flippases keep it on the inner leaflet means that macrophages won’t attack a normal cell.

Slick isn’t it?


Flippase is a Mg-ATP-dependent aminophospholipid translocase. It localizes phosphatidylserine and phosphatidylethanolamine to the inner membrane leaflet by rapidly translocating them from the outer to the inner leaflet against an electrochemical gradient. The stoichiometry between aminophospholipid translocation and ATP hydrolysis is close to one (how will the cell have enough ATP to do anything else?). The flippase is inhibited by high calcium, by pseudosubstrates such as vanadate, acetylphosphate and para-nitrophenyl phosphate, and by SH-reactive reagents such as N-ethylmaleimide and pyridyldithioethylamine (PDA), a specific inhibitor of phospholipid translocation.

[ Proc. Natl. Acad. Sci. vol. 109 pp. 1449 – 1454 ’12 ] P4-ATPases are a subfamily of P-type ATPases. They transport aminophospholipids from the exoplasmic to the cytoplasmic leaflet (and are known as flippases). Man has 14 P4-ATPases, expressed in various cell types. They are thought to be similar to the catalytic subunits of the Ca++ ATPase, and the Na, K ATPase, consisting of cytoplasmic, N, P and A domains and a membrane domain made of 10 transmembrane helices (M1 – M10).

[ Proc. Natl. Acad. Sci. vol. 111 pp. E1334 – E1343 ’14 ] The P4-ATPases are thought to resemble the classic P-type ATPase cation pumps — a transmembrane domain of 10 helices and 3 cytoplasmic domains (P for phosphorylation, N for nucleotide binding and A for actuator). ATP8A2 forms an intermediate phosphorylated on aspartic acid (E2P) and undergoes a catalytic cycle similar to the sodium pump (Na+, K+ ATPase). Dephosphorylation of E2P is activated by the transported substrates phosphatidyl serine (PS) and phosphatidyl ethanolamine (PE), similar to the K+ activation of dephosphorylation in the sodium pump.

PE and PS are 10x as large as the cations transported by the sodium pump. This is known as the giant substrate problem. This work shows that isoleucine #364 (mutated in patients with the ataxia, retardation and dysequilibrium syndrome, aka CAMRQ syndrome [ Eur. J. Hum. Genet. vol. 21 pp. 281 – 285 ’13 ]) forms a hydrophobic gate separating the entry and exit sites of PS. I364 likely directs the sequential formation and annihilation of water-filled cavities (as shown by molecular dynamics simulations), allowing transport of the hydrophilic phospholipid head group in a groove outlined by TMs 1, 2, 4 and 6, with the hydrocarbon chains following passively, still in the membrane lipid phase (and presumably outside the channel) — this must disrupt the hell out of the protein as it passes. They call this the credit card model — only the interaction with part of the molecule is important, just as the magnetic stripe is the only important thing about the credit card.

Why you do and don’t need chemistry to understand why we have big brains

You need some serious molecular biological chops to understand why primates such as ourselves have large brains. For this you need organic chemistry. Or do you? Yes and no. Yes to understand how the players are built and how they interact. No because it can be explained without any chemistry at all. In fact, the mechanism is even clearer that way.

It’s an exercise in pure logic. David Hilbert, one of the major mathematicians at the dawn of the 20th century famously said about geometry — “One must be able to say at all times–instead of points, straight lines, and planes–tables, chairs, and beer mugs”. The relationships between the objects of geometry were far more crucial to him than the objects themselves. We’ll take the same tack here.

So instead of the nucleotides Uridine (U), Adenine (A), Guanine (G), Cytosine (C), we’re going to talk about lock and key and hook and eye.

We’re going to talk about long chains of these four items. The order is crucial. Two long chains of them can pair up only if there are segments on each where the locks on one pair with the keys on the other, and the hooks with the eyes. How many possible combinations of the four are there on a chain of 20? Just 4^20 = 2^40 = 1,099,511,627,776. So to get two randomly chosen chains to pair up exactly is pretty unlikely, unless in some way you or the blind Watchmaker chose them to do so.
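The arithmetic is easy to confirm — each of the 20 positions independently takes one of 4 values:

```python
# Number of distinct chains of length 20 built from 4 kinds of items.
chains_of_20 = 4 ** 20
assert chains_of_20 == 2 ** 40 == 1_099_511_627_776

# Chance that a second, randomly chosen chain exactly complements
# a given one — effectively never by accident.
print(1 / chains_of_20)
```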

Now you need a Turing machine to take a long string of these 4 items and turn it into a protein. In the case of the crucial Notch protein the string of locks, keys, hooks and eyes contains at least 5,000 of them, and their order is important, just as the order of letters in a word is crucial for its meaning (consider united and untied).

The cell has tons of such Turing machines (called ribosomes) and lots of copies of strings coding for Notch (called Notch mRNAs).

The more Notch protein around in the developing brain, the more the proliferating precursors to neurons proliferate before differentiating into neurons, resulting in a bigger brain.

The Notch string doesn’t all code for protein; at one end is a stretch of locks, keys, hooks and eyes which binds other strings, and when they are bound the Notch string is degraded, meaning less Notch and a smaller brain. The other strings are about 20 long and are called microRNAs.

So to get more Notch and a bigger brain, you need to decrease the number of microRNAs specifically binding to the Notch string. One particular microRNA (called miR-143-3p) has it in for the Notch string. So how did primates get rid of miR-143-3p? They have an insert (unique to them) in another string which contains 16 binding sites for miR-143-3p. This string, called lincND, essentially acts as a sponge for miR-143-3p, meaning it can’t get to the Notch string, meaning that neuronal precursor cells proliferate more, and primate brains get bigger.
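The sponge logic is just bookkeeping, which a toy calculation makes explicit (all the copy numbers below are invented for illustration; only the 16 sites per lincND come from the text):

```python
mirna_pool = 1_000        # hypothetical copies of miR-143-3p in a cell
lincnd_copies = 50        # hypothetical lincND transcripts
sites_per_lincnd = 16     # from the text: 16 miR-143-3p binding sites

# Each lincND can soak up as many as 16 miRNAs; whatever remains
# free is what's left to bind and degrade the Notch string.
sequestered = min(mirna_pool, lincnd_copies * sites_per_lincnd)
free_mirna = mirna_pool - sequestered
print(free_mirna)  # 200 — an 80% drop in the miRNA available to attack Notch
```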

So can you forget organic chemistry if you want to understand why we have big brains? In the above sense you can. Your understanding won’t be particularly rich, but it will be at a level where chemical explanation is powerless.

No amount of understanding of polyribonucleotide double helices will tell you why a particular choice out of the 1,099,511,627,776 possible strings of 20 will be important. Literally we have moved from physicality to the realm of pure ideas, crossing the Cartesian dichotomy in the process.

Here’s a copy of the original post with lots of chemistry in it and all the references you need to get the molecular biological chops you’ll need.

Why our brains are large: the elegance of its molecular biology

Primates have much larger brains in proportion to their body size than other mammals. Here’s why. The mechanism is incredibly elegant. Unfortunately, you must put a sizable chunk of recent molecular biology under your belt before you can comprehend it. Anyone can listen to Mozart without knowing how to read or write music. Not so here.

I doubt that anyone can start from ground zero and climb all the way up, but here is all the background you need to comprehend what follows. Start here —
and follow the links (there are 5 more articles).

Also you should be conversant with competitive endogenous RNA (ceRNA) — here’s a link —

Also you should understand what microRNAs are — we’re still discovering all the things they do — here’s the background you need —

Still game?

Now we must delve into the embryology of the brain, something few chemists or nonbiological type scientists have dealt with.

You’ve probably heard of the term ‘water on the brain’. This refers to enlargement of the ventricular system, a series of cavities in all our brains. In the fetus, nearly all our neurons are formed from cells called neuronal precursor cells (NPCs) lining the fetal ventricle. Once formed they migrate to their final positions.

Each NPC has two choices — Choice #1: divide into two NPCs, or Choice #2: divide into an NPC and a daughter cell which will divide no further, but which will mature, migrate and become an adult neuron. So to get a big brain, make NPCs adopt Choice #1.

This is essentially a choice between proliferation and maturation. It doesn’t take many doublings of a NPC to eventually make a lot of neurons. Naturally cancer biologists are very interested in the mechanism of this choice.
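Since each round of Choice #1 doubles the precursor pool, a handful of extra symmetric divisions goes a long way. A toy illustration (the numbers of doublings are arbitrary, just to show the exponential growth):

```python
# Each symmetric NPC division (Choice #1) doubles the precursor
# pool, so n extra doublings multiply the eventual neuron count
# by 2**n.
for extra_doublings in (1, 5, 10, 20):
    print(extra_doublings, "extra doublings:", 2 ** extra_doublings, "fold more neurons")
```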

Well to make a long story short, there is a protein called NOTCH — vitally important in embryology and in cancer biology which, when present, causes NPCs to make choice #1. So to make a big brain keep Notch around.

Well we know that some microRNAs bind to the mRNA for NOTCH which helps speed its degradation, meaning less NOTCH protein. One such microRNA is called miR-143-3p.

We also know that the brain contains a lncRNA called lncND (ND for Neural Development). The incredible elegance is that there is a primate specific insert in lncND which contains 16 (yes 16) binding sites for miR-143-3p. So lncND acts as a sponge for miR-143-3p meaning it can’t bind to the mRNA for NOTCH, meaning that there is more NOTCH around. Is this elegant or what. Let’s hear it for the Blind Watchmaker, assuming you have the faith to believe in such things.
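For the quantitatively inclined, the sponge effect is easy to caricature with made-up numbers. This is a toy mass-balance sketch, not anything measured; the only figure taken from the paper is the 16 binding sites per lncND:

```python
# Toy model of ceRNA "sponging". Assume each lncND transcript
# soaks up to 16 copies of miR-143-3p, leaving the remainder
# free to attack the NOTCH mRNA. All copy numbers are invented.
def free_mirna(total_mirna, lncnd_copies, sites_per_lncnd=16):
    """miR-143-3p left over after the sponge binds its fill."""
    sequestered = min(total_mirna, lncnd_copies * sites_per_lncnd)
    return total_mirna - sequestered

print(free_mirna(1000, 0))    # → 1000 (no sponge; NOTCH mRNA in trouble)
print(free_mirna(1000, 50))   # → 200  (50 sponges soak up 800 copies)
print(free_mirna(1000, 100))  # → 0    (sponge capacity exceeds supply)
```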

Fortunately lncND is confined to the brain, otherwise we’d all be dead of cancer.

Should you want to read about this, here’s the reference [ Neuron vol. 90 pp. 1141 – 1143, 1255 – 1262 ’16 ] where there’s a lot more.

Historically, this was one of the criticisms of the Star Wars missile defense — the Russians wouldn’t send over a few missiles, they’d send hundreds which would act as sponges to our defense. Whether or not attempting to put Star Wars in place led to Russia’s demise is debatable, but a society where it was a crime to own a copying machine could never have competed technically to produce such a thing.

NonAlgorithmic Intelligence

Penrose was right. Human intelligence is nonAlgorithmic. But that doesn’t mean that our physical brains produce consciousness and intelligence using quantum mechanics (although all matter is what it is because of quantum mechanics). The parts (even small ones like neurotubules) contain so much mass that their associated wavefunction is too small to exhibit quantum mechanical effects. Here Penrose got roped in by Kauffman thinking that neurotubules were the carriers of the quantum mechanical indeterminacy. They aren’t, they are just too big. The dimer of alpha and beta tubulin contains 900 amino acids — a mass of around 90,000 Daltons (or 90,000 hydrogen atoms — which are small enough to show quantum mechanical effects).

So why was Penrose right? Because neural nets which are inherently nonAlgorithmic are showing intelligent behavior. AlphaGo which beat the world champion is the most recent example, but others include facial recognition and image classification [ Nature vol. 529 pp. 484 – 489 ’16 ].

Nets are trained on real world images and told whether they are right or wrong. I suppose this is programming of a sort, but it is certainly nonAlgorithmic. As the net learns from experience it adjusts the strength of the connections between its neurons (synapses if you will).
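To make ‘adjusting the strength of the connections’ concrete, here is about the smallest possible example — a single artificial neuron learning logical AND by the classic perceptron rule. This is my sketch of the general idea, nothing remotely like AlphaGo’s actual training:

```python
# One artificial neuron with two input connections. Each wrong
# answer nudges the connection strengths (weights) toward the
# right answer; no explicit algorithm for AND is ever written down.
def train_perceptron(examples, epochs=20, lr=1):
    w = [0, 0]  # connection strengths ("synapses")
    b = 0       # bias (firing threshold)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # +1, 0 or -1
            w[0] += lr * err * x1       # strengthen or weaken
            w[1] += lr * err * x2       # each connection
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```

Notice that all the ‘knowledge’ ends up as three bare numbers, and staring at them tells you next to nothing about how the neuron knows AND. The same problem, scaled up a billionfold, faces anyone printing out AlphaGo’s weights.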

So it should be a simple matter to find out just how AlphaGo did it — just get a list of the neurons it contains, and the number and strengths of the synapses between them. I can’t find out just how many neurons and connections there are, but I do know that thousands of CPUs and graphics processors were used. I doubt that there were 80 billion neurons or a trillion connections between them (which is what our brains are currently thought to have).

Just print out the above list (assuming you have enough paper) and look at it. Will you understand how AlphaGo won? I seriously doubt it. You will understand it even less well than a list of the positions and momenta of 80 billion gas molecules would tell you the gas’s pressure and temperature. Why? Because in statistical mechanics you assume that the particles making up an ideal gas are featureless, identical and do not interact with each other. This isn’t true for neural nets.

It also isn’t true for the brain. Efforts are underway to find a wiring diagram of a small area of the cerebral cortex. The following will get you started —

Here’s a quote from the article to whet your appetite.

“By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.”

I don’t think this will help us understand how the brain works any more than the above list of neurons and connections from AlphaGo. There are even more problems with such a list. Connections (synapses) between neurons come and go (and they increase and decrease in strength as in the neural net). Some connections turn on the receiving neuron, some turn it off. I don’t think there is a good way to tell what a given connection is doing just by looking at a slice of it under the electron microscope. Lastly, some of our most human attributes (emotion) are due not to connections between neurons but to release of neurotransmitters generally into the brain, not at the very localized synapse, so they won’t show up on a wiring diagram. This is called volume neurotransmission, and the transmitters are serotonin, norepinephrine and dopamine. Not convinced? Among the agents modifying volume neurotransmission are cocaine, amphetamine, antidepressants and antipsychotics. Fairly important.

So I don’t think we’ll ever truly understand how the neural net inside our head does what it does.

The higher drivel – II

From the obituary of a leading philosopher at an Ivy League institution. He proposed the following thought experiment to resolve the question of whether objects and relationships exist in the world independently of how we perceive them. This is what bothered Einstein about quantum mechanics; he is said to have asked Bohr (I think) “do you think the moon is not there if we don’t look at it?” The thought experiment is a brain placed in a vat by a mad scientist (I’m not making this up). So the brain in the vat — call him Oscar — could not formulate the sentence “I am a brain in a vat” because Oscar has no experience of a real brain or a real vat.

For this they’re currently paying 60K+ a year? It’s the higher drivel.

I read a book by Nozick with similar impossible situations he worried about after a rave review in the New York Times book review a few years ago. It had questions of the order ‘would bubblegum taste the same on the surface of the sun’.

The higher drivel series will appear from time to time — here’s the first one (published 5 years ago)

“The predicament of any tropological analysis of narrative always lies in its own effaced and circuitous recourse to a metaphoric mode of apprehending its object; the rigidity and insistence of its taxonomies and the facility with which it relegates each vagabond utterance to a strict regimen of possible enunciative formations testifies to a constitutive faith that its own interpretive meta-language will approximate or comply with the linguistic form it examines.”

From p. 35 of the NYTimes book review 16 October’11

You could actually major in this stuff (Semiotics) at an Ivy League university (Brown) in the 80’s. According to the article, Semiotics was the third most popular humanities major there at the time.  One son got in in ’86, but (fortunately) didn’t go there.  Nonetheless he was quite interested in Semiotics, hence the name of this blog.  Fortunately the author of the above quote recovered and notes “I now spend more time learning from the insights of science than deconstructing its truth claims.”

What a gigantic waste of time.  Think what Brown could have done by abolishing the department and using the funds for chemistry or mathematics.  The writer tries to salvage something from the experience noting that ‘a striking number of semiotics students have gone on to influential careers in the media and the creative arts.’  Unfortunately this explains a lot about the current media and ‘the creative arts’.

Students were being conned then, and they’re being conned now.  It might not have mattered what you majored in 50+ years ago at an Ivy League university; the world seemed to want us regardless.   A friend majored in Near Eastern studies, was hired by a bank, never saw the MidEast and did quite well.  Not so today.  The waitress serving us last Wednesday at a local bar was a graduate of one of the seven sisters in 2010.  She majored in Sociology and Psychology, is in debt for > 20K for the experience and is unable to find better work.   It isn’t clear what such a major prepares you for other than what she’s doing.  Finding out the distribution of majors of the jobless 20-somethings participating in OWS would be interesting.

For a taste of the semiotics world of the 80’s, Google Alan Sokal and read about the fun he had with such a journal — “Social Text”.  Should you  still have the stomach for such things read “The Higher Superstition” by Gross and Levitt, which goes into more detail about Derrida, Foucault and a host of (mostly French) philosophes and what they tried to pull off.

Freud was right, there is an unconscious mind and it’s pretty smart.

Freud has fallen out of favor, with his analogies of the workings of the mind to a steam engine (drives, pressures, releases, displacements), the dominant technology of his day – as the computer is to ours. However, the following paper [ Proc. Natl. Acad. Sci. vol. 113 pp. E616 – E625 ’16 ] shows that we have an unconscious mind, and that it is mathematically sophisticated (although I don’t think the authors made this point).

The work used magnetoencephalography (MEG) to record brain activity in response to a series of tone pips. MEG is conceptually quite similar to the electrocardiogram (EKG) or the electroencephalogram (EEG), both of which measure voltage differences between two electrodes over time. Well, a voltage difference causes an electric current to flow through a conductor, and the nice wet brain is nothing if not that. Anyone who has studied how an electric motor works knows that an electric current produces a magnetic field, which is what the MEG measures. The great advantage of MEG is that it is temporally precise, and changes can be measured in milliseconds.

So what did they do? They presented tone pips to an unspecified number of subjects. The relation of one pip to another could either be completely random (RAND) or part of a repeating pattern — say pip pip pip silence silence pip pip pip silence silence (PAT). In one series of experiments, subjects were asked to press a button as soon as the pip sequence went from random to patterned (RAND –> PAT), all this while the MEG was being recorded. In another, they were asked to do this for PAT –> RAND. The subjects were as good as something called the mathematical ideal observer of the variable order Markov model. It only took one or two random pips after a patterned sequence for them to notice the change and press the button. They could also pick up that a pattern had formed midway through the second repetition of the pattern.
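The paper’s ideal observer is a variable order Markov model, which is beyond a blog post, but the flavor of the detection problem is easy to convey. Here is a deliberately crude detector (my sketch, not the paper’s method) that flags a RAND –> PAT transition once it has seen two back-to-back copies of a cycle — roughly the ‘midway through the second repetition’ behavior the subjects showed:

```python
# Crude RAND -> PAT detector: scan for the first place where a
# k-long stretch of the sequence immediately repeats itself, and
# report the index at which the repetition is complete.
def pattern_onset(seq, k):
    """Index just after the first two back-to-back k-cycles, else None."""
    for i in range(len(seq) - 2 * k + 1):
        if seq[i:i + k] == seq[i + k:i + 2 * k]:
            return i + 2 * k
    return None

rand_then_pat = [3, 1, 4, 1, 5, 9, 2, 6,   # random stretch
                 7, 8, 7, 8, 7, 8]         # pattern of period 2
print(pattern_onset(rand_then_pat, 2))     # → 12
```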

The MEG showed abrupt changes at either transition (RAND –> PAT or PAT –> RAND). The work didn’t stop with sequences of just one tone. They could use an ‘alphabet of tones’. The subjects could pick up when the number of tones in the alphabet changed, again with MEG values to match.  So they had an independent signal from the MEG showing that the brain picked up the transition without requiring any cooperation from the subjects.  All very nice, but anyone who likes music can do this.

Then the subjects were asked to perform the n-back task, in which the subject is presented with a sequence of stimuli, and the task consists of indicating when the current stimulus matches the one from n steps earlier in the sequence. Tricky isn’t it? Certainly something that requires concentration. The load factor n can be adjusted to make the task more or less difficult. You’ve got to hold the sequence just presented in your head, so the n-back task is a test of working memory.
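If the description of n-back seems abstract, here’s what the scoring amounts to (a hypothetical letter sequence, just for illustration):

```python
# Mark every position whose stimulus matches the one n steps back.
def n_back_targets(stimuli, n):
    """Indices the subject should respond to in an n-back task."""
    return [i for i in range(n, len(stimuli))
            if stimuli[i] == stimuli[i - n]]

letters = list("TKTLKLT")
print(n_back_targets(letters, 2))  # → [2, 5]: the second T and second L
```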

Drum roll —

If the tone pips were presented while the subjects were doing the n-back task, the MEG still picked up RAND –> PAT and PAT –> RAND transitions, something the subjects weren’t consciously trying to do.

We know the brain does all sorts of things unconsciously — e.g. breathing. But they are pretty simple. The tests here are conceptually subtle. Your unconscious brain picks up statistical regularities and irregularities without your consciously trying. Who knows what else it does — maybe Freud was right.

Why should this be useful? Well, you’d want to know if a predator is sneaking up on you. The same work should be done with animals performing a task they’ve been trained to do.

SmORFs and DWORFs — has molecular biology lost its mind?

There’s Plenty of Room at The Bottom is a famous talk given by Richard Feynman 56 years ago. He was talking about something not invented until decades later — nanotechnology. He didn’t know that the same advice now applies to molecular biology. The talk itself is well worth reading — here’s the link

Those not up to speed on molecular biology can find what they need at — Just follow the links (there are only 5) in the series.

lncRNA stands for long nonCoding RNA — nonCoding for protein that is. Long is taken to mean over 200 nucleotides. There is considerable debate concerning how many there are — but “most estimates place the number in the tens of thousands” [ Cell vol. 164 p. 69 ’16 ]. Whether they have any cellular function is also under debate. Could they be like the turnings from a lathe, produced by the various RNA polymerases we have (3 actually) simply transcribing the genome compulsively? I doubt this, because transcription takes energy and cells are a lot of things but wasteful isn’t one of them.

Where does Feynman come in? Because at least one lncRNA codes for a very small protein, using a Small Open Reading Frame (SmORF) to do so. The protein in question is called DWORF (for DWarf Open Reading Frame). It contains only 34 amino acids. Its function is definitely not trivial. It binds to something called SERCA, which is a large enzyme in the sarcoplasmic reticulum of muscle which allows muscle to relax after contracting. Muscle contraction occurs when calcium is released from the sarcoplasmic reticulum.  SERCA takes the released calcium back into the sarcoplasmic reticulum, allowing muscle to relax. So repetitive muscle contraction depends on the flow and ebb of calcium tides in the cell. Amazingly there are 3 other small proteins which also bind to SERCA, modifying its function. Their names are phospholamban (no kidding), sarcolipin and myoregulin — also small proteins of 52, 31 and 46 amino acids respectively.

So here is a lncRNA making an oxymoron of its name by actually coding for a protein. DWORF is small, but so are its 3 exons, one of which codes for only 4 amino acids. Imagine the gigantic spliceosome, which has a mass over 1,300,000 Daltons (10,574 amino acids making up 37 proteins, along with several catalytic RNAs), being that precise and operating on something that small.

So there’s a whole other world down there which we’ve just begun to investigate. It’s probably a vestige of the RNA world from which life is thought to have sprung.

Then there are the small molecules of intermediary metabolism. Undoubtedly some of them are used for control as well as metabolism. I’ll discuss this later, but the Human Metabolome DataBase (HMDB) has 42,000 entries and METLIN, a metabolic database has 240,000 entries.

Then there is competitive endogenous RNA –

Do you need chemistry to understand this? Yes and no. How the molecules do what they do is the province of chemistry. The description of their function doesn’t require chemistry at all. As David Hilbert said about axiomatizing geometry, you don’t need points, straight lines and planes. You could use tables, chairs and beer mugs. What is important are the relations between them. Ditto for the chemical entities making us up.

I wouldn’t like that.  It’s neat to picture in my mind our various molecular machines, nuts and bolts doing what they do.  It’s a much richer experience.  Not having the background is being chemically blind.  Not a good thing, but better than nothing.