Category Archives: Philosophical issues raised

Catching God’s dice being thrown

Einstein famously said “Quantum theory yields much, but it hardly brings us close to the Old One’s secrets. I, in any case, am convinced He does not play dice with the universe.”  Astronomers have caught the dice being thrown (at least as far as the origin of life is concerned).

This post will contain a lot more background than most, as I expect some readers won’t have much scientific background.  The technically inclined can read the article on which this is based — http://www.pnas.org/content/115/28/7166

To cut to the chase — astronomers have found water, a simple sugar, and a compound containing carbon, hydrogen, oxygen and nitrogen around newly forming stars and planets.  You need no more than these 4 atoms to build the bases making up the DNA of our genes, all our sugars and carbohydrates, and 18 of the 20 amino acids that make up our proteins. Throw in sulfur and you have all 20 amino acids.  Add phosphorus and you have DNA and its cousin RNA (neither has been found around newly forming stars so far).

These are the ingredients of life itself. Here’s a quote from the article — “What I can definitively say is that the ingredients needed to make biogenic molecules like DNA and RNA are found around every forming protostar. They are there at an early stage, incorporating into bodies at least as large as comets, which we know are the building blocks of terrestrial planets. Whether these molecules survive or are delivered at the late stage of planet formation, that’s the part of it we don’t know very well.”

So each newly formed star and planetary system is a throw of God’s/Nature’s/Soulless physics’ dice for the creation of life.

As of 1 July 2018, there are 3,797 confirmed planets around 2,841 stars, with 632 stars having more than one (Wikipedia).  And that's just in the stars close enough to us to study.  Our galaxy, the Milky Way, contains some 400,000,000,000 stars.

Current estimates have some 100,000,000,000 galaxies in the universe.  https://www.space.com/25303-how-many-galaxies-are-in-the-universe.html.  That's a lot of tosses for life to arise.

Suppose that some day life is found on one such planet.  Does this invalidate Genesis, the Koran?  Assume that they are the word of God somehow transmitted to man.  If the knowledge we have about astronomy (above), biology etc. etc. were imparted to Jesus, Mohammed, Abraham, Moses — it never would have been believed.  The creator had to start with something plausible.

 

 


Chemistry and Biochemistry can’t answer the important questions but without them we are lost

The last two posts — one concerning the histone code and cerebral embryogenesis https://luysii.wordpress.com/2018/06/07/omar-khayyam-and-the-embryology-of-the-cerebral-cortex/ and the other concerning PVT1, enhancers, promoters and cancer https://luysii.wordpress.com/2018/06/04/marshall-mcluhan-rides-again/ — would be impossible without chemical and biochemical knowledge and technology, but the results they produce and the answers they seek lie totally outside both disciplines.

In fact they belong outside the physical realm in the space of logic, ideas, function — i.e. in the other half of the Cartesian dichotomy — the realm of ideas and spirit.  Certainly the biological issues are instantiated physically in molecules, just as computer memory used to be instantiated in magnetic cores, rather than transistors.

Back when I was starting out as a grad student in Chemistry in the early 60s, people were actually discovering the genetic code — poly U coded for phenylalanine, etc. etc.  Our view was that all we had to do was determine the structure of things and understanding would follow.  The first X-ray structures of proteins (myoglobin) and Anfinsen's result on ribonuclease, showing that it could fold into its final compact form all by itself, reinforced this. It also led us to think that all proteins had 'a' structure.

This led to people thinking that the only difference between us and a chimpanzee was a few amino acid differences in our proteins (remember the slogan that we were 98% chimpanzee).

So without chemistry and biochemistry we’d be lost, but the days of crude reductionism of the 60s and 70s are gone forever.  Here’s another example of chemical and biochemical impotence from an earlier post.

The limits of chemical reductionism

“Everything in chemistry turns blue or explodes”, so sayeth a philosophy major roommate years ago.  Chemists are used to being crapped on, because it starts so early and never lets up.  However, knowing a lot of organic chemistry and molecular biology allows you to see very clearly one answer to a serious philosophical question — when and where does scientific reductionism fail?

Early on, physicists said that quantum mechanics explains all of chemistry.  Well it does explain why atoms have orbitals, and it does give a few hints as to the nature of the chemical bond between simple atoms, but no one can solve the equations exactly for systems of chemical interest.  Approximate the solution, yes, but this is hardly a pure reduction of chemistry to physics.  So we've failed to reduce chemistry to physics because the equations of quantum mechanics are so hard to solve, but this is a practical limitation, hardly a failure of reductionism itself.

The last post “The death of the synonymous codon – II” puts you exactly at the nidus of the failure of chemical reductionism to bag the biggest prey of all, an understanding of the living cell and with it of life itself.  We know the chemistry of nucleotides, Watson-Crick base pairing, and enzyme kinetics quite well.  We understand why less transfer RNA for a particular codon would mean slower protein synthesis.  Chemists understand what a protein conformation is, although we can’t predict it 100% of the time from the amino acid sequence.  So we do understand exactly why the same amino acid sequence using different codons would result in slower synthesis of gamma actin than beta actin, and why the slower synthesis would allow a more leisurely exploration of conformational space allowing gamma actin to find a conformation which would be modified by linking it to another protein (ubiquitin) leading to its destruction.  Not bad.  Not bad at all.

Now ask yourself why the cell would want to have less gamma actin around than beta actin.  There is no conceivable explanation for this in terms of chemistry.  A better understanding of protein structure won't give it to you.  Certainly, beta and gamma actin differ slightly in amino acid sequence (4/375), so their structures won't be exactly the same.  Studying this till the cows come home won't answer the question, as it's on an entirely different level than chemistry.

Cellular and organismal molecular biology is full of questions like that, but gamma and beta actin are the closest chemists have come to explaining the disparity in the abundance of two closely related proteins on a purely chemical basis.

So there you have it.  Physicality has gone as far as it can go in explaining the mechanism of the effect, but has nothing to say whatsoever about why the effect is present.  It’s the Cartesian dualism between physicality and the realm of ideas, and you’ve just seen the junction between the two live and in color, happening right now in just about every cell inside you.  So the effect is not some trivial toy model someone made up.

Whether philosophers have the intellectual cojones to master all this chemistry and molecular biology is unclear.  Probably no one has tried (please correct me if I'm wrong).  They are certainly capable of mounting intellectual effort — they write book after book about Gödel's proof and the mathematical logic behind it. My guess is that they are attracted to such things because logic and math are so definitive, general and nonparticular.

Chemistry and molecular biology aren’t general this way.  We study a very arbitrary collection of molecules, which must simply be learned and dealt with. Amino acids are of one chirality. The alpha helix turns one way and not the other.  Our bodies use 20 particular amino acids not any of the zillions of possible amino acids chemists can make.  This sort of thing may turn off the philosophical mind which has a taste for the abstract and general (at least my roommates majoring in it were this way).

If you're interested in how far reductionism can take us, have a look at http://wavefunction.fieldofscience.com/2011/04/dirac-bernstein-weinberg-and.html

Were my two philosopher roommates still alive, they might come up with something like "That's how it works in practice, but how does it work in theory?"


Relativity becomes less comprehensible

"To get Hawking radiation we have to give up on the idea that spacetime always had 3 space dimensions and one time dimension to get a quantum theory of the big bang."  I've been studying relativity for some years now in the hopes of saying something intelligent to the author (Jim Hartle), if we're both lucky enough to make it to our 60th college reunion in 2 years.  Hartle majored in physics under John Wheeler, who essentially revived relativity from obscurity during the years when quantum mechanics was all the rage. Jim worked with Hawking for years, spoke at his funeral and wrote this in an appreciation of Hawking's work [ Proc. Natl. Acad. Sci. vol. 115 pp. 5309 – 5310 '18 ].

I find the above incomprehensible.  Could anyone out there enlighten me?  Just write a comment.  I'm not going to bother Hartle.

Addendum 25 May

From a retired math professor friend —

I’ve never studied this stuff, but here is one way to get more actual dimensions without increasing the number of apparent dimensions:
Start with a 1-dimensional line, R^1 and now consider a 2-dimensional cylinder S^1 x R^1.  (S^1 is the circle, of course.)  If the radius of the circle is small, then the cylinder looks like a narrow tube.  Make the radius even smaller–say, less than the radius of an atomic nucleus.  Then the actual 2-dimensional cylinder appears to be a 1-dimensional line.
The next step is to rethink S^1 as a line interval with ends identified (but not actually glued together).  Then S^1 x R^1 looks like a long ribbon with its two edges identified.  If the width of the ribbon–the length of the line interval–is less, say, than the radius of an atom, the actual 2-dimensional “ribbon with edges identified” appears to be just a 1-dimensional line.
Okay, now we can carry all these notions to R^2.  Take S^1 x R^2, and treat S^1 as a line interval with ends identified.  Then S^1 x R^2 looks like a (3-dimensional) stack of planes with the top plane identified, point by point, with the bottom plane.  (This is the analog of the ribbon.)  If the length of the line interval is less, say, than the radius of an atom, then the actual 3-dimensional S^1 x R^2 appears to be a 2-dimensional plane.
That’s it.  In general, the actual n+1-dimensional S^1 x R^n appears to be just n-space R^n when the radius of S^1 is sufficiently small.
All this can be done with a sphere S^2, S^3, … of any dimension, so that the actual k+n-dimensional manifold S^k x R^n appears to be just the n-space R^n when the radius of S^k is sufficiently small.  Moreover, if M^k is any compact manifold whose physical size is sufficiently small, then the actual k+n-dimensional manifold M^k x R^n appears to be just the n-plane R^n.
That’s one way to get “hidden” dimensions, I think.

The bias of the unbiased

A hilarious paper from Stanford shows the bias of the unbiased [ Proc. Natl. Acad. Sci. vol. 115 pp. E3635 – E3644 '18 ].  No one wants to be considered biased or to use stereotypes, but this paper indicts all of us.  They use a technique called word embedding to look at a large body of printed material (Wikipedia, Google news articles etc. etc.) over the past 100 years, looking for word associations, e.g. male/trustworthy, female/submissive, and the like. In word embedding models, each word in a given language is associated with a high dimensional vector (not clear to me how the dimensions are chosen) and the metric between words is measured.  A metric is simply a mathematical device that takes two objects and associates a number with them.  The distance between cities is a good example.

 

The vector for France is close to the vectors for Austria and Italy.  The difference between London and England (obtained by subtracting one vector from the other) is parallel to the difference between Paris and France.  This allows embeddings to capture analogy relationships, such as London is to England as Paris is to France.
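
Here's a minimal sketch of that analogy arithmetic.  The vectors below are made up purely for illustration (real embeddings like word2vec or GloVe have hundreds of dimensions learned from a text corpus); only the vector subtraction and the cosine metric are the real technique:

```python
import numpy as np

# Made-up 3-dimensional "embedding" vectors, purely illustrative
vectors = {
    "London":  np.array([0.9, 0.1, 0.8]),
    "England": np.array([0.1, 0.1, 0.8]),
    "Paris":   np.array([0.9, 0.9, 0.2]),
    "France":  np.array([0.1, 0.9, 0.2]),
}

def cosine(u, v):
    # cosine similarity -- a standard metric between embedding vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# The analogy test: London - England + France should land closest to Paris
target = vectors["London"] - vectors["England"] + vectors["France"]
for word, vec in vectors.items():
    print(word, round(cosine(target, vec), 3))
```

With these toy numbers, Paris scores a perfect 1.0 and the other words score lower, which is all the analogy trick amounts to.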

So word embeddings were used as a way to study gender and ethnic stereotypes in the 20th and 21st centuries in the USA.  Not only that but they plotted how the biases changed over time.

So in your mind the metric is clear: bias == bad, stereotype == worse.

So just as women's occupations have changed, so have the descriptors of women.  Back in the day women, if they worked out of the home at all, were teachers or nurses.  A descendant of Jonathan Edwards was a grade school teacher in the town of my small rural high school.

As women moved into the wider workforce, the descriptors applied to them changed.  The following is a pair of direct quotes from the article:

“More importantly, these correlations are very similar over the decades, suggesting that the relationship between embedding bias score and “reality,” as measured by occupation participation, is consistent over time” ….”This consistency makes the interpretation of embedding bias more reliable; i.e., a given bias score corresponds to approximately the same percentage of the workforce in that occupation being women, regardless of the embedding decade.”

English translation:  As women’s percentage of workers in a given occupation changed the ‘bias score’ changed with it.

So what the authors describe and worse, define, as bias and stereotyping is actually an accurate perception of reality.  We’re all guilty.

The authors are following Humpty Dumpty in Through the Looking-Glass  — ““When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.” “The question is,” said Alice, “whether you can make words mean so many different things.” “The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

I find the paper hilarious and an example of the bias of the supposedly unbiased.

Consensus isn’t what it used to be.

Technology marches on.  All 2^20 = 1,048,576 variants (4 bases at each of 10 positions) of the 5 nucleotides on either side of two consensus sequences for transcription factor binding were (1) synthesized and (2) had their dissociation constants (Kd's) measured.  The consensus sequences were for two yeast transcription factors (Pho4 and Cbf1)  [ Proc. Natl. Acad. Sci. vol. 115 pp. E3692 – E3702 '18 ].  The technique is called BET-seq (Binding Energy Topography by sequencing).

What do you think they found?

A 'large fraction' of the flanking mutations changed overall binding energies by as much as consensus site mutations did.  The numbers aren't huge (only 2.6 kiloCalories/mole at most).  However, at 298 Kelvin (25 Centigrade, 77 Fahrenheit), where RT = 0.6 kiloCalories/mole, every 1.36 kiloCalories/mole is worth a factor of 10 in the equilibrium constant.  So binding can vary by nearly 100-fold even in this range.
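
To see where the nearly 100-fold comes from, here's a minimal sketch of the arithmetic (0.593 is just the unrounded value of RT in kiloCalories/mole at 298 Kelvin, the post's 0.6):

```python
import math

RT = 0.593  # kiloCalories/mole at 298 Kelvin

def fold_change(ddG):
    """Fold change in an equilibrium (binding) constant produced by a
    binding energy difference ddG, from ddG = RT * ln(K2/K1)."""
    return math.exp(ddG / RT)

print(fold_change(1.36))  # ~9.9  -- a factor of 10 per 1.36 kcal/mole
print(fold_change(2.6))   # ~80   -- so 2.6 kcal/mole shifts binding nearly 100-fold
```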

The work may explain some ChIP data in which some strips of DNA are occupied despite the lack of a consensus site, with other regions containing consensus sites remaining unoccupied.  The authors make the interesting point that submaximal binding sites might be preferred to maximal ones because they’d be easier for the cell to control (notice the anthropomorphism of endowing the cell with consciousness, or natural selection with consciousness).  It is very easy to slide into teleological thinking in these matters.  Whether or not you like it is a matter of philosophical and/or theological taste.

Pity the poor computational chemist, trying to figure out binding energy to such accuracy with huge molecules like transcription factors and long segments of DNA.

It is also interesting to think what “Molar” means with these monsters.  How much does a mole of hemoglobin weigh?  64 kiloGrams, more or less.  It simply can't be put into 1000 milliliters of water (which weighs 1 kiloGram).  A liter of water contains 1000/18 = 55.6 moles of water.  So solubilizing 1 molecule of hemoglobin would certainly use more than 55 molecules of water.  Reality must intrude, but we blithely talk about concentration this way.  Does anyone out there know what the maximum achievable concentration of hemoglobin actually is?
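
Here's a back-of-the-envelope sketch of the arithmetic.  The 330 grams/liter figure for hemoglobin inside red cells is an assumed, textbook-ish number, not something from the post:

```python
MW_WATER = 18.0          # grams/mole
MW_HEMOGLOBIN = 64000.0  # grams/mole, more or less

# a liter of water contains 1000/18 = ~55.6 moles of water
moles_water_per_liter = 1000.0 / MW_WATER

# Assumed figure (not from the post): red cells carry roughly 330 grams of
# hemoglobin per liter.  Ignoring the volume the protein itself occupies,
# that works out to about 5 millimolar:
hb_molar = 330.0 / MW_HEMOGLOBIN

# water molecules available per hemoglobin molecule at that concentration
waters_per_hb = moles_water_per_liter / hb_molar

print(round(moles_water_per_liter, 1))  # ~55.6 moles of water
print(round(hb_molar, 4))               # ~0.0052 M
print(round(waters_per_hb))             # ~10,000 waters per molecule
```

Even at this dense, physiological packing the concentration is only millimolar; a 1 Molar solution would require 64 kilograms of protein in a liter, which is physically impossible.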

The death of the pure percept — otoacoustic division

Rooming with 2 philosophy majors warps the mind even if it was 60 years ago.  Conundrums raised back then still hang around.  It was the heyday of Bertrand Russell before he became a crank.  One idea being bandied about back then was the 'pure percept' — a sensation produced by the periphery before the brain got to mucking about with it.  My memory of the concept was a bit foggy, so who better to ask than two philosophers I knew.

The first was my nephew, a Rhodes scholar in philosophy, now an attorney with a Yale degree.  I got this back when I asked —

I would be delighted to be able to tell you that my two bachelors’ degrees in philosophy — from the leading faculties on either side of the Atlantic — leave me more than prepared to answer your question. Unfortunately, it would appear I wasn’t that diligent. I focused on moral and political philosophy, and although the idea of a “pure precept” rings a bell, I can’t claim to have a concrete grasp on what that phrase means, much less a commanding one.

 Just shows what a Yale degree does to the mind.

So I asked a classmate, now an emeritus prof. of philosophy, and got this back —
This pp nonsense was concocted because Empiricists [Es]–inc. Russell, in his more empiricistic moods–believed that the existence of pp was a necessary condition for empirical knowledge. /Why? –>
1. From Plato to Descartes, philosophers often held that genuine Knowledge [K] requires beliefs that are “indubitable” [=beyond any possible doubt]; that is, a belief counts as K only if it [or at least its ultimate source] is beyond doubt. If there were no such indubitable source for belief, skepticism would win: no genuine K, because no beliefs are beyond doubt. “Pure percepts” were supposed to provide the indubitable source for empirical K.
2. Empirical K must originate in sensory data [=percepts] that can’t be wrong, because they simply copy external reality w/o any cognitive “shopping” [as in Photoshop]. In order to avoid any possible ‘error’, percepts must be pure in that they involve no interpretation [= error-prone cognitive manipulation].
{Those Es who contend  that all K derives from our senses tend to ignore mathematical and other allegedly a priori K, which does not “copy” the sensible world.} In sum, pp are sensory data prior to [=unmediated by] any cognitive processing.

So it seems as though the concept is no longer taken seriously.

I’ve written about this before — as it applies to the retina — https://luysii.wordpress.com/2013/02/11/retinal-physiology-and-the-demise-of-the-pure-percept/

This time it involves the ear and eye movements.  Time for some anatomy.  Behind the eardrum are 3 tiny little bones (malleus, incus and stapes — the latter looking just like a stirrup with the foot plate pressed against an opening in the bone to communicate movement of the eardrum produced by sound waves to the delicate mechanisms of the inner ear).  There is a tiny muscle just 1 millimeter long called the stapedius which stabilizes the stapes, making it vibrate less and protecting the inner ear against loud sounds.  There is another muscle, the tensor tympani, which tenses the eardrum so that external sounds vibrate it less; it too protects us against loud sounds.

An article in PNAS (vol. 115 pp. E1309 – E1318 '18) shows that just moving your eyes to a target causes the eardrum to oscillate.  Even more interesting, the eardrum movements occur 10 milliSeconds before you move your eye.  The oscillations last throughout the eye movement and well into subsequent periods of steady fixation.

It is well recognized that, in addition to receiving nerve input from the inner ear, the brain sends nerves to the inner ear to control it.  So 'the brain' is controlling the sense organs providing input to it.  Of course the whole question of control in a situation with feedback is up in the air — see https://luysii.wordpress.com/2011/11/20/life-may-not-be-like-a-well-but-control-of-events-in-the-cell-is-like-a-box-spring-mattress/

As soon as feedback (or simultaneous influence) enters the picture it becomes like the three body problem in physics, where 3 objects influence each other's motion at the same time by the gravitational force. As John Gribbin (former science writer at Nature and now prolific author) said in his book 'Deep Simplicity', "It's important to appreciate, though, that the lack of solutions to the three-body problem is not caused by our human deficiencies as mathematicians; it is built into the laws of mathematics." The physics problem is actually much easier than the brain because we know the exact strength and form of the gravitational force. We aren't even close to this for a single synapse.

Life at 250 atmospheres pressure (1.8 tons/square inch)

Tube worms (actually a form of mollusc) live on the deep ocean floor where there is almost no light and very little oxygen. Just as plants use light energy to remove electrons from water to form oxygen and fix carbon (the stolen electrons eventually being passed back to oxygen after a trip through intermediary metabolism), symbiotic bacteria living in the worms remove electrons from hydrogen sulfide (H2S) formed by the hydrothermal vents on the seafloor. How did the tube worms get this far down? By riding decaying wood down there. [ Proc. Natl. Acad. Sci. vol. 114 pp. E3652 – E3658 '17 ] This is the wooden-steps hypothesis [Distel DL, et al. (2000) Nature 403:725–726], which states that the large chemosynthetic mussels (ship worms) found at deep-sea hydrothermal vents descend from much smaller species associated with sunken wood and other organic deposits, and that the endosymbionts of these progenitors made use of hydrogen sulfide from biogenic sources (e.g., decaying wood) rather than from vent fluids.

At 2500 meters down the water pressure is 3750 pounds per square inch. One can only imagine the changes required in the amino acid sequences of their proteins required so they aren’t denatured or aggregated by such pressure.

The idea that life on planetary moons with subsurface oceans (Ganymede, Europa, Titan, Enceladus) could exist is no longer as fantastic as it initially seemed.

If it be found the implications for our conception of our place in the natural world are enormous.

Why wasn't this mentioned in Genesis or any known creation myth? Assume for the moment that there actually is a creator who made itself known to our ancestors. If it tried to give Abraham, Gautama Buddha, Mohammed etc. etc. knowledge of these things, it wouldn't have been believed. Planets? Planets with moons? Please. A few miracles here and there would be all that would be needed.

The incredible combinatorial complexity of cellular biochemistry

K8, K14, K20, T92, P125, S129, S137, Y176, T195, K276, T305, T308, T312, P313, T315, T326, S378, T450, S473, S477, S479. No, this is not some game of cosmic bingo. They represent amino acid positions in Protein Kinase B (AKT).

In the 1 letter amino acid code, K is lysine, T threonine, S serine, P proline, Y tyrosine.

Each of these 21 amino acids can be modified (or not), one of them in any of 3 different ways. This gives 4 * 2^20 = 4,194,304 possible patterns of post-translational modification. Will we study all of them? It's pretty easy to substitute alanine for serine or threonine, making an unmodifiable position, or to substitute aspartic acid for threonine or serine, making a phosphorylation mimic which is pretty close to phosphoserine or phosphothreonine, creating even more possibilities for study.
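
A sketch of the counting, nothing beyond the arithmetic in the paragraph above: 20 of the positions are either modified or not, and one position can carry any of 3 different modifications, or none.

```python
# 20 sites with 2 states each (modified / unmodified), plus one site with
# 4 states (unmodified, or any of 3 different modifications)
two_state_sites = 20
special_site_states = 4

total_patterns = special_site_states * 2 ** two_state_sites
print(total_patterns)  # 4194304 == 4 * 2**20
```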

Most of the serines, threonines and tyrosines listed are phosphorylated, but two of the threonines are N-acetyl glucosylated. The two prolines are hydroxylated in the ring. The lysines can be methylated, acetylated, ubiquitinated or sumoylated. I did take the trouble to count the number of serines in the complete amino acid sequence and there are 24, of which only 6 are phosphorylated — so the phosphorylation pattern is likely to be specific and selected for. Too lazy to do the same for lysine, threonine, tyrosine and proline. Here's a link to the full sequence if you want to do it — http://www.uniprot.org/uniprot/P31749

The phosphorylations at each serine/threonine/tyrosine are carried out by not more than one of the following 8 kinases (CK2, IKKepsilon, ACK1, TBK1, PDK1, GSK3alpha, mTORC2 and CDK2).

AKT contains some 481 amino acids, divided (by humans, for the purposes of comprehension) into 4 regions: Pleckstrin Homology (#1 – #108), linker (#108 – #152), catalytic, e.g. kinase (#152 – #409), and regulatory (#409 – #481).

This is from an excellent review of the functions of AKT in Cell vol. 169 pp. 381 – 405 '17. The modifications alone take up the first two pages of the review before the functionality of AKT is even discussed.

This raises the larger issue of the possibility of human minds comprehending cellular biochemistry.

This is just one protein, although a very important one. Do you think we'll ever be able to conduct enough experiments to figure out what each modification (alone or in combination) does to the many functions of AKT (and there are many)?

Now design a drug to affect one of the actions of AKT (particularly since AKT is the cellular homolog of a viral oncogene). Quite a homework assignment.

The strangeness of mathematical proof

I've written about Urysohn's Lemma before, and a copy of that post will be found at the end. I decided to plow through the proof, since coming up with it is regarded by Munkres (the author of a widely used book on topology) as very creative. Here's how he introduces it:

“Now we come to the first deep theorem of the book, a theorem that is commonly called the “Urysohn lemma”. . . . It is the crucial tool used in proving a number of important theorems. . . . Why do we call the Urysohn lemma a ‘deep’ theorem? Because its proof involves a really original idea, which the previous proofs did not. Perhaps we can explain what we mean this way: By and large, one would expect that if one went through this book and deleted all the proofs we have given up to now and then handed the book to a bright student who had not studied topology, that student ought to be able to go through the book and work out the proofs independently. (It would take a good deal of time and effort, of course, and one would not expect the student to handle the trickier examples.) But the Urysohn lemma is on a different level. It would take considerably more originality than most of us possess to prove this lemma.”

I'm not going to present the proof, just comment on one of the tools used to prove it. This is a list of all the rational numbers found in the interval from 0 to 1, with no repeats.

Munkres gives the list at its start and you can see why it would list all the rational numbers. Here it is

0, 1, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5 . . .

Note that 2/4 is missing (because 2 divides into 4 leaving a whole number, so 2/4 is just 1/2). It would be fairly easy to write a program to produce the list, but a computer running the program would never stop. In addition it would be slow, because to avoid repeats, given a denominator n, it would include 1/n and (n-1)/n in the list, but to rule out repeats it would have to perform n - 2 divisions. If it had a way of knowing whether a number was prime, it could just put in 1/prime, 2/prime, . . . , (prime - 1)/prime without the division. But although there are lists of primes for small integers, there is no general way to find them, so brute force is required. So for 10^n, that means 10^n - 2 divisions. Once the numbers get truly large, there isn't enough matter in the universe to represent them, nor is there enough time since the big bang to do the calculations.
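
Here's a minimal sketch of such a program, using the greatest common divisor to do the repeat-checking the post describes. As promised, it would run forever if you let it:

```python
from fractions import Fraction
from math import gcd

def rationals_between_0_and_1():
    """Yield Munkres's list: 0, 1, then p/q for q = 2, 3, 4, ...,
    skipping fractions like 2/4 that are not in lowest terms."""
    yield Fraction(0)
    yield Fraction(1)
    q = 2
    while True:                  # never stops -- the list is infinite
        for p in range(1, q):
            if gcd(p, q) == 1:   # the repeat check, one division-like step per candidate
                yield Fraction(p, q)
        q += 1

gen = rationals_between_0_and_1()
print([str(next(gen)) for _ in range(8)])
# ['0', '1', '1/2', '1/3', '2/3', '1/4', '3/4', '1/5']
```

The first eight terms match Munkres's list exactly; the point of the post stands, since no finite run of the generator ever exhausts the rationals.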

However, the proof proceeds blithely on after showing the list — this is where the strangeness comes in. It basically uses the complete list of rational numbers as indexes for the infinite number of open sets to be found in a normal topological space. The old post below refers to the assumption of infinite divisibility of space (inherent in the theorem on normal topological spaces), something totally impossible physically.

So we're in the never to be seen land of completed infinities (of time, space, numbers of operations). It's remarkable that this stuff applies to the world we inhabit, but it does, and anyone wishing to understand physics at a deep level must come to grips with mathematics at this level.

Here’s the old post

Urysohn’s Lemma

The above quote is from one of the standard topology texts for undergraduates (or perhaps the standard text) by James R. Munkres of MIT. It appears on page 207 of 514 pages of text. Lee’s text book on Topological Manifolds gets to it on p. 112 (of 405). For why I’m reading Lee see https://luysii.wordpress.com/2012/09/11/why-math-is-hard-for-me-and-organic-chemistry-is-easy/.

Well it is a great theorem, and the proof is ingenious, and understanding it gives you a sense of triumph that you actually did it, and a sense of awe about Urysohn, a Russian mathematician who died at 26. Understanding Urysohn is an esthetic experience, like a Dvorak trio or a clever organic synthesis [ Nature vol. 489 pp. 278 – 281 ’12 ].

Clearly, you have to have a fair amount of topology under your belt before you can even tackle it, but I’m not even going to state or prove the theorem. It does bring up some general philosophical points about math and its relation to reality (e.g. the physical world we live in and what we currently know about it).

I’ve talked about the large number of extremely precise definitions to be found in math (particularly topology). Actually what topology is about, is space, and what it means for objects to be near each other in space. Well, physics does that too, but it uses numbers — topology tries to get beyond numbers, and although precise, the 202 definitions I’ve written down as I’ve gone through Lee to this point don’t mention them for the most part.

Essentially topology reasons about our concept of space qualitatively, rather than quantitatively. In this, it resembles philosophy which uses a similar sort of qualitative reasoning to get at what are basically rather nebulous concepts — knowledge, truth, reality. As a neurologist, I can tell you that half the cranial nerves, and probably half our brains are involved with vision, so we automatically have a concept of space (and a very sophisticated one at that). Topologists are mental Lilliputians trying to tack down the giant Gulliver which is our conception of space with definitions, theorems, lemmas etc. etc.

Well one form of space anyway. Urysohn talks about normal spaces. Just think of a closed set as a Russian Doll with a bright shiny surface. Remove the surface, and you have a rather beat up Russian doll — this is an open set. When you open a Russian doll, there’s another one inside (smaller but still a Russian doll). What a normal space permits you to do (by its very definition), is insert a complete Russian doll of intermediate size, between any two Dolls.

This all sounds quite innocent until you realize that between any two Russian dolls an infinite number of concentric Russian dolls can be inserted. Where did they get a weird idea like this? From the number system of course. Between any two distinct rational numbers p/q and r/s where p, q, r and s are whole numbers, you can always insert a new one halfway between. This is where the infinite regress comes from.

For mathematics (and particularly for calculus) even this isn't enough. The square root of two isn't a rational number (one of the great Euclid proofs), but you can get as close to it as you wish using rational numbers. So there are an infinite number of non-rational numbers between any two rational numbers. In fact that's how the non-rational (real) numbers are defined — essentially by fiat, that any set of real numbers bounded above has a least upper bound (think 1, 1.4, 1.41, 1.414, . . . defining the square root of 2).

What does this skullduggery have to do with space? It says essentially that space is infinitely divisible, and that you can always slice and dice it as finely as you wish. This is the calculus of Newton and the relativity of Einstein. It clearly is right, or we wouldn’t have GPS systems (which actually require a relativistic correction).

But it's clearly wrong as any chemist knows. Matter isn't infinitely divisible. Just go down 10 orders of magnitude from the visible and you get the hydrogen atom, which can't be split into smaller and smaller hydrogen atoms (although it can be split).

It’s also clearly wrong as far as quantum mechanics goes — while space might not be quantized, there is no reasonable way to keep chopping it up once you get down to the elementary particle level. You can’t know where they are and where they are going exactly at the same time.

This is exactly one of the great unsolved problems of physics — bringing relativity, with its infinitely divisible space, together with quantum mechanics, where the very meaning of space becomes somewhat blurry (if you can't know exactly where anything is).

Interesting isn’t it?

Book Review — The Kingdom of Speech — Part III

The last half of Wolfe's book is concerned with Chomsky and linguistics. Neurologists still think they have something to say about how the brain produces language, something roundly ignored by the professional linguistics field. Almost at the beginning of the specialty, various types of loss of speech (aphasias) were catalogued and correlated with where in the brain the problem was. Some people could understand but not speak (motor aphasia). Most turned out to have lesions in the left frontal lobe. Others could speak but not understand what was said to them (receptive aphasia). They usually had lesions in the left temporal lobe (just behind the ear, amazingly enough).

Back in the day this approach was justifiably criticized as follows — yes, you can turn off a lightbulb by flicking a switch, but the switch isn't producing the light; it is just something necessary for its production. Nowadays not so much, because we see these areas lighting up with increased blood flow (by functional MRI) when speech is produced or listened to.

I first met Chomsky’s ideas, not about linguistics, but when I was trying to understand how a compiler of a high level computer language worked. This was so long ago that Basic and Pascal were considered high level languages. Compilers worked with formal rules, and Chomsky categorized them into a hierarchy which you can read about here — https://en.wikipedia.org/wiki/Chomsky_hierarchy

The book describes the rise of Chomsky as the enfant terrible, the adult terrible, then the eminence grise of linguistics. Wolfe has great fun skewering him, particularly for his left wing posturing (something he did at length in “Radical Chic”). I think most of the description is accurate, but if you have the time and the interest, there’s a much better book — “The Linguistics Wars” by Randy Allen Harris — although it’s old (1993), Chomsky and linguistics had enough history even then that the book contains 356 pages (including index).

Chomsky actually did use the term language organ, meaning a facility of the human brain responsible for our production of language. Neuroscience never uses such a term, and Chomsky never tried to localize it in the brain, but work on the aphasias made this at least plausible. If you've never heard of 'universal grammar', the 'language acquisition device', or the 'deep structure' of language, the book is a reasonably accurate (and very snarky) introduction.

As the years passed, for everything that Chomsky claimed was a universal of all languages, a language was found that didn't have it. The last universal left standing was recursion (e.g. the ability to pack phrase within phrase — the example given: "He assumed that now that her bulbs had burned out, he could shine and achieve the celebrity he had always longed for" — thought within thought within thought).

Then a missionary turned linguist (Daniel Everett) found a tribe in the Amazon (the Piraha) with a language which not only lacked recursion, but tenses as well. It makes fascinating reading, including the linguist W. Tecumseh Fitch (yes, Tecumseh), who travelled up the Amazon to prove that they did have recursion (especially as he had collaborated with Chomsky and the now disgraced Marc Hauser on an article in 2002 saying that recursion was the true essence of human language) — how's this horrible sentence for recursion?

The book ends with a discussion of the quote Wolfe began the book with — “Understanding the evolution of language requires evidence regarding origins and processes that led to change. In the last 40 years, there has been an explosion of research on this problem as well as a sense that considerable progress has been made. We argue instead that the richness of ideas is accompanied by a poverty of evidence, with essentially no explanation of how and why our linguistic computations and representations evolved. We show that, to date, (1) studies of nonhuman animals provide virtually no relevant parallels to human linguistic communication, and none to the underlying biological capacity; (2) the fossil and archaeological evidence does not inform our understanding of the computations and representations of our earliest ancestors, leaving details of origins and selective pressure unresolved; (3) our understanding of the genetics of language is so impoverished that there is little hope of connecting genes to linguistic processes any time soon; (4) all modeling attempts have made unfounded assumptions, and have provided no empirical tests, thus leaving any insights into language’s origins unverifiable. Based on the current state of evidence, we submit that the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever, with considerable uncertainty about the discovery of either relevant or conclusive evidence that can adjudicate among the many open hypotheses. We conclude by presenting some suggestions about possible paths forward.”

One of the authors is Chomsky himself.

You can read the whole article at http://journal.frontiersin.org/article/10.3389/fpsyg.2014.00401/full

I think Wolfe is right — language is just a tool (like the wheel or the axe) which humans developed to help them. That our brain size is at least 3 times that of our nearest evolutionary cousin (the chimpanzee) probably had something to do with it. If language is a tool, then, like the axe, it didn't have to evolve from anything.

All in all a fascinating and enjoyable book. There’s much more in it than I’ve had time to cover.  The prose will pick you up and carry you along.