Category Archives: Philosophical issues raised

A visual proof of the Theorema Egregium of Gauss

Nothing better illustrates the difference between an intuitive understanding that something is true and being convinced by logic that it is true than the visual proof of the Theorema Egregium of Gauss found in “Visual Differential Geometry and Forms” by Tristan Needham and the 9 step algebraic proof in “The Geometry of Spacetime” by Jim Callahan.

Mathematicians attempt to tie down the Gulliver of our powerful appreciation of space with Lilliputian strands of logic.

First: some background on the neurology of vision and our perception of space and why it is so compelling to us.

In the old days, we neurologists figured out what the brain was doing by studying what was lost when parts of the brain were destroyed (usually by strokes, but sometimes by tumors or trauma).  This wasn’t terribly logical, as pulling the plug on a lamp plunges you into darkness, but the plug has nothing to do with how the lightbulb or LED produces light.  Even so, it was clear that the occipital lobe was important — destroy it on both sides and you are blind (https://en.wikipedia.org/wiki/Occipital_lobe) — but the occipital lobe accounts for only 10% of the gray matter of the cerebral cortex.

The information flowing into your brain from your eyes is enormous.  The optic nerve connecting the eyeball to the brain has a million fibers, and they can fire up to 500 times a second.  If each firing (nerve impulse) is a bit, then that’s an information flow into your brain of roughly half a gigaBit/second.  This information is highly processed by the neurons and receptors in the 10 layers of the retina.  Over 30 retinal cell types are known, each responding to a different aspect of the visual stimulus.  For instance, there are cells responding to color, to movement in one direction, to a light stimulus turning on, to a light stimulus turning off, etc. etc.
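As a back-of-the-envelope check in Python (a sketch using the round numbers just quoted, and the generous assumption of one bit per impulse):

```python
# Rough upper bound on information flow through one optic nerve,
# using the figures quoted above and 1 bit per nerve impulse.
fibers = 1_000_000          # axons in the optic nerve
max_firing_rate = 500       # impulses per second per fiber
bits_per_second = fibers * max_firing_rate
print(f"{bits_per_second:.1e} bits/s")   # 5.0e+08 -- about half a gigaBit/second
```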

So how does the relatively small occipital lobe deal with this? It doesn’t.  At least half of your brain responds to visual stimuli.  How do we know?  It’s complicated, but something called functional Magnetic Resonance Imaging (fMRI) is able to show us increased neuronal activity, primarily by the increase in blood flow it causes.

Given that half of your brain is processing what you see, it makes sense to use it to ‘see’ what’s going on in Mathematics involving space.  This is where Tristan Needham’s books come in.

I’ve written several posts about them.

Here — https://luysii.wordpress.com/2022/03/07/visual-differential-geometry-and-forms-q-take-3/

OK, so what is the Theorema Egregium?  Look at any object (say a banana). You can see how curved it is just by looking at its surface (e.g. how it looks in the 3 dimensional space of our existence).  Gauss showed that you don’t even have to look at an object in 3 space; it is enough to perform local measurements on the surface itself (using the distance between surface points, e.g. the metric, e.g. the metric tensor).  Curvature is intrinsic to the surface itself, and you don’t have to get outside of the surface (as we are) to find it.

The idea (and the mathematical machinery) has been extended to the 3 dimensional space we live in (something we can’t get outside of).  Is our universe curved or not?  To study the question, you determine its intrinsic curvature by extrapolating the tools Gauss gave us to higher dimensions and comparing the mathematical results with experimental observation.  The elephant in the room is general relativity, which would be impossible without this (and which is why I’m studying the Theorema Egregium in the first place).

So how does Callahan phrase and prove the Theorema Egregium?  He defines curvature as the ratio of the area of the image (under the Gauss map) of a small patch of the surface on the unit sphere to the area of the patch itself.  If you took some vector calculus, you’ll know that the area of the parallelogram spanned by two non-collinear vectors is the magnitude of their cross product.
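A minimal numerical illustration of that last fact (toy vectors of my own choosing, not Callahan’s notation):

```python
import numpy as np

# The area of the parallelogram spanned by two non-collinear vectors
# equals the magnitude (norm) of their cross product.
u = np.array([2.0, 0.0, 0.0])
v = np.array([0.0, 3.0, 0.0])
area = np.linalg.norm(np.cross(u, v))
print(area)   # 6.0 -- a 2 x 3 rectangle in the xy-plane
```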

The vectors Callahan needs for the cross product are the normal vectors to the surface.  Herein beginneth the algebra.  Callahan parameterizes the surface in 3 space from a region in the plane, and uses the metric of the surface to determine a formula for the normal vector to the surface at a point (which has 3 components x, y and z, each of which is the sum of 4 terms, each of which is the product of a second order derivative with a first order derivative of the metric).  Forming the cross product of the normal vectors and writing it out is an algebraic nightmare.  At this point you know you are describing something called curvature, but you have no clear conception of what curvature is.  You do have a clear definition in terms of the ratio of areas, but it soon disappears in a massive (though necessary) algebraic fandango.

On pages 258 – 262 Callahan breaks the proof down into 9 steps involving various mathematical functions of the metric and its derivatives, such as the Christoffel symbols, the Riemann curvature tensor, etc.  It is logically complete, logically convincing, and it shows that all this mathematical machinery arises from the metric (intrinsic to the surface) and its derivatives (some as high as third order).

For this we all owe Callahan a great debt.  But unfortunately, although I believe it, I don’t see it.  This certainly isn’t to denigrate Callahan, who has helped me through his book, and whom I consider a friend, having drunk beer with him and his wife while listening to Irish music in a dive bar north of Amherst.

Callahan’s proof is the way Gauss himself did it, and Callahan told me that Gauss didn’t have the notational tools we have today, making the theorem even more outstanding (egregious).

Well now, on to Needham’s geometric proof.  Disabuse yourself of the notion that it won’t involve much intellectual work on your part, even though it uses the geometric intuition you were born with (the green glasses of Immanuel Kant — http://onemillionpoints.blogspot.com/2009/07/kant-for-dummies-green-sunglasses.html).

Needham’s definition of curvature uses the angular excess of a triangle.  Angles are measured in radians: the ratio of the arc subtended by the angle to the radius of the circle (not the circumference, as I thought I remembered).  Since the circumference of a circle is 2*pi*radius, radian measure varies from 0 to 2*pi.  So a right angle is pi/2 radians.

Here is a triangle with angular excess.  Start with a sphere of radius r.  Go to the north pole and drop a longitude down to the equator.  It meets the equator at a right angle (pi/2).  Go back to the north pole, form an angle of pi/2 with the first longitude, and drop another longitude at that angle, which also meets the equator at an angle of pi/2.  The two points on the equator and the north pole form a triangle with total internal angles of 3*(pi/2).  In plane geometry we know that the angles of a triangle total pi, i.e. 2*(pi/2).  (Interestingly, this depends on the parallel postulate.  See if you can figure out why.)  So the angular excess of our triangle is pi/2.  Nothing complicated to understand (or visualize) here.

Needham defines the curvature of the triangle (and of any closed area) as the ratio of the angular excess of the triangle to its area.

What is the area of the triangle?  Well, the volume of a sphere is (4/3)*pi*r^3, and its surface area is the derivative of this with respect to r: 4*pi*r^2.  The area of the northern hemisphere is 2*pi*r^2, and the area of the triangle just made is one quarter of that: (1/2)*pi*r^2.

So the curvature of the triangle is (pi/2) / (1/2 * pi * r^2) = 1 / r^2.   More to the point, this is the curvature of a sphere of radius r.
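If you like, you can check the arithmetic in a few lines of Python (a sketch; any radius works):

```python
import math

# Needham's curvature for the triangle constructed above:
# three angles of pi/2 on a sphere of radius r.
r = 2.0
excess = 3 * (math.pi / 2) - math.pi     # angular excess = pi/2
area = 4 * math.pi * r**2 / 8            # the triangle is 1/8 of the sphere
print(excess / area, 1 / r**2)           # both 0.25 -- curvature = 1/r^2
```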

At this point you should have a geometric intuition of just what curvature is, and how to find it.  So when you are embroiled in the algebra in higher dimensions trying to describe curvature there, you will have a mental image of what the algebra is attempting to describe, rather than just the symbols and machinations of the algebra itself (the Lilliputian strands of logic tying down the Gulliver of curvature).

The road from here to the Einstein gravitational field equations (p. 326 of Needham), a road I haven’t yet traversed, is presently about 50 pages.  Just to get to this point, however, you have been exposed to comprehensible geometric expositions of geodesics, holonomy, parallel transport and vector fields, and you should have mental images of them all.  Interested?  Be prepared to work, and to reorient how you think about these things if you’ve met them before.  The links mentioned above will give you a glimpse of Needham’s style.  You probably should read them next.

The Chinese Room Argument, Understanding Math and the imposter syndrome

The Chinese Room Argument was first published in a 1980 article by American philosopher John Searle. He imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

So it was with me and math as an undergraduate, due to a history dating back to age 10.  I hit college being very good at manipulating symbols whose meaning I was never given to understand.  I grew up 45 miles from the nearest synagogue.  My fanatically religious grandfather thought it was better not to attend services at all than to drive up there on the Sabbath.  My father was a young lawyer building a practice and couldn’t close his office on Friday.  So he taught me how to read Hebrew letters and reproduce how they sound, so I could read from the Torah at my Bar Mitzvah (which I did, comprehending nothing).  Since I’m musical, learning the cantillations under the letters wasn’t a problem.

I’ve always loved math, and solving problems of the plug and chug variety was no problem.  I’d become adept years earlier at this type of thing thanks to my religiously rigid grandfather.  It was the imposter syndrome writ large.  I’ve never felt like this about organic chemistry; it made a good deal of intuitive sense the first time I ran into it.  For why, have a look at — https://luysii.wordpress.com/2012/09/11/why-math-is-hard-for-me-and-organic-chemistry-is-easy/

If there is anything in math full of arcane symbols calling for lots of mechanical manipulation, it is the differential geometry and tensors needed to understand general relativity.  So I’ve plowed through a lot of it, but still don’t see what’s really going on.

Enter Tristan Needham’s book “Visual Differential Geometry and Forms”.  I’ve written about it several times, e.g. here — https://luysii.wordpress.com/2022/03/07/visual-differential-geometry-and-forms-q-take-3/

If you’ve studied any math, his approach will take getting used to, as it’s purely visual and very UNalgebraic.  But what is curvature but a geometric concept?

So at present I’m about 80 pages away from completing Needham’s discussion of general relativity.  I now have an intuitive understanding of curvature, torsion, holonomy, geodesics and the Gauss map that I never had before.   It is very slow going, but very clear.  Hopefully I’ll make it to p. 333.  Wish me luck.

Brilliant structural work on the Arp2/3 complex with actin filaments and why it makes me depressed

The Arp2/3 complex of 7 proteins forms side branches on existing actin filaments.  The following paper shows its beautiful structure along with movies.  Have a look — it’s open access. https://www.pnas.org/doi/10.1073/pnas.2202723119.

Why should it make me depressed? Because I could spend the next week studying all the ins and outs of the structure and how it works without looking at anything else.  Similar cryoEM studies of other multiprotein machines are coming out which will take similar amounts of time.  Understanding how single enzymes work is much simpler, although similarly elegant — see Cozzarelli’s early work on topoisomerase.

So I’m depressed because I’ll never understand them to the depth I understand enzymes, DNA, RNA etc. etc.

Also the complexity and elegance of these machines brings back my old worries about how they could possibly have arisen simply by chance with selection acting on them.  So I plan to republish a series of old posts about the improbability of our existence, and the possibility of a creator, which was enough to get me thrown off Nature Chemistry as a blogger.

Enough whining.

Here is why the Arp2/3 complex is interesting.  Actin filaments are long (1,000 – 20,000 Angstroms) and thin (70 Angstroms).  If you want to move a cell forward by having them grow toward its leading edge, growing actin filaments would puncture the membrane like a bunch of needles; hence the need for side branches, which make the actin filaments a brush-like mesh that can push the membrane forward as it grows.

The Arp2/3 complex has a molecular mass of 225 kiloDaltons — at roughly 110 Daltons per residue, that’s about 2,000 amino acids, or some 16 thousand atoms.

Arp2 stands for actin related protein 2, something quite similar to the normal actin monomer so it can sneak into the filament. So can Arp3.  The other 5 proteins grab actin monomers and start them polymerizing as a branch.

But even this isn’t enough: Arp2/3 is intrinsically inactive, and multiple classes of nucleation promoting factors (NPFs) are needed to stimulate it.  One such NPF family is the WASP proteins (for Wiskott-Aldrich Syndrome Protein), mutations of which cause the syndrome, characterized by hereditary thrombocytopenia, eczema and frequent infections.

The paper’s pictures do not include WASP, just the 7 proteins of the complex snuggling up to an actin filament.

In the complex the Arps are in a twisted conformation, in which they resemble actin monomers rather than filamentous actin subunits, which have a flattened conformation.  After activation, Arp2 and Arp3 mimic the arrangement of two consecutive subunits along the short-pitch helical axis of an actin filament, and each Arp transitions from a twisted (monomer-like) to a flattened (filament-like) conformation.

So look at the pictures and the movies and enjoy the elegance of the work of the Blind Watchmaker (if such a thing exists).

Why there’s more to chemistry than quantum mechanics

As juniors entering the Princeton Chemistry Department as majors in 1958, we were told to read “The Logic of Modern Physics” by P. W. Bridgman — https://en.wikipedia.org/wiki/The_Logic_of_Modern_Physics.  I don’t remember whether we ever got together to discuss the book with faculty, but I do remember that I found the book intensely irritating.  It was written in 1927, in the early heyday of quantum mechanics.  It said that all you could know was measurements (numbers on a dial if you wish), without any understanding of what went on in between them.

I thought chemists knew a lot more than that.  Here’s Henry Eyring — https://en.wikipedia.org/wiki/Henry_Eyring_(chemist) — developing transition state theory a few years later, in 1935, in the department.  It was pure ideation based on thermodynamics, which was developed long before quantum mechanics and is still pretty much a quantum mechanics free zone of physics (although people are busy at work on the interface).

Henry would have loved a recent paper [ Proc. Natl. Acad. Sci. vol. 118 e2102006118 ’21 ] where the passage of a molecule back and forth across the free energy maximum was measured again and again.

A polynucleotide hairpin of DNA was connected to double stranded DNA handles in optical traps, where it could fluctuate between folded (hairpin) and unfolded (no hairpin) states.  They could measure just how far apart the handles were, and in the hairpin state the length appears to be 100 Angstroms (10 nanoMeters) shorter than in the unfolded state.

So they could follow the length vs. time and measure the 50 microSeconds or so it took to make the journey across the free energy maximum (e.g. the transition state). A mere 323,495 different transition paths were studied.  You can find much more about the work here — https://luysii.wordpress.com/2022/02/15/transition-state-theory/

Does Bridgman have the last laugh?  Remember, all that is being measured is numbers (lengths) on a dial.

Here’s another recent paper Eyring would have loved — [ Proc. Natl. Acad. Sci. vol. 119 e2112372118 ’22 ] — https://www.pnas.org/doi/epdf/10.1073/pnas.2112382119

The paper studied Barnase, a 110 amino acid protein which degrades RNA (much like the original protein Anfinsen studied years ago).  Barnase is highly soluble and very stable, making it one of the E. coli’s of protein folding studies.

The new wrinkle of the paper is that they were able to study the folding and unfolding and the transition state of single molecules of Barnase at different temperatures (an experiment Eyring would have been unlikely even to think about doing in 1935 when he developed transition state theory, and yet exactly the sort of thing he was thinking about — though not about proteins, whose structures were unknown back then).

This allowed them to determine not just the change in free energy (delta G) between the unfolded state (U), the transition state (TS) and the native state (N) of Barnase, but also the changes in enthalpy (delta H) and entropy (delta S) between U and TS and between N and TS.

Remember delta G = delta H – T * delta S.  A process will occur if delta G is negative, which is why an increase in entropy is favorable, and why the decrease in entropy between U and TS is unfavorable.  You can find out more about this work here — https://luysii.wordpress.com/2022/03/25/new-light-on-protein-folding/
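As a toy illustration of the formula (the numbers below are invented for illustration, not the values measured for Barnase):

```python
# delta G = delta H - T * delta S; a process is favorable if delta G < 0.
# Illustrative numbers only -- not Barnase's measured values.
def delta_g(delta_h, delta_s, temp_kelvin):
    return delta_h - temp_kelvin * delta_s

dH = -50.0   # kJ/mol: favorable enthalpy change
dS = -0.10   # kJ/(mol*K): unfavorable entropy decrease
for T in (280, 300, 320):
    print(T, delta_g(dH, dS, T))   # delta G grows less negative as T rises
```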

So the purely mental ideas of Eyring are being confirmed once again (but by numbers on a dial).  I doubt that Eyring would have thought such an experiment possible back in 1935.

Chemists know so much more than quantum mechanics says we can know.  But much of what we do know would be impossible without quantum mechanics.

However, Eyring certainly wasn’t averse to quantum mechanics, having written a textbook, “Quantum Chemistry”, with Walter and Kimball on that very subject in 1944.

How Infants learn language – V

Infants don’t learn language the way neural nets do.  Unlike nets, no feedback is involved, which, amazingly, makes learning faster.

As is typical of research in psychology, the hard part is thinking of something clever to do, rather than actually carrying it out.

[ Proc. Natl. Acad. Sci. vol. 117 pp. 26548 – 26549 ’20 ] is a short interview with psychologist Richard N. Aslin. Here’s a link — hopefully not behind a paywall — https://www.pnas.org/content/pnas/117/43/26548.full.pdf.

He was interested in how babies pull out words from a stream of speech.

He took a commonsense argument and ran with it.

“The learning that I studied as an undergrad was reinforcement learning—that is, you’re getting a reward for responding to certain kinds of input—but it seemed that that kind of learning, in language acquisition, didn’t make any sense. The mother is not saying, “listen to this word…no, that’s the wrong word, listen to this word,” and giving them feedback. It’s all done just by being exposed to the language without any obvious reward”

So they performed an experiment whose results surprised them.  They made a ‘language’ of speech sounds which weren’t words and presented them, 4 per second, for a few minutes to 8 month old infants.  There was an underlying statistical structure: certain sounds were more likely to follow another one, others were less likely.  That’s it.  No training.  No feedback.  No nothin’, just a sequence of sounds.  Then they presented sequences (from the same library of sounds) which the babies hadn’t heard before, and the babies recognized them as different.  The interview didn’t say how they knew the babies were recognizing them, but my guess is that they used the mismatch negativity brain potential, which automatically arises to novel stimuli.
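Here is a toy sketch of the kind of statistical structure involved (hypothetical syllables of my own devising, not the actual stimuli): within a ‘word’ each syllable almost always predicts the next, while across word boundaries the next syllable is nearly unpredictable — exactly the transitional probabilities the infants would have to be tracking.

```python
import random
from collections import Counter

# Toy stream: three 'words' of three syllables each, in random order.
words = ["bidaku", "padoti", "golabu"]
random.seed(0)
stream = []
for _ in range(200):
    w = random.choice(words)
    stream.extend(w[i:i + 2] for i in range(0, 6, 2))

pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])
# transitional probability P(next syllable | current syllable)
tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}
print(tp[("bi", "da")])            # 1.0 -- within-word transition
print(tp.get(("ku", "pa"), 0.0))   # ~0.33 -- across a word boundary
```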

Had you ever heard of this?  I hadn’t, but the references to the author’s papers go back to 1996!  Time for someone to replicate this work.

So our brains have an innate ability to measure the statistical probability of distinct events occurring.  Even better, we react to the unexpected event.  This may be the ‘language facility’ Chomsky was talking about half a century ago.  Perhaps this innate ability is the origin of music, the most abstract of the arts.

How infants learn language is likely inherently fascinating to many, not just neurologists.

Here are links to some other posts on the subject you might be interested in.

https://luysii.wordpress.com/2013/06/03/how-infants-learn-language-iv/

https://luysii.wordpress.com/2011/10/10/how-infants-learn-language-iii/

https://luysii.wordpress.com/2010/10/03/how-infants-learn-language-ii/

https://luysii.wordpress.com/2010/09/30/how-infants-learn-language/

Philip Anderson, 1923 – 2020, R.I.P.

Phil Anderson probably never heard of Ludwig Mies van der Rohe, he of the Bauhaus and the famous dictum ‘less is more’, so he probably wasn’t riffing on it when he wrote “More Is Different” in August of 1972 [ Science vol. 177 pp. 393 – 396 ’72 ] — https://science.sciencemag.org/content/sci/177/4047/393.full.pdf.

I was just finishing residency and found it a very unusual paper for Science Magazine.  His Nobel was 5 years away, but Anderson was of sufficient stature that Science published it.  The article was a nonphilosophical attack on reductionism with lots of hard examples from solid state physics. It is definitely worth reading, if the link will let you.  The philosophic repercussions are still with us.

He notes that most scientists are reductionists.  He puts it this way: “The workings of our minds and bodies and of all the matter animate and inanimate of which we have any detailed knowledge, are assumed to be controlled by the same set of fundamental laws, which except under extreme conditions we feel we know pretty well.”

So many body physics/solid state physics obeys the laws of particle physics, chemistry obeys the laws of many body physics, molecular biology obeys the laws of chemistry, and onward and upward to psychology and the social sciences.

What he attacks is what appears to be a logical correlate of this, namely that understanding the fundamental laws allows you to derive from them the structure of the universe in which we live (including ourselves).  Chemistry really doesn’t predict molecular biology, and cellular molecular biology doesn’t really predict the existence of multicellular organisms.  This is because new phenomena arise at each level of increasing complexity, for which laws (e.g. regularities) appear which don’t have an explanation by reduction to the next more fundamental level below.

Even though the last 48 years of molecular biology and biophysics have shown us a lot of new phenomena, they really weren’t predictable.  So they are a triumph of reductionism, and yet —

As soon as you get into biology you become impaled on the horns of the Cartesian dualism of flesh vs. spirit.  As soon as you ask what something is ‘for’, you realize that reductionism can’t help.  As an example I’ll repost an old one in which reductionism tells you exactly how something happens, but is absolutely silent on what that something is ‘for’.

The limits of chemical reductionism

“Everything in chemistry turns blue or explodes”, so sayeth a philosophy major roommate years ago.  Chemists are used to being crapped on, because it starts so early and never lets up.  However, knowing a lot of organic chemistry and molecular biology allows you to see very clearly one answer to a serious philosophical question — when and where does scientific reductionism fail?

Early on, physicists said that quantum mechanics explains all of chemistry.  Well it does explain why atoms have orbitals, and it does give a few hints as to the nature of the chemical bond between simple atoms, but no one can solve the equations exactly for systems of chemical interest.  Approximate the solution, yes, but this is hardly a pure reduction of chemistry to physics.  So we’ve failed to reduce chemistry to physics because the equations of quantum mechanics are so hard to solve, but this is hardly a failure of reductionism.

The last post, “The death of the synonymous codon – II” (https://luysii.wordpress.com/2011/05/09/the-death-of-the-synonymous-codon-ii/), puts you exactly at the nidus of the failure of chemical reductionism to bag the biggest prey of all, an understanding of the living cell and with it of life itself.  We know the chemistry of nucleotides, Watson-Crick base pairing, and enzyme kinetics quite well.  We understand why less transfer RNA for a particular codon would mean slower protein synthesis.  Chemists understand what a protein conformation is, although we can’t predict it 100% of the time from the amino acid sequence.  So we do understand exactly why the same amino acid sequence using different codons would result in slower synthesis of gamma actin than beta actin, and why the slower synthesis would allow a more leisurely exploration of conformational space, allowing gamma actin to find a conformation which would be modified by linking it to another protein (ubiquitin), leading to its destruction.  Not bad.  Not bad at all.

Now ask yourself why the cell would want to have less gamma actin around than beta actin.  There is no conceivable explanation for this in terms of chemistry.  A better understanding of protein structure won’t give it to you.  Certainly, beta and gamma actin differ slightly in amino acid sequence (4/375), so their structures won’t be exactly the same.  Studying this till the cows come home won’t answer the question, as it’s on an entirely different level than chemistry.

Cellular and organismal molecular biology is full of questions like that, but gamma and beta actin are the closest chemists have come to explaining the disparity in the abundance of two closely related proteins on a purely chemical basis.

So there you have it.  Physicality has gone as far as it can go in explaining the mechanism of the effect, but has nothing to say whatsoever about why the effect is present.  It’s the Cartesian dualism between physicality and the realm of ideas, and you’ve just seen the junction between the two live and in color, happening right now in just about every cell inside you.  So the effect is not some trivial toy model someone made up.

Whether philosophers have the intellectual cojones to master all this chemistry and molecular biology is unclear.  Probably no one has tried (please correct me if I’m wrong).  They are certainly capable of mounting intellectual effort — they write book after book about Gödel’s proof and the mathematical logic behind it.  My guess is that they are attracted to such things because logic and math are so definitive, general and nonparticular.

Chemistry and molecular biology aren’t general this way.  We study a very arbitrary collection of molecules, which must simply be learned and dealt with. Amino acids are of one chirality. The alpha helix turns one way and not the other.  Our bodies use 20 particular amino acids not any of the zillions of possible amino acids chemists can make.  This sort of thing may turn off the philosophical mind which has a taste for the abstract and general (at least my roommates majoring in it were this way).

If you’re interested in how far reductionism can take us  have a look at http://wavefunction.fieldofscience.com/2011/04/dirac-bernstein-weinberg-and.html

Were my two philosopher roommates still alive, they might come up with something like “That’s how it works in practice, but how does it work in theory?”

Now is the winter of our discontent

One of the problems with being over 80 is that you watch your friends get sick.  In the past month, one classmate developed ALS and another has cardiac amyloidosis, complete with implantable defibrillator.  The 40 year old daughter of a friend, whom we’ve watched since infancy, has serious breast cancer and is undergoing surgery, radiation and chemo.  While I don’t have survivor’s guilt (yet), it isn’t fun.

Reading and thinking about molecular biology has been a form of psychotherapy for me (for why, see the reprint of an old post on this point at the end).

Consider ALS (Amyotrophic Lateral Sclerosis, Lou Gehrig disease).  What needs explaining is not why my classmate got it, but why we all don’t have it.  As you know, human neurons don’t replace themselves (forget the work in animals — it doesn’t apply to us).  Just think what the neurons which die in ALS have to do.  They have to send a single axon several feet (not nanoMeters, microMeters or milliMeters — but the better part of a meter) from their cell bodies in the spinal cord to the muscle they innervate (which could be in your foot).

Supplying the end of the axon with proteins and other molecules by simple diffusion would never work.  So molecular highways (called microtubules) inside the axon are constructed, along which trucks (molecular motors such as kinesin and dynein) drag cargos of proteins, and mRNAs to make more proteins.

We know a lot about microtubules, and Cell vol. 179 pp. 909 – 922 ’19 gives incredible detail about them (even better, with lots of great pictures).  Start with the basic building block — the tubulin heterodimer — about 40 Angstroms wide and 80 Angstroms high.  The repeating unit of the microtubule is 960 Angstroms long, so 12 heterodimers are lined up end to end in each repeating unit — this is the protofilament of the microtubule, and our microtubules have 13 of them, so that’s 156 heterodimers per repeat length of 960 Angstroms or 96 nanoMeters (96 billionths of a meter).  So a microtubule (or a bunch of microtubules) extending a meter has about 10^7 such repeats, or roughly 1.5 billion heterodimers.  But the axon of a motor neuron has a bunch of microtubules in it (between 10 and 100), so the motor neuron firing to the muscle moving my finger has probably made billions and billions of heterodimers.  Moreover it’s been doing this for 80 plus years.
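The arithmetic, spelled out (order of magnitude only):

```python
# Heterodimers in a meter of microtubule, from the numbers above.
dimer_height_A = 80          # Angstroms per tubulin heterodimer
repeat_length_A = 960        # length of one microtubule repeat
protofilaments = 13
dimers_per_repeat = (repeat_length_A // dimer_height_A) * protofilaments  # 12 * 13 = 156

repeats_per_meter = 1e10 / repeat_length_A       # 1 meter = 1e10 Angstroms, ~1e7 repeats
dimers_per_meter = repeats_per_meter * dimers_per_repeat
print(f"{dimers_per_meter:.1e}")                 # ~1.6e9 heterodimers per meter
```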

This is why what needs explaining is not ALS, but why we don’t all have it.

Here’s the old post

The Solace of Molecular Biology

Neurology is fascinating because it deals with illnesses affecting what makes us human.  Unfortunately, for nearly all of my medical career in neurology (’62 – ’00), neurologic therapy was lousy and death was no stranger.  In a coverage group with 4 other neurologists taking weekend call (we covered our own practices during the week), about 1/4 of the patients seen on call weekend #1 had died by on call weekend #2, five weeks later.

Most of the deaths were in the elderly with strokes, tumors, cancer etc., but not all.  I also ran a muscular dystrophy clinic, and one of the hardest cases I saw was an infant with Werdnig-Hoffman disease — similar to what Stephen Hawking has, but much, much faster — she died at 1 year.  Initially, I found the suffering of such patients and their families impossible to accept or understand, particularly when they affected the young, or even young adults of graduate student age.

As noted earlier, I started med school in ’62, a time when the genetic code was first being cracked, and with the background that many of you have presently, understanding molecular biology as it was being unravelled wasn’t difficult.  Usually when you know something you tend to regard it as simple or unimpressive.  Not so the cell and life.  The more you know, the more impressive it becomes.

Think of the 3.2 gigaBases of DNA in each cell.  At 3 or so Angstroms of aromatic ring thickness per base, this comes out to a meter or so stretched out — but it isn’t stretched out; rather it’s compressed to fit into a nucleus 5 – 10 millionths of a meter in diameter.  Then, since DNA is a helix with one complete turn every 10 bases, the genome in each cell contains 320,000,000 twists which must be unwound to copy it into RNA.  The machinery which copies it into messenger RNA (RNA polymerase II) is huge — but the fun doesn’t stop there — in the eukaryotic cell, to turn on a gene at the right time, something called the mediator complex must bind to another site in the DNA and to the RNA polymerase — the whole mess contains over 100 proteins and has a molecular mass of over 2 megaDaltons (with our friend carbon weighing only 12 Daltons).  This monster must somehow find and unwind just the right stretch of DNA in the extremely cramped confines of the nucleus.  That’s just transcription of DNA into RNA.  Translation of the messenger RNA (mRNA) into protein involves another monster — the ribosome.  Most of our mRNA must be processed, lopping out irrelevant pieces, before it gets out to the cytoplasm — this calls for the spliceosome — a complex of over 100 proteins plus some RNAs — a completely different molecular machine with a mass in the megaDaltons.  There’s tons more that we know now, equally complex.
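The genome arithmetic in a few lines (assuming the standard 3.4 Angstroms of helix rise per base pair):

```python
# Stretched-out length and helical twist count of the genome.
bases = 3.2e9               # base pairs per (haploid) genome
rise_per_base_A = 3.4       # Angstroms of helix rise per base pair
length_m = bases * rise_per_base_A * 1e-10
twists = bases / 10         # one full turn per ~10 base pairs
print(f"{length_m:.1f} m stretched out, {twists:.1e} twists")  # ~1.1 m, ~3.2e+08 twists
```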

So what.

Gradually I came to realize that what needs explaining is not the poor child dying of Werdnig-Hoffman disease, but that we exist at all, and for fairly prolonged periods of time, and in relatively good shape (like my father, who was actively engaged in the law and a mortgage operation until 6 months before his death at age 100).  Such is the solace of molecular biology.  It ain’t much, but it’s all I’ve got (the religious have a lot more).  You guys have the chemical background and the intellectual horsepower to understand molecular biology — and even perhaps to extend it.

The Russian language

“The power of language is its ambiguity,” sayeth I.  This came up because my nephew married a wonderful Russian expat a few weeks ago.  Plucky fellow that he is, he’s learning to speak Russian.  Like my wife’s friend of 50+ years ago, he is amazed at how many words the language has.  Russian apparently has a word for everything, so there is little ambiguity, which must make the language hard to pun in.

Someone Googled the number of words in Russian and English and they’re about the same.

Perhaps the lack of ambiguity makes Russian hard to learn (and use).  Computer languages (BASIC, C, Pascal) are completely unambiguous.  Every reserved word and operator means exactly one thing, no more, no less.

Most people find programming far from intuitive.  It’s hard to express our sloppy ideas in an unambiguous computer language.  Given the difficulty of giving your ideas concrete form in them, computer languages aren’t as powerful (in the sense of being easy to use) as your sloppy sentences.

Why should language be so ambiguous?  My guess is that it has to be this way, given the way we perceive the world (and the way the world probably actually is — ontology, if you want to impress your friends).

We don’t live in Plato’s world of perfect forms, but in a world of objects that only partially and rather poorly instantiate them.  This is as true of science as of anything else — even supposedly well defined terms change their meaning.  Are the giant viruses really viruses?  What do we really mean by a gene?  It used to be a part of DNA coding for a protein, but what about the DNA that controls when and where a protein is made?  Mutations here can cause disease, so are they genes?

Language, to be useful, must express our imperfect ways of rigidly classifying the world (perhaps because such a classification is impossible).

Socially, I never thought of our family as inhibited, but the Russians I met seemed more alive and vibrant than our lot (this without them living up to their reputation of hard drinking).

Prolegomena to reading Fall by Neal Stephenson

As a college freshman I spent hours trying to untangle Kant’s sentences in “Prolegomena to Any Future Metaphysics”  Here’s sentence #1.   “In order that metaphysics might, as science, be able to lay claim, not merely to deceitful persuasion, but to insight and conviction, a critique of reason itself must set forth the entire stock of a priori concepts, their division according to the different sources (sensibility, understanding, and reason), further, a complete table of those concepts, and the analysis of all of them along with everything that can be derived from that analysis; and then, especially, such a critique must set forth the possibility of synthetic cognition a priori through a deduction of these concepts, it must set forth the principles of their use, and finally also the boundaries of that use; and all of this in a complete system.”

This post is something to read before tackling “Fall” by Neal Stephenson — a prolegomena, if you will.  Hopefully it will be more comprehensible than Kant.  I’m only up to p. 83 of a nearly 900 page book.  But so far the book’s premise seems to be that if you knew each and every connection (synapse) between every neuron (e.g. a wiring diagram), you could resurrect the consciousness of an individual.  Perhaps Stephenson will get more sophisticated as I proceed through the book.  Perhaps not.  But he’s clearly done a fair amount of neuroscience homework.

So read the following old post about why a wiring diagram of the brain isn’t enough to explain how it works.   Perhaps he’ll bring in the following points later in the book.

Here’s the old post.  Some serious (and counterintuitive) scientific results to follow in tomorrow’s post.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every one of the 10^23 molecules of a mole or so of gas in a container is brought home.  Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain.  They want a ‘wiring diagram’ of the brain, e.g. a list of every neuron, and for each neuron a list of the neurons synapsing on it, and a third list of the neurons it synapses on.  For the non-neuroscientist — the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana).  This is why you need the second and third lists.
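Concretely, such a wiring diagram would amount to a directed graph, something like this sketch (hypothetical neuron names; the three numbered points below are exactly what it leaves out):

```python
from collections import defaultdict

# A connectome as directed adjacency lists: synapses run one way
# (to a first approximation), so each neuron needs both lists.
outputs = defaultdict(set)   # neuron -> neurons it synapses on
inputs = defaultdict(set)    # neuron -> neurons synapsing on it

def add_synapse(pre, post):
    outputs[pre].add(post)
    inputs[post].add(pre)

add_synapse("A", "B")
add_synapse("C", "B")
print(inputs["B"])   # {'A', 'C'} -- but nothing here says excitatory
                     # vs. inhibitory, and synapses come and go
```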

Clearly a monumental undertaking and one which grows more monumental with the passage of time.  Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them).  This is where the neurological urban myth of the loss of 10,000 neurons each day came from.  For details see https://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 ’11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses.  Well, that’s not a mole of synapses, but it is about a quarter of a nanoMole of them.  People are nonetheless trying to see which areas of the brain are connected to each other, to at least get a schematic diagram.

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it.  I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the complexity of the interconnections between neurons and just how many there are.  Figure 2 (p. 179) is particularly revealing, showing a 3 dimensional reconstruction using the high resolution obtainable with the electron microscope.  Stare at figure 2.f a while and try to figure out what’s going on.  It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing.  There are at least 3 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses).  A wiring diagram alone won’t tell you which.

2. When I was starting out, the following statement would have seemed impossible: it is now possible to watch synapses in the living brain of an awake animal for extended periods of time.  And we now know that synapses come and go in the brain.  The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times.  Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid. vol. 49 pp. 780 – 783, 877 – 887 ’06 ].  So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses.  Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses at all (it’s called volume neurotransmission).  Their importance in psychiatry and drug addiction is unparalleled.  Examples of such volume transmitters include serotonin, dopamine and norepinephrine.  Drugs of abuse affecting their action include cocaine and amphetamine.  Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

Statistical mechanics works because one molecule is pretty much like another.  This certainly isn’t true for neurons.  Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg.  This is the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles.  There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse.  The construction of the brain from the fertilized egg must be in some sense statistical.  Remarkable that it happens at all.  Embryologists are working intensively on how this happens — thousands of papers on the subject appear each year.

Feynman and Darwin

What do Richard Feynman and Charles Darwin have in common?  Both have written books which show a brilliant mind at work.  I’ve started reading the New Millennium Edition of Feynman’s Lectures on Physics (the edition you should get, as all 1165 errata found over the years have been corrected), and as with Darwin, his thought processes and their power are laid out for all to see.  Feynman’s books are far from F = ma.  They are basically polished versions of lectures, so they read as if Feynman is directly talking to you.  Example: “We have already discussed the difference between knowing the rules of the game of chess and being able to play.”  Another, talking about Zeno: “The Greeks were somewhat confused by such problems, being helped, of course, by some very confusing Greeks.”

He’s always thinking about the larger implications of what we know.  Example: “Newton’s law has the peculiar property that if it is right on a certain small scale, then it will be right on a larger scale.”

He then takes this idea and runs with it: “Newton’s laws are the ‘tail end’ of the atomic laws extrapolated to a very large size.”  The fact that they are extrapolatable, and the fact that way down below are the atoms producing them, means that extrapolatable laws are the only type of physical law which could be discovered by us (until we could get down to the atomic level).  Marvelous.  Then he notes that the fundamental atomic laws (e.g. quantum mechanics) are NOTHING like what we see in the large scale environment in which we live.

If you like this sort of thing, you’ll love the books.  I don’t think they would be a good way to learn physics for the first time, however.  No problems, etc. etc.  But once you’ve had exposure to some physics, “it is good to sit at the feet of the master” — Bill Gates.

Most of the readership is probably fully engaged with work, family and career, and doesn’t have time to actually read “The Origin of Species”.  In retirement, I did, and the power of Darwin’s mind is simply staggering.  He did so much with what little information he had.  There was no clear idea of how heredity worked, and at several points he’s a Lamarckian — inheritance of acquired characteristics.  If you do have the time, I suggest that you read the 1859 book chapter by chapter along with a very interesting book — “Darwin’s Ghost” by Steve Jones (published in 1999), which updates Darwin’s book to contemporary thinking chapter by chapter.  Despite the advances in knowledge in 140 years, Darwin’s thinking beats Jones hands down.