Category Archives: Philosophical issues raised

How far we’ve come from the McCulloch-Pitts neuron

The McCulloch-Pitts neuron was described in 1943.  It consists of a bunch of inputs (dendrites), some excitatory, some inhibitory, which are simply summed (integrated), the result determining the output (whether the axon of the neuron fires or doesn’t).  Hooking such neurons together could instantiate a variety of Boolean functions and ultimately a Turing machine.
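Nothing in the model goes beyond summation and a threshold.  Here is a minimal sketch in Python (mine, not from the 1943 paper), using the common reading of the model in which a single active inhibitory input vetoes firing:

```python
def mcculloch_pitts(excitatory, inhibitory, threshold):
    """All-or-none unit: fires (returns 1) only if no inhibitory input
    is active and the summed excitatory inputs reach the threshold."""
    if any(inhibitory):  # absolute inhibition, one reading of the 1943 model
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Boolean functions fall out of the threshold alone:
AND = lambda a, b: mcculloch_pitts([a, b], [], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [], threshold=1)
NOT = lambda a: mcculloch_pitts([1], [a], threshold=1)
```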

The McCulloch-Pitts neuron really isn’t that far from the ‘neurons’ in the neural nets which underlie the spectacular achievements of artificial intelligence (ChatGPT etc. etc.)   The neuron of the neural net is nothing more than a set of inputs, a set of weights, and an activation function. The neuron translates these inputs into a single output, which can then be picked up as input by another layer of neurons later on.

The major difference between the computations of a linked bunch of neurons in the two models (McCulloch-Pitts and neural net) is that in McCulloch-Pitts a given set of inputs always produces the same output, while in neural nets it doesn’t.  The reason is that the weights on the inputs to each neuron in the net can be (and are) adjusted, depending on how close the output of the net is to the target (which in the case of ChatGPT is how accurately it predicts the next word in a sample of text).
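Here is a minimal sketch of a single weighted neuron and one training nudge, again in Python; the sigmoid activation, the squared-error loss and the learning rate are illustrative choices, not anything specific to ChatGPT:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train_step(inputs, weights, bias, target, rate=0.1):
    """Nudge the weights downhill on the squared error (out - target)**2.
    For a sigmoid, d(out)/dz = out * (1 - out), so the chain rule gives
    d(error)/d(w_i) = 2 * (out - target) * out * (1 - out) * x_i."""
    out = neuron(inputs, weights, bias)
    g = 2 * (out - target) * out * (1 - out)
    new_weights = [w - rate * g * x for w, x in zip(weights, inputs)]
    return new_weights, bias - rate * g
```

Run the second function over enough examples and the weights drift toward values that make the output match the target; that drift is all ‘learning’ means here.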

There is a huge debate going on as to whether ChatGPT and similar neural nets understand what they are doing and whether they are (or will become) conscious.

So does ChatGPT explain how our brains do what they do?  Not at all.  Our neurons are doing far more than integrating input and firing.  This was brought home in a paper focused on something entirely different, the gamma oscillations of brain electrical activity (Neuron vol. 111 pp. 936 – 953 ’23).  People have been studying brain rhythms since Hans Berger discovered the alpha rhythm just shy of a century ago.  The electroencephalogram (EEG) measures the various rhythms as they occur over the brain.  Back in the day when I was starting out in neurology (1967), it was one of the few diagnostic tools we had.  It wasn’t very good, and a cynical attending described it as useless but not worthless (because you could charge for it).

The gray matter of the surface of our brains (cerebral cortex) is gray because it is packed with the cell bodies of neurons — some 100,000 under each square millimeter of cortex.  Somehow they are wired together so that they can produce coherent rhythmic electrical activity as they fire.

The best place to study how a bunch of neurons produce rhythms is the hippocampus, an area crucial in forming memories and one of the earliest places the senile plaques of Alzheimer’s disease show up.

Unlike the jumble of neurons in the cortex, the large neurons of the hippocampus are all lined up and oriented the same way, like trees in a forest.  All the cell bodies lie in roughly the same layer, with the major dendrite (the apical dendrite) going up like the trunk of a tree, and the dendrites near the cell body (the basal dendrites) spreading out like its roots.

Technology has marched on, and it is now possible to fashion electrodes which can measure neuronal electrical activity along the trunk and watch it in real time.

Figure 2b (p. 941) shows that different parts of the trunk of the hippocampal neurons show rhythmic activity at different frequencies at any given time.  Not only that, but as time passes each area of the trunk (apical dendrite) changes the frequency of its rhythmic activity.  This is light years away from the integrate and fire model of McCulloch-Pitts, or the adjustment of weights on the inputs to the neurons of the neural net.

It shows that each of these neurons is a complex processor of information (a computer if you will).  Even though artificial intelligence has made great strides, it really isn’t telling us how the brain does what it does.

Finally if you want to see what genius looks like, check out the life of Walter Pitts — https://en.wikipedia.org/wiki/Walter_Pitts  — corresponding with Bertrand Russell about Principia Mathematica at age 12, studying with Carnap at the University of Chicago at 15, all while he was homeless.

 

When does a description of something become an explanation?

“It’s just evolution”. I found this explanation of the molecular biology underlying our brain’s threefold expansion relative to the chimp extremely unsatisfying.  The molecular biology of part of the expansion is fascinating and beautifully worked out. For details see a copy of the previous post below the ***.

To say that these effects are ‘just evolution’ is to use the name we’ve put on the process to explain the process itself, i.e. to be satisfied with the description of something as an explanation of it.

Newton certainly wanted more than that for his description of gravity (the inverse square law, action at a distance etc. etc.) brilliant and transformative though it was.  Here he is in a letter to Richard Bentley

“That gravity should be innate inherent & {essential} to matter so that one body may act upon another at a distance through a vacuum without the mediation of any thing else by & through which their action or force {may} be conveyed from one to another is to me so great an absurdity that I believe no man who has in philosophical matters any competent faculty of thinking can ever fall into it. ”

But the form of the force law for gravity combined with Newton’s three laws of motion (1687) became something much more powerful, a set of predictions of phenomena as yet unseen.

The Lagrange points are one example.  They are points of equilibrium for small-mass objects under the influence of two massive bodies orbiting their common center of gravity.  The first Lagrange points were found by Euler in 1750, Lagrange coming along 10 years later.  One of the Lagrange points of the Earth-Sun system is where the James Webb telescope sits today, remaining stable without expending much energy to stay there.  In a rather satisfying sense the gravitational force law explains their existence (along with Newton’s laws of motion and a lot of math).  So here is where a description (the force law) is actually an explanation of something else.
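For concreteness, the machinery involved is just the force law plus Newton’s second law; the last expression is the standard approximation for the distance of the first two Lagrange points from the smaller body (stated here, not derived):

```latex
F = \frac{G\, m_1 m_2}{r^2}, \qquad F = m\,a, \qquad
r_{L1,\,L2} \;\approx\; R \left( \frac{m_2}{3\, m_1} \right)^{1/3}
```

Here R is the distance between the two massive bodies; for the Sun and the Earth this puts L2 about 1.5 million kilometers beyond the Earth, which is where the James Webb telescope actually sits.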

But Newton wanted more, much more, than his description of the gravitational force (the inverse square law).  It took until Einstein, centuries later, to come up with General Relativity — the theory of the gravitational force.  Just as a ball rolls down an incline here under the force of gravity, planets roll down the shape of Einstein’s spacetime, which is put there by the massive bodies it contains.  By shaping space everywhere, masses give the illusion of force; no action at a distance is needed at all.

It is exactly in that sense that I find ‘evolution’ as the explanation for the 8 million year sculpting of our brain unsatisfying.  It is essentially a description trying to pass itself off as an explanation.  Perhaps there is no deeper explanation of what we’re finding out.  Supernatural explanations have been with us in every culture.

Hopefully if such an explanation exists, we won’t have to wait over two centuries for it as did Newton.

*****

The evolutionary construction and magnification of the human brain

Our brains are 3 times the size of the chimp’s and more complex.  Now that we have the complete genome sequences of both (and of other primates) it is possible to look for the protein coding genes which separate us.

First some terminology.  Not every species found since the divergence of man and chimp is our direct ancestor.  Many branches are extinct.  The whole group of species is called hominins [Nature vol. 422 pp. 849 – 857 ’03].  Hominids are species in the path between us and the chimp — sort of a direct line of descent.  However the terminology is in flux and confusing, and I’m not sure this is right.   But we do need some terminology to proceed.

Hominid Specific genes (HS genes) are those resulting from recent gene duplications in hominid/human genomes.  Gene duplication is a great way for evolution to work quickly.  Even if one gene is essential, messing with the other copy won’t be fatal.  HS genes include >20 gene families that are dynamically expressed during the formation of the human brain.  It was hard for me to find out just how many HS genes there are.

Here are some examples. The human-specific NOTCH2NL genes increase the self-renewal potential of human cortical progenitors (meaning more brain cells can result from them).  TBC1D3 and ARHGAP11B are involved in basal progenitor amplification (ditto).

A recent paper [ Neuron vol. 111 pp. 65 – 80 ’23 ] discusses CROCCP2 (you don’t want to know what the acronym stands for) which is one of several genes in this family with at least 6 copies in various hominid genomes.  However, CROCCP2 is a duplicate unique to man.   It is highly expressed during brain development and enhances outer Radial Glial Cell progenitor proliferation.

The mechanism by which this happens is detailed in the paper and involves the cilium found on every neuron, mTOR, IFT20 and others.

But that’s not the point here, fascinating although these mechanisms are.   We’re watching a series of at least 20 gene duplications with subsequent modifications build the brain that is unique to us over relatively rapid evolutionary times.  The split between man and chimp is thought to have happened only 8 million years ago.

What should we call this process?  Evolution?  The Creator in action? The Blind Watchmaker?   It certainly is eerie to think about.  There are 17 more HS gene families involved in building our brains remaining to be worked out.  Stay tuned

 


Orwell does Stanford, Stanford does Newspeak

At the end of “1984”, Orwell adds a non-novelistic appendix — “The Principles of Newspeak”

” Newspeak was the official language of Oceania (what England and the Americas had become). …  The purpose of Newspeak was not only to provide a medium of expression for the world-view and mental habits proper to the devotees of Ingsoc or English Socialism, but to make all other modes of thought impossible.”  …

“This was done partly by the invention of new words, but chiefly by eliminating undesirable words … ”

“It was intended that when Newspeak had been adopted once and for all and Oldspeak forgotten …  a thought diverging from the principles of Ingsoc — should be literally unthinkable, at least so far as thought is dependent on words.”

Which brings us to 20 December ’22 and an editorial in the Wall Street Journal titled “The Stanford Guide to Acceptable Words” — unfortunately behind a paywall.   Stanford administrators apparently published an index of forbidden words to be eliminated from the school’s websites and computer codes, with inclusive replacements.   Somehow it came to light this month, and Stanford has hidden it because of the hilarity it induced, so we can’t enjoy it.

Fortunately, the WSJ editorial does provide a few examples:

American is to be replaced by U. S. Citizen

Immigrant is to be replaced with person who has immigrated

Master (as in mastering a subject) is out because of its slavery connotations

Not to beat a dead horse is gone because it “normalizes violence against animals”

The list was prefaced with a trigger warning (a phrase no longer to be used): “This website contains language that is offensive or harmful. Please engage with this website at your own pace.”

The editorial concludes by noting that ‘stupid’ is on the list.

How prescient Orwell was

The previous post in the series shows how his idea of doublethink encapsulates what chemists must do to use quantum mechanics to understand atoms.  — https://luysii.wordpress.com/2022/12/27/orwell-does-quantum-mechanics/

The one before that anticipates the 180 degree reversal of COVID advice in China — https://luysii.wordpress.com/2022/12/26/orwell-does-china/

 

A visual proof of the theorema egregium of Gauss

Nothing better illustrates the difference between the intuitive understanding that something is true and being convinced by logic that something is true than the visual proof of the theorema egregium of Gauss found in “Visual Differential Geometry and Forms” by Tristan Needham and the 9 step algebraic proof in “The Geometry of Spacetime” by Jim Callahan.

Mathematicians attempt to tie down the Gulliver of our powerful appreciation of space with Lilliputian strands of logic.

First: some background on the neurology of vision and our perception of space and why it is so compelling to us.

In the old days, we neurologists figured out what the brain was doing by studying what was lost when parts of the brain were destroyed (usually by strokes, but sometimes by tumors or trauma).  This wasn’t terribly logical: pulling the plug on a lamp plunges you into darkness, but the plug has nothing to do with how the lightbulb or LED produces light.  Even so, it was clear that the occipital lobe was important — destroy it on both sides and you are blind — https://en.wikipedia.org/wiki/Occipital_lobe.  But the occipital lobe accounts for only 10% of the gray matter of the cerebral cortex.

The information flowing into your brain from your eyes is enormous.  The optic nerve connecting the eyeball to the brain has a million fibers, and they can fire up to 500 times a second.  If each firing (nerve impulse) is a bit, then that’s an information flow into your brain of half a gigabit per second (10^6 fibers x 500 bits/second).   This information is highly processed by the neurons and receptors in the 10 layers of the retina. Over 30 cell types in our retinas are known, each responding to a different aspect of the visual stimulus.  For instance, there are cells responding to color, to movement in one direction, to a light stimulus turning on, to a light stimulus turning off, etc. etc.

So how does the relatively small occipital lobe deal with this? It doesn’t.  At least half of your brain responds to visual stimuli.  How do we know?   It’s complicated, but something called functional Magnetic Resonance Imaging (fMRI) is able to show us increased neuronal activity, primarily by the increase in blood flow it causes.

Given that half of your brain is processing what you see, it makes sense to use it to ‘see’ what’s going on in Mathematics involving space.  This is where Tristan Needham’s books come in.

I’ve written several posts about them.

and Here — https://luysii.wordpress.com/2022/03/07/visual-differential-geometry-and-forms-q-take-3/

OK, so what is the theorema egregium?  Look at any object (say a banana). You can see how curved it is just by looking at its surface (e.g. how it looks in the 3 dimensional space of our existence).  Gauss showed that you don’t even have to look at an object in 3 space; local measurements on the surface (the distances between surface points, i.e. the metric, the metric tensor) suffice.  Curvature is intrinsic to the surface itself, and you don’t have to get outside of the surface (as we are) to find it.

The idea (and the mathematical machinery) has been extended to the 3 dimensional space we live in (something we can’t get outside of).  Is our universe curved or not? To study the question is to determine its intrinsic curvature, by extrapolating the tools Gauss gave us to higher dimensions and comparing the mathematical results with experimental observation. The elephant in the room is general relativity, which would be impossible without this (which is why I’m studying the theorema egregium in the first place).

 

So how does Callahan phrase and prove the theorema egregium? He defines curvature as the ratio of the area of a patch on the unit sphere (traced out by the unit normals, the Gauss map) to the area of the corresponding (small) patch on the surface. If you took some vector calculus, you’ll know that the area spanned by two noncollinear vectors is the magnitude of their cross product.
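In symbols (my notation, not Callahan’s): the Gauss map N sends each point of the surface to its unit normal, regarded as a point on the unit sphere, and the curvature at a point p is the limiting ratio of the two areas as the patch A shrinks down to p; the cross product supplies the areas:

```latex
K(p) \;=\; \lim_{A \to p} \frac{\operatorname{area}\big(N(A)\big)}{\operatorname{area}(A)},
\qquad \operatorname{area}(u, v) \;=\; \lVert u \times v \rVert
```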

The vectors Callahan needs for the cross product are the normal vectors to the surface.  Herein beginneth the algebra. Callahan parameterizes the surface in 3 space from a region in the plane, uses the metric of the surface to determine a formula for the normal vector to the surface at a point (which has 3 components x, y and z, each of which is the sum of 4 elements, each of which is the product of a second order derivative with a first order derivative of the metric). Forming the cross product of the normal vectors and writing it out is an algebraic nightmare.  At this point you know you are describing something called curvature, but you have no clear conception of what curvature is.  But you have a clear definition in terms of the ratio of areas, which soon disappears in a massive (but necessary) algebraic fandango.

On pages 258 – 262 Callahan breaks down the proof into 9 steps involving various mathematical functions of the metric and its derivatives, such as the Christoffel symbols, the Riemann curvature tensor, etc. etc.  It is logically complete, logically convincing, and shows that all this mathematical machinery arises from the metric (intrinsic to the surface) and its derivatives (some as high as third order).

For this we all owe Callahan a great debt.  But unfortunately, although I believe it, I don’t see it.  This certainly isn’t to denigrate Callahan, who has helped me through his book, and whom I consider a friend, having drunk beer with him and his wife while listening to Irish music in a dive bar north of Amherst.

Callahan’s proof is the way Gauss himself did it, and Callahan told me that Gauss didn’t have the notational tools we have today, making the theorem even more outstanding (egregious).

 

Well now, on to Needham’s geometrical proof.  Disabuse yourself of the notion that it won’t involve much intellectual work on your part, even though it uses the geometric intuition you were born with (the green glasses of Immanuel Kant — http://onemillionpoints.blogspot.com/2009/07/kant-for-dummies-green-sunglasses.html)

 

Needham’s definition of curvature uses the angular excess of a triangle.  Angles are measured in radians: the ratio of the arc subtended by the angle to the radius of the circle (not the circumference, as I thought I remembered).  Since the circumference of a circle is 2*pi*radius, radian measure varies from 0 to 2*pi.   So a right angle is pi/2 radians.
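The same thing in symbols:

```latex
\theta \;=\; \frac{s}{r}, \qquad
\text{full circle: } \frac{2\pi r}{r} \;=\; 2\pi, \qquad
\text{right angle: } \frac{2\pi}{4} \;=\; \frac{\pi}{2}
```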

 

Here is a triangle with angular excess.  Start with a sphere of radius r.  Go to the north pole and drop a longitude down to the equator.  It meets the equator at a right angle (pi/2).  Go back to the north pole, form an angle of pi/2 with the first longitude, and drop another longitude at that angle, which meets the equator at an angle of pi/2.   The two points on the equator and the north pole form a triangle, with total internal angles of 3*(pi/2).  In plane geometry we know that the angles of a triangle total pi, i.e. 2*(pi/2).  (Interestingly this depends on the parallel postulate. See if you can figure out why.)  So the angular excess of our triangle is pi/2.  Nothing complicated to understand (or visualize) here.

 

Needham defines the curvature of the triangle (and of any closed area) as the ratio of the angular excess of the triangle to its area.

What is the area of the triangle?  Well, the volume of a sphere is (4/3)*pi*r^3, and its surface area is the derivative of this with respect to r, namely 4*pi*r^2.  The area of the northern hemisphere is 2*pi*r^2, and the area of the triangle just made, one quarter of the hemisphere, is (1/2)*pi*r^2.

So the curvature of the triangle is (pi/2) / (1/2 * pi * r^2) = 1 / r^2.   More to the point, this is the curvature of a sphere of radius r.
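Putting the last three paragraphs together:

```latex
\text{excess} \;=\; 3\cdot\frac{\pi}{2} - \pi \;=\; \frac{\pi}{2}, \qquad
\text{area} \;=\; \frac{1}{4}\cdot 2\pi r^2 \;=\; \frac{\pi r^2}{2}, \qquad
K \;=\; \frac{\pi/2}{\pi r^2/2} \;=\; \frac{1}{r^2}
```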

At this point you should have a geometric intuition of just what curvature is, and how to find it.  So when you are embroiled in the algebra in higher dimensions trying to describe curvature there, you will have a mental image of what the algebra is attempting to describe, rather than just the symbols and machinations of the algebra itself (the Lilliputian strands of logic tying down the Gulliver of curvature).

 

The road from here to the Einstein gravitational field equations (p. 326 of Needham), one I haven’t so far traversed, is presently about 50 pages. Just to get to this point, however, you will have been exposed to comprehensible geometrical expositions of geodesics, holonomy, parallel transport and vector fields, and you should have mental images of them all. Interested?  Be prepared to work, and to reorient how you think about these things if you’ve met them before.  The 3 links mentioned above will give you a glimpse of Needham’s style.  You probably should read them next.

The Chinese Room Argument, Understanding Math and the imposter syndrome

The Chinese Room Argument was first published in a 1980 article by American philosopher John Searle. He imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

 

So it was with me and math as an undergraduate, due to a history dating back to age 10.  I hit college being very good at manipulating symbols whose meaning I was never given to understand.  I grew up 45 miles from the nearest synagogue.  My fanatically religious grandfather thought it was better not to attend services at all than to drive up there on the Sabbath.  My father was a young lawyer building a practice, and couldn’t close his office on Friday.   So he taught me how to read Hebrew letters and reproduce how they sound, so I could read from the Torah at my Bar Mitzvah (which I did, comprehending nothing).  Since I’m musical, learning the cantillations under the letters wasn’t a problem.

 

I’ve always loved math, and solving problems of the plug and chug variety was no problem; I’d become adept at this type of thing years earlier, thanks to my religiously rigid grandfather.   It was the imposter syndrome writ large.  I’ve never felt like this about organic chemistry, which made a good deal of intuitive sense the first time I ran into it.  For why have a look at — https://luysii.wordpress.com/2012/09/11/why-math-is-hard-for-me-and-organic-chemistry-is-easy/

 

If there is anything in math full of arcane symbols calling for lots of mechanical manipulation, it is the differential geometry and tensors needed to understand general relativity.   So I’ve plowed through a lot of it, but still don’t see what’s really going on.

 

Enter Tristan Needham’s book “Visual Differential Geometry and Forms”.  I’ve written about it several times
and Here — https://luysii.wordpress.com/2022/03/07/visual-differential-geometry-and-forms-q-take-3/

 

If you’ve studied any math, his approach will take getting used to, as it’s purely visual and very UNalgebraic.  But what is curvature but a geometric concept?

 

So at present I’m about 80 pages away from completing Needham’s discussion of general relativity.  I now have an intuitive understanding of curvature, torsion, holonomy, geodesics and the Gauss map that I never had before.   It is very slow going, but very clear.  Hopefully I’ll make it to p. 333.  Wish me luck.

Brilliant structural work on the Arp2/3 complex with actin filaments and why it makes me depressed

The Arp2/3 complex of 7 proteins forms side branches on existing actin filaments.  The following paper shows its beautiful structure along with movies.  Have a look — it’s open access. https://www.pnas.org/doi/10.1073/pnas.2202723119.

Why should it make me depressed? Because I could spend the next week studying all the ins and outs of the structure and how it works without looking at anything else.  Similar cryoEM studies of other multiprotein machines are coming out which will take similar amounts of time.  Understanding how single enzymes work is much simpler, although similarly elegant — see Cozzarelli’s early work on topoisomerase.

So I’m depressed because I’ll never understand them to the depth I understand enzymes, DNA, RNA etc. etc.

Also the complexity and elegance of these machines brings back my old worries about how they could possibly have arisen simply by chance with selection acting on them.  So I plan to republish a series of old posts about the improbability of our existence, and the possibility of a creator, which was enough to get me thrown off Nature Chemistry as a blogger.

Enough whining.

Here is why the Arp2/3 complex is interesting.  Actin filaments are long (1,000 – 20,000 Angstroms) and thin (70 Angstroms).  If you want to move a cell forward by having them grow toward its leading edge, growing actin filaments would puncture the membrane like a bunch of needles, hence the need for side branches, making actin filaments a brush-like mesh which can push the membrane forward as it grows.

The Arp2/3 complex has a molecular mass of 225 kiloDaltons, or roughly 2,000 amino acids (at an average of about 110 Daltons per residue), or some 16 thousand atoms.

Arp2 stands for actin related protein 2, a protein similar enough to the normal actin monomer that it can sneak into the filament, as can Arp3.  The other 5 proteins grab actin monomers and start them polymerizing as a branch.

But even this isn’t enough, as Arp2/3 is intrinsically inactive, and multiple classes of nucleation promoting factors (NPFs) are needed to stimulate it.  One such NPF family is the WASP proteins (for Wiskott-Aldrich Syndrome Protein), mutations of which cause the syndrome characterized by hereditary thrombocytopenia, eczema and frequent infections.

The paper’s pictures do not include WASP, just the 7 proteins of the complex snuggling up to an actin filament.

In the complex the Arps are in a twisted conformation, in which they resemble actin monomers rather than filamentous actin subunits, which have a flattened conformation.  After activation, Arp2 and Arp3 mimic the arrangement of two consecutive subunits along the short-pitch helical axis of an actin filament, and each Arp transitions from a twisted (monomer-like) to a flattened (filament-like) conformation.

So look at the pictures and the movies and enjoy the elegance of the work of the Blind Watchmaker (if such a thing exists).

Why there’s more to chemistry than quantum mechanics

As juniors entering the Princeton Chemistry Department as majors in 1958 we were told to read “The Logic Of Modern Physics” by P. W. Bridgman — https://en.wikipedia.org/wiki/The_Logic_of_Modern_Physics.   I don’t remember whether we ever got together to discuss the book with faculty, but I do remember that I found the book intensely irritating.  It was written in 1927, in the early heyday of quantum mechanics.  It said that all you could know was measurements (numbers on a dial if you wish), without any understanding of what went on in between them.

I thought chemists knew a lot more than that.  Here’s Henry Eyring — https://en.wikipedia.org/wiki/Henry_Eyring_(chemist) — developing transition state theory a few years later, in 1935, in the department.  It was pure ideation based on thermodynamics, which was developed long before quantum mechanics and is still pretty much a quantum mechanics free zone of physics (although people are busy at work on the interface).

Henry would have loved a recent paper [ Proc. Natl. Acad. Sci. vol. 118 e2102006118 ’21 ] where the passage of a molecule back and forth across the free energy maximum was measured again and again.

A polyNucleotide hairpin of DNA was connected to double stranded DNA handles in optical traps, where it could fluctuate between folded (hairpin) and unfolded (no hairpin) states.  They could measure just how far apart the handles were, and in the hairpin state the length appears to be 100 Angstroms (10 nanoMeters) shorter than in the unfolded state.

So they could follow the length vs. time and measure the 50 microSeconds or so it took to make the journey across the free energy maximum (i.e. the transition state). A mere 323,495 different transition paths were studied.  You can find much more about the work here — https://luysii.wordpress.com/2022/02/15/transition-state-theory/

Does Bridgman have the last laugh?  Remember, all that is being measured is numbers (lengths) on a dial.

Here’s another recent paper Eyring would have loved — [ Proc. Natl. Acad. Sci. vol. 119 e2112372118 ’22 ] https://www.pnas.org/doi/epdf/10.1073/pnas.2112382119

The paper studied Barnase, a 110 amino acid protein which degrades RNA (much like ribonuclease, the original protein Anfinsen studied years ago).  Barnase is highly soluble and very stable, making it one of the E. coli’s of protein folding studies.

The new wrinkle of the paper is that they were able to study the folding and unfolding and the transition state of single molecules of Barnase at different temperatures (an experiment Eyring would have been unlikely even to think about doing in 1935 when he developed transition state theory, and yet it is exactly the sort of thing he was thinking about, though not about proteins, whose structures were unknown back then).

This allowed them to determine not just the change in free energy (ΔG) between the unfolded state (U), the transition state (TS) and the native state (N) of Barnase, but also the changes in enthalpy (ΔH) and entropy (ΔS) between U and TS and between N and TS.

Remember ΔG = ΔH − TΔS.  A process will occur if ΔG is negative, which is why an increase in entropy is favorable, and why the decrease in entropy between U and TS is unfavorable.   You can find out more about this work here — https://luysii.wordpress.com/2022/03/25/new-light-on-protein-folding/

So the purely mental ideas of Eyring are being confirmed once again (but by numbers on a dial).  I doubt that Eyring would have thought such an experiment possible back in 1935.

Chemists know so much more than quantum mechanics says we can know.  But much of what we do know would be impossible without quantum mechanics.

However, Eyring certainly wasn’t averse to quantum mechanics, having written a textbook, Quantum Chemistry, with Walter and Kimball on the very subject in 1944.

How Infants learn language – V

Infants don’t learn language the way neural nets do. Unlike the training of a net, their learning involves no feedback, which, amazingly, makes it faster.

As is typical of research in psychology, the hard part is thinking of something clever to do, rather than actually carrying it out.

[ Proc. Natl. Acad. Sci. vol. 117 pp. 26548 – 26549 ’20 ] is a short interview with psychologist Richard N. Aslin. Here’s a link — hopefully not behind a paywall — https://www.pnas.org/content/pnas/117/43/26548.full.pdf.

He was interested in how babies pull out words from a stream of speech.

He took a commonsense argument and ran with it.

“The learning that I studied as an undergrad was reinforcement learning—that is, you’re getting a reward for responding to certain kinds of input—but it seemed that that kind of learning, in language acquisition, didn’t make any sense. The mother is not saying, “listen to this word…no, that’s the wrong word, listen to this word,” and giving them feedback. It’s all done just by being exposed to the language without any obvious reward”

So they performed an experiment whose results surprised them. They made a ‘language’ of speech sounds which weren’t words and presented them, 4 per second for a few minutes, to 8-month-old infants. There was an underlying statistical structure, as certain sounds were more likely to follow another one, others less likely. That’s it. No training. No feedback. No nothin’, just a sequence of sounds. Then they presented sequences (from the same library of sounds) which the baby hadn’t heard before, and the baby recognized them as different. The interview didn’t say how they knew the baby was recognizing them, but my guess is that they used the mismatch negativity brain potential, which automatically arises in response to novel stimuli.
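The statistical structure involved is nothing more than the transition probabilities between adjacent sounds. Here is a toy illustration in Python (the syllables and the three ‘words’ are made up for illustration; the real stimuli are described in Aslin’s papers):

```python
import random
from collections import Counter

def transition_probs(stream):
    """Estimate P(next syllable | current syllable) from a syllable stream."""
    pairs = list(zip(stream, stream[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(a for a, _ in pairs)
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Three hypothetical 'words'; the stream is words in random order, with no pauses.
words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "bu"]]
stream = [syllable for _ in range(300) for syllable in random.choice(words)]

p = transition_probs(stream)
print(p[("bi", "da")])             # 1.0: within a word the next syllable is certain
print(p.get(("ku", "pa"), 0.0))    # ~0.33: across a word boundary it isn't
```

A listener tracking nothing but these conditional probabilities can pull the ‘words’ out of the stream, with no reward or feedback anywhere; that is the point of the experiment.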

Had you ever heard of this? I hadn’t, but the references to the author’s papers go back to 1996!  Time for someone to replicate this work.

So our brains have an innate ability to measure statistical probability of distinct events occurring. Even better we react to the unexpected event. This may be the ‘language facility’ Chomsky was talking about half a century ago. Perhaps this innate ability is the origin of music, the most abstract of the arts.

How infants learn language is likely inherently fascinating to many, not just neurologists.

Here are links to some other posts on the subject you might be interested in.

https://luysii.wordpress.com/2013/06/03/how-infants-learn-language-iv/

https://luysii.wordpress.com/2011/10/10/how-infants-learn-language-iii/

https://luysii.wordpress.com/2010/10/03/how-infants-learn-language-ii/

https://luysii.wordpress.com/2010/09/30/how-infants-learn-language/

Philip Anderson, 1923 – 2020, R. I. P.

Phil Anderson probably never heard of Ludwig Mies van der Rohe, he of the Bauhaus and his famous dictum ‘less is more’, so he probably wasn’t riffing on it when he wrote “More Is Different” in August of 1972 [ Science vol. 177 pp. 393 – 396 ’72 ] — https://science.sciencemag.org/content/sci/177/4047/393.full.pdf.

I was just finishing residency and found it a very unusual paper for Science Magazine.  His Nobel was 5 years away, but Anderson was of sufficient stature that Science published it.  The article was a nonphilosophical attack on reductionism with lots of hard examples from solid state physics. It is definitely worth reading, if the link will let you.  The philosophic repercussions are still with us.

He notes that most scientists are reductionists.  He puts it this way ” The workings of our minds and bodies and of all the matter animate and inanimate of which we have any detailed knowledge, are assumed to be controlled by the same set of fundamental laws, which except under extreme conditions we feel we know pretty well.”

So many-body physics/solid state physics obeys the laws of particle physics, chemistry obeys the laws of many-body physics, molecular biology obeys the laws of chemistry, and onward and upward to psychology and the social sciences.

What he attacks is what appears to be a logical correlate of this, namely that understanding the fundamental laws allows you to derive from them the structure of the universe in which we live (including ourselves).   Chemistry really doesn’t predict molecular biology, and cellular molecular biology doesn’t really predict the existence of multicellular organisms.  This is because new phenomena arise at each level of increasing complexity, for which laws (e.g. regularities) appear which can’t be explained by reduction to the next more fundamental level below.

Even though the last 48 years of molecular biology and biophysics have shown us a lot of new phenomena, they really weren’t predictable.  So they are a triumph of reductionism, and yet —

As soon as you get into biology you become impaled on the horns of the Cartesian dualism of flesh vs. spirit.  As soon as you ask what something is ‘for’, you realize that reductionism can’t help.  As an example I’ll repost an old one, in which reductionism tells you exactly how something happens but is absolutely silent on what that something is ‘for’.

The limits of chemical reductionism

“Everything in chemistry turns blue or explodes”, so sayeth a philosophy major roommate years ago.  Chemists are used to being crapped on, because it starts so early and never lets up.  However, knowing a lot of organic chemistry and molecular biology allows you to see very clearly one answer to a serious philosophical question — when and where does scientific reductionism fail?

Early on, physicists said that quantum mechanics explains all of chemistry.  Well it does explain why atoms have orbitals, and it does give a few hints as to the nature of the chemical bond between simple atoms, but no one can solve the equations exactly for systems of chemical interest.  Approximate the solution, yes, but this is hardly a pure reduction of chemistry to physics.  So we’ve failed to reduce chemistry to physics because the equations of quantum mechanics are so hard to solve, but this is hardly a failure of reductionism.

The last post “The death of the synonymous codon – II” — https://luysii.wordpress.com/2011/05/09/the-death-of-the-synonymous-codon-ii/ –puts you exactly at the nidus of the failure of chemical reductionism to bag the biggest prey of all, an understanding of the living cell and with it of life itself.  We know the chemistry of nucleotides, Watson-Crick base pairing, and enzyme kinetics quite well.  We understand why less transfer RNA for a particular codon would mean slower protein synthesis.  Chemists understand what a protein conformation is, although we can’t predict it 100% of the time from the amino acid sequence.  So we do understand exactly why the same amino acid sequence using different codons would result in slower synthesis of gamma actin than beta actin, and why the slower synthesis would allow a more leisurely exploration of conformational space allowing gamma actin to find a conformation which would be modified by linking it to another protein (ubiquitin) leading to its destruction.  Not bad.  Not bad at all.

Now ask yourself, why the cell would want to have less gamma actin around than beta actin.  There is no conceivable explanation for this in terms of chemistry.  A better understanding of protein structure won’t give it to you.  Certainly, beta and gamma actin differ slightly in amino acid sequence (4/375) so their structure won’t be exactly the same.  Studying this till the cows come home won’t answer the question, as it’s on an entirely different level than chemistry.

Cellular and organismal molecular biology is full of questions like that, but gamma and beta actin are the closest chemists have come to explaining the disparity in the abundance of two closely related proteins on a purely chemical basis.

So there you have it.  Physicality has gone as far as it can go in explaining the mechanism of the effect, but has nothing to say whatsoever about why the effect is present.  It’s the Cartesian dualism between physicality and the realm of ideas, and you’ve just seen the junction between the two live and in color, happening right now in just about every cell inside you.  So the effect is not some trivial toy model someone made up.

Whether philosophers have the intellectual cojones to master all this chemistry and molecular biology is unclear.  Probably no one has tried (please correct me if I’m wrong).  They are certainly capable of mounting intellectual effort — they write book after book about Gödel’s proof and the mathematical logic behind it. My guess is that they are attracted to such things because logic and math are so definitive, general and nonparticular.

Chemistry and molecular biology aren’t general this way.  We study a very arbitrary collection of molecules, which must simply be learned and dealt with. Amino acids are of one chirality. The alpha helix turns one way and not the other.  Our bodies use 20 particular amino acids not any of the zillions of possible amino acids chemists can make.  This sort of thing may turn off the philosophical mind which has a taste for the abstract and general (at least my roommates majoring in it were this way).

If you’re interested in how far reductionism can take us  have a look at http://wavefunction.fieldofscience.com/2011/04/dirac-bernstein-weinberg-and.html

Were my two philosopher roommates still alive, they might come up with something like “That’s how it works in practice, but how does it work in theory?”