Tag Archives: Roger Penrose

Fiber bundles at last

As an undergraduate, I loved looking at math books in the U-store.  They had a wall of them back then; now it’s mostly swag.  The title of one book by a local prof threw me — The Topology of Fiber Bundles.

Decades later I found that to understand serious physics you had to understand fiber bundles.

It was easy enough to memorize the definition, but I had no concept of what they really were until I got to page 387 of Roger Penrose’s marvelous book “The Road to Reality”.  It’s certainly not a book to learn physics from for the first time.  But if you have some background (say, just from reading physics popularizations), it will make things much clearer, and will (usually) give you a different and deeper perspective on it.

Consider a long picket fence.  Each picket is just like every other, but different, because each has its own place.  The pickets are the fibers, and the line in the ground on which they sit is called the base space.
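For readers who like a formula next to the picture, here is one way to phrase the fence in standard fiber-bundle notation (the notation is mine; Penrose develops the machinery later in the book).  The fence is the simplest possible case, a product bundle:

```latex
% The picket fence as a (trivial) fiber bundle: the line on the ground is the
% base space B, each picket is a copy of the fiber F, and the projection pi
% sends any point of a picket down to the spot where that picket stands.
\[
  \pi : E \to B, \qquad E = B \times F, \qquad
  \pi^{-1}(b) \cong F \quad \text{for every } b \in B .
\]
```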

What does that have to do with our 3 dimensional world and its time?

Everything.

So you’re sitting at your computer looking at this post.  Nothing changes position as you do so.  The space between you and the screen  is the same.

But the 3 dimensional space you’re sitting in is different at every moment, just as the pickets are different at every position on the fence line.

Why?  Because you’re sitting on earth.  The earth is rotating, the solar system is revolving around the galactic center, which is itself moving toward the center of the local galactic cluster.

Penrose shows that this is exactly the type of space implied by Galilean relativity.  (Yes, Galileo conceived of relativity long before Einstein.)  Best to let Galileo speak for himself.  It’s a long quote but worth reading.

“Shut yourself up with some friend in the main cabin below decks on some large ship, and have with you there some flies, butterflies, and other small flying animals. Have a large bowl of water with some fish in it; hang up a bottle that empties drop by drop into a wide vessel beneath it. With the ship standing still, observe carefully how the little animals fly with equal speed to all sides of the cabin. The fish swim indifferently in all directions; the drops fall into the vessel beneath; and, in throwing something to your friend, you need throw it no more strongly in one direction than another, the distances being equal; jumping with your feet together, you pass equal spaces in every direction. When you have observed all these things carefully (though doubtless when the ship is standing still everything must happen in this way), have the ship proceed with any speed you like, so long as the motion is uniform and not fluctuating this way and that. You will discover not the least change in all the effects named, nor could you tell from any of them whether the ship was moving or standing still. In jumping, you will pass on the floor the same spaces as before, nor will you make larger jumps toward the stern than toward the prow even though the ship is moving quite rapidly, despite the fact that during the time that you are in the air the floor under you will be going in a direction opposite to your jump. In throwing something to your companion, you will need no more force to get it to him whether he is in the direction of the bow or the stern, with yourself situated opposite. The droplets will fall as before into the vessel beneath without dropping toward the stern, although while the drops are in the air the ship runs many spans. The fish in their water will swim toward the front of their bowl with no more effort than toward the back, and will go with equal ease to bait placed anywhere around the edges of the bowl. Finally the butterflies and flies will continue their flights indifferently toward every side, nor will it ever happen that they are concentrated toward the stern, as if tired out from keeping up with the course of the ship, from which they will have been separated during long intervals by keeping themselves in the air. And if smoke is made by burning some incense, it will be seen going up in the form of a little cloud, remaining still and moving no more toward one side than the other. The cause of all these correspondences of effects is the fact that the ship’s motion is common to all the things contained in it, and to the air also. That is why I said you should be below decks; for if this took place above in the open air, which would not follow the course of the ship, more or less noticeable differences would be seen in some of the effects noted.”

I’d read this many times, but Penrose’s discussion draws out what Galileo is implying.  “Clearly we should take Galileo seriously.  There is no meaning to be attached to the notion that any particular point in space a minute from now is to be judged as the same point in space that I have chosen.  In Galilean dynamics we do not have just one Euclidean 3-space as an arena for the actions of the physical world evolving with time, we have a different E^3 for each moment in time, with no natural identification between these various E^3’s.”

Although it seems obvious to us that the points of our space retain their identity from one moment to the next, they don’t.
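Putting Penrose’s point in the same notation (again my paraphrase, not a quote from the book): Galilean spacetime is a bundle over the time axis whose fibers are the successive copies of Euclidean 3-space, and the bundle comes with no preferred splitting into space and time:

```latex
% Galilean spacetime G fibered over the time axis: each instant t carries its
% own copy of Euclidean 3-space, and no product structure G = R x E^3 is
% singled out that would let us say "the same point, a minute later."
\[
  \pi : \mathcal{G} \to \mathbb{R}, \qquad
  \pi^{-1}(t) \cong \mathbb{E}^{3} \quad \text{for each instant } t,
\]
\[
  \text{with no canonical identification } \pi^{-1}(t_1) \to \pi^{-1}(t_2)
  \quad \text{for } t_1 \neq t_2 .
\]
```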

Penrose’s book is full of wonderful stuff like this.  However, all is not perfect.  Physics Nobelist Frank Wilczek, in his review of the book [ Science vol. 307 pp. 852 – 853 ], notes that “The worst parts of the book are the chapters on high energy physics and quantum field theory, which in spite of their brevity contain several serious blunders.”

However, all the math is fine, and Wilczek says “the discussions of the conformal geometry of special relativity and of spinors are real gems.”

Since Penrose doesn’t even get to quantum mechanics until p. 493 (of 1049), there is a lot to chew on (without worrying about anything other than the capability of your intellect).

 

A premature book review and a 60 year history with complex variables in 4 acts

“Visual Differential Geometry and Forms” (VDGF) by Tristan Needham is an incredible book.  Here is a premature review, written after getting through only the first 82 of its 464 pages of text.

Here’s why.

While mathematicians may try to tie down the visual Gulliver with Lilliputian strands of logic, there is always far more information in visual stimuli than logic can appreciate.  There is no such thing as a pure visual percept (a la Bertrand Russell), as visual processing begins within the 10 layers of the retina and continues on from there.  Remember: half your brain is involved in processing visual information.  Which is a long-winded way of saying that Needham’s visual approach to curvature and other geometric constructs is an excellent idea.

Needham loves complex variables and geometry, and his book is full of pictures (probably on 50% of the pages).

My history with complex variables goes back over 60 years and occurs in 4 acts.

 

Act I:  Complex variable course as an undergraduate.  Time: late ’50s.  Instructor: Raymond Smullyan, a man who, while in this world, was definitely not of it.  He really wasn’t a bad instructor, but he appeared to be thinking about something else most of the time.

 

Act II: Complex variable course at Rocky Mountain College, Billings, Montana.  Time: early ’80s.  The instructor, an MIT PhD, was excellent.  Unfortunately I can’t remember his name.  I took complex variables again because I’d been knocked out for probably 30 minutes the previous year and wanted to see if I could still think about the hard stuff.

 

Act III: 1999.  The publication of Needham’s first book — Visual Complex Analysis.  Absolutely unique at the time, full of pictures, and with a glowing recommendation from Roger Penrose, Needham’s PhD advisor.  I read parts of it, but really didn’t appreciate it.

 

Act IV: 2021.  The publication of Needham’s second book, and the subject of this partial review.  Just what I wanted after studying differential geometry with a view to really understanding general relativity, so I could read a classmate’s book on the subject.  It’s just like VCA, and I got through 82 pages or so before realizing I should go back through the relevant parts (several hundred pages) of VCA again, which is where I am now.  Euclid is all you need for the geometry of VCA, but any extra math you know won’t hurt.

 

I can’t recommend both strongly enough, particularly if you’ve been studying differential geometry and physics.  There really is a reason for saying “I see it” when you understand something.

 

Both books are engagingly and informally written, and I can’t recommend them enough (well at least the first 82 pages of VDGF).

 

How do neural nets do what they do?

Isn’t it enough that neural nets beat humans at chess, go and checkers, drive cars, recognize faces, find out what plays Shakespeare did and didn’t write?  Not at all.  Figuring out how they do what they do may allow us to figure out how the brain does what it does.

Science recently had a great bunch of articles on neural nets and deep learning [ Science vol. 356 pp. 16 – 30 ’17 ].  Chemists will be interested in p. 27, “Neural networks learn the art of chemical synthesis”.  The articles are quite accessible to the scientific layman.

To this retired neurologist, the most interesting of the bunch was the article (pp. 22 – 27) describing attempts to figure out how neural nets do what they do.  Welcome to the world of the neuroscientist, where a similar problem has engaged us for centuries.  DARPA is spending $70 million on exactly this, according to the article.

If you are a little shaky on all this, I’ve copied a previous post on the subject (along with a few comments it inspired) below the ***.

Here are four techniques currently in use:

  1. Counterfactual probes — the classic black box technique — vary the input (text, images, sound, …) and watch how the output changes.  This goes by the fancy name of Local Interpretable Model-agnostic Explanations (LIME), and it identifies the parts of the input that mattered most to the net’s original judgment.  (A toy sketch of the idea appears just after this list.)
  2. Start with a black image or a zeroed out array of text and transition step by step toward the example being tested.  Then you watch the jumps in certainty the net makes, and you can figure out what it thinks is important.
  3. Generalized Additive Model (GAM) — a statistical technique based on linear regression that operates on the data to massage it.  The net is then presented with a variety of GAM operations and studied to see which of them massage the data best, so the machine can make a correct decision.
  4. Glass Box wires monotonic relationships (e.g., the price of a house goes up with the number of square feet) INTO the neural net — allowing better control of what it does.
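Here is the toy sketch promised in item 1: an occlusion-style counterfactual probe in Python.  The `predict` function is a made-up stand-in for a trained classifier (it just scores the brightness of the center of the image), not anything from the Science articles; the point is only the pattern of perturbing the input and watching the output.

```python
# Minimal sketch of a counterfactual probe, assuming a generic image
# classifier exposed as a predict(image) -> score function.
import numpy as np

def predict(image):
    # Hypothetical stand-in for a trained net: scores the brightness
    # of the central region.  A real probe would call the real model.
    return image[12:20, 12:20].mean()

def occlusion_map(image, predict, patch=8):
    """Blank out one patch at a time and record how much the score drops."""
    base = predict(image)
    h, w = image.shape
    importance = np.zeros_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = 0.0   # the counterfactual input
            importance[i:i + patch, j:j + patch] = base - predict(perturbed)
    return importance   # large values mark regions the net relied on

image = np.random.rand(32, 32)
print(occlusion_map(image, predict))
```

LIME proper fits a simple local surrogate model to many such perturbations rather than blanking patches one at a time, but the perturb-and-watch idea is the same.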

The articles don’t appear to be behind a paywall, so have at it.

***

NonAlgorithmic Intelligence

Penrose was right.  Human intelligence is nonAlgorithmic.  But that doesn’t mean that our physical brains produce consciousness and intelligence using quantum mechanics (although all matter is what it is because of quantum mechanics).  The parts (even small ones like neurotubules) contain so much mass that their associated quantum wavelength is far too small for them to exhibit quantum mechanical effects.  Here Penrose got roped in by Stuart Hameroff into thinking that neurotubules were the carriers of the quantum mechanical indeterminacy.  They aren’t; they are just too big.  The dimer of alpha and beta tubulin contains 900 amino acids — a mass of around 90,000 Daltons (or 90,000 hydrogen atoms — which are small enough to show quantum mechanical effects).
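To put a rough number on “too big”: here is a back-of-the-envelope calculation (mine, not Penrose’s) comparing the thermal de Broglie wavelength of a hydrogen atom with that of a ~90,000 Dalton tubulin dimer at body temperature.  The 100 Dalton average residue mass is an assumption chosen to match the figure above.

```python
# Back-of-the-envelope comparison of thermal de Broglie wavelengths.
# The residue count and average residue mass are rough assumptions.
import math

h   = 6.626e-34   # Planck constant, J*s
kB  = 1.381e-23   # Boltzmann constant, J/K
amu = 1.661e-27   # one Dalton in kg
T   = 310.0       # body temperature, K

def thermal_wavelength(mass_kg, temp=T):
    """lambda = h / sqrt(2 * pi * m * kB * T)"""
    return h / math.sqrt(2 * math.pi * mass_kg * kB * temp)

masses = {
    "hydrogen atom (1 Da)":                   1 * amu,
    "tubulin dimer (900 residues x ~100 Da)": 900 * 100 * amu,
}
for name, m in masses.items():
    print(f"{name}: {thermal_wavelength(m) * 1e9:.1e} nm")
# Hydrogen comes out near 0.1 nm (atomic scale); the dimer comes out near
# 3e-4 nm, thousands of times smaller than the ~8 nm protein itself.
```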

So why was Penrose right?  Because neural nets, which are inherently nonAlgorithmic, are showing intelligent behavior.  AlphaGo, which beat the world champion, is the most recent example, but others include facial recognition and image classification [ Nature vol. 529 pp. 484 – 489 ’16 ].

Nets are trained on real world images and told whether they are right or wrong. I suppose this is programming of a sort, but it is certainly nonAlgorithmic. As the net learns from experience it adjusts the strength of the connections between its neurons (synapses if you will).
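If you want to see what “adjusting the strength of the connections” looks like in the simplest possible case, here is a toy single-neuron example in Python.  It uses a perceptron-style update on made-up data and implies nothing about AlphaGo’s actual architecture.

```python
# A single artificial neuron whose connection strengths (weights) are
# nudged each time it is told whether its answer was right or wrong.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))              # 200 made-up two-feature examples
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # the "right answer" for each one

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                        # repeated experience
    for x, target in zip(X, y):
        guess = float(w @ x + b > 0)
        error = target - guess             # told whether it was right
        w += lr * error * x                # synaptic strengths adjusted
        b += lr * error
print("learned weights:", w, "bias:", b)
```

Scale this up to millions of weights arranged in many layers and you have the situation described next: the weights are all there to be printed out, but the printout explains nothing.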

So it should be a simple matter to find out just how AlphaGo did it — just get a list of the neurons it contains, and the number and strengths of the synapses between them. I can’t find out just how many neurons and connections there are, but I do know that thousands of CPUs and graphics processors were used. I doubt that there were 80 billion neurons or a trillion connections between them (which is what our brains are currently thought to have).

Just print out the above list (assuming you have enough paper) and look at it.  Will you understand how AlphaGo won?  I seriously doubt it.  You will understand it even less well than a list of the positions and momenta of 80 billion gas molecules tells you the gas’s pressure and temperature.  Why?  Because in statistical mechanics you assume that the particles making up an ideal gas are featureless, identical and do not interact with each other.  This isn’t true for neural nets.
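The contrast with the gas is worth making concrete.  For an ideal gas the whole list of molecular velocities collapses into two numbers.  The sketch below (particle count, gas species and box size are arbitrary choices of mine) shows the reduction that statistical mechanics buys you; nothing analogous exists for a net’s weights.

```python
# For identical, non-interacting particles a huge list of velocities
# collapses into a temperature and a pressure.
import numpy as np

kB = 1.381e-23        # Boltzmann constant, J/K
m  = 6.63e-26         # mass of one argon atom, kg
N  = 100_000          # far fewer than 80 billion, but the same idea
V  = 1e-3             # box volume, m^3

rng = np.random.default_rng(0)
# Maxwell-Boltzmann velocity components for a gas actually at 300 K
v = rng.normal(scale=np.sqrt(kB * 300.0 / m), size=(N, 3))

T = m * np.mean(np.sum(v**2, axis=1)) / (3 * kB)   # temperature from <v^2>
P = N * kB * T / V                                 # ideal gas law
print(f"T ~ {T:.0f} K,  P ~ {P:.2e} Pa")
# No such two-number summary exists for a neural net, whose "particles"
# (the weights) are neither identical nor non-interacting.
```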

It also isn’t true for the brain. Efforts are underway to find a wiring diagram of a small area of the cerebral cortex. The following will get you started — https://www.quantamagazine.org/20160406-brain-maps-micron-program-iarpa/

Here’s a quote from the article to whet your appetite.

“By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.”

I don’t think this will help us understand how the brain works any more than the above list of neurons and connections from AlphaGo.  There are even more problems with such a list.  Connections (synapses) between neurons come and go (and they increase and decrease in strength, as in the neural net).  Some connections turn on the receiving neuron, some turn it off.  I don’t think there is a good way to tell what a given connection is doing just by looking at a slice of it under the electron microscope.  Lastly, some of our most human attributes (emotion) are due not to connections between neurons but to the release of neurotransmitters into the brain generally, rather than at the very localized synapse, so it won’t show up on a wiring diagram.  This is called volume neurotransmission, and the transmitters involved are serotonin, norepinephrine and dopamine.  Not convinced?  Among the agents modifying volume neurotransmission are cocaine, amphetamine, antidepressants and antipsychotics.  Fairly important.

So I don’t think we’ll ever truly understand how the neural net inside our head does what it does.

Here are a few of the comments:

“So why was Penrose right? Because neural nets which are inherently nonAlgorithmic are showing intelligent behavior. ”

I picked up a re-post of this comment on Quanta and thought it best to reply to you directly. Though this appears to be your private blog I can’t seem to find a biography, otherwise I’d address you by name.

My background is computer science generally and neural networks (with the requisite exposure to statistical mechanics) in particular and I must correct the assertion you’ve made here; neural nets are in fact both algorithmic and even repeatable in their performance.

I think what you’re trying to say is the structure of a trained network isn’t known by the programmer in advance; rather than build a trained intelligence, the programmer builds an intelligence that may be trained. The method is entirely mathematical though, mostly based on the early work of Boltzmann and explorations of the Monte-Carlo algorithms used to model non-linear thermodynamic systems.

For a good overview of the foundations, I suggest J.J. Hopfield’s “Neural networks and physical systems with emergent collective computational abilities”, http://www.pnas.org/content/79/8/2554.abstract

Regards,
Scott.

Scott — thanks for your reply. I blog anonymously because of the craziness on the internet. I’m a retired neurologist, long interested in brain function both professionally and esthetically. I’ve been following AI for 50 years.

I even played around with LISP, which was the language of AI in the early days, read Minsky on Perceptrons, worried about the big Japanese push in AI in the 80’s when they were going to eat our lunch etc. etc.

I think you’d agree that compared to LISP and other AI of that era, neural nets are nonAlgorithmic. Of course setting up virtual neurons DOES involve programming.

The analogy with the brain is near perfect. You can regard our brains as the output of the embryologic programming with DNA as the tape.

But that’s hardly a place to stop. How the brain and neural nets do what they do remains to be determined. A wiring diagram of the net is available but really doesn’t tell us much.

Again thanks for responding.

Scott

Honestly it would be hard for me to accept that the nets I worked on weren’t algorithmic since they were literally based on formal algorithms derived directly from statistical mechanics, most of which was based on Boltzmann’s work back in the 19th century.  Hopfield, who I truly consider the “father” of modern neural computing, is a physicist (now at Princeton I believe).  Most think he was a computer scientist back in the ’70s when he did his basic work at Cal Tech, but that’s not really the case.

I understand what you’re trying to say, that the actual training portion of NN development isn’t algorithmic, but the NN software itself is and it’s extremely precise in its structure, much more so than say, for instance, a bubble sort. It’s pretty edgy stuff even now.

I began working on NN’s in ’82 after reading Hopfield’s seminal paper; I was developing an AI aimed at self-diagnosing computer systems for a large computer manufacturer now known as Hewlett-Packard (at the time we were a much smaller R&D company that was later acquired).  We also explored expert systems and ultimately deployed a solution based on KRL, which is a LISP development environment built by a small Stanford AI spinoff.  It ended up being a dead end; that was an argument I lost (I advocated the NN direction as much more promising but lost mostly for political reasons).  Now I take great pleasure in gloating 🙂 even though I’m no longer commercially involved with either the technology or that particular company.

Luysii

I thought we were probably in agreement about what I said.  Any idea on how to find out just how many ‘neurons’ (and how many levels) there are in AlphaGo?  It would be interesting to compare the numbers with our current thinking about the numbers of cortical neurons and synapses (which grow ever larger year by year).

Who is the other Scott — Aaronson?  100 years ago he would have been a Talmudic scholar (as he implies by the title of his blog).

Yes neural nets are still edgy, and my son is currently biting his fingernails in a startup hoping to be acquired which is heavily into an application of neural nets (multiple patents etc. etc.)
