Tag Archives: Quantum mechanics

Quantum Field Theory as Simply as Possible

The following is my review published on Amazon — the other 25 reviews are interesting and somewhat divergent — here’s a link to them all — https://www.amazon.com/Quantum-Field-Theory-Simply-Possible/dp/0691174296#customerReviews.

 

 

I’ve put in two further thoughts I left out to keep the Amazon review to a reasonable length (Addendums 1 and 2).

 

Disclaimer: I have probably spent more time with Quantum Field Theory As Simply As Possible (QFTASAP) than most, as I was a lay reader providing commentary to Dr. Zee as he was writing it. I came to know Dr. Zee after he responded to some questions I had while going through another of his books, Group Theory in a Nutshell for Physicists. We’ve corresponded for years about math, physics, medicine, biology and life, both of us turning out to be grumpy old Princeton alums (1960 and 1966). If this makes me a friend of his, despite our never having met and living on opposite coasts, and if that precludes this review appearing on Amazon, so be it.

Should you buy the book? It depends on two things: (1) your ability and (2) your background. Dr. Zee spends a lot of time in the preface describing the rather diverse collection of people he is writing for. “I am particularly solicitous of the young, the future physicists of the world. .. delighted beyond words if some college students, or even a few high school students are inspired by this book .. “

Addendum 1: In retirement, I met one such high school student while auditing an abstract algebra course at a local college.  He was simultaneously doing his German homework while listening to the lecture with one ear.  He did not go on in physics but did get a double summa in math and physics at an Ivy League university (not Princeton, despite my attempts to get him to go there).

 

I fall into another class of reader for QFTASAP which the author mentions: “scientists, engineers, medical doctors, lawyers and other professionals… Quite a few are brave enough to tackle my textbooks. I applaud these older readers, and address them as I write.”

So you need to know the background I bring to the book to put what I say in perspective. I am a retired neurologist. I did two years of graduate work in chemistry (’60 – ’62) before going to medical school. All grad students in chemistry back then took quantum mechanics, solving the Schrodinger equation to see where atomic (and molecular) orbitals come from.

At my 50th reunion, I met a classmate I didn’t know as an undergraduate, Jim Hartle, a world class relativist still writing papers with Hawking, so I decided to try and learn relativity so I’d have something intelligent to say to him if we met at another reunion. I studied his book on Gravitation (and Dr. Zee’s). Unfortunately COVID19 has stopped my attendance at reunions. To even begin to understand gravitation (which is what general relativity is about), you must first study special relativity. So my background was perfect for QFTASAP, as quantum field theory (QFT) tries to merge special (not general) relativity and quantum mechanics.

Do not despair if you have neither background as Dr. Zee starts the book explaining both in the first 99 pages or so. His style is very informal, with jokes, historical asides, and blinding clarity. As a retired MD I can’t speak to how accurate any of it is, but the publisher notes that his textbooks have been used at MIT and Cal Tech, which is good enough for me.

So quantum mechanics and special relativity get you to base camp for the intellectual ascent to QFT — which takes the rest of the book (to page 342). If this sounds daunting, remember that physics majors’ and physics grad students’ QFT courses last a year (according to Dr. Zee). So read a little bit at a time.

The big leap (for me) was essentially abandoning the idea of force and thinking about action (which was totally unfamiliar). Much of the math, as a result, is also unfamiliar, and even if you took calculus, the integrals you will meet look like nothing you’ve ever seen.
e.g. ∫ d⁴x e ψ̄(x) γ^μ ψ(x) A_μ(x)
Fortunately, on p. 154 Dr. Zee says “I have to pause to teach you how to read this hieroglyphic.” This is very typical of his informal and friendly teaching style.
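For orientation (my gloss, not Dr. Zee’s), that expression is the interaction term of quantum electrodynamics, which in standard notation reads

\int d^4x \; e\, \bar{\psi}(x)\, \gamma^\mu \psi(x)\, A_\mu(x)

where ψ(x) is the electron field, ψ̄(x) its Dirac adjoint, γ^μ the Dirac matrices, A_μ(x) the photon field, and e the electron charge, which sets the strength of the coupling.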

QFTASAP contains all sorts of gems which deepened my understanding of stuff I’d studied before. For example, Dr. Zee shows how special relativity demolishes the notion of simultaneity, then he goes even farther and explains how this implies the existence of antiparticles. Once you get integrals like the above under your belt, he gives a coherent explanation of where and how the idea of the expanding universe comes from and how it looks mathematically.

There is much, much more: gauge theory, Yang Mills, the standard model of particle physics etc. etc.

To a Princetonian, some of the asides are fascinating. One in particular tells you why you or your kid should want to go there (spoiler alert — not to meet the scion of a wealthy family, or an heiress, nor to form connections which will help you in your career). He mentions that there was an evening seminar for physics majors given by a young faculty member (33) about recent discoveries in physics. In 1964 the same young prof (James Cronin) said that he had discovered something exciting — in 1980 he got the Nobel for it. For my part, it was John Wheeler (he of the black hole, wormholes etc) teaching premeds and engineers (not future physicists) freshman physics and bringing in Niels Bohr to talk to us. So go to Princeton for the incredible education you will get, and the way Princeton exposes its undergraduates to its very best faculty.

Addendum #2 — As a Princeton chemistry major, I had Paul Schleyer (Princeton ’52, Harvard PhD ’56) as my undergraduate adviser. We spent a lot of time together in his lab, and would sometimes go out for pizza after finishing up in the lab of an evening. For what working with him was like please see — https://luysii.wordpress.com/2014/12/15/paul-schleyer-1930-2014-a-remembrance/ and https://luysii.wordpress.com/2014/12/14/paul-schleyer-1930-2014-r-i-p/

Contrast this with Harvard, where I did chemistry graduate work from ’60 – ’62.  None of the 7 people in the department back then who later won the Nobel Prize (Woodward, Corey, Hoffmann, +4 more) did any undergraduate teaching.  I did most of the personal teaching the Harvard undergrads got — 6 hours a week as a teaching assistant in the organic chemistry lab.  I may have been good, but I was nowhere near as good as I would have been had I stayed in the field for 8 more years.  I thought the Harvard students were basically cheated.

I guess every review should have a quibble, and I do; but it’s with the publisher, not Dr. Zee. The whole book is one mass of related concepts and is filled with forward and backward references to text, figures and diagrams. Having a page to go to instead of Chapter III, 1 or figure IV.3.2 would make reading much easier. Only the publisher could do this once the entire text has been laid out.

One further point. QFTASAP clarified for me the differences between the (substantial) difficulties of medicine and the (substantial) difficulties of theoretical physics. When learning medicine you are exposed to thousands of unrelated (because we don’t understand what lies behind them) facts. That’s OK because you don’t need to remember all of them. Ask the smartest internist you know to name the 12 cranial nerves or the 8 bones of the wrist. The facts of theoretical physics are far fewer, but you must remember, internalize and use them — that’s why QFTASAP contains all these forward and backward references.

There is a ton more to say about the book and I plan to write more as I go through the book again. If interested, just Google Chemiotics II now and then. QFTASAP is definitely worth reading more than once.

 


Book Review: Hawking Hawking

To this neurologist, Stephen Hawking’s greatest contribution wasn’t in physics. I ran a muscular dystrophy clinic for 15 years in the 70s and 80s. Few of my ALS patients had heard of Hawking back then. I made sure they did. Hawking did something for them, that I could never do as a physician — he gave them hope.

Which brings me to an excellent biography of Hawking by Charles Seife “Hawking Hawking” which tries to strip away the aura and myths that Hawking assiduously constructed and show the man underneath.

Even better, Seife is an excellent writer and has the mathematical and scientific chops (Princeton math major, Yale master’s in math) to explain the problems Hawking was wrestling with.

Hawking was smart.  One story tells it all (p. 328).  Apparently there were only 3 other physics majors at Oxford that year.  They were all given a set of 13 problems on electromagnetism and a week to do them.  One of the others (Derek Powney) tells the tale: “I discovered very rapidly that I couldn’t do any of them”.  So he teamed up with one of the others, and by the end of the week they’d done 1.5 problems.  The third student (working alone) solved one.

At the end of the week “Stephen as always hadn’t even started”. He went to his room and came out 3 hours later. “Well, I’ve only had the time to do the first ten.”  “I think at that point we realized that it’s not just that we weren’t on the same street, we weren’t on the same planet.”

Have you ever had an experience like that?  I’ve had two.  The first occurred in grade school. I was a pretty good piano player, better than the rest of Dr. Rudnytsky’s students.  Then someone told me that at age 3 his son would tell him what notes passing trains were whistling on, and that later on he’d sit behind a door listening to his father give lessons, and then come in afterwards and play by ear what the students had been playing.  The second occurred within a day or so of starting my freshman year in college. My roommate told me about a guy who thought he ought to know everybody in our class of 700+.  So he got out the Freshman Herald, which had our pictures and names, and a day later knew everyone in the class by name.

The reason people of a scientific bent should read the book is not the sociology, or the complicated sexuality of Hawking and his two wives, and god knows what else.  It is the excellent explanations of the problems in math and physics that Hawking faced and solved.  Even better, Seife puts them in the context of the work done before Hawking was born.

Two examples:

1. pp. 14 – 18 — a superb explanation of what Einstein did to create special relativity. 

2. pp. 240 – 245 — an excellent description of the horizon problem, the flatness problem, and how inflation solved them.

Any really good book will teach you something.  People in physics, math and biology are consumed with the idea of information.  The book (pp. 131 – 134) explains why Hawking was so focused on the black hole information paradox.  It always seemed pretty arcane and superficial to me (on the order of how many angels could dance on the head of a pin).  

Wrong! Wrong!

The black hole information paradox is at the coalface of ignorance in modern physics.  Why?  Because the two great theories we have in physics (quantum mechanics and general relativity) disagree about what happens to the information contained in an object (such as an astronaut) swallowed by a black hole.  Relativity says it’s destroyed, while quantum mechanics says that’s impossible.

So reconciling the two descriptions would lead to a deeper theory, and showing that one was wrong would discredit a powerful theory.

So even if you’re not interested in the sociology of the circles Hawking moved in or his sex life, there is a lot of well-explained physics and math to be learned for the general reader.  

The black hole information paradox resembles a similarly unresolved pair of phenomena in the world we live in, the Cartesian dualism between flesh and spirit.  It is writ large in biology.

Chemistry is great and can provide mechanistic explanations of what we see, such as the example from the following old post, reproduced after the ***

It’s quite technical, but is an elegant explanation of how different cells make different amounts of two different forms of a muscle protein (beta actin and gamma actin).  I never thought we’d have an explanation this good, but we do.  Well, that’s the flesh and the physicality of the explanation.  Asking why different cells would want this, or what the function of it all is, puts you immediately in the world of spirit (ideas, which are inherently noncorporeal).  Physical chemistry and biochemistry are silent, and all the abstract explanations science gives us (the function, the why, the reason) are essentially teleological.

*****

The last post “The death of the synonymous codon – II” puts you exactly at the nidus of the failure of chemical reductionism to bag the biggest prey of all, an understanding of the living cell and with it of life itself.  We know the chemistry of nucleotides, Watson-Crick base pairing, and enzyme kinetics quite well.  We understand why less transfer RNA for a particular codon would mean slower protein synthesis.  Chemists understand what a protein conformation is, although we can’t predict it 100% of the time from the amino acid sequence.  

Addendum 30 April ’21:  Called to task on the above  by a reader.  This statement is no longer true.  The material below the *** was bodily lifted from something I wrote 10 years ago.  Time and AI have marched on since then.

So we do understand exactly why the same amino acid sequence using different codons would result in slower synthesis of gamma actin than beta actin, and why the slower synthesis would allow a more leisurely exploration of conformational space allowing gamma actin to find a conformation which would be modified by linking it to another protein (ubiquitin) leading to its destruction.  Not bad.  Not bad at all.

Now ask yourself, why the cell would want to have less gamma actin around than beta actin.  There is no conceivable explanation for this in terms of chemistry.  A better understanding of protein structure won’t give it to you.  Certainly, beta and gamma actin differ slightly in amino acid sequence (4/375) so their structure won’t be exactly the same.  Studying this till the cows come home won’t answer the question, as it’s on an entirely different level than chemistry.

 

The pleasures of reading Feynman on Physics – II

If you’re tired of hearing and thinking about COVID-19 24/7 even when you don’t want to, do what I did when I was a neurology resident 50+ years ago, making clever diagnoses and then standing helplessly by while patients died.  Back then I read topology, and the intense concentration required to absorb and digest the terms and relationships took me miles and miles away.  The husband of one of my interns was a mathematician, and she said he would dream about mathematics.

Presumably some of the readership are chemists with graduate degrees, meaning that part of their acculturation as such was a course in quantum mechanics.  Back in the day it was pretty much required of chemistry grad students — homage to Prof. Marty Gouterman, who taught the course to us in 1961, 3 years out from his PhD.  Definitely a great teacher.  Here he is now, a continent away — http://faculty.washington.edu/goutermn/.

So for those happy souls I strongly recommend volume III of The Feynman Lectures on Physics.  Equally strongly do I recommend getting the Millennium Edition which has been purged of the 1,100 or so errors found in the 3 volumes over the years.

“Traditionally, all courses in quantum mechanics have begun in the same way, retracing the path followed in the historical development of the subject.  One first learns a great deal about classical mechanics so that he will be able to understand how to solve the Schrodinger equation.  Then he spends a long time working out various solutions.  Only after a detailed study of this equation does he get to the advanced subject of the electron’s spin.”

The first half of volume III is about spin.

Feynman doesn’t even get to the Hamiltonian until p. 88.  I’m almost half through volume III and there has been no sighting of the Schrodinger equation so far.  But what you will find are clear explanations of Bosons and Fermions and why they are different, how masers and lasers operate (they are two state spin systems), how one electron holds two protons together, and a great explanation of covalent bonding.  Then there is great stuff beyond the ken of most chemists (at least this one) such as the Yukawa explanation of the strong nuclear force, and why neutrons and protons are really the same.  If you’ve read about Bell’s theorem proving that ‘spooky action at a distance must exist’, you’ll see where the numbers come from quantum mechanically that are simply impossible on a classical basis.  Zeilinger’s book “The Dance of the Photons” goes into this using .75 (which Feynman shows is just cos(30)^2).
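The arithmetic behind that number is just trigonometry:

\cos^2(30^\circ) = \left(\frac{\sqrt{3}}{2}\right)^2 = \frac{3}{4} = 0.75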

Although Feynman doesn’t make much of a point about it, the essentiality of ‘imaginary’ numbers (complex numbers) to the entire project of quantum mechanics impressed me.  Without them,  wave interference is impossible.

I’m far from sure a neophyte could actually learn QM from Feynman, but having mucked about using and being exposed to QM and its extensions for 60 years, Feynman’s development of the subject is simply a joy to read.

So get the 3 volumes and plunge in.  You’ll forget all about the pandemic (for a while anyway).

 

Feynman and Darwin

What do Richard Feynman and Charles Darwin have in common?  Both have written books which show a brilliant mind at work.  I’ve started reading the New Millennium Edition of Feynman’s Lectures on Physics (which is the edition you should get as all 1165 errata found over the years have been corrected), and like Darwin his thought processes and their power are laid out for all to see.  Feynman’s books are far from F = ma.  They are basically polished versions of lectures, so it reads as if Feynman is directly talking to you.  Example: “We have already discussed the difference between knowing the rules of the game of chess and being able to play.”  Another: talking about Zeno  “The Greeks were somewhat confused by such problems, being helped, of course, by some very confusing Greeks.”

He’s always thinking about the larger implications of what we know.  Example: “Newton’s law has the peculiar property that if it is right on a certain small scale, then it will be right on a larger scale”

He then takes this idea and runs with it.  “Newton’s laws are the ‘tail end’ of the atomic laws extrapolated to a very large size.”  The fact that they are extrapolatable, and the fact that way down below are the atoms producing them, means that extrapolatable laws are the only type of physical law which could be discovered by us (until we could get down to the atomic level).  Marvelous.  Then he notes that the fundamental atomic laws (e.g. quantum mechanics) are NOTHING like what we see in the large scale environment in which we live.

If you like this sort of thing, you’ll love the books.  I don’t think they would be a good way to learn physics for the first time however.  No problems, etc. etc.  But once you’ve had exposure to some physics “it is good to sit at the feet of the master” — Bill Gates.

Most of the readership is probably fully engaged with work, family and career and doesn’t have time to actually read “The Origin of Species”. In retirement, I did, and the power of Darwin’s mind is simply staggering. He did so much with what little information he had. There was no clear idea of how heredity worked, and at several points he’s a Lamarckian — inheritance of acquired characteristics. If you do have the time, I suggest that you read the 1859 book chapter by chapter along with a very interesting book — Darwin’s Ghost by Steve Jones (published in 1999), which updates Darwin’s book to contemporary thinking chapter by chapter.  Despite the advances in knowledge in 140 years, Darwin’s thinking beats Jones hands down.

Book Review — The Universe Speaks in Numbers

Let’s say that back in the day, as a budding grad student in chemistry, you had to take quantum mechanics to see where those atomic orbitals came from.   Say further that, as the years passed, you knew enough to read News and Views in Nature and Perspectives in Science concerning physics as they appeared. So you’ve heard various terms like J/Psi, Virasoro algebra, Yang Mills gauge symmetry, twistors, gluons, the Standard Model, instantons, string theory, the Atiyah-Singer theorem etc. etc.  But you have no coherent framework in which to place them.

Then “The Universe Speaks in Numbers” by Graham Farmelo is the book for you.  It will provide a clear and chronological narrative of fundamental physics up to the present.  That isn’t the main point of the book, which is an argument about the role of beauty and mathematics in physics, something quite contentious presently.  Farmelo writes well and has a PhD in particle physics (1977), giving him a ringside seat for the twists and turns of the physics he describes.  People disagree with his thesis (http://www.math.columbia.edu/~woit/wordpress/?p=11012), but nowhere have I seen anyone claim that any of Farmelo’s treatment of the physics described in the book is incorrect.

40 years ago, something called the Standard Model of particle physics was developed.  Physicists don’t like it because it seems like a kludge with 19 arbitrary fixed parameters.  But it works, and no experiment and no accelerator of any size has found anything inconsistent with it.  Even the recent discovery of the Higgs was just part of the model.

You won’t find much of the debate about where physics should go from here in the book.  Farmelo says just study more math.  Others strongly disagree — Google Peter Woit, Sabine Hossenfelder.

The phenomena string theory predicts would require an accelerator the size of the Milky Way or larger to produce particles energetic enough to probe them.  So it’s theory divorced from any experiment possible today, and some claim that string theory has become another form of theology.

It’s sad to see this.  The smartest people I know are physicists.  Contrast the life sciences, where experiments are easily done, and new data to explain arrives weekly.

 

 

Off to band camp for adults 2018

No posts for a while, as I’ll be at a chamber music camp for adult amateurs (or what a friend’s granddaughter calls — band camp for adults).  In a week or two if you see a beat up old Honda Pilot heading west on the north shore of Lake Superior, honk and wave.

I expect the usual denizens to be there — mathematicians, physicists, computer programmers, MDs, touchy-feely types who are afraid of chemicals etc. etc. We all get along but occasionally the two cultures do clash, and a polymer chemist friend is driven to distraction by a gentle soul who is quite certain that “chemicals” are a very bad thing. For the most part, everyone gets along. Despite the very different mindsets, all of us became very interested in music early on, long before any academic or life choices were made.

So, are the analytic types soulless automatons producing mechanically perfect music which is emotionally dead? Are the touchy-feely types sloppy technically and histrionic musically? A double-blind study would be possible, but I think both groups play pretty much the same (less well than we’d all like, but with the same spirit and love of music).

A few years ago I had the pleasure of playing Beethoven with Heisenberg — along with an excellent violinist I’ve played with for years, the three of us read Beethoven’s second piano trio (Opus 1, #2) with Heisenberg’s son Jochen (who, interestingly enough, is a retired physics professor).  He is an excellent cellist who knows the literature cold.  The violinist and I later agreed that we have rarely played worse.  Oh well. Heisenberg, of course, was a gentleman throughout.

Later that evening, several of us had the pleasure of discussing quantum mechanics with him. He didn’t disagree with my idea that the node in the 2S orbital (where the electron is never found, even though it is found on either side of the node) forces us to give up the idea of an electron trajectory (aromatic ring currents be damned).   He pretty much seemed to agree with the Copenhagen interpretation — macroscopic concepts just don’t apply to the quantum world, and language trips us up.

One rather dark point about Heisenberg came up in an excellent book about the various interpretations of what Quantum Mechanics actually means: “What Is Real?” by Adam Becker.  I have no idea if the following summary is actually true, but here it is.   Heisenberg was head of the German nuclear program to develop an atomic bomb.  Nuclear fission was well known in Germany, having been discovered there.  An old girlfriend of mine wrote a book about Lise Meitner, one of the discoverers, and how she didn’t get the credit she was due.

At the end of the war there was an entire operation to capture German physicists who had worked on nuclear development (Operation Alsos).  Those captured (Heisenberg, Hahn, von Laue and others) were taken to Farm Hall, an English manor house which had been converted into a military intelligence center.  It was supplied with chalkboards, sporting equipment, a radio, good food, and secretly bugged to high heaven.  The physicists were told that they were being held “at His Majesty’s pleasure.”  Later they were told the Americans had dropped the atomic bomb.  They didn’t believe it, as their own work during the war had led them to think it was impossible.

All their discussions were recorded, unknown to Heisenberg.  It was clear that the Germans had no idea how to build a bomb even though they tried.  However, Heisenberg and von Weizsacker constructed a totally false narrative: that they had never tried to build a bomb, but rather a nuclear reactor.  According to Becker, Heisenberg was never caught out on this because the Farm Hall transcripts were classified.  It isn’t clear to me from reading Becker’s book when they were UNclassified, but apparently Heisenberg got away with it until his death in 1976.

Amazing stuff, if true.

 

A creation myth

Sigmund Freud may have been wrong about penis envy, but most lower forms of scientific life (chemists, biologists) do have physics envy — myself included.  Most graduate chemists have taken a quantum mechanics course, if only to see where atomic and molecular orbitals come from.  Anyone doing physical chemistry has likely studied statistical mechanics. I was fortunate enough to audit one such course given by E. Bright Wilson (of Pauling and Wilson).

Although we no longer study physics per se, most of us read books about physics.  Two excellent such books have come out in the past year.  One is “What is Real?” — https://www.basicbooks.com/titles/adam-becker/what-is-real/9780465096053/, the other is “Lost in Math” by Sabine Hossenfelder whose blog on physics is always worth reading, both for herself and the heavies who comment on what she writes — http://backreaction.blogspot.com

Both books deserve a long discursive review here. But that’s for another time.  Briefly, Hossenfelder thinks that physics for the past 30 years has become so fascinated with elegant mathematical descriptions of nature, that theories are judged by their mathematical elegance and beauty, rather than agreement with experiment.  She acknowledges that the experiments are both difficult and expensive, and notes that it took a century for one such prediction (gravitational waves) to be confirmed.

The mathematics of physics can certainly be seductive, and even a lowly chemist such as myself has been bowled over by it.  Here is how it hit me.

Budding chemists start out by learning that electrons like to be in filled shells. The first shell has 2 elements, the next 2 + 6 elements etc. etc. It allows the neophyte to make some sense of the periodic table (as long as they deal with low atomic numbers — why the 4s electrons are of lower energy than the 3d electrons still seems quite ad hoc to me). Later on we were told that this is because of quantum numbers n, l, m and s. Then we learn that atomic orbitals have shapes, in some weird way determined by the quantum numbers, etc. etc.
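The bookkeeping behind those shell counts (standard counting with the four quantum numbers, not tied to any particular text): for principal quantum number n, l runs from 0 to n − 1, m runs from −l to +l, and spin doubles everything, so the number of states in shell n is

2 \sum_{l=0}^{n-1} (2l + 1) = 2n^2

giving 2, 8, 18, 32, … for n = 1, 2, 3, 4, …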

Recursion relations are no stranger to the differential equations course, where you learn to (tediously) find them for a polynomial series solution for the differential equation at hand. I never really understood them, but I could use them (like far too much math that I took back in college).

So it wasn’t a shock when the QM instructor back in 1961 got to them in the course of solving the Schrodinger equation for the hydrogen atom (with its radially symmetric potential). First the equation had to be expressed in spherical coordinates (r, theta and phi), which made the Laplacian look rather fierce. Then the equation was split into 3 equations, each involving one of r, theta or phi. The easiest to solve was the one involving phi, which involved only a complex exponential. But the periodic nature of the solution made the magnetic quantum number fall out. Pretty good, but nothing earthshaking.
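For anyone who wants to see that step written out (my reconstruction in modern textbook notation, not Dr. Gouterman’s blackboard), the phi equation and the requirement that the solution repeat after a full turn are

\frac{d^2\Phi}{d\phi^2} = -m^2 \Phi \;\Rightarrow\; \Phi(\phi) = e^{im\phi}, \qquad \Phi(\phi + 2\pi) = \Phi(\phi) \;\Rightarrow\; m = 0, \pm 1, \pm 2, \ldots

so the magnetic quantum number m has to be an integer.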

Recursion relations made their appearance with the solution of the radial and the theta equations. So it was plug and chug time with series solutions and recursion relations so things wouldn’t blow up (or as Dr. Gouterman, the instructor, put it: the electron has to be somewhere, so the wavefunction must be zero at infinity). MEGO (My Eyes Glazed Over) until all of a sudden there were the main quantum number (n) and the azimuthal quantum number (l) coming directly out of the recursion relations.
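Here is the radial recursion relation as it appears in a modern text (Griffiths’ notation; a reconstruction, not what was on the board in 1961). Writing the radial series as v(\rho) = \sum_j c_j \rho^j, one finds

c_{j+1} = \frac{2\,(j + l + 1 - n)}{(j + 1)(j + 2l + 2)}\; c_j

If the series never terminates it grows like e^{2\rho} and the wavefunction blows up at infinity, so the numerator must vanish for some j; that forces n to be an integer larger than l, and the quantum numbers simply fall out.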

When I first realized what was going on, it really hit me. I can still see the room and the people in it (just as people can remember exactly where they were and what they were doing when they heard about 9/11 or (for the oldsters among you) when Kennedy was shot — I was cutting a physiology class in med school). The realization that what I had considered mathematical diddle, in some way was giving us the quantum numbers and the periodic table, and the shape of orbitals, was a glimpse of incredible and unseen power. For me it was like seeing the face of God.

But what interested me the most about “Lost in Math” was Hossenfelder’s discussion of the different physical laws appearing at different physical scales (e.g. effective laws), emergent properties and reductionism (pp. 44 onward).  Although things at larger scales (atoms) can be understood in terms of the physics of smaller scales (protons, neutrons, electrons), the details of elementary particle interactions (quarks, gluons, leptons etc.) don’t matter much to the chemist.  The orbits of planets don’t depend on planetary structure, etc. etc.  She notes that reduction of events at one scale to those at a smaller one is not an optional philosophical position to hold; it’s just the way nature is, as revealed by experiment.  She notes that you could ‘in principle, derive the theory for large scales from the theory for small scales’ (although I’ve never seen it done), and then she moves on.

But the different structures and the different laws at different scales are what have always fascinated me about the world in which we exist.  Do we have a model for a world structured this way?

Of course we do.  It’s the computer.

 

Neurologists have always been interested in computers, and computer people have always been interested in the brain — von Neumann wrote “The Computer and the Brain” shortly before his death; it was published in 1958.

Back in med school in the 60s people were just figuring out how neurons talked to each other where they met at the synapse.  It was with a certain degree of excitement that we found that information appeared to flow just one way across the synapse (from the PREsynaptic neuron to the POST synaptic neuron).  E.g. just like the vacuum tubes of the earliest computers.  Current (and information) could flow just one way.

The microprocessors based on transistors that a normal person could play with came out in the 70s.  I was naturally interested, as having taken QM I thought I could understand how transistors work.  I knew about energy gaps in atomic spectra, but how in the world a crystal with zillions of atoms and electrons floating around could produce one seemed like a mystery to me, and still does.  It’s an example of ’emergence’ about which more later.

But forgetting all that, it’s fairly easy to see how electrons could flow from a semiconductor with an abundance of them (due to doping) to a semiconductor with a deficit — and have a hard time flowing back.  Again a one way valve, just like our concept of the synapses.
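If you want the one-way behavior as a formula, the ideal diode (Shockley) equation from standard electronics captures it (quoted here purely as an illustration, nothing to do with synapses):

I = I_s \left( e^{V / V_T} - 1 \right), \qquad V_T = kT/q \approx 26 \text{ mV at room temperature}

Forward bias (V > 0) gives exponentially growing current; reverse bias pins the current at the tiny saturation value −I_s.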

Now of course, we know information can flow the other way in the synapse from POST synaptic to PREsynaptic neuron, some of the main carriers of which are the endogenous marihuana-like substances in your brain — anandamide etc. etc.  — the endocannabinoids.

In 1968 my wife learned how to do assembly language coding with punch cards, ones and zeros, the whole bit.  Why?  Because I was scheduled for two years of active duty as an Army doc, a time in which we had half a million men in Vietnam.  She was preparing to be a widow with 2 infants, as the Army sent me a form asking for my preferences in assignment, a form so out of date that it offered the option of taking my family with me to Vietnam if I’d extend my tour over there to 4 years.  So I sat around drinking Scotch and reading Faulkner waiting to go in.

So when computers became something the general populace could have, I tried to build a mental one using AND, OR and NOT logic gates and 1s and 0s for high and low voltages. Since I could see how to build the three using transistors (reductionism), I just went one plane higher.  Note: although the gates can be easily reduced to transistors, and transistors to p and n type semiconductors, there is nothing in the laws of semiconductor physics that implies putting them together to form logic gates.  So the higher plane of logic gates is essentially an act of creation.  They do not necessarily arise from transistors.

What I was really interested in was hooking the gates together to form an ALU (arithmetic and logic unit).  I eventually did it, but doing so showed me the necessity of other components of the chip (the clock and in particular the microcode which lies below assembly language instructions).
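Here is a toy sketch of that layering in Python (hypothetical, schematic code, not anything from a real chip): gates as primitives, a one-bit full adder wired from the gates, and a multi-bit ripple-carry adder, which is the arithmetic core of an ALU. Nothing at the gate level implies the adder; the wiring is imposed from the level above.

# Layer 1: logic gates (stand-ins for little transistor circuits)
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Layer 2: a 1-bit full adder wired from the gates
def full_adder(a, b, carry_in):
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

# Layer 3: a ripple-carry adder, the arithmetic heart of an ALU
def add_bits(x_bits, y_bits):            # bits listed least significant first
    result, carry = [], 0
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y, carry)
        result.append(s)
    return result + [carry]

print(add_bits([1, 1, 0], [1, 0, 1]))    # 3 + 5 -> [0, 0, 0, 1], i.e. 8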

The next level up, is what my wife was doing — sending assembly language instructions of 1’s and 0’s to the computer, and watching how gates were opened and shut, registers filled and emptied, transforming the 1’s and 0’s in the process.  Again note that there is nothing necessary in the way the gates are hooked together to make them do anything.  The program is at yet another higher level.

Above that are the higher level programs, Basic, C and on up.  Above that hooking computers together to form networks and then the internet with TCP/IP  etc.

While they all can be reduced, there is nothing inherent in the things that they are reduced to which implies their existence.  Their existence was essentially created by humanity’s collective mind.

Could something similar be going on in the levels of the world seen in physics?  Here’s what Nobel laureate Robert Laughlin (he of the fractional quantum Hall effect) has to say about it — http://www.pnas.org/content/97/1/28.  Note that this was written before people began taking quantum computers seriously.

“However, it is obvious glancing through this list that the Theory of Everything is not even remotely a theory of every thing (2). We know this equation is correct because it has been solved accurately for small numbers of particles (isolated atoms and small molecules) and found to agree in minute detail with experiment (3-5). However, it cannot be solved accurately when the number of particles exceeds about 10. No computer existing, or that will ever exist, can break this barrier because it is a catastrophe of dimension. If the amount of computer memory required to represent the quantum wavefunction of one particle is N, then the amount required to represent the wavefunction of k particles is N^k. It is possible to perform approximate calculations for larger systems, and it is through such calculations that we have learned why atoms have the size they do, why chemical bonds have the length and strength they do, why solid matter has the elastic properties it does, why some things are transparent while others reflect or absorb light (6). With a little more experimental input for guidance it is even possible to predict atomic conformations of small molecules, simple chemical reaction rates, structural phase transitions, ferromagnetism, and sometimes even superconducting transition temperatures (7). But the schemes for approximating are not first-principles deductions but are rather art keyed to experiment, and thus tend to be the least reliable precisely when reliability is most needed, i.e., when experimental information is scarce, the physical behavior has no precedent, and the key questions have not yet been identified. There are many notorious failures of alleged ab initio computation methods, including the phase diagram of liquid ³He and the entire phenomenology of high-temperature superconductors (8-10). Predicting protein functionality or the behavior of the human brain from these equations is patently absurd. So the triumph of the reductionism of the Greeks is a pyrrhic victory: We have succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything only to discover that it has revealed exactly nothing about many things of great importance.”
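To put numbers on that catastrophe of dimension (my own illustrative figures, not Laughlin’s):

# Memory needed to store a many-particle wavefunction: N basis states
# per particle, k particles, 16 bytes per complex amplitude (all illustrative).
N = 1000
bytes_per_amplitude = 16
for k in (1, 2, 5, 10):
    amplitudes = N ** k
    print(f"k = {k:2d}: {amplitudes:.1e} amplitudes, "
          f"{amplitudes * bytes_per_amplitude:.1e} bytes")

At k = 10 that is roughly 10^31 bytes, vastly more than all the storage on Earth, which is why the barrier sits at about 10 particles.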

So reductionism doesn’t explain the laws we have at various levels.  They are regularities to be sure, and they describe what is happening, but a description is NOT an explanation, in the same way that Newton’s gravitational law predicts zillions of observations about the real world without explaining why gravity acts as it does.  But even Newton famously said Hypotheses non fingo (Latin for “I feign no hypotheses”) when discussing the action at a distance which his theory of gravity entailed. Actually he thought the idea was crazy: “That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.”

So are the various physical laws things that are imposed from without, by God only knows what?  The computer with its various levels of phenomena certainly was consciously constructed.

Is what I’ve just written a creation myth or is there something to it?

Relativity becomes less comprehensible

“To get Hawking radiation we have to give up on the idea that spacetime always had 3 space dimensions and one time dimension to get a quantum theory of the big bang.”  I’ve been studying relativity for some years now in the hopes of saying something intelligent to the author (Jim Hartle), if we’re both lucky enough to make it to our 60th college reunion in 2 years.  Hartle majored in physics under John Wheeler who essentially revived relativity from obscurity during the years when quantum mechanics was all the rage. Jim worked with Hawking for years, spoke at his funeral and wrote this in an appreciation of Hawking’s work [ Proc.Natl. Acad. Sci. vol. 115 pp. 5309 – 5310 ’18 ].

I find the above incomprehensible.  Could anyone out there enlighten me?  Just write a comment.  I’m not going to bother Hartle.

Addendum 25 May

From a retired math professor friend —

I’ve never studied this stuff, but here is one way to get more actual dimensions without increasing the number of apparent dimensions:
Start with a 1-dimensional line, R^1, and now consider a 2-dimensional cylinder S^1 x R^1.  (S^1 is the circle, of course.)  If the radius of the circle is small, then the cylinder looks like a narrow tube.  Make the radius even smaller, say less than the radius of an atomic nucleus.  Then the actual 2-dimensional cylinder appears to be a 1-dimensional line.
The next step is to rethink S^1 as a line interval with ends identified (but not actually glued together).  Then S^1 x R^1 looks like a long ribbon with its two edges identified.  If the width of the ribbon (the length of the line interval) is less, say, than the radius of an atom, the actual 2-dimensional “ribbon with edges identified” appears to be just a 1-dimensional line.
Okay, now we can carry all these notions to R^2.  Take S^1 x R^2, and treat S^1 as a line interval with ends identified.  Then S^1 x R^2 looks like a (3-dimensional) stack of planes with the top plane identified, point by point, with the bottom plane.  (This is the analog of the ribbon.)  If the length of the line interval is less, say, than the radius of an atom, then the actual 3-dimensional S^1 x R^2 appears to be a 2-dimensional plane.
That’s it.  In general, the actual n+1-dimensional S^1 x R^n appears to be just n-space R^n when the radius of S^1 is sufficiently small.
All this can be done with a sphere S^2, S^3, … of any dimension, so that the actual k+n-dimensional manifold S^k x R^n appears to be just the n-space R^n when the radius of S^k is sufficiently small.  Moreover, if M^k is any compact manifold whose physical size is sufficiently small, then the actual k+n-dimensional manifold M^k x R^n appears to be just the n-plane R^n.
That’s one way to get “hidden” dimensions, I think.
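For what it’s worth, physicists write the same idea (Kaluza–Klein compactification, as I understand it) as ordinary spacetime times a tiny circle of radius R:

ds^2 = \eta_{\mu\nu}\, dx^\mu dx^\nu + R^2\, d\theta^2, \qquad 0 \le \theta < 2\pi

When R is far smaller than any length we can probe, the last term is invisible and the space looks like plain 4-dimensional spacetime.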

NonAlgorithmic Intelligence

Penrose was right. Human intelligence is nonAlgorithmic. But that doesn’t mean that our physical brains produce consciousness and intelligence using quantum mechanics (although all matter is what it is because of quantum mechanics). The parts (even small ones like neurotubules) contain so much mass that their associated de Broglie wavelength is far too small to exhibit quantum mechanical effects. Here Penrose got roped in by Hameroff, thinking that neurotubules were the carriers of the quantum mechanical indeterminacy. They aren’t; they are just too big. The dimer of alpha and beta tubulin contains 900 amino acids — a mass of around 90,000 Daltons (or 90,000 hydrogen atoms, which are small enough to show quantum mechanical effects).

So why was Penrose right? Because neural nets, which are inherently nonAlgorithmic, are showing intelligent behavior. AlphaGo, which beat the world champion, is the most recent example, but others include facial recognition and image classification [ Nature vol. 529 pp. 484 – 489 ’16 ].

Nets are trained on real world images and told whether they are right or wrong. I suppose this is programming of a sort, but it is certainly nonAlgorithmic. As the net learns from experience it adjusts the strength of the connections between its neurons (synapses if you will).
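A toy example of what ‘adjusting the strength of the connections’ means, in Python (a bare perceptron learning the AND function; nothing remotely like AlphaGo’s real architecture):

# One artificial neuron with two input 'synapses' learning the AND function.
weights = [0.0, 0.0]                     # connection strengths, initially blank
bias = 0.0
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(50):                      # repeated exposure to the examples
    for inputs, target in examples:
        fired = 1 if weights[0]*inputs[0] + weights[1]*inputs[1] + bias > 0 else 0
        error = target - fired           # told only whether it was right or wrong
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]
        bias += 0.1 * error              # strengths nudged; no rule for AND is ever written

print(weights, bias)                     # the learned 'synaptic strengths'

The behavior ends up encoded in those final numbers rather than in any rule anyone wrote down.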

So it should be a simple matter to find out just how AlphaGo did it — just get a list of the neurons it contains, and the number and strengths of the synapses between them. I can’t find out just how many neurons and connections there are, but I do know that thousands of CPUs and graphics processors were used. I doubt that there were 80 billion neurons or a trillion connections between them (which is what our brains are currently thought to have).

Just print out the above list (assuming you have enough paper) and look at it. Will you understand how AlphaGo won? I seriously doubt it. You will understand it less well than looking at a list of the positions and momenta of 80 billion gas molecules will tell you its pressure and temperature. Why? Because in statistical mechanics you assume that the particles making up an ideal gas are featureless, identical and do not interact with each other. This isn’t true for neural nets.

It also isn’t true for the brain. Efforts are underway to find a wiring diagram of a small area of the cerebral cortex. The following will get you started — https://www.quantamagazine.org/20160406-brain-maps-micron-program-iarpa/

Here’s a quote from the article to whet your appetite.

“By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end.”

I don’t think this will help us understand how the brain works any more than the above list of neurons and connections from AlphaGo. There are even more problems with such a list. Connections (synapses) between neurons come and go (and they increase and decrease in strength as in the neural net). Some connections turn on the receiving neuron, some turn it off. I don’t think there is a good way to tell what a given connection is doing just by looking at a slice of it under the electron microscope. Lastly, some of our most human attributes (emotion) are due not to connections between neurons but to the release of neurotransmitters generally into the brain, not at the very localized synapse, so they won’t show up on a wiring diagram. This is called volume neurotransmission, and the transmitters are serotonin, norepinephrine and dopamine. Not convinced? Among agents modifying volume neurotransmission are cocaine, amphetamine, antidepressants, antipsychotics. Fairly important.

So I don’t think we’ll ever truly understand how the neural net inside our head does what it does.

A book recommendation, not a review

My first encounter with a topology textbook was not a happy one. I was in grad school, knowing I’d leave in a few months to start med school, and with plenty of time on my hands and enough money to do what I wanted. I’d always liked math and had taken calculus, including advanced calculus and differential equations, in college. Grad school and quantum mechanics meant more differential equations, series solutions of same, matrices, eigenvectors and eigenvalues, etc. etc. I liked the stuff. So I’d heard topology was cool — Mobius strips, Klein bottles, wormholes (from John Wheeler) etc. etc.

So I opened a topology book to find on page 1

A topology is a set with certain selected subsets called open sets satisfying two conditions:
1. The union of any number of open sets is an open set
2. The intersection of a finite number of open sets is an open set

Say what?

In an effort to help, on page two the book provided another definition

A topology is a set with certain selected subsets called closed sets satisfying two conditions:
1. The union of a finite number of closed sets is a closed set
2. The intersection of any number of closed sets is a closed set

Ghastly. No motivation. No idea where the definitions came from or how they could be applied.

Which brings me to “An Introduction to Algebraic Topology” by Andrew H. Wallace. I recommend it highly, even though algebraic topology is just a branch of topology and fairly specialized at that.

Why?

Because in a wonderful, leisurely and discursive fashion, he starts out with the intuitive concept of nearness, applying it to the classic analytic geometry of the plane. He then moves on to continuous functions from one plane to another, explaining why they must preserve nearness. Then he abstracts what nearness must mean in terms of the classic Pythagorean distance function. Topological spaces are first defined in terms of nearness and neighborhoods, and only after 18 pages does he define open sets in terms of neighborhoods. It’s a wonderful exposition, explaining why open sets must have the properties they have. He doesn’t even get to algebraic topology until p. 62, explaining point set topological notions such as connectedness, compactness, homeomorphisms etc. etc. along the way.
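The bridge he builds can be put in one line (my paraphrase of the standard definition, not a quote from Wallace):

U \text{ is open} \iff U \text{ is a neighborhood of each of its points}

and from that the two ‘ghastly’ conditions on unions and intersections follow in a few lines.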

This is a recommendation, not a review, because I’ve not read the whole thing. But it’s a great explanation of why the definitions in topology must be the way they are.

It won’t set you back much — I paid $12.95 for the Dover edition (not sure when).