
Want to understand Quantum Computing — buy this book

As quantum mechanics enters its second century, quantum computing has been hot stuff for the last third of it, beginning with Feynman’s lectures on computation in 1984–86.  Articles on quantum computing appear all the time in Nature, Science and even the mainstream press.

Perhaps you tried to understand it 20 years ago by reading Nielsen and Chuang’s massive tome Quantum Computation and Quantum Information.  I did, and gave up.  At 648 pages and nearly half a million words, it’s something only for people entering the field.  Yet quantum computers are impossible to ignore.

That’s where a new book “Quantum Computing for Everyone” by Chris Bernhardt comes in.  You need little more than high school trigonometry and determination to get through it.  It is blazingly clear.  No term is used before it is defined and there are plenty of diagrams.   Of course Bernhardt simplifies things a bit.  Amazingly, he’s able to avoid the complex number system. At 189 pages and under 100,000 words it is not impossible to get through.

Not being an expert, I can’t speak for its completeness, but all the stuff I’ve read about in Nature and Science is there — no cloning, entanglement, Ed Fredkin (and his gate), Grover’s algorithm, Shor’s algorithm, the RSA algorithm.  As a bonus there is a clear explanation of Bell’s theorem.

You don’t need a course in quantum mechanics to get through it, but it would make things easier.  Most chemists (for whom this blog is basically written) have had one.  This plus a background in linear algebra would certainly make the first 70 or so pages a breeze.

Just as a book on language doesn’t get into the fonts it can be written in, the book doesn’t get into how such a computer can be physically instantiated.  What it does do is tell you how the basic guts of the quantum computer work. Amazingly, they are just matrices (explained in the book) which change one basis for representing qubits (explained) into another.  These are the quantum gates — ‘just operations that can be described by orthogonal matrices’ (p. 117).  The computation comes in by sending qubits through the gates (operating on vectors by matrices).
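To make that concrete, here is a minimal sketch (mine, not Bernhardt’s) of a gate acting on a qubit in Python, using only real amplitudes the way the book does.  The Hadamard gate is a real orthogonal matrix, so high school math really is enough:

import math

# A qubit is a unit vector (a, b): the probability of measuring 0 is a^2,
# the probability of measuring 1 is b^2 (real amplitudes throughout).
def apply_gate(gate, qubit):
    # multiply a 2x2 matrix by a 2-vector
    return (gate[0][0] * qubit[0] + gate[0][1] * qubit[1],
            gate[1][0] * qubit[0] + gate[1][1] * qubit[1])

s = 1 / math.sqrt(2)
HADAMARD = ((s, s), (s, -s))    # orthogonal: its columns form an orthonormal basis
zero = (1.0, 0.0)               # the qubit |0>

h = apply_gate(HADAMARD, zero)  # equal superposition (0.707..., 0.707...)
print(h, h[0]**2 + h[1]**2)     # the two amplitudes; probabilities sum to 1
print(apply_gate(HADAMARD, h))  # applying H twice returns |0>: (1.0, 0.0)

Sending a qubit through a circuit of gates is nothing more than repeated matrix multiplication.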

Be prepared to work.  The concepts (although clearly explained) come thick and fast.

Linear algebra is basic to quantum mechanics.  Superposition of quantum states is nothing more than a linear combination of vectors.  When I audited a course on QM 10 years ago to see what had changed in 50 years, I was amazed at how little linear algebra was emphasized.  You could do worse than read a series of posts on my blog titled “Linear Algebra Survival Guide for Quantum Mechanics” — there are 9 — start here and follow the links — you may find it helpful — https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/
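In symbols (a sketch of the standard notation, not taken from the book), superposition really is just a linear combination of basis vectors:

\[ |\psi\rangle = a\,|0\rangle + b\,|1\rangle, \qquad a^2 + b^2 = 1 \]

The normalization condition is what lets the squared amplitudes behave as probabilities (in full quantum mechanics a and b are complex and the condition is |a|^2 + |b|^2 = 1, but Bernhardt gets remarkably far with real numbers).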

From a mathematical point of view entanglement (discussed extensively in the book) is fairly simple – philosophically it’s anything but – and the following was described by a math prof as concise and clear — https://luysii.wordpress.com/2014/12/28/how-formal-tensor-mathematics-and-the-postulates-of-quantum-mechanics-give-rise-to-entanglement/
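For a taste of why the math is simple, here is a sketch (standard textbook material, not from my post).  The Bell state lives in the tensor product of two qubit spaces:

\[ |\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\left( |0\rangle \otimes |0\rangle + |1\rangle \otimes |1\rangle \right) \]

A product (unentangled) state would have to factor as

\[ (a|0\rangle + b|1\rangle) \otimes (c|0\rangle + d|1\rangle) = ac\,|00\rangle + ad\,|01\rangle + bc\,|10\rangle + bd\,|11\rangle \]

Matching coefficients would require ad = bc = 0 while ac = bd = 1/sqrt(2), which is impossible.  That non-factorability is all that entanglement means mathematically.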

The book is a masterpiece — kudos to Bernhardt.

Why it is sometimes good to read the preface

“the (gravitational) field equations are derived  . . .  from an analysis of tidal forces.”  Thus sayeth p. xii of the preface to “The Geometry of Spacetime” by James J. Callahan.  This kept me from passing over pp. 174 ff. on tides, despite a deep dive back into differentiating complicated functions, Taylor series etc. etc. Hard thinking about tidal forces finally gave me a glimpse of what general relativity is all about.

Start with a lemma.  Given a large object (say the sun) and a single small object (say a satellite, or a spacecraft), the path traced out by the spacecraft will lie in a plane.  Why?

All gravitational force is directed toward the sun (which can be considered as a point mass – it is my recollection that it took Newton 20 years to prove this, delaying the publication of the Principia, but I can’t find the source).  This makes the gravitational force radially symmetric.

Now consider an object orbiting the sun (falling toward the sun as it orbits, but not hitting the sun because its angular momentum carries it away). Look at two nearby points in the orbit and the sun, forming a triangle.  The long arms of the triangle point toward the sun.  Now imagine in the next instant that the object goes to a fourth point out of the plane formed by the first 3.  Such a shift in direction requires a force to produce it, but in the model there is only gravity, so this is impossible, meaning that all points of the orbit lie in a plane containing the sun considered as a point mass.

You are weightless when falling, even though you are responding to the gravitational force (paradoxical but true).  Astronauts in a space capsule are weightless because they are falling; the conservation of angular momentum keeps them going around the sun.

Well if the sun can be considered a point mass, so can the space capsule.  Call the local coordinate of the point representing the capsule point mass x.  The orbit of x around the sun is in the x — sun plane.

Next put two objects 1 foot above and below the x — sun plane.

object1

x  ——————————sun point mass

object2

Objects 1 and 2 orbit in a plane containing the sun point mass as well.  They do not orbit parallel to x (but very close to parallel).

What happens with the passage of time?   The objects approach each other.  To an astronaut inside the capsule it looks like a force (similar to gravity) is pushing them together.  These are the tidal forces Callahan is talking about.

Essentially the tidal forces are produced by the gravitational force of the sun, even though everything in the capsule floats around feeling no gravity at all.
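Here is a back-of-the-envelope sketch of the size of the effect (my numbers, not Callahan’s).  An object displaced a distance h perpendicular to the capsule–sun line is pulled toward the sun itself, not parallel to the capsule’s own pull, so a small component of size roughly GMh/r^3 pushes it back toward the line:

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # mass of the sun, kg
r = 1.496e11       # 1 astronomical unit, meters
h = 0.3048         # 1 foot, meters

# The sun's pull on the displaced object aims at the sun, so it has a
# component perpendicular to the capsule-sun line of size (GM/r^2)*(h/r)
a_tidal = (G * M_sun / r**2) * (h / r)
print(a_tidal)     # about 1.2e-14 m/s^2

Tiny, but not zero; given enough time the two objects drift together, and the astronaut inside sees an unexplained ‘force’ doing it.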

Consider a great circle on a sphere — a longitudinal circle on the earth will do.  Two different longitudinal great circles will eventually meet each other.  No force is producing this; it is the curvature of the surface in which they are embedded.

I think that  what appear to be tidal forces in general relativity are really due to the intrinsic curvature of spacetime.  So gravity disappears into the curvature of spacetime produced by mass.   I’ll have to go through the rest of the book to find out.  But I think this is pretty close, and likely why Callahan put the above quote into the preface.

If you are studying on your own, a wonderful explanation of just what is going on under the algebraic sheets of the Taylor series is to be found pp. 255 ff. of Fundamentals of Physics by R. Shankar.  In addition to being clear, he’s funny.  Example: Nowadays we worry about publish or perish, but in the old days it was publish and perish.

Book recommendation

Tired of reading books about physics?  Want the real McCoy?  Well written and informal?  Contains stuff whose names you know but don’t understand — the Jones polynomial, loop quantum gravity, quantum field theory, gauge groups and transformations — etc. etc.

Up to date?  Well no, it’s 25 years old but still very much worth a read, so very unlike molecular biology, chemistry, computer science etc. etc.

Probably you should know as much physics and math as a beginning chemistry grad student. If you studied electromagnetism through Maxwell’s equations it would be a plus.  I stopped at Coulomb’s Law, and picked up enough to understand NMR.

This will give you a sample of the way it is written:

“Much odder is that we are saying the vector field v is the linear combination of . . . partial derivatives.  What we are doing might be regarded as rather sloppy, since we are identifying two different although related things: the vector field and the operator v^i ∂/∂x^i which takes a directional derivative in the direction of v.”

“Now let us define vector fields on a manifold M . . . these will be entities whose sole ambition in life is to differentiate functions”

The book is “Gauge Fields, Knots and Gravity” by John Baez and Javier P. Muniain.

The writing, although clear, has a certain humility.  “Unfortunately understanding these new ideas depends on a thorough mastery of quantum field theory, general relativity, geometry, topology and algebra.  Indeed, it is almost certain that nobody is sufficiently prepared to understand these ideas fully.”

I’m going to take it with me to the amateur chamber music festival.  As usual, at least 2 math full professors will be there to help me out.  Buy it and enjoy.

Book Review — The Universe Speaks in Numbers

Let’s say that back in the day, as a budding grad student in chemistry you had to take quantum mechanics to see where those atomic orbitals came from.   Say further, that as the years passed you knew enough to read News and Views in Nature and Perspectives in Science concerning physics as they appeared. So you’ve heard various terms like J/psi, Virasoro algebra, Yang–Mills gauge symmetry, twistors, gluons, the Standard Model, instantons, string theory, the Atiyah–Singer theorem etc. etc.  But you have no coherent framework in which to place them.

Then “The Universe Speaks in Numbers” by Graham Farmelo is the book for you.  It will provide a clear and chronological narrative of fundamental physics up to the present.  That isn’t the main point of the book, which is an argument about the role of beauty and mathematics in physics, something quite contentious presently.  Farmelo writes well and has a PhD in particle physics (1977), giving him a ringside seat for the twists and turns of the physics he describes.  People disagree with his thesis (http://www.math.columbia.edu/~woit/wordpress/?p=11012), but nowhere have I seen anyone claim that any of Farmelo’s treatment of the physics described in the book is incorrect.

40 years ago, something called the Standard Model of particle physics was developed.  Physicists don’t like it because it seems like a kludge with 19 arbitrary fixed parameters.  But it works, and no experiment and no accelerator of any size has found anything inconsistent with it.  Even the recent discovery of the Higgs was just part of the model.

You won’t find much of the debate about where physics should go from here in the book.  Farmelo says just study more math.  Others strongly disagree — Google Peter Woit, Sabine Hossenfelder.

The phenomena string theory predicts would require an accelerator the size of the Milky Way or larger to produce particles energetic enough to probe them.  So it’s a theory divorced from any experiment possible today, and some claim that string theory has become another form of theology.

It’s sad to see this.  The smartest people I know are physicists.  Contrast the life sciences, where experiments are easily done, and new data to explain arrives weekly.

A mathematical kludge and its repair

If you are in a train going x miles an hour and throw a paper airplane forward at x feet per second (or x * 3600/5280 miles per hour), someone outside the train sees the plane move a bit faster than x miles an hour.  Well, that’s the whole idea of the Galilean transformation.  Except that they don’t really see velocities adding that way for really fast velocities (close to the speed of light).

Relativity says that there are no privileged sites of observation and that no matter how fast two observer frames are moving relative to each other light will zing past both at the same speed (3 x 10^8 meters/second, 186,000 miles/second).
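A minimal sketch of the difference (standard special relativity, not from Callahan’s pages cited below): Galileo says velocities simply add, while Einstein’s composition law keeps everything below c:

c = 3.0e8  # speed of light, m/s

def galilean(u, v):
    return u + v                           # velocities simply add

def einstein(u, v):
    return (u + v) / (1 + u * v / c**2)    # relativistic composition law

# Train plus paper airplane: the two rules agree to many decimal places
print(galilean(30.0, 3.0), einstein(30.0, 3.0))

# Near light speed they part company
print(galilean(0.9 * c, 0.9 * c) / c)      # 1.8: faster than light, impossible
print(einstein(0.9 * c, 0.9 * c) / c)      # about 0.9945, still below c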

All of Newton’s mechanics and force laws obey the Galilean transformation (e.g. velocities add).  Maxwell conceived a series of 4 laws linking electricity and magnetism together, which predicted new phenomena (such as radio waves) and showed that light was basically a wave traveling through space.

Even though incredibly successful, Maxwell’s laws led to an equation (the wave equation) which didn’t obey the Galilean transformation.  This led Lorentz to modify the Galilean transformation so that the wave equation kept the same form for observers moving relative to each other.  If you’ve got some mathematical background an excellent exposition of all this is to be found in “The Geometry of Spacetime” by James J. Callahan pp. 22 – 27.

The Lorentz transformation is basically a kludge which makes things work out.  But he had no understanding of why it worked (or what it meant).  The equations produced by the Lorentz transformation are ugly.

Here are the variables involved.

t’ is time in the new frame, t in the old, x’ is position in the new frame x in the old. v is the constant velocity at which the two observation frames are moving relative to each other. c is the common symbol for the velocity of light.

Here are the two equations

t’ = ( t – vx/c^2 ) / sqrt (1 – v^2/c^2)

x’ = ( x – vt ) / sqrt (1 – v^2/c^2)
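As a sanity check, here is a small sketch (mine) verifying the property the whole argument turns on: an event on a light pulse’s worldline (x = ct) in one frame transforms to an event on a light worldline (x’ = ct’) in the other, and the combination x^2 – (ct)^2 is unchanged:

import math

c = 3.0e8  # m/s

def lorentz(x, t, v):
    # transform an event (x, t) into a frame moving at constant velocity v
    gamma = 1 / math.sqrt(1 - v**2 / c**2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

v = 0.6 * c
t = 2.0
x = c * t                                      # an event on the light pulse's worldline
xp, tp = lorentz(x, t, v)

print(xp - c * tp)                             # essentially 0: still a light worldline
print(x**2 - (c * t)**2, xp**2 - (c * tp)**2)  # the interval is preserved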

Enter Einstein — he derived them purely by thought.  I recommend Appendix 1 in Einstein’s book “Relativity”.  Amazingly you do not need tensors or even calculus to understand his derivation — just high school algebra (and not much of that — no trigonometry etc. etc.)  You will have the pleasure of watching the power of a great mind at work.

One caveat.  The first few equations won’t make much sense if you hit the appendix without having read the rest of the book (as I did).

Light travels at speed c, so multiplying c by the elapsed time gives you where it is after t seconds.  In equations x = ct.  This is also true for another reference frame x’ = ct’.

This implies that both x – ct =  0 and x’ – ct’ = 0

Then Einstein claims that these two equations imply that

(x – ct) = lambda * (x’ – ct’) ; lambda is some nonzero number.

Say what?  Is he really saying 0 = lambda * 0?

This is mathematical fantasy.  Lambda could be anything and the statement lacks mathematical content.

Yes, but . . .

It does not lack physical content, which is where the rest of the book comes in.

This is because the two frames (x, t) and (x’ , t’) are said to be in ‘standard configuration’, which is a complicated state of affairs. We’ll see why y, y’, z, z’ are left out shortly.

The assumptions of the standard configuration are as follows:

  • An observer in frame of reference K defines events with coordinates t, x
  • Another frame K’ moves with velocity v relative to K, with an observer in this moving frame K’ defining events using coordinates t’, x’
  • The coordinate axes in each frame of reference are parallel
  • The relative motion is along the coincident xx’ axes (y = y’ and z = z’ for all time, only x’ changes, explaining why they are left out)
  • At time t = t’ = 0, the origins of both coordinate systems are the same.

Another assumption is that at time t = t’ = 0 a light pulse is emitted by K at the origin (x = x’ = 0)

The only possible events in K and K’ are observations of the light pulse. Since the velocity of light (c) is independent of the coordinate system, K’ will see the pulse at time t’ and x’ axis location ct’, NOT x’-axis location ct’ – vt’ (which is what Galileo would say). So whenever K sees the pulse at time t and on worldline (ct, t), K’ will see the pulse SOMEWHERE on worldline (ct’, t’).

The way to express this mathematically is by equation (3): (x – ct) = lambda * (x’ – ct’)

This may seem trivial, but I spent a lot of time puzzling over equation (3)

Now get Einstein’s book and watch him derive the complicated looking Lorentz transformations using simple math and complex reasoning.

Bye bye stoichiometry

Until recently, developments in physics basically followed earlier work by mathematicians.  Think relativity following Riemannian geometry by 40 years.  However in the past few decades, physicists have developed mathematical concepts before the mathematicians — think mirror symmetry which came out of string theory — https://en.wikipedia.org/wiki/Mirror_symmetry_(string_theory). You may skip the following paragraph, but here is what it meant to mathematics — from a description of a 400+ page book by Amherst College’s own David A. Cox

Mirror symmetry began when theoretical physicists made some astonishing predictions about rational curves on quintic hypersurfaces in four-dimensional projective space. Understanding the mathematics behind these predictions has been a substantial challenge. This book is the first completely comprehensive monograph on mirror symmetry, covering the original observations by the physicists through the most recent progress made to date. Subjects discussed include toric varieties, Hodge theory, Kahler geometry, moduli of stable maps, Calabi-Yau manifolds, quantum cohomology, Gromov-Witten invariants, and the mirror theorem. This title features: numerous examples worked out in detail; an appendix on mathematical physics; an exposition of the algebraic theory of Gromov-Witten invariants and quantum cohomology; and, a proof of the mirror theorem for the quintic threefold.

Similarly, advances in cellular biology have come from chemistry.  Think DNA and protein structure, enzyme analysis.  However, cell biology is now beginning to return the favor and instruct chemistry by giving it new objects to study. Think phase transitions in the cell, liquid–liquid phase separation, liquid droplets, and many other names (the field is in flux) as chemists begin to explore them.  Unlike most chemical objects, they are big, or they wouldn’t be visible microscopically, so they contain many, many more molecules than chemists are used to dealing with.

These objects do not have any sort of definite stoichiometry and are made of RNA and the proteins which bind it (and sometimes DNA).  They go by any number of names (processing bodies, stress granules, nuclear speckles, Cajal bodies, promyelocytic leukemia bodies, germline P granules).  Recent work has shown that DNA may be compacted similarly using the linker histone [ PNAS vol. 115 pp. 11964 – 11969 ’18 ].

The objects are defined essentially by looking at them.  By golly they look like liquid drops, and they fuse and separate just like drops of water.  Once this is done they are analyzed chemically to see what’s in them.  I don’t think theory can predict them now, and they were never predicted a priori as far as I know.

No chemist in their right mind would have made them to study.  For one thing they contain tens to hundreds of different molecules.  Imagine trying to get a grant to see what would happen if you threw that many different RNAs and proteins together in varying concentrations.  Physicists have worked for years on phase transitions (but usually with a single molecule — think water).  So have chemists — think crystallization.

Proteins move in and out of these bodies in seconds.  Proteins found in them have low amino acid complexity (mostly made of only a few of the 20), and unlike enzymes, their sequences are intrinsically disordered, so forget the lock and key and induced fit concepts for enzymes.

Are they a new form of matter?  Is there any limit to how big they can be?  Are the pathologic precipitates of neurologic disease (neurofibrillary tangles, senile plaques, Lewy bodies) similar?  There certainly are plenty of distinct proteins in the senile plaque, but they don’t look like liquid droplets.

It’s a fascinating field to study.  Although made of organic molecules, there seems to be little for the organic chemist to say, since the interactions aren’t covalent.  Time for physical chemists and polymer chemists to step up to the plate.

Book recommendation

“Losing the Nobel Prize” by Brian Keating is a book you should read if you have any interest in 1. physics 2. astronomy 3. cosmology 4. the sociology of the scientific enterprise (physics division) 5. the Nobel prize 6. the BICEP and BICEP2 experiments.

It contains extremely clear explanations of the following

1. The spiderweb bolometer detector used to measure the curvature of the universe

2. How Galileo’s telescope works and what he saw

3. How refracting and reflecting telescopes work

4. The Hubble expansion of the universe and the problems it caused

5. The history of the big bang, its inherent problems, how Guth solved some of them but created more

6. How bouncing off water (or dust) polarizes light

7. The smoothness problem, the flatness problem and the horizon problem.

8. The difference between B modes and E modes and why one would be evidence of gravitational waves which would be evidence for inflation.

9. Cosmic background radiation

The list could be much longer.  The writing style is clear and very informal.   Example: he calls the dust found all over the universe — cosmic schmutz.   Then there are the stories about explorers trying to reach the south pole, and what it’s like getting there (and existing there).

As you probably know BICEP2 found galactic dust and not the polarization pattern produced by gravitational waves.  The initial results were announced 17 March 2014 to much excitement.  It was the subject of a talk given the following month at Harvard Graduate Alumni Day, an always interesting set of talks.  I didn’t go to the talk but there were plenty of physicists around to ask about the results (which were explained nowhere near as clearly as in this book).  All of them responded to my questions the same way — “It’s complicated.”

The author Brian Keating has a lot to say about Nobels and their pursuit and how distorting it is, but that’s the subject of another post, as purely through chance I’ve known 9 Nobelists before they received their prize.

It will also lead to another post about the general unhappiness of a group of physicists.

Buy and read the book.

How to study math by yourself far away from an academic center

“Differential geometry is the study of things that are invariant under a change of notation.”   Sad but true, and not original as it appears in the introduction to two different differential geometry books I own.

Which brings me to symbol tables and indexes in math books. If you have a perfect mathematical mind and can read math books straight through understanding everything and never need to look back in the book for a symbol or concept you’re not clear on, then you don’t need them.  I suspect most people aren’t like that.  I’m not.

Even worse is failing to understand something (say the connection matrix) and trying to find another discussion in another book.  If you go to an older book (most of which do not have symbol tables) the notation will likely be completely different and you have to start back at ground zero.  This happened when I tried to find what a connection form was, finding the discussion in one book rather skimpy.  I found it in O’Neill’s book on elementary differential geometry, but the notation was completely different and I had to read page after page to pick up the lingo until I could understand his discussion (which was quite clear).

Connections are important, and they underlie gauge theory and a lot of modern physics.

Good math books aren’t just theorem proof theorem proof, but have discussions about why you’d want to know something etc. etc.  Even better are discussions about why things are the way they are.  Tu’s book on Differential geometry is particularly good on this, showing (after a careful discussion of why the directional derivative is the way it is) how the rather abstract definition of a connection on a manifold arises by formalizing the properties of the directional derivative and using them to define the connection.

Unfortunately, he presents curvature in a very ad hoc fashion, and I’m back to starting at ground zero in another book (older and without a symbol table).

Nonetheless I find it very helpful when taking notes to always start by listing what is given.  Then a statement of the theorem, particularly putting quantifiers like “for all i in {1, … , n}” in front.  In particular, if a concept is defined, put down how the concept is written in the definition

e.g.

Given X, Y smooth vector fields

def:  Lie bracket (written [X, Y]) ::= D_X Y – D_Y X

with maybe a link to a page in your notes where D_X is defined
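If you want to check your notes against a machine, here is a small sketch using sympy (my code; D_X here is the ordinary directional derivative on R^2, so componentwise [X, Y]^i = sum_j (X^j ∂Y^i/∂x^j – Y^j ∂X^i/∂x^j)):

import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

# vector fields on R^2 as component tuples: X = X^1 d/dx + X^2 d/dy
X = (x * y, sp.Integer(1))
Y = (x**2, y)

def lie_bracket(X, Y, coords):
    # [X, Y]^i = sum_j (X^j dY^i/dx^j - Y^j dX^i/dx^j), i.e. D_X Y - D_Y X
    return tuple(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(len(coords)))
        for i in range(len(coords)))

print(lie_bracket(X, Y, coords))   # (x**2*y - x*y, 1)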

So before buying a math book, look to see how thorough the index is, and whether it has a symbol table.

A creation myth

Sigmund Freud may have been wrong about penis envy, but most lower forms of scientific life (chemists, biologists) do have physics envy — myself included.  Most graduate chemists have taken a quantum mechanics course, if only to see where atomic and molecular orbitals come from.  Anyone doing physical chemistry has likely studied statistical mechanics. I was fortunate enough to audit one such course given by E. Bright Wilson (of Pauling and Wilson).

Although we no longer study physics per se, most of us read books about physics.  Two excellent such books have come out in the past year.  One is “What is Real?” — https://www.basicbooks.com/titles/adam-becker/what-is-real/9780465096053/, the other is “Lost in Math” by Sabine Hossenfelder whose blog on physics is always worth reading, both for herself and the heavies who comment on what she writes — http://backreaction.blogspot.com

Both books deserve a long discursive review here. But that’s for another time.  Briefly, Hossenfelder thinks that physics for the past 30 years has become so fascinated with elegant mathematical descriptions of nature, that theories are judged by their mathematical elegance and beauty, rather than agreement with experiment.  She acknowledges that the experiments are both difficult and expensive, and notes that it took a century for one such prediction (gravitational waves) to be confirmed.

The mathematics of physics can certainly be seductive, and even a lowly chemist such as myself has been bowled over by it.  Here is how it hit me.

Budding chemists start out by learning that electrons like to be in filled shells. The first shell holds 2 electrons, the next 2 + 6 more, etc. etc. It allows the neophyte to make some sense of the periodic table (as long as they deal with low atomic numbers — why the 4s electrons are of lower energy than the 3d electrons still seems quite ad hoc to me). Later on we were told that this is because of quantum numbers n, l, m and s. Then we learn that atomic orbitals have shapes, in some weird way determined by the quantum numbers, etc. etc.

Recursion relations are no stranger to the differential equations course, where you learn to (tediously) find them for a polynomial series solution for the differential equation at hand. I never really understood them, but I could use them (like far too much math that I took back in college).

So it wasn’t a shock when the QM instructor back in 1961 got to them in the course of solving the Schrodinger equation for the hydrogen atom (with its radially symmetric potential). First the equation had to be expressed in spherical coordinates (r, theta and phi) which made the Laplacian look rather fierce. Then the equation was split into 3 variables, each involving one of r, theta or phi. The easiest to solve was the one involving phi which involved only a complex exponential. But the periodic nature of the solution made the magnetic quantum number fall out. Pretty good, but nothing earthshaking.

Recursion relations made their appearance with the solution of the radial and the theta equations. So it was plug and chug time with series solutions and recursion relations so things wouldn’t blow up (or as Dr. Gouterman, the instructor, put it: the electron has to be somewhere, so the wavefunction must be zero at infinity). MEGO (My Eyes Glazed Over) until all of a sudden there were the principal quantum number (n) and the azimuthal quantum number (l) coming directly out of the recursion relations.
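You can see the same miracle in a system simpler than hydrogen.  Here is a sketch (mine, for the one-dimensional harmonic oscillator in dimensionless units, where the recursion is easy to state): writing the wavefunction as h(x)e^(-x^2/2) and expanding h(x) = sum a_k x^k turns the Schrodinger equation into the recursion a_(k+2) = a_k (2k + 1 – 2E)/((k+1)(k+2)).  The wavefunction only stays finite if the series terminates, which happens exactly when E = n + 1/2:

def coefficients(E, a0=1.0, a1=0.0, n_terms=12):
    # series coefficients for h'' - 2x h' + (2E - 1) h = 0
    a = [a0, a1]
    for k in range(n_terms - 2):
        a.append(a[k] * (2 * k + 1 - 2 * E) / ((k + 1) * (k + 2)))
    return a

print(coefficients(E=2.5))   # quantized energy (n = 2): zeros from x^4 on
print(coefficients(E=2.3))   # non-quantized energy: the series never dies out

The quantum number n is not put in by hand; it falls out of demanding that the recursion terminate, just as n and l fell out of the hydrogen recursions in Gouterman’s course.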

When I first realized what was going on, it really hit me. I can still see the room and the people in it (just as people can remember exactly where they were and what they were doing when they heard about 9/11 or (for the oldsters among you) when Kennedy was shot — I was cutting a physiology class in med school). The realization that what I had considered mathematical diddle, in some way was giving us the quantum numbers and the periodic table, and the shape of orbitals, was a glimpse of incredible and unseen power. For me it was like seeing the face of God.

But what interested me the most about “Lost in Math” was Hossenfelder’s discussion of the different physical laws appearing at different physical scales (e.g. effective laws), emergent properties and reductionism (pp. 44 ff.).  Although things at larger scales (atoms) can be understood in terms of the physics of smaller scales (protons, neutrons, electrons), the details of elementary particle interactions (quarks, gluons, leptons etc.) don’t matter much to the chemist.  The orbits of planets don’t depend on planetary structure, etc. etc.  She notes that reduction of events at one scale to those at a smaller one is not an optional philosophical position to hold; it’s just the way nature is, as revealed by experiment.  She notes that you could ‘in principle, derive the theory for large scales from the theory for small scales’ (although I’ve never seen it done) and then she moves on.

But the different structures and different laws at different scales are what have always fascinated me about the world in which we exist.  Do we have a model for a world structured this way?

Of course we do.  It’s the computer.

Neurologists have always been interested in computers, and computer people have always been interested in the brain — von Neumann wrote “The Computer and the Brain” shortly before his death in 1957.

Back in med school in the 60s people were just figuring out how neurons talked to each other where they met at the synapse.  It was with a certain degree of excitement that we found that information appeared to flow just one way across the synapse (from the PREsynaptic neuron to the POSTsynaptic neuron), just like the vacuum tubes of the earliest computers.  Current (and information) could flow just one way.

The microprocessors based on transistors that a normal person could play with came out in the 70s.  I was naturally interested, as having taken QM I thought I could understand how transistors work.  I knew about energy gaps in atomic spectra, but how in the world a crystal with zillions of atoms and electrons floating around could produce one seemed like a mystery to me, and still does.  It’s an example of ’emergence’ about which more later.

But forgetting all that, it’s fairly easy to see how electrons could flow from a semiconductor with an abundance of them (due to doping) to a semiconductor with a deficit — and have a hard time flowing back.  Again a one way valve, just like our concept of the synapses.

Now of course, we know information can flow the other way in the synapse from POST synaptic to PREsynaptic neuron, some of the main carriers of which are the endogenous marihuana-like substances in your brain — anandamide etc. etc.  — the endocannabinoids.

In 1968 my wife learned how to do assembly language coding with punch cards, ones and zeros, the whole bit.  Why?  Because I was scheduled for two years of active duty as an Army doc, a time in which we had half a million men in Vietnam.  She was preparing to be a widow with 2 infants, as the Army sent me a form asking for my preferences in assignment, a form so out of date that it offered the option of taking my family with me to Vietnam if I’d extend my tour over there to 4 years.  So I sat around drinking Scotch and reading Faulkner waiting to go in.

So when computers became something the general populace could have, I tried to build a mental one using AND, OR, and NOT logic gates and 1s and 0s for high and low voltages. Since I could see how to build the three using transistors (reductionism), I just went one plane higher.  Note, although the gates can be easily reduced to transistors, and transistors to p and n type semiconductors, there is nothing in the laws of semiconductor physics that implies putting them together to form logic gates.  So the higher plane of logic gates is essentially an act of creation.  They do not necessarily arise from transistors.

What I was really interested in was hooking the gates together to form an ALU (arithmetic and logic unit).  I eventually did it, but doing so showed me the necessity of other components of the chip (the clock and in particular the microcode which lies below assembly language instructions).
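Here is a sketch of that mental exercise (mine, in Python rather than wires): build the gates from a single primitive, then wire them into the one-bit full adder that, repeated, is the heart of an ALU’s arithmetic:

# everything from one primitive, the way gates come from transistors
def NAND(a, b): return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    # add three bits; the building block of an ALU's adder
    s1 = XOR(a, b)
    return XOR(s1, carry_in), OR(AND(a, b), AND(s1, carry_in))

def add4(x, y):
    # ripple-carry addition of two 4-bit numbers
    result, carry = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add4(5, 6))  # 11: arithmetic out of nothing but NAND gates

Nothing in the NAND function implies the adder; the wiring is the added act of creation.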

The next level up, is what my wife was doing — sending assembly language instructions of 1’s and 0’s to the computer, and watching how gates were opened and shut, registers filled and emptied, transforming the 1’s and 0’s in the process.  Again note that there is nothing necessary in the way the gates are hooked together to make them do anything.  The program is at yet another higher level.

Above that are the higher level programs, Basic, C and on up.  Above that hooking computers together to form networks and then the internet with TCP/IP  etc.

While they all can be reduced, there is nothing inherent in the things that they are reduced to which implies their existence.  Their existence was essentially created by humanity’s collective mind.

Could something similar be going on in the levels of the world seen in physics?  Here’s what Nobel laureate Robert Laughlin (he of the fractional quantum Hall effect) has to say about it — http://www.pnas.org/content/97/1/28.  Note that this was written before people began taking quantum computers seriously.

“However, it is obvious glancing through this list that the Theory of Everything is not even remotely a theory of every thing (2). We know this equation is correct because it has been solved accurately for small numbers of particles (isolated atoms and small molecules) and found to agree in minute detail with experiment (3–5). However, it cannot be solved accurately when the number of particles exceeds about 10. No computer existing, or that will ever exist, can break this barrier because it is a catastrophe of dimension. If the amount of computer memory required to represent the quantum wavefunction of one particle is N, then the amount required to represent the wavefunction of k particles is N^k. It is possible to perform approximate calculations for larger systems, and it is through such calculations that we have learned why atoms have the size they do, why chemical bonds have the length and strength they do, why solid matter has the elastic properties it does, why some things are transparent while others reflect or absorb light (6). With a little more experimental input for guidance it is even possible to predict atomic conformations of small molecules, simple chemical reaction rates, structural phase transitions, ferromagnetism, and sometimes even superconducting transition temperatures (7). But the schemes for approximating are not first-principles deductions but are rather art keyed to experiment, and thus tend to be the least reliable precisely when reliability is most needed, i.e., when experimental information is scarce, the physical behavior has no precedent, and the key questions have not yet been identified. There are many notorious failures of alleged ab initio computation methods, including the phase diagram of liquid 3He and the entire phenomenology of high-temperature superconductors (8–10). Predicting protein functionality or the behavior of the human brain from these equations is patently absurd. So the triumph of the reductionism of the Greeks is a pyrrhic victory: We have succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything only to discover that it has revealed exactly nothing about many things of great importance.”
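Laughlin’s ‘catastrophe of dimension’ is easy to feel with a little arithmetic (my sketch, for the simplest possible particles, two-state spins, so N = 2):

bytes_per_amplitude = 16   # one complex number in double precision
for k in (10, 30, 50, 300):
    amplitudes = 2 ** k    # a k-spin wavefunction needs 2^k amplitudes
    print(k, amplitudes * bytes_per_amplitude, "bytes")
# k = 10 is about 16 kilobytes; k = 50 is about 18 petabytes; by k = 300
# the number of amplitudes exceeds the number of atoms in the observable
# universe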

So reductionism doesn’t explain the laws we have at various levels.  They are regularities to be sure, and they describe what is happening, but a description is NOT an explanation, in the same way that Newton’s gravitational law predicts zillions of observations about the real world without explaining what gravity is.  But even Newton famously said Hypotheses non fingo (Latin for “I feign no hypotheses”) when discussing the action at a distance which his theory of gravity entailed. Actually he thought the idea was crazy. “That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.”

So are the various physical laws things that are imposed from without, by God only knows what?  The computer with its various levels of phenomena certainly was consciously constructed.

Is what I’ve just written a creation myth or is there something to it?

The Gambler’s fallacy is actually based on our experience

We don’t understand randomness very well. When asked to produce a random sequence, we never produce enough repeating patterns, thinking that they are less probable. This is the Gambler’s fallacy.  If heads come up 3 times in a row, the gambler will bet on tails on the next throw.  Why?  This reasoning is actually based on experience.

The following comes from a very interesting paper of a few years ago  [ Proc. Natl. Acad. Sci. vol. 112 pp. 3788 – 3792 ’15 ].  There is a surprising amount of systematic structure lurking within random sequences. For example, in the classic case of tossing a fair coin, where the probability of each outcome (heads or tails) is exactly 0.5 on every single trial, one would naturally assume that there is no possibility for some kind of interesting structure to emerge, given such a simple form of randomness.

However if you record the average amount of time for a pattern to first occur in a sequence (i.e., the waiting time statistic), it is longer for a repetition (head–head HH or tail–tail TT; an average of six tosses is required) than for an alternation (HT or TH; only four tosses are needed). This is despite the fact that on average, repetitions and alternations are equally probable (occurring once in every four tosses, i.e., the same mean time statistic).

For both of these facts to be true, it must be that repetitions are more bunched together over time—they come in bursts, with greater spacing between, compared with alternations (which is why they appear less frequent to us). Intuitively, this difference comes from the fact that repetitions can build upon each other (e.g., sequence HHH contains two instances of HH), whereas alternations cannot.

Statistically, the mean time and waiting time delineate the mean and variance in the distribution of the interarrival times of patterns (respectively). Despite the same frequency of occurrence (i.e., the same mean), alternations are more evenly distributed over time than repetitions (they have different variances) — which is exactly why they appear less frequent, hence less likely.
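You don’t have to take the paper’s word for it; a few lines of simulation (my sketch) reproduce the unequal waiting times directly:

import random

def waiting_time(pattern):
    # tosses of a fair coin until the pattern first appears
    seq = ""
    while not seq.endswith(pattern):
        seq += random.choice("HT")
    return len(seq)

trials = 100_000
for pattern in ("HH", "HT"):
    mean = sum(waiting_time(pattern) for _ in range(trials)) / trials
    print(pattern, round(mean, 2))   # HH comes out near 6.0, HT near 4.0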

Then the authors go on to develop a model of the way we think about these things.

“Is this latent structure of waiting time just a strange mathematical curiosity or could it possibly have deep implications for our cognitive level perceptions of randomness? It has been speculated that the systematic bias in human randomness perception such as the gambler’s fallacy might be due to the greater variance in the interarrival times or the “delayed” waiting time for repetition patterns. Here, we show that a neural model based on a detailed biological understanding of the way the neocortex integrates information over time when processing sequences of events is naturally sensitive to both the mean time and waiting time statistics. Indeed, its behavior is explained by a simple averaging of the influences of both of these statistics, and this behavior emerges in the model over a wide range of parameters. Furthermore, this averaging dynamic directly produces the best-fitting bias-gain parameter for an existing Bayesian model of randomness judgments, which was previously an unexplained free parameter and obtained only through parameter fitting. We show that we can extend this Bayesian model to better fit the full range of human data by including a higher-order pattern statistic, and the neurally derived bias-gain parameter still provides the best fit to the human data in the augmented model. Overall, our model provides a neural grounding for the pervasive gambler’s fallacy bias in human judgments of random processes, where people systematically discount repetitions and emphasize alternations.”

Fascinating stuff.