
Mathematics and the periodic table

It isn’t surprising that math is involved in the periodic table. Decades before the existence of atoms was established for sure (Einstein’s 1905 work on Brownian motion — https://physicsworld.com/a/einsteins-random-walk/), Mendeleev arranged the known elements in a table according to their chemical properties. Math is great at studying and describing structure, and the periodic table is full of it.

What is surprising is how periodic table structure arises from math that ostensibly has absolutely nothing to do with chemistry. Here are 3 examples.

The first occurred exactly 60 years ago to the month, in grad school. The instructor was taking a class of budding chemists through the solution of the Schrodinger equation for the hydrogen atom.

Recursion relations are no strangers to anyone who has taken a differential equations course, where you learn to (tediously) find them for a polynomial series solution of the differential equation at hand. I never really understood them, but I could use them (like far too much math that I took back then).

So it wasn’t a shock when the QM instructor back then got to them in the course of solving the hydrogen atom (with its radially symmetric potential). First the equation had to be expressed in spherical coordinates (r, theta and phi), which made the Laplacian look rather fierce. Then the equation was split into 3 equations, each involving only one of r, theta or phi. The easiest to solve was the one involving phi, which involved only a complex exponential. But the periodic nature of the solution made the magnetic quantum number fall out. Pretty good, but nothing earthshaking.
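To see concretely how the periodicity does the work, here is a minimal sketch of that phi equation in modern notation (the standard separation-of-variables step, with m the separation constant):

\frac{d^2 \Phi}{d\phi^2} = -m^2 \Phi \quad\Rightarrow\quad \Phi(\phi) = e^{im\phi}

\Phi(\phi + 2\pi) = \Phi(\phi) \quad\Rightarrow\quad e^{2\pi i m} = 1 \quad\Rightarrow\quad m = 0, \pm 1, \pm 2, \ldots

Single-valuedness of the wavefunction is the whole trick: m has to be an integer, and that integer is the magnetic quantum number.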

Recursion relations made their appearance with the solution of the radial and the theta equations. So it was plug and chug time with series solutions and recursion relations, arranged so things wouldn’t blow up (or as Dr. Gouterman put it, the electron has to be somewhere, so the wavefunction must be zero at infinity). MEGO (My Eyes Glazed Over) until all of a sudden there were the principal quantum number (n) and the azimuthal quantum number (l) coming directly out of the recursions.
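For the curious, the radial recursion looks roughly like this in the usual dimensionless textbook variables, with c_j the coefficients of the power series (a sketch, not necessarily the notation used in that course):

c_{j+1} = \frac{2\,(j + l + 1 - n)}{(j+1)(j + 2l + 2)}\; c_j

Unless the numerator eventually hits zero, the series blows up at infinity, so it must terminate: j_{max} + l + 1 = n, an integer. That is the principal quantum number falling out of a recursion relation.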

When I first realized what was going on, it really hit me. I can still see the room and the people in it (just as people can remember exactly where they were and what they were doing when they heard about 9/11 or (for the oldsters among you) when Kennedy was shot — I was cutting a physiology class in med school). The realization that what I had considered mathematical diddle was, in some way, giving us the quantum numbers, the periodic table, and the shapes of orbitals was a glimpse of incredible and unseen power. For me it was like seeing the face of God.

The second and third examples occurred this year as I was going through Tony Zee’s book “Group Theory in a Nutshell for Physicists”.

The second example occurs with the rotation group in 3 dimensions: the group of 3 x 3 invertible matrices, each of which gives the identity when multiplied by its transpose and has determinant +1. It is called SO(3).

Then he tensors 2 rotation matrices together to get a 9 x 9 matrix. Zee then looks for the irreducible representations of which it is composed and finds that there is a 3×3, a 1×1 and a 5×5. The 5×5 piece acts on tensors that are both traceless and symmetric. Note that 5 = 2(2) + 1. If you tensor 3 rotation matrices together you get (among other things) a 2(3) + 1 = 7 dimensional piece; a 7 x 7 matrix.
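Here is a small numerical check of that 1 + 3 + 5 decomposition (my own sketch in Python, not Zee’s; it assumes scipy is available to generate a random rotation). A 3 x 3 tensor splits into a trace part, an antisymmetric part and a traceless symmetric part, and each piece stays in its own subspace under rotation:

import numpy as np
from scipy.stats import special_ortho_group   # random SO(3) matrices

R = special_ortho_group.rvs(3)        # R @ R.T = identity, det(R) = +1
T = np.random.rand(3, 3)              # an arbitrary rank-2 tensor: 9 components

# Split T into its three pieces: trace (1), antisymmetric (3), traceless symmetric (5)
trace_part    = np.trace(T) / 3 * np.eye(3)
antisym_part  = (T - T.T) / 2
sym_traceless = (T + T.T) / 2 - trace_part

rotate = lambda X: R @ X @ R.T        # how a rank-2 tensor transforms under rotation

# Rotating and then splitting gives the same pieces as splitting and then rotating,
# i.e. each subspace is invariant under SO(3) -- the decomposition into irreducibles.
RT = rotate(T)
print(np.allclose(np.trace(RT) / 3 * np.eye(3), rotate(trace_part)))    # True
print(np.allclose((RT - RT.T) / 2, rotate(antisym_part)))               # True
print(np.allclose((RT + RT.T) / 2 - np.trace(RT) / 3 * np.eye(3),
                  rotate(sym_traceless)))                               # True

The antisymmetric piece has 3 independent components and transforms like an ordinary vector, which is why a 3 reappears alongside the 1 and the 5.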

If you’re a chemist this is beginning to look like the famous 2L + 1 formula for the number of magnetic quantum numbers given an orbital quantum number of L. The application of a magnetic field to an atom causes the orbital momentum L to split into 2L + 1 magnetic eigenvalues. And you get this from the dimension of a particular irreducible representation of a group. Incredible. How did abstract math know this?

The third example also occurs a bit farther along in Zee’s book, starting with the basis vectors (Jx, Jy, Jz) of the Lie algebra of the rotation group SO(3). These are then combined to form J+ and J-, which raise and lower the eigenvalues of Jz. A fairly long way from chemistry, you might think.

All state vectors in quantum mechanics have absolute value 1 in Hilbert space, which means the eigenvectors must be normalized to one using complex constants. Simply by assuming that the number of eigenvalues is finite, there must be a highest one (call it j). This leads to a recursion relation for the normalization constants, and you wind up with the fact that the eigenvalues of Jz run from j down to -j in integer steps. You get the simple equation s = 2j where s is a non-negative integer. The 2j + 1 formula arises again, but that isn’t what is so marvelous.
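In equations, the step being described is the standard ladder-operator result (in units with hbar = 1):

J_{\pm}\,|j, m\rangle = \sqrt{j(j+1) - m(m \pm 1)}\;\,|j, m \pm 1\rangle

Starting at the highest eigenvalue m = j and lowering repeatedly, the square root forces the ladder to stop at m = -j, so j - (-j) = 2j must be a non-negative integer s.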

j doesn’t have to be an integer. It could be 1/2, purely by the math. The 1/2 gives 2(1/2) + 1 = 2, i.e. two values. These turn out to be the spin quantum numbers for the electron. Something completely out of left field, and yet purely mathematical in origin. Spin wasn’t introduced until 1924 by Pauli — long after the math had been worked out.
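A quick numerical illustration (my own sketch in Python, hbar = 1): build Jz and the ladder operators in the (2j + 1)-dimensional representation straight from the formula above and check the commutation relations. Setting j = 1/2 gives 2 x 2 matrices, the electron’s two spin states.

import numpy as np

def angular_momentum_matrices(j):
    # Jz, J+, J- in the (2j + 1)-dimensional representation, hbar = 1
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)                    # m runs j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    # ladder formula: J+ |j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>
    Jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), 1).astype(complex)
    return Jz, Jp, Jp.conj().T

for spin in (0.5, 1, 1.5):
    Jz, Jp, Jm = angular_momentum_matrices(spin)
    ok = (np.allclose(Jz @ Jp - Jp @ Jz, Jp) and      # [Jz, J+] = +J+
          np.allclose(Jz @ Jm - Jm @ Jz, -Jm))        # [Jz, J-] = -J-
    print(f"j = {spin}: dimension {Jz.shape[0]} = 2j + 1, commutation relations hold: {ok}")

# For j = 1/2, (Jp + Jm), -1j * (Jp - Jm) and 2 * Jz are exactly the
# Pauli matrices sigma_x, sigma_y, sigma_z.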

Incredible.  

The Representation of group G on vector space V is really a left action of the group on the vector space

Say what? What does this have to do with quantum mechanics? Quite a bit. Practically everything in fact. Most chemists learn quantum mechanics because they want to see where atomic orbitals come from. So they stagger through the solution of the Schrodinger equation, where the quantum numbers appear as conditions on the recursion relations for its power series solutions.

Forget the Schrodinger equation (for now); quantum mechanics is really written in the language of linear algebra. Feynman warned us not to consider ‘how it can be like that’, but at least you can understand the ‘that’ — i.e. linear algebra. In fact, the instructor in a graduate course in abstract algebra I audited opened the linear algebra section with the remark that the only functions mathematicians really understand are the linear ones.

The definitions used (vector space, inner product, matrix multiplication, Hermitian operator) are obscure and strange. You can memorize them and mumble them as incantations when needed, or you can understand why they are the way they are and where they come from. So if you are a bit rusty on your linear algebra I’ve written a series of 9 posts on the subject — here’s a link to the first: https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/ — just follow the links after that.

Just to whet your appetite, all of quantum mechanics consists of manipulation of a particular vector space called Hilbert space. Yes all of it.

Representations are a combination of abstract algebra and linear algebra, and are crucial in elementary particle physics. In fact elementary particles are representations of abstract symmetry groups.

So in what follows, I’ll assume you know what vector spaces, linear transformations of them, and their matrix representations are. I’m not going to explain what a group is, but it isn’t terribly complicated. So if you don’t know about them, quit here. The Wiki article is too detailed for what you need to know.

The title of the post really threw me, and understanding it requires significant unpacking of the definitions, but you need to know this if you want to proceed further in physics.

So we’ll start with a group G, its operation *, and its identity element e.

Next we have a set called X — just that, a bunch of elements (called x, y, . . .) with no further structure imposed — you can’t add elements, you can’t multiply them by real numbers. If you could (with a few more details), you’d have a vector space (see the survival guide).

Definition of Left Action (LA) of G on set X

LA : G x X –> X

LA : ( g, x ) |–> (g . x)

Such that the following two properties hold

1. For all x in X, LA : (e, x) |–> (e.x) = x

2. For all g1 and g2 in G and all x in X, LA : ((g1 * g2), x) |–> ((g1 * g2) . x) = g1 . (g2 . x)
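To make the two properties concrete, here is a toy example of my own (not from any of the references): the group Z4 (integers mod 4 under addition, identity 0) acting on the four compass points, which form just a set, not a vector space.

# G = Z4, operation * = addition mod 4, identity e = 0
G, e = [0, 1, 2, 3], 0
def op(g1, g2):
    return (g1 + g2) % 4

X = ['N', 'E', 'S', 'W']       # just a set: no addition, no multiplication by scalars

def LA(g, x):                  # LA : G x X --> X, rotate x by g quarter turns
    return X[(X.index(x) + g) % 4]

# Property 1: LA(e, x) = x for every x in X
print(all(LA(e, x) == x for x in X))                                        # True

# Property 2: LA(g1 * g2, x) = g1 . (g2 . x) for every g1, g2 in G and x in X
print(all(LA(op(g1, g2), x) == LA(g1, LA(g2, x))
          for g1 in G for g2 in G for x in X))                              # True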

Given a vector space V, define GL(V) as the set of invertible linear transformations (LTs) of the vector space to itself. GL(V) becomes a group if you take composition of linear transformations as its operation (it’s all in the survival guide).

Now for the definition of a representation of group G on vector space V

It is a function

rho: G –> GL(V)

rho: g |–> LTg : V –> V linear ; LTg == Linear Transformation labeled by group element g

such that LT(g1 * g2) = LTg1 o LTg2 for all g1, g2 in G (i.e. rho is a group homomorphism: composition in GL(V) matches the group operation * in G)

The representation rho defines a left group action on V

LA : (g, v) |–> LTg(v) — this satisfies the two properties of a left action given above — think about it.
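Continuing the same toy example (again my own sketch): represent Z4 on the vector space V = R^2 by sending g to rotation of the plane by g quarter turns. rho lands in GL(V), it respects the group operation, and the induced left action on V satisfies the two properties.

import numpy as np

def rho(g):                              # rho : G --> GL(V), V = R^2
    theta = g * np.pi / 2                # g quarter turns
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

G, e = [0, 1, 2, 3], 0
op = lambda g1, g2: (g1 + g2) % 4

# rho is a homomorphism: rho(g1 * g2) = rho(g1) composed with rho(g2)
print(all(np.allclose(rho(op(g1, g2)), rho(g1) @ rho(g2))
          for g1 in G for g2 in G))                              # True

LA = lambda g, v: rho(g) @ v             # the induced left action on V
v = np.array([1.0, 2.0])
print(np.allclose(LA(e, v), v))                                  # property 1
print(all(np.allclose(LA(op(g1, g2), v), LA(g1, LA(g2, v)))
          for g1 in G for g2 in G))                              # property 2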

Now you’re ready for some serious study of quantum mechanics. When you read that the representation is acting on some vector space, you’ll know what they are talking about.

How formal tensor mathematics and the postulates of quantum mechanics give rise to entanglement

Tensors continue to amaze. I never thought I’d get a simple mathematical explanation of entanglement, but here it is. Explanation is probably too strong a word, because it relies on the postulates of quantum mechanics, which are extremely simple but which lead to extremely bizarre consequences (such as entanglement). As Feynman famously said, ‘no one understands quantum mechanics’. Despite that, it has never made a prediction not confirmed by experiment, so the theory is correct even if we don’t understand ‘how it can be like that’. 100 years of correct predictions are not to be sneezed at.

If you’re a bit foggy on just what entanglement is, have a look at https://luysii.wordpress.com/2010/12/13/bells-inequality-entanglement-and-the-demise-of-local-reality-i/. Even better, read the book by Zeilinger referred to in the link (if you have the time).

Actually you don’t even need all the postulates of quantum mechanics (as given in the book “Quantum Computation and Quantum Information” by Nielsen and Chuang). No differential equations. No Schrodinger equation. No operators. No eigenvalues. What could be nicer for those thirsting for knowledge? Such a deal ! ! ! Just 2 postulates and a little formal mathematics.

Postulate #1: “Associated to any isolated physical system is a complex vector space with inner product (that is, a Hilbert space) known as the state space of the system. The system is completely described by its state vector, which is a unit vector in the system’s state space.” If this is unsatisfying, see the explication on p. 80 of Nielsen and Chuang (where the postulate appears).
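As a very small concrete instance of Postulate #1 (my sketch, not from the book): the state space of a single qubit is C^2 with the usual inner product, and a state vector is any vector in it normalized to length 1.

import numpy as np

psi = np.array([1 + 1j, 2 - 1j])        # some vector in the state space C^2
psi = psi / np.linalg.norm(psi)         # the state vector must be a unit vector
inner = np.vdot(psi, psi)               # <psi|psi>, conjugating the first argument
print(np.isclose(inner.real, 1.0))      # True: a unit vector in Hilbert space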

Because the linear algebra underlying quantum mechanics seemed to be largely ignored in the course I audited, I wrote a series of posts called Linear Algebra Survival Guide for Quantum Mechanics. The first should be all you need: https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/, but there are several more.

Even though I wrote a post on tensors, showing how they were a way of describing an object independently of the coordinates used to describe it, I didn’t even discuss another aspect of tensors — multilinearity — which is crucial here. The post itself can be viewed at https://luysii.wordpress.com/2014/12/08/tensors/

Start by thinking of a simple tensor as a vector in a vector space. The tensor product is just a way of combining vectors in vector spaces to get another (and larger) vector space. So the tensor product isn’t a product in the sense that multiplication of two objects (real numbers, complex numbers, square matrices) produces another object of exactly the same kind.

So mathematicians use a special symbol for the tensor product — a circle with an x inside. I’m going to use something similar ‘®’ because I can’t figure out how to produce the actual symbol. So let V and W be the quantum mechanical state spaces of two systems.

Their tensor product is just V ® W. Mathematicians can define things any way they want. A crucial aspect of the tensor product is that it is multilinear. So if v and v’ are elements of V, then v + v’ is also an element of V (because two vectors in a given vector space can always be added). Similarly w + w’ is an element of W if w and w’ are. Adding to the confusion when trying to learn this stuff is the fact that all vectors are themselves tensors.

Multilinearity of the tensor product is what you’d think

(v + v’) ® (w + w’) = v ® (w + w’ ) + v’ ® (w + w’)

= v ® w + v ® w’ + v’ ® w + v’ ® w’

You get all 4 tensor products in this case.
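If you like to see these things numerically, the Kronecker product (np.kron) is a concrete realization of ® for finite-dimensional spaces, and the expansion above can be checked directly (a sketch with arbitrary small vectors):

import numpy as np

v, vp = np.array([1.0, 2.0]), np.array([0.0, 1.0])      # v and v' in V
w, wp = np.array([3.0, 1.0]), np.array([1.0, 1.0])      # w and w' in W

lhs = np.kron(v + vp, w + wp)                           # (v + v') ® (w + w')
rhs = np.kron(v, w) + np.kron(v, wp) + np.kron(vp, w) + np.kron(vp, wp)
print(np.allclose(lhs, rhs))                            # True: all 4 tensor products appear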

This brings us to Postulate #2 (actually #4 in the book, on p. 94 — we don’t need the other two — I told you this was fairly simple).

Postulate #2 “The state space of a composite physical system is the tensor product of the state spaces of the component physical systems.”

Where does entanglement come in? Patience, we’re nearly done. One now must distinguish simple and non-simple tensors (see http://planetmath.org/simpletensor). Each of the 4 tensor products in the sum on the last line is simple, being the tensor product of two vectors.

What about v ® w’ + v’ ® w? It isn’t simple, because (as long as v and v’ are linearly independent, and likewise w and w’) there is no way to get it by itself as simple_tensor1 ® simple_tensor2. So it’s called a compound tensor. (v + v’) ® (w + w’) is a simple tensor because v + v’ is just another single element of V (call it v”) and w + w’ is just another single element of W (call it w”).

So for the simple tensor (v + v’) ® (w + w’), the elements of the two state spaces can be understood separately, as though V is in state v” and W is in state w”.

v ® w’ + v’ ® w can’t be understood this way. The full system can’t be understood by considering V and W in isolation, i.e. the two subsystems V and W are ENTANGLED.
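Here is one way to see the difference numerically (my own sketch): reshape a vector in V ® W back into its 2 x 2 matrix of coefficients; the tensor is simple exactly when that matrix has rank 1. The compound tensor v ® w’ + v’ ® w has rank 2, so no single choice of v” and w” reproduces it.

import numpy as np

v, vp = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # v, v' linearly independent in V
w, wp = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # w, w' linearly independent in W

simple   = np.kron(v + vp, w + wp)                      # (v + v') ® (w + w')
compound = np.kron(v, wp) + np.kron(vp, w)              # v ® w' + v' ® w

# Simple (product) state: coefficient matrix has rank 1.  Entangled state: rank > 1.
print(np.linalg.matrix_rank(simple.reshape(2, 2)))      # 1 -- unentangled
print(np.linalg.matrix_rank(compound.reshape(2, 2)))    # 2 -- entangled

(This rank is the Schmidt rank; rank 1 is exactly the condition for each subsystem to have a state of its own.)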

Yup, that’s all there is to entanglement (mathematically at least). The paradoxes of entanglement, including Einstein’s ‘spooky action at a distance’, are left for you to explore — again, Zeilinger’s book is a great source.

But how can it be like that, you ask? Feynman said not to start thinking such thoughts, and if he didn’t know, do you expect a retired neurologist to tell you? Please.