We’ve established a pretty good base camp for the final assault. It’s time to acclimate to the altitude, look around and wax a bit philosophic. What’s happened to the integrals and derivatives in all of this? A vector is a vector and its components can be differentiated, but linear algebra never talks about integrating vectors. During the QM course, I was constantly bombarding the instructor with questions about things I didn’t understand. Finally, he said that he wished the students were asking those sorts of questions. I told him they were just doing what most people do on their first exposure to QM — trying to survive. That’s certainly the way I was the first time around QM. True for calculus as well. I quickly learned to ignore what a Riemann integral really is — the limit of an infinite sum of products. Cut the baloney, to integrate something just find the antiderivative. We all know that. Well, that’s pretty much true for continuous functions and the problems you meet in Calculus I.

Well you’re not in Kansas anymore, and to understand why an infinite dimensional vector is like an integral, you’ve got to go back to Riemann’s definition of the integral of a function. You start with some finite interval (infinite intervals come later). Then you chop it up into many (say 100) smaller nonoverlapping but contiguous subintervals, each of which has a finite nonzero length. Then you pick one value of the function in each subinterval (the value can’t be infinite or the process fails), multiply it by the length of that subinterval, and form the sum of all 100 products (which is just a number after all). Then you chop each of the subintervals into subsubintervals and repeat the process, obtaining a new number. If the series of numbers approaches a limit as the process proceeds, then the integral exists and is that number. Purists will note that I’ve skipped all sorts of analysis: that each interval is a compact (closed and bounded) set of real numbers, that the function is continuous on the intervals so that it reaches a maximum and a minimum on each one, and that if the integral exists, the sum of the maxima times the interval lengths and the sum of the minima times the interval lengths approach each other, etc. etc. Parenthetically, the best analysis book I’ve met is “Understanding Analysis” by Stephen Abbott.
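The chopping-and-summing process above can be sketched in a few lines of code. This is just an illustration, using f(x) = x² on [0, 1] (my choice, not from the text), whose exact integral is 1/3:

```python
# A minimal sketch of a Riemann sum: chop [a, b] into n subintervals,
# pick one value of the function in each (here, the left endpoint),
# multiply by the subinterval length, and sum the products.
def riemann_sum(f, a, b, n):
    width = (b - a) / n              # length of each subinterval
    total = 0.0
    for i in range(n):
        x = a + i * width            # one sample point per subinterval
        total += f(x) * width        # value times length
    return total

f = lambda x: x * x                  # exact integral on [0, 1] is 1/3
for n in (100, 1000, 10000):
    print(n, riemann_sum(f, 0.0, 1.0, n))   # sums approach 1/3
```

As n grows the sums settle down toward 1/3, which is exactly the “series of numbers approaches a limit” in Riemann’s definition.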

As you subdivide, the lengths of the sub-sub- … -subintervals get smaller and smaller (and the subintervals, of course, more numerous). What if you call each of the subintervals a dimension rather than an interval, and the value of the function the coefficient of the vector on that dimension? Then as the number of subintervals increases, the plot of the function values you’ve chosen for each interval gets closer and closer to the function itself, so that plotting a high dimensional vector looks just like the continuous function you started with. This is why an infinite dimensional vector looks like the integral of a function (and why quantum mechanics uses them).

Now imagine a linear transformation of this vector into another vector in the same infinite dimensional space, and you’re almost to what quantum mechanics means by an operator. Inner products of infinite dimensional vectors can also be defined (with just a minor bit of heavy lifting): multiply the coefficients of the two vectors in each dimension together and form the sum of the products. This sum isn’t necessarily infinite. Let the nth coefficient of vector #1 be 1/2^n and that of vector #2 be 1/3^n; the sum of even an infinite number of such products is finite. This implies that, to be of use in QM, the coefficients of any of its infinite vectors must form a convergent series.
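The example in the text can be checked numerically. The products are (1/2)^n · (1/3)^n = (1/6)^n, a geometric series, so the full infinite inner product converges to (1/6)/(1 − 1/6) = 1/5:

```python
# Partial inner products of the two infinite vectors from the text:
# vector #1 has nth coefficient 1/2**n, vector #2 has 1/3**n.
# Each product is (1/6)**n, so the partial sums converge to 0.2.
def partial_inner_product(terms):
    return sum((1/2)**n * (1/3)**n for n in range(1, terms + 1))

for terms in (5, 10, 20):
    print(terms, partial_inner_product(terms))   # approaches 0.2
```

Already at 20 terms the partial sum is indistinguishable from 1/5 at double precision, which is what “the sum of even an infinite number of such products is finite” looks like in practice.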

Now, what if some (or all) of the coefficients are complex numbers? No problem, because of the way the inner product of vectors with complex coefficients was defined in the second post of the series. The inner product of a complex vector (even an infinite dimensional one) with itself is guaranteed to be a real number. You’re almost on the playing field of QM, i.e. Hilbert space: an infinite dimensional space with an inner product defined on it. The only other thing needed for a Hilbert space is something called completeness, something I don’t understand well enough to explain, but it means something like plugging up the holes in the space, in the same way that the real numbers plug the holes in the rational numbers.
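A tiny sketch shows why the complex inner product behaves this way. Conjugating one vector’s coefficients (the convention from the earlier post) makes each self-product a squared magnitude, hence real and non-negative; the vectors below are illustrative choices of mine:

```python
# Inner product for complex vectors: conjugate the first vector's
# coefficients, multiply componentwise, and sum.
def inner_product(v, w):
    return sum(a.conjugate() * b for a, b in zip(v, w))

v = [1 + 2j, 3 - 1j]
w = [2 - 1j, 1j]

print(inner_product(v, w))   # two different vectors: generally complex
print(inner_product(v, v))   # |1+2j|**2 + |3-1j|**2 = 5 + 10 = 15, purely real
```

Without the conjugation, ⟨v, v⟩ for v = [1j] would be −1, and “length squared” would stop making sense.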

Certainly not in Kansas anymore, and apparently barely in Physics either. It’s time to respond to Wavefunction’s comment on the last post: “It’s interesting that if you are learning “practical” quantum mechanics such as quantum chemistry, you can get away with a lot without almost any linear algebra. One has to only take a look at popular QC texts like Levine, Atkins, or Pauling and Wilson; almost no LA there.” So what’s the point of all these posts?

It’s back to Feynman and another of his famous quotes: “I think I can safely say that nobody understands quantum mechanics.” This from 1965. A look at Louisa Gilder’s recent book “The Age of Entanglement” should convince you that, on a deep level, no one still does. Feynman also warns us not to start thinking ‘how can it be like that’ (so did the instructor in the QM course). So why all this verbiage?

Because all QM follows from a few simple postulates, and these postulates are written in linear algebra. Hopefully at the end of this, you’ll understand the language in which QM is written, so any difficulty will be with the underlying structure of QM (which is plenty), not the way QM is expressed (or why it is expressed the way it is).

Next up: vector and matrix notation, what the adjoint is, and why it’s important. If you think hard about the inner product of two different complex vectors (even finite dimensional ones), you’ll see that the result is usually a complex number. How does QM avoid this, since all measurable values must be real (one of the postulates)? Adjoints and Hermitian operators are the way out. There’s still some pretty hard stuff ahead.
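As a small preview of the way out, here is a sketch (with an illustrative 2×2 matrix of my own choosing) of the key fact: a Hermitian matrix equals its own adjoint (conjugate transpose), and ⟨v, Hv⟩ then comes out real for every vector v, which is how QM keeps measurable values real:

```python
# Adjoint = conjugate transpose. A matrix equal to its own adjoint is
# Hermitian, and <v, Hv> is real for every vector v.
def adjoint(m):
    rows, cols = len(m), len(m[0])
    return [[m[r][c].conjugate() for r in range(rows)] for c in range(cols)]

def inner(v, w):
    return sum(a.conjugate() * b for a, b in zip(v, w))

def matvec(m, v):
    return [sum(m[r][c] * v[c] for c in range(len(v))) for r in range(len(m))]

H = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]           # equals adjoint(H), so H is Hermitian
assert adjoint(H) == H

v = [1 + 1j, 2 - 3j]
print(inner(v, matvec(H, v)))    # imaginary part is exactly 0
```

Try the same with a non-Hermitian matrix and the imaginary part of ⟨v, Hv⟩ survives; that’s the heart of why the postulates single Hermitian operators out.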