Linear Algebra survival guide for Quantum Mechanics – III

Before leaving the dot product, it should be noted that there are all sorts of nice geometric things you can do with it — such as defining the angle between two vectors (and in a space with any finite number of dimensions to boot).  But these are things which are pretty intuitive (because they are geometric), so I'm not going to go into them.  When the dot product of two vectors is zero they are said to be orthogonal to each other (i.e. at right angles to each other).  You saw this with the dot product of E1 = (1,0) and E2 = (0,1) in the other post.  But it also works with any two vectors at right angles, such as X = (1,1) and Y = (1,-1).
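If you'd like to check this numerically, here is a minimal sketch in Python with NumPy (the code is an illustration of mine, not part of the original post):

```python
import numpy as np

X = np.array([1.0, 1.0])
Y = np.array([1.0, -1.0])

# The dot product sums the products of corresponding components.
print(np.dot(X, Y))    # 1*1 + 1*(-1) = 0, so X and Y are orthogonal

# The angle between two vectors comes from cos(theta) = <X|Y> / (|X| * |Y|)
cos_theta = np.dot(X, Y) / (np.linalg.norm(X) * np.linalg.norm(Y))
print(np.degrees(np.arccos(cos_theta)))   # 90.0 degrees
```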

The notion of dimension seems pretty simple, until you start to think about it (consider fractals).  We cut our vector teeth on vectors in 3 dimensional space (e.g. E1 = (1,0,0) aka i, E2 = (0,1,0) aka j, and E3 = (0,0,1) aka k).  Any point in 3 dimensional space can be expressed as a linear combination of them — e.g. (x, y, z) = x * E1 + y * E2 + z * E3.  The crucial point about this way of representing a given point is that the representation is unique.  In math lingo, E1, E2, and E3 are said to be linearly independent, and if you study abstract algebra you will run up against the following (rather obscure) definition — a collection of vectors is linearly independent if the only way to get them to add up to the zero vector (0, 0, ...) is to multiply each of them by the real number zero.  X and Y by themselves are linearly independent, but X, Y and (1,0) = E1 are not, as 1 * X + 1 * Y + (-2) * E1 = (0, 0).  This definition is used in lots of proofs in abstract algebra, but it totally hides what is really going on.  Given a linearly independent set of vectors, the representation of any other vector as a linear combination of them is UNIQUE.  Given a set of vectors V1, V2, ... we can always represent the zero vector as 0 * V1 + 0 * V2 + ...  If there is no other way to get the zero vector from them, then V1, V2, ... are linearly independent.  That's where the criterion comes from, but uniqueness is what is crucial (if some vector had two different representations in terms of V1, V2, ..., subtracting one from the other would give a nonzero combination adding up to the zero vector).
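A quick way to test linear independence numerically is the rank of a matrix built from the vectors; here is a small sketch (again Python/NumPy, my own illustration) using X, Y, and E1 from above:

```python
import numpy as np

X = np.array([1.0, 1.0])
Y = np.array([1.0, -1.0])
E1 = np.array([1.0, 0.0])

# The rank of a matrix whose rows are the vectors = the number of linearly independent ones.
print(np.linalg.matrix_rank(np.vstack([X, Y])))      # 2: X and Y are linearly independent
print(np.linalg.matrix_rank(np.vstack([X, Y, E1])))  # 2: X, Y and E1 are NOT

# The dependence relation from the text: 1 * X + 1 * Y + (-2) * E1 = (0, 0)
print(1 * X + 1 * Y + (-2) * E1)                     # [0. 0.]
```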

It's intuitively clear that you need two vectors to represent points in the plane, 3 to represent points in space, etc. etc.  So the dimension of any vector space is the maximum number of linearly independent vectors it contains.  The number of pairs of linearly independent vectors in the plane is infinite (just consider rotating the x and y axes).  But the plane has dimension 2 because 3 vectors, however you choose them, are never linearly independent.  Spaces can have any number of dimensions, and quantum mechanics deals with a type of infinite dimensional space called Hilbert space (I'll show how to get your mind around this in a later post).  As an example of a space with a large number of dimensions, consider the stock market.  Each stock in it occupies a separate dimension, with the price (or the volume, or the total number of shares outstanding) as the number to multiply that dimension by.  You don't have a complete description of the stock market vector until you say what's going on with each stock (dimension).
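As a sanity check on that claim about the plane, here is a small sketch (Python/NumPy, with randomly chosen vectors of my own) showing that three vectors in the plane never come out linearly independent:

```python
import numpy as np

rng = np.random.default_rng(0)

# However the three vectors are chosen, the rank never exceeds 2,
# so three vectors in the plane are never linearly independent.
for _ in range(5):
    three_vectors = rng.normal(size=(3, 2))        # three random vectors in the plane
    print(np.linalg.matrix_rank(three_vectors))    # 2 every time (or less, if you get unlucky)
```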

Suppose you now have a space of dimension n, and a collection of n linearly independent vectors, so that any other n-dimensional vector can be uniquely expressed (can be uniquely represented) as a linear combination of the n vectors.  The collection of n vectors is then called a basis of the vector space.  There is no reason the vectors of the basis have to be at right angles to each other (in fact in "La Geometrie" of Descartes, which gave rise to the term Cartesian coordinates, the axes were NOT at right angles to each other, and didn't even go past the first quadrant).  So (1,0) and (1,1) make a perfectly acceptable basis for the plane.  The pair are linearly independent — try getting them to add to (0, 0) with nonzero coefficients.
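To see that this non-orthogonal basis really works, here is a sketch (Python/NumPy; the target vector (3,5) is just a made-up example) that finds the unique coefficients expressing it in terms of (1,0) and (1,1):

```python
import numpy as np

# Basis vectors (1,0) and (1,1) as the COLUMNS of a matrix.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

target = np.array([3.0, 5.0])   # an arbitrary vector, chosen just for illustration

# Solve B @ coeffs = target; the solution is unique because the columns are linearly independent.
coeffs = np.linalg.solve(B, target)
print(coeffs)        # [-2.  5.]  i.e. (3,5) = -2*(1,0) + 5*(1,1)
print(B @ coeffs)    # [3. 5.]  back where we started
```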

Quantum mechanics wants things nicer than this.  First, all the basis vectors are normalized — given a vector V' we want to form a vector V pointing in the same direction such that < V | V > = 1.  Not hard to do — < V' | V' > is just a real number after all (call it x), so V is just V'/SQRT[x].  There was an example of this technique in the previous post in the series.
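In code, normalization is essentially a one-liner; here is a sketch (Python/NumPy, with a made-up vector (3,4)):

```python
import numpy as np

V_prime = np.array([3.0, 4.0])    # an arbitrary un-normalized vector, chosen for illustration

x = np.dot(V_prime, V_prime)      # <V'|V'> = 9 + 16 = 25, just a real number
V = V_prime / np.sqrt(x)          # same direction, but now <V|V> = 1
print(V)                          # [0.6 0.8]
print(np.dot(V, V))               # 1.0 (up to rounding)
```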

Second (and this is the hard part), quantum mechanics wants all its normalized basis vectors to be orthogonal to each other — i.e. if I and J are basis vectors, < I | J > = 1 if I = J, and 0 if I doesn't equal J.  Such a function is called the Kronecker delta function (or delta(i,j)).  How do you accomplish this?  By a true algebraic horror known as Gram-Schmidt orthogonalization.  It is a 'simple' algorithm in which you subtract from each vector its projections (dot products) onto the vectors already dealt with, then normalize what is left.  I never could get the damn thing to work on problems years ago in grad school, and developed another name for it which I'll leave to your imagination (where is Kyle Finchsigmate when you really need him?).  But work it does, so the basis vectors (the pure wavefunctions) of quantum mechanical space are both normalized and orthogonal to each other (i.e. they are orthonormal).  Since they are a basis, any other wave function has a UNIQUE representation in terms of them (these are the famous mixed states or superposition states of quantum mechanics).
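For the curious, here is a minimal sketch of Gram-Schmidt (Python/NumPy; the function name gram_schmidt and the example vectors are my own illustration of the algorithm described above, not anything from the original post):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for b in basis:
            w = w - np.dot(w, b) * b          # subtract the projection onto each earlier basis vector
        basis.append(w / np.linalg.norm(w))   # normalize what is left
    return basis

# The non-orthogonal basis of the plane from above: (1,0) and (1,1)
Q = gram_schmidt([np.array([1.0, 0.0]), np.array([1.0, 1.0])])
print(Q[0], Q[1])                              # [1. 0.] [0. 1.]: now orthonormal
print(np.dot(Q[0], Q[1]))                      # 0.0, the Kronecker delta with i not equal to j
print(np.dot(Q[0], Q[0]), np.dot(Q[1], Q[1]))  # 1.0 1.0, the Kronecker delta with i = j
```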

If you've already studied a bit of QM, the basis vectors are the eigenvectors of the various quantum mechanical operators.  If not, don't sweat it; this will be explained in the next post.  That's a fair amount of background and terminology, but it's necessary for you to understand why matrix multiplication is the way it is, why matrices represent linear transformations, and why quantum mechanical operators are basically linear transformations.  That's all coming up.
