## Linear Algebra survival guide for Quantum Mechanics – VI

Why is linear algebra like real estate?   Well, in linear algebra the 3 most important things are notation, notation, notation.  I’ve shown how two sequential linear transformations can be melded into one, but you’ve seen nothing about the matrix representation of a linear transformation.

Here’s the playing field from LASGFQM – IV again.  There are 3 vector spaces, A, B and C, of dimensions 3, 4, and 5, with bases {A1, A2, A3}, {B1, B2, B3, B4} and {C1, C2, C3, C4, C5}.  Then there is linear transformation T which transforms A into B, and linear transformation S which transforms B into C.

We have T(A1) = AB11 * B1 + AB12 * B2 + AB13 * B3 + AB14 * B4

S(B1) = BC11 * C1 + BC12 * C2 + BC13 * C3 + BC14 * C4 + BC15 * C5
S(B2) = BC21 * C1 + BC22 * C2 + BC23 * C3 + BC24 * C4 + BC25 * C5
S(B3) = BC31 * C1 + BC32 * C2 + BC33 * C3 + BC34 * C4 + BC35 * C5
S(B4) = BC41 * C1 + BC42 * C2 + BC43 * C3 + BC44 * C4 + BC45 * C5

To see the symmetry of what is going on you may have to make the print size smaller so the equations don’t slop over the linebreak.

So apply S to T(A1).  By linearity, S(T(A1)) = AB11 * S(B1) + AB12 * S(B2) + AB13 * S(B3) + AB14 * S(B4), and after plugging in the expansions of the S(B)’s above and some heavy lifting we eventually arrive at:

S(T(A1)) = AB11 * ( BC11 * C1  +  BC12 * C2  +  BC13 * C3  +  BC14 * C4  +  BC15 * C5 ) +

AB12 * ( BC21 * C1  +  BC22 * C2  +  BC23 * C3  +  BC24 * C4  +  BC25 * C5 ) +

AB13 * ( BC31 * C1  +  BC32 * C2  +  BC33 * C3  +  BC34 * C4  +  BC35 * C5 ) +

AB14 * ( BC41 * C1  +  BC42 * C2  +  BC43 * C3  +  BC44 * C4  +  BC45 * C5 )

So that

S(T(A1)) = (AB11 * BC11 + AB12 * BC21 + AB13 * BC31 + AB14 * BC41) C1  +

(AB11 * BC12 + AB12 * BC22 + AB13 * BC32 + AB14 * BC42) C2  +

etc. etc.

All very open and above board, and obtained just by plugging the B’s in terms of the C’s into the A’s in terms of the B’s to get the A’s in terms of the C’s.

Notice that what we could call AC11 is just AB11 * BC11 + AB12 * BC21 + AB13 * BC31 + AB14 * BC41, and AC12 is just AB11 * BC12 + AB12 * BC22 + AB13 * BC32 + AB14 * BC42.  We need another 13 such sums to be able to express the image of any vector in A (which is a unique linear combination of A1, A2, A3, because the three of them are a basis) in terms of the 5 C basis vectors.  It’s dreary but it can be done, and you just saw part of it.
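The substitution above can be checked numerically.  Here is a minimal sketch in Python with NumPy; the random numbers are hypothetical stand-ins for the AB and BC coefficients, and the arrays are indexed source-basis-first to match the labels in the text (which, fair warning, is the transpose of the conventional row-column layout):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numeric coefficients standing in for the AB's and BC's.
# AB[i, j] holds AB(i+1)(j+1): the coefficient of Bj in T(Ai)
# BC[j, k] holds BC(j+1)(k+1): the coefficient of Ck in S(Bj)
AB = rng.standard_normal((3, 4))   # 3 A's, each expanded over 4 B's
BC = rng.standard_normal((4, 5))   # 4 B's, each expanded over 5 C's

# Coefficient of C1 in S(T(A1)), by brute-force substitution:
# AC11 = AB11*BC11 + AB12*BC21 + AB13*BC31 + AB14*BC41
AC11 = sum(AB[0, j] * BC[j, 0] for j in range(4))

# The same number falls out of a single matrix product.
AC = AB @ BC                       # all 15 composite coefficients at once
assert np.isclose(AC11, AC[0, 0])
```

With this source-basis-first indexing the composite coefficients come out as AB @ BC, a 3 x 5 array; flip to the conventional layout and you get the 5 x 3 AC matrix described below.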

You don’t want to figure this out all the time.  So represent T as a rectangular array with 4 rows and 3 columns

AB11  AB21  AB31
AB12  AB22  AB32
AB13  AB23  AB33
AB14  AB24  AB34

Represent S as a rectangular array with 5 rows and 4 columns

BC11  BC21  BC31  BC41
BC12  BC22  BC32  BC42
BC13  BC23  BC33  BC43
BC14  BC24  BC34  BC44
BC15  BC25  BC35  BC45

Now plunk the array of AB’s on top of (and to the right of) the array of BC’s, so that each entry of the product sits to the right of a BC row and below an AB column:

                              AB11  AB21  AB31
                              AB12  AB22  AB32
                              AB13  AB23  AB33
                              AB14  AB24  AB34

BC11  BC21  BC31  BC41        AC11
BC12  BC22  BC32  BC42
BC13  BC23  BC33  BC43
BC14  BC24  BC34  BC44
BC15  BC25  BC35  BC45

Recall that (after much tedious algebra) we obtained that

AC11 was just AB11 * BC11 + AB12 * BC21 + AB13 * BC31 + AB14 * BC41

But AC11 is just what you would get if the first row of the BC array were a vector, the first column of the AB array were also a vector, and you formed their dot product.  Well, they are, and you did just that to find element AC11 of the array representing the linear transformation from A to C.  Do this 14 more times to get all 15 possible combinations of the 3 A’s and 5 C’s and you get an array of numbers with 5 rows and 3 columns.  This is the AC matrix, and this is why matrix multiplication is the way it is.
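The dot-product rule is easy to verify in the conventional (row, column) layout.  A short sketch, using hypothetical random entries for the matrices of S and T:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 4))   # matrix of S: B (dim 4) -> C (dim 5)
T = rng.standard_normal((4, 3))   # matrix of T: A (dim 3) -> B (dim 4)

# Entry (r, c) of the product S @ T is the dot product of row r of S
# with column c of T -- exactly the 15 sums described above.
ST = S @ T
manual = np.array([[S[r, :] @ T[:, c] for c in range(3)]
                   for r in range(5)])
assert np.allclose(ST, manual)
assert ST.shape == (5, 3)         # 5 rows, 3 columns: the AC matrix
```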

Note: we have multiplied a 5 row by 4 column array by a 4 row by 3 column array.  Recall that you can only form the inner product of vectors with the same number of components (i.e. they have to be in vector spaces of the same dimension).

We have T: A to B (dimension 3 to dimension 4)

S: B to C (dimension 4 to dimension 5)

This is written as ST (convention has it that the transformation on the right is always done first — this takes some getting used to, but at least everyone follows it, so it’s like medical school — the appendix is on the right, just remember it).   Notice that TS makes absolutely no sense.   S takes you to a vector space of dimension 5, but T starts from a vector space of dimension 3.   This is why when multiplying arrays (matrices) the number of columns of the matrix on the left must match the number of rows of the matrix on the right (or the top, as I’ve drawn it — thanks to John and Barbara Hubbard and their great book on Vector Calculus).  If the two matrices are rectangular (as we have here), only one order of multiplication is possible.
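NumPy enforces the same dimension bookkeeping: the product ST works, while TS fails with a shape error.  A quick sketch:

```python
import numpy as np

S = np.ones((5, 4))   # S: dim 4 -> dim 5
T = np.ones((4, 3))   # T: dim 3 -> dim 4

# ST (T first, then S): inner dimensions 4 and 4 match.
print((S @ T).shape)  # (5, 3)

# TS: inner dimensions 3 and 5 don't match, so NumPy refuses.
try:
    T @ S
except ValueError as e:
    print("TS is undefined:", e)
```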

More notation, and an apology.  Matrix T is a 4 row by 3 column matrix — this is always written as a 4 x 3 matrix.  Similarly for the subscripts on each element, which I have in some way screwed up (but at least I did so consistently).  Invariably the matrix element (just a number) in the 3rd column of the 4th row is written element43 — if you look at what I’ve written, everything is bassackwards.  Sorry, but the principles are correct.  The mnemonic for the order of the subscripts is Roman Catholic (row, column), a nonscatological mnemonic for once.

That’s a lot of tedium, but it does explain why matrix multiplication is the way it is.  Notice a few other things.  The matrices you saw were 4 x 3 and 5 x 4, but 3 x 1 matrices are possible as well.  Such matrices are called column vectors.  Similarly 1 x 3 matrices exist and are called row vectors.  So what do you get if you multiply a 1 x 3 vector by a 3 x 1 vector?

You get a 1 x 1 matrix, which is just a number.  This is another way to look at the inner product of two vectors.  Usually vectors are written as column vectors (n x 1), with n rows and 1 column.  The 1 x n row vector is known as the transpose of the column vector.
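The row-times-column view of the inner product in a few lines (the particular vector is just an example):

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])   # 3 x 1 column vector
vT = v.T                              # 1 x 3 row vector, the transpose

inner = vT @ v                        # (1 x 3) @ (3 x 1) -> 1 x 1
print(inner.shape)    # (1, 1)
print(inner[0, 0])    # 14.0 = 1*1 + 2*2 + 3*3, the usual inner product
assert inner[0, 0] == np.dot(v.ravel(), v.ravel())
```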

That’s plenty for now.  Hopefully the next post will be more interesting.  However, physicists need to calculate things and see if the numbers they get match up with experiment.  This means that they must choose a basis for each vector space and express each vector as an array of coefficients in that basis.  Mathematicians avoid this where possible, just using the properties of vector space bases to reason about linear transformations, and the properties of various linear transformations to reason about bases.  You’ll see the power of this sort of thinking in the next post.  If you ever study differentiable manifolds you’ll see it in spades.