## Tag Archives: dot product

### The pleasures of enough time

One of the joys of retirement is the ability to take the time to fully understand the math behind statistical mechanics and thermodynamics (on which large parts of chemistry are based — cellular biophysics as well). I’m going through some biophysics this year reading “Physical Biology of the Cell” 2nd Edition and “Molecular Driving Forces” 2nd Edition. Back in the day, what with other courses, research, career plans and hormones to contend with, there just wasn’t enough time.

To really understand the derivation of the Boltzmann equation, you must understand Lagrange multipliers, which requires an understanding of the gradient and where it comes from. To understand the partition function you must understand change of variables in an integral, and to understand that you must understand why the determinant of the Jacobian matrix of a set of independent vectors is the volume multiplier you need.

These were all math tools whose use was fairly simple and which didn’t require any understanding of where they came from. What a great preparation for a career in medicine, where we understood very little of why we did the things we did, not because of lack of time but because the deep understanding of the systems we were mucking about with simply didn’t (and doesn’t) exist. It was intellectually unsatisfying, but you couldn’t argue with the importance of what we were doing. Things are better now with the accretion of knowledge, but if we really understood things perfectly we’d have effective treatments for cancer and Alzheimer’s. We don’t.

But in the pure world of math, whether a human creation or existing outside of us all, this need not be accepted.

I’m not going to put page after page of derivations of the topics mentioned in the second paragraph, but will mention a few things to know which might help you when you’re trying to learn about them, and point you to books (with page numbers) that I’ve found helpful.

Let’s start with the gradient. If you remember it at all, you know that it’s a way of taking a continuous real valued function of several variables and making a vector of it. The vector has the miraculous property of pointing in the direction of greatest change in the function. How did this happen?

The most helpful derivation I’ve found was in Thomas’ textbook of calculus (9th edition, pp. 957 and following). Yes, Thomas — the same book I used as a freshman 60 years ago! Like most living things that have aged, it’s become fat. Thomas is now up to the 13th edition.

The simplest example of a continuous real valued function is a topographic map. Thomas starts with the directional derivative — which is how the function height(north, east) changes in the direction of a vector whose absolute value is 1. That’s the definition — to get something you can actually calculate, you need to know the chain rule, and how to put a path on the topo map. The derivative of the real valued function in the direction of a unit vector turns out to be the dot product of the gradient vector and any vector at that point whose absolute value is 1. The unit vector can point in any direction, but the value of the derivative (the dot product) will be greatest when the unit vector points in the direction of the gradient vector. That’s where the magic comes from. If you’re slightly shaky on linear algebra, vectors and dot products — here’s a (hopefully explanatory) link to some basic linear algebra — https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/. This is the first in a series — just follow the links.
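If you’d like to see this numerically, here’s a sketch in Python (the height function and the point are invented for illustration, not from Thomas): sample unit vectors pointing in every direction and watch the directional derivative — the dot product with the gradient — peak when the unit vector points along the gradient.

```python
import numpy as np

# A made-up "topo map" height function: h(x, y) = x^2 + 2y^2
def grad_h(x, y):
    # Analytic gradient of h: (dh/dx, dh/dy)
    return np.array([2 * x, 4 * y])

def directional_derivative(x, y, u):
    # Derivative of h at (x, y) in the direction of unit vector u:
    # just the dot product of the gradient with u
    return grad_h(x, y) @ u

point = (1.0, 1.0)
g = grad_h(*point)

# Sample unit vectors pointing in every direction
angles = np.linspace(0, 2 * np.pi, 1000)
derivs = [directional_derivative(*point, np.array([np.cos(t), np.sin(t)]))
          for t in angles]

# The direction maximizing the derivative is the gradient's direction
u_best_angle = angles[int(np.argmax(derivs))]
u_best = np.array([np.cos(u_best_angle), np.sin(u_best_angle)])
print(np.allclose(u_best, g / np.linalg.norm(g), atol=1e-2))  # True
```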

The discussion of Lagrange multipliers (which are essentially a relation between two gradients — one of a function, the other of a constraint) in Dill pp. 68 -> 72 is only fair, and I did a lot more work to understand them (which can’t be reproduced here).
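Here’s a small worked example (a toy problem of my own, not one from Dill) showing the Lagrange condition — gradient of f parallel to gradient of the constraint g — solved symbolically:

```python
import sympy as sp

# Toy problem: maximize f = x*y subject to x + y = 10
x, y, lam = sp.symbols('x y lam', real=True)
f = x * y
g = x + y - 10  # the constraint, written as g = 0

# Lagrange condition: grad f = lam * grad g, plus the constraint itself
eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
       sp.diff(f, y) - lam * sp.diff(g, y),
       g]
sol = sp.solve(eqs, [x, y, lam], dict=True)
print(sol[0][x], sol[0][y])  # 5 5
```

The two gradient equations say y = lam and x = lam, which with the constraint forces x = y = 5, the familiar answer.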

For an excellent discussion of the wedge product and why the volume multiplier in an integral must be the determinant of the Jacobian — see Callahan, Advanced Calculus, p. 41 and exercise 2.15 p. 61, the latter being the most important. It explains why things work this way in 2 dimensions. The exercise takes you through the derivation step by step, asking you to fill in some fairly easy dots. Even better is exercise 2.34 on p. 67, which proves the same thing for any collection of n independent vectors in R^n.

The Jacobian is just the determinant of a square matrix, something familiar from linear algebra. Its entries are the partial derivatives of the new coordinates with respect to the old ones at a given point. But in integrals we’re changing dx and dy to something else — dr and dTheta when you go to polar coordinates. Why a matrix here? Because if differential calculus is about anything, it is about linearization of nonlinear functions, which is why you can use a matrix of derivatives (the Jacobian matrix) to relate dx and dy to dr and dTheta.
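You can check the polar coordinate case symbolically — the determinant of the Jacobian matrix of x = r cosTheta, y = r sinTheta works out to r, which is why dx dy becomes r dr dTheta:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Jacobian matrix: the linearization of the map (r, theta) -> (x, y)
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])

detJ = sp.simplify(J.det())
print(detJ)  # r
```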

Why is this important for statistical mechanics? Because one of the integrals you must evaluate is that of exp(-ax^2) from -infinity to +infinity, and the switch to polar coordinates is the way to do it. You also must evaluate integrals of this type to understand the kinetic theory of ideal gases.
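As a sanity check (a brute-force numerical sketch, nothing more): the polar coordinate trick says the integral of exp(-ax^2) over the whole line is SQRT[pi/a], and a crude trapezoidal sum agrees.

```python
import math

def gauss_integral(a, n=200_000, L=20.0):
    # Trapezoidal approximation of the integral of exp(-a*x^2) over [-L, L];
    # the tails beyond +/- L are utterly negligible for a near 1
    h = 2 * L / n
    total = sum(math.exp(-a * (-L + i * h) ** 2) for i in range(n + 1))
    total -= math.exp(-a * L ** 2)  # trapezoid endpoint correction
    return total * h

a = 2.0
exact = math.sqrt(math.pi / a)  # the answer the polar coordinate trick gives
print(abs(gauss_integral(a) - exact) < 1e-6)  # True
```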

Not necessary in this context, but one of the best discussions of the derivative in its geometric context I’ve ever seen is on pp. 105–106 of Callahan’s book.

So these are some pointers and hints, not a full discussion — I hope it makes the road easier for you, should you choose to take it.

### The many ways the many tensor notations can confuse you

This post is for the hardy autodidacts attempting to learn tensors on their own. If you use multiple sources, you’ll find that they define the same terms used to describe tensors in diametrically opposed ways, so that just when you thought you knew what terms like covariant and contravariant tensor meant, another source defines them completely differently, leading you to wonder about (1) your intelligence and (2) your sanity.

Tensors involve vector spaces and their bases. This post assumes you know what they are. If you don’t understand how a vector can be expressed in terms of coordinates relative to a basis, pick up any book on linear algebra.

Tensors can be defined by the way their elements transform under a change of coordinate basis. This is where the terms covariant and contravariant come from. By the way when Einstein says that physical quantities must transform covariantly, he means they transform like tensors do (even contravariant tensors).

****
Addendum 12/17 — Neuenschwander, “Tensor Calculus for Physicists,” 2015, pp. 17 and following. There are 3 meanings of the term covariance:

1. Covariance of an equation — it transforms under a coordinate transformation the way the coordinates do.

2. A way of distinguishing between covariant and contravariant vectors (see below).

3. A type of tensor derivative — a modification of the usual derivative definition to make the derivative of a tensor another tensor (the usual derivative definition fails this).

******

True enough, but this approach doesn’t help you understand the term tensor product or the weird ⊗ notation (an x within a circle) used to describe it.

The best way to view tensors (from a notational point of view) is to look on them as functions which take finite Cartesian products (https://en.wikipedia.org/wiki/Cartesian_product) of vectors and covectors and produce a single real number.

To understand what a covector (aka dual vector) is, you must understand the inner product (aka dot product).

The definition of the inner product (dot product) of a vector V with itself, written <V, V>, probably came from the notion of vector length. Given the standard basis in two dimensional space, E1 = (1,0) and E2 = (0,1), all vectors V can be written as x * E1 + y * E2 (x is known as the coefficient of E1). Vector length is given by the good old Pythagorean theorem as SQRT[x^2 + y^2]. The dot product (inner product) <V, V> is just x^2 + y^2 without the square root.

In 3 dimensions the distance of a point (x, y, z) from the origin is SQRT [x^2 + y^2 + z^2]. The definition of vector length (or distance) easily extends (by analogy) to n dimensions where the length of V is SQRT[x1^2 + x2^2 + . . . . + xn^2] and the dot product is x1^2 + x2^2 + . . . . + xn^2. Length is always a non-negative real number.

The definition of inner product also extends to the dot product of two different vectors V and W, where V = v1 * E1 + v2 * E2 + . . . + vn * En and W = w1 * E1 + . . . + wn * En: <V, W> = v1 * w1 + v2 * w2 + . . . + vn * wn. Again always a real number, but not always positive, as any of the v’s and w’s can be negative.
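In code the extension to n dimensions is one line apiece (a minimal sketch):

```python
import math

def dot(v, w):
    # <V, W> = v1*w1 + v2*w2 + ... + vn*wn
    return sum(vi * wi for vi, wi in zip(v, w))

def length(v):
    # |V| = SQRT[<V, V>] -- the Pythagorean theorem in n dimensions
    return math.sqrt(dot(v, v))

print(length([3, 4]))              # 5.0
print(dot([1, -2, 3], [4, 0, 5]))  # 19
```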

So, if you hold W constant you can regard it as a function on the vector space in which V and W reside which takes any V and produces a real number. You can regard V the same way if you hold it constant.

Now with some of the complications which mathematicians love, you can regard the set of functions { W } operating on a vector space, as a vector space itself. Functions can be added (by their results) and can be multiplied by a real number (a scalar). The set of functions { W } regarded as a vector space is called the dual vector space.

Well, if { W } along with function addition and scalar multiplication is a vector space, it must have a basis. Everything I’ve ever read about tensors involves finite dimensional vector spaces. So assume the vector space A is n dimensional, where n is a positive integer, and call its basis vectors the ordered set a1, . . . , an. The dual vector space (call it B) is also n dimensional, with another basis, the ordered set b1, . . . , bn.

The bi are chosen so that their dot products with the elements of A’s basis give the Kronecker delta, e.g. <ai, bj> = 1 if i = j, and <ai, bj> = 0 if i doesn’t equal j. This can be done by a long and horrible process (back in the day before computer algebra systems) called Gram-Schmidt orthonormalization. Assume this can be done. If you’re a true masochist have a look at https://en.wikipedia.org/wiki/Gram–Schmidt_process.
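If you’d rather skip Gram-Schmidt: in finite dimensions the dual basis can also be read off as the rows of the inverse of the matrix whose columns are the ai — a shortcut, not the process described above. A quick numpy check (the basis here is arbitrary):

```python
import numpy as np

# An arbitrary basis of A: the columns of this matrix are a1 and a2
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Dual basis: the rows of the inverse. Row i dotted with column j
# gives exactly the Kronecker delta.
B = np.linalg.inv(A)

print(np.allclose(B @ A, np.eye(2)))  # True
```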

Notice what we have here. Any particular element of the dual space B (a real valued function operating on A) — call it f — can be written down as f1 * b1 + . . . + fn * bn. It will take any vector in A (written g1 * a1 + . . . + gn * an) and give you f1 * g1 + . . . + fn * gn, which is a real number. Basically any element (say bj) of the basis of dual space B just looks at a vector in A and picks out the coefficient of aj (when it forms the dot product with the vector in A).

Now (at long last) we can begin to look at the contrary ways tensors are described. The most fruitful way is to look at them as products of individual dot products between a vector and a dual vector.

Have a look at — https://luysii.wordpress.com/2014/12/08/tensors/. To summarize — the whole point of tensor use in physics is that they describe physical quantities which are ‘out there’ independently of the coordinates used to describe them. A hot dog has a certain length independently of its description in inches or centimeters. Change your viewpoint and its coordinates in space will change as well (the hot dog doesn’t care about this). Tensors are a way to accomplish this.

It’s too good to pass up, but the length of the hot dog stays the same no matter how many times you (noninvasively) measure it. This is completely different from the situation in quantum mechanics, and is one of the reasons that quantum mechanics has never been unified with general relativity (which is a theory of gravity based on tensors).

Remember that the dot product concerns <V, W>. If you change the basis of vector W (so vector W has different coefficients), the basis of dual vector V must also change (to keep the dot product the same). A choice must be made as to which of the two concurrent basis changes is fundamental (actually neither is, as they both are).

Mathematics has chosen the basis of vector W in <V, W> as fundamental.

When you change the basis of W, the coefficients of W must change in the opposite way (to keep the vector itself, and hence its length, constant). The coefficients of W are said to change contravariantly. What about the coefficients of V? The basis of V changes oppositely to the basis of W (e.g. contravariantly), so the coefficients of V must change differently from this, e.g. the same way the basis of W changes, e.g. covariantly. Confused? Nonetheless, that’s the way they are named.

Vectors and covectors, and other mathematical entities such as differentials, metrics and gradients, are labelled covariant or contravariant by the way their numerical coefficients change with a change in basis.

So the coefficients of vector W transform contravariantly, and the coefficients of dual vector V transform covariantly. This is true even though the coefficients of V and W always transform contravariantly (i.e. oppositely) to the way their own basis transforms.
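A small numpy demonstration of contravariance (the change of basis matrix M here is arbitrary): when the basis transforms by M, the coefficients must transform by M’s inverse so that the geometric vector itself doesn’t move.

```python
import numpy as np

E = np.eye(2)                    # original basis vectors (as columns)
coeffs = np.array([3.0, 4.0])    # coefficients of W in that basis
w = E @ coeffs                   # the geometric vector itself

# An arbitrary (invertible) change of basis
M = np.array([[2.0, 1.0],
              [0.0, 1.0]])
E_new = E @ M                           # new basis: transforms by M
coeffs_new = np.linalg.inv(M) @ coeffs  # coefficients: transform by M inverse
                                        # -- contravariantly

print(np.allclose(E_new @ coeffs_new, w))  # True: same vector, new description
```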

An immense source of confusion.

As mentioned above, one can regard vectors and dual vectors as real valued functions on elements of a vector space. So (adding to the confusion) vectors and dual vectors are both tensors. Vectors are contravariant tensors, and dual vectors are covariant tensors.

Now we form Cartesian products of vectors W (now called V) and covectors V (hereafter called V* to keep them straight).

We get something like this: V x V x V x V* x V*, a Cartesian product of 3 contravariant vectors and 2 dual vectors.

To get a real number out of them we form the tensor product V* ⊗ V* ⊗ V* ⊗ V ⊗ V, where the first V* operates on the first V to produce a real number, the second V* on the second V, the third V* on the third V, and each V at the end operates on a V* the same way. All the real numbers produced are multiplied together to give the result.

Why not just call V* ⊗ V* ⊗ V* ⊗ V ⊗ V a product? Well, each V and V* is an n dimensional vector space, while the tensor product V ⊗ V is an n^2 dimensional space (and V* ⊗ V* ⊗ V* ⊗ V ⊗ V is an n^5 dimensional vector space). When we form the product of two numbers (real or complex) we just get another number of the same species (real or complex). The tensor product of two n dimensional vector spaces is not another n dimensional space, hence the need for the adjective modifying the name product. The dot product nomenclature is much the same: the dot product of two vectors is not another vector, but a real number.
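numpy makes the dimension count concrete (the two vectors are arbitrary): the tensor product of two vectors in R^3 lives in a 9 dimensional space, not back in R^3.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])   # a vector in R^3
w = np.array([1.0, 0.0, -1.0])  # another vector in R^3

# v tensor w: an n x n array, i.e. a point in an n^2 = 9 dimensional space
t = np.outer(v, w)
print(t.shape, t.size)  # (3, 3) 9
```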

Here is yet another source of confusion. What we really have is a tensor product V* ⊗ V* ⊗ V* ⊗ V ⊗ V operating on a Cartesian product of vectors and covectors (tensors themselves) V x V x V x V* x V* to produce a real number.
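Here’s what “operating on a Cartesian product” looks like concretely in numpy: a (3, 2) tensor stored as a 5 index array, contracted against 3 vectors and 2 covectors (all randomly chosen for illustration) to yield a single real number.

```python
import numpy as np

n = 2
rng = np.random.default_rng(0)

# A (3, 2) tensor stored as an array with 5 indices:
# three slots that eat vectors, two that eat covectors
T = rng.standard_normal((n, n, n, n, n))

v1, v2, v3 = rng.standard_normal((3, n))  # three vectors
c1, c2 = rng.standard_normal((2, n))      # two covectors

# Contract every slot: (v1, v2, v3, c1, c2) -> one real number
result = np.einsum('ijklm,i,j,k,l,m->', T, v1, v2, v3, c1, c2)
print(np.ndim(result))  # 0: a single real number
```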

Tensors can be named by their operands, making this a 3 contravariant, 2 covariant tensor — a (3, 2) tensor.

Other books name them by their operator (e.g. the tensor product), making it a 3 covariant, 2 contravariant tensor — a (2, 3) tensor.

If you don’t get this settled when you switch books, you’ll think you don’t really understand what contravariant and covariant mean (when in fact you do). Mercifully, one constancy in notation is that the contravariant number always comes first (or on top) and the covariant number second (or on bottom).

Hopefully this is helpful.  I wish I’d had this spelled out when I started.