In the grad school course on abstract algebra I audited a decade or so ago, the instructor began the discussion of tensors by saying they were the hardest thing in mathematics. Unfortunately I had to drop that section of the course due to a family illness. I've written about tensors before and their baffling notation and nomenclature. The following is yet another way to look at them, which may help with their confusing terminology.
First, this post assumes a significant familiarity with linear algebra. If you need a brush up, I've written a (pretty basic) series of posts on the subject; here's a link to the first one: https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/
All of them can be found here — https://luysii.wordpress.com/category/linear-algebra-survival-guide-for-quantum-mechanics/.
Here’s another attempt to explain them — which will give you the background on dual vectors you’ll need for this post — https://luysii.wordpress.com/2015/06/15/the-many-ways-the-many-tensor-notations-can-confuse-you/
To the physicist, tensors really represent a philosophical position: namely, that there are shapes and processes external to us which are real and independent of the way we choose to describe them mathematically, e.g. by locating their various parts and physical extents in some sort of coordinate system. That approach is described here: https://luysii.wordpress.com/2014/12/08/tensors/
Zee in one of his books defines tensors as something that transforms like a tensor (honest to god). Neuenschwander in his book says “What kind of a definition is that supposed to be, that doesn’t tell you what it is that is changing.”
The following approach may help — it’s from an excellent book which I’ve not completely gotten through — “An Introduction to Tensors and Group Theory for Physicists” by Nadir Jeevanjee.
He says that tensors are just functions that take a bunch of vectors and return a number (either real or complex). It’s a good idea to keep the volume tensor (which takes 3 vectors and returns a real number) in mind while reading further. The tensor function just has one other constraint — it must be multilinear (https://en.wikipedia.org/wiki/Multilinear_map). Amazingly, it turns out that this is all you need.
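If you like to see things computationally, here is a minimal numerical sketch (assuming you have numpy; the function name is mine, not Jeevanjee's) of the volume tensor as a function of three vectors, along with a check that it really is linear in each slot:

```python
import numpy as np

def volume(u, v, w):
    # Volume tensor: takes three vectors in R^3, returns a real number.
    # Equal to the scalar triple product (u x v) . w.
    return float(np.linalg.det(np.column_stack([u, v, w])))

u = np.random.rand(3)
v = np.random.rand(3)
w = np.random.rand(3)
x = np.random.rand(3)
a, b = 2.0, -3.0

# Multilinearity in the first slot: linear in u with v and w held fixed.
lhs = volume(a * u + b * x, v, w)
rhs = a * volume(u, v, w) + b * volume(x, v, w)
print(np.isclose(lhs, rhs))   # True (up to floating point error)
```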
Tensors are named by the number of vectors (written V) and dual vectors (written V*) they massage to produce the number. This is fairly weird when you think about it. We don't name sin(x) by x, because that wouldn't distinguish it from the zillion other real-valued functions of a single variable.
So an (r, s) tensor is named by the ordered array of its operands, (V, …, V, V*, …, V*), with r V's first and s V*'s next in the array. The array tells you what the tensor function must accept as arguments.
How can Jeevanjee get away with this? Amazingly, multilinearity is all you need. Recall that the great thing about the linearity of any function or operator on a vector space is that ALL you need to know is what the function or operator does to the basis vectors of the space. The effect on ANY vector in the vector space then follows by linearity.
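Here's a toy sketch of that point (my own example, numpy assumed): a linear map's values on the three basis vectors are enough to reconstruct its value on any vector.

```python
import numpy as np

# Any linear map f : R^3 -> R^3; here it's just multiplication by a matrix.
M = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [4., 0., 1.]])
f = lambda vec: M @ vec

e = np.eye(3)                               # the standard basis e1, e2, e3 (as rows)
images = [f(e[i]) for i in range(3)]        # ALL we need to know about f

v = np.array([2.0, -1.0, 5.0])
# Rebuild f(v) from the basis images alone, using linearity: f(v) = sum_i v_i * f(e_i)
rebuilt = sum(v[i] * images[i] for i in range(3))
print(np.allclose(rebuilt, f(v)))           # True
```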
Going back to the volume tensor, whose operand array is (V, V, V) with R^3 as the vector space for all 3 V's, how many basis vectors are there for V x V x V? There are 3 for each V, meaning there are 3^3 = 27 possible combinations of basis vectors. You probably remember the formula for the volume enclosed by 3 vectors (call them u, v, w). The 3 components of u are u1, u2 and u3.
The volume tensor calculates volume as the scalar triple product (u x v) . w.
Writing the calculation out:
Volume = u1*v2*w3 - u1*v3*w2 + u2*v3*w1 - u2*v1*w3 + u3*v1*w2 - u3*v2*w1. What about the other 21 combinations of basis vectors? They are all zero, but they are all present in the tensor.
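Here's a sketch (numpy assumed, variable names mine) that computes all 27 components of the volume tensor on the standard basis vectors, confirms that only 6 of them are nonzero, and then rebuilds the volume of three arbitrary vectors from those 27 numbers using nothing but multilinearity:

```python
import numpy as np
from itertools import product

def volume(u, v, w):
    # Scalar triple product (u x v) . w, i.e. the determinant of the matrix
    # whose columns are u, v, w.
    return float(np.linalg.det(np.column_stack([u, v, w])))

e = np.eye(3)   # standard basis of R^3

# The 3^3 = 27 components T[i, j, k] = volume(e_i, e_j, e_k)
T = np.array([[[volume(e[i], e[j], e[k]) for k in range(3)]
               for j in range(3)]
              for i in range(3)])
print(int(np.sum(np.abs(np.round(T)) > 0)))   # 6 -- the other 21 components are zero

# Multilinearity rebuilds the volume of any three vectors from those 27 numbers:
# volume(u, v, w) = sum over i, j, k of u_i * v_j * w_k * T[i, j, k]
u, v, w = np.random.rand(3), np.random.rand(3), np.random.rand(3)
from_components = sum(u[i] * v[j] * w[k] * T[i, j, k]
                      for i, j, k in product(range(3), repeat=3))
print(np.isclose(from_components, volume(u, v, w)))   # True
```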
While any tensor manipulating two vectors can be expressed as a square matrix, the volume tensor, with its 27 components, clearly cannot be. So don't confuse tensors with matrices (as I did).
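To make the contrast concrete, here's a short sketch (numpy again, names mine): a tensor that takes two vectors is just u . M . v for some 3 x 3 matrix of components M, while the volume tensor needs a full 3 x 3 x 3 array.

```python
import numpy as np

# A tensor taking two vectors: its 9 components fit in a square matrix M,
# and evaluating it is just u . M . v
M = np.random.rand(3, 3)
bilinear = lambda u, v: float(u @ M @ v)

# The volume tensor needs a full 3 x 3 x 3 array of components; no single matrix will do.
T = np.zeros((3, 3, 3))
T[0, 1, 2] = T[1, 2, 0] = T[2, 0, 1] = 1.0    # the 6 nonzero components
T[0, 2, 1] = T[2, 1, 0] = T[1, 0, 2] = -1.0

u, v, w = np.random.rand(3), np.random.rand(3), np.random.rand(3)
vol = np.einsum('ijk,i,j,k->', T, u, v, w)    # contract all three slots
print(np.isclose(vol, np.linalg.det(np.column_stack([u, v, w]))))   # True
```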
Note that the formula for volume implicitly used the usual standard orthogonal coordinates for R^3. What would it be in spherical coordinates? You'd have to use a change of basis matrix to (r, theta, phi). Actually you'd need 3 of them, as basis elements of V x V x V are 3-place arrays. This is what gives rise to the horrible subscript and superscript notation by which tensors are usually defined. So rather than memorizing how tensors transform, you can derive things like
T_i'^j' = (A^k_i') * (A^j'_l) * T_k^l, where _ before a letter means subscript, ^ before a letter means superscript, A^k_i' and A^j'_l are change of basis matrices (the second the inverse of the first), and the Einstein summation convention is used (sum over the repeated indices k and l). Note that the change of basis formula for the components of the volume tensor would have 3 such matrices, not two as I've shown.
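Here's a sketch of that transformation law in code (numpy assumed; A is just a random invertible matrix I made up, not an actual spherical coordinate transformation). For a tensor with one subscript and one superscript, the two matrices are a change of basis matrix and its inverse, and the number the tensor computes comes out the same in either basis. The last few lines check the 3-matrix version for the volume tensor, whose components just pick up a factor of det(A):

```python
import numpy as np

rng = np.random.default_rng(0)

# Old-basis components T_k^l of a tensor with one subscript and one superscript:
# first array index is the subscript k, second is the superscript l.
T = rng.random((3, 3))

# Change of basis matrix A^k_i' : its columns are the new basis vectors written in the old basis.
A = rng.random((3, 3))          # a random matrix is (almost surely) invertible
B = np.linalg.inv(A)            # B[j, l] plays the role of A^j'_l, the inverse matrix

# The transformation law T_i'^j' = A^k_i' * A^j'_l * T_k^l, Einstein summation spelled out:
T_new = np.einsum('ki,jl,kl->ij', A, B, T)

# Sanity check: the number the tensor returns doesn't depend on which basis you compute in.
v = rng.random(3)               # vector components v^k in the old basis
f = rng.random(3)               # dual vector components f_l in the old basis
v_new, f_new = B @ v, A.T @ f   # the same vector and dual vector in the new basis
print(np.isclose(np.einsum('kl,k,l->', T, v, f),
                 np.einsum('ij,i,j->', T_new, v_new, f_new)))   # True

# The volume tensor needs three copies of the change of basis matrix, and its
# components in the new basis just pick up a factor of det(A):
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
eps_new = np.einsum('ia,jb,kc,ijk->abc', A, A, A, eps)
print(np.allclose(eps_new, np.linalg.det(A) * eps))             # True
```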
One further point. You can regard a dual vector as a function that takes a vector and returns a number, so a dual vector is a (1, 0) tensor. Similarly, you can regard a vector as a function that takes a dual vector and returns a number, so vectors are (0, 1) tensors. So vectors and dual vectors are actually tensors as well.
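A tiny sketch of that viewpoint (numpy once more, names mine): once you pick a basis, a dual vector is a row of components that eats a vector and returns a number, and a vector eats a dual vector the same way.

```python
import numpy as np

f_components = np.array([1.0, -2.0, 0.5])   # a dual vector: here a (1, 0) tensor
v = np.array([3.0, 1.0, 4.0])               # a vector: here a (0, 1) tensor

dual_as_function = lambda vec: float(f_components @ vec)   # takes a vector, returns a number
vector_as_function = lambda dual: float(dual @ v)          # takes a dual vector, returns a number

print(dual_as_function(v), vector_as_function(f_components))   # the same number, two viewpoints
```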
The distinction between what a tensor does (i.e. its function) and what its operands actually are caused me endless confusion. You write a tensor operating on a dual vector as a (0, 1) tensor, but a dual vector considered as a function is a (1, 0) tensor.
None of this discussion applies to the tensor product, which is an entirely different (but similar) story.
Hopefully this helps.