Category Archives: Math

Tensors — again, again, again

“A tensor is something that transforms like a tensor” — and a duck is something that quacks like a duck. If you find this sort of thing less than illuminating, I’ve got the book for you — “An Introduction to Tensors and Group Theory for Physicists” by Nadir Jeevanjee.

He notes that many physics books trying to teach tensors start this way, without telling you what a tensor actually is.  

Not so Jeevanjee — right on the first page of text (p. 3) he says “a tensor is a function which eats a certain number of vectors (known as the rank r of the tensor) and produces a number.”  He doesn’t say at first what that number is, but later we are told that it is real or complex (an element of R or C).

Then comes  the crucial fact that tensors are multilinear functions. From that all else flows (and quickly).

This means that you know everything you need to know about a tensor if you know what it does to the basis vectors of the underlying vector space.

He could be a little faster about what these basis vectors actually are, but on p. 7 you are given an example explicitly showing them.

To keep things (relatively) simple the vector space is good old 3 dimensional space with basis vectors x, y and z.

His rank 2 tensor takes two vectors from this space (u and v) and produces a number.  There are 9 basis elements for such tensors, not 6 as you might think — x®x, x®y, x®z, y®x, y®y, y®z, z®x, z®y, and z®z.  (® should be read as an x inside a circle, the symbol for the tensor product.)

Tensor components are the (real) numbers the tensor assigns to these 9 — they are written T(x®x), T(x®y), T(x®z), T(y®x), T(y®y), T(y®z), T(z®x), T(z®y), and T(z®z) — note that there is no reason that T(x®y) should equal T(y®x), any more than a function R^2 –> R should give the same values at (1, 2) and (2, 1).

One more complication — where do the components of u and v fit in?  u is really (u^1, u^2, u^3) and v is really (v^1, v^2, v^3).

They multiply each other and the T’s — so the first term in the expansion of the tensor (each term is sometimes confusingly called a tensor component) is u^1 * v^1 * T(x®x) and the last is u^3 * v^3 * T(z®z).  The 9 terms are then summed, giving a number.
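To make the bookkeeping concrete, here is a minimal numerical sketch in Python (the components and vectors are made-up numbers, purely for illustration, not taken from Jeevanjee):

import numpy as np

# the 9 components T(x®x), T(x®y), ... arranged as a 3 x 3 array (made-up numbers)
T = np.array([[1.0, 2.0, 0.5],
              [0.0, 3.0, 1.0],
              [4.0, 1.0, 2.0]])

u = np.array([1.0, 2.0, 3.0])   # (u^1, u^2, u^3)
v = np.array([0.5, 1.0, 2.0])   # (v^1, v^2, v^3)

# T(u, v) = sum over i and j of u^i * v^j * T[i, j] -- 9 terms in all
value = sum(u[i] * v[j] * T[i, j] for i in range(3) for j in range(3))
print(value, u @ T @ v)          # the explicit 9-term sum and the matrix product give the same number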

Then on pp. 7 and 8 he shows how a change of basis matrix A (a 3 x 3 matrix with entries A^r_s, where r and s each run over 1, 2, 3, and with nonZero determinant) gives the (usually incomprehensible) transformation formula

T^i'j' = A^i'_k * A^j'_l * T^kl   (summed over the repeated indices k and l)

where the indices run over x, y and z (or 1, 2, 3 as usually written).
 
So now you have a handle on the cryptic algebraic expression for tensors and on what happens to them on a change of basis (i.e. how they transform).  Not bad for 5 pages of work — certainly not everything, but enough to make you comfortable with what follows — dual vectors, invariance, symmetric tensors, etc. etc.
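Here is a similar numerical sketch of what basis independence means (again with made-up numbers; I use the convention that the new components are just the values of T on the new basis vectors — index placement conventions differ between authors). The components change when the basis changes, but the number T(u, v) does not:

import numpy as np

T = np.array([[1.0, 2.0, 0.5],       # components T(e_i, e_j) in the old basis
              [0.0, 3.0, 1.0],
              [4.0, 1.0, 2.0]])

B = np.array([[1.0, 1.0, 0.0],       # columns = the new basis vectors written in the old basis
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])      # nonZero determinant, so a legitimate change of basis

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, 1.0, 2.0])

T_new = B.T @ T @ B                  # new components are T evaluated on the new basis vectors
u_new = np.linalg.solve(B, u)        # components of the SAME vector u relative to the new basis
v_new = np.linalg.solve(B, v)

print(u @ T @ v, u_new @ T_new @ v_new)   # the two numbers agree -- the tensor itself never changed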
 
Just knowing the multilinearity of tensors and just 2 postulates of quantum mechanics is all you need to understand entanglement — yes, truly.  And you don’t need the Schrodinger equation, or differential equations at all, just linear algebra.
 
Here is an old post to show you exactly how this works.
 

How formal tensor mathematics and the postulates of quantum mechanics give rise to entanglement

Tensors continue to amaze. I never thought I’d get a simple mathematical explanation of entanglement, but here it is. Explanation is probably too strong a word, because it relies on the postulates of quantum mechanics, which are extremely simple but which lead to extremely bizarre consequences (such as entanglement). As Feynman famously said, ‘no one understands quantum mechanics’. Despite that, it has never made a prediction not confirmed by experiment, so the theory is correct even if we don’t understand ‘how it can be like that’. A century of correct predictions is not to be sneezed at.

If you’re a bit foggy on just what entanglement is — have a look at https://luysii.wordpress.com/2010/12/13/bells-inequality-entanglement-and-the-demise-of-local-reality-i/. Even better, read the book by Zeilinger referred to in the link (if you have the time).

Actually you don’t even need all the postulates of quantum mechanics (as given in the book “Quantum Computation and Quantum Information” by Nielsen and Chuang). No differential equations. No Schrodinger equation. No operators. No eigenvalues. What could be nicer for those thirsting for knowledge? Such a deal ! ! ! Just 2 postulates and a little formal mathematics.

Postulate #1: “Associated to any isolated physical system is a complex vector space with inner product (that is, a Hilbert space) known as the state space of the system. The system is completely described by its state vector, which is a unit vector in the system’s state space.” If this is unsatisfying, see the explication of it on p. 80 of Nielsen and Chuang (where the postulate appears).
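To see how little machinery this involves, here is a tiny sketch (my own toy example, not from Nielsen and Chuang): a single qubit lives in C^2, and a legal state is just a unit vector there.

import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

psi = (ket0 + 1j * ket1) / np.sqrt(2)   # a superposition -- still just a vector in C^2

print(np.vdot(psi, psi).real)           # 1.0 -- a unit vector, as the postulate requires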

Because the linear algebra underlying quantum mechanics seemed to be largely ignored in the course I audited, I wrote a series of posts called Linear Algebra Survival Guide for Quantum Mechanics. The first should be all you need: https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/ (but there are several more).

Even though I wrote a post on tensors, showing how they are a way of describing an object independently of the coordinates used to describe it, I didn’t even discuss another aspect of tensors — multilinearity — which is crucial here. The post itself can be viewed at https://luysii.wordpress.com/2014/12/08/tensors/

Start by thinking of a simple tensor as a vector in a vector space. The tensor product is just a way of combining vectors in vector spaces to get another (and larger) vector space. So the tensor product isn’t a product in the sense that multiplication of two objects (real numbers, complex numbers, square matrices) produces another object of exactly the same kind.

So mathematicians use a special symbol for the tensor product — a circle with an x inside. I’m going to use something similar ‘®’ because I can’t figure out how to produce the actual symbol. So let V and W be the quantum mechanical state spaces of two systems.

Their tensor product is just V ® W. Mathematicians can define things any way they want. A crucial aspect of the tensor product is that it is multilinear. So if v and v’ are elements of V, then v + v’ is also an element of V (because two vectors in a given vector space can always be added). Similarly w + w’ is an element of W if w and w’ are. Adding to the confusion when trying to learn this stuff is the fact that all vectors are themselves tensors.

Multilinearity of the tensor product is what you’d think

(v + v’) ® (w + w’) = v ® (w + w’ ) + v’ ® (w + w’)

= v ® w + v ® w’ + v’ ® w + v’ ® w’

You get all 4 tensor products in this case.
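If you want to see this multilinearity with actual numbers, numpy’s kron function plays the role of ® for column vectors (a sketch with made-up vectors, not from any book):

import numpy as np

v, vp = np.array([1.0, 2.0]), np.array([0.0, 3.0])   # v and v' in V
w, wp = np.array([2.0, 1.0]), np.array([1.0, 1.0])   # w and w' in W

lhs = np.kron(v + vp, w + wp)                         # (v + v') ® (w + w')
rhs = np.kron(v, w) + np.kron(v, wp) + np.kron(vp, w) + np.kron(vp, wp)

print(np.allclose(lhs, rhs))                          # True -- all 4 tensor products appear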

This brings us to Postulate #2 (actually #4 in the book, on p. 94 — we don’t need the other two — I told you this was fairly simple).

Postulate #2 “The state space of a composite physical system is the tensor product of the state spaces of the component physical systems.”

(For the formal definition of a simple tensor, see http://planetmath.org/simpletensor)

Where does entanglement come in? Patience, we’re nearly done. One now must distinguish simple and non-simple tensors. Each of the 4 tensor products in the sum on the last line is simple, being the tensor product of two vectors.

What about v ® w’ + v’ ® w ?? It isn’t simple because there is no way to write it by itself as simple_tensor1 ® simple_tensor2. So it’s called a compound tensor. (v + v’) ® (w + w’) is a simple tensor because v + v’ is just another single element of V (call it v”) and w + w’ is just another single element of W (call it w”).

So the simple tensor (v + v’) ® (w + w’) can be understood as though V has state v” and W has state w”.

v ® w’ + v’ ® w can’t be understood this way. The full system can’t be understood by considering V and W in isolation, i.e. the two subsystems V and W are ENTANGLED.
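For two qubits this can even be checked mechanically. Writing a vector of V ® W as a 2 x 2 array of coefficients c_ij (the coefficient of e_i ® f_j), the vector is a simple tensor exactly when that array has rank 1. The sketch below (my own, not from Nielsen and Chuang) applies that test to a product state and to a Bell-type state of the form v ® w’ + v’ ® w:

import numpy as np

def is_simple(c, tol=1e-12):
    # c[i, j] = coefficient of e_i ® f_j; rank 1 means the state is a product (simple) tensor
    return np.linalg.matrix_rank(np.asarray(c, dtype=complex), tol=tol) <= 1

product = np.outer([1, 1], [1, -1]) / 2              # (e_0 + e_1) ® (f_0 - f_1), normalized
bell    = np.array([[1, 0], [0, 1]]) / np.sqrt(2)    # e_0 ® f_0 + e_1 ® f_1

print(is_simple(product))   # True  -- V and W can each be assigned their own state
print(is_simple(bell))      # False -- the subsystems are ENTANGLED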

Yup, that’s all there is to entanglement (mathematically at least). The paradoxes of entanglement, including Einstein’s ‘spooky action at a distance’, are left for you to explore — again Zeilinger’s book is a great source.

But how can it be like that, you ask? Feynman said not to start thinking these thoughts, and if he didn’t know, do you expect a retired neurologist to tell you? Please.

The Representation of group G on vector space V is really a left action of the group on the vector space

Say what? What does this have to do with quantum mechanics? Quite a bit. Practically everything in fact. Most chemists learn quantum mechanics because they want to see where atomic orbitals come from. So they stagger through the solution of the Schrodinger equation, where the quantum numbers appear as conditions on the recursion relations for its power series solutions.

Forget the Schrodinger equation (for now); quantum mechanics is really written in the language of linear algebra. Feynman warned us not to consider ‘how it can be like that’, but at least you can understand the ‘that’ — i.e. linear algebra. In fact, the instructor in a graduate course in abstract algebra I audited opened the linear algebra section with the remark that the only functions mathematicians really understand are the linear ones.

The definitions used (vector space, inner product, matrix multiplication, Hermitian operator) are obscure and strange. You can memorize them and mumble them as incantations when needed, or you can understand why they are the way they are and where they come from. So if you are a bit rusty on your linear algebra I’ve written a series of 9 posts on the subject — here’s a link to the first, https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/ — just follow the links after that.

Just to whet your appetite, all of quantum mechanics consists of manipulation of a particular vector space called Hilbert space. Yes all of it.

Representations are a combination of abstract algebra and linear algebra, and are crucial in elementary particle physics. In fact elementary particles are representations of abstract symmetry groups.

So in what follows, I’ll assume you know what vector spaces and linear transformations of them are, and how the latter are represented by matrices. I’m not going to explain what a group is, but it isn’t terribly complicated. So if you don’t know about groups, quit here. The Wiki article is too detailed for what you need to know.

The title of the post really threw me, and understanding requires significant unpacking of the definitions, but you need to know this if you want to proceed further in physics.

So we’ll start with a Group G, its operation * and its identity element e.

Next we have a set called X — just that, a bunch of elements (called x, y, . . .) with no further structure imposed — you can’t add elements, you can’t multiply them by real numbers. If you could (with a few more details) you’d have a vector space (see the survival guide).

Definition of Left Action (LA) of G on set X

LA : G x X –> X

LA : ( g, x ) |–> (g . x)

Such that the following two properties hold

1. For all x in X,  LA : (e, x) |–> e . x = x

2. For all g1 and g2 in G and all x in X,  LA : ((g1 * g2), x) |–> (g1 * g2) . x = g1 . (g2 . x)

Given a vector space V, define GL(V), the set of invertible linear transformations of the vector space. GL(V) becomes a group if you let composition of linear transformations be its operation (it’s all in the survival guide).

Now for the definition of representation of Group G on vector space V

It is a function

rho: G –> GL(V)

rho : g |–> LTg, where LTg : V –> V is linear (and invertible)

The representation rho defines a left group action on V

LA : (g, v) |–> LTg(v) — this satisfies the two properties of a left action given above — think about it.
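If thinking about it isn’t enough, here is a toy example (mine, not from any particular text): G = S3, the permutations of {0, 1, 2}, acting on V = R^3 by permuting coordinates. The code checks both properties of a left action for every pair of group elements.

import numpy as np
from itertools import permutations

def compose(g1, g2):                     # the group operation *: (g1 * g2)(i) = g1(g2(i))
    return tuple(g1[g2[i]] for i in range(3))

def rho(g):                              # rho(g) = LTg, an invertible linear map of R^3
    P = np.zeros((3, 3))
    for i in range(3):
        P[g[i], i] = 1.0                 # LTg sends basis vector e_i to e_g(i)
    return P

e = (0, 1, 2)                            # identity element of S3
v = np.array([1.0, 2.0, 3.0])

assert np.allclose(rho(e) @ v, v)        # property 1: the identity acts trivially

for g1 in permutations(range(3)):        # property 2: (g1 * g2) . v = g1 . (g2 . v)
    for g2 in permutations(range(3)):
        assert np.allclose(rho(compose(g1, g2)) @ v, rho(g1) @ (rho(g2) @ v))

print("rho defines a left action of S3 on R^3")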

Now you’re ready for some serious study of quantum mechanics. When you read that the representation is acting on some vector space, you’ll know what they are talking about.

Math can be hard even for very smart people

50 McCosh Hall, an autumn evening in 1956. The place was packed. Chen Ning Yang was speaking about parity violation. Most of the people there (including me) had little idea of what he had done, but wanted to be eyewitnesses to history. But we knew that what he did was important and likely to win him the Nobel (which happened the following year).

That’s not why Yang is remembered today (even though he’s apparently still alive at 98). Before that he and Robert Mills were trying to generalize Maxwell’s equations of electromagnetism so they would work in quantum mechanics and particle physics. Eventually this led Yang and Mills to develop the theory of nonAbelian gauge fields which pervade physics today.

Yang and James Simons (later the founder of Renaissance Technologies and already a world class mathematician — Chern-Simons theory) later wound up at Stony Brook. Simons told him that gauge theory must be related to connections on fiber bundles and pointed him to Steenrod’s The Topology of Fibre Bundles. So he tried to read it and “learned nothing. The language of modern mathematics is too cold and abstract for a physicist.”

Another Yang quote “There are only two kinds of math books: Those you cannot read beyond the first sentence, and those you cannot read beyond the first page.”

So here we have a brilliant man who invented significant mathematics (gauge theory) along with Mills, unable to understand a math book written about the exact same subject (connections on fiber bundles).

The pleasures of reading Feynman on Physics – V

Feynman finally gets around to discussing tensors 376 pages into volume II in “The Feynman Lectures on Physics” and a magnificent help it is (to me at least).  Tensors must be understood to have a prayer of following the math of General Relativity (a 10 year goal, since meeting classmate Jim Hartle who wrote a book “Gravity” on the subject).

There are so many ways to describe what a tensor is (particularly by mathematicians and physicists) that it isn’t obvious that they are talking about the same thing.  I’ve written many posts about tensors, as the best way to learn something is to try to explain it to someone else (a set of links to the posts will be found at the end).

So why is Feynman so helpful to me?  After plowing through 370 pages of Callahan’s excellent book we get to something called the ‘energy-momentum tensor’, aka the stress-energy tensor.  This floored me as it appeared to have little to do with gravity, talking about flows of energy and momentum. However it is only 5 pages away from the relativistic field equations so it must be understood.

Back in the day, I started reading books about tensors such as the tensor of inertia, the stress tensor etc.  These were usually presented as if you knew why they were developed, and just given in a mathematical form which left my intuition about them empty.

Tensors were developed years before Einstein came up with special relativity (1905) or general relativity (1915).

This is where Feynman is so good.  He starts with the problem of electrical polarizability (which is familiar if you’ve plowed this far through volume II) and shows exactly why a tensor is needed to describe it, i.e. he derives the tensor from known facts about electromagnetism.  Then on to the tensor of inertia (another derivation).  This allows you to see where all that notation comes from.  That’s all very nice, but you are dealing with just matrices.  Then on to tensors over 4 vector spaces (a rank 4 tensor), not the same thing as a 4-tensor, which is a tensor over a 4 dimensional vector space.

Then finally we get to the 4 tensor (a tensor over a 4 dimensional vector space) of electromagnetic momentum.  Here are the 16 components of Callahan’s energy momentum tensor, clearly explained.  The circle is finally closed.

He only briefly goes into the way tensors transform under a change of coordinates (which for many authors is the most important thing about them), so his discussion doesn’t contain the usual blizzard of superscripts and subscripts.  Covariant and contravariant are blessedly absent.  Here the best explanation of how they transform is in Jeevanjee, “An Introduction to Tensors and Group Theory for Physicists,” chapter 3, pp. 51 – 74.

Here are a few of the posts I’ve written about tensors trying to explain them to myself (and hopefully you)

https://luysii.wordpress.com/2020/02/03/the-reimann-curvature-tensor/

https://luysii.wordpress.com/2017/01/04/tensors-yet-again/

https://luysii.wordpress.com/2015/06/15/the-many-ways-the-many-tensor-notations-can-confuse-you/

https://luysii.wordpress.com/2014/12/28/how-formal-tensor-mathematics-and-the-postulates-of-quantum-mechanics-give-rise-to-entanglement/

https://luysii.wordpress.com/2014/12/08/tensors/

Book Review: Tales of Impossibility

Here is a book for anyone who has had high school geometry and likes math.  It is “Tales of Impossibility” by David Richeson.  It’s full of diagrams and is extremely well written.  A bright high school student could go all the way to the end, and would learn a lot of abstract algebra, up to and including complex numbers, irrational numbers and transcendental numbers.  It describes the 2000+ year search for ways to trisect an angle, double the cube, construct regular polygons using a compass and straightedge, and square the circle (construct a square with the same area as a given circle) — or to prove that these constructions are impossible.

It took until the late 1800s to finish the job.  Proving that something is impossible is subtle and difficult.    The book is 368 pages long and contains 40 pages of notes and references, but it is definitely not turgid.

There is a huge amount of historical detail about each of the great figures who worked on the problems, starting with Euclid and going on through the Greek geometers, Fermat, and Descartes.

The battles about what could be considered kosher in math occurred every step of the way and are well covered.  Could algebra be used to solve a geometric problem?  Was a negative number a number?  What about an imaginary number, or an irrational one?  Was something you could draw using a marked ruler (neusis) really a geometric figure?

If you look at nothing else, have a look at how Descartes was able to multiply and divide the lengths of various lines using nothing more than Euclid’s geometry (but apparently no one had figured it out before).

The ultimate impossibility proofs involved abstract algebra, so we meet Viete and Descartes, Galois, Hermite etc. etc.  So it might help if some high school algebra was on board.

For the right smart high school kid, this book is perfect.  For the cognoscenti, or even for nonCognoscenti with a lifelong interest in math (such as me), there is a lot to learn.  The proofs of all the geometric statements are well laid out, and now it’s time for me to go through the book a second time and follow them closely.

The Pleasures of Reading Feynman on Physics – IV

Chemists don’t really need to know much about electromagnetism.  Understand Coulombic forces between charges and you’re pretty much done.   You can use NMR easily without knowing much about magnetism aside from the shielding of the nucleus from a magnetic field by  charge distributions and ring currents. That’s  about it.  Of course, to really understand NMR you need the whole 9 yards.

I wonder how many chemists actually have gone this far.  I certainly haven’t.  Which brings me to volume II of the Feynman Lectures on Physics which contains over 500 pages and is all about electromagnetism.

Trying to learn about relativity told me that the way Einstein got into it was figuring out how to transform Maxwell’s equations correctly (James J. Callahan “The Geometry of Spacetime” pp. 22 – 27).  Using the Galilean transformation (which just adds velocities) an observer moving at constant velocity gets a different set of Maxwell equations, which according to the Galilean principle of relativity (yes Galileo got there first) shouldn’t happen.

Lorentz figured out a mathematical kludge so Maxwell’s equations transformed correctly, but it was just that,  a kludge.  Einstein derived the Lorentz transformation from first principles.

Feynman back in the 60s realized that the entering 18-year-olds had heard of relativity and quantum mechanics.  He didn’t like watching them being turned off to physics by studying how blocks travel down inclined planes for 2 or more years before getting to the good stuff (e.g. relativity, quantum mechanics).  So there is special relativity (no gravity) starting in volume I, lecture 15 (p. 138), including all the paradoxes, time dilation, length contraction, a very clear explanation of the Michelson-Morley experiment, etc.

Which brings me to volume II, which is also crystal clear and contains all the vector calculus (in 3 dimensions anyway) you need to know.  As you probably know, a moving charge produces a magnetic field, and a magnetic field produces a force on a moving charge.

Well and good, but on p. 144 Feynman asks you to consider 2 situations:

  1. A stationary wire carrying a current and a moving charge outside the wire — because the charge is moving, a magnetic force is exerted on it causing the charge to move toward the wire (circle it actually)

2. A stationary charge and a  moving wire carrying a current

Paradox — since the charge isn’t moving there should be no magnetic force on it, so it shouldn’t move.

Then Feynman uses relativity to produce an electric force on the stationary charge so it moves.  The world does not come equipped with coordinates, and any reference frame you choose should give you the same physics.

He has to use the length (Fitzgerald) contraction of a moving object (relativistic effect #1) and the time dilation of a moving object (relativistic effect #2) to produce  an electric force on the stationary charge.
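For the numerically inclined, here is a back-of-the-envelope check of the simplest case (my numbers and setup, not Feynman’s): let the test charge move at the same speed as the conduction electrons. In the lab frame the force is purely magnetic; in the charge’s rest frame length contraction leaves the wire with a net line charge, and the resulting electric force comes out gamma times the lab frame magnetic force, exactly what the relativistic transformation of a transverse force requires.

import numpy as np

eps0 = 8.854187817e-12                 # permittivity of free space (F/m)
c    = 2.99792458e8                    # speed of light (m/s)
mu0  = 1.0 / (eps0 * c**2)             # permeability of free space

q   = 1.602e-19                        # the test charge (C)
lam = 1.0e-9                           # line density of the positive ions in the lab frame (C/m)
v   = 0.5 * c                          # electron drift speed = speed of the test charge
r   = 0.01                             # distance of the charge from the wire (m)

beta  = v / c
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# lab frame: the wire is neutral, so the force on the moving charge is purely magnetic
I     = lam * v
B     = mu0 * I / (2 * np.pi * r)
F_lab = q * v * B

# rest frame of the charge: ion density goes up by gamma, electron density down by gamma,
# so the wire carries a net line charge and exerts a purely electric force
lam_net = lam * gamma - lam / gamma    # = gamma * beta**2 * lam
E       = lam_net / (2 * np.pi * eps0 * r)
F_rest  = q * E

print(F_rest / F_lab, gamma)           # the ratio equals gamma, as the force transformation requires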

It’s a tour de force and explains how electricity and magnetism are parts of a larger whole (electromagnetism).  Keep the charge from moving and you see only electric forces, let it move and you see only magnetic forces.  Of course there are reference frames where you see both.

 

General relativity at last

I’ve finally arrived at the relativistic gravitational field equation which includes mass, doing ALL the math and understanding the huge amount of mathematical work it took to get there:  Christoffel symbols (first and second kind), tensors, Fermi coordinates, the Minkowski metric, the Riemann curvature tensor (https://luysii.wordpress.com/2020/02/03/the-reimann-curvature-tensor/), geodesics, matrices, transformation laws, divergence of tensors; the list goes on.  It’s all covered in a tidy 379 pages of the wonderful book I used — “The Geometry of Spacetime” by James J. Callahan, professor emeritus of mathematics at Smith College.  Even better, I got to ask him questions by eMail when I got stuck, and a few times we drank beer and listened to Irish music at a dive bar north of Amherst.

Why relativity?  The following was written 8 years ago.  Relativity is something I’ve always wanted to understand at a deeper level than the popularizations of it (reading the sacred texts in the original, so to speak).  I may have enough background in math to understand how to study it.  Topology is something I started looking at years ago as a chief neurology resident, to get my mind off the ghastly cases I was seeing.

I’d forgotten about it, but a fellow ancient alum mentioned our college president’s speech to us on opening day some 55 years ago.  All the high school guys were nervously looking at our neighbors and wondering if we really belonged there.  The prez told us that if they accepted us they were sure we could do the work, and that although there were a few geniuses in the entering class, there were many more people in the class who thought they were.

Which brings me to our class relativist (Jim Hartle).  I knew a lot of the physics majors as an undergrad, but not this guy.  The index of the new book on Hawking by Ferguson has multiple entries about his work with Hawking (which is ongoing).  Another physicist (now a semi-famous historian) felt validated when the guy asked him for help with a problem.  He never tooted his own horn, and seemed quite modest at the 50th reunion.  As far as I know, one physics self-proclaimed genius (and class valedictorian) has done little work of any significance.  Maybe at the end of the year I’ll be able to read the relativist’s textbook on the subject.  Who knows?  It’s certainly a personal reason for studying relativity.  Maybe at the end of the year I’ll be able to ask him a sensible question.

Well that took 6 years or so.

Well as the years passed, Hartle was close enough to Hawking that he was chosen to speak at Hawking’s funeral.

We really don’t know why we like things, and I’ve always liked math.  As I went on in medicine, I liked math more and more because it could be completely understood (unlike medicine).  Why is the appendix on the right and the spleen on the left?  Dunno, but you’d best remember it.

Coming to medicine from organic chemistry, the contrast was striking.  Experiments just refined our understanding, and one can look at organic synthesis as proving a theorem with the target compound as statement and the synthesis as proof.

Even now, wrestling with the final few pages of Callahan today took my mind off the Wuhan flu and my kids in Hong Kong just as topology took my mind off various neurologic disasters 50 years ago.

What’s next?  Well, I’m just beginning to study the implications of the relativistic field equation, so it’s time to read other books about black holes and gravity.  I’ve browsed in a few — Zee and Wheeler in particular write in an extremely nonstuffy manner, unlike medical and molecular biological writing today (except the blogs).  Hopefully the flu will blow over, and Jim and I will be at our 60th Princeton reunion at the end of May.  I’d better get started on his book “Gravity.”

One point is not clear to me presently.  If mass bends space, which tells mass how to move, then when mass moves it bends space again — so it’s chicken and egg.  Are the equations even soluble?

The Riemann curvature tensor

I have harpooned the great white whale of mathematics (for me at least): the Riemann curvature tensor.  Even better, I understand what curvature is, and how the Riemann curvature tensor expresses it.  Below you’ll see the nightmare of notation by which it is expressed.

Start with curvature.  We all know that a sphere (e.g. the earth) is curved.  But that’s when you look at it from space.  Gauss showed that you could prove a surface was curved just by performing measurements entirely within the surface itself, not looking at it from the outside (the theorema egregium).

Start with the earth, assuming that it is a perfect sphere (it isn’t because its rotation fattens its middle).  We’ve got longitude running from pole to pole and the equator around the middle.  Perfect sphere means that all points are the same distance from the center — e.g. the radius.  Call the radius 1.

Now think of the line from the north pole straight down to the plane formed by the equator (its length is the radius, 1).  Take the midpoint of that line and inscribe a circle on the sphere at that height, parallel to the plane of the equator.  Its radius is half the square root of 3 (about 0.87); this comes from the right triangle just built, with hypotenuse 1 and one side 1/2.  The circumference of the equator is 2*pi (remember the sphere’s radius is 1).  The circumference of the newly inscribed circle is 1.73 * pi.

Now pick a point on the smaller circle and follow a longitude down to the equator.  Call this point down1.  Move in one direction along the equator by 1/4 of the circumference of the sphere (an arc length of pi/2).  Call that point on the equator down then across.

Now go back to the point you first picked on the smaller circle and move in the same direction as you did on the equator by the same absolute distance pi/2 (not by pi/2 radians).  Then follow the longitude down to the equator.  Call that point across then down.  The two will not be the same.  Across then down is farther from down1 than down then across.

The difference occurs because the surface of the sphere is curved, and the difference in endpoints of the two paths is exactly what the Riemann curvature tensor measures.
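A few lines of arithmetic (mine, using the setup above: unit sphere, inscribed circle at height 1/2) make the discrepancy explicit:

import numpy as np

r_small = np.sqrt(3) / 2          # radius of the circle inscribed at height 1/2
arc     = np.pi / 2               # the absolute distance moved 'across' in both paths

# path 1: down the longitude first (longitude stays 0), then across along the equator (radius 1)
lon_down_then_across = arc / 1.0

# path 2: across along the smaller circle first (smaller radius, so a bigger swing in longitude),
# then down the longitude, which doesn't change the longitude
lon_across_then_down = arc / r_small

print(lon_down_then_across, lon_across_then_down)    # about 1.571 vs 1.814 radians
print(lon_across_then_down - lon_down_then_across)   # about 0.243 radians farther along the equator

On a flat plane the two answers would agree; the 0.24 radian gap is the curvature showing itself.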

Here is the way the Riemann curvature tensor is notated.  Hideous isn’t it?
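For reference, in one common sign and index convention the components read as follows (in LaTeX notation; the capital gammas are the Christoffel symbols, and the partial derivatives are the 'slopes' mentioned below):

R^\rho{}_{\sigma\mu\nu} = \partial_\mu \Gamma^\rho_{\nu\sigma} - \partial_\nu \Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda} \Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda} \Gamma^\lambda_{\mu\sigma}

Each index runs over the four spacetime coordinates, which is why the thing has so many components.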

If you’re going to have any hope of understanding general relativity (not special relativity) you need to understand curvature.

I used paths in the example; Riemann uses the slopes of the paths (i.e. derivatives), which makes things much more complicated.  Which is where the triangles (dels) and the capital gammas (Γ) come in.

To really understand the actual notation, you need to understand what a covariant derivative actually is, which is much more complicated, but knowing what you know now, you’ll see where you are going when enmeshed in thickets of notation.

What the Riemann curvature tensor is actually saying is that the order of taking covariant derivatives (which is the same thing as the order of taking paths)  is NOT commutative.

The simplest functions we grow up with are commutative.  2 + 3 is the same as 3 + 2, and 5*3 = 3*5.  The order of the terms doesn’t matter.

Although we weren’t taught to think of it that way, subtraction is not.  5 – 3 is not the same as 3 – 5.  There is all sorts of nonCommutativity in math.  The Lie bracket is one such, the Poisson bracket  another, and most groups are nonCommutative.  But that’s enough.  I wish I’d known this when I started studying general relativity.

Is the microtubule alive ??

When does inanimate matter become animate?  How about cilia — they beat and move around.  No one would call the alpha/beta tubulin dimer from which they are formed alive.  The tubulin proteins contain 450 amino acids or so and form a globule 40 Angstroms (4 nanoMeters) in diameter.  The dimer is then 40 x 80 Angstroms and looks like an oil drum.  The dimers stack end to end (alpha beta alpha beta . . .) to form protofilaments.  Then 13 protofilaments align side by side to form the microtubule (which is 250 Angstroms in diameter, with a central hole about half that size).  Do you think you could design a protein to do this?

Let’s make it a bit more complicated, and add another 10 protofilaments forming a second incomplete ring.  This is the microtubule doublet, and each cilium has 9 of them, all arranged in a circle.

Hopefully you have access to the 31 October Cell, where the repeating unit of the microtubule doublet is shown in exquisite detail — https://www.cell.com/action/showPdf?pii=S0092-8674%2819%2931081-5 — Cell 179, 909–922 ’19.

The structure is from the primitive eukaryote Chlamydomonas, and it repeats every 960 Angstroms (i.e. every 12 alpha/beta tubulin dimers).  So just for one repeating unit, which is just under 1/10 of a micron long, there are (13 + 10) * 12 = 276 dimers.  The cilium is about 12 microns (120,000 Angstroms) long, or roughly 125 repeats, so that’s roughly 125 * 276 = 34,500 tubulin dimers per microtubule doublet.  The cilium has 9 of these arranged around a central pair of microtubules, so that’s on the order of 350,000 tubulin dimers per cilium.
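Here is the back-of-the-envelope arithmetic behind those numbers (my recount, assuming the 960 Angstrom repeat, 23 protofilaments per doublet, and a central pair of ordinary 13 protofilament microtubules):

# back-of-the-envelope recount (all lengths in Angstroms)
repeat_len        = 960                      # one repeat of the doublet structure
dimers_per_repeat = (13 + 10) * 12           # 23 protofilaments x 12 dimers per repeat = 276
cilium_len        = 12 * 10_000              # 12 microns = 120,000 Angstroms

repeats            = cilium_len / repeat_len                # ~125 repeats per doublet
dimers_per_doublet = repeats * dimers_per_repeat            # ~34,500

# 9 outer doublets plus a central pair of ordinary (13 protofilament) microtubules
total = 9 * dimers_per_doublet + 2 * 13 * 12 * repeats
print(round(dimers_per_doublet), round(total))              # ~34,500 and ~350,000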

The Cell article is far better than this, because it shows how the motor proteins which climb along the outside of the doublet (such as dynein) attach.  The article also describes the molecular ruler (basically a 960 Angstrom coiled coil which spans the repeat).  They found some 38 different proteins associated with the microtubule repeat; these proteins repeat as well, at 80, 160, 240, 480 and 960 Angstrom periodicities.  The proteins in the hole in the center of the microtubule (i.e. the lumen) are rich in a protein module called the EF hand, which binds calcium and likely causes movement of the microtubule, at which point the damn thing (whose structure we now know) appears alive.

Because of the attachment of the partial ring (the B ring) to the complete ring of protofilaments, each of the 23 protofilaments has a unique position in the doublet, and each of the proteins in the lumen is bound to a specific microtubule protofilament.  There are 6 different coiled coil proteins inside the A ring, occupying specific furrows between the protofilaments.

Staggering complexity built from a simple subunit, but then Monticello is only made of bricks.

Want to understand Quantum Computing — buy this book

As quantum mechanics enters its second century, quantum computing has been hot stuff for the last third of it, beginning with Feynman’s lectures on computation in 84 – 86.  Articles on quantum computing  appear all the time in Nature, Science and even the mainstream press.

Perhaps you tried to understand it 20 years ago by reading Nielsen and Chuang’s massive tome Quantum Computation and Quantum information.  I did, and gave up.  At 648 pages and nearly half a million words, it’s something only for people entering the field.  Yet quantum computers are impossible to ignore.

That’s where a new book “Quantum Computing for Everyone” by Chris Bernhardt comes in.  You need little more than high school trigonometry and determination to get through it.  It is blazingly clear.  No term is used before it is defined and there are plenty of diagrams.   Of course Bernhardt simplifies things a bit.  Amazingly, he’s able to avoid the complex number system. At 189 pages and under 100,000 words it is not impossible to get through.

Not being an expert, I can’t speak for its completeness, but all the stuff I’ve read about in Nature and Science is there — no cloning, entanglement, Ed Fredkin (and his gate), Grover’s algorithm, Shor’s algorithm, the RSA algorithm.  As a bonus there is a clear explanation of Bell’s theorem.

You don’t need a course in quantum mechanics to get through it, but it would make things easier.  Most chemists (for whom this blog is basically written) have had one.  This plus a background in linear algebra would certainly make the first 70 or so pages a breeze.

Just as a book on language doesn’t get into the fonts it can be written in, the book doesn’t get into how such a computer can be physically instantiated.  What it does do is tell you how the basic guts of the quantum computer work.  Amazingly, they are just matrices (explained in the book) which change one basis for representing qubits (explained) into another.  These are the quantum gates — ‘just operations that can be described by orthogonal matrices’ (p. 117).  The computation comes in by sending qubits through the gates (operating on vectors by matrices).
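Since Bernhardt sticks to real matrices, here is a minimal sketch of a gate doing its work (my example, not taken from the book): the Hadamard gate is a real orthogonal matrix, and sending the qubit |0> through it produces an equal superposition, with measurement probabilities given by the squares of the amplitudes.

import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)      # the Hadamard gate -- a real orthogonal matrix

ket0 = np.array([1.0, 0.0])               # the qubit |0>

out = H @ ket0                            # send the qubit through the gate
print(out)                                # [0.707, 0.707] -- an equal superposition
print(out**2)                             # [0.5, 0.5] -- probabilities of measuring 0 or 1
print(np.allclose(H.T @ H, np.eye(2)))    # True: orthogonal, so the gate is reversible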

Be prepared to work.  The concepts (although clearly explained) come thick and fast.

Linear algebra is basic to quantum mechanics.  Superposition of quantum states is nothing more than a linear combination of vectors.  When I audited a course on QM 10 years ago to see what had changed in 50 years, I was amazed at how little linear algebra was emphasized.  You could do worse than read a series of posts on my blog titled “Linear Algebra Survival Guide for Quantum Mechanics” — there are 9 — start here and follow the links — you may find it helpful — https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/

From a mathematical point of view entanglement (discussed extensively in the book) is fairly simple — philosophically it’s anything but — and the following post was described by a math prof as concise and clear: https://luysii.wordpress.com/2014/12/28/how-formal-tensor-mathematics-and-the-postulates-of-quantum-mechanics-give-rise-to-entanglement/

The book is a masterpiece — kudos to Bernhardt