Category Archives: Math

A mathematical kludge and its repair

If you are in a train going x miles an hour and throw a paper airplane forward at x feet per second (or x * 3600/5280 miles per hour), someone outside the train sees the plane move a bit faster than x miles an hour.  Well that’s the whole idea of the Galilean transformation: velocities add.  Except that observers don’t really see velocities adding that way for really fast velocities (close to the speed of light).

Relativity says that there are no privileged sites of observation and that no matter how fast two observer frames are moving relative to each other, light will zing past both at the same speed (3 x 10^8 meters/second, or 186,000 miles/second).

All of Newton’s mechanics and force laws obey the Galilean transformation (e.g. velocities add).  Maxwell conceived a set of 4 laws linking electricity and magnetism together, which predicted new phenomena (such as radio waves, and the fact that light is basically an electromagnetic wave traveling through space).

Even though incredibly successful, Maxwell’s laws led to an equation (the wave equation) which didn’t obey the Galilean transformation.  This led Lorentz to modify the transformation so that the wave equation kept the same form for both observers.  If you’ve got some mathematical background an excellent exposition of all this is to be found in “The Geometry of Spacetime” by James J. Callahan pp. 22 – 27.

The Lorentz transformation is basically a kludge which makes things work out.  But Lorentz had no understanding of why it worked (or what it meant).  The equations produced by the Lorentz transformation are ugly.

Here are the variables involved.

t’ is time in the new frame, t in the old; x’ is position in the new frame, x in the old.  v is the constant velocity at which the two observation frames are moving relative to each other.  c is the common symbol for the velocity of light.

Here are the two equations

t’ =  ( t – vx/c^2 ) / sqrt (1 – v^2/c^2)

x’ = ( x – vt ) /  sqrt (1 – v^2/c^2)

Enter Einstein — he derived them purely by thought.  I recommend Appendix 1 in Einstein’s book “Relativity”.  Amazingly you do not need tensors or even calculus to understand his derivation — just high school algebra (and not much of that — no trigonometry etc. etc.)  You will have the pleasure of watching the power of a great mind at work.

One caveat.  The first few equations won’t make much sense if you hit the appendix without having read the rest of the book (as I did).

Light travels at speed c, so multiplying c by the elapsed time t gives you where the pulse is at time t.  In equations x = ct.  This is also true in the other reference frame: x’ = ct’.

This implies that both x – ct =  0 and x’ – ct’ = 0

Then Einstein claims that these two equations imply that

(x – ct) = lambda * (x’ – ct’) ; lambda is some nonzero number.

Say what?  Is he really saying 0 = lambda * 0?

This is mathematical fantasy.  Lambda could be anything and the statement lacks mathematical content.

Yes, but . . .

It does not lack physical content, which is where the rest of the book comes in.

This is because the two frames (x, t) and (x’, t’) are said to be in ‘standard configuration’, which is a complicated state of affairs.  We’ll see shortly why y, y’, z, z’ are left out.

The assumptions of the standard configuration are as follows:

  • An observer in frame of reference K defines events with coordinates t, x
  • Another frame K’ moves with velocity v relative to K, with an observer in this moving frame K’ defining events using coordinates t’, x’
  • The coordinate axes in each frame of reference are parallel
  • The relative motion is along the coincident x and x’ axes (y = y’ and z = z’ for all time; only the x coordinate changes, which is why y and z are left out)
  • At time t = t’ = 0, the origins of both coordinate systems are the same.

Another assumption is that at time t = t’ = 0 a light pulse is emitted by K at the origin (x = x’ = 0)

The only possible events in K and K’ are observations of the light pulse. Since the velocity of light (c) is independent of the coordinate system, K’ will see the pulse at time t’ and x’ axis location ct’, NOT x’-axis location ct’ – vt’ (which is what Galileo would say). So whenever K sees the pulse at time t and on worldline (ct, t), K’ will see the pulse SOMEWHERE on worldline (ct’, t’).

The way to express this mathematically is by equation (3):  (x – ct) = lambda * (x’ – ct’)

This may seem trivial, but I spent a lot of time puzzling over equation (3).
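Here is a small numerical sketch (my own check in Python, nothing from Einstein’s book): plug events into the Lorentz transformation written out above and the quantity x – ct gets multiplied by one fixed constant, while an event on the light ray (x = ct) gives zero on both sides.

```python
import math

# My own check: the Lorentz transformation multiplies x - c*t by one fixed
# constant, which is the physical content of equation (3).
c = 3.0e8                          # speed of light, meters/second
v = 0.6 * c                        # relative speed of frames K and K'
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def lorentz(t, x):
    """Transform an event (t, x) from frame K to frame K'."""
    return gamma * (t - v * x / c ** 2), gamma * (x - v * t)

for t, x in [(1.0, 1.0e8), (2.0, 5.0e8)]:          # two arbitrary events
    t_p, x_p = lorentz(t, x)
    print((x - c * t) / (x_p - c * t_p))           # the same ratio (lambda) both times

t_p, x_p = lorentz(3.0, 3.0 * c)                   # an event ON the light ray x = ct
print(3.0 * c - c * 3.0, x_p - c * t_p)            # both (essentially) zero
```

With v = 0.6c the ratio works out to exactly 1/2, and it is the same for every event the two observers compare; that proportionality, not the empty statement 0 = lambda * 0, is what equation (3) asserts.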

Now get Einstein’s book and watch him derive the complicated looking Lorentz transformations using simple math and complex reasoning.


Bye bye stoichiometry

Until recently, developments in physics basically followed earlier work by mathematicians.  Think relativity following Riemannian geometry by 40 years.  However in the past few decades, physicists have developed mathematical concepts before the mathematicians — think mirror symmetry which came out of string theory — https://en.wikipedia.org/wiki/Mirror_symmetry_(string_theory). You may skip the following paragraph, but here is what it meant to mathematics — from the description of a 400+ page book by Amherst College’s own David A. Cox

Mirror symmetry began when theoretical physicists made some astonishing predictions about rational curves on quintic hypersurfaces in four-dimensional projective space. Understanding the mathematics behind these predictions has been a substantial challenge. This book is the first completely comprehensive monograph on mirror symmetry, covering the original observations by the physicists through the most recent progress made to date. Subjects discussed include toric varieties, Hodge theory, Kahler geometry, moduli of stable maps, Calabi-Yau manifolds, quantum cohomology, Gromov-Witten invariants, and the mirror theorem. This title features: numerous examples worked out in detail; an appendix on mathematical physics; an exposition of the algebraic theory of Gromov-Witten invariants and quantum cohomology; and, a proof of the mirror theorem for the quintic threefold.

Similarly, advances in cellular biology have come from chemistry.  Think DNA and protein structure, enzyme analysis.  However, cell biology is now beginning to return the favor and instruct chemistry by giving it new objects to study.  Think phase transitions in the cell, liquid–liquid phase separation, liquid droplets, and many other names (the field is in flux) as chemists begin to explore them.  Unlike most chemical objects, they are big, or they wouldn’t be visible microscopically, so they contain many, many more molecules than chemists are used to dealing with.

These objects do not have any sort of definite stoichiometry and are made of RNA and the proteins which bind it (and sometimes DNA).  They go by any number of names (processing bodies, stress granules, nuclear speckles, Cajal bodies, promyelocytic leukemia bodies, germline P granules).  Recent work has shown that DNA may be compacted similarly using the linker histone [ PNAS vol. 115 pp. 11964 – 11969 ’18 ].

The objects are defined essentially by looking at them.  By golly they look like liquid drops, and they fuse and separate just like drops of water.  Once this is done they are analyzed chemically to see what’s in them.  I don’t think theory can predict them now, and they were never predicted a priori as far as I know.

No chemist in their right mind would have made them to study.  For one thing they contain tens to hundreds of different kinds of molecules.  Imagine trying to get a grant to see what would happen if you threw that many different RNAs and proteins together in varying concentrations.  Physicists have worked for years on phase transitions (but usually with a single kind of molecule — think water).  So have chemists — think crystallization.

Proteins move in and out of these bodies in seconds.  Proteins found in them do have low amino acid complexity (mostly made of only a few of the 20), and unlike enzymes, their sequences are intrinsically disordered, so forget the lock and key and induced fit concepts that work for enzymes.

Are they a new form of matter?  Is there any limit to how big they can be?  Are the pathologic precipitates of neurologic disease (neurofibrillary tangles, senile plaques, Lewy bodies) similar?  There certainly are plenty of distinct proteins in the senile plaque, but they don’t look like liquid droplets.

It’s a fascinating field to study.  Although made of organic molecules, there seems to be little for the organic chemist to say, since the interactions aren’t covalent.  Time for physical chemists and polymer chemists to step up to the plate.

Book recommendation

“Losing the Nobel Prize” by Brian Keating is a book you should read if you have any interest in 1. physics, 2. astronomy, 3. cosmology, 4. the sociology of the scientific enterprise (physics division), 5. the Nobel prize, 6. the BICEP and BICEP2 experiments.

It contains extremely clear explanations of the following

1. The spiderweb bolometer detector used to measure the curvature of the universe

2. How Galileo’s telescope works and what he saw

3. How refracting and reflecting telescopes work

4. The Hubble expansion of the universe and the problems it caused

5. The history of the big bang, its inherent problems, how Guth solved some of them but created more

6. How bouncing off water (or dust) polarizes light

7. The smoothness problem, the flatness problem and the horizon problem.

8. The difference between B modes and E modes and why one would be evidence of gravitational waves which would be evidence for inflation.

9. Cosmic background radiation

The list could be much longer.  The writing style is clear and very informal.   Example: he calls the dust found all over the universe — cosmic schmutz.   Then there are the stories about explorers trying to reach the south pole, and what it’s like getting there (and existing there).

As you probably know BICEP2 found galactic dust and not the polarization pattern produced by gravitational waves.  The initial results were announced 17 March 2014 to much excitement.  They were the subject of a talk given the following month at Harvard Graduate Alumni Day, an always interesting set of talks.  I didn’t go to the talk but there were plenty of physicists around to ask about the results (which were nowhere near as clearly explained as they are in this book).  All of them responded to my questions the same way — “It’s complicated.”

The author Brian Keating has a lot to say about Nobels and their pursuit and how distorting it is, but that’s the subject of another post, as purely through chance I’ve known 9 Nobelists before they received their prize.

It will also lead to another post about the general unhappiness of a group of physicists.

Buy and read the book

How to study math by yourself far away from an academic center

“Differential geometry is the study of things that are invariant under a change of notation.”   Sad but true, and not original as it appears in the introduction to two different differential geometry books I own.

Which brings me to symbol tables and indexes in math books. If you have a perfect mathematical mind and can read math books straight through understanding everything and never need to look back in the book for a symbol or concept you’re not clear on, then you don’t need them.  I suspect most people aren’t like that.  I’m not.

Even worse is failing to understand something (say the connection matrix) and trying to find another discussion in another book.  If you go to an older book (most of which do not have symbol tables) the notation will likely be completely different and you have to start back at ground zero.  This happened when I tried to find what a connection form was, finding the discussion in one book rather skimpy.  I found it in O’Neill’s book on elementary differential geometry, but the notation was completely different and I had to read page after page to pick up the lingo until I could understand his discussion (which was quite clear).

Connections are important, and they underlie gauge theory and a lot of modern physics.

Good math books aren’t just theorem proof theorem proof, but have discussions about why you’d want to know something etc. etc.  Even better are discussions about why things are the way they are.  Tu’s book on Differential geometry is particularly good on this, showing (after a careful discussion of why the directional derivative is the way it is) how the rather abstract definition of a connection on a manifold arises by formalizing the properties of the directional derivative and using them to define the connection.

Unfortunately, he presents curvature in a very ad hoc fashion, and I’m back to starting at ground zero in another book (older and without a symbol table).

Nonetheless I find it very helpful when taking notes to always start by listing what is given.  Then a statement of the theorem, particularly putting statements like ‘for all i in {1, … , n}’ in front.  In particular, if a concept is defined, record how the concept is written right in the definition

e.g.

Given X, Y smooth vector fields

def:  Lie Bracket (written [ X, Y ] ) ::= D_X Y – D_Y X

with maybe a link to a page in your notes where D_X is defined

So before buying a math book, look to see how thorough the index is, and whether it has a symbol table.

 

A creation myth

Sigmund Freud may have been wrong about penis envy, but most lower forms of scientific life (chemists, biologists) do have physics envy — myself included.  Most graduate chemists have taken a quantum mechanics course, if only to see where atomic and molecular orbitals come from.  Anyone doing physical chemistry has likely studied statistical mechanics. I was fortunate enough to audit one such course given by E. Bright Wilson (of Pauling and Wilson).

Although we no longer study physics per se, most of us read books about physics.  Two excellent such books have come out in the past year.  One is “What is Real?” — https://www.basicbooks.com/titles/adam-becker/what-is-real/9780465096053/, the other is “Lost in Math” by Sabine Hossenfelder whose blog on physics is always worth reading, both for herself and the heavies who comment on what she writes — http://backreaction.blogspot.com

Both books deserve a long discursive review here. But that’s for another time.  Briefly, Hossenfelder thinks that physics for the past 30 years has become so fascinated with elegant mathematical descriptions of nature, that theories are judged by their mathematical elegance and beauty, rather than agreement with experiment.  She acknowledges that the experiments are both difficult and expensive, and notes that it took a century for one such prediction (gravitational waves) to be confirmed.

The mathematics of physics can certainly be seductive, and even a lowly chemist such as myself has been bowled over by it.  Here is how it hit me

Budding chemists start out by learning that electrons like to be in filled shells.  The first shell holds 2 electrons, the next 2 + 6, etc. etc.  It allows the neophyte to make some sense of the periodic table (as long as they deal with low atomic numbers — why the 4s electrons are of lower energy than the 3d electrons still seems quite ad hoc to me).  Later on we were told that this is because of the quantum numbers n, l, m and s.  Then we learn that atomic orbitals have shapes, in some weird way determined by the quantum numbers, etc. etc.

Recursion relations are no stranger to the differential equations course, where you learn to (tediously) find them for a polynomial series solution for the differential equation at hand. I never really understood them, but I could use them (like far too much math that I took back in college).

So it wasn’t a shock when the QM instructor back in 1961 got to them in the course of solving the Schrodinger equation for the hydrogen atom (with its radially symmetric potential).  First the equation had to be expressed in spherical coordinates (r, theta and phi), which made the Laplacian look rather fierce.  Then the equation was split into 3 parts, each involving only one of r, theta or phi.  The easiest to solve was the one involving phi, which involved only a complex exponential.  But the periodic nature of the solution made the magnetic quantum number fall out.  Pretty good, but nothing earthshaking.

Recursion relations made their appearance with the solution of the radial and the theta equations.  So it was plug and chug time with series solutions and recursion relations chosen so things wouldn’t blow up (or as Dr. Gouterman, the instructor, put it: the electron has to be somewhere, so the wavefunction must be zero at infinity).  MEGO (My Eyes Glazed Over) until all of a sudden there were the principal quantum number (n) and the azimuthal quantum number (l) coming directly out of the recursion relations.
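If you want to see the trick in miniature, here is a sketch in Python using the harmonic oscillator rather than hydrogen (a simpler equation, but the same quantization-by-recursion mechanism; the recursion below is the standard one for the oscillator, not anything from Gouterman’s course).  Demanding that the series not blow up at infinity forces the energy onto discrete values.

```python
import math

# Dimensionless harmonic oscillator: psi(x) = h(x) * exp(-x^2/2), with
# h(x) = sum a_k x^k and recursion a_{k+2} = a_k*(2k + 1 - 2*eps)/((k+1)(k+2)),
# where eps = E/(hbar*omega).  Unless the series terminates, h grows like
# exp(x^2) and psi blows up -- "the electron has to be somewhere."
def psi(x, eps, kmax=300):
    term, total = 1.0, 0.0            # term = a_k * x**k, starting from a_0 = 1
    for k in range(0, kmax, 2):       # the even-power series
        total += term
        term *= (2 * k + 1 - 2 * eps) / ((k + 1) * (k + 2)) * x * x
    return total * math.exp(-x * x / 2)

for eps in (0.4, 0.5, 0.6):
    print(f"eps = {eps}:  psi(6) = {psi(6.0, eps):.3e}")
# Only eps = 0.5 (the quantized ground state, E = hbar*omega/2) stays tiny at
# large x; the off-quantum values blow up.  The quantum number falls out of
# the recursion, just as n and l do for hydrogen.
```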

When I first realized what was going on, it really hit me. I can still see the room and the people in it (just as people can remember exactly where they were and what they were doing when they heard about 9/11 or (for the oldsters among you) when Kennedy was shot — I was cutting a physiology class in med school). The realization that what I had considered mathematical diddle, in some way was giving us the quantum numbers and the periodic table, and the shape of orbitals, was a glimpse of incredible and unseen power. For me it was like seeing the face of God.

But what interested me the most about “Lost in Math” was Hossenfelder’s discussion of the different physical laws appearing at different physical scales (e.g. effective laws), emergent properties and reductionism (pp. 44 and following).  Although things at larger scales (atoms) can be understood in terms of the physics of smaller scales (protons, neutrons, electrons), the details of elementary particle interactions (quarks, gluons, leptons etc.) don’t matter much to the chemist.  The orbits of planets don’t depend on planetary structure, etc. etc.  She notes that reduction of events at one scale to those at a smaller one is not an optional philosophical position to hold, it’s just the way nature is as revealed by experiment.  She notes that you could ‘in principle, derive the theory for large scales from the theory for small scales’ (although I’ve never seen it done), and then she moves on.

But the existence of different structures and different laws at different scales is what has always fascinated me about the world in which we exist.  Do we have a model for a world structured this way?

Of course we do.  It’s the computer.

 

Neurologists have always been interested in computers, and computer people have always been interested in the brain — von Neumann wrote “The Computer and the Brain” shortly before his death; it was published in 1958.

Back in med school in the 60s people were just figuring out how neurons talked to each other where they met at the synapse.  It was with a certain degree of excitement that we found that information appeared to flow just one way across the synapse (from the PREsynaptic neuron to the POSTsynaptic neuron), just like the vacuum tubes of the earliest computers, where current (and information) could flow just one way.

The microprocessors based on transistors that a normal person could play with came out in the 70s.  I was naturally interested, as having taken QM I thought I could understand how transistors work.  I knew about energy gaps in atomic spectra, but how in the world a crystal with zillions of atoms and electrons floating around could produce one seemed like a mystery to me, and still does.  It’s an example of ’emergence’ about which more later.

But forgetting all that, it’s fairly easy to see how electrons could flow from a semiconductor with an abundance of them (due to doping) to a semiconductor with a deficit — and have a hard time flowing back.  Again a one way valve, just like our concept of the synapses.

Now of course, we know information can flow the other way in the synapse from POST synaptic to PREsynaptic neuron, some of the main carriers of which are the endogenous marihuana-like substances in your brain — anandamide etc. etc.  — the endocannabinoids.

In 1968 my wife learned how to do assembly language coding with punch cards, ones and zeros, the whole bit.  Why?  Because I was scheduled for two years of active duty as an Army doc, a time in which we had half a million men in Vietnam.  She was preparing to be a widow with 2 infants, as the Army sent me a form asking for my preferences in assignment, a form so out of date that it offered the option of taking my family with me to Vietnam if I’d extend my tour over there to 4 years.  So I sat around drinking Scotch and reading Faulkner waiting to go in.

So when computers became something the general populace could have, I tried to build a mental one using AND, OR and NOT logic gates and 1s and 0s for high and low voltages.  Since I could see how to build the three using transistors (reductionism), I just went one plane higher.  Note, although the gates can be easily reduced to transistors, and transistors to p and n type semiconductors, there is nothing in the laws of semiconductor physics that implies putting them together to form logic gates.  So the higher plane of logic gates is essentially an act of creation.  They do not necessarily arise from transistors.

What I was really interested in was hooking the gates together to form an ALU (arithmetic and logic unit).  I eventually did it, but doing so showed me the necessity of other components of the chip (the clock and in particular the microcode which lies below assembly language instructions).
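To make the ‘next plane up’ concrete, here is a toy sketch in Python (my own illustration, not the circuit I actually built): a 1-bit full adder wired out of AND, OR and NOT gates.  Nothing in the gates themselves dictates this wiring; the adder is a design imposed from the level above, and chaining adders gives you the arithmetic half of an ALU.

```python
# Gates acting on 0/1 values; the wiring below is one standard full-adder design.
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return sum_bit, carry_out

print(full_adder(1, 1, 0))   # (0, 1): 1 + 1 = 10 in binary
print(full_adder(1, 1, 1))   # (1, 1): 1 + 1 + 1 = 11 in binary
```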

The next level up, is what my wife was doing — sending assembly language instructions of 1’s and 0’s to the computer, and watching how gates were opened and shut, registers filled and emptied, transforming the 1’s and 0’s in the process.  Again note that there is nothing necessary in the way the gates are hooked together to make them do anything.  The program is at yet another higher level.

Above that are the higher level programs, Basic, C and on up.  Above that hooking computers together to form networks and then the internet with TCP/IP  etc.

While they all can be reduced, there is nothing inherent in the things that they are reduced to which implies their existence.  Their existence was essentially created by humanity’s collective mind.

Could something similar be going on with the levels of the world seen in physics?  Here’s what Nobel laureate Robert Laughlin (he of the fractional quantum Hall effect) has to say about it — http://www.pnas.org/content/97/1/28.  Note that this was written before people began taking quantum computers seriously.

“However, it is obvious glancing through this list that the Theory of Everything is not even remotely a theory of every thing (2). We know this equation is correct because it has been solved accurately for small numbers of particles (isolated atoms and small molecules) and found to agree in minute detail with experiment (3–5). However, it cannot be solved accurately when the number of particles exceeds about 10. No computer existing, or that will ever exist, can break this barrier because it is a catastrophe of dimension. If the amount of computer memory required to represent the quantum wavefunction of one particle is N, then the amount required to represent the wavefunction of k particles is N^k. It is possible to perform approximate calculations for larger systems, and it is through such calculations that we have learned why atoms have the size they do, why chemical bonds have the length and strength they do, why solid matter has the elastic properties it does, why some things are transparent while others reflect or absorb light (6). With a little more experimental input for guidance it is even possible to predict atomic conformations of small molecules, simple chemical reaction rates, structural phase transitions, ferromagnetism, and sometimes even superconducting transition temperatures (7). But the schemes for approximating are not first-principles deductions but are rather art keyed to experiment, and thus tend to be the least reliable precisely when reliability is most needed, i.e., when experimental information is scarce, the physical behavior has no precedent, and the key questions have not yet been identified. There are many notorious failures of alleged ab initio computation methods, including the phase diagram of liquid 3He and the entire phenomenology of high-temperature superconductors (8–10). Predicting protein functionality or the behavior of the human brain from these equations is patently absurd. So the triumph of the reductionism of the Greeks is a pyrrhic victory: We have succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything only to discover that it has revealed exactly nothing about many things of great importance.”

So reductionism doesn’t explain the laws we have at various levels.  They are regularities to be sure, and they describe what is happening, but a description is NOT an explanation, in the same way that Newton’s gravitational law predicts zillions of observations about the real world without explaining what gravity is.  But even Newton famously said Hypotheses non fingo (Latin for “I feign no hypotheses”) when discussing the action at a distance which his theory of gravity entailed. Actually he thought the idea was crazy. “That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it”

So are the various physical laws things that are imposed from without, by God only knows what?  The computer with its various levels of phenomena certainly was consciously constructed.

Is what I’ve just written a creation myth or is there something to it?

The Gambler’s fallacy is actually based on our experience

We don’t understand randomness very well. When asked to produce a random sequence we never produce enough repeating patterns, thinking that they are less probable. This is the Gambler’s fallacy.  If heads come up 3 times in a row, the gambler will bet on tails on the next throw.  Why?  This reasoning is actually based on experience.

The following comes from a very interesting paper of a few years ago  [ Proc. Natl. Acad. Sci. vol. 112 pp. 3788 – 3792 ’15 ].  There is a surprising amount of systematic structure lurking within random sequences. For example, in the classic case of tossing a fair coin, where the probability of each outcome (heads or tails) is exactly 0.5 on every single trial, one would naturally assume that there is no possibility for some kind of interesting structure to emerge, given such a simple form of randomness.

However if you record the average amount of time for a pattern to first occur in a sequence (i.e., the waiting time statistic), it is longer for a repetition (head–head HH or tail–tail TT; an average of six tosses is needed) than for an alternation (HT or TH; only four tosses are needed). This is despite the fact that on average, repetitions and alternations are equally probable (occurring once in every four tosses, i.e., the same mean time statistic).

For both of these facts to be true, it must be that repetitions are more bunched together over time—they come in bursts, with greater spacing between, compared with alternations (which is why they appear less frequent to us). Intuitively, this difference comes from the fact that repetitions can build upon each other (e.g., sequence HHH contains two instances of HH), whereas alternations cannot.

Statistically, the mean time and waiting time delineate the mean and variance in the distribution of the interarrival times of patterns (respectively). Despite the same frequency of occurrence (i.e., the same mean), alternations are more evenly distributed over time than repetitions (they have different variances) — which is exactly why they appear less frequent, hence less likely.
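If the six-versus-four claim seems hard to believe, it takes only a few lines of Python to check it by simulation (my own sketch, not anything from the paper):

```python
import random

# Toss a fair coin until a two-flip pattern first appears, record how many
# tosses that took, and average over many trials.  HH and TT really do take
# about six tosses on average, HT and TH about four.
def first_occurrence(pattern, rng):
    seq = ""
    while pattern not in seq:
        seq += rng.choice("HT")
    return len(seq)

rng = random.Random(0)
trials = 100_000
for pattern in ("HH", "HT", "TH", "TT"):
    mean_wait = sum(first_occurrence(pattern, rng) for _ in range(trials)) / trials
    print(f"{pattern}: mean waiting time ~ {mean_wait:.2f} tosses")
```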

Then the authors go on to develop a model of the way we think about these things.

“Is this latent structure of waiting time just a strange mathematical curiosity or could it possibly have deep implications for our cognitive level perceptions of randomness? It has been speculated that the systematic bias in human randomness perception such as the gambler’s fallacy might be due to the greater variance in the interarrival times or the “delayed” waiting time for repetition patterns. Here, we show that a neural model based on a detailed biological understanding of the way the neocortex integrates information over time when processing sequences of events is naturally sensitive to both the mean time and waiting time statistics. Indeed, its behavior is explained by a simple averaging of the influences of both of these statistics, and this behavior emerges in the model over a wide range of parameters. Furthermore, this averaging dynamic directly produces the best-fitting bias-gain parameter for an existing Bayesian model of randomness judgments, which was previously an unexplained free parameter and obtained only through parameter fitting. We show that we can extend this Bayesian model to better fit the full range of human data by including a higher-order pattern statistic, and the neurally derived bias-gain parameter still provides the best fit to the human data in the augmented model. Overall, our model provides a neural grounding for the pervasive gambler’s fallacy bias in human judgments of random processes, where people systematically discount repetitions and emphasize alternations.”

Fascinating stuff

Homology: the skinny

I’d love to get a picture of a triangulated torus in here but I’ve tried for hours and can’t do it.

Homology is a rather esoteric branch of topology concerning holes in shapes (which can have any number of dimensions, not just two or three).  It is very easy to get bogged down in the large number of definitions and algebra without understanding what is really going on.  I certainly did.

 

The following explains what is really going on underneath the massive amounts of algebra (chains, cycles, chain groups, Betti numbers, cohomology, homology groups etc. etc.) required to understand homology.

The doughnut (torus) to picture here is hollow like an inner tube, not solid like a donut.  So it is basically a 2 dimensional surface in 3 dimensional space.  Topology ignores what its objects of study (topological spaces) are embedded in, although they all can be embedded in ‘larger’ spaces, just as the 2 dimensional torus can be embedded in good old 3 dimensional space.

 

Homology allows you to look for holes in topological spaces in any dimension.  How would you find the hole in the torus without looking at it as it sits in 3 dimensional space?

 

Picture the torus covered with intersecting lines.  It is an amazingly difficult theorem to prove that every 2 dimensional surface can be triangulated (e.g. points placed on it so that it is covered with little triangles).  There do exist topological objects which cannot be triangulated (but two dimensional closed surfaces like the torus are not among them).

 

The corners of the triangles are called vertices.  It’s easy to see how you could start at one vertex, march around using the edges between vertices and then get back to where you started.  Such a path is called a cycle.  Note that a cycle is one dimensional, not two.

 

Every 3 adjacent vertices form a triangle. Paths using the 3 edges between them form a cycle.  This cycle is a boundary (in the mathematical sense) because it separates the torus into two parts.  The cycle is one dimensional because all you need is one number to describe any point on it.

 

So far incredibly trivial?  That’s about all there is to it.

 

Now imagine two circles drawn on the triangulated torus as cycles, using as many adjacent vertices as needed: one going around the tube, the other going around the central hole.
Neither one is a boundary in the mathematical sense, because neither separates the torus into two parts.

 

 

Each one has found a ‘hole’ in the torus, without ever looking at it in 3 dimensions.

 

 

So this particular homology group is the set of cycles in the torus which don’t separate it into two parts.

 

 

Similar reasoning allows you to construct paths made of 3 dimensional objects (say tetrahedrons instead of two dimensional triangles) in a 4 dimensional space of  your choice.  Some of these are cycles separating the 4 dimensional space into separate parts and others are cycles which don’t.  This allows you to look for 3 dimensional holes in 4 dimensional spaces.

 

 

Of course it’s more complicated than this. Homology allows you to look for any of the 1, 2, …, n-1 dimensional holes possible in an n dimensional space — but the idea is the same.
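If you like seeing the machinery actually run, here is a toy computation in Python (my own sketch, using a hollow triangle rather than the torus, which would need a full triangulation).  The first Betti number b1 counts the cycles that are not boundaries, and it comes straight out of the ranks of the boundary matrices:

```python
import numpy as np

# Three vertices 0,1,2 and three edges; d1 maps edges to vertices.
edges = [(0, 1), (1, 2), (0, 2)]
d1 = np.zeros((3, 3))
for j, (a, b) in enumerate(edges):
    d1[a, j], d1[b, j] = -1, 1          # boundary of an edge = end - start

def betti1(d1, d2):
    """b1 = dim(cycles) - dim(boundaries) = (#edges - rank d1) - rank d2."""
    rank2 = np.linalg.matrix_rank(d2) if d2.size else 0
    return d1.shape[1] - np.linalg.matrix_rank(d1) - rank2

hollow = betti1(d1, np.zeros((3, 0)))                  # triangle NOT filled in
filled = betti1(d1, np.array([[1.0], [1.0], [-1.0]]))  # the 2-cell fills the cycle
print(hollow, filled)   # 1 and 0: the hollow triangle has one 1-dimensional hole
```

Run the same recipe on a triangulated torus and b1 comes out 2, one for each of the two circles described above.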

 

There’s tons more lingo to get under your belt, boundary homomorphism, K complex, singular homology, p-simplex, simplicial complex, quotient group, etc. etc. but keep this idea in mind.

 

How a first rate mathematician thinks

For someone studying math on their own far from academe, the Math Stack Exchange is a blessing.  There must be people who look at it all day, answering questions as they arise, presumably accumulating some sort of points (either real points or virtue points).  Usually my questions are answered by these people within a day (often hours).  But not this one.

“It is clear to me that a surface of constant negative Gaussian curvature satisfies the hyperbolic axiom (more than one ‘straight’ line not meeting a given ‘straight’ line). Hartshorne (Geometry: Euclid and Beyond p. 374) defines the hyperbolic plane as the Hilbert plane + the hyperbolic axiom.

I’d like a proof that this axiomatic definition of the hyperbolic plane implies a surface of constant negative Gaussian curvature. ”

Clearly a highly technical question.  So why bore you with this?  Because no answer was quickly forthcoming, I asked one of my math professor friends about it.  His response is informal, to the point, and more importantly, shows how a first rate mathematician thinks and explains things.  I assure you that this guy is a big name in mathematics — full prof for years and years, author of several award winning math books etc. etc.  He’s also a very normal appearing and acting individual, and a very nice guy.  So here goes.

” Proving that the axiomatic definition of hyperbolic geometry implies constant negative curvature can be done but requires a lot of work. The first thing you have to do is prove that given any two points p and q in the hyperbolic plane, there is an isometry that takes p to q. By a basic theorem of Gauss, this implies that the Gaussian curvature K is the same at p and q. Hence K is constant. Then the Gauss-Bonnet Theorem says that if you have a geodesic triangle T with angles A, B, C, you have

A+B+C = pi + integral of K over T = pi + K area(T)

since K is constant. This implies K area(T) = A+B+C-pi < 0 by a basic result in hyperbolic geometry. Hence K must be negative, so we have constant negative curvature.

To get real numbers into the Hilbert plane H, you need to impose "rulers" on the lines of H. The idea is that you pick one line L in H and mark two points on it. The axioms then give a bijection from L to the real numbers R that takes the two points to 0 and 1, and then every line in H gets a similar bijection which gives a "ruler" for each line. This allows you to assign lengths to all line segments, which gives a metric. With more work, you get a setup that allows you to get a Riemannian metric on H, hence a curvature, and the lines in H become geodesics in this metric since they minimize distance. All of this takes a LOT of work.

It is a lot easier to build a model that satisfies the axioms. Since the axioms are categorical (all models are isomorphic), you can prove theorems in the model. Doing axiomatic proofs in geometry can be grueling if you insist on justifying every step by an axiom or previous result. When I teach geometry, I try to treat the axioms with a light touch."

I responded to this

"Thanks a lot. It certainly explains why I couldn’t figure this out on my own. An isometry of the hyperbolic plane implies the existence a metric on it. Where does the metric come from? In my reading the formula for the metric seems very ad hoc. "

He got back saying —

"Pick two points on a line and call one 0 and the other 1. By ruler and compass, you can then mark off points on the line that correspond to all rational numbers. But Hilbert also has an axiom of completeness, which gives a bijection between the line and the set of real numbers.

The crucial thing about the isometry group of the plane is that it transitive in the sense of group actions, so that if something happens at one point, then the same thing happens at any other point.

The method explained in my previous email gives a metric on the plane which seems a bit ad-hoc. But one can prove that any two metrics constructed this way are positive real number multiples of each other. "

Logically correct operationally horrible

A med school classmate who graduated from the University of Chicago was fond of saying — that’s how it works in practice, but how does it work in theory?

Exactly the opposite happened when I had to do some programming. It shows the exact difference between computer science and mathematics.

Basically I had to read a large TextEdit file (minimum 2 megabytes, maximum 8) into a FileMaker table and do something similar 15 times. The files ranged in size from 20,000 to 70,000 lines (each delimited by a carriage return). They needed to be broken up into 1000 records.

Each record began with “Begin Card Xnnnn” and ended with “End Card Xnnnn” so it was easy to see where each of the 1000 cards began and ended. So a program was written to
1. look for “Begin Card Xnnnn”
2. count the number of lines until “End Card Xnnnn” was found
3. Open a new record in FileMaker
4. Put the data from Card Xnnnn into a field of the record
5. Repeat 1000 times.

Before I started, I checked the program out with smaller files of 1, 5, 10, 50, 100, 200 and 500 cards.

The first program used a variable called “lineCounter” which just pointed to the line being processed. As each line was read, the pointer was advanced.

It was clear that the run time was seriously nonlinear: 10 cards took more than twice the time that 5 cards did. Even worse, the more cards in the file the worse things got, so that 1000 cards took over an hour.

Although the logic of using an advancing pointer to select and retrieve lines was impeccable, the implementation was not.

You’ve really not been given enough information to figure out what went wrong but give it a shot before reading further.

I was thinking of the lineCounter variable as a memory pointer (which it was), similar to memory pointers in C.

But it wasn’t. To get to line 25,342, the high level command in FileMaker, MiddleValues (Text; Starting_Line; Number_of_Lines_to_get), had to start at the beginning of the file, examine each character for a carriage return, keep a running count of carriage returns, and stop after 25,342 lines had been counted.

So what happened to run time?

Assume the lineCounter had to read every line up to the one it pointed to (not exactly true, but close enough).

Given n lines in the file — that’s the sum of 1 to n — which turns out to be (n^2 + n)/2. (Derivation at the end.)

So when there were 2*n lines in the file, the run time went up by roughly 4 times (exactly (4*n^2 + 2*n)/2 = 2*n^2 + n, versus (n^2 + n)/2).

So run times scaled in a polynomial (quadratic) fashion: k * n lines would scale as (k^2 * n^2 + k * n)/2.

At least it wasn’t exponential time, which would have scaled as 2^n.

How to solve the problem ?

Think about it before reading further

The trick was to start at the first lines in the file, get one card and then throw those lines away, starting over at the top each time. The speed up was impressive.
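To see the same trap outside FileMaker, here is a small re-creation in Python (my own sketch; the fast version splits the text once and keeps a real pointer, which is the natural fix in Python rather than the throw-the-lines-away trick, but the lesson is identical: know what the high level call actually costs).

```python
import time

LINES_PER_CARD = 52      # "Begin Card", 50 data lines, "End Card"

def make_text(n_cards):
    lines = []
    for i in range(n_cards):
        lines.append(f"Begin Card X{i}")
        lines.extend(f"data {j}" for j in range(LINES_PER_CARD - 2))
        lines.append(f"End Card X{i}")
    return "\n".join(lines)

def middle_values(text, start, count):
    """Like MiddleValues: counts line breaks from the top of the file every call."""
    return text.split("\n")[start:start + count]

def slow(text, n_cards):     # advancing pointer, but each fetch re-scans the file
    return ["\n".join(middle_values(text, k * LINES_PER_CARD, LINES_PER_CARD))
            for k in range(n_cards)]

def fast(text, n_cards):     # scan the file once, then just index into the lines
    lines = text.split("\n")
    return ["\n".join(lines[k * LINES_PER_CARD:(k + 1) * LINES_PER_CARD])
            for k in range(n_cards)]

for n in (250, 500, 1000):
    text = make_text(n)
    t0 = time.perf_counter(); slow(text, n); t1 = time.perf_counter()
    fast(text, n);            t2 = time.perf_counter()
    print(f"{n} cards: slow {t1 - t0:.2f} s,  fast {t2 - t1:.2f} s")
# Doubling the number of cards roughly quadruples the slow time
# but merely doubles the fast one.
```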

It really shows the difference between math and computer science. Both use logic, but computer science also has to worry about how the logic actually gets carried out.

Derivation of sum of 1 to n.

Consider a square n little squares on a side. The total number of little squares is n^2. Throw away the diagonal, leaving n^2 – n. The squares left over are twice the sum of 1 to n – 1. So divide n^2 – n by 2 and add back the n diagonal squares, giving (n^2 – n)/2 + n = (n^2 + n)/2.

Entangled points

The terms Limit point, Cluster point, Accumulation point don’t really match the concept point set topology is trying to capture.

As usual, the motivation for any topological concept (including this one) lies in the real numbers.

1 is a limit point of the open interval (0, 1) of real numbers. Any open interval containing 1 also contains elements of (0, 1). 1 is entangled with the set (0, 1) given the usual topology of the real line.

What is the usual topology of the real line (i.e. how are its open sets defined)? The open sets are the open intervals, together with their arbitrary unions and finite intersections.

In this topology no open set can separate 1 from the set (0, 1) — i.e. they are entangled.

So call 1 an entangled point. This way of thinking allows you to think of open sets as separators of points from sets.
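For the concretely minded, here is a tiny check in Python (just an illustration of the definition, nothing deep): no matter how small an open interval you put around 1, it still catches points of (0, 1).

```python
# Shrink an open interval (1 - eps, 1 + eps) around 1 and exhibit a point of
# (0, 1) inside it every time -- 1 cannot be separated from (0, 1) by open sets.
for eps in (0.5, 1e-2, 1e-6, 1e-12):
    witness = 1 - eps / 2
    print(eps, 0 < witness < 1, 1 - eps < witness < 1 + eps)   # always True, True
```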

Hausdorff thought this way when he described the separation axioms (Trennungsaxiome), describing points and sets that open sets could and could not separate.

The most useful collection of open sets satisfy Trennungsaxiom #2 — giving a Hausdorff topological space. There are enough of them so that every two distinct points are contained in two distinct disjoint open sets.

Thinking of limit points as entangled points gives you a more coherent way to think of continuous functions between topological spaces. They never separate a set and any of its entangled points in the domain when they map them to the target space. At least to me, this is far more satisfying (and actually equivalent) than the usual definition of continuity: the inverse image of an open set in the target space is an open set in the domain.

Clarity of thought and ease of implementation are two very different things. It is much easier to prove/disprove that a function is continuous using the usual definition than using the preservation of entangled points.

Organic chemistry could certainly use some better nomenclature. Why not call an SN1 reaction (Substitution Nucleophilic 1) SN-pancake — as the carbon and the three groups left on it after the bond is broken form a plane. Even better, SN2 should be called SN-umbrella, as it is exactly like an umbrella turning inside out in the wind.