The Theorema Egregium and the Inverse Function Theorem
Two of the most important theorems in differential geometry are Gauss's Theorema Egregium and the inverse function theorem. The Theorema Egregium says that you don't need to look at a two dimensional surface (say the surface of a walnut) from outside (i.e. from the way it sits in 3 dimensional space) to understand its curvature. All the information about its Gaussian curvature is contained in measurements made within the surface itself. This is why no flat map of the earth can preserve all distances: the sphere's curvature is intrinsic, and no amount of clever projection can flatten it away.
The inverse function theorem (InFT) is used over and over. If you have a continuously differentiable function from Euclidean space U of finite dimension n to Euclidean space V of the same dimension, and its derivative at a point x of U is invertible, then there exists another function, defined near f(x), to get you back from space V to U.
Even better, once you've proved the inverse function theorem, the proof of another important theorem (the implicit function theorem, aka the ImFT) is quite simple. Given a real valued function f(x, y, ...), the ImFT tells you whether you can express one variable (say x) in terms of the others. Sometimes it's difficult or impossible to solve such an equation for x in terms of y explicitly — consider arctan(e^(x + y^2) * sin(xy) + ln x) = 0. What is important to know in this case is whether it's even possible.
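Here's a minimal sketch of the ImFT's hypothesis in action, using the unit circle x^2 + y^2 = 1 (a far tamer equation than the monster above, chosen purely for illustration):

```python
# The ImFT in miniature: y is locally a function of x wherever df/dy != 0.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 1              # the unit circle, written as f(x, y) = 0

dfdy = sp.diff(f, y)             # 2*y
print(dfdy.subs({x: 0, y: 1}))   # 2: nonzero, so y = g(x) exists near (0, 1)
print(dfdy.subs({x: 1, y: 0}))   # 0: at (1, 0) the theorem promises nothing
```

Near (0, 1) the implicit function is y = sqrt(1 - x^2); near (1, 0) no single valued y(x) exists at all, which is exactly what the vanishing derivative is warning you about.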
The proofs of both are tricky. In particular, the proof of the inverse function theorem is an existence proof. You may not be able to write down the function from V to U even though you've just proved that it exists. So using the InFT to prove the implicit function theorem is also nonconstructive.
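To see what a nonconstructive inverse feels like, consider f(x) = x + e^x. Its derivative 1 + e^x is positive everywhere, so f is strictly increasing and has an inverse on all of R (the one dimensional InFT hands you this locally at every point), yet f^-1 has no elementary formula. A sketch (mine, not Callahan's) of evaluating it anyway, by bisection:

```python
import math

def f(x):
    return x + math.exp(x)   # strictly increasing: f'(x) = 1 + e^x > 0

def f_inverse(y, lo=-50.0, hi=50.0, tol=1e-12):
    # f is strictly increasing, so bisection homes in on the unique preimage
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = f_inverse(2.0)
print(x, f(x))   # x ~ 0.4428544, and f(x) ~ 2.0 as a check
```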
At some point in your mathematical adolescence, you should sit down and follow these proofs. They aren’t easy and they aren’t short.
Here’s where to go. Both can be found in books by James J. Callahan, emeritus professor of Mathematics at Smith College in Northampton, Mass. The proof of the InFT is to be found on pages 169 – 174 of his “Advanced Calculus, A Geometric View”, which is geometric, with lots of pictures. What’s good about this proof is that it’s broken down into some 13 steps. Be prepared to meet a lot of functions and variables.
Just the statement of the InFT involves functions f, f^-1, df, df^-1, spaces U^n, R^n, and variables a, q, B.
The proof of the InFT involves functions g, phi, dphi, h, dh, and N, most of which are vector valued (N is real valued).
Then there are the geometric objects U^n, R^n, Wa, Wfa, Br, and Br/2.
Vectors a, x, u, delta x, delta u, delta v, delta w.
And the real number r.
That’s just to get you through step 8 of the 13 step proof, which proves the existence of the inverse function (aka f^-1). The rest involves proving properties of f^-1 such as continuity and differentiability. I must confess that just proving existence of f^-1 was enough for me.
The proof of the implicit function theorem for two variables — i.e. f(x, y) = k — takes less than a page (p. 190).
The proof of the Theorema Egregium is to be found in his book “The Geometry of Spacetime”, pp. 258 – 262, in 9 steps. Be prepared for fewer functions, but many more symbols.
As to why I’m doing this, please see https://luysii.wordpress.com/2011/12/31/some-new-years-resolutions/
I’m continuing to plow through classic differential geometry en route to studying manifolds en route to understanding relativity. The following thoughts might help someone just starting out.
Derivatives of one variable functions are fairly easy to understand. Plot y = f(x) and measure the slope of the curve. That’s the derivative.
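In code, that slope is just a difference quotient. A minimal sketch (the function x^2 is my choice, nothing special about it):

```python
def slope(f, x, h=1e-6):
    # centered difference: rise over run across a tiny interval around x
    return (f(x + h) - f(x - h)) / (2 * h)

print(slope(lambda x: x**2, 3.0))   # ~6.0, matching d/dx x^2 = 2x at x = 3
```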
So why do you need a matrix to find the derivative of a function of more than one variable? Imagine standing on the side of a mountain. The slope depends on the direction you look. So something giving you the slope(s) of a mountain just has to be more complicated than a single number. It must be something that operates on the direction you’re looking (i.e. on a vector).
Another point to remember about derivatives is that they basically take something that looks bumpy (like a mountain), look very closely at it under a magnifying glass, and flatten it out (i.e. linearize it). Anything linear comes under the rubric of linear algebra — about which I wrote a lot, because it underlies quantum mechanics — for details see the 9 articles I wrote about it in https://luysii.wordpress.com/category/linear-algebra-survival-guide-for-quantum-mechanics/.
Any linear transformation of a vector (of which the direction of your gaze on the side of a mountain is but one) can be represented by a matrix of numbers, which is why, to find the slope in the direction of a vector, the vector must be multiplied by a matrix (the Jacobian, if you want to get technical).
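Here's a minimal numerical sketch of that statement, with a function from R^2 to R^2 I made up for the occasion. The Jacobian is built column by column from difference quotients, then applied to a unit vector giving the direction of your gaze:

```python
import numpy as np

def f(p):
    x, y = p
    return np.array([x**2 * y, x + np.sin(y)])

def jacobian(f, p, h=1e-6):
    # each column j holds the rates of change of f as coordinate j varies
    n = len(p)
    J = np.zeros((len(f(p)), n))
    for j in range(n):
        dp = np.zeros(n)
        dp[j] = h
        J[:, j] = (f(p + dp) - f(p - dp)) / (2 * h)
    return J

p = np.array([1.0, 2.0])
gaze = np.array([1.0, 1.0]) / np.sqrt(2)   # a unit direction vector
print(jacobian(f, p) @ gaze)               # the slope(s) of f in that direction
```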
Now on to the two tricky theorems — the Inverse Function Theorem and the Implicit Function Theorem. I’ve been plowing through a variety of books on differential geometry (Banchoff & Lovett, McInerney, do Carmo, Kreyszig, Thorpe) and they all refer you for proofs of both to an analysis book. The theorems are absolutely crucial to differential geometry, so it’s surprising that none of these books prove them. Both involve linear transformations (because derivatives are linear) from a real vector space R^n — whose elements are ordered n-tuples of real numbers — to another real vector space R^m. So they must inherently involve matrices, which quickly gets rather technical.
To keep your eye on the ball let’s go back to y = f(x), where y and x are real numbers. The reals have the lovely property that between any two of them there lies another, and between those two another and another, so there is no smallest real number greater than 0. Suppose at a point x the derivative isn’t zero but some positive number a, to keep it simple (a negative number would work as well); then y is increasing at x. If the derivative is also continuous at x (which it usually is), then there is a delta greater than zero such that the derivative is positive on the open interval (x – delta, x + delta). This means that f is strictly increasing over that interval, hence one to one there, so there is a function g with g(f(x)) = x for every x in the interval. This is called an inverse function.
Now you’re ready for the inverse function theorem. The conditions are the analogues of the ones above — the derivative at the point should be invertible (in one dimension, nonzero) and continuous there — and an inverse function exists near that point. The trickiness and the mountains of notation come from the fact that the function is from R^n to R^n, where n is any positive integer, so the derivative is a matrix rather than a single number.
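A minimal sketch of that invertibility condition, using the familiar polar coordinate map f(r, theta) = (r cos theta, r sin theta). Its Jacobian determinant works out to r, so the InFT hands you a local inverse wherever r isn't 0, and is silent at the origin (where every value of theta collapses to the same point):

```python
import numpy as np

def jacobian_polar(r, theta):
    # the derivative of (r, theta) -> (r cos theta, r sin theta)
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

print(np.linalg.det(jacobian_polar(2.0, 0.5)))  # ~2.0: invertible, local inverse exists
print(np.linalg.det(jacobian_polar(0.0, 0.5)))  # 0.0: the theorem promises nothing
```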
It’s important to know that, although the inverse and implicit functions are shown logically to exist, they can almost never be written down explicitly. The implicit function theorem follows from the inverse function theorem with even more notation involved, but this is the basic idea behind both.
A few other points on differential geometry. Much of it involves surfaces, and they are defined in 3 ways. The easiest way to understand two of them takes you back to the side of a mountain. Now you’re standing on it halfway up and wondering which would be the best way to get to the top, so you whip out your topographic map, which has lines of constant elevation on it. This brings us to the first way to define a surface. Assume the mountain is given by the function z = f(x, y) — every point (x, y) on the earth has a height z above it where the land stops and the sky begins — so the function is a parameterization of the surface. Another way to define a surface in space is by level sets: set z equal to some height — call it z’ — and define the level set as the set of two dimensional points (x, y) such that f(x, y) = z’. These are the lines of constant elevation (e.g. the contour lines) on the mountain. Differential geometry takes a broad view of such objects — yes, a contour line of f(x, y) is considered a (one dimensional) surface, just as a surface of constant temperature around the sun is a level set of f(x, y, z). The third way to define a surface is implicitly, by f(x1, x2, …, xn) = 0. This is where the implicit function theorem comes in, telling you when some variables are in fact functions of the others.
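To make the three descriptions concrete, here's a sketch using the unit sphere (my example, not from any of the books above):

```python
import numpy as np

# 1. As a graph z = f(x, y): this parameterizes the upper hemisphere
def height(x, y):
    return np.sqrt(1 - x**2 - y**2)

# 2. As a level set: the sphere is the set of points where F(x, y, z) = 1
def F(x, y, z):
    return x**2 + y**2 + z**2

# 3. Implicitly, as G(x, y, z) = 0; the ImFT says z is locally a function
#    of (x, y) wherever dG/dz = 2z is nonzero, i.e. away from the equator
def G(x, y, z):
    return x**2 + y**2 + z**2 - 1

x, y = 0.3, 0.4
z = height(x, y)
print(F(x, y, z), G(x, y, z))   # 1.0 and 0.0: one surface, three descriptions
```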
Well, I hope this helps when you plunge into the actual details.
For the record — the best derivation of these theorems is in Apostol, Mathematical Analysis, 1957 third printing, pp. 138 – 148. The development is leisurely and quite clear. I bought the book in 1960 for $10.50. The second edition came out in ’74 — you can now buy it for $76.00 from Amazon — proving you should never throw out your old math books.