Tag Archives: Sigmund Freud

A creation myth

Sigmund Freud may have been wrong about penis envy, but most lower forms of scientific life (chemists, biologists) do have physics envy — myself included.  Most graduate chemists have taken a quantum mechanics course, if only to see where atomic and molecular orbitals come from.  Anyone doing physical chemistry has likely studied statistical mechanics. I was fortunate enough to audit one such course given by E. Bright Wilson (of Pauling and Wilson).

Although we no longer study physics per se, most of us read books about physics.  Two excellent ones have come out in the past year.  One is “What is Real?” — https://www.basicbooks.com/titles/adam-becker/what-is-real/9780465096053/; the other is “Lost in Math” by Sabine Hossenfelder, whose blog on physics is always worth reading, both for her own posts and for the heavies who comment on them — http://backreaction.blogspot.com

Both books deserve a long discursive review here, but that’s for another time.  Briefly, Hossenfelder thinks that physics for the past 30 years has become so fascinated with elegant mathematical descriptions of nature that theories are judged by their mathematical elegance and beauty rather than by agreement with experiment.  She acknowledges that the experiments are both difficult and expensive, and notes that it took a century for one such prediction (gravitational waves) to be confirmed.

The mathematics of physics can certainly be seductive, and even a lowly chemist such as myself has been bowled over by it.  Here is how it hit me.

Budding chemists start out by learning that electrons like to be in filled shells. The first shell holds 2 electrons, the next 2 + 6, etc. etc. It allows the neophyte to make some sense of the periodic table (as long as they deal with low atomic numbers — why the 4s electrons are of lower energy than the 3d electrons still seems quite ad hoc to me). Later on we were told that this is because of the quantum numbers n, l, m and s. Then we learn that atomic orbitals have shapes, in some weird way determined by the quantum numbers, etc. etc.
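
The shell counts the neophyte memorizes fall straight out of counting quantum numbers. A minimal sketch (just enumerating states, nothing more):

```python
# Count electron states per shell from the quantum numbers:
# for principal quantum number n, l runs 0..n-1, m runs -l..l,
# and spin s doubles each (n, l, m) combination.

def shell_capacity(n):
    return sum(2 * (2 * l + 1) for l in range(n))

for n in range(1, 5):
    print(n, shell_capacity(n))
# n=1 -> 2, n=2 -> 8, n=3 -> 18, n=4 -> 32
```

The familiar 2, 8, 18, 32 (i.e. 2n²) appears with no chemistry input at all, just the counting rules.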

Recursion relations are no stranger to the differential equations course, where you learn to (tediously) find them for a polynomial series solution for the differential equation at hand. I never really understood them, but I could use them (like far too much math that I took back in college).

So it wasn’t a shock when the QM instructor back in 1961 got to them in the course of solving the Schrodinger equation for the hydrogen atom (with its radially symmetric potential). First the equation had to be expressed in spherical coordinates (r, theta and phi), which made the Laplacian look rather fierce. Then the equation was separated into 3 equations, each involving only one of r, theta or phi. The easiest to solve was the one in phi, whose solution is just a complex exponential. But the periodic nature of the solution made the magnetic quantum number fall out. Pretty good, but nothing earthshaking.

Recursion relations made their appearance with the solution of the radial and the theta equations. So it was plug and chug time with series solutions and recursion relations so things wouldn’t blow up (or as Dr. Gouterman, the instructor, put it: the electron has to be somewhere, so the wavefunction must be zero at infinity). MEGO (My Eyes Glazed Over) until all of a sudden there were the main quantum number (n) and the azimuthal quantum number (l) coming directly out of the recursion relations.
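
For the curious, the cutoff can be watched happening. The sketch below uses the standard recursion relation for the series solution of the hydrogen radial equation in its usual dimensionless form (units and normalization suppressed); the point is only that the coefficients vanish identically once n is an integer greater than l, so the series terminates and the wavefunction behaves at infinity:

```python
from fractions import Fraction

# Series solution v(rho) = sum c_j rho^j of the hydrogen radial equation;
# in the standard dimensionless form the coefficients obey
#   c_{j+1} = c_j * 2(j + l + 1 - n) / ((j + 1)(j + 2l + 2))
# Unless the series terminates, the wavefunction blows up at infinity --
# and it terminates exactly when n is an integer bigger than l.

def radial_coefficients(n, l, terms=8):
    c = [Fraction(1)]
    for j in range(terms - 1):
        c.append(c[-1] * Fraction(2 * (j + l + 1 - n),
                                  (j + 1) * (j + 2 * l + 2)))
    return c

# n = 3, l = 1: the numerator hits zero at j = n - l - 1 = 1,
# and every coefficient after that is zero -- the series cuts off.
coeffs = radial_coefficients(3, 1)
print(coeffs[:4])
```

Demanding termination is exactly where n (an integer, greater than l) comes from.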

When I first realized what was going on, it really hit me. I can still see the room and the people in it (just as people can remember exactly where they were and what they were doing when they heard about 9/11 or (for the oldsters among you) when Kennedy was shot — I was cutting a physiology class in med school). The realization that what I had considered mathematical diddle, in some way was giving us the quantum numbers and the periodic table, and the shape of orbitals, was a glimpse of incredible and unseen power. For me it was like seeing the face of God.

But what interested me the most about “Lost in Math” was Hossenfelder’s discussion of the different physical laws appearing at different physical scales (e.g. effective laws), emergent properties and reductionism (pp. 44 ff.).  Although things at larger scales (atoms) can be understood in terms of the physics of smaller scales (protons, neutrons, electrons), the details of elementary particle interactions (quarks, gluons, leptons etc.) don’t matter much to the chemist.  The orbits of planets don’t depend on planetary structure, etc. etc.  She notes that reduction of events at one scale to those at a smaller one is not an optional philosophical position to hold; it’s just the way nature is, as revealed by experiment.  She notes that you could ‘in principle, derive the theory for large scales from the theory for small scales’ (although I’ve never seen it done), and then she moves on.

But the different structures and different laws at different scales are what have always fascinated me about the world in which we exist.  Do we have a model for a world structured this way?

Of course we do.  It’s the computer.


Neurologists have always been interested in computers, and computer people have always been interested in the brain — von Neumann wrote “The Computer and the Brain” shortly before his death in 1957.

Back in med school in the 60s people were just figuring out how neurons talked to each other where they met at the synapse.  It was with a certain degree of excitement that we found that information appeared to flow just one way across the synapse (from the PREsynaptic neuron to the POSTsynaptic neuron).  Just like the vacuum tubes of the earliest computers, current (and information) could flow only one way.

The microprocessors based on transistors that a normal person could play with came out in the 70s.  I was naturally interested, as having taken QM I thought I could understand how transistors work.  I knew about energy gaps in atomic spectra, but how in the world a crystal with zillions of atoms and electrons floating around could produce one seemed like a mystery to me, and still does.  It’s an example of ’emergence’ about which more later.

But forgetting all that, it’s fairly easy to see how electrons could flow from a semiconductor with an abundance of them (due to doping) to a semiconductor with a deficit — and have a hard time flowing back.  Again a one way valve, just like our concept of the synapses.

Now of course, we know information can also flow the other way across the synapse, from POSTsynaptic to PREsynaptic neuron; some of the main carriers are the endogenous marihuana-like substances in your brain — anandamide etc. etc. — the endocannabinoids.

In 1968 my wife learned how to do assembly language coding: punch cards, ones and zeros, the whole bit.  Why?  Because I was scheduled for two years of active duty as an Army doc, at a time when we had half a million men in Vietnam.  She was preparing to be a widow with 2 infants, as the Army had sent me a form asking for my preferences in assignment, a form so out of date that it offered the option of taking my family with me to Vietnam if I’d extend my tour over there to 4 years.  So I sat around drinking Scotch and reading Faulkner, waiting to go in.

So when computers became something the general populace could have, I tried to build a mental one using AND, OR and NOT logic gates, with 1s and 0s for high and low voltages. Since I could see how to build the three from transistors (reductionism), I just went one plane higher.  Note that although the gates can easily be reduced to transistors, and transistors to p and n type semiconductors, there is nothing in the laws of semiconductor physics that implies putting them together to form logic gates.  So the higher plane of logic gates is essentially an act of creation.  They do not necessarily arise from transistors.
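
That act of creation is easy to mimic on paper (or in a few lines of code). Here is a sketch, with 1 and 0 standing for high and low voltages and a single NAND playing the role of the transistor-level primitive from which everything else is built:

```python
# Build AND, OR and NOT from one primitive (NAND), the way the higher
# "plane" of logic gates sits on top of the transistor level.
# 1 and 0 stand in for high and low voltages.

def NAND(a, b):
    return 0 if (a and b) else 1

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

# Truth tables for the derived gates:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```

Nothing in the NAND itself dictates these combinations; the gate plane is imposed from above, which is the point.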

What I was really interested in was hooking the gates together to form an ALU (arithmetic and logic unit).  I eventually did it, but doing so showed me the necessity of other components of the chip (the clock and in particular the microcode which lies below assembly language instructions).
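
The arithmetic half of such an ALU is just a full adder rippled across the bits of a word. A sketch, with the gates written as ordinary boolean expressions (bits are listed least significant first):

```python
# A 1-bit full adder built only from gates, rippled across a word --
# the arithmetic core of an ALU.  Bits are lists of 0/1, LSB first.

def XOR(a, b):
    return (a or b) and not (a and b)

def full_adder(a, b, carry_in):
    s = XOR(XOR(a, b), carry_in)
    carry_out = (a and b) or (carry_in and XOR(a, b))
    return int(s), int(carry_out)

def ripple_add(abits, bbits):
    out, carry = [], 0
    for a, b in zip(abits, bbits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 3 + 5 in 4 bits, LSB first: [1,1,0,0] + [1,0,1,0] -> [0,0,0,1] (= 8)
print(ripple_add([1, 1, 0, 0], [1, 0, 1, 0]))
```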

The next level up is what my wife was doing — sending assembly language instructions of 1’s and 0’s to the computer, and watching how gates were opened and shut, registers filled and emptied, transforming the 1’s and 0’s in the process.  Again note that there is nothing in the gates themselves that makes hooking them together this way necessary.  The program is at yet another, higher level.

Above that are the higher level programs, Basic, C and on up.  Above that hooking computers together to form networks and then the internet with TCP/IP  etc.

While they all can be reduced, there is nothing inherent in the things that they are reduced to which implies their existence.  Their existence was essentially created by humanity’s collective mind.

Could something similar be going on with the levels of the world seen in physics?  Here’s what Nobel laureate Robert Laughlin (he of the fractional quantum Hall effect) has to say about it — http://www.pnas.org/content/97/1/28.  Note that this was written before people began taking quantum computers seriously.

“However, it is obvious glancing through this list that the Theory of Everything is not even remotely a theory of every thing (2). We know this equation is correct because it has been solved accurately for small numbers of particles (isolated atoms and small molecules) and found to agree in minute detail with experiment (3-5). However, it cannot be solved accurately when the number of particles exceeds about 10. No computer existing, or that will ever exist, can break this barrier because it is a catastrophe of dimension. If the amount of computer memory required to represent the quantum wavefunction of one particle is N, then the amount required to represent the wavefunction of k particles is N^k. It is possible to perform approximate calculations for larger systems, and it is through such calculations that we have learned why atoms have the size they do, why chemical bonds have the length and strength they do, why solid matter has the elastic properties it does, why some things are transparent while others reflect or absorb light (6). With a little more experimental input for guidance it is even possible to predict atomic conformations of small molecules, simple chemical reaction rates, structural phase transitions, ferromagnetism, and sometimes even superconducting transition temperatures (7). But the schemes for approximating are not first-principles deductions but are rather art keyed to experiment, and thus tend to be the least reliable precisely when reliability is most needed, i.e., when experimental information is scarce, the physical behavior has no precedent, and the key questions have not yet been identified. There are many notorious failures of alleged ab initio computation methods, including the phase diagram of liquid ³He and the entire phenomenology of high-temperature superconductors (8-10). Predicting protein functionality or the behavior of the human brain from these equations is patently absurd.
So the triumph of the reductionism of the Greeks is a pyrrhic victory: We have succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything only to discover that it has revealed exactly nothing about many things of great importance.”
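
Laughlin’s “catastrophe of dimension” is plain arithmetic, and worth seeing once with numbers. Assuming (my assumption, for illustration) a modest N of 1000 amplitudes per particle:

```python
# If one particle's wavefunction takes N numbers to store, k particles
# take N**k.  N = 1000 here is an illustrative assumption; real bytes
# would be larger still (each amplitude is a complex double).

N = 1000
for k in (1, 2, 5, 10):
    print(k, N ** k)
```

By k = 10 the count is 10^30 amplitudes, dwarfing any conceivable memory; this is why the equation “cannot be solved accurately when the number of particles exceeds about 10.”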

So reductionism doesn’t explain the laws we have at various levels.  They are regularities to be sure, and they describe what is happening, but a description is NOT an explanation, just as Newton’s gravitational law predicts zillions of observations about the real world without explaining them.  Even Newton famously said Hypotheses non fingo (Latin for “I feign no hypotheses”) when discussing the action at a distance which his theory of gravity entailed. Actually he thought the idea was crazy: “That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.”

So are the various physical laws things that are imposed from without, by God only knows what?  The computer with its various levels of phenomena certainly was consciously constructed.

Is what I’ve just written a creation myth or is there something to it?

What is schizophrenia really like?

The recent tragic deaths of John Nash and his wife warrant reposting the following, written 11 October 2009.

“I feel that writing to you there I am writing to the source of a ray of light from within a pit of semi-darkness. It is a strange place where you live, where administration is heaped upon administration, and all tremble with fear or abhorrence (in spite of pious phrases) at symptoms of actual non-local thinking. Up the river, slightly better, but still very strange in a certain area with which we are both familiar. And yet, to see this strangeness, the viewer must be strange.”

“I observed the local Romans show a considerable interest in getting into telephone booths and talking on the telephone and one of their favorite words was pronto. So it’s like ping-pong, pinging back again the bell pinged to me.”

Could you paraphrase this? Neither can I, and when, as a neurologist, I had occasion to see schizophrenics, the only way to capture their speech was to transcribe it verbatim. It can’t be paraphrased, because it makes no sense, even though it’s reasonably grammatical.

What is a neurologist doing seeing schizophrenics? That’s for shrinks isn’t it? Sometimes in the early stages, the symptoms suggest something neurological. Epilepsy for example. One lady with funny spells was sent to me with her husband. Family history is important in just about all neurological disorders, particularly epilepsy. I asked if anyone in her family had epilepsy. She thought her nephew might have it. Her husband looked puzzled and asked her why. She said she thought so because they had the same birthday.

It’s time for a little history. The board which certifies neurologists is called the American Board of Psychiatry and Neurology. This is not an accident, as the two fields are joined at the hip. Freud himself started out as a neurologist, wrote papers on cerebral palsy, and studied with a great neurologist of the time, Charcot, at la Salpetriere in Paris. 6 months of my 3 year residency were spent in Psychiatry, just as psychiatrists spend time learning neurology (and are tested on it when they take their Boards).

Once a month, a psychiatrist friend and I would go to lunch, discussing cases that were neither psychiatric nor neurologic but a mixture of both. We never lacked for new material.

Mental illness is scary as hell. Society deals with it the same way that kids deal with their fears, by romanticizing it, making it somehow more human and less horrible in the process. My kids were always talking about good monsters and bad monsters when they were little. Look at Sesame Street. There are some fairly horrible looking characters on it which turn out actually to be pretty nice. Adults have books like “One Flew Over the Cuckoo’s Nest” etc. etc.

The first quote above is from a letter John Nash wrote to Norbert Wiener in 1959. All this, and much much more, can be found in “A Beautiful Mind” by Sylvia Nasar. It is absolutely the best description of schizophrenia I’ve ever come across. No, I haven’t seen the movie, but there’s no way it can be more accurate than the book.

Unfortunately, the book is about a mathematician, which immediately turns off 95% of the populace. But that is exactly its strength. Nash became ill much later than most schizophrenics — around 30 when he had already done great work. So people saved what he wrote, and could describe what went on decades later. Even better, the mathematicians had no theoretical axe to grind (Freudian or otherwise). So there’s no ego, id, superego or penis envy in the book, just page after page of description from well over 100 people interviewed for the book, who just talked about what they saw. The description of Nash at his sickest covers 120 pages or so in the middle of the book. It’s extremely depressing reading, but you’ll never find a better description of what schizophrenia is actually like — e.g. (p. 242) She recalled that “he kept shifting from station to station. We thought he was just being pesky. But he thought that they were broadcasting messages to him. The things he did were mad, but we didn’t really know it.”

Because of his previous mathematical achievements, people saved what he wrote — the second quote above being from a letter written in 1971 and kept by the recipient for decades, the first from a letter written 12 years before that.

There are a few heartening aspects of the book. His wife Alicia is a true saint, and stood by him and tried to help as best she could. The mathematicians also come off very well, in their attempts to shelter him and to get him treatment (they even took up a collection for this at one point).

I was also very pleased to see rather sympathetic portraits of the docs who took care of him. No 20/20 hindsight is to be found. They are described as doing the best for him that they could given the limited knowledge (and therapies) of the time. This is the way medicine has been and always will be practiced — we never really know enough about the diseases we’re treating, and the therapies are almost never optimal. We just try to do our best with what we know and what we have.

I actually ran into Nash shortly after the book came out. The Princeton University Store had a fabulous collection of math books back then — several hundred at least, most of them over $50, so it was a great place to browse, which I did whenever I was in the area. Afterwards, I stopped in a coffee shop in Nassau Square and there he was, carrying a large disheveled bunch of papers with what appeared to be scribbling on them. I couldn’t bring myself to speak to him. He had the eyes of a hunted animal.

The most interesting paper I’ve read in the past 5 years — Introduction and allegro

Have a look at Nature vol. 495 pp. 111 – 115 ’13, and the accompanying editorial (ibid. pp. 57 – 58) and see if you can find out why I think it is so fascinating. It has to do with my background and interests over the last 50+ years which are unlikely to be completely the same as the readers of this blog.

This post will be about computers, and how they can be completely understood in terms of their components (because humans constructed them). The next will be a boiled down version of the 6 articles https://luysii.wordpress.com/category/molecular-biology-survival-guide/.

Well, for nearly all my professional career (1962 – 2000) I was a neurologist, and neurologists must deal with the brain and attempt to understand how it works (which we still don’t). The brain (and mind) has always been interpreted using the dominant technology of the day.

Freud (1856 – 1939) formulated his work when steam power was widely known and used. He studied with the most eminent neurologist of the time (Charcot) after getting his M.D. His conception of the mind and its pathology had to do with powerful urges and the way they were channeled through the pipes of the psyche. In particular, traumatic events, if allowed to build up in the system, could create pressures and wreck the psychiatric machinery. Hence the emphasis on discovering the blockages and releasing them before the steam engine exploded into pathology. This approach is alive and well today — can you say PTSD?

Presently the brain is thought of in terms of the current dominant technology — the computer. It runs programs. Use of this analogy goes back to the dawn of the computer age way before they became widespread. John von Neumann who invented the stored program computer, in which programs and data looked the same, wrote “The Computer and the Brain” before his death in 1957.

So as a neurologist (and general techie) I was fascinated with computers when they came out for the general public. Obviously, they could be completely understood, because we created them. Back in the early 80s I bought an Alpha Micro (long gone), the fruit of some engineers who had worked at Digital Equipment Corporation (DEC — long gone, sold to Compaq, also long gone).

Don’t laugh at what I bought; it was state of the art at the time. It had 64 kiloBytes of memory, of which 32 kiloBytes were taken up by the operating system, and the other 32 were used for programs. I read about the logic behind computers, and quickly realized that everything important happened inside the ALU (Arithmetic and Logical Unit), which had places to store data (registers) and a place to store one instruction (another register called the instruction pointer). The instructions were 16 bits (2 bytes) long. The disc was also state of the art at the time — all of 80 megaBytes — it looked (and sounded) like a washing machine, with removable platters which looked like giant thick frisbees.

I’d read up on how registers could be built from logic gates (AND, OR, NOR, NAND). So, on paper, I built logical registers from these elements. I had a clock as well (a black box) which could send signals to the gates, coordinating things. I quickly understood that even the simplest instruction (Add register A to register B) required further instructions below it — this is the microcode — e.g. move register A to the ALU, open register B and use it as the other input along with the instruction code for Add, perform the addition, then store the result someplace.
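
Those micro-steps can be sketched as a table the clock walks through, one micro-operation per tick. The register names and micro-ops below are illustrative only, not any real chip’s microcode:

```python
# One assembly instruction (ADD A, B) expanded into clock-driven
# micro-operations.  Registers and micro-ops are made up for the sketch.

regs = {"A": 3, "B": 5, "ALU_IN1": 0, "ALU_IN2": 0, "ALU_OUT": 0}

MICROCODE_ADD_A_B = [
    ("move", "A", "ALU_IN1"),   # tick 1: A onto one ALU input
    ("move", "B", "ALU_IN2"),   # tick 2: B onto the other input
    ("alu_add",),               # tick 3: ALU combines its inputs
    ("move", "ALU_OUT", "A"),   # tick 4: result back into A
]

for step in MICROCODE_ADD_A_B:
    if step[0] == "move":
        regs[step[2]] = regs[step[1]]
    elif step[0] == "alu_add":
        regs["ALU_OUT"] = regs["ALU_IN1"] + regs["ALU_IN2"]

print(regs["A"])   # 8
```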

Then after understanding how the instructions operated, I wrote a program to take the ones and zeros of the instructions of the operating system, and turn them into something readable e.g. 0110101000001111 into ADD A, B. This allowed me to see how instructions were turned into a functioning machine.
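
The idea of that program was simple: peel the fields out of each 16-bit word and look the opcode up in a table. The encodings and field layout below are hypothetical (the real Alpha Micro instruction set isn’t reproduced here), but the mechanics are the same:

```python
# Sketch of a disassembler: map 16-bit instruction words back to
# mnemonics.  Opcode table and bit layout are assumed for illustration,
# not the actual instruction set of any machine.

OPCODES = {0b0110: "ADD", 0b0111: "SUB", 0b0001: "MOV"}  # assumed encodings
REGS = ["A", "B", "C", "D"]

def disassemble(word):
    op = (word >> 12) & 0xF     # top 4 bits: operation
    dst = (word >> 10) & 0x3    # next 2 bits: destination register
    src = (word >> 8) & 0x3     # next 2 bits: source register
    return f"{OPCODES.get(op, '???')} {REGS[dst]}, {REGS[src]}"

print(disassemble(0b0110000100000000))   # prints ADD A, B
```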

Why do it? Well, it was interesting, and at the end of all this an understanding of how computers work could be had. Clearly the output depended on the internal structure of the computer (which didn’t change) and the program fed into it (which did). Once you understood the structure of the computer and the language of the instructions, all you needed to understand its output was the program (e.g. the code).

As all this was going on, people were deciphering the chemical nature of the genetic code. The zeitgeist was that if you knew the sequence of nucleotides, you’d know everything. By an enormous effort the first complete sequence of an organism became available in 1977 — the DNA virus PhiX-174. It had all of 5,386 bases, and sequencing it was a huge amount of work. The human genome project was decades away.

This sort of genetic hubris is the subject of the next post in the series. If you’ve read the paper, can you now see why I find it so fascinating? Stay tuned.