
What is schizophrenia really like?

The recent tragic death of John Nash and his wife warrants reposting the following, written 11 October 2009.

“I feel that writing to you there I am writing to the source of a ray of light from within a pit of semi-darkness. It is a strange place where you live, where administration is heaped upon administration, and all tremble with fear or abhorrence (in spite of pious phrases) at symptoms of actual non-local thinking. Up the river, slightly better, but still very strange in a certain area with which we are both familiar. And yet, to see this strangeness, the viewer must be strange.”

“I observed the local Romans show a considerable interest in getting into telephone booths and talking on the telephone and one of their favorite words was pronto. So it’s like ping-pong, pinging back again the bell pinged to me.”

Could you paraphrase this? Neither can I, and when, as a neurologist, I had occasion to see schizophrenics, the only way to capture their speech was to transcribe it verbatim. It can’t be paraphrased, because it makes no sense, even though it’s reasonably grammatical.

What is a neurologist doing seeing schizophrenics? That’s for shrinks isn’t it? Sometimes in the early stages, the symptoms suggest something neurological. Epilepsy for example. One lady with funny spells was sent to me with her husband. Family history is important in just about all neurological disorders, particularly epilepsy. I asked if anyone in her family had epilepsy. She thought her nephew might have it. Her husband looked puzzled and asked her why. She said she thought so because they had the same birthday.

It’s time for a little history. The board which certifies neurologists is called the American Board of Psychiatry and Neurology. This is not an accident, as the two fields are joined at the hip. Freud himself started out as a neurologist, wrote papers on cerebral palsy, and studied with a great neurologist of the time, Charcot, at la Salpetriere in Paris. Six months of my 3-year residency were spent in Psychiatry, just as psychiatrists spend time learning neurology (and are tested on it when they take their Boards).

Once a month, a psychiatrist friend and I would go to lunch, discussing cases that were neither psychiatric nor neurologic but a mixture of both. We never lacked for new material.

Mental illness is scary as hell. Society deals with it the same way that kids deal with their fears, by romanticizing it, making it somehow more human and less horrible in the process. My kids were always talking about good monsters and bad monsters when they were little. Look at Sesame Street. There are some fairly horrible-looking characters on it which actually turn out to be pretty nice. Adults have books like “One Flew Over the Cuckoo’s Nest,” etc.

The first quote above is from a letter John Nash wrote to Norbert Wiener in 1959. All this, and much much more, can be found in “A Beautiful Mind” by Sylvia Nasar. It is absolutely the best description of schizophrenia I’ve ever come across. No, I haven’t seen the movie, but there’s no way it can be more accurate than the book.

Unfortunately, the book is about a mathematician, which immediately turns off 95% of the populace. But that is exactly its strength. Nash became ill much later than most schizophrenics, around age 30, when he had already done great work. So people saved what he wrote, and could describe what went on decades later. Even better, the mathematicians had no theoretical axe to grind (Freudian or otherwise). So there’s no ego, id, superego or penis envy in the book, just page after page of description from well over 100 people interviewed for the book, who simply talked about what they saw. The description of Nash at his sickest covers 120 pages or so in the middle of the book. It’s extremely depressing reading, but you’ll never find a better description of what schizophrenia is actually like. For example (p. 242), she recalled that “he kept shifting from station to station. We thought he was just being pesky. But he thought that they were broadcasting messages to him. The things he did were mad, but we didn’t really know it.”

Because of his previous mathematical achievements, people saved what he wrote. The second quote above is from a letter written in 1971 and kept by the recipient for decades; the first is from a letter written 12 years before that.

There are a few heartening aspects of the book. His wife Alicia is a true saint, and stood by him and tried to help as best she could. The mathematicians also come off very well, in their attempts to shelter him and to get him treatment (they even took up a collection for this at one point).

I was also very pleased to see rather sympathetic portraits of the docs who took care of him. No 20/20 hindsight is to be found. They are described as doing the best for him that they could given the limited knowledge (and therapies) of the time. This is the way medicine has been and always will be practiced — we never really know enough about the diseases we’re treating, and the therapies are almost never optimal. We just try to do our best with what we know and what we have.

I actually ran into Nash shortly after the book came out. The Princeton University Store had a fabulous collection of math books back then — several hundred at least, most of them over $50, so it was a great place to browse, which I did whenever I was in the area. Afterwards, I stopped in a coffee shop in Nassau Square and there he was, carrying a large disheveled bunch of papers with what appeared to be scribbling on them. I couldn’t bring myself to speak to him. He had the eyes of a hunted animal.

The most interesting paper I’ve read in the past 5 years — Introduction and allegro

Have a look at Nature vol. 495 pp. 111 – 115 ’13, and the accompanying editorial (ibid. pp. 57 – 58), and see if you can figure out why I think it is so fascinating. It has to do with my background and interests over the last 50+ years, which are unlikely to be completely the same as those of the readers of this blog.

This post will be about computers, and how they can be completely understood in terms of their components (because humans constructed them). The next will be a boiled-down version of the 6 articles at https://luysii.wordpress.com/category/molecular-biology-survival-guide/.

Well, for nearly all my professional career (1962 – 2000) I was a neurologist, and neurologists must deal with the brain and attempt to understand how it works (which we still don’t). The brain (and mind) has always been interpreted using the dominant technology of the day.

Freud (1856 – 1939) formulated his work when steam power was widely known and used. He studied with the most eminent neurologist of the time (Charcot) after getting his M.D. His conception of the mind and its pathology had to do with powerful urges and the way they were channeled through the pipes of the psyche. In particular, traumatic events, if allowed to build up in the system, could create pressures and wreck the psychiatric machinery. Hence the emphasis on discovering the blockages and releasing them before the steam engine exploded into pathology. This approach is alive and well today — can you say PTSD?

Presently the brain is thought of in terms of the current dominant technology — the computer. It runs programs. Use of this analogy goes back to the dawn of the computer age, well before computers became widespread. John von Neumann, who invented the stored-program computer, in which programs and data look the same, wrote “The Computer and the Brain” before his death in 1957.

So as a neurologist (and general techie) I was fascinated with computers when they came out for the general public. Obviously, they could be completely understood, because we created them. Back in the early 80’s I bought an Alpha Micro (long gone), the fruit of some engineers who had worked at Digital Equipment Corporation (DEC — long gone), which was later sold to Compaq (also long gone).

Don’t laugh at what I bought; it was state of the art at the time. It had 64 kiloBytes of memory, of which 32 kiloBytes were taken up by the operating system and the other 32 were used for programs. I read about the logic behind computers, and quickly realized that everything important happened inside the ALU (Arithmetic and Logical Unit), which had places to store data (registers), a register to hold the instruction being executed, and another (the instruction pointer) holding the address of the next one. The instructions were 16 bits (2 bytes) long. The disc was also state of the art — all of 80 megaBytes. It looked (and sounded) like a washing machine, with removable platters which looked like giant thick frisbees.
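To make that picture concrete, here is a toy stored-program machine in Python. It is a sketch of my own, with an invented 16-bit encoding (top 4 bits for the opcode, low 12 bits for an address), not the Alpha Micro’s actual instruction set: one block of memory holds both the program and the data, an instruction pointer walks through it, and a tiny ALU does the arithmetic.

```python
# Toy stored-program machine: memory holds both instructions and data,
# an instruction pointer walks through it, and a tiny ALU does the work.
# The 16-bit encoding here is invented for illustration only.

MEMORY = [0] * 64              # one small address space for program AND data
REGS = {"A": 0, "B": 0}
IP = 0                         # instruction pointer

# Invented encoding: top 4 bits = opcode, low 12 bits = operand/address.
LOAD_A, LOAD_B, ADD_AB, STORE_A, HALT = 0x1, 0x2, 0x3, 0x4, 0xF

# A short program, followed by its data, all in the same memory.
MEMORY[0] = (LOAD_A << 12) | 10    # A <- MEMORY[10]
MEMORY[1] = (LOAD_B << 12) | 11    # B <- MEMORY[11]
MEMORY[2] = (ADD_AB << 12)         # A <- A + B   (the ALU's job)
MEMORY[3] = (STORE_A << 12) | 12   # MEMORY[12] <- A
MEMORY[4] = (HALT << 12)
MEMORY[10], MEMORY[11] = 40, 2     # the data

while True:
    word = MEMORY[IP]                              # fetch
    opcode, operand = word >> 12, word & 0x0FFF    # decode
    IP += 1
    if opcode == LOAD_A:
        REGS["A"] = MEMORY[operand]
    elif opcode == LOAD_B:
        REGS["B"] = MEMORY[operand]
    elif opcode == ADD_AB:
        REGS["A"] = (REGS["A"] + REGS["B"]) & 0xFFFF
    elif opcode == STORE_A:
        MEMORY[operand] = REGS["A"]
    elif opcode == HALT:
        break

print(MEMORY[12])                  # 42
```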

I’d read up on how registers could be built up from logic gates (AND, OR, NOR, NAND). So, on paper, I built logical registers from these elements. I had a clock as well (a black box) which could send signals to the gates, coordinating things. I quickly understood that even for the simplest instruction (add register A to register B), further steps were necessary. This is the microcode: e.g., move register A to the ALU, open register B and use it as the second input along with the instruction code for Add, perform the addition, then store the result someplace.
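For anyone who wants to see the same idea in runnable form, here is a minimal sketch in Python. The gates are the standard ones, but the “microcode” steps are my own illustration, not the Alpha Micro’s actual circuitry or microcode: a full adder built from the gates, a 16-bit ripple-carry add, and ADD A, B spelled out as separate micro-steps.

```python
# Toy illustration: the four gates, a full adder built from them, and the
# ADD A, B "microcode" spelled out as separate steps. This is a sketch of
# the general idea, not the Alpha Micro's actual circuitry or microcode.

def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return 1 - (a & b)
def NOR(a, b):  return 1 - (a | b)

def XOR(a, b):
    """Exclusive OR, built only from the gates above."""
    return AND(OR(a, b), NAND(a, b))

def full_adder(a, b, carry_in):
    """Add two bits plus a carry bit; return (sum, carry_out)."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out

def alu_add16(a, b):
    """Ripple-carry addition of two 16-bit register values, one bit at a time."""
    result, carry = 0, 0
    for i in range(16):
        bit, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= bit << i
    return result & 0xFFFF

# The "microcode" for ADD A, B, as separate steps:
registers = {"A": 0x00F0, "B": 0x000F}
alu_input_1 = registers["A"]                       # move register A to the ALU
alu_input_2 = registers["B"]                       # open register B as the second input
alu_output = alu_add16(alu_input_1, alu_input_2)   # perform the addition
registers["A"] = alu_output                        # store the result someplace
print(hex(registers["A"]))                         # 0xff
```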

Then, after understanding how the instructions operated, I wrote a program to take the ones and zeros of the instructions of the operating system and turn them into something readable, e.g. 0110101000001111 into ADD A, B. This allowed me to see how instructions were turned into a functioning machine.
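In other words, a disassembler. Here is a minimal sketch of the idea in Python; the opcode table and bit layout are invented for illustration and are not the Alpha Micro’s real instruction encoding, so the bit pattern for ADD A, B below differs from the one quoted above.

```python
# Toy disassembler: turn 16-bit instruction words back into mnemonics.
# The opcode table and bit layout are made up for illustration; they are
# NOT the Alpha Micro's real instruction set.

OPCODES = {0b0110: "ADD", 0b0111: "SUB", 0b1000: "MOV", 0b1001: "JMP"}
REGISTERS = {0b00: "A", 0b01: "B", 0b10: "C", 0b11: "D"}

def disassemble(word):
    """Decode one 16-bit word: top 4 bits = opcode, next 2 + 2 bits = registers."""
    opcode = (word >> 12) & 0xF
    dst = (word >> 10) & 0x3
    src = (word >> 8) & 0x3
    mnemonic = OPCODES.get(opcode)
    if mnemonic is None:
        return f".WORD {word:#06x}"       # data, or an opcode we don't recognize
    return f"{mnemonic} {REGISTERS[dst]}, {REGISTERS[src]}"

# Walk through "memory" one word at a time and print something readable.
memory = [0b0110000100000000, 0b1000100000000000]
for address, word in enumerate(memory):
    print(f"{address:04x}: {disassemble(word)}")
# 0000: ADD A, B
# 0001: MOV C, A
```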

Why do it? Well, it was interesting, and at the end of all this I had an understanding of how computers work. Clearly the output depended on the internal structure of the computer (which didn’t change) and the program fed into it (which did). Once you understood the structure of the computer and the language of the instructions, all you needed to understand its output was the program (i.e. the code).

As all this was going on, people were deciphering the chemical nature of the genetic code. Know the sequence of nucleotides and you’d know everything: that was the zeitgeist. By an enormous effort, the first complete sequence of an organism became available in 1977; it was of a DNA virus, PhiX-174. It had all of 5,386 nucleotides, and sequencing it was a huge amount of work. The human genome project was decades away.

This sort of genetic hubris is the subject of the next post in the series. If you’ve read the paper, can you now see why I find it so fascinating? Stay tuned.