
The most interesting paper I’ve read in the past 5 years — Introduction and allegro

Have a look at Nature vol. 495 pp. 111 – 115 ’13, and the accompanying editorial (ibid. pp. 57 – 58), and see if you can figure out why I find it so fascinating. It has to do with my background and interests over the past 50+ years, which are unlikely to be completely the same as those of the readers of this blog.

This post will be about computers, and how they can be completely understood in terms of their components (because humans constructed them). The next will be a boiled-down version of the 6 articles at https://luysii.wordpress.com/category/molecular-biology-survival-guide/.

Well, for nearly all of my professional career (1962 – 2000) I was a neurologist, and neurologists must deal with the brain and attempt to understand how it works (which we still don’t). The brain (and mind) has always been interpreted using the dominant technology of the day.

Freud (1856 – 1939) formulated his work when steam power was widely known and used. He studied with the most eminent neurologist of the time (Charcot) after getting his M.D. His conception of the mind and its pathology had to do with powerful urges and the way they were channeled through the pipes of the psyche. In particular, traumatic events, if allowed to build up in the system, could create pressures and wreck the psychiatric machinery. Hence the emphasis on discovering the blockages and releasing them before the steam engine exploded into pathology. This approach is alive and well today — can you say PTSD?

Presently the brain is thought of in terms of the current dominant technology — the computer. It runs programs. Use of this analogy goes back to the dawn of the computer age, well before computers became widespread. John von Neumann, who invented the stored program computer, in which programs and data looked the same, wrote “The Computer and the Brain” before his death in 1957.

So as a neurologist (and general techie) I was fascinated with computers when they came out for the general public. Obviously, they could be completely understood, because we created them. Back in the early ’80s I bought an Alpha Micro (long gone), the fruit of some engineers who had worked at Digital Equipment Corporation (DEC — long gone, later sold to Compaq, also long gone).

Don’t laugh at what I bought; it was state of the art at the time. It had 64 kiloBytes of memory, of which 32 kiloBytes were taken up by the operating system, leaving the other 32 for programs. I read about the logic behind computers, and quickly realized that everything important happened inside the ALU (Arithmetic and Logical Unit), which had places to store data (registers) and another register, the instruction pointer, which kept track of which instruction to execute next. The instructions were 16 bits (2 bytes) long. The disc was also state of the art — all of 80 megaBytes — and it looked (and sounded) like a washing machine, with removable platters that looked like giant thick Frisbees.
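
For concreteness, here is a minimal sketch in Python of the fetch/decode/execute cycle just described. The 4-bit field layout, the register file, and the opcode value for ADD are hypothetical, invented only to illustrate a 16-bit instruction word; they are not the actual Alpha Micro instruction format.

```python
# A toy fetch/decode/execute loop. The instruction layout (4-bit opcode,
# two 4-bit register fields) and the opcode for ADD are made up for
# illustration; only the general idea matches the machine described above.

class TinyCPU:
    def __init__(self, program):
        self.program = list(program)   # 16-bit instruction words
        self.registers = [0] * 16      # general-purpose data registers
        self.ip = 0                    # instruction pointer: which instruction is next

    def step(self):
        instruction = self.program[self.ip]   # fetch
        self.ip += 1
        opcode = (instruction >> 12) & 0xF    # decode the fields
        a = (instruction >> 8) & 0xF
        b = (instruction >> 4) & 0xF
        if opcode == 0b0110:                  # execute: ADD A, B
            self.registers[a] += self.registers[b]
        else:
            raise ValueError(f"unknown opcode {opcode:04b}")

cpu = TinyCPU([0b0110_0001_0010_0000])  # "ADD R1, R2" in this made-up encoding
cpu.registers[1], cpu.registers[2] = 3, 4
cpu.step()
print(cpu.registers[1])  # 7
```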

I’d read up on how registers could be built up from logic gates (AND, OR, NOR, NAND). So, on paper, I built logical registers from these elements. I had a clock as well (a black box) which could send signals to the gates, coordinating things. I quickly understood that even the simplest instruction, add register A to register B, required a further series of internal steps (the microcode): move the contents of register A to the ALU, gate in register B along with the operation code for ADD, perform the addition, then store the result somewhere.
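
Here is a rough Python version of the same paper exercise: every gate built from NAND alone, a one-bit full adder built from those gates, and a 16-bit ripple-carry addition of two “registers” represented as lists of bits. The function names are mine; the only point is that register-to-register addition bottoms out in nothing but gates.

```python
# Everything below is built from a single primitive, NAND:
# gates -> a 1-bit full adder -> a 16-bit register adder.

def NAND(a, b): return 0 if (a and b) else 1
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def XOR(a, b):  return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_registers(reg_a, reg_b):
    """Ripple-carry addition of two 16-bit registers, least significant bit first."""
    result, carry = [], 0
    for a, b in zip(reg_a, reg_b):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result  # final carry out is dropped, as in a fixed-width register

def to_bits(n):      return [(n >> i) & 1 for i in range(16)]
def from_bits(bits): return sum(bit << i for i, bit in enumerate(bits))

print(from_bits(add_registers(to_bits(1234), to_bits(4321))))  # 5555
```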

Then, after understanding how the instructions operated, I wrote a program to take the ones and zeros of the operating system’s instructions and turn them into something readable, e.g. 0110101000001111 into ADD A, B (in other words, a disassembler). This allowed me to see how the instructions were turned into a functioning machine.
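
A toy version of that kind of program might look like the sketch below. The opcode table and the way the 16-bit word is split into fields are invented for illustration (they match the made-up encoding in the earlier sketch); the real Alpha Micro encoding was different, and the original program of course read the actual operating system image rather than a hard-coded list.

```python
# A toy disassembler: turn 16-bit instruction words back into readable mnemonics.
# The field layout and opcode table are hypothetical, for illustration only.

OPCODES = {0b0110: "ADD", 0b0111: "SUB", 0b0001: "MOV"}

def disassemble(word):
    opcode = (word >> 12) & 0xF     # top 4 bits name the operation
    reg_a  = (word >> 8) & 0xF      # next 4 bits: destination register
    reg_b  = (word >> 4) & 0xF      # next 4 bits: source register
    mnemonic = OPCODES.get(opcode, "???")
    return f"{mnemonic} R{reg_a}, R{reg_b}"

for word in [0b0110_0001_0010_0000, 0b0001_0011_0100_0000]:
    print(f"{word:016b}  ->  {disassemble(word)}")
# 0110000100100000  ->  ADD R1, R2
# 0001001101000000  ->  MOV R3, R4
```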

Why do it? Well, it was interesting, and at the end of it all I had an understanding of how computers work. Clearly the output depended on the internal structure of the computer (which didn’t change) and the program fed into it (which did). Once you understood the structure of the computer and the language of the instructions, all you needed to understand its output was the program (i.e. the code).

As all this was going on, people were deciphering the chemical nature of the genetic code. The zeitgeist was that once you knew the sequence of nucleotides, you’d know everything. By an enormous effort the first complete genome sequence of an organism became available in 1977 — the DNA virus phiX-174. It had all of 5,386 nucleotides, and sequencing it was a huge amount of work. The human genome project was decades away.

This sort of genetic hubris is the subject of the next post in the series. If you’ve read the paper, can you now see why I find it so fascinating? Stay tuned.