Category Archives: Philosophical issues raised

Never stop thinking, never stop looking for an angle

Derek Lowe may soon be a very rich man if he owns some Vertex stock. An incredible pair of papers in the current Nature (vol. 505 pp. 492 – 493, 509 – 514 ’14) and Science (vol. 343 pp. 383 – 384, 428 – 432 ’14) has come up with a completely new way of possibly treating AIDS. Instead of attacking the virus, attack the cells it infects, and let them live (or at least die differently).

Now for some background. Cells within us are dying all the time. Red cells die within half a year, and the cells lining your gut die and are replaced within a week. None of this causes inflammation; the cells die very quietly and are munched up by white cells. They even display an ‘eat me’ signal to attract them. The process is called apoptosis. It occurs big time during embryonic development, particularly in the nervous system. Neurons failing to make strong enough contacts effectively kill themselves.

Apoptosis is also called programmed cell death — the cell literally kills itself using enzymes called caspases to break down proteins, and other proteins to break down DNA.

We have evolved other ways for cell death to occur. Consider a cell infected by a bacterium or a virus. We don’t want it to go quietly. We want a lot of inflammatory white cells to get near it and mop up any organisms around. This type of cell death is called pyroptosis. It also uses caspases, but a different set.

You just can’t get away from teleological thinking in biology. We are always asking ‘what’s it for?’ Chemistry and physics can never answer questions like this. We’re back at the Cartesian dichotomy.

Which brings us to an unprecedented way to treat AIDS (or even prevent it).

As anyone conscious for the past 30 years knows, the AIDS virus (aka Human Immunodeficiency Virus 1, aka HIV1) destroys the immune system. It does so in many ways, but the major brunt of the disease falls on a type of white cell called a helper T cell. These cells carry a protein called CD4 on their surface, so for years docs have counted their number as a prognostic sign and, in earlier days, used it to decide when to start treatment.

We know HIV1 infects CD4 positive (CD4+) T cells and kills them. What the papers show is that this isn’t the way most CD4+ cells die. Most (the papers estimate 95%) die of an abortive HIV1 infection — the virus gets into the cell and starts making some of its DNA, and then the pyroptosis response occurs, causing inflammation and attracting more and more immune cells, which then get infected themselves.

This provides a rather satisfying explanation of the chronic inflammation seen in AIDS in lymph nodes.

Vertex has a drug, VX-765, which inhibits the caspase responsible for pyroptosis but not those responsible for apoptosis. The structure is available (http://www.medkoo.com/Anticancer-trials/VX-765.html), and it looks like a protease inhibitor. Even better, VX-765 has been used in humans (in phase II trials for something entirely different). It was well tolerated for 6 weeks, anyway. Clearly, a lot more needs to be done before it’s brought to the FDA — how safe is it after a year, and what are the long-term side effects? But imagine that you could give this to someone newly infected, with an essentially normal CD4+ count, and literally prevent the immunodeficiency, even if you weren’t getting rid of the virus.

Possibly a great advance. I love the deviousness of it all. Don’t attack the virus, but prevent cells it infects from dying in a particular way.

Never stop thinking. Hats off to those who thought of it.

The death of the synonymous codon – III

The coding capacity of our genome continues to amaze. The redundancy of the genetic code has been put to yet another use. Depending on how much you know, skip the following two links and read on. Otherwise all the background to understand the following is in them.

http://luysii.wordpress.com/2011/05/03/the-death-of-the-synonymous-codon/

http://luysii.wordpress.com/2011/05/09/the-death-of-the-synonymous-codon-ii/

There really was no way around it. If you want to code for 20 different amino acids with only four choices at each position, two positions (4^2 = 16) won’t do. You need three positions, which gives you 64 possibilities (61 after the three stop codons are taken into account) and the redundancy that comes with it. The previous links show how the redundant codons for some amino acids aren’t redundant at all, but are used to code for the speed of translation, or for exonic splicing enhancers and inhibitors. Different codons for the same amino acid can produce wildly different effects while leaving the amino acid sequence of a given protein untouched.

The following recent work [ Science vol. 342 pp. 1325 - 1326, 1367 - 1367 '13 ] showed that transcription factors bind within the protein-coding sequences of genes, not just to the promoters and enhancers found outside them, as we had thought.

The principle behind the DNAaseI protection assay is pretty simple. Any protein binding to DNA protects it against DNAase I, which otherwise chops the DNA up. Then clone and sequence what’s left to see where proteins have bound; these protected stretches are called footprints. They must have removed histones first, I imagine.
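For the programmatically inclined, here is a minimal sketch of the footprint-calling idea in Python. The cleavage counts, threshold and minimum run length are all invented for illustration; the real analysis behind the paper is far more sophisticated.

# Toy DNase I footprint caller: a bound protein shows up as a run of
# unusually low cleavage counts flanked by freely cleaved (naked) DNA.
def call_footprints(cleavage_counts, threshold=2, min_length=6):
    """Return (start, end) index pairs of runs where counts stay below threshold."""
    footprints = []
    start = None
    for i, count in enumerate(cleavage_counts):
        if count < threshold:
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_length:
                footprints.append((start, i))
            start = None
    if start is not None and len(cleavage_counts) - start >= min_length:
        footprints.append((start, len(cleavage_counts)))
    return footprints

# 30 bases of made-up data: a protected stretch in the middle of naked DNA
counts = [9, 7, 8, 6, 9, 8, 1, 0, 1, 0, 0, 1, 0, 1, 8, 9, 7, 9, 6, 8, 7, 9, 8, 6, 7, 9, 8, 7, 6, 9]
print(call_footprints(counts))   # [(6, 14)]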

The authors performed DNAaseI protection assays on a truly massive scale. They looked at 81 different cell types at nucleotide resolution. They found 11,000,000 footprints altogether, about 1,000,000 per cell type. In a given cell type, some 25,000 were completely localized within exons (the parts of the gene actually specifying amino acids). When all the codons of the genome are looked at as a group, 1/7 of them are found in a footprint in one of the cell types.

The results wouldn’t have been that spectacular had they just looked at a few cell types. How do we know the binding sites contain transcription factors? Because the footprints match transcription factor recognition sequences.

We know that sequences around splice sites are used to code for splicing enhancers and inhibitors. Interestingly, the splice sites are generally depleted of DNAaseI footprints. Remember that splicing occurs after the gene has been transcribed.

At this point it isn’t clear how binding of a transcription factor in a protein coding region influences gene expression.

Just like a work of art, there is more than one way that DNA can mean. Remarkable !

A new parameter for ladies to measure before choosing a mate — testicular volume

I’m amazed that they actually did this work [ Proc. Natl. Acad. Sci. vol. 110 pp. 15746 - 15751 '13 ], but they did. From Atlanta, Georgia, the home of the Southern Gentleman. You do have to wonder what sort of wimps would permit this type of work. Seventy such individuals (fathers still living with the mother of their child) were found. Clearly a skewed distribution, as 65/70 were actually married. There is no mention of any effect of the sex of the offspring on what they found.

Here’s what they did

Testosterone levels and testicular volume predicted how much parenting a male actually did (based on self-reports from the two parents). Functional MRI on viewing a picture of the offspring also predicted the degree of male parenting.

So which way do you think it went?

The bigger the testicles and the higher the testosterone, the less parenting the father did. Similarly, the less activation of one area of the brain in response to a picture of their child, the less parenting.

So ladies, you may get a macho dude for a mate, but don’t expect much help.

Your fetus can hear you

There have been intimations of this. Ten years ago [ Proc. Natl. Acad. Sci. vol. 100 pp. 11702 - 11705 '03 ] full-term infants were shown to respond more to human speech played forwards than to the same tape played in reverse. A clever technique called optical topography was used — it is noninvasive, and relies on the thinness of the neonatal skull.

Now comes [ Proc. Natl. Acad. Sci. vol. 110 pp. 15145 - 15150 '13 ]. The authors presented variants of words to fetuses; unlike infants with no exposure to these stimuli, the exposed fetuses showed enhanced brain activity (mismatch responses) in response to pitch changes for the trained variants after birth. What this means is that there is (again) a noninvasive way of measuring brain activity, and that there is greater activity when an unexpected variant is presented. We are novelty seekers from the get go.

Also there was a significant correlation between the amount of prenatal exposure and brain activity, with greater activity being associated with a higher amount of prenatal speech exposure. Even more impressive, the learning effect was generalized to other types of similar speech sounds not included in the training material.

We now know that the infant brain gets tuned to the language it hears. Japanese infants can hear the r sound at birth (again, brain electrical activity responds to it), but after 6 – 9 months of listening to a language without the sound, their brains become tuned so they can’t.

So maybe pregnant ladies should listen to Mozart.

The most interesting paper I’ve read in the past 5 years — finale

Recall from https://luysii.wordpress.com/2013/06/13/the-most-interesting-paper-ive-read-in-the-past-5-years-introduction-and-allegro/ that if you knew the ones and zeroes coding for the instruction your computer was currently working on you’d know exactly what it would do. Similarly, it has long been thought that, if you knew the sequence of the 4 letters of the genetic code (A, T, G, C) coding for a protein, you’d know exactly what would happen. The cellular machinery (the ribosome) producing output (a protein in this case) was thought to be an automaton similar to a computer blindly carrying out instructions. Assuming the machinery is intact, the cellular environment should have nothing to do with the protein produced. Not so. In what follows, I attempt to provide an abbreviated summary of the background you need to understand what goes wrong, and how, even here, environment rears its head.

If you find the following a bit terse, have a look at https://luysii.wordpress.com/category/molecular-biology-survival-guide/. In particular, the earliest 3 articles (Roman numerals I, II and III) should be all you need.

We’ve learned that our DNA codes for lots of stuff that isn’t protein. In fact only 2% of it codes for the amino acids comprising our 20,000 proteins. Proteins are made of sequences of 20 different amino acids. Each amino acid is coded for by a sequence of 3 genetic code letters. However there are 64 possibilities for these sequences (4 * 4 * 4). 3 possibilities tell the machinery to quit (they don’t code for an amino acid). So some amino acids have as many as 6 codons (sequences of 3 letters) for them — e.g. Leucine (L) has 6 different codons (synonymous codons) for it while Methionine (M) has but 1. The other 18 amino acids fall somewhere between.

The cellular machine making proteins (the ribosome) uses the transcribed genetic code (mRNA) and a (relatively small) adapter, called transfer RNA (tRNA). To a first approximation there is a different tRNA for each of the 61 codons specifying an amino acid (the 3 stop codons are read by release factor proteins rather than by tRNAs). Each tRNA contains a sequence of 3 letters (the antiCodon) which exactly pairs with the codon sequence in the mRNA, the same way the letters (bases if you’re a chemist) in the two strands of DNA pair with each other. Hanging off the opposite end of each tRNA is the amino acid the antiCodon refers to. The ribosome basically stitches together the amino acids from two adjacent tRNAs and then gets rid of one tRNA.

So which particular synonymous codon is found in the mRNA shouldn’t make any difference to the final product of the ribosome. That’s what the computer model of the cell tells us.
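Here’s a toy demonstration of that point in Python, using an abbreviated codon table (just leucine’s six codons, methionine’s one, and the three stop codons; the full table has 64 entries):

# Abbreviated codon table: leucine (6 synonymous codons), methionine (1),
# and the three stop codons. The real table has 64 entries.
CODON_TABLE = {
    'UUA': 'Leu', 'UUG': 'Leu', 'CUU': 'Leu', 'CUC': 'Leu', 'CUA': 'Leu', 'CUG': 'Leu',
    'AUG': 'Met',
    'UAA': 'STOP', 'UAG': 'STOP', 'UGA': 'STOP',
}

def translate(mrna):
    """Read the message three letters at a time, the way the ribosome does."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i+3]]
        if residue == 'STOP':
            break
        protein.append(residue)
    return protein

# Two mRNAs differing only in their synonymous leucine codons give the same protein
print(translate('AUGCUGUUAUAA'))   # ['Met', 'Leu', 'Leu']
print(translate('AUGCUCCUAUAA'))   # ['Met', 'Leu', 'Leu']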

Since most cells are making protein all the time, there is lots of tRNA around. We need so much tRNA that instead of roughly 61 genes (one for each tRNA) we have some 500 in our genome. So we have multiple different genes coding for each tRNA. I can’t find out how many of each we have (which would be very nice to know in what follows). The amount of tRNA of each type is roughly proportional to the number of genes coding for it (the gene copy number), according to the papers cited below.

This brings us to codon usage. You have 6 different codons (synonymous codons) for leucine. Are they all used equally (when you look at every codon in the genome which codes for leucine)? They are not. Here are the percentages for the usage of the 6 distinct leucine codons in human DNA: 7, 7, 13, 13, 20, 40. For random use they should all be around 16. The most frequently used codon occurs as often as the four least frequently used combined.
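The arithmetic, spelled out with the rough percentages quoted above:

# Usage of the six synonymous leucine codons in human DNA, as percentages
# (the rough figures quoted in the text).
leucine_usage = [7, 7, 13, 13, 20, 40]

uniform = 100 / len(leucine_usage)        # what 'no bias' would look like
print(round(uniform, 1))                  # 16.7
print(max(leucine_usage))                 # 40
print(sum(sorted(leucine_usage)[:4]))     # 7 + 7 + 13 + 13 = 40
# The commonest leucine codon is used as often as the four rarest combined.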

It turns out that the most used synonymous codons are the ones with the highest number of genes for the corresponding tRNA. Makes sense, as there should be more of that tRNA around (at least in most cases). This is called codon bias, but I can’t seem to find the actual numbers.

This brings us (at last) to the actual paper [ Nature vol. 495 pp. 111 - 115 '13 ] and the accompanying editorial (ibid. pp. 57 – 58). The paper says “codon-usage bias has been observed in almost all genomes and is thought to result from selection for efficient and accurate translation (into protein) of highly expressed genes” — 3 references given. Essentially this says that the more tRNA around matching a particular codon, the faster the mRNA will find it (le Chatelier’s principle in action).

An analogy at this point might help. When I was a kid, I hung around a print shop. In addition to high speed printing, there was also a printing press, where individual characters were selected from boxes of characters and placed on a line (the strips of lead used to space the lines are where the typographic term ‘leading’ comes from), then baked into place using some scary smelling stuff, so the same constellation of characters could be used over and over. For details see http://en.wikipedia.org/wiki/Printing_press. You can regard the 6 different tRNAs for leucine as 6 different fonts for the letter L. To make things right, the correct font must be chosen (by the printer or the ribosome). Obviously if a rare font is used, the printer will have to fumble more in the L box to come up with the right one. This is exactly le Chatelier’s principle.

The paper concerns a protein (FRQ) used in the circadian clock of a fungus — evolutionarily far from us, to be sure, but hang in there. Paradoxically, the FRQ gene uses a lot of ‘rare’ synonymous codons. Given the technology we presently have, the authors were able to switch the ‘rare’ synonymous codons to the most common ones. As expected, the organism made a lot more FRQ from the modified gene.
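A back-of-the-envelope model of why more protein is expected, assuming (purely for illustration; this is not the paper’s model) that the ribosome’s dwell time at each codon is inversely proportional to the relative abundance of the matching tRNA:

# Toy model: dwell time at a codon is taken as 1 / (relative tRNA abundance).
# The abundances below are invented; only the rare-vs-common contrast matters.
def translation_time(codon_abundances):
    """Total time to run off one protein molecule, in arbitrary units."""
    return sum(1.0 / abundance for abundance in codon_abundances)

rare_codon_message      = [0.07, 0.07, 0.13, 0.13, 0.07, 0.07]   # mostly rare codons
codon_optimized_message = [0.40, 0.40, 0.40, 0.20, 0.40, 0.40]   # mostly common codons

print(round(translation_time(rare_codon_message), 1))        # 72.5 -- slow
print(round(translation_time(codon_optimized_message), 1))   # 17.5 -- fast
# Faster elongation means more protein per unit time, which is what was observed.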

The fascinating point (to me at least) is that the protein, with exactly the same amino acid sequence, did not fulfill its function in the circadian clock. As expected, there was more of the protein around (it was easier for the ribosomal machinery to make).

Now I’ve always been amazed that the proteins making us up fold into just a few shapes each — something I’d guess happens extremely rarely among possible amino acid sequences. For details see http://luysii.wordpress.com/2010/10/24/the-essential-strangeness-of-the-proteins-that-make-us-up/.

Well, as we know, proteins are just a linear string of amino acids, and they have to fold to their final shape. The protein made after codon optimization must not have had the proper shape. How do we know? For one thing, the protein is broken down faster. For another, it is less stable after freeze-thaw cycles. For yet another, it just didn’t work correctly in the cell.

What does this mean? Most likely it means that the protein made from codon-optimized mRNA has a different shape. The organism must normally make it more slowly so that it folds into the correct shape. Recall that the amino acid chain is extruded one residue at a time from the ribosome, like sausage from a sausage-making machine. As it’s extruded, the chain (often with help from other proteins called chaperones) flops around and finds its final shape.

Why is this so fascinating (to me at least)? Because here, in the very uterus of biologic determinism, the environment (how much of each type of synonymous tRNA is around) rears its head. Forests have been felled for papers on the heredity vs. environment question. Just as American GIs wrote “Kilroy was here” everywhere they went in WWII, here’s the environment popping up where no one thought it would.

In addition, the implications for protein function, if this is a widespread phenomenon, are simply staggering.

The DSM again

The Diagnostic and Statistical Manual of the American Psychiatric Association (DSM-V) is in the news. The press has not been favorable, nor have two new books concerning it. Here are some links

1. A review of a book on it from today’s Nature (2 May ’13) — http://www.nature.com/nature/journal/v497/n7447/full/497036a.html
2. An article in the New York Times today concerning the Nature book and one other — neither favorable — http://www.nytimes.com/2013/05/02/books/greenbergs-book-of-woe-and-francess-saving-normal.html?ref=todayspaper&_r=0

Added 8 May ’13 The US National Institute of Mental Health (NIMH) will no longer use the Diagnostic and Statistical Manual of Mental Disorders (DSM) to guide psychiatric research, NIMH director Thomas Insel announced on 30 April. The manual has long been used as a gold standard for defining mental disorders. Insel described the DSM as ill-suited to scientific studies, and said the NIMH will now support studies that cut across DSM-defined disease categories.

But, as Theodosius Dobzhansky once said, nothing in biology makes sense except in the light of evolution. Keeping that thought in mind, what I wrote a few years ago is relevant today. Although it starts off in mathematics, it gives some history which helps explain why the DSM is the way it is.

Even so, psychiatric wisdom should be taken with a good deal of salt. A psychiatrist in my medical school class (1966) knew people who were thrown out of their psychiatric residencies because they were gay, and back then homosexuality was a psychiatric disease.

Here’s the post of 3 years ago

Reification in mathematics and medicine

Can you bring an object into existence just by naming and describing it? Well, no one has created a unicorn yet, but mathematicians and docs do it all the time. Let’s start with mathematicians, most of whom are Platonists. They don’t think they’re inventing anything; they’re just describing an external reality that is ‘out there’ but isn’t physical. You could say any language is an external reality too, but when the last person who knows a language dies, so does the language. It will never reappear, even as people invent new languages — and invent them they do, as the experience with deaf Nicaraguan children has shown [ Science vol. 293 pp. 1758 - 1759 '01 ]. Mathematics, in contrast, has been developed independently multiple times all over the world, and it’s always the same. The subject matter is out there, and not just a social construct as some say.

A fascinating book, “Naming Infinity” describes a Russian school of mathematicians who extended set theory beyond the work of the French and Germans. They literally believed that describing a mathematical object and its properties implied that the object existed (assuming the properties were consistent). The mathematicians involved were also very devout mystical Christians, who were called “Name Worshippers”. They thought that repeatedly invoking the name of Jesus would allow them to reach an ecstatic state. The rather contentious theory of the book is that their religious stance allowed them to imbue all names with powerful properties which could bring what they named into existence and this led to their extensions of set theory. Naturally the Communists hated them, and exterminated many (see p. 126). People possessed of all absolute truths dislike those possessed of a different set.

Docs bring diseases into existence all the time simply by naming them. This is why the new DSM-V (Diagnostic and Statistical Manual of Mental Disorders) of the American Psychiatric Association (APA) is so important. Is homosexuality a disease? Years ago the APA thought it was. If your teenager won’t do what you want, is this “Adolescent Defiant Disorder”? Is it a disease? It will be if the DSM-V says it is.

There are a lot of things wrong with what the DSM has become (297 disorders in 886 pages in DSM-IV), but the original impetus for the major shift that occurred with DSM-III in the 70s was excellent. So it’s time for a bit of history. Prior to that time, it was quite possible for the same individual to go to 3 psychiatric teaching hospitals in New York City and get 3 different diagnoses. Why? Because diagnosis was based on the reconstruction of the psychodynamics of the case. Just as there is no single way to interpret “Stopping by Woods on a Snowy Evening” (see the previous post), there isn’t one for a case history. Freud’s case studies are great literature, but someone else would write up the case differently.

The authors of the DSM-III decided to be more like medical docs than shrinks. In our usual state of ignorance, we docs define diseases by how they act — the symptoms, the physical signs, the clinical course. So the DSM-III abandoned the literary approach of psychodynamics and started asking what psychiatric patients looked like — were they hallucinating, did they take no pleasure in things, was there sleep disturbance, were they delusional etc. etc. As you can imagine, there was a huge uproar from the psychoanalysts.

Now no individual fits any disease exactly. There are always parts missing, and there are always additional symptoms and signs present to confuse matters. The net result was that psychiatric diagnosis became like choosing from a menu in a Chinese restaurant, so many symptoms and findings from column A, so many from column B. (Update 2013 — Having been to China for 3 weeks this year, restaurant menus over there aren’t like that).

This led to a rather atheoretical approach, but psychiatric diagnoses became far more consistent. Docs have always been doing this sort of thing and still do (look at the multiple confusing initial manifestations of what turned out to be AIDS back in the 80s). Different infections were classified by how they acted, long before Pasteur proved that they were caused by micro-organisms. Back when I was running a muscular dystrophy clinic, we saw something called limb girdle muscular dystrophy, in which the patients were weak primarily in muscles about the shoulders and hips. Now we know that there are at least 13 different genetic causes of the disorder. So there are many distinct causes of the same clinical picture. This is similar to the many different genetic causes of Parkinson’s disease I talked about 2 and 3 posts earlier. At least with limb girdle muscular dystrophy it is much easier to see how the genetic defects cause muscle weakness — all of the known genetic causes involve proteins found in muscle.

Where DSM-IV (and probably DSM-V — it’s coming out later this month) went off the rails, IMHO, is the multiplicity of diagnoses they have reified. Do you really think there are 297 psychiatric disorders? Not only that, many of them are treated the same way — with an SSRI (Selective Serotonin Reuptake Inhibitor). You don’t treat all infections with the same antibiotic. This makes me wonder just how ‘real’ these diagnoses are. However in defense of them, you do treat classic Parkinsonism pretty much the same way regardless of the genetic defect causing it (and at this point we know of genetic causes of less than 10% of cases).

There is a fascinating series of articles in Science starting 12 Feb ’10 about the new DSM-V. The first is on pp. 770 – 771. One of the most interesting points is that 40% of academic inpatients receive a diagnosis of NOS (Not Otherwise Specified — i.e., not in the DSM-IV — clearly even 297 diagnoses are missing quite a bit).

But insurance companies and the government treat this stuff as holy writ. Would you really like your frisky adolescent labeled with “prepsychotic risk syndrome”, which is proposed for DSM-V? Also casting doubt on the whole enterprise are the radical changes the DSM has undergone since its inception nearly 60 years ago. We’ve learned a lot about all sorts of medical diseases since then, but strokes and heart attacks back then are still strokes and heart attacks today, and TB is still TB. Do these guys really know what they’re talking about, and should we allow them to reify things?

That being said, cut psychiatry some slack. Regardless of theory, there are plenty of mentally ill people out there who need help. They aren’t going to go away (or get better) any time soon. Psychiatrists (like all docs) are doing the best they can with what they know.

That’s why it’s nice to be retired and reading stuff that it is at least possible to understand — like math, physics, organic chemistry and molecular biology. But never forget that it is trivial compared to human suffering. That’s why the carnage in the drug discovery industry is so sad — there goes our only hope of making things better (written in 2010, but still true in 2013).

Retinal physiology and the demise of the pure percept

Rooming with 2 philosophy majors warps the mind, even if it was 50 years ago. Conundrums raised back then still hang around. It was the heyday of Bertrand Russell, before he became a crank. One idea being bandied about back then was the ‘pure percept’ — a sensation produced by the periphery before the brain got to mucking about with it. My memory of the concept was a bit foggy, so who better to ask than two philosophers I knew?

The first was my nephew, a Rhodes Scholar in philosophy, now an attorney with a Yale degree. I got this back when I asked:

I would be delighted to be able to tell you that my two bachelors’ degrees in philosophy — from the leading faculties on either side of the Atlantic — leave me more than prepared to answer your question. Unfortunately, it would appear I wasn’t that diligent. I focused on moral and political philosophy, and although the idea of a “pure precept” rings a bell, I can’t claim to have a concrete grasp on what that phrase means, much less a commanding one.

 Just shows what a Yale degree does to the mind.

So I asked a classmate, now an emeritus professor of philosophy, and got this back:
This pp nonsense was concocted because Empiricists [Es]–inc. Russell, in his more empiricistic moods–believed that the existence of pp was a necessary condition for empirical knowledge. /Why? –>
1. From Plato to Descartes, philosophers often held that genuine Knowledge [K] requires beliefs that are “indubitable” [=beyond any possible doubt]; that is, a belief counts as K only if it [or at least its ultimate source] is beyond doubt. If there were no such indubitable source for belief, skepticism would win: no genuine K, because no beliefs are beyond doubt. “Pure percepts” were supposed to provide the indubitable source for empirical K.
2. Empirical K must originate in sensory data [=percepts] that can’t be wrong, because they simply copy external reality w/o any cognitive “shopping” [as in Photoshop]. In order to avoid any possible ‘error’, percepts must be pure in that they involve no interpretation [= error-prone cognitive manipulation].
{Those Es who contend  that all K derives from our senses tend to ignore mathematical and other allegedly a priori K, which does not “copy” the sensible world.} In sum, pp are sensory data prior to [=unmediated by] any cognitive processing.

So it seems as though the concept is no longer taken seriously.  To drive a stake through its heart it’s time to talk about the retina.

It lies in the back of our eyes, and is organized rather counter-intuitively.  The photoreceptors (the pixels of the camera if you wish) are the last retinal elements to be hit by light, which must pass through the many other layers of the retina to get to them.

We have a lot of them — at least 100,000,000 of one type (rods). The nerve cells sending impulses back to the brain are called ganglion cells, and there are about 1,000,000 in each eye. Between them are the bipolar cells and amacrine cells, which organize the information falling on the photoreceptors.

All this happens in something only 0.2 millimeters thick.

The organization of information results in retinal ganglion cells responding to different types of stimuli.  How do we know?  Impale the ganglion cell with an electrode while still in the retina, and try out various visual stimuli to see what it responds to.

Various authorities put the number of retinal ganglion cell types in the mouse at 11, 12, 14, 19 and 22.  Each responds to a given type of stimulus. Here are a few examples:

The X-type ganglion cell responds linearly to brightness

Y cells respond to movement in a particular direction.

Blue-ON transmits the mean spectral luminance (color distribution) along the spectrum from blue to green.

From an evolutionary point of view, it would be very useful to detect motion. Some retinal ganglion cells begin responding before they should. How do we know this? It’s easy (but tedious) to map the area of visual space a ganglion cell responds to — this is called its receptive field. The responses of some cells anticipate the incursion of a moving stimulus — clearly this must be the way they are hooked to photoreceptors via the intermediate cells.
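A cartoon of the mapping in code (all numbers invented; real experiments flash spots or gratings at each location over many trials and count spikes):

# Cartoon of receptive field mapping: flash a stimulus at each point of a
# grid in visual space, count the spikes the impaled ganglion cell fires,
# and the patch of grid points giving responses is its receptive field.
spike_counts = [
    [0, 0, 0, 0, 0],
    [0, 2, 5, 2, 0],
    [0, 5, 9, 5, 0],
    [0, 2, 5, 2, 0],
    [0, 0, 0, 0, 0],
]

receptive_field = [(row, col)
                   for row, row_counts in enumerate(spike_counts)
                   for col, spikes in enumerate(row_counts)
                   if spikes > 0]
print(receptive_field)   # the 3 x 3 patch of grid points in the middle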

Just think about the way photoreceptors at the back of the spherical eye are excited by something moving in a straight line in visual space. The excitation certainly doesn’t trace a straight line on the retinal surface. Somehow the elements of the retina are performing this calculation and predicting where something moving in a straight line will be next. Why couldn’t the brain be doing this? Because the prediction can be seen in isolated retinas with no brain attached.

Now for something even more amazing.  Each type of ganglion cell (and I’ve just discussed a few) tiles the retina. This means that every patch of the retina has a ganglion cell responding to each type of visual stimulus.  So everything hitting every area of the retina is being analyzed 11, 12, 14, 19 or 22 different ways simultaneously.

So much for the pure percept: it works for a digital camera, but not for the retina. There is an immense amount of computation of the visual input going on right there, before anything gets back to the brain.

If you wish to read more about this — an excellent review is available, but it’s quite technical and not for someone coming to neuroanatomy and neurophysiology for the first time.  [ Neuron vol. 76 pp. 266 - 280 '12 ]

The New Clayden pp. 931 – 969

p. 935 — I don’t understand why neighboring group participation is less common using 4 membered rings than it is using  3 and 5 membered rings.  It may be entropy and the enthalpy of strain balancing out.  I think they’ve said this elsewhere (or in the previous edition).   Actually — looking at the side bar, they did say exactly that in Ch. 31.  

As we used to say, when scooped in the literature — at least we were thinking well.

p. 935 — “During the 1950′s and 1960′s, this sort of question provoked a prolonged and acrimonious debate”  – you better believe it.  Schleyer worked on norbornane, but I don’t think he got into the dust up.  Sol Winstein (who Schleyer called solvolysis Sol) was one of the participants along with H. C. Brown (HydroBoration Brown).

p. 936 — The elegance of Cram’s work.  Reading math has changed the way I’m reading organic chemistry.  What you want in math is an understanding of what is being said, and subsequently an ability to reconstruct a given proof.  You don’t have to have the proof at the tip of your tongue ready to spew out, but you should be able to reconstruct it given a bit of time.   The hard thing is remembering the definitions of the elements of a proof precisely, because precise they are and quite arbitrary in order to make things work properly.  It’s why I always leave a blank page next to my notes on a proof — to contain the definitions I’ve usually forgotten (or not remembered precisely).

I also find it much easier to remember mathematical definitions if I write them out as logical statements (as opposed to reading them as sentences). This means using ==> for ‘implies’, | for ‘such that’, an upside-down A for ‘for all’, a backwards E for ‘there exists’, etc. There’s too much linguistic fog in my mind when I read them as English sentences.
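For instance (my example, not one of Lee’s definitions), continuity of f at a written out as a logical statement rather than as an English sentence:

\[
f \text{ is continuous at } a \;:\; \forall\, \varepsilon > 0 \ \exists\, \delta > 0 \ \text{such that} \ |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon.
\]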

So just knowing some general principles will be enough to reconstruct Cram’s elegant work described here. There’s no point in trying to remember it exactly (although there used to be for me). I think this is where beginning students get trapped — at first it seems that you can remember it all. But then the inundation starts. What should save them is understanding and applying the principles, which are relatively few. Again, this is similar to what happens in medicine — and why passing organic chemistry sets up the premed for this style of thinking.

p. 938 – In the example of the Payne rearrangement, why doesn’t OH attack the epoxide rather than deprotonating the primary alcohol (which is much less acidic than OH itself)?

p. 955 – Although the orbitals in the explanation of why stereochemistry is retained in 1,2 migrations are called  molecular orbitals (e.g. HOMO, LUMO) they look awfully like atomic orbitals just forming localized bonds between two atoms to me.  In fact the whole notion of molecular orbital has disappeared in most of the explanations (except linguistically).  The notions of 50 years ago retain their explanatory power.  

p. 956 — How did Eschenmoser ever think of the reaction bearing his name?  Did he stumble into it by accident? 

p. 956 — The starting material for the synthesis of juvenile hormone looks nothing like it. I suppose you could say it’s the disconnection approach writ large, but the authors don’t take the opportunity. The use of fragmentation to control double bond stereochemistry is extremely clever. This is really the first stuff in the book that I think I’d have had trouble coming up with. The fragmentation syntheses at the end of the chapter are elegant and delicious.

On a more philosophical note, the use of stereochemistry and orbitals to make molecules is exactly what I mean by explanatory power. Antiperiplanar (and synperiplanar) geometry is a very general concept, which I doubt was brought into being to explain the stereochemistry of fragmentation reactions (yet it does). It appears over and over throughout the book in various guises.

Urysohn’s Lemma

“Now we come to the first deep theorem of the book, a theorem that is commonly called the “Urysohn lemma”.  . . .  It is the crucial tool used in proving a number of important theorems. . . .  Why do we call the Urysohn lemma a ‘deep’ theorem?  Because its proof involves a really original idea, which the previous proofs did not.  Perhaps we can explain what we mean this way:  By and large, one would expect that if one went through this book and deleted all the proofs we have given up to now and then handed the book to a bright student who had not studied topology, that student ought to be able to go through the book and work out the proofs independently.  (It would take a good deal of time and effort, of course, and one would not expect the student to handle the trickier examples.)  But the Urysohn lemma is on a different level.  It would take considerably more originality than most of us possess to prove this lemma.”

The above quote is from one of the standard topology texts for undergraduates (or perhaps the standard text), by James R. Munkres of MIT. It appears on page 207 of 514 pages of text. Lee’s textbook on Topological Manifolds gets to it on p. 112 (of 405). For why I’m reading Lee see https://luysii.wordpress.com/2012/09/11/why-math-is-hard-for-me-and-organic-chemistry-is-easy/.

Well it is a great theorem, and the proof is ingenious, and understanding it gives you a sense of triumph that you actually did it, and a sense of awe about Urysohn, a Russian mathematician who died at 26.   Understanding Urysohn is an esthetic experience, like a Dvorak trio or a clever organic synthesis [ Nature vol. 489 pp. 278 - 281 '12 ].

Clearly, you have to have a fair amount of topology under your belt before you can even tackle it, but I’m not even going to state or prove the theorem.  It does bring up some general philosophical points about math and its relation to reality (e.g. the physical world we live in and what we currently know about it).

I’ve talked about the large number of extremely precise definitions to be found in math (particularly topology).  Actually what topology is about, is space, and what it means for objects to be near each other in space.  Well, physics does that too, but it uses numbers — topology tries to get beyond numbers, and although precise, the 202 definitions I’ve written down as I’ve gone through Lee to this point don’t mention them for the most part.

Essentially topology reasons about our concept of space qualitatively, rather than quantitatively.  In this, it resembles philosophy which uses a similar sort of qualitative reasoning to get at what are basically rather nebulous concepts — knowledge, truth, reality.   As a neurologist, I can tell you that half the cranial nerves, and probably half our brains are involved with vision, so we automatically have a concept of space (and a very sophisticated one at that).  Topologists are mental Lilliputians trying to tack down the giant Gulliver which is our conception of space with definitions, theorems, lemmas etc. etc.

Well, one form of space anyway. Urysohn’s lemma talks about normal spaces. Just think of a closed set as a Russian doll with a bright shiny surface. Remove the surface and you have a rather beat-up Russian doll — this is an open set. When you open a Russian doll, there’s another one inside (smaller, but still a Russian doll). What a normal space permits you to do (by its very definition) is insert a complete Russian doll of intermediate size between any two dolls.
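For the record, here is the Russian doll picture in symbols, a standard characterization of normality (equivalent to the usual definition via separating disjoint closed sets with disjoint open sets), not a quotation from Lee or Munkres:

\[
X \text{ is normal} \iff \text{for every closed } C \subseteq X \text{ and every open } U \supseteq C \text{ there is an open } V \text{ with } C \subseteq V \subseteq \overline{V} \subseteq U.
\]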

This all sounds quite innocent until you realize that between any two Russian dolls an infinite number of concentric Russian dolls can be inserted.  Where did they get a weird idea like this?  From the number system of course.  Between any two distinct rational numbers p/q and r/s where p, q, r and s are whole numbers, you can  always insert a new one halfway between.  This is where the infinite regress comes from.

For mathematics (and particularly for calculus) even this isn’t enough. The square root of two isn’t a rational number (one of the great Euclid proofs), but you can get as close to it as you wish using rational numbers. So there are an infinite number of irrational numbers between any two rational numbers. In fact that’s essentially how the real numbers are defined — by fiat, any collection of numbers bounded above has a least upper bound (think of 1, 1.4, 1.41, 1.414, … defining the square root of 2).
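In symbols (taking p/q < r/s): the average of two rationals is a rational strictly between them, and the square root of 2 is pinned down as the least upper bound of its decimal truncations:

\[
\frac{p}{q} \;<\; \frac{1}{2}\left(\frac{p}{q} + \frac{r}{s}\right) \;=\; \frac{ps + qr}{2qs} \;<\; \frac{r}{s},
\qquad
\sqrt{2} \;=\; \sup\{\,1,\ 1.4,\ 1.41,\ 1.414,\ \dots\,\}.
\]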

What does this skullduggery have to do with space?  It says essentially that space is infinitely divisible, and that you can always slice and dice it as finely as you wish.  This is the calculus of Newton and the relativity of Einstein.  It clearly is right, or we wouldn’t have GPS systems (which actually require a relativistic correction).

But it’s clearly wrong as any chemist knows. Matter isn’t infinitely divisible. Just go down 10 orders of magnitude from the visible and you get the hydrogen atom, which can’t be split into smaller and smaller hydrogen atoms (although it can be split).

It’s also clearly wrong as far as quantum mechanics goes — while space might not be quantized, there is no reasonable way to keep chopping it up once you get down to the elementary particle level.  You can’t know where they are and where they are going exactly at the same time.

This is exactly one of the great unsolved problems of physics — bringing relativity, with its infinitely divisible space, together with quantum mechanics, where the very meaning of space becomes somewhat blurry (if you can’t know exactly where anything is).

Interesting isn’t it?

The New Clayden pp. 877 – 908

p. 878 — “The transition state has 6 delocalized pi electrons and thus is aromatic in character”.  Numerically yes, but the transition state isn’t planar, and there is all sorts of work showing how important planarity is to aromaticity. 

p. 881 — It seems to me that the arrow is wrong in the equation at the bottom. Entropy should increase when a Diels–Alder product is broken apart, and since ΔG = ΔH − TΔS, heating the product should break it apart, not cause it to form. I guess the heat shown is required to increase molecular velocity so that collisions result in reaction. Enough kinetic energy will blow anything apart (see Higgs particle).
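Spelling out the sign bookkeeping for the retro-Diels–Alder (one molecule becomes two, so ΔS > 0; trading two σ bonds for two π bonds costs enthalpy, so ΔH > 0):

\[
\Delta G \;=\; \Delta H - T\,\Delta S \;<\; 0
\quad\text{once}\quad
T \;>\; \frac{\Delta H}{\Delta S},
\]

so heating favors cycloreversion, while the forward Diels–Alder (ΔH < 0, ΔS < 0) is favored at lower temperature; heat written over a forward arrow is presumably about kinetics (getting over the activation barrier) rather than thermodynamics, which seems to be the distinction at issue here.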

p. 890 — “It is not cheating to use the regioselectivity of chemical reactions to tell us about the coefficients of the orbitals involved.” I do think that this sort of thing is cheating when you use the regioselectivity of chemical reactions as an explanation. They are adding nothing new. A real explanation predicts new phenomena, the way the anomeric effect does, for example. You should contemplate the point at which a description of something becomes an explanation (e.g. epistemology). It’s not the case here, but it was the case for Newton’s laws of gravitation. Famously he said Hypotheses non fingo (“I frame no hypotheses”). It appears in the following:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses.

Yet his laws of gravity were used to predict all sorts of events never before seen, so they are explanatory in some sense.  

This sort of thing is just what a neurologist experiences learning functional neuroanatomy (e.g.  which part of the nervous system has which function).  Initially almost all of it was developed by studying neurologic deficits due to various localized lesions of the brain and spinal cord.  There’s a huge caveat involved — pulling the plug on a radio will stop the sound, but that isn’t how the sound is produced.  People with lesions of the occipital lobe lose the ability to see in certain directions (parts of their visual fields).  Understanding HOW the occipital lobe processes sensory input from the eyes has taken 50 years and is far from over.  

p. 892 — Unfortunately the rationale behind the Woodward–Hoffmann rules isn’t covered, so it appears incredibly convoluted and arbitrary. Read the book they wrote, “The Conservation of Orbital Symmetry”. Also, unfortunately, the description of the rules uses the term ‘component’ in two ways. At step two butadiene and the dienophile are each considered a component, as they are in steps 3, 4, and 5; then the two are mushed together into a single component in step 6.

p. 894 — I haven’t been looking at the animations for a while, but those of the Diels Alder type reactions are incredible, and almost sexual.  You can rotate the two molecules in space and watch them come together and react.

p. 894 — “Remember, the numbers in brackets, [ 4 + 2 ] etc., refer to the numbers of atoms.  The numbers (4q +2)s and (4r)s in The Woodward Hoffman (should be Hoffmann) refer to the numbers of electrons.”  This is so very like math, where nearly identical characters are used to refer to quite different things.  Bold capital X might mean one thing, italic x another, script X still another.  They all sound the same when you mentally read them to yourself.  It makes life confusing.
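As a worked example of the counting, quoted from memory (so check it against the chapter): the generalized rule says a thermal pericyclic reaction is allowed when the total number of (4q+2)s and (4r)a components is odd. In the Diels–Alder the diene is a π4s component, i.e. a (4r) component used suprafacially, so it is not counted; the dienophile is a π2s component, i.e. a (4q+2) component (q = 0) used suprafacially, so it is counted:

\[
\#(4q{+}2)_s + \#(4r)_a \;=\; 1 + 0 \;=\; 1 \quad \text{(odd)} \;\Longrightarrow\; \text{thermally allowed}.
\]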

p. 894 — The Alder ene reactions — quite unusual.  The worst thing is that I remember nothing about them from years ago.  They must have been around as they were discovered by Alder himself (who died in 1958).  They produce some rather remarkable transformations, the synthesis of menthol from citronellal being one.  I wonder if they are presently used much in synthetic organic chemistry. 

p. 900 — How do you make OCN – SO2Cl, and why is it available commercially?

p. 904 — The synthesis of the sulfur containing 5 membered ring of biotin is a thing of beauty.  It’s extremely non-obvious beginning with a 7 membered ring with no sulfur at all. 