Category Archives: Quantum Mechanics

Keep on truckin’ Dr. Schleyer

My undergraduate advisor (Paul Schleyer) has a new paper out in the 15 July ’14 PNAS pp. 10067 – 10072 at age 84+. Bravo! He upends what we were always taught about electrophilic aromatic substitution by halogens. The arenium ion is out (at least in this example). Anyone with a smattering of physical organic chemistry can easily follow his mechanistic arguments for a different mechanism.

However, I wonder if any but the hardiest computational chemistry jock can understand the following (which is how he got his results) and decide if the conclusions follow.

Our Gaussian 09 (54) computations used the 6-311+G(2d,2p) basis set (55, 56) with the B3LYP hybrid functional (57⇓–59) and the Perdew–Burke–Ernzerhof (PBE) functional (60, 61) augmented with Grimme et al.’s (62) density functional theory with added Grimme’s D3 dispersion corrections (DFT-D3). Single-point energies of all optimized structures were obtained with the B2-PLYP [double-hybrid density functional of Grimme (63)] and applying the D3 dispersion corrections.

This may be similar to what happened with functional MRI in neuroscience, where you never saw the raw data, just the end product of the manipulations on the data (e.g. how the matrix was inverted and what manipulations of the inverted matrix were required to produce the pretty pictures shown). At least here, you have the tools used laid out explicitly.

At the Alumni Day

‘It’s Complicated’. No this isn’t about the movie where Meryl Streep made a feeble attempt to be a porn star. It’s what I heard from a bunch of Harvard PhD physicists who had listened to John Kovac talk about the BICEP2 experiment a day earlier. I had figured as a humble chemist that if anyone would understand why polarized light from the Cosmic Background Radiation would occur in pinwheels they would. But all the ones I talked to admitted that they didn’t.

The experiment is huge for physics and several articles explain why this is so [ Science vol. 343 pp. 1296 - 1297, vol. 344 pp. 19 - 20 '14, Nature vol. 507 pp. 281 - 283 '14 ]. BICEP2 provided strong evidence for gravitational waves, cosmic inflation, and the existence of a quantum theory of gravity (assuming it holds up and something called SPIDER confirms it next year). The nice thing about the experiment is that it found something predicted by theory years ago. This is the way Science is supposed to operate. Contrast this with the climate models which have been totally unable to predict the more than decade of unchanged mean global temperature that we are currently experiencing.

Well we know gravity can affect light — this was the spectacular experimental confirmation of General Relativity by Eddington nearly a century ago. But how quantum fluctuations in the gravitational field lead to gravitational waves, and how these waves lead to the polarization of the background electromagnetic radiation occurring in pinwheels, is a mystery to me and to a bunch of physicists far more high-powered than I’ll ever be. If someone can explain this, please write a comment. The articles cited above are very good at explaining context and significance, but they don’t even try to explain why the data looks the way it does.

The opening talk was about terrorism, and what had been learned about it by studying worldwide governmental responses to a variety of terrorist organizations (Baader Meinhof, Shining Path, Red Brigades). The speaker thought our response to 9/11 was irrational — refusing to fly when driving is clearly more dangerous etc. etc. It was the typical arrogance of the intelligent, who cannot comprehend why everyone does not think the way they do.

I thought it was remarkable that a sociologist would essentially deprecate the way people think about risk. I’m sure that many in the room were against any form of nuclear power, despite its safety compared to everything else and absent carbon footprint.

Addendum 7 April — The comment by Handles and link he provided is quite helpful, although I still don’t understand it as well as I’d like. Here’s the link https://medium.com/p/25c5d719187b

The New Clayden pp. 43 – 106

The illustrations in the book that I comment on can be reached on the web by substituting the page number I give for xx in the following

My guess is that people who haven’t bought the book will be tempted to do so after looking at a few of them.

p. 50 — The reason that exact masses of various isotopes of a given element aren’t integers lies in the slight mass difference between a proton (1.67262 * 10^-27 kiloGrams) and a neutron (1.67493 * 10^-27 kiloGrams).  Also, electrons have a mass of 9.10956 * 10^-31 kiloGrams.  I didn’t realize that particle masses can now be measured to 6 significant figures.  Back in the day it was 4.  Impressive.
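A quick back-of-the-envelope sketch using the masses quoted above (the value of the atomic mass unit is my addition, not the book's): summing the free constituents of a carbon-12 atom overshoots its actual mass by nearly 1%. That mass defect, carried off as nuclear binding energy, is the other reason isotope masses aren't integers.

```python
# Sketch: why exact isotope masses aren't integral multiples of nucleon masses.
# Particle masses are the ones quoted in the text; 1 u (atomic mass unit)
# is defined so that a carbon-12 atom weighs exactly 12 u.
M_PROTON   = 1.67262e-27   # kg
M_NEUTRON  = 1.67493e-27   # kg
M_ELECTRON = 9.10956e-31   # kg
U          = 1.66054e-27   # kg, one atomic mass unit

# Sum of the masses of the free constituents of a carbon-12 atom
constituents = 6 * M_PROTON + 6 * M_NEUTRON + 6 * M_ELECTRON

# Actual mass of a carbon-12 atom (12 u exactly, by definition)
c12 = 12 * U

defect = constituents - c12    # mass lost to nuclear binding energy
print(f"constituents: {constituents:.5e} kg")
print(f"carbon-12:    {c12:.5e} kg")
print(f"mass defect:  {defect:.2e} kg ({100 * defect / c12:.2f}% of the atom)")
```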

p. 52 — Nice to see a picture of an MRI scanner (not NMR scanner).  MRI stands for magnetic resonance imaging.  The chemist might be wondering why the name change.  Because it would be very difficult to get a fairly large subset of patients to put their heads in anything with ‘nuclear’ in the name.

It’s also amusing to note that in the early days of NMR, chemists worked very hard to keep out water, as the large number of hydrogens in water would totally swamp the signal of the compound you were trying to study.  But the brain (and all our tissues) is quite wet with lots of water.

p. 53 — (sidebar).  Even worse than a workman’s toolbox permanently attaching itself to an NMR magnet, is the following:  Aneurysms of brain arteries are like bubbles on an inner tube.  Most have a neck, and they are treated by placing a clip across the neck.  Aneurysm clips are no longer made with magnetic materials, and putting a patient with such a clip in an MRI is dangerous.     A patient with a ferromagnetic aneurysm clip suffered a fatal rupture of the aneurysm when she was placed in an MRI scanner [ Radiol. vol. 187 pp. 612 - 614, 855 - 856 '93 ].

p. 53 — NMR shows the essential weirdness of phenomena at the quantum mechanical level, and just how counterintuitive they are.  Consider throwing a cannonball at a brick wall.  At low speeds (e.g. at low energies) it hits the wall and bounces back, so at the end you see the cannonball on your side of the wall.  As speeds and energies get higher and higher, the cannonball eventually goes through the wall, changing the properties of the wall in the process.  Radiowaves have very low energies relative to visible light (their wavelengths are much longer, so their frequencies are much lower, and the energy of light is proportional to frequency).  So what happens when you throw radiowaves at an organic compound with no magnetic field present?  They go right through it (e.g. they are found on the other side of the brick wall).  NMR uses a magnetic field to separate the energy levels of a spinning charged nucleus enough that it can absorb the light.  Otherwise the light just goes past the atom without disturbing it.  Imagine a brick wall that a cannonball goes through without disturbing, and you have the macroscopic analogy.

p. 53 — Very nice explanation of what the signal picked up by the NMR machine actually is — it is the energy put into flipping a spin up against the field, coming out again.  It’s the first text I’ve read on the subject that makes this clear at the start.

p. 54 — Note that the plot of absorption (really emission) of energy has higher frequencies to the left rather than the right (unlike every other plot of anything numeric I’ve ever seen).  This is the way it is.  Get used to it.

p. 55 — The stronger the magnetic field, the more nuclear energy levels are pulled apart, the greater the energy difference between them, and thus the higher frequency of electromagnetic radiation resulting from the jump between nuclear energy levels.
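A minimal sketch of that field/frequency relationship. The proton gyromagnetic ratio below is a standard value I'm supplying (not from the book); the resonance frequency nu = gamma * B / (2 pi) scales linearly with the field, and the photon energy h * nu must match the level splitting.

```python
import math

# Larmor frequency of a proton as a function of magnetic field strength.
GAMMA_1H = 2.6752e8          # rad s^-1 T^-1, proton gyromagnetic ratio
H_PLANCK = 6.62607e-34       # J s

def larmor_mhz(b_tesla):
    """Resonance frequency (MHz) of a proton in a field of b_tesla."""
    return GAMMA_1H * b_tesla / (2 * math.pi) / 1e6

for b in (1.41, 9.4, 21.1):   # old 60 MHz iron magnets up to modern 900 MHz
    nu = larmor_mhz(b)
    delta_e = H_PLANCK * nu * 1e6    # energy gap the photon must match, J
    print(f"B = {b:5.2f} T  ->  {nu:6.1f} MHz, level splitting {delta_e:.2e} J")
```

Note how tiny the splittings are compared to electronic transitions, which is why NMR needs radiowaves rather than visible light.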

p. 56 — I was amazed to think that frequencies can be measured with accuracies better than 1 part per million, but an electrical engineer who married my cousin assures me that this is not a problem.

p. 57  The following mnemonic may help you keep things straight

low field <==> downfield <==> bigger chemical shift <==> deshielded

mnemonic loadd of bs

It’s far from perfect, so if you can think of something better, please post a comment.

p. 64 — Error — Infrared wavelengths are not “between 10 and 100 mm”.  They start just over the wavelength of visible light (8000 Angstroms == 800 nanoMeters == 0.8 microMeters) and go up to 1 milliMeter (1,000 microMeters).

p. 64 — Here’s what a wavenumber is, and how to calculate it.  The text says that the wavenumber in cm^-1 (reciprocal centimeters) is the number of wavelengths that fit in a centimeter.  So wavenumbers are proportional to frequency.

To figure out what this is, express the wavelength (usually given in Angstroms, nanoMeters, microMeters or milliMeters) in centimeters (which are 10^-2 meters).  The wavenumber is then just the reciprocal:

wavenumber (cm^-1) = 1 / wavelength (in centimeters)

Thus visible light with a wavelength of 6000 Angstroms == 600 nanoMeters == 6 * 10^-5 centiMeters packs

1 / (6 * 10^-5) = 1/6 * 10^5

waves into a centimeter, so its wavenumber is about 16,667 cm^-1.  The lowest wavenumber of visible light is 12,500 cm^-1, corresponding to 8000 Angstroms at the red end; the violet end at 4000 Angstroms corresponds to 25,000 cm^-1.

Wavenumbers can be converted to frequencies by multiplying by the velocity of light in centimeters/second — e.g. 3 * 10^10 cm/sec.  So the highest frequency of visible light is 25,000 * 3 * 10^10 = 7.5 * 10^14 Hertz — nearly a petaHertz.

IR wavenumbers range from 4000 cm^-1 (25,000 Angstroms) down to 500 cm^-1 (200,000 Angstroms).
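The arithmetic above packs into a few lines (nothing assumed beyond the speed of light):

```python
# Wavelength -> wavenumber -> frequency conversions worked through above.
C_CM_PER_S = 3e10    # speed of light, cm/s

def wavenumber_cm(wavelength_angstroms):
    """Wavenumber in cm^-1: the number of waves packed into one centimeter."""
    wavelength_cm = wavelength_angstroms * 1e-8   # 1 Angstrom = 10^-8 cm
    return 1.0 / wavelength_cm

def frequency_hz(wavelength_angstroms):
    """Frequency = wavenumber times the velocity of light (in cm/s)."""
    return wavenumber_cm(wavelength_angstroms) * C_CM_PER_S

# Violet and red ends of the visible, then the IR range quoted in the text
for angstroms in (4000, 6000, 8000, 25000, 200000):
    print(f"{angstroms:>7} A -> {wavenumber_cm(angstroms):>8.0f} cm^-1, "
          f"{frequency_hz(angstroms):.2e} Hz")
```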

p. 65 — In the bottom figure, the line between bonds to hydrogen and triple bonds should be at 2500 cm^-1 rather than 3000 to be consistent with the text.

p. 71 — “By contrast, the carbonyl group is very polarized with oxygen attracting the electrons”  – the electronegativity values 2.5 and 3.5 of C and O could be mentioned here.

p. 81-1, 81-2 — The animations continue to amaze.  The latest shows the charge distribution, dipole moments of each bond, electron density, stick model, and space filling models of a variety of small molecules — you can rotate the molecule, shrink it or blow it up.

To get to any given model mentioned here, substitute the page number for xx as described at the top of this post.  The 81-1 and 81-2 are to be substituted for xx, as there are two interactive web pages for page 81.  Fabulous.

Look at enough of them and you’ll probably buy the book.

p. 82 — “Electrons have quantized energy levels”  Very misleading, but correct in a sense which a naive reader of this book wouldn’t be expected to know.   This should be changed to “Atoms (and/or Molecules) have quantized electronic energy levels.”   In the first paragraph of the section the pedagogical boat is righted.

p. 85 — The introspective will pause and wonder about the following statement — “In an atom, a node is a point where the electron can never (italics) be found — a void separating the two parts of the orbital”.  Well, how does a given electron get past a node from one part of an orbital to the other?  This is just more of the weirdness of what’s going on at the quantum mechanical level (which is dominant at atomic scales).  There is no way you can regard the electron in the 2s orbital as having a trajectory (or an electron anywhere else, according to QM).  The idea that trajectories need to be abandoned in QM isn’t mine but that of a physicist, Mark P. Silverman.  His books are worth a look, if you’re into getting migraines from considering what quantum mechanics really means about the world.  He’s written 4 according to Amazon.

p. 86 — Very worthwhile looking at the web page for the 3 dimensional shapes of atomic orbitals — particularly the d and f orbitals.  FYI p orbitals have 1 nodal plane, d orbitals have 2, and f orbitals have 3.  If you’re easily distractible, steel yourself, as this web page has links to other interesting web pages with all sorts of molecular orbitals.  This one has links to ‘simple’ molecular orbitals for 11 more compounds ranging from hydrogen to benzene.

p. 87 — Nice to know where s, p, d, and f come from — the early days of spectroscopy — s = sharp, p = principal, d = diffuse, f = fundamental.

p. 87 — “There doesn’t have to be someone standing on a stair for it to exist”  great analogy for empty orbitals.

p. 89 — The first diagram on the page is misleading and, in fact, quite incorrect.  The diagram shows that the bonding orbital is lower in energy than the atomic 1s orbitals by exactly the same amount as the antibonding orbital is higher.  This is not the case.  In such a situation the antibonding orbital is higher in energy by a greater amount than the bonding orbital is lower.  The explanation is quite technical, involving overlap integrals and the secular equation (far too advanced to bring in here, but the fact should be noted nonetheless).  Anslyn and Dougherty has a nice discussion of this point pp. 828 – 831.
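Here's a minimal numeric sketch of that asymmetry, using the simple two-orbital treatment with overlap. Energies are taken relative to the atomic level (set to zero); the values of the resonance integral beta and the overlap integral S are illustrative assumptions, not fitted numbers.

```python
# Two-orbital model with overlap: the secular equation gives MO energies
# (relative to the atomic orbital energy) of beta/(1+S) and -beta/(1-S).
beta = -3.0   # resonance integral, eV (negative by convention)
S    = 0.25   # overlap integral between the two 1s orbitals

e_bonding     = beta / (1 + S)    # lowered relative to the separated atoms
e_antibonding = -beta / (1 - S)   # raised relative to the separated atoms

print(f"bonding MO lowered by    {abs(e_bonding):.2f} eV")
print(f"antibonding MO raised by {e_antibonding:.2f} eV")
# Whenever S > 0, the antibonding orbital is pushed up more than the
# bonding orbital is pulled down -- the asymmetry the textbook diagram hides.
```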

p. 91 — The diagram comes back to bite as “Since there is no overall bonding holding the two atoms together, they can drift apart as two separate atoms with their electrons in 1s AOs”.  Actually what happens is that they are pushed apart because the destabilization of the molecule by putting an electron in the antibonding  molecular orbital is greater than the stabilization of the remaining electron in the bonding molecular orbital (so the bonding orbital can’t hold the atoms together).   Ditto for the explanation of why diatomic Helium doesn’t exist.

p. 94 — The rotating models of the bonding and antibonding orbitals of N2 are worth a look, and far better than a projection onto two dimensional space (e.g. the printed page).  See the top of this post for how to get them.

p. 95 — It’s important to note that nitric oxide is used by the brain in many different ways — control of blood flow, communication between neurons, neuroprotection after insults, etc. etc.  These are just a few of its effects, more are still being found.

p. 100 — I guess British undergraduates know what a pm is.  Do you?  It’s a picoMeter (10^-12 Meters), or 1/100 of an Angstrom.  Bond lengths of ethylene are given as C-H = 108 pm, C=C as 133 pm.  I find it easier to think of them as 1.08 Angstroms and 1.33 Angstroms — but that’s how I was brung up.

p. 103 — It is mentioned that sp orbitals are lower in energy than sp2 orbitals, which are lower in energy than sp3 orbitals.  The explanation given is that s orbitals are of lower energy than p orbitals — I’m not sure the reason why this is so was given.  It’s because s electrons get much closer to the nucleus (on average) than electrons in p orbitals (which have a node there).  Why should this lower the energy?  Because the closer an electron gets to the positively charged nucleus, the less charge separation there is, and separating charges costs energy.
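A sketch of the penetration argument, using the standard hydrogenic 2s and 2p radial functions (textbook forms, in units of the Bohr radius a0): the probability of finding the 2s electron within half a Bohr radius of the nucleus dwarfs that of the 2p electron, even though both have the same principal quantum number.

```python
import numpy as np

# Hydrogenic radial wavefunctions for n = 2 (r measured in units of a0):
#   R_2s proportional to (2 - r) exp(-r/2)   -- finite at the nucleus
#   R_2p proportional to  r      exp(-r/2)   -- zero at the nucleus
r = np.linspace(1e-6, 0.5, 2000)    # integrate out to r = 0.5 a0

R_2s = (1 / (2 * np.sqrt(2))) * (2 - r) * np.exp(-r / 2)
R_2p = (1 / (2 * np.sqrt(6))) * r * np.exp(-r / 2)

# Radial probability density is r^2 R(r)^2; crude Riemann-sum integration
dr = r[1] - r[0]
p_2s = np.sum(r**2 * R_2s**2) * dr
p_2p = np.sum(r**2 * R_2p**2) * dr

print(f"P(r < 0.5 a0), 2s: {p_2s:.2e}")
print(f"P(r < 0.5 a0), 2p: {p_2p:.2e}")
print(f"ratio: {p_2s / p_2p:.0f}x")
```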

p. 105 — Mnemonic for Z (cis) and E (trans) just say cis in French — it sounds like Zis.

Going to the mat with representations, characters and group theory

Willock’s book (see https://luysii.wordpress.com/category/willock-molecular-symmetry/) convinced me of the importance of the above in understanding vibrational spectroscopy.  I put it aside because the results were presented, rather than derived.  From the very beginning (the 60′s for me) we were told that group theory was important for quantum mechanics, but somehow even a 1 semester graduate course back then didn’t get into it.  Ditto for the QM course I audited a few years ago.

I’ve been nibbling about the edges of this stuff for half a century, and it’s time to go for it.  Chemical posts will be few and far between as I do this, unless I run into something really interesting (see https://luysii.wordpress.com/2012/02/18/a-new-way-to-study-reaction-mechanisms/).  Not to worry — plenty of interesting molecular biology, and even neuroscience, will appear in the meantime, including a post about an article showing that just about everything we thought about hallucinogens is wrong.

So, here’s my long and checkered history with groups etc.  Back in the day we were told to look at “The Theory of Groups and Quantum Mechanics” by Hermann Weyl.  I dutifully bought the Dover paperback, and still have it (never throw out a math book, always throw out medical books if more than 5 years old).  What do you think the price was — $1.95 — about two hours work at minimum wage then.  I never read it.

The next brush with the subject was a purchase of Wigner’s book “Group Theory and its Application to the Quantum Mechanics of Atomic Spectra”  – also never read but kept.  A later book (see Sternberg later in the post)  noted that the group theoretical approach to relativity by Wigner produced the physical characteristics of mass and spin as parameters in the description of irreducible representations.  The price of this one was $6.80.

Then as a neurology resident I picked up “Group Theory” by Morton Hammermesh (Copyright 1962).  It was my first attempt to actually study the subject.  I was quickly turned off by the exposition.  As groups got larger (and more complicated) more (apparently) ad hoc apparatus was brought in to explain them — first cosets, then  subgroups, then normal subgroups, then conjugate classes.

That was about it, until retirement 11 years ago.  I picked up a big juicy (and cheap) Dover paperback “Modern Algebra” by Seth Warner — a nice easy introduction to the subject.

Having gone through over half of Warner, I had the temerity to ask to audit an Abstract Algebra course at one of the local colleges.  I forget the text, but I didn’t like it (neither did the instructor).  We did some group theory, but never got into representations.

A year or so later, I audited a graduate math abstract algebra course given by the same man.  I had to quit about 3/4 through it because of an illness in the family.  We never got into representation theory.

Then, about 3 years ago, while at a summer music camp, I got through about 100 pages of “Representations and Characters of Groups” by James and Liebeck.  The last chapter in the book (starting on p. 366) was on an application to molecular vibration.  The book was hard to use because they seemed to use mathematical terms — module, for example — differently than I was used to.  100 pages was as far as I got.

Then I had the pleasure of going through Cox’s book on Galois theory, seeing where a lot of group theory originated  (along with a lot of abstract algebra) — but there was nothing about representations there either.

Then after giving up on Willock, a reader suggested  “Elements of Molecular Symmetry” by Ohrn.  This went well until p. 28, when his nomenclature for the convolution product threw me.

So I bought yet another book on the subject which had just come out, “Representation Theory of Finite Groups” by Steinberg.  No problems going through the first 50 pages, which explain what representations, characters and irreducibles are.  Tomorrow I tackle p. 52, where he defines the convolution product.  Hopefully I’ll be able to understand it and go back to Ohrn — which is essentially chemically oriented.

The math of it all is a beautiful thing, but the immediate reason for learning it is to understand chemistry better.  I might mention that I own yet another book “Group Theory and Physics” by Sternberg, which appears quite advanced.  I’ve looked into it from time to time and quickly given up.

Anyway, it’s do or die time with representation theory.  Wish me luck.

Where has all the chemistry gone?

Devoted readers of this blog (assuming there are any) must be wondering where all the chemistry has gone.  Willock’s book convinced me of the importance of group theory in understanding what solutions we have of the Schrodinger equation.  Fortunately (or unfortunately) I have the mathematical background to understand group characters and group representations, but I found Willock’s presentation of just the mathematical results unsatisfying.

So I’m delving into a few math books on the subject. One is  “Representations and Characters of Groups” by James and Liebeck (which provides an application to molecular vibration in the last chapter starting on p. 367).  It’s clear, and for the person studying this on their own, does have solutions to all the problems. Another is “Elements of Molecular Symmetry” by Ohrn, which I liked quite a bit.  But unfortunately I got stymied by the notation M(g)alpha(g) on p. 28. In particular, it’s not clear to me if the A in equation (4.12) and (4.13) are the same thing.

I’m also concurrently reading two books on Computational Chemistry, but the stuff in there is pretty cut and dried and I doubt that anyone would be interested in comments as I read them.  One is “Essential Computational Chemistry” by Cramer (2nd edition).  The other is “Computational Organic Chemistry” by Bachrach.  The subject is a festival of acronyms (and I thought the army was bad) and Cramer has a list of a mere 284 of them starting on p. 549. On p. 24 of Bachrach there appears the following: “It is at this point that the form of the functionals begins to cause the eyes to glaze over and the acronyms appear to be random samplings from an alphabet soup.”  I was pleased to see that Cramer thinks 40 pages or so of Tom Lowry and Cathy Richardson’s book is still worth reading on molecular orbital theory, even though it was 24 years old at the time Cramer referred to it.  They’re old friends from grad school.  I’m also pleased to see that Bachrach’s book contains interviews with Paul Schleyer (my undergraduate mentor).  He wasn’t doing anything remotely approaching computational chemistry in the late 50s (who could?).  Also there’s an interview with Ken Houk, who was already impressive as an undergraduate in the early 60s.

Maybe no one knows how all of the above applies to transition metal organic chemistry, which has clearly revolutionized synthetic organic chemistry since the 60′s, but it’s time to know one way or the other before tackling books like Hartwig.

Another (personal) reason for studying computational chemistry is so I can understand whether the protein folding people are blowing smoke or not.  Also it appears to be important in drug discovery, or at least is supporting Ashutosh in his path through life.  I hope to be able to talk intelligently to him about the programs he’s using.

So stay tuned.

Anslyn pp. 935 – 1000

The penultimate chapter of Anslyn is an excellent discussion of photochemistry, with lots of physics clearly explained, but it leaves one question unanswered which has always puzzled me:  how long does it take for a photon of a given wavelength to be absorbed?  On p. 811 there is an excellent discussion of the way the quantum mechanical operator for kinetic energy (-hBar^2/2m * del^2) is related to kinetic energy.  The more the wavefunction changes in space, the higher the energy.  Note that the wavefunction applies to particles (like protons, neutrons, electrons) with mass.

Nonetheless, in a meatball sort of way, apply this to the (massless) photon.  Consider light from the classical point of view, as magnetic and electrical fields which fluctuate in time and space.  The fields of course exert force on charged particles, and one can imagine photons exerting forces on the electrons around a nucleus and  changing their momentum, hence doing work on them.  Since energy is conserved (even in quantum mechanics), it’s easy to see how the electrons get a higher energy as a result.  The faster the fields fluctuate, the more energy they impart to the electrons.

Now consider a photon going past an atom, and being absorbed by it.  It seems that a full cycle of field fluctuation must pass the atom.  So here’s a back of the envelope calculation, which seems to work out.  Figure an atomic diameter of 1 Angstrom (10^-10 meters).  The chapter is about photochemistry, which is absorption of light energetic enough to change electronic energy levels in an atom or a molecule.  All the colored things we see, are colored because their electronic energy levels are absorbing photons of visible light — the colors actually result from the photons NOT absorbed.  So choose light of 6000 Angstroms — which has a wavelength of 6 * 10^-7 meters.

In one second, light moves 3 * 10^8 meters, regardless of how many wavelengths it contains.  If the wavelength were 1 meter, it would move past a point in 1/(3 * 10^8) seconds.  But the wavelength of the visible light I chose is 6 * 10^-7 meters, so one wavelength moves past in (6 * 10^-7)/(3 * 10^8) = 2 * 10^-15 seconds, which (I think) is how long it takes visible light to be absorbed.  Have I made a mistake?  Are there any cognoscenti out there to tell me different?
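The back-of-the-envelope estimate in code form, using the same numbers as above (the transit time of one full wavelength past a fixed point):

```python
# Time for one full cycle of a light wave to sweep past a fixed point.
C = 3e8                       # speed of light, m/s
wavelength = 6000e-10         # 6000 Angstroms in meters

transit_time = wavelength / C
print(f"one cycle of 6000 A light passes in {transit_time:.1e} s")

# Same estimate for a 1 meter radiowave (the NMR case discussed below)
print(f"one cycle of a 1 m radiowave passes in {1.0 / C:.1e} s")
```

The first number comes out around 2 femtoseconds, the second around 3 nanoseconds.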

That was a classical way of looking at it.  Now for the bizarreness of quantum mechanics.  How does the wavelength of the photon get sucked up by something 1/6000th of its size, particularly when there are probably at least 10^9 atoms in a volume 6,000 Angstroms on a side?  It gets worse with NMR, because the radiowave absorbed by a nucleus is 1 meter long, and the nucleus is 10^-4 the size of an atom.  Essentially I’m asking about the collapse of the wavefunction of a photon (assuming they have one?).

 

p. 936 — “We show wavelength in the condoned (italics) unit of nanoMeter  . . . “  It may be condoned, but this chemist thinks in Angstroms, and my guess is that most chemists do, because atomic radii and diameters are small numbers in Angstroms, not fractions of a nanoMeter.

p. 939 — “Absorption of two photons or multiple photons . . . does not occur, except with special equipment . . . “  True enough, but the technique is now widely used in biologic research, and it is not new [ Nature vol. 375 pp. 682 - 685 '95 ].  In contrast to conventional microscopy, in two photon fluorescence microscopy (multiphoton microscopy) two long wavelength photons are simultaneously absorbed [ Science vol. 300 p. 84 '03 — actually within a few femtoSeconds — I thought simultaneity was asking too much ] and combine their energies to excite a fluorophore not normally absorbing at this wavelength.  This permits the use of infrared light to excite the fluorophore.  By using low energy (near infrared) light rather than higher energy visible light photons, light induced degradation of biological samples is minimized.
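The energy bookkeeping is easy to check. The wavelengths below are my illustrative choices (800 nm is typical of the Ti:sapphire lasers used in two-photon work): two near-infrared photons deliver exactly the energy of one visible photon of half the wavelength.

```python
# Photon energy E = h c / lambda, applied to the two-photon trick.
H = 6.62607e-34      # Planck's constant, J s
C = 3e8              # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of a single photon of the given wavelength, in Joules."""
    return H * C / wavelength_m

one_photon  = photon_energy(400e-9)        # one 400 nm visible photon
two_photons = 2 * photon_energy(800e-9)    # two 800 nm infrared photons

print(f"one 400 nm photon:  {one_photon:.3e} J")
print(f"two 800 nm photons: {two_photons:.3e} J")   # the same energy
```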

p. 939 -- Manifold probably really refers to the potential energy surface associated with the different energy levels, rather than the numeric value of the energy level. 

p. 940 -- Look at transition dipoles very hard if you want to understand Forster resonance energy transfer (FRET), which is widely used in biology to determine how proteins associate with each other. 

p. 944 -- How in the world did they get enough formaldehyde in the excited state to measure it -- or is this a calculation? 

p. 947 -- Nice exposition on GFP (Green Fluorescent Protein) which has revolutionized cellular biology.   But the organic chemist should ask themselves, why don't chemical reactions between the hundreds of side chains on a protein happen all the time?  For more on this point see http://luysii.wordpress.com/2009/09/25/are-biochemists-looking-under-the-lamppost/

p. 951 -- How do you tell phosphorescence from fluorescence?  The lifetime for phosphorescence is much longer (.1 - 10 seconds), but is this enough? 

p. 970 -- The chemistry of photolyases, which repair thymine photodimers, is interesting.  Here's a bit more information.  [ Proc. Natl. Acad. Sci. vol. 99 pp. 1319 - 1322 '02 ] Enzymes repairing cyclobutane dimers are called photolyases.  The enzymes contain a redox active flavin adenine dinucleotide (FAD), and a light harvester (a methenyltetrahydrofolate < a pterin > in most species).  It has been proposed that the initial step in the DNA repair mechanism is a photoinduced single electron transfer from the FAD cofactor (which in the active enzyme is in its fully reduced form — FADH-) to the DNA lesion.  The extra electron goes into the antibonding orbital of one of the C C bonds of the dimer.  (The electron donated is on the adenine of FADH.)  The entire process takes less than a nanoSecond.  Electron transfer to the dimer takes 250 picoSeconds.  The dimer then opens within 90 picoSeconds, and the electron comes back to the FADH cofactor in 700 picoSeconds.  This all happens because the dimer has been flipped out of the DNA into a binding pocket of the photolyase (how long does this take?).

       Interestingly, photolyases use less energetic light than the natural absorption of thymine dimers (2500 Angstroms).   Photoexcitation of the enzyme culminates in electron donation from the excited state flavin directly to the thymine dimer. 

p. 973 — The photochemical reactions are impressive synthetically, and represent a whole new ball game in making fused rings.  The synthesis of cubane is impressive, and I wouldn’t have thought quadricyclane could have been made at all. 

p. 980 — Caged compounds and their rapid release are incredibly important in biological research, particularly brain research.  Glutamic acid is the main excitatory neurotransmitter in brain, and the ability to release it very locally in the brain and watch what happens subsequently is extremely useful in brain research.  

p. 987 — Since the bond dissociation energy of O2 is given (34 kiloCalories/Mole) and C=O bonds are stated to be quite strong, why not just say the BDE of C=O is 172 KCal/M? 

p. 992 — Good to see Sam Danishevsky has something named for him.

Anslyn pp. 807 – 876 (Chapter 14)

p. 807 “Most chemists think about bonding improperly”  – What an opening salvo for this Chapter — “Advanced Concepts in Electronic Structure Theory”.  I think A&D’s reasons for this are correct (at least for me).  They can be found on p. 813 (see the note) and p. 838 (ditto)

p. 808 — “These wavefunctions contain all the observable information about the system.”  – A huge assumption, and in fact a postulate of quantum mechanics.  OK actually, since QM has never made an incorrect prediction. 

p. 809 — “In classical mechanics, the forces on a system create two kinds of energy — kinetic and potential”.  Hmm.  How does force ‘create’ energy?  It does so by doing work.  Work is force * distance, and if you do a dimensional analysis, you find that force * distance has the dimensions of kinetic energy (mass * velocity^2) — It’s worth working through this yourself, if you’ve been away from physics for a while.  Recall that potential energy is the general name for  energy which has to do with location relative to something else.
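A worked numeric example of the force-times-distance point (the numbers are arbitrary): a constant force accelerating a mass from rest does work F * d, and that work shows up exactly as (1/2) m v^2, with matching dimensions of kg * m^2 / s^2.

```python
# Work done by a constant force equals the kinetic energy it imparts.
m = 2.0     # mass, kg
F = 10.0    # force, N
d = 5.0     # distance over which the force acts, m

work = F * d                      # Joules = kg * m^2 / s^2
a = F / m                         # acceleration, m/s^2
v = (2 * a * d) ** 0.5            # speed after distance d, from v^2 = 2 a d
kinetic = 0.5 * m * v**2          # kinetic energy, same dimensions as work

print(f"work done:      {work:.1f} J")
print(f"kinetic energy: {kinetic:.1f} J")   # the same number
```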

After reading Lawrence Krauss’s biography of Feynman (which goes much more into the actual physics than other biographies, including Gleick’s), I cracked open the 3 volumes of the Feynman lectures on physics and have begun reading.  It’s amazing how uncannily accurate his speculations were, particularly about things which weren’t known in the 60′s but which are known now.  

       Feynman lists 9 types of energy (lecture 4 page 2)  
  1. gravitational (a type of potential energy)
  2. kinetic
  3. heat
  4. elastic
  5. electric
  6. chemical
  7. radiant (??)
  8. nuclear
  9. mass 

      He says that we really don’t know  what energy is (even though we know 9 forms in which it appears) just that it’s conserved.  Even so, the conservation law allows all sorts of physics problems to be solved.  To really get into why energy is conserved, you have to read about Noether’s theorem — which I’m about to do, using a book called “Emmy Noether’s Wonderful Theorem” by Dwight E. Neuenschwander.

      Later (Lecture 4 page 4) Feynman defines potential energy as the general name of energy which has to do with location relative to something else. 

p. 809 — The QM course I audited 2 years ago noted that the Schrodinger equation really can’t be derived, but is used because it works.  However, the prof then proceeded to give us a nice pseudo-derivation based on the standard equation for a wave propagating in space and time, Einstein’s E = h * nu, and De Broglie’s p = h/lambda, differentiating the wave equation twice with respect to position, then twice with respect to time, and equating what he got.   

However, to get the usual Hamiltonian, he had to arbitrarily throw in a term for potential energy (because it works).  
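The pseudo-derivation is easy to check numerically for a free particle. The sketch below (natural units with hbar = m = 1, my choice, not the prof's) differentiates a plane wave by finite differences and confirms that i hbar dpsi/dt equals -(hbar^2/2m) d2psi/dx2 once E = h * nu and p = h/lambda are imposed through the dispersion relation omega = hbar k^2 / 2m.

```python
import numpy as np

# A plane wave psi = exp(i(kx - omega*t)) satisfies the free-particle
# Schrodinger equation i*hbar dpsi/dt = -(hbar^2/2m) d2psi/dx2
# provided omega = hbar*k^2/(2m), i.e. E = p^2/2m with E = hbar*omega
# and p = hbar*k.  Natural units hbar = m = 1.
hbar, m, k = 1.0, 1.0, 2.0
omega = hbar * k**2 / (2 * m)

def psi(x, t):
    return np.exp(1j * (k * x - omega * t))

x0, t0, h = 0.3, 0.7, 1e-4

# i*hbar * dpsi/dt, central difference in t
lhs = 1j * hbar * (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)

# -(hbar^2/2m) * d2psi/dx2, central second difference in x
d2 = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2
rhs = -(hbar**2 / (2 * m)) * d2

assert abs(lhs - rhs) < 1e-5
```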

p. 810  “The energy E is simply a  number”  – should have said “The energy E is simply a real number”, which is exactly why the complex conjugate must be used.  If you really want to know what's going on, see the 10 articles in the category “Linear Algebra Survival Guide for QM”.
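A quick way to convince yourself: for any complex vector standing in for a wavefunction, and any Hermitian operator, the inner products built with the complex conjugate come out real. A small numpy sketch (random illustrative matrices, nothing physical):

```python
import numpy as np

# A random complex "wavefunction" and a random Hermitian "observable":
# inner products built with the complex conjugate come out real.
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T                  # Hermitian by construction

norm = np.vdot(psi, psi)            # <psi|psi>; vdot conjugates its first argument
energy = np.vdot(psi, H @ psi)      # <psi|H|psi>

assert norm.real > 0 and abs(norm.imag) < 1e-10
assert abs(energy.imag) < 1e-10     # the "number" E is forced to be real
```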

p. 811 – 812 — The qualitative discussion of the Laplacian is great — it also explains why higher frequency light has higher energy.  Worth the price of the book.  Localizing an electron increases its energy by the Heisenberg uncertainty principle, which was the reasoning I’d grown up with.  

One point for the uninitiated (into the mysteries of quantum mechanics) to consider.  “The more nodes an orbital has, the higher is its energy.  Recall from Chapter 1 that nodes are points of zero electron density, where the wavefunction changes sign.”    Well, a point of zero electron density, or a point at which the wavefunction equals zero, means the electron is never (bold) found there.  So why is the probability of finding an electron on both sides of the node not zero?  You need to abandon the notion that an electron has a trajectory within an atom.  Having done so, what does angular momentum mean in quantum mechanics? 
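The particle in a box makes the puzzle concrete. For the n = 2 state the wavefunction vanishes exactly at the middle of the box, yet integrating psi^2 gives probability 1/2 on each side of the node. A numerical sketch (box length 1, standard normalization):

```python
import numpy as np

# Particle in a box of length L, first excited state (n = 2): the
# wavefunction has a node at L/2, yet half the probability sits on
# each side of it.
L = 1.0
N = 200000
x = np.linspace(0.0, L, N + 1)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(2 * np.pi * x / L)
prob = psi**2

mid = N // 2                       # index of the node at x = L/2
left = np.sum(prob[:mid]) * dx     # probability in [0, L/2)
right = np.sum(prob[mid + 1:]) * dx

assert abs(psi[mid]) < 1e-10       # the electron is never found AT the node
assert abs(left - 0.5) < 1e-3
assert abs(right - 0.5) < 1e-3
```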

p. 813 — Interesting that the electrostatic arguments for bonding (shielding nuclei from each other, etc. etc. which I’ve heard a zillion times) are incorrect.  This probably explains the opening salvo of the chapter. (See also the note on p. 838).

p. 814 — “This is the fundamental reason that a bond forms; the kinetic energy of the electrons in the bonding region is lower than the kinetic energy of the electrons in isolated atomic orbitals”  – this is because the wave function amplitude changes less between the nuclei.  However, since we’ve had to abandon the notion of a trajectory — what does kinetic energy actually mean in the quantum mechanical situation? (see note on pp. 811 – 812).

p. 814 — “The greater the overlap between two orbitals the lower the kinetic energy”  – to really see this you have to look at figure 14.6 on p. 813 — the greater the overlap, the shallower the depression of the wave function amplitude between the nuclei, which implies less change in amplitude with distance, which implies a smaller Laplacian (a second derivative) and less kinetic energy for the electrons there.  So this is why overlapping atomic orbitals result in lower electron kinetic energy at sites of overlap — e.g. why bonds form (bold).  Great stuff ! ! ! !

      Continuing on, the next paragraph explains where Morse potentials (p. 422) come from, and why populating antisymmetric orbitals causes repulsion (the change in orbital sign increases the Laplacian greatly, and with it the electron kinetic energy, despite the fact that the potential energy of the antisymmetric orbitals is favorable for keeping the atoms close — e.g. bonding).   
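The kinetic energy argument can be made quantitative in a one-dimensional cartoon: take two Gaussian "atomic orbitals", form the in-phase (bonding) and out-of-phase (antibonding) combinations, and compute the mean kinetic energy from a finite-difference Laplacian. The node makes the antibonding kinetic energy several times larger. Everything here (the Gaussians, the spacing, the units) is an illustrative assumption, not an A&D calculation.

```python
import numpy as np

# 1D cartoon: two Gaussian "atomic orbitals" at +/- R/2.  Compare
# <T> = <psi| -(1/2) d2/dx2 |psi> for the in-phase (bonding) and
# out-of-phase (antibonding) combinations.  Units with hbar = m = 1.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
R = 1.5

def gauss(center):
    return np.exp(-(x - center) ** 2)

def kinetic(psi):
    psi = psi / np.sqrt(np.sum(psi**2) * dx)                      # normalize
    lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2  # Laplacian
    return -0.5 * np.sum(psi * lap) * dx

T_bond = kinetic(gauss(-R / 2) + gauss(R / 2))   # no sign change between nuclei
T_anti = kinetic(gauss(-R / 2) - gauss(R / 2))   # node between the nuclei

assert T_bond > 0
assert T_anti > T_bond   # the node raises the curvature, hence the kinetic energy
```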

p. 815 — What does the Born Oppenheimer approximation (which keeps internuclear distances fixed) do to the calculation of vibrational energies — which depend on nuclear motion?    The way the energy of the solutions of the Schrodinger Equation using the BO approximation is gradually approached (moving the nuclei around and calculating energy) clearly won’t work for CnH2n+2 with n > 2, as there will be more than a single minimum.  What about a small protein?   Clearly in these situations the Born Oppenheimer approximation is hopeless.  Because of the difficulty in understanding A&D’s discussion of the secular equation (see comments on p. 828), I’ve taken to reading other books (which have the advantage of devoting hundreds of pages to computational chemistry versus A&D’s 60), notably Cramer’s “Essentials of Computational Chemistry”  – he notes that lacking the Born Oppenheimer approximation, the concept of the potential energy surface vanishes.

p. 816 – 817 — Antisymmetric wave functions and Slater determinants are interesting ways to look at the Pauli exclusion principle.  The Slater determinant is basically a linear combination of products of orbitals — why is this allowed?  Because the orbitals are solutions of a linear differential equation, and any linear combination of solutions to a linear differential equation is also a solution.  
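Why a determinant does the job can be seen in a few lines: swap the two electron labels and the determinant changes sign; put both electrons at the same point and it vanishes. A sketch with two made-up one-dimensional orbitals (my choice, purely illustrative):

```python
import numpy as np

# Two electrons in two spin-orbitals chi_a, chi_b.  The Slater determinant
#   Psi(x1, x2) = (1/sqrt(2)) * det([[chi_a(x1), chi_b(x1)],
#                                    [chi_a(x2), chi_b(x2)]])
# changes sign when the two electron labels are swapped (Pauli principle).
def chi_a(x):
    return np.exp(-x**2)

def chi_b(x):
    return x * np.exp(-x**2)

def slater(x1, x2):
    m = np.array([[chi_a(x1), chi_b(x1)],
                  [chi_a(x2), chi_b(x2)]])
    return np.linalg.det(m) / np.sqrt(2)

x1, x2 = 0.4, -1.1
assert abs(slater(x1, x2) + slater(x2, x1)) < 1e-12   # antisymmetric under exchange
assert abs(slater(x1, x1)) < 1e-12                    # zero if the electrons coincide
```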

p. 823 — What’s a diffuse function?  Also polarization orbitals strike me as a gigantic kludge.  I suppose the proof of the pudding is in the prediction of energy levels, but there appear to be an awful lot of adjustable parameters lurking about.  

p. 824 — “the motions of the electrons are pairwise correlated to keep the electrons apart” — but electrons don’t really have trajectories — see note on pp. 811 – 812.  I got all this stuff from Mark P. Silverman, “And Yet It Moves”, Chapter 3.

p. 824 — “the c(i)’s are incrementally changed until capital Psi approximates a correlated wavefunction.”  More kludges. 

p. 825 — Nice to see why electron correlation is required if you want to study Van der Waals forces between molecules.  The correlation energy could be considered an intramolecular Van der Waals force.  

What is a single point energy? — I couldn’t find it in the index.

p. 826 — The descriptions of the successive kludges required for the ab initio approach to orbitals are rather depressing.  However, there’s no way around it.   You are basically trying to solve a many body problem when you solve the Schrodinger equation.  It’s time to remember what a former editor of Nature (John Gribbin) said “It’s important to appreciate, though, that the lack of solutions to the three-body problem is not caused by our human deficiencies as mathematicians; it is built into the laws of mathematics “

p. 828 — I was beginning not to take all this stuff seriously, until I found that the Hartree Fock approach produces results agreeing with experiment.  However, given the zillions of adjustable parameters involved in getting to any one energy, it better produce good results for n^2 molecules, where n is the number of adjustable parameters.   Fortunately, organic chemistry can provide well over n^2 molecules with n carbons and 2n+2 hydrogens. 

p. 828 — The discussion of secular determinants and the equations leading to them is opaque (to me).  So I had to look it up in another book “Essentials of Computational Chemistry” by Christopher J. Cramer (2nd Edition).  Clear as a bell (although, to be fair, I read it after slowly going through 20 pages of A&D), and done in 10+ pages (105 –> 115).

     Along these lines, how did the secular equation get its name?  Is there a religious equation? 

What can you do with an approximate wavefunction produced by any of these methods?  The discussion in A&D so far is all about energy levels.  However, unlike wavefunctions, operators on wavefunctions are completely known, so you can use them to calculate other properties (Cramer doesn’t give an example).  

     p. 830 – 831 — Even so, given the solutions of the secular equation for a very simple case, you see why the energy of a bonding orbital is less than that of two separate atomic orbitals — call the amount B (for bonding). More importantly, the energy of the antibonding orbital is greater than that of the two separate atomic orbitals by a greater amount than B — explaining why filling both a bonding and the corresponding antibonding orbital results in repulsion.  This is rule #8 of the rules of Qualitative Molecular Orbital theory (p. 28).
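The 2x2 case is small enough to do explicitly. With Coulomb integral alpha, resonance integral beta, and overlap integral S (the numbers below are illustrative, not fit to any real molecule), the secular equation gives E = (alpha + beta)/(1 + S) and (alpha - beta)/(1 - S), and the S in the denominators is what pushes the antibonding level up more than the bonding level comes down:

```python
import numpy as np

# The 2x2 secular problem for two identical atomic orbitals: solving
# H c = E S c gives E = (alpha +/- beta)/(1 +/- S).
alpha, beta, S = -10.0, -4.0, 0.3    # illustrative values only

H = np.array([[alpha, beta], [beta, alpha]])
Smat = np.array([[1.0, S], [S, 1.0]])

# Generalized eigenvalues of (H, Smat) = ordinary eigenvalues of S^-1 H
E = np.sort(np.linalg.eigvals(np.linalg.solve(Smat, H)).real)
E_bond, E_anti = E[0], E[1]

stabilization = alpha - E_bond       # how far the bonding MO drops below alpha
destabilization = E_anti - alpha     # how far the antibonding MO rises above alpha

assert abs(E_bond - (alpha + beta) / (1 + S)) < 1e-9
assert abs(E_anti - (alpha - beta) / (1 - S)) < 1e-9
# With S > 0, filling both orbitals gives net repulsion:
assert destabilization > stabilization
```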

p. 834 — An acronym festival — CNDO, INDO, PNDO, MINDO 1 – 3, AM1, PM3 etc.  At least they tell you that they aren’t much used any more.  

It’s amazing that Huckel theory works at all, ignoring as it does electron electron repulsion. 

p. 836 — If everything in Density Functional Theory (DFT) depends on the electron density — how do you ever find it?   Isn’t this what the wavefunctions which are the solutions to the Schrodinger equation actually are?  I’m missing something here and will have to dig into Cramer again. 

p. 838 — Most energy diagrams of molecular orbitals made from two identical atomic orbitals show the bonding and antibonding orbitals symmetrically disposed below and above the atomic orbitals.  This comes from the Huckel approximation, which simply ignores overlap integrals.  The truth is shown in the diagram on p. 831. 

p. 838 — Another statement worth the price of the book — the sigma and pi orbitals are of different symmetry, and so the sigma and pi orbitals don’t mix.  The sigma electrons provide part of the potential field experienced by the pi electrons.  

p. 839 — Spectacular to see how well Huckel Molecular Orbital Theory works for fulvene — even if the bonding and antibonding orbitals are symmetrically disposed. 
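Anyone can redo the fulvene calculation in a few lines: the Huckel energies alpha + x*beta come from the eigenvalues of the adjacency matrix of the carbon skeleton.

```python
import numpy as np

# Huckel treatment of fulvene: pi MO energies are E = alpha + x*beta,
# where the x's are eigenvalues of the adjacency matrix of the carbon
# skeleton.  Atoms 0-4 form the five-membered ring; atom 5 is the
# exocyclic CH2 carbon attached to atom 0.
bonds = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5)]
A = np.zeros((6, 6))
for i, j in bonds:
    A[i, j] = A[j, i] = 1.0

x = np.sort(np.linalg.eigvalsh(A))[::-1]    # descending: most bonding first

# x comes out approximately [2.115, 1.000, 0.618, -0.254, -1.618, -1.861]:
# three bonding MOs for the six pi electrons.  Fulvene being non-alternant,
# the levels are not paired +/-x the way benzene's are.
assert (x > 0).sum() == 3
assert abs(x[0] - 2.1149) < 1e-3
assert abs(x.sum()) < 1e-9                  # trace of A is zero
```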

p. 841 — Fig. 14.13 — I don’t understand what is going on in part B in the two diagrams where not all the atoms have an associated p orbital. 

p. 843 — With all these nice energy level diagrams, presumably spectroscopy has been able to determine the difference between them, and see how well the Huckel theory fits the pattern of energy levels (if not the absolute values). 

p. 846 — Table 1.1 should be table 1.4 (I think)

p. 849 — How in the world was the bridged [10] annulene made?

p. 853 — Why is planar tetracoordinate carbon a goal of physical organic chemistry?   The power of the molecular orbital approach is evident — put the C’s and H’s where you want in space, and watch what happens — sort of like making a chemical unicorn.  Why not methane with 3 hydrogens in an equilateral triangle, the carbon in the center of the triangular plane, and the fourth hydrogen perpendicular to the central carbon?   

Dilithiocyclopropane has been made — presumably as a test bed for MO calculations; being able to predict something is always more convincing of a theory’s validity than justifying something you already know to be true. 

p. 854 — What are the observations supporting the A < S ordering of molecular orbitals in 1, 4 dihydrobenzene?  The arguments to rationalize the unexpected strike me as talmudic, not that talmudic reasoning is bad, just that no one calls it scientific.

p. 856 — Good to see that one can calculate NMR chemical shifts using ab initio calculations (hopefully without tweaking parameters for each molecule).  Bad to see that it is too complicated to go into here.  More reading to do (next year) after Anslyn (probably Cramer), with a little help from two computational chemist friends.  

p. 858 — How in the world did anyone ever make Coates’ ion?  

p. 861 — Have the cyclobutanediyl and cyclopentanediyl radicals ever been made?

p. 863 — “Recall that the 3d orbitals are in the same row of the period(ic) table as the 4s and 4p orbitals’ — does anyone have an idea why this is so?  Given the periodic table, the 4s orbitals fill before the 3d, which fill before the 4p (lowest energy orbitals fill first, presumably).  The higher energy of the 4p than the 3d may explain why d2sp3 orbitals are higher in energy than the leftover 3d orbitals — d(xy), d(yz) and d(zx).  Is this correct?  However the diagram in part B. of Figure 14.33 on p. 864 still shows that (n+1)s is of higher energy than nd, even though the periodic table would imply the opposite. 

       It’s not clear why the t(2g) combinations of d(xy), d(yz) and d(zx) orbitals don’t interact with the 6 ligand orbitals, since they are closer to them in energy.  Presumably the geometry is wrong?  Presumably d(z2) and d(x2 – y2) are used to hybridize with the p orbitals because they are oriented the same way, and d(xy), d(yz) and d(zx) are offset from the p orbitals by 45 degrees.  Is this correct? 

        This is the downside of self-study.  I’m sure a practicing transition metal organic chemist could answer these questions quickly, but without these answers it’s back to the med school drill — that’s the way it is, and you’d best memorize it.

p. 865 — Frontier orbitals have only been defined for the Huckel approximation at this point — the index has them discussed in the next chapter.  On p. 888 they are defined as HOMO and LUMO which have been well defined previously. 

p. 866 — The isolobal work is fascinating, primarily because it allows you to predict (or at least rationalize) things.

This section does elaborate a bit on organometallic  bonding details, lacking in chapter 12.  However, no reactions are discussed, and electrons are not pushed.  Perhaps later in the book, but I doubt it.

It’s clear from reading chapter 12 that organometallics have revolutionized synthesis in the past 50 years. I’ll need to read further next year, in order to reach the goal of enjoying new and clever syntheses as they come out.

Does anyone out there have any thoughts about Cramer’s book? Any recommendations for other computational chemistry books? I’ve clearly got to go farther.

Merry Christmas and Happy New Year

The limits of chemical reductionism

“Everything in chemistry turns blue or explodes”, so sayeth a philosophy major roommate years ago.  Chemists are used to being crapped on, because it starts so early and never lets up.  However, knowing a lot of organic chemistry and molecular biology allows you to see very clearly one answer to a serious philosophical question — when and where does scientific reductionism fail?

Early on, physicists said that quantum mechanics explains all of chemistry.  Well it does explain why atoms have orbitals, and it does give a few hints as to the nature of the chemical bond between simple atoms, but no one can solve the equations exactly for systems of chemical interest.  Approximate the solution, yes, but this is hardly a pure reduction of chemistry to physics.  So we’ve failed to reduce chemistry to physics because the equations of quantum mechanics are so hard to solve, but this is hardly a failure of reductionism.

The last post “The death of the synonymous codon – II” puts you exactly at the nidus of the failure of chemical reductionism to bag the biggest prey of all, an understanding of the living cell and with it of life itself.  We know the chemistry of nucleotides, Watson-Crick base pairing, and enzyme kinetics quite well.  We understand why less transfer RNA for a particular codon would mean slower protein synthesis.  Chemists understand what a protein conformation is, although we can’t predict it 100% of the time from the amino acid sequence.  So we do understand exactly why the same amino acid sequence using different codons would result in slower synthesis of gamma actin than beta actin, and why the slower synthesis would allow a more leisurely exploration of conformational space allowing gamma actin to find a conformation which would be modified by linking it to another protein (ubiquitin) leading to its destruction.  Not bad.  Not bad at all.

Now ask yourself, why the cell would want to have less gamma actin around than beta actin.  There is no conceivable explanation for this in terms of chemistry.  A better understanding of protein structure won’t give it to you.  Certainly, beta and gamma actin differ slightly in amino acid sequence (4/375) so their structure won’t be exactly the same.  Studying this till the cows come home won’t answer the question, as it’s on an entirely different level than chemistry.

Cellular and organismal molecular biology is full of questions like that, but gamma and beta actin are the closest chemists have come to explaining the disparity in the abundance of two closely related proteins on a purely chemical basis.

So there you have it.  Physicality has gone as far as it can go in explaining the mechanism of the effect, but has nothing to say whatsoever about why the effect is present.  It’s the Cartesian dualism between physicality and the realm of ideas, and you’ve just seen the junction between the two live and in color, happening right now in just about every cell inside you.  So the effect is not some trivial toy model someone made up.

Whether philosophers have the intellectual cojones to master all this chemistry and molecular biology is unclear.  Probably no one has tried (please correct me if I’m wrong).  They are certainly capable of mounting intellectual effort — they write book after book about Godel’s proof and the mathematical logic behind it. My guess is that they are attracted to such things because logic and math are so definitive, general and nonparticular.

Chemistry and molecular biology aren’t general this way.  We study a very arbitrary collection of molecules, which must simply be learned and dealt with. Amino acids are of one chirality. The alpha helix turns one way and not the other.  Our bodies use 20 particular amino acids not any of the zillions of possible amino acids chemists can make.  This sort of thing may turn off the philosophical mind which has a taste for the abstract and general (at least my roommates majoring in it were this way).

If you’re interested in how far reductionism can take us  have a look at http://wavefunction.fieldofscience.com/2011/04/dirac-bernstein-weinberg-and.html

Were my two philosopher roommates still alive, they might come up with something like “That’s how it works in practice, but how does it work in theory?” 

Some New Year’s thank you’s – I

Even though I’m the CEO of a tiny department of a very large organization, it’s time to thank those unsung divisions that make it all possible.  It’s been a very good year. Thanks in part to our work, the boss is a lot more adept at using the pedal when he plays the piano.

First: thanks to the guys in shipping and receiving.  Kinesin moves the stuff out and Dynein brings it back home.  Think of how far they have to go.  The head office sits in area 4 of the cerebral cortex and K & D have to travel about 3 feet down to the motorneurons in the first sacral segment of the spinal cord controlling the gastrocnemius and soleus, so the boss can press the pedal on his piano when he wants. Like all good truckers, they travel on the highway.  But instead of rolling they jump.  The highway is pretty lumpy being made of 13 rows of tubulin dimers.

Now chemists are very detail oriented and think in terms of Angstroms (10^-10 meters), about the size of a hydrogen atom. As CEO and typical of cell biologists, I have to think in terms of the big picture, so I think in terms of nanoMeters (10^-9 meters).  Each tubulin dimer is 8 nanoMeters long, and K & D essentially jump from one to the other in 8 nanoMeter steps.  Now the boss is shrinking as he gets older, but my brothers working for players in the NBA have to go more than a meter to contract the gastrocnemius and soleus (among other muscles) to help their bosses jump.  So split the difference and call the distance they have to go one Meter.  How many jumps do Kinesin and Dynein have to make to get there? Just 10^9/8 — call it 125,000,000. The boys also have to jump from one microtubule to another, as the longest microtubule in our division is at most 100 microns (.1 milliMeter).  So even in the best of cases they have to make at least 10,000 transfers between microtubules.  It’s a miracle they get the job done at all.

To put this in perspective, consider a tractor trailer (not a truck — the part with the motor is the tractor, and the part pulled is the trailer — the distinction can be important, just like the difference between rifle and gun, as anyone who’s been through basic training knows quite well).  Say the trailer is 48 feet long, and let that be comparable to the 8 nanoMeters K and D have to jump.  That’s 125,000,000 jumps of 48 feet, or over a million miles.  It’s amazing they get the job done.
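The arithmetic, using the ~8 nanoMeter tubulin-dimer step usually quoted for kinesin:

```python
# The trucking arithmetic, in round numbers.  The 8 nm step (one tubulin
# dimer) is the figure usually quoted for kinesin; the rest are the
# post's deliberately rough distances.
step_nm = 8
trip_m = 1.0                        # cortex to sacral cord, call it a meter
jumps = trip_m * 1e9 / step_nm      # nanometers in the trip / step length
assert jumps == 125_000_000

microtubule_m = 100e-6              # longest microtubule, ~100 microns
transfers = trip_m / microtubule_m
assert abs(transfers - 10_000) < 1e-6

# Trailer analogy: one 48-foot trailer length per jump
miles = jumps * 48 / 5280
assert round(miles) == 1_136_364    # over a million miles of trailer lengths
```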

Second: Thanks to probably the smallest member of the team.  The electron.  Its brain has to be tiny, yet it has mastered quantum mechanics because it knows how to tunnel through a potential barrier.   In order to produce the fuel for K and D it has to tunnel some 20 Angstroms from the di-copper center (CuA) to heme a in cytochrome C oxidase (COX).  Is the electron conscious? Who knows?  I don’t tell it what to do.   Now COX is just a part of one of our larger divisions, the power plant (the mitochondrion).
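For the curious, the scale of the electron's feat can be sketched with the standard square-barrier (WKB-style) formula T ~ exp(-2 kappa d). The 1 eV barrier height below is an arbitrary illustrative assumption (nobody hands you the real protein barrier), but it shows why 20 Angstroms is a long way for an electron:

```python
import math

# Square-barrier tunneling factor T ~ exp(-2*kappa*d) for an electron
# crossing d = 20 Angstroms.  The 1 eV barrier height is an arbitrary
# illustrative assumption, not the real protein barrier.
hbar = 1.054571817e-34     # J*s
m_e = 9.1093837015e-31     # electron mass, kg
eV = 1.602176634e-19       # J

V = 1.0 * eV               # assumed barrier height
d = 20e-10                 # 20 Angstroms in meters

kappa = math.sqrt(2 * m_e * V) / hbar   # decay constant inside the barrier, 1/m
T = math.exp(-2 * kappa * d)            # transmission factor

assert 4e9 < kappa < 6e9       # about 0.5 per Angstrom for a 1 eV barrier
assert 1e-10 < T < 1e-8        # tiny, but not zero: the electron gets through
```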

Third: The power plant.  Amazing to think that it was once (a billion years or more ago) a free living bacterium.  Somehow back in the mists of time one of our predecessors captured it.  The power plant produces gas (ATP) for the motors to work.  It’s really rather remarkable when you think of it.   Instead of carrying a tank of ATP, kinesin and dynein literally swim in the stuff, picking it up from the surroundings as they move down the microtubule.  Amazingly the entire division doesn’t burn up, but just uses the ATP when and where needed.  No spontaneous combustion.

There are some other unsung divisions to talk about (I haven’t forgotten you ladies in the steno pool, and your incredible accuracy — 1 mistake per 100,000,000 letters [ Science vol. 328 pp. 636 - 639 '10 ]).  But that’s for next time.

To think that our organization arose by chance, working by finding a slightly better solution to the problems it faces, boggles this CEO’s mind (but that’s the current faith — so good to see such faith in an increasingly secular world).

Bell’s brilliance and the hedonism of understanding

 

If you’re coming here from FARK.COM and wonder what this is all about, start reading two posts back: “http://luysii.wordpress.com/2010/12/13/bells-inequality-entanglement-and-the-demise-of-local-reality-i/”.  You don’t have to know any physics or math to do so (but it helps, of course).

Bell did most of his work on quantum mechanics (QM) behind everyone’s back.  His day job was accelerator design and particle physics at the European center for such things (CERN).  He was no more satisfied with quantum mechanics than Einstein.  In the 60s the attitude toward the fundamentals behind QM was “Don’t Ask, Don’t Think”.  This persisted for decades, and years later he made sure that anyone wanting to actually attempt an experiment testing his inequalities had a secure position before he’d encourage them.

In the 50′s David Bohm had developed a theory with hidden variables which explained the results of QM perfectly.  Unfortunately, it relied on something called a pilot wave (which you couldn’t measure).  Even worse the pilot wave was non-local affecting everything everywhere at once.  Worse than that, the great mathematician von Neumann had proved that a hidden variable was impossible (mathematically).  So Bell had the guts to decide that one of them had to be wrong, and read both papers (the very definition of Chutzpah).   For those organic chemists in your 30s, imagine that you decided that the Woodward Hoffmann rules were wrong.  Clearly, something you’d have to work on out of sight.

So Bell reads von Neumann’s proof and finds that it’s wrong.  He doesn’t like the nonlocality of Bohm’s theory either, so what to do?  Follow the implications of a hidden variable theory of QM with locality.  This is how he came up with his inequalities.  To do so he had to know a good deal of QM and exactly what its predictions would be for various orientations of the polarizing beam splitters (PBSs) — see the previous two posts if you’re foggy about what PBSs do.  Not only that, but he had to conceive a type of experiment which no one at the time had any clue how to do.  A brilliant, brilliant man, and a great pleasure to finally understand what he did (thanks to Zeilinger).

You’ll have to make your own peace with the implications of entanglement, nonlocality, lack of hidden variables, etc. etc.  Bohr didn’t — “Those who are not shocked when they first come across quantum theory cannot possibly have understood it.”  Feynman didn’t — “I think I can safely say that nobody understands quantum mechanics.”  There are a number of ways to make the world QM describes as weird as possible.  No hidden variables means that the moon isn’t there unless you look at it (measure it), or that a pretty girl isn’t pretty until you look at her. Then there is Schrodinger’s cat (see http://luysii.wordpress.com/2009/11/23/spin-hair-race-and-schrodingers-cat/).  Chemists have long known (and ignored) the fact that electrons in atoms can’t have anything like a classical trajectory.  Why?  How would an electron with a principal quantum number of 2 or more get past a node (where it is never found)?
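For the record, the CHSH form of the inequality is short enough to evaluate yourself. QM predicts a correlation E(a, b) = cos 2(a - b) between polarizer settings for polarization-entangled photons; any local hidden-variable theory caps the combination below at 2, while QM delivers 2*sqrt(2):

```python
import numpy as np

# CHSH form of Bell's inequality.  QM's correlation for polarization-
# entangled photon pairs is E(a, b) = cos(2*(a - b)); local hidden
# variables require |S| <= 2 for the combination below.
def E(a, b):
    return np.cos(2 * (a - b))

a, a2 = 0.0, np.pi / 4              # Alice's settings: 0 and 45 degrees
b, b2 = np.pi / 8, 3 * np.pi / 8    # Bob's settings: 22.5 and 67.5 degrees

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

assert S > 2.0                      # violates the local-realist bound
assert abs(S - 2 * np.sqrt(2)) < 1e-12
```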

After all these years, finally understanding the inequalities was really pleasant, so pleasant in fact that I began thinking about why this was. One reason of course was that I’d read a lot about it and never understood what was going on.  That’s simply getting to the top of an intellectual mountain.

More importantly, I think a lot of the pleasure comes from the completeness of the understanding.  You never understand music, literature or art anything like this.  Playing Mozart or Bach gives me great pleasure, but I’ve never felt  that I completely understood what is going on, even though there certainly is an esthetic kick to it.  I’m not sure anyone does, but if they did, would the pleasure of it be of the same kind?  The stuff that seems to most interest us, is stuff we don’t fully understand.  Interest isn’t the same as pleasure, but trying to figure music out is enjoyable. Leonard Bernstein said something to the effect that if you could capture Beethoven in words you wouldn’t need Beethoven.

Even though we understand a lot about organic chemistry, I don’t think we’re close to understanding it deeply in this sense.  Perhaps that’s why it is so fascinating.

Merry Christmas and Happy New Year to all ! ! !