Category Archives: Quantum Mechanics

Watching electrons being pushed

Would any organic chemist like to watch electrons moving around in a molecule? Is the Pope Catholic? Attosecond laser pulses permit this [ Science vol. 346 pp. 336 – 339 ’14 ]. An attosecond is 10^-18 seconds. The characteristic vibrational motion of atoms in chemical bonds occurs at the femtosecond scale (10^-15 seconds). An electron takes 150 attoseconds to orbit a hydrogen atom [ Nature vol. 449 p. 997 ’07 ]. Of course this is macroscopic thinking at the quantum level, a particular type of doublethink indulged in by chemists all the time — http://luysii.wordpress.com/2009/12/10/doublethink-and-angular-momentum-why-chemists-must-be-adept-at-it/.

The technique involves something called pump probe spectroscopy. Here was the state of play 15 years ago [ Science vol. 283 pp. 1467 – 1468 ’99 ]: using lasers it is possible to blast in a short duration (picoseconds 10^-12 to femtoseconds 10^-15) pulse of energy (the pump pulse) at one frequency (usually ultraviolet, so one type of bond can be excited) and then to measure absorption at another frequency (usually infrared) a short time later (to measure vibrational energy). This allows you to monitor the formation and decay of reactive intermediates produced by the pump (as the time between pump and probe is varied systematically).

Time has marched on and we now have lasers capable of producing attosecond pulses of electromagnetic energy (e.g. light).

A single optical cycle of visible light of 6000 Angstrom wavelength lasts 2 femtoseconds. To see this just multiply the reciprocal of the speed of light (3 * 10^8 meters/second) by the wavelength (6 * 10^3 *10^-10). To get down to the attosecond range you must use light of a shorter wavelength (e.g. the ultraviolet or vacuum ultraviolet).
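The same arithmetic in a few lines of Python, using the 6000 Angstrom figure above:

```python
# Duration of one optical cycle: T = wavelength / c
c = 3.0e8                # speed of light, meters/sec
wavelength = 6000e-10    # 6000 Angstroms in meters
period = wavelength / c  # ~2e-15 sec, i.e. 2 femtoseconds
print(period)
```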

The paper didn’t play around with toy molecules like hydrogen. They blasted phenylalanine with UV light. Here’s what they said “Here, we present experimental evidence of ultrafast charge dynamics in the amino acid phenylalanine after prompt ionization induced by isolated attosecond pulses. A probe pulse then produced a doubly charged molecular fragment by ejection of a second electron, and charge migration manifested itself as a sub-4.5-fs oscillation in the yield of this fragment as a function of pump-probe delay. Numerical simulations of the temporal evolution of the electronic wave packet created by the attosecond pulse strongly support the interpretation of the experimental data in terms of charge migration resulting from ultrafast electron dynamics preceding nuclear rearrangement.”

OK, they didn’t actually see the electron dynamics; they calculated them to explain their results. It’s the Born-Oppenheimer approximation writ large.

You are unlikely to be able to try this at home. It’s more physics than I know, but here’s the experimental setup: “In our experiments, we used a two-color, pump-probe technique. Charge dynamics were initiated by isolated XUV sub-300-as pulses, with photon energy in the spectral range between 15 and 35 eV and probed by 4-fs, waveform-controlled visible/near infrared (VIS/NIR, central photon energy of 1.77 eV) pulses (see supplementary materials).”

Physics to the rescue

It’s enough to drive a medicinal chemist nuts. General anesthetics are an extremely wide-ranging class of chemicals, from Xenon (which has essentially no chemistry) to the steroid alfaxalone (56 atoms). How can they possibly have a similar mechanism of action? It’s long been noted that anesthetic potency is proportional to lipid solubility, so that’s at least something. Other work has noted that enantiomers of some anesthetics vary in potency, implying that they are interacting with something optically active (like proteins). However, note that sphingosine, which is part of many cell membrane lipids (gangliosides, sulfatides etc. etc.), contains two optically active carbons.

A great paper [ Proc. Natl. Acad. Sci. vol. 111 pp. E3524 – E3533 ’14 ] notes that although Xenon has no chemistry it does have physics. It facilitates electron transfer between conductors. The present work does some quantum mechanical calculations purporting to show that Xenon can extend the highest occupied molecular orbital (HOMO) of an alpha helix so as to bridge the gap to another helix.

This paper shows that Xe, SF6, NO and chloroform cause rapid increases in the electron spin content of Drosophila. The changes are reversible. Anesthetic-resistant mutant strains (mutant in what protein?) show a different pattern of spin responses to anesthetics.

So they think general anesthetics might work by perturbing the electronic structure of proteins. It’s certainly a fresh idea.

What is carrying the anesthetic-induced increase in spin? Speculations are bruited about. They don’t think the spin changes are due to free radicals. They favor changes in the redox state of metals. Could it be due to electrons in melanin (the prevalent stable free radical in flies)? Could it be changes in spin polarization? Electrons traversing chiral materials can become spin polarized.

Why this should affect neurons isn’t known, and further speculations are given (1) electron currents in mitochondria, (2) redox reactions where electrons are used to break a disulfide bond.

The article notes that spin changes due to general anesthetics differ in anesthesia resistant fly mutants.

Fascinating paper, and Mark Twain said it best: “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Are Van der Waals interactions holding asteroids together?

A recent post of Derek’s concerned the very weak (high kD) but very important interactions of proteins within our cells — http://pipeline.corante.com/archives/2014/08/14/proteins_grazing_against_proteins.php

Most of this interaction is due to Van der Waals forces — http://en.wikipedia.org/wiki/Van_der_Waals_force. Shape complementarity (e.g. steric factors) and dipole-dipole interactions are also important.

Although important, Van der Waals interactions have always seemed like a lot of hand waving to me.

Well guess what, they are now hypothesized to be what is holding an asteroid together. Why are people interested in asteroids in the first place? [ Science vol. 338 p. 1521 ’12 ] “Asteroids and comets .. reflect the original chemical makeup of the solar system when it formed roughly 4.5 billion years ago.”

[ Nature vol. 512 p. 118 ’14 ] The Rosetta spacecraft reached the comet 67P/Churyumov-Gerasimenko after a 10 year journey, becoming the first spacecraft to rendezvous with a comet. It will take a lap around the sun with the comet and will watch as the comet heats up and releases ice in a halo of gas and dust. It is now flying triangles in front of the comet, staying 100 kiloMeters away. In a few weeks it will settle into a 30 kiloMeter orbit around the comet. It will attempt to place a lander (Philae) the size of a washing machine on its surface in November. The comet is 4 kiloMeters long.

[ Nature vol. 512 pp. 139 – 140, 174 – 176 ’14 ] A kiloMeter sized near Earth asteroid called (29075) 1950 DA (how did they get this name?) is covered with sandy regolith (the heterogeneous material covering solid rock; on earth it includes dust, soil and broken rock). The asteroid rotates every 2+ hours, and it is so small that gravity alone can’t hold the regolith to its surface. An astronaut could scoop up a sample from its surface, but would have to hold on to the asteroid to avoid being flung off by the rotation. So the asteroid must have some degree of cohesive strength. The strength required to hold the rubble together is 64 pascals — about the pressure that a penny exerts on the palm of your hand. A Pascal is 1/101,325 of atmospheric pressure.
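An order-of-magnitude check on the penny comparison; the penny’s mass (2.5 grams) and diameter (19 millimeters) are my assumptions, not figures from the paper:

```python
import math

# Pressure a penny exerts on your palm: force / area
# (penny mass and diameter are assumed values, not from the Nature paper)
mass = 2.5e-3                      # kg
g = 9.81                           # m/s^2
area = math.pi * (19e-3 / 2) ** 2  # m^2
pressure = mass * g / area         # Pascals
print(round(pressure))             # ~86 Pa, same order as the 64 Pa quoted
print(pressure / 101325)           # a tiny fraction of an atmosphere
```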

They think the strength comes from van der Waals interactions between small (1 – 10 micron) grains — making it fairy dust. It’s rather unsatisfying as no one has seen these particles.

The ultimate understanding of the large multi-protein and RNA machines (ribosome, spliceosome, RNA polymerase etc. etc. ) without which life would be impossible will involve the very weak interactions which hold them together. Along with permanent dipole-dipole interactions, charge interactions and steric complementarity, the van der Waals interaction is high on anyone’s list.

Some include dipole-dipole interactions as a type of van der Waals interaction. The really fascinating interaction is the London dispersion force. These are attractions seen between transient induced dipoles formed in the electron clouds surrounding each atomic nucleus.

It’s time to attempt to surmount the schizophrenia which comes from trying to see how quantum mechanics gives rise to the macroscopic interactions between molecules which our minds naturally bring to matters molecular (with a fair degree of success).

Steric interactions come to mind first — it’s clear that an electron cloud surrounding molecule 1 should repel another electron cloud surrounding molecule 2. Shape complementarity should allow two molecules to get closer to each other.

What about the London dispersion forces, which are where most of the van der Waals interaction is thought to lie? We all know that quantum mechanical molecular orbitals are static distributions of electron probability. They don’t fluctuate (at least the ones I’ve read about). If something is ‘transiently inducing a dipole’ in a molecule, it must be changing the energy level of the molecule somehow. All dipoles involve separation of charge, and this always requires energy. Where does it come from? The kinetic energy of the interacting molecules? Macroscopically it’s easy to see how a collision between two molecules could change the vibrational and/or rotational energy levels of a molecule. What does a collision between molecules look like in terms of the wave functions of both? I’ve never seen this. It must have been worked out for single particle physics in accelerators, but that’s something I’ve never studied.

One molecule inducing a transient dipole in another, which then induces a complementary dipole in the first molecule, seems like a lot of handwaving to me. It also appears to be getting something for nothing, contradicting the second law of thermodynamics.
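For what it’s worth, the standard quantum mechanical answer is that the dispersion attraction falls out of second-order perturbation theory: the ground state of the two-atom system simply lies lower in energy than that of the separated atoms, so nothing is being created from nothing. London’s approximate formula for two atoms with ionization energies I_1, I_2 and polarizabilities alpha_1, alpha_2 (standard textbook material, not from the papers above) is:

```latex
E_{\text{disp}}(r) \;\approx\; -\,\frac{3}{2}\,
  \frac{I_1 I_2}{I_1 + I_2}\,
  \frac{\alpha_1\, \alpha_2}{r^6}
```

The r^-6 falloff is why these forces only matter when molecules are nearly in contact.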

Any thoughts from the physics mavens out there?

Keep on truckin’ Dr. Schleyer

My undergraduate advisor (Paul Schleyer) has a new paper out in the 15 July ’14 PNAS, pp. 10067 – 10072, at age 84+. Bravo! He upends what we were always taught about electrophilic aromatic addition of halogens. The Arenium ion is out (at least in this example). Anyone with a smattering of physical organic chemistry can easily follow his mechanistic arguments for a different mechanism.

However, I wonder if any but the hardiest computational chemistry jock can understand the following (which is how he got his results) and decide if the conclusions follow.

Our Gaussian 09 (54) computations used the 6-311+G(2d,2p) basis set (55, 56) with the B3LYP hybrid functional (57⇓–59) and the Perdew–Burke–Ernzerhof (PBE) functional (60, 61) augmented with Grimme et al.’s (62) density functional theory with added Grimme’s D3 dispersion corrections (DFT-D3). Single-point energies of all optimized structures were obtained with the B2-PLYP [double-hybrid density functional of Grimme (63)] and applying the D3 dispersion corrections.

This may be similar to what happened with functional MRI in neuroscience, where you never saw the raw data, just the end product of the manipulations on the data (e.g. how the matrix was inverted and what manipulations of the inverted matrix were required to produce the pretty pictures shown). At least here, the tools used are laid out explicitly.

For some very interesting work he did last year please see http://luysii.wordpress.com/2013/07/08/schleyer-is-still-pumping-out-papers-crystallization-of-a-nonclassical-norbornyl-cation/

At the Alumni Day

‘It’s Complicated’. No, this isn’t about the movie where Meryl Streep made a feeble attempt to be a porn star. It’s what I heard from a bunch of Harvard PhD physicists who had listened to John Kovac talk about the BICEP2 experiment a day earlier. I had figured as a humble chemist that if anyone would understand why polarized light from the Cosmic Background Radiation would occur in pinwheels, they would. But all the ones I talked to admitted that they didn’t.

The experiment is huge for physics and several articles explain why this is so [ Science vol. 343 pp. 1296 – 1297, vol. 344 pp. 19 – 20 ’14, Nature vol. 507 pp. 281 – 283 ’14 ]. BICEP2 provided strong evidence for gravitational waves, cosmic inflation, and the existence of a quantum theory of gravity (assuming it holds up and something called SPIDER confirms it next year). The nice thing about the experiment is that it found something predicted by theory years ago. This is the way Science is supposed to operate. Contrast this with the climate models which have been totally unable to predict the more than decade of unchanged mean global temperature that we are currently experiencing.

Well, we know gravity can affect light — this was the spectacular experimental confirmation of General Relativity by Eddington nearly a century ago. But how quantum fluctuations in the gravitational field lead to gravitational waves, and how these waves lead to the polarization of the background electromagnetic radiation occurring in pinwheels, is a mystery to me and to a bunch of physicists far more high powered than I’ll ever be. If someone can explain this, please write a comment. The articles cited above are very good at explaining context and significance, but they don’t even try to explain why the data looks the way it does.

The opening talk was about terrorism, and what had been learned about it by studying worldwide governmental responses to a variety of terrorist organizations (Baader Meinhof, Shining Path, Red Brigades). The speaker thought our response to 9/11 was irrational — refusing to fly when driving is clearly more dangerous etc. etc. It was the typical arrogance of the intelligent, who cannot comprehend why everyone does not think the way they do.

I thought it was remarkable that a sociologist would essentially deprecate the way people think about risk. I’m sure that many in the room were against any form of nuclear power, despite its safety compared to everything else and absent carbon footprint.

Addendum 7 April — The comment by Handles and the link he provided are quite helpful, although I still don’t understand it as well as I’d like. Here’s the link https://medium.com/p/25c5d719187b

The New Clayden pp. 43 – 106

The illustrations in the book that I comment on can be reached on the web by substituting the page number I give for xx in the following

My guess is that people who haven’t bought the book will be tempted to do so after looking at a few of them.

p. 50 — The reason that exact masses of various isotopes of a given element aren’t integers lies partly in the slight mass difference between a proton (1.67262 * 10^-27 kiloGrams) and a neutron (1.67493 * 10^-27 kiloGrams); electrons, with a mass of 9.10956 * 10^-31 kiloGrams, also contribute. The biggest effect, though, is the nuclear binding energy, which makes an atom weigh less than the sum of its parts. I didn’t realize that particle masses can now be measured to 6 significant figures. Back in the day it was 4. Impressive.
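A quick sketch of the point, adding up carbon-12’s constituents using the particle masses quoted above. The atomic mass unit value is my addition (the CODATA figure), not from the book; carbon-12 is defined as exactly 12 u:

```python
# Sum the masses of carbon-12's parts and compare with the atom itself;
# the difference is the nuclear binding energy (mass defect).
m_proton   = 1.67262e-27   # kg
m_neutron  = 1.67493e-27   # kg
m_electron = 9.10956e-31   # kg
u = 1.66054e-27            # kg per atomic mass unit (CODATA, my addition)

constituents = 6 * m_proton + 6 * m_neutron + 6 * m_electron
atom = 12 * u              # carbon-12 is exactly 12 u by definition
mass_defect = constituents - atom
print(mass_defect / atom)  # ~0.008: binding energy is roughly 0.8% of the mass
```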

p. 52 — Nice to see a picture of an MRI scanner (not NMR scanner).  MRI stands for magnetic resonance imaging.  The chemist might be wondering why the name change.  Because it would be very difficult to get a fairly large subset of patients to put their head in anything with nuclear in the name.

It’s also amusing to note that in the early days of NMR, chemists worked very hard to keep out water, as the large number of hydrogens in water would totally swamp the signal of the compound you were trying to study.  But the brain (and all our tissues) is quite wet with lots of water.

p. 53 — (sidebar).  Even worse than a workman’s toolbox permanently attaching itself to an NMR magnet, is the following:  Aneurysms of brain arteries are like bubbles on an inner tube.  Most have a neck, and they are treated by placing a clip across the neck.  Aneurysm clips are no longer made with magnetic materials, and putting a patient with such a clip in an MRI is dangerous.     A patient with a ferromagnetic aneurysm clip suffered a fatal rupture of the aneurysm when she was placed in an MRI scanner [ Radiol. vol. 187 pp. 612 – 614, 855 – 856 ’93 ].

p. 53 —     NMR  shows the essential weirdness of phenomena at the quantum mechanical level, and just how counter intuitive it is.  Consider throwing a cannonball at a brick wall.  At low speeds (e.g. at low energies) it hits the wall and bounces back.  So at the end you see the cannonball on your side of the wall.  As speeds and energies get higher and higher the cannonball eventually goes through the wall, changing the properties of the wall in the process.  Radiowaves have very low energies relative to visible light (their wavelengths are much longer so their frequencies are much lower, and energy of light is proportional to frequency).  So what happens when you throw radiowaves at an organic compound with no magnetic field present — it goes right through it (e.g. it is found on the other side of the brick wall).  NMR uses a magnetic field  to separate the energy levels of a spinning charged nucleus enough that they can absorb light.  Otherwise the light just goes past the atom without disturbing it.  Imagine a brick wall that a cannonball goes through without disturbing and you have the macroscopic analogy.

p. 53 — Very nice explanation of what the signal picked up by the NMR machine actually is — it is the energy put into flipping a spin up against the field, coming out again.  It’s the first text I’ve read on the subject that makes this clear at the start.

p. 54 — Note that the plot of absorption (really emission) of energy has higher frequencies to the left rather than the right (unlike every other plot of anything numeric I’ve ever seen).  This is the way it is.  Get used to it.

p. 55 — The stronger the magnetic field, the more nuclear energy levels are pulled apart, the greater the energy difference between them, and thus the higher frequency of electromagnetic radiation resulting from the jump between nuclear energy levels.
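The relation is linear. For protons, resonance frequency = gyromagnetic ratio * field strength; a short sketch (the 42.577 MHz/Tesla figure for 1H is my addition, not from the book):

```python
# Proton resonance frequency scales linearly with magnetic field:
# frequency (MHz) = gamma_bar (MHz per Tesla) * B (Tesla)
gamma_bar = 42.577   # MHz per Tesla for 1H (standard value, my addition)

# a clinical MRI magnet, a "400 MHz" NMR magnet, a top research magnet
for B in (1.5, 9.4, 21.1):
    print(B, "Tesla ->", round(gamma_bar * B, 1), "MHz")
```

This is why NMR machines are named by frequency: a “400 MHz” spectrometer is a 9.4 Tesla magnet.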

p. 56 — I was amazed to think that frequencies can be measured with accuracies better than 1 part per million, but an electrical engineer who married my cousin assures me that this is not a problem.

p. 57  The following mnemonic may help you keep things straight

low field <==> downfield <==> bigger chemical shift <==> deshielded

mnemonic loadd of bs

It’s far from perfect, so if you can think of something better, please post a comment.

p. 64 — Error — infrared wavelengths are not “between 10 and 100 mm”.  They start just over the wavelength of visible light (8000 Angstroms == 800 nanoMeters == 0.8 microMeters) and go up to 1 milliMeter (1,000 microMeters).

p. 64 — Here’s what a wavenumber is, and how to calculate it.  The text says that the wavenumber in cm^-1 (reciprocal centimeters) is the number of wavelengths in a centimeter.  So wavenumbers are proportional to frequency.

To calculate this, the wavelength (usually given in Angstroms, nanoMeters, microMeters or milliMeters) should be expressed in meters, as should the centimeter (which is 10^-2 meters).  Then we have

(# wavelengths/centiMeter) * (wavelength in meters) = 10^-2 meters

Thus visible light with a wavelength of 6000 Angstroms == 600 nanoMeters packs

(# wavelengths/cm) * 600 * 10^-9 = 10^-2

waves into a centimeter, so its wavenumber is 1/6 * 10^5 reciprocal centimeters, i.e. 16,667 cm^-1.  The lowest wavenumber of visible light is 12,500 cm^-1, corresponding to 8000 Angstroms at the red end; the highest is 25,000 cm^-1, corresponding to 4000 Angstroms at the violet end.

Wavenumbers can be converted to frequencies by multiplying by the velocity of light in centimeters per second, e.g. 3 * 10^10 cm/sec.  So the highest frequency of visible light is 25,000 * 3 * 10^10 = 7.5 * 10^14 Hz — nearly a petaHertz.

IR wavenumbers range from 4000 cm^-1 (25,000 Angstroms) to 500 cm^-1 (200,000 Angstroms).
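These conversions are easy to get wrong by a power of 10; here is the same arithmetic as a small Python sketch (the function names are mine):

```python
# Wavelength (in Angstroms) to wavenumber (cm^-1) and frequency (Hz)
def wavenumber_cm(wavelength_angstroms):
    wavelength_cm = wavelength_angstroms * 1e-8   # 1 Angstrom = 10^-8 cm
    return 1.0 / wavelength_cm                    # wavelengths packed into 1 cm

def frequency_hz(wavelength_angstroms):
    c = 3.0e10                                    # speed of light in cm/sec
    return wavenumber_cm(wavelength_angstroms) * c

print(wavenumber_cm(6000))   # ~16,667 cm^-1 for 6000 Angstrom light
print(wavenumber_cm(8000))   # ~12,500 cm^-1, the red end of the visible
print(frequency_hz(4000))    # ~7.5e14 Hz at the violet end
```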

p. 65 — In the bottom figure, the line between bonds to hydrogen and triple bonds should be at 2500 cm^-1 rather than 3000 to be consistent with the text.

p. 71 — “By contrast, the carbonyl group is very polarized with oxygen attracting the electrons”  — the electronegativity values 2.5 and 3.5 of C and O could be mentioned here.

p. 81-1, 81-2 — The animations continue to amaze.  The latest show the charge distribution, dipole moments of each bond, electron density, stick models and space filling models of a variety of small molecules — you can rotate the molecule, shrink it or blow it up.

To get to any given model mentioned here, use the address given at the top of this post with the page number replacing xx.  The 81-1 and 81-2 are to be substituted for xx, as there are two interactive web pages for page 81.  Fabulous.

Look at enough of them and you’ll probably buy the book.

p. 82 — “Electrons have quantized energy levels”  Very misleading, but correct in a sense which a naive reader of this book wouldn’t be expected to know.   This should be changed to “Atoms (and/or Molecules) have quantized electronic energy levels.”   In the first paragraph of the section the pedagogical boat is righted.

p. 85 — The introspective will pause and wonder about the following statement — “In an atom, a node is a point where the electron can never (italics) be found — a void separating the two parts of the orbital”.  Well, how does a given electron get past a node from one part of an orbital to the other?  This is just more of the weirdness of what’s going on at the quantum mechanical level (which is dominant at atomic scales).  There is no way you can regard the electron in the 2s orbital as having a trajectory (or an electron anywhere else, according to QM). The idea that trajectories need to be abandoned in QM isn’t mine but that of a physicist, Mark P. Silverman. His books are worth a look, if you’re into getting migraines from considering what quantum mechanics really means about the world. He’s written 4 according to Amazon.

p. 86 — Very worthwhile looking at the web page for the 3 dimensional shapes of atomic orbitals — particularly the d and f orbitals.  FYI, p orbitals have 1 nodal plane, d orbitals have 2 and f orbitals have 3.   If you’re easily distractible, steel yourself, as this web page has links to other interesting web pages with all sorts of molecular orbitals.  This one has links to ‘simple’ molecular orbitals for 11 more compounds ranging from hydrogen to benzene.

p. 87 — Nice to know where s, p, d, and f come from — the early days of spectroscopy — s = sharp, p = principal, d = diffuse, f = fundamental.

p. 87 — “There doesn’t have to be someone standing on a stair for it to exist”  great analogy for empty orbitals.

p. 89 — The first diagram on the page is misleading and, in fact, quite incorrect.  The diagram shows that the bonding orbital is lower in energy than the atomic 1s orbitals by exactly the same amount as the antibonding orbital is higher.  This is not the case.  In such a situation the antibonding orbital is raised in energy by a greater amount than the bonding orbital is lowered.  The explanation is quite technical, involving overlap integrals and the secular equation (far too advanced to bring in here, but the fact should be noted nonetheless).  Anslyn and Dougherty have a nice discussion of this point on pp. 828 – 831.

p. 91 — The diagram comes back to bite as “Since there is no overall bonding holding the two atoms together, they can drift apart as two separate atoms with their electrons in 1s AOs”.  Actually what happens is that they are pushed apart because the destabilization of the molecule by putting an electron in the antibonding  molecular orbital is greater than the stabilization of the remaining electron in the bonding molecular orbital (so the bonding orbital can’t hold the atoms together).   Ditto for the explanation of why diatomic Helium doesn’t exist.

p. 94 — The rotating models of the bonding and antibonding orbitals of N2 are worth a look, and far better than a projection onto two dimensional space (e.g. the printed page).  See the top of this post for how to get them.

p. 95 — It’s important to note that nitric oxide is used by the brain in many different ways — control of blood flow, communication between neurons, neuroprotection after insults, etc. etc.  These are just a few of its effects, more are still being found.

p. 100 — I guess British undergraduates know what a pm is.  Do you?  It’s a picoMeter (10^-12 meters) — 100 pm to the Angstrom.  Bond lengths of ethylene are given as C-H = 108 pm, C=C as 133 pm.  I find it easier to think of them as 1.08 Angstroms and 1.33 Angstroms — but that’s how I was brung up.

p. 103 — It is mentioned that sp orbitals are lower in energy than sp2 orbitals, which are lower in energy than sp3 orbitals.  The explanation given is that s orbitals are of lower energy than p orbitals — I’m not sure the reason why was given.  It’s because s electrons get much closer to the nucleus (on average) than electrons in p orbitals (which have a node there).  Why should this lower the energy?  Because the closer an electron gets to the positively charged nucleus, the less charge separation there is, and separating charges costs energy.

p. 105 — Mnemonic for Z (cis) and E (trans) just say cis in French — it sounds like Zis.

Going to the mat with representation, characters and group theory

Willock’s book (see https://luysii.wordpress.com/category/willock-molecular-symmetry/) convinced me of the importance of the above in understanding vibrational spectroscopy.  I put it aside because the results were presented, rather than derived.  From the very beginning (the 60’s for me) we were told that group theory was important for quantum mechanics, but somehow even a 1 semester graduate course back then didn’t get into it.  Ditto for the QM course I audited a few years ago.

I’ve been nibbling about the edges of this stuff for half a century, and it’s time to go for it.  Chemical posts will be few and far between as I do this, unless I run into something really interesting (see https://luysii.wordpress.com/2012/02/18/a-new-way-to-study-reaction-mechanisms/).  Not to worry – plenty of interesting molecular biology, and even neuroscience, will appear in the meantime, including a post about an article showing that just about everything we thought about hallucinogens is wrong.

So, here’s my long and checkered history with groups etc.  Back in the day we were told to look at “The Theory of Groups and Quantum Mechanics” by Hermann Weyl.  I dutifully bought the Dover paperback, and still have it (never throw out a math book, always throw out medical books if more than 5 years old).  What do you think the price was — $1.95 — about two hours work at minimum wage then.  I never read it.

The next brush with the subject was a purchase of Wigner’s book “Group Theory and its Application to the Quantum Mechanics of Atomic Spectra”  — also never read but kept.  A later book (see Sternberg later in the post)  noted that the group theoretical approach to relativity by Wigner produced the physical characteristics of mass and spin as parameters in the description of irreducible representations.  The price of this one was $6.80.

Then as a neurology resident I picked up “Group Theory” by Morton Hammermesh (Copyright 1962).  It was my first attempt to actually study the subject.  I was quickly turned off by the exposition.  As groups got larger (and more complicated) more (apparently) ad hoc apparatus was brought in to explain them — first cosets, then  subgroups, then normal subgroups, then conjugate classes.

That was about it, until retirement 11 years ago.  I picked up a big juicy (and cheap) Dover paperback “Modern Algebra” by Seth Warner — a nice easy introduction to the subject.

Having gone through over half of Warner, I had the temerity to ask to audit an Abstract Algebra course at one of the local colleges.  I forget the text, but I didn’t like it (neither did the instructor).  We did some group theory, but never got into representations.

A year or so later, I audited a graduate math abstract algebra course given by the same man.  I had to quit about 3/4 through it because of an illness in the family.  We never got into representation theory.

Then, about 3 years ago, while at a summer music camp, I got through about 100 pages of “Representations and Characters of Groups” by James and Liebeck.  The last chapter in the book (starting on p. 366) is on an application to molecular vibration.  The book was hard to use because they seemed to use mathematical terms — module, for example — differently than the ones I was used to.  100 pages was as far as I got.

Then I had the pleasure of going through Cox’s book on Galois theory, seeing where a lot of group theory originated  (along with a lot of abstract algebra) — but there was nothing about representations there either.

Then after giving up on Willock, a reader suggested  “Elements of Molecular Symmetry” by Ohrn.  This went well until p. 28, when his nomenclature for the convolution product threw me.

So I bought yet another book on the subject which had just come out, “Representation Theory of Finite Groups” by Steinberg.  No problems going through the first 50 pages, which explain what representations, characters and irreducibles are.  Tomorrow I tackle p. 52, where he defines the convolution product.  Hopefully I’ll be able to understand it and go back to Ohrn — which is essentially chemically oriented.

The math of it all is a beautiful thing, but the immediate reason for learning it is to understand chemistry better.  I might mention that I own yet another book “Group Theory and Physics” by Sternberg, which appears quite advanced.  I’ve looked into it from time to time and quickly given up.

Anyway, it’s do or die time with representation theory.  Wish me luck.

Where has all the chemistry gone?

Devoted readers of this blog (assuming there are any) must be wondering where all the chemistry has gone.  Willock’s book convinced me of the importance of group theory in understanding what solutions we have of the Schrodinger equation.  Fortunately (or unfortunately) I have the mathematical background to understand group characters and group representations, but I found Willock’s presentation of just the mathematical results unsatisfying.

So I’m delving into a few math books on the subject. One is “Representations and Characters of Groups” by James and Liebeck (which provides an application to molecular vibration in the last chapter, starting on p. 367).  It’s clear, and for the person studying this on their own, does have solutions to all the problems. Another is “Elements of Molecular Symmetry” by Ohrn, which I liked quite a bit.  But unfortunately I got stymied by the notation M(g)alpha(g) on p. 28. In particular, it’s not clear to me if the A in equations (4.12) and (4.13) is the same thing.

I’m also concurrently reading two books on computational chemistry, but the stuff in there is pretty cut and dried and I doubt that anyone would be interested in comments as I read them.  One is “Essential Computational Chemistry” by Cramer (2nd edition).  The other is “Computational Organic Chemistry” by Bachrach.  The subject is a festival of acronyms (and I thought the army was bad) and Cramer has a list of a mere 284 of them starting on p. 549. On p. 24 of Bachrach there appears the following: “It is at this point that the form of the functionals begins to cause the eyes to glaze over and the acronyms appear to be random samplings from an alphabet soup.”  I was pleased to see that Cramer thinks 40 pages or so of Tom Lowry and Cathy Richardson’s book is still worth reading on molecular orbital theory, even though it was 24 years old at the time Cramer referred to it.  They’re old friends from grad school.   I’m also pleased to see that Bachrach’s book contains interviews with Paul Schleyer (my undergraduate mentor).  He wasn’t doing anything remotely approaching computational chemistry in the late 50s (who could?).  Also there’s an interview with Ken Houk, who was already impressive as an undergraduate in the early 60s.

Maybe no one knows how all of the above applies to transition metal organic chemistry, which has clearly revolutionized synthetic organic chemistry since the 60’s, but it’s time to know one way or the other before tackling books like Hartwig.

Another (personal) reason for studying computational chemistry is so I can understand whether the protein folding people are blowing smoke or not.  Also it appears to be important in drug discovery, or at least is supporting Ashutosh in his path through life.  I hope to be able to talk intelligently to him about the programs he's using.

So stay tuned.

Anslyn pp. 935 – 1000

The penultimate chapter of Anslyn is an excellent discussion of photochemistry, with lots of physics clearly explained, but it leaves unanswered one question which has always puzzled me: how long does it take for a photon of a given wavelength to be absorbed?  On p. 811 there is an excellent discussion of the way the quantum mechanical operator for kinetic energy (-hBar^2/2m * del^2) is related to kinetic energy.  The more the wavefunction changes in space, the higher the energy.  Note that the wavefunction applies to particles with mass (protons, neutrons, electrons).

Nonetheless, in a meatball sort of way, apply this to the (massless) photon.  Consider light from the classical point of view, as magnetic and electrical fields which fluctuate in time and space.  The fields of course exert force on charged particles, and one can imagine photons exerting forces on the electrons around a nucleus and  changing their momentum, hence doing work on them.  Since energy is conserved (even in quantum mechanics), it’s easy to see how the electrons get a higher energy as a result.  The faster the fields fluctuate, the more energy they impart to the electrons.

Now consider a photon going past an atom and being absorbed by it.  It seems that a full cycle of field fluctuation must pass the atom.  So here's a back of the envelope calculation, which seems to work out.  Figure an atomic diameter of 1 Angstrom (10^-10 meters).  The chapter is about photochemistry, which is absorption of light energetic enough to change electronic energy levels in an atom or a molecule.  All the colored things we see are colored because their electronic energy levels are absorbing photons of visible light — the colors actually result from the photons NOT absorbed.  So choose light of 6000 Angstroms — which has a wavelength of 6 * 10^-7 meters.

In one second, light moves 3 * 10^8 meters, regardless of how many wavelengths it contains.  If the wavelength were 1 meter, it would move past a point in 1/(3 * 10^8) seconds.  But the wavelength of the visible light I chose is 6 * 10^-7 meters, so the wavelength moves past in (6 * 10^-7)/(3 * 10^8) = 2 * 10^-15 seconds, which (I think) is how long it takes visible light to be absorbed.  Have I made a mistake?  Are there any cognoscenti out there to tell me different?
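Here's the arithmetic as a few lines of code, using the numbers above (the assumption that absorption takes one full optical cycle is just the guess made here, not established physics):

```python
# Time for one full wavelength of visible light to pass a fixed point --
# the back-of-the-envelope "absorption time" estimated above.
c = 3.0e8            # speed of light, meters/second
wavelength = 6.0e-7  # 6000 Angstroms of visible light, in meters

transit_time = wavelength / c  # seconds for one optical cycle to pass
print(transit_time)            # 2e-15 seconds, i.e. 2 femtoseconds
```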

That was a classical way of looking at it.  Now for the bizarreness of quantum mechanics.  How does the wavelength of the photon get sucked up by something 1/6000th of its size, particularly when there are probably at least 10^9 atoms in a volume 6,000 Angstroms on a side?  It gets worse with NMR, because the radiowave absorbed by a nucleus has a wavelength of 1 meter, and the nucleus is 10^-4 the size of an atom.  Essentially I'm asking about the collapse of the wavefunction of a photon (assuming photons have one).

p. 936 — “We show wavelength in the condoned (italics) unit of nanoMeter  . . . “  It may be condoned, but this chemist thinks in Angstroms, and my guess is that most chemists do, because atomic radii and diameters are small numbers in Angstroms, not fractions of a nanoMeter.

p. 939 — "Absorption of two photons or multiple photons . . . does not occur, except with special equipment . . . "  True enough, but the technique is now widely used in biological research, and this is not new.  [ Nature vol. 375 pp. 682 – 685 '95 ] In contrast to conventional microscopy, two long wavelength photons are simultaneously absorbed in two photon fluorescence microscopy (multiphoton microscopy) < [ Science vol. 300 p. 84 '03 ] — actually within a few femtoSeconds — I thought simultaneity was asking too much > and combine their energies to excite a fluorophore not normally absorbing at this wavelength.  This permits the use of infrared light to excite the fluorophore.  By using low energy (near infrared) light rather than higher energy visible light photons, light induced degradation of biological samples is minimized.

p. 939 — Manifold probably really refers to the potential energy surface associated with the different energy levels, rather than the numeric value of the energy level. 

p. 940 — Look at transition dipoles very hard if you want to understand Forster resonance energy transfer (FRET), which is widely used in biology to determine how proteins associate with each other.

p. 944 — How in the world did they get enough formaldehyde in the excited state to measure it — or is this a calculation?

p. 947 — Nice exposition on GFP (Green Fluorescent Protein) which has revolutionized cellular biology.   But the organic chemist should ask themselves, why don’t chemical reactions between the hundreds of side chains on a protein happen all the time?  For more on this point see http://luysii.wordpress.com/2009/09/25/are-biochemists-looking-under-the-lamppost/

p. 951 — How do you tell phosphorescence from fluorescence?  The lifetime for phosphorescence is much longer (.1 – 10 seconds), but is this enough?

p. 970 — The chemistry of photolyases, which repair thymine photodimers, is interesting.  Here's a bit more information.  [ Proc. Natl. Acad. Sci. vol. 99 pp. 1319 – 1322 '02 ] Enzymes repairing cyclobutane dimers are called photolyases.  The enzymes contain a redox active flavin adenine dinucleotide (FAD), and a light harvester (a methenyltetrahydrofolate < a pterin > in most species).  It has been proposed that the initial step in the DNA repair mechanism is a photoinduced single electron transfer from the FAD cofactor (which in the active enzyme is in its fully reduced form — FADH-) to the DNA lesion.  The extra electron goes into the antibonding orbital of one of the C–C bonds of the dimer.  (The electron donated is on the adenine of FADH-.)  The entire process takes less than a nanoSecond.  Electron transfer to the dimer takes 250 picoSeconds.  The dimer then opens within 90 picoSeconds, and the electron comes back to the FADH- cofactor in 700 picoSeconds.  This all happens because the dimer has been flipped out of the DNA into a binding pocket of the photolyase (how long does this take?).

       Interestingly, photolyases use less energetic light than the natural absorption of thymine dimers (2500 Angstroms).   Photoexcitation of the enzyme culminates in electron donation from the excited state flavin directly to the thymine dimer. 

p. 973 — The photochemical reactions are impressive synthetically, and represent a whole new ball game in making fused rings.  The synthesis of cubane is impressive, and I wouldn't have thought quadricyclane could have been made at all.

p. 980 — Caged compounds and their rapid release are incredibly important in biological research, particularly brain research.  Glutamic acid is the main excitatory neurotransmitter in brain, and the ability to release it very locally in the brain and watch what happens subsequently is extremely useful.

p. 987 — Since the bond dissociation energy of O2 is given (34 kiloCalories/Mole) and C=O bonds are stated to be quite strong, why not just say the BDE of C=O is 172 KCal/M?

p. 992 — Good to see Sam Danishefsky has something named for him.

Anslyn pp. 807 – 876 (Chapter 14)

p. 807 “Most chemists think about bonding improperly”  – What an opening salvo for this Chapter — “Advanced Concepts in Electronic Structure Theory”.  I think A&D’s reasons for this are correct (at least for me).  They can be found on p. 813 (see the note) and p. 838 (ditto)

p. 808 — "These wavefunctions contain all the observable information about the system."  A huge assumption, and in fact a postulate of quantum mechanics.  It's OK to accept, actually, since QM has never made an incorrect prediction.

p. 809 — “In classical mechanics, the forces on a system create two kinds of energy — kinetic and potential”.  Hmm.  How does force ‘create’ energy?  It does so by doing work.  Work is force * distance, and if you do a dimensional analysis, you find that force * distance has the dimensions of kinetic energy (mass * velocity^2) — It’s worth working through this yourself, if you’ve been away from physics for a while.  Recall that potential energy is the general name for  energy which has to do with location relative to something else.

After reading Lawrence Krauss's biography of Feynman (which goes much more into the actual physics than other biographies, including Gleick's), I cracked open the 3 volumes of the Feynman lectures on physics and have begun reading.  It's amazing how uncannily accurate his speculations were, particularly about things which weren't known in the 60s but which are known now.

       Feynman lists 9 types of energy (lecture 4 page 2):
  1. gravitational (a type of potential energy)
  2. kinetic
  3. heat
  4. elastic
  5. electric
  6. chemical
  7. radiant (??)
  8. nuclear
  9. mass

      He says that we really don't know what energy is (even though we know 9 forms in which it appears), just that it's conserved.  Even so, the conservation law allows all sorts of physics problems to be solved.  To really get into why energy is conserved, you have to read about Noether's theorem — which I'm about to do, using a book called "Emmy Noether's Wonderful Theorem" by Dwight E. Neuenschwander.

      Later (Lecture 4 page 4) Feynman defines potential energy as the general name of energy which has to do with location relative to something else.

p. 809 — The QM course I audited 2 years ago noted that the Schrodinger equation really can't be derived, but is used because it works.  However, the prof then proceeded to give us a nice pseudo-derivation based on the standard equation for a wave propagating in space and time, Einstein's E = h * nu, and de Broglie's p = h/lambda, differentiating the wave equation twice with respect to position and twice with respect to time, and equating what he got.

However, to get the usual Hamiltonian, he had to arbitrarily throw in a term for potential energy (because it works).  
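For what it's worth, here is a sketch of the usual textbook version of that pseudo-derivation for a free particle (one common version differentiates only once with respect to time, rather than twice; the potential V gets thrown in at the end, just as the prof did, because it works):

```latex
% free-particle plane wave
\psi(x,t) = e^{i(kx - \omega t)}
% differentiate with respect to position (twice) and time (once)
\frac{\partial^2 \psi}{\partial x^2} = -k^2\,\psi, \qquad
\frac{\partial \psi}{\partial t} = -i\omega\,\psi
% substitute Einstein's E = \hbar\omega and de Broglie's p = \hbar k
% into E = p^2/2m, then add the potential term V:
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi}{\partial x^2} + V\psi
```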

p. 810 — "The energy E is simply a number."  It should have said "The energy E is simply a real number," which is exactly why the complex conjugate must be used.  If you really want to know what's going on, see the 10 articles in the category Linear Algebra Survival Guide for QM.

p. 811 – 812 — The qualitative discussion of the Laplacian is great — it also explains why higher frequency light has higher energy.  Worth the price of the book.  Localizing an electron increases its energy by the Heisenberg uncertainty principle, which was the reasoning I’d grown up with.  

One point for the uninitiated (into the mysteries of quantum mechanics) to consider.  "The more nodes an orbital has, the higher is its energy.  Recall from Chapter 1 that nodes are points of zero electron density, where the wavefunction changes sign."  Well, a point of zero electron density, or a point at which the wavefunction equals zero, means the electron is never (bold) found there.  So why is the probability of finding an electron on both sides of the node not zero?  You need to abandon the notion that an electron has a trajectory within an atom.  Having done so, what does angular momentum mean in quantum mechanics?

p. 813 — Interesting that the electrostatic arguments for bonding (shielding nuclei from each other, etc. etc., which I've heard a zillion times) are incorrect.  This probably explains the opening salvo of the chapter.  (See also the note on p. 838.)

p. 814 — "This is the fundamental reason that a bond forms; the kinetic energy of the electrons in the bonding region is lower than the kinetic energy of the electrons in isolated atomic orbitals."  This is because the wave function amplitude changes less between the nuclei.  However, since we've had to abandon the notion of a trajectory — what does kinetic energy actually mean in the quantum mechanical situation?  (See the note on pp. 811 – 812.)

p. 814 — "The greater the overlap between two orbitals, the lower the kinetic energy."  To really see this you have to look at figure 14.6 on p. 813 — the greater the overlap, the shallower the depression of the wave function amplitude between the nuclei, which implies less change in amplitude with distance, which implies a smaller Laplacian (a second derivative) and less kinetic energy for the electrons there.  So this is why overlapping atomic orbitals result in lower electron kinetic energy at sites of overlap — e.g. why bonds form (bold).  Great stuff ! ! ! !

      Continuing on, the next paragraph explains where Morse potentials (p. 422) come from, and why populating antisymmetric orbitals causes repulsion (the change in orbital sign increases the Laplacian greatly, and with it the electron kinetic energy, despite the fact that the potential energy of the antisymmetric orbitals is favorable for keeping the atoms close — e.g. bonding).

p. 815 — What does the Born Oppenheimer approximation (which keeps internuclear distances fixed) do to the calculation of vibrational energies — which depend on nuclear motion?  The way the energy of the solutions of the Schrodinger equation using the BO approximation is gradually approached (moving the nuclei around and calculating the energy) clearly won't work for CnH2n+2 with n > 2 — there will be more than a single minimum.  What about a small protein?  Clearly in these situations the Born Oppenheimer approximation is hopeless.  Because of the difficulty in understanding A&D's discussion of the secular equation (see comments on p. 828), I've taken to reading other books (which have the advantage of devoting hundreds of pages to computational chemistry, vs. A&D's 60), notably Cramer's "Essentials of Computational Chemistry."  He notes that without the Born Oppenheimer approximation, the concept of the potential energy surface vanishes.

p. 816 – 817 — Antisymmetric wave functions and Slater determinants are interesting ways to look at the Pauli exclusion principle.  The Slater determinant is basically a linear combination of orbitals — why is this allowed?  Because the orbitals are solutions of a differential equation, and the differential of a sum of functions is the same as the sum of the differentials of the functions.
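The antisymmetry is easy to see numerically.  Here's a minimal sketch with two made-up one-electron orbitals (the orbital functions are arbitrary, chosen only to illustrate the sign flip when two electrons are exchanged):

```python
import numpy as np

# Two hypothetical one-electron spatial orbitals (any two functions will do)
phi1 = lambda x: np.exp(-x**2)
phi2 = lambda x: x * np.exp(-x**2)

def slater(x1, x2):
    # Unnormalized 2-electron Slater determinant:
    # rows are electrons, columns are orbitals
    return np.linalg.det(np.array([[phi1(x1), phi2(x1)],
                                   [phi1(x2), phi2(x2)]]))

# Swapping the two electrons swaps the rows, which flips the sign:
print(slater(0.3, 1.1), slater(1.1, 0.3))  # equal magnitude, opposite sign
```

(Putting both electrons in the same orbital makes two columns identical, so the determinant — and hence the wavefunction — vanishes, which is the Pauli principle.)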

p. 823 — What's a diffuse function?  Also, polarization orbitals strike me as a gigantic kludge.  I suppose the proof of the pudding is in the prediction of energy levels, but there appear to be an awful lot of adjustable parameters lurking about.

p. 824 — “the motions of the electrons are pairwise correlated to keep the electrons apart” — but electrons don’t really have trajectories — see note on pp. 811 – 812.  I got all this stuff from Mark P. Silverman “And Yet It Moves” Chapter 3

p. 824 — “the c(i)’s are incrementally changed until capital Psi approximates a correlated wavefunction.”  More kludges. 

p. 825 — Nice to see why electron correlation is required if you want to study Van der Waals forces between molecules.  The correlation energy could be considered an intramolecular Van der Waals force.

What is a single point energy? — I couldn’t find it in the index.

p. 826 — The descriptions of the successive kludges required for the ab initio approach to orbitals are rather depressing.  However, there's no way around it.  You are basically trying to solve a many body problem when you solve the Schrodinger equation.  It's time to remember what a former editor of Nature (John Gribbin) said: "It's important to appreciate, though, that the lack of solutions to the three-body problem is not caused by our human deficiencies as mathematicians; it is built into the laws of mathematics."

p. 828 — I was beginning not to take all this stuff seriously, until I found that the Hartree Fock approach produces results agreeing with experiment.  However, given the zillions of adjustable parameters involved in getting to any one energy, it had better produce good results for n^2 molecules, where n is the number of adjustable parameters.  Fortunately, organic chemistry can provide well over n^2 molecules with n carbons and 2n+2 hydrogens.

p. 828 — The discussion of secular determinants and the equations leading to them is opaque (to me).  So I had to look it up in another book “Essentials of Computational Chemistry” by Christopher J. Cramer (2nd Edition).  Clear as a bell (although, to be fair, I read it after slowly going through 20 pages of A&D), and done in 10+ pages (105 –> 115).

     Along these lines, how did the secular equation get its name?  Is there a religious equation?

What can you do with an approximate wavefunction produced by any of these methods?  The discussion in A&D so far is all about energy levels.  However, unlike wavefunctions, operators on wavefunctions are completely known, so you can use them to calculate other properties (Cramer doesn't give an example).

     p. 830 – 831 — Even so, given the solutions of the secular equation for a very simple case, you see why the energy of a bonding orbital is less than that of two separate atomic orbitals — call the amount B (for bonding).  More importantly, the energy of the antibonding orbital is greater than that of the two separate atomic orbitals by a greater amount than B — explaining why filling both a bonding and the corresponding antibonding orbital results in repulsion.  This is rule #8 of the rules of Qualitative Molecular Orbital theory (p. 28).
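The asymmetry is easy to see by solving the 2x2 secular equation numerically.  A sketch (the alpha, beta, and overlap values below are made-up but representative; the point is only that including overlap makes the antibonding orbital rise more than the bonding orbital falls):

```python
import numpy as np

alpha, beta, s = -10.0, -3.0, 0.2   # hypothetical Coulomb, resonance, overlap
H = np.array([[alpha, beta], [beta, alpha]])   # Hamiltonian matrix
S = np.array([[1.0, s], [s, 1.0]])             # overlap matrix

# Generalized eigenvalue problem H c = E S c,
# solved via the eigenvalues of S^-1 H
energies = np.sort(np.linalg.eigvals(np.linalg.inv(S) @ H).real)
E_bond, E_anti = energies    # (alpha+beta)/(1+s) and (alpha-beta)/(1-s)

stabilization   = alpha - E_bond   # how far bonding drops below alpha
destabilization = E_anti - alpha   # how far antibonding rises above alpha
print(destabilization > stabilization)  # True: antibonding is raised more
```

With overlap s set to zero the two shifts become equal — which is exactly the Huckel picture criticized on p. 838.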

p. 834 — An acronym festival — CNDO, INDO, PNDO, MINDO 1 – 3, AM1, PM3, etc.  At least they tell you that they aren't much used any more.

It’s amazing that Huckel theory works at all, ignoring as it does electron electron repulsion. 
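It's also remarkably easy to try: ignoring electron repulsion reduces the Huckel problem to diagonalizing the molecule's connectivity matrix.  A minimal sketch for benzene (energies written as E = alpha + x*beta, so since beta is negative, the largest x is the lowest orbital):

```python
import numpy as np

n = 6  # benzene: six carbons in a ring
# Huckel matrix in units of beta, with alpha as the zero of energy:
# 1 wherever two carbons are pi-bonded neighbors, 0 elsewhere
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = 1.0

x = np.sort(np.linalg.eigvalsh(H))[::-1]  # orbital energies E = alpha + x*beta
print(x)  # [2, 1, 1, -1, -1, -2] -- the familiar benzene pattern
# pi energy of 6 electrons: 2*(2 + 1 + 1) = 8 beta, vs 6 beta for three
# isolated double bonds -- a delocalization energy of 2 beta
```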

p. 836 — If everything in Density Functional Theory (DFT) depends on the electron density — how do you ever find it?   Isn’t this what the wavefunctions which are the solutions to the Schrodinger equation actually are?  I’m missing something here and will have to dig into Cramer again. 

p. 838 — Most energy diagrams of molecular orbitals made from two identical atomic orbitals show the bonding and antibonding orbitals symmetrically disposed below and above the atomic orbitals.  This comes from the Huckel approximation, which simply ignores overlap integrals.  The truth is shown in the diagram on p. 831.

p. 838 — Another statement worth the price of the book — the sigma and pi orbitals are of opposite symmetry (different symmetry), and so the sigma and pi orbitals don't mix.  The sigma electrons provide part of the potential field experienced by the pi electrons.

p. 839 — Spectacular to see how well Huckel Molecular Orbital Theory works for fulvene — even if the bonding and antibonding orbitals are symmetrically disposed. 

p. 841 — Fig. 14.13 — I don’t understand what is going on in part B in the two diagrams where not all the atoms have an associated p orbital. 

p. 843 — With all these nice energy level diagrams, presumably spectroscopy has been able to determine the difference between them, and see how well the Huckel theory fits the pattern of energy levels (if not the absolute values). 

p. 846 — Table 1.1 should be table 1.4 (I think)

p. 849 — How in the world was the bridged [10] annulene made?

p. 853 — Why is planar tetracoordinate carbon a goal of physical organic chemistry?   The power of the molecular orbital approach is evident — put the C’s and H’s where you want in space, and watch what happens — sort of like making a chemical unicorn.  Why not methane with 3 hydrogens in an equilateral triangle, the carbon in the center of the triangular plane, and the fourth hydrogen perpendicular to the central carbon?   

Dilithiocyclopropane has been made — presumably as a test bed for MO calculations; being able to predict something is always more convincing of a theory's validity than justifying something you know to be true.

p. 854 — What are the observations supporting the A < S ordering of molecular orbitals in 1, 4 dihydrobenzene?  The arguments to rationalize the unexpected strike me as talmudic, not that talmudic reasoning is bad, just that no one calls it scientific.

p. 856 — Good to see that one can calculate NMR chemical shifts using ab initio calculations (hopefully without tweaking parameters for each molecule).  Bad to see that it is too complicated to go into here.  More reading to do (next year) after Anslyn (probably Cramer), with a little help from two computational chemist friends.  

p. 858 — How in the world did anyone ever make Coates’ ion?  

p. 861 — Have the cyclobutanediyl and cyclopentanediyl radicals ever been made?

p. 863 — "Recall that the 3d orbitals are in the same row of the period(ic) table as the 4s and 4p orbitals" — does anyone have an idea why this is so?  Given the periodic table, the 4s orbitals fill before the 3d, which fill before the 4p (lowest energy orbitals fill first, presumably).  The higher energy of the 4p than the 3d may explain why d2sp3 orbitals are higher in energy than the leftover 3d orbitals — d(xy), d(yz) and d(zx).  Is this correct?  However, the diagram in part B of Figure 14.33 on p. 864 still shows that (n+1)s is of higher energy than nd, even though the periodic table would imply the opposite.

       It’s not clear why the t(2g) combinations of d(xy), d(yz) and d(zx) orbitals don’t interact with the 6 ligand orbitals, since they are closer to them in energy.  Presumably the geometry is wrong?  Presumably d(z2) and d(x2 – y2) are used to hybridize with the p orbitals because they are oriented the same way, and d(xy), d(yz) and d(zx) are offset from the p orbitals by 45 degrees.  Is this correct? 

        This is the downside of self-study.  I'm sure a practicing transition metal organic chemist could answer these questions quickly, but without these answers it's back to the med school drill — that's the way it is, and you'd best memorize it.

p. 865 — Frontier orbitals have only been defined for the Huckel approximation at this point — the index has them discussed in the next chapter.  On p. 888 they are defined as HOMO and LUMO which have been well defined previously. 

p. 866 — The isolobal work is fascinating, primarily because it allows you to predict (or at least rationalize) things.

This section does elaborate a bit on organometallic  bonding details, lacking in chapter 12.  However, no reactions are discussed, and electrons are not pushed.  Perhaps later in the book, but I doubt it.

It’s clear from reading chapter 12 that organometallics have revolutionized synthesis in the past 50 years. I’ll need to read further next year, in order to reach the goal of enjoying new and clever syntheses as they come out.

Does anyone out there have any thoughts about Cramer’s book? Any recommendations for other computational chemistry books? I’ve clearly got to go farther.

Merry Christmas and Happy New Year