Category Archives: Quantum Mechanics

Time to get busy

Well, I asked for it (the answer sheets to my classmate’s book on general relativity). It came today, all 347 pages of it, plus a small appendix, “Light Orbits in the Schwarzschild Geometry”. It’s one of the few times the old school tie has actually been of some use. The real advantages of going to an elite school are (1) the education you can get if you want it and (2) the people you meet then or subsequently. WRT #1 — the late 50s was the era of the “Gentleman’s C”.

It should be fun. The book is the exact opposite of the one I’d been working on, which put the math front and center. This one puts the physics first and the math later. I’m glad I’m reading it second, because as an undergraduate and graduate student I became adept at mouthing mathematical incantations without really understanding what was going on. I think most of my math is now reasonably solid. I did make a lot of detours I probably didn’t need to make — manifold theory, some serious topology — but that was fun as well.

When you’re out there away from University studying on your own, you assume everything you don’t understand is due to your stupidity. This isn’t always the case (although it usually is), and I’ve found errors in just about every book I’ve studied hard, and my name features on errata web pages of most of them. For one example see https://luysii.wordpress.com/2014/05/01/a-mathematical-near-death-experience/

How formal tensor mathematics and the postulates of quantum mechanics give rise to entanglement

Tensors continue to amaze. I never thought I’d get a simple mathematical explanation of entanglement, but here it is. Explanation is probably too strong a word, because it relies on the postulates of quantum mechanics, which are extremely simple but which lead to extremely bizarre consequences (such as entanglement). As Feynman famously said, ‘no one understands quantum mechanics’. Despite that, it has never made a prediction not confirmed by experiment, so the theory is correct even if we don’t understand ‘how it can be like that’. A century of correctly predicted experiments is not to be sneezed at.

If you’re a bit foggy on just what entanglement is — have a look at https://luysii.wordpress.com/2010/12/13/bells-inequality-entanglement-and-the-demise-of-local-reality-i/. Even better; read the book by Zeilinger referred to in the link (if you have the time).

Actually you don’t even need all the postulates of quantum mechanics (as given in the book “Quantum Computation and Quantum Information” by Nielsen and Chuang). No differential equations. No Schrodinger equation. No operators. No eigenvalues. What could be nicer for those thirsting for knowledge? Such a deal ! ! ! Just 2 postulates and a little formal mathematics.

Postulate #1: “Associated to any isolated physical system is a complex vector space with inner product (that is, a Hilbert space) known as the state space of the system. The system is completely described by its state vector, which is a unit vector in the system’s state space.” If this is unsatisfying, see the explication of this on p. 80 of Nielsen and Chuang (where the postulate appears).

Because the linear algebra underlying quantum mechanics seemed to be largely ignored in the course I audited, I wrote a series of posts called Linear Algebra Survival Guide for Quantum Mechanics. The first — https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/ — should be all you need, but there are several more.

Even though I wrote a post on tensors, showing how they are a way of describing an object independently of the coordinates used to describe it, I didn’t discuss another aspect of tensors — multilinearity — which is crucial here. The post itself can be viewed at https://luysii.wordpress.com/2014/12/08/tensors/

Start by thinking of a simple tensor as a vector in a vector space. The tensor product is just a way of combining vectors in vector spaces to get another (and larger) vector space. So the tensor product isn’t a product in the sense that multiplication of two objects (real numbers, complex numbers, square matrices) produces another object of exactly the same kind.

So mathematicians use a special symbol for the tensor product — a circle with an x inside. I’m going to use something similar ‘®’ because I can’t figure out how to produce the actual symbol. So let V and W be the quantum mechanical state spaces of two systems.

Their tensor product is just V ® W. Mathematicians can define things any way they want. A crucial aspect of the tensor product is that it is multilinear. So if v and v’ are elements of V, then v + v’ is also an element of V (because two vectors in a given vector space can always be added). Similarly, w + w’ is an element of W if w and w’ are. Adding to the confusion when trying to learn this stuff is the fact that all vectors are themselves tensors.

Multilinearity of the tensor product is what you’d think

(v + v’) ® (w + w’) = v ® (w + w’ ) + v’ ® (w + w’)

= v ® w + v ® w’ + v’ ® w + v’ ® w’

You get all 4 tensor products in this case.
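For coordinate vectors the tensor product is just the Kronecker product, so the expansion above can be checked numerically. A minimal sketch (my illustration, not from the original post; ® is written as np.kron):

```python
import numpy as np

# Two state spaces V and W, here both C^2 for simplicity
v  = np.array([1.0, 0.0])   # element of V
vp = np.array([0.0, 1.0])   # v' in V
w  = np.array([1.0, 0.0])   # element of W
wp = np.array([0.0, 1.0])   # w' in W

# Multilinearity: (v + v') (x) (w + w') expands into all 4 simple products
lhs = np.kron(v + vp, w + wp)
rhs = np.kron(v, w) + np.kron(v, wp) + np.kron(vp, w) + np.kron(vp, wp)

assert np.allclose(lhs, rhs)
print(lhs)   # a 4-component vector in V (x) W
```

Note that V ® W here is 4-dimensional (2 × 2), illustrating how the tensor product builds a larger space from the two factors.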

This brings us to Postulate #2 (actually #4 in the book, on p. 94 — we don’t need the other two — I told you this was fairly simple).

Postulate #2 “The state space of a composite physical system is the tensor product of the state spaces of the component physical systems.”

http://planetmath.org/simpletensor

Where does entanglement come in? Patience, we’re nearly done. One must now distinguish simple from non-simple tensors. Each of the 4 tensor products in the sum on the last line is simple, being the tensor product of two vectors.

What about v ® w’ + v’ ® w ? It isn’t simple, because there is no way to get it by itself as simple_tensor1 ® simple_tensor2. So it’s called a compound tensor. (v + v’) ® (w + w’) is a simple tensor, because v + v’ is just another single element of V (call it v”) and w + w’ is just another single element of W (call it w”).

So the tensor product (v + v’) ® (w + w’) of elements of the two state spaces can be understood as though V has state v” and W has state w”.

v ® w’ + v’ ® w can’t be understood this way. The full system can’t be understood by considering V and W in isolation, i.e. the two subsystems V and W are ENTANGLED.
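Whether a given vector in V ® W is simple can actually be tested mechanically: reshape its coefficients into a matrix and take the rank (the Schmidt rank). Rank 1 means a product (unentangled) state; rank > 1 means entangled. A minimal sketch — my illustration, not anything from Nielsen and Chuang:

```python
import numpy as np

def schmidt_rank(state, dimV, dimW):
    """Rank of the coefficient matrix of a vector in V (x) W.
    Rank 1 -> simple tensor (product state); rank > 1 -> entangled."""
    return np.linalg.matrix_rank(state.reshape(dimV, dimW))

v, vp = np.array([1.0, 0.0]), np.array([0.0, 1.0])
w, wp = np.array([1.0, 0.0]), np.array([0.0, 1.0])

simple   = np.kron(v + vp, w + wp)            # (v+v') (x) (w+w'): a product state
compound = np.kron(v, wp) + np.kron(vp, w)    # v (x) w' + v' (x) w: entangled

print(schmidt_rank(simple, 2, 2))    # 1
print(schmidt_rank(compound, 2, 2))  # 2
```

The compound state here is (up to normalization) one of the Bell states that appear in the entanglement experiments discussed in the Zeilinger book.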

Yup, that’s all there is to entanglement (mathematically at least). The paradoxes of entanglement, including Einstein’s ‘spooky action at a distance’, are left for you to explore — again, Zeilinger’s book is a great source.

But how can it be like that, you ask? Feynman said not to start thinking those thoughts, and if he didn’t know, do you expect a retired neurologist to tell you? Please.

Watching electrons being pushed

Would any organic chemist like to watch electrons moving around in a molecule? Is the Pope Catholic? Attosecond laser pulses permit this [ Science vol. 346 pp. 336 – 339 ’14 ]. An attosecond is 10^-18 seconds. The characteristic vibrational motion of atoms in chemical bonds occurs at the femtosecond scale (10^-15 seconds). An electron takes 150 attoseconds to orbit a hydrogen atom [ Nature vol. 449 p. 997 ’07 ]. Of course this is macroscopic thinking at the quantum level, a particular type of doublethink indulged in by chemists all the time — https://luysii.wordpress.com/2009/12/10/doublethink-and-angular-momentum-why-chemists-must-be-adept-at-it/.

The technique involves something called pump-probe spectroscopy. Here was the state of play 15 years ago [ Science vol. 283 pp. 1467 – 1468 ’99 ]: using lasers, it is possible to blast in a short-duration (picoseconds, 10^-12, to femtoseconds, 10^-15 seconds) pulse of energy (the pump pulse) at one frequency (usually ultraviolet, so one type of bond can be excited) and then to measure absorption at another frequency (usually infrared) a short time later (to measure vibrational energy). This allows you to monitor the formation and decay of reactive intermediates produced by the pump (as the time between pump and probe is varied systematically).

Time has marched on and we now have lasers capable of producing attosecond pulses of electromagnetic energy (e.g. light).

A single optical cycle of visible light of 6000 Angstrom wavelength lasts 2 femtoseconds. To see this, just multiply the reciprocal of the speed of light (3 * 10^8 meters/second) by the wavelength in meters (6 * 10^3 Angstroms * 10^-10 meters/Angstrom = 6 * 10^-7 meters). To get down to the attosecond range you must use light of a shorter wavelength (e.g. the ultraviolet or vacuum ultraviolet).
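The arithmetic for the 2 femtosecond figure fits in a few lines (a sketch, using the numbers above):

```python
c = 3e8                 # speed of light, m/s
wavelength = 6000e-10   # 6000 Angstroms expressed in meters

period = wavelength / c      # duration of one optical cycle, seconds
print(period)                # 2e-15 s, i.e. 2 femtoseconds
print(period / 1e-18)        # the same cycle measured in attoseconds: 2000
```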

The paper didn’t play around with toy molecules like hydrogen. They blasted phenylalanine with UV light. Here’s what they said “Here, we present experimental evidence of ultrafast charge dynamics in the amino acid phenylalanine after prompt ionization induced by isolated attosecond pulses. A probe pulse then produced a doubly charged molecular fragment by ejection of a second electron, and charge migration manifested itself as a sub-4.5-fs oscillation in the yield of this fragment as a function of pump-probe delay. Numerical simulations of the temporal evolution of the electronic wave packet created by the attosecond pulse strongly support the interpretation of the experimental data in terms of charge migration resulting from ultrafast electron dynamics preceding nuclear rearrangement.”

OK, they didn’t actually see the electron dynamics, but calculated them to explain their results. It’s the Born-Oppenheimer approximation writ large.

You are unlikely to be able to try this at home. It’s more physics than I know, but here’s the experimental setup: “In our experiments, we used a two-color, pump-probe technique. Charge dynamics were initiated by isolated XUV sub-300-as pulses, with photon energy in the spectral range between 15 and 35 eV and probed by 4-fs, waveform-controlled visible/near infrared (VIS/NIR, central photon energy of 1.77 eV) pulses (see supplementary materials).”

Physics to the rescue

It’s enough to drive a medicinal chemist nuts. General anesthetics are an extremely wide-ranging class of chemicals, from Xenon (which has essentially no chemistry) to the steroid alfaxalone (56 atoms). How can they possibly have a similar mechanism of action? It’s long been noted that anesthetic potency is proportional to lipid solubility, so that’s at least something. Other work has noted that enantiomers of some anesthetics vary in potency, implying that they are interacting with something optically active (like proteins). However, note that sphingosine, which is part of many cell membrane lipids (gangliosides, sulfatides etc. etc.), contains two optically active carbons.

A great paper [ Proc. Natl. Acad. Sci. vol. 111 pp. E3524 – E3533 ’14 ] notes that although Xenon has no chemistry it does have physics. It facilitates electron transfer between conductors. The present work does some quantum mechanical calculations purporting to show that Xenon can extend the highest occupied molecular orbital (HOMO) of an alpha helix so as to bridge the gap to another helix.

This paper shows that Xe, SF6, N2O and chloroform cause rapid increases in the electron spin content of Drosophila. The changes are reversible. Anesthetic-resistant mutant strains (the protein involved isn’t specified) show a different pattern of spin responses to anesthetics.

So they think general anesthetics might work by perturbing the electronic structure of proteins. It’s certainly a fresh idea.

What is carrying the anesthetic induced increase in spin? Speculations are bruited about. They don’t think the spin changes are due to free radicals. They favor changes in the redox state of metals. Could it be due to electrons in melanin (the prevalent stable free radical in flies)? Could it be changes in spin polarization? Electrons traversing chiral materials can become spin polarized.

Why this should affect neurons isn’t known, and further speculations are given: (1) electron currents in mitochondria, (2) redox reactions where electrons are used to break a disulfide bond.

The article notes that spin changes due to general anesthetics differ in anesthesia resistant fly mutants.

Fascinating paper, and Mark Twain said it best: “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Are Van der Waals interactions holding asteroids together?

A recent post of Derek’s concerned the very weak (high Kd) but very important interactions of proteins within our cells. http://pipeline.corante.com/archives/2014/08/14/proteins_grazing_against_proteins.php

Most of this interaction is due to Van der Waals forces — http://en.wikipedia.org/wiki/Van_der_Waals_force. Shape complementarity (e.g. steric factors) and dipole dipole interactions are also important.

Although important, Van der Waals interactions have always seemed like a lot of hand waving to me.

Well guess what, they are now hypothesized to be what is holding an asteroid together. Why are people interested in asteroids in the first place? [ Science vol. 338 p. 1521 ’12 ] “Asteroids and comets .. reflect the original chemical makeup of the solar system when it formed roughly 4.5 billion years ago.”

[ Nature vol. 512 p. 118 ’14 ] The Rosetta spacecraft reached the comet 67P/Churyumov-Gerasimenko after a 10 year journey, becoming the first spacecraft to rendezvous with a comet. It will take a lap around the sun with the comet and watch as the comet heats up and releases ice in a halo of gas and dust. It is now flying triangles in front of the comet, staying 100 kiloMeters away. In a few weeks it will settle into a 30 kiloMeter orbit around the comet. It will attempt to place a lander (Philae) the size of a washing machine on the surface in November. The comet is 4 kiloMeters long.

[ Nature vol. 512 pp. 139 – 140, 174 – 176 ’14 ] A kiloMeter sized near-Earth asteroid called (29075) 1950 DA (how did they get this name?) is covered with sandy regolith (the heterogeneous material covering solid rock; on Earth it includes dust, soil, and broken rock). The asteroid rotates every 2+ hours, and it is so small that gravity alone can’t hold the regolith to its surface. An astronaut could scoop up a sample from its surface, but would have to hold on to the asteroid to avoid being flung off by the rotation. So the asteroid must have some degree of cohesive strength. The strength required to hold the rubble together is 64 pascals — about the pressure that a penny exerts on the palm of your hand. A pascal is 1/101,325 of an atmosphere.
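The penny comparison is easy to sanity-check. A back-of-envelope sketch, where the penny’s mass (2.5 g) and diameter (19 mm) are my assumed values for a US penny, not figures from the article:

```python
import math

mass = 2.5e-3      # kg, assumed US penny mass
diameter = 19e-3   # m, assumed penny diameter
g = 9.81           # m/s^2

area = math.pi * (diameter / 2) ** 2   # face area of the penny, ~2.8e-4 m^2
pressure = mass * g / area             # weight spread over that area

print(pressure)           # roughly 85-90 Pa, the same order as the 64 Pa cohesion figure
print(101325 / pressure)  # over a thousand times smaller than atmospheric pressure
```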

They think the strength comes from van der Waals interactions between small (1 – 10 micron) grains — making it fairy dust. It’s rather unsatisfying as no one has seen these particles.

The ultimate understanding of the large multi-protein and RNA machines (ribosome, spliceosome, RNA polymerase etc. etc. ) without which life would be impossible will involve the very weak interactions which hold them together. Along with permanent dipole dipole interactions, charge interactions and steric complementarity, the van der Waals interaction is high on anyone’s list.

Some include dipole dipole interactions as a type of van der Waals interaction. The really fascinating interaction is the London dispersion force. These are attractions seen between transient induced dipoles formed in the electron clouds surrounding each atomic nucleus.

It’s time to attempt to surmount the schizophrenia which comes from trying to see how quantum mechanics gives rise to the macroscopic interactions between molecules which our minds naturally bring to matters molecular (with a fair degree of success).

Steric interactions come to mind first — it’s clear that an electron cloud surrounding molecule 1 should repel another electron cloud surrounding molecule 2. Shape complementarity should allow two molecules to get closer to each other.

What about the London dispersion forces, which are where most of the van der Waals interaction is thought to lie? We all know that quantum mechanical molecular orbitals are static distributions of electron probability. They don’t fluctuate (at least the ones I’ve read about). If something is ‘transiently inducing a dipole’ in a molecule, it must be changing the energy level of the molecule somehow. All dipoles involve separation of charge, and this always requires energy. Where does it come from? The kinetic energy of the interacting molecules? Macroscopically it’s easy to see how a collision between two molecules could change the vibrational and/or rotational energy levels of a molecule. But what does a collision between molecules look like in terms of the wave functions of both? I’ve never seen this. It must have been worked out for single particle physics in accelerators, but that’s something I’ve never studied.

One molecule inducing a transient dipole in another, which then induces a complementary dipole in the first molecule, seems like a lot of handwaving to me. It also appears to be getting something for nothing contradicting the second law of thermodynamics.

Any thoughts from the physics mavens out there?

Keep on truckin’ Dr. Schleyer

My undergraduate advisor (Paul Schleyer) has a new paper out in the 15 July ’14 PNAS pp. 10067 – 10072 at age 84+. Bravo ! He upends what we were always taught about electrophilic aromatic addition of halogens. The Arenium ion is out (at least in this example). Anyone with a smattering of physical organic chemistry can easily follow his mechanistic arguments for a different mechanism.

However, I wonder if any but the hardiest computational chemistry jock can understand the following (which is how he got his results) and decide if the conclusions follow.

Our Gaussian 09 (54) computations used the 6-311+G(2d,2p) basis set (55, 56) with the B3LYP hybrid functional (57⇓–59) and the Perdew–Burke–Ernzerhof (PBE) functional (60, 61) augmented with Grimme et al.’s (62) density functional theory with added Grimme’s D3 dispersion corrections (DFT-D3). Single-point energies of all optimized structures were obtained with the B2-PLYP [double-hybrid density functional of Grimme (63)] and applying the D3 dispersion corrections.

This may be similar to what happened with functional MRI in neuroscience, where you never saw the raw data, just the end product of the manipulations on the data (e.g. how the matrix was inverted and what manipulations of the inverted matrix were required to produce the pretty pictures shown). At least here the tools used are laid out explicitly.

For some very interesting work he did last year please see https://luysii.wordpress.com/2013/07/08/schleyer-is-still-pumping-out-papers-crystallization-of-a-nonclassical-norbornyl-cation/

At the Alumni Day

‘It’s Complicated’. No this isn’t about the movie where Meryl Streep made a feeble attempt to be a porn star. It’s what I heard from a bunch of Harvard PhD physicists who had listened to John Kovac talk about the BICEP2 experiment a day earlier. I had figured as a humble chemist that if anyone would understand why polarized light from the Cosmic Background Radiation would occur in pinwheels they would. But all the ones I talked to admitted that they didn’t.

The experiment is huge for physics and several articles explain why [ Science vol. 343 pp. 1296 – 1297, vol. 344 pp. 19 – 20 ’14, Nature vol. 507 pp. 281 – 283 ’14 ]. BICEP2 provided strong evidence for gravitational waves, cosmic inflation, and the existence of a quantum theory of gravity (assuming it holds up and something called SPIDER confirms it next year). The nice thing about the experiment is that it found something predicted by theory years ago. This is the way Science is supposed to operate. Contrast this with the climate models, which have been totally unable to predict the more than a decade of unchanged mean global temperature that we are currently experiencing.

Well we know gravity can affect light — this was the spectacular experimental confirmation of General Relativity by Eddington nearly a century ago. But how quantum fluctuations in the gravitational field lead to gravitational waves, and how these waves lead to the polarization of the background electromagnetic radiation occurring in pinwheels, is a mystery to me and to a bunch of physicists far more high powered than I’ll ever be. If someone can explain this, please write a comment. The articles cited above are very good at explaining context and significance, but they don’t even try to explain why the data look the way they do.

The opening talk was about terrorism, and what had been learned about it by studying worldwide governmental responses to a variety of terrorist organizations (Baader Meinhof, Shining Path, Red Brigades). The speaker thought our response to 9/11 was irrational — refusing to fly when driving is clearly more dangerous etc. etc. It was the typical arrogance of the intelligent, who cannot comprehend why everyone does not think the way they do.

I thought it was remarkable that a sociologist would essentially deprecate the way people think about risk. I’m sure that many in the room were against any form of nuclear power, despite its safety compared to everything else and absent carbon footprint.

Addendum 7 April — The comment by Handles and the link he provided are quite helpful, although I still don’t understand this as well as I’d like. Here’s the link https://medium.com/p/25c5d719187b

The New Clayden pp. 43 – 106

The illustrations in the book that I comment on can be reached on the web by substituting the page number I give for xx in the following

My guess is that people who haven’t bought the book will be tempted to do so after looking at a few of them.

p. 50         The reason that the exact masses of the various isotopes of a given element aren’t integers lies partly in the slight mass difference between a proton (1.67262 * 10^-27 kiloGrams) and a neutron (1.67493 * 10^-27 kiloGrams), partly in the electrons (mass 9.10956 * 10^-31 kiloGrams), and also in the nuclear binding energy (the mass defect). I didn’t realize that particle masses can now be measured to 6 significant figures. Back in the day it was 4. Impressive.
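The binding-energy contribution is easy to see numerically. A sketch using the particle masses quoted above, plus an assumed value for the atomic mass unit (1.66054 * 10^-27 kg, a standard figure, not from the book); carbon-12 is the natural example since its mass defines 12 u exactly:

```python
# Particle masses from the text (kg)
m_p = 1.67262e-27   # proton
m_n = 1.67493e-27   # neutron
m_e = 9.10956e-31   # electron

u = 1.66054e-27     # atomic mass unit in kg (assumed standard value)

# Carbon-12: 6 protons + 6 neutrons + 6 electrons summed as free particles
free_parts = 6 * (m_p + m_n + m_e)
actual = 12 * u     # the C-12 atom's mass is exactly 12 u by definition

defect = free_parts - actual   # mass released as nuclear binding energy
pct = defect / actual * 100
print(pct)                     # ~0.8 percent of the atom's mass
```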

p. 52 — Nice to see a picture of an MRI scanner (not NMR scanner).  MRI stands for magnetic resonance imaging.  The chemist might be wondering why the name change.  Because it would be very difficult to get a fairly large subset of patients to put their heads in anything with ‘nuclear’ in the name.

It’s also amusing to note that in the early days of NMR, chemists worked very hard to keep out water, as the large number of hydrogens in water would totally swamp the signal of the compound you were trying to study.  But the brain (and all our tissues) is quite wet with lots of water.

p. 53 — (sidebar).  Even worse than a workman’s toolbox permanently attaching itself to an NMR magnet is the following: aneurysms of brain arteries are like bubbles on an inner tube.  Most have a neck, and they are treated by placing a clip across the neck.  Aneurysm clips are no longer made of magnetic materials, and putting a patient with such a clip in an MRI is dangerous.  A patient with a ferromagnetic aneurysm clip suffered a fatal rupture of the aneurysm when she was placed in an MRI scanner [ Radiol. vol. 187 pp. 612 – 614, 855 – 856 ’93 ].

p. 53 —     NMR  shows the essential weirdness of phenomena at the quantum mechanical level, and just how counter intuitive it is.  Consider throwing a cannonball at a brick wall.  At low speeds (e.g. at low energies) it hits the wall and bounces back.  So at the end you see the cannonball on your side of the wall.  As speeds and energies get higher and higher the cannonball eventually goes through the wall, changing the properties of the wall in the process.  Radiowaves have very low energies relative to visible light (their wavelengths are much longer so their frequencies are much lower, and energy of light is proportional to frequency).  So what happens when you throw radiowaves at an organic compound with no magnetic field present — it goes right through it (e.g. it is found on the other side of the brick wall).  NMR uses a magnetic field  to separate the energy levels of a spinning charged nucleus enough that they can absorb light.  Otherwise the light just goes past the atom without disturbing it.  Imagine a brick wall that a cannonball goes through without disturbing and you have the macroscopic analogy.

p. 53 — Very nice explanation of what the signal picked up by the NMR machine actually is — the energy put into flipping a spin up against the field, coming out again.  It’s the first text I’ve read on the subject that makes this clear at the start.

p. 54 — Note that the plot of absorption (really emission) of energy has higher frequencies to the left rather than the right (unlike every other plot of anything numeric I’ve ever seen).  This is the way it is.  Get used to it.

p. 55 — The stronger the magnetic field, the more nuclear energy levels are pulled apart, the greater the energy difference between them, and thus the higher frequency of electromagnetic radiation resulting from the jump between nuclear energy levels.
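Concretely, the resonance frequency scales linearly with field strength; for protons the conversion factor is about 42.58 MHz per tesla (a standard value, my addition, not a figure from the book). A sketch:

```python
# Proton gyromagnetic ratio divided by 2*pi, in Hz per tesla (assumed standard value)
gamma_over_2pi = 42.58e6

# Field strengths: clinical MRI, research MRI, and a '600 MHz' NMR magnet
for B in (1.5, 7.0, 14.1):
    freq = gamma_over_2pi * B          # Larmor frequency at field B
    print(B, "T ->", freq / 1e6, "MHz")
```

The 14.1 T entry lands at about 600 MHz, which is why chemists name their magnets by proton frequency rather than field strength.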

p. 56 — I was amazed to think that frequencies can be measured with accuracies better than 1 part per million, but an electrical engineer who married my cousin assures me that this is not a problem.

p. 57  The following mnemonic may help you keep things straight

low field <==> downfield <==> bigger chemical shift <==> deshielded

mnemonic loadd of bs

It’s far from perfect, so if you can think of something better, please post a comment.

p. 64 — Error — Infrared wavelengths are not “between 10 and 100 mm”.  They start just over the wavelength of visible light (8000 Angstroms == 800 nanoMeters == 0.8 microMeters) and go up to 1 milliMeter (1,000 microMeters).

p. 64 — Here’s what a wavenumber is, and how to calculate it.  The text says that the wavenumber in cm^-1 (reciprocal centimeters) is the number of wavelengths in a centimeter.  So wavenumbers are proportional to frequency.

To figure out what this is, the wavelength (usually given in Angstroms, nanoMeters, microMeters, or milliMeters) should be expressed in meters.  So should centimeters (which are 10^-2 meters).  Then we have

(# wavelengths/centiMeter) * (wavelength in meters) = 10^-2 (one centimeter expressed in meters)

Thus visible light with a wavelength of 6000 Angstroms == 600 nanoMeters can pack

(# wavelength/cm) * 600 * 10^-9 = 10^-2

waves into a centimeter

so its wavenumber is 1/6 * 10^5 reciprocal centimeters — e.g. 16,666 cm^-1.  The lowest wavenumber of visible light is 12,500 cm^-1 — corresponding to 8000 Angstroms.

Wavenumbers can be converted to frequencies by multiplying by the velocity of light in centimeters per second, e.g. 3 * 10^10 cm/sec. So the highest frequency of visible light (4000 Angstroms, or 25,000 cm^-1) is 7.5 * 10^14 Hertz — nearly a petaHertz.

IR wavenumbers range from 4000 cm^-1 (25,000 Angstroms) to 500 cm^-1 (200,000 Angstroms).
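The conversions above can be bundled into a couple of lines; a minimal sketch using the same numbers:

```python
def wavenumber_cm(wavelength_angstrom):
    """Number of wavelengths per centimeter, given the wavelength in Angstroms."""
    wavelength_cm = wavelength_angstrom * 1e-8   # 1 Angstrom = 1e-8 cm
    return 1.0 / wavelength_cm

c_cm = 3e10   # speed of light in cm/s

print(wavenumber_cm(6000))          # ~16,666 cm^-1 (orange-red light)
print(wavenumber_cm(8000))          # 12,500 cm^-1 (red edge of the visible)
print(wavenumber_cm(25000))         # 4,000 cm^-1 (top of the IR range)
print(wavenumber_cm(4000) * c_cm)   # ~7.5e14 Hz, the highest visible frequency
```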

p. 65 — In the bottom figure, the line between bonds to hydrogen and triple bonds should be at 2500 cm^-1 rather than 3000 to be consistent with the text.

p. 71 — “By contrast, the carbonyl group is very polarized with oxygen attracting the electrons” — the electronegativity values of C (2.5) and O (3.5) could be mentioned here.

p. 81-1, 81-2 — The animations continue to amaze.  The latest show the charge distribution, dipole moments of each bond, electron density, stick models, and space filling models of a variety of small molecules — you can rotate a molecule, shrink it, or blow it up.

To get to any given model mentioned here, type the URL with the page number replacing xx.  The 81-1 and 81-2 are themselves to be substituted for xx, as there are two interactive web pages for page 81.  Fabulous.

Look at enough of them and you’ll probably buy the book.

p. 82 — “Electrons have quantized energy levels.”  Very misleading, but correct in a sense which a naive reader of this book wouldn’t be expected to know.  This should be changed to “Atoms (and/or molecules) have quantized electronic energy levels.”  In the first paragraph of the section the pedagogical boat is righted.

p. 85 — The introspective will pause and wonder about the following statement — “In an atom, a node is a point where the electron can never (italics) be found — a void separating the two parts of the orbital”.  Well, how does a given electron get past a node from one part of an orbital to the other?  This is just more of the weirdness of what’s going on at the quantum mechanical level (which is dominant at atomic scales).  There is no way you can regard the electron in the 2s orbital as having a trajectory (or an electron anywhere else, according to QM).  The idea that trajectories need to be abandoned in QM isn’t mine but that of a physicist, Mark P. Silverman.  His books are worth a look if you’re into getting migraines from considering what quantum mechanics really means about the world.  He’s written 4 according to Amazon.

p. 86 — Very worthwhile looking at the web page for the 3 dimensional shapes of atomic orbitals — particularly the d and f orbitals.  FYI p orbitals have 1 nodal plane, d orbitals have 2, and f orbitals have 3.  If you’re easily distractible, steel yourself, as this web page has links to other interesting web pages with all sorts of molecular orbitals.  This one has links to ‘simple’ molecular orbitals for 11 more compounds ranging from hydrogen to benzene.

p. 87 — Nice to know where s, p, d, and f come from — the early days of spectroscopy — s = sharp, p = principal, d = diffuse, f = fundamental.

p. 87 — “There doesn’t have to be someone standing on a stair for it to exist”  great analogy for empty orbitals.

p. 89 — The first diagram on the page is misleading and, in fact, quite incorrect.  The diagram shows the bonding orbital lower in energy than the atomic 1s orbitals by exactly the same amount as the antibonding orbital is higher.  This is not the case.  The antibonding orbital is raised in energy by a greater amount than the bonding orbital is lowered.  The explanation is quite technical, involving overlap integrals and the secular equation (far too advanced to bring in here, but the fact should be noted nonetheless).  Anslyn and Dougherty have a nice discussion of this point on pp. 828 – 831.
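If you want to see the asymmetry fall out of the algebra: for two identical orbitals with Coulomb integral alpha, resonance integral beta, and overlap integral S, the secular equation gives E = (alpha + beta)/(1 + S) for the bonding MO and (alpha - beta)/(1 - S) for the antibonding MO.  A sketch with purely illustrative numbers (mine, not the book’s or Anslyn and Dougherty’s):

```python
# Illustrative values in eV (my assumptions): alpha and beta negative, overlap S > 0
alpha, beta, S = -13.6, -4.0, 0.25

E_bond = (alpha + beta) / (1 + S)   # bonding MO energy
E_anti = (alpha - beta) / (1 - S)   # antibonding MO energy

lowering = alpha - E_bond   # how far the bonding MO drops below the AO level
raising  = E_anti - alpha   # how far the antibonding MO rises above it

print(lowering, raising)    # raising > lowering: destabilization wins
assert raising > lowering
```

With S = 0 the two shifts would be exactly equal (the misleading textbook diagram); any positive overlap skews them, which is why He2 and the other cases below don’t hold together.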

p. 91 — The diagram comes back to bite as “Since there is no overall bonding holding the two atoms together, they can drift apart as two separate atoms with their electrons in 1s AOs”.  Actually what happens is that they are pushed apart because the destabilization of the molecule by putting an electron in the antibonding  molecular orbital is greater than the stabilization of the remaining electron in the bonding molecular orbital (so the bonding orbital can’t hold the atoms together).   Ditto for the explanation of why diatomic Helium doesn’t exist.

p. 94 — The rotating models of the bonding and antibonding orbitals of N2 are worth a look, and far better than a projection onto two dimensional space (e.g. the printed page).  See the top of this post for how to get them.

p. 95 — It’s important to note that nitric oxide is used by the brain in many different ways — control of blood flow, communication between neurons, neuroprotection after insults, etc. etc.  These are just a few of its effects, more are still being found.

p. 100 — I guess British undergraduates know what a pm is.  Do you?  It’s a picoMeter (10^-12 meters) — 1/100 of an Angstrom, so 100 pm = 1 Angstrom.  Bond lengths of ethylene are given as C-H = 108 pm, C=C as 133 pm.  I find it easier to think of them as 1.08 Angstroms and 1.33 Angstroms — but that’s how I was brung up.

p. 103 — It is mentioned that sp orbitals are lower in energy than sp2 orbitals, which are lower in energy than sp3 orbitals.  The explanation given is that s orbitals are of lower energy than p orbitals — I’m not sure the reason why was given.  It’s because s electrons get much closer to the nucleus (on average) than electrons in p orbitals (which have a node there).  Why should this lower the energy?  Because the closer an electron gets to the positively charged nucleus, the less charge separation there is, and separating charges costs energy.

p. 105 — Mnemonic for Z (cis) and E (trans): just say cis in French — it sounds like Zis.

Going to the mat with representation, characters and group theory

Willock’s book (see https://luysii.wordpress.com/category/willock-molecular-symmetry/) convinced me of the importance of the above in understanding vibrational spectroscopy.  I put it aside because the results were presented, rather than derived.  From the very beginning (the 60’s for me) we were told that group theory was important for quantum mechanics, but somehow even a one-semester graduate course back then didn’t get into it.  Ditto for the QM course I audited a few years ago.

I’ve been nibbling about the edges of this stuff for half a century, and it’s time to go for it.  Chemical posts will be few and far between as I do this, unless I run into something really interesting (see https://luysii.wordpress.com/2012/02/18/a-new-way-to-study-reaction-mechanisms/).  Not to worry — plenty of interesting molecular biology, and even neuroscience, will appear in the meantime, including a post about an article showing that just about everything we thought about hallucinogens is wrong.

So, here’s my long and checkered history with groups etc.  Back in the day we were told to look at “The Theory of Groups and Quantum Mechanics” by Hermann Weyl.  I dutifully bought the Dover paperback, and still have it (never throw out a math book, always throw out medical books if more than 5 years old).  What do you think the price was — $1.95 — about two hours work at minimum wage then.  I never read it.

The next brush with the subject was a purchase of Wigner’s book “Group Theory and its Application to the Quantum Mechanics of Atomic Spectra” — also never read but kept.  A later book (see Sternberg later in the post) noted that Wigner’s group theoretical approach to relativity produced the physical characteristics of mass and spin as parameters in the description of irreducible representations.  The price of this one was $6.80.

Then as a neurology resident I picked up “Group Theory” by Morton Hammermesh (Copyright 1962).  It was my first attempt to actually study the subject.  I was quickly turned off by the exposition.  As groups got larger (and more complicated) more (apparently) ad hoc apparatus was brought in to explain them — first cosets, then  subgroups, then normal subgroups, then conjugate classes.
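For what it’s worth, the apparatus that put me off is concrete enough to play with by machine.  A toy sketch (entirely my own, nothing from Hammermesh): compute the conjugacy classes of S3 straight from the definition — g and h are conjugate if h = x g x⁻¹ for some x — with permutations as tuples:

```python
from itertools import permutations

# Elements of S3 as tuples: p maps i -> p[i]
S3 = list(permutations(range(3)))

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugacy_class(g):
    # All x g x^{-1} as x runs over the group
    return frozenset(compose(compose(x, g), inverse(x)) for x in S3)

classes = {conjugacy_class(g) for g in S3}
print(sorted(len(c) for c in classes))
# Three classes: the identity (size 1), the three transpositions
# (size 3), and the two 3-cycles (size 2) -- i.e. cycle types.
```

Conjugate elements of a symmetric group share a cycle type, which is why the three classes here are exactly identity / transpositions / 3-cycles.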

That was about it, until retirement 11 years ago.  I picked up a big juicy (and cheap) Dover paperback “Modern Algebra” by Seth Warner — a nice easy introduction to the subject.

Having gone through over half of Warner, I had the temerity to ask to audit an Abstract Algebra course at one of the local colleges.  I forget the text, but I didn’t like it (neither did the instructor).  We did some group theory, but never got into representations.

A year or so later, I audited a graduate math abstract algebra course given by the same man.  I had to quit about 3/4 through it because of an illness in the family.  We never got into representation theory.

Then, about 3 years ago, while at a summer music camp, I got through about 100 pages of “Representations and Characters of Groups” by James and Liebeck.  The last chapter in the book (starting on p. 366) is an application to molecular vibration.  The book was hard to use because they seemed to use mathematical terms — module, for example — differently than I was used to.  100 pages was as far as I got.
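As a concrete taste of what characters buy you, here’s a sketch (my own, using only the standard S3 character table and the orthogonality inner product): decompose the 3-dimensional permutation representation of S3 into irreducibles.

```python
from fractions import Fraction

# Conjugacy classes of S3: identity, transpositions, 3-cycles
class_sizes = [1, 3, 2]
order = sum(class_sizes)  # |S3| = 6

# Standard (real-valued) character table of S3, one row per irreducible
chi_trivial  = [1,  1,  1]
chi_sign     = [1, -1,  1]
chi_standard = [2,  0, -1]

# Character of the 3-dim permutation representation: the trace of a
# permutation matrix is its number of fixed points.
chi_perm = [3, 1, 0]

def multiplicity(chi, irr):
    """<chi, irr> = (1/|G|) * sum over classes of size * chi * irr."""
    return sum(Fraction(s * a * b, order)
               for s, a, b in zip(class_sizes, chi, irr))

for name, irr in [("trivial", chi_trivial),
                  ("sign", chi_sign),
                  ("standard", chi_standard)]:
    print(name, multiplicity(chi_perm, irr))
# permutation rep = trivial + standard: the fixed vector (1,1,1)
# plus its 2-dimensional complement.
```

Three inner products and you know the decomposition without ever writing down a matrix — which is exactly the trick the vibrational-spectroscopy chapter runs on, applied to displacement coordinates instead of a permutation.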

Then I had the pleasure of going through Cox’s book on Galois theory, seeing where a lot of group theory originated  (along with a lot of abstract algebra) — but there was nothing about representations there either.

Then after giving up on Willock, a reader suggested  “Elements of Molecular Symmetry” by Ohrn.  This went well until p. 28, when his nomenclature for the convolution product threw me.

So I bought yet another book on the subject, which had just come out: “Representation Theory of Finite Groups” by Steinberg.  No problems going through the first 50 pages, which explain what representations, characters and irreducibles are.  Tomorrow I tackle p. 52, where he defines the convolution product.  Hopefully I’ll be able to understand it and go back to Ohrn — which is essentially chemically oriented.
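For anyone else stuck at the same spot, the convolution product is simple to play with by machine.  A sketch (my own small implementation, not Steinberg’s notation) of the defining formula (a∗b)(g) = Σ_h a(g h⁻¹) b(h), using G = Z/4 so that g h⁻¹ is just subtraction mod 4:

```python
# Group algebra L(G) for G = Z/4 (addition mod 4).  A function G -> C
# is just a length-4 list of coefficients.
n = 4

def convolve(a, b):
    # (a * b)(g) = sum over h of a(g - h) * b(h)   [g - h mod n]
    return [sum(a[(g - h) % n] * b[h] for h in range(n)) for g in range(n)]

def delta(x):
    # The delta function at x: the group element x viewed in L(G)
    return [1 if g == x else 0 for g in range(n)]

# Delta functions multiply like group elements: delta_x * delta_y = delta_{x+y}
print(convolve(delta(1), delta(2)))   # delta(3)
print(convolve(delta(3), delta(2)))   # delta(1), since 3 + 2 = 1 mod 4
```

For an abelian group this is the familiar cyclic convolution of signal processing; for a nonabelian group like S3 the same formula works with g h⁻¹ in place of g − h, and the point is that convolution makes L(G) an algebra whose multiplication extends the group’s.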

The math of it all is a beautiful thing, but the immediate reason for learning it is to understand chemistry better.  I might mention that I own yet another book “Group Theory and Physics” by Sternberg, which appears quite advanced.  I’ve looked into it from time to time and quickly given up.

Anyway, it’s do or die time with representation theory.  Wish me luck.

Where has all the chemistry gone?

Devoted readers of this blog (assuming there are any) must be wondering where all the chemistry has gone.  Willock’s book convinced me of the importance of group theory in understanding the solutions we have of the Schrodinger equation.  Fortunately (or unfortunately) I have the mathematical background to understand group characters and group representations, but I found Willock’s presentation of just the mathematical results unsatisfying.

So I’m delving into a few math books on the subject.  One is “Representations and Characters of Groups” by James and Liebeck (which provides an application to molecular vibration in the last chapter, starting on p. 367).  It’s clear, and for the person studying this on their own, it does have solutions to all the problems.  Another is “Elements of Molecular Symmetry” by Ohrn, which I liked quite a bit.  But unfortunately I got stymied by the notation M(g)alpha(g) on p. 28; in particular, it’s not clear to me whether the A’s in equations (4.12) and (4.13) are the same thing.

I’m also concurrently reading two books on Computational Chemistry, but the stuff in there is pretty cut and dried and I doubt that anyone would be interested in comments as I read them.  One is “Essentials of Computational Chemistry” by Cramer (2nd edition).  The other is “Computational Organic Chemistry” by Bachrach.  The subject is a festival of acronyms (and I thought the army was bad), and Cramer has a list of a mere 284 of them starting on p. 549.  On p. 24 of Bachrach there appears the following: “It is at this point that the form of the functionals begins to cause the eyes to glaze over and the acronyms appear to be random samplings from an alphabet soup.”  I was pleased to see that Cramer thinks 40 pages or so of Tom Lowry and Cathy Richardson’s book is still worth reading on molecular orbital theory, even though it was 24 years old at the time Cramer referred to it.  They’re old friends from grad school.  I’m also pleased to see that Bachrach’s book contains interviews with Paul Schleyer (my undergraduate mentor).  He wasn’t doing anything remotely approaching computational chemistry in the late 50s (who could?).  Also there’s an interview with Ken Houk, who was already impressive as an undergraduate in the early 60s.

Maybe no one knows how all of the above applies to transition metal organic chemistry, which has clearly revolutionized synthetic organic chemistry since the 60’s, but it’s time to know one way or the other before tackling books like Hartwig.

Another (personal) reason for studying computational chemistry is so I can understand whether the protein folding people are blowing smoke or not.  It also appears to be important in drug discovery, or at least is supporting Ashutosh in his path through life.  I hope to be able to talk intelligently with him about the programs he’s using.

So stay tuned.