Category Archives: Anslyn & Dougherty

Anslyn pp. 51 – 100

Now that I’m into Anslyn && Dougherty (2nd printing), a few words about the book.  It’s extremely well written and illustrated, reading more like a novel than a text.  There’s lots on each page, so the going will be slow.  The margins are wide and allow for notes.  I do have a Rip Van Winkle feeling as I read it, seeing the questions that exercised organic chemists in the early 60s explored, and for the most part solved.  It would be nice if specific page numbers were given for forward and backward references in the text.

p. 52 — The explanation of what we’re now calling carbocations — carbonium ions (which is what we called them all back in the 60’s) and carbenium ions — makes sense at last.  Carbonium ions include R5C+ and what we used to call nonclassical carbonium ions, with their three-center two-electron bonds — chasing them around the bicyclo[2.2.1]heptane nucleus occupied Schleyer and lots of other chemists back then.  Carbenium ions are derived from carbenes (which were largely a theoretical construct in 1960) and correspond to what we were calling carbonium ions back then.  The discussion of the extra stability of the 2-norbornyl cation compared to the 2-methyl-2-norbornyl cation on p. 90 was quite good, and an excellent argument for the actual existence of nonclassical carbocations.

p. 53 — Nice to see some experimental evidence for MO calculations — e.g. shortening of the sp3-sp2 distance in CH3CH2+ and lengthening of the C-H bonds of the methyl group.  Hopefully the calculations were done before the actual structure of the t-butyl cation was known, so they really were a priori.  It seems likely (to me) that there are a lot of parameters which can be ‘tweaked’ in MO calculations.

p. 54 – 55 — Carbocation potential energy surfaces — hopefully the book will show how these are actually calculated rather than just assumed to exist.  Nonclassical ‘carbonium ions’ (what they were called in the 60’s) were certainly a subject of contentious debate back then (certainly the norbornyl cation was always being discussed).

p. 55 — CH5+ going deeper — incredible discussion — I’m already loving this book.   It seems likely (to me) that some sort of quantum mechanical tunneling must be going on as well.  Granted that the proton is some 2000 times heavier than the electron, but electrons tunnel some 20 Angstroms between two di-Copper centers in cytochrome oxidase [ Proc. Natl. Acad. Sci. vol. 107 pp. 21470 – 21475 ’10 ].  With distances between C and H of 1.23 Angstroms and between H and H of 0.87 Angstroms, and the ‘flat’ potential energy surface, some proton tunneling should be going on.  Has anyone looked at the obvious experiment (CD5+)?

p. 57 — In the connection box, I assume that the reason the s orbitals in the third row are smaller than the p orbitals goes something like this.  Even though the 2s orbitals contain one node, it isn’t at the nucleus, so they are exposed to a greater positive charge than the 2p orbitals, which have a node right at the nucleus, escaping the greater positive charge.  

pp. 59 – 61 — Pictures of d orbitals.  Hosanna ! “One theme of this textbook is to consistently tie organic chemistry to organometallic chemistry”  Hosanna again ! !  Clayden was valiant in their attempt to give a glimpse of the field, but their d orbital views and the corresponding bonding patterns were sketchy at best.  Certainly looking forward to enlightenment on this score.

p. 59 line -3  “When CITING down the x axis”

p. 61 “Hopefully this chapter has refreshed your memory”  — Not for QMOT which was rather new to me.  I thought the discussion was reasonably clear.

p. 62 — Answer to problem #7 — The hybridization formula somehow says that the hybrid orbitals between the C-C bonds in cyclopropane are misaligned with the straight line between the carbons by 21.4 degrees.  All very nice, but what does X-ray crystallography of cyclopropane say?  Structural chemistry of proteins always gives electron density maps, so where are the electrons in cyclopropane?

p. 62 — Answer to problem #14 — terrible problem — “Note that none of these examples has a charge distribution shaped exactly like a d(xy) orbital” — no wonder I couldn’t get it

p. 62 — Answer to problem #16 — Very nice extended discussion.  If this continues, the answer book will be required reading as it’s almost another text. 

p. 70 — Organic chemists have a simple and intuitive way of looking at entropy on a molecular basis.  Compare this to all the heavy lifting in the original definition of entropy on the macroscopic level (also compare the intuitiveness of entropy of a reaction to that of enthalpy — you get at enthalpy by looking at the internal energy of a molecule — something rather hard to see).  For details see https://luysii.wordpress.com/2011/05/26/second-law-of-thermodynamics-entropy-free-energy.

p. 72 — the mere existence of the term Normal Mode implies that Non-Normal modes (Abnormal modes??) must exist.  What are they? 

p. 73 — A nice explanation of why the hydroxyl radical is so toxic to cells.  I don’t recall seeing this in any biochemistry books or discussions of molecular biology — except to say that it is quite toxic, leaving it at that.  

pp. 73 – 78 — Marvellous discussion of bond motions and the spectra they give rise to.  On p. 76 it’s obvious that knowing the force constant, one can calculate the frequency, the difference between energy levels given the difference in frequency, and how likely higher energy levels are to be populated at 298 Kelvin.  But how do you get the force constant of a covalent bond in the first place (without looking at the frequency first)?  On p. 77 how do they know that the various motions (symmetric stretch, asymmetric stretch, scissor, rock and wag) correspond to these frequencies?
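To convince myself of the p. 76 arithmetic, here’s a quick Python sketch.  The force constant (500 N/m, roughly a C-H stretch) and the C-H reduced mass are my own illustrative assumptions, not the book’s numbers:

```python
import math

# Illustrative assumptions (not the book's numbers): a force constant of
# 500 N/m, roughly a C-H stretch, and the reduced mass of a C-H pair.
h = 6.626e-34     # Planck's constant, J*s
kB = 1.381e-23    # Boltzmann's constant, J/K
amu = 1.6605e-27  # atomic mass unit, kg

k_force = 500.0                          # N/m, assumed force constant
mu = (12.0 * 1.0) / (12.0 + 1.0) * amu   # reduced mass of C-H, kg

# Harmonic oscillator: frequency nu = (1/(2*pi)) * sqrt(k/mu)
nu = math.sqrt(k_force / mu) / (2 * math.pi)   # Hz

# Level spacing, and the Boltzmann population of the first excited level at 298 K
dE = h * nu
ratio = math.exp(-dE / (kB * 298.0))

print(f"frequency: {nu:.2e} Hz")
print(f"level spacing: {dE / (kB * 298.0):.1f} kT at 298 K")
print(f"v=1 population relative to v=0: {ratio:.1e}")
```

The level spacing comes out around 14 kT at 298 K, so essentially everything sits in the ground vibrational state, which is the book’s point.  It also makes my question concrete: the force constant is the input here, so you can’t get it this way without a measured frequency.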

p. 79 — The discussion of heat of formation and heat of combustion and internal energy would be vastly improved by a simple energy diagram showing the elements at 0 kilocalories/mole, CO2 at -94 kilocalories/mole, H2O at -58 kilocalories/mole, and the compounds of interest in between.  It would then be obvious that the heat of formation of a hydrocarbon + the heat of combustion of that hydrocarbon is constant for hydrocarbons of the same atomic composition, allowing inferences about internal energy, strain energy etc. etc.  It’s very clunky expressed in words, but quite clear with a diagram.
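The diagram I’m asking for is really just bookkeeping; here is a minimal Python sketch using the page’s round numbers for CO2 and H2O (the heats of formation of the two butanes below are made-up illustrative values, not data):

```python
# The diagram's arithmetic: elements at 0, CO2 at -94 kcal/mol,
# H2O at -58 kcal/mol (the page's round numbers).
dHf_CO2 = -94.0   # kcal/mol
dHf_H2O = -58.0   # kcal/mol

def heat_of_combustion(n_C, n_H, dHf_compound):
    """CnHm + O2 -> n CO2 + (m/2) H2O; the elements (and O2) sit at 0."""
    return n_C * dHf_CO2 + (n_H / 2) * dHf_H2O - dHf_compound

# Two C4H10 isomers with made-up illustrative heats of formation:
for name, dHf in [("n-butane", -30.0), ("isobutane", -32.0)]:
    print(name, "dHf + dHc =", dHf + heat_of_combustion(4, 10, dHf), "kcal/mol")
# dHf + dHc = 4*(-94) + 5*(-58) = -666 for both isomers, so differences
# in heat of combustion directly reflect differences in internal energy.
```

The sum is fixed by the atomic composition alone, so a lower heat of combustion means a lower internal energy (less strain), which is the inference the text makes in words.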

p. 84 — How was the rotation barrier of the allyl radical determined?  On p. 94 (see the Going Deeper box) possibly by microwave spectroscopy — hopefully this will be explained later in greater detail.

p. 84 — “BDE is really only the energy it takes to break a bond”   What about the activation energy?  Shouldn’t this be the net energy needed to break a bond? 

p. 91 — Hopefully chapter 5 will explain how you can make a statement like ‘the pKa of ethane is 50’ — no one can possibly have found one proton and one ethyl carbanion in 10^27 moles of ethane — which is what pKa 50 implies.
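A back-of-the-envelope check of what pKa 50 implies, in Python.  The 10 M figure for the concentration of liquid ethane is my rough assumption:

```python
import math

Ka = 10.0 ** -50   # pKa = 50
C = 10.0           # mol/L, rough assumed concentration of liquid ethane
NA = 6.022e23      # Avogadro's number

# Autoionization C2H6 <-> H+ + C2H5-, with [H+] = [C2H5-] = x:
#   x * x / C = Ka   =>   x = sqrt(Ka * C)
x = math.sqrt(Ka * C)
ions_per_liter = x * NA

print(f"ion concentration: {x:.1e} M")
print(f"ion pairs per liter: {ions_per_liter:.2f}")
```

Less than one ion pair per liter of liquid ethane, so a pKa like this can’t come from counting ions; presumably it’s extrapolated from measurements on much stronger acids.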

p. 93 — “Alternatively, the A-B-C-D dihedral angle is defined as the angle between the A-B-C plane and the B-C-D plane.”  This should have a pointer to the diagram in figure 2.7 on p. 95, where the sentence becomes obvious. 

p. 94 — “on average 3 kcal per Avogadro’s number of molecules”  — what on earth does this mean? 

p. 97  — A picture of Cp-Co-(CO)3 would be nice.

Music and Weddings

Back from the wedding of one of the violinists I play chamber music with.  Coupled with a graduation 2 weeks ago and a craft festival last week, this means not much Anslyn && Dougherty got read (or anything else).  I will say, after getting through 100 pages or so, that A&&D reads like a novel: extremely fascinating, well paced, and extremely clear for the most part.  It’s like being Rip Van Winkle and seeing answers to many of the questions exercising organic chemists in the early 60s.  Clayden et al. was an excellent prologue.

Two points about music making.  First, unlike the polls about who’s the greatest chemist etc. etc., amateur musicians know almost immediately if another amateur is better than they are.  By better, I don’t mean more technically adept, which you can get around by enough practice.  I mean sheer musicality.  We all have musicality to varying degrees, and I find it interesting that amateurs rarely disagree about who is ‘better’ than they are.  Compare this to the venom expended about sports teams or their individual players.  It’s obvious to me that the newlywed violinist is lightyears better than I’ll ever be.  When we play, she’s the boss, despite being 43 years younger.

Second, playing music allows you to get to know what people are like (not just musically) in an incredibly short period of time.  It’s nonverbal communication of a high order, mostly affective, and very intense.  I was invited to a cellist’s wedding despite having played music with her for only 5 – 6 hours over the course of a chamber music festival.  A connection is formed that would otherwise take repeated social contacts over a much longer period of time.  Just another reason to love music, and music making.

Second Law of Thermodynamics, Entropy, Free Energy

      Second Law of Thermodynamics:

      Kelvin’s formulation — No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work.   E.g. nature exerts a tax — some of the energy supplied by the hot source must get into the surroundings as heat.  

      Clausius’ formulation —  Heat does not pass from a body at low temperature to one at high temperature without an accompanying change elsewhere.

       Berry PChem2 p. 421 — A thermodynamic description of a system containing many molecules is characterized by the use of a small number of macroscopic variables, while a complete microscopic mechanical description requires a vast number of variables.  If a thermodynamic description is to be consistent with a microscopic description, some grouping together, or averaging, or systematic ignoring of microscopic variables must be an inherent part of the connection between the two theories.  The thermodynamic description of a system is inevitably coarser than the microscopic description. 

       Gribbin “Deep Simplicity” p. 110 — “In a sense, classical thermodynamics pretends that time does not exist. Systems are described in terms of infinitesimally small changes that would take an infinite amount of time to shift from one state to another.”  These are the reversible changes from one state to another.

       Seth Lloyd “Programming the Universe” p. 66 — “Almost no scientist doubts its truth, but many disagree as to WHY it is true.”

        No energy (e.g. heat) is assumed to flow through the system — the system is closed.  It is in closed systems that we encounter time reversibility and Poincare recurrences; in open systems (in which energy is flowing through the system) we encounter irreversibility and an arrow of time. 

     McQuarrie & Simon Ch. 20 p. 817  A spontaneous process is said to be an irreversible one, e.g. the reverse of a spontaneous process never happens on its own.  At one time scientists thought that for a process to proceed spontaneously it had to be exothermic, e.g. evolve energy.   The variational principle of quantum mechanics is based on the fact that a system will always seek its state of lowest energy.    However, the expansion of a gas in one vessel into an empty vessel to which it is connected by the opening of a stopcock shows that there is no change in internal energy or enthalpy.   The reverse has never been seen.   Another irreversible (and spontaneous) process is the mixing of gases each initially confined to one container.  Again there is no change in the internal energy or the enthalpy of the system.

       Spontaneous processes exist in which the system absorbs energy — e. g. the melting of ice over 32 F absorbs energy and is spontaneous.  It’s why tubs of ice were used in the summer before air conditioning.

       The direction of these processes can’t be explained by the First Law (even though each process obeys the first law). 

      Atkins 4Laws p. 54 — Spontaneous doesn’t mean fast — it means a tendency for a change to occur — it doesn’t say how fast the change occurs — diamond converts into graphite, extremely slowly (does it?)

      McQ has already used statistical physics in chapters 17 and 18 which involved quantum mechanics.  Systems evolve spontaneously in a direction that lowers their energy and they also seek to increase their disorder.  The two processes are competitive.    

      Atkins Ch. 5 pp. 96 –>   Spontaneous changes are always accompanied by a reduction in the ‘quality’ of energy in the sense that it is degraded into a more dispersed chaotic form.    The first law tells us which changes MAY occur, e.g. which changes are PERMISSIBLE — e.g. those changes in which the total energy of the system (considered in isolation) remains constant.  The energy may change form —  from heat to work, or from heat to a higher temperature.  Low entropy means a high quality of energy, high entropy means just the reverse.

       The second law tells us what changes may occur SPONTANEOUSLY — clearly these must be permissible by the first law.  

       Irreversible processes generate (increase) entropy.  Reversible processes do not generate entropy.   No spontaneous change has ever been reversed without a degradation of energy (e.g. a loss of free energy — see later).  This means conversion of energy into a more dispersed chaotic form.  E. g. the conversion of cohesive mechanical energy (the directed movements of all the atoms in a macroscopic ball) into heat (the random movements of atoms in the floor on which the ball bounces).  

      Heat always flows to cooler objects (Clausius), gases always expand. 

      Atkins 4Laws p. 49 — “I actually have serious doubts about whether Snow understood the (second) law (of thermodynamics) himself.”  This is the famous C. P. Snow of “Two Cultures and the Scientific Revolution” fame.  He said that asking someone about the second law was equivalent to asking if they’d read Shakespeare.

       Atkins 4Laws p. 51 — One of the most important aspects of a steam engine is a place for unused energy to be discarded as heat (which you will recall is defined as follows — Atkins (Four Laws) p. 30 “In thermodynamics, heat is not an entity or even a form of energy, heat is a mode of transfer of energy”  — heat is not a fluid rather —  “heat is the name of a process, not the name of an entity” — heat is the transfer of energy as the result of a temperature difference.    The place where heat is discarded is called the sink.  The cooling towers of a generating station are far more important to its operation than the complex turbines or expensive nuclear reactor that drives power generation. 

     ibid — the efficiency of a steam engine (e.g. any heat engine) is defined as the ratio of the work it produces to the heat it absorbs.  Sadi Carnot (1796 – 1832) derived the following expression for the maximum efficiency of an engine working between the absolute temperatures Tsource and Tsink (did he have absolute temperatures to work with?)

      Efficiency = 1 – (Tsink/Tsource)  equivalent to 1 – T(cold)/T(hot)

     No cyclic process is possible in which heat is taken from a hot source and converted completely into work.  Thermodynamics is silent on rates — it looks at the beginning and the end of a process.

       Kelvin realized that he could define a (relative) temperature scale in terms of work by using Carnot’s expression for the efficiency of a heat engine. 
The efficiency of a heat engine == work produced/heat absorbed at T(high)

     The heat absorbed is at the higher temperature — by the second law heat is not absorbed from lower to higher temperatures.

     1. The work done by the engine can be measured by the height to which a known weight is raised 
     2. The heat absorbed by the engine at the high temperature (so much for heat not being a fluid) can be measured as follows
      a. measure how much work must be done to achieve a given change of state in an adiabatic (no heat transfer) container.
      b. measure the work that must be done to achieve the same change of state in a diathermic container (heat can enter and leave)

      c.  The difference between the two amounts of work represents the heat absorbed (recall that Joule measured the heat equivalent of work).

Carnot derived the following.
     Heat Engine Efficiency = 1 – (Tsink/Tsource)  is equivalent to 
     Tsink = (1 – efficiency)Tsource.
      Here’s how he did it. (For a better explanation see p. 28 of my Boorum and Pease on Chemistry — e.g. some handwritten notes I took).   Around a cycle there is no change in internal energy (U).   So U = 0 = work done + heat absorbed.  Around a Carnot cycle there are two changes in heat  — at the high temperature and at the low temperature.  Entropy is also a state function (like internal energy).  Around a Carnot cycle there are two changes in entropy  q(A)/T(high) and q(C)/T(low).  A and C are states of the system. They sum to zero so 
     q(A)/T(high) = – q(C)/T(low)  
      work = – heat absorbed as U = 0  
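The cycle bookkeeping above can be checked numerically; the temperatures and heat intake below are arbitrary illustrative choices, not anything from Atkins:

```python
# Assumed (illustrative) temperatures and heat intake:
T_hot, T_cold = 500.0, 300.0   # Kelvin
q_hot = 1000.0                 # heat absorbed at T_hot, Joules

# Entropy changes around the cycle sum to zero:
#   q_hot/T_hot + q_cold/T_cold = 0   =>   q_cold = -q_hot * T_cold / T_hot
q_cold = -q_hot * T_cold / T_hot

# Around the cycle deltaU = 0, so work done BY the engine = net heat absorbed
work = q_hot + q_cold
efficiency = work / q_hot

print(f"work: {work:.0f} J")
print(f"efficiency: {efficiency:.2f} (and 1 - Tc/Th = {1 - T_cold / T_hot:.2f})")
```

The efficiency computed from the heats reproduces Carnot’s 1 – Tsink/Tsource, which is the whole derivation in four lines of arithmetic.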

       We still have to specify the value of either Tsink or Tsource.  The triple point of water ( where liquid, solid, gas all coexist) is DEFINED as 273.16 Kelvin.  One can produce this state anywhere — Atkins says it is independent of pressure.

      Clausius DEFINED a change in entropy (deltaS) as the result of dividing the energy reversibly transferred as heat by the absolute thermodynamic temperature at which the transfer took place.  He did this for mathematical reasons.

deltaS = reversible heat supplied/thermodynamic temperature = q/T

       Since heat is energy,  the units of entropy are Joules/Kelvin

Keller (Why Chemical Reactions Happen) p. 10  — A reasonable explanation of why 
     deltaS = q/T
       A given amount of energy will increase the speed of molecular motion to a greater extent if the molecule is moving slowly than if it is moving quickly.  Molecules are moving more slowly at lower temperatures, so the lower the temperature the greater the increase in entropy.   The entropy change of the system for a given process (e.g. water freezing or melting) doesn’t really depend on the temperature; for freezing water it has been calculated (or measured) at 22 Joules/Kelvin*Mole.  However the change in entropy of the surroundings for the process occurring at a given temperature DOES (inversely) depend on the temperature.   Water gives off 6010 Joules/mole of energy as heat when it freezes (so the surroundings gain entropy).  

Clearly entropy is decreased on freezing (as ice is more ordered than liquid water) so 

     deltaS(water –> ice) = -22.0 Joules/Kelvin*Mole

     At 5 C (278 Kelvin) the surroundings gain 6010/278 = 21.6 Joules/Kelvin*Mole.  So the sum of the entropy gained by the surroundings (21.6) and the entropy lost by the freezing water (-22.0) is negative, so the process isn’t spontaneous — it won’t happen, by the second law. 

       At -5 C (268 K)  6010/268 = 22.4 Joules/Kelvin*Mole so there is a net increase in entropy in the universe (by  summing the entropy gained by the surroundings with that lost by the freezing water) so water freezes spontaneously at -5 C. 

       This is why entropy increases when you take EXACTLY the same amount of heat from something at a high temperature and put it into something at a low temperature — what entropy you lose at the high temperature you more than gain at the low temperature because the temperature divisor is less ! ! 
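The freezing example in one place, as Python; the numbers are the ones quoted above (deltaS of the system = -22.0 Joules/Kelvin*Mole, 6010 Joules/mole released as heat):

```python
dS_system = -22.0     # J/(K*mol), water -> ice, from the notes above
q_released = 6010.0   # J/mol handed to the surroundings on freezing

def dS_universe(T):
    # entropy lost by the freezing water + entropy gained by the surroundings
    return dS_system + q_released / T

for celsius in (5.0, -5.0):
    T = celsius + 273.0
    total = dS_universe(T)
    verdict = "spontaneous" if total > 0 else "not spontaneous"
    print(f"{celsius:+.0f} C: dS(universe) = {total:+.2f} J/(K*mol), freezing is {verdict}")
```

The only thing that changes between +5 C and -5 C is the divisor T in q/T for the surroundings, and that alone flips the sign of the total.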

       Notice that this has nothing to do with the work actually done by the system.  An engine with no cold sink can’t do any work because heat (thinking of it as a fluid) has nowhere to go. 

       This gives a new statement of the second law — the entropy of the universe increases in the course of any spontaneous change.  The universe is the system + its surroundings.  So the system might have a decrease in entropy as long as the surroundings had a greater increase in entropy in the course of a given process.

       The deltaS = q/T definition of entropy ‘explains’ why heat can’t flow from cold to hot.  The same amount of q leaving a system at T1 produces a smaller gain in deltaS when it enters a system at T2 if T1 < T2 (see the water freezing example above)

      Considered as information, why does entropy increase with temperature?  The distribution of atoms in energy levels gets wider (more are found at higher energy levels, where fewer were found before).  This means that we are less able to predict the energy level of a given molecule — more disorder (or less information).

      Boltzmann’s formula    S = k log W, where k is Boltzmann’s constant — the gas constant R divided by Avogadro’s number — and W is the number of ways in which the molecules of a system can be arranged to achieve the same total energy.  

       Boltzmann’s formula  can be used to calculate both the absolute entropies of substances (particularly if they have a simple structure like a gas) and entropy changes for a given process (all of them?) such as heating and gas expansion. 
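As a sketch of how S = k log W gives an absolute entropy, here’s the standard textbook example (my addition, not from the notes): a crystal of N molecules each of which can sit two ways with essentially the same energy, so W = 2^N:

```python
import math

R = 8.314   # J/(K*mol), the gas constant

# Standard textbook example (my addition): N molecules, two equally good
# orientations each, so W = 2^N.  Per mole, S = k * ln(2^NA) = R * ln 2.
# Computing it this way avoids evaluating the astronomically large 2^NA.
S_per_mole = R * math.log(2)
print(f"residual entropy: {S_per_mole:.2f} J/(K*mol)")
```

About 5.8 Joules/Kelvin per mole from counting arrangements alone, with no heat measurement anywhere in sight.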

       Berry PChem2 p. 434 — Irreversible process in an isolated system — the final state can’t go back to the initial state by the imposition or relaxation of external constraints WITHOUT requiring external work to be done (on or by the system).   Reversible process — you can impose or remove constraints on the system and get back to the initial state, without work being done.  p. 373  A reversible process is a change from one equilibrium state (e.g. satisfying the thermodynamic equation of state) to another equilibrium state, through a continuous series of equilibrium states.  The equation of state with n independent variables is the graph of a function (a surface) in n+1 dimensional space, so for a perfect gas it is a 2 dimensional surface in P, V, T space (because any two of P, V, T predict the value of the third).  All graphs are manifolds (if you know what a manifold is).

       p. 377  The values of the thermodynamic variables of a system at equilibrium (e.g. the energy) undergo continuous fluctuations.  If the state of a system is changed so slowly that the thermodynamic variables always remain within a standard deviation of their mean values, no single observation can ever show that the state of the system has changed, because the thermodynamic variables are always within the ranges expected for the equilibrium state.  

      Systems not in equilibrium don’t lie on the surface (the graph of the equation of state) — they require extra variables to determine them such as inhomogeneities  (of density, velocity, pressure, temperature) in the system.

       In general, the thermodynamic definition of a reversible process requires that every point on the path be infinitesimally close to an equilibrium state of the system, and that everywhere along the path the relevant intensive variables (pressure, temperature, density) be continuous across the boundaries of the system.   This implies also that the extensive variables (volume) of the system must be infinitesimally close in value in two states which are infinitesimally close — e.g. everywhere along the path of a reversible process.  This definition refers to the case where the system is in contact with some reservoirs (e.g. NOT an isolated system). 

      Any system in which a real change of state is occurring at a measurable rate is necessarily a nonequilibrium system.  If a gas is expanded by moving a piston, it takes some time for the resulting pressure change to propagate through the whole gas.

       Berry PChem2 p. 370 — The adiabatic principle of quantum mechanics says that a sufficiently slow perturbation of the boundary conditions can’t induce transitions between energy levels.  Transitions between states m and n are unlikely if the rate of change of the Hamiltonian with time is smaller than
      | E(m) – E(n) | / tau(mn) 

      Where the characteristic time is of the order of hbar /  | E(m) – E(n) |

      In principle, one can change the volume of the box slowly enough that no transitions between energy levels take place (e.g. the distribution of energy levels remains constant).  Note that since the values of the energy levels depend on the size of the box — the total energy of all the particles in the box increases when the box shrinks — the process is called an adiabatic perturbation.    An adiabatic perturbation in the quantum mechanical sense corresponds to a thermodynamic reversible adiabatic process.   Note that when the box is shrunk only work is done on the gas (heat isn’t transfered — so the process is adiabatic) and this is the source of the increased energy. 
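The remark that the energy levels depend on the size of the box can be made concrete with the one-dimensional particle in a box, where E(n) = n^2*h^2/(8*m*L^2).  The electron mass and the nanometer-sized box below are illustrative choices, not anything from Berry:

```python
h = 6.626e-34   # Planck's constant, J*s
m = 9.109e-31   # electron mass, kg (illustrative choice)

def E(n, L):
    """1-D particle-in-a-box level: E_n = n^2 h^2 / (8 m L^2)."""
    return n ** 2 * h ** 2 / (8 * m * L ** 2)

L_big, L_small = 1e-9, 0.5e-9   # shrink the box from 1 nm to 0.5 nm
for n in (1, 2, 3):
    print(f"n={n}: {E(n, L_big):.2e} J -> {E(n, L_small):.2e} J")
# Every level scales by (L_big/L_small)^2 = 4, so a particle that stays
# in level n during a slow (adiabatic) compression still gains energy.
```

The distribution over levels is untouched, but every level has moved up, which is exactly why shrinking the box does work on the gas without any heat transfer.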

        In a reversible adiabatic work process (no heat energy transferred) the distribution of molecules across the energy levels is unchanged — but the energy levels themselves change.  In a reversible diathermic work process, the heat transferred across the system boundary changes the distribution of the molecules across the energy levels (if you add heat energy, the average molecular energy increases).

Free Energy

      The Helmholtz energy of the system is called A and is defined as A = U – TS, where U is the internal energy of the system.  A is known as the work function (from the German word for work — Arbeit). With T and V constant deltaA = deltaU – T*deltaS.  deltaA is just a disguised form of the change in the total entropy of the universe at constant T and V (of the system).  Why?

       PChem  Tinoco Ed4 p. 97 The Helmholtz free energy is used for reactions at constant volume — quite useful for geochemistry, where the pressure varies widely (Gibbs free energy works at constant pressure) and the volume is essentially constant.

         The term TS that appears in A = U – TS has the dimension of energy.  It can be thought of as a measure of the energy stored in a disordered way in the system, for which U is the total energy.  The difference, U – TS, is the energy that is stored in an orderly way.  Only the energy stored in an orderly way is available to cause orderly motion (e.g. work in the surroundings).  

        Thus U – TS is the energy that is ‘free’ to do work.

          Suppose we have a process in a system which decreases its entropy.  The process will occur only if the entropy of the surroundings increases by a greater amount.  For that to happen some of the change in internal energy must be released as heat — for ONLY heat transactions result in changes in entropy.  So to increase entropy by deltaS,  T*deltaS amount of heat must be released.   So only deltaU – T*deltaS is released as work.  So T*deltaS is the tax that the surroundings demand from the system to compensate for the reduction of the entropy of the system. 

      The Gibbs (1839 – 1903) free energy is the Helmholtz free energy plus PV

      A = U – TS    G = A + PV = U + PV – TS = H – TS  ; H = U + PV

      deltaA tells us the amount of work a process may do at constant temperature

      deltaG tells us the amount of (non-expansion) work a process may do at constant temperature and pressure (so volume may actually change)

        Just as it is not really possible to give a molecular interpretation of enthalpy (really just an accounting device) it isn’t possible to give a simple explanation of the molecular nature of the Gibbs free energy.    Since most biological (and chemical) processes occur at constant temperature and pressure, the change in the Gibbs free energy tells us whether it will happen or not.

        At constant temperature and pressure deltaG = -T*deltaS, where deltaS is the total entropy change of the universe.  So at constant temperature and pressure a process is spontaneous if and only if it corresponds to a decrease in the Gibbs free energy.

       Entropy is invariably positive, so as T increases G = H – TS decreases (assuming H doesn’t increase with temperature)
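Taking the freezing-water numbers quoted earlier, a short Python sketch of where deltaG = deltaH – T*deltaS changes sign:

```python
dH = -6010.0   # J/mol, water -> ice (heat released on freezing)
dS = -22.0     # J/(K*mol), water -> ice

T_transition = dH / dS   # deltaG = 0 here
print(f"deltaG = 0 at T = {T_transition:.1f} K")

for T in (268.0, 278.0):
    dG = dH - T * dS
    verdict = "spontaneous" if dG < 0 else "not spontaneous"
    print(f"T = {T:.0f} K: deltaG = {dG:+.0f} J/mol, freezing is {verdict}")
```

deltaG crosses zero essentially at 273 K, and its sign at -5 C and +5 C agrees with the total-entropy bookkeeping done with the same numbers in the freezing example above.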

       [ Molecular Biology of the Cell 4th ed. p. 10 ]  A living cell has a large internal free energy.  If it is allowed to die and decay towards equilibrium, a great deal of energy is released to the environment as heat.  Free energy is required for the propagation of information.  There is a quantitative relationship between free energy and information.  To specify one bit of information costs a defined amount of free energy (measured in joules)  depending on the temperature. 

      The high fidelity match needed when DNA is synthesized requires that a lot of free energy be released and dissipated as heat as each correct nucleotide is slotted into its place in the structure.  This couldn’t happen unless the system of molecules carried a large store of free energy at the outset. 

     

      Well, that’s the ball game — now on to the second chapter of Anslyn and Dougherty.  Hopefully someone out there has found this useful and/or interesting.  I’ve got more material — but it concerns the third law of thermodynamics (which involves absolute zero) and the ergodic theorems which underlie statistical mechanics.  I’ll review them if I ever get to Dill’s book “Molecular Driving Forces”.  I don’t think they’re of great relevance to physical organic chemistry — biophysics and physical biochemistry yes, but I don’t think A&D covers these matters.

First Law of Thermodynamics – II

This is the second half of the notes I’ve taken for myself over the years concerning the first law of thermodynamics.  See the previous post for why this is appearing.  Even the cognoscenti might find something of interest in this post — e.g. an explanation of why heat capacity decreases with decreasing temperature, using the fluctuation dissipation theorem.  Anecdote aficionados might like to hear about the Hawking–Preskill bet.

     First Law of Thermodynamics: The energy of an isolated system (no heat in or out, no work done on or by) is constant

        Atkins3 p. 48 Heat capacity is measured per mole, and is the amount of heat (measured how?) raising the temperature of a substance (at a given temperature) by a certain amount.   The heat capacity concept was developed by Joseph Black in 1760. Interestingly, it led to the demise of the idea that heat was a fluid.  He heated water and mercury over the same flame, and noted that mercury got hotter.   The heat absorbed or evolved by the system can be monitored by noting the temperature changes taking place in a surrounding bath (with a known volume of substance and a known heat capacity — this again begs the question of how heat is actually measured — perhaps by using the mechanical equivalent of heat, and then using this to determine heat capacity). 

       Atkins7 p. 44 — Actually heat capacity can be measured either per mole (molar heat capacity) or per gram (specific heat capacity)

       Atkins7 p. 44 — In general, heat capacities depend on the temperature and decrease at low temperatures (see later for the explanation which involves the fluctuation dissipation theorem) 

       At the temperature of a phase transition the heat capacity of a sample is infinite (heat just keeps being added to ice to melt it, but the temperature of the ice and/or the water doesn’t change, so deltaT is zero and Cv = heat supplied at constant volume/deltaT).  

      A very nice discussion of exact and inexact differentials as they apply to internal energy and work respectively is found in PChem McQuarrie & Simon p. 773 — which refers to p. 688 and before — p. 688 shows a differential which is not exact, e.g. inexact.  

       In the case of P = f(T,V), the exact differential has the form

              dP = (∂P/∂T)_V dT + (∂P/∂V)_T dV

      Suppose you had an expression for (∂P/∂T)_V and another for (∂P/∂V)_T — never mind how you get them.   Differentiate the first with respect to V and the second with respect to T, and you should get the same thing.  This is because the order of partial differentiation of the two variables won’t matter if P = f(T,V).   
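The cross-derivative test can be checked numerically. A minimal sketch using the ideal gas P = nRT/V, where both mixed partials should come out to −nR/V² (the state point 300 K, 0.025 m³ is an arbitrary illustrative choice):

```python
R = 8.314  # J/(mol K)

def P(T, V, n=1.0):
    """Ideal-gas equation of state, P = nRT/V."""
    return n * R * T / V

def d2_TV(f, T, V, h=1e-4):
    """d/dV of (df/dT), estimated by central differences."""
    dfdT = lambda v: (f(T + h, v) - f(T - h, v)) / (2 * h)
    return (dfdT(V + h) - dfdT(V - h)) / (2 * h)

def d2_VT(f, T, V, h=1e-4):
    """d/dT of (df/dV), estimated by central differences."""
    dfdV = lambda t: (f(t, V + h) - f(t, V - h)) / (2 * h)
    return (dfdV(T + h) - dfdV(T - h)) / (2 * h)

a = d2_TV(P, 300.0, 0.025)
b = d2_VT(P, 300.0, 0.025)
print(abs(a - b) < 1.0)  # True: the mixed partials agree, so dP is exact
```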

       If a thermodynamic property is an exact differential of some other thermodynamic properties it is a state function because the differential is exact, and integration is independent of path.   Examples of state functions include enthalpy and internal energy. Atkins3 p. 59 also goes over these points.   Atkins uses deltaX for state functions, and deltaBarX (dBarX) for path dependent functions.  

       Work and heat depend on the path taken between two points, so they are never exact differentials (state functions).  Work cannot be defined as a function of variables determining the thermodynamic state of the system (e.g.  state variables).  Recall — here state means equilibrium state. 

       Atkins p. 49 — Isochoric heat capacity (Cv) is the amount of heat to cause a change in temperature at constant volume of the substance.  Isobaric heat capacity  (Cp) is the amount of heat to cause a change in temperature at constant pressure.   Isobaric gains and losses of heat aren’t always reflected in the temperatures of the initial and final states, as there may have been expansion or contraction allowing for the performance of work.  Remember that pressure * volume change == work.    

      The first law can be written as dU = dq + dw — the change in internal energy equals the heat added plus the work done on the system.  At constant volume no PV work can be done, so dU = dq (assuming that no other type of work, such as electrical work, is done).  

      Thus Cv = (∂U/∂T)_V — Cv is the heat capacity at constant volume. 

      Thus from a physical measurement (the heat capacity at constant volume) we can measure the change in internal energy (a state function).   Assume we can measure temperature.  We can measure the mechanical equivalent of heat by measuring how much stirring  it takes to raise the temperature of a mole of a substance by a certain amount.  We can measure the heat capacity of a substance (at a given temperature), and thus know by how much we are changing the internal energy (thanks to the first law of thermodynamics)
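A back-of-the-envelope version of Joule’s stirring experiment, sketching the mechanical equivalent of heat (the 10 kg weight, 2 m drop and 1 kg of water are made-up illustrative numbers; 4.184 J/(g·K) is the specific heat of water):

```python
g = 9.81         # m/s^2
c_water = 4.184  # J/(g K), specific heat capacity of liquid water

# A 10 kg weight falls 2 m, stirring 1 kg (1000 g) of water via a paddle wheel
work = 10.0 * g * 2.0                # mechanical work dissipated as heat, in J
delta_T = work / (1000.0 * c_water)  # resulting temperature rise, in K
print(round(delta_T, 4))  # ~0.047 K, which is why Joule needed superb thermometry
```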

       Atkins3 p. 50  Enthalpy — denoted H — it is the internal energy plus pressure * volume.  H = U + PV.   Since U, P, V are state functions, so is H.  Note that the change in enthalpy of a reaction or a physical process doesn’t depend on the work done by or onto the system.  Since H is a state function, it is path independent.  

      The enthalpy change of a reaction (see below) differs from the internal energy change of a reaction because at constant pressure (e.g. that of a reaction in an open vessel) there is a difference in volume between reactants and products — this means that in the course of the reaction some pV work is done (either on the system or by the system).   HOWEVER IN REACTIONS INVOLVING ONLY SOLIDS OR LIQUIDS THE VOLUMES OF THE PRODUCTS AND REACTANTS ARE ABOUT THE SAME, AND EXCEPT UNDER SOME GEOPHYSICAL CONDITIONS WHERE PRESSURES ARE LARGE, THE CHANGE IN ENTHALPY IN A GIVEN REACTION IS THE SAME AS THE CHANGE IN INTERNAL ENERGY (FOR LIQUIDS AND SOLIDS ONLY).   No reaction, however extreme is going to change atmospheric pressure (which is where most reactions are carried out).  

       This is why chemists love enthalpy — most reactions are done in open vessels with constant (atmospheric) pressure, so the heat energy absorbed or produced (which is easy to measure) allows them to measure the enthalpy directly — i.e.  q = deltaH  — this is why the enthalpy change of a reaction is sometimes called the heat of reaction. In what follows, the rather weird looking || on one line with the v immediately underneath on the next line is supposed to stand for ==> (implies).

     H = U + PV   <==>   U = H – PV
                             ||   ; assuming constant P (so VdP == 0) and that only
                             v    ; PV work is done on or by the system
        deltaU = deltaH – PdeltaV
                             ||   ; deltaW == –PdeltaV (expansion work done ON the system)
                             v
        deltaU = deltaQ + deltaW  ; first law
                             ||
                             v
        deltaH = deltaQ           ; assuming constant P and that only PV work
                                  ; is done on/by the system

      So the heat capacity at constant pressure (Cp) is the ratio (at the limit) of heat to temperature change (deltaT) as deltaT goes to zero  

               deltaQ = Cp * deltaT

      So given that deltaH = deltaQ at constant pressure (when only PV work done)

                deltaH/deltaT at constant pressure =  Cp
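A minimal numerical sketch of q = Cp·deltaT at constant pressure (75.3 J/(mol·K) is the approximate molar heat capacity of liquid water; the 2 mol and 5 K temperature rise are made-up illustrative numbers):

```python
Cp_water = 75.3  # J/(mol K), molar heat capacity of liquid water (approx.)

# Heating 2 mol of water by 5 K in an open beaker (constant pressure):
n, dT = 2.0, 5.0
q = n * Cp_water * dT  # heat absorbed
dH = q                 # deltaH = q at constant P, when only PV work is done
print(round(dH, 1))    # 753.0 J
```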

       Atkins 4Laws p. 38  The energy released as heat by a system free to expand or contract as a process occurs, as distinct from the total energy released in the same process, is exactly equal to the change in enthalpy of the system (provided the system is free to expand in an atmosphere that exerts a constant pressure on the system).  What’s the difference?  Where does the extra internal energy go (or come from)? It goes into the work of expanding (or contracting) volume against a constant pressure.  One can get this extra work out as heat if the reaction is done in a closed container which can’t expand. 

        To change a liquid into vapor requires energy to separate the molecules from each other.  This is supplied as heat — e.g. making use of a temperature difference between the liquid and its surroundings.   The extra energy of the vapor was called ‘latent heat’ because it was released when the vapor condensed to a liquid, and was in some sense latent in the vapor.   Latent heat has been replaced by the term enthalpy of vaporization. 

       Atkins 4Laws p. 42 — The fluctuation dissipation theorem says that the ability of a system to dissipate (essentially absorb) energy is proportional to the magnitude of the fluctuations about the mean value of a corresponding property.  Heat capacity is a dissipation term: it is a measure of the ability of a substance to absorb energy supplied to it as heat.   The corresponding fluctuation term is the spread of a population over the energy states of the system — when all the molecules are in a single state (say near or at absolute zero) there is no spread of populations, so the heat capacity of the system is zero.  In most cases the spread of populations increases with increasing temperature, so heat capacity typically increases with rising temperature.

        Here’s what I have about the theorem  from elsewhere  –[ Nature vol. 431 pp. 28 – 29 ’04 ] From Einstein in his annus mirabilis (1905).  The fluctuations in classical Brownian motion which make a pollen particle jitter, also cause friction if the particle is dragged through the medium in which it is embedded (water).  The fluctuation of the particle at rest has the same origin as the dissipation of the motion of a moving particle subject to an external force.   It is one of the deepest results of thermodynamics and statistical physics.
     

      The heat capacity at constant pressure (and no other types of work such as electrical being done on or produced by the system) is thus defined as dH/dT == Cp — the pressure is held constant.  H plays a central role in chemistry because we are so often concerned with processes occurring at constant pressure (reactions occurring in open vessels, the body, etc.).    P is taken to be the pressure of the system; the term pV is part of the general definition of H for any system, and does not imply a restriction to perfect gases.  

      For a perfect gas H = U  + nRT (as H = U + PV and PV = nRT)
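Since H = U + PV and PV = nRT, the gap between H and U for one mole of perfect gas at 25 C is just RT. A quick check:

```python
R = 8.314  # J/(mol K)
n, T = 1.0, 298.15
pv = n * R * T    # the PV term, using PV = nRT
print(round(pv))  # 2479 J: H exceeds U by about 2.5 kJ per mole at 25 C
```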

     Atkins7 p. 46 — When a system is subjected to a constant pressure, and only expansion work can occur, the change in enthalpy is equal to the energy supplied as heat.   Heating liquid water by an electric coil doesn’t expand its volume by much (although it does to some extent) so most of the energy going into the enthalpy change goes to the internal energy of the water (but all of it goes to the enthalpy).

       The thermodynamic internal energy is divided per molecule into several forms:
     1. The energy stored in molecular bonds
     2. The energy of molecular translation
     3. The energy of rotations of the molecule
     4. The energy of vibration of the various bonds of the molecule.

        Thus we can’t take the average energy of a molecule at a given temperature and determine how fast it is moving, if the molecule can rotate or vibrate.  How these energies are partitioned isn’t clear to me. 

      For a substance made up of molecules one must add the energy of interaction of the molecules. 
       Heating is the transfer of energy as a result of vigorous random molecular motion into the surroundings.
       Work involves organized motion — when a weight is raised or lowered, its particles move in an organized way, not just chaotically (although they do move chaotically as well).  
       Work is identified as energy transfer making use of the coherent motion of particles in the surroundings, and heat is energy transfer making use of their random thermal motion.  

      The distinction between work and heat must be made in the surroundings (not within the system). 

        Atkins3 p. 57 Extensive properties depend on the amount of a substance present.  Examples include internal energy, mass, volume, heat capacity (not per mole).  (Extensive properties are not the same as colligative properties — colligative properties depend on the number of dissolved particles, not on the amount of substance in this sense.)  Intensive properties depend on the state of a substance and don’t depend on the amount present — examples include temperature, molar heat capacity (absorbable heat per mole), pressure, density, viscosity, concentration — actually any molar property (a property per mole).  Ultimately, when carried to extremely small amounts of the substance, intensive properties disappear — what is the density or pressure of a single molecule?  More to the point — what is the concentration of a molecule in a very small volume (where none is likely to be found)? 

      Some properties depend only on the present state of the system and not on how it was prepared.  Examples include internal energy, enthalpy, volume, pressure and, most importantly, temperature.  These are called state functions.  Other properties depend on how the state was prepared.  These are called path functions.

      Internal energy is a state function (according to the first law by some contorted reasoning).  If it were a path function, one could cycle between two states of a system by two paths (one emitting work to the outside system, and the other not emitting work) and get a perpetual motion machine.

       A reaction vessel is a thermodynamic system.  The energy transferred as heat during a reaction is called q.  q is a path function and not a state function.  Therefore q depends on HOW the reaction is carried out rather than on the reaction itself.    It is better to discuss energy changes during the course of a reaction using state functions — these don’t depend on how the reaction is carried out (e.g. reversibly or not).   

       If energy is transferred as heat at constant volume, AND if no other kind of work is done (what other kinds?  electrical? mechanical? — how many kinds are there? ) then the change of internal energy is equal (in absolute amount) to the heat transferred (in absolute amount).  

       [ Science vol. 305 p. 586 ’04 ] Just as scientists in the 19th century figured out that energy can neither be created nor destroyed, many 20th century physicists concluded that information is also conserved.    Black holes posed a big exception to this as information (as well as light or mass) never gets out.   Hawking said it would lose information, Caltech’s Preskill said that information would be safe until the black hole disgorged it.  

       Hawking recently proved, to his satisfaction, using the Euclidean path integral method, that information isn’t destroyed when it falls into a black hole.   This implies that black holes aren’t portals to another universe.  Hawking gave Preskill a Baseball Encyclopedia. 

       Reactions for which deltaH > 0 are called ENDOTHERMIC — enthalpy being added to the system, while those in which deltaH < 0 are called EXOTHERMIC.    Thus emission of heat by a reaction loses internal energy (and enthalpy too)  for the system which is why reactions giving off heat have negative deltaH ! It all fits with the sign convention that work done on or heat added to the system increases the internal energy of the system.  So heat being given off lowers the energy of the system.  Internal energy is like height.  Reactions go from higher internal energy to low.

        Enthalpies of reactions are reported for reactants and products in their standard states.  The standard state is the most stable form of an element at a given temperature (usually 25 C == 298.15 K) and always 1 bar (100,000 Pascals) — atmospheric pressure is 101,325 Pascals — recall that a Pascal is 1 Newton/sq. meter.   For an element in its standard state the enthalpy of formation is taken as zero.  For carbon, the standard state is graphite.  Hydrogen, oxygen and nitrogen are diatomic gases in their standard states. 

       The standard enthalpy of formation of a compound is always relative to the elements making up the compound in their reference phases — the thermodynamically most stable phase under standard conditions (except for phosphorus — where white phosphorus is taken as the reference phase).    The standard enthalpy of formation of any element in its reference state is taken to be zero. 

        The reaction is considered to begin with the reactants in an unmixed state (there is such a thing as the enthalpy of mixing) and the products in an unmixed state.  In the case of ionic reactions in solution, the enthalpy changes accompanying mixing and unmixing are insignificant in comparison with the contribution from the reaction itself.  

        When considering the enthalpy change of a reaction, one subtracts the enthalpies of formation of the reactants from the enthalpies of formation of the products.  You must multiply the enthalpy of formation of each moiety by the number of moles of each moiety in the reaction. 
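A sketch of that bookkeeping for the combustion of methane (the formation enthalpies are approximate textbook values in kJ/mol; O2 is zero because it is an element in its reference state):

```python
# Approximate standard enthalpies of formation, kJ/mol
dHf = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O(l)": -285.8}

# CH4 + 2 O2 -> CO2 + 2 H2O(l)
reactants = {"CH4": 1, "O2": 2}
products  = {"CO2": 1, "H2O(l)": 2}

# deltaH(reaction) = sum over products - sum over reactants,
# each formation enthalpy weighted by its number of moles
dH_rxn = (sum(n * dHf[s] for s, n in products.items())
          - sum(n * dHf[s] for s, n in reactants.items()))
print(round(dH_rxn, 1))  # about -890 kJ/mol: strongly exothermic
```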

       Hess’s law — you can add the enthalpies of each of a sequence of reactions together.   This is because enthalpy is a state function (as is deltaH for that reason). 

       The change in internal energy for a reaction (deltaU) is equivalent to the heat emitted or absorbed if the products and reactants have the same volume (so no work is done).  The change in enthalpy for a reaction (deltaH) is equivalent to the heat emitted or absorbed at constant PRESSURE (assuming only PV work is done to or by the reaction).   However, since the volumes of reactants and products are essentially the same in solids and liquids, the volume doesn’t change significantly, so both the enthalpy and internal energy change of a reaction is the same in solution.   
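For gases the difference between deltaH and deltaU is not negligible; it is set by the change in moles of gas via deltaH = deltaU + deltaN(gas)·RT. A sketch for methane combustion, assuming the approximate deltaH of −890.3 kJ/mol used above:

```python
R = 8.314e-3  # kJ/(mol K)
T = 298.15    # K

# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l): gas moles go from 3 to 1
delta_n_gas = 1 - 3
dH = -890.3                     # kJ/mol, enthalpy of combustion (approx.)
dU = dH - delta_n_gas * R * T   # from deltaH = deltaU + deltaN(gas)*R*T
print(round(dU, 1))  # about -885.3 kJ/mol: dH and dU differ by only ~5 kJ
```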

        There is a description of the adiabatic bomb calorimeter on Atkins3 p. 86.   The calorimeter is immersed in an external water bath (and contains an internal water bath — the inside bath).  The temperature of the external bath is adjusted to that of the internal water bath so no heat exchange occurs between the internal bath of the calorimeter and the external bath (i.e. the process is adiabatic — no heat energy is exchanged — when the calorimeter is considered as ‘the system’).  The temperature of the bath inside the calorimeter certainly rises or falls as the reaction takes place.  The reason it is called a bomb calorimeter is that the walls of the bomb in the internal bath are quite sturdy, so that no matter how much energy is released, the volume doesn’t change and no work is done.   The device can be used for combustion studies as well.   One measures the change in temperature of the bath inside the calorimeter and finds the heat of the reaction using the heat capacity (and the volume of the internal bath) of water at constant volume.   To be truly accurate the dependence of the heat capacity of water on temperature should also be known.  

      Atkins3 p. 87  Varieties of enthalpy.   The enthalpy of sublimation of carbon (from graphite) was quite hard to measure, but quite important for organic chemistry.   It is 716.68 kiloJoules/mole.  Sublimation is just one type of phase transition.  Other phase transitions have their own enthalpies — these include the enthalpy of vaporization (how in the world can volume be constant? — it doesn’t have to be — but pressure does!) and melting (latent heat of melting).  

      The enthalpy of solution is usually measured as what happens at infinite dilution (so the molecules of what is being dissolved don’t interact with each other).   The enthalpy of formation of a compound in solution is the addition of the enthalpy of formation of the substance by itself added to the enthalpy of solution of the compound (by Hess’s law).   Enthalpies can be added together because they are state functions.    For reasons that aren’t entirely clear, the enthalpy of formation of hydrogen ion in water is taken to be zero at all temperatures.  

      As an example of how complicated things can get — consider the enthalpy of formation of NaCl in solution.
     1. Start with the enthalpy of sublimation of solid sodium (reference state at 25 C) into a gas.
     2. Next the enthalpy of ionization of sodium gas into Na+ and an electron.  This value is obtained by spectroscopy.  Volume changes should be added in here, but volume changes cancel when the electron joins a chlorine atom.
     3. Next the bond dissociation enthalpy of Cl2 (again the reference state of chlorine at 25 C) into Cl atoms.
     4. Next the bond association enthalpy of Cl atom + electron to chloride ion in the gaseous state.  Some values are from spectroscopy and others are from calculation.  Here is where the volume change is cancelled out.  
     5. Next the enthalpy of formation of NaCl as a solid from a gas is added in.  This is the lattice enthalpy.  At this point we have gone from Na (solid) and Cl2 (gas) at 25 C to NaCl as a solid. 

     6. Last the enthalpy of solution of NaCl in solvent (usually water) is measured.   
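Because enthalpies of the steps simply add (Hess’s law), the cycle reduces to a sum. A sketch using approximate textbook values in kJ/mol; the subtotal through step 5 should land near the measured enthalpy of formation of solid NaCl, about −411 kJ/mol:

```python
# Approximate textbook values, kJ/mol
steps = {
    "1. sublimation of Na(s)":           107.3,
    "2. ionization of Na(g)":            495.8,
    "3. 1/2 dissociation of Cl2(g)":     121.7,
    "4. electron attachment to Cl(g)":  -349.0,
    "5. lattice formation of NaCl(s)":  -787.0,
}
dHf_NaCl = sum(steps.values())  # Hess's law: step enthalpies add
print(round(dHf_NaCl, 1))       # about -411 kJ/mol

# Step 6: add the (small) enthalpy of solution to reach NaCl(aq)
dH_solution = 3.9  # kJ/mol, approximate
print(round(dHf_NaCl + dH_solution, 1))
```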

       Atkins7 p. 46 — If a process involves only solids or liquids, the values of changes in enthalpy (deltaH) and internal energy (deltaU) are ‘almost identical’.  To really measure deltaH — Atkins talks about a thermally insulated vessel open to the atmosphere (so how can it be thermally insulated — unless air is such a poor conductor of heat < which it is ! >)  — this is the isobaric calorimeter. 

      Then they talk about a differential scanning calorimeter — which measures the heat transferred to or from a sample at constant pressure during a physical or chemical change. 

       Atkins7 p. 52 — What happens when a gas expands adiabatically  — no heat enters or leaves  the gas.  Assume the expansion is reversible (so external pressure and internal are the same).  I just don’t follow the discussion.    Even so the temperature of the gas drops — because the gas is doing work against external pressure — what if it is expanding into a vacuum ? 
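For a reversible adiabatic expansion of a perfect gas the standard result is that T·V^(R/Cv) stays constant, so the temperature drop can be computed directly; and in a free (Joule) expansion into a vacuum there is no external pressure, no work is done, and a perfect gas doesn’t cool at all. A sketch for a monatomic perfect gas (the 300 K starting point and the doubling of volume are made-up illustrative numbers):

```python
R = 8.314       # J/(mol K)
Cv = 1.5 * R    # molar Cv of a monatomic perfect gas

def T_final(T_init, V_init, V_final):
    """Reversible adiabatic expansion: T * V**(R/Cv) is constant."""
    return T_init * (V_init / V_final) ** (R / Cv)

print(round(T_final(300.0, 1.0, 2.0), 1))  # ~189 K: the gas cools doing work

# Free expansion into a vacuum: zero external pressure, so zero work and
# (for a perfect gas) zero temperature change -- T would stay at 300 K.
```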

       Thermochemistry — the study of the heat produced or absorbed by chemical reactions.  This is a branch of thermodynamics because a reaction vessel and its contents form a system resulting in the exchange of energy between the system and the surroundings.    Calorimetry can be used to measure the heat absorbed or produced by a reaction.    

     If the reaction occurs at constant volume — heat in or out measures the change in U.  If it occurs at constant pressure — heat in or out measures the change in enthalpy (H) — assuming no other type of work (electrical, expansion, surface expansion, extension  — Table 2.1 p. 39 Atkins7) is done on or by the system. 

         Enthalpies can be reported for physical changes (with no chemical reaction occurring) — such as vaporization, freezing, sublimation.   Quick — when you melt something, is the enthalpy change positive or negative?  You have to supply heat from outside, so internal enthalpy (and energy) are raised, so the enthalpy of melting, vaporization, sublimation is positive. 

       Since enthalpy is a state function, it is independent of the path that brings a system to that state, so we can put a solid into a gas by (1) sublimation or by (2) melting followed by evaporation — the enthalpy change of processes (1) and (2) will be the same.    Another consequence of enthalpy being a state function is that the enthalpies of a process going one way will be the negative of the process going the other way.   This is true for both physical and chemical changes.  
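A sketch of that path independence using water near 0 C (6.01 and roughly 45 kJ/mol are approximate values for fusion and vaporization at that temperature; direct sublimation of ice is measured at about 51 kJ/mol):

```python
# Approximate enthalpies for water near 0 C, in kJ/mol
dH_fusion = 6.01         # ice -> liquid water
dH_vaporization = 45.05  # liquid -> vapor at 0 C (about 40.7 at 100 C)

# Path (2): melt, then evaporate.  Since H is a state function this must
# equal path (1), direct sublimation of ice.
dH_sublimation = dH_fusion + dH_vaporization
print(round(dH_sublimation, 2))  # ~51 kJ/mol by either path

# The reverse process (deposition) has the opposite sign:
print(round(-dH_sublimation, 2))
```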

        You can write chemical reactions as occurring between compounds in their pure states 
     (e.g. methane + 2 O2 –> CO2 + 2H2O + heat given off < negative H > )
     The enthalpies of mixing methane with O2 and then separating the products can be ignored, because they largely cancel — however, not for ionic reactions in solution.  

      You must multiply the enthalpy of formation per mole of the reactants by the number of moles involved in the reaction.  

     Atkins7 p. 59 — Another enthalpy is the specific enthalpy — the enthalpy of combustion per gram of material (e.g. like specific gravity).  Useful in biochemistry — where substances are rarely pure — one can talk about the specific enthalpy of combustion of a gram of fat.   One can also talk about the enthalpy density — the enthalpy of combustion of a liter of material. 

       Atkins7 p. 62 — Computer aided molecular modeling is becoming the technique of choice for estimating standard enthalpies of formation of molecules with complex 3 dimensional structures.  The idea of finding the enthalpy of formation of a group (such as a methyl group) is regarded as primitive and old fashioned — in any event such modeling is derived from a series of compounds.

      The modeling predicts the relative stabilities of conformational isomers fairly well — ‘good agreement between calculated and experimental values is relatively rare.’

       Heat added or lost is always an inexact differential (it depends on the path taken between initial and final states).  It is zero for an adiabatic process (even though the temperature may change during one). 

       The internal energy of a perfect gas is independent of volume at constant temperature (because its internal energy arises only from the kinetic energy of its molecules).  For any isothermal change in a gas, its energy doesn’t change.   However, it isn’t zero for a nonperfect gas (which is all gases).  Joule tried to measure the internal energy (dU/dV)T by letting a gas at 22 atmospheres expand into a vacuum.   He got zero, but his apparatus was extremely inaccurate.  As the volume which a gas is confined in shrinks, if the molecules of gas attract each other, the internal energy (U) will diminish, while it will increase if the molecules repel each other.  Enough squishing and you get a liquid and then a solid.
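For a van der Waals gas the ‘internal pressure’ (∂U/∂V)_T equals a·n²/V², which captures exactly the attraction effect described above: U rises on isothermal expansion because the molecules must be pulled apart against their mutual attraction. A sketch using an approximate a parameter for CO2 (the 1 L to 2 L expansion is a made-up illustrative case):

```python
# van der Waals internal pressure: (dU/dV)_T = a * n**2 / V**2
a_CO2 = 0.364  # Pa m^6 mol^-2, approximate attraction parameter for CO2

def dU_isothermal_vdw(n, V1, V2, a):
    """U change when a vdW gas expands isothermally from V1 to V2 (in m^3),
    from integrating a*n**2/V**2 over the volume change."""
    return a * n**2 * (1.0 / V1 - 1.0 / V2)

# 1 mol of CO2 expanding from 1 L (1e-3 m^3) to 2 L: U increases
print(round(dU_isothermal_vdw(1.0, 1e-3, 2e-3, a_CO2)))  # 182 J
```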

First Law of Thermodynamics – I

The second chapter of Anslyn & Dougherty heavily involves thermodynamics.  Organic chemists get by with a basically qualitative understanding of the subject, but it’s always good to go deeper.  So I went to the notes I’ve taken on the subject over the years before plunging into Ch. 2.  I found them useful and will post the set of them here in the hope that some readers will as well.  Back in the day computer memory was stored on iron cores (not silicon), programming was done in assembly language (or even worse, machine language) and errors were frequent (they still are) but much harder to find.  Computers crashed with dismaying regularity, and the only thing to do was look at the state of the computer’s memory to find out what had gone wrong — this became known as a core dump.  This series of posts is basically a core dump of the notes I’ve taken on thermodynamics over the years for my own benefit and understanding.

Be warned.  The notes are jumbled up and repetitive to some extent. If I don’t understand something, I say so.  They certainly aren’t the way to learn the subject.  But if you’ve gone through thermodynamics in the past and haven’t looked at it for a while, they might be a good way to get up to speed.  Hopefully some of you will find them useful.

Sources are the various editions of Atkins book (Atkins3 is the third edition).  Berry is the big text he wrote on PChem.  McQuarrie and Simon should be familiar to most.  Atkins 4Laws is a small book (4 Laws that Drive the Universe) he wrote with very little math in it explaining what thermodynamics really is about.  I liked it a lot.

       The first law of thermodynamics: The energy of an isolated system (no heat in or out, no work done on or by) is constant

       The name thermodynamics reflects its origin in the steam engine (Invented by James Watt in 1780) etc. etc. and an interest in turning heat into motion.  Stowe “An introduction to thermodynamics and statistical mechanics” p. 4 — 2nd Ed. 2007.  This is why the original sign convention for work done by the system had it positive (rather than negative as work is defined today).  Heat added to the system has always been positive. 

        Presently people are interested in direct processes — where energy is converted from one form to another, without much heat being involved — sunlight into electricity, chemical energy into electrical energy (fuel cells, batteries).  

        Atkins 4Laws p. 45 — The first law is essentially based on the conservation of energy, the fact that energy can neither be created nor destroyed.  Noether’s theorem says that every conservation law corresponds to a symmetry.  In the case of the conservation of energy, the symmetry is that of the shape of time.  Energy is conserved because time is uniform — time flows steadily; it doesn’t bunch up and run faster, then spread out and run slowly.   If it did, energy wouldn’t be conserved. 

        Gribbin — “Deep Simplicity” p. 110 — “In a sense, classical thermodynamics pretends that time does not exist. Systems are described in terms of infinitesimally small changes that would take an infinite amount of time to shift from one state to another.”

       The first law assumes that the system under consideration has no energy flowing through it and that the system does no work — i.e. the system is isolated.   It is in closed systems that we encounter time reversibility and Poincare recurrences; in open systems (in which energy is flowing through the system) we encounter irreversibility and an arrow of time. 

        Atkins7 p. 35 — The internal energy of an isolated system is constant.  How internal energy is measured isn’t given.  Work done ON a system, or heat transferred TO a system, raises the internal energy.  Berry PChem2 p. 371 — Heat is simply a term for energy which crosses the boundary of a closed system (no mass in or out) in a form other than that of work.  “Heat is just energy in transit to or from the system”

      U stands for internal energy (but U must be abbreviating some word — probably in German) deltaU = q + w  (q is heat, w is work).  However this formulation also says something else — work and heat are equivalent forms of energy.  (This simple law took a huge amount of work to really establish).  This is the acquisitive sign convention — w > 0, q > 0 if energy is transferred to the system as work or heat.   We are viewing the flow of energy as work or heat from the system’s perspective.   This is the way Tinoco’s PChem book and the course I took regard things.

       Atkins7 p. 36 — a nice derivation of the first law assuming that all we know how to measure is work (in the physicist’s sense of force x distance).  Consider an adiabatic system (no heat in or out — this assumes we know what heat is and how to insulate against it), and also assume that we know how to measure temperature.  Experimentally, it was found that the same increase in temperature in this system is brought about by the same quantity of ANY kind of work done on the system (which we do know how to measure, but recall that finding the mechanical equivalent of heat by Joule was a very big deal).   Also measuring temperature was a big deal.   This also assumes that we know how to measure pressure, volume and temperature (or any other state variable).  

        Berry PChem2 p. 379 The only way a thermodynamic state can be changed in a system with adiabatic walls is through work. 

       Measuring work by passing an electric current seems rather hairy, measuring the work done by stirring a solution does not.   However, the work done by an electric motor (using current) lifting a weight against gravity is clear, so the work of a current flow can easily be translated into the work of lifting an object. 

       The restatement of the first law says that the work needed to change an adiabatic system from one specified state to another specified state (without saying just what it takes to specify the state) is the same however the work is done.   This is like climbing a mountain: the altitude at the beginning and end of a path doesn’t depend on the route taken.  The altitude is independent of the path.   The observation that a state is independent of the path implies the existence of a state function.   So altitude and energy are both state functions.

       How to measure heat?  It is the difference between two changes of state under different conditions (adiabatic and diathermic), e.g. between adiabatic work (no heat transfer) and diathermic work (heat transferred) — again one measures the ‘state’ of a system, but Atkins7 gives no clue (at this point) about how such things are done.  We are to infer that pressure, volume, temperature, and number of moles are all you need for a gas (from the previous discussion). 

       However, this does give one a mechanical definition of heat in terms of work. 

       Atkins3 p. 38 Chapter 2 — Amazingly, he never defines heat in this chapter.  Heat is what is transferred between objects at different temperatures allowed to come to (thermal?) equilibrium with each other.  This isn’t surprising, as chemists thought heat was some type of fluid, and it wasn’t until the mid 1800s that physicists said that heat was some type of motion, without knowing just what heat was a motion of (since the atomic constitution of matter remained controversial even into the early 20th century).  

       Atkins (Four Laws) p. 30 “In thermodynamics, heat is not an entity or even a form of energy, heat is a mode of transfer of energy”  — heat is not a fluid rather —  “heat is the name of a process, not the name of an entity” — heat is the transfer of energy as the result of a temperature difference.

        Berry (PChem2 p. 371) Heat is a term for energy which crosses the boundary of a closed system in a form other than that of work. 

       Recall that we have been able to define the internal energy of a system as a state function.  Atkins Four Laws p. 28 “The amount of energy that is transferred as heat into or out of the system can be measured very simply” FIRST — we measure the work required to bring about a given change in the adiabatic system (no heat transferred), and then SECOND — we measure the work required to bring about the same change of state in the diathermic system, and take the difference — the difference is the energy transferred as heat.  A point to note is that the measurement of the rather elusive concept of ‘heat’ has been put on a purely mechanical foundation as the difference in the heights through which a weight falls to bring about a given change of state under two different conditions”  — e.g. adiabatic and diathermic. 
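A sketch of Atkins’ two-step prescription (the two work values are made-up illustrative numbers, not measurements; the point is only the subtraction):

```python
# Energy is a state function, so a given change of state needs the same
# total energy input however it is delivered.
w_adiabatic = 500.0   # J of work for the change of state with adiabatic walls
w_diathermic = 320.0  # J of work for the SAME change with diathermic walls
q = w_adiabatic - w_diathermic  # the balance must have entered as heat
print(q)  # 180.0 J transferred as heat through the diathermic wall
```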

       On an atomic level, the difference between work and heat is quite clear.  Work performed by a system on its surroundings is the transfer of energy which causes a uniform motion of atoms (e.g. the motion of an aggregate of atoms) in the surroundings — e.g. the lifting of a weight against gravity.  Heat is a transfer of energy to the surroundings but in the form of increased random motion of the atoms in the surroundings.

       Once energy is transferred INTO a system, either (1) by making use of the uniform motion of atoms in the surroundings (a falling weight) — which could be used to turn a rotor inside the system increasing its temperature, or (2) by causing the transfer of energy as heat, there is no memory of how it was transferred.  Atkins 4Laws p. 33 — he does make a distinction between how energy is stored — e.g. kinetic energy of the atoms or the potential energy due to the position of the system.  This energy can be withdrawn either as heat or as work.

       One statement of the first law is “the internal energy of an isolated system is constant.” 
     

      Further notes on the first law are gleaned from McQuarrie and Simon and are interspersed with those of Atkins3 and Atkins7.

      Atkins7 p. 33 — The total energy of a system is called its internal energy (U).  It is the total kinetic and potential energy of the molecules in the system under consideration.  (2 Aug ’04 — What about vibration? Potential energy relative to what?)

      Atkins7 p. 31 — the energy of a system is its capacity to do work.   A boundary between the system under observation and its surroundings permitting the transfer of heat is called diathermic.  A boundary not permitting heat transfer between the system and its surroundings is called adiabatic. 

      McQ p. 766 – We define heat q, to be the manner of energy  transfer that results from a temperature difference between the system and its surroundings.  Heat input to the system is considered positive as it raises the internal energy.   McQ defines work (w) to be the transfer of energy between the system of interest and its surroundings as a result of the existence of unbalanced forces between the two.   Work can always be related to the raising or lowering of a mass in the surroundings of the system. 

        When a gas expands against a constant external pressure, the work done by the gas depends on that external pressure, not just on the initial and final states of the gas — this is why work isn’t a state function.    The higher the pressure it expands against, the greater the work done by the gas.   Thus a gas expanding to twice its size against two different external pressures will have done more work against the higher pressure.  The states of the gas at the beginning and end of both expansions are the same < 12/03 — couldn’t the temperature be different ? >, so work is not a state function.  Similar considerations apply to compression of a gas.  

     Aha !  25 Dec ’01 — the reason the integrals of PV work of a gas and of the internal energy of a gas look so similar is that you are integrating over state functions (at constant pressure it’s just over dV) and when integrating over dU, the internal energy depends only on the state of the gas.  In one case you subtract Vinitial from Vfinal and use Pexternal (at constant pressure), and in the other you subtract Ui from Uf.    P can be taken out of the integral because it is constant.  If P varied all over the place as the gas expanded, P would be some other function of volume and that function would stay inside the integral and have to be integrated, yielding different amounts of work depending on the ‘path’ the pressure took (given by the function) as V varied from Vi to Vf.   This assumes that you understand path integrals which I don’t. 
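This path dependence is easy to check numerically.  A minimal sketch (the pressures and volumes are illustrative numbers, not from the text): the same doubling of volume against two different constant external pressures gives two different amounts of work.

```python
# Work done BY a gas expanding against a constant external pressure:
#   w = P_external * (V_final - V_initial)
# Same change of state, two different external pressures, two different
# amounts of work -- which is why work is not a state function.
# (All numbers are illustrative.)

def work_constant_pressure(p_external, v_initial, v_final):
    """PV work in Joules done by the gas against a constant external pressure."""
    return p_external * (v_final - v_initial)

v_i, v_f = 0.001, 0.002     # m^3: the gas doubles in volume in both cases
w_low = work_constant_pressure(50_000.0, v_i, v_f)     # against 50 kPa
w_high = work_constant_pressure(100_000.0, v_i, v_f)   # against 100 kPa

print(w_low, w_high)   # more work is done against the higher pressure
```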

     McQ p. 770 — If you compress a perfect gas so its temperature stays the same (e.g. PiVi = PfVf), using an external pressure just slightly greater than the internal pressure, then P is a function of V — and the process is reversible as well.  P = nRT/V, and integrating this over dV gives a logarithm — the point is that no extra work is done on the gas under these conditions.  When Pext is constant, where does the extra work done on the gas go?    Does it heat the gas?   
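The logarithmic integral McQ describes can be sketched in a few lines (n, T, and the volumes are illustrative values): integrate P = nRT/V over dV numerically and compare with the analytic nRT ln(Vf/Vi).

```python
import math

# Isothermal reversible work on/by a perfect gas:
# P is a function of V (P = nRT/V), so the work integral gives a logarithm.
R = 8.314            # J/(mol K)
n, T = 1.0, 298.0    # 1 mole at 298 K (illustrative values)
v_i, v_f = 0.010, 0.020   # m^3 -- expansion to twice the volume

# Analytic result: w = integral of (nRT/V) dV = nRT ln(Vf/Vi)
w_analytic = n * R * T * math.log(v_f / v_i)

# Crude midpoint Riemann sum over the same path, as a check
steps = 100_000
dv = (v_f - v_i) / steps
w_numeric = sum(n * R * T / (v_i + (k + 0.5) * dv) * dv for k in range(steps))

print(w_analytic, w_numeric)   # the two agree closely
```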

      When a gas expands reversibly against a pressure infinitesimally lower than the pressure of the gas, PV work is done by the gas.  If heat doesn’t enter or leave the gas (an adiabatic process), then by giving up work the internal energy of the gas must drop.  This can only mean that the temperature of the gas drops in an adiabatic reversible expansion.  McQ notes that by a discussion in a previous chapter, the internal energy of an ideal gas depends only on the temperature ! !  Thus in an adiabatic reversible expansion the plot of P vs. V is not along an isotherm of PV = constant.   Heat can be added and subtracted reversibly (I thought the second law of thermodynamics prevented this).  Neither reversible work nor reversible temperature change is a state function.  McQ p. 774. 
 
      Atkins3 starts with a truly dopey definition of work —  Work is done if the process could be used to bring about a change in the height of a weight somewhere in the surroundings.  
      

       Weight = acceleration of gravity * mass == force exerted on a mass.  Work is measured as force times distance (in this case as weight x height), and it has the same units as energy (see Joule).  

        Energy is the capacity to do work.  When work is done on an otherwise  isolated system its capacity to do work is increased (so its internal energy must be increased).   Heat can change the internal energy of a system.

       First law of thermodynamics — when a system changes from one state to another along any adiabatic pathway (no heat transfer into or out of the system — the temperature of the system may well change in the process, however; see p. 86, where how this is measured with an adiabatic bomb calorimeter is at last described after 50 pages of confusion), the quantity of work done is the same irrespective of the means employed.   However, when heat is transferred all bets are off, as not all transferred heat can be used as work.  

       Atkins7 p. 33 —  Work is identified as energy transfer making use of the organized (e.g. correlated) motion of atoms in the surroundings.  Heat is identified as energy transfer making use of thermal motion in the surroundings.  

       This is analogous to noting that one’s vertical distance from the peak of a mountain is independent of the path of the ascent.  The independence of path of vertical distance implies the existence of a property of the mountain — which we call altitude.  

      The property of thermodynamic systems so expressed is called the internal energy of the system.    The first law of thermodynamics is that the internal energy of an isolated system is constant.   Work done ON the system is considered positive, as is heat added to the system. 

       Now let the system change between the same initial and final states as before, but allow heat to enter or leave the system.  The internal energy change is the same, but the amount of work between the two states might be different depending on the path chosen.    However, the sum of the work done and the heat exchanged add up to the change in internal energy.    The values of heat and work are dependent on the path taken.    

      deltaU (where U == internal energy) = w + q, where w is positive for work done on the system (negative for work done by the system) and q is positive for heat added to the system (negative for heat removed)

      If deltaU = 8, then 8 = 5+3, 6+2, 12-4, etc, etc.  
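That bookkeeping can be written out directly (arbitrary numbers): many different (w, q) splits give the same deltaU.

```python
# First-law bookkeeping: deltaU = w + q, where w is the (signed) work done ON
# the system and q the (signed) heat added TO it.  Arbitrary numbers.
delta_U = 8.0

# Different paths between the same two states: w and q vary,
# but the state function deltaU is always the same.
paths = [(5.0, 3.0), (6.0, 2.0), (12.0, -4.0), (-2.0, 10.0)]

for w, q in paths:
    assert w + q == delta_U   # every path sums to the same deltaU
print("every path gives deltaU =", delta_U)
```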
 

      The first law provides a way to define heat mechanically.  However, this requires a way to measure the internal energy of a system — which isn’t described until the next chapter.  Here’s how to measure the mechanical equivalent of heat.  Assuming you know deltaU between two states, you measure the amount of work needed to go from one state to the other adiabatically (e.g. no heat transferred into or out of the system), and then measure the amount of heat required to go between the same two states doing no work.  If you then ASSUME that the first law is true, since work and heat are both energy, you have determined the mechanical equivalent of heat.   Joule in the mid 1800s was very involved in trying to measure the mechanical equivalent of heat.  

      1 Joule is the work < force x distance > done by a force of 1 Newton acting over a distance of 1 meter (about 40 inches).  Note that moving a mass per se has no fixed energy cost: lifting 1 kilogram (2.2 pounds) through 1 meter against gravity takes about 9.8 Joules, since the force required is its weight (mass times the acceleration of gravity).   

      The Joule’s units are kiloGram * (meter)^2 / (second)^2 — Note ! the same units as kinetic energy, e.g. mass * (velocity)^2.   

       The analogue in the CGS (centimeter gram second) system of units is the erg. 1 Joule is also about 0.24 calories, and a calorie is the amount of energy (usually measured as heat) needed to raise 1 gram of water by 1 degree Centigrade (from 14.5 to 15.5 degrees Centigrade).  One calorie therefore is equivalent to 4.184 Joules.  The calorie we talk about in food is really 1000 of these small calories.  So if you weigh 220 pounds (100 Kg) you have to climb about 14 feet to burn 1 food calorie, or about 14,000 feet to burn 1000 calories.  This is why it is so hard to lose weight. 
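The arithmetic behind that last estimate, treating the effort as lifting your own weight against gravity (a rough sketch — walking on level ground burns energy much less efficiently than this):

```python
# How far must a 100 kg (220 lb) person climb to burn one food calorie?
# Treat the effort as lifting the body against gravity: energy = m * g * h.
m = 100.0       # kg
g = 9.8         # m/s^2
kcal = 4184.0   # Joules in one food calorie (1000 small calories)

h_meters = kcal / (m * g)       # height climbed per food calorie
h_feet = h_meters / 0.3048      # convert to feet

print(round(h_feet, 1), "feet of climb per food calorie")   # about 14 feet
```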

       The first law also implies that the internal energy of an isolated system can’t change — isolation means that no work is done on or by the system, and no heat is transferred into or out of the system.    This leads to the impossibility of constructing a perpetual motion machine.    An isolated system however can undergo a change of state — liquid water below 0 C (32 F) will freeze, but the energy of the system won’t change.  

        dw is the work done on a system, and dq is the heat supplied to the system.  Both are positive when energy flows into the system, since both raise its internal energy.    However w’ is the negative of w and is positive when the system does work on its surroundings.   w’ is what chemists usually measure.  

       Work is force x distance.   Interestingly, I’m battling (22 Dec ’01) with the idea of linear functionals, and work is such an item — it takes a vector (force) and another (displacement) and produces a scalar (work). 

       Pressure is force/area, so pressure x area = force, and pressure x area x distance equals work.  However, area x distance equals volume, so pressure x volume = work (i.e. energy).  This is why the gas constant R = PV/(nT) is measured in Joules per mole per Kelvin (J/(mol*Kelvin))
  


       When a gas expands against a constant external pressure, the gas is doing work on the external environment, so (in the absence of heat transfer) the internal energy of the gas drops.   When a gas is compressed, work is being done on the gas, so its internal energy rises.    Since PV = nRT, the only way volume can decrease at constant internal pressure is if the temperature drops; but in an adiabatic compression this doesn’t happen — the internal pressure and the temperature both rise.  

       Consider what happens when the internal pressure of a gas is higher than the external pressure on it.  The gas expands until the two pressures are equal, or until a mechanical stop is reached.   External pressure could be constant — e.g. it could be atmospheric.    The work done by the gas is then p(external) * deltaV.  The internal energy of the gas must drop by this much.   Assume no heat is transferred in the process.  All we know is the initial and final volumes and the initial internal pressure and temperature.   We don’t know the final pressure or the final temperature.   If the process is adiabatic (no heat transferred), are we to assume that the final temperature is the same as the initial one?  No: we know the internal energy has dropped by the amount of work done, and for an ideal gas the internal energy depends only on temperature, so the temperature must have dropped. 

       Atkins3 p. 46.  In thermodynamics, a reversible change is one that can be reversed by an infinitesimal modification of a variable.  If the expansion of a gas is to occur reversibly, we must ensure that at each stage the external pressure is only infinitesimally less than the internal pressure.   In some (poorly explained) way this involves the system being in equilibrium with its surroundings.  (12/03 — Atkins7 — still poorly explained).  Atkins7 p. 39 — A reversible change in thermodynamics is a change that can be reversed by an infinitesimal modification of a variable.  A system is in equilibrium with its surroundings if infinitesimal changes in the conditions in OPPOSITE directions result in opposite changes in its state. 
 

      The reversible work done in expanding/contracting a gas is the integral of PdV.  P will change as volume changes — the pressure on the gas from without must (very nearly) equal the pressure of the gas at all times (so the process is reversible).  If the external pressure isn’t (very nearly) the same as that of the gas, the work produced won’t be the reversible work (which is why work isn’t a state function).  Very nice picture and explanation of this — Atkins7 p. 41 Fig 2.9 !

       Since work is the integral of pressure summed over infinitesimal volume increments, and since the pressure of a gas depends (by an equation of state) on volume and temperature, work done on or by a gas at constant temperature depends only on volume — this is true whether the gas is perfect or not, as long as the pressure depends in some way only on volume and temperature (and we are keeping temperature constant).   This is why we need the equation of state, p = f(Volume, Temperature), to compute the work.  If the temperature is kept constant (by contact with an external bath maintained at a constant temperature by the addition or subtraction of heat as necessary — see p. 86), then p depends only on V, and for a perfect gas p = nRT/V, so the integral involves evaluating a logarithm at the initial and final volumes. 

       You get more work from a gas expanding isothermally and reversibly than you do from a gas expanding at constant pressure.  This is because, to keep the gas at the same temperature as it expands, heat must be added to the gas (otherwise the temperature would drop), and all this heat is converted to work (because the expansion is reversible).  
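A quick numerical comparison (illustrative values): for the same isothermal doubling of volume, the reversible work exceeds the work done against a constant external pressure equal to the final pressure of the gas.

```python
import math

# Compare reversible isothermal work with work against a constant external
# pressure, for the same initial and final states (illustrative numbers).
R, n, T = 8.314, 1.0, 298.0
v_i = 0.010                 # m^3
v_f = 2 * v_i               # expand to twice the volume, same temperature

# Reversible isothermal expansion: w = nRT ln(Vf/Vi)
w_reversible = n * R * T * math.log(v_f / v_i)

# Irreversible expansion against a constant external pressure equal to the
# FINAL pressure of the gas: w = P_final * (Vf - Vi)
p_final = n * R * T / v_f
w_constant_p = p_final * (v_f - v_i)

print(w_reversible, w_constant_p)   # the reversible work is larger
```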

      The MAXIMUM amount of work available from a system operating between specified initial and final states, and passing along a specified path, is obtained when it is operating reversibly.   Unfortunately, in practice this normally means that the path has to be traversed infinitely slowly.  Consequently, reversible processes are called quasi-static. 

        Berry PChem2 p. 375 — There are two types of thermodynamic variables: intensive (P, T) and extensive (volume, mass).    Each intensive variable X has a conjugate extensive variable Y, the two being related by a particular work process.  Example:  the conjugate extensive variable of P is V, and P*V = energy (or work).

       The conjugate extensive variable Y of an intensive variable X is a quantity such that an infinitesimal change (dY) does an amount of work 

     e.g.  dBarWork = X * dY
     System                               Intensive variable            Extensive variable
     Fluid — or any system which
       can expand and contract            Pressure                      Volume
     Surface film                         Surface tension               Area
     Wire                                 Tension                       Length
     Capacitor                            Potential                     Charge
     Electrochemical cell                 Electromotive force (emf)     Charge
     Paramagnetic solid                   Magnetic field strength (H)   Magnetic moment

        In mechanics, generalized coordinates correspond to extensive variables and generalized forces correspond to intensive variables.  If the potential energy of a mechanical system is expressed as a function of several independent variables (the generalized coordinates), then the generalized force conjugate to a particular coordinate is defined as the negative of the partial derivative of the potential energy with respect to that coordinate with all the other coordinates held fixed.  

       The same thing happens in thermodynamics.  The intensive variable X conjugate to a given extensive variable Y is defined as 

         X = -(dU/dY) ;  the partial derivative of the internal energy with respect to Y, all other extensive variables held constant

     In both thermodynamics and mechanics, the product of the conjugate pair members must have the dimensions of energy. 
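The mechanical version of this definition can be sketched with a stretched spring (my example, not Berry’s): the generalized force conjugate to the extension x is -dU/dx.

```python
# Mechanical analogue of a conjugate pair: a spring with potential energy
# U(x) = 0.5 * k * x**2.  The generalized force conjugate to the coordinate x
# is F = -dU/dx = -k*x (Hooke's law).  k and x0 are arbitrary values.
k = 3.0

def U(x):
    return 0.5 * k * x ** 2

def conjugate_force(x, h=1e-6):
    """-dU/dx estimated by a central finite difference."""
    return -(U(x + h) - U(x - h)) / (2 * h)

x0 = 2.0
print(conjugate_force(x0))   # close to -k * x0 = -6.0
```

Note that the product F * x has the dimensions of energy, just as the table above requires for each conjugate pair.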

      The performance of a given type of work is always associated with a change in the corresponding extensive variable (in the system, the surroundings or both).   Whenever a boundary allows the performance of a particular kind of work, we say that the boundary transmits the conjugate intensive variable.  At equilibrium the intensive variable (P, T, mass) has the same value on both sides of the boundary (system, surroundings).   If the boundary doesn’t permit the performance of a particular type of work, then the intensive variable doesn’t have to have the same value on the two sides (even at equilibrium).  

        For a system initially at equilibrium to undergo a change of state, its boundaries must be capable of transmitting some intensive variable.  So either work or heat must pass across the boundary.  If no work is possible, then heat must flow. 

Anslyn pp. 1 – 50

      p. xxii — “Some would argue that the last century also saw the near death of the field.”  My friend, Tom Lowry (Mechanism and Theory in Organic Chemistry), told me (this century) that he thought the field had died.

      p. xxiii — I’m not even sure the authors would call the following Physical Organic Chemistry because no covalent bonds are broken or formed, but the way proteins do or do not associate with each other (bonding is probably too strong a word) is absolutely crucial for the understanding of what is going on inside our cells.  For example, RNA polymerase II is a protein complex of 12 subunits (Rpb1 – 12) with a mass of 500 kiloDaltons, and size 100 – 140 Angstroms.  None of the interactions between the subunits are covalent.  Ditto for the ribosome, which contains 4500 nucleotides over a few chains and 50 proteins (all noncovalently associated) with a molecular mass of 2.5 megaDaltons. 

       Similarly, the huge and unsolved protein folding problem doesn’t involve covalent bond making or breaking.  This is certainly physical chemistry. Whether or not it is physical organic chemistry is unclear.  Hopefully the book will have something to say about these matters.  Certainly proteins are organic molecules, and physical organic chemistry should be able to say something about the way their parts interact.

        p. 4 “This limitation (of quantum mechanical calculations) is even more severe when solvation or solid state issues become critical”.   Back in the 60’s we used to refer to the Debye-Hückel theory as applicable to slightly contaminated distilled water only.  Every year my wife and I have dinner with a friend from that era (currently a chemistry department chair) and he tells me this is still true.  Remember the concentration of salt in our cells is .3 MOLAR (not a misprint). 

        p. 4 “Most of this material should be familiar to you” — not to this boy.  Molecular orbitals were just coming in in 60 – 62.  Lionel Salem was a post-doc (or something) when I was a grad student.

       p. 4 “Each row in the periodic table indicates a different principal quantum number (with the exception of d and f orbitals which are displaced down one row from their respective principal shells)”  — Why?  I’ve never understood why this should be — perhaps an explanation will be forthcoming in this section. 

        p. 5 — “the ability of an electron to feel the trajectory of another electron”.  Certainly informal, but do electrons really have feelings? There’s also a strong argument that they don’t even have trajectories — if they did, how would they get past a node (zero probability of finding them there) in a 2p orbital?

       p. 7 — formal charge.  This makes biologic sense.  Mammalian nerve talks to mammalian muscle using the neurotransmitter acetylcholine which contains a CH2 – N(CH3)3+ group.  There’s no way the nitrogen can get really near the parts of the protein the molecule binds to (which is negatively charged), so having the positive charges distributed over the hydrocarbon part makes sense. 
p. 14 — Electron density of “.002 electrons/A^2”  — the denominator should be A^3; density on a surface doesn’t make sense.  

p. 18 — How are dipole moments for molecules determined? The dipole moment is the product of a separated charge and the distance of separation, so increasing the amount of charge separated while decreasing the distance between the charges can yield the same dipole moment.  Is there any way to measure the two separately? Anslyn makes the point with CH3Br and CH3F — which have the same dipole moment. 

p. 21 — In the Going Deeper box “The potential energy cannot be infinite . .. ” is wrong — it should be “The kinetic energy cannot be infinite .. ”   Now I have the 2nd printing and they don’t have an errata page (which I think is awful) correcting errors as they are found with each new printing, so this may have been corrected already.

p. 25 — The discussion of polarizabilities is clear, but the units are not.  It’s probably the relative polarizabilities that are important.   The fact that alkanes are so polarizable is never mentioned in discussions of the membrane potential — given that the interior of biologic membranes is mostly hydrocarbon, this should diminish the potential across it. 

      Now I doubt that the average chemist, organic or otherwise, knows the following.  The potential difference across the cell membrane isn’t that large (70 milliVolts), but the field is enormous, because the 70 milliVolts is across a distance of 70 Angstroms (7 nanoMeters).   So that’s an electric field of 

        70 x 10^-3 volts/7 x 10^-9 meters = 10^7 Volts/meter 
enough to fry your socks off. 
I wrote a post on the subject elsewhere on this blog https://luysii.wordpress.com/2011/03/06/why-arent-we-all-dead/

p. 27 — “Modern calculational methods now provide accurate representations of the molecular orbitals not only of stable molecules, but of reactive intermediates and even transition states.”  — How do you know they are accurate?  Do they predict reaction rates? 

P. 27 — “In contrast to valence bond theory ‘full-blown’ molecular orbital theory (MOT) considers the electrons in molecules to occupy molecular orbitals that are formed by linear combinations of ALL atomic orbitals on ALL the atoms in the structure”.   I assume that the linear combinations allow weighting of the atomic orbitals as they are combined .  True?  (appears to be true by rule #14 of QMOT on p. 28 which talks about the size of the atomic orbital coefficients).  

   p.28 (added 13 June ’11)  — It wasn’t until I arrived at “Orbital effects” on p. 128 that the utility of the molecular orbital approach made it seem worth learning.  My eyes glazed over in the section on Qualitative Molecular Orbital Theory (pp. 28 –> ) the first time I read it.  So I went back and reread it.  

       Looking back, there are several things which threw me.  Chemists use lines between atoms to represent bonds — not so  in figures 1.7, 1.8 and the rest of the book — the lines just represent the positions of the atoms in space.  Only when the color of the orbital on atom #1 is the same as the color of the orbital on atom #2 is there bonding.  If the colors are different there is antibonding.  If there is no colored orbital on atom #2 the line between atoms #1 and #2 remains, but there is no bonding interaction, so the orbital on atom #1 is a nonbonding orbital. The lines between the atoms remain nonetheless, faking me out.  

     Another point (see figure 1.12 p. 37) — This is the mixing diagram of two CH3 groups to form ethane — there is no significance to which side of the energy levels of the mixed orbitals the orbital diagrams are placed.  This is true of all mixing diagrams in the book.

p. 31 — “We will constantly be checking our qualitative reasoning against quantitative calculations to be sure we are getting things right.”  Well, you’re really checking consistency, but how accurate are the calculations?

p. 31 — If you’re reading this chapter from front to back, make a copy of Figure 1.8, as it is referred to again and again.  Also memorize sigma(CH3), pi(CH3) and sigma(out) — the names are far from descriptive in terms of what you already know about sigma and pi bonds.  Also, the rationale for the names is far from convincing (see p. 32). 

p. 34 — I found the handwaving about the CH2 group rather difficult to believe until the mention of the different ionization energies of the electrons in water’s lone pairs — proving the two ‘lone pair’ orbitals aren’t equivalent.  Can they actually show the different ionization energies leaving H2O just lacking one electron (e.g. H2O+) and not stripping out a second electron from H2O+ ??

p. 35 — Even more interesting — the approach of another molecule to water (or any other for that matter) lowers the symmetry of the system allowing orbital mixing, giving two sp3-like orbitals.  QMOT might be proven correct by studying isolated molecules in the gas phase, but most chemistry happens when one molecule approaches another — so how useful is the theory described up to now? 

p. 36 — the fact that orbital mixing of filled identical orbitals from two identical atoms produces two molecular orbitals (bonding and antibonding), which when filled is destabilizing isn’t stressed in most introductory organic books.  But why is this so?   I don’t recall seeing an explanation in the QM course I audited.  “Closed shell repulsion” sounds almost like an explanation but it could use some elaboration.  Hopefully Ch. 14 on perturbation theory some 800 pages later will make this clear. 

p. 36 — In the construction of ethane from two methyl groups “We should use the MOs of pyramidal methyl as this is the geometry appropriate to ethane”.  Do you know this from calculations or from chemistry or chemical spectroscopy?  NMR? Crystallography?   I’ve always wondered how atoms come to be placed exactly where they are found with subsequent construction of molecular orbitals.

p. 37. (added 13 June ’11 ) — very important to note that the highest occupied molecular orbital in ethane has an antibonding interaction between the p orbitals of the two carbons, but it still is a bonding molecular orbital, because the overlap of the carbon p orbitals with the s orbitals of the 6 hydrogens results in a net bonding effect — so not every orbital interaction in a bonding molecular orbital must be bonding, some can be antibonding.

   p. 42 — Most books I’ve read don’t talk about the tilting of the antibonding pi orbital away from the region between the two ‘antibonded’ atoms — nice !  

   p. 46 — Even better, this partially explains the rearward attack on R-Cl in an Sn2 reaction — but simple stereochemistry and physics explains it better. 

p. 43 — “We will show experimental evidence in Chapter 14 (pp. 807 –> ) that supports the fact that hybridization does not actually occur in the standard sp3, sp2 and sp manner”  — can’t wait.  It’s 22 April ’11.  We’ll see how long it takes me to get there.

p. 50 — The 3 center 2 electron bond of the diboranes and death of Bill Lipscomb this month is an appropriate way to end this post.  A girlfriend got her PhD with him, liked him a lot (as did everyone) and has a publication with him listed in his Wikipedia article.  

        Which brings me to the women in our entering class of grad students back then.  They all did well, one getting her PhD from Bartlett in 3 years (apparently everything she tried worked the first time).  Interestingly one of the women found the atmosphere at Harvard oppressive and switched to MIT (which she didn’t find oppressive at all, contrary to a lot of the press MIT has received) and got her PhD there.

       Apparently, it was a very different time from the present — the 21 Apr ’11 Nature has a bunch of articles about “The future of the PhD” — something we had no worries about back then.  I still wonder if the situation is as grim for the PhD’s coming out of the Harvard chemistry department — no chauvinism intended, just curiosity.


Why aren’t we all dead ?

Anslyn && Dougherty is even more fun than Clayden et al.  It’s far more advanced, and I’m certainly glad I read Clayden first.  On p. 24 they talk about the polarizability of molecules, something distinct from the dipole moment of the molecule.  Polarizability is the ability of the molecule’s electron distribution to distort in the presence of an electric field.  I was surprised to find that the usual suspects (e.g. water) aren’t that polarizable and that the champs are hydrocarbons.   They don’t say how polarizability is measured, but I’ll take them at face value.

We wouldn’t exist without the membranes enclosing our cells, which are largely hydrocarbon.  Chemists know that fatty acids have one end (the carboxyl group) which dissolves in water while the rest is pure hydrocarbon.  The classic is stearic acid — 18 carbons in a straight chain with a carboxyl group at one end.  3 molecules of stearic acid are esterified to glycerol in beef tallow (forming a triglyceride).  The pioneers hydrolyzed it to make soap. Saturated fatty acids of 18 carbons or more are solid at body temperature (soap certainly is), but cellular membranes are fairly fluid, and proteins embedded in them move around pretty quickly.  Why?  Because most fatty acids found in biologic membranes over 16 carbons have double bonds in them.  Guess whether they are cis or trans.   Hint:  the isomer used packs less well into crystals — you’ve got it, all the double bonds found in oleic (18 carbons, 1 double bond) and arachidonic (20 carbons, 4 double bonds) are trans – this keeps membranes fluid as well.   No, they are cis — thanks to PostDoc for pointing this out.  The cis double bond essentially puts a 60 degree kink in the hydrocarbon chain, making it much more difficult to pack in a liquid crystal type structure with all the hydrocarbon chains stretched out.   Then there’s cholesterol, which makes up 1/5 or so of membranes by weight — it also breaks up the tendency of fatty acid hydrocarbon chains to align with each other because it doesn’t pack with them very well.  So cholesterol is another fluidizer of membranes.

How thick is the cellular membrane?  If you figure the hydrocarbon chains of a saturated fatty acid stretched out as far as they can go, you get 1.54 Angstroms * cosine (30 degrees)  = 1.33 Angstroms/carbon — times 16 = 21 Angstroms.  Now double that because cellular membranes are lipid bilayers, meaning that they are made of two layers of hydrocarbons facing each other, with the hydrophilic ends (carboxyls, phosphate groups) pointing outward.  So we’re up to 42 Angstroms of thickness for the hydrocarbon part of the membrane.  Add another 10 Angstroms or so on each side for the hydrophilic ends (which include things like serine, choline etc. etc.) and you’re up to about 60 Angstroms thickness for the membrane (which is usually cited as 70 Angstroms — I don’t know why).
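The thickness arithmetic in one place (the 1.54 Angstrom C-C bond and the cos(30°) zigzag geometry are as above; the 10 Angstroms per face of head groups is the rough figure from the text):

```python
import math

# Back-of-the-envelope membrane thickness.
cc_bond = 1.54                   # Angstroms, C-C single bond length
# In a fully stretched zigzag chain each bond advances the chain axis
# by cos(30 degrees)
advance_per_carbon = cc_bond * math.cos(math.radians(30))   # ~1.33 Angstroms

carbons = 16
one_leaflet = carbons * advance_per_carbon   # ~21 Angstroms of hydrocarbon
bilayer = 2 * one_leaflet                    # ~42 Angstroms, tails facing tails
head_groups = 2 * 10                         # ~10 Angstroms of hydrophilic ends
                                             # on EACH face (rough figure)

total = bilayer + head_groups
print(round(total), "Angstroms")   # about 60 Angstroms overall
```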

Neurologists and neurophysiologists spent a lot of time thinking about membranes, particularly those of neurons.  In all these years, I’ve never heard anyone talk about hydrocarbon polarizability.  It ought to be a huge factor in membrane function.  Why?  Because of the enormous electric field across the membranes enclosing all our cells (not just our neurons).  The potential across the membranes is usually given as 70 milliVolts (inside negatively charged, outside positively charged).  Why is this a big deal?

Because the electric field across our membranes is huge.  70 milliVolts is 70 x 10^-3 volts, and 70 Angstroms is 7 nanoMeters (7 x 10^-9 meters).  Divide 7 x 10^-2 volts by 7 x 10^-9 meters and you get a field of 10,000,000 Volts/meter.  If hydrocarbons are ever going to polarize, they should in this environment.  The college physics book I bought for the Quantum Mechanics course a while ago — “Physics for Scientists and Engineers” 4th edition p. 662 — talks about lightning.  The potential difference leading to the discharge is the same: 10,000,000 Volts.  But this results in a much smaller electric field (smaller by a factor of 1,000 or so) because clouds aren’t 7 nanoMeters off the ground — they’re a kilometer or more up.
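The comparison is just E = V / d twice.  A minimal sketch, using the 70 Angstrom thickness quoted above and assuming a rough 1 kilometer cloud height for the lightning case:

```python
# Electric field across the membrane: E = V / d
V_membrane = 70e-3    # 70 mV, in volts
d_membrane = 70e-10   # 70 Angstroms, in meters (7 nm)
E_membrane = V_membrane / d_membrane          # = 1e7 V/m

# Same potential difference as a lightning stroke (~1e7 V), but spread
# over roughly a kilometer of air instead of 7 nm (assumed height):
V_cloud = 1e7         # volts
d_cloud = 1e3         # meters
E_cloud = V_cloud / d_cloud                   # = 1e4 V/m

print(f"membrane field:        {E_membrane:.0e} V/m")
print(f"cloud-to-ground field: {E_cloud:.0e} V/m")
print(f"membrane field is {E_membrane / E_cloud:.0f}x larger")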

So why don’t our cells collapse and we die?  I don’t know.

Here are a few Physics 102 questions for the cognoscenti out there.

1. Potential difference is due to charge separation.  Assume a flat membrane 1 micron square and 70 Angstroms thick.  How much charge must be separated to account for a potential of 70 milliVolts?  Answer in number of charges rather than Coulombs.

2. Now let’s get real.  We’re talking about neuronal processes here.  So let’s talk about a cylindrical membrane 1 micron long (remember that some neuronal processes — such as those going from your spinal cord to your big toe — are a million times longer than this).  Diameters of nerve fibers range from 1 micron to 25 microns.  Ignoring the complication of the myelin sheath, how much charge must be separated to produce a potential of 70 milliVolts across the membrane of the neuronal process?
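One way to attack both questions is to treat the membrane as a parallel-plate capacitor, C = ε0·εr·A/d, so the number of separated charges is N = C·V/e.  A hedged sketch — the dielectric constant of ~2 for the hydrocarbon core is an assumption, as is the thin-wall approximation (lateral area = π · diameter · length) for the cylinder:

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
EPS_R = 2.0           # rough dielectric constant of hydrocarbon (assumed)
E_CHARGE = 1.602e-19  # elementary charge, C

V = 70e-3             # 70 mV across the membrane
d = 70e-10            # 70 Angstroms, in meters

def charges_for_area(area_m2):
    """Number of elementary charges separated across area_m2 of membrane."""
    capacitance = EPS0 * EPS_R * area_m2 / d   # parallel-plate C = eps0*eps_r*A/d
    return capacitance * V / E_CHARGE          # N = Q/e = C*V/e

# Question 1: flat patch, 1 micron x 1 micron
print(f"flat 1 um^2 patch: ~{charges_for_area(1e-6 * 1e-6):.0f} charges")

# Question 2: cylindrical process, 1 micron long; the wall is thin
# compared with the diameter, so use lateral area = pi * diameter * length
for diam_um in (1, 25):
    area = math.pi * (diam_um * 1e-6) * 1e-6
    print(f"cylinder, {diam_um} um diameter: ~{charges_for_area(area):.0f} charges")
```

With these assumptions a square micron of membrane separates only on the order of a thousand elementary charges — a tiny fraction of the ions bathing either side of it.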

The more you think about life, the more remarkable it becomes.

Anslyn and Dougherty: Introduction and Allegro

Well, I’ve been through the first chapter (all 61 pages) and it’s first rate — well written, engaging and informal.  I plan to do a series of posts as I go through the book (just as I did with Clayden), with questions, possible errors, etc.  I’d definitely like help from the readership in spotting mistakes.  The authors correct known errors with each new printing (now up to 4).  I have the 2nd printing, and I’m damned if I’m going to lay out $118.74 on Amazon for a second copy of the book.  See https://luysii.wordpress.com/2011/02/09/chemistry-textbook-errotica/ for a rant about chemical textbook errata and the lack of web pages for them.

There is a Student Solutions Manual for it, by someone else (who worked with Dougherty).  Any opinions out there about how good it is?  Is it worth the (new) $47.74 on Amazon?  I’ve not tried any of the problems yet.

I plan to reread the first chapter and have specific comments and questions, but here are a few generalities.

p. xxii — “Some would argue that the last century also saw the near death of the field.”  My friend Tom Lowry (Mechanism and Theory in Organic Chemistry) told me (this century) that he thought the field had died.

p. 4 — “Most of this material should be familiar to you” — not to this boy.  Molecular orbitals were just coming in in 1960 – 62.  Lionel Salem was a post-doc (or something) when I was a grad student.

p. 5 — “the ability of an electron to feel the trajectory of another electron.”  Certainly informal, but do electrons really have feelings?  There’s a strong argument that they don’t even have trajectories — if they did, how would they get past a node (zero probability of finding them there) in a 2p orbital?

pp. 59 – 61 — Pictures of d orbitals.  Hosanna!  “One theme of this textbook is to consistently tie organic chemistry to organometallic chemistry.”  Hosanna again!!  Clayden made a valiant attempt to give a glimpse of the field, but its treatment of d orbitals and bonding patterns was sketchy at best.  Certainly looking forward to enlightenment on this score.

Next up: “Why Pre-Meds Hate Organic Chemistry before they even begin the course”