Tag Archives: thermodynamics

Why there’s more to chemistry than quantum mechanics

As juniors entering the Princeton Chemistry Department as majors in 1958, we were told to read “The Logic of Modern Physics” by P. W. Bridgman — https://en.wikipedia.org/wiki/The_Logic_of_Modern_Physics. I don’t remember whether we ever got together to discuss the book with faculty, but I do remember that I found it intensely irritating. It was written in 1927, in the early heyday of quantum mechanics. It said that all you could know was measurements (numbers on a dial if you wish), without any understanding of what went on in between them.

I thought chemists knew a lot more than that. Here’s Henry Eyring — https://en.wikipedia.org/wiki/Henry_Eyring_(chemist) — developing transition state theory a few years later, in 1935, in the same department. It was pure ideation based on thermodynamics, which was developed long before quantum mechanics and is still pretty much a quantum mechanics free zone of physics (although people are busy at work on the interface).

Henry would have loved a recent paper [ Proc. Natl. Acad. Sci. vol. 118 e2102006118 ’21 ] where the passage of a molecule back and forth across the free energy maximum was measured again and again.

A polyNucleotide hairpin of DNA  was connected to double stranded DNA handles in optical traps where it could fluctuate between folded (hairpin) and unfolded (no hairpin) states.  They could measure just how far apart the handles were and in the hairpin state the length appears to be 100 Angstroms (10 nanoMeters) shorter than the unfolded state.

So they could follow the length vs. time and measure the 50 microSeconds or so it took to make the journey across the free energy maximum (i.e., the transition state). A mere 323,495 different transition paths were studied. You can find much more about the work here — https://luysii.wordpress.com/2022/02/15/transition-state-theory/
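To make “counting transition paths” concrete, here is a minimal sketch of how such paths might be pulled out of an extension-vs-time trace. This is not the authors’ actual analysis; the thresholds, the synthetic trace, and the function name are all made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trace: folded (~0 nm), then unfolded (~10 nm), then folded
# again, plus thermal noise. Real traces dwell in the wells and take on the
# order of 50 microSeconds to cross between them.
trace = np.concatenate([np.zeros(500), np.full(500, 10.0), np.zeros(500)])
trace += rng.normal(0.0, 0.5, trace.size)

LOW, HIGH = 2.0, 8.0   # made-up thresholds bracketing the barrier region

def transition_paths(x, lo=LOW, hi=HIGH):
    """Return (start, end) index pairs where x leaves one well and
    reaches the other; re-entries into the same well don't count."""
    paths, start, origin = [], None, None
    for i, v in enumerate(x):
        if v < lo or v > hi:                      # currently inside a well
            side = 'folded' if v < lo else 'unfolded'
            if origin is not None and side != origin:
                paths.append((start, i))          # completed a barrier crossing
            start, origin = i, side
    return paths

print(transition_paths(trace))   # two crossings for this synthetic trace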

Does Bridgman have the last laugh? Remember, all that is being measured is numbers (lengths) on a dial.

Here’s another recent paper Eyring would have loved — [ Proc. Natl. Acad. Sci. vol. 119 e2112382119 ’22 ] — https://www.pnas.org/doi/epdf/10.1073/pnas.2112382119

The paper studied Barnase, a 110 amino acid protein which degrades RNA (much like the original protein Anfinsen studied years ago). Barnase is highly soluble and very stable, making it one of the E. coli’s of protein folding studies.

The new wrinkle of the paper is that they were able to study the folding, the unfolding, and the transition state of single molecules of Barnase at different temperatures (an experiment Eyring would have been unlikely even to think about doing in 1935 when he developed transition state theory, and yet this is exactly the sort of thing he was thinking about, though not about proteins, whose structure was unknown back then).

This allowed them to determine not just the change in free energy (deltaG) between the unfolded state (U), the transition state (TS), and the native state (N) of Barnase, but also the changes in enthalpy (deltaH) and entropy (deltaS) between U and TS and between N and TS.

Remember deltaG = deltaH – T deltaS. A process will occur if deltaG is negative, which is why an increase in entropy is favorable, and why the decrease in entropy between U and TS is unfavorable. You can find out more about this work here — https://luysii.wordpress.com/2022/03/25/new-light-on-protein-folding/
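A toy calculation makes the tradeoff concrete; the numbers below are invented for illustration and are not the Barnase values.

# Toy illustration of deltaG = deltaH - T*deltaS; the numbers are invented.
deltaH = 50.0e3   # J/mol, unfavorable (costs enthalpy)
deltaS = 200.0    # J/(mol K), favorable (gains entropy)
for T in (250.0, 300.0, 350.0):   # kelvin
    deltaG = deltaH - T * deltaS
    verdict = "spontaneous" if deltaG < 0 else "not spontaneous"
    print(f"T = {T:.0f} K: deltaG = {deltaG/1000:+.1f} kJ/mol ({verdict})")
# entropy wins only at high enough temperature: the same process flips
# from not spontaneous to spontaneous as T rises.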

So the purely mental ideas of Eyring are being confirmed once again (but by numbers on a dial).  I doubt that Eyring would have thought such an experiment possible back in 1935.

Chemists know so much more than quantum mechanics says we can know.  But much of what we do know would be impossible without quantum mechanics.

However, Eyring certainly wasn’t averse to quantum mechanics, having written a textbook on the very subject, Quantum Chemistry (with Walter and Kimball), in 1944.

Internal Energy, Enthalpy, Helmholtz free energy and Gibbs free energy are all Legendre transformations of each other

Sometimes it pays to be persistent in thinking about things you don’t understand (if you have the time, as I do). The chemical potential is of enormous interest to chemists, and yet is defined thermodynamically in 5 distinct ways. This made me wonder whether the definitions were actually describing essentially the same thing (not to worry, they are).

First, a few thousand definitions

Chemical potential of species i — mu(i)
Internal energy — U
Entropy — S
Enthalpy — H
Helmholtz free energy — F or A (but, maddeningly, never H)
Gibbs free energy — G
Ni — number of particles (molecules) of chemical species i
Pressure — p
Volume — V
Temperature — T

Just 5 more
mu(i) == ∂H/∂Ni constant S, p
mu(i) == –T ∂S/∂Ni constant U, V
mu(i) == ∂U/∂Ni constant S, V
mu(i) == ∂F/∂Ni constant T, V
mu(i) == ∂G/∂Ni constant T, p

Clearly the five partial derivatives are taken of different functions holding different things constant, but they all pick out the same chemical potential. Here’s why.

Start with a simple mathematical problem. Assume you have a simple function f of two variables x and y, that f is continuous in x and y, and that its partial derivatives u = ∂f/∂x and w = ∂f/∂y are continuous as well, so you have

df = u dx + w dy

u and x are conjugate variables, as are w and y

Suppose you want to change df = u dx + w dy to

another function g such that

dg = u dx – y dw

which is basically flipping a pair of conjugate variables around

Patience, the reason for wanting to do this will become apparent in a moment.

The answer is to use what is called the Legendre transform of f which is simply

g = f – y w

dg = df – y dw – w dy

plug in df

dg = u dx + w dy – y dw – w dy = u dx – y dw. Done.
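If you’d like a machine check of the algebra, here’s a sketch in Python using sympy on a made-up function f(x, y) = x^2 + y^2:

# Checking the Legendre transform on a made-up f(x, y) = x**2 + y**2.
import sympy as sp

x, y, w = sp.symbols('x y w')
f = x**2 + y**2
u_expr = sp.diff(f, x)          # u = 2*x
w_expr = sp.diff(f, y)          # w = 2*y

# invert w = 2*y to get y in terms of w, then form g = f - y*w
y_of_w = sp.solve(sp.Eq(w, w_expr), y)[0]    # y = w/2
g = sp.simplify((f - y*w_expr).subs(y, y_of_w))

print(g)                   # x**2 - w**2/4
print(sp.diff(g, x))       # 2*x, i.e. u, unchanged
print(sp.diff(g, w))       # -w/2, i.e. -y: the conjugate pair has flipped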

Where does the thermodynamics come in?

Well, you have to start somewhere, so why not with the fundamental thermodynamic equation for internal energy U

dU = ∂U/∂S dS + ∂U/∂V dV + ∑ ∂U/∂Ni dNi

We already know that ∂U/∂Ni = mu(i)

Because of the partial derivative notation (∂), it is assumed that all the other variables in the expression for dU are held constant; e.g., V and the Ni are held constant in ∂U/∂S. This reduces the clutter in notation, which is cluttered enough already.

One definition of temperature T is ∂U/∂S, and another of pressure p is –∂U/∂V (which makes sense if you think about it — decreasing volume should increase pressure).

Suddenly dU looks like what we were talking about with the Legendre transformation.

dU = T dS – p dV + ∑ mu(i) dNi

Apply the Legendre transformation to U to switch conjugate variables p and V

H = U + pV ; looks suspiciously like enthalpy (H) because it is

dH = dU + p dV + V dp

= T dS – p dV + ∑ mu(i) dNi + p dV + V dp

= T dS + V dp + ∑ mu(i) dNi

Notice how mu(i) here comes out to ∂H/∂Ni at constant S and p

Now start again with the fundamental thermodynamic equation for internal energy

dU = T dS – p dV + ∑ mu(i) dNi

Now apply the Legendre transformation to T and S and you get
F = U – TS ; looks like the Helmholtz free energy (sometimes written A, but never as H) because it is.

You get

dF = – S dT – p dV + ∑ mu(i) dNi

Who cares? Chemists do because, although it is difficult to hold U or S constant (and impossible to measure either directly), it is very easy to keep temperature and volume constant in a reaction, meaning that the change in Helmholtz free energy under those conditions is just ∑ mu(i) dNi. So here mu(i) = ∂F/∂Ni at constant T and V.

If you start with enthalpy

dH = T dS + V dp + ∑ mu(i) dNi

and do the Legendre transform you get the Gibbs free energy G = H – TS

I won’t bore you with it but this gives you the chemical potential mu(i) at constant T and p, conditions chemists easily arrange all the time.

To summarize

Enthalpy (H) is one Legendre transform of internal energy (U)
Helmholtz free energy (F) is another Legendre transform of U
Gibbs free energy (G) is the Legendre transform of Enthalpy (H)

It should be clear that Legendre transforms are all reversible

For example, if H = U + pV then U = H – pV
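Continuing the little sympy sketch from earlier, the inverse transform hands back the original function:

# Undoing the Legendre transform from the sketch above.
import sympy as sp

x, y, w = sp.symbols('x y w')
g = x**2 - w**2/4                     # the transform of f = x**2 + y**2
# read y off from dg = u dx - y dw, then invert to get w(y)
y_expr = -sp.diff(g, w)               # y = w/2
w_of_y = sp.solve(sp.Eq(y, y_expr), w)[0]   # w = 2*y
f_back = sp.simplify((g + y*w).subs(w, w_of_y))
print(f_back)                         # x**2 + y**2, the original f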

If you think a bit about the 5 definitions of chemical potential, you’ll see that it can depend on 5 things (U, S, p, V and T). Ultimately all the thermodynamic variables (U, S, H, G, F, p, V, T, mu(i)) are related to each other.

Examples include H = U + pV, F = U – TS, G = H – TS

Helping keep things clear are equations of state, relations among the things you can easily measure (p, V, T). The most famous is the ideal gas law pV = nRT.
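For instance, the molar volume of a gas at room conditions falls right out of it (a back-of-the-envelope sketch, nothing more):

# Back-of-the-envelope: molar volume from the ideal gas law pV = nRT.
R = 8.314      # J/(mol K)
n = 1.0        # mol
T = 298.0      # K
p = 101325.0   # Pa (1 atm)
V = n * R * T / p
print(f"{V:.4f} m^3 = {V*1000:.1f} L")   # about 0.0245 m^3, i.e. ~24.5 L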

Smoke, mirrors and statistical mechanics

This will be the year I look at PChem and biophysics. What comes first? Why, thermodynamics of course; and chemists always think of molecules, not steam engines, so statistical mechanics comes before thermodynamics.

The assumptions behind statistical mechanics are really so bizarre that it’s a miracle that it works at all, but work it does.

Macrostates are things you can measure — temperature, pressure, volume.

Microstates give rise to macrostates, but you can’t measure them. However, even though you can’t measure them, you can distinguish different ones and count them. Then you assume that each microstate is equally probable, even though you have no way in hell of experimentally measuring even one of them, and probability is what you find after repeated measurements (none of which you can make).

Amazing.
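The counting itself is easy to play with. Here’s a toy sketch: four two-state “molecules” (coin flips, if you wish), every microstate assumed equally probable, the macrostate being the total number of heads:

# Toy microstate count: 4 two-state "molecules", each microstate equally
# probable; the macrostate (number of heads) is what you'd "measure".
from itertools import product
from collections import Counter

N = 4
microstates = list(product((0, 1), repeat=N))   # 2**4 = 16 of them
macro = Counter(sum(m) for m in microstates)    # multiplicity of each macrostate

for heads in sorted(macro):
    count = macro[heads]
    print(f"{heads} heads: {count} microstates, probability {count/len(microstates):.4f}")
# multiplicities come out 1, 4, 6, 4, 1 -- the middle macrostate dominates,
# and it dominates overwhelmingly as N grows.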

Thrust and Parry about memory storage outside neurons.

First the post of 23 Feb ’14 discussing the paper (between *** and &&& in case you’ve read it already)

Then some of the rather severe criticism of the paper.

Then some of the reply to the criticisms

Then a few comments of my own, followed by yet another old post about the chemical insanity neuroscience gets into when it applies concepts like concentration to very small volumes.

Enjoy
***
Are memories stored outside of neurons?

This may turn out to be a banner year for neuroscience. Work discussed in the following older post is the first convincing explanation of why we need sleep that I’ve seen — https://luysii.wordpress.com/2013/10/21/is-sleep-deprivation-like-alzheimers-and-why-we-need-sleep-in-the-first-place/

An article in Science (vol. 343 pp. 670 – 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of just how, chemically, and just where memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find the background material needed to understand everything that follows in this blog. Hopefully you already know some of it. The subject is simply too vast, but plug away. Here are a few theories of the past half century, seriously flawed in my opinion, of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory) where the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron #3, … which excited neuron #n, which excited #1 again. Plausible in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory.

#2 CaMKII — more plausible. There’s lots of it in the brain (2% of all protein in an area of the brain called the hippocampus, an area known to be important in memory). It’s an enzyme which adds phosphate groups to other proteins. To start doing so, calcium levels inside the neuron must rise. The enzyme is complicated, comprising 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphates and reset the switch (actually there must be). Also, proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane begins to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The other 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true; in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the neuronal membrane to the chloride ion. When chloride flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that the effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

We’ve concentrated so much on neurons, that we may have missed something big. In a similar vein, the function of sleep may be to wash neurons free of stuff built up during the day outside of them.

&&&

In the 5 September ’14 Science (vol. 345 p. 1130) 6 researchers from Finland, Case Western Reserve and U. California (Davis) basically say that the paper conflicts with fundamental thermodynamics so severely that “Given these theoretical objections to their interpretations, we choose not to comment here on the experimental results”.

In more detail “If Cl− were initially in equilibrium across a membrane, then the mere introduction of immobile negative charges (a passive element) at one side of the membrane would, according to their line of thinking, cause a permanent change in the local electrochemical potential of Cl−, thereby leading to a persistent driving force for Cl− fluxes with no input of energy.” This essentially accuses the authors of inventing a perpetual motion machine.

Then in a second letter, two more researchers weigh in (same page) — “The experimental procedures and results in this study are insufficient to support these conclusions. Contradictory results previously published by these authors and other laboratories are not referred to.”

The authors of the original paper don’t take this lying down. On the same page they discuss the notion of the Donnan equilibrium and say they were misinterpreted.

The paper and the 3 letters all discuss the chloride concentration inside neurons, which they call [Cl−]i. The problem with this sort of thinking (if you can call it that) is that it extrapolates the notion of concentration to very small volumes (such as a dendritic spine), where it isn’t meaningful. This goes on all the time in neuroscience. While between any two rational numbers there is always another, matter can be sliced only so thinly before you get down to the discrete atomic level. At that level concentration (which is basically a ratio between two very large numbers of molecules, e.g. solute and solvent) simply doesn’t apply.

Here’s a post on the topic from a few months ago. It contains a link to another post showing that even Nobelists have chemical feet of clay.

More chemical insanity from neuroscience

The current issue of PNAS contains a paper (vol. 111 pp. 8961 – 8966, 17 June ’14) which uncritically quotes some work done back in the ’80s and flatly states that synaptic vesicles (http://en.wikipedia.org/wiki/Synaptic_vesicle) have a pH of 5.2 – 5.7. Such a value is meaningless. Here’s why.

A pH of 5 means that there are 10^-5 moles of H+ per liter, or 6 x 10^18 actual ions per liter.

Synaptic vesicles have an ‘average diameter’ of 40 nanoMeters (400 Angstroms to the chemist). Most of them are nearly spherical. So each has a volume of

4/3 * pi * (20 * 10^-9)^3 = 33,510 * 10^-27 = 3.4 * 10^-23 liters. 20 rather than 40 because volume involves the radius.

So each vesicle contains 6 * 10^18 * 3.4 * 10^-23 = 20 * 10^-5 = .0002 ions.

This is similar to the chemical blunders on concentration in the nano domain committed by a Nobelist. For details please see — https://luysii.wordpress.com/2013/10/09/is-concentration-meaningful-in-a-nanodomain-a-nobel-is-no-guarantee-against-chemical-idiocy/

Didn’t these guys ever take Freshman Chemistry?

Addendum 24 June ’14

Didn’t I ever take it? John wrote the following this AM:

Please check the units in your volume calculation. With r = 10^-9 m, then V is in m^3, and m^3 is not equal to L. There’s 1000 L in a m^3.
Happy Anniversary by the way.

To which I responded

Ouch! You’re correct of course. However, even with the correction, the result comes out to 0.2 free protons (or H3O+) per vesicle, a result that still makes no chemical sense. There are many more protons in the vesicle, but they are buffered by the proteins and the transmitters contained within.
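For the record, here is the vesicle arithmetic redone with the units kept straight (John’s correction applied, pH 5.0 as in the original calculation; just a sketch of the bookkeeping):

# Free protons per synaptic vesicle, with 1 m^3 = 1000 L kept explicit.
import math

AVOGADRO = 6.022e23
pH = 5.0                                  # low end of the quoted 5.2 - 5.7 range
radius_m = 20e-9                          # 40 nm diameter vesicle, so 20 nm radius

conc = 10**-pH                            # mol of free H+ per liter
volume_m3 = (4/3) * math.pi * radius_m**3 # sphere volume in cubic meters
volume_L = volume_m3 * 1000               # John's point: 1 m^3 = 1000 L

free_protons = conc * AVOGADRO * volume_L
print(f"volume = {volume_L:.2e} L, free protons per vesicle = {free_protons:.2f}")
# comes out to roughly 0.2 free protons per vesicle -- a fraction of an ion,
# which is why a bulk pH for a single vesicle makes little chemical sense.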