Tag Archives: Gibbs free energy

Dynamic allostery

It behooves drug chemists to know as much as they can about protein allostery, since so many of their drugs attempt to manipulate it.  An earlier post discussed dynamic allostery, which is essentially a change in ligand binding affinity without structural change in the protein binding the ligand.  A new paper challenges the concept.

First here’s the old post, and then the new stuff

Remember entropy? — Take II

Organic chemists have a far better intuitive feel for entropy than most chemists. Condensations such as the Diels-Alder reaction decrease it, as does ring closure. However, when you get to small ligands binding proteins, everything seems to be about enthalpy. Although binding energy is always talked about, mentally it appears to be enthalpy (H) rather than Gibbs free energy (G).

A recent fascinating editorial and paper [ Proc. Natl. Acad. Sci. vol. 114 pp. 4278 – 4280, 4424 – 4429 ’17 ] show how evolution has used entropy to determine when a protein (CzrA) binds to DNA and when it doesn’t. As usual, advances in technology permit us to see this (e.g. multidimensional heteronuclear nuclear magnetic resonance), which allows us to determine the motion of side chains (methyl groups), backbones, etc. When CzrA binds to DNA, methyl side chains on the protein move more, increasing entropy (deltaS). We all know the Gibbs free energy of reaction (deltaG) isn’t just enthalpy (deltaH) but deltaH – TdeltaS, so an increase in deltaS pushes deltaG lower, meaning the reaction proceeds in that direction.
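To make the arithmetic concrete, here is a minimal sketch; the numbers are invented for illustration, not taken from the PNAS paper:

```python
# Toy numbers, assumed purely for illustration (NOT from the CzrA work)
T = 310.0      # temperature in Kelvin, roughly physiological
dH = -20.0     # deltaH: enthalpy change of binding, kJ/mol
dS = 0.05      # deltaS: entropy change, kJ/(mol*K), positive because side chains move MORE

# deltaG = deltaH - T * deltaS
dG = dH - T * dS
print(dG)      # -35.5 kJ/mol: more favorable than the enthalpy term alone
```

A positive deltaS subtracts another T·deltaS (here 15.5 kJ/mol) from deltaG, which is why *more* side chain motion on binding favors the bound state.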

Binding of zinc redistributes these side chain motions so that entropy decreases, and the protein moves off the DNA. The authors call this dynamics-driven allostery. The fascinating thing is that this may happen without any conformational change of CzrA.

I’m not sure that molecular dynamics simulations are good enough to pick this up. Fortunately newer NMR techniques can measure it. Just another complication for the hapless drug chemist thinking about protein ligand interactions.

A recent paper [ Proc. Natl. Acad. Sci. vol. 114 pp. 6563 – 6568 ’17 ] went into more detail about measuring side chain motions as a surrogate for conformational entropy, which can now be measured by NMR.  They define complete restriction of the methyl group symmetry axis as 1 and complete disorder as 0, and state that ‘a variety of models’ imply that the value is LINEARLY related to conformational entropy, making it an ‘entropy meter’.  They state that measurement of fast internal side chain motion is largely restricted to the methyl group — this makes me worry that other side chains (which they can’t measure) are moving as well and contributing to entropy.

The authors studied some 28 protein/ligand systems, and found that the contribution of conformational entropy to ligand binding can be favorable, negligible or unfavorable.

What is bothersome to the authors (and to me) is that there were no obvious structural correlates between the degree of conformation entropy and protein structure.  So it’s something you measure not something you predict, making life even more difficult for the computational chemist studying protein ligand interactions.

Now the new stuff [ Proc. Natl. Acad. Sci. vol. 114 pp. 7480 – 7482, E5825 – E5834 ’17 ].  It’s worth considering what ‘no structural change’ means.  Proteins are moving all the time.  Bonds are vibrating at rates up to 10^15 times a second.  Methyl groups are rotating, hydrogen bonds are being made and broken.  I think we can assume that no structural change means no change in the protein backbone.

The work studied a protein of interest to neurological function, the PDZ3 domain — found on the receiving (post-synaptic) side of a synapse.  Ligand binding produced no change in the backbone, but there were significant changes in the distribution of electrons — which the authors describe as an enthalpic rather than an entropic effect.  Hydrogen bonds and salt bridges changed.  Certainly any change in the charge distribution would affect the pKa’s of acids and bases. The changes in charge distribution the ligand would see (due to hydrogen ionization from acids and protonation of bases) would certainly change ligand binding — even forgetting van der Waals effects.

Internal Energy, Enthalpy, Helmholtz free energy and Gibbs free energy are all Legendre transformations of each other

Sometimes it pays to be persistent in thinking about things you don’t understand (if you have the time, as I do). The chemical potential is of enormous interest to chemists, and yet is defined thermodynamically in 5 distinct ways. This made me wonder if the definitions were actually describing essentially the same thing (not to worry, they are).

First, a few thousand definitions

Chemical potential of species i — mu(i)
Internal energy — U
Entropy — S
Enthalpy — H
Helmholtz free energy — F or A (but, maddeningly, never H)
Gibbs free energy — G
Ni — number of particles of chemical species i
Pressure — p
Volume — V
Temperature — T

Just 5 more
mu(i) == ∂H/∂Ni constant S, p
mu(i) == -T ∂S/∂Ni constant U, V
mu(i) == ∂U/∂Ni constant S, V
mu(i) == ∂F/∂Ni constant T, V
mu(i) == ∂G/∂Ni constant T, p

Although these look like five different definitions, at a given thermodynamic state they all evaluate to the same number: each is the same chemical potential, expressed with a different pair of variables held constant. Here’s why.

Start with a simple mathematical problem. Assume you have a simple function (f) of two variables (x,y), and that f is continuous in x and y and that its partial derivatives u = ∂f/∂x and w = ∂f/∂y are continuous as well so you have

df = u dx + w dy

u and x are conjugate variables, as are w and y

Suppose you want to change df = u dx + w dy to

another function g such that

dg = u dx – y dw

which is basically flipping a pair of conjugate variables around

Patience, the reason for wanting to do this will become apparent in a moment.

The answer is to use what is called the Legendre transform of f which is simply

g = f – y w

dg = df – y dw – w dy

plug in df

dg = u dx + w dy – y dw – w dy = u dx – y dw. Done.
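Nothing above depends on what f actually is, so the identity is easy to check numerically. The sketch below picks an arbitrary smooth f (my choice, just for the test) and verifies by finite differences that g = f – y w satisfies dg = u dx – y dw to first order:

```python
from math import exp

def f(x, y):
    return x**2 + x * y + exp(y)   # an arbitrary smooth function of two variables

h = 1e-6  # step size for finite-difference partial derivatives
def u(x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)   # u = df/dx
def w(x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)   # w = df/dy

def g(x, y):
    return f(x, y) - y * w(x, y)   # the Legendre transform g = f - y w

# Compare the actual change in g with u dx - y dw for a small displacement
x0, y0 = 1.3, 0.7
dx, dy = 1e-4, 1e-4
dg_actual = g(x0 + dx, y0 + dy) - g(x0, y0)
dw = w(x0 + dx, y0 + dy) - w(x0, y0)
dg_predicted = u(x0, y0) * dx - y0 * dw
print(dg_actual, dg_predicted)   # the two agree to first order
```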

Where does the thermodynamics come in?

Well, you have to start somewhere, so why not with the fundamental thermodynamic equation for internal energy U

dU = ∂U/∂S dS + ∂U/∂V dV + ∑ ∂U/∂Ni dNi

Because of the partial derivative notation (∂), it is assumed that all the other variables in the expression for dU (e.g. V and the Ni in ∂U/∂S) are held constant. This reduces the clutter in a notation which is already cluttered enough.

We already know that ∂U/∂Ni is mu(i). One definition of temperature T is ∂U/∂S, and another of p is -∂U/∂V (which makes sense if you think about it: squeezing the system, i.e. decreasing V, increases U, so -∂U/∂V is positive, as a pressure should be).

Suddenly dU looks like what we were talking about with the Legendre transformation.

dU = T dS – p dV + ∑ mu(i) dNi

Apply the Legendre transformation to U to switch conjugate variables p and V

H = U + pV ; looks suspiciously like enthalpy (H) because it is

dH = dU + p dV + V dp

= T dS – p dV + ∑ mu(i) dNi + p dV + V dp

= T dS + V dp + ∑ mu(i) dNi

Notice how mu(i) here comes out to ∂H/∂Ni at constant S and p

Start with the fundamental thermodynamic equation for internal energy

dU = T dS – p dV + ∑ mu(i) dNi

Now apply the Legendre transformation to T and S and you get
F = U – TS ; looks like the Helmholtz free energy (sometimes written A, but never as H) because it is.

You get

dF = – S dT – p dV + ∑ mu(i) dNi

Who cares? Chemists do, because although it is difficult to hold U or S constant (and impossible to measure them directly), it is very easy to keep temperature and volume constant in a reaction, meaning that the change in Helmholtz free energy under those conditions is just ∑ mu(i) dNi. So here mu(i) = ∂F/∂Ni at constant T and V.

If you start with enthalpy

dH = T dS + V dp + ∑ mu(i) dNi

and apply the Legendre transformation to T and S, and you get the Gibbs free energy G = H – TS

I won’t bore you with it but this gives you the chemical potential mu(i) at constant T and p, conditions chemists easily arrange all the time.

To summarize

Enthalpy (H) is one Legendre transform of internal energy (U)
Helmholtz free energy (F) is another Legendre transform of U
Gibbs free energy (G) is the Legendre transform of Enthalpy (H)

It should be clear that Legendre transforms are all reversible

For example, if H = U + pV then U = H – pV

If you think a bit about the 5 definitions of chemical potential, you’ll see that each uses a different pair of natural variables drawn from U, S, p, V and T. Ultimately all the thermodynamic variables (U, S, H, G, F, p, V, T, mu(i)) are related to each other.

Examples include H = U + pV, F = U – TS, G = H – TS

Helping keep things clear are equations of state from the things you can easily measure (p,V, T). The most famous is the ideal gas law p V = nRT.
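As a sanity check that two of the five definitions really give the same mu, here is a sketch using the monatomic ideal gas. The Helmholtz free energy formula is the standard Sackur-Tetrode form; the thermal de Broglie wavelength is set to an arbitrary illustrative value. It compares ∂F/∂N at constant T, V with ∂G/∂N at constant T, p by finite differences:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
lam = 1e-10        # thermal de Broglie wavelength, m (illustrative value)

def F(T, V, N):
    """Helmholtz free energy of a monatomic ideal gas, natural variables T, V, N."""
    return -N * k * T * (math.log(V / (N * lam**3)) + 1.0)

def G(T, p, N):
    """Gibbs free energy of the same gas in its natural variables T, p, N
    (obtained from G = F + pV using the ideal gas law pV = NkT)."""
    return N * k * T * math.log(p * lam**3 / (k * T))

T, p, N = 300.0, 1e5, 1e22
V = N * k * T / p   # the equation of state pV = NkT fixes V once T, p, N are chosen

dN = 1e15           # small change in particle number for the finite difference
mu_from_F = (F(T, V, N + dN) - F(T, V, N - dN)) / (2 * dN)   # dF/dN at constant T, V
mu_from_G = (G(T, p, N + dN) - G(T, p, N - dN)) / (2 * dN)   # dG/dN at constant T, p
print(mu_from_F, mu_from_G)   # the same chemical potential from either definition
```

The two derivatives hold different things constant, yet at the same state point they return the same number, which is the whole point of the Legendre transforms above.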

Now we know why hot food tastes different

An absolutely brilliant piece of physical chemistry explained a puzzling biologic phenomenon that organic chemistry was powerless to illuminate.

First, a fair amount of background

Ion channels are proteins present in the cell membrane of all our cells, but in neurons they are responsible for maintaining a potential across the membrane, which can change abruptly, causing a nerve cell to fire an impulse. Functionally, ligand-activated ion channels are pretty easy to understand. A chemical binds to them, they open, and the neuron fires (or a muscle contracts — same thing). The channels don’t let everything in, just particular ions. Thus one type of channel, which binds acetylcholine, lets in sodium (not potassium, not calcium), which causes the cell to fire impulses. The GABA[A] receptor (the ion channel for gamma aminobutyric acid) lets in chloride ions (and little else), which inhibits the neuron carrying it from firing. (This is why the benzodiazepines and barbiturates are anticonvulsants.)

Since ion channels are full of amino acids, some of which have charged side chains, it’s easy to see how a change in electrical potential across the cell membrane could open or shut them.

By the way, the potential is huge, although it doesn’t seem like much. It is usually given as 70 milliVolts (inside negatively charged, outside positively charged). Why is this a big deal? Because the electric field across our membranes is enormous. 70 milliVolts is 70 x 10^-3 (i.e. 7 x 10^-2) volts, and the cell membrane is quite thin: just 70 Angstroms, which is 7 nanoMeters (7 x 10^-9 meters). Divide 7 x 10^-2 volts by 7 x 10^-9 meters and you get a field of 10,000,000 Volts/meter.
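The arithmetic, spelled out as a two-line sketch:

```python
# Electric field = potential difference / membrane thickness
V_m = 70e-3    # membrane potential: 70 milliVolts, in volts
d = 70e-10     # membrane thickness: 70 Angstroms = 7 nanoMeters, in meters
E = V_m / d    # field strength in Volts/meter
print(E)       # about 10^7 V/m, i.e. ten million Volts per meter
```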

Now for the main course. We easily sense hot and cold. This is because we have a bunch of different ion channels which open in response to different temperatures. All this without neurotransmitters binding to them, or changes in electric potential across the membrane.

People had searched for some particular sequence of amino acids common to the channels to no avail (this is the failure of organic chemistry).

In a brilliant paper, entropy was found to be the culprit. Chemists are used to considering entropy effects (primarily on reaction kinetics, but on equilibria as well). What happens is that in the open state a large number of hydrophobic amino acids are exposed to the extracellular space. To accommodate them (e.g. to solvate them), water around them must be more ordered, decreasing entropy. This, of course, is why oil and water don’t mix.

As all the chemists among us should remember, the equilibrium constant has a component due to enthalpy (heat) and a component due to entropy.

The entropy term must be multiplied by the temperature, which is where the temperature sensitivity of the equilibrium constant (in this case open channel/closed channel) comes in. Remember changes in entropy and enthalpy work in opposite directions —

delta G (Gibbs free energy) = delta H (enthalpy) – T * delta S (entropy)

Here’s the paper [ Cell vol. 158 pp. 977 – 979, 1148 – 1158 ’14 ]. They note that if a large number of buried hydrophobic groups become exposed to water on a conformational change in the ion channel, an increased heat capacity should be produced, due to water ordering to solvate the hydrophobic side chains. This should confer a strong temperature dependence on the equilibrium constant for the reaction. Exposing just 20 hydrophobic side chains in a tetrameric channel should do the trick. The side chains don’t have to be localized in a particular area (which is why organic chemists and biochemists couldn’t find a stretch of amino acids conferring cold or heat sensitivity — it didn’t matter where the hydrophobic amino acids were, as long as there were enough of them, somewhere).

In some way this entwines enthalpy and entropy, making temperature-dependent activation U-shaped rather than monotonic. So such a channel is in principle both hot activated and cold activated, with the position of the U along the temperature axis determining which activation mode is seen at experimentally accessible temperatures.
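A minimal sketch of that idea, using the standard Gibbs-Helmholtz relations with a constant heat-capacity change deltaCp (all numbers invented for illustration, not fitted to the Cell paper): when deltaCp is large, deltaG(T) is curved rather than linear in T, so channel opening can become favorable at both low and high temperature.

```python
import math

T0 = 298.0       # reference temperature, K
dH0 = 0.0        # deltaH at T0, J/mol (assumed: a channel balanced at T0)
dS0 = 0.0        # deltaS at T0, J/(mol*K) (assumed)
dCp = 10000.0    # large heat-capacity change on opening, J/(mol*K) (assumed)

def dG(T):
    """deltaG of channel opening with temperature-dependent deltaH and deltaS:
    dH(T) = dH0 + dCp*(T - T0),  dS(T) = dS0 + dCp*ln(T/T0)."""
    dH = dH0 + dCp * (T - T0)
    dS = dS0 + dCp * math.log(T / T0)
    return dH - T * dS

# deltaG peaks at T0 and falls on BOTH sides: opening is favored at both
# low and high temperature, which is the U-shaped activation in the text
for T in (278.0, 298.0, 318.0):
    print(T, round(dG(T)))
```

Shifting dH0 and dS0 moves the U along the temperature axis, which is how one channel can read out cold and another heat.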

All very nice, but how many beautiful theories have we seen get crushed by ugly facts. If they really understood what is going on with temperature sensitivity, they should be able to change a cold activated ion channel to a heat activated one (by mutating it). If they really, really understood things, they should be able to take a run of the mill temperature INsensitive ion channel and make it temperature sensitive. Amazingly, the authors did just that.

Impressive. Read the paper.

This harks back to the days when theories of organic reaction mechanisms were tested by building molecules to test them. When you made a molecule that no one had seen before and predicted how it would react you knew you were on to something.