Tag Archives: Statistical mechanics

Prolegomena to reading Fall by Neal Stephenson

As a college freshman I spent hours trying to untangle Kant’s sentences in “Prolegomena to Any Future Metaphysics.”  Here’s sentence #1.   “In order that metaphysics might, as science, be able to lay claim, not merely to deceitful persuasion, but to insight and conviction, a critique of reason itself must set forth the entire stock of a priori concepts, their division according to the different sources (sensibility, understanding, and reason), further, a complete table of those concepts, and the analysis of all of them along with everything that can be derived from that analysis; and then, especially, such a critique must set forth the possibility of synthetic cognition a priori through a deduction of these concepts, it must set forth the principles of their use, and finally also the boundaries of that use; and all of this in a complete system.”

This post is something to read before tackling “Fall” by Neal Stephenson, a prolegomenon if you will.  Hopefully it will be more comprehensible than Kant.   I’m only up to p. 83 of a nearly 900-page book.  But so far the book’s premise seems to be that if you knew each and every connection (synapse) between every neuron (i.e. a wiring diagram), you could resurrect the consciousness of an individual.  Perhaps Stephenson will get more sophisticated as I proceed through the book.  Perhaps not.  But he’s clearly done a fair amount of neuroscience homework.

So read the following old post about why a wiring diagram of the brain isn’t enough to explain how it works.   Perhaps he’ll bring in the following points later in the book.

Here’s the old post.  Some serious (and counterintuitive) scientific results to follow in tomorrow’s post.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every one of the 10^23 molecules of a mole or so of gas in a container is brought home.  Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.
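
To make the contrast concrete, here is a minimal sketch (my own, using standard constants, not anything from the course) of how a macroscopic quantity falls out of a single molecular average: for an ideal gas, PV = (2/3) N <KE>, so the pressure follows from nothing more than the mean kinetic energy per molecule and the volume.

```python
# Minimal sketch: macroscopic pressure from the average kinetic energy
# per molecule, for an ideal gas where PV = (2/3) N <KE>.
N_A = 6.022e23            # molecules in a mole
k_B = 1.380649e-23        # Boltzmann constant, Joules/Kelvin
T = 300.0                 # temperature, Kelvin
V = 0.0224                # volume of one mole of gas at STP, cubic meters
mean_KE = 1.5 * k_B * T   # average kinetic energy per molecule, Joules

P = (2.0 / 3.0) * N_A * mean_KE / V
print(f"pressure ~ {P / 101325:.2f} atmospheres")  # ~1.1 atm at 300 K
```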

However, people are taking the first approach in an attempt to understand the brain.  They want a ‘wiring diagram’ of the brain, i.e. a list of every neuron, and for each neuron a second list of the neurons connected to it (its inputs), and a third list of the neurons it is connected to (its outputs).  For the non-neuroscientist: the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana).  This is why you need the second and third lists.

Clearly a monumental undertaking, and one which grows more monumental with the passage of time.  When I was starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them).  This is where the neurological urban myth of the loss of 10,000 neurons each day came from.  For details see https://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 ’11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses.  Well, that’s not a mole of synapses, but it is about a quarter of a nanoMole of them.  People are nonetheless trying to see which areas of the brain are connected to each other, to at least get a schematic diagram.
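
A quick back-of-envelope check of that claim (my arithmetic, taking the paper’s 150 trillion figure at face value):

```python
# How many moles is 150 trillion synapses?
N_A = 6.022e23                 # Avogadro's number
synapses = 150e12              # 150 trillion
moles = synapses / N_A
print(f"{moles:.2e} moles, i.e. {moles * 1e9:.2f} nanoMoles")  # ~0.25 nanoMoles
```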

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it.  I strongly recommend looking at the pictures found in Nature vol. 471 pp. 177 – 182 ’11 to get a sense of the  complexity of the interconnection between neurons and just how many there are.  Figure 2 (p. 179) is particularly revealing showing a 3 dimensional reconstruction using the high resolutions obtainable by the electron microscope.  Stare at figure 2.f. a while and try to figure out what’s going on.  It’s both amazing and humbling.

But even assuming that someone or something could, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing.  There are at least 3 reasons.

1. Synapses, to a first approximation, are excitatory (turn on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses).  A wiring diagram alone won’t tell you this.

2. When I was starting out, the following statement would have seemed impossible.  It is now possible to watch synapses in the living brain of an awake animal for extended periods of time.  And we now know that synapses come and go in the brain.  The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times.  Here are a few references [ Neuron vol. 69 pp. 1039 – 1041 ’11, ibid vol. 49 pp. 780 – 783, 877 – 887 ’06 ].  So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses.  Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons and affect their activity without any synapses at all (it’s called volume neurotransmission).  Their importance in psychiatry and drug addiction is unparalleled.  Examples of such volume transmitters include serotonin, dopamine and norepinephrine.  Drugs of abuse affecting their action include cocaine and amphetamine.  Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg.  This is of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles.  There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse.  The construction of the brain from the fertilized egg must be in some sense statistical.  Remarkable that it happens at all.  Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.
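
Here is a back-of-envelope version of that argument (my arithmetic, not from the Science paper): even an absurdly compact scheme that just names the two endpoints of every synapse needs vastly more bits than the genome holds.

```python
import math

# Could the genome explicitly specify every synapse?
neurons = 80e9                      # neurons in a human brain
synapses = 150e12                   # synapses
genome_nt = 3.2e9                   # nucleotides in the genome
genome_bits = genome_nt * 2         # 2 bits per nucleotide (A, C, G or T)

bits_per_neuron_id = math.log2(neurons)          # ~36 bits to name one neuron
wiring_bits = synapses * 2 * bits_per_neuron_id  # name both ends of each synapse

print(f"genome capacity:      {genome_bits:.1e} bits")   # ~6.4e9
print(f"explicit wiring list: {wiring_bits:.1e} bits")   # ~1.1e16
print(f"shortfall:            ~{wiring_bits / genome_bits:.0e}x")  # ~2e6
```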

 

A creation myth

Sigmund Freud may have been wrong about penis envy, but most lower forms of scientific life (chemists, biologists) do have physics envy — myself included.  Most graduate chemists have taken a quantum mechanics course, if only to see where atomic and molecular orbitals come from.  Anyone doing physical chemistry has likely studied statistical mechanics. I was fortunate enough to audit one such course given by E. Bright Wilson (of Pauling and Wilson).

Although we no longer study physics per se, most of us read books about physics.  Two excellent such books have come out in the past year.  One is “What is Real?” — https://www.basicbooks.com/titles/adam-becker/what-is-real/9780465096053/, the other is “Lost in Math” by Sabine Hossenfelder whose blog on physics is always worth reading, both for herself and the heavies who comment on what she writes — http://backreaction.blogspot.com

Both books deserve a long discursive review here. But that’s for another time.  Briefly, Hossenfelder thinks that physics for the past 30 years has become so fascinated with elegant mathematical descriptions of nature that theories are judged by their mathematical elegance and beauty, rather than by agreement with experiment.  She acknowledges that the experiments are both difficult and expensive, and notes that it took a century for one such prediction (gravitational waves) to be confirmed.

The mathematics of physics can certainly be seductive, and even a lowly chemist such as myself has been bowled over by it.  Here is how it hit me.

Budding chemists start out by learning that electrons like to be in filled shells. The first shell has 2 elements, the next 2 + 6 elements, etc. etc. It allows the neophyte to make some sense of the periodic table (as long as they deal with low atomic numbers — why the 4s electrons are of lower energy than the 3d electrons still seems quite ad hoc to me). Later on we were told that this is because of quantum numbers n, l, m and s. Then we learn that atomic orbitals have shapes, in some weird way determined by the quantum numbers, etc. etc.

Recursion relations are no stranger to the differential equations course, where you learn to (tediously) find them for a polynomial series solution for the differential equation at hand. I never really understood them, but I could use them (like far too much math that I took back in college).

So it wasn’t a shock when the QM instructor back in 1961 got to them in the course of solving the Schrodinger equation for the hydrogen atom (with its radially symmetric potential). First the equation had to be expressed in spherical coordinates (r, theta and phi), which made the Laplacian look rather fierce. Then the equation was split into 3 variables, each involving one of r, theta or phi. The easiest to solve was the one involving phi, which involved only a complex exponential. But the periodic nature of the solution made the magnetic quantum number fall out. Pretty good, but nothing earthshaking.

Recursion relations made their appearance with the solution of the radial and the theta equations. So it was plug and chug time with series solutions and recursion relations so things wouldn’t blow up (or as Dr. Gouterman, the instructor, put it: the electron has to be somewhere, so the wavefunction must be zero at infinity). MEGO (My Eyes Glazed Over) until all of a sudden there were the principal quantum number (n) and the azimuthal quantum number (l) coming directly out of the recursion relations.

When I first realized what was going on, it really hit me. I can still see the room and the people in it (just as people can remember exactly where they were and what they were doing when they heard about 9/11 or (for the oldsters among you) when Kennedy was shot — I was cutting a physiology class in med school). The realization that what I had considered mathematical diddle, in some way was giving us the quantum numbers and the periodic table, and the shape of orbitals, was a glimpse of incredible and unseen power. For me it was like seeing the face of God.

But what interested me the most about “Lost in Math” was Hossenfelder’s discussion of the different physical laws appearing at different physical scales (e.g. effective laws), emergent properties and reductionism (pp. 44 and following).  Although things at larger scales (atoms) can be understood in terms of the physics of smaller scales (protons, neutrons, electrons), the details of elementary particle interactions (quarks, gluons, leptons etc.) don’t matter much to the chemist.  The orbits of planets don’t depend on planetary structure, etc. etc.  She notes that reduction of events at one scale to those at a smaller one is not an optional philosophical position to hold; it’s just the way nature is, as revealed by experiment.  She notes that you could ‘in principle, derive the theory for large scales from the theory for small scales’ (although I’ve never seen it done), and then she moves on.

But the existence of different structures and different laws at different scales is what has always fascinated me about the world in which we exist.  Do we have a model for a world structured this way?

Of course we do.  It’s the computer.

 

Neurologists have always been interested in computers, and computer people have always been interested in the brain — von Neumann wrote “The Computer and the Brain” shortly before his death; it was published in 1958.

Back in med school in the 60s people were just figuring out how neurons talked to each other where they met at the synapse.  It was with a certain degree of excitement that we found that information appeared to flow just one way across the synapse (from the PREsynaptic neuron to the POSTsynaptic neuron), just like the vacuum tubes of the earliest computers, in which current (and information) could flow just one way.

The microprocessors based on transistors that a normal person could play with came out in the 70s.  I was naturally interested; having taken QM, I thought I could understand how transistors work.  I knew about energy gaps in atomic spectra, but how in the world a crystal with zillions of atoms and electrons floating around could produce a band gap seemed like a mystery to me, and still does.  It’s an example of ’emergence’, about which more later.

But forgetting all that, it’s fairly easy to see how electrons could flow from a semiconductor with an abundance of them (due to doping) to a semiconductor with a deficit — and have a hard time flowing back.  Again a one way valve, just like our concept of the synapses.

Now of course, we know information can flow the other way in the synapse from POST synaptic to PREsynaptic neuron, some of the main carriers of which are the endogenous marihuana-like substances in your brain — anandamide etc. etc.  — the endocannabinoids.

In 1968 my wife learned how to do assembly language coding: punch cards, ones and zeros, the whole bit.  Why?  Because I was scheduled for two years of active duty as an Army doc, a time in which we had half a million men in Vietnam.  She was preparing to be a widow with 2 infants, as the Army sent me a form asking for my preferences in assignment, a form so out of date that it offered the option of taking my family with me to Vietnam if I’d extend my tour over there to 4 years.  So I sat around drinking Scotch and reading Faulkner waiting to go in.

So when computers became something the general populace could have, I tried to build a mental one using AND, OR and NOT logic gates and 1s and 0s for high and low voltages. Since I could see how to build all three using transistors (reductionism), I just went one plane higher.  Note that although the gates can be easily reduced to transistors, and transistors to p and n type semiconductors, there is nothing in the laws of semiconductor physics that implies putting them together to form logic gates.  So the higher plane of logic gates is essentially an act of creation.  They do not necessarily arise from transistors.

What I was really interested in was hooking the gates together to form an ALU (arithmetic and logic unit).  I eventually did it, but doing so showed me the necessity of other components of the chip (the clock and in particular the microcode which lies below assembly language instructions).
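
Here is a toy version of that exercise (a sketch in Python rather than voltages; my construction, not a record of what I actually built): gates defined only in terms of 1s and 0s, then composed into a half adder, the first step toward an ALU. Nothing in the gate definitions implies the adder; the composition is imposed from the level above.

```python
# Logic gates as functions of 1s and 0s (high and low voltages).
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

# XOR built purely from the three gates above.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two one-bit numbers, returning (sum, carry)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {carry}")
```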

The next level up is what my wife was doing — sending assembly language instructions of 1’s and 0’s to the computer, and watching how gates were opened and shut, registers filled and emptied, transforming the 1’s and 0’s in the process.  Again note that there is nothing in the way the gates are hooked together that makes them do anything in particular.  The program is at yet another, higher level.

Above that are the higher level programs, Basic, C and on up.  Above that hooking computers together to form networks and then the internet with TCP/IP  etc.

While they all can be reduced, there is nothing inherent in the things that they are reduced to which implies their existence.  Their existence was essentially created by humanity’s collective mind.

Could something similar be going on in the levels of the world seen in physics?  Here’s what Nobel laureate Robert Laughlin (he of the fractional quantum Hall effect) has to say about it — http://www.pnas.org/content/97/1/28.  Note that this was written before people began taking quantum computers seriously.

“However, it is obvious glancing through this list that the Theory of Everything is not even remotely a theory of every thing (2). We know this equation is correct because it has been solved accurately for small numbers of particles (isolated atoms and small molecules) and found to agree in minute detail with experiment (3–5). However, it cannot be solved accurately when the number of particles exceeds about 10. No computer existing, or that will ever exist, can break this barrier because it is a catastrophe of dimension. If the amount of computer memory required to represent the quantum wavefunction of one particle is N, then the amount required to represent the wavefunction of k particles is N^k. It is possible to perform approximate calculations for larger systems, and it is through such calculations that we have learned why atoms have the size they do, why chemical bonds have the length and strength they do, why solid matter has the elastic properties it does, why some things are transparent while others reflect or absorb light (6). With a little more experimental input for guidance it is even possible to predict atomic conformations of small molecules, simple chemical reaction rates, structural phase transitions, ferromagnetism, and sometimes even superconducting transition temperatures (7). But the schemes for approximating are not first-principles deductions but are rather art keyed to experiment, and thus tend to be the least reliable precisely when reliability is most needed, i.e., when experimental information is scarce, the physical behavior has no precedent, and the key questions have not yet been identified. There are many notorious failures of alleged ab initio computation methods, including the phase diagram of liquid ³He and the entire phenomenology of high-temperature superconductors (8–10). Predicting protein functionality or the behavior of the human brain from these equations is patently absurd. So the triumph of the reductionism of the Greeks is a pyrrhic victory: We have succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything only to discover that it has revealed exactly nothing about many things of great importance.”

So reductionism doesn’t explain the laws we have at various levels.  They are regularities to be sure, and they describe what is happening, but a description is NOT an explanation.  Newton’s gravitational law predicts zillions of observations about the real world, yet it doesn’t explain what gravity is.  Even Newton famously said Hypotheses non fingo (Latin for “I feign no hypotheses”) when discussing the action at a distance which his theory of gravity entailed. Actually he thought the idea was crazy. “That Gravity should be innate, inherent and essential to Matter, so that one body may act upon another at a distance thro’ a Vacuum, without the Mediation of any thing else, by and through which their Action and Force may be conveyed from one to another, is to me so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it.”

So are the various physical laws things that are imposed from without, by God only knows what?  The computer with its various levels of phenomena certainly was consciously constructed.

Is what I’ve just written a creation myth or is there something to it?

The bouillabaisse of the synaptic cleft

The synaptic cleft is so small (under 400 Angstroms — 40 nanoMeters) that it can’t be seen with the light microscope (the shortest wavelength of visible light is about 3,900 Angstroms — 390 nanoMeters).  This led to a bruising battle between Cajal and Golgi just over a century ago over whether the brain was actually made of discrete cells.  Even though Golgi’s work led to the delineation of single neurons, he thought the brain was a continuous network.  They both won the Nobel in 1906.

Semifast forward to the mid 60s when I was in medical school.  We finally had the electron microscope, so we could see synapses. They showed up as small CLEAR spaces (i.e. electrons passed through them easily, leaving them white) between neurons.  Neurotransmitters were being discovered at the same time, and the synapse was taken to be the analog of the vacuum tube, which could pass electricity in just one direction (yes, the transistor, although invented, hadn’t yet been used to make anything resembling a computer a normal person could play with — the Intel 4004 didn’t appear until the 70s).  Of course now we know that information flows back and forth across the synapse, with endocannabinoids (e.g. natural marihuana) being the major retrograde neurotransmitters.

Since there didn’t seem to be anything in the synaptic cleft, neurotransmitters were thought to freely diffuse across it to bind to receptors on the other (postsynaptic) side (a free-fly zone).

Fast forward to the present, to a marvelous review (grueling to read because of the complexity of the subject, not the way it’s written) of just what is in the synaptic cleft [ Cell vol. 171 pp. 745 – 769 ’17 ] http://www.cell.com/cell/fulltext/S0092-8674(17)31246-1 (likely behind a paywall).  There are over 120 references, and rather than being just a catalogue, the single author, Thomas Sudhof, extensively discusses which experimental work is to be believed (not that Sudhof is saying the work is fraudulent, but that it can’t be used to extrapolate to the living human brain).  The review is a staggering piece of work for one individual.

The stuff in the synaptic cleft is so diverse, and so intimately involved with itself and the membranes on either side, that what is needed for comprehension is not a chemist but a sociologist.  Probably most of the molecules to be discussed are present in such small numbers that the law of mass action doesn’t apply, nor do binding constants, which rely on large numbers of ligands and receptors. Not only that, but the binding constants haven’t been determined for many of the players.

Now for some anatomic detail and numbers.  It is remarkably hard to find just how far laterally the synaptic cleft extends.  Molecular Biology of the Cell ed. 5 p. 1149 has a fairly typical picture with a size marker, and it looks to be about 2 microns across (20,000 Angstroms, 2,000 nanoMeters).  Treating the cleft as a circle 2 microns in diameter gives an area of about 314,000,000 square Angstroms (3.14 square microns).  So let’s assume each protein takes up a square 50 Angstroms on a side (2,500 square Angstroms).  That’s room for about 125,600 proteins on each side assuming extremely dense packing.  However, the density of acetylcholine receptors at the neuromuscular junction is 8,700/square micron (a packing also thought to be extremely dense), which would give only about 27,300 such proteins in a similarly distributed CNS synapse. So the numbers are at least in the right ballpark (meaning they’re within an order of magnitude, i.e. within a power of 10) of being correct.
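
Redoing the paragraph’s arithmetic explicitly (same assumed numbers: a circular cleft 2 microns across, a 50 x 50 Angstrom protein footprint, and 8,700 receptors per square micron at the neuromuscular junction):

```python
import math

cleft_diameter_A = 20_000                        # 2 microns, in Angstroms
area_A2 = math.pi * (cleft_diameter_A / 2) ** 2  # ~3.14e8 square Angstroms
protein_A2 = 50 * 50                             # one protein's footprint

print(f"dense packing:    ~{area_A2 / protein_A2:,.0f} proteins")  # ~125,700

achr_per_um2 = 8_700                             # AChR density at the NMJ
area_um2 = area_A2 / 1e8                         # 1e8 square Angstroms per square micron
print(f"NMJ-like density: ~{achr_per_um2 * area_um2:,.0f} proteins")  # ~27,300
```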

What’s the point?

When you see how many different proteins and different varieties of the same protein reside in the cleft, the numbers for each individual element are likely to be small, meaning that you can’t use statistical mechanics but must use sociology instead.

The review focuses on the neurExins (I capitalize the E  to help me remember that they are prEsynaptic).  Why?  Because they are the best studied of all the players.  What a piece of work they are.  Humans have 3 genes for them. One of the 3 codes for a protein of 1,477 amino acids, spread over 1,112,187 basepairs (1.1 megaBases) with 74 exons.  This means that only about 4/10 of a percent of the gene (1,477 x 3 = 4,431 bases) actually codes for the amino acids making up the protein.  I think it takes energy for RNA polymerase II to stitch the ribonucleotides into the 1.1 megabase pre-mRNA, but I couldn’t (quickly) find out how much per ribonucleotide.  It seems quite wasteful of energy, unless there is some other function to the process which we haven’t figured out yet.
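
The coding-fraction arithmetic, spelled out (numbers as quoted above):

```python
amino_acids = 1_477
gene_bp = 1_112_187
coding_bp = amino_acids * 3   # 3 bases per codon
print(f"coding fraction: {coding_bp / gene_bp:.2%}")  # ~0.40% of the gene
```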

Most of the molecule resides in the synaptic cleft.  There are 6 LNS domains with 3 interspersed EGF-like repeats, a cysteine loop domain, a transmembrane region and a cytoplasmic sequence of 55 amino acids. There are 6 sites for alternative splicing, and because there are two promoters for each of the 3 genes, there is a shorter form (beta neurexin) with less extracellular stuff than the long form (alpha-neurexin).  When all is said and done there are over 1,000 possible variants of the 3 genes.

Unlike olfactory neurons, which only express one or two of the nearly 1,000 olfactory receptors, neurons express multiple isoforms of each neurexin gene, increasing the complexity.

The LNS regions of the neurexins are like immunoglobulins and fill a 60 x 60 x 60 Angstrom box.  Since the synaptic cleft is at most 400 Angstroms across, the alpha-neurexins (if extended) reach all the way across.

Here the neurexins bind to the neuroligins, which are always postsynaptic — sorry, no mnemonic.  The neuroligins are simpler in structure, but they are the product of 4 genes, and only about 40 isoforms (due to alternative splicing) are possible. Neuroligins 1, 3 and 4 are found at excitatory synapses; neuroligin 2 is found at inhibitory synapses.  The intracleft part of the neuroligins resembles an important enzyme (acetylcholinesterase) but is catalytically inactive.  This is where the neurexins bind.

This is complex enough, but Sudhof notes that the neurexins are hubs interacting with multiple classes of post-synaptic molecules, in addition to the neuroligins — dystroglycan, GABA[A] receptors, calsyntenins, latrophilins (of which there are 4).   There are at least 50 post-synaptic cell adhesion molecules — “Few are well understood, although many are described.”

The neurexins have 3 major sites where other things bind, and all sites may be occupied at once.  Just to give you a taste of the complexity involved (before I go on to larger issues):

The second LNS domain (LNS2) is found only in the alpha-neurexins, and binds to neurexophilin (of which there are 4) and dystroglycan.

The 6th LNS domain (LNS6) binds to neuroligins, LRRTMs, GABA[A] receptors, cerebellins and latrophilins (of which there are 4).

The juxtamembrane sequence of the neurexins binds to CA10, CA11 and C1ql.

The cerebellins (of which there are 4) bind to all the neurexins (of a particular splice variety) and, interestingly, to some postsynaptic glutamic acid receptors.  So there is a direct chain across the synapse from neurexin to cerebellin to ion channel (GluD1, GluD2).

There is far more to the review. But here is something I didn’t see there.  People have talked about proton wires — sites on proteins that allow protons to jump from one site to another, and move much faster than they would if they had to bump into everything in solution.  Remember that molecules are moving quite rapidly — a water molecule is moving at about 590 meters a second at room temperature. Since the synaptic cleft is 40 nanoMeters (40 x 10^-9 meters) across, it should take only about 70 trillionths of a second (70 picoSeconds; 40 x 10^-9 meters / 590 meters/second is roughly 6.8 x 10^-11 seconds) to cross, assuming the synapse is a free-fly zone — but it isn’t, as the review exhaustively shows.
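
The same back-of-envelope in code (assuming, as above, a 40 nanoMeter cleft and a 590 meter/second thermal speed, and pretending nothing is in the way):

```python
cleft_m = 40e-9      # synaptic cleft width, meters
speed_m_s = 590.0    # thermal speed of a water molecule at room temperature
t = cleft_m / speed_m_s
print(f"ballistic crossing time: ~{t * 1e12:.0f} picoSeconds")  # ~68 ps
```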

Is it possible that the various neurotransmitters at the synapse (glutamic acid, gamma amino butyric acid, etc.) bind to the various proteins crossing the cleft to get to their targets in the postsynaptic membrane (e.g. neurotransmitter wires)?  I didn’t see any mention of neurotransmitter binding to the various cleft-crossing proteins in the review.  This may actually be an original idea.

I’d like to put more numbers on many of these things, but they are devilishly hard to find.  Both the neuroligins and neurexins are said to have stalks pushing them out from the membrane, but I can’t find how many amino acids they contain.  Nor can I find how much energy it takes to copy the 1.1 megabase neurexin gene into mRNA (or even how much energy it takes to add one ribonucleotide to an existing mRNA chain).

Another point — proteins have a finite lifetime.  How are they replenished?  We know that there is some synaptic protein synthesis — does the cell body send packages of mRNAs to the synapse to be translated there?  There are at least 50 different proteins mentioned in the review, and don’t forget the thousands of possible isoforms, each of which requires a separate mRNA.

Old Chinese saying — the mountains are high and the emperor is far away. Protein synthesis at the synaptic cleft is probably local.  How what gets made (and when) is controlled is an entirely different problem.

A large part of the review concerns mutations in all these proteins associated with neurologic disease (particularly autism).  This whole area has a long and checkered history.  A high degree of cynicism is needed before believing that any of these mutations are causative.  As a neurologist dealing with epilepsy I saw the whole idea of ion channel mutations causing epilepsy crash and burn — here’s a link — https://luysii.wordpress.com/2011/07/17/we’ve-found-the-mutation-causing-your-disease-not-so-fast-says-this-paper/

Once again, hats off to Dr. Sudhof for what must have been a tremendous amount of work.

The road to the Boltzmann constant

If you’re going to think seriously about cellular biophysics, you’ve really got to clearly understand the Boltzmann constant and what it means.

The road to it starts well outside the cell, in the perfect gas law, part of Chem 101. This seems rather paradoxical. Cells (particularly neurons) do use gases (carbon monoxide, hydrogen sulfide, nitric oxide, and of course oxygen and CO2) as they function, but they are far from all gas.

Get out your colored pencils, with separate colors for pressure, energy, work, force, area, acceleration and volume. All of them are combinations of just 3 things: mass, distance and time, for which you don’t need a colored pencil.

The perfect gas law is Pressure * Volume = n R Temperature — the units don’t matter at this point. R is the gas constant, and n is the number of moles (chem 101 again).

Pressure = Force / Area
Force = Mass * Acceleration
Acceleration = distance / (time * time )
Area = Distance * distance

Volume = Distance * distance * distance

So Pressure * Volume = { Mass * distance / (time * time * distance * distance ) } * { distance * distance * distance }

= mass * distance * distance / ( time * time )

This looks suspiciously like kinetic energy (because it is).

Since work is defined as force * distance == mass * acceleration * distance

This also comes out to mass * distance * distance / ( time * time )

So Pressure * Volume has the units of work or kinetic energy
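
One way to see the dimensional bookkeeping all at once is to represent each quantity as a triple of exponents of (mass, distance, time) and let the machine do the multiplication (a sketch of my own, not from any text):

```python
# Each quantity is a tuple of exponents: (mass, distance, time).
def dim(mass=0, distance=0, time=0):
    return (mass, distance, time)

def mul(a, b):
    """Multiplying quantities adds their exponents."""
    return tuple(x + y for x, y in zip(a, b))

acceleration = dim(distance=1, time=-2)
force        = mul(dim(mass=1), acceleration)   # mass * acceleration
pressure     = mul(force, dim(distance=-2))     # force / area
volume       = dim(distance=3)
energy       = dim(mass=1, distance=2, time=-2) # mass * distance^2 / time^2

print(mul(pressure, volume) == energy)  # True: P * V has the units of energy
```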

Back to the perfect gas law P * V = n * R * T

It’s time to bring in the units we actually use to  measure energy and work.

Energy and work are measured in Joules. Temperature is measured in degrees above absolute zero (i.e. degrees Kelvin) — 300 Kelvin is about 81 degrees Fahrenheit, reasonably close to body temperature (310 Kelvin).

Assume we have one mole of gas. Then the gas constant (R) is just PV/T, with units of Joules/degree Kelvin == energy/degree Kelvin.

Statistical mechanics thinks about molecules not moles (6.022 * 10^23 molecules).

So the Boltzmann constant is just the gas constant (R = 8.31441 Joules/(mole * degree Kelvin)) divided by the number of molecules in a mole — it’s basically the thermal energy scale of a single molecule per degree Kelvin.  It is called k and equals 1.38 * 10^-23 Joules/degree Kelvin.

Biophysicists are far more interested in how much energy a given molecule has at body temperature — to find this, multiply k by T (which is why you see kT all over the place).

At 300 Kelvin, kT is:

4.1 picoNewton * nanoMeters — work
26 milliElectron volts
0.6 kiloCalories/mole
4.1 * 10^-21 Joules/molecule — energy
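
A quick verification of those four numbers (standard constants, T = 300 Kelvin):

```python
k_B = 1.380649e-23    # Boltzmann constant, Joules/Kelvin
N_A = 6.02214e23      # Avogadro's number, 1/mole
eV  = 1.602177e-19    # Joules per electron volt
T   = 300.0           # Kelvin

kT = k_B * T
print(f"{kT:.2e} Joules/molecule")                    # ~4.1e-21
print(f"{kT * 1e21:.1f} picoNewton * nanoMeters")     # ~4.1 (1 pN*nm = 1e-21 J)
print(f"{kT / eV * 1000:.0f} milliElectron volts")    # ~26
print(f"{kT * N_A / 4184:.2f} kiloCalories/mole")     # ~0.60
```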

Now we’re ready to start thinking about the molecular world.

I should do it, but hopefully someone out there can use this information to find how fast a sodium ion is moving around in our cells. Perhaps I’ll do this in a future post if no one does — it’s probably out there on the net.
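
Here is one stab at the challenge (my own sketch, assuming equipartition, (3/2)kT = (1/2)mv^2, at body temperature; this gives the instantaneous thermal speed between collisions, not how fast the hydrated ion actually gets anywhere in a crowded cell):

```python
import math

k_B = 1.380649e-23           # Joules/Kelvin
T = 310.0                    # body temperature, Kelvin
m_Na = 22.99 * 1.66054e-27   # mass of a sodium ion, kg (~23 daltons)

v_rms = math.sqrt(3 * k_B * T / m_Na)   # from (3/2) k T = (1/2) m v^2
print(f"rms speed of Na+ at body temperature: ~{v_rms:.0f} meters/second")  # ~580
```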

Smoke, mirrors and statistical mechanics

This will be the year I look at PChem and biophysics. What comes first? Why, thermodynamics of course. And chemists always think of molecules, not steam engines, so statistical mechanics comes before thermodynamics.

The assumptions behind statistical mechanics are really so bizarre that it’s a miracle that it works at all, but work it does.

Macrostates are things you can measure — temperature, pressure, volume.

Microstates give rise to macrostates, but you can’t measure them. However even though you can’t measure them, you can distinguish different ones and count them. Then you assume that each microstate is equally probable, even though you have no way in hell of experimentally measuring even one of them, and probability is what you find after repeated measurements (none of which you can make).
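
To make the counting concrete, here is a toy version (4 coins standing in for 10^23 molecules; my example, standard counting): each sequence of heads and tails is a microstate, assumed equally probable; the number of heads is the macrostate you can measure.

```python
from itertools import product
from collections import Counter

# All 16 microstates of 4 coins; the macrostate is the number of heads.
microstates = list(product("HT", repeat=4))
macrostates = Counter(state.count("H") for state in microstates)

for heads in sorted(macrostates):
    print(f"{heads} heads: {macrostates[heads]}/16 microstates")
# The middle macrostate (2 heads) is the most probable, purely by counting.
# Scale the same logic up to 10^23 particles and you have statistical mechanics.
```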

Amazing.