Are Van der Waals interactions holding asteroids together?

A recent post of Derek’s concerned the very weak (high Kd) but very important interactions of proteins within our cells: http://pipeline.corante.com/archives/2014/08/14/proteins_grazing_against_proteins.php

Most of this interaction is due to Van der Waals forces — http://en.wikipedia.org/wiki/Van_der_Waals_force. Shape complementarity (e.g. steric factors) and dipole-dipole interactions are also important.

Although important, Van der Waals interactions have always seemed like a lot of hand waving to me.

Well, guess what: they are now hypothesized to be what is holding an asteroid together. Why are people interested in asteroids in the first place? [ Science vol. 338 p. 1521 '12 ] “Asteroids and comets … reflect the original chemical makeup of the solar system when it formed roughly 4.5 billion years ago.”

[ Nature vol. 512 p. 118 '14 ] The Rosetta spacecraft reached the comet 67P/Churyumov-Gerasimenko after a 10 year journey, becoming the first spacecraft to rendezvous with a comet. It will take a lap around the sun with the comet and will watch as the comet heats up and releases ice in a halo of gas and dust. It is now flying triangles in front of the comet, staying 100 kiloMeters away. In a few weeks it will settle into a 30 kiloMeter orbit around the comet. It will attempt to place a lander (Philae) the size of a washing machine on its surface in November. The comet is 4 kiloMeters long.

[ Nature vol. 512 pp. 139 - 140, 174 - 176 '14 ] A kiloMeter sized near Earth asteroid called (29075) 1950 DA (how did they get this name?) is covered with sandy regolith (the heterogeneous material covering solid rock; on Earth it includes dust, soil and broken rock). The asteroid rotates every 2+ hours, and it is so small that gravity alone can’t hold the regolith to its surface. An astronaut could scoop up a sample from its surface, but would have to hold on to the asteroid to avoid being flung off by the rotation. So the asteroid must have some degree of cohesive strength. The strength required to hold the rubble together is 64 pascals, about the pressure a penny exerts on the palm of your hand. A pascal is 1/101,325 of atmospheric pressure.
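You can check the penny comparison on the back of an envelope. A minimal sketch in Python, assuming a US penny of about 2.5 grams and 19 milliMeters diameter (my numbers, not the paper's):

```python
import math

# Pressure a penny exerts resting on your palm (assumed penny: 2.5 g, 19 mm).
mass = 2.5e-3        # kg
diameter = 19e-3     # m
g = 9.81             # m/s^2

area = math.pi * (diameter / 2) ** 2    # contact area, m^2
pressure = mass * g / area              # pascals

print(f"Penny pressure: {pressure:.0f} Pa")            # ~90 Pa
print(f"Fraction of 1 atm: {pressure / 101325:.1e}")   # ~9e-4
```

Roughly 90 pascals, the same order of magnitude as the 64 pascals of cohesion the asteroid needs.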

They think the strength comes from van der Waals interactions between small (1 – 10 micron) grains — making it fairy dust. It’s rather unsatisfying as no one has seen these particles.

The ultimate understanding of the large multi-protein and RNA machines (ribosome, spliceosome, RNA polymerase, etc.) without which life would be impossible will involve the very weak interactions which hold them together. Along with permanent dipole-dipole interactions, charge interactions and steric complementarity, the van der Waals interaction is high on anyone’s list.

Some include dipole dipole interactions as a type of van der Waals interaction. The really fascinating interaction is the London dispersion force. These are attractions seen between transient induced dipoles formed in the electron clouds surrounding each atomic nucleus.
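For concreteness, here is London's textbook result for the dispersion energy between two atoms A and B with polarizabilities alpha and ionization energies I (the standard form, not anything from the papers above):

\[
E_{\text{disp}}(r) \approx -\frac{3}{2}\,\frac{I_A I_B}{I_A + I_B}\,\frac{\alpha_A \alpha_B}{r^6}
\]

Note the 1/r^6 falloff: the attraction dies very quickly with distance, which is one reason shape complementarity (letting two surfaces approach closely) matters so much.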

It’s time to attempt to surmount the schizophrenia which comes from trying to see how quantum mechanics gives rise to the macroscopic picture of interacting molecules which our minds naturally bring to matters molecular (with a fair degree of success).

Steric interactions come to mind first — it’s clear that an electron cloud surrounding molecule 1 should repel another electron cloud surrounding molecule 2. Shape complementarity should allow two molecules to get closer to each other.

What about the London dispersion forces, which are where most of the van der Waals interaction is thought to reside? We all know that quantum mechanical molecular orbitals are static distributions of electron probability. They don’t fluctuate (at least the ones I’ve read about). If something is ‘transiently inducing a dipole’ in a molecule, it must be changing the energy level of the molecule somehow. All dipoles involve separation of charge, and this always requires energy. Where does it come from? The kinetic energy of the interacting molecules? Macroscopically it’s easy to see how a collision between two molecules could change the vibrational and/or rotational energy levels of a molecule. But what does a collision between molecules look like in terms of the wave functions of both? I’ve never seen this worked out. It must have been worked out for single particle physics in accelerators, but that’s something I’ve never studied.

One molecule inducing a transient dipole in another, which then induces a complementary dipole in the first molecule, seems like a lot of handwaving to me. It also appears to be getting something for nothing, contradicting the second law of thermodynamics.

Any thoughts from the physics mavens out there?

I sincerely hope it works, but I’m very doubtful

A fascinating series of papers offers hope (in the form of a small molecule) for the truly horrible Werdnig-Hoffmann disease, which basically kills infants by destroying the neurons in their spinal cord. For why this is especially poignant for me, see the end of the post.

First some background:

Our genes occur in pieces. Dystrophin is the protein mutated in the commonest form of muscular dystrophy. The gene for it is 2,220,233 nucleotides long, but dystrophin contains ‘only’ 3,685 amino acids, not the 740,000 or so a gene of this length could in principle specify. What happens? The whole gene is transcribed into an RNA of this enormous length, then 78 distinct segments of RNA (called introns) are removed by a gigantic multimegadalton machine called the spliceosome, and the 79 segments actually coding for amino acids (these are the exons) are linked together and the RNA sent on its way.
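The bookkeeping deserves a moment's thought. A quick sketch, using only the numbers quoted above:

```python
# How little of the dystrophin gene actually codes for protein
# (all numbers are the ones quoted above).
gene_length = 2_220_233        # nucleotides
amino_acids = 3_685            # residues in dystrophin
coding_nt = amino_acids * 3    # 3 nucleotides per codon

print(f"Coding nucleotides: {coding_nt:,}")                    # 11,055
print(f"Coding fraction:    {coding_nt / gene_length:.2%}")    # ~0.50%
print(f"Max residues with no splicing: {gene_length // 3:,}")  # ~740,000
```

Roughly 99.5% of the transcript is thrown away by the spliceosome.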

All this was unknown in the 70s and early 80s when I was running a muscular dystrophy clinic and taking care of these kids. Looking back, it’s miraculous that more of us don’t have muscular dystrophy; there is so much that can go wrong with a gene this size, let alone transcribing and correctly splicing it to produce a functional protein.

One final complication — alternate splicing. The spliceosome removes introns and splices the exons together. But sometimes exons are skipped or one of several exons is used at a particular point in a protein. So one gene can make more than one protein. The record holder is something called the Dscam gene in the fruitfly which can make over 38,000 different proteins by alternate splicing.
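The Dscam number is simple combinatorics: one exon is chosen from each of four clusters of mutually exclusive alternatives. Here's a sketch using the cluster sizes commonly cited in the Drosophila literature (12, 48, 33 and 2; my numbers, not from the text above):

```python
# Dscam's four clusters of mutually exclusive exons; one exon is chosen
# from each cluster, independently. Cluster sizes are the ones commonly
# cited in the Drosophila literature, not from the text above.
clusters = [12, 48, 33, 2]

isoforms = 1
for choices in clusters:
    isoforms *= choices

print(f"{isoforms:,} possible proteins from one gene")   # 38,016
```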

There is nothing worse than watching an infant waste away and die. That’s what Werdnig-Hoffmann disease is like, and I saw one or two cases during my years at the clinic. It is also called infantile spinal muscular atrophy. We all have two genes for the same crucial protein (called, unimaginatively, SMN). Kids who have the disease have mutations in one of the two genes (called SMN1). Why isn’t the other gene protective? It codes for the same sequence of amino acids (but using different synonymous codons). What goes wrong?

[ Proc. Natl. Acad. Sci. vol. 97 pp. 9618 - 9623 '00 ] Why is SMN2 (the centromeric copy — i.e. the copy closest to the middle of the chromosome — which is normal in most patients) not protective? It has a single translationally silent nucleotide difference from SMN1 in exon 7 (i.e. the difference doesn’t change the amino acid coded for). This disrupts an exonic splicing enhancer and causes exon 7 skipping, leading to abundant production of a shorter isoform (SMN2delta7). Thus even though both genes code for the same protein, only SMN1 actually makes the full-length protein.
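To make ‘translationally silent’ concrete, here's a toy illustration (the codons below are hypothetical stand-ins, not the actual SMN exon 7 sequence): change one nucleotide, keep the same amino acid, yet the RNA sequence a splicing enhancer must recognize has changed.

```python
# Toy illustration of a synonymous (translationally silent) change.
# These codons are hypothetical stand-ins, NOT the actual SMN1/SMN2
# exon 7 sequence.
codon_to_aa = {"GGA": "Gly", "GGU": "Gly", "GGC": "Gly", "GGG": "Gly"}

smn1_codon = "GGC"
smn2_codon = "GGU"   # one nucleotide different

assert codon_to_aa[smn1_codon] == codon_to_aa[smn2_codon]
print("Same amino acid, different RNA:")
print("the protein is unchanged, but a splicing enhancer sees a different sequence.")
```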

Intellectually fascinating but ghastly to watch.

This brings us to the current papers [ Science vol. 345 pp. 624 - 625, 688 - 693 '14 ].

More background. The molecular machine which removes the introns is called the spliceosome. It’s huge, containing 5 RNAs (called small nuclear RNAs, aka snRNAs), along with 50 or so proteins, with a total molecular mass, again, of around 2,500,000 Daltons (2.5 megaDaltons). Think about it, chemists. Design 50 proteins and 5 RNAs, probably 200,000+ atoms in all, so they all come together forming a machine to operate on other monster molecules — such as the mRNA for dystrophin alluded to earlier. Hard for me to believe this arose by chance, but current opinion has it that way.

Splicing out introns is a tricky process which is still being worked on. Mistakes are easy to make, and different tissues will splice the same pre-mRNA in different ways. All this happens in the nucleus before the mRNA is shipped outside where the ribosome can get at it.

The papers describe a small molecule which acts on the spliceosome to increase the inclusion of SMN2 exon 7. It does appear to work in patient cells and mouse models of the disease, even reversing weakness.

Why am I skeptical? Because just about every protein we make is spliced (except histones), and any molecule altering the splicing machinery seems almost certain to produce effects on many genes, not just SMN2. If it really works, these guys should get a Nobel.

Why does the paper grip me so? I watched the beautiful infant daughter of a cop and a nurse die of it 30 – 40 years ago. Even with all the degrees, all the training, I was no better for the baby than my immigrant grandmother dispensing emotional chicken soup from her dry goods store (she had only a 4th grade education). Fortunately, the couple took the 25% risk of another child with Werdnig-Hoffmann and produced a healthy infant a few years later.

A second reason — a beautiful baby granddaughter came into our world 24 hours ago.

Poets and religious types may intuit how miraculous our existence is, but the study of molecular biology proves it (to me at least).

As if the job shortage for organic/medicinal chemists wasn’t bad enough

Will synthetic organic chemists be replaced by a machine? Today’s (7 August ’14) Nature (vol. 512 pp. 20 – 22) describes RoboChemist. As usual the job destruction is the fruit of the species being destroyed. Nothing new here — “The Capitalists will sell us the rope with which we will hang them.” — Lenin. “I would consider it entirely feasible to build a synthesis machine which could make any one of a billion defined small molecules on demand” says one organic chemist.

The design of the machine is already being studied, but with a rather paltry grant (1.2 million dollars). Even worse, for the thinking chemist, the choice of reactants and reactions to build the desired molecule will be made by the machine (given a knowledge base, and the algorithms that experienced chemists use, assuming they can be captured by a set of rules). E. J. Corey tried to do this automatically years ago with a program called LHASA (Logic and Heuristics Applied to Synthetic Analysis), but it never took off. Corey formalized what chemists had been doing all along — see http://luysii.wordpress.com/2010/06/20/retrosynthetic-analysis-and-moliere/

Another attempt along these lines is Chematica, which recently has had some success. A problem with using the chemical literature is that only the conditions for a successful reaction are published. A synthetic program needs to know what doesn’t work as much as it needs to know what does. This is an important problem in the medical/drug literature as well, where only studies showing a positive effect are published. There’s a great chapter in “How Not to Be Wrong” concerning the “International Journal of Haruspicy”, which publishes only statistically significant results for predicting the future by reading sheep entrails. They publish a lot of stuff because some 400 haruspicists in different labs are busy performing multiple experiments, 5% of which reach statistical significance. Previously, drug companies published only successful clinical trials. Now the trials will be going into a database regardless of outcome.

Automated machinery for making polynucleotides and polypeptides already exists, but here the reactions are limited. Still, the problem of getting the same reaction to work over and over with different molecules of the same class (amino acids, nucleotides) has been solved.

The last sentence is the most chilling “And with a large workforce of graduate students to draw on, academic labs often have little incentive to automate.” Academics — the last Feudal system left standing.

However, telephone operators faced the same fate years ago, due to automatic switching machinery. Given the explosion of telephone volume 50 years ago, a point would have been reached where every woman in the USA would have had to work for the phone company to handle the volume.

A similar moment of terror occurred in my field (clinical neurology) years ago with the invention of computerized axial tomography (CAT scans). All our diagnostic and examination skills (based on detecting slight deviations from normal function) would be out the window once the CAT scan showed what was structurally wrong with the brain. Clinical diagnosis was possible because abnormalities in function invariably occurred earlier than abnormalities in structure. It didn’t happen. We’d get calls — we found this thing on the CAT scan. What does it mean?

Even this wonderful machine, which can make any molecule you wish, will not tell you which cellular entity to attack, what the target does, and how attacking it will produce a therapeutically useful result.

Getting cytoplasm out of a single cell without killing it

It’s easy to see what cells are doing metabolically. Just take a million or so, grind them up and measure what you want. If this sounds crude to you, you’re right. We’ve learned a lot this way, but wouldn’t it be nice to take a single cell and get a sample of its cytoplasm (or its nucleus) without killing it? A technique described in the 29 July PNAS (vol. 111 pp. 10966 – 10971 '14) does just that. It’s hardly physiologic, as cells are grown on a layer of polycarbonate containing magnetically active carbon nanoTubes http://en.wikipedia.org/wiki/Carbon_nanotube covered in L-tyrosine polymers. The nanotubes are large enough to capture anything smaller than an organelle (1,000 Angstroms — 100 nanoMeters — in diameter, 15,000 Angstroms long). Turn on a magnetic field underneath the polycarbonate, and the nanotubes puncture the overlying cell and fill with cytoplasm. Reverse the field and they come out, carrying the metabolites with them. Amazingly, there was no significant impact on cell viability or proliferation. Hardly physiologic, but far better than what we’ve had.
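From the dimensions given you can estimate how gently each straw samples the cell. A rough sketch; the ~2 picoliter cell volume is my assumption for a typical cultured mammalian cell, not a number from the paper:

```python
import math

# Volume of one nanotube straw (dimensions quoted above).
radius = 50e-9        # m (100 nanoMeter diameter)
length = 1.5e-6       # m (15,000 Angstroms)
tube_volume = math.pi * radius**2 * length    # m^3

# Typical cultured mammalian cell, ~2 picoliters (my assumption).
cell_volume = 2e-15   # m^3

print(f"Tube volume: {tube_volume / 1e-18:.3f} femtoliters")        # ~0.012 fL
print(f"Fraction of cell sampled: {tube_volume / cell_volume:.1e}") # ~6e-6
```

Each tube removes only a few millionths of the cell's volume, consistent with the reported lack of effect on viability.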

It’s a long way from drug development, but wouldn’t it be nice to place your drug candidate inside a cell and watch what it’s doing?

A Troublesome Inheritance – IV — Chapter 3

Chapter III of “A Troublesome Inheritance” contains a lot of very solid molecular genetics, and a lot of unfounded speculation. I can see why the book has driven some otherwise rational people bonkers. Just because Wade knows what he’s talking about in one field doesn’t mean he’s competent in another.

Several examples: p. 41 — “Nonetheless, it is reasonable to assume that if traits like skin color have evolved in a population, the same may be true of its social behavior.” Consider yes, assume no.

p. 42 — “The society of living chimps can thus with reasonable accuracy stand as a surrogate for the joint ancestor” (of humans and chimps, thought to have lived about 7 megaYears ago) “and hence describe the baseline from which human social behavior evolved.” I doubt this.

The chapter contains many just-so stories about the evolution of chimp and human societies (post hoc ergo propter hoc). Plausible, but not testable.

Then follows some very solid stuff about the effects of the hormone oxytocin (which causes milk letdown in nursing women) on human social interaction. Then some speculation on the ways natural selection could work on the oxytocin system to make people more or less trusting. He lists several potential mechanisms: (1) changes in the amount of oxytocin made, (2) increases in the number of protein receptors for oxytocin, (3) changes making each receptor bind oxytocin more tightly. This shows that Wade has solid molecular biological (and biological) chops.

He quotes a Dutch psychologist on his results with oxytocin and sociality — unfortunately, there have been too many scandals involving Dutch psychologists and sociologists to believe what he says until it’s replicated (Google Diederik Stapel, Don Poldermans, Jens Forster, Markus Denzler if you don’t believe me). It’s sad that this probably honest individual is tarred with that brush, but he is.

p. 59 — He notes that the idea that human behavior is solely the result of social conditions with no genetic influence is appealing to Marxists, who hoped to make humanity behave better by designing better social conditions. Certainly, much of the vitriol heaped on the book has come from the left. A communist uncle would always say ‘it’s the system’ to which my father would reply ‘people will corrupt any system’.

p. 61 — the effect on survival of mutations producing lactose tolerance is noted: people herding cattle and drinking milk survive better if their gene to digest lactose (the main sugar in milk) isn’t turned off after childhood. If your society doesn’t herd animals, there is no reason for anyone to digest milk after weaning from the breast. The mutations aren’t in the enzyme digesting lactose, but in the DNA that turns on expression of the gene for the enzyme (i.e. the promoter). Interestingly, 3 separate mutations doing this have been found in African herders, each different from the one that arose in the Funnel Beaker Culture of Scandinavia 6,000 years ago. This is a classic example of natural selection producing the same phenotypic effect by separate mutations.

There is a much bigger biological fish to be fried here, which Wade doesn’t discuss. It takes energy to make any protein, and there is no reason to make a protein to help you digest milk if you aren’t nursing — and one very good reason not to: it wastes metabolic energy, something in short supply given the way humans lived until about 15,000 years ago. So humans evolved a way not to make the protein in adult life. The genetic change is in the DNA controlling protein production, not the protein itself.

You may have heard it said that we are 98% Chimpanzee. This is true in the sense that our 20,000 or so proteins are that similar to the chimp. That’s far from the whole story. This is like saying Monticello and Independence Hall are just the same because they’re both made out of bricks. One could chemically identify Monticello bricks as coming from the Virginia piedmont, and Independence Hall bricks coming from the red clay of New Jersey, but the real difference between the buildings is the plan.

It’s not the proteins, but where and when and how much of them are made. The control for this (plan if you will) lies outside the genes for the proteins themselves, in the rest of the genome. The control elements have as much right to be called genes, as the parts of the genome coding for amino acids. Granted, it’s easier to study genes coding for proteins, because we’ve identified them and know so much about them. It’s like the drunk looking for his keys under the lamppost because that’s where the light is.

p. 62 — The changes of human society from hunter-gatherer, to agrarian, to the rise of city states are then chronicled. Whether adaptation to different social organizations produced genetic changes permitting social adaptation, or such changes were the cause of it, isn’t clear. Wade says “changes in social behavior, has most probably been molded by evolution, though the underlying genetic changes have yet to be identified.” This assumes a lot, e.g. that genetic changes are involved. I’m far from sure, but the idea is not far fetched. Stating that genetic changes have never shaped, and will never shape, society is without any scientific basis, and just as fanciful as many of Wade’s statements in this chapter. It’s an open question, which is really all Wade is saying.

In defense of Wade’s idea, think about animal breeding, as Darwin did extensively. The Origin of Species (worth a read if you haven’t already) is full of interchanges with all sorts of breeders (pigeons, cattle). The best example we have presently is the breeds of dogs. They have very different personalities — and have been bred for them: sheepdogs, mastiffs, etc. Have a look at [ Science vol. 306 p. 2172 '04, Proc. Natl. Acad. Sci. vol. 101 pp. 18058 - 18063 '04 ], where the DNA of a variety of dog breeds was studied to determine which changes determined the way they look. The length of a breed’s snout correlated directly with the number of repeats in a particular protein (Runx-2). The paper is a decade old, and I’m sure they’re starting to look at behavior.

More to the point about selection for behavioral characteristics, consider the domestication of the modern dog from the wolf. Contrast the dog with the chimp (which hasn’t been bred).

[ Science vol. 298 pp. 1634 - 1636 '02 ] Chimps are terrible at picking up human cues as to where food is hidden. Cues would be something as obvious as looking at the container, pointing at the container, or even touching it. Even those who eventually perform well take dozens of trials or more to learn it. When tested in more difficult tests requiring them to show flexible use of social cues, they don’t.

This paper shows that puppies (raised with no contact with humans) do much better at reading humans than chimps. However wolf cubs do not do better than the chimps. Even more impressively, wolf cubs raised by humans don’t show the same skills. This implies that during the process of domestication, dogs have been selected for a set of social cognitive abilities that allow them to communicate with humans in unique ways. Dogs and wolves do not perform differently in a non-social memory task, ruling out the possibility that dogs outperform wolves in all human guided tasks.

All in all, a fascinating book with lots to think about, argue with, propose counterarguments, propose other arguments in support (as I’ve just done), etc. etc. Definitely a book for those who like to think, whether you agree with it all or not.

Old dog does new(ly discovered) tricks

One of the evolutionarily oldest enzyme classes is the aaRSs (amino acyl tRNA synthetases). Every cell has them, including bacteria. Life as we know it wouldn’t exist without them. Briefly, they load tRNA with the appropriate amino acid. If this is Greek to you, look at the first 3 articles in https://luysii.wordpress.com/category/molecular-biology-survival-guide/.

Amino acyl tRNA synthetases are enzymes of exquisite specificity, having to correctly match up 20 amino acids to some 61 different types of tRNAs. Mistakes in the selection of the correct amino acid occur about once every 10,000 to 100,000 selections, and in the selection of the correct tRNA about once every 1,000,000. The lower tRNA error rate is due to the fact that tRNAs are much larger than amino acids, so more contacts between enzyme and tRNA are possible.
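Those error rates compound over the length of a protein. A minimal sketch, assuming the per-residue rates quoted above act independently:

```python
# Chance of making a 1,000-residue protein with no amino acid
# misincorporation, assuming the per-residue error rates quoted
# above act independently at each position.
for error_rate in (1e-4, 1e-5):
    p_perfect = (1 - error_rate) ** 1000
    print(f"error rate {error_rate:.0e}: "
          f"{p_perfect:.1%} of 1,000-mers are error-free")
```

At the sloppier end of the range, nearly one 1,000-residue protein in ten carries at least one wrong amino acid.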

As the tree of life was ascended from bacteria over billions of years, 13 new protein domains which have no obvious association with aminoacylation have been added to AARS genes. More importantly, the additions have been maintained over the course of evolution (with no change in the primary function of the synthetase). Some of the new domains are appended to each of several synthetases, while others are specific to a single synthetase. The fact that they’ve been retained implies they are doing something that natural selection wants (teleology inevitably raises its ugly head with any serious discussion of molecular biology or cellular physiology — it’s impossible to avoid).

[ Science vol. 345 pp. 328 - 332 '14 ] looked at what mRNAs some 37 different aaRS genes were transcribed into. Six different human tissues were studied this way. Amazingly, 79% of the 66 in-frame splice variants removed or disrupted the aaRS catalytic domain. The aaRS for histidine had 8 in-frame splice variants, all of which removed the catalytic domain. 60 of the 70 variants losing the catalytic domain (they call these catalytic nulls) retained at least one of the 13 domains added in higher eukaryotes. Some of the transcripts were tissue specific (i.e. present in some of the 6 tissues but not all).

Recent work has shown roles for specific AARSs in a variety of pathways — blood vessel formation, inflammation, immune response, apoptosis, tumor formation, p53 signaling. The process of producing a completely different function for a molecule is called exaptation — to contrast it with adaptation.

Up to now, when a given protein was found to have enzymatic activity, the book on what that protein did was closed (with the exception of the small GTPases). End of story. Yet here we have cells spending the metabolic energy to make an enzymatically dead protein (aaRSs are big — the one for alanine has nearly 1,000 amino acids). Teleology screams — what is it used for? It must be used for something! This is exactly where chemistry is silent. It can explain the incredible selectivity and sensitivity of the enzyme but not what it is ‘for’. We have crossed the Cartesian dualism between flesh and spirit.

Could this sort of thing be the tip of the iceberg? We know that splice variants of many proteins are common. Could other enzymes whose function was essentially settled once substrates were found, be doing the same thing? We may have only 20,000 or so protein coding genes, but 40,000, 60,000, . . . or more protein products of them, each with a different biological function.

So aaRSs are very old molecular biological dogs, who’ve been doing new tricks all along. We just weren’t smart enough to see them (’till now).

Novels may have only 7 basic plots, but molecular biology continues to surprise and enthrall.

Two math tips

Two of the most important theorems in differential geometry are Gauss’s Theorema Egregium and the inverse function theorem. Basically the Theorema Egregium says that you don’t need to look at a two dimensional surface (say the surface of a walnut) from outside (i.e. from the way it sits in 3 dimensional space) to understand its shape. All the information is contained in the surface itself.

The inverse function theorem (InFT) is used over and over. If you have a continuously differentiable function from an open set U of Euclidean space of finite dimension n to Euclidean space V of the same dimension, and its derivative at a point x of U is invertible, then there exists another function, defined near f(x), that gets you back from V to U — a local inverse of f.
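For reference, here is the standard textbook statement (Callahan's notation differs slightly, so treat this as a paraphrase):

\[
\begin{aligned}
&\text{If } f : U \subset \mathbb{R}^n \to \mathbb{R}^n \text{ is } C^1 \text{ and } Df(a) \text{ is invertible at } a \in U, \text{ then there are open sets}\\
&W \ni a \text{ and } V \ni f(a) \text{ such that } f : W \to V \text{ is a bijection, } f^{-1} : V \to W \text{ is } C^1, \text{ and}\\
&\qquad Df^{-1}(f(a)) = \left[\, Df(a) \,\right]^{-1}.
\end{aligned}
\]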

Even better, once you’ve proved the inverse function theorem, the proof of another important theorem (the implicit function theorem, aka the ImFT) is quite simple. The ImFT tells you, given a real valued function f(x, y, …) and the equation f(x, y, …) = k, whether you can express one variable (say x) in terms of the others. Sometimes it’s difficult to solve such an equation explicitly for x in terms of y — consider arctan(e^(x + y^2) * sin(xy) + ln x) = k. What is important to know in this case is whether it’s even possible.
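Again in standard textbook form (not Callahan's exact wording), for two variables:

\[
\begin{aligned}
&\text{If } f : \mathbb{R}^2 \to \mathbb{R} \text{ is } C^1,\ f(a,b) = k, \text{ and } \tfrac{\partial f}{\partial x}(a,b) \neq 0, \text{ then near } b \text{ there is a } C^1\\
&\text{function } g \text{ with } g(b) = a \text{ and } f(g(y), y) = k, \text{ and differentiating this identity gives}\\
&\qquad g'(y) = -\left.\frac{\partial f/\partial y}{\partial f/\partial x}\right|_{(g(y),\,y)}.
\end{aligned}
\]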

The proofs of both are tricky. In particular, the proof of the inverse function theorem is an existence proof. You may not be able to write down the function from V to U even though you’ve just proved that it exists. So using the InFT to prove the implicit function theorem is also nonconstructive.

At some point in your mathematical adolescence, you should sit down and follow these proofs. They aren’t easy and they aren’t short.

Here’s where to go. Both can be found in books by James J. Callahan, emeritus professor of mathematics at Smith College in Northampton, Mass. The proof of the InFT is to be found on pages 169 – 174 of his “Advanced Calculus, A Geometric View”, which is geometric, with lots of pictures. What’s good about this proof is that it’s broken down into some 13 steps. Be prepared to meet a lot of functions and variables.

Just the statement of the InFT involves functions f, f^-1, df, df^-1, spaces U^n, R^n, and variables a, q, B

The proof of the InFT involves functions g, phi, dphi, h, dh, N, most of which are vector valued (N is real valued)

Then there are the geometric objects U^n, R^n, W_a, W_f(a), B_r, B_(r/2)

Vectors a, x, u, delta x, delta u, delta v, delta w

Real number r

That’s just to get you through step 8 of the 13 step proof, which proves the existence of the inverse function (aka f^-1). The rest involves proving properties of f^-1 such as continuity and differentiability. I must confess that just proving existence of f^-1 was enough for me.

The proof of the implicit function theorem for two variables — i.e. for f(x, y) = k — takes less than a page (p. 190).

The proof of the Theorema Egregium is to be found in his book “The Geometry of Spacetime”, pp. 258 – 262, in 9 steps. Be prepared for fewer functions, but many more symbols.

As to why I’m doing this please see http://luysii.wordpress.com/2011/12/31/some-new-years-resolutions/

Keep on truckin’, Dr. Schleyer

My undergraduate advisor (Paul Schleyer) has a new paper out in the 15 July ’14 PNAS (pp. 10067 – 10072) at age 84+. Bravo! He upends what we were always taught about electrophilic aromatic addition of halogens. The arenium ion is out (at least in this example). Anyone with a smattering of physical organic chemistry can easily follow his mechanistic arguments for a different mechanism.

However, I wonder if any but the hardiest computational chemistry jock can understand the following (which is how he got his results) and decide if the conclusions follow.

Our Gaussian 09 (54) computations used the 6-311+G(2d,2p) basis set (55, 56) with the B3LYP hybrid functional (57–59) and the Perdew–Burke–Ernzerhof (PBE) functional (60, 61) augmented with Grimme et al.’s (62) D3 dispersion corrections (DFT-D3). Single-point energies of all optimized structures were obtained with the B2-PLYP [double-hybrid density functional of Grimme (63)] and applying the D3 dispersion corrections.

This may be similar to what happened with functional MRI in neuroscience, where you never saw the raw data, just the end product of the manipulations on the data (e.g. how the matrix was inverted and what manipulations of the inverted matrix were required to produce the pretty pictures shown). At least here, the tools used are laid out explicitly.

Do axons burp out mitochondria?

People have been looking at microscope slides of the brain almost since there were microscopes (Alzheimer’s paper on his disease came out in 1906). Amazingly, something new has just been found [ Proc. Natl. Acad. Sci. vol. 111 pp. 9633 - 9638 '14 ].

To a first approximation, the axon of a neuron is the long process which carries impulses to other neurons far away. Axons have always been considered quite delicate (particularly in the brain itself; in the limbs they are sheathed in tough connective tissue). After an axon is severed in the limbs, all sorts of hell breaks loose. The part of the axon no longer in contact with the neuron degenerates (Wallerian degeneration), and the neuron cell body still attached to the remaining axon changes markedly (central chromatolysis). At least the axons making up peripheral nerves do grow back (but maddeningly slowly). In the brain, they don’t — yet another reason neurologic disease is so devastating. Huge research efforts have been made to find out why. All sorts of proteins have been found which hinder axonal regrowth in the brain (and the spinal cord). Hopefully, at some point blocking them will lead to treatment.

The PNAS paper found that axons in the optic nerve of the mouse (which arise from neurons in the retina) burp out mitochondria. Large protrusions form containing mitochondria, which are then shed, somehow leaving the remaining axon intact (remarkable when you think of it). Once shed, the decaying mitochondria are found in the cells supporting the axons (astrocytes). Naturally, the authors made up a horrible name to describe the process and sound impressive (transmitophagy).

This probably occurs elsewhere in the brain, because accumulation of degrading mitochondria along nerve processes in the superficial layers of the cerebral cortex (the gray matter on the surface of the brain) has been seen. People are sure to start looking for this everywhere in the brain, and perhaps outside it as well.

Where else does this sort of thing occur? In the fertilized egg, that’s where. Sperm mitochondria are eliminated in the egg (which is why you get your mitochondria from mommy).

Trouble in River City (aka Brain City)

300 European neuroscientists are unhappy. If 50,000,000 Frenchmen can’t be wrong, can the neuroscientists be off base? They don’t like the way things are going in a billion Euro project to computationally model the brain (Science vol. 345 p. 127 '14, 11 July; Nature vol. 511 pp. 133 – 134 '14, 10 July). What has them particularly unhappy is that one of the sections involving cognitive neuroscience has been eliminated.

A very intelligent Op-Ed in the New York Times of 12 July by a psychology professor notes that we have no theory of how to go from neurons, their connections, and their firing of impulses, to how the brain produces thought. Even better, he notes that we have no idea what such a theory would look like, or what we would accept as an explanation of brain function in terms of neurons.

While going from the gene structure in our DNA to cellular function, from there to function at the level of the organ, and from the organ to the function of the organism is more than hard (see https://luysii.wordpress.com/2014/07/09/heres-a-drug-target-for-schizophrenia-and-other-psychiatric-diseases/) at least we have a road map to guide us. None is available to take us from neurons to thought, and the 300 argue that concentrating only on neurons, while producing knowledge, won’t give us the explanation we seek. The 300 argue that we should progress on all fronts, which the people running the project reject as too diffuse.

I’ve posted on this problem before — I don’t think a wiring diagram of the brain (while interesting) will tell us what we want to know. Here’s part of an earlier post — with a few additions and subtractions.

Would a wiring diagram of the brain help you understand it?

Every budding chemist sits through a statistical mechanics course, in which the insanity and inutility of knowing the position and velocity of each and every of the 10^23 molecules of a mole or so of gas in a container is brought home. Instead we need to know the average energy of the molecules and the volume they are confined in, to get the pressure and the temperature.

However, people are taking the first approach in an attempt to understand the brain. They want a ‘wiring diagram’ of the brain, i.e. a list of every neuron, and for each neuron a second list of the neurons it receives synapses from, and a third list of the neurons it makes synapses on. For the non-neuroscientist: the connections are called synapses, and they essentially communicate in one direction only (true to a first approximation but no further, as there is strong evidence that communication goes both ways, with one of the ‘other way’ transmitters being endogenous marihuana). This is why you need the second and third lists.

Clearly a monumental undertaking and one which grows more monumental with the passage of time. Starting out in the 60s, it was estimated that we had about a billion neurons (no one could possibly count each of them). This is where the neurological urban myth of the loss of 10,000 neurons each day came from. For details see http://luysii.wordpress.com/2011/03/13/neurological-urban-legends/.

The latest estimate [ Science vol. 331 p. 708 '11 ] is that we have 80 billion neurons connected to each other by 150 trillion synapses. Well, that’s not a mole of synapses, but it is about a quarter of a nanoMole of them. People are nonetheless trying to see which areas of the brain are connected to each other, to at least get a schematic diagram.
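The mole comparison is a one-liner to check (Avogadro's number is the only outside fact needed):

```python
# How many moles is 150 trillion synapses?
synapses = 150e12
avogadro = 6.022e23

print(f"{synapses / avogadro:.2e} mol")   # ~2.5e-10 mol, about a quarter of a nanoMole
```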

Even if you had the complete wiring diagram, nobody’s brain is strong enough to comprehend it. I strongly recommend looking at the pictures in Nature vol. 471 pp. 177 – 182 '11 to get a sense of the complexity of the interconnections between neurons and just how many there are. Figure 2 (p. 179) is particularly revealing, showing a 3 dimensional reconstruction using the high resolution obtainable by the electron microscope. Stare at figure 2.f a while and try to figure out what’s going on. It’s both amazing and humbling.

But even assuming that someone or something could comprehend it, you still wouldn’t have enough information to figure out how the brain is doing what it clearly is doing. There are at least 6 reasons.

1. Synapses, to a first approximation, are excitatory (turning on the neuron to which they are attached, making it fire an impulse) or inhibitory (preventing the neuron to which they are attached from firing in response to impulses from other synapses). A wiring diagram alone won’t tell you this.

2. When I was starting out, the following statement would have seemed impossible. It is now possible to watch synapses in the living brain of an awake animal for extended periods of time. And we now know that synapses come and go in the brain. The various papers don’t all agree on just what fraction of synapses last more than a few months, but it’s early times. Here are a few references [ Neuron vol. 69 pp. 1039 - 1041 '11, ibid vol. 49 pp. 780 - 783, 877 - 887 '06 ]. So the wiring diagram would have to be updated constantly.

3. Not all communication between neurons occurs at synapses. Certain neurotransmitters are released generally into the higher brain elements (cerebral cortex), where they bathe neurons, affecting their activity without any synapses at all (this is called volume neurotransmission). Their importance in psychiatry and drug addiction is unparalleled. Examples of such volume transmitters include serotonin, dopamine and norepinephrine. Drugs of abuse affecting their action include cocaine and amphetamine. Drugs treating psychiatric disease affecting them include the antipsychotics, the antidepressants and probably the antimanics.

4. (new addition) A given neuron doesn’t contact another neuron just once, as far as we know. So how do you account for this with a graph (which, I think, allows only one edge between any two nodes)?

5. (new addition) All connections (synapses) aren’t created equal. Some are far, far away from the part of the neuron (the axon) which actually fires impulses, so many of them have to be turned on at once for firing to occur. So in addition to the excitatory/inhibitory dichotomy, you’d have to put another number on each link in the graph: the probability of a given synapse producing an effect (see the sketch after this list). In general this isn’t known for most synapses.

6. (new addition) Some connections never directly cause a neuron to fire or not fire. They just increase or decrease the probability that a neuron will fire or not fire in response to impulses at other synapses. These are called neuromodulators, and the brain has tons of different ones.
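As promised in point 5, here is a minimal sketch of how points 1, 4 and 5 might be encoded, using a directed multigraph from the networkx library (all neuron names and numbers below are invented for illustration):

```python
import networkx as nx

# A directed multigraph: parallel edges let one neuron contact another
# more than once (point 4). Each edge carries an excitatory/inhibitory
# sign (point 1) and a per-synapse probability of producing an effect
# (point 5). All values below are invented.
brain = nx.MultiDiGraph()
brain.add_edge("A", "B", sign="excitatory", p_effect=0.7)
brain.add_edge("A", "B", sign="excitatory", p_effect=0.2)  # a second contact
brain.add_edge("C", "B", sign="inhibitory", p_effect=0.9)

for pre, post, attrs in brain.edges(data=True):
    print(pre, "->", post, attrs)
```

Points 2, 3 and 6 would need timestamped edges, extra non-synaptic edges, and modulatory edge types respectively; possible, as my correspondent argues below, but the graph stops being simple very quickly.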

Statistical mechanics works because one molecule is pretty much like another. This certainly isn’t true for neurons. Have a look at http://faculties.sbu.ac.ir/~rajabi/Histo-labo-photos_files/kora-b-p-03-l.jpg. This is of the cerebral cortex — neurons are fairly creepy looking things, and no two shown are carbon copies.

The mere existence of 80 billion neurons and their 150 trillion connections (if the numbers are in fact correct) poses a series of puzzles. There is simply no way that the 3.2 billion nucleotides of our genome can code for each and every neuron, each and every synapse. The construction of the brain from the fertilized egg must be in some sense statistical. Remarkable that it happens at all. Embryologists are intensively working on how this happens — thousands of papers on the subject appear each year.
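A crude information count shows why explicit coding is hopeless. A sketch, assuming 2 bits per nucleotide and that specifying a synapse means naming its target neuron (both assumptions are mine):

```python
import math

# Could the genome specify every synapse explicitly?
neurons = 8e10          # 80 billion
synapses = 150e12       # 150 trillion
genome_nt = 3.2e9       # nucleotides

genome_bits = 2 * genome_nt              # 2 bits per nucleotide
bits_per_synapse = math.log2(neurons)    # bits to name one target neuron
wiring_bits = synapses * bits_per_synapse

print(f"Genome capacity: {genome_bits:.1e} bits")   # ~6.4e9
print(f"Wiring diagram:  {wiring_bits:.1e} bits")   # ~5.4e15
print(f"Shortfall: ~{wiring_bits / genome_bits:.0e}x")
```

The wiring diagram would need roughly a million genomes’ worth of information, so the construction really must be statistical.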

(Addendum 17 July ’14) I’m fortunate enough to have a family member who worked at Bell labs (when it existed) and who knows much more about graph theory than I do. Here are his points and a few comments back

Seventh paragraph: Still don’t understand the purpose of the three lists, or what that buys that you don’t get with a graph model. See my comments later in this email.

“nobody’s brain is strong enough to comprehend it”: At some level, this is true of virtually every phenomenon of Nature or science. We only begin to believe that we “comprehend” something when some clever person devises a model for the phenomenon that is phrased in terms of things we think we already understand, and then provides evidence (through analysis and perhaps simulation) that the model gives good predictions of observed data. As an example, nobody comprehended what caused the motion of the planets in the sky until science developed the heliocentric model of the solar system and Newton et al. developed calculus, with which he was able to show (assuming an inverse-square behavior of the force of gravity) that the heliocentric model explained observed data. On a more pedestrian level, if a stone-age human was handed a personal computer, his brain couldn’t even begin to comprehend how the thing does what it does — and he probably would not even understand exactly what it is doing anyway, or why. Yet we modern humans, at least us engineers and computer scientists, think we have a pretty good understanding of what the personal computer does, how it does it, and where it fits in the scheme of things that modern humans want to do.

Of course we do, that’s because we built it.

On another level, though, even computer scientists and engineers don’t “really” understand how a personal computer works, since many of the components depend for their operation on quantum mechanics, and even Feynman supposedly said that nobody understands quantum mechanics: “If you think you understand quantum mechanics, you don’t understand quantum mechanics.”

Penrose actually did think the brain worked by quantum mechanics, because what it does is nonAlgorithmic. That’s been pretty much shot down.

As for your six points:

Point 1 I disagree with. It is quite easy to express excitatory or inhibitory behavior in a wiring diagram (graph). In fact, this is done all the time in neural network research!

Point 2: Updating a graph is not necessarily a big deal. In fact, many systems that we analyze with graph theory require constant updating of the graph. For example, those who analyze, monitor, and control the Internet have to deal with graphs that are constantly changing.

Point 3: Can be handled with a graph model, too. You will have to throw in additional edges that don’t represent synapses, but instead represent the effects of neurotransmitters. Will this get to be a complicated graph? You bet. But nobody ever promised an uncomplicated model. (Although uncomplicated — simple — models are certainly to be preferred over complicated ones.)

Point 4: This can be easily accounted for in a graph. Simply place additional edges in the graph to account for the additional connections. This adds complexity, but nobody ever promised the model would be simple. Another alternative, as I mentioned in an earlier email, is to use a hypergraph.

Point 5: Not sure what your point is here. Certainly there is a huge body of research literature on probabilistic graphs (e.g., Markov Chain models), so there is nothing you are saying here that is alien to what graph theory researchers have been doing for generations. If you are unhappy that we don’t know some probabilistic parameters for synapses, you can’t be implying that scientists must not even attempt to discover what these parameters might be. Finding these parameters certainly sounds like a challenge, but nobody ever claimed this was an un-challenging line of research.

In addition to not knowing the parameters, you’d need a ton of them, as it’s been stated frequently in the literature that the ‘average’ neuron has 10,000 synapses impinging on it. I’ve never been able to track this one down. It may be neuromythology, like the 10,000 neurons we’re said to lose every day. With 10,000 adjustable parameters you could make a neuron sing the Star Spangled Banner. Perhaps this is why we can sing the Star Spangled Banner.

Point 6: See my comments on Point 5. Probabilistic graph models have been well-studied for generations. Nothing new or frightening in this concept.
