Phil Anderson probably never heard of Ludwig Mies van der Rohe, he of the Bauhaus and the famous dictum ‘less is more’, so he probably wasn’t riffing on it when he wrote “More Is Different” in August of 1972 [ Science vol. 177 pp. 393 – 396 ’72 ] (https://science.sciencemag.org/content/sci/177/4047/393.full.pdf).
I was just finishing residency and found it a very unusual paper for Science Magazine. His Nobel was 5 years away, but Anderson was of sufficient stature that Science published it. The article was a nonphilosophical attack on reductionism with lots of hard examples from solid state physics. It is definitely worth reading, if the link will let you. The philosophic repercussions are still with us.
He notes that most scientists are reductionists. He puts it this way: “The workings of our minds and bodies and of all the matter animate and inanimate of which we have any detailed knowledge, are assumed to be controlled by the same set of fundamental laws, which except under extreme conditions we feel we know pretty well.”
So many-body/solid-state physics obeys the laws of particle physics, chemistry obeys the laws of many-body physics, molecular biology obeys the laws of chemistry, and onward and upward to psychology and the social sciences.
What he attacks is what appears to be a logical correlate of this, namely that understanding the fundamental laws allows you to derive from them the structure of the universe in which we live (including ourselves). Chemistry really doesn’t predict molecular biology, and cellular molecular biology doesn’t really predict the existence of multicellular organisms. This is because new phenomena arise at each level of increasing complexity, for which laws (i.e. regularities) appear which don’t have an explanation by reduction to the next more fundamental level below.
Even though the last 48 years of molecular biology and biophysics have shown us a lot of new phenomena, they really weren’t predictable in advance; explaining them after the fact is a triumph of reductionism, and yet —
As soon as you get into biology you become impaled on the horns of the Cartesian dualism of flesh vs. spirit. As soon as you ask what something is ‘for’ you realize that reductionism can’t help. As an example I’ll repost an old piece in which reductionism tells you exactly how something happens, but is absolutely silent on what that something is ‘for’.
The limits of chemical reductionism
“Everything in chemistry turns blue or explodes”, so sayeth a philosophy major roommate years ago. Chemists are used to being crapped on, because it starts so early and never lets up. However, knowing a lot of organic chemistry and molecular biology allows you to see very clearly one answer to a serious philosophical question — when and where does scientific reductionism fail?
Early on, physicists said that quantum mechanics explains all of chemistry. Well, it does explain why atoms have orbitals, and it does give a few hints as to the nature of the chemical bond between simple atoms, but no one can solve the equations exactly for systems of chemical interest. Approximate the solution, yes, but this is hardly a pure reduction of chemistry to physics. So we’ve failed to reduce chemistry to physics because the equations of quantum mechanics are so hard to solve, but that is a failure in practice, hardly a failure of reductionism in principle.
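To make ‘approximate the solution, yes’ concrete, here is a minimal sketch (in Python) of the textbook variational estimate for helium, about the simplest system where the equations already can’t be solved exactly. The formula and numbers are standard; the code itself is purely illustrative.

```python
# Variational estimate of the helium ground-state energy.
# Trial wavefunction: two hydrogen-like 1s orbitals sharing an effective
# nuclear charge zeta. In hartrees, E(zeta) = zeta^2 - 2*Z*zeta + (5/8)*zeta,
# minimized at zeta = Z - 5/16 (each electron screens the other).

Z = 2                                        # helium nuclear charge
zeta = Z - 5.0 / 16.0                        # optimal effective charge, 1.6875
E_var = zeta**2 - 2 * Z * zeta + (5.0 / 8.0) * zeta
E_exact = -2.9037                            # essentially exact value, hartrees

print(f"variational: {E_var:.4f} hartree")   # -2.8477
print(f"exact:       {E_exact:.4f} hartree")
# Off by ~2 percent -- and helium, with two electrons, is the EASY case.
```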
The last post, “The death of the synonymous codon – II” (https://luysii.wordpress.com/2011/05/09/the-death-of-the-synonymous-codon-ii/), puts you exactly at the nidus of the failure of chemical reductionism to bag the biggest prey of all, an understanding of the living cell and with it of life itself. We know the chemistry of nucleotides, Watson-Crick base pairing, and enzyme kinetics quite well. We understand why less transfer RNA for a particular codon would mean slower protein synthesis. Chemists understand what a protein conformation is, although we can’t predict it 100% of the time from the amino acid sequence. So we do understand exactly why the same amino acid sequence using different codons would result in slower synthesis of gamma actin than beta actin, and why the slower synthesis would allow a more leisurely exploration of conformational space, letting gamma actin find a conformation which is then modified by linking it to another protein (ubiquitin), leading to its destruction. Not bad. Not bad at all.
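To see how little chemistry is actually needed to state the mechanism, here is a toy sketch of the codon/tRNA effect. The codons below are real synonymous alanine codons, but the abundance numbers are invented for illustration, not measured values.

```python
# Toy model of the synonymous-codon effect: identical protein, different
# codons, different translation speed. tRNA abundances are made-up
# illustrative numbers, NOT measurements.

tRNA_abundance = {"GCC": 1.0, "GCU": 0.2}     # two synonymous alanine codons

def relative_translation_time(codons):
    # Crude assumption: time to add each residue scales inversely with
    # the abundance of the matching tRNA.
    return sum(1.0 / tRNA_abundance[c] for c in codons)

beta_like = ["GCC"] * 10                      # abundant-tRNA ("fast") codons
gamma_like = ["GCU"] * 10                     # synonymous rare-tRNA codons

print(relative_translation_time(beta_like))   # 10.0
print(relative_translation_time(gamma_like))  # 50.0 -- five times slower,
# leaving more time to explore conformational space during synthesis
```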
Now ask yourself why the cell would want to have less gamma actin around than beta actin. There is no conceivable explanation for this in terms of chemistry. A better understanding of protein structure won’t give it to you. Certainly, beta and gamma actin differ slightly in amino acid sequence (4 residues out of 375), so their structures won’t be exactly the same. Studying this till the cows come home won’t answer the question, as it’s on an entirely different level than chemistry.
Cellular and organismal molecular biology is full of questions like that, but gamma and beta actin are the closest chemists have come to explaining the disparity in the abundance of two closely related proteins on a purely chemical basis.
So there you have it. Physicality has gone as far as it can go in explaining the mechanism of the effect, but has nothing whatsoever to say about why the effect is present. It’s the Cartesian dualism between physicality and the realm of ideas, and you’ve just seen the junction between the two live and in color, happening right now in just about every cell inside you. So the effect is not some trivial toy model someone made up.
Whether philosophers have the intellectual cojones to master all this chemistry and molecular biology is unclear. Probably no one has tried (please correct me if I’m wrong). They are certainly capable of mounting the intellectual effort — they write book after book about Gödel’s proof and the mathematical logic behind it. My guess is that they are attracted to such things because logic and math are so definitive, general and nonparticular.
Chemistry and molecular biology aren’t general this way. We study a very arbitrary collection of molecules, which must simply be learned and dealt with. Amino acids are of one chirality. The alpha helix turns one way and not the other. Our bodies use 20 particular amino acids, not any of the zillions of possible amino acids chemists can make. This sort of thing may turn off the philosophical mind, which has a taste for the abstract and general (at least my roommates majoring in it were this way).
If you’re interested in how far reductionism can take us, have a look at http://wavefunction.fieldofscience.com/2011/04/dirac-bernstein-weinberg-and.html
Were my two philosopher roommates still alive, they might come up with something like “That’s how it works in practice, but how does it work in theory?”
Comments
What about when reductionism fails? I fear we may have been led down that road by the connectionist model of memory. This model is an abstraction based on other abstractions: the McCulloch-Pitts model of the neuron, the Hebb model of learning, and the Hubel-Wiesel model of vision. Granted, simulations of connectionist models have achieved enormous success in neural-network pattern recognition algorithms that translate languages for Google and suggest crap you might want to buy on Amazon. That’s not really a reason to take connectionist models seriously today as a model of human memory.
The McCulloch-Pitts model treats the individual neuron as a very low-level functional element, like a transistor or gate in a computer (or more specifically, what engineers call a threshold-logic gate). This model is based on experimental observations, so it is broadly correct but highly simplified. Synaptic facilitation and potentiation, and other mechanisms more complex than a simple summing network and threshold element, get brushed under the rug.
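For concreteness, the McCulloch-Pitts neuron really is just a threshold-logic gate. A minimal sketch, with toy weights and thresholds chosen purely for illustration:

```python
# A McCulloch-Pitts neuron: fire (output 1) iff the weighted sum of the
# inputs reaches the threshold. Nothing more.

def mcculloch_pitts(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# Logic gates fall out immediately, which is the point of the
# neuron-as-transistor/gate analogy:
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]
```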
Hebb learning proposes a mechanism for learning and memory in an array of neurons. It works great in artificial neural networks operating on real-world data. But has anyone taken a natural neural network, measured all of its input and lateral synaptic weights, trained it to recognize something, and then measured the post-training weights to prove this is how a memory had been stored? I’ve never heard of that, and that would indeed be a sensational demonstration if it ever happened.
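For readers who haven’t seen it, Hebb’s rule in its simplest form says the weight between two units grows in proportion to the product of presynaptic and postsynaptic activity (“cells that fire together wire together”). A minimal sketch with made-up patterns:

```python
import numpy as np

# One-shot Hebbian storage of an association between an input pattern x
# and an output pattern y. Patterns are arbitrary toy data.

eta = 0.5                              # learning rate
x = np.array([1.0, 0.0, 1.0, 0.0])     # presynaptic activity
y = np.array([1.0, 1.0, 0.0])          # postsynaptic activity

W = np.zeros((3, 4))                   # synaptic weight matrix
W += eta * np.outer(y, x)              # Hebbian update: dW[i,j] = eta*y[i]*x[j]

# Presenting x now recalls (a scaled version of) y -- the stored "memory":
print(W @ x)                           # [1. 1. 0.]
```

Which is exactly what makes the experiment the comment asks for so hard: in living tissue you would have to measure every one of those weights, twice, before and after learning.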
The Hubel-Wiesel model is a remarkable attempt to explain how the visual system works, and it does provide a reasonable explanation of how contrast enhancement and edge detection work at a low level. But does it go further than that? When I was taught the theory, it involved the retina, lateral geniculate nucleus, and area 17, but it doesn’t come close to explaining all the cell types or connections in the retina alone. Why do about one-third of the axons in the optic nerve go from the brain to the retina? In the more than forty years since I was in school, has anyone made progress in decoding the function and organization of the hypercomplex cells and areas 17, 18, and 19?
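The low-level part of the story that does work can be shown in a few lines: a center-surround receptive field (excitatory center, inhibitory surround) responds at contrast boundaries and stays silent on uniform regions. Toy numbers throughout:

```python
import numpy as np

# A 1-D center-surround receptive field: inhibitory surround, excitatory
# center. Convolving it with a luminance step gives a response only at
# the edge -- contrast enhancement / edge detection at its simplest.

luminance = np.array([1, 1, 1, 1, 5, 5, 5, 5], dtype=float)  # a step edge
kernel = np.array([-0.5, 1.0, -0.5])     # -surround, +center, -surround

response = np.convolve(luminance, kernel, mode="same")
print(response)  # ~0 on uniform regions, opposite-signed peaks at the edge
                 # (the nonzero endpoints are array-boundary artifacts)
```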
This is why I think pursuing chemical memory theory is important. What if connectionism is wrong? What if the individual neuron, instead of being like a transistor or gate, is more like an IBM mainframe computer with a large room full of disk drives? I do not find it difficult to believe that the function of a mammalian brain neuron is at least as complex as that of a Paramecium. Sure, the neuron doesn’t have an eyespot or any cilia, but it does have tens of thousands of inputs and it does have memory. If its memory is in the form of peptide or RNA oligomers, it could be quite vast indeed, all in one cell. Chemical memory theory is clearly opposed to the prevailing reductionist theory, and I think there should always be at least a few people opposed to the prevailing theory, no matter what it is.
Interesting:
I don’t think ‘What if the individual neuron, instead of being like a transistor or gate, is more like an IBM mainframe computer with a large room full of disk drives’ is even a question up for debate, as pyramidal neurons, with their 1,000–10,000 synaptic inputs (depending on who you read), are almost certainly calculating something more complex than sum/integrate-and-fire.
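For reference, ‘sum/integrate and fire’ usually means something like the leaky integrate-and-fire unit sketched below. Every constant is a generic textbook-style value, not a measurement of any real pyramidal cell; the point is how little this model computes:

```python
import numpy as np

# Leaky integrate-and-fire: membrane voltage leaks toward rest, sums the
# synaptic drive, and emits a spike whenever it crosses threshold.

dt, tau = 0.1, 10.0                          # time step, membrane constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0    # arbitrary voltage units

rng = np.random.default_rng(1)
drive = rng.random(1000) * 0.25              # summed synaptic input per step

v, spikes = v_rest, 0
for I in drive:
    v += (dt / tau) * (v_rest - v) + I * dt  # leak + integrate
    if v >= v_thresh:                        # threshold crossed: fire, reset
        spikes += 1
        v = v_reset

print(f"{spikes} spikes in {len(drive) * dt:.0f} ms of simulated input")
```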
If that’s the case, then I wonder how the output can be something as simple as an action potential. Isn’t the synapse more complex than it needs to be to just send a simple on-off signal between cells?
What if 1% of synaptic vesicles don’t contain neurotransmitters — they contain the chemical memory factor? Would we even know that’s what’s going on? It might be that an action potential is not actually carrying information. Maybe it’s a signal that means “I’m sending memory chemicals now, please absorb them” to the downstream neuron. When you stick needles into neurons and stimulate them with your own electrical signals, it could really upset their operation. Stimulate them enough, and maybe a sufficiently disturbed neuron spits out an action potential. That would seem like summation, and if it took a minimum level of upset to produce an action potential that would seem like a threshold. If you can’t see the memory chemicals, then the neuron sure looks like the McCulloch-Pitts model. But if the memory chemicals are there, we have vastly underestimated the complexity of interneuron communication.
Mark:
Actually I was thinking similar thoughts 5 years ago. Have a look at https://luysii.wordpress.com/2015/05/05/the-neuron-as-motherboard/
Here is an even wilder theory — memories are the holes in the extracellular matrix permitting synapses to form. https://luysii.wordpress.com/2017/01/09/memories-are-made-of-this/