The New York Times Parodies Itself

I have a conservative friend who is becoming increasingly exercised by what he regards as the anti-Trump bias of the Times. I’ve told him to calm down, as the Times is turning into a parody of its former self. Today the NYT obliged by doing just that.

Here’s what so exercised my friend in today’s Times (19 Feb ’17): “For $200,000, a Chance to Whisper in Trump’s Ear: Membership at Mar-a-Lago Gives Titans Easier Access to Political Power.” This appeared on the front page, taking up the two rightmost columns above the fold. All of page 13 inside is devoted to the article.

Here’s how the Times parodied itself: “Around the World by Private Jet: Cultures in Transformation.” This took up the entire back page of the Style Section (New England edition, at least): “Privately chartered Boeing 757, 26 days/9 countries/50 travelers/$135,000.” You will ride with 5 members of the Times staff (lily-white): Arthur O. Sulzberger Jr., Alan Riding, Nicholas Kristof, Elaine Sciolino and Elizabeth Bumiller. You will not have to share the air with the Times’ minority editorial contributors, Charles Blow (Black) and Ross Douthat (conservative). They don’t appear to have a Latino.

Imagine the joy of access for the cut-rate price of $135K (who said the Times didn’t care about the little man?), while cruising at 35,000 feet exuding both virtue and carbon dioxide.

Here’s part of what my friend had to say about the article (unfortunately he doesn’t blog, though he should, so I can’t supply a link).

Back in the late 17th century William Congreve wrote:

“Heaven has no rage like love to hatred turned,
Nor hell a fury like a woman scorned.”

You would think he was talking about the venerable “Gray Lady,” aka the New York Times. Indeed the paper has jettisoned any pretense of professional journalistic ethics, any pretense of journalistic purpose. A week after the election, after flagrantly shilling for Clinton and smearing Trump in the previous months, the editor of the Times issued in writing to the paper’s readers an apology of sorts, admitting the paper had lost its way and promising to return to reporting news. Evidently atonement to its readers is in the words, not the performance. The Gray Lady is profoundly stunned by most of the country’s rejection of the paper’s vision of how the world should be.

—-

Since the election the scorned and enraged Gray Lady has filled page after page, day after day, with disgraces like the article below. The paper has flooded us with conjecture about things that have not happened and gossip of any sort that could denigrate and damage Trump.

The relentless attacks on Trump (his playing golf with dangerous cohorts, etc.) are in marked contrast to how it suppressed any conjecture about Obama’s rise through the notoriously crooked Chicago political machine. Not a whisper of how he was dependent on other graduates of the Chicago cesspool, such as Axelrod and Jarrett. There was dismissal of Obama’s friendship with Ayers, a principal in a murderous urban terrorist group.

The august “paper of record” never conjectured how Obama could spend 20 years listening to Rev. Wright’s vicious racist rants, yet later say he hardly knew the man.

One final thought: could this be fake news, an ad bought by the Koch brothers to embarrass the Times? Possible, but unlikely.

The strangeness of mathematical proof

I’ve written about Urysohn’s Lemma before, and a copy of that post will be found at the end. I decided to plow through the proof, since coming up with it is regarded by Munkres (the author of a widely used book on topology) as very creative. Here’s how he introduces it:

“Now we come to the first deep theorem of the book, a theorem that is commonly called the “Urysohn lemma”. . . . It is the crucial tool used in proving a number of important theorems. . . . Why do we call the Urysohn lemma a ‘deep’ theorem? Because its proof involves a really original idea, which the previous proofs did not. Perhaps we can explain what we mean this way: By and large, one would expect that if one went through this book and deleted all the proofs we have given up to now and then handed the book to a bright student who had not studied topology, that student ought to be able to go through the book and work out the proofs independently. (It would take a good deal of time and effort, of course, and one would not expect the student to handle the trickier examples.) But the Urysohn lemma is on a different level. It would take considerably more originality than most of us possess to prove this lemma.”

I’m not going to present the proof, just comment on one of the tools used to prove it: a list of all the rational numbers in the interval from 0 to 1, with no repeats.

Munkres gives the start of the list, and you can see why it would eventually include every rational number in the interval. Here it is:

0, 1, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5 . . .

Note that 2/4 is missing (because it equals 1/2, which is already on the list). It would be fairly easy to write a program to produce the list, but a computer running the program would never stop. In addition it would be slow: given a denominator n, it can include 1/n and (n-1)/n right away, but to rule out repeats among the fractions in between it has to check each of the n - 2 remaining numerators for a common factor with n. If it had a way of knowing that n was prime it could just put in 1/n, 2/n, . . . (n-1)/n without the checking, but there is no simple formula producing the primes, so brute force is required. So for a denominator around 10^n, that means on the order of 10^n checks. Once the numbers get truly large, there isn’t enough matter in the universe to represent them, nor has there been enough time since the big bang to do the calculations.
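The enumeration Munkres starts can be sketched in a few lines of Python (my own illustration, not from Munkres): a fraction q/n is new exactly when q and n share no common factor, so repeats can be skipped with a gcd test.

```python
from math import gcd
from itertools import islice

def rationals_in_unit_interval():
    """Yield 0, 1, 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, ... as (numerator, denominator)
    pairs, with no repeats: q/n is in lowest terms exactly when gcd(q, n) == 1."""
    yield (0, 1)
    yield (1, 1)
    n = 2
    while True:
        for q in range(1, n):
            if gcd(q, n) == 1:  # skips repeats such as 2/4 (= 1/2)
                yield (q, n)
        n += 1

first_eight = list(islice(rationals_in_unit_interval(), 8))
# first_eight == [(0, 1), (1, 1), (1, 2), (1, 3), (2, 3), (1, 4), (3, 4), (1, 5)]
```

As the text says, the generator itself never terminates; islice just takes a finite prefix of it.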

However, the proof proceeds blithely on after showing the list — this is where the strangeness comes in. It basically uses the complete list of rational numbers as indexes for an infinite family of open sets in a normal topological space. The proof relies on the assumption of infinite divisibility of space (inherent in the definition of a normal topological space), something totally impossible physically.

So we’re in the never-to-be-seen land of completed infinities (of time, space, numbers of operations). It’s remarkable that this stuff applies to the world we inhabit, but it does, and anyone wishing to understand physics at a deep level must come to grips with mathematics at this level.

Here’s the old post

Urysohn’s Lemma

The above quote is from one of the standard topology texts for undergraduates (or perhaps the standard text) by James R. Munkres of MIT. It appears on page 207 of 514 pages of text. Lee’s textbook on Topological Manifolds gets to it on p. 112 (of 405). For why I’m reading Lee see https://luysii.wordpress.com/2012/09/11/why-math-is-hard-for-me-and-organic-chemistry-is-easy/.

Well it is a great theorem, and the proof is ingenious, and understanding it gives you a sense of triumph that you actually did it, and a sense of awe about Urysohn, a Russian mathematician who died at 26. Understanding Urysohn is an esthetic experience, like a Dvorak trio or a clever organic synthesis [ Nature vol. 489 pp. 278 – 281 ’12 ].

Clearly, you have to have a fair amount of topology under your belt before you can even tackle it, but I’m not even going to state or prove the theorem. It does bring up some general philosophical points about math and its relation to reality (e.g. the physical world we live in and what we currently know about it).

I’ve talked about the large number of extremely precise definitions to be found in math (particularly topology). Actually what topology is about, is space, and what it means for objects to be near each other in space. Well, physics does that too, but it uses numbers — topology tries to get beyond numbers, and although precise, the 202 definitions I’ve written down as I’ve gone through Lee to this point don’t mention them for the most part.

Essentially topology reasons about our concept of space qualitatively, rather than quantitatively. In this, it resembles philosophy which uses a similar sort of qualitative reasoning to get at what are basically rather nebulous concepts — knowledge, truth, reality. As a neurologist, I can tell you that half the cranial nerves, and probably half our brains are involved with vision, so we automatically have a concept of space (and a very sophisticated one at that). Topologists are mental Lilliputians trying to tack down the giant Gulliver which is our conception of space with definitions, theorems, lemmas etc. etc.

Well, one form of space anyway. Urysohn talks about normal spaces. Just think of a closed set as a Russian doll with a bright shiny surface. Remove the surface, and you have a rather beat-up Russian doll — this is an open set. When you open a Russian doll, there’s another one inside (smaller but still a Russian doll). What a normal space permits you to do (by its very definition) is insert a complete Russian doll of intermediate size between any two dolls.

This all sounds quite innocent until you realize that between any two Russian dolls an infinite number of concentric Russian dolls can be inserted. Where did they get a weird idea like this? From the number system of course. Between any two distinct rational numbers p/q and r/s where p, q, r and s are whole numbers, you can always insert a new one halfway between. This is where the infinite regress comes from.
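The halfway construction is literal arithmetic; here is a minimal sketch of it in Python (names are my own) using exact fractions:

```python
from fractions import Fraction

def between(a, b):
    """A rational strictly between two distinct rationals: their average."""
    return (a + b) / 2

x, y = Fraction(1, 3), Fraction(1, 2)
m = between(x, y)    # Fraction(5, 12)
print(x < m < y)     # True, and the construction can be repeated forever
```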

For mathematics (and particularly for calculus) even this isn’t enough. The square root of two isn’t a rational number (one of the great proofs in Euclid), but you can get as close to it as you wish using rational numbers. So there are an infinite number of non-rational numbers between any two rational numbers. In fact that’s essentially how the real numbers are defined — by fiat, any set of numbers bounded above has a least upper bound (think 1, 1.4, 1.41, 1.414, . . . defining the square root of 2).
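The sequence 1, 1.4, 1.41, 1.414, . . . can be generated exactly (a sketch of my own, not from the post): each term is the k-digit decimal truncation of the square root of 2, a rational whose square stays below 2.

```python
from math import isqrt
from fractions import Fraction

def sqrt2_truncations(n):
    """Rational lower bounds 1, 1.4, 1.41, 1.414, ... converging to sqrt(2).
    isqrt(2 * 10**(2k)) gives the integer made of the first k decimal digits."""
    return [Fraction(isqrt(2 * 10 ** (2 * k)), 10 ** k) for k in range(n)]

approxs = sqrt2_truncations(4)
# approxs == [Fraction(1, 1), Fraction(7, 5), Fraction(141, 100), Fraction(707, 500)]
print(all(f * f < 2 for f in approxs))  # True: every term undershoots sqrt(2)
```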

What does this skullduggery have to do with space? It says essentially that space is infinitely divisible, and that you can always slice and dice it as finely as you wish. This is the calculus of Newton and the relativity of Einstein. It clearly is right, or we wouldn’t have GPS systems (which actually require a relativistic correction).

But it’s clearly wrong, as any chemist knows. Matter isn’t infinitely divisible. Just go down 10 orders of magnitude from the visible and you get the hydrogen atom, which can’t be split into smaller and smaller hydrogen atoms (although it can be split).

It’s also clearly wrong as far as quantum mechanics goes — while space might not be quantized, there is no reasonable way to keep chopping it up once you get down to the elementary particle level. You can’t know where they are and where they are going exactly at the same time.

This is exactly one of the great unsolved problems of physics — bringing relativity, with its infinitely divisible space, together with quantum mechanics, where the very meaning of space becomes somewhat blurry (if you can’t know exactly where anything is).

Interesting isn’t it?

The Rorschach test

Despite spending 6 months of a 3-year neurology residency on the psychiatry service (as was typical in those days), I never ran into the Rorschach test. Of course, it was well known in the wider world, primarily through a joke.

For those who don’t know, the Rorschach test is a series of 10 inkblots; subjects are asked to tell the examiner what they bring to mind.  To learn more about the test see — https://en.wikipedia.org/wiki/Rorschach_test

The joke:  The response to all 10 by one frisky subject was that they reminded him of sex. The examiner asked him why he was so obsessed with sex. The subject asked the examiner why he was showing him dirty pictures.

There is a very interesting review of a book about Dr. Rorschach in the current issue of Science (vol. 355 p.588 ’17). The reviewer is at the Department of Translational Science and Molecular Medicine, Michigan State University, Grand Rapids, MI 49503, USA. Email: erin.mckay@hc.msu.edu

Here is the first part — unfortunately I can’t reproduce it all, as you must be a subscriber to Science —
“We’re all familiar with the inkblots that make up the Rorschach test: black and white, bilaterally symmetrical figures that hover close to familiarity. Or, at least, we think we are. In modern times, the term “Rorschach test” often serves as a metaphor for our divisiveness, as shorthand for an encoded message, or as a warning that appearances can be deceiving. But we may not know as much as we think we do about this classic psychological tool or the man behind it, argues Damion Searls in The Inkblots: Hermann Rorschach, His Iconic Test, and The Power of Seeing.

Inkblots were used in psychology to gauge a person’s imagination for nearly two decades before Rorschach developed his version. Rorschach’s contribution was born of his desire to detect the differences in perceptual processes that explained seemingly nonsensical delusions and neuroses. Hoping to create images that were suggestive of shape and movement, Rorschach hand-painted each of his 10 eponymous inkblots.

In tracing the story of the inkblots, Searls sets out to restore two vital stipulations of the Rorschach test: that there are good answers and bad answers and that the test is a measure of perception, not of imagination or projection. The book addresses many questions fundamental to understanding the genesis and effectiveness of Rorschach’s eponymous test as well as the life of the man himself.”

It always seemed incredibly subjective to me (typical of much of psychoanalysis IMHO).

Not so.

I asked two friends long in the field, whose experience, intelligence, and hardheadedness aren’t open to question.

The psychiatrist’s response

As a psychiatrist I was never trained in the Rorschach as psychologists are, but have generally found its results very helpful. In fact, I took one myself back in residency and had the psychologist interpret the results, which at the time left me feeling naked, i.e., all my defenses stripped away.

My office mate doesn’t favor it, largely for the reasons in the article: the lack of a scientific basis. Since he is a forensic psychiatrist, this drawback is even worse, since one might potentially have to present the results to a jury, which is almost universally likely to view it as hocus pocus even if it had more scientific basis.

There is a technology for interpreting the results, but I think an experienced clinician is also key to the results being helpful. It gives a much deeper dimension to the findings, somewhat similar to other projective tests, relative to more scientifically based tests such as the MMPI.

Interesting article; thanks for sending.

The Psychiatric Nurse’s response

I actually did use the Rorschach test when leading groups on the inpatient psychiatric unit at — a prominent Boston hospital (1975-1980). It was always a challenge to get depressed, withdrawn, and psychotic people to express themselves. Trying to be creative and engaging, I would hold up the inkblots and get anywhere from 1 to 100 words in response . . . depending upon their diagnosis! OF COURSE the bipolar manics, with pressured speech, had to be interrupted for the sake of time!

Then, the artist in me would come out. I had people make their own Rorschachs with paint. It helped engage the withdrawn members in a different format. Those with a paucity of speech were able to express themselves in a nonverbal way. There was always more discussion stimulated by their own creations.

Back from China

Back from China after a fascinating few weeks there. A few points. Then back to the science in future posts.

#1 Chinese food — don’t judge it by what you find in America. It’s great and non-greasy. The Chinese eat tons of it and remain quite thin — about which more later. It is remarkably INexpensive. Remember that fortune cookies and General Tso’s chicken are American inventions.

#2 Naturally, after a great meal (ordered in Chinese by our daughter-in-law) in a restaurant over there, I thank the waitress and the cashier. This invariably makes them uncomfortable. Our son explained that to the Chinese this oversteps a boundary, using as an analogy a professor discussing his personal life (marriage etc.) with a student. Strange but true.

#3 After two-plus weeks in Hong Kong (and Manila), the prevalence of obesity in the States is simply staggering. I think that under 10% of American adults are trim (at least the groups seen on the MassPike and in the motel). Even the trim by American standards could well lose 10 - 20 pounds. All is not perfect over there, as far more Chinese smoke than we do.

#4 If you are male and 6 feet tall, or female and 5′ 7″, prepare to feel like a giraffe (particularly in Manila). It is quite an experience to be on the excellent Hong Kong subway system (10 cars with an average of 50 people per car) and see over everyone’s head.

#5 If you are a woman who has let your hair go gray, prepare to stand out. By and large, half the white-haired women I saw in Hong Kong required wheelchairs or canes. My wife took to counting them — the highest count she came up with on a given day was under 10.

#6 If you are a male over 60 and think you’re in good shape, prepare to have your ego diminished, as younger people on the subway get up to offer you their seat — particularly galling when some sweet young thing does it.

#7 Take all this with a grain of salt. Hong Kong is not China, and people living there talk about ‘mainlanders’ the way many Americans talked about blacks 60 years ago.

It all depends on whose ox is being gored

The following article appeared in the New York Times on 19 October 2016. The paragraph below begins a direct, continuous, unedited quote from the start of the article. Further on, the article discusses other matters brought up in the debate — here’s the link for the whole thing — https://www.nytimes.com/2016/10/20/us/politics/presidential-debate.html?_r=0. The times they are a-changin’, aren’t they?

“In a remarkable statement that seemed to cast doubt on American democracy, Donald J. Trump said Wednesday that he might not accept the results of next month’s election if he felt it was rigged against him — a stand that Hillary Clinton blasted as “horrifying” at their final and caustic debate on Wednesday.

Mr. Trump, under enormous pressure to halt Mrs. Clinton’s steady rise in opinion polls, came across as repeatedly frustrated as he tried to rally conservative voters with hard-line stands on illegal immigration and abortion rights. But he kept finding himself drawn onto perilous political territory by Mrs. Clinton and the debate’s moderator, Chris Wallace.

He sputtered when Mrs. Clinton charged that he would be “a puppet” of President Vladimir V. Putin of Russia if elected. He lashed out repeatedly, saying that “she’s been proven to be a liar on so many different ways” and that “she’s guilty of a very, very serious crime” over her State Department email practices. And by the end of the debate, when Mrs. Clinton needled him over Social Security, Mr. Trump snapped and said, “Such a nasty woman.”

Mrs. Clinton was repeatedly forced to defend her long service in government, which Mr. Trump charged had yielded no real accomplishments. But she was rarely rattled, and made a determined effort to rise above Mr. Trump’s taunts while making overtures to undecided voters.

She particularly sought to appeal to Republicans and independents who have doubts about Mr. Trump, arguing that she was not an opponent of the Second Amendment as he claimed, and promising to be tougher and shrewder on national security than Mr. Trump.

But it was Mr. Trump’s remark about the election results that stood out, even in a race that has been full of astonishing moments.

Every losing presidential candidate in modern times has accepted the will of the voters, even in extraordinarily close races, such as when John F. Kennedy narrowly defeated Richard M. Nixon in 1960 and George W. Bush beat Al Gore in Florida to win the presidency in 2000.

Mr. Trump insisted, without offering evidence, that the general election has been rigged against him, and he twice refused to say that he would accept its result.

“I will look at it at the time,” Mr. Trump said. “I will keep you in suspense.”

“That’s horrifying,” Mrs. Clinton replied. “Let’s be clear about what he is saying and what that means. He is denigrating — he is talking down our democracy. And I am appalled that someone who is the nominee of one of our two major parties would take that position.”

Mrs. Clinton then ticked off the number of times he had deemed a system rigged when he suffered a setback, noting he had even called the Emmy Awards fixed when his show, “The Apprentice,’’ was passed over.

“It’s funny, but it’s also really troubling,” she said. “That is not the way our democracy works.”

Mrs. Clinton also accused Mr. Trump of extreme coziness with Mr. Putin, criticizing him for failing to condemn Russian espionage against her campaign’s internal email.

When Mr. Trump responded that Mr. Putin had “no respect” for Mrs. Clinton, she shot back, in one of the toughest lines of the night: “That’s because he’d rather have a puppet as president of the United States.”

“No puppet, no puppet,” Mr. Trump sputtered. “You’re the puppet.” He quickly recovered and said, “She has been outsmarted and outplayed worse than anybody I’ve ever seen in any government, whatsoever.”

There’s more — but the above is a direct, continuous, unedited quote from the article.

No posts for a while

Off to Manila for a wedding, and Hong Kong to see a new grandson — will be back mid-February

For a picture see — https://luysii.wordpress.com/2016/12/18/noel/

Ring currents ride again

One of the most impressive pieces of evidence (to me at least) that we really understand what electrons are doing in organic molecules is the ring current. Recall that the pi electrons in benzene are delocalized above and below the planar ring determined by the 6 carbon atoms.

How do we know this? When a magnetic field is applied, the electrons in the ring cloud circulate to oppose the field. So what? Well, if you can place a C-H bond above the ring, the induced current will shield its hydrogen. Such molecules are known, and the new edition of Clayden (p. 278) shows the NMR spectrum of [7]paracyclophane, which is benzene with a chain of 7 CH2’s linking the 1 and 4 positions, so that the hydrogens of the 4th CH2 sit directly over the ring (7 CH2’s aren’t long enough for them to be anywhere else). Similarly, [18]annulene has 6 hydrogens inside the aromatic ring — and these hydrogens are even more shielded. Interestingly, building larger and larger annulenes has shown that aromaticity decreases with increasing size, vanishing for systems with more than 30 pi electrons (diameter 13 Angstroms), probably because planarity of the carbons becomes less and less possible, breaking up the cloud.

This brings us to Nature vol. 541 pp. 200 - 203 ’17, which describes a remarkable molecule with 6 porphyrins in a ring hooked together by diyne linkers. The diameter of the circle is 24 Angstroms. Benzene and [18]annulene have all their carbons in a plane, but the picture of the molecule given in the paper does not. Each of the porphyrins is planar of course, but each plane is tangent to the circle of porphyrins.

Also discussed is the fact that ‘anti-aromatic’ ring currents exist, in which the electrons circulate to enhance rather than diminish the imposed magnetic field. The molecule can be switched between the aromatic and anti-aromatic states by its oxidation level. When it has 78 electrons ((4 x 19) + 2) in the ring (with a charge of +6) it is aromatic. When it has 80 electrons with a +4 charge it is anti-aromatic — further confirmation of the Hückel rule (as if it were needed).
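The electron counts above are just Hückel’s 4n + 2 rule; a toy check (my own sketch, not from the paper):

```python
def huckel_aromatic(pi_electrons):
    """Hückel's rule: a planar, fully conjugated ring is aromatic with
    4n + 2 pi electrons (n = 0, 1, 2, ...) and anti-aromatic with 4n."""
    return pi_electrons % 4 == 2

print(huckel_aromatic(6))    # True: benzene
print(huckel_aromatic(18))   # True: [18]annulene
print(huckel_aromatic(78))   # True: the +6 porphyrin ring, n = 19
print(huckel_aromatic(80))   # False: the +4 ring, 4 x 20, anti-aromatic
```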

On a historical note, reference #27 is to a 1961 paper by Marty Gouterman, who was teaching graduate students in chemistry that spring. He was an excellent teacher. Here he is at the University of Washington — http://faculty.washington.edu/goutermn/

Memories are made of this?

Back in the day when information was fed into computers on punch cards, the data was the holes in the paper, not the paper itself. A far out (but similar) theory of how memories are stored in the brain just got a lot more support [ Neuron vol. 93 pp. 6 - 8, 132 - 146 ’17 ].

The theory says that memories are stored in the proteins and sugar polymers surrounding neurons rather than the neurons themselves. These go by the name of extracellular matrix, and memories are the holes drilled in it which allow synapses to form.

Here’s some stuff I wrote about the idea when I first ran across it two years ago.

——

An article in Science (vol. 343 pp. 670 - 675 ’14) on some fairly obscure neurophysiology throws out at the end (almost as an afterthought) an interesting idea of where, and in what chemical form, memories are stored in the brain. I find the idea plausible and extremely surprising.

You won’t find all the background material needed to understand what follows in this blog. Hopefully you already know some of it; the subject is simply too vast, but plug away. Here are a few theories from the past half century, seriously flawed in my opinion, of how and where memory is stored in the brain.

#1 Reverberating circuits. The early computers had memories made of something called delay lines (http://en.wikipedia.org/wiki/Delay_line_memory), in which the same impulse would constantly ricochet around a circuit. The idea was used to explain memory as neuron #1 exciting neuron #2, which excited neuron . . . which excited neuron #n, which excited #1 again. Plausible, in that the nerve impulse is basically electrical. Very implausible, because you can practically shut the whole brain down using general anesthesia without erasing memory. However, RAM memory in the computers of the 70s used the localized buildup of charge to store bits and bytes. Since charge would leak away from where it was stored, it had to be refreshed constantly (at least 12 times a second) or it would be lost. Yet another reason data should always be frequently backed up.

#2 CaMKII — more plausible. There’s lots of it in brain (2% of all proteins in an area of the brain called the hippocampus — an area known to be important in memory). It’s an enzyme which can add phosphate groups to other proteins. To first start doing so calcium levels inside the neuron must rise. The enzyme is complicated, being comprised of 12 identical subunits. Interestingly, CaMKII can add phosphates to itself (phosphorylate itself) — 2 or 3 for each of the 12 subunits. Once a few phosphates have been added, the enzyme no longer needs calcium to phosphorylate itself, so it becomes essentially a molecular switch existing in two states. One problem is that there are other enzymes which remove the phosphate, and reset the switch (actually there must be). Also proteins are inevitably broken down and new ones made, so it’s hard to see the switch persisting for a lifetime (or even a day).

#3 Synaptic membrane proteins. This is where electrical nerve impulses begin. Synapses contain lots of different proteins in their membranes. They can be chemically modified to make the neuron more or less likely to fire to a given stimulus. Recent work has shown that their number and composition can be changed by experience. The problem is that after a while the synaptic membrane has begun to resemble Grand Central Station — lots of proteins coming and going, but always a number present. It’s hard (for me) to see how memory can be maintained for long periods with such flux continually occurring.

This brings us to the Science paper. We know that about 80% of the neurons in the brain are excitatory — in that when excitatory neuron #1 talks to neuron #2, neuron #2 is more likely to fire an impulse. The remaining 20% are inhibitory. Obviously both are important. While there are lots of other neurotransmitters and neuromodulators in the brain (with probably even more we don’t know about — who would have put carbon monoxide on the list 20 years ago?), the major inhibitory neurotransmitter of our brains is something called GABA. At least in adult brains this is true, but in the developing brain it’s excitatory.

So the authors of the paper worked on why this should be. GABA opens channels in the brain to the chloride ion. When it flows into a neuron, the neuron is less likely to fire (in the adult). This work shows that this effect depends on the negative ions (proteins mostly) inside the cell and outside the cell (the extracellular matrix). It’s the balance of the two sets of ions on either side of the largely impermeable neuronal membrane that determines whether GABA is excitatory or inhibitory (chloride flows in either event), and just how excitatory or inhibitory it is. The response is graded.

For the chemists: the negative ions outside the neurons are sulfated proteoglycans. These are much more stable than the proteins inside the neuron or on its membranes. Even better, it has been shown that the concentration of chloride varies locally throughout the neuron. The big negative ions (e.g. proteins) inside the neuron move about but slowly, and their concentration varies from point to point.

Here’s what the authors say (in passing) “the variance in extracellular sulfated proteoglycans composes a potential locus of analog information storage” — translation — that’s where memories might be hiding. Fascinating stuff. A lot of work needs to be done on how fast the extracellular matrix in the brain turns over, and what are the local variations in the concentration of its components, and whether sulfate is added or removed from them and if so by what and how quickly.

—-

So how does the new work support this idea? It involves a structure that I’ve never talked about: the lysosome (for more info see https://en.wikipedia.org/wiki/Lysosome). It’s basically a bag of at least 40 digestive and synthetic enzymes inside the cell, which chop up anything brought to them (e.g. bacteria). Mutations in the enzymes cause all sorts of (fortunately rare) neurologic diseases — mucopolysaccharidoses, lipid storage diseases (Gaucher’s, Farber’s); the list goes on and on.

So I’ve always thought of the structure as a Pandora’s box best kept closed. I always thought of lysosomes as confined to the cell body, but according to this paper they’re also found in dendrites. Even more interesting, a rather unphysiologic treatment of neurons in culture (depolarization by high potassium) causes the lysosomes to migrate to the neuronal membrane and release their contents outside. One enzyme released is cathepsin B, a proteolytic enzyme which chops up TIMP1 outside the cell. So what? TIMP1 is an endogenous inhibitor of the Matrix MetalloProteinases (MMPs), which break down the extracellular matrix. So what?

Are neurons ever depolarized by natural events? Yes: by synaptic transmission, by action potentials, and spontaneously. So here we have a way that neuronal activity can cause holes in the extracellular matrix, the holes in the punch cards if you will.

Speculation? Of course. But that’s the fun of reading this stuff. As Mark Twain said, “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

Tensors yet again

In the grad school course on abstract algebra I audited a decade or so ago, the instructor began the discussion of tensors by saying they were the hardest thing in mathematics. Unfortunately I had to drop this section of the course due to a family illness. I’ve written before about tensors and their baffling notation and nomenclature. The following is yet another way to look at them, which may help with their confusing terminology.

First, this post will assume you have a significant familiarity with linear algebra. I’ve written a series of posts on the subject if you need a brush up — pretty basic — here’s a link to the first post — https://luysii.wordpress.com/2010/01/04/linear-algebra-survival-guide-for-quantum-mechanics-i/
All of them can be found here — https://luysii.wordpress.com/category/linear-algebra-survival-guide-for-quantum-mechanics/.

Here’s another attempt to explain them — which will give you the background on dual vectors you’ll need for this post — https://luysii.wordpress.com/2015/06/15/the-many-ways-the-many-tensor-notations-can-confuse-you/

To the physicist, tensors really represent a philosophical position: there are shapes and processes external to us which are real, and independent of the way we choose to describe them mathematically, e.g. by locating their various parts and physical extents in some sort of coordinate system. That approach is described here — https://luysii.wordpress.com/2014/12/08/tensors/

Zee, in one of his books, defines a tensor as something that transforms like a tensor (honest to god). Neuenschwander, in his book, asks “What kind of a definition is that supposed to be, that doesn’t tell you what it is that is changing?”

The following approach may help — it’s from an excellent book which I’ve not completely gotten through — “An Introduction to Tensors and Group Theory for Physicists” by Nadir Jeevanjee.

He says that tensors are just functions that take a bunch of vectors and return a number (either real or complex). It’s a good idea to keep the volume tensor (which takes 3 vectors and returns a real number) in mind while reading further. The tensor function just has one other constraint — it must be multilinear (https://en.wikipedia.org/wiki/Multilinear_map). Amazingly, it turns out that this is all you need.

Tensors are named by the number of vectors (written V) and dual vectors (written V*) they massage to produce the number. This is fairly weird when you think about it. We don’t name sin(x) by its argument x, since that wouldn’t distinguish it from the zillion other real valued functions of a single variable.

So an (r, s) tensor is named by the ordered array of its operands — (V, …, V, V*, …, V*), with r V’s first and s V*’s next in the array. The array tells you what the tensor function must be.

How can Jeevanjee get away with this? Amazingly, multilinearity is all you need. Recall that the great thing about the linearity of any function or operator on a vector space is that ALL you need to know is what the function or operator does to the basis vectors of the space. The effect on ANY vector in the vector space then follows by linearity.
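To make the multilinearity point concrete, here’s a minimal Python sketch (the basis values are made up purely for illustration): a bilinear function on R^2 — a (2, 0) tensor in the naming scheme above — is pinned down by its four values on pairs of basis vectors, and every other value follows by expanding each argument in the basis.

```python
# A bilinear function on R^2 is completely determined by its 2 x 2 = 4
# values on pairs of basis vectors. These values are arbitrary, chosen
# just for illustration.
T_basis = [[1.0, 2.0],
           [3.0, 4.0]]   # T_basis[i][j] = T(e_i, e_j)

def T(u, v):
    """Extend the basis values to arbitrary vectors by multilinearity:
    T(u, v) = sum over i, j of u[i] * v[j] * T(e_i, e_j)."""
    return sum(u[i] * v[j] * T_basis[i][j] for i in range(2) for j in range(2))

print(T([1.0, -1.0], [2.0, 5.0]))  # -14.0
```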

Going back to the volume tensor, whose operand array is (V, V, V) with all 3 V’s the same vector space (R^3): how many basis vectors are there for V x V x V? There are 3 for each V, meaning there are 3^3 = 27 possible basis vectors. You probably remember the formula for the volume enclosed by 3 vectors (call them u, v, w). The 3 components of u are u1, u2 and u3.

The volume tensor calculates volume as (u cross product v) dot product w.
Writing the calculation out:

Volume = u1*v2*w3 – u1*v3*w2 + u2*v3*w1 – u2*v1*w3 + u3*v1*w2 – u3*v2*w1. What about the other 21 combinations of basis vectors? They are all zero, but they are all present in the tensor.
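Here is the same computation in code. The six-term sum below is exactly the expansion just written out (with 0-indexed components), and it agrees with the familiar (u x v) . w; the three vectors are arbitrary examples.

```python
# The volume tensor on R^3: the explicit six-term expansion from the text,
# checked against the cross-product / dot-product formula.
def volume(u, v, w):
    return (u[0]*v[1]*w[2] - u[0]*v[2]*w[1]
          + u[1]*v[2]*w[0] - u[1]*v[0]*w[2]
          + u[2]*v[0]*w[1] - u[2]*v[1]*w[0])

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v, w = [1.0, 0.0, 0.0], [1.0, 2.0, 0.0], [3.0, 1.0, 4.0]

assert volume(u, v, w) == dot(cross(u, v), w)  # (u x v) . w gives the same number
print(volume(u, v, w))  # 8.0 -- the signed volume of the parallelepiped
```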

While any tensor manipulating two vectors can be expressed as a square matrix, the volume tensor with its 27 components clearly cannot be. So don’t confuse tensors with matrices (as I did).

Note that the formula for volume implicitly used the usual standard orthogonal coordinates for R^3. What would it be in spherical coordinates? You’d have to use a change of basis matrix to (r, theta, phi). Actually you’d need 3 of them, as basis vectors in V x V x V are 3-place arrays. This gives the horrible subscript and superscript notation by which tensors are usually defined. So rather than memorizing how tensors transform, you can derive transformation laws like

T_i’^j’ = (A^k_i’)*(B^j’_l)*T_k^l, where _ before a letter means subscript and ^ before a letter means superscript, A^k_i’ is a change of basis matrix, B^j’_l is its inverse, and the Einstein summation convention is used. Note that the change of basis formula for the components of the volume tensor would have 3 such matrices, not two as I’ve shown.
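This can be checked numerically. The sketch below (assuming numpy is available) verifies the transformation law for a tensor with one upper and one lower index, which can be viewed as a linear map: in matrix language its components transform as A^{-1} T A. The einsum subscript string is just the index formula with the Einstein summation made explicit.

```python
import numpy as np

rng = np.random.default_rng(0)

T = rng.normal(size=(3, 3))   # components T^j_k in the old basis (random example)
A = rng.normal(size=(3, 3))   # change of basis matrix (columns = new basis vectors)
Ainv = np.linalg.inv(A)

# Index form of the transformation law, with einsum doing the Einstein summation:
# T'^j_i = Ainv^j_l * T^l_k * A^k_i
T_new = np.einsum('jl,lk,ki->ji', Ainv, T, A)

# The same thing as plain matrix algebra
assert np.allclose(T_new, Ainv @ T @ A)
```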

One further point. You can regard a dual vector as a function that takes a vector and returns a number — so a dual vector is a (1,0) tensor. Similarly you can regard vectors as functions that take dual vectors and return a number, so they are (0,1) tensors. So vectors and dual vectors are actually tensors as well.
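A one-line sketch of that last point, with made-up components: a dual vector is nothing but a linear, real-valued function of one vector.

```python
# A dual vector on R^3: components chosen arbitrarily for illustration.
a = [2.0, -1.0, 3.0]

def alpha(v):
    """Apply the dual vector to a vector: one vector in, one number out."""
    return sum(ai * vi for ai, vi in zip(a, v))

v = [1.0, 1.0, 1.0]
u = [0.0, 1.0, -1.0]
print(alpha(v))   # 4.0

# Linearity check: alpha(2u + 3v) == 2*alpha(u) + 3*alpha(v)
lhs = alpha([2*ui + 3*vi for ui, vi in zip(u, v)])
assert lhs == 2*alpha(u) + 3*alpha(v)
```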

The distinction between what a tensor does (its function) and what its operands actually are caused me endless confusion. A tensor operating on a dual vector is written as a (0, 1) tensor, but a dual vector itself, considered as a function, is a (1, 0) tensor.

None of this discussion applies to the tensor product, which is an entirely different (but similar) story.

Hopefully this helps.

Tidings of great joy

One of the hardest things I had to do as a doc was watch an infant girl waste away and die of infantile spinal muscular atrophy (Werdnig-Hoffmann disease) over the course of a year. Something I never thought would happen (a useful treatment) may be at hand. The actual papers are not available yet, but two placebo controlled trials, each with a significant number of patients (84 and 121), were stopped early because the trial monitors (not in any way involved with the patients) found the treated group doing much, much better than the placebo group. A news report of the trials is available [ Science vol. 354 pp. 1359 – 1360 ’16 (16 December) ].

The drug, a modified RNA molecule (details not given), binds to another RNA which codes for the missing protein. In what follows a heavy dose of molecular biology will be administered to the reader. Hang in there; this is incredibly rational therapy based on serious molecular biological knowledge. Daunting though it is, other therapies of this sort for other neurologic diseases (Huntington’s Chorea, FrontoTemporal Dementia) are currently under study.

If you want to start at ground zero, I’ve written a series https://luysii.wordpress.com/category/molecular-biology-survival-guide/ which should tell you enough to get started. Start here — https://luysii.wordpress.com/2010/07/07/molecular-biology-survival-guide-for-chemists-i-dna-and-protein-coding-gene-structure/
and follow the links to the next two.

Here we go, if you don’t want to plow through all three.

Our genes occur in pieces. Dystrophin is the protein mutated in the commonest form of muscular dystrophy. The gene for it is 2,220,233 nucleotides long, but dystrophin itself contains ‘only’ 3,685 amino acids, not the 740,000+ amino acids the gene could specify. What happens? The whole gene is transcribed into an RNA of this enormous length, then 78 distinct segments of RNA (called introns) are removed by a gigantic multimegadalton machine called the spliceosome, and the 79 segments actually coding for amino acids (the exons) are linked together and the RNA sent on its way.
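A quick back-of-the-envelope check on those numbers (assuming 3 nucleotides per codon and ignoring stop codons and untranslated regions):

```python
# Sanity check on the dystrophin numbers quoted above.
gene_length_nt = 2_220_233   # length of the primary transcript, in nucleotides
protein_aa = 3_685           # amino acids in the actual dystrophin protein

max_aa = gene_length_nt // 3          # 3 nucleotides per codon
print(max_aa)                         # 740077 amino acids if every base were coding
print(round(protein_aa / max_aa * 100, 1))  # 0.5 -- only ~0.5% of the transcript codes for protein
```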

All this was unknown in the 70s and early 80s when I was running a muscular dystrophy clinic and taking care of these kids. Looking back, it’s miraculous that more of us don’t have muscular dystrophy; there is so much that can go wrong with a gene this size, let alone transcribing and correctly splicing it to produce a functional protein.

One final complication — alternate splicing. The spliceosome removes introns and splices the exons together. But sometimes exons are skipped, or one of several alternative exons is used at a particular point in the protein. So one gene can make more than one protein. The record holder is the Dscam gene in the fruitfly, which can make over 38,000 different proteins by alternate splicing.
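The Dscam number is just multiplication: each mature mRNA picks one exon from each of four mutually exclusive exon clusters. The cluster sizes below come from the literature rather than from anything above, so treat them as an assumption.

```python
# Dscam's alternative splicing arithmetic. Each transcript chooses one exon
# from each mutually exclusive cluster (cluster sizes from the literature).
alternatives = {"exon 4": 12, "exon 6": 48, "exon 9": 33, "exon 17": 2}

isoforms = 1
for n in alternatives.values():
    isoforms *= n

print(isoforms)  # 38016 -- the "over 38,000" proteins quoted above
```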

There is nothing worse than watching an infant waste away and die. That’s what Werdnig-Hoffmann disease is like, and I saw one or two cases during my years at the clinic. It is also called infantile spinal muscular atrophy. We all have two genes for the same crucial protein (unimaginatively called SMN). Kids who have the disease have mutations in one of the two genes (called SMN1). Why isn’t the other gene protective? It codes for the same sequence of amino acids (using different, synonymous codons). What goes wrong?

[ Proc. Natl. Acad. Sci. vol. 97 pp. 9618 – 9623 ’00 ] Why is SMN2 (the centromeric copy, i.e. the copy closest to the middle of the chromosome, which is normal in most patients) not protective? It has a single translationally silent nucleotide difference from SMN1 in exon 7 (i.e. the difference doesn’t change the amino acid coded for). This disrupts an exonic splicing enhancer and causes exon 7 skipping, leading to abundant production of a shorter isoform (SMN2delta7). Thus even though both genes code for the same protein, only SMN1 actually makes the full protein.
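A toy sketch of what exon 7 skipping does. The exon labels are hypothetical (the real SMN genes have 9 exons, numbered 1, 2a, 2b, 3 through 8); only the skipping logic is the point.

```python
# Toy model of splicing: join exons in order, with or without exon 7.
exons = ["E1", "E2", "E3", "E4", "E5", "E6", "E7", "E8"]

full_length = "-".join(exons)                     # SMN1-style splicing: all exons kept
delta7 = "-".join(e for e in exons if e != "E7")  # SMN2's predominant (truncated) product

print(full_length)  # E1-E2-E3-E4-E5-E6-E7-E8
print(delta7)       # E1-E2-E3-E4-E5-E6-E8
```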

More background. The molecular machine which removes the introns is called the spliceosome. It’s huge, containing 5 RNAs (called small nuclear RNAs, aka snRNAs) along with 50 or so proteins, with a total molecular mass again of around 2,500,000 Daltons. Think about it, chemists. Design 50 proteins and 5 RNAs, probably 200,000+ atoms in all, so that they come together forming a machine to operate on other monster molecules — such as the mRNA for dystrophin alluded to earlier. Hard for me to believe this arose by chance, but current opinion has it that way.

Splicing out introns is a tricky process which is still being worked on. Mistakes are easy to make, and different tissues will splice the same pre-mRNA in different ways. All this happens in the nucleus before the mRNA is shipped outside where the ribosome can get at it.

The papers [ Science vol. 345 pp. 624 – 625, 688 – 693 ’14 ] describe a small molecule which acts on the spliceosome to increase the inclusion of SMN2 exon 7. It does appear to work in patient cells and in mouse models of the disease, even reversing weakness.

I was extremely skeptical when I read the papers two years ago. Why? Because the mRNA for just about every protein we make is spliced (histones excepted), and any molecule altering the splicing machinery seems almost certain to affect many genes, not just SMN2. If it really works, these guys should get a Nobel.

Well, I shouldn’t have been so skeptical. I can’t say much more about the chemistry of the drug (nusinersen) until the papers come out.

Fortunately, the couple (a cop and a nurse) took the 25% risk of another child with the same thing and produced a healthy infant a few years later.