Catching God’s dice being thrown

Einstein famously said “Quantum theory yields much, but it hardly brings us close to the Old One’s secrets. I, in any case, am convinced He does not play dice with the universe.”  Astronomers have caught the dice being thrown (at least as far as the origin of life is concerned).

This post contains a lot more background than most, as I expect some readers won’t have much scientific training.  The technically inclined can read the article on which this is based — http://www.pnas.org/content/115/28/7166

To cut to the chase — astronomers have found water, a simple sugar, and a compound containing carbon, hydrogen, oxygen and nitrogen around newly forming stars and planets.  You need no more than these 4 atoms to build the bases making up the DNA of our genes, all our sugars and carbohydrates, and 18 of the 20 amino acids that make up our proteins. Throw in sulfur and you have all 20 amino acids.  Add phosphorus and you have DNA and its cousin RNA (neither has been found around newly forming stars so far).

These are the ingredients of life itself. Here’s a quote from the article — “What I can definitively say is that the ingredients needed to make biogenic molecules like DNA and RNA are found around every forming protostar. They are there at an early stage, incorporating into bodies at least as large as comets, which we know are the building blocks of terrestrial planets. Whether these molecules survive or are delivered at the late stage of planet formation, that’s the part of it we don’t know very well.”

So each newly formed star and planetary system is a throw of God’s/Nature’s/Soulless physics’ dice for the creation of life.

As of 1 July 2018, there are 3,797 confirmed planets around 2,841 stars, with 632 stars having more than one (Wikipedia).  And that’s just among the stars close enough to us to study.  Our galaxy, the Milky Way, contains some 400,000,000,000 stars.

Current estimates have some 100,000,000,000 galaxies in the universe — https://www.space.com/25303-how-many-galaxies-are-in-the-universe.html.  That’s a lot of tosses for life to arise.
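
As a back-of-the-envelope illustration, the multiplication can be spelled out in a few lines of Python.  The planets-per-star figure here is a deliberately conservative placeholder of my own, not a measured value:

```python
# Rough count of the planetary 'dice throws' using the figures quoted above.
stars_per_galaxy = 4e11   # ~400 billion stars in the Milky Way (from the post)
galaxies = 1e11           # ~100 billion galaxies (from the post)
planets_per_star = 1.0    # assumed average, for illustration only

tosses = stars_per_galaxy * galaxies * planets_per_star
print(f"rough number of planets in the universe: {tosses:.1e}")  # ~4e22
```

Even with a single planet per star on average, that is on the order of 10^22 throws of the dice.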

Suppose that some day life is found on one such planet.  Does this invalidate Genesis, the Koran?  Assume that they are the word of God somehow transmitted to man.  If the knowledge we have about astronomy (above), biology etc. etc. were imparted to Jesus, Mohammed, Abraham, Moses — it never would have been believed.  The creator had to start with something plausible.


An incredible way to look at the brain

http://www.pnas.org/content/115/27/6940 [ Proc. Natl. Acad. Sci. vol. 115 pp. 6940 – 6945 ’18 ] demonstrates an incredible new way to visualize brain structures.  I don’t think the paper is behind a paywall, so follow the link and look at the movies.

The technique can be used on paraffin embedded brain.  Not to be tried at home unless you have a microCT with a liquid-jet anode source, and a high resolution synchrotron instrument with special X-ray waveguide optics.

No staining was involved, and they used electron contrast to show Purkinje cells, granule cells, and the ramified dendritic tree of the Purkinje cells in a 1 cubic millimeter punch ‘biopsy’ of paraffin embedded cerebellum.

The movies are incredible, as unlike the standard CT or MRI, you can move a plane through the images (the movies show this), stopping it at leisure.  Visualization of a plane moving through the material shows what the brain looks like in 3 dimensions.  Then there are a few 3 dimensional reconstructions (presented as the 2 dimensional projections we’re used to seeing), but even these can be moved around.

Words are inadequate.  Go to the link and look at the movies.  Let me know if you have trouble reaching it.

How many more metabolites like this are out there?

3′ deoxy 3′,4′ didehydro cytidine triphosphate — doesn’t ‘roll trippingly on the tongue’, does it? (Hamlet, Act 3, Scene 2, 1–4).  Can you draw the structure?  It is the product of a far more euphoniously named enzyme — Viperin.  Abbreviated ddhCTP, it is just cytidine triphosphate lacking the 3′ hydroxyl, with a double bond between carbons 3′ and 4′ of the sugar.

Viperin is an enzyme induced by interferon which inhibits the replication of a variety of viruses. [ Nature vol. 558 pp. 610 – 614 ’18 ] describes a  beautiful sequence  of reactions for ddhCTP’s formation using S-Adenosyl Methionine (SAM).  ddhCTP acts as a chain terminator for the RNA dependent RNA polymerases of multiple members of Flaviviruses (including Zika).

However, the paper totally blows it by not making the point that ddhCTP is extremely close to a drug which has been used against AIDS for years — Zalcitabine (Hivid) — http://www.molbase.com/en/name-Zalcitibine.html — which is just ddC.  ddhCTP is almost the same as ddC, except that ddC carries no triphosphate on the 5′ hydroxyl (enzymes in the body add it), and instead of a double bond between carbons 3′ and 4′ of the sugar, both carbons in ddC are fully reduced.  So ddhCTP is Nature’s own Zalcitabine.

It is worth reflecting on just how many other metabolites are out there acting as ‘natural’ drugs that we just haven’t found yet.

The Gambler’s fallacy is actually based on our experience

We don’t understand randomness very well. When asked to produce a random sequence, we never produce enough repeating patterns, thinking that they are less probable. This is the Gambler’s fallacy.  If heads come up 3 times in a row, the gambler will bet on tails on the next throw.  Why?  This reasoning is actually based on experience.

The following comes from a very interesting paper of a few years ago  [ Proc. Natl. Acad. Sci. vol. 112 pp. 3788 – 3792 ’15 ].  There is a surprising amount of systematic structure lurking within random sequences. For example, in the classic case of tossing a fair coin, where the probability of each outcome (heads or tails) is exactly 0.5 on every single trial, one would naturally assume that there is no possibility for some kind of interesting structure to emerge, given such a simple form of randomness.

However, if you record the average amount of time for a pattern to first occur in a sequence (i.e., the waiting time statistic), it is longer for a repetition (head–head HH or tail–tail TT — an average of six tosses is required) than for an alternation (HT or TH — only four tosses are needed). This is despite the fact that, on average, repetitions and alternations are equally probable (each occurring once in every four tosses, i.e., the same mean time statistic).

For both of these facts to be true, it must be that repetitions are more bunched together over time—they come in bursts, with greater spacing between, compared with alternations (which is why they appear less frequent to us). Intuitively, this difference comes from the fact that repetitions can build upon each other (e.g., sequence HHH contains two instances of HH), whereas alternations cannot.

Statistically, the mean time and waiting time delineate the mean and variance in the distribution of the interarrival times of patterns (respectively). Despite the same frequency of occurrence (i.e., the same mean), alternations are more evenly distributed over time than repetitions (they have different variances) — which is exactly why they appear less frequent, hence less likely.
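
These waiting-time and mean-time claims are easy to verify by simulation.  Here is a quick sketch (the trial counts and seeds are arbitrary choices of mine):

```python
import random

def first_occurrence(pattern, rng):
    """Toss a fair coin until `pattern` first appears; return the number of tosses."""
    seq = ""
    while pattern not in seq:
        seq += rng.choice("HT")
    return len(seq)

def mean_waiting_time(pattern, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(first_occurrence(pattern, rng) for _ in range(trials)) / trials

print("waiting time HH:", mean_waiting_time("HH"))  # ~6 tosses
print("waiting time HT:", mean_waiting_time("HT"))  # ~4 tosses

# Yet both patterns occur equally often in a long run of tosses:
rng = random.Random(1)
s = "".join(rng.choice("HT") for _ in range(100_000))
def count(p):
    # count overlapping occurrences of the 2-toss pattern p
    return sum(s[i:i + 2] == p for i in range(len(s) - 1))
print(count("HH"), count("HT"))  # both near 25,000
```

The simulation shows exactly the asymmetry the paper describes: same frequency of occurrence, but a longer wait for the first repetition, because repetitions arrive in bunches.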

Then the authors go on to develop a model of the way we think about these things.

“Is this latent structure of waiting time just a strange mathematical curiosity or could it possibly have deep implications for our cognitive level perceptions of randomness? It has been speculated that the systematic bias in human randomness perception such as the gambler’s fallacy might be due to the greater variance in the interarrival times or the “delayed” waiting time for repetition patterns. Here, we show that a neural model based on a detailed biological understanding of the way the neocortex integrates information over time when processing sequences of events is naturally sensitive to both the mean time and waiting time statistics. Indeed, its behavior is explained by a simple averaging of the influences of both of these statistics, and this behavior emerges in the model over a wide range of parameters. Furthermore, this averaging dynamic directly produces the best-fitting bias-gain parameter for an existing Bayesian model of randomness judgments, which was previously an unexplained free parameter and obtained only through parameter fitting. We show that we can extend this Bayesian model to better fit the full range of human data by including a higher-order pattern statistic, and the neurally derived bias-gain parameter still provides the best fit to the human data in the augmented model. Overall, our model provides a neural grounding for the pervasive gambler’s fallacy bias in human judgments of random processes, where people systematically discount repetitions and emphasize alternations.”

Fascinating stuff

Remember entropy – take III

Pop quiz.  How would you make an enzyme in a cold-dwelling organism (0 Centigrade) as catalytically competent as its brothers living in us at 37 C?

We know that reactions go faster the hotter it is, because there is more kinetic energy of the reactants to play with.  So how do you make an enzyme move more when it’s cold and there is less kinetic energy to play with?

Well, for most cold-tolerant enzymes (psychrophilic enzymes — a great Scrabble word), evolution mutates surface amino acids to glycine.  Why glycine?  Well, it’s lighter, and there is no side chain to get in the way when the backbone moves.  The mutations aren’t in the active site but far away.  This means more wiggling of the backbone — which means more entropy of the backbone.

The following papers [ Nature vol. 558 pp. 195 – 196, 324 – 328 ’18 ] studied adenylate kinase, an enzyme found in most eukaryotes which catalyzes

ATP + AMP < — > 2 ADP.

They studied the enzyme from E. coli, which happily lives within us at 37 C, mutated a few surface valines and isoleucines to glycine, lowered the temperature and found that the enzyme works as well (the catalytic rate of the mutated enzyme at 0 C is the same as the rate of the unmutated enzyme at 37 C).

Chemists have been studying transition state theory since the days of Eyring, and reaction rates fall off exponentially with the amount of free energy (not enthalpy) needed to raise the enzyme and its substrates to the transition state.

F = H – TS (Free energy = enthalpy – Temperature * Entropy).

So to increase speed, decrease the enthalpy of activation (deltaH) or increase the entropy of activation (deltaS).

It is possible to measure enthalpies and entropies of activation separately, and the authors did just that (figure 4, p. 326), showing that the enthalpy of activation of the mutated enzyme (glycines added) was the same as that of the unmutated enzyme, but that its free energy of activation was lower because of an increase in entropy (due to unfolding of different parts of the enzyme).

Determining these two parameters takes an enormous amount of work (see the story from grad school at the end). You have to determine rate constants at various temperatures, plot the logarithm of the rate constant divided by temperature against the reciprocal of temperature (an Eyring plot), and measure the slope of the line you get to obtain the enthalpy of activation.   The activation entropy is determined from the intercept of the straight line (which hopefully IS straight).  Determining the various data points is incredibly tedious and uninteresting.
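
Once the rate constants are in hand, the analysis itself is mechanical.  Here is a sketch of an Eyring analysis in code, using made-up activation parameters (not values from the paper) to show that the slope and intercept of the ln(k/T) versus 1/T line recover the enthalpy and entropy of activation:

```python
import math

R  = 8.314          # gas constant, J/(mol K)
kB = 1.380649e-23   # Boltzmann constant, J/K
h  = 6.62607e-34    # Planck constant, J s

# Assumed activation parameters, for illustration only:
dH = 50_000.0       # enthalpy of activation, J/mol
dS = -30.0          # entropy of activation, J/(mol K)

# Synthesize rate constants from the Eyring equation at several temperatures
temps = [273.15, 283.15, 293.15, 303.15, 310.15]
rates = [(kB * T / h) * math.exp(dS / R) * math.exp(-dH / (R * T)) for T in temps]

# Eyring plot: ln(k/T) vs 1/T is a straight line,
# slope = -dH/R, intercept = ln(kB/h) + dS/R
xs = [1.0 / T for T in temps]
ys = [math.log(k / T) for k, T in zip(rates, temps)]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

print(f"recovered dH = {-slope * R / 1000:.1f} kJ/mol")                   # ~50.0
print(f"recovered dS = {(intercept - math.log(kB / h)) * R:.1f} J/(mol K)")  # ~-30.0
```

The tedium the authors faced is in measuring those rate constants at each temperature, not in the fit.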

So cold-tolerant organisms are using entropy to make their enzymes work.

Grad school story — back in the day, studies of organic reaction mechanisms were very involved with kinetic measurements (that’s where Sn1 and Sn2 actually come from).  I saw the following happen several times, and resolved never to get sucked into having to actually do kinetic measurements.  Some hapless wretch would present his kinetic data at a seminar, only to have Frank Westheimer think of something else and suggest another 6 months of kinetic measurements, so back he went to the lab for yet more drudgery.


Molecular biology’s oxymoron

Dear reader.  What does a gene do?  It codes for something.  What does a nonCoding gene do?  It also codes for something — just RNA instead of protein. It’s molecular biology’s very own oxymoron, a throwback to the heroic protein-centric early days of molecular biology. The term has been enshrined by usage for so long that it’s impossible to get rid of.  Nonetheless, the latest work found even more nonCoding genes than genes actually coding for protein.

An amusing article from Nature (vol. 558 pp. 354 – 355 ’18) has the current state of play.   The latest estimate is from GTEx, which sequenced 900 billion RNAs found in various human tissues, matched them to the sequence of the human genome, and used computer algorithms to determine which of them were the products of genes coding for proteins and which of genes coding for something else.

The report from GTEx (the Genotype-Tissue Expression project) found 21,306 protein-coding genes and 21,856 non-coding genes — amazingly, more nonCoding genes than protein-coding ones.  This is many more genes than found in the two most widely used human gene databases. The GENCODE gene set, maintained by the EBI, includes 19,901 protein-coding genes and 15,779 non-coding genes. RefSeq, a database run by the US National Center for Biotechnology Information (NCBI), lists 20,203 protein-coding genes and 17,871 non-coding genes.

Stay tuned.  The fat lady hasn’t sung.

Quotas by any other name

I received the following from Drew Faust, president of Harvard University, 2 days ago.  My comments are at the end.  I’d be interested in what readers think about the issue. Click “Post a Comment” at the end to do so.

Harvard University - Office of the President

Dear Alumni and Friends,

In the weeks and months ahead, a lawsuit aimed to compromise Harvard’s ability to compose a diverse student body will move forward in the courts and in the media. As the case proceeds, an organization called Students for Fair Admissions—formed in part to oppose Harvard’s commitment to diversity—will seek to paint an unfamiliar and inaccurate image of our community and our admissions processes, including by raising allegations of discrimination against Asian-American applicants to Harvard College. These claims will rely on misleading, selectively presented data taken out of context. Their intent is to question the integrity of the undergraduate admissions process and to advance a divisive agenda. Please see here for more information about the case.

Year after year, Harvard brings together a community that is the most varied and diverse that any of us is likely ever to encounter. Harvard students benefit from working and living alongside people of different backgrounds, experiences, and perspectives as they prepare for the complex world that awaits them and their considerable talents.

I have affirmed in the past, and do so again today, that Harvard will vigorously defend its longstanding values and the processes by which it seeks to create a diverse educational community. We will stand behind an approach that has been held up as legal and fair by the Supreme Court, one that relies on broad and extensive outreach to exceptional students in order to attract excellence from all backgrounds.

As this case generates widespread attention and comment, Harvard will react swiftly and thoughtfully to defend diversity as the source of our strength and our excellence—and to affirm the integrity of our admissions process. A diverse student body enables us to enrich, to educate, and to challenge one another. As a university community, we are bound across differences by a shared commitment to learning, to pursuing truth, and to embracing the rigor and respect of argument and evidence. We never give up on the promise of a world made better by an assumption revisited, an understanding expanded, or a truth questioned—again and again and again.

Last month, I presided over our Commencement Exercises for a final time and reveled in the accomplishments of our graduates and alumni, and in the joy and pride of the faculty who educated them, the staff who enabled their manifold successes, and the family members who helped nurture them and their aspirations. Tercentenary Theatre was filled with individuals from the widest range of backgrounds and life experiences. It was a powerful reminder that the heart of this extraordinary institution is its people.

Now, we have an opportunity to stand together and to defend the ideals and the people that make our community so extraordinary. I am committed to ensuring that veritas will prevail.

Sincerely,
Drew Faust 

© 2018 The President and Fellows of Harvard College | Harvard.edu

Harvard University | Massachusetts Hall | Cambridge, MA 02138

Dr Faust:

You are not defending diversity — you are defending quotas against Asians as Harvard did against Jews years ago — and I’m not Asian

 M. S.  Chemistry 1962

Chemistry and Biochemistry can’t answer the important questions but without them we are lost

The last two posts — one concerning the histone code and cerebral embryogenesis https://luysii.wordpress.com/2018/06/07/omar-khayyam-and-the-embryology-of-the-cerebral-cortex/ and the other concerning PVT1, enhancers, promoters and cancer https://luysii.wordpress.com/2018/06/04/marshall-mcluhan-rides-again/ — would be impossible without chemical and biochemical knowledge and technology, but the results they produce and the answers they seek lie totally outside both disciplines.

In fact they belong outside the physical realm in the space of logic, ideas, function — e.g. in the other half of the Cartesian dichotomy — the realm of ideas and spirit.  Certainly the biological issues are instantiated physically in molecules, just as computer memory used to be instantiated in magnetic cores, rather than transistors.

Back when I was starting out as a grad student in Chemistry in the early 60s, people were actually discovering the genetic code, poly U coded for phenylalanine etc. etc.  Our view was that all we had to do was determine the structure of things and understanding would follow.  The first xray structures of proteins (myoglobin) and Anfinsen’s result on ribonuclease showing that it could fold into its final compact form all by itself reinforced this. It also led us to think that all proteins had ‘a’ structure.

This led to people thinking that the only differences between us and a chimpanzee were a few amino acid differences in our proteins (remember the slogan that we were 98% chimpanzee).

So without chemistry and biochemistry we’d be lost, but the days of crude reductionism of the 60s and 70s are gone forever.  Here’s another example of chemical and biochemical impotence from an earlier post.

The limits of chemical reductionism

“Everything in chemistry turns blue or explodes”, so sayeth a philosophy major roommate years ago.  Chemists are used to being crapped on, because it starts so early and never lets up.  However, knowing a lot of organic chemistry and molecular biology allows you to see very clearly one answer to a serious philosophical question — when and where does scientific reductionism fail?

Early on, physicists said that quantum mechanics explains all of chemistry.  Well, it does explain why atoms have orbitals, and it does give a few hints as to the nature of the chemical bond between simple atoms, but no one can solve the equations exactly for systems of chemical interest.  Approximate the solution, yes, but this is hardly a pure reduction of chemistry to physics.  So we’ve failed to reduce chemistry to physics in practice because the equations of quantum mechanics are so hard to solve, but this is hardly a failure of reductionism in principle.

The last post “The death of the synonymous codon – II” puts you exactly at the nidus of the failure of chemical reductionism to bag the biggest prey of all, an understanding of the living cell and with it of life itself.  We know the chemistry of nucleotides, Watson-Crick base pairing, and enzyme kinetics quite well.  We understand why less transfer RNA for a particular codon would mean slower protein synthesis.  Chemists understand what a protein conformation is, although we can’t predict it 100% of the time from the amino acid sequence.  So we do understand exactly why the same amino acid sequence using different codons would result in slower synthesis of gamma actin than beta actin, and why the slower synthesis would allow a more leisurely exploration of conformational space allowing gamma actin to find a conformation which would be modified by linking it to another protein (ubiquitin) leading to its destruction.  Not bad.  Not bad at all.

Now ask yourself, why the cell would want to have less gamma actin around than beta actin.  There is no conceivable explanation for this in terms of chemistry.  A better understanding of protein structure won’t give it to you.  Certainly, beta and gamma actin differ slightly in amino acid sequence (4/375) so their structure won’t be exactly the same.  Studying this till the cows come home won’t answer the question, as it’s on an entirely different level than chemistry.

Cellular and organismal molecular biology is full of questions like that, but gamma and beta actin are the closest chemists have come to explaining the disparity in the abundance of two closely related proteins on a purely chemical basis.

So there you have it.  Physicality has gone as far as it can go in explaining the mechanism of the effect, but has nothing to say whatsoever about why the effect is present.  It’s the Cartesian dualism between physicality and the realm of ideas, and you’ve just seen the junction between the two live and in color, happening right now in just about every cell inside you.  So the effect is not some trivial toy model someone made up.

Whether philosophers have the intellectual cojones to master all this chemistry and molecular biology is unclear.  Probably no one has tried (please correct me if I’m wrong).  They are certainly capable of mounting intellectual effort — they write book after book about Gödel’s proof and the mathematical logic behind it. My guess is that they are attracted to such things because logic and math are so definitive, general and nonparticular.

Chemistry and molecular biology aren’t general this way.  We study a very arbitrary collection of molecules, which must simply be learned and dealt with. Amino acids are of one chirality. The alpha helix turns one way and not the other.  Our bodies use 20 particular amino acids not any of the zillions of possible amino acids chemists can make.  This sort of thing may turn off the philosophical mind which has a taste for the abstract and general (at least my roommates majoring in it were this way).

If you’re interested in how far reductionism can take us, have a look at http://wavefunction.fieldofscience.com/2011/04/dirac-bernstein-weinberg-and.html

Were my two philosopher roommates still alive, they might come up with something like “That’s how it works in practice, but how does it work in theory?”


Omar Khayyam and the embryology of the cerebral cortex

“The moving finger writes; and, having writ, moves on”.  Did Omar Khayyam realize he was talking about the embryology of the human cerebral cortex?  Although apparently far removed from chemistry, embryology most certainly is not.  The moving finger in this case is an enzyme modifying histone proteins.

In the last post (https://luysii.wordpress.com/2018/06/04/marshall-mcluhan-rides-again/) I discussed how one site in the genome modified the expression of a protein important in cancer (myc) even though it was 53,000 positions (nucleotides) away.  Stretched out into the usual B-form DNA shown in the textbooks, this would span some 17.5 microns — longer than the 10 micron diameter of the usual spherical nucleus.  If our 3,200,000,000 nucleotide genome were chopped up into pieces this size, some 60,000 segments would have to be crammed in.  Clearly DNA must be bent and wrapped around something, and that something is the nucleosome, which is shaped like a fat disk.  Some 160 or so nucleotides are wrapped (twice) around the circumference of the nucleosome, giving a 10fold compaction in length.

The nucleosome is made of histone proteins, and here is where the moving finger comes in.  There are all sorts of chemical modifications of histones (some 130 different ones are known).  Some are well known to most protein chemists: methylation of the amino groups of lysine and the guanidino groups of arginine, phosphorylation of serine and threonine, and acetylation of lysine.  Then there are the obscure small modifications — crotonylation, succinylation and malonylation.  Then there are the protein modifications — ubiquitination, sumoylation, rastafarination etc. etc.

What’s the point?  All these modifications determine what proteins and enzymes can and can’t react with a given stretch of DNA.  It goes by the name of histone code, and has little to do with the ordering of the nucleotides in DNA (the genetic code).  The particular set of histone modifications is heritable when cells divide.

Before going on, it’s worth considering just how miraculous our cerebral cortex is.  The latest estimate is that we have 80 billion neurons connected by 150 trillion synapses between them.  That’s far too much for 3,200,000,000 nucleotides to explicitly code for.
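
A crude information budget makes the point concrete.  This is a sketch using the round numbers quoted above, assuming 2 bits per nucleotide and — very generously — a single bit per synapse:

```python
# Comparing the genome's raw information capacity with the synapse count.
bases = 3.2e9            # nucleotides in the human genome
genome_bits = 2 * bases  # 2 bits per base (A, C, G or T)

synapses = 1.5e14        # ~150 trillion synapses

print(f"genome capacity: {genome_bits:.1e} bits")   # 6.4e9 bits
print(f"synapses:        {synapses:.1e}")
print(f"synapses per genome bit: {synapses / genome_bits:,.0f}")
```

Even if every base in the genome did nothing but specify synapses, there would be over 20,000 synapses per available bit — so wiring must come from developmental rules, not an explicit blueprint.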

It turns out that almost all neurons in the cerebral cortex are born in a small area lining the ventricles.  They then migrate peripherally to form the 6 layered cerebral cortex.  The stem cell of the embryonic cortex is something called a radial glial cell, which divides and divides, each division producing 1 radial glial cell and 1 neuron, which then goes on its merry way up to the cortex.

Which brings us (at last) to the moving finger, an enzyme called PRDM16 which puts a methyl group on two particular lysines (#4 and #9) of histone H3.  PRDM16 is highly enriched in radial glia and nearly absent in mature neurons.  Knock PRDM16 out in radial glia, and the cortex is disorganized due to deficient neuronal migration.  Knock it out in newly formed neurons and the cortex forms normally.  The moving finger having writ (in radial glia) moves on and is no longer needed (by mature neurons). “Nor all thy Piety nor Wit shall lure it back to cancel half a line, Nor all thy tears wash out a word of it.”

You may read more about this fascinating work in Neuron vol. 98 pp. 867 – 869, 945 – 962 ’18

Marshall McLuhan rides again

Marshall McLuhan famously said “the medium is the message”. Who knew he was talking about molecular biology?  But he was, if you think of the process of transcription of DNA into various forms of RNA as the medium and the products of transcription as the message.  That’s exactly what this paper [ Cell vol. 171 pp. 103 – 119 ’17 ] says.

T cells are a type of immune cell formed in the thymus.  One of the important transcription factors which turns on expression of the genes that make a T cell a T cell is called Bcl11b.  Early in T cell development it is sequestered away near the nuclear membrane in highly compacted DNA. Remember that you must compress your 1 meter of DNA down by 100,000fold to have it fit in the nucleus, which is 1/100,000th of a meter (10 microns) across.

What turns it on?  Transcription of a nonCoding (for protein) RNA called ThymoD.  From my reading of the paper, the ThymoD RNA itself doesn’t do anything; just the act of opening up compacted DNA near the nuclear membrane produced by transcribing ThymoD is enough to cause this part of the genome to move into the center of the nucleus, where the gene for Bcl11b can be transcribed into RNA.

There’s a lot more to the paper,  but that’s the message if you will.  It’s the act of transcription rather than what is being transcribed which is important.

Here’s more about McLuhan — https://en.wikipedia.org/wiki/Marshall_McLuhan

If some of the terms used here are unfamiliar — look at the following post and follow the links as far as you need to.  https://luysii.wordpress.com/2010/07/07/molecular-biology-survival-guide-for-chemists-i-dna-and-protein-coding-gene-structure/

Well, that was an old post.  Here’s another example [ Cell vol. 173 pp. 1318 – 1319, 1398 – 1412 ’18 ]. It concerns a gene called PVT1 (Plasmacytoma Variant Translocation 1), found 25 years ago.  It was the first gene coding for a long nonCoding (for protein) RNA (lncRNA) found at a recurrent breakpoint in Burkitt’s lymphoma, which sadly took a friend (Nick Cozzarelli) far too young (he edited PNAS for 10 years).

So PVT1 is involved in cancer.  The translocation turns on expression of the myc oncogene, something that has been studied out the gazoo and we’re still not sure of how it causes cancer. I’ve got 60,000 characters of notes on the damn thing, but as someone said 6 years ago “Whatever the latest trend in cancer biology — cell cycle, cell growth, apoptosis, metabolism, cancer stem cells, microRNAs, angiogenesis, inflammation — Myc is there regulating most of the key genes”

We do know that the lncRNA coded by PVT1 in some way stabilizes the myc protein [ Nature vol. 512 pp. 82 – 87 ’14 ].  However, when the lncRNA of PVT1 was knocked out in cell experiments, myc expression was still turned on.

PVT1 resides 53 kiloBases away from myc on chromosome #8.  Since each base is 3.3 Angstroms thick, that’s 175,000 Angstroms = 17,500 nanoMeters = 17.5 microns if the DNA is stretched out into the B-DNA form seen in all the textbooks — longer than the 10 micron diameter of the average nucleus.  You can get an idea of how compacted DNA is in the nucleus when you realize that there are 3,200,000,000/53,000 = 60,000 such segments in the genome, all packed into a sphere 10 microns in diameter.
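
The unit conversions above can be checked in a few lines (1 Angstrom = 10^-4 microns):

```python
# Checking the stretched-DNA arithmetic for the PVT1-to-myc separation.
rise_per_bp_A = 3.3            # Angstroms per base pair in B-DNA (the post's value)
separation_bp = 53_000         # PVT1 to myc distance in base pairs
genome_bp = 3.2e9              # human genome size
nucleus_diameter_um = 10.0

length_A = separation_bp * rise_per_bp_A   # ~175,000 Angstroms
length_um = length_A * 1e-4                # convert Angstroms to microns
print(f"{length_A:,.0f} Angstroms = {length_um:.1f} microns")
print(f"ratio to nucleus diameter: {length_um / nucleus_diameter_um:.2f}")
print(f"segments of this size in the genome: {genome_bp / separation_bp:,.0f}")
```

So a single 53 kiloBase stretch, laid out straight, wouldn’t even fit across the nucleus, and there are some 60,000 such stretches to pack in.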

To cut to the chase, within the PVT1 gene there are at least 4 enhancers (use the link above to find what all the terms actually mean).  Briefly, enhancers are stretches of DNA that promoters bind to help turn on the transcription of genes into RNA (messenger and otherwise).  This means that the promoter of PVT1 binds one or more of the enhancers, preventing the promoter of the myc oncogene from binding.

Just how they know that there are 4 enhancers in PVT1 is a story in itself.  They cut various parts of the PVT1 gene (which itself has 306,721 basepairs) out, placed them in front of a reporter gene, and saw whether transcription increased.

The actual repressor of myc is the promoter of PVT1 according to the paper (it binds to the enhancers present in the gene body preventing the myc promoter from doing so).  Things may be a bit more complicated as the PVT1 gene also codes for a cluster of 7 microRNAs and what they do isn’t explained in the paper.

So it’s as if the sardonic sense of humor of ‘nature’, ‘evolution’, ‘God’, (call it what you will) has set molecular biologists off on a wild goose chase, looking at the structure of the gene product (the lncRNA) to determine the function of the gene, when actually it’s the promoter in front of the gene and the enhancers within which are performing the function.

The mechanism may be more widespread, as 4/36 lncRNA promoters silenced by CRISPR techniques subsequently activated genes in a 1 megaBase window (possibly by the same mechanism as PVT1 and myc).

Where does McLuhan come in?  The Cell paper also notes that lncRNA gene promoters are more evolutionarily conserved than their gene bodies.  So the medium (promoter, enhancer) is the message once again (rather than what we thought the message was).