Bad news on the cancer front

There’s an interesting article in the current Cell (vol. 141 pp. 917 – 919 ’10, 11 June issue) which discusses some of the uses to which (cheap) whole genome sequencing has been put.  I riffed on one potential application (which seems to have been missed by the authors) in a previous post.

The bad news is to be found in a Nature paper (vol. 465 pp. 473 – 477 ’10, 27 May issue).  The authors (numbering 27) completely sequenced the genome of a form of lung cancer (adenocarcinoma) in one unfortunate individual (1.25 packs of cigarettes daily x 15 years), along with the genome of the surrounding normal tissue.  They sequenced each type of genome many times over (60 times for the tumor, 46 for the normal tissue) for a total of 171.25 gigabases of sequence (recall that the human genome has about 3.2 gigabases), in effect duplicating the human genome project, which took a decade and billions of dollars, some 53 times over.
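The "53 times over" figure is just the quoted totals divided out; here is the back-of-the-envelope check in a few lines of Python (the numbers are the ones given above, not anything beyond them):

```python
# Rough check of the sequencing numbers quoted above.
GENOME_SIZE_GB = 3.2        # approximate size of the human genome, in gigabases
TOTAL_SEQUENCE_GB = 171.25  # total sequence generated (tumor + normal tissue)

# How many human genomes' worth of sequence is that, in aggregate?
genomes_worth = TOTAL_SEQUENCE_GB / GENOME_SIZE_GB
print(f"about {genomes_worth:.1f} human genomes' worth of sequence")
```

This comes out to roughly 53.5, which is where the "some 53 times over" comparison to the original human genome project comes from.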

They found some 50,000 (not a misprint) changes at the single nucleotide (base) level between the genomes of normal and tumor tissue.  In addition, there were 43 large-scale structural variations.  Why is this bad news? It’s unlikely that drugs will be found to stop something this divergent in its tracks (which makes Gleevec all the more miraculous).  They did find mutations in some of the known pathways implicated in cancer formation, but there were so many mutations overall that this could have occurred by chance.  Will we now have an army of 50,000 researchers tracking down each of the mutations?

It does tell us that the genome of a cancer cell is incredibly unstable (something we already knew, though not just how unstable it is). The “War on Cancer” is now in ripe middle age, having been launched in 1971 amidst lots of hubris (we can put a man on the moon, surely we can cure cancer).  We are just beginning to realize how complex an adversary we have.  Hopefully research into alternative forms of energy will have better luck, but Lyman Spitzer was talking about fusion power when I was a freshman in ’56.



  • Wavefunction  On June 18, 2010 at 9:48 am

    The situation is undoubtedly very complex but it’s probably not as bad as it sounds here. Not all 50,000 mutations will be created equal. The protein products they encode will be connected in signal transduction cascades which will probably contain a few crucial ‘hubs’ for therapeutic intervention. So you may not have to target all the mutations to kill the cell; targeting a privileged subset should be adequate. We will have to wait and watch.

  • luysii  On June 18, 2010 at 1:46 pm

    You’re probably correct, but did any of us suspect that there would be this many single nucleotide changes? Where does this leave the (much criticized by some) expensive (1.5 gigaDollars) Cancer Genome Atlas ?

  • Yonemoto  On July 2, 2010 at 11:18 am

    Re: nuclear fusion

  • Yonemoto  On July 2, 2010 at 11:22 am

    … I think it shouldn’t be surprising that most cancer drugs just blast the heck out of anything living. Quite a few of them were identified because they kill bacteria!

    Keep in mind that Gleevec is an anti-leukemic. Leukemia is not quite one of those “big five” cancers that require five major changes to become cancer. Since leukemia is a blood disease: 1) no angiogenesis is necessary, 2) no invasion/motility change is necessary, 3) no loss of the physiological zip code is necessary. And I think it’s just a structurally weak part of our genome that basically everyone has (a somewhat homologous section of two different chromosomes).
