The idea might not be new – we already diluted DNA a decade ago (see this 2003 paper). A new Nature paper by Peters et al. (Accurate whole-genome sequencing and haplotyping from 10 to 20 human cells…) now shows that diluting DNA into 384 wells, adding unique tags, and pooling again before sequencing everything on a HiSeq results in an enormous reduction of sequencing errors – a problem we have been fighting for a year now. IMHO the paper isn’t only about the low number of cells that can be sequenced, but also about error reduction in WGS. The two key facts are certainly
To ensure complete representation of the genome we maximized the input of DNA fragments for a given read coverage and number of aliquots. Unlike other experimental approaches this resulted in low-coverage read data for each fragment in each of the wells a fragment is found in.
plus an intelligent phasing algorithm:
Application of our algorithm to the (…) libraries resulted in the placement of on average 92% of the phasable heterozygous SNPs into long contigs with N50s of approx 1 Mb.
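The two quoted ingredients can be sketched in a few lines of Python. This is only a toy illustration of the principle, not the paper’s actual pipeline: `confirm_variants` keeps a call only when independent wells (aliquots) support it, which is why random sequencing errors drop out, and `n50` computes the standard contig N50 metric used to report the phased contigs. All well tags, positions, and contig lengths below are made up.

```python
from collections import defaultdict

def confirm_variants(read_calls, min_wells=2):
    """Keep a variant call only if reads from at least `min_wells`
    independent wells support it. A real DNA fragment ends up in
    several aliquots; a random sequencing error rarely recurs at the
    same position across wells."""
    support = defaultdict(set)
    for well_tag, position, allele in read_calls:
        support[(position, allele)].add(well_tag)
    return {var for var, wells in support.items() if len(wells) >= min_wells}

def n50(contig_lengths):
    """N50: the contig length L such that contigs of length >= L
    together cover at least half of the total phased sequence."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

# Toy data, purely for illustration:
calls = [
    ("well_007", 1500, "A"),  # true heterozygous variant, three wells
    ("well_112", 1500, "A"),
    ("well_301", 1500, "A"),
    ("well_007", 2200, "G"),  # sporadic error in a single well
]
print(confirm_variants(calls))             # {(1500, 'A')}
print(n50([1_000_000, 800_000, 500_000]))  # 800000
```

The point of the sketch is the set of wells per call: the unique tags added before pooling are what make that set recoverable after sequencing.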
Looks very much like this will be my “Paper of the Year 2012”, if we can reproduce it. It will also make another dream come true: the correct mapping of variants
… any genes or noncoding functional sequences constituted by two homologous chromosomes can be genetically very different. Whether variant alleles reside on the same chromosome (in cis), or on opposite chromosomes (in trans), is key to understanding their impact on gene function and phenotype (…) different configurations of mutations result in different phenotypes: Two null mutations in cis left the second allele intact, but when in trans, no functional form of the gene was present. Cis versus trans configurations between mutations in cell essential genes and tumor suppressor genes, even megabases (Mb) apart, have been shown to result in profound alterations of cancer phenotype, spectrum, and progression.
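Once variants are phased into long haplotype contigs, the cis/trans question reduces to checking whether two heterozygous mutations were assigned to the same homolog. A minimal sketch, assuming a hypothetical encoding where each phased mutation carries a homolog index of 0 or 1:

```python
def cis_or_trans(hap_of_mut1, hap_of_mut2):
    """Each heterozygous mutation is phased to homolog 0 or 1.
    Same homolog: cis (the other allele stays intact).
    Opposite homologs: trans (no functional copy remains)."""
    return "cis" if hap_of_mut1 == hap_of_mut2 else "trans"

# Two null mutations in one gene, as in the quoted example:
print(cis_or_trans(0, 0))  # cis   -> second allele intact
print(cis_or_trans(0, 1))  # trans -> no functional form of the gene
```

Trivial once the phasing is done – which is exactly why 1 Mb haplotype contigs matter even for mutations that are megabases apart.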