The PHG Foundation monthly newsletter features news and views about genetics and genetics research, from a public health perspective. The newsletter is written by staff of the PHG Foundation.

In the news

News story   |   By Dr Philippa Brice   |   Published 31 August 2009

A UK trial of a form of gene therapy for Duchenne muscular dystrophy or DMD (see previous news) has reported promising results. The therapeutic AVI-4658 is an antisense oligonucleotide, administered via intramuscular injection, which binds to exon 51 of the dystrophin pre-mRNA so that this exon is skipped during splicing. In patients whose disease is caused by deletions that disrupt the gene's reading frame – normally resulting in premature truncation and a non-functional protein – exon skipping can restore the reading frame and allow production of a shortened but functional form of the dystrophin protein.

Researchers led by Professor Francesco Muntoni at the University College London Institute of Child Health reported that the treatments appeared safe and effectively induced the expression of dystrophin in the treated muscles; they plan now to look at the effects of intravenous administration of AVI-4658 (see Yahoo news report). Whilst this particular molecule would only be suitable for use in around 13% of DMD patients, similar therapeutics could be used for many more.

Meanwhile, a new report from a cross-party parliamentary group has found that disparities in access to specialist care for muscular dystrophy are resulting in serious inequalities including a difference in life expectancy of more than 12 years between the best and least-well served areas of the UK. This finding echoes that of a 2007 Muscular Dystrophy Campaign report (see previous news). Access to Specialist Neuromuscular Care: The Walton Report finds that the National Health Service (NHS) relies too heavily on charities to fund key worker posts, and calls for establishment of new specialised services for neuromuscular diseases, including named Muscular Dystrophy leads and a NICE clinical guideline for muscular dystrophy [Kmietowicz Z (2009) BMJ 339:b3436].


News story   |   By Dr Philippa Brice   |   Published 24 August 2009
A paper in Science has called for a revision of policy with respect to the use of biological samples and data from children in biobank projects. Some (though by no means all) biobanks include samples from children, often collected at the same time as their parents, as a resource for long-term studies on genetic and environmental factors affecting health and development. The authors of the paper argue that children are a vulnerable research population, lacking the capacity to consent to participation in such projects, but who will in time become adults able to understand them. Their concern is that biobank subjects enrolled as children will not have a genuine option to opt out on reaching adulthood, because ‘privacy can never be completely ensured within biobanks’ and DNA samples could be used to identify subjects in later life [Gurwitz D et al. (2009) Science 325(5942):818-9].

They propose a more cautious approach to the use of samples from children, notably measures to prevent biobanks from sharing these samples or linked data with other researchers until the corresponding children attain adulthood and are able to grant or withhold suitably informed consent for the use of their samples. Before then, for example, some general information might be made available about genetic variants affecting health, but without disclosure of the specific DNA sequences involved, although the benefit of such limited information to other researchers would be questionable.

Importantly, these recommendations are directed solely towards population biobank resources; since disease-specific research projects and databases could offer potential benefits to participating children and their families, the authors propose that parental consent should be sufficient to authorise normal sharing of data in these cases.

The paper acknowledges that the proposed policy amendments ‘may negatively impact research’, but suggests that the additional expense and inconvenience would be justified by improved public trust in biomedical research. Senior author Dr Bartha Knoppers, newly appointed director of McGill University's Centre for Genomics and Policy in Canada, reportedly said of this approach: "It's not restrictive, it's cautious” (see GenomeWeb news). 

Comment: The authors seek to protect the rights of children by avoiding any risk of the effectively irreversible disclosure of potentially personally identifiable information (in the form of DNA samples or sequence information) before they are of an age to give consent; they note that ‘a child whose DNA sample is donated by her parents today and distributed over the next few decades for research projects around the world can potentially be in the public eye decades later’. Since concerns about privacy and security of data are very important for adult participants in biobanks, caution is certainly a laudable approach when considering the interests of children – although there is always the possibility that the proposed restrictions could in fact erode rather than maintain public trust in biomedical research, by implying that data is less secure than it actually is.


News story   |   By Dr Philippa Brice   |   Published 19 August 2009
At the end of last month the UK Biobank announced an additional £6 million funding from the publicly-funded Medical Research Council and Department of Health and the charity the Wellcome Trust. The announcement was made on the day that HRH Princess Anne, The Princess Royal, officially opened UK Biobank’s £4.5 million archive facility, which will eventually store some 10 million samples frozen at -80°C, totalling around 9,500 litres of blood and 2,500 litres of urine (see press release). The freezer archive is reportedly the largest of its kind in the world. To date, UK Biobank has recruited just over 350,000 of the target 500,000 middle-aged volunteers aged 40-69 for this longitudinal study of the impact of environmental, lifestyle and genetic factors in the development of a range of chronic diseases (see previous news). For more information on UK Biobank, see BBC news feature.

The Wellcome Trust has also just announced £2.5 million funding for the Chinese Kadoorie Biobank Study, originally established in 2004 as a collaborative project between the University of Oxford's Clinical Trial Service Unit and the Chinese Centre for Disease Control and Prevention, with funding from the Kadoorie Charitable Foundation in Hong Kong. The Kadoorie Biobank Study has already recruited over 500,000 people aged 35-74 from both rural and urban areas throughout China; like the UK Biobank, the aim is to study environmental and genetic factors involved in common conditions. The Biobank is integrated with China's national systems of healthcare and disease surveillance. Professor Zhengming Chen of the University of Oxford, who leads the UK arm of the project, noted that China was uniquely placed for large-scale medical research, adding: "There is a great deal of unexplained variation in disease rate and risk exposure and a high incidence for many common conditions such as stroke" (see press release). The Wellcome Trust funding will support the project for the next five years.


News story   |   By Dr Sowmiya Moorthie   |   Published 17 August 2009
A US report focusing on the need to draw clearer distinctions between scientific advice and policy decisions was released earlier this month by the Bipartisan Policy Center, a non-profit body based in Washington (reported by Nature news). The Center's Science for Policy Project co-chair Sherwood Boehlert explained: "Often, policy disputes are cast as fights over science. This damages the credibility of science and obscures the real issues that ought to be debated" (see press release).

The final report contains recommendations and proposes specific procedures on how scientific results could be used in developing regulatory policies in the US. The recommendations were drawn up by a diverse panel of experts from government, academia, business and non-profit organisations and are aimed at addressing claims that science in the US is being politicised and regulation lacks a strong scientific basis.

The report calls for the US Administration to ensure that federal agencies explicitly differentiate between questions that involve scientific judgements and those that involve judgements about other matters of policy (e.g. ethics, economics), and also calls on the federal government and members of the scientific community (scientists, journals, universities) to strengthen the peer-review process in order to improve the use of science in regulatory processes. This could be achieved by expanding the available information on scientific studies and setting standards governing conflict of interest. In addition, the panel produced a number of detailed recommendations about the formation and use of scientific advisory committees, including setting requirements for disclosure and dealing with conflicts of interest and bias.

It is hoped that the recommendations will make the regulatory process more rigorous and transparent, and that “when there are disagreements, officials and the public will have a clearer sense of what they are about”.

Keywords : Regulatory Framework


News story   |   By Dr Philippa Brice   |   Published 11 August 2009

A new report has outlined how Germany should move forward in the area of synthetic biology, the de novo creation of novel biological systems (see Nature news report). Whilst some aspects of synthetic biology are highly contentious, for example the creation of new forms of life such as the artificial construction of viruses (including as potential bioweapons), some applications have obvious benefits, such as the production of drug-like molecules or biofuels. Synthetic biology is an area of simultaneous interest and concern for many countries; for example, a recent UK report examined the social and ethical challenges posed by such research (see previous news).

The new document, produced jointly by the Deutsche Forschungsgemeinschaft (German Research Foundation), German Academy of Sciences Leopoldina and the German Academy of Science and Engineering, reportedly concludes that Germany’s technological strengths make it ideally placed to work in this area, but also calls for debate on ethical issues given the risk of abuse. It also recommends the creation of a national database to hold information about artificially created DNA sequences and to conduct safety assessments for these sequences.

Keywords : Bioethics, DNA Technologies

News story   |   By Dr Caroline Wright   |   Published 10 August 2009

The Genetic Information Non-discrimination Act (GINA) was made a federal law across the US just over a year ago (see previous news). It prevents insurers from refusing to provide health insurance to healthy people on the sole basis of genetic predisposition to a disease, based on the results of a genetic test or a family history of a particular disease. It also prevents employers from requesting or using genetic information in the process of employment, promotion or dismissal of staff.

 

A recent commentary in the journal Nature Medicine highlights some of the gaps in this legislation [Wadman M. (2009) Nat Med 15(8):826]. Kathy Hudson, Director of the Genetics and Public Policy Center of Johns Hopkins University, said that "while GINA did lots of good things, there are other areas that have been left unattended." In particular, whilst health insurance is covered, long-term care insurance, disability insurance and life insurance are not addressed by GINA. As a result, numerous states are starting to adopt legislation to address this issue.

 

On a related note, Eurogentest has recently posted an unofficial English translation of the Human Genetic Examination Act (Genetic Diagnosis Act - GenDG), which was passed by the Bundesrat in Germany earlier this year (see previous news). This law, which has yet to come into force, relates to ‘genetic examinations’ and is much broader than GINA in the US. In addition to insurance and employment issues relating to discrimination, it also covers storage and destruction of both samples and data, provision for genetic counselling, and informed consent. It also states that genetic tests may only be conducted by medical doctors, thus essentially banning direct-to-consumer tests. Finally, it defines criminal penalties for anyone who violates the law, ranging from fines of up to €300,000 to imprisonment in certain circumstances.

 

Comment: At the core of both of these laws lies the concept that ‘genetic’ information is somehow importantly different from other information, and requires special protection. However, because nearly every human trait is in part determined by our genes, which interact with our external environment in a highly complex and unpredictable manner, the term ‘genetic’ is very difficult to define adequately. For example, GINA explicitly excludes sex, an almost entirely genetic trait that depends upon the inheritance of the X and Y chromosomes, from constituting genetic information. Moreover, we all have some genetic predispositions – of varying magnitude and importance – to some diseases, and disentangling the contribution of an individual's genes from that of their environment is all but impossible for most diseases.

 

Since every individual is genetically distinct, it is always possible to discriminate between individuals based on genetic differences; the question, therefore, is when does such discrimination represent the unfair treatment of individuals? The principle of ‘actuarial fairness’ commonly used by insurance companies is based upon individual risk prediction specifically for the purpose of discriminating between people according to numerous risk factors, and offering premiums that reflect the predicted risk groupings. In the UK (and previously the US), insurers regularly use family history to inform their risk prediction, along with numerous other risk factors, a practice that has gone relatively uncontested for years. Similarly, the process by which employers select a future employee from a group of applicants explicitly involves discriminating between individuals based upon merit, which itself is not unrelated to either genetics or family history.

 

From this perspective, with respect to both the US and German laws, it is rather unclear why ‘genetic’ information has been singled out as requiring exceptional treatment compared with other kinds of health-related information. In reality, the results of numerous other health-related tests (e.g. blood glucose) may be much more predictive of disease than many genetic tests, and some (e.g. HIV status) are much more sensitive and have greater implications for family members. Perhaps, instead of taking a blanket regulatory approach to preventing unfair discrimination based on the rather indistinct term ‘genetic’, criteria such as the clinical validity and utility of the test (be it diagnostic or predictive) and the personal sensitivity of the information would be more appropriate and effective.


News story   |   By Dr Sowmiya Moorthie   |   Published 4 August 2009

Researchers from The Children's Hospital of Philadelphia have recently created a database of copy number variations detected in disease-free individuals in order to aid the diagnosis and identification of genetic diseases [Shaikh TH et al. (2009) Genome Res Epub]. Differences in the number of copies of a particular gene or segment of DNA are referred to as copy number variations (CNVs). These variations can be found across the human genome and have been implicated in a number of genetic disorders; it has also been suggested that they may influence susceptibility and resistance to disease (see previous news). However, a large number of CNVs are also found in healthy individuals, and differentiating between those CNVs which represent normal variation and those which contribute to disease can be a problem.

The database and high-resolution CNV map were generated following analysis of DNA from 2,026 healthy children and their parents. The study catalogued 54,462 CNVs using a uniform array and computational platform. 77.8% of the CNVs were classified as non-unique, as they were detected in more than one unrelated individual, and 22.2% were detected in just one individual. Researchers can search the database produced by Shaikh et al., which is available via the hospital's website, and compare specific CNVs to those collected in public repositories. The strengths of the database lie in the large number of individuals included in the study as well as the uniform analysis techniques that were used. In addition, DNA from both Caucasian and African-American individuals was analysed, allowing CNVs that differ between these ethnic groups to be identified.
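
The unique versus non-unique distinction can be pictured with a simple sketch; this is purely illustrative (the loci, identifiers and grouping rule below are invented, and the published analysis applied its own overlap and quality criteria):

```python
# Simplified sketch: labelling CNV loci as 'unique' or 'non-unique' according to
# how many unrelated individuals carry a call at that locus (hypothetical data).
from collections import defaultdict

# CNV calls as (individual, locus) pairs, where a locus groups overlapping calls.
cnv_calls = [
    ("child_001", "chr1:1,200,000-1,350,000"),
    ("child_002", "chr1:1,200,000-1,350,000"),
    ("child_003", "chr7:55,000,000-55,080,000"),
    ("child_004", "chr15:22,300,000-22,500,000"),
    ("child_005", "chr15:22,300,000-22,500,000"),
]

carriers = defaultdict(set)
for individual, locus in cnv_calls:
    carriers[locus].add(individual)

for locus, people in carriers.items():
    label = "non-unique" if len(people) > 1 else "unique"
    print(f"{locus}: seen in {len(people)} individual(s) -> {label}")
```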

Other resources aimed at harnessing the potential of genomics for clinical research and health include ClinSeq and the Human Connectome Project. ClinSeq is a pilot project aimed at investigating the use of large scale medical sequencing in a clinical research setting. The pilot project aims to recruit 1000 participants in the initial phase and use Sanger-based sequencing to target regions of the genome [Biesecker et al (2009) Genome Res Epub]. Initially, they plan to sequence 300-400 genes relevant to atherosclerosis and analyse the data for high-penetrance variants associated with specific clinical traits. Along with assessing the use of large-scale sequencing, the project will also provide insight into the implementation of genomic technology, informed consent, disclosure of genetic information, and archiving, analysing, and displaying sequence data. The Human Connectome Project aims to map the circuitry of the brain using brain imaging technology as well as DNA, behavioural and demographic information (see press release). It is hoped that this initiative will gather data which can be used by neuroscientists to aid our understanding of mental health and disease.


Research articles

Research article   |   By Dr Philippa Brice   |   Published 28 August 2009

Mitochondria are sub-cellular structures that release energy from food to power the cell; they contain small amounts of their own DNA (mtDNA), which includes thirteen protein-coding genes, although around 1,500 additional genes that contribute to mitochondrial structure and function are found within the main genomic DNA of the cell, inside the nucleus. The rate of mutation in mtDNA is much higher than that in nuclear DNA, and over time cells may accumulate a mixture of normal and mutant mtDNA (heteroplasmy). Since mtDNA is inherited via the cytoplasm of maternal egg cells, heteroplasmy in egg cells can allow the transmission of mtDNA mutations to the woman's offspring. Sperm cells, whilst containing mitochondria, do not normally contribute any to the embryo, as they undergo selective destruction within the fertilised egg.

Inherited defects in mitochondrial DNA are known to be responsible for some serious rare genetic diseases including mitochondrial myopathies (neuromuscular disorders), the neurodegenerative disease Leigh syndrome, and Leber's hereditary optic neuropathy. Mitochondrial diseases may be highly variable, since the distribution of defective mitochondria in different body organs can vary; defective mitochondria in the muscular or nervous system typically have the most severe effects. Infant-onset mitochondrial diseases can be lethal in childhood; others have milder symptoms, but all tend to become progressively more severe with age [Lane N (2006) Nature 440 (7084), 600-602].

A new paper in Nature reports on a nuclear transfer procedure in rhesus macaques that could represent a step towards an effective therapy for human mitochondrial disease. The US researchers extracted DNA from the nucleus of one egg cell and transplanted it into another from which the nucleus had previously been removed – crucially, without simultaneously transferring any mitochondrial DNA [Tachibana M et al. (2009) Nature doi:10.1038/nature08368]. In all, fifteen embryos were created by fertilising egg cells manipulated in this way (a technique dubbed MII spindle–chromosomal complex transfer) with sperm; these embryos carried nuclear DNA from the first egg donor monkey and mitochondrial DNA from the second.

The embryos were implanted into nine female monkeys, of which three became pregnant. At the time of publication, one had given birth to twins and another to a single baby, with pregnancy reportedly ongoing in the third. The researchers report that the three monkey offspring are all healthy and that virtually none of the donor mother’s mitochondrial DNA has been detected (although all the donor monkeys used had normal, healthy mtDNA). A similar procedure in humans using eggs from healthy female donors could allow women with mitochondrial disease to conceive their own healthy genetic offspring, free from the mitochondrial mutations associated with disease; only the mitochondrial DNA (a tiny proportion of the whole genome) would come from the egg donor. 

To date, pre-implantation genetic diagnosis (PGD) has been used to screen embryos in an attempt to select those with few or no maternal disease-associated mtDNA mutations, but it isn't possible to predict risks accurately because of the variability in how much healthy and mutated mtDNA may be passed to different embryos. David Thorburn, a mitochondrial disease specialist from the Murdoch Childrens Research Institute in Melbourne, Australia, reportedly commented: "It should be able to mimic the human situation more closely than mice. If proven safe [in humans] this could provide a huge advance" [Cyranoski D (2009) Nature doi:10.1038/news.2009.860].

Comment: This paper represents a potential step towards therapeutic cloning to avoid serious mitochondrial disease in humans, but many barriers remain. These include concerns about the long-term safety of the procedure, since previous efforts to transplant healthy mitochondria have been associated with birth defects; it will be desirable to observe larger numbers of primate offspring created in this manner. There may also be ethical objections from some quarters to the approach as a form of germ-line (i.e. permanent, heritable) genetic modification. Lead researcher Shoukhrat Mitalipov is quoted as having said that human trials could be taken forward in as little as two to three years (see BBC news report); although the particular cloning technique itself could be relatively easily transferred into the clinic in practical terms, it is likely that regulators will adopt a cautious approach until there is considerably more scientific evidence to support the safety and efficacy of the technique.


Research article   |   By Simon Leese   |   Published 26 August 2009

A paper in PLoS ONE [Hu G & Agarwal P (2009) PLoS ONE Aug 6;4(8):e6536] describes how the authors performed a systematic large-scale analysis of the transcriptome profiles of thousands of different human diseases and drugs in order to create a new network of disease-drug interactions. They anticipate that the network will aid the identification of drug targets and pathways, as well as suggesting drug repositioning possibilities, that is, making use of existing drugs to treat diseases other than those for which they are currently indicated.

The researchers compared approximately 7,000 publicly available transcriptome profiles, extracting a network of 645 disease-disease, 5008 disease-drug, and 164,374 drug-drug relationships from some 24.5 million comparisons between them. A transcriptome profile characterises which of a cell’s genes are being expressed under particular disease states or the influence of different drugs. The study made use of Gene Expression Omnibus (GEO) datasets, which provide a statistically and biologically relevant way to compare expression data that have been processed using the same platform.

The rationale for the drug-disease network that the researchers have created is the assumption that the gene expression profiles of diseases and drugs reflect the biological effects of those diseases and drugs; i.e. that diseases with similar expression profiles are related to one another, and that drugs might be suited to treating diseases with negatively correlated profiles, whilst positively correlated ones could indicate possible side effect issues.
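
The kind of comparison underlying these relationships can be sketched roughly as follows; this is an illustrative toy example only (the gene names, expression values and thresholds are invented, and the published analysis used GEO-derived signatures and its own statistical criteria rather than this bare correlation test):

```python
# Illustrative sketch: scoring a drug signature against a disease signature
# by correlating gene expression changes (hypothetical data, not from the paper).
from scipy.stats import spearmanr

# Expression changes (e.g. log2 fold changes versus control) for a shared gene set.
disease_profile = {"GENE_A": 2.1, "GENE_B": -1.4, "GENE_C": 0.8, "GENE_D": -2.0}
drug_profile    = {"GENE_A": -1.8, "GENE_B": -0.2, "GENE_C": 1.1, "GENE_D": 1.6}

genes = sorted(set(disease_profile) & set(drug_profile))
rho, pval = spearmanr([disease_profile[g] for g in genes],
                      [drug_profile[g] for g in genes])

# A strongly negative correlation suggests the drug may oppose (and so might
# treat) the disease signature; a strongly positive one may flag side effects.
if rho < -0.7:
    print(f"possible repositioning candidate (rho={rho:.2f}, p={pval:.3f})")
elif rho > 0.7:
    print(f"possible adverse-effect signal (rho={rho:.2f}, p={pval:.3f})")
else:
    print(f"no clear relationship (rho={rho:.2f}, p={pval:.3f})")
```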

The authors suggest that the current method of drug positioning based upon a limited number of comparative assays may be inefficient and ineffective. They point out that a key-lock view of drug-disease relationships is an over-simplified one and that there are actually several therapeutic keys for each lock, whilst a single key can fit multiple locks. They consider that their high-throughput approach to discovering associations has the potential to vastly speed up the application of beneficial drugs to relevant diseases.

Comment: Although this study suggests large numbers of novel drug-drug, disease-disease and disease-drug connections, the actual clinical validity of these possible connections can, as the authors acknowledge, only be verified by direct experiment. Fortunately the network the authors have created does provide many testable hypotheses, including predicted biological connections between diseases previously thought to be unrelated and new therapeutic applications for existing drugs. If the methods and the assumptions of biological validity made by the authors do prove to be valid, this could indeed prove to be a powerful tool in speeding up the processes of drug development and positioning.


Research article   |   By Dr Caroline Wright   |   Published 21 August 2009

Despite the continuing hype surrounding direct-to-consumer (DTC) genetic tests, accompanied by an increasing number of academic publications discussing their validity and potential regulation (see previous news), to date very little is known about awareness of these tests amongst general consumers or healthcare providers. In the absence of sales data from any of the major companies, one might sensibly ask whether anybody (other than journalists and intrigued geneticists) is actually buying these tests.

 

One of the first attempts to address this question uses two national surveys in the US (Healthstyles for consumers and DocStyles for physicians) to gain insights into awareness, perceptions and use of DTC personal genomic tests [Kolor K et al. (2009) Genet Med 11(8):595]. The surveys were limited to “genetic tests marketed directly to consumers that scan a person’s entire genetic makeup for potential health risks” (see previous news for examples). Of the 5399 consumer responders, 22% were aware of these tests and 0.3% (16 people) had actually used a test. Amongst the 1880 physicians sampled, 42% were aware of the tests and, over the past year, 15% had at least one patient bring the results of such a test to them for discussion. Interestingly, of this latter group, 75% (212 physicians) indicated that the results had changed some aspect of their patient’s care. Thus, regardless of the continuing concerns over validity and utility, it seems that personal genomics services are fairly widely known and are already being used to influence clinical decision making.

 


Research article   |   By Dr Caroline Wright   |   Published 19 August 2009

The August 2009 issue of Genetics in Medicine has a special focus on personalised medicine, and includes research papers examining the effect of risk updates on commercial services (see previous news), as well as investigating the awareness, characteristics and perceptions of users of direct-to-consumer (DTC) genetic risk profiling.

 

It also contains the latest addition to a growing body of literature offering expert recommendations for the evaluation and regulation of personal genomics services, from an expert multidisciplinary workshop co-hosted by the National Institutes of Health and the Centers for Disease Control and Prevention in the US [Khoury M et al. (2009) Genet Med 11(8):559-67]. The workshop was convened in late 2008 to discuss the scientific foundation for personal genomics in risk prediction and disease prevention, and highlighted the need for:

  • "developing and applying scientific standards for assessing personal genomic tests;

  • developing and applying a multidisciplinary research agenda, including observational studies and clinical trials to fill knowledge gaps in clinical validity and utility;

  • enhancing credible knowledge synthesis and information dissemination to clinicians and consumers;

  • linking scientific findings to evidence-based recommendations for use of personal genomics; and

  • assessing how the concept of personal utility can affect health benefits, costs, and risks by developing appropriate metrics for evaluation.”

    This review is published back-to-back with four commentaries, further adding to the ongoing debate over DTC genetic test services. The first discusses the current tension between “paternalism on the one hand and recklessness on the other” [Evans JP & Green RC (2009) Genet Med 11(8):568-9]. The authors attempt to find a middle ground between these two extremes by making three practical suggestions relating to the regulation of DTC genetic testing services: first, the need for transparent provision of accurate information to consumers; second, the requirement for formal laboratory certification to ensure that the assay result is accurate; and third, the importance of ensuring that tests are honestly labelled and explicitly state the evidence (or lack thereof) for demonstrated utility.

     

    The second commentary argues that variation between individuals (in terms of personal autonomy, family dynamics and awareness of disease risk) means that a more expansive view of utility than is usually employed in medicine is appropriate for evaluating personal genomic information [Foster MW et al (2009) Genet Med 11(8):570-4]. The authors suggest that the concept could be reframed to include not only ‘clinical utility’ relating to direct medical actions, but also ‘personal utility’ such as “indirect health-related and other nonmedical benefits”. This model argues for greater access to personal genomic data for those in whom there would be positive overall utility, and for developing strategies (such as transparency) for minimising the risk of negative consequences from testing (such as misinterpretation). This proposal fits with a liberal future in which patients are envisioned as “copractitioners in their own health promotion and care” and the state plays a somewhat reduced role.

     

    The third commentary questions this approach, and emphasises the difficulty of adequately defining and measuring overall utility [Grosse SD et al (2009) Genet Med 11(8):575-6]. The authors helpfully define three discrete viewpoints, which may help to explain the different perspectives that underlie various alternative opinions relating to DTC genetic testing: “the public health approach, which emphasises improvements on a population level; the clinical perspective, which emphasises … diagnostic thinking and therapeutic choice; and the personal perspective, which may consider genomic information as having potential value per se regardless of its clinical use or health outcomes.” It is currently unclear which one of these philosophies will ultimately prevail with respect to regulation.

     

    The fourth and final commentary focuses on the implications for technology assessment of new ‘disruptive’ technologies in healthcare, such as personalised medicine, and asks the question “what can policy makers do to promote innovation and allow new technologies to enter this regulated marketplace?” [Schulman KA et al (2009) Genet Med 11(8):577-81]. In short, how do we responsibly regulate the marketplace, and protect the unwary consumer from quackery and fraudulence, whilst encouraging and nurturing innovation? The authors suggest that one solution might be to open an Office of Personalised Medicine, charged with identifying and reviewing new technological applications in this area and expediting their regulatory approval. This approach is intended to offer a competitive advantage to those technologies that may truly be disruptive, such as genomic risk profiling, whilst maintaining a robust assessment process and stringent regulatory environment.

     

    Comment: These articles highlight the breadth and depth of the current debate surrounding DTC genetic testing services, and explore the different ideologies and underlying motivations of the various stakeholders. The issues go to the very heart of a much wider debate on the future of public health in the coming decades, and the role of the state in the face of personalised medicine, consumer healthcare and an information-rich society. Exactly how each country will respond to this challenge will doubtless depend not only upon innovations in science and technology, but also upon its social values and cultural heritage.

     

    More details about the PHG Foundation’s views on some of these issues can be found in our response to the recent consultation from the Nuffield Council on Bioethics on “Medical Profiling and online medicine: the ethics of ‘personalised' healthcare in a consumer age".


    Research article   |   By Dr Sowmiya Moorthie   |   Published 18 August 2009
    With the rapid development and falling costs of next-generation sequencing technologies, it has been suggested that whole-genome sequencing could be used to identify rare variants that contribute to disease. However, this is not an easy task: the enormous amount of sequence variation present in individual genomes makes it challenging to pinpoint and validate truly disease-causing mutations (see previous news). Concentrating on exome sequencing (i.e. sequencing only the protein-coding regions, or exons) may be an approach that yields better results, by allowing assessment of whether a particular sequence change affects protein structure and function. In addition, sequencing the ~1% of the genome used for protein coding is cheaper and faster than whole-genome sequencing. A recent paper published in Nature has demonstrated the feasibility of this approach for identifying rare variants associated with monogenic disease.

    In their study, Ng et al. have carried out targeted sequencing of all of the protein-coding regions of eight HapMap individuals, as well as four unrelated individuals with a rare autosomal dominant disorder – Freeman-Sheldon syndrome (FSS) – to demonstrate an approach for the discovery of rare highly penetrant variants [Ng et al. (2009) Nature doi:10.1038/nature08250]. They enriched the coding sequences from the genomes by targeted capture using microarrays; the captured exomes were then sequenced using high-throughput sequencing. The quality of the exomic data was assessed in a number of ways in order to validate the sensitivity and specificity of the technique in identifying variants.

    The candidate gene for FSS was identified through a number of steps taken to eliminate background, non-causal variants. Firstly, genes that carried one or more non-synonymous coding SNPs (i.e. those with potentially the highest impact on phenotype), splice-site disruptions or coding indels in one or more of the FSS exomes were identified. Filters were then applied to remove common variants present in the dbSNP catalogue (a public database of SNPs) or in the eight HapMap exomes. This narrowed the possible disease-causing candidates to a single gene, MYH3, which had previously been identified using a candidate gene approach. Disruptive variants in this gene were observed in all four individuals with FSS but not in the dbSNP catalogue or the HapMap exomes.
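
    The filtering logic can be sketched roughly as follows; this is a simplified illustration only (the gene names other than MYH3 and the data structures are invented, and the published pipeline included additional quality and annotation steps):

```python
# Simplified sketch of exome-based candidate gene filtering
# (hypothetical inputs; not the authors' actual pipeline).

# For each affected individual, the set of genes carrying at least one
# non-synonymous SNP, splice-site disruption or coding indel.
genes_with_damaging_variants = {
    "FSS_case_1": {"MYH3", "GENE_X", "GENE_Y"},
    "FSS_case_2": {"MYH3", "GENE_X", "GENE_Z"},
    "FSS_case_3": {"MYH3", "GENE_Y"},
    "FSS_case_4": {"MYH3", "GENE_X"},
}

# Genes whose variants in these cases are already seen as common variation
# in dbSNP or in the unaffected HapMap exomes (so unlikely to be causal).
common_variation_genes = {"GENE_X", "GENE_Y", "GENE_Z"}

# Require a novel damaging variant in every affected individual (dominant model).
candidates = set.intersection(*genes_with_damaging_variants.values())
candidates -= common_variation_genes

print(candidates)  # -> {'MYH3'}
```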

    The authors suggest that “direct sequencing of exomes of small numbers of unrelated individuals with a shared monogenic disorder can serve as a genome-wide scan for the causative gene”. They further suggest that this strategy may be easier to apply to recessive diseases, as far fewer genes will carry homozygous or compound heterozygous variants in any individual. The strategy may also be applied to complex common diseases, but this will require larger sample sizes and better approaches to assessing the impact of each mutation, in order to cope with the greater extent of genetic heterogeneity. The authors point out that although this approach is useful for discovering causal variants, one limitation is that it does not identify structural or non-coding variants, which may be found by whole-genome sequencing.

    Keywords : DNA Technologies

    Research article   |   By Dr Caroline Wright   |   Published 14 August 2009

    The number of validated gene-disease associations has increased enormously over the last few years, largely due to improvements in genotyping technology which have enabled large genome-wide association studies to be conducted based on single nucleotide polymorphisms (SNPs). In turn, this has fuelled the creation of a ‘consumer genetics’ industry, where companies offer genome-wide scans on a direct to consumer (DTC) basis, to predict the risk of various common diseases. There are broadly three steps involved in this process: first, an individual’s genome is scanned to detect many hundreds of thousands of SNPs, some of which have been associated with disease; second, an individual’s relative risk of disease is calculated, based on combining multiple disease-associated SNPs; and finally, this relative risk is combined with the population average risk (i.e. the incidence or prevalence of the disease) to predict the individual’s absolute risk of getting a disease over a defined time period.
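
    Steps two and three can be illustrated with a minimal sketch; the SNP identifiers, relative risks and baseline risk below are hypothetical, and real services use more sophisticated models (for example deriving genotype risks from per-allele odds ratios and allele frequencies) rather than the crude multiplication shown here:

```python
# Minimal sketch of combining per-genotype relative risks into an absolute risk
# estimate (hypothetical numbers; real DTC services use more careful models).

# Relative risk of each of this person's genotypes versus the population average,
# assumed to act independently (multiplicatively).
genotype_relative_risks = {
    "rs0000001_CT": 1.15,
    "rs0000002_GG": 0.90,
    "rs0000003_AA": 1.30,
}

population_lifetime_risk = 0.20  # e.g. a disease with 20% average lifetime risk

combined_rr = 1.0
for rr in genotype_relative_risks.values():
    combined_rr *= rr  # step 2: combine the SNP effects

absolute_risk = combined_rr * population_lifetime_risk  # step 3
print(f"combined relative risk: {combined_rr:.2f}")
print(f"estimated absolute lifetime risk: {absolute_risk:.1%}")
```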

     

    However, because the calculated risk is updated every time a new association is discovered, the prediction for an individual can change from being above average risk to below average risk overnight. This is particularly problematic where it might result in opposing recommendations. Using a cohort of 5,297 people to predict the risk of type 2 diabetes, researchers have now quantified how often this kind of ‘reclassification’ is likely to occur [Mihaescu R et al. (2009) Genet Med 11(8):588-594]. When the risk predictions were updated from using just a single SNP in the TCF7L2 gene, to using 18 SNPs, and finally to using 18 SNPs plus age, sex and body mass index, around 39% of individuals changed their risk category once relative to the average risk (defined as 20%, the actual prevalence of the disease in the cohort), and 11% changed twice; nearly half the participants switched risk categories at least once when the risks were updated after every SNP. In keeping with earlier findings (see previous news), there was a small but significant increase in the predictive accuracy of the model, and its ability to discriminate those individuals who actually had diabetes from those who did not, as the number of risk factors increased.
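
    The reclassification being counted here can be pictured with a toy example (the risk estimates below are hypothetical, not data from the study): after each model update an individual's predicted risk is recomputed and compared with the 20% average, and a switch is recorded whenever that threshold is crossed.

```python
# Toy illustration of risk-category reclassification as a model is updated
# (hypothetical predicted risks for one individual, not data from the study).

average_risk = 0.20  # prevalence of type 2 diabetes in the cohort

# Predicted absolute risk for the same person after successive model updates,
# e.g. adding one SNP at a time.
predicted_risks = [0.22, 0.19, 0.24, 0.25, 0.18]

categories = ["above" if r > average_risk else "below" for r in predicted_risks]
switches = sum(1 for a, b in zip(categories, categories[1:]) if a != b)

print(categories)             # ['above', 'below', 'above', 'above', 'below']
print("switches:", switches)  # 3
```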

     

    Comment: The fact that updating risk factors often changes an individual's genetic risk prediction seems to present an unusual philosophical paradox. On the one hand, it is rather counterintuitive, because DNA itself is immutable – your genome doesn't really change with time – so the genetic contribution to most diseases shouldn't change either. On the other hand, it is absolutely to be expected, as we are currently experiencing an explosion in human genetics research; science is a dynamic process, in which hypotheses are continually tested and updated based on the best available evidence. For this reason, many contend that the application of genetics to personal risk profiling is premature, as the science is still developing rapidly and there is very little evidence of clinical validity or utility for these tests.

     

    However, far from supporting calls to forbid such tests being available DTC, this highlights the need for transparency in the provision of information. Companies offering genome-wide risk prediction services should ensure that their customers understand that, whilst the measurement of the DNA sequence itself (the assay) will remain constant, the interpretation of the result (the test) is likely to change as the science develops.

     


    Research article   |   By Dr Caroline Wright   |   Published 12 August 2009

    Following the completion of the reference human genome sequence, which is a composite genome taken from a number of different (anonymous) people, at least seven individuals have had their complete genomes sequenced, including two Caucasian men (Craig Venter and James Watson), a Caucasian woman, two Korean men, an African man and a Chinese man (see Genetic Future Blog and previous news). Fairly soon, we will lose count.

     

    However, the most recent complete genome to be added – that of Stanford Professor Stephen Quake, co-founder of the sequencing company Helicos – is particularly noteworthy because it makes use of single molecule sequencing technology, or so-called ‘third generation’ sequencing [Pushkarev D et al. (2009) Nat Biotech doi:10.1038/nbt.1561]. Unlike older sequencing technologies, single molecule sequencers detect individual bases on individual strands of DNA, eliminating the need for DNA amplification prior to sequencing. The technology allows an enormous number of sequencing reactions to be run in parallel – around a billion individual molecules were sequenced over the course of a week, at a reported rate of around 48,000 basepairs per second – but the error rate is relatively high, with 3.5% of bases being incorrectly assigned in the initial run.  

     

    The paper highlights the incredible decrease in the cost of sequencing, from around $300 million for the first human genome sequence in 2001, to around $50,000 today. Quake said “This is the first demonstration that you don't need a genome center to sequence a human genome. This can now be done in one lab, with one machine, at a modest cost" (reported by Genome Web). The ‘modest cost’ in this case was around $48,000, which is on a par with the current whole genome sequencing service offered direct-to-consumer by the sequencing giant Illumina (reported by Genome Web).

     

    Comment: Whilst this paper represents a big step forward in the development of fast and affordable sequencing technology, it is still a long way from clinical application. For example, the genome coverage is currently incomplete, the read lengths (the contiguous stretches of DNA sequenced in one go) are quite short, and the error rate is currently much too high to be acceptable within a clinical setting. However, like the previous complete genome sequences that have been reported for ‘healthy’ individuals, it does provide an interesting insight into what sort of information we can expect to find encoded in the ‘average’ genome. Whilst the debate still rages over the potential harm of direct-to-consumer genetic profiling or sequencing services, it is worth bearing in mind that, for the average person, there is currently very little of immediate or proven medical value in their raw genome sequence.

     


    Research article   |   By Dr Philippa Brice   |   Published 7 August 2009

    HIV-1 (human immunodeficiency virus type 1) has a small single-stranded RNA genome. The genome comprises all the elements necessary for the virus to evade human immune responses, infect target cells and subvert the normal cellular machinery to replicate and release new viruses. Single-stranded RNA can fold up to form two and three dimensional structures with critical functions in the regulation of viral replication and gene expression. Similarly, the importance of RNA folding elements in the control of human gene expression via mRNA (the intermediate between the DNA genes and protein products) is now widely acknowledged.

    Whilst the simple RNA sequence of the HIV-1 genome has long been known, a new paper in Nature reports the structural characterisation of an entire HIV-1 genome [Watts JM et al. (2009) Nature 460, 711-716]. The authors used SHAPE (selective 2'-hydroxyl acylation analysed by primer extension), a novel method that probes the RNA in its folded state; see [Al-Hashimi HM et al. (2009) Nature 460, 696-698] for further explanation. Although this technique gives a lower resolution than other types of structural analysis, and does not resolve some forms of RNA structure, it nevertheless provides a structural overview of the complete genome that reveals both previously characterised and novel features.

    Stable, conserved RNA structures were found to sequester or ‘insulate’ unstructured regions, particularly those that show high sequence variability between different viral strains; these hypervariable regions (such as within the gene for the surface protein Env) are essential for viral evasion of human immune responses. This organisation, with stable RNA helices flanking hypervariable regions, probably prevents variations within the regions from interacting with and potentially disrupting other RNA structures that play essential regulatory roles.

    The authors also observed a pattern of RNA structures in positions corresponding to junctions between different domains of HIV-1 proteins that are initially produced as large polypeptides subsequently cleaved into separate, smaller proteins. They propose this to be consistent with a model whereby the RNA encodes protein structure at two levels: simple RNA sequences that encode proteins by dictating amino acid sequence, and highly structured RNA elements that determine the final three-dimensional structure of the proteins. They suggest that the highly structured RNA elements cause ribosomes (which produce proteins by assembling a chain of amino acids as dictated by the RNA coding sequence) to slow or pause, allowing time for the growing amino acid chain to fold. This could allow different protein domains to fold into the correct three-dimensional arrangements.

    Comment: This study provides further evidence that the structure and function of RNA is both complex and important; the concept of an additional level of genetic code operating via RNA structure is an area of compelling interest. In the case of HIV-1, improved understanding of the viral genome structure and function may indicate new areas for potential therapeutic intervention against the virus. More broadly, there is undoubtedly a need for improved understanding of how RNA functions in human gene expression. However, caveats remain. One important observation is that a static map such as that presented by Watts et al., besides omitting some structural features, also cannot accurately represent the probable in vivo situation, in which regions may adopt variable structural conformations.


    Research article   |   By Dr Susmita Chowdhury and Dr Caroline Wright   |   Published 6 August 2009

    According to Cancer Research UK (CRUK), ovarian cancer is the fourth most common cancer in UK women, after breast, bowel and lung cancer, accounting for around 6% of all female deaths from cancer. Survival in patients with ovarian cancer is poor: about 70% of patients are diagnosed at a late stage, and fewer than 40% survive more than five years after diagnosis. Reproductive, demographic and lifestyle factors all affect the risk of ovarian cancer, and a family history of the disease increases the relative risk by around 3-fold for a first-degree relative, suggesting a strong heritable component. Although a small minority of familial cases are caused by highly penetrant mutations, such as those in the BRCA1 and BRCA2 genes, the cause of most cases is unknown and likely to include multiple interacting genetic and environmental factors.

     

    The first common susceptibility locus for ovarian cancer has now been identified in a three-stage, multinational genome-wide association study (GWAS) [Song H et al. (2009) Nat Genet doi:10.1038/ng.424]. In the first stage, over half a million single nucleotide polymorphisms (SNPs) were genotyped, and a further 2 million predicted, in 1,817 patients with epithelial ovarian cancer (the most common form) and 2,353 controls from the UK. In the second stage, the highest ranked SNPs from the first stage were genotyped in a further 4,274 cases and 4,809 controls from Europe, the USA and Australia. A combined analysis of the data from both stages led to the identification of 12 SNPs associated with epithelial ovarian cancer risk (P < 10⁻⁸), all located in the same region of chromosome 9 (9p22.2). In the third stage of the study, the most statistically significant SNP (rs3814113) was genotyped in a further 2,670 cases and 4,668 controls, confirming its association with a significant decrease in the risk of ovarian cancer in carriers of the minor allele (odds ratio = 0.82, 95% CI 0.79-0.86). This effect was even more pronounced in the serous subtype of the disease.

     

    Like the majority of hits from GWAS (see previous news), this SNP is located in between genes, rather than in a coding region, making the mechanism underlying the association somewhat elusive. However, the closest gene to rs3814113 is BNC2, and eight of the 12 SNPs on 9p22.2 associated with ovarian cancer are located within its second intron. This gene encodes a putative transcription factor and is highly expressed in reproductive tissue. It is therefore plausible that changes in the expression level of this gene are associated with ovarian cancer, although resequencing of the region will be needed to identify the causal variant.

     

    Comment: Although this study was largely confined to individuals of European descent, it is the first identification of a common genetic variant associated with ovarian cancer, and adds BNC2 to the growing list of cancer susceptibility genes. As well as improving our understanding of the aetiology of the disease, this finding may ultimately have implications for individual genetic risk-profiling and the development of novel therapies. Perhaps more importantly for contemporary research in human genetics, whilst there are undoubtedly drawbacks to conducting such a large study, the three-stage GWAS design exemplified in this paper offers a robust and powerful method for yielding reproducible genetic associations with common diseases.

     


    Research article   |   By Dr Gurdeep Sagoo   |   Published 5 August 2009
    Pancreatic cancer, a complex disease involving both genetic and environmental risk factors, has one of the lowest cancer survival rates worldwide. Prognosis is poor, with a median survival of less than 12 months, and an estimated 5-year relative survival rate of less than 5%. There is also currently no effective screening test for pancreatic cancer, and by the time a diagnosis is made, the cancer has often spread. Despite these difficulties, established risk factors include a positive family history (2- to 4-fold increased risk for a first-degree relative), obesity, type 2 diabetes, and smoking, with genetic factors proving more difficult to replicate. With the emergence of high-throughput genotyping technology, the genome-wide association study design carries high hopes of uncovering genetic factors involved in the aetiology of pancreatic cancer.

    A new study published in the journal Nature Genetics reports a genome-wide association study (GWAS) to identify common variants associated with pancreatic cancer. In their GWAS, Amundadottir et al. [Amundadottir L et al. (2009) Nat Genet 2 August doi:10.1038/ng.429] genotyped over half a million SNPs in 1,896 pancreatic cancer cases and 1,939 controls collated from twelve prospective cohorts and one hospital-based case-control study. This was followed up by a ‘fast-track replication phase’, which involved genotyping ten of the most promising SNPs identified by the initial GWAS, across three chromosomal regions (7q36, 9q34 and 15q14), in a further 2,457 cases and 2,654 controls from eight case-control studies. The strongest association with pancreatic cancer in the combined analysis across both phases in participants of European background was for the locus marked by rs505922, located within the first intron of the ABO gene (per-allele OR = 1.20, 95% CI 1.12-1.28; P = 5.37 × 10⁻⁸), which determines the major antigens found on the surface of red blood cells. This finding replicates work conducted in the 1950s and 1960s, which first reported an association between ABO blood type and gastrointestinal cancers (such as gastric and pancreatic cancer) [Aird I et al. (1953) BMJ 1:799-801; Marcus D (1969) N Engl J Med 280:994-1006]. Interestingly, the protective T allele of rs505922 is in complete linkage disequilibrium with the O allele of the ABO gene, which is consistent with the earlier reports showing an increased risk of gastric and pancreatic cancer for individuals with blood groups A and B. Determining whether this observed association is driven by the O allele of the ABO gene will require further, more detailed genotyping.

    Although the mechanism underlying the association between blood type and pancreatic cancer risk is unknown, there is also an increased risk of venous thrombosis in non-O blood type individuals and patients with pancreatic cancer, and a possible mechanism via aberrant blood coagulation has been proposed [Maisonneuve et al. (2009) JNCI 10 July doi:10.1093/jnci/djp198].

    Comment: Pancreatic cancer has one of the highest cancer mortality rates and a very poor prognosis. Few well-replicated risk factors are known, and both improved diagnostics and a finer understanding of the molecular pathogenesis are needed to improve early detection and risk stratification and to develop novel therapeutic approaches. Approaches such as GWAS offer a powerful tool for dissecting the genetic basis of disease and for uncovering genes that might not otherwise emerge from classical epidemiological studies; in this case, however, it has primarily served to support a 50-year-old association with blood type, which is both simple and reliable to measure without the need for a genetic test.


    New reviews and commentaries

    Selected new reviews and commentaries, 3 August 2009

    Reviews & commentaries : by Dr Philippa Brice

    Gene therapy - still a work in clinical and regulatory progress.
    Hohmann EL. N Engl J Med. 2009 Jul 9;361(2):193-5.

    Genomic copy number variation, human health, and disease.
    Wain LV, Armour JA, Tobin MD. Lancet. 2009 Jul 25;374(9686):340-50

    Effect of genetic testing for risk of Alzheimer's disease.
    Kane RA, Kane RL. N Engl J Med. 2009 Jul 16;361(3):298-9.

    Cell-free fetal DNA and RNA in maternal blood: implications for safer antenatal testing.
    Wright CF, Chitty LS. BMJ. 2009 Jul 6;339:b2451. doi: 10.1136/bmj.b2451.

    No risk, no objections? Ethical pitfalls of cell-free fetal DNA and RNA testing.
    Schmitz D, Henn W, Netzer C. BMJ. 2009 Jul 6;339:b2690. doi: 10.1136/bmj.b2690.

    Protecting privacy and the public - limits on police use of bioidentifiers in Europe.
    Annas GJ. N Engl J Med. 2009 Jul 9;361(2):196-201.

    The illusive gold standard in genetic ancestry testing
    Soo-Jin Lee S, Bolnick DA, Duster T, Ossorio P, Tallbear K. Science. 2009 Jul 3;325(5936):38-9.

    The genetics of quantitative traits: challenges and prospects
    Mackay TF, Stone EA, Ayroles JF. Nat Rev Genet. 2009 Aug;10(8):565-77.

    Exploiting and antagonizing microRNA regulation for therapeutic and experimental applications.
    Brown BD, Naldini L. Nat Rev Genet. 2009 Aug;10(8):578-85.

    Gene defects and allergy.
    Van Bever H, Lane B, Common J. BMJ. 2009 Jul 9;339:b1203. doi: 10.1136/bmj.b1203.

    The changing moral focus of newborn screening.
    Fleischman AR, Lin BK, Howse JL. Genet Med. 2009 Jul;11(7):507-9.

    Human genetics: One gene, twenty years
    Pearson H. Nature. 2009 Jul 9;460(7252):164-9.

    The HapMap and genome-wide association studies in diagnosis and therapy.
    Manolio TA, Collins FS. Annu Rev Med. 2009;60:443-56.

    Molecular biology. Neutralizing toxic RNA.
    Cooper TA. Science. 2009 Jul 17;325(5938):272-3.

    Transcription. Sweet silencing.
    Simon JA. Science. 2009 Jul 3;325(5936):45-6.


    Research data in the digital age
    Kleppner D, Sharp PA. Science. 2009 Jul 24;325(5939):368.

    Why innovation matters today
    Darzi A. BMJ. 2009 Jul 22;339:b2970. doi: 10.1136/bmj.b2970.
