For years, pharmaceutical companies have been lambasted in the media and targeted in government prosecutions for concealing information about the safety and efficacy of their products. In one particularly splashy example, GlaxoSmithKline (GSK) agreed to pay $3 billion in 2012 to settle criminal charges that it failed to report safety data concerning its antidepressant drug, Paxil, and its diabetes drug, Avandia, and engaged in unlawful marketing of these products and one other drug.
One mechanism proposed for avoiding such problems is to establish a system through which participant-level data from clinical trials, stripped of identifying information about patients, would be available to the public.
A potential benefit of sharing clinical trial data would be that independent scientists could re-analyze data to verify the accuracy of reports prepared by trial sponsors, which might deter sponsors from mischaracterizing or suppressing findings. Data sharing would also allow analysts both within and outside drug companies to pool data from multiple studies, creating a powerful database for exploring new questions that can’t be addressed within any given trial because the sample is too small to support such analyses.
The potential value of shared data in improving our understanding of the safety and efficacy of drugs, medical devices, and biologics has sparked considerable discussion about how to make data sharing happen. Earlier this year, the European Medicines Agency—the counterpart to the U.S. Food and Drug Administration (FDA) in the European Union—decided to start making data from trials of approved products available in 2014. This raises the question: should the FDA follow suit?
In a paper published in The New England Journal of Medicine in October (available at http://www.nejm.org/doi/pdf/10.1056/NEJMhle1309073), I wrote with colleagues from Harvard and representatives from the pharmaceutical industry that “The question is not whether, but how, these data should be broadly shared.” In our view, the benefits of data sharing are overwhelming. But at least three critical questions confront the United States as it considers how to move forward.
First, do we need a regulatory requirement like Europe’s? Some companies argue that their voluntary initiatives obviate the need for a mandate. But there’s a real collective-action problem to be considered: If data sharing is voluntary, and some companies perceive more downside than upside, what’s to make everyone do it? Won’t companies that step forward put themselves at a competitive disadvantage if others don’t follow? Further, how can we assure that those who share data adhere to a set of minimum standards to ensure that what is shared is complete and useful? Perhaps there is some nongovernmental binding mechanism that could be used to make companies stick to voluntary commitments, but it isn’t clear what it is.
Second, should data be available to anyone who wants them, or should there be some screening mechanism to vet requests? On the one hand, the spirit of transparency militates in favor of open access. Suggesting that there be some gatekeeper raises the questions of who would take on this job, who would pay for their time, what criteria they should apply, and how we could keep the vetting process fair. On the other hand, gatekeeping means the ability to impose some quality controls on how the data get used. Drug companies worry that a competitor could produce or fund a flawed analysis of their product, and once the finding is reported in the media, it might be very hard for the manufacturer to dislodge public perceptions that the product is unsafe or ineffective. Completely open access would also make it harder to assure that analysts didn’t try to use the data in ways that jeopardize the privacy of the clinical trial participants. These concerns are important, and justify having some type of screening system.
Third, how do we grapple with the problem of informed consent? Most people who participated in clinical trials in the past never contemplated that their individual data might be posted on a website or freely given out to groups all over the world. Stripping personal identifiers like birthdates and names out of the dataset doesn’t completely solve the problem, because it’s easier than one might think to re-identify people. In the future, we could state in informed consent forms for clinical trials that the data will be broadly shared, but some ethicists argue that research participants are scarcely able to imagine what this might mean for their privacy—especially since the information environment is constantly evolving, and what is “de-identified” today could be easily re-identifiable in the future. Without this understanding, giving consent to data sharing may not be meaningful.
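To see why stripping names isn’t enough, consider a minimal sketch of a so-called linkage attack, using entirely hypothetical data: remaining “quasi-identifiers” such as birth year, partial ZIP code, and sex can be joined against a public record (a voter roll, say) to recover identities whenever the combination is unique.

```python
# Illustrative sketch with hypothetical data: a "de-identified" trial record
# can be re-identified by matching quasi-identifiers against a public list.

deidentified_trial = [
    {"birth_year": 1958, "zip3": "021", "sex": "F", "outcome": "adverse event"},
    {"birth_year": 1970, "zip3": "946", "sex": "M", "outcome": "no event"},
]

# Stand-in for a public dataset such as a voter roll (names are invented).
public_records = [
    {"name": "Jane Roe",  "birth_year": 1958, "zip3": "021", "sex": "F"},
    {"name": "Ann Smith", "birth_year": 1958, "zip3": "021", "sex": "F"},
    {"name": "John Doe",  "birth_year": 1970, "zip3": "946", "sex": "M"},
]

QUASI_IDS = ("birth_year", "zip3", "sex")

def reidentify(trial_rows, public_rows):
    """Return (name, outcome) pairs for trial rows whose quasi-identifiers
    match exactly one public record -- a unique match exposes the person."""
    hits = []
    for row in trial_rows:
        key = tuple(row[q] for q in QUASI_IDS)
        matches = [p for p in public_rows
                   if tuple(p[q] for q in QUASI_IDS) == key]
        if len(matches) == 1:
            hits.append((matches[0]["name"], row["outcome"]))
    return hits

print(reidentify(deidentified_trial, public_records))
# The 1958/021/F record matches two people, so it stays anonymous;
# the 1970/946/M record matches only "John Doe", who is re-identified.
```

The point of the sketch is that anonymity depends on how many people share a participant’s combination of attributes, which is exactly why what is “de-identified” today may not stay that way as more public data accumulate.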
These problems require thoughtful solutions, but shouldn’t hold us back from pursuing data sharing. An expert committee convened by the Institute of Medicine is currently considering what a framework for responsible sharing of clinical trial data might look like, and will issue a preliminary report early in 2014. Whatever happens in the United States, the EMA’s new policy means that, at least in terms of access to data for products approved for sale in the European Union, the proverbial cat is out of the bag. It remains to be seen what this will mean for trial sponsors, research participants, and the public’s health.
Michelle Mello, JD, PhD, is a professor of law and public health at the Harvard School of Public Health, and a fellow with the Edmond J. Safra Center for Ethics at Harvard University. She is a recipient of a Robert Wood Johnson Foundation Investigator Award in Health Policy Research.
This blog post is derived from a paper published by Mello and others in the New England Journal of Medicine, 369;17, October 24, 2013, pages 1651 – 1658.
This post was reproduced with permission of the Robert Wood Johnson Foundation, Princeton, N.J.