Big Data Revolution, Continued

A few months back I discussed the explosion of "big data" - how it helped Obama win the 2012 election, and how it is being used to treat multiple myeloma, a rare bone cancer. To recap, the term "big data" refers to large, complex datasets that require enormous computing power to analyze. Because the cost of computing has fallen exponentially over the years (the cost of storage is now almost negligible), and because cloud computing puts what amounts to a supercomputer at anyone's fingertips, the big data revolution has become a reality.

A new collaborative project between the Mayo Clinic and United Health (one of the largest insurers in the country) is poised to take big data even further. Optum Labs, the research institute that the two will be building in Cambridge, MA, will combine Mayo's and UH's data on over 100 million people. The goal will be to use the massive dataset to understand which treatments work best, focusing on patient outcomes and cost-effectiveness. For instance, it could allow researchers to find that one diabetes treatment works just as well as another, but costs half as much.

More broadly, however, the results of the research conducted at Optum Labs may help the FDA take steps to reform its clinical trial requirements.

While much has been written about the poor clinical trial designs forced upon drug developers, the FDA has only recently shown real interest in changing them. The 2012 Food and Drug Administration Safety and Innovation Act (FDASIA) established the "Breakthrough Therapy" designation. While the law itself was vague on how this designation differs from Accelerated Approval, one clause of the act calls on the FDA to "[take] steps to ensure that the design of the clinical trials is as efficient as possible...", and the first drugs to receive the designation were announced just last week. A 2012 FDA report went even further, recommending a new optional pathway for drugs shown to be effective in small subgroups of patients, rather than in large, broad populations.

So how does this tie into big data? New clinical trial designs, particularly for patients with orphan diseases, should allow the use of existing patient data to demonstrate drug efficacy. For instance, data from emergency room uses of an antibiotic could complement (or substitute for) data from what can often be a very difficult clinical trial to run. Or data on a drug's successful off-label use could stand in for a clinical trial when seeking provisional FDA approval for a new indication.

More generally, large datasets like these will allow drugmakers to use observational data to obtain provisional FDA approval, with confirmatory studies to follow and patient populations expanding as new uses are validated.

For Mayo and United Health, these databases can also be used to identify and validate potential biomarkers, allowing further improvements in clinical trial design.

If the FDA continues its slow but steady move toward reform, Optum Labs may be just the tip of the iceberg.
