Value of Medicine Category

Research shows that new drugs and medical devices have the capability to improve and lengthen human life, as well as improve productivity by reducing the impact of chronic illness. MPT will discuss and explore the latest research on the benefits and costs of new therapies, and explain why medical innovation – driven by the right mix of market and individual incentives – offers the best long-term strategy for controlling the growth of private and public health care costs while also stimulating economic growth.


The cardiovascular benefit of statins is anything but news. But a very large retrospective study from Taiwan may bring the utility of these drugs to an entirely new level--it appears that the use of high dose statins cut the risk of senile dementia in older people. And by a lot.

And if there is one area where medical progress is sorely needed, it's Alzheimer's Disease. Because, despite being the target of vast amounts of research, all we really have is a vast number of failures.

Treating Alzheimer's is bad enough (and the treatments for it are plenty bad). But preventing it? With the exception of taking steps to reduce vascular dementia (due to strokes)--forget it. Every vaccine, therapeutic or preventative, has been a total bomb.

Statins have been suspected of both increasing and decreasing dementia. Ironically, one reason that this study was conducted was to see whether cognitive dysfunction was a possible side effect of statins.

It didn't exactly turn out that way.

In a presentation to the 2013 meeting of the European Society of Cardiology, Dr. Tin-Tse Lin and Dr. Min-Tsun Liao of the National Taiwan University Hospital reported that not only didn't statins adversely impact cognitive function, but they had a significant protective effect.

Lin and colleagues used a database of one million people covered by Taiwan's national health insurance program. From this sample, about 57,000 people (aged 65+) with no history of senile dementia were selected during the period of 1997-1998. About 15,000 of these participants were taking statins.

During the 4.5 year follow-up period, about 5,500 people developed non-vascular senile dementia (not caused by strokes or blockages). And the data from this group are fascinating.

As shown in Figure 1, there was a substantial and dose-dependent difference in the number of dementia cases for participants who took statins--the higher the dose, the lower the risk of dementia. Also, the more potent statins showed a larger effect than the less potent, which in effect, strengthens this dose response trend (which is already impressive).

Figure 1: Dose Response of Statins (Source: http://www.theheart.org/article/1578833.do). For clarity, the hazard ratios (HR) given in the report were converted to percent reduction: (1.0 minus HR) times 100 = percent reduction.

For example, for rosuvastatin (Crestor), the low, medium and high dose groups showed a reduction in the incidence of dementia by 63, 87, and 87(!) percent, respectively.
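For readers who want to reproduce the conversion used in the figure, here is a minimal Python sketch. The hazard ratios below are simply back-calculated from the reported 63, 87, and 87 percent reductions for rosuvastatin; they are illustrative placeholders, not values quoted from the study itself.

```python
# Percent risk reduction from a hazard ratio: (1.0 - HR) * 100.
# The HR values here are back-calculated from the reported reductions
# for rosuvastatin and are illustrative only, not taken from the study.
def percent_reduction(hazard_ratio):
    return (1.0 - hazard_ratio) * 100.0

rosuvastatin_hr = {"low dose": 0.37, "medium dose": 0.13, "high dose": 0.13}

for dose, hr in rosuvastatin_hr.items():
    print(f"{dose}: HR = {hr:.2f} -> {percent_reduction(hr):.0f}% lower incidence of dementia")
```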

The relationship between the potency of the particular statin and protection against dementia is similarly interesting (Figure 2, below).

There is clearly a relationship between the inherent potency of the statins (at equivalent doses) and their effect. (For comparison, the low-dose group is used as an example.) The potent statins (atorvastatin and rosuvastatin) clearly outperform the less potent statins (e.g., lovastatin) in reduction of dementia.
Figure 2. Correlation of statin potency (low dose) with percentage of dementia reduction. Rosuvastatin is arbitrarily assigned 100 for reference purposes.

Of course, as a retrospective study (albeit a very good one), there are built-in limitations. It cannot prove that statin use will protect you against senile dementia--only that there is a strong association between the two. Cause and effect can only be determined by a prospective controlled study--something that will surely be done now.

This is far from definitive, but given the huge number of participants, the robust dose response, and the relationship between the strength of the statin and the magnitude of the response, it would seem that they are onto something here.

And when you add the fact that vascular dementia was excluded from the data, suggesting that there is more going on here than statins acting as lipid-lowering agents, it gets even more intriguing.

It is doubtful that high dose statin therapy for seniors will become standard practice anytime soon, but these results are certainly intriguing. Anything that can help combat Alzheimer's--perhaps the most devastating of all diseases--would be an enormous medical advance.


Dr. Scott Gottlieb recently wrote an article in Forbes asserting that governmental healthcare agencies such as CMS are practicing medicine by asserting "their own clinical judgment about when and how seniors get access to new medicines." He argues that CMS extracts concessions from manufacturers by preemptively making their displeasure about new innovations known. Dr. Gottlieb portrays manufacturers as powerless in the face of an overzealous bureaucracy determined to cut spending with no consideration of the impact on innovation and patient care.

The pharmaceutical and medical device industry is not powerless to counteract this downward pricing pressure on new products. We know payers (government and private) are seeking to reduce costs any way they can. In the absence of any evidence that the new products deliver a commensurate improvement in outcomes, the agencies are acting rationally in considering cost as the sole determinant of value. Pharmaceutical and medical device manufacturers need to develop and present compelling data as evidence of the value of their product if they want to be reimbursed at all -- let alone at higher levels than existing treatments. This presents two significant challenges manufacturers must address.

First, they need to generate compelling data to support the economic and clinical value of their products. In the case of the Sapien aortic valve replacement that Dr. Gottlieb cites, the increased cost of the product and procedure is offset by the reduced risk of infection, shorter hospital stay and greater patient satisfaction achieved by minimally invasive surgery as opposed to conventional invasive open-heart repair. Edwards Lifesciences -- the manufacturer -- has to demonstrate not a product-to-product cost comparison, but that the total value of its offering is greater than that of alternative treatment options. In many cases the research methodology necessary to produce this data is different from the randomized clinical trials (RCTs) that have been used to gain regulatory approval of the product. New skills and capabilities are needed to develop data based on real world evidence (RWE) and comparative effectiveness research (CER).

Second, companies will need to do a better job communicating the value supported by this new evidence. For a long time manufacturers have allowed others to control the narrative, to portray them as greedy and uncaring. This impression is reinforced when manufacturers behave in a manner that leads to substantial fines and product withdrawals due to avoidable marketing malfeasance (e.g., numerous off-label promotion verdicts), manufacturing lapses (e.g., J&J recently) and poor decisions relating to safety issues (e.g., covering up or not publishing poor data).

Bad news such as fines and consent decrees travels louder, faster and further (and sells more media) than the good news that comes from the development of new medicines and products that improve patient lives. Manufacturers have to share data about their products' value more clearly and work to take the reins of public perception back. Leveraging patient advocacy groups, the media and others can influence decision makers -- but this is only a viable strategy if you have the data supporting your product.

The pressures to reduce reimbursement are real. The best defense is to get data supporting the economic and clinical value of your product and then aggressively get that data out to convince the decision makers.


The U.S. market for prescription drugs is dominated by me-too products.

Just not the kind that is much maligned in the press. Pharmaceutical companies are often attacked for spinning out "me-too" drugs that are only slightly different than earlier versions, but cost as much (or more). The real story there is a bit more complicated, but let's save that argument for another day.

The real "me-too" drugs sold in the U.S. are generics, i.e. drugs that are (for the most part) exact copies of branded (patent-protected) drugs, but sold at a fraction of the price. According to a recent report by IMS health, generics account for the vast majority of all U.S. prescription drugs - a whopping 84%. IMS estimates that by 2016 this figure will rise to 87%. This is a sea change from the late 1990s, when generics only accounted for about 40 percent of the market.

Is this good for patients, and for the health care system? Yes. Is this bad for the patients, and the health care system? Yes. It depends on whether you look at the short run or long run, and what aspect of the system you want to focus on.

First a little history and some explanation: Why are generics, from a volume perspective at least, dominating the U.S. market?

The pharmaceutical industry is, as we've said many times, largely a victim of its own success. The late 1980s and 1990s saw a slew of "blockbuster" drugs launched (Prozac, Prilosec, Zocor), largely for primary care indications like depression, high cholesterol, and acid reflux. This strategy generated billions in profits for the industry, because these indications represent very large patient populations, and patients have to take some of these drugs indefinitely - perhaps for the rest of their lives.

Blockbuster has something of a pejorative connotation, but drug treatment can represent a very cost-effective way of preventing more dangerous and much more expensive complications. Harvard health economist David Cutler, for instance, has estimated that effective use of recommended antihypertensive medicines avoids 833,000 hospitalizations and 86,000 deaths annually. And if untreated hypertensive patients were effectively treated, it could prevent another 420,000 hospitalizations and 89,000 premature deaths.

Even the Congressional Budget Office (CBO), which is very conservative when it comes to estimating offsets for health care innovations, estimates that a 1 percent increase in Medicare Part D prescription drug spending saves about 0.2 percent in other Medicare costs.
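To see what an offset rule of that kind implies in dollar terms, here is a minimal sketch; the Part D and other-Medicare baseline figures are hypothetical round numbers chosen purely to illustrate the arithmetic, not CBO or CMS data.

```python
# CBO rule of thumb: each 1% increase in Part D drug spending offsets
# roughly 0.2% of other Medicare spending. All dollar figures below are
# hypothetical round numbers used only to illustrate the arithmetic.
part_d_baseline = 60e9            # hypothetical annual Part D spending
other_medicare_baseline = 500e9   # hypothetical other annual Medicare spending

drug_increase_pct = 5.0           # suppose drug spending rises 5%
extra_drug_spending = part_d_baseline * drug_increase_pct / 100
offset_savings = other_medicare_baseline * (0.2 * drug_increase_pct) / 100

# Whether the offset partly or fully pays for the added drug spending
# depends entirely on the relative size of the two (assumed) baselines.
print(f"Extra drug spending: ${extra_drug_spending / 1e9:.1f}B")
print(f"Offsetting savings:  ${offset_savings / 1e9:.1f}B")
```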

But all good things must come to an end, and all patents come with an expiration date. Blockbuster drugs patented in the 1990s or early 2000s have already lost patent protection or will do so in the next few years.

This leaves industry with a pipeline problem, since they haven't produced anywhere near enough new drugs to compensate for the products going generic. And this explains the sea change: companies just aren't moving enough patients onto new drugs as patents expire. When a generic is available, patients will choose the generic 95% of the time. In fact, this year, for the first time ever, total U.S. drug spending actually declined, at least in part because of this tremendous shift towards generics (IMS suggests other factors as well).

Why have branded companies had so much trouble filling their pipelines? They've run into something of a perfect storm.

The FDA has raised the bar for approving drugs for primary care indications, and insurers are also scrutinizing new drugs more closely when it comes to reimbursement. After being stung by rare side effects linked to FDA-approved drugs like Vioxx, Avandia, and Fen-Phen, the FDA wants more data from larger clinical trials for primary care indications to rule out rare side-effect problems for drugs that are likely to be used in hundreds of thousands, or maybe even millions, of American patients after the FDA grants marketing approval. This means more tests, larger trials, and closer scrutiny of the potential for rare side effects.

Arguably, this makes sense. With many good, effective drugs already approved, regulators and payers are inevitably going to look askance at new drug candidates that don't have a clear safety or efficacy advantage over existing, and very cheap, generic medicines.

But with development costs rising (due to the aforementioned FDA regulations), industry is rationally going to walk away from developing products that might have incremental benefits, but can't generate sufficient profits to justify the investment required to bring them to market. So products that might have been commercially viable - and would make a real difference to patients - a decade ago just aren't sustainable today.

Scientifically, the low hanging fruit has been picked. If you're looking, for instance, for a drug to lower the risk of heart attacks, we've got LDL cholesterol pretty clearly licked, and you're going to have to look elsewhere (scientifically) to gain a competitive position in a patient population that is taking generic atorvastatin (formerly Lipitor).

So companies find themselves chasing novel mechanisms of action where the science is less well understood. Pfizer, and other companies, got into trouble in this area when they went after drugs to raise HDL cholesterol, and found that, in cases like torcetrapib, raising HDL cholesterol actually resulted in more deaths, not fewer - which was the opposite of what the research had previously suggested.

As companies chase more novel targets they face more scientific, financial, and regulatory risks. Add to this the fact that big-time investments in genomics and other new strategies haven't (yet) paid off as handsomely or as quickly as many expected, and you've got (at least in the short term) a severe pipeline problem.

Bigger isn't necessarily better. As pipelines thinned or slowed, companies looked to restock their shelves and improve their earnings through consolidation, i.e., insourcing other people's pipelines. From a balance sheet perspective, this can make a lot of sense, but it's also a one shot strategy - financial gains from consolidation, in terms of combining sales forces and drawing in outside revenues, can only happen once. Consolidation also makes sense given the regulatory environment (which requires more sophistication) and reimbursement environment (i.e., having more products on market gives you additional bargaining leverage with large insurers and PBMs).

But from an innovation perspective, this approach may not be effective. It's not clear that when you take two different R&D teams, with different cultures and approaches to doing research, each of which might be successful independently, and cram them together - along with the associated layoffs - you're really better off. You just might be more schizophrenic.

Where does that leave industry and patients? Increasingly, companies are focusing on "unmet medical needs" - in therapeutic areas like cancer, multiple sclerosis, cystic fibrosis, and other orphan diseases where there are few good treatment options and less generic competition. This is good for patients, who literally face life and death challenges, and it also gives industry some (frankly) much needed revenues to fund ongoing R&D.

The FDA approved 39 new drugs last year, a record, but 19 of those were for either cancer or orphan diseases. IMS also notes that out of the 28 drugs launched from last year's approvals (a number were approved late in the year, and weren't launched until 2013), representing $10.8 billion in new drug spending, specialty drugs accounted for $7 billion. That gives you a pretty good picture of where the industry's growth strategy is pointed, at least for the time being.

So are we headed to a two tiered market, where generics dominate (at least by volume) for primary care indications, and pharma/biotech focuses on specialty indications? And if that is the industry division of labor, is it sustainable?

Some of the new specialty treatments, like Kalydeco, are also truly game changers for patients. Although Kalydeco helps just 4% of cystic fibrosis patients, the drug seems to be an effective cure for those patients. That's tremendous science, and should be applauded. Cancer is another area where the industry is poised to make tremendous gains.

But all of these medicines are tremendously expensive, because of the "numbers" problem: With a smaller patient population, companies have far fewer (i.e., thousands or even hundreds) patients over which to spread their development costs and generate profits, especially since effective patent times have been flat or declining. Kalydeco, for instance, costs $294,000 annually.
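A rough back-of-the-envelope sketch of that "numbers" problem follows. The development cost, exclusivity window, and patient counts are hypothetical round numbers (only the $294,000 Kalydeco price above comes from the text), so treat this as an illustration of the arithmetic, not an estimate for any real drug.

```python
# Spreading a fixed (hypothetical) R&D cost over patient populations of
# very different sizes. All numbers are illustrative, not real drug economics.
def revenue_needed_per_patient_per_year(rd_cost, patients, exclusivity_years):
    """Annual revenue per patient needed just to recoup R&D before generics arrive."""
    return rd_cost / (patients * exclusivity_years)

rd_cost = 1.0e9          # hypothetical $1B to develop a drug and win approval
exclusivity_years = 10   # hypothetical effective patent life after launch

for label, patients in [("primary-care blockbuster", 2_000_000),
                        ("orphan / specialty drug", 2_000)]:
    needed = revenue_needed_per_patient_per_year(rd_cost, patients, exclusivity_years)
    print(f"{label}: ~${needed:,.0f} per patient per year just to break even on R&D")
```

With only a couple of thousand patients, even a modest development budget pushes the break-even revenue per patient into five or six figures, which is the basic logic behind list prices like Kalydeco's.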

As more expensive products push into the rare disease space, insurers and government payers are eventually going to balk at paying those prices, and they already are to some extent - requiring much higher co-pays from patients for some drugs.

On the other hand, the shift away from primary care indications is deeply troubling because of the enormous impact of chronic disease on patient health and the economy (through both direct and indirect costs, like lost productivity). Diabetes, heart disease, stroke and drug resistant pathogens all require much better treatments.

[Figure: The burden of chronic disease]
Source: http://www.chronicdiseaseimpact.com/

Without a different approach from the FDA, and a reimbursement environment that rewards incremental and breakthrough innovations in these disease areas, to say nothing of Alzheimer's, we're looking at a tsunami of health care costs and expensive complications from these diseases as the population ages.

The Solution: Broaden Innovative Pathways to Market for All Indications

Policymakers are apt to cheer cheaper generics today without thinking deeply about the implications for innovation tomorrow. Fundamentally, there is a very clear tradeoff between lower drug prices today, and less innovation for patients tomorrow.

The Obama Administration, for instance, seems blissfully untroubled by the implications of imposing Medicaid price controls on Part D, or reducing exclusivity for biologic medicines down from 12 years to 7. The former would save an estimated $123 billion over 10 years, and the latter would save $3 billion over the same time period--but the lost revenues would clearly reduce the industry's capacity to invest in innovation. And research suggests that future innovation has far higher returns to society than lower drug prices today - something on the order of 3-1 - but current policy debates are much more short-sighted than economics suggests that they should be.

What can we do to change the equation? Policymakers should encourage innovation across many different disease areas, both primary care and specialty.

For starters, we should revisit Hatch-Waxman, and slow the erosion in effective patent times so as to encourage more innovation. Companies should get back all of the patent time they lose in FDA mandated clinical trials and drug reviews. This won't prevent a single drug from going generic, but it will allow drug makers to spread their costs over more years, and help reduce short term pricing pressures.

Second, we should expedite pathways for new or repurposed drugs targeted at subgroups of patients. Drugs that come to market with companion diagnostics could be granted short patent extensions, say six months. Taking old generics and developing new uses for them, including conducting the clinical trials required for FDA approval, should be rewarded with new patent or marketing exclusivity. Repurposing old drugs is less expensive than developing new drugs, given that significant data already exists on their safety profile and mechanism of action. Encouraging more co-development of drugs and companion diagnostics will increase their cost-effectiveness, and help energize the diagnostics industry. (Here, the MODDERN Cures Act is a great step in the right direction.)

Finally, FDA approval pathways - like accelerated approval or the new Breakthrough Therapies designation - need to be opened to other classes of drugs beyond cancer, orphan drugs, and HIV. Here, the Obama Administration can heed the excellent report from the President's Council of Advisors on Science and Technology on doubling drug innovation in the U.S. over the next decade.

The agency and its stakeholders should develop standards for using novel biomarkers and adaptive clinical trial designs to approve drugs for targeted populations for primary care indications and antibiotics as well as specialty and orphan drugs. This would help industry, as well as society, replenish our supply of new medicines in these vital areas.

In the short run, the triumph of generics may seem like a financial windfall to payers and consumers. And largely, it is. But with an aging population, these gains will be short-lived and will be far outweighed by the human and economic cost of lost or foregone innovations tomorrow.


I'm sure that consumers, medical policymakers and insurance companies are just oozing with joy. After all, we are rapidly heading for pharmaceutical paradise--a pharmacy packed with really cheap generics and not much else.

This will not only save tons of money, but also stick it to those bad boy pharmaceutical companies that invented the drug in the first place, and then sucked us dry by fighting off the noble generic companies for an extra two minutes of patent protection so they could suck us even drier.

Doesn't get any better than this. At least until you swallow the pill.

In today's "big surprise of the day," Ranbaxy Laboratories, India's largest generic manufacturer got a little $500 million slap on the wrist from the U.S. Justice department.

The company admitted that, a few years ago, it manufactured and subsequently sold substandard drugs, which were made at two different facilities in India. Well, anyone can make an honest mistake. Except perhaps Ranbaxy, which, as part of the settlement, admitted to lying about the problems by intentionally making false statements to the FDA.

Their guilty plea added up to three felony counts, $150 million in criminal penalties and another $350 million in civil penalties.

The drugs in question were for treatment of acne, epilepsy, neuropathic pain and one antibiotic-- ciprofloxacin.

This is hardly the first time that Ranbaxy has had problems with drug "quality"--a misplaced euphemism if ever there were one.

In 2008, the FDA prohibited the importation of 30 drugs from two of Ranbaxy's plants in India, and instituted a so-called "Application Integrity Policy," which stopped the review of any new drug applications from one of the company's facilities. The reason? Once again, fraudulent record keeping and reporting.

In a sane world, one might think that the company might be asked to take their business elsewhere, but sanity seems to have become, well, insane.

Despite the company's accomplished track record of incompetence and fraud, in November 2011 the FDA still gave permission for Ranbaxy (and only Ranbaxy) to sell the first generic version of Lipitor. This was clearly well-deserved, as evidenced by the fact that in 2012, Ranbaxy was forced to recall multiple lots of the drug after the pills were found to contain glass particles.

One might think that this would be enough, but one would be wrong.

According to a recent Fortune report, just two months ago the U.S. Dept. of Veterans Affairs signed a large contract to buy generic Lipitor from, who else? Ranbaxy.

OK. This stopped being funny quite a while ago. And the take home message is even less amusing--saving money is so important to our government and medical providers that they are going to look the other way while a bunch of hacks in India feed you a steady supply of crappy drugs.

And Ranbaxy's response doesn't exactly inspire confidence. According to CEO Arun Sawhney "While we are disappointed by the conduct of the past that led to this investigation, we strongly believe that settling this matter now is in the best interest of all of Ranbaxy's stakeholders; the conclusion of the DOJ investigation does not materially impact our current financial situation or performance."

Which is about as comforting as in February, when the company also issued a statement that "[I]t was confident in the continuing safety and quality of its products."

Which begs the question: what would happen if they weren't confident? "Oops--you swallowed a hand grenade instead of a Cipro? Please hold."

And if you think this is an isolated incident, you perhaps ought to consider a little Wellbutrin therapy. Except that last year Teva, Israel's giant generic manufacturer, was forced to recall all of its Budeprion XL, its version of Wellbutrin XL (the generic name is bupropion). The problem? Its U.S. manufacturer, Impax Laboratories, had a little problem with the time-release formula.

This is no laughing matter with bupropion, since the 300 mg time-release pill released the drug much too soon, putting patients at risk for seizures and cardiac arrhythmias. The maximum immediate-release dose for the drug is 100 mg, which was exceeded by the failure of the time-release formulation, leaving patients susceptible to side effects early on and sub-therapeutic blood levels later.

These are two of the biggest generic companies around, which makes me wonder what will happen when Joe's Pharmaceuticals starts making generic heart drugs in a U-Haul in Newark.

These are our future medicines, and inevitably most of us will eventually run into one. The FDA has shown little ability to catch this until after the problem has already occurred, and the cheap prices are simply too enticing.

This is just getting started. Open wide folks. There's going to be a lot for you to swallow.



I'm going to have a little fun with Josh Bloom's recent posting, not because I don't respect him or his writing--I do--but because we can use it to illustrate an important point.

His posting was about Merck and Liptruzet and he asked how Merck could look itself in the mirror when "Merck is trying something that is as good an example of marketing without innovation as you'll ever see." He went on to say, "Liptruzet behaved, as expected, just like Vytorin. It reduced LDL cholesterol more [than] for patients who took Lipitor alone, but it did not reduce patients' chances of developing heart disease. Not surprisingly, this left some doctors to wonder why it was approved at all."

In other words, if Merck can't prove that Liptruzet does more than just reduce LDL, then it's just a big marketing scam. For fun, let's gain some perspective by substituting Merck with Ford and Liptruzet with the F-150 pickup.

"The Ford F-150 pickup carries workers and tools to jobsites around the country. It has been used for carpentry, masonry, steel working, HVAC, concrete, logging, plumbing, and roofing, but, at least so far, Ford has been unable to prove that the F-150 can do other, even more amazing things. With the F-150 pickup, Ford is trying something that is as good an example of marketing without innovation as you'll ever see."

Perhaps Ford has not proven the F-150's ability to do other amazing things because those things are difficult or expensive to prove. Or perhaps the study is underway, as is Merck's IMPROVE-IT study of Vytorin. Maybe down the road someone will show that F-150s can be used for other, important things. Or, maybe not. In the meantime, the stuff the F-150 does is still impressive and, by being on the market, it gives consumers a choice and provides competition for Dodge, Chevy, and Toyota.

If customers did not see the value in F-150s, they wouldn't buy them. The fact that they do buy them shows that they see value. And these are the people who are most directly affected by owning a new pickup, as opposed to outside "experts" who might have different values and preferences, and certainly have less skin in the game.

Pharmaceuticals are somehow seen as different. The opinion that Liptruzet shouldn't be given a chance on the market shows little respect for the ability of patients, physicians, and payers--the real people who take, prescribe, and purchase drugs--to form their own opinions based on their own experiences. I, for one, would prefer the pharmaceutical market to be more like the automotive market.


The pharmaceutical industry does many wonderful things, yet most people regard it as one step below head lice on the food chain.

This week, Merck, with some questionable help from the FDA, gave more ammunition to industry critics, who typically maintain that the industry contributes little innovation, and is simply concerned with profits.

For the most part, this criticism is biased and uninformed, but this time I'm siding with the critics. Because Merck is trying something that is as good an example of marketing without innovation as you'll ever see.

The company just received approval for the cholesterol-lowering combination drug Liptruzet-- a functionally similar (identical?) version of their own Vytorin, which is a combination of their statin Zocor and Schering's (now part of Merck) cholesterol absorption blocker Zetia (ezetimibe).

Liptruzet, ironically, happens to be a combination of Zetia and atorvastatin (generic Lipitor). Yes--Merck is substituting a former Pfizer drug for its own Zocor and combining it with Zetia to make a "new" medication with additional patent protection. This is innovation?

Worse still, both Vytorin and Liptruzet are of questionable use. In 2009, studies showed that Vytorin, despite lowering LDL and total cholesterol, did nothing to prevent cardiac events. In fact, a 2009 New England Journal of Medicine article concluded that not only did Vytorin fail to reduce heart disease, but "the use of ezetimibe led to a paradoxical increase in the degree of atherosclerosis in association with greater reduction in LDL cholesterol, an effect we hypothesize may stem from unintended biologic effects of this agent."

Liptruzet behaved, as expected, just like Vytorin. It reduced LDL cholesterol more than for patients who took Lipitor alone, but it did not reduce patients' chances of developing heart disease. Not surprisingly, this left some doctors to wonder why it was approved at all.

Dr. Steven E. Nissen, chairman of the department of cardiovascular medicine at the Cleveland Clinic, commented, "This is extremely surprising and disturbing."

This sentiment is echoed (and then some) by Philip Gelber, M.D., Chief Cardiologist at Cardiovascular Consultants of Long Island. "It is surprising to me that the FDA approved this combination drug. The modern movement requires that drugs not just be safe and effective in their immediate goal, but to also show efficacy in improving outcomes. Cardiac medications should not just reduce the cholesterol count, but reduce the risk of heart attack and stroke as well." He continues, "There was, I'm sure, pressure by big pharma to get this approved, which by pairing it with another drug, would in effect restore blockbuster Lipitor back to branded status. A tricky move, but one which doesn't make folks any healthier."

So, why on earth would we need a virtually exact copy of a drug that doesn't even work? This is for Merck to answer.

I also don't understand what the FDA was thinking here.

Are they under so much political pressure to approve new drugs that they will accept just about anything? Because it sure seems that way right now.

This past January, FDA Commissioner Margaret Hamburg bragged about the improved performance at the agency, which approved 39 new drugs last year compared to 30 in 2011, and 21 in 2010. She said, "Not only have we been able to approve more new drugs that have real benefits for patients but also classes of drugs that signal where we are going in areas like personalised medicine, where we've been able to use diagnostics to target sub-populations of responders."

But last week's approval of Liptruzet makes me wonder whether they are simply playing a numbers game for the sake of public perception. Because if there is any drug that does not have any obvious benefits for patients, it is Liptruzet.

This is a sentiment shared by Dr. Nissen. He said, "It seems like the agency is just tone deaf to the concerns raised by many members of the community about approving drugs with surrogate endpoints like cholesterol without evidence of a benefit for the disease we are truly trying to treat--cardiovascular disease."

This episode just plain smells bad on many levels. I get the feeling that just about everything except science is driving this, and this will be a black eye that Merck will be inflicting on itself and the rest of the industry.


Every year, the Bureau of Economic Analysis (BEA) makes some revisions to the National Income and Product Accounts (NIPA) - these include changing the base year for chain-weighted indexes, adjusting benchmark years for input-output accounts, as well as different estimation and accounting methods for international accounts and other components of GDP. In what is truly a rare moment, however, the new revisions will increase US GDP by around 3 percent - roughly equivalent to the GDP of a country like Belgium.

Two-thirds of that increase will come from an idea that's been a long time coming and is long overdue - counting research and development as "fixed investment" in the national accounts. It's worthwhile to first understand what "fixed investment" really means.

Gross Domestic Product - GDP, as it's commonly known - is the sum of all economic activity in a particular country. Though it can be measured either as income or as expenditures, the latter approach is generally seen as more valid. Under the expenditure approach, there are four primary components of GDP (rough approximations of each component's share of GDP are given in parentheses, and a quick check of the arithmetic follows the list).

(C) Consumption (70.8%): Everyday spending on goods and services by consumers and households.

(I) Investment (13%): This is generally thought of as spending on new equipment, construction of factories, or household spending on housing. (This category sees the biggest change in the new revisions).

(G) Government Spending (19.5%): Essentially, government spending on all final goods and services including salaries for public employees, but excluding transfer payments like social security and unemployment benefits.

(NX) Net Exports (-3.5%): Gross exports minus gross imports - the net amount that a country sells to the rest of the world (negative if a country is facing a trade deficit).
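As a quick check that the rough shares above hang together, here is a minimal sketch of the expenditure identity GDP = C + I + G + NX using the percentages quoted in the list:

```python
# The expenditure components of GDP should sum to (approximately) 100%.
# Shares are the rough figures quoted in the list above.
components = {
    "Consumption (C)":         70.8,
    "Investment (I)":          13.0,
    "Government spending (G)": 19.5,
    "Net exports (NX)":        -3.5,
}

total_share = sum(components.values())
print(f"C + I + G + NX = {total_share:.1f}% of GDP")  # roughly 100%, as the identity requires
```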

Under the new revisions, private investment and government investment will add an extra $300 billion to GDP - so what does this mean for the American economy?

First, at the macro level, private investment is historically almost perfectly inversely correlated with the unemployment rate: in other words, the more we invest in developing new goods and services, the lower the unemployment rate drops in the long term.

[Figure: Private investment share of GDP (blue line) vs. the unemployment rate (red line)]
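For anyone who wants to eyeball that relationship themselves, the sketch below pulls gross private domestic investment, nominal GDP, and the unemployment rate from FRED (series IDs GPDI, GDP, and UNRATE) and computes a simple correlation between the investment share and the unemployment rate. It assumes the pandas_datareader package is installed and is meant as an illustration, not a replication of the chart above.

```python
# Rough check of the investment-share vs. unemployment relationship using FRED data.
# Assumes pandas and pandas_datareader are installed; FRED series IDs used:
# GPDI (gross private domestic investment), GDP (nominal GDP), UNRATE (unemployment rate).
import pandas as pd
from pandas_datareader import data as pdr

start, end = "1960-01-01", "2013-01-01"
gpdi = pdr.DataReader("GPDI", "fred", start, end)      # quarterly, billions of dollars
gdp = pdr.DataReader("GDP", "fred", start, end)        # quarterly, billions of dollars
unrate = pdr.DataReader("UNRATE", "fred", start, end)  # monthly, percent

df = pd.DataFrame({
    "investment_share": gpdi["GPDI"] / gdp["GDP"] * 100,     # private investment as % of GDP
    "unemployment": unrate["UNRATE"].resample("QS").mean(),  # quarterly-average unemployment
}).dropna()

print(df.corr())  # expect a clearly negative correlation between the two columns
```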

Ultimately, GDP is really just an estimate for economic activity; the goods and services we provide deliver the actual value.  Still, with BEA's revisions to GDP, private investment (and government investment) will become more important in GDP calculations. This may help policymakers better understand the value of encouraging private investment and developing policies that maximize it (and in turn help reduce unemployment).

But there's much more to the BEA's change of heart than statistics. By virtue of counting domestic R&D as investment in the GDP accounts, R&D will now be given much more weight in terms of its value to the country's economic growth.

For industries that are very R&D intensive - like the pharmaceutical sector - this may be a much needed boon. As it has become ever more difficult and expensive to get a drug to market, the pharmaceutical industry (including academic researchers that rely on agencies like the NIH) will be better able to justify projects that require public investment; the benefits of expenditures on what may be relatively intangible (at least in the near future) will now become more clear.

Justifying blue sky research budgets in a climate of austerity is difficult; but it would seem that this new classification will help policymakers better  understand the "social value" of these investments. Other reforms such as making the research and experimentation credit permanent (as President Obama's FY14 budget does) should also become easier to justify with this new accounting scheme.

There may of course be pitfalls with BEA's new accounting. In a 2007 report, BEA economists noted that there are difficulties with counting R&D as private investment (for instance, separating R&D performance and investment [both financially and regionally] is non-trivial). This made it difficult for BEA to include R&D as investment for many years, but it would seem that they overcame the statistical quandaries. Interestingly, the same report made another important point: "Order of magnitude estimates indicate that current dollar gross domestic product by state (GDPS) could be over eight percent higher in some states when R&D is treated as investment." These revisions could give a significant (if somewhat artificial) boost to states with well-established research hubs. Areas like Massachusetts could see an unusually large bump due to biotech centers like Boston; at the same time, this could be a benefit if it spurs state and local governments to offer more incentives to R&D-intensive businesses.

More than any accounting changes, the most important takeaway from BEA's revisions is the realization that R&D has benefits beyond its simple costs (which were previously captured as consumption spending) - such as the $1 trillion social benefit of cholesterol-lowering drugs. In a world where economies are increasingly defined by their ability to develop new, high-tech goods and services, the BEA's revised estimates tell us where we stand today - and where the economy is going tomorrow.

American doctors make significantly more than their European counterparts - in 2008, an orthopedic physician in France averaged $154,000 annually; in America, it's almost three times as much, at a whopping $442,000 - and this is after adjusting for costs of living. And a common refrain from those concerned about America's health care spending is that these doctors are significantly overpaid. Matt Yglesias at Slate argues this in two posts, coming to the conclusion that Medicare's monopsonistic position should be used to "negotiate" down doctors' salaries:

But when it comes to the question of health care costs overall, Medicare is the solution. Its vast bargaining clout lets it get much better prices than any private insurer, and we should be relying on it more to pay our bills, not less.

It makes sense to start with one of Yglesias' first comparisons - the U.S. versus Canada. He notes that while American doctors get paid quite a bit more than Canadian doctors, Canada has 25% more doctors' consultations per capita than America. He also briefly mentions "overprices" for medical equipment and pharmaceuticals as other cleavages between us and Canada, but not surprisingly, leaves it at that.

If there's a bell going off in your head right about now, there's a good reason. Canada has one of the longest waiting times for elective surgery in the OECD - in 2010, 25 percent of people who received elective surgery had to wait more than four months for it! In the U.S., this is a mere 7 percent. The comparison also holds when looking at waiting times to see specialists. This is an important tradeoff - if you want more government sponsored health care, or at least more intervention, you will likely see reduced access.

But this doesn't change the fact that American doctors do get paid a lot more than those in other countries - this alone, however, isn't terribly problematic. Wage variations among countries exist - American office clerks make about 20 percent more than British office clerks; meanwhile, British firefighters make about 13 percent more than French firefighters. Does this mean we should "bargain" down the salaries of British firefighters or American office clerks? There are literally hundreds of reasons that certain jobs have wage variations among similar economies - unionization, differences in work weeks, or the amount of training required - these factors can and do affect wage differences. The last point is especially salient in explaining the American variation - American doctors face the largest barriers to entry, in terms of the amount of education required (minimum of 11 years including undergrad for a general practitioner), licensing exams, and the steep cost of a medical education. Given that we are already facing a shortage of GPs, it seems unwise to focus on restricting their salaries. (Although, part of this shortage would certainly be explained by Medicare's hugely specialist-skewed reimbursement rates).

The other point that should be skewered is that Medicare is "cheaper" than private insurance. Part of this argument stems from the statistic that Medicare's administrative costs are a mere two percent. The first problem is that this only uses one measure of Medicare's administrative costs - the one published by Medicare's trustees. In reality, there are two different measures - one from the National Health Expenditure Accounts and the other from the trustees. The former shows Medicare's admin costs growing to about 6 percent in 2010; the latter shows them at less than 2 percent in 2010 - quite a large variation. But even with these two divergent measures of admin costs, one important source of costs is excluded: fraud and waste. The GAO has routinely found that Medicare and Medicaid (with Medicare making up more than half) made about $70 billion in improper payments in 2010 - close to 10 percent of the two programs. Other estimates by the RAND Corporation, however, find that fraud and abuse may be as high as $98 billion in the program. The second problem with thinking that Medicare is cheaper than private insurance is simply that it isn't - the latest Health Expenditure Accounts show that benefits provided by Medicare cost more than twice as much as the same benefits provided by private insurance. The overall point is that not counting the fraud rate or these per-enrollee costs paints an incomplete picture of Medicare - to no one's benefit.

Yglesias comes from the perspective that Medicare's reimbursement of physicians is more in-line with their actual costs and reducing all-around reimbursement to those levels would help shave down health care spending - but this looks at Medicare in a vacuum. The literature has found that hospitals respond to lower Medicare reimbursement by shifting costs to private insurers (in a competitive market they focus on cutting costs, but other literature has indicated that hospitals often operate in more concentrated markets). Indeed, what Yglesias seems to miss is that Medicare may very well drive part of the growth in health care costs by shifting them.

To his credit, Yglesias touches on some important physician-related reforms - he acknowledges that medical school is extremely expensive and that addressing malpractice reform can encourage more people to become doctors. But he stops short of real reforms - reducing the requirements to become a GP; fixing Medicare's terrible specialist-skewed payment model; and moving Medicare away from the ACO model, which encourages consolidation (and drives up costs). These reforms would begin to address both the supply (barriers to entry and consolidation) and demand (Medicare's payment model) side of the equation in a meaningful way.


[Image: robot medic]

In a brilliant op-ed at the Wall Street Journal, Harvard Business School professor Clayton Christensen and co-authors make the point that the American healthcare system needs a strong dose of disruptive innovation to start addressing the issue of costs. At the core, he writes, the problem with the ACA is that Accountable Care Organizations "most assuredly will not...deliver [this] disruptive innovation."

Christensen is definitely on to something - particularly when he recognizes the importance of technologies that allow price and quality competition (such as telemedicine) to give more control over health care decisions to patients. And it shouldn't be surprising that a recent foray into this market has arrived at Wal-Mart.

The super-retailer already offers retail walk-in clinics at many locations, with low-cost services that include vaccinations, blood sugar testing, and cholesterol screenings. But it seems this was only the beginning for Wal-Mart. The cheaper a technology is (assuming equivalent quality), the more disruptive it is. What's cheaper than free? In October of last year, the retailer partnered with Solohealth, a company that develops retail "health stations" that offer basic medical tests, to install hundreds of the health stations at its retail locations.

These health stations will allow Wal-Mart customers to run basic tests like a vision and BMI check at no cost. Based on the test results, the health station will spit out a list of doctors in the surrounding area that the customer can go see. Certainly, this is a brilliant move on Wal-Mart's part - you probably rarely go to the doctor, but you're at Wal-Mart pretty often. But more than that, this echoes Christensen's point about disruptive health care innovation - while these health stations won't replace a doctor (and are not a panacea for growing health care costs), they're a great step towards more efficient use of health care resources. It isn't a stretch to imagine similar health stations offering quick and cheap video calls with doctors to answer some basic questions. Looking further down the line is even more exciting - algorithms and whole language machine learning are making computers as smart as (or smarter than) the best human doctor (think of the holographic doctor in Star Trek Voyager). And the cost of sending these "doctors" to medical school is essentially zero for the marginal patient. But one step at a time.

By ensuring that these health kiosks will be at one of the world's largest retailers, people may even start thinking about health care a little differently. Right now, when someone thinks about health care, they don't think of it as a commodity - you don't pay for it; your insurance company does. When you see your doctor and he tells you that you have a cold but he'll prescribe antibiotics anyway "just to be safe" - you don't ask why. And part of this is due to the informational asymmetry inherent in health care - let's face it, the doctor is a professional and holds a wealth of knowledge about human physiology that you, as a consumer, likely don't have. But that doesn't mean you can't be a little more educated. When you go to get your car fixed after a fender-bender and one body shop quotes you $500 and another quotes you $1,000, you'll probably feel comfortable enough asking why. When it comes to medicine and health care, we don't have that same comfort. And though it may be too early to start jumping for joy, greater commoditization of health care is a terrific way to slow future cost growth and maybe, just maybe, have more educated patients.

Image source: http://conflicthealth.com/robo-medics/


In an issue brief for the left-leaning Center for Economic and Policy Research, economist Dean Baker resurrects the idea that Medicare should "negotiate" (read: set) prices for drugs. After all, if other federal health insurance programs require mandatory "rebates" for prescription drugs - the Veterans Administration and Medicaid, for instance - why shouldn't Medicare?

Baker's analysis largely focuses on cross-country comparisons with several countries that set prices for prescription drugs - Canada, Denmark, Japan, the Netherlands, and the UK - all of which spend significantly less per capita on prescription drugs than the U.S. He concludes that through "negotiation" with drug companies (cleverly, he never uses the less palatable "price controls") we could save anywhere between $309 and $726 billion over 10 years - savings which would accrue to the federal government, states, and individuals.

To his credit, Baker at least tries to rebut the pharmaceutical industry's argument that price controls would stifle innovation. According to Baker, strong patent protections (which give companies monopoly pricing power over their products) lead to a more corrupt drug industry, which in turn leads to dangerous drugs being approved (like the recent Vioxx scandal). He offers a proposal by Nobel Laureate Joseph Stiglitz as an alternative - that clinical testing of drugs should be financed entirely by the government, which he thinks would nullify the pharmaceutical industry's argument that patents are required to recoup the high costs and enormous risks required to bring new medicines to market.

(This is a bad idea layered on top of another bad idea. A bad idea sandwich. This would put the government in the position of both paying for new medicines and funding the clinical trials to test them, creating a massive conflict of interest, since approving fewer new medicines would also lower government expenditures. And the process of choosing which medicines to take through clinical trials - ED drugs, HIV, cancer, diabetes - would become hopelessly politicized by Congress.)

Still, Baker's love of price controls is clearly popular with the White House. The president voiced support in his SOTU address for health care cuts similar to Simpson-Bowles (which also included Medicare Part D price controls), and claimed that paying drug companies market prices through Medicare Part D is tantamount to a subsidy.

For what it's worth, Baker's paper is correct (or at least strictly tautological) in its assessment: if the United States were Canada, or Denmark, or Japan, we would pay for drugs like Canada, Denmark, or Japan - and probably pay lower prices. But that's not the case.

The U.S. is different in more than just how we structure our health care system. The U.S. is different in its demographics, per-capita income, social attitudes, and income distribution. All of these are areas where the U.S. varies tremendously from its OECD competitors, and these are salient factors when it comes to evaluating U.S. health spending.

With that in mind, it is worthwhile to address some of the faults in Baker's assessment.

Comparison of Per-Capita Spending

To compare other countries' spending with that of the United States', Baker looks at PPP (purchasing power parity) adjusted per-capita spending levels on prescription drugs. Certainly, this offers an interesting comparison of prescription drug spending across countries while holding constant purchasing power (essentially what one American dollar buys elsewhere - hint: it buys less health care but more of other goods). But this ignores potential differences elsewhere in the health system - for instance, it may be that higher prescription drug spending reduces spending in other categories (as the CBO has seemingly, though not explicitly, taken into account in their modeling efforts).

For instance, it turns out that the U.S. tends to spend significantly less than Denmark, Canada, or the Netherlands (Baker's OECD sample choice) on long-term care services for the elderly.

[Figure: Per-capita spending on long-term nursing care for the elderly, selected OECD countries]

Source: OECD StatExtracts http://stats.oecd.org/Index.aspx?DataSetCode=SHA# 2010;
Note: latest data for Japan was 2009; UK Data was unavailable

Does this necessarily mean that these other countries should work to reduce their spending on long-term care? Not really. Demographic needs and provision of healthcare should (and do) reflect country specific policy and political choices - not rote conformity with economic peers. One size really doesn't fit all.

But this argument probably won't assuage those concerned about pharmaceutical price-gouging - so let's look at the issue from a different perspective. We already know that the U.S. spends more on healthcare as a percent of GDP and per-capita than any other OECD country. But what about the share of total health spending represented by prescription drugs?

[Figure: Prescription drug spending as a share of total health spending, selected OECD countries]

Source: OECD StatExtracts http://stats.oecd.org/Index.aspx?DataSetCode=SHA# 2010;
Note: latest data for Japan was 2009; Netherlands & UK data on drug spending was unavailable

Here, America's spending on prescription drugs as a share of total health spending is lower than both Canada's (by 40%) and Japan's (by 80%). It's also about two-thirds higher than Denmark's. Again, per-capita costs only provide a crude basis for health system comparison, one that doesn't take into account total spending or the share of prescription drug costs as a burden on the health system as a whole. We might spend more on prescription drugs, but drugs are far from the real source or driver of U.S. spending considered as a whole - well worth taking into account if you're really concerned about the drivers of U.S. health care spending (as opposed to just singling out a politically convenient industry).

The last aspect of these cross-country comparisons that Baker fails to address is the makeup of prescription drug spending. Who ultimately pays for the drugs? (Baker takes this into account when projecting U.S. savings, but assumes that the makeup would remain the same - probably an unrealistic assumption).

[Figure: Prescription drug spending by payer, selected OECD countries and the U.S.]

Source: OECD StatExtracts http://stats.oecd.org/Index.aspx?DataSetCode=SHA# 2010;
Note: latest data for Japan was 2009; Netherlands and UK data on drug spending was unavailable;
U.S. data from CMS's National Health Expenditure Tables;
Japan's data does not sum to 100% due to rounding

In fact, when looking at who ultimately pays for prescription drugs, the relative share of spending varies significantly by payer. Likely, to imitate a country like Denmark, the U.S. government's share of spending on drugs would have to almost totally displace private insurance for prescription drugs. (President Obama or Baker might like this, but it's obviously a non-starter in Congress or for the American people.)

Without this shift, lowering prices for Medicare would just lead to a cost shift from public to private payers as drug makers tried to maintain their margins. In other words, employers and patients outside of Medicare would have to pay more, so that the government could pay less. There is no such thing as a free lunch, and so Baker's plan would be to leave someone else to pick up the check.    

Savings are Likely Exaggerated

Baker at least concedes that inflicting massive price cuts on drugs would hurt pharmaceutical innovation, and acknowledges that something would have to be done to offset the impact of the lost industry revenue. Implementing Joseph Stiglitz's proposal (aside from the bad features we mentioned earlier) would cost quite a bit of money and as such, that cost - at least tens of billions of dollars annually - should be deducted from any potential savings.

More on this point: A working paper by health economist Austin Frakt looks at potential savings from restricting Medicare's drug formulary to that of the Veterans Administration (which is about 40 percent less generous, according to the authors). The results did find savings of about $14 billion when accounting for all Part D enrollees. But there is a significant loss of "consumer surplus" (the difference between what a person is willing to pay and what they do pay) to Medicare enrollees from reduced drug access - and because the authors assume that all drugs are valued equally between VA patients and Medicare beneficiaries (an important simplification) and the net savings are relatively small compared to the gross, these estimates are highly sensitive. In any case, Frakt et al.'s estimates are significantly less than those that Baker arrives at even over a decade. Certainly, the proposals are different - Frakt offers an operationally feasible proposal which would likely be a more realistic implementation of Baker's.

Finally, it is useful to look at the effects of price controls and restricted formularies on innovation (something that Frakt's paper doesn't explicitly address since it is cross-sectional in nature). In a 2005 report for the Manhattan Institute's Center for Medical Progress, Frank Lichtenberg, a Columbia University economist, noted that after the VA tightened its drug formulary, VA patients' life expectancy may have declined because of reduced access to newer drugs.

All in all, Baker's conclusion - that European-level price controls are compatible with saving money and sustaining innovation - is wildly optimistic.

Concluding Thoughts

Price controls have been shown to stifle innovation, and shifting to a European-style system would require numerous other changes to our health care system - and indeed, to our economy as a whole - that would dampen innovation.

Broadly speaking, European price controls are sustainable (at least in the short term) because the U.S. still allows something close to market pricing. As in the case of global defense spending, Europe can afford to pay much, much less because the U.S. spends more - providing a global security umbrella.

If the U.S. were to sharply reduce spending on prescription drugs, Europeans would have to pay much more, or global pharmaceutical innovation would decline sharply. If nothing else, access to new drugs would suffer tremendously, as a 2011 essay in the Annals of Health Law notes:

The U.S. pharmaceutical industry has historically been characterized as the market-driven pharmaceutical system of the world...companies in the U.K. have endured profit controls...[this] has led to vast differences in the advancement of pharmaceutical innovation and to significant disparities in patient access to medicines.

Baker indulges in a favorite habit of the left - assuming that free lunches really are possible. In every other sector where governments have imposed price controls (food, housing, automobiles), supply, quality, and innovation dwindle sharply.

Pharmaceuticals may be an exception for the moment, because capital can still flow to jurisdictions (like the U.S.) where market-friendly rules still apply. Ironically, European price controls may benefit the U.S. by making America a destination for risk capital and pharmaceutical R&D investment - along with the millions of jobs and the health benefits that come with enhanced innovation. Americans undoubtedly do pay more for drugs because Europeans pay less (economists call this third-degree price discrimination), but the solution is to ask them to contribute more to global medical innovation, especially through bilateral trade agreements. The Europeans can free ride to some extent, but global medical innovation is lower than it would be if they were paying prices commensurate with their wealth - and these losses are felt just as acutely by European patients suffering from Alzheimer's, cancer and other diseases.

America does need to rein in health care spending. The best idea in this regard may come from the bipartisan Simpson-Bowles Commission, which suggested capping federal health care spending growth at GDP +1%. (It also recommended Medicaid-level drug rebates for Medicare, but no plan is perfect.) This would have the benefit of forcing providers and patients to shift toward the best mix of products and services. That might entail more use of pharmaceuticals, and thus more, not less, drug spending - but it would also be welfare enhancing. Obamacare actually attempts something like this through its ACO model and its selective productivity cuts to some Medicare providers. The problem is that policymakers insist on keeping their hands on the tiller, shifting the market to and fro based on the political winds of the moment.

It's a recipe for shipwreck, albeit a time honored one on the left. We can only hope that Washington has the foresight to recognize Baker's idea for what it is - a gross oversimplification that will only produce unwanted consequences. 


For the uninitiated, 3D printing is a fast-growing manufacturing technology that effectively allows "printing" of small objects like machinery components. The "revolution" part comes from the fact that the "printers" are small enough and inexpensive enough to let almost anyone set up a mini-factory in a garage - or a laboratory. These mini-factories are connected to a computer where 3D models are designed and fed into the printer (along with the necessary raw materials). Using high-powered lasers, the printer shapes the object according to the specifications.

Last December I wrote about Organovo - a 3D bioprinting company that was partnering with Autodesk to print living, architecturally correct human tissue. A new, potentially more exciting development is that researchers from the University of Edinburgh have developed a printer that is able to produce living, viable, embryonic stem cells. For those suffering from chronic diseases like Alzheimer's or Multiple Sclerosis - this has the potential to be life-changing.

While most adult tissue has its own stem cells, embryonic stem cells are unique - they are able to differentiate into almost any type of tissue to repair it after it has been damaged. In recent years, however, there's been quite a bit of controversy surrounding the ethics of using embryonic stem cells (which have to be harvested from human embryos), to say nothing of the possibility of rejection when injecting stem cells from one person into another.

While these issues remain, the ability to spit out these stem cells through a simple manufacturing process (the printer doesn't technically manufacture the cells - it clumps them into uniform droplets, using two types of "bio-ink," to keep them viable) provides a new, automated way of producing embryoid bodies (essentially clumps of stem cells) that can be used in treatments. And indeed, an ever-growing body of research indicates that stem cell treatments - even those derived from a person's own body (non-embryonic) - may help to cure (not only treat) chronic diseases like MS.

Of course, any optimism should be tempered with reality. Printing stem cells that have biomarkers indicating pluripotency (the ability of a stem cell to differentiate into any type of cell) is very different from using those same cells to treat diseases in humans. It's unclear whether the human body will be able to accept these manufactured stem cells or how viable they will be in the long-run compared to natural stem cells.

There are also potential regulatory pitfalls. The FDA has been less than accommodating to companies that have tried using autologous stem cell treatments (where stem cells are taken out of a person, treated, and injected back into the same person), shutting down the laboratory of a promising venture in 2012. Though the FDA has a specific statute under which they regulate human cells and tissues, these newly manufactured cells would likely not fall under that statute. Instead, a company would probably need to pursue approval as a drug - but the long, winding, and expensive process of clinical trials is poorly suited to proving the ability of stem cells in treating chronic diseases.

A more stem-cell-friendly approach would allow companies offering these treatments to conduct "N=1" trials - for patients who have decided that an unproven treatment may very well be worth it if it has the possibility of curing a disease like MS - and submit this data over time to the FDA to help prove the efficacy of the treatment as well as to potentially help validate new surrogate endpoints.  


A few months back I discussed the explosion of "big data" - how it helped Obama win the 2012 election, and how it is being used to treat multiple myeloma, a rare bone cancer. To recap, the term "big data" refers to large, complex datasets that require an enormous amount of computing power to plow through. Because over the years, the cost of computing has fallen exponentially (the cost of storage is almost irrelevant now), and the advent of cloud computing essentially gives anyone access to a supercomputer at their fingertips, the big data revolution has become a reality.

A new collaborative project between the Mayo Clinic and United Health (one of the largest insurers in the country) is poised to take big data even further. Optum Labs, the research institute that the two will be building in Cambridge, MA, will combine Mayo's and UH's data on over 100 million people. The goal will be to use the massive dataset to understand which treatments work best, focusing on patient outcomes and cost-effectiveness. For instance, it could allow researchers to find that one diabetes treatment works just as well as another, but costs half as much.
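To make the idea concrete, here is a minimal sketch of the kind of comparative-effectiveness query such a combined claims-and-outcomes dataset could support. The column names and figures are hypothetical, and this is not Optum Labs' actual methodology - just an illustration of the pattern.

```python
# Hypothetical sketch: compare two diabetes treatments on outcomes and cost
# using a (toy) de-identified claims extract.
import pandas as pd

claims = pd.DataFrame({
    "patient_id":  [1, 2, 3, 4, 5, 6],
    "treatment":   ["drug_a", "drug_a", "drug_a", "drug_b", "drug_b", "drug_b"],
    "a1c_change":  [-1.1, -0.9, -1.0, -1.0, -1.2, -0.8],   # change in HbA1c (outcome)
    "annual_cost": [4200, 3900, 4100, 2100, 1950, 2050],   # dollars per patient-year
})

# Average outcome and cost by treatment arm.
summary = claims.groupby("treatment").agg(
    mean_a1c_change=("a1c_change", "mean"),
    mean_annual_cost=("annual_cost", "mean"),
    patients=("patient_id", "count"),
)
print(summary)
```

If the outcomes look comparable but one drug costs roughly half as much, that is the kind of signal researchers would then examine more rigorously, with proper risk adjustment and controls for confounding - something a 100-million-person dataset makes far more feasible than a small trial.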

More broadly, however, the results of the research conducted at Optum Labs may help the FDA take steps to reform its clinical trial requirements.

While much has been written about the poor clinical trial designs forced upon drug developers, the FDA has only recently shown real interest in changing them. The 2012 Food and Drug Administration Safety and Innovation Act (FDASIA) establishes the "Breakthrough Therapy" designation; while the law itself was vague on how this designation differed from Accelerated Approval, one clause of the act calls on the FDA to "[take] steps to ensure that the design of the clinical trials is as efficient as possible...", and the first drugs receiving this designation were announced just last week. And a 2012 FDA report went even further, recommending a new optional pathway for drugs shown to be effective in small subgroups of patients, rather than large, broad groups.

So how does this tie into big data? New clinical trial designs, particularly for patients with orphan diseases, should allow the use of existing patient data to demonstrate drug efficacy - for instance, data from emergency room uses of an antibiotic could complement (or substitute for) data from what can often be a very difficult-to-run clinical trial. Or data on a drug's (successful) off-label use could be used in place of a clinical trial to receive provisional FDA approval for a new indication.

More generally, large datasets like these will allow drugmakers to use observational data to receive provisional FDA approval, with confirmatory studies to follow and patient populations expanding as new uses are validated.

For Mayo and United Health, these databases can also be used to identify and validate potential biomarkers, allowing further improvements in clinical trial design.

If the FDA continues its slow, but steady move toward reform, Optum Labs may be just the tip of the iceberg.


From yesterday's New York Times:

The conversion to electronic health records has failed so far to produce the hoped-for savings in health care costs and has had mixed results, at best, in improving efficiency and patient care, according to a new analysis by the influential RAND Corporation.

Optimistic predictions by RAND in 2005 helped drive explosive growth in the electronic records industry and encouraged the federal government to give billions of dollars in financial incentives to hospitals and doctors that put the systems in place. ...

RAND's 2005 report was paid for by a group of companies, including General Electric and Cerner Corporation, that have profited by developing and selling electronic records systems to hospitals and physician practices. Cerner's revenue has nearly tripled since the report was released, to a projected $3 billion in 2013, from $1 billion in 2005. ...

The study was widely praised within the technology industry and helped persuade Congress and the Obama administration to authorize billions of dollars in federal stimulus money in 2009 to help hospitals and doctors pay for the installation of electronic records systems.

Kudos to RAND for doing a follow up study to see whether or not their predictions were coming true. But are the latest headlines really all that surprising?

Call it the fallacy of the Next Really Big Thing (NRBT). Industry lobbyists seize on one promising study or report and tell Congress that if it just spent more money to subsidize the NRBT (wind energy, electronic health records, solar panels, biofuels, etc.), massive savings will accrue and jobs will blossom.

For the most part, this doesn't ever really happen. In complex economic systems (like health care) pulling switch A often results in unintended result B (along with C, D, F and Q). Data from pilot projects, or existing health systems (like the Mayo Clinic or Geisinger) are often extrapolated far beyond the soil in which they originally flourished.

Subsidies for solar energy have turned into a massive boondoggle because the technology isn't competitive without massive subsidies, and the Chinese are subsidizing them even more heavily than we are. Biofuels are leading farmers to shift from food crops to fuel crops, hurting the poor by raising prices for dual-purpose crops (like corn). (And, from an environmental perspective, they're actually worse for the planet than the much maligned fossil fuels.)

And electronic health records mandated by Congress appear to induce more, not less, health care spending without driving the hoped for efficiency gains.

Last month, Megan McArdle, at the Daily Beast, wrote an excellent blog post explaining why the road to Hell is paved with pilot projects:

This is one more installment in a continuing series, brought to you by the universe, entitled "promising pilot projects often don't scale". They don't scale for corporations, and they don't scale for government agencies. They don't scale even when you put super smart people with expert credentials in charge of them. They don't scale even when you make sure to provide ample budget resources. Rolling something out across an existing system is substantially different from even a well-run test, and often, it simply doesn't translate. ...

Sometimes the success was due to the high quality, fully committed staff. Early childhood interventions show very solid success rates at doing things like reducing high school dropout and incarceration rates, and boosting employment in later life. Head Start does not show those same results--not unless you squint hard and kind of cock your head to the side so you can't see the whole study. Those pilot programs were staffed with highly trained specialists in early childhood education who had been recruited specially to do research. But when they went to roll out Head Start, it turned out the nation didn't have all these highly trained experts in early childhood education that you could recruit....

Megan cites a number of other excellent examples of why early results turn out to be much less promising than they first appeared (my favorite being the failure of New Coke).

But I'd add one other factor to Megan's many good observations.

In the private market, failure is the rule, success the very rare exception. Most new restaurants fail. Most new drug trials fail (at enormous cost). Most new small businesses fail.

And that failure is a good thing. It's one of the best features of market-driven economies. It means that investors, consumers, and taxpayers don't spend money on the NRBT that turns out to be a colossal billion-dollar bust.

Failure is what drives market efficiency, and what makes markets so incredibly adaptive. Markets represent the evolution of ideas and technologies in real time - rife with unintended and emergent consequences.

Ironically, liberals and conservatives often switch ideological roles when it comes to markets. Skeptics of intelligent design in nature, liberals often turn into proponents of intelligent design when it comes to public policy. If we have enough smart minds at the top, or so the theory goes, we can predict the consequences at the bottom, millions of minds and miles away. Alas, it doesn't work like that.

The temptation to assume that the NRBT will work as planned is perennial (that Mind triumphs over Markets), and so should be relentlessly questioned.

It's much more prudent to assume that unintended effects will largely swamp intended effects, and that people and institutions will continue to do what they were doing before the NRBT came along - maximizing their profits (or, if they're not for profit, like the vast majority of hospitals, maximizing their revenues).

A less hubristic approach is to assume we can only marginally control incentives. That means tweaking the health care reimbursement system so that it rewards efficiency and innovation, and then getting out of the way to let the market do what it does best - hatch many small-scale experiments and kill all but a handful.

EHRs may yet succeed, and deliver their promised benefits, especially in a more competitive health care system where providers have to answer to consumers, rather than regulators or insurers.

Until then, expect lots more stories like this one. And expect people to keep wondering why the NRBT never seems to become the NRBT.


Despite all the uncertainties of Obamacare, one fact has remained - insurance will become more expensive.

The NY Times reports that insurers are seeking double-digit rate increases in a number of markets - in California for instance, some 68,000 people will see an average increase of 18.8% in the individual market. In Connecticut, a state that will have one of the highest bars for participation in their insurance exchange, a slew of individual market products have already seen increases of about 14%, affecting over 20,000 people.

Under Obamacare, HHS requires states to conduct reviews of proposed health insurance premium increases. For states without an "effective review process," HHS will conduct the reviews itself. The basic idea is that exposing insurers to public and government scrutiny will help keep down insurance costs for consumers - but there is reason to doubt that this will pan out as well as hoped.

For starters, the rate increases are often justified by increases in costs - in Connecticut, 20% of the 14% jump was due to administrative expenses, while almost 70 percent was due to increases in actual medical costs (the largest of which was "professional services," which includes payments to doctors). Some will doubtless argue (and the NYT article addresses this) that this still means states should simply have the power to deny or modify rate increases - as New York has just done (some 36 other states have that power as well). The problem with this line of logic is that it doesn't address the underlying growth in costs - health care continues to become more expensive, and Obamacare doesn't help much by requiring more generous benefits and banning the ability to base premiums on health status.
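To put those Connecticut shares in rough percentage-point terms (treating the reported figures as approximate):

\[
0.20 \times 14\% \approx 2.8 \text{ points (administrative expenses)}, \qquad 0.70 \times 14\% \approx 9.8 \text{ points (medical costs)}
\]

In other words, roughly ten of the fourteen percentage points trace back to underlying medical spending - the part of the problem that rate review doesn't touch.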

Additionally, insurers have other options for addressing cost growth - rather than raising premiums, they can simply increase cost-sharing (such as co-insurance or deductibles) to make consumers foot more of the cost of their health care. This means that blunt policy tools like denying rate increases are unlikely to work, and would instead make the cost hikes less transparent.

Instead (though it's too early to tell now), states with more restrictive policies (such as denying rate increases or requiring more generous benefit packages) may very well see insurers exiting their markets as they decide that participation isn't worthwhile. As states establish their health insurance exchanges this year, it will be wise to keep an eye out for insurers refusing to offer policies in states with higher bars for participation.

So while the ultimate reason for rising insurance costs may be uncertain - more generous coverage, more administrative costs, or higher health care costs - the end result is still the same: under Obamacare consumers will be paying more for their insurance.


Paul Howard and Yevgeniy Feyman

The recent bill passed by both the House and the Senate effectively kicked the fiscal cliff can two months down the road and has now been signed by the President. Among its provisions is one that postpones, for the next year, a 26.5 percent cut to physician reimbursements from Medicare. Commonly known as the 'doc fix,' this measure has been used for many years to avert reimbursement cuts required by Medicare's Sustainable Growth Rate (SGR) formula. While physicians can rest easy for the next year, part of the cost of averting the payment cut is being funded by cuts to hospital reimbursements.

So what are we left with? The underlying problem with how Medicare's SGR is calculated remains unattended - come 2014, Medicare providers will once again be at the mercy of congressional maneuvering. Moreover, the hospitals facing the cuts are those that primarily treat poor populations (Disproportionate Share Hospitals).

The broader problem, which isn't addressed, is how Medicare's payments are calculated - the Resource-Based Relative Value Scale (RBRVS). Developed decades ago, the RBRVS structures Medicare reimbursements around four categories: mental effort, physical effort, skill, and time. Seemingly uncontroversial, the RBRVS has steadily grown to favor specialists over primary care doctors - reimbursements for specialist services have grown tremendously (even as many procedures, like cataract extraction, have become more routine and automated) while primary care physicians have seen their reimbursements remain static. Certainly, specialists often perform complicated procedures that require years of training to perform properly - and they deserve to be compensated fairly for their work. But primary care is similarly demanding, and patients rely on their physicians to diagnose one out of possibly dozens of ailments and refer them to the appropriate specialist - no small feat with an ever-growing number and variety of chronic diseases. We're also asking primary care physicians to shoulder more of the burden of chronic care management, in effect asking them to become health care's version of air traffic controllers.
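As a simplified sketch of how the scale translates into dollars (the structure below is a stylized version of Medicare's physician fee schedule, not the full formula, and the components are shown only for illustration):

\[
\text{payment} \approx \left(\mathrm{RVU}_{\text{work}} + \mathrm{RVU}_{\text{practice expense}} + \mathrm{RVU}_{\text{malpractice}}\right) \times \text{geographic adjustment} \times \text{conversion factor}
\]

The work RVU - built from the effort, skill, and time inputs described above - is set administratively, so a procedure whose real-world difficulty has fallen (like a now-routine cataract extraction) can keep a generous valuation for years.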

Of course, it's possible to avoid dealing with the RBRVS entirely by simply changing how the SGR fee update is calculated, using a method that always ensures a positive increase. But is this the right way to approach the question?

Congress should be agnostic about who performs a service, as long as the service is delivered effectively and efficiently. Congress should also set up a system that encourages innovators to replace expensive labor (services) with much less expensive diagnostics. By basing the RBRVS on the "mental skill" required to perform services, the system is implicitly biased toward increased utilization of labor rather than diagnostics.

Or, to put it another way, IBM's Watson could eventually deliver routine and complex analysis of a patient's health through a low-cost tablet app offering supercomputing services to a physician assistant, nurse, or primary care physician. This reality isn't that far away - the Tricorder X-Prize, sponsored by Qualcomm, seeks to reward the first company to "put healthcare in the palm of your hand" by essentially creating the ubiquitous Star Trek gadget. Mark Mills, senior fellow at the Manhattan Institute, writes:

The ultimate Tricorder idea is to access the wealth of (voluntary) data about what you've been doing, eating, how your own biological machinery has been operating, and marry it with a rich stream of highly precise and real-time physiologically-specific information about what's going in your body right now - wherever you are - and link this wirelessly to the analytic computing power that resides in the Cloud.

The new world of "Big Data" makes this possible - and with the exabytes of health data out there, it will help put healthcare decisions into the hands of patients.

The RBRVS and the SGR lock American health care into labor arrangements that are swiftly being overtaken by technologies that have the potential to radically change the cost and quality of American health care.  But their use will be constrained as long as pricing signals are based on assumptions about the value of labor that are woefully outdated.

A better solution would be to get out of the business of pricing services entirely, through a premium support mechanism that encourages robust competition among many different networks of competing health care providers. Price competition will encourage insurers and providers to seek out the most cost-effective and innovative mix of pricing and services.

Then we won't have to worry about the SGR or the RBRVS ever again.  And that would be a priceless gain for American health care.   


Anyone following the manufacturing sector will tell you that one of the most exciting trends is the advent of 3D printing. Using complex software, it is now possible to "print" three-dimensional objects, essentially creating a "desktop factory." It really couldn't be easier - you input the proper calibrations into your computer, feed in the raw materials, and let the high-powered lasers in the printer do the rest. For manufacturing, this is an exciting turn of events - last year, my colleague Mark Mills, senior fellow at the Manhattan Institute, discussed the enormous potential that 3D printing has for disrupting the manufacturing status quo.

More recently, it's spilled over into the biotech sphere. Autodesk, a CAD-software developer, just entered into a partnership with Organovo, a 3D bioprinting company, to develop software that can create architecturally accurate, living human tissue. Think about that for a second - software that enables the creation of human tissue. This represents a tremendous step forward in medical research. Though the technology is still far away, the logical endpoint is the ability to create replacement human organs or body parts.

There will be many challenges before this kind of technology becomes widely used - not least the safety and technical hurdles that stand in the way. With luck, FDA reform will ensure that regulatory barriers are minimized by then.


Now that HHS has released all of its guidance on essential health benefits - the minimum benefits that insurance plans must offer to be sold in a state - nine states have received approval for their benefits packages (out of 19 that have said they will build their own exchanges). Looking only at the nine approved so far, it becomes apparent that insurance will be no more affordable in most states than it was pre-Obamacare, even though one of the main goals of setting up health insurance exchanges is to offer consumers more choice among insurers. In fact, it is likely to be more expensive, and no more accessible, than before.

State-Level Insurance Mandates

State                     Previous Mandates    Proposed Mandates Under New Benchmark Plan
New York                  61                   65
Colorado                  58                   63
District of Columbia      27                   55
Washington                58                   64
Kentucky                  47                   56
Maryland                  67                   73
Massachusetts             48                   41
Oregon                    44                   56
Connecticut               63                   39

Source: "Previous Mandates" from the Council on Affordable Health Insurance's 2011 report; "New Mandates" from the National Conference of State Legislatures.

Under the law's essential health benefit requirements, HHS defines the categories of benefits, while states determine the specifics by selecting a benchmark plan for minimum coverage. For the most part, states have chosen plans with more generous coverage than their previous requirements mandated, with D.C. making the biggest jump. Some, like Connecticut, have actually reduced the total number of mandates, but have at the same time required insurers to commit to at least two years of participation in the exchange. (D.C. has done the opposite, saying it will allow any insurer that covers the minimum federal EHB to participate.) Thus far, as my colleague Avik Roy has noted, one large insurer has committed to participating in only a little more than a dozen exchanges; another in only ten. Fewer insurers means less competition, which means higher prices for consumers.

The market-influenced approach of health insurance exchanges is surely one of Obamacare's bright spots - if nothing else, a properly implemented exchange allows individuals to know exactly what kind of insurance policy they're purchasing, how much it costs, and what it covers. It encourages consumers to shop around for value, and encourages insurers to compete for people's business.

This isn't the approach that the ACA encourages, however.  Richer plans require higher premiums, which will in turn discourage young and healthy people from buying coverage, especially when they can pay a small penalty and buy the same coverage later if they become sick.

Given what we know about the approved exchanges to date (which, to be fair, isn't much), there isn't much reason to think that they will encourage robust competition. Increased state-level mandates and other requirements are likely to cancel out any potential for increased competition amongst insurers.

So what are we left with? For insurers, this is a new minefield of regulations to navigate. Insurers will likely avoid those exchanges with more mandates, and onerous rules and regulations. For consumers and taxpayers, this leads to fewer options to choose from and higher premiums.  A more flexible exchange format, encouraging more competition among insurers (Utah's clearinghouse model for instance has contracted to sell 140 plans already) and more choice for consumers, would be a far superior approach.


Over at Xconomy, Brian Patrick Quinn of Vertex Pharmaceuticals (the company that developed Kalydeco) discusses the need for a strong American manufacturing sector:

Manufacturing - as many others have argued - is vital to many strong businesses and to all strong societies, even in the 21st century.

Quinn makes a strong case for a resurgence in American manufacturing, making the important distinction that modern-day manufacturing is not the bleak, Dickensian dystopia that it was in days of yore. And while we hear every day about China's comparative advantage in manufacturing, we tend to forget about our own edge - technology.

Vertex, for instance, has started construction on a new production plant - one that is more efficient and capital-intensive than the old-fashioned, labor-heavy factories abroad. It allows higher-quality drug production at significantly lower cost, and as Quinn rightly notes, having the research lab next door allows new discoveries to be implemented more quickly.

This advantage isn't unique to Vertex either; the entire American pharmaceutical industry is ever more tech-savvy and innovative. The personalized medicine revolution underscores the need for a tight knit relationship between production and research, in order to adapt on the fly - something that factories abroad may have difficulty with. And this is essential in supporting innovation:

Perhaps the most valuable trait of the manufacturing sector is its capacity for supporting innovation. In fact, experience shows that innovation and manufacturing processes are too interdependent to work well when they're separated...

But supporting American manufacturing requires more than just sitting back - reducing regulatory barriers and appropriate push/pull incentives will be critical.  Uncertainty about the FDA's future course makes these investments more risky, and makes it less likely for investors to see a particular company as a 'winner'. After all, to have a manufacturing plant, you need to have something to manufacture.   On this front, having strong and consistent leadership from Commissioner Hamburg is a plus, as is the FDA's willingness to reform clinical trials requirements for antibiotics, accelerate access to "breakthrough therapies", and rationalize regulation in other ways.  While companies can innovate anywhere in the world, assuring rapid access to America's large pharmaceutical market is certainly an attraction in locating R&D and manufacturing facilities in the U.S., close to regulators.

States are doing their part as well. Massachusetts, for instance, has funded a 10-year, $1 billion life sciences initiative to support the existing biotech cluster in the region. Similar public-private partnerships at the local level can give companies an incentive to establish (or maintain) manufacturing plants in the region.

At the federal level, reforming the decrepit American tax system will also make the U.S. a more competitive location for global headquarters (and consequently manufacturing). After all, pharmaceutical manufacturing requires appropriate funding from venture capitalists and other investors who can choose from an international menu of tax regimes.  Our tax system, at a minimum, shouldn't be chasing them away.  

Let's hope Washington gets the memo.


We're just at the dawn of molecular medicine. And it's going to change everything.

Take cancer treatment, for instance. Next-generation therapies for cancer - including new molecular-targeted therapies, nanotechnology-enhanced chemotherapy, gene therapy, and cancer immunotherapy - have the potential to "disrupt" traditional cancer treatment paradigms, radically improving outcomes and (in the long run) sharply lowering the costs of treatment.

Researchers and the media have been talking about the revolution in "personalized medicine" for more than a decade, but the most promising therapies are only now beginning to reach the clinic, with even more powerful therapies in the pipeline behind them.

As these treatments reach the mainstream, they will make many of our current health care debates obsolete. Every few years, we wring our hands about the cost of new drugs, and ask how pharmaceutical companies can charge so much for treatments that only extend life by a few weeks or months.

Of course, incremental innovations are better than no innovation at all, and some new cancer therapies, like Gleevec, are truly "game changers"; for a handful of other cancers (like breast cancer and testicular cancer), survival rates have skyrocketed as companies and researchers have substantially improved both diagnostics and treatments. But for most solid tumors, and some blood cancers, the prognosis is still unremittingly grim and the treatment costs are very high.

That prognosis, however, is likely to change as the effectiveness of new treatments rises and their cost plummets with maturing technology. For instance, the New York Times this week chronicled how researchers at the Children's Hospital of Philadelphia genetically re-engineered leukemia patient Emma Whitehead's own T-cells - using a deactivated version of the virus that causes AIDS, no less - to attack her cancer, acute lymphoblastic leukemia. This was a last-ditch experimental treatment, because Emma's cancer had resisted every other treatment her doctors had tried. The Times explains:

To perform the treatment, doctors remove millions of the patient's T-cells - a type of white blood cell - and insert new genes that enable the T-cells to kill cancer cells. The technique employs a disabled form of H.I.V. because it is very good at carrying genetic material into T-cells. The new genes program the T-cells to attack B-cells, a normal part of the immune system that turn malignant in leukemia.

The altered T-cells - called chimeric antigen receptor cells - are then dripped back into the patient's veins, and if all goes well they multiply and start destroying the cancer.

The T-cells home in on a protein called CD-19 that is found on the surface of most B-cells, whether they are healthy or malignant.

What is even more remarkable is that when Emma developed a life-threatening complication from the immunotherapy, her doctors were quickly able to run a battery of diagnostic tests to isolate the specific immune response that was causing the problem (an overproduction of interleukin-6). They then used another drug (off-label, normally used for rheumatoid arthritis) to save her life. The treatment has since been used successfully in several other patients who developed the same complication.

Researchers might have to administer another dose or two of the therapy later, or might not - they can easily track her cancerous B-cells to make sure the disease remains in check. Her genetically altered T-cells, however, will remain in the body, roaming hunter-killers seeking out signs of cancer. (Although, as the Times notes, the engineered T-cells attack all of Emma's B-cells, cancerous or not, since both express the same cell-surface protein. If researchers can identify a more specific cancer protein signature, they can spare the healthy cells by making the engineered T-cells even more precise.)

t-cells.jpg

The work at CHOP is a stunning advance for cancer immunotherapy and personalized medicine, since the T-cells must be tailored for each patient, rather than brewed in enormous vats like traditional drugs. The drug company Novartis is backing the commercial development of the technology, so it can eventually be scaled up for far more cancer patients - and eventually applied to other cancers, including solid tumors. (The first successful application of cancer immunotherapy, although it appears to be less successful as a therapeutic, is Provenge for advanced prostate cancer.)

As we suggested earlier, another bright spot in this story is the cost of the modified cell therapy - about $20,000 per patient, according to the Times. Compare that to the cost of chemotherapy: one drug, Clolar, can cost $68,000 for two weeks of treatment for relapsed pediatric ALL. Bone marrow transplants, another ALL treatment option, can cost hundreds of thousands of dollars and require long hospital stays.

Another advantage of tailored immunotherapies (like other targeted therapies) is that they can show efficacy rapidly in smaller clinical trials, lowering the cost of development and allowing companies to press FDA regulators for rapid marketing approval in light of the high benefit-risk ratio for patients who've run out of other options. Doctors can then - as Emma's did - fine-tune them on the fly as diagnostic and treatment options improve around them.

CHOP and Novartis are helping to pioneer a completely different model of drug development and approval, one that can help de-risk the entire industry and enable rapid follow-on innovations. While the industry is going through the doldrums now, Emma's saga is a welcome sign that its future - and the science underlying it - is bright.

Of course, for Emma Whitehead and her parents, just having a future to look forward to is enough. The next time you hear someone worry about the cost of new cancer treatments, you might want to mention her story to them.


Paul Howard & Yevgeniy Feyman

Democrats sold - and continue to sell - the ACA as a way to cover the "millions of people" with pre-existing conditions who can't get affordable insurance. For instance, in defending their recently released guaranteed issue regulations, HHS claimed that 129 million Americans have pre-existing conditions.

This is a huge bait and switch. The vast majority of Americans with "pre-existing conditions" already have insurance. Why? Age is strongly correlated with developing a chronic illness - and seniors are covered by Medicare. If you're disabled, poor, and unable to work, you're eligible for Medicare and Medicaid. The low-income poor (healthy or not) are already eligible for Medicaid. In between, the majority of Americans have employer-provided insurance, and are already protected from pre-existing condition exclusions or rate hikes due to illness through HIPAA.
Who's left then? Not that many people.

In fact, preliminary results from the Centers for Disease Control and Prevention's (CDC) National Health Interview Survey indicate that even among the uninsured, only 1.7 percent considered themselves to be in poor health, compared to 6.8 percent of those on Medicaid and just 0.6 percent of those with private insurance.

A Medical Expenditure Panel Survey report from 2007-08 also estimated that only 16 percent of the uninsured had two or more chronic conditions - compared to one-third of those with private insurance and 50 percent of those with public coverage (Medicare and Medicaid).

In a 2010 National Affairs article, James Capretta and Tom Miller estimate that only 2-4 million uninsured Americans with pre-existing conditions need additional financial help accessing insurance, preferably through high risk pools.

High-risk pools allow people with serious pre-existing conditions to get affordable coverage without increasing insurance costs for the young and healthy uninsured. Yet this is where Obamacare has also failed, despite a modest effort. Under the law, federal high-risk pools were established to provide coverage to uninsured patients with pre-existing conditions. A recent evaluation found that only about 45,000 people signed up for these pools - a fraction of the 375,000 that CMS expected. Reasons proposed for the pools' failure include low funding (only about $5 billion) and high sign-up costs. Regardless, for the last four years Obamacare has failed to expand coverage to those with pre-existing conditions who really needed it.

Ironically, Obamacare also attacks consumer-driven health plans - which a recent Mercer report credits with helping to hold health insurance inflation to a 15-year low - threatening to drive up insurance costs just as we're identifying the tools to keep them in check. Various requirements, such as the minimum medical loss ratio (insurers must spend at least 80 percent of premiums on benefits) and minimum actuarial value (plans must cover at least 60 percent of expected health care costs), make consumer-driven health plans - which often pair low premiums with high deductibles - less viable.
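As a rough illustration of how the loss-ratio floor bites (the figures below are hypothetical), suppose an insurer collects $100 million in premiums and pays out $76 million in claims and quality-improvement spending:

\[
\mathrm{MLR} = \frac{\text{claims (plus quality improvement spending)}}{\text{premium revenue}} = \frac{\$76\text{M}}{\$100\text{M}} = 76\% < 80\%
\]

so the insurer would owe roughly (80% - 76%) x $100M = $4 million in rebates to enrollees. Because consumer-driven plans collect lower premiums, their fixed administrative costs eat up a larger share of each premium dollar, which is one reason they run up against this floor more easily.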

Ultimately, the biggest flaw in the ACA's insurance market reform is that it imposes expensive insurance regulations on the entire small-group and individual insurance markets, increasing the cost of getting insurance for the vast majority of the uninsured, who are basically in good health. It also extends premium subsidies up to 400% of the poverty level - to people who could easily afford to purchase insurance on their own.

Obamacare's failure at what should have been its primary goals leaves the door open for conservatives to start pushing for reform. The House could pass legislation repealing Obamacare's community rating and guaranteed issue regulations (as our colleague Avik Roy has suggested), and fixing Obamacare's flawed high risk pools. Paring back the subsidies (from 400% to 200% or 300%) would also lower Obamacare's price tag while still helping people who need it the most.

Governors of states that refuse to establish Obamacare's health exchanges (or expand Medicaid coverage) could also push for legislation to allow Medicaid funds to be used to help purchase private insurance for that vast majority of non-disabled or elderly Medicaid enrollees. This would provide high quality private coverage, and prevent people from shuffling between Medicaid and private insurance as their income changed. True state flexibility in Medicaid program design might also convince many governors to re-think their opposition to Obamacare's Medicaid expansion.

The debate over fixing or fighting Obamacare is likely to continue for years to come. In the meantime, moderates and conservatives should point out that Obamacare's biggest shortcomings are self-inflicted - they didn't have to happen in the first place, and they can (and should) be remedied.

