Interview
Secrets

How drugs got
so expensive.

Jack Scannell on Eroom’s law,
drug discovery, and COVID-19.

Introduction

A race to
the bottom.

Eroom’s law is the observation that the inflation-adjusted cost of developing a new drug roughly doubled every nine years from 1950 through 2010. All told, that amounts to a roughly 80-fold decline in the productivity of drug R&D.

It’s hard to say what’s worse: the staggering decline, or the fact that it went on unchecked for six decades. Either way, as the total sum of our efforts to look for life-saving secrets, Eroom’s law leaves no doubt that our ability to explore the frontier has atrophied catastrophically.

A graph of Eroom’s law shows the cost of developing a new drug roughly doubling every nine years from 1950 through 2010.
By 2010, the total R&D spend per drug approved was about a hundred times higher, in real dollars, than it was in 1950. Graph by Jack Scannell
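
As a quick sanity check on the headline numbers, here is a minimal back-of-the-envelope sketch, assuming a constant nine-year doubling time over the sixty years shown in the graph:

```python
# Back-of-the-envelope arithmetic behind Eroom's law: if the inflation-adjusted
# R&D cost per approved drug doubles every nine years, how much higher is it
# after the sixty years from 1950 to 2010?
doubling_time_years = 9
elapsed_years = 2010 - 1950  # 60 years

fold_increase = 2 ** (elapsed_years / doubling_time_years)
print(f"Cost per approved drug grew roughly {fold_increase:.0f}-fold")
# Equivalently, drugs approved per (real) billion dollars of R&D spending fell
# by about two orders of magnitude, the same ballpark as the 80-fold figure above.
```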

Although some aspects of this disturbing trend had been known since the 1980s, Eroom’s law was formally articulated in 2012, at the tail end of the long decline, in a seminal paper published in Nature Reviews Drug Discovery by four co-authors led by Jack Scannell.1

In a field plagued by narrow specialization, Jack Scannell is something of a renaissance man, having seen the industry from a distance and up close while working in drug research, private consulting, pharma and biotech investing, and policy advising.

When Scannell worked as an investment analyst, he used to ask the top executives at major pharma firms to explain why the productivity of drug R&D kept falling for so long, when the science and technology inputs were getting cheaper and more efficient all the time. Generally, he’d get the same kind of answer: the industry had run out of “low-hanging fruit.”

In the course of his own later investigations, Scannell began to focus on a technical problem in drug R&D called “model validity,” which he believed to be holding back progress in the science of drug discovery. And while everyone can acknowledge that there’s a host of issues to fix with the drug industry, we agree with Scannell that starting with the science is a worthwhile step in the heroic effort needed to restart general progress in the field.

So, we talk to Jack Scannell about the beginning and end of Eroom’s law, the frontier of drug discovery, and how drug R&D could help win the fight against COVID-19.

  1. Diagnosing the decline in pharmaceutical R&D efficiency, Jack Scannell et al., Nature Reviews Drug Discovery, 2012↩︎


Beginnings

Opposites
attract attention.

Your 2012 paper diagnosed the decline in drug R&D productivity over the preceding six decades. What moved you to write it? And how did you land on the term Eroom’s law?

There was this nagging puzzle: Since the 1950s, lots of important R&D inputs had got tens, hundreds, or even billions of times cheaper, while the number of drugs approved per $1 billion spent on R&D had fallen by roughly two orders of magnitude. That struck me as an ugly contrast.

Many people had bemoaned the R&D productivity problem before, but not enough had drawn attention to the contrast between input and output efficiency. I thought it was an important public policy problem, and it seemed to attract less attention than it deserved.

I remember talking about the contrast with a friend of mine, William Bains. We were chatting about how drug industry productivity looked like Moore’s law running backwards, and that’s how the term “Eroom’s law” (“Moore” spelled in reverse) was born.

The drug industry already knew it had an R&D productivity problem, so what was missing in previous attempts to diagnose the issue?

As long ago as 1980, there was a great paper on the R&D slowdown by Fred Steward and George Wibberley, which we cited along with some other good work in our paper.2

However, in general, I thought there were two problems:

First, a lot of the diagnostic efforts were private. They were happening in some of the drug and biotech companies, but they weren’t making it into the public domain, and so they couldn’t affect science policy or investor behavior.

Second, what often did make it into the public domain were not really diagnoses per se, but attempts to sell “solutions” that, in many cases, were largely unrelated to the core problems of drug R&D.

If, for example, you were an academic geneticist, low R&D productivity was an occasion to promote more investment in genomics. If you were an investment bank, you saw an opportunity to promote M&A to Big Pharma. If you were a consulting firm, you might say that the reporting lines or the business processes were wrong: things you could fix with a re-engineering project.

At the time, I was working as an investment analyst. A bad prognosis can be very interesting to an analyst, because if you think an industry is in real trouble, you don’t have to own it. That meant I had a different set of biases to most people, and I didn’t have any particular solution to sell.

On the whole, I think lots of people would’ve been better qualified to write the paper, but their day jobs meant they couldn’t write it.

Eroom’s law was widely adopted after your paper came out. What was the general reaction you saw after you published?

People were certainly glad that someone had put names to things that they knew, or suspected, were happening already. Apart from Eroom’s law itself, people liked something else we talked about in the paper, which I called the “Better than the Beatles” problem.

Imagine how hard it would be to sell new pop songs if every new song had to be better than the Beatles, if you could download all the old Beatles records for free, and if no one ever got bored of listening to the same Beatles songs over and over again.

That’s analogous to the problem the drug industry faces, in that some old drugs are very good, they cost next to nothing when their patents expire, and doctors don’t get bored of prescribing them.

So in many therapy areas, you end up with this ever-expanding back catalog of virtually free and very good medicine, against which new medicine would have to compete. That reduces the value of as-yet-undiscovered drugs, and pushes drug R&D towards areas where we don’t have a catalog of good cheap drugs, which also tend to be areas where R&D is likely to be harder.

Inevitably, people also appropriated Eroom’s law to justify what they wanted to do anyway. That reminds me of a line from a famous British TV comedy, Yes, Prime Minister, which is about a hapless politician who is ruthlessly managed by his civil servants: “Something must be done. This is something. Therefore, we must do it.” I do occasionally see papers that make the same kind of argument.3

However, to be honest, apart from a few people in the drug industry telling me that the paper became required reading, and aside from seeing the Eroom graph in lots of presentations, it’s not clear to me that anything different was done after the paper was read.

  2. Drug innovation—what’s slowing it down?, Fred Steward and George Wibberley, Nature, 1980↩︎

  3. See Politician’s syllogism, Wikipedia↩︎


Stasis

Pay more.
Get less.

The standard fixes to raise drug R&D productivity tend to focus on process efficiency and cost-cutting. Why haven’t these solutions been enough to stop and reverse Eroom’s law?

Simply put, they’re addressing the wrong class of problems. Suppose we were looking at the oil industry, and we found that it cost a hundred times more to produce a barrel of oil today than it did in 1950, despite the fact that the technology for pumping oil out of the ground had got billions of times cheaper.

Under those circumstances, no one would say the primary problem that the oil industry has is that the reporting lines are wrong, and we need to fix the organizational structure and get the drill procurement processes sorted out. It would be obvious that there’s a geological explanation for the problem.

When drug companies talk to their shareholders, I suspect they talk about processes and cost-cutting because that’s the sort of thing shareholders can understand. You talk less about the science and technology choices because that’s very hard to evaluate from the inside, let alone the outside.

It’s very difficult to engage in those technical conversations, even for scientists. If you’re a cancer biologist in a drug company, it’s hard for you to assess respiratory medicine or depression. Expertise tends to be narrow.

Even when company executives shrug their shoulders and give the default explanation, that it’s some kind of low-hanging fruit problem, that’s still not a very straightforward or easy conversation to have with shareholders.

After all, the fruit ain’t gonna get any lower. If firms insist that they’re running out of tractable opportunities, despite all the efficiency gains, they’re almost telling their shareholders that they should be doing a lot less R&D.

Drug R&D spending grew for decades, even as drug companies lost money on their investments. How did competition factor in? At what point does R&D spending turn into a race to outspend everyone else?

Returns on R&D are stochastic and very skewed. The industry makes a disproportionate amount of its profits from a few very big drugs. And if you believe that you have one of them in your pipeline, then you expect your returns on R&D to be high.

Even for the big companies, the economics are very sensitive to one or two products, and so long as a few people in the industry are winning, it’s very hard for other people to walk away from the game.

As a result, the industry could generate poor returns for relatively long periods of time, and few drug companies would be willing to take the painful measures they would otherwise apply if they were narrowly maximizing shareholder value.

Today, R&D spending as a percentage of sales is at an all-time high. For the big firms, it is 15-20 percent of sales versus roughly 5 percent of sales in the “golden age” of drug discovery in the 1960s. And yet, the financial returns on R&D investment look to be at an all-time low.

There’s an element of the agency problem at work here. Few people who fight their way to the top of a drug company do so because they want to fire all the scientists and pay a bigger dividend. Typically, what they really want is to discover better drugs.

So, as long as some companies in the industry seem to be doing well, firms can plausibly express confidence in their pipeline and scientists, and defend their use of shareholders’ money, despite the fact that, on average, they’re going to lose some of it.

In the last decade, the debacle with a company called Valeant made R&D cost-cutting even less fashionable. Valeant was transparent in its view that the drug industry’s returns on R&D were lousy, so it borrowed money to acquire a slew of other drug companies, and slashed their R&D spending.

At the same time, and away from the spotlight, Valeant was ruthless in ramping up the prices of the drugs that came under its control. It was also quietly buying up pharmacy companies to make it easier to force insurance companies to cough up the cash for its newly-inflated drug prices.

Valeant looked untouchable until things started to unravel very quickly in 2015–2016. The company’s behavior first caught the attention of short sellers and journalists, and then came the politicians. Its pricing practices came under scrutiny, the growth evaporated, the firm was left with too much debt, and the stock price cratered.4

So while the financial returns on R&D may be poor, the idea that you brag about slashing R&D costs has been largely discredited. No one wants to look like Valeant.

Your paper anticipated that Eroom’s law would not continue to hold, and indeed, we’ve seen an uptick in the number of new drug approvals in the last decade. Has the industry turned a corner?

FDA approvals have almost doubled in the last decade. Where we were getting 20-25 drug approvals a year before 2010, we’re running 40-50 approvals a year at the moment.5

A very large proportion of those drugs have been for oncology and for rare diseases. These two therapy areas have become attractive for a number of reasons:

  1. They are very price-insensitive markets, where the drug industry has discovered over time quite how much it can charge, and the high prices pull in more R&D capital.
  2. The regulatory environment in those areas has been relatively benign.
  3. The target-based drug discovery that drug companies industrialized works better for genetically simple diseases, like orphan diseases and molecularly-defined cancer subtypes, where much of the pathology can be blamed on a single gene, or gene product, that you can isolate and characterize in the discovery process.

The industry refers to these therapy areas as having “unmet medical needs,” which is effectively the inverse of the Better than the Beatles problem.

One could try to paint a quantitative picture of the areas where there is both an unmet medical need and a tractable technological opportunity. I haven’t seen that done, but I suspect that if one sketched it out in detail, one might find the overlap smaller than people would hope.

Although lots of rare diseases may prove tractable, my suspicion is that things like Alzheimer’s and metastatic solid cancers will continue to be difficult to treat well with drugs. So, while the post-2010 doubling in drug approvals is unambiguously good news, it may not ultimately have the broad health impact of the great drugs discovered from the 1940s through the 1970s.

  4. For an overview of the Valeant story, see Valeant: A timeline of the big Pharma scandal, Fortune, 2015↩︎

  5. For more on the factors driving the recent uptick in drug approvals, see Breaking Eroom’s Law, Michael Ringel, Jack Scannell, Mathias Baedeker, and Ulrik Schulze, Nature Reviews Drug Discovery, 2020↩︎


Frontier

In search of
a better model.

Despite sifting through enormous compound libraries, drug R&D labs often have a hard time discovering good drug candidates. What do you think is holding back drug discovery today?

The rate-limiting step in drug R&D has to do with what people in the drug industry call “target validation” and what I would call “screening and disease model validity.”

Drugs are typically designed to bind a “target”: a piece of biological machinery that is thought to have some causal role in the disease you are trying to treat. Target validation is about making sure that the piece of biological machinery really does what you think it does, before you run expensive clinical trials in people.

You can only easily experiment on the biological machinery outside of humans, whether in a test tube, in a rat, or with a bit of tissue in a dish. The logical basis of using any one of these “models” is the assumption that the performance of drug candidates in the model is correlated with their performance in sick humans.

Of course, everyone in the drug industry knows that good models are better than bad models, but until you run the decision-theoretic math, which too few people have done, you don’t realize that a marginally better model is 10 or even 100 times more productive than a marginally worse model.6
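
A minimal simulation sketch of that decision-theoretic point (illustrative numbers and a simple Gaussian set-up, not the analysis from the cited paper): each candidate has a latent true clinical effect and a screening-model score correlated with it at strength rho; a campaign screens a library, advances the top-scoring candidate, and counts a hit if that candidate turns out to be among the rare true actives.

```python
# Illustrative "quality beats quantity" simulation (assumed numbers, not the
# model from the Scannell & Bosley paper): a slightly more valid screening
# model can matter more than screening many times more compounds.
import numpy as np

rng = np.random.default_rng(0)

def hit_rate(rho, n_candidates, n_campaigns=2_000, base_rate=1e-3):
    """Fraction of campaigns in which the top-scoring candidate is truly active.

    rho        -- correlation between the model score and the true clinical effect
    base_rate  -- fraction of candidates that are genuinely active
    """
    hits = 0
    for _ in range(n_campaigns):
        true_effect = rng.standard_normal(n_candidates)
        noise = rng.standard_normal(n_candidates)
        score = rho * true_effect + np.sqrt(1.0 - rho**2) * noise
        best = np.argmax(score)  # advance the best-looking candidate
        # "Truly active" = in the top base_rate fraction by true effect.
        hits += true_effect[best] >= np.quantile(true_effect, 1.0 - base_rate)
    return hits / n_campaigns

print("rho=0.4,  20k compounds:", hit_rate(0.4, 20_000))
print("rho=0.4, 200k compounds:", hit_rate(0.4, 200_000))  # 10x brute force
print("rho=0.5,  20k compounds:", hit_rate(0.5, 20_000))   # slightly better model
```

Under these assumed numbers, the marginally more valid model tends to beat a ten-fold larger screen run with the weaker one, which is the flavor of the decision-theoretic result.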

Yet, it turns out to be deceptively difficult to get people to act on the idea that something they already believe to be important—model validity—is, in fact, far more important than they think.

Nonetheless, I’m optimistic. British cycling has been very successful in recent years under coach Dave Brailsford, and his philosophy is that if you make enough marginal gains, they eventually add up to a sizable improvement. Likewise, if I’m right about the link between model validity and R&D productivity, then there should be lots of small things that drug companies could do and that would cumulatively have a big effect.

If marginal differences in model quality can be orders of magnitude more predictive, why haven’t we seen enough investment in better target validation tools?

It used to be that medicinal chemistry was the difficult bit in drug R&D, and patents were a great way to protect unique chemical compounds. Today, it’s target validation that’s difficult, and drug companies don’t have a great way of appropriating much of the value of incremental R&D investment in target validation.

For instance, a drug company could spend a few hundred million dollars working out a new biological mechanism for Alzheimer’s and developing a very valid in vitro or in vivo model. It might even patent the model and use it to find drugs that worked in early-stage clinical trials in people.

But at that point, the basic therapeutic mechanism is proven for all to see. The target has been well and truly validated by the human result. All the other companies can then develop drugs against the same target without having to do all that investment in preclinical models.

This is a hard problem to solve, but I do think we need to consider how we could shift the financial incentives so more is spent on the bits of R&D that are difficult, like target validation, and perhaps less is spent on bits that technology has already made much easier, like medicinal chemistry and antibody production.

The present situation makes no sense. In therapy areas like metastatic cancer and Alzheimer’s, you have expensive, low-probability chemical roulette played by lots of companies using the same kinds of disease models that they all know are lousy.

People often find unexpected uses for drugs once they’re released, yet field discovery is still underexplored by the industry. Could it be rehabilitated as a valid method for drug discovery?

A 2006 study that looked at this found that nearly 60 percent of new medical uses for drugs approved in a single year were discovered by practicing doctors who had little or nothing to do with drug companies or universities.7

This is interesting for a number of reasons:

First, there are only on the order of 1,400 approved drugs on which field discovery by physicians is possible, while pharma companies will have libraries on the order of 10⁷ compounds, and the means to design and make many orders of magnitude more. I suspect that doctors are able to discover new uses with a very small set of compounds because the model they observe is so valid: a real clinical response in a real patient.

Second, conventional drug R&D enjoys various kinds of subsidies, tax breaks, good patent protection, and a tolerance for high drug prices, whereas field discovery happens despite very unfavorable economic incentives.

For example, once a drug goes on the market, its patent expiry clock starts ticking, and that often makes it impractical to invest in the kind of evidence that would back up the unexpected discovery of new uses. It’s much harder to make money on field discoveries, so they don’t get promoted to the same degree.

Third, medical practice is heavily influenced by an approach called “evidence-based medicine,” which holds that randomized controlled trials, and meta-analyses of those trials, are true and believable in a way that nothing else is, because they do a good job avoiding “statistical bias.”

However, the randomized controlled trials are generally paid for by drug companies for the purposes of regulatory approval. They typically recruit young and fit patients who are unlike the older and sicker patients who get the drugs in the real world. And the meta-analyses are based on the phase III trials the drug companies chose to run. All this means that they often lack what you might call “ecological validity.”

Observational studies of real patients in the real world—the kind you could do at low cost to back up a field discovery—are dismissed on the grounds that they can’t rigorously eliminate statistical bias, despite the fact they’re often clear winners in terms of ecological validity.

Why modern medicine is more concerned with statistical bias than ecological validity is something of a mystery to me, but that certainly puts field discovery at a disadvantage.

Of course, it’s possible that new technologies could shift the balance. A world in which the response of patients to real-world pharmacology is more systematically captured and analyzed could do great things for user-led innovation.

  6. For an outlook on decision theory in drug R&D, see When Quality Beats Quantity: Decision Theory, Drug Discovery, and the Reproducibility Crisis, Jack Scannell and Jim Bosley, PLOS ONE, 2016↩︎

  7. The study tracked every drug approved by the FDA in 1998, looking at new medical uses discovered for those 30-or-so drugs since coming on the market. And of the 150 new medical uses reported, it found that close to 60 percent were discovered by practicing doctors unaffiliated with any research organization. — The major role of clinicians in the discovery of off-label drug therapies, Harold J. DeMonaco, Ayfer Ali, and Eric von Hippel, Pharmacotherapy, 2006↩︎


Pandemic

To stop
a viral loop.

We’ve seen conflicting reports on promising treatments for COVID-19. When do you think we’ll start having definitive answers?

COVID-19 has certainly triggered a frenzy of activity in drug testing, but we have to guard against dumb luck contaminating the results of trials.

When people are running lots of clinical studies—in particular smaller ones—with lots of drug candidates, a few drugs might end up looking good through nothing more than sheer luck. Those hit the headlines, and we quietly ignore the unlucky drugs. In science, that’s related to what’s called the Proteus phenomenon.8
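
A toy sketch of that multiplicity problem, using assumed numbers rather than real COVID-19 trial data: if many small two-arm trials are run on drugs that in truth do nothing, a handful will still cross the conventional p < 0.05 bar by chance, and those are the ones that make headlines.

```python
# Toy multiplicity simulation: many small trials of drugs with no real effect,
# analyzed with a two-sample t-test. Roughly 5% will look "significant" anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_trials = 200          # many small studies running in parallel
patients_per_arm = 30   # each trial is small
lucky_drugs = 0

for _ in range(n_trials):
    # Both arms drawn from the same recovery-time distribution: no true effect.
    control = rng.normal(loc=14.0, scale=5.0, size=patients_per_arm)
    treated = rng.normal(loc=14.0, scale=5.0, size=patients_per_arm)
    p_value = stats.ttest_ind(treated, control).pvalue
    lucky_drugs += p_value < 0.05

print(f"{lucky_drugs} of {n_trials} ineffective drugs look promising by luck alone")
```

With these numbers, something like ten of the two hundred useless drugs will typically clear the bar, which is why early headline results need confirming in larger, better-controlled trials.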

So the next step is to confirm the results in more trials, but I think we’ll still get to the truth very quickly. Governments and drug companies will throw money at the problem. The regulatory barriers will be low. And the drugs won’t have to be better than safe and effective old drugs.

Patients are also easy to identify, and lots of them would love to enroll in clinical trials. The infection comes and goes quickly, so it only takes a few weeks to know if people have recovered. All of this should make it quick to get a definitive answer on any specific drug.

Of course, that doesn’t tell us if the definitive answer will be an answer we like. We might just find out that the drugs don’t work, or that they don’t work once people have become really sick. A common issue with anti-viral drugs is that you have to give them early in the disease to have much of an effect.

With COVID-19, which is pretty mild in most patients, that means that the drugs would have to be very safe if they’re to be widely distributed. If they look a bit toxic or intolerable, you might only use them in high-risk patients.

Can pharma companies develop new anti-infectives against COVID-19 within a reasonable timeframe?

For new drugs that haven’t yet been tested in humans, I expect fairly rapid progress by drug R&D standards, which is to say, a very small number of years.

Anti-infectives haven’t been a big focus of the drug industry, due to the Better than the Beatles problem. By the 1980s, there were lots of good low-cost ways of treating or preventing the infectious diseases that had been a massive problem fifty years earlier. I summarized the main causes of the decline in antibiotic R&D in a 2014 paper.9

However, COVID-19 has no specific treatment and is causing chaos in healthcare systems, so the Better than the Beatles problem doesn’t apply. Any pharma company that comes up with something good will be hailed as a hero.

Infectious diseases tend to be relatively tractable in technical terms. The screening and disease models can be pretty good compared to Alzheimer’s or other chronic diseases of old age, since pathogens and people have very different biology. You can often find a bit of biological machinery that matters to the pathogen, and selectively poison it without poisoning the person.

It’s worth remembering that target validation is often the rate-limiting step. If the selective poisoning works well in vitro and in animal models, then there’s a good chance it will work in people.

What could hamper the effort to develop new drugs would be either the rapid development and deployment of a really good vaccine, or the discovery of an existing drug that works extremely well. Then we’d be back to the Better than the Beatles problem.

With viral diseases, there’s a tendency to fight the last war, only to be taken by surprise when a new virus hits. What systematic preemptive steps can we take to prevent future viral epidemics?

I think that places like Taiwan, Singapore, and South Korea have shown us how you can control pandemics without the need for novel drugs, and without locking everyone in their home for months on end.

Their experiences with the SARS virus in the early 2000s meant they built the public health infrastructure that could deal with this kind of problem. And the public health lessons they learned then are indeed working well against COVID-19, certainly much better than what we’ve seen in the U.S. and Europe. The approach calls for widespread viral testing, tracing the contacts of infected people, and applying rigorous quarantine.

Turning to anti-viral treatments, in 2017, an organization called CEPI was launched with the specific aim of improving the world’s ability to develop new vaccines against potential global pathogens, like SARS, MERS, Zika, Ebola, etc.10

The area is beyond my expertise, so I don’t know what people have done already, but it doesn’t seem a great stretch to imagine a similar organization cataloging the kinds of viruses that have the capacity to jump the species barrier and cause new human pandemics.

It would evaluate a large set of approved drugs and antiviral drug candidates against these potential future pathogens, in standard in vitro models and animal models, and do some preliminary toxicology studies on the drugs. It would also plan for the human trials that would be needed once a new virus did appear in humans.

If we did this, and then were to face a new viral pandemic, I think that would give the world a big pharmacological head start, and that could make all the difference.

  8. Wikipedia defines the Proteus phenomenon as the tendency for early replications of a scientific work to contradict the original findings.↩︎

  9. The Development of Antimicrobial Drugs and Diagnostics, Jack Scannell, The Review on Antimicrobial Resistance, 2014↩︎

  10. The Coalition for Epidemic Preparedness Innovations (CEPI) describes itself as “a global alliance financing and coordinating the development of vaccines against emerging infectious diseases.”↩︎