at whether industry funding is associated with pro-industry results. Each took a slightly different approach to finding research papers, and both found that industry-funded trials were, overall, about four times more likely to report positive results.4 A further review in 2007 looked at the new studies that had been published in the four years after these two earlier reviews: it found twenty more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results.5
I am setting out this evidence at length because I want to be absolutely clear that there is no doubt on the issue. Industry-sponsored trials give favourable results, and that is not my opinion, or a hunch from the occasional passing study. This is a very well-documented problem, and it has been researched extensively, without anybody stepping in to take effective action, as we shall see.
There is one last study I’d like to tell you about. It turns out that this pattern of industry-funded trials being vastly more likely to give positive results persists even when you move away from published academic papers, and look instead at trial reports from academic conferences, where data often appears for the first time (in fact, as we shall see, sometimes trial results only appear at an academic conference, with very little information on how the study was conducted).
Fries and Krishnan studied all the research abstracts presented at the 2001 American College of Rheumatology meetings which reported any kind of trial, and acknowledged industry sponsorship, in order to find out what proportion had results that favoured the sponsor’s drug. There is a small punch-line coming, and to understand it we need to cover a little of what an academic paper looks like. In general, the results section is extensive: the raw numbers are given for each outcome, and for each possible causal factor, but not just as raw figures. The ‘ranges’ are given, subgroups are perhaps explored, statistical tests are conducted, and each detail of the result is described in table form, and in shorter narrative form in the text, explaining the most important results. This lengthy process is usually spread over several pages.
In Fries and Krishnan's 2004 paper, this level of detail was unnecessary. The results section is a single, simple, and – I like to imagine – fairly passive-aggressive sentence:
The results from every RCT (45 out of 45) favored the drug of the sponsor.
This extreme finding has a very interesting side effect, for those interested in time-saving shortcuts. Since every industry-sponsored trial had a positive result, that’s all you’d need to know about a piece of work to predict its outcome: if it was funded by industry, you could know with absolute certainty that the trial found the drug was great.
How does this happen? How do industry-sponsored trials almost always manage to get a positive result? It is, as far as anyone can be certain, a combination of factors. It may be that companies are more likely to run trials when they’re more confident their treatment is going to ‘win’; this sounds reasonable, although even this conflicts with the ethical principle that you should only do a trial when there’s genuine uncertainty about which treatment is best (otherwise you’re exposing half of your participants to a treatment you already know to be inferior). Sometimes the chances of one treatment winning can be increased with outright design flaws. You can compare your new drug with something you know to be rubbish – an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully, so they are more likely to get better on your treatment. You can peek at the results halfway through, and stop your trial early if they look good (which is – for interesting reasons we shall discuss – statistical poison). And so on.
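That last trick – peeking at the results and stopping early when they look good – deserves a moment's illustration, because it is easy to simulate. The sketch below is mine, not from any of the studies discussed here, and the numbers in it (400 patients per arm, five interim looks) are arbitrary. It compares two *identical* treatments, so any "significant" difference is, by construction, a false positive; the only question is how often each analysis strategy produces one.

```python
# Toy simulation of why interim peeking is "statistical poison".
# Both arms draw from the same distribution, so every significant
# result is a false positive. Parameters are illustrative only.
import random
from statistics import NormalDist

random.seed(0)
Z = NormalDist().inv_cdf(0.975)  # two-sided 5% significance threshold (~1.96)

def trial(n=400, looks=5):
    """Simulate one trial; return (significant at final look only,
    significant at ANY of the interim looks)."""
    a = [random.gauss(0, 1) for _ in range(n)]  # "new drug" arm
    b = [random.gauss(0, 1) for _ in range(n)]  # identical comparator arm
    sig_any = False
    for k in range(n // looks, n + 1, n // looks):
        diff = (sum(a[:k]) - sum(b[:k])) / k    # difference in means so far
        z = diff / (2 / k) ** 0.5               # both arms have variance 1
        if abs(z) > Z:
            sig_any = True                      # peeker stops here, declares victory
    # The honest strategy: one analysis, at the end, and only then.
    z_final = ((sum(a) - sum(b)) / n) / (2 / n) ** 0.5
    return abs(z_final) > Z, sig_any

runs = 2000
finals = peeks = 0
for _ in range(runs):
    f, p = trial()
    finals += f
    peeks += p
print(f"false-positive rate, single final analysis: {finals / runs:.1%}")
print(f"false-positive rate, peeking at 5 interim looks: {peeks / runs:.1%}")
```

In this simulation the single-look rate stays close to the nominal five per cent, while the peeking strategy's false-positive rate comes out several times higher: each extra look is another chance for random noise to cross the significance threshold.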
But before we get to these fascinating methodological twists and quirks, these nudges and bumps that stop a trial from being a fair test of whether a treatment works or not, there is something very much simpler at hand.
Sometimes drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them. This is not a new problem, and it’s not limited to medicine. In fact, this issue of negative results that go missing in action cuts into almost every corner of science. It distorts findings in fields as diverse as brain imaging and economics, it makes a mockery of all our efforts to exclude bias from our studies, and despite everything that regulators, drug companies and even some academics will tell you, it is a problem that has been left unfixed for decades.
In fact, it is so deep-rooted that even if we fixed it today – right now, for good, forever, without any flaws or loopholes in our legislation – that still wouldn’t help, because we would still be practising medicine, cheerfully making decisions about which treatment is best, on the basis of decades of medical evidence which is – as you’ve now seen – fundamentally distorted.
But there is a way ahead.
Why missing data matters
Reboxetine is a drug I myself have prescribed. Other drugs had done nothing for this particular patient, so we wanted to try something new. I’d read the trial data before I wrote the prescription, and found only well-designed, fair tests, with overwhelmingly positive results. Reboxetine was better than placebo, and as good as any other antidepressant in head-to-head comparisons. It’s approved for use by the Medicines and Healthcare products Regulatory Agency (the MHRA), which governs all drugs in the UK. Millions of doses are prescribed every year, around the world. Reboxetine was clearly a safe and effective treatment. The patient and I discussed the evidence briefly, and agreed it was the right treatment to try next. I signed a piece of paper, a prescription, saying I wanted my patient to have this drug.
But we had both been misled. In October 2010 a group of researchers were finally able to bring together all the trials that had ever been conducted on reboxetine.6 Through a long process of investigation – searching in academic journals, but also arduously requesting data from the manufacturers and gathering documents from regulators – they were able to assemble all the data, both from trials that were published, and from those that had never appeared in academic papers.
When all this trial data was put together it produced a shocking picture. Seven trials had been conducted comparing reboxetine against placebo. Only one, conducted in 254 patients, had a neat, positive result, and that one was published in an academic journal, for doctors and researchers to read. But six more trials were conducted, in almost ten times as many patients. All of them showed that reboxetine was no better than a dummy sugar pill. None of these trials was published. I had no idea they existed.
It got worse. The trials comparing reboxetine against other drugs showed exactly the same picture: three small studies, 507 patients in total, showed that reboxetine was just as good as any other drug. They were all published. But 1,657 patients’ worth of data was left unpublished, and this unpublished data showed that patients on reboxetine did worse than those on other drugs. If all this wasn’t bad enough, there was also the side-effects data. The drug looked fine in the trials which appeared in the academic literature: but when we saw the unpublished studies, it turned out that patients were more likely to have side effects, more likely to drop out of taking the drug, and more likely to withdraw from the trial because of side effects, if they were taking reboxetine rather than one of its competitors.
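The arithmetic behind this distortion is simple enough to sketch. In the toy calculation below, the patient counts are the ones just quoted for the head-to-head trials, but the effect sizes are invented for illustration – a modest benefit in the small published studies, a modest harm in the larger unpublished data – because the point is the sample-size weighting, not the actual reboxetine figures.

```python
# Illustrative only: effect sizes are INVENTED, not the real reboxetine
# data. Patient counts (507 published, 1,657 unpublished) are from the
# text. Positive effect = reboxetine beats its comparator.
published = [(507, +0.30)]     # small, flattering trials: in the journals
unpublished = [(1657, -0.10)]  # larger, unflattering data: in a drawer

def pooled(trials):
    """Naive fixed-effect pooling: sample-size-weighted mean effect."""
    total = sum(n for n, _ in trials)
    return sum(n * e for n, e in trials) / total

print(f"what doctors saw (published only): {pooled(published):+.2f}")
print(f"the full picture (all trials):     {pooled(published + unpublished):+.2f}")
```

With these illustrative numbers, the published literature alone shows a clear benefit, while pooling everything flips the sign: because the missing trials are larger than the published ones, they dominate the weighted average the moment they are allowed into it.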
If you’re ever in any doubt about whether the stories in this book make me angry – and I promise you, whatever happens, I will keep to the data, and strive to give a fair picture of everything we know – you need only look at this story. I did everything a doctor is supposed to do. I read all the papers, I critically appraised them, I understood them, I discussed them with the patient, and we made a decision together, based on the evidence. In the published data, reboxetine was a safe and effective drug. In reality, it was no better than a sugar pill, and worse, it does more harm than good. As a doctor I did something which, on the balance of all the evidence, harmed my patient, simply because unflattering data was left unpublished.
If you find that amazing, or outrageous, your journey is just beginning. Because nobody broke any law in that situation, reboxetine is still on the market, and the system that allowed all this to happen is still in play, for all drugs, in all countries in the world. Negative data goes missing, for all treatments, in all areas of science. The regulators and professional