The Methods Man

COVID-19, Hydroxychloroquine, and the Death of Evidence-Based Medicine

In a crisis, do we need to abandon the principles that have advanced medical science so profoundly in the past few decades?

The coronavirus has caused us to re-evaluate so many things. I’m doing renal consults today from my office, only physically going to see patients if absolutely necessary – something unthinkable just a few weeks ago. It’s also causing us to re-evaluate what we mean by “evidence-based medicine”. In the days before the pandemic, many of us were of the “randomized trial or bust” mindset, often dismissing good observational studies without rigorous review, and likewise embracing even suspect studies just because they happened to be randomized.

But with the coronavirus, we don’t have the luxury to wait for those big, definitive, randomized trials. We need to act on the data we have. We need to remember what evidence-based medicine is really about. It’s not just randomized trials. It’s integrating each study into the body of existing data, combining the best available science, and reaching defensible conclusions.

I like to read a new study in the context of what I call the pre-trial probability of success. In other words, how likely was this drug to work before we got the data from the trial? Let me show you how this works with two recent examples.

I’m going to start with the big one.

It seems like everyone is talking about hydroxychloroquine thanks to one little study appearing in the International Journal of Antimicrobial Agents, which is generating a LOT of press after a shout-out from Donald Trump, no less.

What is our pre-study probability that hydroxychloroquine would be effective for COVID-19?

There’s a lot of literature here. Hydroxychloroquine has a long history as an antimalarial and antiviral drug, and encouragingly it seems to inhibit coronavirus replication in vitro. It also changes the structure of the receptor the coronavirus binds to.

I’d put the pre-study probability here around 50/50 but feel free to disagree.

Now let’s look at the study. 42 patients in France with COVID-19 were examined: 26 of them got hydroxychloroquine, and 16 were controls – but this was not randomized, and treated patients were different from those not receiving treatment. The researchers looked at viral carriage over time in the two groups and found what you see here:

Viral carriage in hydroxychloroquine-treated patients versus controls.

This appears to be a dramatic reduction in coronavirus carriage in those treated with hydroxychloroquine. Awesome, right? Sure, not randomized, but when we need to make decisions fast, the perfect may be the enemy of the good.  Does this study increase my 50/50 prediction that hydroxychloroquine could help?

Well, with data coming at us so fast, we have to be careful. There is a huge fly in the ointment in this study that seems to have been broadly overlooked, or at least underplayed. There was differential loss to follow-up in the two arms of the study – viral positivity was not available for 6 patients in the treatment group versus none in the control group. Why unavailable? I made this table to show you:

Reasons viral carriage data were unavailable in the hydroxychloroquine group.

Three transferred to the ICU, one died, and the other two stopped their treatment. BTW none of the patients in the control group died or went to the ICU. Had these six patients not been dropped, the story we might have is that, huh, hydroxychloroquine increases the rate of death and ICU transfer in COVID-19.
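
To see why this matters, here’s a back-of-the-envelope sketch in Python that counts those six dropped patients rather than excluding them – an intention-to-treat-style view. The arm sizes are as described above; this is an illustration of the logic, not a re-analysis of the paper’s data.

```python
# Count the dropped patients instead of excluding them
# (an intention-to-treat-style view; arm sizes as described above).
treated_total = 26       # 20 analyzed + 6 dropped from the analysis
control_total = 16

dropped_treated = 6      # 3 ICU transfers + 1 death + 2 stopped treatment
dropped_control = 0      # no controls were dropped, died, or went to the ICU

print(f"treated arm: {dropped_treated}/{treated_total} "
      f"({dropped_treated / treated_total:.0%}) lost before the viral endpoint")
print(f"control arm: {dropped_control}/{control_total} "
      f"({dropped_control / control_total:.0%})")
```

Excluding those six removes a 23%-versus-0% imbalance in how patients exited the two arms from the comparison entirely.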

Before reading this study I was 50/50 on hydroxychloroquine. After?

Yeah, I’m right where I started. Because of the problems with the study design – not just its observational nature but that differential loss to follow-up – the data from the French study don’t move the needle for me at all.

That doesn’t mean hydroxychloroquine failed.

What we have to decide now is whether 50/50 is good enough to try. Given the relatively good safety profile of hydroxychloroquine, and the dire situation we find ourselves in, it may be very reasonable to use this drug despite that study.

Tweets like this, though, aren’t helpful:

They misrepresent the data, which is equivocal at best. Further, they may encourage people to think “we’ve solved this” and stop their social distancing. There are already reports of these medicines being hoarded. The key to evidence-based medicine during this epidemic is being transparent about what we know and what we don’t. If we want to use hydroxychloroquine, that is a reasonable choice, but we need to tell the public the truth – we’re not too sure it will work, and it may even be harmful.

The second example I wanted to share is this randomized trial evaluating lopinavir / ritonavir for adults with severe COVID-19.

Before I read this trial, did I think lopinavir would work for COVID-19? For a nephrologist like me, this requires a bit of reading – but there were some studies showing the drug inhibited viral replication in vitro, and some data suggesting it had an effect against SARS during that epidemic.

But overall, I pegged the pre-study probability of success as fairly low – let’s give it 10%. Experts can differ with me – I won’t be offended.

That said, this was a nice randomized trial in 199 people with confirmed SARS-CoV-2 infection. The 28-day mortality was 19.2% in the treatment group and 25.0% in the standard-care group. That seems good, but it wasn’t statistically significant – the p-value was 0.32.
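
Where does that p-value come from? Here’s a quick Python sketch of a two-proportion z-test using the trial’s headline numbers. I’m assuming a roughly even 99-versus-100 split of the 199 patients, since the text above only gives the total – treat this as an illustration of the arithmetic, not the paper’s exact analysis.

```python
from math import sqrt, erfc

# Assumed arm sizes (the text only gives the 199-patient total)
deaths_treat, n_treat = 19, 99    # 19/99 ~ 19.2% 28-day mortality
deaths_ctrl, n_ctrl = 25, 100     # 25/100 = 25.0% 28-day mortality

p1, p2 = deaths_treat / n_treat, deaths_ctrl / n_ctrl
risk_difference = p2 - p1         # ~5.8 percentage points favoring treatment

# Two-proportion z-test with a pooled proportion
pooled = (deaths_treat + deaths_ctrl) / (n_treat + n_ctrl)
se = sqrt(pooled * (1 - pooled) * (1 / n_treat + 1 / n_ctrl))
z = (p2 - p1) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability

print(f"risk difference: {risk_difference:.3f}")  # ~0.058
print(f"z = {z:.2f}, p ~ {p_value:.2f}")          # z ~ 0.99, p ~ 0.32
```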

In ordinary days, we’d call this non-significant and move on. Indeed, the authors of the manuscript write:

But these are not ordinary days.

#MedTwitter was quick to note that the measured effect, a 5.8-percentage-point reduction in 28-day mortality, seems pretty darn good right now.

Shall we be slavishly beholden to statistical significance, even in this time of crisis? The truth is – we don’t have to compromise our principles here. One nice feature of a randomized trial is that we can use the observed p-value, with some minor mathematical jiggerings, as a measure of the strength of evidence that lopinavir is effective.

This is Bayesianism, and it may be just what we need right now.

Instead of dogmatically looking for a p-value below some threshold, we use the evidence in a given trial to update our pre-trial probability that the drug being tested is effective.

Here’s a graph showing the probability that a drug is effective AFTER a trial reports a p-value of 0.05, as a function of the probability that it was effective before the trial:

Change from pre-study to post-study probability at p=0.05

If you were 50/50 that the drug would work before the trial, after that p=0.05 you’d be up to around 75% sure the drug works. Maybe that’s enough to start treating.

If the trial had a REALLY significant p-value of 0.001, the curve looks like this:

Change from pre-study to post-study probability at p=0.001

If you were 50/50 before that trial, after the trial you would be almost certain the drug works.

What about the lopinavir-ritonavir trial, with its p-value of around 0.32?

Change from pre-study to post-study probability at p=0.32

It barely moves the needle at all.  If you were 90% sure the drug combo would work before you read the NEJM piece, these data are entirely consistent with that. If you were 10% sure like me, these data support that as well. In other words, this trial should NOT affect our enthusiasm for this drug – it should, really, not change much of anything.
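
The exact conversion behind these curves isn’t specified above, so here’s one common way to do that mathematical jiggering: the minimum Bayes factor bound of Sellke, Bayarri, and Berger, -e·p·ln(p). Using this particular bound is my assumption – it gives an upper limit on how much a p-value can move you, and it reproduces the ballpark numbers above.

```python
from math import e, log

def post_study_probability(prior: float, p_value: float) -> float:
    """Update a pre-study probability of efficacy using the minimum
    Bayes factor bound -e * p * ln(p) (valid for p < 1/e ~ 0.37).
    This is the MOST a p-value can move you, not an exact posterior."""
    bf_against_null = 1.0 / (-e * p_value * log(p_value))
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * bf_against_null
    return posterior_odds / (1.0 + posterior_odds)

for p in (0.05, 0.001, 0.32):
    print(f"p = {p}: 50/50 prior -> {post_study_probability(0.50, p):.0%}")

# p = 0.05:  50/50 prior -> ~71% (the "around 75%" ballpark)
# p = 0.001: 50/50 prior -> ~98% (almost certain)
# p = 0.32:  50/50 prior -> ~50% (barely moves the needle)
```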

We can use these techniques to help us make sense of the rapid-fire pace of medical research coming at us. Moreover, we can use the post-trial probability of a drug as the pre-trial probability for the NEXT study of that drug, allowing us to ratchet up the probability curve with successive trials showing similar signals, even if NONE of them are classically statistically significant.
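
Reusing the helper above, here’s what that ratcheting looks like with two hypothetical successive trials of the same drug, each “non-significant” at p = 0.10:

```python
# Yesterday's posterior becomes today's prior.
prob = 0.50                       # starting pre-study probability
for trial_p in (0.10, 0.10):      # two hypothetical trials, same signal
    prob = post_study_probability(prob, trial_p)
    print(f"after a trial with p = {trial_p}: {prob:.0%}")

# Neither trial clears p < 0.05, but together they move 50% to ~72%.
```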

As more data comes in, we can revise those estimates of efficacy – iteratively, and transparently.

The bottom line is, we don’t need to abandon evidence-based medicine in the face of the pandemic. We need to embrace it more than ever. But in that embrace we need to realize what we’ve known all along – EBM is not just about randomized trials – it’s about appreciating the strengths and weaknesses of all data, and allowing the data to inch us closer and closer towards truth.

This commentary first appeared on medscape.com