Two studies in the New England Journal of Medicine came to differing conclusions about the utility of oral anticoagulants in patients with cancer. This might be because they enrolled the wrong patients.
And I’m excited to talk about these studies because they lay bare a central intuition of clinical trial design that I don’t think is necessarily true.
But let’s start at the beginning.
Individuals with cancer are at risk of developing blood clots. But prevention requires blood-thinning medications, which themselves carry substantial risk. Physicians and patients are not always great at balancing risks and benefits, so it would really help if we had strong data to guide us.
We had a bit. Prior to the recent publications, two large randomized trials had examined the use of low-molecular weight heparin in ambulatory patients with cancer and found a significant reduction in the risk of clot from about 4% to 2%, meaning you’d have to treat 50 people with low-molecular weight heparin to prevent one clot.
For a high-risk drug, that felt like a lot. To prevent one clot, we’re essentially treating 49 people unnecessarily. Moreover, neither study showed a difference in overall mortality. So practice wasn’t really changed.
Enter a bit of intuition that I will argue is wrong.
It goes like this. Imagine a population of individuals, all of whom have different baseline risk of developing a blood clot.
Instead of enrolling this diverse group in your trial, select just those at highest risk. This is a process called "enrichment". The idea is that if a drug is moderately efficacious in a given population, it will be MORE efficacious among the members of that population at highest risk. Sounds good, right? Also, through enrichment, you increase the outcome rate in your trial, meaning you have to enroll fewer people and therefore spend less money - assuming, of course, that the drug works just as well in high-risk individuals as it does in other risk groups.
But that assumption, though intuitive, is not necessarily true. It assumes that the relative risk of outcome among the treated group is the same across risk categories. To do the math, let’s imagine a drug reduces the rate of thrombosis by 50%.
Well, we could take a low-risk group and reduce their rate of thrombosis from 1% to 0.5% - a number needed to treat of 200 - or a high-risk group and reduce their rate of thrombosis from 20% to 10% - a number needed to treat of 10. Which would you choose?
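The arithmetic behind that comparison is simple: the number needed to treat is the reciprocal of the absolute risk reduction, so under a constant relative risk the NNT shrinks as baseline risk rises. A minimal sketch:

```python
# Number needed to treat (NNT) under the enrichment intuition: assume the
# drug cuts clot risk by a constant 50% (relative risk 0.5) at every level
# of baseline risk. NNT = 1 / absolute risk reduction.
def nnt(baseline_risk, relative_risk=0.5):
    absolute_risk_reduction = baseline_risk * (1 - relative_risk)
    return 1 / absolute_risk_reduction

print(round(nnt(0.01)))  # low-risk group:  1% -> 0.5%, NNT = 200
print(round(nnt(0.20)))  # high-risk group: 20% -> 10%, NNT = 10
```

The whole argument of this piece is that the `relative_risk=0.5` assumption - constant across risk strata - is exactly the part that may fail.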
But why should a drug have the same relative effect in people at high risk and low risk?
Maybe people at very high risk are so high risk that the drug will have no effect – I call this the blowing against the wind phenomenon.
Nevertheless, the hope of the CASSINI and AVERT trials was to target individuals at high risk of blood clot, identified by the validated Khorana score (which takes into account the type of cancer and a variety of lab parameters), so that the overall effect would be compelling enough to lead us to a new standard of care.
So what did they find? Nothing to settle the debate I’m afraid.
CASSINI randomized 841 patients to rivaroxaban (Xarelto) 10 mg daily or placebo. The rate of blood clot was 8.8% in the placebo group and 6.0% in the intervention group, a non-significant reduction with a number needed to treat of 36. AVERT randomized 563 patients to apixaban (Eliquis) 2.5 mg twice daily or placebo. The rate of blood clot was 10.2% in the placebo group and 4.2% in the intervention group, a number needed to treat of 17.
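If you want to check those NNTs yourself, they follow directly from the reported event rates:

```python
# Recover the quoted NNTs from the event rates reported in each trial.
def nnt(placebo_rate, treatment_rate):
    return 1 / (placebo_rate - treatment_rate)

cassini = nnt(0.088, 0.060)  # rivaroxaban vs placebo
avert = nnt(0.102, 0.042)    # apixaban vs placebo

print(round(cassini))  # 36
print(round(avert))    # 17
```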
It may seem that apixaban won this contest, though remember that these are two separate trials, not a head-to-head comparison.
But don’t go rushing to the prescription pad yet. The overall death rate was 12.2% in the apixaban group compared to 9.8% in the control group.
That’s right - fewer clots, more deaths. Not great.
Why weren’t these trials slam dunks? Because that intuition – that drugs always work better in sicker people – is not correct. In fact, cancer is a particularly good example. The individuals at highest risk of blood clots are the very ones at highest risk of dying from their cancer – and preventing a blood clot may do little to avoid that outcome.
Is there a fix for this?
Actually, there is, and it’s been staring us in the face for a while. Facebook, Amazon, and Google have been doing it for a decade to more efficiently target ads to consumers. It’s called “uplift” or “individual treatment effect” modeling, and the idea is to enroll patients not based on their risk of a given outcome but based on their chance of responding to therapy. These models allow for the fact that, sometimes, a particular therapy might work best in those at moderate risk, or even low risk. If you’re interested, I’ve written about this in a couple of places – you can find the links in the transcript.
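To make the idea concrete, here is a toy sketch of uplift-style targeting. The outcome rates below are made up purely for illustration; the point is that you rank risk strata by estimated treatment benefit rather than by baseline risk, and the biggest benefit need not sit in the highest-risk stratum:

```python
# Toy uplift / individual-treatment-effect targeting. All numbers are
# hypothetical. For each risk stratum, the estimated uplift is the
# difference in event rates between control and treatment; we then
# target the stratum where the drug helps most - not the riskiest one.
strata = {
    #             (event rate on control, event rate on treatment)
    "low risk":      (0.01, 0.005),
    "moderate risk": (0.08, 0.030),
    "high risk":     (0.20, 0.180),  # "blowing against the wind"
}

uplift = {name: control - treated
          for name, (control, treated) in strata.items()}
best = max(uplift, key=uplift.get)
print(best)  # moderate risk - the largest absolute benefit
```

In this toy example the moderate-risk stratum gains 5 percentage points from treatment while the high-risk stratum gains only 2 - an enrichment strategy keyed to baseline risk would have targeted the wrong group.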
Until we move to a smarter type of clinical trial enrichment, we’re going to remain frustrated at how hard it is to figure out just who is a good candidate for a risky therapy. Those at highest risk, counter-intuitively, may be exactly the wrong ones to target.