Researchers from Dartmouth are pitching a standard definition for "overdiagnosis" in cancer screening. Will it hold up?
It was 1964 when Supreme Court Justice Potter Stewart, discussing the line between art and pornography, stated: “I know it when I see it.”
We’ve been using a Potter Stewart standard for the concept of “overdiagnosis” of cancer for too long – leading to a lot of confusion. Today we’ll take a “deep dive” into the concept of overdiagnosis, inspired by this article appearing in the Annals of Internal Medicine.
Intuitively, overdiagnosis is the idea that sometimes we find a cancer that we would have been better off never finding – it never would have hurt you. Diagnosing a small breast cancer in a 105-year-old woman? Sounds like overdiagnosis to me. But why? And how do we measure overdiagnosis rates without a clear definition?
Let me pause to point out that overdiagnosis is bad. And here’s what’s not entirely obvious: our entire medical system is actually designed to overdiagnose.
Patients are terrified about cancer, and the desire to know they don’t have cancer may override the rational decision that sometimes you’re better left in the dark. Doctors like to take action. Sitting and doing nothing is just not in our DNA, even when it may be the best thing for the patient. And many special interests benefit when the diagnosis rate of any disease is higher – pharma, device manufacturers, hospitals. And let’s not forget that the entire fee-for-service system is based on, you know, charging for when you do stuff and not for when you don’t.
Louise Davies of Dartmouth and her co-authors of the Annals paper have put their marker down on an official definition of overdiagnosis. Let’s dig in.
The authors define overdiagnosis – paraphrasing a bit – as the detection, via a screening test, of a histologically confirmed cancer that never would have been detected within the patient’s natural lifespan.
I like this definition a lot – but let me walk through exactly how it works, from the point of that initial screening test.
First things first – we’re starting with a screening test. If you have breast pain and get a mammogram that finds cancer, that’s not overdiagnosis. Seems right. OK, you have a strange finding on a screening test – the next key phrase is “histologically confirmed cancer.” That means that false-positive screening, while a problem, is not counted as overdiagnosis. The cancer really is there; it just wouldn’t have mattered had you never found it.
The next big question is whether that cancer would ever have been discovered in your natural lifetime. This is really hard to assess. There are a couple of approaches to this outlined in the article, but they are broadly actuarial in nature – we use what data we have, based on the type of cancer and the age and comorbidities of the patients, to make a best guess at how long they would live had they never been screened. Remember that 105-year-old woman with the small breast cancer? We all felt like that was overdiagnosis, but who knows – maybe she would live happily to 120 if she were treated, but only to 110 if she weren’t. In other words, overdiagnosis is a population phenomenon – we can never really know whether a given individual was overdiagnosed.
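To make the actuarial logic concrete, here’s a toy sketch – my own illustration, not the authors’ actual method, and the life-expectancy numbers are made up – of the basic comparison: flag a screen-detected cancer as likely overdiagnosed when the patient’s expected remaining lifespan is shorter than the estimated time the cancer would have taken to surface clinically.

```python
# Toy illustration of the actuarial reasoning (hypothetical numbers,
# not the method or data from the Annals paper).

# Hypothetical life table: age -> expected years of life remaining.
LIFE_EXPECTANCY = {60: 23.0, 70: 15.0, 80: 9.0, 90: 4.5, 105: 1.5}

def likely_overdiagnosed(age: int, years_to_clinical_detection: float) -> bool:
    """Population-level best guess; we can never be certain for one person."""
    return LIFE_EXPECTANCY[age] < years_to_clinical_detection

# A slow-growing tumor that wouldn't surface clinically for ~8 years:
print(likely_overdiagnosed(105, 8.0))  # True  -> likely overdiagnosis
print(likely_overdiagnosed(60, 8.0))   # False -> would have come to light
```

The point of the sketch is the inequality, not the numbers: the whole estimate hinges on two quantities (remaining lifespan, time to clinical detection) that can only be guessed at from population data.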
But what if the cancer would have come to light later? By the authors’ definition, this is not overdiagnosis… but what is it? Well, if the screening test caught the cancer early, and you live longer or with better quality of life because of that, then, well, good job, screening test.
But just because you catch a cancer early, it doesn’t mean that you’ll live longer or better. I’m looking at you, PSA screening for prostate cancer. So my question is: what if screening detects a cancer early that would have been detected later anyway, but the early detection doesn’t matter? That’s not overdiagnosis by this definition. So what is it? Overscreening?
In any case, let’s consider how we would calculate an overdiagnosis rate. The numerator of this rate should be, according to the authors, the number of cancers detected by screening that never would have come to light had we not screened. But what’s the denominator?
The total number of people screened?
The total number of people eligible for screening?
The total number of people with cancer detected by screening?
Depending on what denominator you choose, you can get wildly differing overdiagnosis rates.
The authors argue that the right denominator here is the number of people screened.
So if you screen 1000 men with PSA tests and find 100 prostate cancers, 50 of which are “overdiagnosed”, the overdiagnosis rate is 50 out of 1000 or 5%.
I think my patients would want to know how likely it is that their recently diagnosed cancer would have been fine had we not diagnosed it, meaning the denominator would be the number of screened people found to have cancer – giving us a hypothetical overdiagnosis rate for PSA screening of 50%.
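To see how much the denominator choice matters, here’s the arithmetic from the hypothetical PSA example above, sketched in a few lines of Python:

```python
# Hypothetical PSA screening numbers from the example above.
screened = 1000          # men screened
cancers_detected = 100   # screen-detected, histologically confirmed cancers
overdiagnosed = 50       # cancers that never would have come to light

# Authors' preferred denominator: everyone screened.
rate_per_screened = overdiagnosed / screened          # 0.05 -> 5%

# Alternative denominator: screened people found to have cancer.
rate_per_cancer = overdiagnosed / cancers_detected    # 0.50 -> 50%

print(f"Per person screened: {rate_per_screened:.0%}")  # Per person screened: 5%
print(f"Per cancer detected: {rate_per_cancer:.0%}")    # Per cancer detected: 50%
```

Same numerator, same screening program – a tenfold difference in the reported “overdiagnosis rate,” depending only on which denominator you pick.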
Still, for the sake of having a uniform definition, I’ll defer to the authors on this one. This area of research can only benefit from more standardization.
And with that standardization, we’ll have a new tool to assess individual screening tests. How will they all hold up? Well, it’s probably too soon to make that diagnosis.