When tragedy strikes, physicians are often asked to answer two questions. The first is the how question. How did this happen? Long illnesses provide time for patients and family members alike to come to terms with a diagnosis and prognosis. Not always, and not easily, but the time is there. In the case of sudden cardiac death in a young person, there is no time. Sudden cardiac death is a condition that feels out of place in 2016. That a healthy person can be alive and then, simply, not, feels wrong to modern sensibilities. Nevertheless, the incidence of sudden cardiac death, about 1 per 100,000 young people per year, is similar across multiple countries and cultures. Now a manuscript appearing in the New England Journal of Medicine attempts to shed light on how sudden cardiac death can happen. For the video version of this post, click here.
The researchers prospectively examined every case of sudden cardiac death occurring in individuals under age 35 in Australia and New Zealand from 2010 to 2012. For each of the 490 cases, they examined autopsy and toxicology reports to determine how the death occurred. While 60% of the cases were explainable by conditions like coronary artery disease and hypertrophic cardiomyopathy, a disturbing 40% had no revealing findings.
So they expanded the search. In a subset of that 40%, the researchers performed advanced genetic sequencing to look for gene mutations that could predispose to sudden death. They found mutations of that type in 27% of the otherwise unexplained cases. The gap in understanding narrowed a bit, but the how question remained unanswered for many individuals.
Now I should mention that identifying disease-causing mutations is not as easy as it sounds. Most of the mutations identified were classified as “probably pathogenic”. Basically, that means that the mutations are predicted to do harmful things to the protein they affect, but we don’t know for sure at this time.
To take the analysis a step further, the researchers examined family members of the deceased to screen for the presence of heritable cardiac conditions. In 12 of 91 families screened, such a condition – like long QT syndrome – was found.
So what we have here is a great example of a well-conducted, methodical, and meticulous study that has moved us incrementally towards greater understanding. For some of the families who suddenly lost a loved one – the answer to “how did this happen” is now clear.
Of course, that’s only one of the two questions we get asked. The other is “why did this happen?” And that’s a question that no methodology, no matter how advanced, can answer.
An embarrassment of riches this week as we got not one, but two randomized clinical trials evaluating the timing of dialysis initiation in acute kidney injury. Of course, the results don’t agree at all. Back to the drawing board, folks.
OK here’s the issue – we nephrologists see hospitalized people with severe acute kidney injury – the abrupt loss of kidney function – all the time. There is always this question – should we start dialysis before things get too bad, in order to get ahead of the metabolic disturbances, or should we hold off, watch carefully, and jump when the time is right? Several people – and, full disclosure, I’m one of them – have examined this question and the answers have been confusing. The question begs for a randomized trial.
And, as I mentioned, we have two. One, appearing in the Journal of the American Medical Association, says yes, earlier dialysis is better with mortality rates of 40% in the early arm versus 55% in the late arm. The other, appearing in the New England Journal of Medicine says there is no difference – mortality rates were 50% no matter what you did.
Figure 1: JAMA Trial - GO TEAM EARLY!
Figure 2: NEJM Trial - D'Oh!
Sometimes, rival movie studios put out very similar movies at the same time to undercut each other’s bottom line. So which of these trials is Deep Impact, and which is Armageddon? Which is EdTV and which is The Truman Show? Which is Jobs and which is Steve Jobs?
Figure 3: It's like looking in a mirror...
In this table I highlight some of the main differences:
The NEJM trial was bigger, and multi-center, so that certainly gives it an edge, but what draws my eye is the difference in definitions of early and late.
The NEJM study only enrolled people with stage 3 AKI – the most severe form of kidney injury. People in the early group got dialysis right away, and the late group got dialysis only if their laboratory parameters crossed certain critical thresholds. The JAMA paper enrolled people with stage 2 AKI. In that study, early meant dialysis right after enrollment, and late meant dialysis started when you hit stage 3.
OK so definitions matter. The NEJM trial defined early the way the JAMA trial defined late. So putting this together, we might be tempted to say that dialysis at stage 2 AKI is good, but once you get to stage 3, the horse is out of the barn – doesn’t matter when you start at that point.
That facile interpretation is undercut by one main issue: the rate of dialysis in the late group.
See, one of the major reasons you want to hold off on dialysis is to see if people will recover on their own. In the JAMA study, enrolling people at stage 2 AKI, only 9% in the late group made it out without dialysis – and those who did get dialysis were started, on average, only 20 hours after randomization. In the NEJM study, using the more severe inclusion criterion, roughly 50% of the late group required no dialysis. To my mind, if 91% of the late group got dialysis, you’re not helping anybody – the whole point of not starting is so that you never have to start, not that you can delay the inevitable.
Regardless of your interpretation, these studies remind us not to put too much stock in any one study. They should also remind us that replication – honest to goodness, same protocol replication – is an incredibly valuable thing that should be celebrated and, dare I say, funded.
For now, should you dialyze that person with AKI? Take heart – there’s good evidence to support that you should keep doing whatever you’ve been doing.
I love a nice clinical trial that answers an important question and one of my favorites from the recent past was the “Learning Early About Peanut allergy” or LEAP trial, published in February of 2015 in the New England Journal. I probably don’t need to reiterate the results of this truly landmark study, but basically, it upended about two decades worth of advice to parents to avoid exposing their infants to food containing potential allergens, such as peanuts.
The trial, which enrolled infants at high risk of peanut allergy, found that the rate of peanut allergy at 5 years was 18.8% among those randomized to peanut avoidance, but only 3.6% among those randomized to peanut consumption. That’s a number needed to treat of around 7, making eating peanut products in the first five years of life about 7 times more efficacious than taking aspirin for an ST-elevation MI. OK, apples and oranges – or peanuts – but still.
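If you want to check that number-needed-to-treat arithmetic yourself, here’s a quick sketch using the event rates quoted above (NNT is simply the reciprocal of the absolute risk reduction):

```python
# Number needed to treat (NNT) from the LEAP trial rates quoted above.
# NNT = 1 / absolute risk reduction (ARR).
rate_avoidance = 0.188    # peanut allergy at 5 years, avoidance arm
rate_consumption = 0.036  # peanut allergy at 5 years, consumption arm

arr = rate_avoidance - rate_consumption  # absolute risk reduction
nnt = 1 / arr                            # patients treated per allergy prevented

print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")  # ARR = 15.2%, NNT = 6.6
```

An NNT of 6.6 rounds to the “around 7” cited above.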
But lingering questions remained. Would these kids be protected in the long-term? Did the study just kick the peanut allergy ball down the field?
To answer the question, the LEAP researchers conducted the LEAP-ON study, in which individuals in the initial study were instructed to avoid all peanut products for 12 months. Without exposure to peanuts, would allergy come roaring back? Would these kids be doomed to eat peanuts three times a week for the rest of their lives?
Well, around 90% of the original trial participants signed on to the no-peanuts-for-12-months pledge. Overall, adherence was OK. As you might expect, those who had originally been randomized to avoid peanuts had an easier time staying off the sauce – 80% of them reported complete peanut avoidance. Only 40% of those who had been randomized to eat peanuts originally were able to stay away for the year. No shame there, peanuts are delicious.
Bottom line: after 12 months of avoidance there were 6 new cases of peanut allergy – three from each group. In other words, there was no “rebound” in peanut allergy among those kids initially randomized to eating peanuts. By the end of this study, 18.6% of those who had initially avoided peanuts and 5% of those who had eaten peanuts from a young age had confirmed allergy.
The point here is that the protection from allergy conferred by early exposure to peanuts persisted even through a year of not eating peanuts. This is a very good thing for the rare kid out there who doesn’t like peanuts – it seems the protection gained in infancy will stick around.
Now, I should mention that there was no control group here. I’m curious what might have happened to kids instructed to keep right on eating lots of peanuts. We also don’t know if avoidance for more than a year might let allergy recrudesce.
But taking this study with the results of the original trial, it’s not exactly a leap to say that early exposure to peanuts might dramatically curb the rising tide of peanut allergy in the developed world.