You’re in the office seeing a patient, and take a look at the vitals. Blood pressure 190/110. Being the diligent physician you are, you recheck the blood pressure manually, in both arms, after having the patient relax in a quiet room for 5 minutes. 190/110. There are no symptoms. What do you do? The situation I just described is known as hypertensive urgency, which is a systolic pressure over 180 or a diastolic pressure over 110 without any evidence of end-organ damage. And what to do with patients in this situation is a clinical grey area that, thanks to a manuscript appearing in JAMA Internal Medicine, may finally be seeing the light of day.
For the video version of this post, click here.
The study, out of the Cleveland Clinic, gives us some really important data. Here’s how it was done. The researchers identified everyone in that healthcare system who had an outpatient visit with hypertensive urgency over a 6-year time frame. Of over 1 million visits, just under 60,000 (about 5%) had blood pressures consistent with hypertensive urgency. Now, some of those individuals were sent to the hospital for evaluation; the rest were sent home. What percent do you think went to the hospital?
If you answered “less than 1%”, you’re spot on and a way better guesser than I am. I actually assumed the rate would be much higher. Now, how can we evaluate whether sending someone to the hospital is the “right” move? And let’s not fall into the assumption that sending someone to the hospital is a “safe” option. Those of us who work in hospitals will quickly disabuse anyone of that notion.
The problem is that those who got sent to the hospital were doing worse than those who got sent home. They had higher blood pressures in the “urgency” range, with a mean systolic of 198 compared to 182 in those sent home.
To create a fair assessment of the effects of sending someone to the hospital, the authors performed a propensity-score match. Basically, they matched the people who got sent to the hospital with people of similar characteristics who didn’t. Comparing the matched groups, they found… nothing.
No increased risk of major adverse cardiovascular events. In other words, the people sent home weren’t having strokes during the car ride.
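If propensity matching is unfamiliar, the core idea fits in a few lines: model each patient’s probability of referral from their characteristics, then pair each referred patient with the unreferred patient whose probability is closest. Everything below – the logistic model, its coefficients, and the patient data – is invented purely for illustration; the real study matched on far more covariates.

```python
import math

# Toy 1:1 nearest-neighbor propensity-score match.
# propensity() is a made-up logistic model for P(referred to hospital)
# based on systolic BP and age -- NOT the study's actual model.

def propensity(sbp, age):
    z = -20 + 0.09 * sbp + 0.02 * age
    return 1 / (1 + math.exp(-z))

referred = [(198, 70), (205, 62)]                      # (systolic BP, age)
sent_home = [(182, 68), (199, 71), (184, 60), (204, 63)]

matches = []
for r in referred:
    ps_r = propensity(*r)
    # pair with the sent-home patient whose propensity score is closest
    best = min(sent_home, key=lambda c: abs(propensity(*c) - ps_r))
    matches.append((r, best))

# Each referred patient ends up paired with a clinically similar
# sent-home patient; outcomes are then compared within pairs.
```

Note that each high-BP referred patient gets paired with a similarly high-BP patient who went home, which is exactly what lets the comparison isolate the effect of the referral itself.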
A curious finding
One thing I did note was that those sent to the hospital were much more likely to have a hospital admission sometime in the next 8 – 30 days compared to those who got to go home. This either means that some bad stuff happens in that initial hospital referral that leads them to bounce back later in the month or, and I’m favoring this interpretation here, the propensity match didn’t catch some factors that predisposed the hospitalized people to hospitalization in general – factors like socioeconomic status, for instance. If that’s true, then we’d actually expect the hospitalized group to do worse than their controls. The fact that they didn’t may argue that the hospital actually did something beneficial. But we are way down the causality rabbit hole here.
In the end I take home two things from this study. First, the shockingly low rate of referral to hospital for hypertensive urgency. Seriously – is this just a Cleveland Clinic thing? Feel free to let me know in the comments. And second – that for the right patient, a dedicated outpatient physician can probably do just as much good as a costly trip to the ED.
An embarrassment of riches this week as we got not one, but two randomized clinical trials evaluating the timing of dialysis initiation in acute kidney injury. Of course, the results don’t agree at all. Back to the drawing board folks. For the video version of this post, click here.
OK here’s the issue – we nephrologists see hospitalized people with severe acute kidney injury – the abrupt loss of kidney function – all the time. There is always this question – should we start dialysis before things get too bad, in order to get ahead of the metabolic disturbances, or should we hold off, watch carefully, and jump when the time is right? Several people – and, full disclosure, I’m one of them – have examined this question and the answers have been confusing. The question begs for a randomized trial.
And, as I mentioned, we have two. One, appearing in the Journal of the American Medical Association, says yes, earlier dialysis is better with mortality rates of 40% in the early arm versus 55% in the late arm. The other, appearing in the New England Journal of Medicine says there is no difference – mortality rates were 50% no matter what you did.
Figure 1: JAMA Trial - GO TEAM EARLY!
Figure 2: NEJM Trial - D'Oh!
Sometimes, rival movie studios put out very similar movies at the same time to undercut each other’s bottom line. So which of these trials is Deep Impact, and which is Armageddon? Which is EdTV and which is The Truman Show? Which is Jobs and which is Steve Jobs?
Figure 3: It's like looking in a mirror...
In this table I highlight some of the main differences:
The NEJM trial was bigger, and multi-center, so that certainly gives it an edge, but what draws my eye is the difference in definitions of early and late.
The NEJM study only enrolled people with stage 3 AKI – the most severe form of kidney injury. People in the early group got dialysis right away, and the late group got dialysis only if their laboratory parameters crossed certain critical thresholds. The JAMA paper enrolled people with Stage 2 AKI. In that study, early meant dialysis right after enrollment, and late meant dialysis started when you hit stage 3.
OK so definitions matter. The NEJM trial defined early the way the JAMA trial defined late. So putting this together, we might be tempted to say that dialysis at stage 2 AKI is good, but once you get to stage 3, the horse is out of the barn – doesn’t matter when you start at that point.
That facile interpretation is undercut by one main issue: the rate of dialysis in the late group.
See, one of the major reasons you want to hold off on dialysis is to see if people will recover on their own. In the JAMA study, enrolling people at stage 2 AKI, only 9% of the late group made it out without dialysis – and those who did get dialyzed started, on average, only 20 hours after randomization. In the NEJM study, using the more severe inclusion criterion, roughly 50% of the late group required no dialysis. To my mind, if 91% of the late group got dialysis, you’re not helping anybody – the whole point of not starting is so that you never have to start, not that you can delay the inevitable.
Regardless of your interpretation, these studies remind us not to put too much stock in any one study. They should also remind us that replication – honest to goodness, same protocol replication – is an incredibly valuable thing that should be celebrated and, dare I say, funded.
For now, should you dialyze that person with AKI? Take heart – there’s good evidence to support that you should keep doing whatever you’ve been doing.
An article appearing in JAMA Neurology links exposure to certain environmental toxins, like pesticides, to Amyotrophic Lateral Sclerosis (ALS or Lou Gehrig’s disease). While I could spend these 150 seconds talking about whether or not we should run home and clean all the Round-Up out of our garage, I’d like to take this chance to talk about 3 methodologic issues a study like this brings to the fore. For the video version of this post, click here.
But first, the details:
Researchers from the University of Michigan performed a case-control study of 156 individuals with ALS and 128 controls. They administered a survey, asking about all sorts of environmental exposure factors, and, importantly, they drew some blood to directly measure 122 environmental pollutants. The bottom line was that there did seem to be an association between some pollutants (like pentachlorobenzene – a now-banned pesticide) and ALS.
So – on to the three issues.
Number 1 – multiple comparisons. As I mentioned, the authors looked at over 100 pollutants in the blood of the participants. Even if none of the pollutants had any real effect, chance alone would leave you with several apparently statistically significant relationships. In fact, a robust demonstration of the multiple comparisons problem is that lead exposure, in this study, was quite protective against ALS. This is not biologically plausible, but reflects that multiple comparisons can cut both ways – they can make measured factors seem to be positively or negatively associated with the disease. Indeed, several pollutants seemed to protect against ALS.
The authors say they account for multiple comparisons, but I’m not sure this is true. In their statistics section, they write that they used a Bonferroni correction to lower the threshold p-value (from the standard 0.05 to 0.0018 to account for all the comparisons). But they never actually do this. Rather, they report the odds ratios associated with the various pesticides and just don’t report the p-values at all, except in multivariable models where the Bonferroni correction isn’t used.
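The correction itself is trivial arithmetic: divide the significance threshold by the number of comparisons. The authors’ quoted threshold of 0.0018 implies roughly 28 comparisons (0.05/28 ≈ 0.0018) – my inference, since the paper’s count is what it is. The p-values below are made up; the point is the mechanics.

```python
# Bonferroni correction: shrink the p-value threshold by the
# number of comparisons so the family-wise error rate stays at alpha.

def bonferroni_threshold(alpha, n_comparisons):
    return alpha / n_comparisons

threshold = bonferroni_threshold(0.05, 28)   # about 0.0018

# Hypothetical p-values for illustration only
p_values = {
    "pollutant_A": 0.0004,  # survives the correction
    "pollutant_B": 0.03,    # "significant" at 0.05, but not after correction
    "pollutant_C": 0.20,
}

significant = [name for name, p in p_values.items() if p < threshold]
# only pollutant_A clears the corrected threshold
```

The key point is that pollutant_B, which would look significant at the usual 0.05 cutoff, falls away once the corrected threshold is actually applied – which is precisely the step the paper appears to skip.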
Number 2 – the perils of self-reported data. The survey exposure data – questions like “do you store pesticides in your garage?” and the measured blood data were hardly correlated at all. This should be read as a warning to anyone who wants to take self-reported exposure data seriously (I’m looking at you, diet studies). When in doubt, find something you can actually measure.
And Number 3 – the lack of variance explained. Studies like this one that look at risk factors for an outcome are building models to predict that outcome. The variables in the model are things like age, race, family history, and the level of pentachlorobenzene in the blood. It’s a simple matter of statistics to tell us how well that model fits – how much of the incidence of ALS can be explained by the model. We almost never get this number, and I suspect it’s because you can have a highly significant model that only explains, say, 1% of the variance in disease occurrence. It doesn’t make for impressive headlines.
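To make that concrete: for a binary outcome like case/control status, one common “variance explained” measure is McFadden’s pseudo-R², which compares the fitted model’s log-likelihood to that of a null model. The group sizes below come from the study; the model’s log-likelihood improvement is invented to show how a clearly significant model can still explain almost nothing.

```python
import math

# McFadden's pseudo-R-squared: 1 - LL(model) / LL(null)

def mcfadden_r2(ll_model, ll_null):
    return 1 - ll_model / ll_null

# Null model: predict the overall case fraction for everyone
n_cases, n_controls = 156, 128      # group sizes from the ALS study
p = n_cases / (n_cases + n_controls)
ll_null = n_cases * math.log(p) + n_controls * math.log(1 - p)

# Hypothetical fitted model: log-likelihood improves by only 4 units.
# That's a likelihood-ratio chi-square of 8 -- p about 0.005 on 1 df,
# i.e., "highly significant" -- yet the fit barely moves.
ll_model = ll_null + 4.0

r2 = mcfadden_r2(ll_model, ll_null)   # comes out around 0.02
```

So a model the abstract could honestly call highly significant explains about 2% of the variation in who gets ALS – which is exactly the number that never makes it into the headline.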
So while we haven’t learned which, if any, organic pollutant causes ALS, hopefully we’ve learned something about the perils of risk factor research.