Dads Want Their Daughters to be Happy. Their Sons? Not So Much.


A study appearing in the journal Behavioral Neuroscience found significant differences in the ways that dads treat their daughters compared to their sons. And some of that difference can be seen on MRI.

For the video version, click here.


Are Physicians Responsible for the Opioid Epidemic?


An article appearing in the journal Pain attempts to quantify the transition from acute to chronic opioid use. But the study misses a few critical variables we really need in order to address this national crisis. For the video version, click here.


Pregnant women, don't stop eating fish!


Tuna, shark, king mackerel, tilefish, swordfish. If you’ve ever been pregnant, or known someone who has been pregnant, this list of seemingly random aquatic vertebrates is all too familiar to you. It’s the “avoid while pregnant” list of seafoods, and it’s just one of the confusing set of messages surrounding pregnancy and fish consumption.

(For the video version of this post, click here).

Because aren’t we supposed to be eating more fish? Fish are the main dietary source for omega-3 fatty acids, which can cross the placenta and may promote healthy brain development. Of course, some of these fish contain mercury, which, as Jeremy Piven taught us all, may be detrimental to cognitive development.


These contradictory facts led the US FDA, in 2014, to recommend that pregnant women consume more fish, but not more than 3 times a week.  You have to love the government sometimes.

A study appearing in JAMA Pediatrics is making some waves with its claim that high levels of fish consumption (more than 3 times per week during pregnancy) are associated with more rapid neonatal growth as well as higher BMIs throughout a child’s young life. Now, contrary to what your mother-in-law has been telling you, more rapid infant growth is not necessarily a good thing; it is associated with overweight and obesity in childhood and adulthood.

But fish as the culprit here? That strikes me as a bit odd. Indeed, prior studies of antenatal fish consumption have shown beneficial or null effects on childhood weight gain.  What is going on here?

The authors combined data from 15 pregnancy cohort studies across Europe and the US, leading to a final dataset including over 25,000 individuals. This is the study’s greatest strength, but also its Achilles heel, as we’ll see in a moment.

But first, the basic results. Fish consumption was based on a food frequency questionnaire, a survey instrument that I, and others, have a lot of concerns about. Women who reported eating 3 or fewer servings of fish a week had no increased risk of rapid infant growth or overweight kids. But among those eating more than 3 servings, there was around a 22% increased risk of rapid growth from birth to age 2 and of being overweight at age 6.

These effects were pretty small, and, more importantly, ephemeral. The authors looked not only at the percentage of obese and overweight children, but the raw differences in weight. At 6 years, though the percent of overweight and obese kids was statistically higher, there was no significant weight difference between children of mothers who ate a lot of fish and those who didn’t. When statistics are weird like this, it usually suggests that the effect isn’t very robust.

In fact, this line from the stats section caught my eye, take a look:

[Excerpt from the paper’s statistical methods on the use of model-predicted weights]

That means the authors used numbers predicted by a statistical model to get the weight of the children rather than the actual weight of the children. I asked the study’s lead author, Dr. Leda Chatzi, about this unusual approach and she wrote “Not all cohorts had available data on child measurement at the specific time points of interest… in an effort to increase sample size and…power in our analyses, we…estimated predicted values of weight and height”.

So we have a statistical model that contains, as a covariate, the output of another statistical model. This compounds error in the final estimate, and in a study like this, where the effect size is razor thin, that can easily bias you into the realm of significance.
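Here’s a toy sketch of one way this can distort things (my own made-up simulation, not the study’s actual models): predicted weights are smoother than real measurements, because the prediction step strips out the child-to-child residual variation, so anything built on top of them inherits an artificially tidy picture of the data.

```python
# Toy illustration with invented numbers, not the cohort data.
import numpy as np

rng = np.random.default_rng(42)
n = 2000

high_fish = rng.binomial(1, 0.3, n)                  # hypothetical exposure flag
trend = rng.normal(20, 1.5, n)                       # the part a growth model can capture
measured_weight = trend + 0.1 * high_fish + rng.normal(0, 3, n)   # real data carry noise
predicted_weight = trend + 0.1 * high_fish           # model output: residual noise gone

for label, y in [("measured weights", measured_weight),
                 ("model-predicted weights", predicted_weight)]:
    print(f"{label:>24s}: SD = {y.std():.2f} kg")

# The predicted values have far less spread, so group comparisons built on them
# will tend to look more precise than the underlying measurements justify.
```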


And, at this point it probably goes without saying, but studies looking at diet are always confounded. Always. While the authors adjusted for some things like maternal age, education, smoking, BMI and birth weight, there was no adjustment for things like socio-economic status, sunlight exposure, diabetes, race, or other dietary intake.

What have we learned? Certainly not, as the authors suggest, that pollutants in the fish are to blame.

no. just no.

That they wrote this in a study with no measurement of said pollutants is what we call a reach.

Look, you probably don’t want to be eating fish with high levels of mercury when you are pregnant. But if my patients were choosing between a nice bit of salmon and a cheeseburger, well, this study doesn’t exactly tip the scales.

 

Antidepressants, pregnancy, and autism: the real story


For the video version of this post, click here.

If you're a researcher trying to grab some headlines, pick any two of the following concepts and do a study that links them: depression, autism, pregnancy, Mediterranean diet, coffee-drinking, or vaccines.  While I have yet to see a study tying all of the big 6 together, waves were made when a study appearing in JAMA Pediatrics linked antidepressant use during pregnancy to autism in children.

To say the study, which trumpets an 87% increased risk of autism associated with antidepressant use, made a splash would be an understatement:

The Huffington Post, The Daily Telegraph (rounding up, naturally), and Newsweek all ran with it.

But if you're like me you want the details. And trust me, those details do not make a compelling case to go flushing all your fluoxetine if you catch my drift.

Researchers used administrative data from Quebec, Canada to identify around 145,000 singleton births between 1998 and 2009. In around 3% of the births, the moms had been taking antidepressants during at least a bit of the pregnancy. Of all those kids, just over 1,000 would be diagnosed with autism spectrum disorder in the first 6 years of life. But if you break it down by whether or not their mothers took antidepressants, you find that the rate of diagnosis was 1% in the antidepressant group compared to 0.7% in the non-antidepressant group. This unadjusted difference was just under the threshold of statistical significance by my calculation, at a p-value of 0.04.
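If you want to run that kind of check yourself, a quick two-proportion test on the post's round numbers gets you into the right ballpark; the paper's exact counts are what give the 0.04, so treat this as a sketch of the method rather than a re-derivation.

```python
# Rough check using rounded figures from the post; the paper's exact counts will
# give a somewhat different p-value.
from statsmodels.stats.proportion import proportions_ztest

n_exposed = 4_350                             # ~3% of ~145,000 singleton births
n_unexposed = 145_000 - n_exposed
asd_exposed = round(0.010 * n_exposed)        # ~1.0% ASD among exposed kids
asd_unexposed = round(0.007 * n_unexposed)    # ~0.7% ASD among unexposed kids

z, p = proportions_ztest(count=[asd_exposed, asd_unexposed],
                         nobs=[n_exposed, n_unexposed])
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```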

These numbers aren't particularly overwhelming.  How do the researchers get to that 87% increased risk? Well, they focus on those kids who were exposed only in the second and third trimesters, where the rate of autism climbs up to 1.2%.  It's not clear to me that this analysis was pre-specified. In fact, a prior study found that the risk of autism increases only when antidepressants are taken in the first trimester.

And I should point out that, again by my math, the 1.2% rate seen in those exposed during the 2nd and 3rd trimesters is not statistically different from the 1% rate seen in kids exposed in the first trimester. So focusing on the 2nd and 3rd trimester feels a bit like cherry picking.

And, as others have pointed out, that 87% is a relative increase in risk. The absolute change in risk remains quite small. If we believe the relationship as advertised, you'd need to treat about 200 women with antidepressants before you saw one extra case of autism.
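The arithmetic behind that "about 200" is worth seeing, because it's the absolute numbers, not the relative ones, that belong in a counseling conversation:

```python
# Relative vs absolute risk, using the approximate rates quoted above.
risk_exposed = 0.012     # ASD rate with 2nd/3rd-trimester exposure (approximate)
risk_unexposed = 0.007   # ASD rate without antidepressant exposure (approximate)

absolute_increase = risk_exposed - risk_unexposed
women_per_extra_case = 1 / absolute_increase     # only meaningful if the link is causal

print(f"Relative increase: {risk_exposed / risk_unexposed - 1:.0%}")  # crude; the paper's 87% is adjusted
print(f"Absolute increase: {absolute_increase:.1%}")
print(f"Treated women per extra ASD diagnosis: ~{women_per_extra_case:.0f}")
```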

But I'm not sure we should believe the relationship as advertised. Multiple factors may lead to both antidepressant use and an increased risk of autism. Genetic factors, for example, were not controlled for, and some studies suggest that genes involved in depression may also be associated with autism. Other factors that weren't controlled for: smoking, BMI, paternal age, access to doctors. That last one is a biggie, in fact. Women who are taking any chronic medication likely have more interaction with the health care system, and it seems fairly clear that your chances of getting an autism diagnosis increase with the number of doctors you see. In fact, in a subanalysis that looked only at autism diagnoses confirmed by a neuropsychologist, the association with antidepressant use was no longer significant.

But there's a bigger issue, folks – when you take care of a pregnant woman, you have two patients. Trumpeting an 87% increased risk of autism based on un-compelling data will lead women to stop taking their antidepressants during pregnancy. And that may mean these women don't take as good care of themselves or their baby. In other words, more harm than good.

Could antidepressants increase the risk of autism? It's not impossible. But this study doesn't show us that. And because of the highly charged subject matter, responsible scientists, journalists, and physicians should be very clear: if you are taking antidepressants during pregnancy, do not stop until, at the very least, you have had a long discussion about the risks with your doctor.

 

 

 

Does Addyi add up? A definitive take on flibanserin for Hypoactive Sexual Desire Disorder


For the video version of this post, click here.

There’s a new drug on the market that is either A) the greatest revolution in women’s sexual health since oral contraceptives or B) a shining example of the FDA’s ineptitude when it comes to drug approval.

 

If you’ve watched TV, listened to the radio, or, basically, been awake at all for the past week, you’ve heard of the FDA-approved drug flibanserin, marketed as Addyi.

 

Like all things that have to do with sex in the United States, Addyi’s approval has been very controversial. Emotions run high on both sides of the issue, but few stories address the data that led to approval in the first place. In this in-depth look, we’ll examine the numbers and arm you with the information you’ll need when patients arrive asking for the “little pink pill”.

The Pill in Question

Let’s get one thing out of the way quickly.  This is not female “Viagra”.  It is not an as-needed drug.  It is actually a complex serotonin receptor ligand that is taken daily.  In fact, it started off life as an antidepressant.


Studies of its antidepressant effects were not compelling enough for Boehringer Ingelheim, the drug’s developer, to move forward.

But there was an interesting side effect.  Some women in these studies described increased sexual desire and more frequent sexual activity.  Sensing a potential goldmine, BI ran several randomized controlled trials to demonstrate the drug’s efficacy and safety.  These trials all have somewhat unfortunate acronyms – DAISY, VIOLET, and BEGONIA.

These trials enrolled a total of around 2,400 pre-menopausal women with hypoactive sexual desire disorder, or HSDD.  They all had to be in monogamous and, for some reason, heterosexual relationships. HSDD is characterized by low sexual desire that is disturbing to the woman and that cannot be explained by factors like medications, psychiatric illness, problems in the relationship, lack of sleep, etc.  The HSDD also had to be acquired – there must have been a prior period of normal sexual functioning.

BI needed to show the FDA two things to prove the drug works.  1 – that the drug increased “desire”.  And 2 – that the drug actually increased sexual activity.

The FDA requires two randomized trials for drug approval.  DAISY and VIOLET were supposed to be those trials, but there was a problem.  Flibanserin significantly increased the number of “sexually satisfying events.”

But it did not increase a daily “desire score” as captured by an e-diary entry. Failing to meet both endpoints, the drug was denied approval by the FDA in 2010.  Boehringer Ingelheim then sold the drug to the small startup Sprout Pharmaceuticals.

One of the secondary desire metrics, however, was positive in these trials, so the BEGONIA trial used sexually satisfying events and the Female Sexual Function Index desire score as its co-primary endpoints. The desire score here comprised two questions, take a look:

Female sexual function index - desire score

In the BEGONIA trial, both of these outcomes favored flibanserin over placebo:

You can see in the charts that you get about half a point of improvement in desire over placebo, and maybe one additional sexually satisfying event per month.

But the average performance of a drug often doesn’t give you a sense of what the range was like.  I was able to reconstruct the distribution of change in sexually satisfying events using some data from one of the trials:

Distribution of change-in-SSE in the two groups

Here, red is placebo and green is flibanserin.  The important thing to note is that, though the average flibanserin-taker gained 2.5 SSEs per month, the range was quite variable.

Now armed with a positive trial, FDA approval was again sought and again denied, this time due to concerns over side-effects.

Twice as many women randomized to flibanserin stopped the drug due to adverse events as did those taking placebo.

Central nervous system depression (somnolence, fatigue, or sedation) was seen in 21% of the women on flibanserin compared with just 7% of placebo-treated patients.

Pharmacokinetic studies nicely demonstrated that CYP3A4 inhibitors like fluconazole and grapefruit juice increased the blood levels of flibanserin. These are pretty easily avoided. Oral contraceptives are also mild inhibitors of these enzymes, and it does look like the somnolence side effects of the drug are exacerbated by OCP use.

And then there is alcohol.  If there’s one thing you hear about this drug – it’s don’t mix it with alcohol.  What’s the data? Well, there’s not much.

At FDA urging, a small study of 25 individuals (almost all men) was conducted to measure the effect of combining flibanserin with alcohol. Mixing the two resulted in a synergistic drop in blood pressure, and 5 people (20%) had a severe adverse event (mostly severe somnolence).


It’s worth noting that flibanserin alone was more likely to cause somnolence than either low or high-dose alcohol alone.

You hear a lot about fainting in the news coverage of flibanserin.  How many women passed out in the phase three trials? 14 out of around 2400.  Ten were taking the drug, and four were taking placebo.

Not to be denied, Sprout Pharmaceuticals did two things.  First, they did more studies to characterize the drug’s safety profile.

For example:

The FDA commissioned a “driving” study to be sure that women who take this drug at night are safe to drive the next day.

Driving Ability and Reaction Time

 

Flibanserin is green here and what you see is basically that the drug has no effect on cognition or reaction time.  Sleepy, yes.  Dangerous? Probably not.

The second thing Sprout did was to start the “Even the Score” campaign.  Funded by Sprout Pharmaceuticals, this was a marketing campaign directed squarely at the FDA in order to pressure drug approval.

 

The idea here was that there were all these drugs for male sexual dysfunction and none for female sexual dysfunction, and that somehow reflected bias at the level of the FDA. Female sexuality is something our society does not handle particularly well, so I get the need for movements like this, but the fact that it was funded by the very people likely to benefit financially from it does feel a bit, well, distasteful.


But shady marketing practices don’t mean the drug is bad any more than the presence of real gender bias in society makes the drug good.

So is the drug good?  That’s the billion-dollar question.  The practical answer is that it’s up to the woman taking it.  Is one additional sexually satisfying experience a month worth the side effects (which, contrary to the popular media portrayal, seem to be rather mild)?  Well, they asked the women in the BEGONIA study how much their HSDD had improved.  Here are the results:

Placebo is a powerful drug

 

All told, about 50% of the women taking flibanserin felt that it benefited them. Just under 40% of the women taking placebo felt that way. This leads to my major prediction for this drug.  Despite the side effects, it will be popular.  In real life, there are no placebo controls.  50% of women will feel better. And yes, some of that will be due to the placebo effect.
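To put a rough number on that gap (my arithmetic, using the approximate percentages above, not a figure from the paper):

```python
# Back-of-the-envelope on the patient-reported improvement rates quoted above.
flibanserin_responders = 0.50   # ~50% felt their HSDD improved on flibanserin
placebo_responders = 0.38       # "just under 40%" felt the same on placebo

benefit_over_placebo = flibanserin_responders - placebo_responders
women_per_extra_responder = 1 / benefit_over_placebo

print(f"Benefit beyond placebo: {benefit_over_placebo:.0%}")
print(f"Women treated per additional responder: ~{women_per_extra_responder:.0f}")
```

Call it somewhere around one woman in eight to ten getting a benefit she wouldn’t have gotten from a sugar pill.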

The real problem with these studies, though, is not that flibanserin is a risky drug.  The problem is that the control group got placebo. My question isn’t whether the drug works better than placebo; it’s whether it works better than sex therapy and/or couples therapy. If that study ever gets done, it probably won’t be run by Sprout Pharmaceuticals.

 

*Thanks to PhDecay (follow her on twitter here) for her advice with this article.

Liraglutide and Weight Loss - The Real Skinny


For the video version of this post, click here.

Weight loss is something of a holy grail for pharmaceutical companies.  A large and, frankly, growing market exists not only in the US but around the world. The list of drugs that have tried, and failed, to crack this market is ever-growing.


We’ve recently seen a slew of news reports about the “SCALE” study touting the injectable medication liraglutide, now being marketed by Novo Nordisk as Saxenda.

The mainstream news outlets have done their job getting the details of this trial out there, but when your patients ask you about liraglutide, you might want a little more detail than what you get from CNN.  So, this week, we’re taking a second look at the SCALE study.

 


 

Let’s start at the beginning.

 


 

Liraglutide is a glucagon-like peptide-1 (GLP-1) analogue and, as a peptide, can only be given by injection.  GLP-1 is in the incretin family - it increases insulin and insulin sensitivity, delays gastric emptying, and promotes feelings of satiety.  As such, liraglutide was originally tested in, and approved for use in, individuals with type 2 diabetes.  That weight loss was a side effect was, I think, a happy accident for Novo Nordisk.

The trial, as it’s presented, was pretty straightforward.  Roughly 3,700 non-diabetic patients with a BMI greater than 30 (or greater than 27 with a relevant comorbidity) were randomized to liraglutide or placebo and followed for 56 weeks.

 


 

They also got lifestyle intervention.  The endpoint was change in weight, and it was pretty clear that the liraglutide group lost more than the placebo group - around 8 kilograms versus 3 kilograms.

 


 

The news outlets have, appropriately, pointed out that for 5 kilograms of weight loss over a year, the $1,000-a-month price tag might not be reasonable, but we can dig a bit deeper than that.

Let’s talk study design. First of all, check out this sentence regarding statistical power.

[The paper’s statistical power statement]

 

This study was designed with a whopping 99% power to detect a positive outcome.  Actually, by my calculation, it was around 99.99% power. If Novo Nordisk was going for a more traditional 90% power, they would only have had to recruit around 150 individuals.  In other words, they were betting BIG on this drug. Whether it paid off is still up for grabs.
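For the curious, here’s roughly how a back-calculation like that works. The effect-size inputs below are my own guesses (the paper’s power statement isn’t reproduced here), but with an observed difference of about 5 kg and an assumed weight-change standard deviation of around 9 kg, a standard two-sample calculation at 90% power lands in the low hundreds of participants, consistent with the “around 150” above.

```python
# Sketch of the sample-size back-calculation; the inputs are assumptions, not
# numbers taken from the SCALE manuscript.
from statsmodels.stats.power import TTestIndPower

assumed_difference_kg = 5.0    # roughly the observed liraglutide-minus-placebo difference
assumed_sd_kg = 9.0            # assumed SD of 56-week weight change (my guess)
effect_size = assumed_difference_kg / assumed_sd_kg   # Cohen's d, about 0.56

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=0.05, power=0.90)
print(f"≈{n_per_group:.0f} per group, ≈{2 * n_per_group:.0f} participants total")
```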

Here’s another point worth noting - statistical analysis and editorial assistance for the manuscript were provided by Novo Nordisk. Given that most of the endpoints were pre-specified, they didn’t have too much flexibility to massage the data, but I think it would be informative to consider what doesn’t appear in the manuscript.

Take a look at Figure 1 here.

 


 

This is the important figure - weight loss in the two comparison groups.  How would you interpret the error bars?  If they were standard deviations, the graph would suggest that weight loss was remarkably consistent in the liraglutide group.  But those error bars aren’t standard deviations; they are standard errors - a subtle difference, but one with a big impact.  Here’s what the graph would look like with standard deviations.

 

[The same weight-loss curves re-plotted with standard-deviation error bars]

 

This would help to drive home the fact that, although the drug seems to work, your mileage may vary.
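The conversion between the two is simple, which is why the choice of error bar matters. A quick sketch with hypothetical numbers (the SE below is made up just to show the scale of the difference; the group size is roughly the liraglutide arm’s):

```python
# Standard errors shrink as 1/sqrt(n); standard deviations show person-to-person spread.
import math

n_liraglutide_arm = 2400      # approximate arm size; the analyzed n in the paper differs
hypothetical_se_kg = 0.2      # a made-up standard-error bar, in kg

implied_sd_kg = hypothetical_se_kg * math.sqrt(n_liraglutide_arm)
print(f"An SE bar of {hypothetical_se_kg} kg at n = {n_liraglutide_arm} "
      f"implies an SD of roughly {implied_sd_kg:.0f} kg of individual variation")
```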

There’s also no discussion of how well blinding worked.  Did people who got placebo know they were getting placebo?  Reading between the lines, it seems pretty clear that they did: 20% of the placebo group withdrew their consent for the study; only 10% of the liraglutide group did.

They could probably figure it out because the side-effects of liraglutide were so clear. 25% of liraglutide patients had nausea in the first four weeks, as this graph shows.


 

One other analysis I would have liked to see? Adjusting the outcome for side-effect symptomatology.  In other words, do people lose weight because they feel sick, or is that just an unlucky effect that occurs in some? It wouldn’t invalidate the results, but it would certainly help us counsel patients.

Now, this trial was conducted in 27 countries at 191 different sites. The company would say that they did this to increase generalizability. There are some more cynical reasons to do this, though.  Number one, it’s a lot cheaper to do these studies outside of the US. But research oversight can also be a bit more… lax… in other countries.  Despite my searching, I was unable to turn up how many of these patients came from the US. In a study where blinding was likely incomplete, it would be very interesting to see if certain recruitment sites had spuriously high effect sizes.

Now let me be clear, I’m not saying anything untoward happened here, only that the data would be nice to have.

As for the main issue you see in the press? That the drug is too expensive for its modest efficacy? Let’s not throw the baby out with the bathwater. One interesting secondary outcome was the rate of new diabetes diagnoses - low overall, but 7-fold higher in the placebo group.

One unfortunate, though unsurprising, finding was that the weight comes back when you stop using the drug, as evidenced by an extension trial in the liraglutide arm. So this might be a life-long drug.  Good for Novo Nordisk, but bad for patients.

After a second look, where are we left with liraglutide?  Well, the drug causes weight loss, in most people. It causes side effects, in a lot of people.  It may prevent diabetes. It costs a lot. The rate of uptake is going to be heavily dependent on calculations done by the insurance companies. If the co-morbid conditions ameliorated or prevented by the weight loss will save money, we’ll see it get used. But it’s not a permanent fix. There’s an elephant in the room that isn’t mentioned in the manuscript, and barely mentioned in the news articles I read. It’s called bariatric surgery, a one-time procedure which may have more efficacy than a lifetime of liraglutide. And though that wasn’t the comparator in this trial, it may be the comparator in our patients’ heads.

That’s the skinny on liraglutide.  I’ll see you next time we need to take a second look.