Breastfeeding is associated with many benefits, ranging from the economic to the developmental. But association isn't causation, and touting the myriad benefits of breastfeeding may leave some women who are unable to breastfeed feeling less-than. Now an article appearing in the journal Pediatrics demonstrates, through careful matching, that breastfeeding itself likely has no effect on infant development. For the video version, click here.
Do you remember when your little baby took her first step? How about when she sat without support? How about standing with assistance? Yes, for many of us these "milestones" are not exactly burned into our brains, but a new study from the journal Pediatrics suggests that some of these milestones may be really important – not just for baby journals, but for childhood development. For the video version of this post, click here.
Here’s the deal. We've known for a long time that kids with severe developmental disabilities in childhood seem to meet some gross motor milestones later than expected. But that's looking at an extreme case. The question these researchers had was whether delayed gross motor development would be associated with later childhood development in kids without developmental delay.
To answer this question, they turned to the Upstate KIDS study, a prospective cohort study of over 6000 babies born in the New York area. This analysis, though, focuses on only 501 of the children – a subset who agreed to a follow-up examination at 4 years. So, if you’re keeping score, we’re already looking at a group that is not representative of the population at large.
Based on logs the mothers kept, the researchers looked at when the child achieved certain gross motor milestones like walking. They looked at 6 milestones in all, and compared them to the total developmental score at four years of age. The findings were… subtle.
After adjustment for factors like maternal age, prematurity, and others, there was a statistically significant association between one of the six milestones – later standing with assistance – and total developmental score. That total score is driven by 5 subcomponents, and when those were analyzed individually, later standing with assistance was associated with worse adaptive and cognitive development.
Similar results were seen in the subset of kids with no developmental disability – the subset, which, speaking editorially here, really should have formed the primary analysis of this study.
So… ok… should we panic if our kids aren't standing and walking like a bunch of little Rory Calhouns? I'm not ready for that yet. For one, the authors don't appear to have accounted for the multiple comparisons evaluated here – so the marginally statistically significant result has a pretty high risk of being a false-positive. Second, it's not immediately obvious what you would do with a kid who stands with assistance 2.1 months later than the average. Stand them up more? Send them to a neurologist?
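To see why multiple comparisons matter here, consider a quick sketch. With 6 milestone tests, a simple Bonferroni correction shrinks the significance threshold from 0.05 to roughly 0.008, and a p-value that squeaks under 0.05 no longer counts. The p-values below are invented for illustration – they are not from the study:

```python
# Hypothetical p-values for 6 milestone comparisons (made up for illustration).
p_values = [0.04, 0.32, 0.51, 0.18, 0.77, 0.09]
alpha = 0.05
bonferroni_alpha = alpha / len(p_values)  # 0.05 / 6 ≈ 0.0083

significant_uncorrected = [p for p in p_values if p < alpha]
significant_corrected = [p for p in p_values if p < bonferroni_alpha]

print(significant_uncorrected)  # [0.04] -- looks like a finding
print(significant_corrected)    # []     -- nothing survives the correction
```

A marginal result that vanishes under even this crude correction is exactly the kind of result that deserves a skeptical eye.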
In the end, we’d end up giving moms and dads just one more metric to worry about in a world obsessed with measuring kids' performance at every turn. Or every step.
A study appearing in JAMA Pediatrics suggests that children born late-term have better cognitive outcomes than children born full-term. As if pregnant women didn’t have enough to worry about. For the video version of this post, click here.
Let’s dig into the data a bit, but first some terms (sorry for the pun). “Early term” means birth at 37 or 38 weeks gestation, “full term” 39 or 40 weeks, and “late term” 41 weeks. In other words, this study is not looking at pre-term or post-term babies; all of the children here were born in a normal range.
Ok, here’s how the study was done. Researchers used birth records from the state of Florida and linked them to standardized test performance in grades 3 through 10. Compared to children born at 39 or 40 weeks of gestation, those born at 41 weeks got test scores that were, on average, about 5% of a standard deviation higher. To get a sense of what that means, if these were IQ tests (they weren’t) that would translate to a little less than 1 IQ point difference. Not huge, but the sample size of over one million births makes it statistically significant.
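The back-of-the-envelope conversion is worth making explicit. IQ scales are normed to a standard deviation of 15, so an effect of 0.05 standard deviations works out to 0.75 points – the "little less than 1 IQ point" above. A quick sketch of the arithmetic:

```python
# Convert an effect size in standard deviation units to an IQ-style scale.
effect_in_sd = 0.05   # ~5% of a standard deviation, per the study
iq_sd = 15            # IQ tests are normed to a standard deviation of 15
effect_in_iq_points = effect_in_sd * iq_sd

print(effect_in_iq_points)  # 0.75
```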
10.3% of those born at 41 weeks were designated as “gifted” in school, compared to 10.0% of those born at full-term.
Before I look at what might go wrong in a study like this – is the effect plausible? To be honest, I sort of doubt it. One week extra development in utero certainly will lead to some differences at or near birth, but I find it hard to believe that any intelligence signal wouldn’t simply be washed away amid all the other factors that affect developing young minds prior to age 8.
Now, the authors did their best to adjust for some of these things – race, sex, socioeconomic status, birth order – but it seems likely that there are unmeasured factors here that might lead to longer gestation and better cognitive outcomes – maternal nutrition comes to mind, for example.
We also need to worry about systematic measurement error. These gestation times came from birth certificate data – in other words, many of these measurements may have been some doctor's best guess. If the dates were determined by ultrasound, larger babies might be misclassified as later term. Also, I suspect that if conception dates weren’t well known, a lot of doctors filling out the birth certificate may have just written “40 weeks” to put something in that box.
The authors attempted to look just at women where the likelihood of prenatal care was high, finding similar results, but again, with the tiny effect size, any small systematic measurement error could lead to results like this.
The authors state that this information is relevant to women who are considering a planned cesarean or induction of labor. Currently, the American College of Obstetricians and Gynecologists recommends “targeting” labor to 39-40 weeks to avoid some physical complications of late-term birth. In my opinion, having this study change that recommendation at all would be premature.
For the video version of this post, click here. The public attitude towards marijuana is changing. Though some continue to view the agent as a dangerous gateway to harder drugs like cocaine and heroin, increasing use of the drug for medical purposes, and outright legalization in a few states, will increase the number of recreational pot users. It's high time we had some solid data on the long-term effects of pot smoking, and a piece of the puzzle was published today in JAMA Internal Medicine.
Researchers leveraged an existing study (which was designed to examine risk factors for cardiac disease in young people) to determine if cumulative exposure to marijuana was associated with impaired cognitive function after 25 years. Note that I said "impaired cognitive function" and not "cognitive decline". The study didn't really assess the change, within an individual, over the 25-year period. It looked to see if smokers of the ganj had lower cognition scores than non-smokers.
That minor point aside, some signal was detected. After 25 years of follow-up, individuals with higher cumulative use had lower scores on a verbal memory test, a processing speed test, and a test of executive function.
But wait – those numbers are unadjusted. People with longer exposure time to weed were fairly different from non-users. They were less likely to have a college education, more likely to smoke cigarettes, and, importantly, much more likely to have puffed the magic dragon in the past 30 days.
Accounting for these factors, and removing from the study anyone with a recent exposure to the reefer showed that longer cumulative exposure was only associated with differences in the verbal learning test. Processing speed and executive function were unaffected.
Now, the authors make the point that there was a dose-dependent effect with "no evidence of non-linearity". That is code for the absence of a "threshold effect" – according to their model, any amount of pot would lead to lower verbal scores. Take a look at this graph:
What you see is a flexible model looking at marijuana-years (by the way, one year means smoking one doobie a day for 365 days). The authors' point is that there isn't a kink in this line – the relationship is pretty linear. But look at the confidence intervals. The upper bound doesn't actually cross zero until five years. In short, the absence of an obvious threshold doesn't mean that no threshold exists. It is likely that the study was simply underpowered to detect threshold effects.
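For clarity on the exposure unit, "marijuana-years" just multiplies daily use by duration, so very different habits can land on the same number. A small sketch of my own (this is an illustration of the unit, not the study's code):

```python
# One "marijuana-year" = one joint per day, every day, for one year.
# This is my own illustration of the exposure unit, not the study's code.
def marijuana_years(joints_per_day: float, years_of_use: float) -> float:
    return joints_per_day * years_of_use

# Half a joint a day for ten years lands at five marijuana-years --
# the exposure at which the upper confidence bound finally crosses zero.
print(marijuana_years(0.5, 10))  # 5.0
print(marijuana_years(1.0, 5))   # 5.0 -- a different habit, same exposure
```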
The most important limitation, though, was that the authors didn't account for age-of-use on the cognitive outcomes. With emerging evidence that pot-use at younger ages may have worse effects on still-developing brains, this was a critical factor to look at. Five years of pot exposure may be much different in a 25-year old than in an 18-year old. This data was available – I'm not sure why the interaction wasn't evaluated.
In the final analysis, I think we can confirm what common sense has told us for a long time. Pot certainly isn't magical. It is a drug. It's just not that bad a drug. For the time being, the data we have to work with is still half-baked.
For the video version of this post, click here.
We've talked a lot about diet studies here on 150 Seconds. I like to complain about them because most are observational. People who eat healthy diets are healthier. Randomized trials are better, but unless you're preparing every meal for the participant, you can never be 100% sure what they are getting.
Today we're talking about a randomized diet trial that takes a somewhat unusual approach to the issue. In fact, I think the methodology in this study is more interesting than the results. In an article appearing in JAMA Internal Medicine, a group of Spanish researchers report on the relationship between a Mediterranean-style diet and cognition in a group of around 450 individuals at higher risk of cardiovascular disease.
Mediterranean diets are characterized by fresh fruits and vegetables, fish, nuts, and olive oil. Nutrient-wise, we're basically talking about higher amounts of mono- and polyunsaturated fats, which, in the lab at least, seem to have antioxidant and anti-inflammatory properties. In terms of cognitive decline, which may be partly a vascular process, tamping down these things could be beneficial.
There were 3 groups in the study: a control low-fat diet group, and two Mediterranean diet groups – one supplemented with nuts, one with olive oil. Here's where it got interesting. The participants cooked all their own food – basically, the intervention was just some education and a weekly food gift of either 1 liter of olive oil or 1 cup of nuts.
Despite the lack of rigorous dietary control, the intervention groups did change their eating habits. Caloric intake didn't change much, but carbohydrate intake went down and polyunsaturated fat intake went up in both Mediterranean diet groups compared to the controls.
As for the primary results? I wouldn't call them sizzling. Cognitive function declined a bit more in the control group than either of the Mediterranean diet groups, but changes were mild overall.
The trial itself had a few blemishes. One is that, at first, the control group wasn't treated very similarly to the intervention groups. Controls had fewer visits, and didn't get weekly gifts from the study. This was remedied about halfway through the trial, but some damage had probably been done, and it's conceivable that controls who didn't feel as invested in the study wouldn't have performed as well on cognitive testing.
The second issue is a somewhat high dropout rate, around 25%, that was differential among the groups. Neither of these flaws is a killer, but when you combine a couple of them with relatively mild results overall, you're left wondering what to take home from this.
For me, it's not the primary results. I'm just impressed that the simple act of giving people healthy food – giving people olive oil – changed their eating habits. That's pretty slick if you ask me.
This week in 150 Seconds, I take on a recent study in the journal Neurology that examined the effects of arts, crafts, and computer use on the development of mild cognitive impairment in the elderly. It's a nice study, but it contains an error in the interpretation of subgroup analyses that I see frequently, so I decided to discuss it in the following video: