Thursday, December 3, 2009

Cherrypicking stats: bad form and not helpful

by Amy Romano, CNM (Originally published on Science and Sensibility for Lamaze International)

Science & Sensibility contributor Andrea Lythgoe has a great post up at her own blog. In The Doula Numbers Game, Andrea shows that many of us may be overestimating, and overstating, the beneficial effects of continuous support from doulas. She argues, and I agree, that using outdated statistics because they yield "better" results could compromise our integrity. Moreover, doing so is not necessary to advocate for greater access to doulas.

Data from the Cochrane Systematic Review show more modest effects of doula support, but they still add up to clinically significant benefits, greater satisfaction, and no evidence of harm. Maternal-fetal medicine researchers who evaluated the evidence for a variety of obstetric interventions in the November 2008 issue of the American Journal of Obstetrics and Gynecology called doula support "one of the most effective interventions" (p. 446) for improving outcomes. And they did so without being wowed by the inflated early statistics. (They stuck to the Cochrane.)

It can be extremely difficult to look at research objectively. It is human nature to want to cherrypick the research that furthers our cause the most. We may try to find fault with statistics we don't like and subconsciously ignore problems or limitations in statistics we do. But improving the safety and effectiveness of maternity care requires that we analyze the research critically, which means recognizing limitations and flaws in the studies we agree with and standing behind solid research even when we don't like its conclusions. We need not worry. Even through a critical lens, the research points to a need to radically reform our system to make it more mother-friendly.

Andrea finishes each post in her Understanding Research series with a familiar plea to practice, practice, practice finding and reading research literature. One of the skills we all should practice is to read the studies that seem to contradict our beliefs or biases. Often, these studies are flawed, and spending time reading them helps us hone our ability to spot methodological problems and logical inconsistencies in other research. Other times the research is valid, and we see circumstances where technology and medicine do in fact improve outcomes. Reading these studies can also shed light on important unanswered research questions.

I highly recommend that readers take a look at Andrea’s post for an example of thoughtful critical analysis of statistics on doula support in labor. It is hard to update our long-held beliefs or alter the ways we teach and practice. But this is just what we’re asking of our “medical model” counterparts. We should lead by example.

2 comments:

Unknown said...

As a professional statistician, I have to say this is the most refreshing post I've read in a long time. Well said!

Ciarin said...

Thought this was a great post. Thanks for writing!

It's interesting how research studies can sometimes be interpreted differently depending on the reader's bias and position.