What Randomized Experiments of Alcoholics Anonymous Can’t Tell Us

Might Alcoholics Anonymous Not Work for Those Who Won’t Participate in Randomized Experiments of AA? A Study of Breast Feeding Promotion Could Help Answer the Question

Alcoholics Anonymous’ “faith-based 12-step program dominates treatment in the United States. But researchers have debunked central tenets of AA doctrine,” wrote Gabrielle Glaser in April’s Atlantic. She says that AA may not work for many with alcohol problems and that AA’s supporters unfairly dismiss effective drug treatments based in neuroscience.

Jesse Singal immediately fired back that Glaser had missed several randomized experiments of 12-step facilitation showing its effectiveness. And last week Austin Frakt explained how such randomized experiments can be analyzed to “tease apart a treatment effect (improvement due to AA itself) and a selection effect (driven by the type of people who seek [AA] help).” Keith Humphreys, Janet Blodgett and Todd Wagner did just that using combined data from five randomized experiments to show that AA really works—for those who use it.

Who is right?

Everyone.

Continue reading
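As a toy illustration of the distinction Frakt describes (not the Humphreys, Blodgett and Wagner analysis itself), the following simulation invents a population in which motivated people both attend AA more and drink less anyway. Every number and variable name here is made up for the sketch; it only shows how randomized encouragement to attend can separate a treatment effect from a selection effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical selection effect: more motivated people both attend
# more often and do better regardless of attendance.
motivation = rng.normal(size=n)

# Randomized encouragement (think 12-step facilitation) raises attendance.
encouraged = rng.integers(0, 2, size=n)
attend = ((motivation + 1.0 * encouraged + rng.normal(size=n)) > 0.5).astype(float)

true_effect = 1.0  # improvement caused by attendance itself (assumed)
outcome = true_effect * attend + 1.5 * motivation + rng.normal(size=n)

# Naive attender-vs-nonattender gap mixes treatment and selection:
naive = outcome[attend == 1].mean() - outcome[attend == 0].mean()

# Using the randomization as an instrument isolates the treatment effect
# (Wald estimator: effect of encouragement on the outcome, scaled by
# its effect on attendance):
itt = outcome[encouraged == 1].mean() - outcome[encouraged == 0].mean()
first_stage = attend[encouraged == 1].mean() - attend[encouraged == 0].mean()
iv_estimate = itt / first_stage
```

In this invented population the naive comparison overstates the benefit of attendance, while the instrumental-variable estimate recovers the assumed treatment effect of 1.0, which is the logic behind "teasing apart" the two effects in randomized encouragement designs.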

Advertisement


More Than Big Data Needed for Estimates as Good as Randomized Clinical Trials: Prospective Observational Causal Studies

A research methods post…

Bill Gardner is cautiously—very cautiously—hopeful that big data from electronic health records (EHRs) will enable unbiased estimates of the effects of medical treatment without any randomized controlled trials. Gardner’s hope, his caution and his description of the data needed—“all the factors that determine who gets what treatment”—are all right on the mark. And EHRs will massively increase data on the detailed clinical factors that drive clinical decisions.

But I fear researchers will focus only on the data lying around in EHRs. To cure confounding, you need to go out and measure all the confounders—everything related to both treatment and outcome.

Continue reading
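A toy simulation makes the point about measuring confounders concrete. Everything here is invented for illustration: a single hypothetical confounder ("severity") that drives both treatment and outcome. Ignoring it biases the naive comparison; adjusting for the measured confounder recovers the assumed effect, which is exactly why unmeasured confounders left out of an EHR cannot be cured after the fact:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: sicker patients are both more likely to be
# treated and more likely to have poor outcomes.
severity = rng.normal(size=n)
treated = (severity + rng.normal(size=n) > 0).astype(float)

true_effect = 2.0  # assumed benefit of treatment
outcome = true_effect * treated - 1.5 * severity + rng.normal(size=n)

# Naive treated-vs-untreated comparison, confounder ignored, is biased:
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Adjusting for the measured confounder recovers the effect
# (ordinary least squares on an intercept, treatment, and severity):
X = np.column_stack([np.ones(n), treated, severity])
beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
adjusted = beta[1]  # coefficient on treatment
```

The adjustment works here only because severity was measured; if it lived outside the EHR, no amount of data volume would remove the bias.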

Are 90% of academic papers really never cited? Searching citations about academic citations reveals the good, the bad and the ugly

“90% of papers published in academic journals are never cited.” This damning statistic from a 2007 overview of citation analysis recently darted about cyberspace. A similar statistic had made the rounds in 2010 but that time it was about 60% of social and natural science articles that were said to be uncited. Neither statistic came with a link to supporting academic research papers.

That lack of support was a problem for me. I did not doubt the basic truth that many academic papers are uncited. But to be sure 90% was not urban legend, and to learn the context and caveats, I needed to find the original research paper. I was not the only one who wanted the supporting evidence. So I dove into Google Scholar, searching the disparaged academic literature for articles on academic citation rates.

What’s the truth?

Continue reading

Bridgegate: The Case of the Missing Ethical Research Review

When the mayor of Fort Lee asked why those—now infamous—toll lanes to the George Washington Bridge were closed, he was told, “a study.”  Yet when his desperate pleas to Port Authority officials were ignored, he did not turn to the independent official body in charge of making sure that studies are conducted ethically. Such a body must ensure studies don’t harm people—don’t, say, trap ambulances in traffic.

Why didn’t the Fort Lee mayor turn to that body? Don’t research studies that affect humans have to show that they don’t harm those humans? Or at least that benefits exceed harms? Decades ago, after scandals like the Tuskegee study, which kept poor sharecroppers ignorant of their syphilis and therefore untreated, we created rules and bureaucracies to protect human research subjects. Currently, regulation 45 CFR 46 ensures this, mandating the creation of Institutional Review Boards (IRBs), which are charged with ensuring that research on humans is ethical.

The mayor of Fort Lee couldn’t turn to an IRB because there wasn’t one. IRB review doesn’t apply to a government agency—or school or business—trying to improve its own operations. Studies for “internal management” purposes don’t count as research, which is defined as producing “generalizable knowledge.” (Generalizable means providing information beyond just the specific setting, place and time of the study.)

Generalizability, however, is an arbitrary, potentially dangerous and just plain stupid way to decide which studies are subject to ethical review.

Continue reading