Might Alcoholics Anonymous Not Work for Those Who Won’t Participate in Randomized Experiments of AA? A Study of Breast Feeding Promotion Could Help Answer the Question
Alcoholics Anonymous’ “faith-based 12-step program dominates treatment in the United States. But researchers have debunked central tenets of AA doctrine,” wrote Gabrielle Glaser in April’s Atlantic. She says that AA may not work for many with alcohol problems and that AA’s supporters unfairly dismiss effective drug treatments based in neuroscience.
Jesse Singal immediately fired back that Glaser had missed several randomized experiments of 12-step facilitation showing its effectiveness. And last week Austin Frakt explained how such randomized experiments can be analyzed to “tease apart a treatment effect (improvement due to AA itself) and a selection effect (driven by the type of people who seek [AA] help).” Keith Humphreys, Janet Blodgett and Todd Wagner did just that using combined data from five randomized experiments to show that AA really works—for those who use it.
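Austin’s decomposition can be sketched with the textbook Wald (instrumental-variables) calculation, which uses random assignment as an instrument for actual AA attendance. The numbers below are invented for illustration, not taken from the studies:

```python
# Hypothetical numbers, purely illustrative: a Wald/IV estimate of the
# effect of AA attendance, using random assignment as the instrument.
p_attend_assigned = 0.70   # assumed share of the assigned arm who attend AA
p_attend_control = 0.30    # assumed share of controls who attend anyway
p_sober_assigned = 0.40    # assumed abstinence rate, assigned arm
p_sober_control = 0.32     # assumed abstinence rate, control arm

itt = p_sober_assigned - p_sober_control            # intent-to-treat effect
first_stage = p_attend_assigned - p_attend_control  # shift in attendance
late = itt / first_stage                            # effect among compliers

print(f"ITT: {itt:.2f}, attendance shift: {first_stage:.2f}, LATE: {late:.2f}")
```

The intent-to-treat effect mixes attenders and non-attenders; dividing by the shift in attendance recovers the effect for those who attend only because they were assigned, which is what Humphreys and colleagues’ analysis aims at.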
Who is right?
Despite missing evidence she should not have missed, Glaser’s basic reasoning and point stand. And not only because the experiments never compared AA to the drug treatments Glaser advocates. The other big problem is that randomized experiments can’t tell us about the people who won’t participate in them. Austin focused on those who were assigned AA facilitation but didn’t go to AA. If the crossover selection within a randomized experiment is that big, might the selection into a randomized experiment be as big or bigger?
Imagine that you’re someone with a long-standing alcohol problem and that AA has failed you multiple times, like J.G., profiled by Glaser. You see a sign offering free treatment for your alcohol problem if you participate in a study. (One experiment described participants as “recruited from the community.”) Interested, you learn that half the participants will be assigned to AA facilitation. Will you participate? Maybe you think that what hasn’t worked six times before won’t work this time, and so you decline.
Or perhaps, you’re more desperate and already seeking inpatient treatment. (Another experiment “recruited directly from… [those] referred from the intake unit of an outpatient treatment center in lieu of other treatment” as well as “from the community.”) Might you still decline more of what hadn’t worked before despite your desperation?
A randomized experiment can only tell us about people who participate in it.
Is there a study that compares those who decline to participate in experimental studies of AA with those who do? I don’t know of one. But there is a superb recent study of exactly this issue for breast-feeding promotion by Onur Altindag, Ted Joyce and Julie Reeder. That study shows directly that randomized-experiment participants differ from non-participants in how likely they are to pursue breast-feeding without any promotion. Such differences seriously undermine how well the experiment’s estimates of the effect of peer breast-feeding promotion generalize to non-participants. [Full disclosure: Onur and Ted are my co-researchers on an unrelated project.]
The breast-feeding study has great data because the Oregon WIC program tries to support breast-feeding and surveys all WIC mothers about their breast-feeding one, three and six months after they give birth. Between 2005 and 2010 the Oregon WIC program conducted a randomized experiment in four counties to examine the effectiveness of telephone counseling by former WIC beneficiaries who had successfully breast-fed. This administrative set-up meant that the WIC program learned about the breast-feeding behavior of mothers who decided not to participate in the experiment, so the researchers could compare the breast-feeding of participants and non-participants.
What did the study find?
English-speaking mothers who chose not to participate were 6 percentage points less likely to breast-feed exclusively for the first six months than those in the experiment’s control group, who never got the peer counseling.* In other words, the mothers who agreed to join the peer-counseling randomized experiment were mothers who were more likely to breast-feed, peer counseling or not.
How big is this selection-into-the-study effect? For comparison, the experimental effect itself was zero: the peer counseling did not increase exclusive breast-feeding among English-speaking WIC mothers.
Selection into the experiment, however, was not equally biased for everyone. Among Spanish-speaking WIC mothers, 90% agreed to participate, and participants were no more inclined to breast-feed than non-participants. (Incidentally, the peer counseling was effective for the Spanish speakers.)
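To make the selection story concrete, here is a minimal simulation with made-up rates, assuming (as in the English-speaking group) a zero treatment effect and a 6-point gap in baseline breast-feeding propensity between enrollees and decliners:

```python
import random

random.seed(0)

# Hypothetical, illustrative rates (not the study's actual data):
# enrollees' baseline exclusive-breast-feeding rate is assumed 6 points
# higher than decliners', and the treatment is assumed to have zero effect.
P_NONPARTICIPANT = 0.14   # assumed baseline rate among decliners
P_CONTROL = 0.20          # assumed rate among enrolled controls
P_TREATED = 0.20          # assumed rate among treated (zero treatment effect)

def rate(p, n):
    """Simulate n mothers, each breast-feeding with probability p."""
    return sum(random.random() < p for _ in range(n)) / n

n = 100_000
controls = rate(P_CONTROL, n)
decliners = rate(P_NONPARTICIPANT, n)
treated = rate(P_TREATED, n)

print(f"treatment effect (within experiment): {treated - controls:+.3f}")
print(f"selection effect (controls vs. decliners): {controls - decliners:+.3f}")
```

The experiment, looking only at enrollees, correctly finds no treatment effect; but the 6-point gap between controls and decliners shows the enrollees were never representative of everyone the program might serve.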
What does this breast-feeding study mean for the relevance of the AA randomized experiments? In the breast-feeding study, participants only agreed to a few phone calls, not a very big deal. Someone who had experienced AA failure might find sessions advocating AA more off-putting, meaning even more biased selection into the AA experiments than into the breast-feeding one. On the other hand, those with alcohol problems might be desperate enough that there is less selection into the AA experiments.
Randomized experiments provide irreplaceable evidence. The AA experiments told us that AA isn’t only about people motivated enough to stay sober no matter what. But we need more evidence about the people who won’t agree to such experiments, as well as about the people who won’t comply—the crossovers within an experiment. Following up those who decline to participate in an experiment would be a good first step, though even just following people can change their behavior.
(We should also have randomized experiments comparing AA facilitation to drug treatments, despite the generalizability issues. I assume—or hope—that such studies are already in the works.)
In the meantime, should we “encourage even more to take advantage” of AA? Should courts continue to mandate AA? Or is Glaser right that we should stop “blaming” those who fail AA and instead send them for drug treatment? Hard to know. But we should be humble—and flexible—in generalizing from randomized experiments to advice for everyone with an alcohol problem. Perhaps we should only suggest they try AA—and try something else if that doesn’t work.
*The 6-percentage-point breast-feeding difference reported in the paper controls for differences in age, education, income and marital status between participants and non-participants, although those differences were quite small. By personal communication, Onur informed me that the difference with no controls was also 6 percentage points.
Note: Post slightly modified for clarity April 19, 2015.