When the mayor of Fort Lee asked why those—now infamous—toll lanes to the George Washington Bridge were closed, he was told, “a study.” Yet when his desperate pleas to Port Authority officials were ignored, he did not turn to the independent official body in charge of making sure that studies are conducted ethically. Such a body must ensure studies don’t harm people—don’t, say, trap ambulances in traffic.
Why didn’t the Fort Lee mayor turn to that body? Don’t research studies that affect humans have to show that they don’t harm those humans? Or at least that benefits exceed harms? Decades ago, after scandals like the Tuskegee study, which kept poor sharecroppers ignorant of their syphilis, and therefore untreated, we created rules and bureaucracies to protect human research subjects. Currently, regulation 45 CFR 46 ensures this, mandating the creation of Institutional Review Boards (IRBs), which are charged with ensuring that research on humans is ethical.
The mayor of Fort Lee couldn’t turn to an IRB because there wasn’t one. IRBs don’t apply to a government agency—or school or business—trying to improve operations. Studies that are for “internal management” purposes don’t count as research, which is defined as producing “generalizable knowledge.” (Generalizable means providing information beyond just the specific setting, place and time of the study.)
Generalizability, however, is an arbitrary, potentially dangerous and just plain stupid way to decide which studies are subject to ethical review.
For one thing, trying to decide which studies produce generalizable knowledge can get pretty silly. If a study is designed to measure how much signs threatening fines reduce texting while driving, that would be considered generalizable knowledge and IRB rules would apply. But if a Port Authority study examines how effective signs are in specific places on the GWB, that would not be generalizable knowledge. It would be just about improving GWB operations and therefore “internal management.” Yet if the GWB sign placement study finds the signs more effective at certain places on the GWB than others, wouldn’t that provide some clues about where to put them on other bridges?
As a practical matter, whether a study falls under IRB jurisdiction often comes down to whether it is published in an academic journal. If the Port Authority employed some PhD interested in publishing the study results, she’d probably want to claim generalizability and would have to go to an IRB. Since studies are now disseminated in online reports and even blogs, publication in peer-reviewed journals is possibly an even sillier way to determine ethical review jurisdiction.
(Technically, a bunch of other factors also affect whether an IRB has jurisdiction, including “does the organization conducting the research ever get Federal funding?” But, as a practical matter, outside universities and medical settings, academic publication is often what matters.)
Which studies are subject to ethical review is not some arcane issue irrelevant to the general public. With the advent of so-called Big Data, the number of studies by businesses, government agencies and non-profits is on the cusp of exploding. The vast majority of these studies will just analyze existing data, with ethical issues being mostly about privacy and security—stuff people are already worried about and trying to regulate.
But some studies will involve experiments—interfering with the world, say, by moving traffic cones. And these are not on the public’s radar. Most people think experiments are for medical research and labs. They’re not: they can be in schools, courtrooms, political rallies, grocery stores—and anything online. Some may say that experimenting on people is always ethically wrong. But that would be wrong. We want to reduce texting and driving, reduce convictions of the innocent, increase the number of software coders and so on. And experiments can help us achieve all that and more. We just need to weed out the harmful experiments—generalizable or not.
I feel weird complaining about studies in real world settings not being subject to IRBs. Normally, I preach to my students in government and nonprofits the value—and practicality—of doing more research in their workplaces. The last thing I want is a vast cumbersome bureaucracy that interferes with learning that could make the world a better place.
I also feel weird—actually nervous—because normally I talk about how there is too much bureaucracy for, and scrutiny of, innocuous social research. For example, a colleague who studies communications in social movements wanted to interview some innovators who were briefly in town. But he could not get IRB approval in time, since even a rush job takes a few weeks. So he was unable to do interviews similar to those journalists do all the time.
You should see the IRB forms we academics fill out. A completely noncontroversial application with no minors, no sensitive issues, nothing remotely objectionable will have about six distinct forms. There must be a complete description of how the research will be carried out, along with informed consent forms that every research subject must agree to. All of this is needed before the review process can even start. There is a push against IRB scrutiny for some forms of social research, like oral histories (see http://www.insidehighered.com/news/2013/03/06/aaup-recommends-more-researcher-autonomy-irb-reform). After all, the rules and procedures were originally designed for medical research.
(Some of the excess is not due to the official rules, but rather the over-zealousness of IRBs and universities. For example, review is only supposed to be for research “on humans.” According to official definitions, interviewing people about their sex lives is research on humans, but interviewing agency officials about agency policies is not. Yet woe betide the academic who decides for himself that the IRB does not have jurisdiction over his interviews of agency officials.)
Still, buried in the excess is the great value of ethical review. Researchers are forced to think about and articulate any possible risks to people from the research. They must also describe potential benefits to people participating in the research—and to society. That is what we need to expand to the program evaluations, quality assurance studies and other operational studies that don’t claim generalizability. And expand to sectors beyond academia, medical settings and those wanting to publish.
At this point, I don’t know how to expand the good of ethical review without a lot of bad side effects. Perhaps going for norms and training before regulation would be best. Perhaps we could require anyone doing any investigation to think about and write up potential harms and benefits, and require them to make it available if concerns arise.
Of course, ethical review of Port Authority studies would not have deterred the Bridgegate perpetrators. After all, the Port Authority does have extensive rules and processes for closing lanes for any purpose and the perpetrators ignored all of those rules. At best, ethical review of studies on humans would have forced the perpetrators to find another excuse. But the ease with which they used that excuse highlights the immense gaps and inconsistencies in which studies are regulated. As studies explode in all corners of our lives, let’s work on a better approach.
Disclosure: Inspiration for this post came from my aunt, Joan Zorza.