“Publication Bias in the Social Sciences”
If publication bias and data mining are so common, what might this mean for studies that don’t produce positive results? A 2014 Science article by Annie Franco, Neil Malhotra, and Gabor Simonovits tried to answer this question by reviewing over 200 high-quality studies whose publication status could be easily determined. What they found further confirmed the prevalence of publication bias in the social sciences. To combat this bias, Franco, Malhotra, and Simonovits pushed for journals to require authors to submit their questions and methods for review before data collection. We will learn about how social science journals have begun to formalize this process next week.
In this article, Franco, Malhotra, and Simonovits leverage Time-sharing Experiments in the Social Sciences (TESS) to find evidence of publication bias and to identify at what stage of the research process it occurs.
The authors state that publication bias occurs when “publication of study results is based on the direction or significance of the findings.” In general, a study has a greater chance of being published if its results are statistically significant. This practice of selective reporting produces what is known as the “file drawer” problem: statistically non-significant results tend to be stored away in file drawers rather than published. Franco, Malhotra, and Simonovits write that “failure to publish appears to be most strongly related to the authors’ perceptions that negative or null results are uninteresting and not worthy of further analysis or publication.”
Researchers have tried to address publication bias in the past by “replicat[ing] a meta-analysis with and without unpublished literature” and by “solely examin[ing] published literature and rely[ing] on assumptions about the distribution of unpublished research.” Each of these methods has its limits, so the authors chose instead to “examine the publication outcomes of a cohort of studies.” In this case, they examined the outcomes of TESS, a research program that solicits proposals for survey-based experiments and “submits proposals to peer review and distributes grants on a competitive basis.”
Franco, Malhotra, and Simonovits compared the statistical results of TESS experiments that were published with those that were not. This strategy has several advantages:
- a known population of studies,
- full accounting of what is published or not,
- rigorous peer review for proposals with a quality threshold that must be met, and
- the same high-quality survey research firm conducting all experiments.
(Note: A concern with TESS is that it may not be completely representative of social science research.)
The analysis distinguished between two types of unpublished experiments: (1) those that were prepared for submission to a journal, and (2) those that were never written up in the first place.
The authors also considered “whether the results of each experiment are described as statistically significant by their authors,” since it can be difficult to know each author’s exact intentions. This distinction matters because authors’ perceptions influence how they present their data to readers.
Studies were classified into three categories: (a) strong – all or most hypotheses were supported; (b) null – all or most hypotheses were not supported; and (c) mixed – the remainder.
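To make this classification concrete, here is a minimal sketch in Python of how such a rule could be encoded. The two-thirds cutoff is our own illustrative assumption; the paper describes the categories only qualitatively (“all or most”) and does not give a numeric threshold.

```python
def classify_study(n_supported: int, n_hypotheses: int) -> str:
    """Classify a study as 'strong', 'null', or 'mixed'.

    Illustrative only: we read "all or most" as at least two-thirds
    of the hypotheses; the paper does not specify a numeric cutoff.
    """
    if n_hypotheses < 1:
        raise ValueError("a study must test at least one hypothesis")
    share = n_supported / n_hypotheses
    if share >= 2 / 3:
        return "strong"  # all or most hypotheses supported
    if share <= 1 / 3:
        return "null"    # all or most hypotheses not supported
    return "mixed"       # the remainder


# Hypothetical examples:
print(classify_study(4, 4))  # -> strong
print(classify_study(0, 3))  # -> null
print(classify_study(2, 4))  # -> mixed
```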
They found that null studies were far less likely to be published. This can be problematic for two reasons:
- Researchers may waste effort and resources unknowingly repeating studies that were already run but never published because the treatments did not produce the hoped-for results.
- If only the studies that happen to obtain statistically significant results are published, the literature will falsely suggest stronger effects than actually exist (see the simulation sketch after this list).
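The second concern can be made concrete with a small simulation. The sketch below is ours, not from the paper: it imagines many studies estimating the same small true effect while only the statistically significant ones reach print. The true effect, sample size, and number of studies are all assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical setup: many studies estimate the same small true effect.
true_effect = 0.1   # standardized mean difference (assumed)
n_per_arm = 50      # participants per treatment arm (assumed)
n_studies = 10_000

# Standard error of a difference in means between two arms of size n,
# for unit-variance outcomes, is roughly sqrt(2 / n).
se = np.sqrt(2 / n_per_arm)
estimates = rng.normal(loc=true_effect, scale=se, size=n_studies)

# Suppose a study is published only when |z| > 1.96 (two-sided, alpha = 0.05).
published = np.abs(estimates / se) > 1.96

print(f"true effect:                {true_effect:.3f}")
print(f"mean estimate, all studies: {estimates.mean():.3f}")
print(f"mean estimate, published:   {estimates[published].mean():.3f}")
print(f"share published:            {published.mean():.1%}")
```

Run as written, the mean published estimate comes out several times larger than the true effect, because only estimates far enough from zero clear the significance threshold. A reader who sees only the published record would conclude the effect is much stronger than it actually is.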
To promote transparency, Franco, Malhotra, and Simonovits suggest developing a better understanding of the motivations of researchers who choose to pursue projects based on expected results. They also propose “two-stage review (the first stage for the design and the second for the results), pre-analysis plans, and requirements to pre-register studies” that “should be complemented by incentives not to bury statistically non-significant results in file drawers. Creating high-status publication outlets for these studies could provide such incentives. And the movement toward open-access journals may provide space for such articles. Further, the pre-analysis plans and registries themselves should increase researcher access to null results. Alternatively, funding agencies could impose costs on investigators who do not write up the results of funded studies. Last, resources should be deployed for replications of published studies if they are unrepresentative of conducted studies and more likely to report large effects.”
What do you think? Which of these proposed actions or incentives would be easiest to implement or the most effective?
You can read the whole paper by clicking on the link in the SEE ALSO section at the bottom of this page.
Reference
Franco, Annie, Neil Malhotra, and Gabor Simonovits. 2014. “Publication Bias in the Social Sciences: Unlocking the File Drawer.” Science 345 (6203): 1502–5. doi:10.1126/science.1255484.
© Center for Effective Global Action.