False-positive psychology

In this video, we explore how flexibility, common in data collection and analysis, can increase the likelihood of producing a false positive. Joseph Simmons, Leif Nelson, and Uri Simonsohn, the authors of “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant”, demonstrate this using experiments and computer simulations, and offer simple guidelines that researchers and reviewers can use to reduce the incidence of these errors.


False positives in social science research can have particularly negative impacts when research is used to inform policy or other outcomes. These outcomes range from the endorsement of the idea that simply posing in a powerful stance can actually empower someone to the suggestion that increasing austerity measures leads to economic growth.

In this article, authors Joseph Simmons, Leif Nelson, and Uri Simonsohn address the cost and frequency of false positives in social science research and suggest six requirements for authors that they believe are simple, low-cost, and straightforward solutions to the problem.

They first define a false positive as the “incorrect rejection of a null hypothesis”. The problem with false positives is that, once published, they are persistent: there is little incentive to replicate findings and test their validity. The authors write, “…false positives waste resources: they inspire investment in fruitless research programs and can lead to ineffective policy changes. Finally, a field known for publishing false positives risks losing its credibility.”

They argue that, with current standards of disclosure, false positives are actually “vastly more likely” due to the ease with which researchers can publish “statistically significant” evidence for nearly any hypothesis.

Many of these issues can be attributed to researcher degrees of freedom, the multitude of decisions researchers can make about the design and details of their experiments. This freedom increases the likelihood that their analyses will produce false positives, and it stems from two factors: “ambiguity in how to best make decisions, and the researcher’s desire to find a statistically significant result”; the latter refers to researchers’ temptation to reach conclusions consistent with their own desires or beliefs.

The authors performed simulations to estimate the influence of researcher degrees of freedom on the probability of a false-positive result. They focused on four common degrees of freedom that increase the likelihood of a researcher falsely detecting a significant effect. These include flexibility in:

  1. choosing among dependent variables,
  2. choosing sample size,
  3. using covariates, and
  4. reporting only subsets of experimental conditions.
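
To make the mechanism concrete, here is a minimal Monte Carlo sketch of the first degree of freedom: choosing between two correlated dependent variables after seeing the results. This is in the spirit of the authors’ simulations but is not their code; the sample size, correlation, and simulation count are illustrative assumptions rather than values taken from the paper.

```python
# Sketch: how flexibility in choosing among dependent variables inflates
# the false-positive rate under a true null. Illustrative, not the
# authors' actual simulation; n, rho, and n_sims are assumed values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_cell, rho = 10_000, 20, 0.5  # assumed values
cov = [[1.0, rho], [rho, 1.0]]             # two DVs correlated at rho

hits_single, hits_flexible = 0, 0
for _ in range(n_sims):
    # The null is true: both conditions are drawn from the same distribution.
    a = rng.multivariate_normal([0, 0], cov, size=n_per_cell)
    b = rng.multivariate_normal([0, 0], cov, size=n_per_cell)

    p1 = stats.ttest_ind(a[:, 0], b[:, 0]).pvalue               # DV 1 only
    p2 = stats.ttest_ind(a[:, 1], b[:, 1]).pvalue               # DV 2 only
    p3 = stats.ttest_ind(a.mean(axis=1), b.mean(axis=1)).pvalue # average DV

    hits_single += p1 < 0.05                 # honest, pre-specified test
    hits_flexible += min(p1, p2, p3) < 0.05  # report whichever "works"

print(f"single DV:   {hits_single / n_sims:.3f}")    # close to 0.05
print(f"flexible DV: {hits_flexible / n_sims:.3f}")  # well above 0.05
```

Even though each individual test holds its nominal 5% error rate, reporting whichever of the three tests happens to reach significance roughly doubles the false-positive rate, and combining several such degrees of freedom compounds the problem further.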

They also suggest six requirements as a solution to the high incidence of false positives, encouraging appropriate conduct of research and transparency in methods, and enabling readers to make informed decisions about the credibility of findings.

Authors must:

  1. Decide the rule for terminating data collection before data collection begins and report this rule in the article. This prevents authors from adding observations and re-testing until they achieve statistical significance when initial results are non-significant (the sketch following this list illustrates how such flexible termination inflates the false-positive rate).
  2. Collect at least 20 observations per cell or else provide a compelling cost-of-data-collection justification. Small samples are “simply not powerful enough to detect most effects” and are “more likely to reflect interim data analysis and a flexible termination rule”.
  3. List all variables collected in a study. This prevents researchers from only reporting convenient subsets of measurements and allows readers to identify degrees of freedom.
  4. Report all experimental conditions, including failed manipulations. This “prevents authors from selectively choosing only to report conditions that yield results consistent with their hypothesis.”
  5. Report the statistical results of eliminated observations as if those observations had been included. This requires authors to explain why they eliminated the data and encourages readers to consider the validity of the exclusion.
  6. Report the statistical results of the analysis without a covariate if the analysis includes a covariate. This requires authors to “justify use of the covariate,” reveals “the extent to which a finding is reliant on the presence of a covariate,” and, again, encourages readers to practice discernment about whether the covariate is warranted.
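
To see why the first two requirements matter, here is a hedged sketch of flexible termination under a true null: test at 20 observations per cell and, if the result is non-significant, add 10 more per cell and test again. The specific sample sizes and simulation count are illustrative assumptions, not values prescribed by the authors.

```python
# Sketch: a flexible termination rule under a true null. Each t-test is
# valid on its own, but the conditional second look inflates the overall
# false-positive rate above the nominal 5%. Sample sizes are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims = 10_000
false_positives = 0

for _ in range(n_sims):
    # The null is true: both cells come from the same distribution.
    a, b = rng.normal(size=20), rng.normal(size=20)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
        continue
    # Flexible termination: the first peek was non-significant, so
    # collect 10 more observations per cell and test again.
    a = np.concatenate([a, rng.normal(size=10)])
    b = np.concatenate([b, rng.normal(size=10)])
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"false-positive rate: {false_positives / n_sims:.3f}")  # > 0.05
```

Because the second look happens only when the first is non-significant, the two chances to reject compound, and the realized error rate exceeds 5% even though every individual t-test is valid; a pre-registered termination rule removes this flexibility.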

Finally, Simmons, Nelson, and Simonsohn present four guidelines for reviewers to abide by.

Reviewers should…

  1. Ensure that authors follow the requirements.
  2. Be more tolerant of imperfections in results (false-positive findings could be due to an “unreasonable expectation” imposed by reviewers for data to turn out as predicted).
  3. Require authors to demonstrate that their results do not depend on arbitrary analytic decisions.
  4. Require authors to conduct an exact replication if justifications of data collection or analysis are not compelling.

While there has been some criticism of the authors’ proposed solutions, they argue that the requirements they advocate impose minimal costs on everyone involved in research and review and are a step toward discovering and disseminating valid research.

Think about an experiment that recently caught your interest. Were any of the author requirements listed in this article followed, or even disclosed? If not, what do you think are the implications for their findings? Can you think of a time when degrees of freedom might have affected your own research?

You can read the entire article via the reference below.


Reference

Simmons, Joseph P., Leif D. Nelson, and Uri Simonsohn. 2011. “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” Psychological Science 22 (11): 1359–66. doi:10.1177/0956797611417632.