CONSORT-SPI 2018: Announcing an extension for randomized controlled trials of social and psychological interventions

Re-post by Paul Montgomery, Evan Mayo-Wilson, and Sean Grant. Complete and transparent reporting of randomized controlled trials is integral to replication, critical appraisal, and understanding context. Published today in Trials, a new extension of the CONSORT Statement aims to improve the reporting of randomized controlled trials of social and psychological interventions. Here,…

BITSS and BIDS Collaboration: Call for Reproducible Workflows

BITSS and the Reproducibility Working Group at the Berkeley Institute for Data Science are collaborating on an edited volume of reproducible workflows in the social sciences, and we are looking for submissions. BIDS Fellow Cyrus Dioun wrote about it on the Bad Hessian computational social science blog: “[M]aking work reproducible can feel…

Searching 30,000 Psychology Articles for Statistical Errors

I recently came across this interesting article, which scanned over 30,000 psychology articles for statistical reporting errors by checking reported test statistics against reported p-values; some articles claim significance levels that do not match their test statistics. The lead author, Michèle B. Nuijten, blogged about the article for Retraction Watch, where…
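The core check is simple to illustrate: recompute the p-value implied by a reported test statistic and its degrees of freedom, then compare it with the p-value the paper reports. Below is a minimal Python sketch of that idea for a two-tailed t-test; it is illustrative only, since the actual statcheck tool is an R package with more careful handling of rounding, one-tailed tests, and other test statistics, and the function name and precision handling here are assumptions.

```python
# Minimal sketch of a statcheck-style consistency check for a two-tailed
# t-test (illustrative; not the actual statcheck implementation).
from scipy import stats

def check_t_test(t, df, reported_p, decimals=2):
    """Recompute the two-tailed p-value implied by t(df) and compare it
    with the reported p-value at the report's precision."""
    recomputed = 2 * stats.t.sf(abs(t), df)
    consistent = round(recomputed, decimals) == round(reported_p, decimals)
    return recomputed, consistent

# Example: an article reports t(28) = 2.20, p = .04
p, ok = check_t_test(t=2.20, df=28, reported_p=0.04)
print(f"recomputed p = {p:.4f}; consistent with report: {ok}")
```

Run over every test extractable from 30,000 articles, a check like this is what lets the authors estimate how often reported significance levels and test statistics disagree.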

P-values are Just the Tip of the Iceberg

Roger Peng and Jeffrey Leek of Johns Hopkins University claim that “ridding science of shoddy statistics will require scrutiny of every step, not merely the last one.” This blog post originally appeared in Nature on April 28, 2015 (see here). There is no statistic more maligned than the P value. Hundreds of papers and…

Registered Reports to the Rescue?

After writing an article for The Upshot, Brendan Nyhan (Assistant Professor at Dartmouth) was interviewed by The Washington Post. The original Upshot article advocates for a new publishing structure called Registered Reports (RRs): a format in which protocols and analysis plans are peer reviewed and registered prior to data collection, and the resulting studies are published regardless of their outcomes. In the following interview…

Facilitating Radical Change in Publication Standards: Overview of COS Meeting Part II

Originally posted on the Open Science Collaboration by Denny Borsboom. This train won’t stop anytime soon. That’s what I kept thinking during the two-day sessions in Charlottesville, where a diverse array of scientific stakeholders worked hard to reach agreement on new journal standards for open and transparent scientific reporting. The proposed standards are intended…

What to Do If You Are Accused of P-Hacking

In a recent post on Data Colada, University of Pennsylvania Professor Uri Simonsohn discusses what to do in the event that you (a researcher) are accused of having altered your data to increase statistical significance. Simonsohn states: It has become more common to publicly speculate, upon noticing a paper with unusual analyses, that a reported finding was…

Teaching Integrity in Empirical Research

Richard Ball (Economics Professor at Haverford College and presenter at the 2014 BITSS Summer Institute) and Norm Medeiros (Associate Librarian at Haverford College), in a recent interview on The Signal, a Library of Congress blog, discussed Project TIER (Teaching Integrity in Empirical Research) and their experience teaching students how to…

Scientific Misconduct in the Middle East

On plagiarism and fraud in the Middle Eastern research community (by Ranya Stamboliyska): Gallons of digital ink have been spilt discussing depressing laundry lists of misconduct cases in the West (and more recently, in China). There is, however, very little on unethical behaviour in the Arab world, despite the large number of Middle Eastern students…

What can be done to prevent the proliferation of errors in academic publications?

Every now and again a paper is published on the number of errors made in academic articles. These papers document the frequency of conceptual errors, factual errors, errors in abstracts, errors in quotations, and errors in reference lists. James Hartley reports that the data are alarming, but suggests a possible way of…

The Reformation: Can Social Scientists Save Themselves?

From Jerry Adler in the Pacific Standard—on the credibility crisis in social science research, publication bias, data manipulation, and non-replicability. Featuring BITSS aficionados Brian Nosek, Joe Simmons, Uri Simonsohn and Leif Nelson. Something unprecedented has occurred in the last couple of decades in the social sciences. Overlaid on the usual academic incentives of…

Scientific Pride and Prejudice

Or why Jane Austen might well be the first game theorist. Science is in crisis, just when we need it most […] A major root of the crisis is selective use of data. Scientists, eager to make striking new claims, focus only on evidence that supports their preconceptions. Psychologists call this “confirmation bias”:…

Twenty Tips For Interpreting Scientific Claims

A useful list of 20 concepts to help decision-makers parse how evidence can contribute to a decision, and potentially avoid undue influence by those with vested interests. Calls for the closer integration of science in political decision-making have been commonplace for decades. However, there are serious problems in the application of…

The changing face of psychology

Important changes are underway in psychology. Transparency, reliability, and adherence to scientific methods are the key words for 2014, says a recent article in The Guardian. A growing number of psychologists – particularly the younger generation – are fed up with results that don’t replicate, journals that value story-telling over truth, and an…

When is an error not an error?

Guest post by Annette N. Brown and Benjamin D. K. Wood on the World Bank Development Impact blog: We are seeing a similar propensity for replication researchers to use the word “error” (or “mistake” or “wrong”) and for this language to cause contentious discussions between the original authors and replication researchers. The…

Preregistration: Not just for the Empiro-zealots

Leif Nelson making the case for pre-registration: I recently joined a large group of academics in co-authoring a paper looking at how political science, economics, and psychology are working to increase transparency in scientific publications. Psychology is leading, by the way. Working on that paper (and the figure below) actually changed my…

Git/GitHub, Transparency, and Legitimacy in Quantitative Research

Reblogged from The Political Methodologist. A complete research project hosted on GitHub is reproducible and transparent by default in a more comprehensive manner than a typical journal-mandated replication archive […] Maintaining your research project on GitHub confers advantages beyond the social desirability of the practice and the technical benefits of using…

Welcome To The Era of Big Replication

Reblogged from Ed Yong: Psychologists have been sailing through some pretty troubled waters of late. They’ve faced several cases of fraud, high-profile failures to repeat the results of classic experiments, and debates about commonly used methods that are recipes for sexy but misleading results. The critics, many of whom are psychologists themselves,…

The Folly of Powering Replications Based on Observed Effect Size

Uri Simonsohn on replications: It is common for researchers running replications to set their sample size assuming the effect size the original researchers got is correct. So if the original study found an effect size of d = .73, the replicator assumes the true effect is d = .73, and sets sample size so as to have…
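Simonsohn's point is easy to see with a quick power calculation. Here is a hedged Python sketch using statsmodels: the d = .73 comes from the excerpt above, while the two-sample t-test setup and the hypothetical "true" effect of d = .35 are assumptions made purely for illustration.

```python
# Why powering a replication on the observed effect size can mislead.
# d = .73 is the observed effect from the excerpt; d = .35 is an
# assumed (smaller) true effect, chosen only for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group n for 80% power, taking the observed d = .73 at face value.
n_per_group = analysis.solve_power(effect_size=0.73, power=0.80, alpha=0.05)
print(f"n per group assuming d = .73: {n_per_group:.0f}")  # roughly 30

# Observed effects are noisy and often inflated. If the true effect is
# smaller, the same replication is badly underpowered.
true_power = analysis.power(effect_size=0.35, nobs1=n_per_group,
                            ratio=1.0, alpha=0.05)
print(f"actual power if true d = .35: {true_power:.2f}")  # roughly 0.28
```

Under these assumptions, a replication designed for 80% power would in fact have well under 30% power, which is exactly the folly the post's title names.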

Too Much Trusting, Not Enough Verifying

This week in The Economist: Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis […] One reason is the competitiveness of science […] The obligation to “publish or perish” has come to rule over academic life. Competition for jobs is cut-throat […] Nowadays verification (the…

Changes in the Research Process Must Come From the Scientific Community

In a recent article intended for publication in a major policy journal, Victoria Stodden urges the scientific community to take the lead in establishing a new framework for more transparent research practices. While recent policy changes by the US government regarding public access to data and publications from federally funded research can…

Let’s Go Fishing

An interesting piece on p-fishing and what we can do about it.

The Imperative to Share Complete Replication Files

“Good research involves publishing complete replication files, making every step of research as explicit and reproducible as is practical.” This is the conclusion from a new paper by political scientist Allan Dafoe (Yale University). Dafoe examines the availability of replication data in political science journals, and concludes that “for the majority of published statistical analyses, […]…