BITSS is delighted to announce that we’ve added a new member to our advisory board: economist Paul Romer. Paul is a prominent economic theorist who has made major contributions to our understanding of economic growth, technological change, and urbanization. Paul is currently Professor of Economics at NYU, director of the Marron Institute of Urban Management, and director of the Urbanization Project at the Leonard N. Stern School of Business. He has previously taught at Stanford University’s Graduate School of Business, the University of California Berkeley, the University of Chicago, and the University of Rochester. You can learn more about him and the other advisory board members here, or you can view Paul’s own website.
On the transparency front, Paul recently wrote a paper in the Papers & Proceedings issue of The American Economic Review, as well as several related blog posts, about “mathiness” in economic theory models. The paper and posts have attracted significant interest and sparked a fascinating debate among economic theorists.
Harsh scrutiny of an influential political science experiment highlights the importance of transparency in research.
The paper, from UCLA graduate student Michael LaCour and Columbia University Professor Donald Green, was published in Science in December 2014. It asserted that short conversations with gay canvassers could not only change people’s minds on a divisive social issue like same-sex marriage, but could also have a contagious effect on the relatives of those in contact with the canvassers. The paper received wide attention in the press.
Yet three days ago, two graduate students from UC Berkeley, David Broockman and Joshua Kalla, published a response to the study, pointing to a number of statistical oddities and discrepancies between how the experiment was reported and how the authors said it was conducted. Earlier in the year, impressed by the paper’s findings, Broockman and Kalla had attempted an extension of the study, building on the original data set. It was then that they became aware of irregularities in the study’s methodology and decided to notify Green.
Reviewing the comments from Broockman and Kalla, Green, who was not involved in the original data collection, quickly became convinced that something was wrong – and on Tuesday, he submitted a letter to Science requesting the retraction of the paper. Green shared his view on the controversy in a recent interview, reflecting on what it meant for the broader practice of social science and highlighting the importance of integrity in research.
Roger Peng and Jeffrey Leek of Johns Hopkins University claim that “ridding science of shoddy statistics will require scrutiny of every step, not merely the last one.”
This blog post originally appeared in Nature on April 28, 2015 (see here).
There is no statistic more maligned than the P value. Hundreds of papers and blog posts have been written about what some statisticians deride as ‘null hypothesis significance testing’ (NHST; see, for example, go.nature.com/pfvgqe). NHST deems whether the results of a data analysis are important on the basis of whether a summary statistic (such as a P value) has crossed a threshold. Given the discourse, it is no surprise that some hailed as a victory the banning of NHST methods (and all of statistical inference) in the journal Basic and Applied Social Psychology in February.
Such a ban will in fact have scant effect on the quality of published science. There are many stages to the design and analysis of a successful study. The last of these steps is the calculation of an inferential statistic such as a P value, and the application of a ‘decision rule’ to it (for example, P < 0.05). In practice, decisions that are made earlier in data analysis have a much greater impact on results — from experimental design to batch effects, lack of adjustment for confounding factors, or simple measurement error. Arbitrary levels of statistical significance can be achieved by changing the ways in which data are cleaned, summarized or modelled.
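The point that earlier analytic choices, not the final decision rule, drive spurious significance can be illustrated with a small simulation. The sketch below is not from the Peng and Leek piece; it is a minimal, hypothetical illustration in which re-drawn pure-noise samples stand in for the many defensible ways a single data set might be cleaned, summarized, or modelled. If an analyst is free to pick among 20 such specifications and report the smallest P value, “significance” at the nominal 0.05 level appears in far more than 5% of null studies.

```python
import math
import random

def z_test_pvalue(sample, mu=0.0, sigma=1.0):
    """Two-sided z-test p-value for the mean of a sample assumed
    drawn from a normal distribution with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu) / (sigma / math.sqrt(n))
    # Normal CDF via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = random.Random(42)  # fixed seed for reproducibility

def simulate_flexible_analysis(n_obs=100, n_specs=20):
    """One 'study' of pure noise, analysed under n_specs arbitrary
    specifications (re-drawn noise stands in for different cleaning
    or modelling choices). Returns the smallest p-value found."""
    return min(
        z_test_pvalue([rng.gauss(0, 1) for _ in range(n_obs)])
        for _ in range(n_specs)
    )

# Fraction of null studies that yield at least one "significant"
# result when the analyst can pick among 20 specifications.
n_studies = 500
false_positive_rate = sum(
    simulate_flexible_analysis() < 0.05 for _ in range(n_studies)
) / n_studies
print(f"False-positive rate with 20 specs: {false_positive_rate:.2f}")
# With independent specifications this should land near
# 1 - 0.95**20, roughly 0.64 - far above the nominal 0.05.
```

The decision rule (P < 0.05) is identical in every analysis; only the upstream flexibility changes, which is why banning the final inferential step alone does little to curb shoddy statistics.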
New prizes will recognize and reward transparency in social science research.
BERKELEY, CA (May 13, 2015) – Transparent research is integral to the validity of science. Openness is especially important in such social science disciplines as economics, political science and psychology, because this research shapes policy and influences clinical practices that affect millions of lives. To encourage openness in research and the teaching of best practices, the Berkeley Initiative for Transparency in the Social Sciences (BITSS) has established The Leamer-Rosenthal Prizes for Open Social Science. BITSS is an initiative of the Center for Effective Global Action (CEGA) at the University of California, Berkeley. The prizes, which provide recognition, visibility and cash awards to both the next generation of researchers and senior faculty, are generously supported by the John Templeton Foundation.
The competition is open to scholars and educators worldwide.
“In academia, career advances and research funding are usually awarded on the basis of how many journal articles a scientist publishes. This incentive structure can encourage researchers to dramatize their findings in ways that increase the probability of publication, sometimes even at the expense of transparency and integrity,” said Edward Miguel, PhD, Professor of Economics at UC Berkeley and Faculty Director of CEGA. “The Leamer-Rosenthal Prizes will help speed the adoption of transparent practices by recognizing and rewarding researchers and educators whose work and teaching exemplify the best in open social science.”
Garret Christensen–BITSS Project Scientist
BITSS recently participated in a pair of conferences/workshops that we should tell you about. First, BITSS was part of a research transparency conference in Washington DC put together by the Laura and John Arnold Foundation. Many of the presentations from the conference can be found here. The idea was to bring together academics, researchers on federal contracts, and federal government research sponsors and policy makers. A few things that were new to me or that stuck out:
Twelve points that will help separate the science from the pseudoscience (see here).
The Coalition for Evidence-Based Policy, effectively CEGA’s domestic counterpart and a leading force in institutionalizing evidence-based policy making, will merge with one of its funders, the Laura and John Arnold Foundation (LJAF). LJAF, also a funder of BITSS, will integrate the Coalition’s staff into its newly established Evidence-Based Policy and Innovation division. The new division’s mission will closely mirror that of the Coalition, which will close down its operations in the next few days and transition its staff to LJAF in the coming weeks.
According to an LJAF press release, the evidence-based policy subdivision, which will be led by Jon Baron, former president of the Coalition, will focus on “strategic investments in rigorous evaluations, collaborations with policy officials to advance evidence-based reforms, and evidence reviews to identify promising and proven programs” (LJAF). The innovation subdivision, to be led by Kathy Stack, former adviser for evidence-based innovation at the White House Office of Management and Budget, “will bring policymakers, researchers, and data experts from the public and private sectors together to strengthen the infrastructure and processes needed to support evidence-based decision making” (LJAF).