Observational Studies is a peer-reviewed journal that publishes papers on all aspects of observational studies. Researchers from all fields that make use of observational studies are encouraged to submit papers.
Observational Studies encourages submission of study protocols (pre-analysis plans) for observational studies. A study protocol, written before the outcomes that will form the basis for the study's conclusions are examined, should describe the design, exclusion criteria, primary and secondary outcomes, and proposed analyses. Following a publicly available study protocol makes a study more convincing and transparent.
For any questions about the journal or submissions, please contact Dylan Small (Wharton, UPenn) at dsmall [at] wharton [dot] upenn [dot] edu.
Innovations for Poverty Action (IPA) seeks a Research Analyst to join the Data Analysis/Data Publication team. This team is leading an innovative and exciting new part of IPA’s effort to promote high quality research: releasing research data from social science experiments publicly, for re-use and replication.
The position also involves helping to develop a data repository for experiments in the social sciences (more information here), as well as producing guidelines for code and data management. IPA is collaborating with the Institute of Social and Policy Studies at Yale University on this initiative.
This position may be of particular interest to those with a background in statistics or quantitative analysis who are also interested in advancing research transparency within the social sciences. Please see the full job description here.
BITSS will be holding its 3rd annual conference at UC Berkeley on December 11-12, 2014. The goal of the meeting is to bring together leaders from academia, scholarly publishing, and policy to strengthen the standards of openness and integrity across social science disciplines.
This Call for Papers focuses on work that elaborates new tools and strategies to increase the transparency and reproducibility of research. A committee of reviewers will select a limited number of papers to be presented and discussed. Topics for papers include, but are not limited to:
- Pre-registration and the use of pre-analysis plans;
- Disclosure and transparent reporting;
- Replicability and reproducibility;
- Data sharing;
- Methods for detecting and reducing publication bias or data mining.
Originally posted on Retraction Watch:
That’s an important question, with an answer that may help determine how much attention some people pay to research misconduct. But it’s one that hasn’t been rigorously addressed.
Seeking some clarity, Andrew Stern, Arturo Casadevall, Grant Steen, and Ferric Fang looked at cases in which the Office of Research Integrity had determined there was misconduct in particular papers. In their study, published today in eLife:
- Colin Elman, Syracuse University
- Arthur Lupia, University of Michigan, Ann Arbor
The video from a recent BITSS roundtable entitled “Data Science Meets Social Science” is now available online. Organized in partnership with the UC Berkeley D-Lab, the event brought together leading social scientists and Silicon Valley professionals to discuss pathways of collaboration between the two different fields, and their increasing impact on society in the age of open science.
- Dav Clark (Data scientist, D-Lab) — moderator
- Ana Nelson (Founder, Dexy)
- Karthik Ram (Co-founder, rOpenSci)
- Kevin Koy (Executive Director, Geospatial Innovation Facility)
Originally posted on The Scholarly Kitchen:
Last week, Slate published an ill-advised hatchet job by an education columnist specializing in the humanities. The topic? Peer review. While the particulars of the Slate article inspired me to add a sharply critical comment, and I was only one among dozens who found fault with the article, there is a fundamental question worth considering in all this:
What is “peer review”?
Peer review certainly isn't one thing, and arguing as if it were is a fundamental error made by the Slate columnist and often by others. Peer review is constantly evolving, and the many variations it can take make it difficult to define precisely. All this makes blanket approbation or condemnation of peer review difficult to take without the requisite grain of salt.
We often make an “availability error,” generalizing to all forms of peer review based on the…