Social science’s low replication rate is not a crisis
Let’s say you do a job that involves making predictions about human behavior — you manage money, you sell things, you write opinion columns. Just under half of your predictions turn out to be more or less right, about 10% are completely wrong, and with the rest it’s hard to say for sure. Would a success rate like that make you good at your job?
This ran through my mind as I perused the findings of the Center for Open Science’s huge Systematizing Confidence in Open Research and Evidence project, which were published in a series of articles in the journal Nature recently and are available outside the paywall — along with other papers and supporting data — at the center’s website. In a study that attempted to replicate the findings of 164 randomly selected articles published in social science journals using new data sets, 49.3% of the replications “had statistically significant findings with the same pattern as the original finding,” 9.7% showed an “opposing pattern” and 40.4% showed no statistically significant effect.
Others seemed to interpret this result as an indication of failure. “Across the Social Sciences, Half of Research Doesn’t Replicate,” was the headline of an article in Science. At Forbes it was “Only About Half Of Social Science Results Can Be Replicated, Finds New Study.” In their new book “The Credibility Crisis in Science: Tweakers, Fraudsters, and the Manipulation of Empirical Results,” social scientists Thomas Plümper and Eric Neumayer term the 47% replication rate found in a 2015 analysis of psychology papers “measly.”
