Replication: Marrying Scientific Virtues with Scientific Practice


Replication lies at the heart of the scientific method: experimental results must be independently verified to strengthen the credibility of the findings. That’s the theory. But what of the practice?

Brian Nosek, a John Templeton Foundation grantee and co-founder of the Center for Open Science (COS), is putting the practice to the test. Last year, he published a report on the “Reproducibility Project: Psychology” (RP:P), described by the leading science journal Nature as “an ambitious attempt to re-test seminal findings in 100 psychology papers.” As it turns out, 61 of the 100 findings could not be replicated.

“There has been much debate about reproducibility during the last few years,” explains Nosek. “But most of the discussion has been speculative, or based on historical analysis of the published literature. At minimum, our report has provided evidence to help make the discussions about reproducibility more concrete.”

Nosek’s “careful diplomacy and can-do approach” was lauded in Nature as contributing to the good reception of this unsettling outcome. “Overall the reaction has been very positive,” Nosek continues. “That doesn’t mean that people all agree on what the results mean. Rather, the discussion is active, productive, and identifying new questions to investigate to get a better handle on the causes of irreproducibility and how to improve.”

Those questions go to the heart of the culture in which science is pursued: how to align scientific practices with the scientific values of openness, integrity, and reproducibility, as championed by the Open Science Framework, a COS initiative funded by the John Templeton Foundation. “Cultural change is hard,” Nosek says. “The reason is that researchers already endorse the core values of science—transparency, reproducibility, skepticism, disinterestedness—they just are not rewarded for practicing them. If we can nudge the system of rewards just enough, then researchers will be able to express their values in practice and not suffer career costs for doing so.”

Nosek believes that findings such as the RP:P offer an important opportunity to change dysfunctional scientific practices. No small part of that is about appealing to scientists themselves, who feel an inner dissonance when their desire to behave according to their core values is undermined or thwarted. It’s also a question for the scientific community: can it nurture the trust and discipline that should characterize the scientific enterprise? “The basis of a healthy scientific community is fostering the community’s values in its daily practice,” Nosek affirms.

His own work continues, with three priorities as next steps. “First, we want to understand reproducibility challenges more deeply, both within the social-behavioral sciences and in other disciplines,” he explains. For example, a project modeled on RP:P is underway in the field of cancer biology. COS is also working to understand how reproducibility varies across, and is related to, different research environments. That understanding should lead to better interventions to improve it.

The Open Science Framework is another key program. Essentially, it is a set of free tools that help researchers make their practices more transparent, for example through preregistration and the archiving of data and research protocols. “Having useful and effective tools, and good training to support their use, is essential for changing behavior,” Nosek says.
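As a concrete illustration of what those tools expose programmatically, here is a minimal sketch that pulls a public project’s metadata from the OSF REST API. The endpoint layout and JSON:API field names reflect the public OSF API v2 as I understand it; the project ID is a placeholder, and the details should be verified against the current OSF API documentation (https://developer.osf.io/).

```python
# Minimal sketch: read basic metadata for a public OSF project via the v2 REST API.
# Assumptions: the v2 endpoint layout and JSON:API response shape ("data" -> "attributes");
# "abc12" is a placeholder project GUID, not a real project.
import requests

OSF_API = "https://api.osf.io/v2"


def fetch_project_metadata(project_id: str) -> dict:
    """Return the title, description, and public/private flag for an OSF project."""
    resp = requests.get(f"{OSF_API}/nodes/{project_id}/", timeout=30)
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    return {
        "title": attrs.get("title"),
        "description": attrs.get("description"),
        "public": attrs.get("public"),
    }


if __name__ == "__main__":
    # Substitute any public OSF project ID for the placeholder GUID.
    for key, value in fetch_project_metadata("abc12").items():
        print(f"{key}: {value}")
```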

A third priority addresses the wider culture of scientific research. At present, the key incentive is publishing; by contrast, there are few incentives for transparency and reproducibility. “In fact, spending my time making my research more transparent may have a career cost in that I could have spent that time on doing more research to get more published,” Nosek explains. In other words, scientific virtues are not supported by scientific practice.

That said, Nosek is optimistic that relatively modest proposals could be effective here. “For example, badges to acknowledge open practices is a very simple intervention,” he says. “Researchers that share their data or materials openly get a signal of that behavior on their journal article. Little signals of valued behaviors can have a huge impact,” he adds.

There are also positive indicators that the scientific community is taking the replication findings seriously. Nature named Nosek one of the ten scientists who mattered most in 2015, and The Chronicle of Higher Education placed him on its 2015 Influence List. The work matters because replication is one of the pillars of science. Scientists are human beings, and it has long been known that they are subject to unconscious biases that can skew results. But if psychology can shed light on this tendency and provide guidelines to minimize its influence, then, as Forbes reported, this will be a very big story indeed in 2016.

  • http://www.thegodreality.com/ johnheno

    None of this fits cosmological and biological evolutionary “historical theories” regarding what supposedly happened in the unobserved and unrepeatable distant past. In fact, every observation and experiment ever made affirms there is a reproductive boundary and cross-breeding limit for every form of life in existence, making the supposed evolutionary continuum from earth to Einstein impossible.

  • SigmetSue

    As for John Heno’s comments about the irreproducibility of evolution: actually, the principles of evolution are easily reproducible by breeding short-lived organisms and selecting for specific traits, and the results coincide neatly with the fossil record. Moreover, evolution has little to do with reproductive boundaries and everything to do with change over time in response to environmental challenges.

    But what I really wanted to comment on is the “research” that goes on in education. Teachers these days are being smothered by the insistence that they use “research-based” and “evidence-based” teaching techniques and approaches, even though few of these techniques have ever been rigorously tested. And usually the “research” was done by the very entities that are pushing the new methods. It’s no wonder public education’s results are disappointing.