Changing the world turns out to be quite hard. When we set out to launch Open Research Computation we knew that research had a problem. The fundamental standards of replicability and reproducibility are rarely met, even in the computational sciences, where these goals ought to be relatively easy to achieve. The problem we face is that replication and reproduction of research are so rare that there is little incentive to put in the considerable effort required to make it possible for others to re-use your work.
The problem with changing practice, and improving systems, is that to achieve your ultimate goals you generally have to change a whole set of things simultaneously. Converting a car from petrol to natural gas requires more than just changing bits of the engine. You need a new tank, different filling connections, and a whole bunch of new safety features. If one person is in charge of the car and decides to convert it, this can be done. But when we’re talking about human systems, with many different parts, agreeing on action and coordinating it is a much bigger problem.
The challenge of improving practice in computational science has many components, one of the biggest being the many different kinds of people who carry it out. Students, both undergraduate and graduate, throw code together to solve immediate problems, while principal investigators guard growing collections of spaghetti code they haven’t touched in 20 years. Both of these groups, however, are driven by publishing papers in traditional journals. At the other extreme are professional coders, brought in to support or develop a code base – people driven by professional pride in the quality of their work and by how it is viewed by other coders. In the middle lie researchers who feel the need to build better, more robust code but lack the time to do it, and citizen coders who could contribute but are locked out by the lack of transparency and access to the code base.
The challenge lies in finding ways to create incentives that bridge these different communities. With Open Research Computation we sought to do that by creating a journal, a conventional venue that researchers would recognize, but one explicitly built on criteria that professional coders would respect and value. The problem is that the crossover community, the one that appreciates both of these aspects, is not large, and possibly not large enough to support a journal that needs to meet conventional indexing requirements. This is disappointing, but changing practice is never easy.
The challenge remains. The research community should be embarrassed by the current state of reproducibility in science, and the computational research community doubly so. But to create change we need to find ways to reward and feature work that demonstrates the best practice we aspire to. Reaching the mainstream research community is still best done through papers – showing what can and should be done is critical. We have therefore collected the papers we received into a thematic series within another software-oriented BioMed Central journal, Source Code for Biology and Medicine. These papers are judged on the same criteria we set out for Open Research Computation, and the series will remain open for submissions. We remain committed to the idea of featuring the software tools and services that demonstrate best practice. It just might take a little longer than we’d hoped.
Please click here to submit your manuscript to the Open Research Computation thematic series, stating clearly within your cover letter that you wish it to be considered as part of this thematic series.
Cameron Neylon, UK Science and Technology Facilities Council (STFC)