Certified Reproducibility. Q&A on ShinyLearner & the CODECHECK certificate, pt. 1

Out today in GigaScience is ShinyLearner, a new tool that makes it easier to perform benchmark comparisons of classification algorithms. ShinyLearner stands out by making this process systematic and reproducible: although it needs to interface with many different libraries and languages, it packages everything in software containers (and a Code Ocean demo), so end users don’t need to worry about that complexity. It also stands out as the first published example of a new way of peer reviewing software articles: presenting a CODECHECK certificate of reproducible computation.
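To illustrate what container-based execution buys the end user, here is a minimal sketch of driving a containerized analysis through the Docker CLI from Python. The image name, script and file paths are hypothetical placeholders, not ShinyLearner’s actual interface; the point is only that a pinned container image replaces a pile of local library installs.

```python
# Sketch: running a containerized analysis so users don't have to
# install its many language stacks locally. The image name
# "example/classifier" and all paths are hypothetical placeholders.
import subprocess
from pathlib import Path

data_dir = Path("data").resolve()

subprocess.run(
    [
        "docker", "run", "--rm",          # discard the container when done
        "-v", f"{data_dir}:/workspace",   # share input/output files with it
        "example/classifier:1.0",         # pinned tag => reproducible environment
        "run_benchmark", "--input", "/workspace/dataset.tsv",
    ],
    check=True,  # raise if the container exits with a non-zero status
)
```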

CODECHECK tackles one of the main challenges of computational research by supporting the people who check and review code with a workflow, guidelines and tools for evaluating the computer programs underlying scientific papers. These independently time-stamped runs are awarded a “certificate of reproducible computation”, increasing the availability, discoverability and reproducibility of crucial artifacts in the computational sciences. Alongside the open peer reviews that GigaScience provides as standard, the resulting CODECHECK certificate is displayed with the paper and cited in its references (see Reference 94). On top of being a form of credit for the authors, the certificate is linked via DOIs and Publons to the person who carried out the CODECHECK, rewarding their efforts by letting the check appear in their ORCID profile and any other online researcher profile, biosketch or CV.

We will cover ShinyLearner in more detail in a follow-up posting, but to tell us more about the CODECHECK process we quizzed Stephen Eglen as part of our long-running Q&A series. Stephen is a co-founder of CODECHECK, the CODECHECKer of this particular example, and a long-time practitioner of reproducible research himself; his CARMEN paper with dynamically generated figures was highlighted in a similar Q&A back in 2014. Stephen is now a Reader in Computational Neuroscience in the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, as well as a Fellow at the Alan Turing Institute.

How has being an author and peer reviewer informed and driven you to set up CODECHECK?

I’m not sure that being an author influenced the development of CODECHECK, but certainly being a peer reviewer helped.  I’d like to think of myself as a diligent peer reviewer, to the extent of often asking authors to include code and data when their paper is published. However, when an author has provided code, I may take a quick glance at it, but I have yet to download and try to run the code myself.  This is because it would often take a long time, and that would delay my peer review.  So one thing CODECHECK tries to solve is taking that load off the peer reviewer, who can see from the certificate that the code has already been run successfully by someone else.

How did the process work with ShinyLearner, and how long did it take you to test and produce this CODECHECK certificate?

I was asked to peer review the ShinyLearner paper, and when I read it and saw the supplementary information I was fairly confident that it would be reproducible.  The authors had done a very good job of curating their workflow and even provided a [Jupyter] notebook.  Having this notebook available made my work easy (see next question).

I estimate that it took me about 10 hours to produce the certificate.  That sounds like a long time, but it involved several iterations between my colleague Daniel Nüst and me to modify our workflow based on my experience of writing this first certificate.  I’m hoping that future certificates will be much quicker.

[Figure: CODECHECK certificate]
How reproducible and easy to CODECHECK did you find ShinyLearner?

As I said above, thanks to the authors providing a notebook, it was very easy to reproduce.  The caveat is that I only reproduced the second step of their workflow: the visualisation of the outputs from the machine-learning step.  The first step would have taken much longer (days) to run, so I deliberately decided to make this first project simpler by focusing just on the visualisation (see figure).
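For readers curious what re-running an author-supplied notebook can look like in practice, here is a minimal sketch using Python’s nbformat and nbclient packages to execute a notebook headlessly and archive the result. The filename is hypothetical, and this is not the exact procedure used for the ShinyLearner certificate.

```python
# Minimal sketch of re-executing an author-supplied Jupyter notebook
# from a clean environment, as a codechecker might do.
# The filename "Visualization.ipynb" is a hypothetical placeholder.
import nbformat
from nbclient import NotebookClient

nb = nbformat.read("Visualization.ipynb", as_version=4)

# Execute every cell in order; any error aborts the run,
# which is itself a useful reproducibility signal.
client = NotebookClient(nb, timeout=600, kernel_name="python3")
client.execute()

# Save the executed notebook, with freshly generated outputs,
# so it can be archived alongside the certificate.
nbformat.write(nb, "Visualization_executed.ipynb")
```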

What is next for the CODECHECK project, and where would you like to see this effort eventually go?

There are lots of things we’d like to do:

– We plan to write a paper summarising the project and its outcomes so far.

– We would like to get more journals involved, adding CODECHECK to their workflows at whatever stage suits them (it could be a pre-submission requirement, run alongside peer review, or done after acceptance).

– We would like to see communities of CODECHECKers emerge within scientific disciplines [potential CODECHECKers can sign up here].

– Thanks to some hardworking students last summer, we have a body of previous papers (“classics” in computational neuroscience) for which the key figures have been reproduced.  We would like to produce certificates for these papers.

– Daniel Nüst and I are grateful to Mozilla for their financial support in kickstarting this project.  What we now need is funding for further technical development and for editorial support for CODECHECKers.

Read more in our follow-up post on ShinyLearner. Stephen presented CODECHECK at the 14th Munin Conference on Scholarly Publishing in 2019, and you can watch a video recording here.

References

Piccolo SR, et al. ShinyLearner: A containerized benchmarking tool for machine-learning classification of tabular data. GigaScience, Volume 9, Issue 4, April 2020, giaa026. https://doi.org/10.1093/gigascience/giaa026

Eglen SJ. CODECHECK Certificate 2020-001. Zenodo. 2020. https://doi.org/10.5281/zenodo.3674056