Computational reproducibility refers to obtaining consistent results when rerunning an experiment. Jupyter Notebook, a web-based computational notebook application, facilitates running, publishing, and sharing computational experiments together with their results. However, rerunning a Jupyter Notebook may not always produce identical results, due to factors such as randomness, changes in library versions, or variations in the computational environment. This paper introduces the Similarity-based Reproducibility Index (SRI) -- a metric for assessing the reproducibility of results in Jupyter Notebooks. SRI employs novel methods, built on similarity metrics tailored to different types of Python objects, to compare rerun outputs against original outputs. For every cell that generates an output in a rerun notebook, SRI reports a quantitative score in the range [0, 1] along with qualitative insights for assessing reproducibility. The paper also includes a case study applying the proposed metric to a set of Jupyter Notebooks, demonstrating how various similarity metrics can be leveraged to quantify computational reproducibility.
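The abstract does not specify SRI's actual per-type similarity metrics; the following minimal sketch only illustrates the general idea of mapping a pair of (original, rerun) cell outputs to a score in [0, 1], with the metric chosen by the Python type of the output. The helper name and the particular metrics (sequence-matching ratio for strings, relative closeness for numbers) are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: type-dispatched similarity between an original cell
# output and its rerun counterpart, scored in [0, 1]. Not SRI's actual metrics.
from difflib import SequenceMatcher

def output_similarity(original, rerun):
    """Return a similarity score in [0, 1] for two cell outputs."""
    if type(original) is not type(rerun):
        # A change of output type counts as fully non-reproduced.
        return 0.0
    if isinstance(original, str):
        # Text outputs: longest-matching-subsequence ratio.
        return SequenceMatcher(None, original, rerun).ratio()
    if isinstance(original, (int, float)):
        # Numeric outputs: relative closeness, clamped at 0.
        denom = max(abs(original), abs(rerun), 1e-12)
        return max(0.0, 1.0 - abs(original - rerun) / denom)
    # Fallback for other object types: exact equality.
    return 1.0 if original == rerun else 0.0
```

A per-notebook summary could then aggregate these per-cell scores, mirroring how SRI reports one score for every output-producing cell.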