
Why is simulation reporting important?

Imagine you are a young researcher in computer simulation. During a particularly challenging simulation research project, your literature search identifies a simulation study with results that would greatly enhance your own modelling. You believe that you can directly reuse and build on the results of the study and gain insight into how others have tackled a particularly difficult applied modelling problem. However, on a detailed review of the work you experience one or two ‘um… how did they do that bit’ moments. You have fun tracing the citations the authors have made to document the vague optimisation procedures included in the model, only to find a recursion of papers that never describe these vital details. Your first thought is that this is not a problem, because you can always contact the author of the model. Unfortunately, it turns out that the lead author was a PhD student who did not pursue a career in research and has since disappeared into a lucrative career in finance. The student’s supervisor and co-authors might be able to help, of course. However, they have long since retired, cannot answer the questions because the details are not documented in the thesis, or may genuinely have had no idea what the PhD student was doing (only in rare cases, of course). The outcome of all this effort is that it is impossible, or at best extremely difficult, to reproduce the simulation model and its results. You begin again from scratch.

Later in your career you are working on an important project appraising the modelling of the effectiveness of a health treatment. In general, it is a bleak picture of negative results, but there is a single simulation study that demonstrates that the treatment is highly cost-effective. In your appraisal of the paper you have several ‘um… how did they do that bit’ moments. All is not lost, though, as the modeller works for a reputable institution and is contactable; hopefully they can share the source code with you. Unfortunately, the modeller ‘cannot find the model’ and is a bit vague about its workings because ‘the work was conducted five years ago’. The outcome is that you cannot verify the result or have any confidence that the treatment is cost-effective.

Does any of this really matter, or are we being overdramatic? After all, you might argue that our hypothetical researcher simply needs to work harder and resolve the issues themselves. The answer depends on your personal view of science. The view taken by the authors of this paper is that credible science of potential value to society is transparent and open, enabling results to be verified and others to build on the work in either research or industry. This is important for both theoretical and applied results. There is also an ethical argument for reproducibility, as much research is publicly funded. If results and knowledge do not have the potential to offer benefits to science or society, is this a good use of public money?

What have other people done?

The reproducibility of research findings is at the centre of science. The simulation community, and the wider Operational Research and Management Science (ORMS) community, publish models and methods in order to advance knowledge and avoid reinventing the wheel. The issue is also important in industry, where models are built and maintained by individuals or teams and where studies using those models may need to be audited or repeated. Several authors have examined the reproducibility of models within ORMS and found that published, peer-reviewed reports of models can be ambiguous, incomplete and hence difficult to reuse and extend (Boylan, Goodwin, Mohammadipour, & Syntetos, 2015; Dalle, 2012; Grimm et al., 2006; Kendall et al., 2016; Kurkowski, Camp, & Colagrosso, 2005; Rahmandad & Sterman, 2012). This is not unique to ORMS: in other model-based and empirical disciplines there have been increasing calls for guidelines that support authors in reporting their models completely, so as to maximise reproducibility (Grimm et al., 2006; Waltemath et al., 2011). However, gaps remain in the ORMS literature concerning guidelines for reporting models. This article presents guidelines to support the reporting of models within Agent-Based Simulation (ABS), Discrete-Event Simulation (DES) and System Dynamics (SD), the three most popular simulation methods within ORMS (Jahangirian, Eldabi, Naseer, Stergioulas, & Young, 2010). We call these guidelines STRESS (Strengthening the Reporting of Empirical Simulation Studies). If followed, the STRESS guidelines give authors a way to maximise the chances of other researchers or practitioners reusing their work, either to extend results or to benefit society, and give readers the ability to better judge the contribution of simulation studies. While the guidelines focus on simulation models in ORMS, the principles on which they are based could be applied to other modelling techniques.

STRESS Reporting Guidelines