Call for Papers: Artifact Evaluation

Authors of accepted research papers are invited to submit an artifact to the ESEC/FSE Artifact Track. According to ACM’s “Result and Artifact Review and Badging” policy (https://www.acm.org/publications/policies/artifact-review-badging), an “artifact” is “a digital object that was either created by the authors to be used as part of the study or generated by the experiment itself [...] software systems, scripts used to run experiments, input datasets, raw data collected in the experiment, or scripts used to analyze results”. A formal review of such artifacts not only ensures that the study is repeatable by the same team; if the artifacts are also available online, other researchers can replicate the findings as well.

In this spirit, the ESEC/FSE 2017 artifacts track exists to review, promote, share and catalog the research artifacts produced by any of the papers accepted to the research track. Apart from repeatability and replicability, cataloguing these artifacts also allows reuse by other teams in reproduction or other studies. Artifacts of interest include (but are not limited to):

  • Tools, which are implementations of systems or algorithms potentially useful in other studies.
  • Data repositories, which are data (e.g., logging data, system traces, survey raw data) that can be used for multiple software engineering approaches.
  • Frameworks, which are tools and services illustrating new approaches to software engineering that could be used by other researchers in different contexts.

This list is not exhaustive; if your proposed artifact does not fit any of these categories, please email the chairs before submitting.

What do you get out of it?

If your artifact is accepted, it will receive one of the following badges in the text of the paper and in the ACM Digital Library:

  • Artifacts Evaluated - Functional: The artifacts are complete, well documented, and make it possible to obtain the same results as the paper.
  • Artifacts Evaluated - Reusable: As above, but the artifacts are of such high quality that they can be reused as-is on other data sets or for other purposes.

The authors must ensure that the artifacts are available from a stable URL or DOI (i.e., not a personal website) for anyone to access, and an archival plan of at least 5 years should be provided. Note that this badge of course excludes any proprietary data or tools.

Regarding archival, all accepted artifacts will be indexed at https://github.com/researchart/. If desired, the artifacts themselves can be hosted there as a means of more permanent archival. Non-open-source scripts and data can also be considered for the artifacts track, but reviewers must at least have access to them during the artifact review process.

How to submit?

To submit an artifact for your accepted ESEC/FSE 2017 research track paper, keep two things in mind: a) how accessible you are making your artifact to other researchers, and b) that the FSE artifact evaluators will have very limited time to assess each artifact. Configuration and installation of your artifact should take less than 30 minutes; otherwise it is unlikely to be endorsed, simply because the committee will not have sufficient time to evaluate it. If you envision difficulties, please provide your artifact in the form of a virtual machine image (http://www.virtualbox.org) or a container image (http://www.docker.com).

In either case, your artifact should be made available as a link to a GitHub repository or to a single archive file using a widely available compressed archive format such as ZIP (.zip), tar and gzip (.tgz), or tar and bzip2 (.tbz2).
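
For illustration only, assuming your artifact files (index.html, code, data, the paper, and any VM or container images) are collected in a local directory named artifact/ (an example name), a .tgz archive of the kind described above could be produced with a short Python sketch along these lines:

    import tarfile

    # Example names only: adjust ARTIFACT_DIR and ARCHIVE_NAME to your layout.
    ARTIFACT_DIR = "artifact"
    ARCHIVE_NAME = "esecfse2017-artifact.tgz"

    # Open a gzip-compressed tar archive for writing and add the whole
    # directory, so the archive unpacks into a single top-level folder.
    with tarfile.open(ARCHIVE_NAME, "w:gz") as archive:
        archive.add(ARTIFACT_DIR)

    print("Wrote", ARCHIVE_NAME)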

The repository or archive must:

  1. be self-contained (with the exception of pointers to external tools or libraries, which we will not consider part of the evaluated artifact, but which we will try to use when evaluating the artifact);
  2. contain an HTML file called index.html that fully describes the artifact and includes (relative) links to the files (included in the archive) that constitute the artifact; a minimal check of this structure is sketched after this list. index.html should:
     a. include a getting started guide that stresses the key elements of your artifact and enables the reviewers to run, execute or analyze your artifact without any technical difficulty;
     b. include step-by-step instructions (another section within index.html) on how you propose to evaluate your artifact;
     c. where appropriate, include descriptions of and links to files (included in the archive) that represent expected outputs (e.g., the log files expected to be generated by your tool on the given inputs);
  3. contain the artifact itself, which may include, but is not limited to, source code, executables, data, a virtual machine image, and documents. Please use open formats for documents; we prefer experimental data to be submitted in CSV format;
  4. contain the submitted version of your research track paper;
  5. optionally, authors are encouraged to submit a link to a short video (YouTube, max. 5 minutes) demonstrating the artifact.
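
As a rough sanity check (not an official tool; the file names other than index.html are assumptions), a small Python script along the following lines could verify that an archive contains the required top-level files before submission:

    import sys
    import tarfile

    # Files the archive is expected to contain somewhere; "paper.pdf" is
    # only an example name for the submitted version of the paper.
    REQUIRED = ["index.html", "paper.pdf"]

    def check(archive_path):
        with tarfile.open(archive_path) as archive:  # compression auto-detected
            names = archive.getnames()
        # Members may sit under a single top-level directory, so compare
        # on the final path component.
        basenames = {name.rsplit("/", 1)[-1] for name in names}
        missing = [f for f in REQUIRED if f not in basenames]
        for f in missing:
            print("missing required file:", f)
        return 1 if missing else 0

    if __name__ == "__main__":
        sys.exit(check(sys.argv[1]))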

To facilitate artifact review, you should include the link to your artifact, as well as other requested information, in the FSE artifact self-assessment form integrated into the EasyChair submission site (https://easychair.org/conferences/?conf=esecfse2017, choose “ESEC/FSE 2017 Artifact Evaluation”), together with a short abstract that briefly summarizes the artifact (to help reviewers bid for your artifact).

Review Process and Selection Criteria

The artifact will be evaluated in relation to the expectations set by the self-assessment form and the paper. Although reviewers will have access to your paper (via your repository or archive), please make very clear how they can run your artifact or analyze your data set to replicate your study, without having to hunt for this information. Reviewers may try to tweak the provided inputs and create new ones, to test the limits of the system.

Submitted artifacts will go through a two-phase evaluation:

  1. Phase 1 (integrity check): reviewers check the artifact's integrity and look for any possible setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, a VM that won't start, immediate crashes on the simplest example). Authors are informed of the outcome and, in case of technical problems, can help solve them during a brief author response period.
  2. Phase 2 (artifact assessment): reviewers evaluate the artifacts, checking whether they live up to the expectations created by the paper as well as by the self-assessment form.

Since portability bugs are easy to introduce, there will be a 48-hour rebuttal period during which authors can respond to artifact reviews and fix any major bugs or issues. The resulting version of the artifact is considered final and is the one on which reviewers base their decision about artifact acceptance and badges.

Artifacts will be scored using the following criteria:

  • Artifacts Evaluated - Functional:
    • Documented: The artifact is accompanied by tutorial notes/videos and other documentation.
    • Consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.
    • Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package, then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
    • Exercisable: If the artifact is executable, it is easy to download, install, and execute. Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.
  • Artifacts Evaluated - Reusable:
    • The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated – Functional level, but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing are facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.
  • Artifacts Available:
    • Author-created artifacts relevant to this paper have been placed in a publicly accessible archival repository whose availability is ensured for at least 5 years. A DOI or link to this repository, along with a unique identifier for the object, is provided.

Important Dates

  • Abstract deadline: 7 June 2017 (on EasyChair: brief “abstract” of what’s in the artifact)
  • Submission deadline: 9 June 2017
  • Author rebuttal: 23 June 2017 (48 hours)
  • Author notification: 2 July 2017
  • Camera-ready deadline for papers with rejected artifacts: 3 July 2017
  • Camera-ready deadline for papers with accepted artifacts: 8 July 2017

Artifact Evaluation Co-Chairs

  • Bram Adams (Polytechnique Montreal, Canada)
  • Amel Bennaceur (The Open University, UK)

Program Committee

  • Hongyu Zhang (Microsoft Research)
  • David Lo (Singapore Management University)
  • Arie Gurfinkel (University of Waterloo)
  • Sonia Haiduc (Florida State University)
  • Mark Van Den Brand (Eindhoven University of Technology)
  • Barbara Russo (Free University of Bolzano/Bozen)
  • Collin McMillan (University of Notre Dame)
  • Emily Hill (Drew University)
  • Yasutaka Kamei (Kyushu University)
  • Javier Cámara Moreno (Carnegie Mellon University)
  • Liliana Pasquale (Lero - The Irish Software Engineering Research Centre)
  • Yijun Yu (The Open University)
  • Antonio Filieri (Imperial College London)
  • Pushpendra Singh (Indraprastha Institute of Information Technology)
  • Zhenchang Xing (Australian National University)
  • Shengqian Yang (Google)
  • Reyhaneh Jabbarvand Behrouz (University of California, Irvine)
