Artifact Evaluation

Artifact evaluation submission site
Due Time: 11:59pm 11/30/2019 (AOE)

General Info

A well-packaged artifact is more likely to be easily usable by the reviewers, saving them time and frustration, and more clearly conveying the value of your work during evaluation. A great way to package an artifact is as a Docker image or a virtual machine that runs "out of the box" with very little system-specific configuration.

Submission of an artifact does not grant tacit permission to make its content public. AEC members will be instructed that they may not publicize any part of your artifact during or after completing the evaluation, nor retain any part of it afterwards. Thus, you are free to include models, data files, proprietary binaries, and similar items in your artifact.

Artifact evaluation is single-blind. Please take precautions (e.g., turning off analytics and logging) to avoid accidentally learning the identities of the reviewers.

Packaging and Instructions

Your submission should consist of three pieces:

1. The submission version of your paper.
2. A README.txt file that explains your artifact (details below).
3. A URL pointing to a single file containing the artifact. The URL must be a Google Drive or Dropbox URL, to help protect the anonymity of the reviewers. You may upload your artifact directly if it is less than 100 MB.

Your README.txt should consist of two parts: a Getting Started Guide and Step-by-Step Instructions for how you propose to evaluate your artifact (with appropriate connections to the relevant sections of your paper).

The Getting Started Guide should contain setup instructions (including, for example, a pointer to the VM player software, its version, passwords if needed, etc.) and basic testing of your artifact that you expect a reviewer to be able to complete in 30 minutes. Reviewers will follow all the steps in the guide during an initial kick-the-tires phase. The Getting Started Guide should be as simple as possible, yet it should stress the key elements of your artifact. Anyone who has followed the Getting Started Guide should have no technical difficulties with the rest of your artifact.

The Step-by-Step Instructions explain how to reproduce any experiments or other activities that support the conclusions in your paper. Write this part for readers who have a deep interest in your work and are studying it to improve it or compare against it. If your artifact runs for more than a few minutes, point this out and explain how to run it on smaller inputs. Where appropriate, include descriptions of and links to files (included in the archive) that represent expected outputs (e.g., the log files your tool is expected to generate on the given inputs); if there are warnings that are safe to ignore, explain which ones they are.

The artifact's documentation should include the following:

- A list of claims from the paper supported by the artifact, and how/why.
- A list of claims from the paper not supported by the artifact, and how/why. Example: performance claims cannot be reproduced in a VM, the authors are not allowed to redistribute specific benchmarks, etc.

Artifact reviewers can then center their reviews and evaluation around these specific claims.

Packaging the Artifact

When packaging your artifact, please keep in mind: a) how accessible you are making your artifact to other researchers, and b) the fact that the AEC members will have limited time in which to assess each artifact.
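As a concrete sketch of one such packaging workflow (the image name myartifact:v1, the archive name artifact.tgz, and the test script run_getting_started.sh below are hypothetical placeholders, not requirements), a Docker-based artifact could be bundled by the authors and smoke-tested by a reviewer roughly like this:

    # Author side: export the Docker image and bundle it with the README.
    docker save -o artifact-image.tar myartifact:v1
    tar czf artifact.tgz artifact-image.tar README.txt

    # Reviewer side: unpack the archive, load the image, and run the basic
    # test described in the Getting Started Guide.
    tar xzf artifact.tgz
    docker load -i artifact-image.tar
    docker run --rm -it myartifact:v1 ./run_getting_started.sh

A workflow along these lines keeps the reviewer-facing steps down to a handful of copy-pasteable commands, which fits the 30-minute kick-the-tires budget described above.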
Alternatively, your artifact can contain a bootable virtual machine image with all of the necessary libraries installed. Using a virtual machine provides an easily reproducible environment that is less susceptible to bit rot. It also gives the AEC confidence that errors or other problems cannot harm their machines.

You should make your artifact available as a single archive file, named with the appropriate suffix for the given archive format. Please use a widely available compressed archive format such as ZIP (.zip), tar and gzip (.tgz), or tar and bzip2 (.tbz2). Please use open formats for documents.

Discussion with Reviewers

We expect each artifact to receive 3-4 reviews. Throughout the review period, reviews will be submitted to HotCRP and will be (approximately) continuously visible to authors. AEC reviewers will be able to interact (anonymously) with authors throughout the process for clarifications, system-specific patches, and other logistics that help ensure the artifact can be evaluated. The goal of this continuous interaction is to prevent artifacts from being rejected for "wrong library version" types of problems.

For questions, please contact the AE Chairs, Harry Xu (harryxu@g.ucla.edu) and Brian Demsky (bdemsky@uci.edu).

Artifact Evaluation Committee

- Amir Hossein Nodehi Sabet, University of California, Riverside
- Brian Demsky (chair), University of California, Irvine
- Connor Holmes, Colorado School of Mines
- George Bisbas, Imperial College London
- Harry Xu (chair), UCLA
- Heehoon Kim, Seoul National University
- Herbert Alan Beadle, University of Rochester
- Jessica Ray, MIT
- Kaiming Ouyang, University of California, Riverside
- Karthik Murthy, Stanford
- Kyle Singer, Washington University in St. Louis
- Lev Mukhanov, Queen's University Belfast
- Loc Hoang, University of Texas at Austin
- Michael Davis, Queen's University Belfast
- Mohammad Monil, University of Oregon
- Mohsen Koohi Esfahani, Queen's University Belfast
- Muhammad Aditya Sasongko, Koç University
- Qi Zhao, North Carolina State University
- Shigeyuki Sato, The University of Tokyo
- Shu-Mei Tseng, University of California, Irvine
- Tong Geng, Boston University
- Vishwesh Jatala, University of Texas at Austin
- Wentao Cai, University of Rochester
- Yiyuan Li, Tsinghua University
- Yuyang Jin, Tsinghua University
- Zhengyi Qiu, NC State University