
PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.


Proceedings will be available in the ACM Digital Library.

Dates
Sun 23 Feb 2020
Mon 24 Feb 2020
Tue 25 Feb 2020
Wed 26 Feb 2020

Sun 23 Feb
Times are displayed in time zone: Tijuana, Baja California.

18:00 - 20:00: Joint Welcome Reception and Posters (Fresco's) Main Conference

Mon 24 Feb

07:00 - 08:10: Breakfast (Garden Pavilion) Catering
08:10 - 08:20: Chairs' Welcome (Garden Pavilion) Main Conference
08:20 - 08:30: SIGPLAN CARES and SIGARCH/SIGMICRO CARES Intro Main Conference
08:30 - 09:30: HPCA Keynote (Garden Pavilion) Main Conference
09:35 - 10:25: Key Value Store (Mediterranean Ballroom) Main Conference
Chair(s): Milind Chabbi (Uber Technologies Inc.)
09:35 - 10:00
Talk
Kite: Efficient and Available Release Consistency for the Datacenter
Main Conference
Vasilis Gavrielatos (University of Edinburgh, UK), Antonios Katsarakis (University of Edinburgh, UK), Vijay Nagarajan (University of Edinburgh, UK), Boris Grot (University of Edinburgh, UK), Arpit Joshi (Intel)
10:00 - 10:25
Talk
Oak: A Scalable Off-Heap Allocated Key-Value Map
Main Conference
Hagar Meir (IBM Haifa Research Lab), Edward Bortnikov (Yahoo Research), Anastasia Braginsky (Yahoo Research), Dmitry Basin (Yahoo Research), Yonatan Gottesman (Yahoo Research), Eshcar Hillel (Yahoo Research, Oath), Idit Keidar (Technion - Israel Institute of Technology), Eran Meir (Yahoo Research), Gali Sheffi (Technion)
10:25 - 10:55: Break (Garden Pavilion) Catering
10:55 - 12:35: Machine Learning/Big Data (Mediterranean Ballroom) Main Conference
Chair(s): Shuaiwen Leon Song (University of Sydney)
10:55 - 11:20
Talk
Optimizing Batched Winograd Convolution on GPUs
Main Conference
Da Yan (Hong Kong University of Science and Technology), Wei Wang (Hong Kong University of Science and Technology), Xiaowen Chu (Hong Kong Baptist University)
11:20 - 11:45
Talk
Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations
Main Conference
Shigang Li (Department of Computer Science, ETH Zurich), Tal Ben-Nun (Department of Computer Science, ETH Zurich), Salvatore Di Girolamo (Department of Computer Science, ETH Zurich), Dan Alistarh (IST Austria), Torsten Hoefler (Department of Computer Science, ETH Zurich)
11:45 - 12:10
Talk
Scalable Top-K Retrieval with Sparta
Main Conference
Gali Sheffi (Technion), Dmitry Basin (Yahoo Research), Edward Bortnikov (Yahoo Research), David Carmel (Amazon), Idit Keidar (Technion - Israel Institute of Technology)
12:10 - 12:35
Talk
waveSZ: A Hardware-Algorithm Co-Design of Efficient Lossy Compression for Scientific Data
Main Conference
Jiannan Tian (University of Alabama), Sheng Di (Argonne National Laboratory), Chengming Zhang (University of Alabama), Xin Liang, Sian Jin (University of Alabama), Dazhao Cheng (University of North Carolina at Charlotte), Dingwen Tao (University of Alabama), Franck Cappello (Argonne National Laboratory)
14:00 - 15:40: Concurrent Data Structures (Mediterranean Ballroom) Main Conference
Chair(s): Michael Scott
14:00 - 14:25
Talk
Scaling Concurrent Queues by Using HTM to Profit from Failed Atomic Operations
Main Conference
Or Ostrovsky (Tel Aviv University), Adam Morrison (Tel Aviv University)
14:25 - 14:50
Talk
A Wait-Free Universal Construct for Large Objects
Main Conference
Andreia Correia (University of Neuchâtel), Pedro Ramalhete, Pascal Felber (Université de Neuchâtel)
14:50 - 15:15
Talk
Fast Concurrent Data Sketches
Main Conference
Arik Rinberg (Technion), Alexander Spiegelman (VMware Research), Edward Bortnikov (Yahoo Research), Eshcar Hillel (Yahoo Research, Oath), Idit Keidar (Technion - Israel Institute of Technology), Hadar Serviansky (Weizmann), Lee Rhodes (Verizon Media)
15:15 - 15:40
Talk
Universal Wait-Free Memory Reclamation
Main Conference
Ruslan Nikolaev (Virginia Tech), Binoy Ravindran (Virginia Tech)
15:40 - 16:10: Break (Garden Pavilion) Catering
16:40 - 17:40: Panel Discussion (Mediterranean Ballroom) Main Conference
Chair(s): Xipeng Shen (North Carolina State University)
16:40 - 17:40
Talk
Title: Abstractions for Modern Parallel Computing: A Blessing or a Curse?
Main Conference
Albert Cohen (Google), Lawrence Rauchwerger (UIUC), Michael Scott (University of Rochester)
17:40 - 18:40: Business Meeting (Mediterranean Ballroom) Main Conference

Tue 25 Feb

07:00 - 08:30: Breakfast (Garden Pavilion) Catering
08:30 - 09:30: PPoPP Keynote (Garden Pavilion) Main Conference
Chair(s): Xipeng Shen (North Carolina State University)
08:30 - 09:30
Talk
Scaling Parallel Programming Beyond Threads
Main Conference
09:35 - 10:25: Scaling (Mediterranean Ballroom) Main Conference
Chair(s): Zhijia Zhao (UC Riverside)
09:35 - 10:00
Talk
Using Sample-Based Time Series Data for Automated Diagnosis of Scalability Losses in Parallel Programs
Main Conference
Lai Wei (Rice University), John Mellor-Crummey (Rice University)
10:00 - 10:25
Talk
Scaling out Speculative Execution of Finite-State Machines with Parallel Merge
Main Conference
Yang Xia (The Ohio State University), Peng Jiang (The University of Iowa), Gagan Agrawal (The Ohio State University)
10:25 - 10:55: Break (Garden Pavilion) Catering
10:55 - 12:35: Program Analysis (Mediterranean Ballroom) Main Conference
Chair(s): Michael Garland (NVIDIA)
10:55 - 11:20
Talk
On the fly MHP Analysis
Main Conference
Sonali Saha (IIT Madras), V Krishna Nandivada (IIT Madras)
11:20 - 11:45
Talk
Detecting and Reproducing Error-Code Propagation Bugs in MPI Implementations
Main Conference
Daniel DeFreez (University of California, Davis), Antara Bhowmick (University of California, Davis), Ignacio Laguna (Lawrence Livermore National Laboratory), Cindy Rubio-González (University of California, Davis)
11:45 - 12:10
Talk
Parallel and Distributed Bounded Model Checking of Multi-threaded Programs
Main Conference
Omar Inverso (Gran Sasso Science Institute), Catia Trubiani (Gran Sasso Science Institute)
12:10 - 12:35
Talk
Parallel Race Detection with Futures
Main Conference
Yifan Xu (Washington University in St. Louis), Kyle Singer (Washington University in St. Louis), I-Ting Angelina Lee (Washington University in St. Louis)
14:00 - 15:15: Graph (Mediterranean Ballroom) Main Conference
Chair(s): Jiajia Li (Pacific Northwest National Laboratory)
14:00 - 14:25
Talk
Practical Parallel Hypergraph Algorithms
Main Conference
14:25 - 14:50
Talk
A Supernodal All-Pairs Shortest Path Algorithm
Main Conference
Piyush Kumar Sao (Oak Ridge National Lab), Ramki Kannan (Oak Ridge National Laboratory), Prasun Gera (Georgia Institute of Technology), Rich Vuduc (Georgia Institute of Technology)
14:50 - 15:15
Talk
Increasing the Parallelism of Graph Coloring via Shortcutting
Main Conference
Ghadeer Alabandi (Texas State University), Evan Powers (Texas State University), Martin Burtscher (Texas State University)
15:15 - 15:45: Break (Garden Pavilion) Catering
15:45 - 17:00: Search and Index (Mediterranean Ballroom) Main Conference
Chair(s): Idit Keidar (Technion - Israel Institute of Technology)
15:45 - 16:10
Talk
Non-Blocking Interpolation Search Trees with Doubly-Logarithmic Running Time
Main Conference
Trevor Brown (University of Waterloo), Aleksandar Prokopec (Oracle Labs), Dan Alistarh (IST Austria)
16:10 - 16:35
Talk
YewPar: Skeletons for Exact Combinatorial Search
Main Conference
Blair Archibald (University of Glasgow), Patrick Maier (University of Stirling), Rob Stewart (Heriot-Watt University), Phil Trinder (University of Glasgow)
16:35 - 17:00
Talk
XIndex: A Scalable Learned Index for Multicore Data Storage
Main Conference
Chuzhe Tang (Shanghai Jiao Tong University), Youyun Wang (Shanghai Jiao Tong University), Gansen Hu (Shanghai Jiao Tong University), Zhiyuan Dong (Shanghai Jiao Tong University), Zhaoguo Wang (Shanghai Jiao Tong University), Minjie Wang (New York University), Haibo Chen (Shanghai Jiao Tong University)
17:30 - 21:00: Excursion (Sea World) Main Conference

Wed 26 Feb

07:00 - 08:30: Breakfast (Garden Pavilion) Catering
08:30 - 09:30: CGO Keynote (Garden Pavilion) Main Conference
09:35 - 10:50: Concurrency and GPU (Mediterranean Ballroom) Main Conference
Chair(s): Ang Li (Pacific Northwest National Laboratory)
09:35 - 10:00
Talk
Overlapping Host-to-Device Copy and Computation using Hidden Unified Memory
Main Conference
Jaehoon Jung (Seoul National University), Daeyoung Park (Seoul National University), Youngdong Do (Seoul National University), Jungho Park (Seoul National University), Jaejin Lee (Seoul National University)
10:00 - 10:25
Talk
GPU Initiated OpenSHMEM: Correct and Efficient Intra-Kernel Networking for dGPUs
Main Conference
Khaled Hamidouche (Advanced Micro Devices), Michael LeBeane (Advanced Micro Devices)
10:25 - 10:50
Talk
No Barrier in the Road: A Comprehensive Study and Optimization of ARM Barriers
Main Conference
Nian Liu (Shanghai Jiao Tong University), Binyu Zang (Shanghai Jiao Tong University), Haibo Chen (Shanghai Jiao Tong University)
10:50 - 11:20: Break (Garden Pavilion) Catering
11:20 - 12:35: Matrix Multiplication and Approximation (Mediterranean Ballroom) Main Conference
Chair(s): Albert Cohen (Google)
11:20 - 11:45
Talk
spECK: Accelerating GPU Sparse Matrix-Matrix Multiplication Through Lightweight Analysis
Main Conference
Mathias Parger (Graz University of Technology), Martin Winter (Graz University of Technology, Austria), Daniel Mlakar (Graz University of Technology, Austria), Markus Steinberger (Graz University of Technology, Austria)
11:45 - 12:10
Talk
A Novel Data Transformation and Execution Strategy for Accelerating Sparse Matrix Multiplication on GPUs
Main Conference
Peng Jiang (The University of Iowa), Changwan Hong (The Ohio State University), Gagan Agrawal (The Ohio State University)
12:10 - 12:35
Talk
MatRox: Modular approach for improving data locality in Hierarchical (Mat)rix App(Rox)imation
Main Conference
Bangtian Liu (University of Toronto), Kazem Cheshmi (University of Toronto), Saeed Soori (University of Toronto), Michelle Strout (University of Arizona), Maryam Mehri Dehnavi (University of Toronto)
12:35 - 13:00: Best Paper Award and Closing Main Conference

Call for Papers

PPoPP 2020: 25th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming

San Diego, California, USA (co-located with HPCA 2020 and CGO 2020). Dates: Sat 22 - Wed 26 February 2020.

Submission URL: https://ppopp20.hotcrp.com

Important dates

  • Paper registration and abstract submission: July 31, 2019
  • Full paper submission: August 6, 2019
  • Author response period: October 28–November 4, 2019
  • Author Notification: November 19, 2019
  • Artifact submission to AE committee: November 30, 2019
  • Artifact notification by AE committee: December 30, 2019
  • Final paper due: January 6, 2020

All deadlines are at midnight anywhere on earth (AoE), and are firm.

Scope

PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; data centers; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.

Specific topics of interest include (but are not limited to):

  • Compilers and runtime systems for parallel and heterogeneous systems
  • Concurrent data structures
  • Development, analysis, or management tools
  • Fault tolerance for parallel systems
  • Formal analysis and verification
  • High-performance / scientific computing
  • Libraries
  • Middleware for parallel systems
  • Parallel algorithms
  • Parallel applications and frameworks
  • Parallel programming for deep memory hierarchies including nonvolatile memory
  • Parallel programming languages
  • Parallel programming theory and models
  • Parallelism in non-scientific workloads: web, search, analytics, cloud, machine learning
  • Performance analysis, debugging and optimization
  • Programming tools for parallel and heterogeneous systems
  • Software engineering for parallel programs
  • Software for heterogeneous architectures
  • Software productivity for parallel programming
  • Synchronization and concurrency control

Papers should report on original research relevant to parallel programming and should contain enough background materials to make them accessible to the entire parallel programming research community. Papers describing experience should indicate how they illustrate general principles or lead to new insights; papers about parallel programming foundations should indicate how they relate to practice. PPoPP submissions will be evaluated based on their technical merit and accessibility. Submissions should clearly motivate the importance of the problem being addressed, compare to the existing body of work on the topic, and explicitly and precisely state the paper’s key contributions and results towards addressing the problem. Submissions should strive to be accessible both to a broad audience and to experts in the area.

Paper Submission

Conference submission site

All submissions must be made electronically through the conference web site and must include an abstract (100–400 words), author contact information, and the full list of authors and their affiliations. Full paper submissions must be in PDF format, printable on both A4 and US letter size paper.

All papers must be prepared in ACM Conference Format using the 2-column acmart format: use the SIGPLAN proceedings template acmart-sigplanproc-template.tex for LaTeX, and interim-layout.docx for Word. You may also want to consult the official ACM information on the Master Article Template and related tools. Important note: the Word template (interim-layout.docx) on the ACM website uses a 9pt font; you need to increase it to 10pt.
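For LaTeX users, a preamble along the following lines matches these requirements. This is only a sketch under the assumption that the standard acmart class is used; the title and abstract text are placeholders, and the official SIGPLAN template remains authoritative.

```latex
% Sketch of a PPoPP-style submission skeleton (assumes the acmart class).
% The 'review' and 'anonymous' options support lightweight double-blind
% reviewing; 10pt is the required minimum type size.
\documentclass[sigplan,10pt,review,anonymous]{acmart}

\begin{document}

\title{Paper Title}

\begin{abstract}
Abstract text (100--400 words).
\end{abstract}

\maketitle

% Body text (maximum of 10 pages, not including references) ...

\end{document}
```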

Papers should contain a maximum of 10 pages of text (in a typeface no smaller than 10 point) or figures, NOT INCLUDING references. There is no page limit for references, and they must include the names of all authors (not "et al."). Appendices are not allowed, but the authors may submit supplementary material, such as proofs or source code; all supplementary material must be in PDF or ZIP format. Looking at supplementary material is at the discretion of the reviewers.

Submission is double-blind, and authors will need to identify any potential conflicts of interest with PC and Extended Review Committee members, as defined by ACM SIGPLAN policy: http://www.sigplan.org/Resources/Policies/Review/

PPoPP 2020 will employ a lightweight double-blind reviewing process. To facilitate this process, submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any references to authors’ own related work should be in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”). The purpose of this process is to help the PC and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important background references should not be omitted or anonymized. In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.

Submissions should be in PDF and printable on both US Letter and A4 paper. Papers may be resubmitted to the submission site multiple times up until the deadline, but the last version submitted before the deadline will be the version reviewed. Papers that exceed the length requirement, that deviate from the expected format, or that are submitted late will be rejected.

All submissions that are not accepted for regular presentations will be automatically considered for posters. Two-page summaries of accepted posters will be included in the conference proceedings (authors must decide by December 15, 2019 if they want to submit a poster).

To allow reproducibility, we encourage authors of accepted papers to submit their papers for Artifact Evaluation (AE). The AE process begins after the acceptance notification and is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. Artifact evaluation is voluntary and will not affect paper acceptance, but it will be taken into consideration when selecting papers for awards. Papers that go through the AE process successfully will receive one or several of the ACM reproducibility badges, printed on the papers themselves. More information will be posted on the AE website.

Deadlines expire at midnight anywhere on earth.

Publication Date

The titles of all accepted papers are typically announced shortly after the author notification date (late November 2019). Note, however, that this is not the official publication date. The official publication date is the date the proceedings are made available in the ACM Digital Library. ACM will make the proceedings available via the Digital Library for one month, up to 2 weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

Accepted Papers

All papers below were accepted to the Main Conference track:

  • A Novel Data Transformation and Execution Strategy for Accelerating Sparse Matrix Multiplication on GPUs
  • A Supernodal All-Pairs Shortest Path Algorithm
  • A Wait-Free Universal Construct for Large Objects
  • Detecting and Reproducing Error-Code Propagation Bugs in MPI Implementations
  • Fast Concurrent Data Sketches
  • GPU Initiated OpenSHMEM: Correct and Efficient Intra-Kernel Networking for dGPUs
  • Increasing the Parallelism of Graph Coloring via Shortcutting
  • Kite: Efficient and Available Release Consistency for the Datacenter
  • MatRox: Modular approach for improving data locality in Hierarchical (Mat)rix App(Rox)imation
  • No Barrier in the Road: A Comprehensive Study and Optimization of ARM Barriers
  • Non-Blocking Interpolation Search Trees with Doubly-Logarithmic Running Time
  • Oak: A Scalable Off-Heap Allocated Key-Value Map
  • On the fly MHP Analysis
  • Optimizing Batched Winograd Convolution on GPUs
  • Overlapping Host-to-Device Copy and Computation using Hidden Unified Memory
  • Parallel Race Detection with Futures
  • Parallel and Distributed Bounded Model Checking of Multi-threaded Programs
  • Practical Parallel Hypergraph Algorithms
  • Scalable Top-K Retrieval with Sparta
  • Scaling Parallel Programming Beyond Threads
  • Scaling out Speculative Execution of Finite-State Machines with Parallel Merge
  • Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations
  • Universal Wait-Free Memory Reclamation
  • Using Sample-Based Time Series Data for Automated Diagnosis of Scalability Losses in Parallel Programs
  • XIndex: A Scalable Learned Index for Multicore Data Storage
  • YewPar: Skeletons for Exact Combinatorial Search
  • spECK: Accelerating GPU Sparse Matrix-Matrix Multiplication Through Lightweight Analysis
  • waveSZ: A Hardware-Algorithm Co-Design of Efficient Lossy Compression for Scientific Data

Artifact evaluation submission site

Due time: 11:59 pm, November 30, 2019 (AoE)

General Info

A well-packaged artifact is more likely to be easily usable by the reviewers, saving them time and frustration, and more clearly conveying the value of your work during evaluation. A great way to package an artifact is as a Docker image or in a virtual machine that runs “out of the box” with very little system-specific configuration.
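As a concrete illustration, an artifact packaged as a Docker image might ship a Dockerfile along these lines. This is only a sketch: the base image, dependencies, and script names here are hypothetical and would be replaced by whatever your artifact actually needs.

```dockerfile
# Hypothetical artifact image: base OS, dependencies, and experiment scripts.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y build-essential cmake python3
COPY . /artifact
WORKDIR /artifact
# Build the tool described in the paper (assumes a Makefile is included).
RUN make
# Default command reproduces the paper's experiments.
CMD ["./run_experiments.sh"]
```

Reviewers could then rebuild and run the artifact with `docker build -t artifact .` followed by `docker run --rm artifact`, with no system-specific configuration.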

Submission of an artifact does not constitute tacit permission to make its content public. AEC members will be instructed that they may not publicize any part of your artifact during or after completing the evaluation, nor retain any part of it afterwards. Thus, you are free to include models, data files, proprietary binaries, and similar items in your artifact.

Artifact evaluation is single-blind. Please take precautions (e.g., turning off analytics and logging) to avoid accidentally learning the identities of reviewers.

Packaging and Instructions

Your submission should consist of three pieces:

  • The submission version of your paper.
  • A README.txt file that explains your artifact (details below).
  • A URL pointing to a single file containing the artifact. The URL must be a Google Drive or Dropbox URL, to help protect the anonymity of the reviewers. You may upload your artifact directly if it’s less than 100 MB.

Your README.txt should consist of two parts:

  • a Getting Started Guide, and
  • Step-by-Step Instructions for how you propose to evaluate your artifact (with appropriate connections to the relevant sections of your paper).

The Getting Started Guide should contain setup instructions (including, for example, a pointer to the VM player software, its version, passwords if needed, etc.) and basic testing of your artifact that you expect a reviewer to be able to complete in 30 minutes. Reviewers will follow all the steps in the guide during an initial kick-the-tires phase. The Getting Started Guide should be as simple as possible, and yet it should stress the key elements of your artifact. Anyone who has followed the Getting Started Guide should have no technical difficulties with the rest of your artifact.

The Step-by-Step Instructions explain how to reproduce any experiments or other activities that support the conclusions in your paper. Write this section for readers who have a deep interest in your work and are studying it to improve or compare against it. If your artifact runs for more than a few minutes, point this out and explain how to run it on smaller inputs.

Where appropriate, include descriptions of and links to files (included in the archive) that represent expected outputs (e.g., the log files expected to be generated by your tool on the given inputs); if there are warnings that are safe to be ignored, explain which ones they are.

The artifact’s documentation should include the following:

  • A list of claims from the paper supported by the artifact, and how/why.
  • A list of claims from the paper not supported by the artifact, and how/why. Examples: performance claims cannot be reproduced in a VM, the authors are not allowed to redistribute specific benchmarks, etc. Artifact reviewers can then center their reviews and evaluation around these specific claims.

Packaging the Artifact

When packaging your artifact, please keep in mind: a) how accessible you are making your artifact to other researchers, and b) the fact that the AEC members will have a limited time in which to make an assessment of each artifact.

Your artifact can contain a bootable virtual machine image with all of the necessary libraries installed. Using a virtual machine provides a way to make an easily reproducible environment — it is less susceptible to bit rot. It also helps the AEC have confidence that errors or other problems cannot cause harm to their machines.

You should make your artifact available as a single archive file, using the naming convention <paper #>.<suffix>, where the suffix matches the chosen archive format. Please use a widely available compressed archive format such as ZIP (.zip), tar and gzip (.tgz), or tar and bzip2 (.tbz2). Please use open formats for documents.
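For example, assuming your paper number is 42 and your materials live in a directory named artifact/ (both names are hypothetical), a .tgz archive could be produced as follows:

```shell
# Create a stand-in artifact directory (replace with your real materials).
mkdir -p artifact
echo "demo artifact contents" > artifact/README.txt

# Package it as <paper #>.tgz using tar and gzip.
tar -czf 42.tgz artifact/

# A ZIP alternative would be: zip -r 42.zip artifact/
```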

Discussion with Reviewers

We expect each artifact to receive 3-4 reviews.

Throughout the review period, reviews will be submitted to HotCRP and will be (approximately) continuously visible to authors. AEC reviewers will be able to interact continuously (and anonymously) with authors for clarifications, system-specific patches, and other logistics that help ensure the artifact can be evaluated. The goal of continuous interaction is to prevent artifacts from being rejected for "wrong library version" types of problems.

For questions, please contact AE Chairs, Harry Xu (harryxu@g.ucla.edu) and Brian Demsky (bdemsky@uci.edu).
