PLDI 2025
Mon 16 - Fri 20 June 2025, Seoul, South Korea

The Sparse workshop aims to bring together researchers interested in compiler techniques, programming abstractions, and hardware for sparse computing, including sparse tensor algebra, relational algebra, and graph processing applications. The large number of applications, optimization techniques, data structures, and specialized hardware creates a need for automation, and recent years have seen considerable interest in compiler techniques that automatically generate sparse computing code. The workshop will feature talks by leading researchers from academia and industry on applications, code generation, source code transformation and optimization, automatic scheduling, data structure modeling, compilation to different types of hardware, specialized accelerators, extensions to new types of sparse array operations, and the application of these techniques beyond sparsity to areas such as lossless compression. The workshop will last one day and will include invited talks, discussions, and a small number of submitted talks.

This program is tentative and subject to change.


Mon 16 Jun

Displayed time zone: Seoul

09:00 - 10:10
Session 1: Sparse at Cosmos
09:00
20m
Talk
Insum: Sparse GPU Kernels Simplified and Optimized with Indirect Einsums
Sparse
Saman Amarasinghe Massachusetts Institute of Technology
09:20
20m
Talk
Intelligent Auto-Tuning for High-Performance Sparse Tensor Algebra
Sparse
Jiajia Li North Carolina State University
09:40
20m
Talk
Loop Fusion in Matrix Multiplications with Sparse Dependence
Sparse
Kazem Cheshmi McMaster University
10:00
10m
Talk
Panel 1
Sparse
Saman Amarasinghe Massachusetts Institute of Technology, Kazem Cheshmi McMaster University, Jiajia Li North Carolina State University
10:30 - 12:00
Session 2: Sparse at Cosmos
10:30
20m
Talk
Optimizations and abstractions for sparse machine learning
Sparse
Charith Mendis University of Illinois at Urbana-Champaign
10:50
20m
Talk
Distributed Sparse Computing with Legate Sparse
Sparse
Rohan Yadav Stanford University
11:10
20m
Talk
Optimizing Recursive Sparse Computations
Sparse
Amir Shaikhha University of Edinburgh
11:30
20m
Talk
Panel 2
Sparse
Charith Mendis University of Illinois at Urbana-Champaign, Rohan Yadav Stanford University, Amir Shaikhha University of Edinburgh
14:00 - 15:20
Session 3: Sparse at Cosmos
14:00
20m
Talk
PyData/Sparse & Finch: extending sparse computing in the Python ecosystem
Sparse
Hameer Abbasi Quansight, Mateusz Sokol Quansight Labs
14:20
20m
Talk
Compiling and Compressing Structured Tensors
Sparse
Emilien Bauer
14:40
20m
Talk
Sparsity-Aware Autoscheduling for Numpy with Finch and Galley
Sparse
Willow Ahrens Massachusetts Institute of Technology
15:00
20m
Talk
Panel 3
Sparse
Hameer Abbasi Quansight, Emilien Bauer, Willow Ahrens Massachusetts Institute of Technology, Mateusz Sokol Quansight Labs

Call for Talks

We are soliciting 15-minute talks and posters for the second Sparse Workshop. Relevant topics include applications, programming language constructs, compilers, libraries/frameworks, and hardware for sparse computing, including sparse tensor algebra, relational algebra, and graph processing applications. Talks and posters can be technical, present new ideas, share your thoughts about future needs, or cover other related topics that you are excited about. Already published ideas are welcome. There will be no proceedings, so talks do not require a submitted paper. If you are interested, please submit a short description (100-200 words).