PLDI 2025
Mon 16 - Fri 20 June 2025, Seoul, South Korea
Mon 16 Jun 2025 16:40 - 17:00 at Orchid - SOAP 4 Chair(s): Kihong Heo

Quantization consists of replacing the original data types used to represent the weights of a neural network with less resource-intensive ones. While considerable research has focused on quantization, most existing methods that offer theoretical guarantees do so by bounding the error between the original and the reduced-precision model.
In this article, we introduce a new quantization technique that, rather than focusing on bounding errors, determines the minimum precision necessary to preserve class dominance, independent of any specific set of numerical formats.
In other words, regardless of the exact scores assigned to each class, our method guarantees that the class predicted by the original network remains unchanged after quantization. Our method is static, and the proposed quantization holds for all inputs.
Technically, we leverage existing theorems that provide error bounds for dot products and formulate an optimization problem whose solution yields the required reduced precision. We also present experimental results to validate the effectiveness of our method.
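
To make the approach concrete, here is a minimal sketch, assuming a standard Higham-style rounding-error bound for dot products: for an n-term dot product evaluated with unit roundoff u = 2^(-p), |fl(w·x) - w·x| <= (nu / (1 - nu)) · (|w|·|x|). The sketch searches for the smallest mantissa precision p at which the dominant class provably cannot change. The function names are hypothetical, it treats a plain linear classifier, and it checks a single input, whereas the paper's guarantee is static over all inputs.

```python
import numpy as np

def dot_error_bound(w, x, p):
    """Worst-case rounding error of the dot product w.x when every
    operation is performed with a p-bit mantissa (unit roundoff 2^-p)."""
    n = len(w)
    u = 2.0 ** (-p)
    if n * u >= 1.0:          # the bound is only valid for n*u < 1
        return np.inf
    gamma_n = n * u / (1.0 - n * u)
    return gamma_n * np.dot(np.abs(w), np.abs(x))

def min_precision_preserving_argmax(W, x, p_max=53):
    """Smallest mantissa precision p such that the class predicted by the
    scores W @ x provably cannot change under worst-case rounding."""
    scores = W @ x                    # reference high-precision scores
    c = int(np.argmax(scores))        # dominant class
    for p in range(2, p_max + 1):
        eps = np.array([dot_error_bound(W[j], x, p) for j in range(len(W))])
        # Dominance is preserved if the winner's worst-case lower bound
        # still beats every other class's worst-case upper bound.
        if all(scores[c] - eps[c] > scores[j] + eps[j]
               for j in range(len(W)) if j != c):
            return p
    return None                       # no guarantee up to p_max bits

# Hypothetical usage on a tiny random linear classifier.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)
print(min_precision_preserving_argmax(W, x))
```

The dominance test is conservative: any p it returns is sufficient but not necessarily tight, since it assumes every rounding error takes its worst-case magnitude and sign simultaneously.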

Mon 16 Jun

Displayed time zone: Seoul

15:40 - 17:30
SOAP 4 at Orchid
Chair(s): Kihong Heo KAIST
15:40
60m
Keynote
Building X-Ray for enterprise-scale software
SOAP
Charles Zhang Hong Kong University of Science and Technology
16:40
20m
Talk
Towards Bit-Level Dominance Preserving Quantization of Neural Classifiers
SOAP
Dorra Ben Khalifa ENAC - University of Toulouse, Matthieu Martel University of Perpignan; Numalis
17:00
20m
Talk
Optimizing Type Migration for LLM-Based C-to-Rust Translation: A Data Flow Graph Approach
SOAP
Qingxiao Xu Texas A&M University, Jeff Huang Texas A&M University
17:20
10m
Day closing
Closing and Best Presentation Award
SOAP
Kihong Heo KAIST, Luca Negrini Ca’ Foscari University of Venice