GroupTuner: Efficient Group-Aware Compiler Auto-tuning
This program is tentative and subject to change.
Modern compilers typically provide hundreds of options for optimizing program performance, but the sheer size of this option space means users rarely exploit it fully. While standard optimization levels (e.g., -O3) provide reasonable defaults, they often fail to deliver near-peak performance across diverse programs and architectures. To address this challenge, compiler auto-tuning techniques have emerged to automate the discovery of better option combinations. Existing techniques typically focus on identifying critical options and prioritizing them during the search to improve efficiency. However, because the number of tuning iterations is limited, the resulting performance data is often sparse and noisy, making it highly challenging to identify critical options accurately. As a result, these algorithms are prone to becoming trapped in local optima.
To address this limitation, we propose GroupTuner, a group-aware auto-tuning technique that applies localized mutation to coherent option groups based on the historically best-performing combinations, thereby avoiding the need to explicitly identify critical options. By forgoing precise knowledge of which options matter most, GroupTuner makes the most of the existing performance data and ensures more targeted exploration. Extensive experiments demonstrate that GroupTuner efficiently discovers competitive option combinations, achieving an average performance improvement of 12.39% over -O3 while requiring only 77.21% of the time taken by random search, significantly outperforming state-of-the-art methods.
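To illustrate the general idea of group-aware mutation described in the abstract, the sketch below shows a minimal greedy tuning loop in Python. It is not the authors' implementation: the option groups, the measure() helper, the benchmark file name, and the mutation probability are all hypothetical placeholders. In each iteration, one coherent group of the best-known flag combination is mutated while all other groups are left untouched, and the candidate is kept only if it runs faster.

```python
import random
import subprocess
import time

# Hypothetical option groups; a real grouping would follow the compiler's
# documented option categories (e.g., loop transforms, vectorization, inlining).
OPTION_GROUPS = {
    "loop":      ["-funroll-loops", "-floop-interchange", "-ftree-loop-distribution"],
    "vectorize": ["-ftree-vectorize", "-fvect-cost-model=dynamic"],
    "inline":    ["-finline-functions", "-findirect-inlining"],
}

def measure(flags, src="benchmark.c", runs=3):
    """Compile the benchmark with the given flags and return the median runtime
    in seconds. Assumes a single-file benchmark whose main() does the work."""
    subprocess.run(["gcc", "-O2", *flags, src, "-o", "a.out"], check=True)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(["./a.out"], check=True)
        times.append(time.perf_counter() - start)
    return sorted(times)[len(times) // 2]

def mutate_group(best, group):
    """Flip membership of each option in one group with 50% probability,
    leaving every other group of the best-known combination unchanged."""
    new = set(best)
    for opt in OPTION_GROUPS[group]:
        new.symmetric_difference_update({opt}) if random.random() < 0.5 else None
    return sorted(new)

def tune(iterations=50):
    best_flags, best_time = [], measure([])
    for _ in range(iterations):
        group = random.choice(list(OPTION_GROUPS))   # pick one coherent group
        candidate = mutate_group(best_flags, group)  # localized mutation
        t = measure(candidate)
        if t < best_time:                            # keep the historically best combination
            best_flags, best_time = candidate, t
    return best_flags, best_time

if __name__ == "__main__":
    flags, runtime = tune()
    print("best flags:", flags, "runtime:", runtime)
```

Mutating one group at a time keeps the search localized around combinations that have already proven fast, which is the intuition behind the group-aware strategy the abstract describes; the actual GroupTuner system may differ substantially in its grouping, mutation operators, and measurement methodology.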
Tue 17 Jun (displayed time zone: Seoul)
14:00 - 15:20 (LCTES session)
- 14:00 (20 min, talk): JetCert: A Self-Adaptive Compilation Framework for Fast and Safe Code Execution (recorded)
- 14:20 (20 min, talk): GroupTuner: Efficient Group-Aware Compiler Auto-tuning. Bingyu Gao, Mengyu Yao, Ziming Wang (Peking University), Dong Liu (ZTE), Ding Li, Xiangqun Chen, Yao Guo (Peking University)
- 14:40 (20 min, talk): Multi-level Machine Learning-Guided Autotuning for Efficient Code Generation on a Deep Learning Accelerator. JooHyoung Cha (Korea University of Science and Technology), Munyoung Lee, Jinse Kwon, Jemin Lee, Yongin Kwon (ETRI)
- 15:00 (20 min, talk): DSP-MLIR: A Domain-Specific Language and MLIR Dialect for Digital Signal Processing. Abhinav Kumar, Atharva Khedkar (Arizona State University), Hwisoo So (Yonsei University), Megan Kuo, Ameya Gurjar (Arizona State University), Partha Biswas (MathWorks), Aviral Shrivastava (Arizona State University)