PLDI 2025
Mon 16 - Fri 20 June 2025 Seoul, South Korea

This program is tentative and subject to change.

Mon 16 Jun 2025 16:20 - 16:40 at Violet - AI and Accelerator Architecture + WIP Chair(s): Yongjun Park

Thanks to their hardware customization capabilities, FPGA-based graph processing accelerators achieve significantly higher energy efficiency than many general-purpose computing engines. However, designing these accelerators remains a substantial challenge for high-level users. To lower this programming barrier, accelerator design frameworks built on generic graph processing programming models have been developed that automate accelerator generation from pre-built templates. These frameworks, however, often tightly couple the graph processing algorithms, the programming models and processing paradigms, and the accelerator architectures. This coupling severely limits the range of algorithms that can be expressed and can also restrict performance when the generated accelerators fail to match the dynamic processing patterns of the algorithms.

In this work, we propose Graphitron, a domain-specific language (DSL) that enables the automatic generation of FPGA-based graph processing accelerators without requiring users to engage with the complexities of low-level FPGA design. Graphitron defines vertices and edges as primitive data types and lets users implement graph processing algorithms by operating on these primitives, which greatly simplifies algorithm descriptions for high-level users. During compilation, each graph processing function is naturally classified into either a vertex-centric or an edge-centric processing paradigm according to its target data type, enabling the generation of accelerator kernels with different characteristics. In addition, because each processing function is explicitly bound to a data type, the Graphitron compiler can automatically infer its computing and memory access patterns and apply corresponding hardware optimizations such as pipelining, data shuffling, and caching. In essence, graph semantic information guides algorithm-specific customization of the resulting accelerators for higher performance. Our experiments show that Graphitron can generate accelerators for a broader range of graph processing algorithms than prior template-based generation frameworks. Moreover, the accelerators produced by Graphitron achieve performance comparable to, and in some cases exceeding, that of existing frameworks when combining programming paradigms is beneficial from an algorithmic perspective.
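The abstract does not show concrete Graphitron syntax, so the Python sketch below is only a rough, hypothetical illustration of the central idea it describes: graph functions are bound to vertex or edge data types, and a compile-time pass uses that binding to classify each function as a vertex-centric or edge-centric kernel. All names here (`Vertex`, `Edge`, `on`, `classify_paradigm`) are invented for illustration and are not part of Graphitron.

```python
# Hypothetical sketch (not actual Graphitron syntax): user functions declare
# which primitive type they operate on, and a compiler-like pass inspects that
# declaration to pick a vertex-centric or edge-centric processing paradigm.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Vertex:                      # primitive vertex type (illustrative)
    id: int
    value: float = 0.0

@dataclass
class Edge:                        # primitive edge type (illustrative)
    src: int
    dst: int
    weight: float = 1.0

# Registry mapping each user function to the data type it operates on.
KERNELS: Dict[str, str] = {}

def on(data_type):
    """Decorator that records which primitive type a function targets."""
    def register(fn: Callable):
        KERNELS[fn.__name__] = data_type.__name__
        return fn
    return register

@on(Edge)
def scatter(e: Edge, values: List[float], partial: List[float]) -> None:
    # Edge-centric step: stream edges and propagate source values to targets.
    partial[e.dst] += values[e.src] * e.weight

@on(Vertex)
def apply(v: Vertex, partial: List[float]) -> None:
    # Vertex-centric step: update each vertex from its accumulated partial sum.
    v.value = partial[v.id]

def classify_paradigm(fn_name: str) -> str:
    """Mimics the compile-time classification described in the abstract."""
    return "edge-centric" if KERNELS[fn_name] == "Edge" else "vertex-centric"

if __name__ == "__main__":
    for name in KERNELS:
        print(f"{name}: {classify_paradigm(name)} kernel")
```

In an actual accelerator generator, this classification would drive which hardware template and optimizations (pipelining, data shuffling, caching) are applied to each kernel; the sketch only demonstrates how an explicit type binding makes that decision mechanical.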

Mon 16 Jun

Displayed time zone: Seoul

15:40 - 17:20
AI and Accelerator Architecture + WIP (LCTES) at Violet
Chair(s): Yongjun Park Yonsei University
15:40
20m
Talk
SPARQ: An Accelerator Architecture for Large Language Models with Joint Sparsity and Quantization Techniques
LCTES
Seonggyu Choi Sungkyunkwan University, Hyungmin Cho Sungkyunkwan University
DOI
16:00
20m
Talk
ADaPS: Adaptive Data Partitioning to Parallelize CNN Inference on Resource-Constrained Hardware
LCTES
Jaume Mateu Cuadrat Seoul National University, Bernhard Egger Seoul National University
DOI
16:20
20m
Talk
Graphitron: A Domain Specific Language for FPGA-Based Graph Processing Accelerator Generation
LCTES
Xinmiao Zhang SKLP, Institute of Computing Technology, CAS, Zheng Feng Institute of Computing Technology at Chinese Academy of Sciences, Shengwen Liang SKLP, Institute of Computing Technology, CAS, Xinyu Chen Hong Kong University of Science and Technology, Lei Zhang ICT CAS, Cheng Liu ICT CAS
DOI
16:40
20m
Talk
Modeling and Verification of Sigma Delta Neural Networks using Satisfiability Modulo Theory
LCTES
Sirshendu Das Indian Statistical Institute, Ansuman Banerjee Indian Statistical Institute, Swarup Kumar Mohalik Ericsson Research
DOI
17:00
10m
Talk
Zoozve: A Strip-Mining-Free RISC-V Vector Extension with Arbitrary Register Grouping Compilation Support (WIP)
LCTES
Siyi Xu Shanghai University, Limin Jiang Shanghai University, Yintao Liu Shanghai University, Yihao Shen Shanghai University, Yi Shi Shanghai University, Shan Cao Shanghai University, Zhiyuan Jiang Shanghai University
DOI
17:10
10m
Talk
Towards Macro-Aware C-to-Rust Transpilation (WIP)
LCTES
Robbe De Greef Vrije Universiteit Brussel, Attilio Discepoli Vrije Universiteit Brussel, Esteban Aguililla Klein Université Libre de Bruxelles, Théo Engels Royal Military Academy of Belgium, Ken Hasselmann Royal Military Academy of Belgium, Antonio Paolillo Vrije Universiteit Brussel
DOI