U.S. Department of Energy
Office of Scientific and Technical Information

A Length Adaptive Algorithm-Hardware Co-design of Transformer on FPGA Through Sparse Attention and Dynamic Pipelining

Conference ·
 [1];  [1];  [2];  [1];  [3];  [3];  [4];  [5];  [1];  [2];  [1]
  1. University of Connecticut
  2. Stevens Institute of Technology
  3. Pacific Northwest National Laboratory (Battelle)
  4. George Mason University
  5. Lehigh University
Transformers have been considered among the most important deep learning models since 2018, in part because they set state-of-the-art (SOTA) records and could potentially replace existing Deep Neural Networks (DNNs). Despite these remarkable successes, the prolonged turnaround time of Transformer models is a widely recognized roadblock. The variety of sequence lengths imposes additional computing overhead, since inputs must be zero-padded to the maximum sentence length in the batch to suit parallel computing platforms. This paper targets the field-programmable gate array (FPGA) and proposes a coherent sequence-length-adaptive algorithm-hardware co-design for Transformer acceleration. In particular, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm. The proposed sparse attention operator reduces attention-based models to linear complexity and alleviates off-chip memory traffic. The proposed length-aware hardware resource scheduling algorithm dynamically allocates hardware resources to fill the pipeline slots and eliminate bubbles for NLP tasks. Experiments show that our design incurs very small accuracy loss, achieves 80.2× and 2.6× speedups over CPU and GPU implementations, respectively, and delivers 4× higher energy efficiency than a state-of-the-art GPU accelerator optimized via cuBLAS GEMM.
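For readers unfamiliar with how a sparse attention operator can reach linear complexity in sequence length, the sketch below illustrates one common pattern, fixed-window (local) attention, where each token attends only to a constant-size neighborhood. The window size, function name, and the local-window pattern itself are illustrative assumptions for this record page; the paper defines its own hardware-friendly sparse attention operator, which may differ.

```python
import numpy as np

def windowed_sparse_attention(Q, K, V, window=8):
    """Illustrative local attention: each query attends only to keys within
    a fixed-size window, so total cost is O(n * window) rather than O(n^2).
    NOTE: this is a generic sketch, not the paper's exact operator."""
    n, d = Q.shape
    out = np.zeros_like(V)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = Q[i] @ K[lo:hi].T / np.sqrt(d)   # scores over the local window
        weights = np.exp(scores - scores.max())   # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ V[lo:hi]               # weighted sum of local values
    return out

# Toy usage: a 128-token sequence with head dimension 64.
rng = np.random.default_rng(0)
Q = rng.standard_normal((128, 64))
K = rng.standard_normal((128, 64))
V = rng.standard_normal((128, 64))
y = windowed_sparse_attention(Q, K, V)
print(y.shape)  # (128, 64)
```

Because each output row touches at most 2*window+1 keys, both compute and the working set per token stay bounded as the sequence grows, which is the same property that lets a hardware operator reduce off-chip memory traffic.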
Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC05-76RL01830
OSTI ID:
1891848
Report Number(s):
PNNL-SA-170686
Country of Publication:
United States
Language:
English

Similar Records

Accelerating Transformer-based Deep Learning Models on FPGAs using Column Balanced Block Pruning
Conference · April 2021 · OSTI ID: 1811281

Towards Precision-Aware Fault Tolerance Approaches for Mixed-Precision Applications
Conference · November 2022 · OSTI ID: 1963399

Kernel fusion in atomistic spin dynamics simulations on Nvidia GPUs using tensor core
Journal Article · June 2024 · Journal of Computational Science · OSTI ID: 2446864
