U.S. Department of Energy
Office of Scientific and Technical Information

LP-BNN: Ultra-low-Latency BNN Inference with Layer Parallelism

Conference
High inference latency seriously limits the deployment of DNNs in real-time domains such as autonomous driving and robotic control. To address this challenge, researchers have proposed approximate DNNs with reduced precision, e.g., Binarized Neural Networks (BNNs). While BNNs can be built with little loss in accuracy, their inference latency still has much room for improvement. In this paper, we propose LP-BNN, a single-FPGA BNN accelerator that achieves microsecond-level, ultra-low-latency inference on ImageNet. We obtain this performance via several design optimizations. First, we optimize the network structure by removing the Batch Normalization (BN) functions, which add significant latency in BNNs, without any loss of accuracy. Second, we propose a parameterized architecture based on layer parallelism that supports nearly perfect load balancing for various types of BNNs. Third, we fuse all the convolution layers and the first fully connected layer, and process them in parallel through fine-grained inter-layer pipelining. With our proposed accelerator, inference of binarized AlexNet, VGGNet, and ResNet completes within 21.5 µs, 355 µs, and 67.8 µs respectively, with no loss in accuracy compared with other BNN implementations.
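The BN-removal idea in the abstract rests on a well-known property of BNNs: a BN layer followed by a sign activation can be folded into a single integer threshold comparison, so the floating-point BN arithmetic disappears with mathematically identical outputs. The paper does not give its exact formulation, so the sketch below is an illustrative derivation under standard BN definitions; all function and variable names (`bn_sign`, `folded_threshold`, `tau`, etc.) are placeholders, not the authors' code.

```python
import numpy as np

def bn_sign(s, gamma, beta, mu, var, eps=1e-5):
    # Reference path: BatchNorm on the integer pre-activation s,
    # followed by sign binarization to {-1, +1}.
    y = gamma * (s - mu) / np.sqrt(var + eps) + beta
    return np.where(y >= 0, 1, -1)

def folded_threshold(gamma, beta, mu, var, eps=1e-5):
    # Fold BN + sign into one threshold:
    #   gamma*(s - mu)/std + beta >= 0
    #   <=>  s >= mu - beta*std/gamma   (when gamma > 0; flips if gamma < 0)
    std = np.sqrt(var + eps)
    return mu - beta * std / gamma

def threshold_sign(s, tau, gamma):
    # Hardware-friendly path: a single comparison against tau,
    # with gamma's sign deciding the comparison direction.
    if gamma > 0:
        return np.where(s >= tau, 1, -1)
    return np.where(s <= tau, 1, -1)

# The two paths agree on any integer pre-activation values.
s = np.array([0, 2, 3, 5])
gamma, beta, mu, var = 2.0, 0.5, 3.0, 4.0
tau = folded_threshold(gamma, beta, mu, var)
assert np.array_equal(bn_sign(s, gamma, beta, mu, var),
                      threshold_sign(s, tau, gamma))
```

On an FPGA, `tau` can be precomputed offline per channel, turning each BN+sign pair into one integer comparator; this is the kind of transformation that removes BN latency without changing accuracy.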
Research Organization:
Pacific Northwest National Laboratory (PNNL), Richland, WA (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC05-76RL01830
OSTI ID:
1765112
Report Number(s):
PNNL-SA-143161
Country of Publication:
United States
Language:
English

Similar Records

O3BNN-R: An Out-Of-Order Architecture for High-Performance and Regularized BNN Inference
Journal Article · 2021 · IEEE Transactions on Parallel and Distributed Systems · OSTI ID: 1670985

BSTC: A Novel Binarized-Soft-Tensor-Core Design for Accelerating Bit-Based Approximated Neural Nets
Conference · November 2019 · OSTI ID: 1580517

O3BNN: An Out-Of-Order Architecture for High-Performance Binarized Neural Network Inference with Fine-Grained Pruning
Conference · August 2019 · OSTI ID: 1764982
