
LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators

Conference

Large Language Models (LLMs) have propelled groundbreaking advances across many domains and are widely used for text-generation applications. However, the computational demands of these complex models pose significant challenges and require efficient hardware acceleration. Benchmarking LLM performance across diverse hardware platforms is therefore crucial for understanding their scalability and throughput characteristics. We introduce LLM-Inference-Bench, a comprehensive benchmarking suite for evaluating the hardware inference performance of LLMs. We thoroughly analyze diverse hardware platforms, including GPUs from Nvidia and AMD as well as specialized AI accelerators from Intel Habana and SambaNova. Our evaluation covers several LLM inference frameworks and models from the LLaMA, Mistral, and Qwen families with 7B and 70B parameters. The benchmarking results reveal the strengths and limitations of the various models, hardware platforms, and inference frameworks. We also provide an interactive dashboard that helps identify the configurations yielding optimal performance on a given hardware platform.
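The abstract does not include code; as a rough illustration of the kind of measurement such a benchmarking suite automates, the sketch below times token-generation throughput (tokens per second) using the Hugging Face transformers API. The model name, prompt, and token counts are placeholders chosen for illustration, not values taken from the paper, and the paper's actual harness covers multiple frameworks and accelerators beyond this minimal example.

```python
# Illustrative throughput measurement only; not the LLM-Inference-Bench code.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM (e.g. "gpt2") works
PROMPT = "Benchmarking large language model inference on AI accelerators"
MAX_NEW_TOKENS = 128

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype="auto").to(device)

inputs = tokenizer(PROMPT, return_tensors="pt").to(device)

# Warm-up pass so one-time setup costs do not skew the timed run.
model.generate(**inputs, max_new_tokens=8, do_sample=False)

start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=MAX_NEW_TOKENS, do_sample=False)
elapsed = time.perf_counter() - start

# Count only newly generated tokens, excluding the prompt.
generated = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{generated} tokens in {elapsed:.2f} s -> {generated / elapsed:.1f} tokens/s")
```

A full suite would sweep batch sizes, sequence lengths, and inference frameworks, and record the resulting throughput per configuration, which is the kind of data the paper's interactive dashboard summarizes.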

Research Organization: Argonne National Laboratory (ANL)
Sponsoring Organization: USDOE Office of Science - Office of Advanced Scientific Computing Research (ASCR)
DOE Contract Number: AC02-06CH11357
OSTI ID: 2563712
Country of Publication: United States
Language: English

Similar Records

MoE-Inference-Bench: Performance Evaluation of Mixture of Expert Large Language and Vision Models
Conference · November 14, 2025 · OSTI ID: 3010792

CACTUS: Chemistry Agent Connecting Tool Usage to Science
Journal Article · October 25, 2024 · ACS Omega · OSTI ID: 2478063

Related Subjects