DOE PAGES, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Modeling Large-Scale Slim Fly Networks Using Parallel Discrete-Event Simulation

Abstract

As supercomputers approach exascale performance, the increased number of processors translates to an increased demand on the underlying network interconnect. The slim fly network topology, a new low-diameter, low-latency, and low-cost interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this article, we present a high-fidelity slim fly packet-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate the model against published work before scaling the network size up to an unprecedented 1 million compute nodes and confirming that the slim fly sustains peak network throughput at extreme scale. In addition to synthetic workloads, we evaluate large-scale slim fly models with real communication workloads from applications in the Design Forward program with over 110,000 MPI processes. We show strong scaling of the slim fly model on an Intel cluster, achieving a peak network packet transfer rate of 2.3 million packets per second and processing over 7 billion discrete events using 128 MPI tasks. Enabled by the strong performance capabilities of the model, we perform a detailed study of application traces and routing protocol performance. Lastly, through analysis of metrics such as packet latency, hop count, and congestion, we find that the slim fly network is able to leverage simple minimal routing and achieve the same performance as more complex adaptive routing for the tested DOE benchmark applications.
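To illustrate the kind of packet-level discrete-event simulation the abstract describes, the following is a minimal, purely illustrative sketch: a sequential event-queue simulator in which each packet's hops are scheduled as timestamped events. It is not the ROSS/CODES API; the `HOP_LATENCY_NS` constant, the event layout, and the `simulate` function are hypothetical stand-ins, and a slim fly's diameter-2 property is only used to motivate the three-hop example route.

```python
import heapq

# Illustrative sketch only: NOT the ROSS/CODES API. Topology, latency,
# and event types are assumed stand-ins for a packet-level model.
HOP_LATENCY_NS = 100.0  # assumed fixed per-hop link latency

def simulate(packets):
    """packets: list of (inject_time_ns, num_hops).
    Returns end-to-end latency per packet, processing all events
    in global timestamp order (the core of any sequential DES)."""
    # Each event is (timestamp, packet_id, hops_remaining).
    events = [(t, pid, hops) for pid, (t, hops) in enumerate(packets)]
    heapq.heapify(events)
    inject = {pid: t for pid, (t, _) in enumerate(packets)}
    latencies = {}
    while events:
        now, pid, hops = heapq.heappop(events)
        if hops == 0:
            latencies[pid] = now - inject[pid]  # packet reached its NIC
        else:
            # Schedule arrival at the next router after one hop.
            heapq.heappush(events, (now + HOP_LATENCY_NS, pid, hops - 1))
    return [latencies[pid] for pid in range(len(packets))]

# A slim fly has network diameter 2, so a worst-case minimal route is
# source NIC -> router -> router -> destination NIC: 3 hops here.
print(simulate([(0.0, 3), (50.0, 3)]))  # -> [300.0, 300.0]
```

A parallel optimistic simulator such as ROSS distributes these events across MPI ranks and speculatively executes them, rolling back via reverse computation when an event arrives out of timestamp order; the sequential loop above shows only the underlying event-processing model.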

Authors:
 Wolfe, Noah [1]; Mubarak, Misbah [2]; Carothers, Christopher D. [1]; Ross, Robert B. [2]; Carns, Philip H. [2]
  1. Rensselaer Polytechnic Inst., Troy, NY (United States)
  2. Argonne National Lab. (ANL), Lemont, IL (United States)
Publication Date:
August 2018
Research Org.:
Argonne National Lab. (ANL), Argonne, IL (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21); Air Force Research Laboratory (AFRL)
OSTI Identifier:
1488539
Grant/Contract Number:  
AC02-06CH11357
Resource Type:
Accepted Manuscript
Journal Name:
ACM Transactions on Modeling and Computer Simulation
Additional Journal Information:
Journal Volume: 28; Journal Issue: 4; Journal ID: ISSN 1049-3301
Publisher:
Association for Computing Machinery
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; Parallel discrete event simulation; Interconnection networks; Network topologies; Slim Fly

Citation Formats

Wolfe, Noah, Mubarak, Misbah, Carothers, Christopher D., Ross, Robert B., and Carns, Philip H. Modeling Large-Scale Slim Fly Networks Using Parallel Discrete-Event Simulation. United States: N. p., 2018. Web. doi:10.1145/3203406.
Wolfe, Noah, Mubarak, Misbah, Carothers, Christopher D., Ross, Robert B., & Carns, Philip H. Modeling Large-Scale Slim Fly Networks Using Parallel Discrete-Event Simulation. United States. doi:10.1145/3203406.
Wolfe, Noah, Mubarak, Misbah, Carothers, Christopher D., Ross, Robert B., and Carns, Philip H. 2018. "Modeling Large-Scale Slim Fly Networks Using Parallel Discrete-Event Simulation". United States. doi:10.1145/3203406. https://www.osti.gov/servlets/purl/1488539.
@article{osti_1488539,
title = {Modeling Large-Scale Slim Fly Networks Using Parallel Discrete-Event Simulation},
author = {Wolfe, Noah and Mubarak, Misbah and Carothers, Christopher D. and Ross, Robert B. and Carns, Philip H.},
abstractNote = {As supercomputers approach exascale performance, the increased number of processors translates to an increased demand on the underlying network interconnect. The slim fly network topology, a new low-diameter, low-latency, and low-cost interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this article, we present a high-fidelity slim fly packet-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate the model against published work before scaling the network size up to an unprecedented 1 million compute nodes and confirming that the slim fly sustains peak network throughput at extreme scale. In addition to synthetic workloads, we evaluate large-scale slim fly models with real communication workloads from applications in the Design Forward program with over 110,000 MPI processes. We show strong scaling of the slim fly model on an Intel cluster, achieving a peak network packet transfer rate of 2.3 million packets per second and processing over 7 billion discrete events using 128 MPI tasks. Enabled by the strong performance capabilities of the model, we perform a detailed study of application traces and routing protocol performance. Lastly, through analysis of metrics such as packet latency, hop count, and congestion, we find that the slim fly network is able to leverage simple minimal routing and achieve the same performance as more complex adaptive routing for the tested DOE benchmark applications.},
doi = {10.1145/3203406},
journal = {ACM Transactions on Modeling and Computer Simulation},
number = 4,
volume = 28,
place = {United States},
year = {2018},
month = {8}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record

Citation Metrics:
Cited by: 2 works
Citation information provided by
Web of Science

