Infiniband Performance Comparisons of SDR, DDR and Infinipath
This technical report compares the performance of the most common Infiniband-related technologies currently available. It includes TCP-based, MPI-based, and low-level performance tests to show what performance can be expected from Mellanox's SDR and DDR adapters as well as PathScale's Infinipath. We also compare Infinipath running on both the OpenIB stack and PathScale's ipath stack. Infiniband promises to bring high-performance interconnects for I/O (filesystem and networking) to a new cost-performance level, so LLNL has been evaluating Infiniband for use as a cluster interconnect. Many issues affect the choice of a cluster interconnect; this report focuses on the actual performance of the major Infiniband technologies available today. Performance testing concentrates on latency and bandwidth (both uni- and bi-directional) using both TCP and MPI. In addition, we examine an even lower level, removing most of the upper-layer protocols, to see what the connection could deliver if the TCP or MPI layers were perfectly written.
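The report does not include its benchmark source; as background, the sketch below shows the kind of MPI ping-pong microbenchmark typically used for the latency measurements described above. The message size and iteration count are illustrative assumptions, not the report's actual test parameters.

```c
/* Minimal MPI ping-pong latency sketch: run with exactly 2 ranks,
 * e.g. "mpirun -np 2 ./pingpong". Parameters are illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const int iters = 1000;   /* assumed iteration count */
    const int msg_size = 8;   /* bytes; small messages expose latency */
    char buf[8];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, sizeof(buf));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Each iteration is one round trip; half of the round-trip
         * time is the conventional one-way latency figure. */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);
    }
    MPI_Finalize();
    return 0;
}
```

Bandwidth tests follow the same pattern with large messages streamed in one direction (uni-directional) or both at once (bi-directional), dividing bytes transferred by elapsed time.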
- Research Organization: Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
- Sponsoring Organization: USDOE
- DOE Contract Number: W-7405-ENG-48
- OSTI ID: 900097
- Report Number(s): UCRL-TR-221775; TRN: US200709%%529
- Country of Publication: United States
- Language: English