OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Challenges for high-performance networking for exascale computing.

Abstract

Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the hardware and software challenges that will need to be addressed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.

Authors:
Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas [1]; Brightwell, Ronald Brian
  1. Intel Corporation, Hillsboro, OR
Publication Date:
May 2010
Research Org.:
Sandia National Laboratories
Sponsoring Org.:
USDOE
OSTI Identifier:
1012446
Report Number(s):
SAND2010-2892C
TRN: US201110%%99
DOE Contract Number:
AC04-94AL85000
Resource Type:
Conference
Resource Relation:
Conference: Proposed for presentation at the International Conference on Computer Communications and Networks held August 2-5, 2010 in Zurich, Switzerland.
Country of Publication:
United States
Language:
English
Subject:
99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; COMMUNICATIONS; COMPUTERS; PERFORMANCE; PROGRAMMING

Citation Formats

Barrett, Brian W., Hemmert, K. Scott, Underwood, Keith Douglas, and Brightwell, Ronald Brian. Challenges for high-performance networking for exascale computing. United States: N. p., 2010. Web.
Barrett, Brian W., Hemmert, K. Scott, Underwood, Keith Douglas, & Brightwell, Ronald Brian. Challenges for high-performance networking for exascale computing. United States.
Barrett, Brian W., Hemmert, K. Scott, Underwood, Keith Douglas, and Brightwell, Ronald Brian. 2010. "Challenges for high-performance networking for exascale computing." United States.
@article{osti_1012446,
title = {Challenges for high-performance networking for exascale computing.},
author = {Barrett, Brian W. and Hemmert, K. Scott and Underwood, Keith Douglas and Brightwell, Ronald Brian},
abstractNote = {Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the hardware and software challenges that will need to be addressed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.},
place = {United States},
year = {2010},
month = {5}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

Similar Records:
  • SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Supercomputing or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports the applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government, and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high-intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.
  • The 1996 Simulation MultiConference features the fourth High Performance Computing Symposium, with the theme "Grand Challenges in Computer Simulation." The goal of the Symposium is to encourage innovation in High Performance Computing technologies and to stimulate the use of these technologies in key areas of Computer Simulation, particularly in the "Grand Challenge" areas defined in the Federal High Performance Computing and Communications Initiative. Computer Simulation is recognized today as a critical technology, playing a major role in many areas such as improving the efficiency of car engines, designing advanced materials, or predicting weather patterns and global climate changes. In the coming years Computer Simulation, spurred by technological changes already underway, can and should play an even greater role in providing solutions to technology's most important problems. Separate abstracts have been indexed into the database for articles from this proceedings.
  • Data-intensive and high-performance computing are poised to significantly impact the future of biological research, which is increasingly driven by the prevalence of high-throughput experimental methodologies for genome sequencing, transcriptomics, proteomics, and other areas. Large centers such as NIH's National Center for Biotechnology Information (NCBI), The Institute for Genomic Research (TIGR), and the DOE's Joint Genome Institute (JGI) Integrated Microbial Genome (IMG) have made extensive use of multiprocessor architectures to deal with some of the challenges of processing, storing, and curating exponentially growing genomic and proteomic datasets, enabling end users to rapidly access a growing public data source as well as utilize analysis tools transparently on high-performance computing resources. Applying this computational power to single-investigator analysis, however, often relies on users to provide their own computational resources, forcing them to endure the learning curve of porting, building, and running software on multiprocessor architectures. Solving the next generation of large-scale biology challenges using multiprocessor machines, from small clusters to emerging petascale machines, can most practically be realized if this learning curve can be minimized through a combination of workflow management, data management, and resource allocation, as well as intuitive interfaces and compatibility with existing common data formats.