OSTI.GOV: U.S. Department of Energy, Office of Scientific and Technical Information

Title: Final Report for Foundational Tools for Petascale Computing

Abstract

This project concentrated on creating tool infrastructure that makes it easier to program large-scale parallel computers. The work was a collaboration with the University of Wisconsin and was closely related to project DE-SC0002606 (“Tools for the Development of High Performance Energy Applications and Systems”). This report summarizes the research conducted during the project; complete details are available in the ten publications listed at the end of the report. Many of the concepts created during this project have been incorporated into tools and released as freely downloadable software at www.dyninst.org. The project also supported three Ph.D. students and one research staff member.

Authors:
Hollingsworth, Jeff [1]
  1. Univ. of Maryland, College Park, MD (United States)
Publication Date:
2015-02-12
Research Org.:
Univ. of Maryland, College Park, MD (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
OSTI Identifier:
1169945
Report Number(s):
UMD-SC0002616
DOE Contract Number:
SC0002616
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Hollingsworth, Jeff. Final Report for Foundational Tools for Petascale Computing. United States: N. p., 2015. Web. doi:10.2172/1169945.
Hollingsworth, Jeff. Final Report for Foundational Tools for Petascale Computing. United States. doi:10.2172/1169945.
Hollingsworth, Jeff. 2015. "Final Report for Foundational Tools for Petascale Computing". United States. doi:10.2172/1169945. https://www.osti.gov/servlets/purl/1169945.
@techreport{osti_1169945,
title = {Final Report for Foundational Tools for Petascale Computing},
author = {Hollingsworth, Jeff},
abstractNote = {This project concentrated on creating tool infrastructure that makes it easier to program large-scale parallel computers. The work was a collaboration with the University of Wisconsin and was closely related to project DE-SC0002606 (“Tools for the Development of High Performance Energy Applications and Systems”). This report summarizes the research conducted during the project; complete details are available in the ten publications listed at the end of the report. Many of the concepts created during this project have been incorporated into tools and released as freely downloadable software at www.dyninst.org. The project also supported three Ph.D. students and one research staff member.},
doi = {10.2172/1169945},
institution = {Univ. of Maryland, College Park, MD (United States)},
place = {United States},
year = {2015},
month = {feb}
}

Technical Report: https://www.osti.gov/servlets/purl/1169945

Related reports:
  • The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging petascale systems. The overall goal of this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) the analysis, instrumentation, and control of binary programs, which falls under the general framework of the Dyninst API tool kits (a minimal usage sketch appears after this list); and (2) infrastructure for building tools and applications at extreme scale, which falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “High-Performance Energy Applications and Systems”, SC0004061/FG02-10ER25972, UW PRJ36WV.
  • In the course of developing parallel programs for leadership computing systems, subtle programming errors often arise that are extremely difficult to diagnose without tools. To meet this challenge, the University of Maryland, the University of Wisconsin-Madison, and Rice University worked to develop lightweight tools that help code developers pinpoint a variety of program correctness errors that plague parallel scientific codes. The aim of this project was to develop software tools that help diagnose program errors, including memory leaks, memory access errors, round-off errors, and data races. Research at Rice University focused on developing algorithms and data structures to support efficient monitoring of multithreaded programs for memory access errors and data races. This is the final report on the research and development work conducted at Rice University as part of this project.
  • The majority of scientific software is distributed as source code. As the number of library dependencies and supported platforms increases, so does the complexity of describing the rules for configuring and building software. In this project, we performed an empirical study of the magnitude of the build problem by examining the development history of two DOE-funded scientific software projects. We developed MixDown, a meta-build tool, to simplify the task of building applications that depend on multiple third-party libraries. The results of this research indicate that the effort scientific programmers spend on building software is a significant fraction of the total development effort, and that using MixDown can significantly simplify the task of building software with multiple dependencies.
  • Extended magnetohydrodynamic (MHD) codes are used to model the large, slow-growing instabilities that are projected to limit the performance of the International Thermonuclear Experimental Reactor (ITER). The multiscale nature of the extended MHD equations requires an implicit approach. The current linear solvers needed for the implicit algorithm scale poorly because the resultant matrices are so ill-conditioned. A new solver is needed, especially one that scales to the petascale. The most successful scalable parallel solvers to date are multigrid solvers. Applying multigrid techniques to a set of equations whose fundamental modes are dispersive waves is a promising solution to CEMM problems. For Phase 1, we implemented multigrid preconditioners from the HYPRE project of the Center for Applied Scientific Computing at LLNL, via PETSc from the DOE SciDAC TOPS project, for the real matrix systems of the extended MHD code NIMROD, which is one of the primary modeling codes of the OFES-funded Center for Extended Magnetohydrodynamic Modeling (CEMM) SciDAC (a sketch of this style of solver configuration appears after this list). We successfully implemented the multigrid solvers on the fusion test problem that allows for real matrix systems, and in the process learned about the details of NIMROD data structures and the difficulties of inverting NIMROD operators. The further success of this project will allow for efficient use of future petascale computers at the national leadership facilities: Oak Ridge National Laboratory, Argonne National Laboratory, and the National Energy Research Scientific Computing Center. The project will be a collaborative effort between computational plasma physicists and applied mathematicians at Tech-X Corporation, applied mathematicians at Front Range Scientific Computations, Inc. (who are collaborators on the HYPRE project), and other computational plasma physicists involved with the CEMM project.
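The Dyninst API work described in the first bullet concerns the analysis and instrumentation of running binaries. As a rough illustration of that style of tool, here is a minimal, hypothetical C++ sketch that launches a program under Dyninst's control and inserts a call to an instrumentation routine at a function's entry points. The binary path "./app" and the function names "compute" and "record_entry" are illustrative assumptions, not taken from the reports.

    // Minimal Dyninst sketch: launch a target binary, then insert a call to
    // record_entry() at every entry point of compute(). Names are hypothetical.
    #include "BPatch.h"
    #include "BPatch_process.h"
    #include "BPatch_image.h"
    #include "BPatch_function.h"
    #include "BPatch_point.h"
    #include <vector>

    BPatch bpatch;  // one BPatch object per mutator process

    int main() {
        const char *argv[] = {"./app", nullptr};

        // Create the target ("mutatee") process, stopped at startup.
        BPatch_process *proc = bpatch.processCreate("./app", argv);
        if (!proc) return 1;
        BPatch_image *image = proc->getImage();

        // Look up the function to instrument and the routine to call.
        std::vector<BPatch_function *> targets, recorders;
        image->findFunction("compute", targets);
        image->findFunction("record_entry", recorders);
        if (targets.empty() || recorders.empty()) return 1;

        // Build a snippet that calls record_entry() with no arguments.
        std::vector<BPatch_snippet *> args;
        BPatch_funcCallExpr callRecord(*recorders[0], args);

        // Insert the snippet at every entry point of compute().
        const std::vector<BPatch_point *> *entries =
            targets[0]->findPoint(BPatch_entry);
        if (!entries || entries->empty()) return 1;
        proc->insertSnippet(callRecord, *entries);

        // Resume the mutatee and wait for it to exit.
        proc->continueExecution();
        while (!proc->isTerminated())
            bpatch.waitForStatusChange();
        return 0;
    }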
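The last bullet describes preconditioning NIMROD's real matrix systems with HYPRE multigrid through PETSc. The following hedged sketch shows the general shape of such a configuration using PETSc's standard KSP/PC interface with HYPRE's BoomerAMG variant; it is a generic illustration under those assumptions, not the project's actual NIMROD integration, and the matrix A and vectors b, x are assumed to be assembled elsewhere.

    // Generic sketch: a Krylov solve preconditioned by HYPRE's BoomerAMG
    // multigrid, selected through PETSc's KSP/PC interface.
    #include <petscksp.h>

    PetscErrorCode solve_with_boomeramg(Mat A, Vec b, Vec x) {
        KSP ksp;
        PC  pc;
        PetscErrorCode ierr;

        ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
        ierr = KSPSetOperators(ksp, A, A); CHKERRQ(ierr);  // operator doubles as preconditioning matrix
        ierr = KSPSetType(ksp, KSPGMRES); CHKERRQ(ierr);   // Krylov method suited to nonsymmetric systems

        // Select HYPRE as the preconditioner and BoomerAMG as its variant.
        ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
        ierr = PCSetType(pc, PCHYPRE); CHKERRQ(ierr);
        ierr = PCHYPRESetType(pc, "boomeramg"); CHKERRQ(ierr);

        ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);      // honor command-line overrides
        ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);
        ierr = KSPDestroy(&ksp); CHKERRQ(ierr);
        return 0;
    }

The same selection can also be made at run time with options such as -ksp_type gmres -pc_type hypre -pc_hypre_type boomeramg, which KSPSetFromOptions picks up; this is the usual way solver variants are swapped during scaling studies.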