Terascale Optimal PDE Simulations (TOPS) Center
Abstract
This report covers the period from Oct. 2002 to Sep. 2004, when Old Dominion University (ODU) was the lead institution for the TOPS ISIC; in Oct. 2004 Columbia University replaced ODU as the lead institution. The TOPS members from ODU focused on various aspects of the linear and nonlinear solver infrastructure required by partial differential equation simulation codes, working directly with SciDAC teams from the Fusion Energy Sciences program: the Center for Extended Magnetohydrodynamic Modeling (CEMM) at Princeton, and the Center for Magnetic Reconnection Studies (CMRS) at the University of New Hampshire. With CEMM we worked on their MHD simulation code, M3D, which is semi-implicit, requiring linear solves but no nonlinear solves. We contributed several improvements to their semi-implicit code. Among these was the use of multilevel preconditioning, which provides optimal scaling; this was done through the multigrid preconditioner available in Hypre, another major solver package available in TOPS. We also provided them direct solver functionality for their linear solves, since direct solvers may be required for more accurate solutions in some regimes. With the CMRS group, we implemented a fully implicit parallel magnetic reconnection simulation code, built on top of PETSc. Our first attempt was a Krylov linear iteration (GMRES, because of the lack of symmetry) within each nonlinear (Newton) iteration, with optimal multilevel preconditioning using the geometric multigrid preconditioner from PETSc. However, for reasons that we have not yet fully understood, the multigrid preconditioner fails early in the simulation, breaking the outer Newton iteration. Much better results were obtained after switching from optimal multilevel preconditioning to suboptimal one-level preconditioning. Our current code, based on the additive Schwarz preconditioner in PETSc with ILU on subdomains, scales reasonably well while matching the output of the original explicit code. The new Newton-Krylov-Schwarz implicit code can take time steps that are hundreds or thousands of times larger than those of the explicit code. During the three-year period of this grant, we published thirteen papers and gave several invited talks at international conferences. Work on these TOPS projects continues with Columbia University as lead until Sep. 2006.
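The inexact-Newton structure described in the abstract (an outer Newton iteration with a Krylov linear solve, here GMRES-like, inside each step) can be sketched in a few lines. This is only an illustration using SciPy in place of PETSc, applied to a hypothetical 1D reaction-diffusion test problem, not the CMRS reconnection model:

```python
# Sketch of the Newton-Krylov approach: an outer Newton iteration whose
# linear (Jacobian) systems are solved by a Krylov method with
# finite-difference Jacobian-vector products. SciPy stands in for PETSc;
# the test problem is an assumed stand-in, not the actual MHD equations.
import numpy as np
from scipy.optimize import newton_krylov

N = 64
h = 1.0 / (N + 1)

def residual(u):
    # F(u) = -u'' + u**3 - 1 on (0, 1), homogeneous Dirichlet BCs,
    # second-order centered finite differences.
    F = np.empty_like(u)
    F[0]    = (2*u[0] - u[1]) / h**2 + u[0]**3 - 1.0
    F[-1]   = (2*u[-1] - u[-2]) / h**2 + u[-1]**3 - 1.0
    F[1:-1] = (2*u[1:-1] - u[:-2] - u[2:]) / h**2 + u[1:-1]**3 - 1.0
    return F

# Each Newton step solves J(u) du = -F(u) with LGMRES, a GMRES variant.
sol = newton_krylov(residual, np.zeros(N), method='lgmres', f_tol=1e-8)
print(np.max(np.abs(residual(sol))))  # residual max-norm at the solution
```

In PETSc the same structure is provided by the SNES nonlinear solver with a KSP (e.g. GMRES) inner solve; the preconditioner choice, multigrid versus one-level Schwarz, is what the abstract's discussion turns on.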
- Authors:
- Publication Date:
- Research Org.:
- Old Dominion University Research Foundation
- Sponsoring Org.:
- USDOE Office of Science (SC)
- OSTI Identifier:
- 890547
- Report Number(s):
- DOE/ER/25476-3 Final Report
ODURF313401
- DOE Contract Number:
- FC02-01ER25476
- Resource Type:
- Technical Report
- Country of Publication:
- United States
- Language:
- English
- Subject:
- 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; 01 COAL, LIGNITE, AND PEAT
Citation Formats
Pothen, Alex. Terascale Optimal PDE Simulations (TOPS) Center. United States: N. p., 2006.
Web. doi:10.2172/890547.
Pothen, Alex. Terascale Optimal PDE Simulations (TOPS) Center. United States. doi:10.2172/890547.
Pothen, Alex. 2006.
"Terascale Optimal PDE Simulations (TOPS) Center". United States.
doi:10.2172/890547. https://www.osti.gov/servlets/purl/890547.
@article{osti_890547,
title = {Terascale Optimal PDE Simulations (TOPS) Center},
author = {Pothen, Alex},
abstractNote = {This report covers the period from Oct. 2002 to Sep. 2004, when Old Dominion University (ODU) was the lead institution for the TOPS ISIC; in Oct. 2004 Columbia University replaced ODU as the lead institution. The TOPS members from ODU focused on various aspects of the linear and nonlinear solver infrastructure required by partial differential equation simulation codes, working directly with SciDAC teams from the Fusion Energy Sciences program: the Center for Extended Magnetohydrodynamic Modeling (CEMM) at Princeton, and the Center for Magnetic Reconnection Studies (CMRS) at the University of New Hampshire. With CEMM we worked on their MHD simulation code, M3D, which is semi-implicit, requiring linear solves but no nonlinear solves. We contributed several improvements to their semi-implicit code. Among these was the use of multilevel preconditioning, which provides optimal scaling; this was done through the multigrid preconditioner available in Hypre, another major solver package available in TOPS. We also provided them direct solver functionality for their linear solves, since direct solvers may be required for more accurate solutions in some regimes. With the CMRS group, we implemented a fully implicit parallel magnetic reconnection simulation code, built on top of PETSc. Our first attempt was a Krylov linear iteration (GMRES, because of the lack of symmetry) within each nonlinear (Newton) iteration, with optimal multilevel preconditioning using the geometric multigrid preconditioner from PETSc. However, for reasons that we have not yet fully understood, the multigrid preconditioner fails early in the simulation, breaking the outer Newton iteration. Much better results were obtained after switching from optimal multilevel preconditioning to suboptimal one-level preconditioning. Our current code, based on the additive Schwarz preconditioner in PETSc with ILU on subdomains, scales reasonably well while matching the output of the original explicit code. The new Newton-Krylov-Schwarz implicit code can take time steps that are hundreds or thousands of times larger than those of the explicit code. During the three-year period of this grant, we published thirteen papers and gave several invited talks at international conferences. Work on these TOPS projects continues with Columbia University as lead until Sep. 2006.},
doi = {10.2172/890547},
place = {United States},
year = {2006},
month = {aug}
}
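The one-level preconditioner that the abstract settles on, additive Schwarz with ILU on subdomains, can be illustrated compactly. The sketch below is an assumed toy setup (a 1D Poisson matrix with two overlapping subdomains) using SciPy in place of PETSc's PCASM, not the actual reconnection code:

```python
# Illustrative one-level additive Schwarz preconditioner with ILU on
# subdomains, in the spirit of the Newton-Krylov-Schwarz code described
# above. SciPy stands in for PETSc; the 1D Poisson system is a stand-in.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(N, N), format='csc')
b = np.ones(N)

# Two overlapping index sets; each contributes an ILU solve on its block.
overlap = 10
doms = [np.arange(0, N // 2 + overlap), np.arange(N // 2 - overlap, N)]
ilus = [spla.spilu(A[np.ix_(d, d)].tocsc()) for d in doms]

def asm_apply(r):
    # M^{-1} r = sum_i R_i^T (ILU_i)^{-1} R_i r   (additive Schwarz)
    z = np.zeros_like(r)
    for d, ilu in zip(doms, ilus):
        z[d] += ilu.solve(r[d])
    return z

M = spla.LinearOperator((N, N), matvec=asm_apply)
x, info = spla.gmres(A, b, M=M)  # GMRES with the one-level ASM/ILU preconditioner
```

With more subdomains and no coarse level, iteration counts grow with the subdomain count, which is why the abstract calls one-level preconditioning "suboptimal"; its virtue here was robustness where multigrid failed.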
Our work has focused on the development and analysis of domain decomposition algorithms for a variety of problems arising in continuum mechanics modeling. In particular, we have extended and analyzed FETI-DP and BDDC algorithms; these iterative solvers were first introduced and studied by Charbel Farhat and his collaborators, see [11, 45, 12], and by Clark Dohrmann of Sandia National Laboratories, Albuquerque, see [43, 2, 1], respectively. These two closely related families of methods are of particular interest since they are used more extensively than other iterative substructuring methods to solve very large and difficult problems. Thus, the FETI algorithms are part of …
Final Report for UC Berkeley Terascale Optimal PDE Solvers (TOPS), DOE Award Number DE-FC02-01ER25478, 9/15/2001 – 9/14/2006
In many areas of science, physical experimentation may be too dangerous, too expensive, or even impossible. Instead, large-scale simulations, validated by comparison with related experiments in well-understood laboratory contexts, are used by scientists to gain insight into and confirmation of existing theories in such areas, without the benefit of full experimental verification. The goal of the TOPS ISIC was to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. A major component of this effort is to provide software for large-scale parallel computers capable of efficiently solving the enormous systems of equations arising from the nonlinear PDEs underlying …
Final Report: Towards Optimal Petascale Simulations (TOPS), ER25785
Multiscale, multirate scientific and engineering applications in the SciDAC portfolio possess resolution requirements that are practically inexhaustible and demand execution on the highest-capability computers available, which will soon reach the petascale. While the variety of applications is enormous, their needs for mathematical software infrastructure are surprisingly coincident; moreover, the chief bottleneck is often the solver. At their current scalability limits, many applications spend the vast majority of their operations in solvers, due to solver algorithmic complexity that is superlinear in the problem size, whereas other phases scale linearly. Furthermore, the solver may be the phase of the simulation with the …
Terascale Optimal PDE Simulations
The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed, and deployed an integrated toolkit of open-source, optimal-complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change, and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale …
Terascale High-Fidelity Simulations of Turbulent Combustion with Detailed Chemistry: Spray Simulations
The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. Under this component of the TSTC program the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new …