OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

Abstract

Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
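The solver strategy summarized in the abstract, a Newton iteration whose large sparse linear systems are solved by a Krylov method with an algebraic multigrid (AMG) preconditioner, can be sketched in a few lines. The example below is a minimal illustration in Python using pyamg and SciPy on a model Poisson problem (the same kind of problem used in the paper's largest scaling test); it is not the authors' Trilinos code, and the problem size, cycle type, and tolerance are arbitrary choices for illustration.

import numpy as np
import pyamg
import scipy.sparse.linalg as spla

# Model linear system: a 2D Poisson operator standing in for the Jacobian
# system that arises at each Newton step of a fully-implicit formulation.
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.ones(A.shape[0])

# Build a smoothed-aggregation AMG hierarchy and wrap it as a preconditioner.
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner(cycle='V')

# AMG-preconditioned GMRES solve (rtol= requires SciPy >= 1.12; older
# versions take tol= instead).
x, info = spla.gmres(A, b, M=M, rtol=1e-8, restart=50)
print('converged' if info == 0 else 'info = %d' % info,
      '| residual norm:', np.linalg.norm(b - A @ x))

In the paper, the same Krylov-plus-AMG pattern is applied to the fully-coupled block system for the resistive MHD unknowns, with the preconditioner built by the 64-bit-capable second-generation Trilinos stack rather than pyamg.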

Authors:
 Lin, P. T. [1]; Shadid, J. N. [2]; Hu, J. J. [1]; Pawlowski, R. P. [1]; Cyr, E. C. [1]
  1. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research (CCR)
  2. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research (CCR); Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Mathematics and Statistics
Publication Date:
November 6, 2017
Research Org.:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sandia National Lab. (SNL-CA), Livermore, CA (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
OSTI Identifier:
1411594
Report Number(s):
SAND2017-0881J
Journal ID: ISSN 0377-0427; 650766
Grant/Contract Number:  
AC04-94AL85000
Resource Type:
Journal Article: Accepted Manuscript
Journal Name:
Journal of Computational and Applied Mathematics
Additional Journal Information:
Journal Volume: 344; Journal ID: ISSN 0377-0427
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Lin, P. T., Shadid, J. N., Hu, J. J., Pawlowski, R. P., and Cyr, E. C. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD. United States: N. p., 2017. Web. doi:10.1016/j.cam.2017.09.028.
Lin, P. T., Shadid, J. N., Hu, J. J., Pawlowski, R. P., & Cyr, E. C. (2017). Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD. United States. doi:10.1016/j.cam.2017.09.028.
Lin, P. T., Shadid, J. N., Hu, J. J., Pawlowski, R. P., and Cyr, E. C. 2017. "Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD". United States. doi:10.1016/j.cam.2017.09.028.
@article{osti_1411594,
title = {Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD},
author = {Lin, P. T. and Shadid, J. N. and Hu, J. J. and Pawlowski, R. P. and Cyr, E. C.},
abstractNote = {Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.},
doi = {10.1016/j.cam.2017.09.028},
journal = {Journal of Computational and Applied Mathematics},
volume = {344},
place = {United States},
year = {2017},
month = {nov}
}

Journal Article: Free Publicly Available Full Text
This content will become publicly available on November 6, 2018.
Publisher's Version of Record

Citation Metrics:
Cited by: 1 work
Citation information provided by Web of Science
