OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Technical Note: On the Parallelization of a Global Climate-Chemistry Modeling System

Abstract

Coupled climate-chemistry simulations are computationally intensive owing to the spatial and temporal scope of the problem. In global chemistry models, the time integrations encountered in the chemistry and aerosol modules usually comprise the major CPU consumption. Parallelization of these segments of the code can contribute to multifold CPU speed-ups with minimal modification of the original serial code. This technical note presents a single program-multiple data (SPMD) strategy applied to the time-split chemistry modules of a coupled climate--global tropospheric chemistry model. Latitudinal domain decomposition is adopted along with a dynamic load-balancing technique that uses the previous time-step's load/latitude estimates for distributing the latitude bands amongst the processors. The coupled model is manually parallelized using the Message Passing Interface standard (MPI) on a distributed memory platform (IBM-SP2). Load-balancing efficiencies and the associated MPI overheads are discussed. Overall speed-ups and efficiencies are also calculated for a series of runs employing up to eight processors.
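The dynamic load-balancing idea described in the abstract (redistributing latitude bands among processors using the previous time step's per-latitude cost estimates) can be sketched as follows. This is an illustrative greedy assignment under assumed interfaces, not the paper's actual algorithm or code; the function name and cost model are hypothetical.

```python
# Sketch of a dynamic load balancer in the spirit of the abstract: latitude
# bands are reassigned to processors each time step using per-latitude CPU
# costs measured during the previous time step. Illustrative only.
import heapq

def assign_latitude_bands(prev_costs, n_procs):
    """Distribute latitude bands among processors.

    prev_costs: estimated cost per latitude band, from the previous time step.
    n_procs:    number of available processors.
    Returns a list of band-index lists, one per processor.
    """
    # Min-heap of (accumulated load, processor rank).
    heap = [(0.0, rank) for rank in range(n_procs)]
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]

    # Assign the costliest bands first (longest-processing-time heuristic):
    # each band goes to the currently least-loaded processor.
    for band in sorted(range(len(prev_costs)), key=lambda b: -prev_costs[b]):
        load, rank = heapq.heappop(heap)
        assignment[rank].append(band)
        heapq.heappush(heap, (load + prev_costs[band], rank))
    return assignment

# Example: 8 latitude bands with uneven chemistry costs on 2 processors.
costs = [4.0, 1.0, 3.0, 2.0, 5.0, 1.0, 2.0, 4.0]
parts = assign_latitude_bands(costs, 2)
loads = [sum(costs[b] for b in p) for p in parts]
```

Because chemistry costs vary slowly between adjacent time steps, the previous step's measurements are a reasonable predictor of the next step's load, which is what makes this cheap rebalancing effective.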

Authors:
Lee, Peter S; Zaveri, Rahul A; Easter, Richard C; Peters, Leonard K
Publication Date:
February 1, 1999
Research Org.:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
15007193
Report Number(s):
PNNL-SA-31132
Journal ID: ISSN 0004-6981; ATENBP; TRN: US200414%%285
DOE Contract Number:  
AC05-76RL01830
Resource Type:
Journal Article
Journal Name:
Atmospheric Environment
Additional Journal Information:
Journal Volume: 33; Journal Issue: 4; Journal Pages: 675-681; Journal ID: ISSN 0004-6981
Country of Publication:
United States
Language:
English
Subject:
54 ENVIRONMENTAL SCIENCES; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; AEROSOLS; ATMOSPHERIC CHEMISTRY; MODIFICATIONS; CLIMATE MODELS; PARALLEL PROCESSING; GLOBAL ASPECTS; atmospheric chemistry modeling, domain-decomposition, load-balancing, message passing interface

Citation Formats

Lee, Peter S, Zaveri, Rahul A, Easter, Richard C, and Peters, Leonard K. Technical Note: On the Parallelization of a Global Climate-Chemistry Modeling System. United States: N. p., 1999. Web. doi:10.1016/S1352-2310(98)00272-6.
Lee, Peter S, Zaveri, Rahul A, Easter, Richard C, & Peters, Leonard K. Technical Note: On the Parallelization of a Global Climate-Chemistry Modeling System. United States. https://doi.org/10.1016/S1352-2310(98)00272-6
Lee, Peter S, Zaveri, Rahul A, Easter, Richard C, and Peters, Leonard K. 1999. "Technical Note: On the Parallelization of a Global Climate-Chemistry Modeling System". United States. https://doi.org/10.1016/S1352-2310(98)00272-6.
@article{osti_15007193,
title = {Technical Note: On the Parallelization of a Global Climate-Chemistry Modeling System},
author = {Lee, Peter S and Zaveri, Rahul A and Easter, Richard C and Peters, Leonard K},
abstractNote = {Coupled climate-chemistry simulations are computationally intensive owing to the spatial and temporal scope of the problem. In global chemistry models, the time integrations encountered in the chemistry and aerosol modules usually comprise the major CPU consumption. Parallelization of these segments of the code can contribute to multifold CPU speed-ups with minimal modification of the original serial code. This technical note presents a single program-multiple data (SPMD) strategy applied to the time-split chemistry modules of a coupled climate--global tropospheric chemistry model. Latitudinal domain decomposition is adopted along with a dynamic load-balancing technique that uses the previous time-step's load/latitude estimates for distributing the latitude bands amongst the processors. The coupled model is manually parallelized using the Message Passing Interface standard (MPI) on a distributed memory platform (IBM-SP2). Load-balancing efficiencies and the associated MPI overheads are discussed. Overall speed-ups and efficiencies are also calculated for a series of runs employing up to eight processors.},
doi = {10.1016/S1352-2310(98)00272-6},
url = {https://www.osti.gov/biblio/15007193},
journal = {Atmospheric Environment},
issn = {0004-6981},
number = {4},
volume = {33},
pages = {675--681},
place = {United States},
year = {1999},
month = feb
}