A GPU-accelerated package for simulation of flow in nanoporous source rocks with many-body dissipative particle dynamics
- Idaho National Lab. (INL), Idaho Falls, ID (United States)
- Idaho National Lab. (INL), Idaho Falls, ID (United States); Brown Univ., Providence, RI (United States)
- Clemson Univ., SC (United States)
- Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
- Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
- Univ. of Utah, Salt Lake City, UT (United States)
- Carl Zeiss X-ray Microscopy, Inc., Pleasanton, CA (United States)
Mesoscopic simulations of hydrocarbon flow in source shales are challenging, in part due to the heterogeneous shale pores with sizes ranging from a few nanometers to a few micrometers. Additionally, the sub-continuum fluid–fluid and fluid–solid interactions in nano- to micro-scale shale pores, which are physically and chemically complex, must be captured. To address those challenges, we present a GPU-accelerated package for simulation of flow in nano- to micro-pore networks with a many-body dissipative particle dynamics (mDPD) mesoscale model. Based on a fully distributed parallel paradigm, the code offloads all intensive workloads onto GPUs. Other advancements, such as smart particle packing and no-slip boundary conditions in complex pore geometries, are also implemented for constructing and simulating realistic shale pores from 3D nanometer-resolution stack images. Our code is validated for accuracy and compared against its CPU counterpart for speedup. In our benchmark tests, the code delivers nearly perfect strong and weak scaling (with up to 512 million particles) on up to 512 K20X GPUs on Oak Ridge National Laboratory's (ORNL) Titan supercomputer. Moreover, a single-GPU benchmark on ORNL's SummitDev and IBM's AC922 suggests that the host-to-device NVLink can boost performance over PCIe by a remarkable 40%. Lastly, we demonstrate, through a flow simulation in realistic shale pores, that the CPU counterpart requires 840 Power9 cores to rival the performance delivered by our package with four V100 GPUs on ORNL's Summit architecture. This simulation package enables quick-turnaround and high-throughput mesoscopic numerical simulations for exploring complex flow phenomena in nano- to micro-porous rocks with realistic pore geometries.
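The abstract names the many-body dissipative particle dynamics (mDPD) model as the package's mesoscale method. As a rough orientation, the sketch below implements the standard mDPD conservative force from Warren's formulation, where each pair contributes an attractive term plus a density-dependent repulsive term: F_ij = A w(r, rc) e_ij + B (ρ_i + ρ_j) w(r, rd) e_ij. The parameter values, kernel normalization, and the O(N²) pair loop here are illustrative assumptions for clarity, not the paper's actual GPU implementation (which uses neighbor lists and CUDA kernels).

```python
import numpy as np

# Illustrative mDPD parameters (assumed values, not from the paper):
# A < 0 gives long-range attraction, B > 0 gives density-dependent repulsion.
A, B = -40.0, 25.0
rc, rd = 1.0, 0.75   # cutoffs for the attractive and repulsive terms

def weight(r, cutoff):
    """Standard linear DPD weight w(r) = 1 - r/cutoff inside the cutoff."""
    return np.where(r < cutoff, 1.0 - r / cutoff, 0.0)

def local_density(positions, rd):
    """Per-particle density rho_i = sum_j w_rho(r_ij) with a normalized kernel."""
    n = len(positions)
    rho = np.zeros(n)
    norm = 15.0 / (2.0 * np.pi * rd**3)  # 3D normalization of (1 - r/rd)^2
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            if r < rd:
                rho[i] += norm * (1.0 - r / rd) ** 2
    return rho

def conservative_force(positions):
    """Pairwise mDPD conservative force (naive O(N^2) loop for clarity)."""
    n = len(positions)
    rho = local_density(positions, rd)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = positions[i] - positions[j]
            r = np.linalg.norm(rij)
            if 1e-12 < r < rc:
                e = rij / r
                f = A * weight(r, rc) + B * (rho[i] + rho[j]) * weight(r, rd)
                forces[i] += f * e
    return forces
```

In a production GPU code such as the one described here, this pair loop would be replaced by cell/neighbor lists and executed in device kernels, which is where the reported strong- and weak-scaling performance comes from.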
- Research Organization:
- Energy Frontier Research Centers (EFRC) (United States). Multi-Scale Fluid-Solid Interactions in Architected and Natural Materials (MUSE); Univ. of Utah, Salt Lake City, UT (United States); Idaho National Laboratory (INL), Idaho Falls, ID (United States); Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF)
- Sponsoring Organization:
- USDOE Office of Science (SC), Basic Energy Sciences (BES); USDOE Office of Nuclear Energy (NE)
- Grant/Contract Number:
- SC0019285; AC07-05ID14517; AC05-00OR22725
- OSTI ID:
- 1597088
- Alternate ID(s):
- OSTI ID: 1575938
- Report Number(s):
- INL/JOU-19-52933; TRN: US2103024
- Journal Information:
- Computer Physics Communications, Vol. 247, Issue C; ISSN 0010-4655
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English
- Confinement Effect on Porosity and Permeability of Shales (journal, January 2020)
- HPC, Cloud and Big-Data Convergent Architectures: The LEXIS Approach (book, June 2019)