High performance computing & Monte Carlo
- Authors:
- Forrest B.; William R.
High performance computing (HPC), used for the most demanding computational problems, has evolved from single-processor custom systems in the 1960s and 1970s, to vector processors in the 1980s, to parallel processors in the 1990s, to clusters of commodity processors in the 2000s. Performance/price has increased by a factor of more than 1 million over that time, so that today's desktop PC is more powerful than yesterday's supercomputer. With the introduction of inexpensive Linux clusters and the standardization of parallel software through MPI and OpenMP, parallel computing is now widespread and available to everyone. Monte Carlo codes for particle transport are especially well positioned to take advantage of accessible parallel computing because the computational algorithm is inherently parallel: particle histories are independent and can be simulated concurrently. We review Monte Carlo particle parallelism, including the basic algorithm, load balancing, fault tolerance, and scaling, using MCNP5 as an example. Because of memory limitations, especially on single nodes of Linux clusters, domain decomposition has been tried, with partial success. We conclude with a new scheme, data decomposition, which holds promise for very large problems.
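As a concrete illustration of the particle parallelism described in the abstract, the sketch below shows the pattern in plain C with MPI: each rank simulates an independent block of histories using its own random-number stream, and the per-rank tallies are combined with a single reduction at the end. This is a minimal sketch, not MCNP5 code; the toy physics (uncollided transmission through a slab), all constants, and the rank-offset seeding are illustrative assumptions.

```c
/* Minimal sketch of particle-parallel Monte Carlo with MPI.
 * Histories are independent, so ranks simulate disjoint blocks of
 * them and tallies are merged with one reduction. The "physics"
 * here is a toy: sample an exponential flight distance and score
 * whether the particle crosses a 1-cm slab without colliding. */
#define _POSIX_C_SOURCE 200809L  /* for rand_r */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const long total_histories = 10000000L;
    /* Block decomposition of histories over ranks; any remainder
       goes to the low ranks so the work stays roughly balanced. */
    long local = total_histories / nprocs
               + (rank < total_histories % nprocs ? 1 : 0);

    /* Each rank needs an independent random-number stream. Seeding
       by rank is the crudest possible scheme, used here only for
       brevity; a production code uses reproducible per-history
       strides through its generator instead. */
    unsigned int seed = 1234u + (unsigned int)rank;

    const double sigma_t   = 2.0; /* total cross section, 1/cm (assumed) */
    const double thickness = 1.0; /* slab thickness, cm (assumed)        */
    long transmitted = 0;

    for (long i = 0; i < local; ++i) {
        /* One toy history: draw xi in (0,1), sample a flight
           distance, and tally uncollided transmission. */
        double xi = (rand_r(&seed) + 1.0) / ((double)RAND_MAX + 2.0);
        double distance = -log(xi) / sigma_t;
        if (distance > thickness)
            ++transmitted;
    }

    /* Combine per-rank tallies into a global result on rank 0. */
    long total_transmitted = 0;
    MPI_Reduce(&transmitted, &total_transmitted, 1, MPI_LONG,
               MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("transmission = %.6f (analytic exp(-2) ~ 0.135335)\n",
               (double)total_transmitted / (double)total_histories);

    MPI_Finalize();
    return 0;
}
```

In a production code the fixed block decomposition above is typically replaced by a master process handing out smaller chunks of histories on demand (self-scheduling), which is what provides the load balancing discussed in the paper and gives natural rendezvous points for tally collection and fault recovery.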
- Research Organization:
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE
- OSTI ID:
- 977748
- Report Number(s):
- LA-UR-04-4532
- Resource Relation:
- Conference: American Nuclear Society 2004 Winter Meeting, Washington, DC, 14-18 November 2004
- Country of Publication:
- United States
- Language:
- English