Parallel implementation of the PHOENIX generalized stellar atmosphere program. II. Wavelength parallelization
- Department of Physics and Astronomy, University of Oklahoma, 440 West Brooks, Room 131, Norman, Oklahoma 73019-0225 (United States); Hauschildt, Peter H. (Department of Physics and Astronomy and Center for Simulational Physics, University of Georgia, Athens, Georgia 30602-2451, United States)
We describe an important addition to the parallel implementation of our generalized nonlocal thermodynamic equilibrium (NLTE) stellar atmosphere and radiative transfer computer program PHOENIX. In a previous paper in this series we described data- and task-parallel algorithms we have developed for radiative transfer, spectral line opacity, and NLTE opacity and rate calculations. These algorithms divided the work spatially or by spectral lines, that is, distributing the radial zones, individual spectral lines, or characteristic rays among different processors, and employed, in addition, task parallelism for logically independent functions (such as atomic and molecular line opacities). For finite, monotonic velocity fields, the radiative transfer equation is an initial value problem in wavelength, and hence each wavelength point depends upon the previous one. However, for sophisticated NLTE models of both static and moving atmospheres needed to accurately describe, e.g., novae and supernovae, the number of wavelength points is very large (200,000–300,000), and hence parallelization over wavelength can lead both to considerable speedup in calculation time and the ability to make use of the aggregate memory available on massively parallel supercomputers. Here, we describe an implementation of a pipelined design for the wavelength parallelization of PHOENIX, where the necessary data from the processor working on a previous wavelength point is sent to the processor working on the succeeding wavelength point as soon as it is known. Our implementation uses a MIMD design based on a relatively small number of standard Message Passing Interface (MPI) library calls and is fully portable between serial and parallel computers. © 1998 The American Astronomical Society
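The dataflow of the pipelined design described in the abstract can be illustrated with a minimal sketch. This is not the PHOENIX implementation (which is Fortran with MPI point-to-point calls); it is a hypothetical stand-in using Python threads and queues, where each worker owns a contiguous block of wavelength points, blocks until the upstream worker delivers the state at its first point (the initial value in wavelength), and forwards its final state downstream as soon as it is known. The function `solve_point` is a placeholder for the per-point radiative transfer solve, with an arbitrary update rule chosen purely for illustration.

```python
import queue
import threading


def solve_point(state, lam):
    # Stand-in for the radiative transfer solve at one wavelength point;
    # the update rule here is hypothetical, for illustration only.
    return state + lam


def worker(rank, my_points, in_q, out_q, results):
    # Block until the upstream stage (rank - 1) delivers the initial value.
    state = in_q.get()
    for lam in my_points:
        state = solve_point(state, lam)
    # Forward the state downstream as soon as it is known (MPI_Send analog).
    if out_q is not None:
        out_q.put(state)
    results[rank] = state


def run_pipeline(points, n_workers, initial_state=0.0):
    # Contiguous blocks of wavelength points, one per worker.
    size = -(-len(points) // n_workers)  # ceiling division
    blocks = [points[i * size:(i + 1) * size] for i in range(n_workers)]

    # queues[r] carries the state entering worker r's first wavelength point.
    queues = [queue.Queue() for _ in range(n_workers)]
    results = [None] * n_workers
    threads = []
    for r in range(n_workers):
        out_q = queues[r + 1] if r + 1 < n_workers else None
        t = threading.Thread(target=worker,
                             args=(r, blocks[r], queues[r], out_q, results))
        t.start()
        threads.append(t)

    # Inject the initial value at the very first wavelength point.
    queues[0].put(initial_state)
    for t in threads:
        t.join()
    return results
```

Note that this sketch shows only the wavelength dependency chain: each stage must wait for its predecessor's final state, exactly the initial-value structure the abstract describes. In the real code the gains come from overlapping the large per-point NLTE and opacity work across processors and from pooling the distributed memory, not from breaking that chain.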
- OSTI ID:
- 678815
- Journal Information:
- Astrophysical Journal, Vol. 495, Issue 1; Publication Date: March 1998
- Country of Publication:
- United States
- Language:
- English