The Tradeoffs of Fused Memory Hierarchies in Heterogeneous Computing Architectures
With the rise of general-purpose computing on graphics processing units (GPGPU), the influence of consumer markets can now be seen across the spectrum of computer architectures. In fact, many of the highest-ranked Top500 HPC systems now include these accelerators. Traditionally, GPUs have been connected to the CPU via the PCIe bus, which has proved to be a significant bottleneck for scalable scientific applications. Now, a trend toward tighter integration of CPU and GPU has removed this bottleneck and unified the memory hierarchy for both CPU and GPU cores. We examine the impact of this trend on high-performance scientific computing by using AMD's new Fusion Accelerated Processing Unit (APU) as a testbed. In particular, we evaluate the tradeoffs in performance, power consumption, and programmability when comparing this unified memory hierarchy with similar, but discrete, GPUs.
- Research Organization:
- Oak Ridge National Laboratory (ORNL)
- Sponsoring Organization:
- ORNL Work for Others; USDOE Office of Science (SC)
- DOE Contract Number:
- AC05-00OR22725
- OSTI ID:
- 1048726
- Country of Publication:
- United States
- Language:
- English