Can high bandwidth and latency justify large cache blocks in scalable multiprocessors?
- Univ. of Rochester, NY (United States)
An important architectural design decision affecting the performance of coherent caches is the choice of block size. Two primary factors influence this choice: the reference behavior of applications, and the remote access bandwidth and latency of the machine. Given that we anticipate increases in both network bandwidth and latency (in processor cycles) in scalable shared-memory multiprocessors, the question arises as to what effect these increases will have on the choice of block size. We use analytical modeling and execution-driven simulation of parallel programs on a large-scale shared-memory machine to examine the relationship between cache block size and application performance as a function of remote access bandwidth and latency. We show that even under assumptions of high remote access bandwidth and latency, the best application performance usually results from cache blocks between 32 and 128 bytes in size. We also show that modifying a program to remove the dominant source of misses may not increase the best-performing block size. We conclude that large cache blocks cannot be justified in most realistic scenarios.
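The trade-off the abstract describes can be illustrated with a toy analytical model. The sketch below is not the paper's actual model; it only assumes the standard decomposition of a remote miss cost into a fixed latency plus a block transfer time (block size divided by bandwidth), together with a hypothetical miss-rate curve in which spatial locality reduces misses as blocks grow, until false sharing and wasted transfer reverse the benefit. All parameter values are illustrative assumptions.

```python
# Hypothetical sketch of a miss-cost model (NOT the paper's model).
# Assumptions: remote miss cost = fixed latency + block_size / bandwidth;
# miss rate falls with block size via spatial locality up to a locality
# limit, then rises again due to false sharing. All constants are made up.

def miss_rate(block_size, locality_limit=128, base_rate=0.10):
    """Illustrative miss rate per reference for a given block size (bytes)."""
    useful = min(block_size, locality_limit)
    rate = base_rate * 16 / useful          # spatial locality: fewer misses
    if block_size > locality_limit:         # false-sharing penalty beyond limit
        rate *= block_size / locality_limit
    return rate

def stall_per_ref(block_size, latency_cycles, bandwidth_bytes_per_cycle):
    """Average memory-stall cycles per reference under the toy model."""
    transfer = block_size / bandwidth_bytes_per_cycle
    return miss_rate(block_size) * (latency_cycles + transfer)

def best_block(latency_cycles, bandwidth_bytes_per_cycle,
               candidates=(16, 32, 64, 128, 256, 512, 1024)):
    """Block size minimizing stall time over a set of candidate sizes."""
    return min(candidates,
               key=lambda b: stall_per_ref(b, latency_cycles,
                                           bandwidth_bytes_per_cycle))

# Even with high latency (200 cycles) and generous bandwidth
# (16 bytes/cycle), the optimum stays moderate under these assumptions.
print(best_block(latency_cycles=200, bandwidth_bytes_per_cycle=16))  # → 128
```

Under these assumed parameters the optimum lands at 128 bytes, echoing the abstract's finding that moderate block sizes win even when remote bandwidth and latency are both high: large blocks amortize latency, but false sharing and transfer time overwhelm that gain.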
- OSTI ID: 98914
- Report Number(s): CONF-940856-; CNN: Contract N00014-92-J-1801; Grant CDA-8822724; TRN: 94:008346-0046
- Resource Relation: Conference: 1994 international conference on parallel processing, St. Charles, IL (United States), 15-19 Aug 1994; Other Information: PBD: 1994; Related Information: Is Part Of Proceedings of the 1994 international conference on parallel processing. Volume 1: Architecture; Agrawal, D.P. [ed.]; PB: 330 p.
- Country of Publication: United States
- Language: English
Similar Records
The effectiveness of caches and data prefetch buffers in large-scale shared memory multiprocessors
A mean-value performance analysis of a new multiprocessor architecture