OSTI.GOV title logo U.S. Department of Energy
Office of Scientific and Technical Information

Title: Spatial Locality-Aware Cache Partitioning for Effective Cache Sharing

Gupta, Saurabh [1]; Zhou, Huiyang [2]
  1. Oak Ridge National Laboratory (ORNL)
  2. North Carolina State University (NCSU), Raleigh
Publication Date: 2015
Research Org.:
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF)
Sponsoring Org.:
OSTI Identifier:
DOE Contract Number:
Resource Type: Conference
Resource Relation:
Conference: International Conference on Parallel Processing (ICPP-2015), Beijing, China, September 1-4, 2015
Country of Publication:
United States

Citation Formats

Gupta, Saurabh, and Zhou, Huiyang. Spatial Locality-Aware Cache Partitioning for Effective Cache Sharing. United States: N. p., 2015. Web.
Gupta, Saurabh, & Zhou, Huiyang. Spatial Locality-Aware Cache Partitioning for Effective Cache Sharing. United States.
Gupta, Saurabh, and Zhou, Huiyang. 2015. "Spatial Locality-Aware Cache Partitioning for Effective Cache Sharing". United States.
@inproceedings{gupta2015spatial,
  title = {Spatial Locality-Aware Cache Partitioning for Effective Cache Sharing},
  author = {Gupta, Saurabh and Zhou, Huiyang},
  booktitle = {International Conference on Parallel Processing (ICPP-2015)},
  place = {United States},
  year = {2015},
  month = {1}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

Similar Records:
  • In a distributed environment the utilization of file buffer caches in different clients may vary greatly. Cooperative caching is used to increase cache utilization by coordinating the usage of distributed caches. Existing cooperative caching protocols mainly address organizational issues, paying little attention to exploiting locality of file access patterns. We propose a locality-aware cooperative caching protocol, called LAC, that is based on analysis and manipulation of data block reuse distance to effectively predict cache utilization and the probability of data reuse. Using a dynamically controlled synchronization technique, we make local information consistently comparable among clients. The system is highly scalable in the sense that global coordination is achieved without centralized control.
  • While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
  • In this paper, the authors introduce a compiler-directed coherence scheme which can exploit most of the temporal and spatial locality across task boundaries. It requires only an extended tag field per cache word, one modified memory access instruction, and a counter called the epoch counter in each processor. By using the epoch counter as a system-wide version number, the scheme simplifies the cache hardware of previous version control or timestamp-based schemes, but still exploits most of the temporal and spatial locality across task boundaries. The authors present a compiler algorithm to generate the appropriate memory access instructions for the proposed scheme. The algorithm is based on a data flow analysis technique. It identifies potential stale references by examining memory reference patterns in a source program.
  • High-performance scientific computing relies increasingly on high-level large-scale object-oriented software frameworks to manage both algorithmic complexity and the complexities of parallelism: distributed data management, process management, inter-process communication, and load balancing. This encapsulation of data management, together with the prescribed semantics of a typical fundamental component of such object-oriented frameworks (a parallel or serial array-class library) provides an opportunity for increasingly sophisticated compile-time optimization techniques. This paper describes a technique for introducing cache blocking suitable for certain classes of numerical algorithms, demonstrates and analyzes the resulting performance gains, and indicates how this optimization transformation is being automated.
  • This paper presents novel cache optimizations for massively parallel, throughput-oriented architectures like GPUs. Based on the reuse characteristics of GPU workloads, we propose a design that integrates such efficient locality filtering capability into the decoupled tag store of the existing L1 D-cache through simple and cost-effective hardware extensions.
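The LAC abstract above predicts cache utilization from data block reuse distance. A minimal sketch of the underlying measurement, the LRU stack distance of each access (not LAC's actual protocol, and the trace/function names are illustrative), is:

```python
from collections import OrderedDict

def reuse_distances(trace):
    """Compute LRU stack (reuse) distances for a block-access trace.

    Returns one entry per access: the number of distinct blocks touched
    since the previous access to the same block, or None for a
    first-time (cold) access.
    """
    stack = OrderedDict()              # most-recently-used block is last
    dists = []
    for block in trace:
        if block in stack:
            keys = list(stack.keys())
            # distance = distinct blocks accessed more recently than `block`
            dists.append(len(keys) - 1 - keys.index(block))
            stack.move_to_end(block)   # refresh recency
        else:
            dists.append(None)         # cold miss
            stack[block] = True
    return dists
```

Under LRU, an access hits a cache of capacity C exactly when its reuse distance is less than C, which is why a reuse-distance profile lets a protocol estimate each client's hit probability without simulating every cache size.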
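The stream-reasoning abstract above argues that eviction should use expiration timestamps, not just arrival time. A small sketch of that idea (illustrative only; the class and method names are assumptions, not C-SPARQL's implementation) is:

```python
import heapq

class ExpiryAwareWindow:
    """Bounded stream window that evicts by expiration time when the
    source provides one, falling back to arrival time otherwise."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []                 # (eviction_key, seq, item) min-heap
        self.seq = 0                   # tie-breaker for equal keys

    def insert(self, item, arrival, expiration=None):
        """Add an item; return any items evicted to stay within capacity."""
        key = expiration if expiration is not None else arrival
        heapq.heappush(self.heap, (key, self.seq, item))
        self.seq += 1
        evicted = []
        while len(self.heap) > self.capacity:
            evicted.append(heapq.heappop(self.heap)[2])
        return evicted

    def purge_expired(self, now):
        """Drop and return items whose eviction key has passed."""
        dropped = []
        while self.heap and self.heap[0][0] <= now:
            dropped.append(heapq.heappop(self.heap)[2])
        return dropped
```

When capacity pressure forces an eviction, the item closest to expiry goes first, so a fact that expires soon is never kept at the expense of longer-lived data, which is the mismatch the abstract identifies in pure arrival-time windows.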
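The compiler-directed coherence abstract above tags each cached word with the epoch in which it was filled and lets the compiler mark possibly-stale references. A behavioral sketch of that check (an illustration of the idea, not the paper's hardware design) is:

```python
class EpochCache:
    """Model of epoch-counter coherence: a cached word whose fill epoch
    predates the current epoch is refetched, but only for accesses the
    compiler flagged as potentially stale."""

    def __init__(self):
        self.epoch = 0                 # system-wide version number
        self.lines = {}                # addr -> (value, fill_epoch)

    def new_task(self):
        self.epoch += 1                # advance at a task boundary

    def read(self, addr, memory, may_be_stale):
        if addr in self.lines:
            value, fill_epoch = self.lines[addr]
            if not (may_be_stale and fill_epoch < self.epoch):
                return value, "hit"    # safe to reuse cached copy
        # stale or absent: fetch from memory and re-tag with current epoch
        value = memory[addr]
        self.lines[addr] = (value, self.epoch)
        return value, "miss"
```

References the data-flow analysis proves safe keep hitting across task boundaries, which is how the scheme retains temporal and spatial locality while avoiding per-line invalidation hardware.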
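The cache-blocking abstract above describes a transformation an array-class framework can apply automatically. The classic form of that transformation, shown here on a matrix transpose (a generic textbook example, not the paper's framework), restructures loops into tiles so each tile stays cache-resident:

```python
def blocked_transpose(a, n, b):
    """Cache-blocked (tiled) transpose of an n x n row-major matrix
    stored as a flat list, using b x b tiles."""
    out = [0] * (n * n)
    for ii in range(0, n, b):          # walk the matrix tile by tile
        for jj in range(0, n, b):
            for i in range(ii, min(ii + b, n)):
                for j in range(jj, min(jj + b, n)):
                    out[j * n + i] = a[i * n + j]
    return out
```

The result is identical to the untiled loop nest; only the traversal order changes, so both `a` and `out` are touched in b-wide bursts instead of strided full-row sweeps, which is what makes the transformation safe for a compiler to introduce automatically.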