DOE Patents, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Network-aware cache coherence protocol enhancement

Abstract

A non-uniform memory access (NUMA) system includes several nodes, each with one or more processors, caches, local main memory, and a local bus that connects the node's processor(s) to its memory. The nodes are coupled to one another over a collection of point-to-point interconnects, permitting processors in one node to access data stored in another node. Remote memory accesses take longer than local ones because requests must travel across a communications network to reach the requesting processor. In some embodiments, inter-cache and main-memory-to-cache latencies are measured to determine whether a memory access request is satisfied more efficiently from a cached copy in an owning node's cache or from the main memory of the home node.
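The selection the abstract describes can be illustrated with a minimal sketch. All names, units, and latency values below are hypothetical, chosen only to show the comparison; the patent itself does not specify this interface.

```python
# Illustrative sketch only: function name, parameters, and values are
# hypothetical, not from the patent. It models the abstract's idea of
# choosing between an owning node's cached copy and the home node's
# main memory based on measured latencies.

def choose_source(cache_to_cache_ns: float, memory_to_cache_ns: float) -> str:
    """Pick the faster source for a remote read, given measured latencies."""
    # Serve from the owning node's cache only when the measured
    # cache-to-cache transfer beats a DRAM read at the home node.
    if cache_to_cache_ns < memory_to_cache_ns:
        return "owner-cache"
    return "home-memory"

# A congested interconnect can flip the decision toward home memory.
print(choose_source(90.0, 140.0))   # fast cache-to-cache path: owner-cache
print(choose_source(210.0, 140.0))  # congested path: home-memory
```

In a real protocol this comparison would be made per request (or per epoch of measurements) by the coherence controller rather than in software, but the decision rule is the same.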

Inventors:
Roberts, David A.; Fatehi, Ehsan
Issue Date:
2019 Sep
Research Org.:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1576397
Patent Number(s):
10,402,327
Application Number:
15/358,318
Assignee:
Advanced Micro Devices, Inc. (Santa Clara, CA)
DOE Contract Number:  
AC52-07NA27344; B608045
Resource Type:
Patent
Resource Relation:
Patent File Date: 2016 Nov 22
Country of Publication:
United States
Language:
English

Citation Formats

Roberts, David A., and Fatehi, Ehsan. Network-aware cache coherence protocol enhancement. United States: N. p., 2019. Web.
Roberts, David A., & Fatehi, Ehsan. Network-aware cache coherence protocol enhancement. United States.
Roberts, David A., and Fatehi, Ehsan. 2019. "Network-aware cache coherence protocol enhancement". United States. https://www.osti.gov/servlets/purl/1576397.
@article{osti_1576397,
title = {Network-aware cache coherence protocol enhancement},
author = {Roberts, David A. and Fatehi, Ehsan},
abstractNote = {A non-uniform memory access system includes several nodes that each have one or more processors, caches, local main memory, and a local bus that connects a node's processor(s) to its memory. The nodes are coupled to one another over a collection of point-to-point interconnects, thereby permitting processors in one node to access data stored in another node. Memory access time for remote memory takes longer than local memory because remote memory accesses have to travel across a communications network to arrive at the requesting processor. In some embodiments, inter-cache and main-memory-to-cache latencies are measured to determine whether it would be more efficient to satisfy memory access requests using cached copies stored in caches of owning nodes or from main memory of home nodes.},
url = {https://www.osti.gov/servlets/purl/1576397},
place = {United States},
year = {2019},
month = {9}
}

Works referenced in this record:

System and method for performing a speculative cache fill
patent, August 2004


System and method for coherence prediction
patent, April 2008