DOE Patents
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Selective data retrieval based on access latency

Abstract

A processor includes multiple processing units (e.g., processor cores), with each processing unit associated with at least one private, dedicated cache. The processor is also associated with a system memory that stores all data that can be accessed by the multiple processing units. A coherency manager (e.g., a coherence directory) of the processor enforces a specified coherency scheme to ensure data coherency between the different caches and between the caches and the system memory. In response to a memory access request to a given cache resulting in a cache miss, the coherency manager identifies the current access latency to the system memory as well as the current access latencies to other caches of the processor. The coherency manager transfers the targeted data to the given cache from the cache or system memory having the lower access latency.
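
The mechanism summarized in the abstract, choosing between a peer cache and system memory based on whichever currently has the lower access latency, can be sketched in a few lines of C++. This is an illustrative sketch only, not the patented implementation; the types, field names, and the selectSource function are assumptions made for the example.

```cpp
// Illustrative sketch only: a latency-aware fill decision on a cache miss.
// The coherency manager compares the current access latency of each peer
// cache that holds the requested line against the current system-memory
// latency, and sources the data from whichever is cheapest right now.
// All names below are assumptions for the example, not the patent's API.
#include <cstdint>
#include <vector>

struct PeerCache {
    int id;                       // identifier of the peer (private) cache
    std::uint32_t latencyCycles;  // current access latency estimate, in cycles
    bool holdsLine;               // whether this cache currently holds the line
};

// Returns the id of the peer cache to fetch from, or -1 to fetch from
// system memory. Memory is always a valid fallback since it holds all data.
int selectSource(const std::vector<PeerCache>& peers,
                 std::uint32_t memoryLatencyCycles) {
    int best = -1;                               // -1 denotes system memory
    std::uint32_t bestLatency = memoryLatencyCycles;
    for (const PeerCache& peer : peers) {
        if (peer.holdsLine && peer.latencyCycles < bestLatency) {
            bestLatency = peer.latencyCycles;
            best = peer.id;
        }
    }
    return best;
}
```

On a miss, the coherency manager would call selectSource with up-to-date latency estimates (for example, reflecting current interconnect congestion) and forward the fill request to the returned source.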

Inventors:
Eckert, Yasuko
Issue Date:
December 2019
Research Org.:
Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1600387
Patent Number(s):
10503640
Application Number:
15/960,875
Assignee:
Advanced Micro Devices, Inc. (Santa Clara, CA)
Patent Classifications (CPCs):
G - PHYSICS > G06 - COMPUTING > G06F - ELECTRIC DIGITAL DATA PROCESSING
DOE Contract Number:  
AC52-07NA27344; B620717
Resource Type:
Patent
Resource Relation:
Patent File Date: 04/24/2018
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Eckert, Yasuko. Selective data retrieval based on access latency. United States: N. p., 2019. Web.
Eckert, Yasuko. Selective data retrieval based on access latency. United States.
Eckert, Yasuko. 2019. "Selective data retrieval based on access latency". United States. https://www.osti.gov/servlets/purl/1600387.
@article{osti_1600387,
title = {Selective data retrieval based on access latency},
author = {Eckert, Yasuko},
abstractNote = {A processor includes multiple processing units (e.g., processor cores), with each processing unit associated with at least one private, dedicated cache. The processor is also associated with a system memory that stores all data that can be accessed by the multiple processing units. A coherency manager (e.g., a coherence directory) of the processor enforces a specified coherency scheme to ensure data coherency between the different caches and between the caches and the system memory. In response to a memory access request to a given cache resulting in a cache miss, the coherency manager identifies the current access latency to the system memory as well as the current access latencies to other caches of the processor. The coherency manager transfers the targeted data to the given cache from the cache or system memory having the lower access latency.},
place = {United States},
year = {2019},
month = {12}
}
