DOE Patents
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Device and method for cache utilization aware data compression

Abstract

A processing device is provided which includes memory and at least one processor. The memory includes main memory and cache memory in communication with the main memory via a link. The at least one processor is configured to receive a request for a cache line and read the cache line from main memory. The at least one processor is also configured to compress the cache line according to a compression algorithm and, when the compressed cache line includes at least one byte predicted not to be accessed, drop the at least one byte from the compressed cache line based on whether the compression algorithm is determined to successfully compress the cache line according to a compression parameter.
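The decision flow described in the abstract can be illustrated with a minimal sketch. Everything concrete in it is an assumption made for illustration rather than a detail taken from the patent: a 64-byte line, a toy run-length encoder standing in for the unspecified compression algorithm, a per-byte prediction mask, and a size threshold standing in for the "compression parameter". It shows one plausible reading of the logic: send the compressed line if it meets the threshold; otherwise drop the bytes predicted not to be accessed and compress the remainder.

/*
 * Illustrative sketch only; names, sizes, and the toy compressor are
 * assumptions, not the patented implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_BYTES 64

/* Toy stand-in compressor: reports the size of a simple run-length
 * encoding (value byte + run-length byte per run). */
static size_t toy_compress_size(const uint8_t *data, size_t len)
{
    size_t out = 0;
    for (size_t i = 0; i < len; ) {
        size_t run = 1;
        while (i + run < len && data[i + run] == data[i] && run < 255)
            run++;
        out += 2;
        i += run;
    }
    return out;
}

/* One plausible reading of the abstract: compress the line; if the result
 * does not meet the size threshold (the "compression parameter") and some
 * bytes are predicted not to be accessed, drop those bytes and compress
 * only what remains. */
static size_t bytes_to_transfer(const uint8_t *line,
                                const bool *predicted_unused,
                                size_t size_threshold)
{
    size_t compressed = toy_compress_size(line, LINE_BYTES);
    if (compressed <= size_threshold)
        return compressed;               /* compression alone succeeded */

    uint8_t kept[LINE_BYTES];
    size_t n = 0;
    for (size_t i = 0; i < LINE_BYTES; i++)
        if (!predicted_unused[i])
            kept[n++] = line[i];         /* keep only predicted-useful bytes */

    if (n == LINE_BYTES)
        return compressed;               /* nothing predicted droppable */

    return toy_compress_size(kept, n);
}

int main(void)
{
    uint8_t line[LINE_BYTES];
    bool unused[LINE_BYTES] = { false };

    /* Example: lower half is compressible zeros; upper half is irregular
     * data predicted not to be accessed. */
    memset(line, 0, 32);
    for (size_t i = 32; i < LINE_BYTES; i++) {
        line[i] = (uint8_t)(i * 37u + 11u);
        unused[i] = true;
    }

    printf("bytes sent over the link: %zu\n",
           bytes_to_transfer(line, unused, 32));
    return 0;
}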

Inventors:
Das, Shomit N.; Punniyamurthy, Kishore; Tomei, Matthew; Beckmann, Bradford M.
Issue Date:
Research Org.:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1771643
Patent Number(s):
10838727
Application Number:
16/220,508
Assignee:
Advanced Micro Devices, Inc. (Santa Clara, CA)
Patent Classifications (CPCs):
G - PHYSICS > G06 - COMPUTING > G06F - ELECTRIC DIGITAL DATA PROCESSING
DOE Contract Number:  
AC52-07NA27344
Resource Type:
Patent
Resource Relation:
Patent File Date: 12/14/2018
Country of Publication:
United States
Language:
English

Citation Formats

Das, Shomit N., Punniyamurthy, Kishore, Tomei, Matthew, and Beckmann, Bradford M. Device and method for cache utilization aware data compression. United States: N. p., 2020. Web.
Das, Shomit N., Punniyamurthy, Kishore, Tomei, Matthew, & Beckmann, Bradford M. Device and method for cache utilization aware data compression. United States.
Das, Shomit N., Punniyamurthy, Kishore, Tomei, Matthew, and Beckmann, Bradford M. 2020. "Device and method for cache utilization aware data compression". United States. https://www.osti.gov/servlets/purl/1771643.
@article{osti_1771643,
title = {Device and method for cache utilization aware data compression},
author = {Das, Shomit N. and Punniyamurthy, Kishore and Tomei, Matthew and Beckmann, Bradford M.},
abstractNote = {A processing device is provided which includes memory and at least one processor. The memory includes main memory and cache memory in communication with the main memory via a link. The at least one processor is configured to receive a request for a cache line and read the cache line from main memory. The at least one processor is also configured to compress the cache line according to a compression algorithm and, when the compressed cache line includes at least one byte predicted not to be accessed, drop the at least one byte from the compressed cache line based on whether the compression algorithm is determined to successfully compress the cache line according to a compression parameter.},
place = {United States},
year = {2020},
month = {11}
}
