Device and method for cache utilization aware data compression
Abstract
A processing device is provided which includes memory and at least one processor. The memory includes main memory and cache memory in communication with the main memory via a link. The at least one processor is configured to receive a request for a cache line and read the cache line from main memory. The at least one processor is also configured to compress the cache line according to a compression algorithm and, when the compressed cache line includes at least one byte predicted not to be accessed, drop the at least one byte from the compressed cache line based on whether the compression algorithm is determined to successfully compress the cache line according to a compression parameter.
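The dropping decision described above can be illustrated with a minimal sketch. This is not the patented implementation; it assumes a 64-byte cache line, a toy zero-byte-elimination compressor, and a threshold-style "compression parameter" of 32 bytes (half a line). All names and the predictor interface are illustrative.

```python
# Hedged sketch of cache-utilization-aware compression.
# Assumptions (not from the patent): 64-byte lines, zero-elimination
# compression with an 8-byte presence mask, and a size threshold as
# the "compression parameter".

CACHE_LINE_SIZE = 64
COMPRESSION_PARAMETER = 32  # target compressed payload size in bytes


def compress(line: bytes) -> bytes:
    """Toy compressor: drop zero bytes, prepend a 64-bit presence mask."""
    mask = 0
    payload = bytearray()
    for i, b in enumerate(line):
        if b != 0:
            mask |= 1 << i
            payload.append(b)
    return mask.to_bytes(8, "little") + bytes(payload)


def compress_line(line: bytes, predicted_unused: set[int]) -> bytes:
    """Compress a line; if compression alone does not meet the
    compression parameter, drop bytes the predictor marks as
    unlikely to be accessed and recompress."""
    compressed = compress(line)
    if len(compressed) <= COMPRESSION_PARAMETER:
        return compressed  # compression succeeded on its own
    # Zero out (drop) predicted-dead bytes, then recompress.
    trimmed = bytes(0 if i in predicted_unused else b
                    for i, b in enumerate(line))
    return compress(trimmed)
```

Under these assumptions, a highly compressible line is stored as-is after compression, while an incompressible line sheds only the bytes a predictor expects never to be read, trading accuracy of those bytes for link bandwidth.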
- Inventors:
- Das, Shomit N.; Punniyamurthy, Kishore; Tomei, Matthew; Beckmann, Bradford M.
- Issue Date:
- Tue Nov 17, 2020
- Research Org.:
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)
- Sponsoring Org.:
- USDOE
- OSTI Identifier:
- 1771643
- Patent Number(s):
- 10838727
- Application Number:
- 16/220,508
- Assignee:
- Advanced Micro Devices, Inc. (Santa Clara, CA)
- Patent Classifications (CPCs):
- G - PHYSICS; G06 - COMPUTING; G06F - ELECTRIC DIGITAL DATA PROCESSING
- DOE Contract Number:
- AC52-07NA27344
- Resource Type:
- Patent
- Resource Relation:
- Patent File Date: 12/14/2018
- Country of Publication:
- United States
- Language:
- English
Citation Formats
Das, Shomit N., Punniyamurthy, Kishore, Tomei, Matthew, and Beckmann, Bradford M. Device and method for cache utilization aware data compression. United States: N. p., 2020. Web.
Das, Shomit N., Punniyamurthy, Kishore, Tomei, Matthew, & Beckmann, Bradford M. Device and method for cache utilization aware data compression. United States.
Das, Shomit N., Punniyamurthy, Kishore, Tomei, Matthew, and Beckmann, Bradford M. "Device and method for cache utilization aware data compression". United States. Tue, 17 Nov 2020. https://www.osti.gov/servlets/purl/1771643.
@article{osti_1771643,
title = {Device and method for cache utilization aware data compression},
author = {Das, Shomit N. and Punniyamurthy, Kishore and Tomei, Matthew and Beckmann, Bradford M.},
abstractNote = {A processing device is provided which includes memory and at least one processor. The memory includes main memory and cache memory in communication with the main memory via a link. The at least one processor is configured to receive a request for a cache line and read the cache line from main memory. The at least one processor is also configured to compress the cache line according to a compression algorithm and, when the compressed cache line includes at least one byte predicted not to be accessed, drop the at least one byte from the compressed cache line based on whether the compression algorithm is determined to successfully compress the cache line according to a compression parameter.},
doi = {},
journal = {},
number = {},
volume = {},
place = {United States},
year = {2020},
month = {nov}
}
Works referenced in this record:
Storage Cache Performance by Using Compressibility of the Data as a Criteria for Cache Insertion
patent-application, September 2016
- Coulson, Richard L.
- US Patent Application 14/672093; 20160283390
A Survey Of Architectural Approaches for Data Compression in Cache and Main Memory Systems
journal, May 2016
- Mittal, Sparsh; Vetter, Jeffrey S.
- IEEE Transactions on Parallel and Distributed Systems, Vol. 27, Issue 5
Dynamic Caching Module Selection for Optimized Data Deduplication
patent-application, September 2014
- Callaway, Robert D.; Papapanagiotou, Ioannis
- US Patent Application 13/800289; 20140281258
System and Method for Dictionary-based Cache-line Level Code Compression for On-chip Memories Using Gradual Bit Removal
patent-application, December 2015
- Ansari, Amin; Bica, Vito; Senior, Richard
- US Patent Application 14/318564; 20150381201
Method for Maximum Data Reduction Combining Compression with Deduplication in Storage Arrays
patent-application, January 2020
- Faibish, Sorin; Rafikov, Rustem; Armangau, Philippe
- US Patent Application 16/031910; 20200019329