Network cache injection for coherent GPUs
Abstract
Methods, devices, and systems for GPU cache injection. A GPU compute node includes a network interface controller (NIC) with NIC receiver circuitry which can receive data for processing on the GPU, and NIC transmitter circuitry which can send the data to a main memory of the GPU compute node and send coherence information, based on the data, to a coherence directory of the GPU compute node. The GPU compute node also includes a GPU with GPU receiver circuitry which can receive the coherence information; GPU processing circuitry which can determine, based on the coherence information, whether the data satisfies a heuristic; and GPU loading circuitry which can load the data into a cache of the GPU from the main memory if the data satisfies the heuristic.
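The flow described in the abstract can be sketched in software terms: the NIC writes incoming data to main memory, emits coherence information, and the GPU decides from that information whether to pull the data into its cache. The sketch below is a minimal simulation of that control flow; all class names are illustrative, and the specific heuristic (inject only small payloads the GPU is expected to consume) is an assumption, since the patent abstract does not name one.

```python
# Illustrative simulation of heuristic-driven NIC-to-GPU cache injection.
# Class names and the injection heuristic are assumptions, not from the patent.
from dataclasses import dataclass


@dataclass
class CoherenceInfo:
    """Coherence message the NIC sends toward the directory/GPU."""
    address: int
    size: int
    gpu_will_consume: bool  # hint that the GPU will soon read this data


class MainMemory:
    def __init__(self):
        self.store = {}

    def write(self, addr, data):
        self.store[addr] = data

    def read(self, addr):
        return self.store[addr]


class GPU:
    CACHE_INJECT_MAX = 4096  # heuristic threshold (illustrative value)

    def __init__(self, memory):
        self.memory = memory
        self.cache = {}

    def on_coherence_info(self, info):
        # Heuristic: inject only small payloads the GPU is about to consume;
        # large or unneeded payloads stay in main memory until demand-fetched.
        if info.gpu_will_consume and info.size <= self.CACHE_INJECT_MAX:
            self.cache[info.address] = self.memory.read(info.address)
            return True
        return False


class NIC:
    def __init__(self, memory, gpu):
        self.memory = memory
        self.gpu = gpu

    def receive(self, addr, data, gpu_will_consume):
        # Step 1: deliver the payload into the node's main memory (DMA).
        self.memory.write(addr, data)
        # Step 2: send coherence information describing the new data.
        info = CoherenceInfo(addr, len(data), gpu_will_consume)
        # Step 3: the GPU applies its heuristic and may inject into its cache.
        return self.gpu.on_coherence_info(info)
```

In this sketch a small message destined for the GPU lands in both main memory and the GPU cache in one step, while a bulk transfer lands only in main memory, which is the intended benefit of injection: the GPU avoids a demand miss on latency-sensitive data.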
- Inventors:
- LeBeane, Michael W.; Benton, Walter B.; Agarwala, Vinay
- Issue Date:
- June 2023
- Research Org.:
- Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Advanced Micro Devices, Inc., Sunnyvale, CA (United States)
- Sponsoring Org.:
- USDOE
- OSTI Identifier:
- 1998535
- Patent Number(s):
- 11687460
- Application Number:
- 15/498,076
- Assignee:
- Advanced Micro Devices, Inc. (Sunnyvale, CA)
- DOE Contract Number:
- AC52-07NA27344; B609201
- Resource Type:
- Patent
- Resource Relation:
- Patent File Date: 04/26/2017
- Country of Publication:
- United States
- Language:
- English
Citation Formats
LeBeane, Michael W., Benton, Walter B., and Agarwala, Vinay. Network cache injection for coherent GPUs. United States: N. p., 2023. Web. https://www.osti.gov/servlets/purl/1998535.
@article{osti_1998535,
title = {Network cache injection for coherent GPUs},
author = {LeBeane, Michael W. and Benton, Walter B. and Agarwala, Vinay},
abstractNote = {Methods, devices, and systems for GPU cache injection. A GPU compute node includes a network interface controller (NIC) with NIC receiver circuitry which can receive data for processing on the GPU, and NIC transmitter circuitry which can send the data to a main memory of the GPU compute node and send coherence information, based on the data, to a coherence directory of the GPU compute node. The GPU compute node also includes a GPU with GPU receiver circuitry which can receive the coherence information; GPU processing circuitry which can determine, based on the coherence information, whether the data satisfies a heuristic; and GPU loading circuitry which can load the data into a cache of the GPU from the main memory if the data satisfies the heuristic.},
place = {United States},
year = {2023},
month = {6}
}
Works referenced in this record:
Network Interface Card for a Computing Node of a Parallel Computer Accelerated by General Purpose Graphics Processing Units, and Related Inter-Node Communication Method
patent-application, February 2015
- Rossetti, Davide
- US Patent Application 14/377493; 2015/0039793
Message passing on data-parallel architectures
conference, May 2009
- Stuart, Jeff A.; Owens, John D.
- 2009 IEEE International Symposium on Parallel & Distributed Processing (IPDPS)
Infiniband-Verbs on GPU: A Case Study of Controlling an Infiniband Network Device from the GPU
conference, May 2014
- Oden, Lena; Fröning, Holger; Pfreundt, Franz-Josef
- 2014 IEEE International Parallel & Distributed Processing Symposium Workshops (IPDPSW)
Methods and apparatus for managing a flow of packets using change and reply signals
patent, September 2003
- Waclawsky, John G.; Chawla, Hamesh
- US Patent Document 6,628,610
Remote Task Queuing by Networked Computing Devices
patent-application, November 2014
- Reinhardt, Steven K.; Chu, Michael L.; Tipparaju, Vinod
- US Patent Application 14/164220; 2014/0331230
System and Method for Accelerating Network Applications Using an Enhanced Network Interface and Massively Parallel Distributed Processing
patent-application, June 2017
- Bernath, Tracey
- US Patent Application 15/454671; 2017/0180272