DOE Patents
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Network cache injection for coherent GPUs

Abstract

Methods, devices, and systems for GPU cache injection. A GPU compute node includes a network interface controller (NIC) which includes NIC receiver circuitry which can receive data for processing on the GPU, and NIC transmitter circuitry which can send the data to a main memory of the GPU compute node and which can send coherence information to a coherence directory of the GPU compute node based on the data. The GPU compute node also includes a GPU which includes GPU receiver circuitry which can receive the coherence information; GPU processing circuitry which can determine, based on the coherence information, whether the data satisfies a heuristic; and GPU loading circuitry which can load the data into a cache of the GPU from the main memory if the data satisfies the heuristic.
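The flow the abstract describes — NIC writes incoming data to main memory, publishes coherence information to a directory, and the GPU decides from that information alone whether to pre-load (inject) the data into its cache — can be sketched as a toy model. This is an illustrative sketch only, not the patented implementation; all class names, the `flagged_for_gpu` hint, and the size-threshold heuristic are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class CoherenceInfo:
    """Hypothetical coherence-directory entry published by the NIC."""
    address: int
    size: int
    flagged_for_gpu: bool  # assumed hint that the payload targets the GPU

class GPUComputeNode:
    CACHE_INJECT_LIMIT = 4096  # assumed heuristic: inject only small payloads

    def __init__(self):
        self.main_memory = {}          # address -> payload
        self.coherence_directory = []  # coherence updates visible to the GPU
        self.gpu_cache = {}            # address -> payload injected into GPU cache

    def nic_receive(self, address, payload, flagged_for_gpu=True):
        # NIC transmitter circuitry: send data to main memory, then send
        # coherence information to the coherence directory based on the data.
        self.main_memory[address] = payload
        info = CoherenceInfo(address, len(payload), flagged_for_gpu)
        self.coherence_directory.append(info)
        self.gpu_on_coherence_update(info)

    def satisfies_heuristic(self, info):
        # GPU processing circuitry: decide from the coherence information alone.
        return info.flagged_for_gpu and info.size <= self.CACHE_INJECT_LIMIT

    def gpu_on_coherence_update(self, info):
        # GPU loading circuitry: load from main memory only if the heuristic passes.
        if self.satisfies_heuristic(info):
            self.gpu_cache[info.address] = self.main_memory[info.address]

node = GPUComputeNode()
node.nic_receive(0x1000, b"x" * 64)    # small payload: injected into GPU cache
node.nic_receive(0x2000, b"x" * 8192)  # large payload: left in main memory only
print(0x1000 in node.gpu_cache, 0x2000 in node.gpu_cache)  # → True False
```

The point of the heuristic is that injection is selective: data unlikely to be touched soon (here, large payloads) stays in main memory rather than polluting the GPU cache, while small, GPU-flagged payloads are placed in cache before the GPU ever issues a load.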

Inventors:
LeBeane, Michael W.; Benton, Walter B.; Agarwala, Vinay
Issue Date:
Research Org.:
Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Advanced Micro Devices, Inc., Sunnyvale, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1998535
Patent Number(s):
11687460
Application Number:
15/498,076
Assignee:
Advanced Micro Devices, Inc. (Sunnyvale, CA)
DOE Contract Number:  
AC52-07NA27344; B609201
Resource Type:
Patent
Resource Relation:
Patent File Date: 04/26/2017
Country of Publication:
United States
Language:
English

Citation Formats

LeBeane, Michael W., Benton, Walter B., and Agarwala, Vinay. Network cache injection for coherent GPUs. United States: N. p., 2023. Web.
LeBeane, Michael W., Benton, Walter B., & Agarwala, Vinay. Network cache injection for coherent GPUs. United States.
LeBeane, Michael W., Benton, Walter B., and Agarwala, Vinay. "Network cache injection for coherent GPUs". United States. https://www.osti.gov/servlets/purl/1998535.
@article{osti_1998535,
title = {Network cache injection for coherent GPUs},
author = {LeBeane, Michael W. and Benton, Walter B. and Agarwala, Vinay},
abstractNote = {Methods, devices, and systems for GPU cache injection. A GPU compute node includes a network interface controller (NIC) which includes NIC receiver circuitry which can receive data for processing on the GPU, and NIC transmitter circuitry which can send the data to a main memory of the GPU compute node and which can send coherence information to a coherence directory of the GPU compute node based on the data. The GPU compute node also includes a GPU which includes GPU receiver circuitry which can receive the coherence information; GPU processing circuitry which can determine, based on the coherence information, whether the data satisfies a heuristic; and GPU loading circuitry which can load the data into a cache of the GPU from the main memory if the data satisfies the heuristic.},
doi = {},
journal = {},
number = {},
volume = {},
place = {United States},
year = {2023},
month = {6}
}

Works referenced in this record:

Message passing on data-parallel architectures
conference, May 2009


Infiniband-Verbs on GPU: A Case Study of Controlling an Infiniband Network Device from the GPU
conference, May 2014


Remote Task Queuing by Networked Computing Devices
patent-application, November 2014