OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Schedulers with load-store queue awareness

Abstract

In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
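
For readers who want to see the idea in code, below is a minimal, hypothetical sketch of compile-time LSQ-aware scheduling, not taken from the patent itself: a greedy list scheduler models how many memory operations would be on the LSQ at each cycle and defers further memory instructions while the modeled queue is full. The Instr class and the LSQ_CAPACITY and MEM_LATENCY parameters are assumptions made for this example.

    # Illustrative sketch only: a greedy list scheduler that tracks modeled
    # LSQ occupancy at compile time and delays memory ops while the queue is full.
    from collections import deque
    from dataclasses import dataclass, field

    LSQ_CAPACITY = 8   # assumed hardware LSQ size (entries)
    MEM_LATENCY = 4    # assumed cycles a memory op occupies an LSQ slot

    @dataclass
    class Instr:
        name: str
        is_mem: bool = False
        deps: list = field(default_factory=list)  # predecessors that must issue first

    def schedule(instrs):
        """Issue at most one instruction per cycle without overfilling the modeled LSQ."""
        issued, order, lsq, cycle = set(), [], deque(), 0
        while len(order) < len(instrs):
            # Retire memory ops whose modeled latency has elapsed.
            while lsq and lsq[0] <= cycle:
                lsq.popleft()
            for ins in instrs:
                ready = ins.name not in issued and all(d.name in issued for d in ins.deps)
                if ready and not (ins.is_mem and len(lsq) >= LSQ_CAPACITY):
                    issued.add(ins.name)
                    order.append(ins.name)
                    if ins.is_mem:
                        lsq.append(cycle + MEM_LATENCY)  # reserve a modeled LSQ slot
                    break
            cycle += 1  # advance even on a stall (nothing ready, or LSQ modeled full)
        return order

    # Hypothetical usage: a load followed by an add that depends on it.
    a = Instr("load_a", is_mem=True)
    b = Instr("add_b", deps=[a])
    print(schedule([a, b]))  # ['load_a', 'add_b']

A production scheduler would also weigh latencies, register pressure, and resource constraints; the sketch only shows where a time-varying estimate of LSQ size plugs into scheduling decisions.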

Inventors:
Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.; Sura, Zehra N.
Publication Date:
2017 Jan 24
Research Org.:
INTERNATIONAL BUSINESS MACHINES CORPORATION, Armonk, NY (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1340538
Patent Number(s):
9,552,196
Application Number:
14/744,051
Assignee:
INTERNATIONAL BUSINESS MACHINES CORPORATION (Armonk, NY)
DOE Contract Number:
B599858
Resource Type:
Patent
Resource Relation:
Patent File Date: 2015 Jun 19
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Chen, Tong, Eichenberger, Alexandre E., Jacob, Arpith C., and Sura, Zehra N. Schedulers with load-store queue awareness. United States: N. p., 2017. Web.
Chen, Tong, Eichenberger, Alexandre E., Jacob, Arpith C., & Sura, Zehra N. Schedulers with load-store queue awareness. United States.
Chen, Tong, Eichenberger, Alexandre E., Jacob, Arpith C., and Sura, Zehra N. 2017. "Schedulers with load-store queue awareness". United States. https://www.osti.gov/servlets/purl/1340538.
@article{osti_1340538,
title = {Schedulers with load-store queue awareness},
author = {Chen, Tong and Eichenberger, Alexandre E. and Jacob, Arpith C. and Sura, Zehra N.},
abstractNote = {In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.},
place = {United States},
year = {2017},
month = {jan}
}

Similar Records

  • According to one embodiment, a method for a store operation with a conditional push of a tag value to a queue is provided. The method includes configuring a queue that is accessible by an application, setting a value at an address in a memory device including a memory and a controller, receiving a request for an operation using the value at the address, and performing the operation. The method also includes the controller writing a result of the operation to the address, thus changing the value at the address, the controller determining whether the result of the operation meets a condition, and the controller pushing a tag value to the queue based on the condition being met, where the tag value in the queue indicates to the application that the condition is met. (A first illustrative sketch follows this list.)
  • A method, system and computer program product for implementing load-reserve and store-conditional instructions in a multi-processor computing system. The computing system includes a multitude of processor units and a shared memory cache, and each of the processor units has access to the memory cache. In one embodiment, the method comprises providing the memory cache with a series of reservation registers, and storing in these registers addresses reserved in the memory cache for the processor units as a result of issuing load-reserve requests. In this embodiment, when one of the processor units makes a request to store data in the memory cache using a store-conditional request, the reservation registers are checked to determine if an address in the memory cache is reserved for that processor unit. If an address in the memory cache is reserved for that processor, the data are stored at this address. (See the second sketch after this list.)
  • The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because a recycling function can be input data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions. (See the third sketch after this list.)
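
The following is a minimal, hypothetical Python sketch of the first similar record above (the conditional tag push). Class and parameter names are assumptions made for illustration, not details from that patent: a controller applies the requested operation at an address, writes the result back, and pushes a tag onto an application-visible queue only when the result meets a condition.

    # Illustrative sketch only: a controller that updates a value at an address and
    # conditionally pushes a tag onto a queue the application polls.
    from collections import deque

    class Controller:
        def __init__(self, memory, queue, condition, tag):
            self.memory = memory        # dict: address -> value
            self.queue = queue          # deque shared with the application
            self.condition = condition  # predicate tested on the new value
            self.tag = tag              # tag value the application watches for

        def store_op(self, address, op):
            result = op(self.memory[address])   # perform the requested operation
            self.memory[address] = result       # write the result back to the address
            if self.condition(result):          # condition met?
                self.queue.append(self.tag)     # notify the application via the queue
            return result

    # Hypothetical usage: decrement a counter and tag the queue when it hits zero.
    mem, q = {0x10: 3}, deque()
    ctrl = Controller(mem, q, condition=lambda v: v == 0, tag="DONE")
    for _ in range(3):
        ctrl.store_op(0x10, lambda v: v - 1)
    print(list(q))  # ['DONE']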
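
Likewise, a minimal sketch of the second similar record (load-reserve / store-conditional with reservation registers in a shared cache). The data structures are assumptions for the example; real hardware also handles cache-line granularity, intervening stores, and coherence traffic.

    # Illustrative sketch only: a shared cache with one reservation register per
    # processor unit; a store-conditional succeeds only if the reservation still holds.
    class SharedCache:
        def __init__(self, num_procs):
            self.data = {}                          # address -> value
            self.reservation = [None] * num_procs   # reserved address per processor

        def load_reserve(self, proc, addr):
            self.reservation[proc] = addr           # record the reservation
            return self.data.get(addr, 0)

        def store_conditional(self, proc, addr, value):
            if self.reservation[proc] != addr:      # reservation lost or never made
                return False
            self.data[addr] = value
            for p, r in enumerate(self.reservation):
                if r == addr:                       # a successful store clears all
                    self.reservation[p] = None      # reservations on that address
            return True

    # Hypothetical usage: two processors race to update the same address.
    cache = SharedCache(num_procs=2)
    v = cache.load_reserve(0, 0x40)
    cache.load_reserve(1, 0x40)
    print(cache.store_conditional(1, 0x40, v + 1))  # True: processor 1 wins
    print(cache.store_conditional(0, 0x40, v + 1))  # False: its reservation was cleared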
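
Finally, a minimal sketch in the spirit of the third similar record (store recycling for single-assignment task graphs). A simple reference-counted free list stands in for the paper's recycling functions; this is a deliberate simplification, not the authors' algorithm: a slot is reused once the last consumer of the data item stored in it has run.

    # Illustrative sketch only: recycle a data item's storage slot after its last
    # consumer task finishes, so later items can reuse the slot.
    class SlotPool:
        def __init__(self):
            self.free, self.next_slot = [], 0
            self.pending = {}                 # slot -> consumers still outstanding

        def produce(self, num_consumers):
            slot = self.free.pop() if self.free else self._new_slot()
            self.pending[slot] = num_consumers
            return slot

        def consume(self, slot):
            self.pending[slot] -= 1
            if self.pending[slot] == 0:       # last consumer done: recycle the slot
                self.free.append(slot)

        def _new_slot(self):
            self.next_slot += 1
            return self.next_slot - 1

    # Hypothetical usage: item A (two consumers), item B, then item C reuses A's slot.
    pool = SlotPool()
    s_a = pool.produce(num_consumers=2)
    s_b = pool.produce(num_consumers=1)
    pool.consume(s_a); pool.consume(s_a)
    s_c = pool.produce(num_consumers=1)
    print(s_a, s_b, s_c)  # 0 1 0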