OSTI.GOV title logo U.S. Department of Energy
Office of Scientific and Technical Information

Title: Understanding, allocating, and measuring requirements for capability computing.


No abstract prepared.

Publication Date: March 1, 2006
Research Org.: Sandia National Laboratories
Report Number(s): TRN: US200902%%337
Resource Relation: Conference: Proposed for presentation at the 2006 ASC PI Meeting, held February 27-March 2, 2006, in Las Vegas, NV.
Country of Publication: United States

Citation Formats

Ang, James Alfred. Understanding, allocating, and measuring requirements for capability computing. United States: N. p., 2006. Web.
Ang, James Alfred. Understanding, allocating, and measuring requirements for capability computing. United States, 2006.
Ang, James Alfred. "Understanding, allocating, and measuring requirements for capability computing." United States, 2006.

@misc{ang2006capability,
  title = {Understanding, allocating, and measuring requirements for capability computing},
  author = {Ang, James Alfred},
  abstractNote = {No abstract prepared.},
  place = {United States},
  year = {2006},
  month = {mar}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

Similar records:
  • This presentation provides an overview of present and possible future ways to allocate and assign benefits for reserve requirements.
  • This presentation describes how variability and reserve requirements could be allocated, including how to allocate the benefits of aggregation. Its conclusions are: (1) aggregation provides benefits because individual requirements are not 100% correlated; (2) a method is needed to allocate the reduced requirement among participants; (3) differences between allocation results are subtle: (a) it is not immediately obvious which method is 'better', (b) many are numerically 'correct' in that they sum to the physical requirement, and (c) many are not 'fair', because results depend on sub-aggregation and/or the order in which individuals are included; and (4) the vector allocation method is simple and fair.
  • At the Mound Facility of Monsanto Research Corporation, the problem of scheduling and coordinating numerous manpower resources within their matrix organization has been eased considerably by the Manpower Resource Planning (MRP) System. The system has proven to be successful in promoting the timely completion of plant engineering projects. The system is straightforward. First, the available manpower resources in mandays are identified. Second, the manpower requirements of all the major projects are identified. Then, the requirements for the individual projects are totaled and compared with the available resources. If available manpower does not meet the project requirements, area engineers meet to resolve the manpower conflicts for the tradesmen and other hourly resources. The supervisors for the discipline engineering groups and draftsmen resolve any conflicts within their resources. Requirements and resources are matched by rescheduling projects to reduce requirements, by assigning overtime, or by subcontracting work to increase the resources. The plan developed is then reviewed by management before it is distributed for use in scheduling work. The procedural steps of the MRP concept are illustrated.
  • Accepting the challenge by the Executive Office of the President, Office of Science and Technology Policy for research to keep pace with technology, the author surveys the knowledge domain of advanced microcomputers. The paper provides a general background for social scientists in technology traditionally relegated to computer science and engineering. The concept of systems integration serves as a framework of understanding for the various elements of the knowledge domain of advanced microcomputing. The systems integration framework is viewed as a series of interrelated building blocks composed of the domain elements. These elements are: the processor platform, operating system, display technology, mass storage, application software, and human-computer interface. References come from recent articles in popular magazines and journals to help emphasize the easy access of this information, its appropriate technical level for the social scientist, and its transient currency. 78 refs., 3 figs.
  • This paper documents the need for a significant increase in the computing infrastructure provided to scientists working in the unclassified domains at Lawrence Livermore National Laboratory (LLNL). This need could be viewed as the next step in a broad strategy outlined in the January 2002 White Paper (UCRL-ID-147449) that bears essentially the same name as this document. Therein we wrote: 'This proposed increase could be viewed as a step in a broader strategy linking hardware evolution to applications development that would take LLNL unclassified computational science to a position of distinction if not preeminence by 2006.' This position of distinction has certainly been achieved. This paper provides a strategy for sustaining this success but will diverge from its 2002 predecessor in that it will: (1) Amplify the scientific and external success LLNL has enjoyed because of the investments made in 2002 (MCR, 11 TF) and 2004 (Thunder, 23 TF). (2) Describe in detail the nature of additional investments that are important to meet both the institutional objectives of advanced capability for breakthrough science and the scientists' clearly stated request for adequate capacity and more rapid access to moderate-sized resources. (3) Put these requirements in the context of an overall strategy for simulation science and external collaboration. While our strategy for Multiprogrammatic and Institutional Computing (M&IC) has worked well, three challenges must be addressed to assure and enhance our position. The first is that while we now have over 50 important classified and unclassified simulation codes available for use by our computational scientists, we find ourselves coping with high demand for access and long queue wait times. This point was driven home in the 2005 Institutional Computing Executive Group (ICEG) 'Report Card' to the Deputy Director for Science and Technology (DDST) Office and Computation Directorate management.
The second challenge is related to the balance that should be maintained in the simulation environment. With the advent of Thunder, the institution directed a change in course from past practice. Instead of making Thunder available to the large body of scientists, as was MCR, and effectively using it as a capacity system, the intent was to make it available to perhaps ten projects so that these teams could run very aggressive problems for breakthrough science. This usage model established Thunder as a capability system. The challenge this strategy raises is that the majority of scientists have not seen an improvement in capacity computing resources since MCR, thus creating significant tension in the system. The question then is: 'How do we address the institution's desire to maintain the potential for breakthrough science and also meet the legitimate requests from the ICEG to achieve balance?' Both the capability and the capacity environments must be addressed through this one procurement. The third challenge is to reach out more aggressively to the national science community to encourage access to LLNL resources as part of a strategy for sharpening our science through collaboration. Related to this, LLNL has been unable in the past to provide access for sensitive foreign nationals (SFNs) to the Livermore Computing (LC) unclassified 'yellow' network. Identifying some mechanism for data sharing between LLNL computational scientists and SFNs would be a first practical step in fostering cooperative, collaborative relationships with an important and growing sector of the American science community.
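The aggregation benefit and the 'vector' allocation summarized in the reserve-requirements record above can be sketched numerically. This is a minimal illustration under an assumption not stated in that record: that each participant's requirement is measured as the standard deviation of its variability, so a participant's share is cov(x_i, total) / std(total). All data and participant names below are made up.

```python
import numpy as np

# Hypothetical variability series for three participants (illustrative
# random data only, not from the presentation).
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
participants = {
    "A": 1.0 * base + rng.normal(size=1000),  # partly correlated with B
    "B": 0.5 * base + rng.normal(size=1000),
    "C": rng.normal(size=1000),               # independent
}

x = np.array(list(participants.values()))
total = x.sum(axis=0)

# Individual requirements: each participant's own standard deviation.
individual = x.std(axis=1)

# Aggregate requirement: std of the combined variability. Because the
# series are not 100% correlated, this is less than the simple sum.
aggregate = total.std()
assert aggregate < individual.sum()

# Vector allocation: share_i = cov(x_i, total) / std(total). The shares
# sum exactly to the aggregate requirement and are independent of the
# order in which participants are considered.
shares = np.array([np.cov(xi, total, ddof=0)[0, 1] for xi in x]) / aggregate
print(dict(zip(participants, shares)))
print(shares.sum(), aggregate)  # equal up to floating-point error
```

The sum-to-requirement property falls out algebraically: the covariances of the parts with the total sum to the variance of the total, and dividing by std(total) leaves std(total).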
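The comparison step of the MRP system described in the Mound Facility record above (total the per-project manday requirements, compare with available resources, flag conflicts) can be sketched as follows. All resource groups, project names, and numbers are illustrative, not from the actual system.

```python
# Available manday resources per resource group (hypothetical).
available = {"electricians": 120, "pipefitters": 80, "draftsmen": 200}

# Manday requirements of the major projects (hypothetical).
projects = {
    "boiler upgrade": {"electricians": 60, "pipefitters": 50},
    "lab remodel":    {"electricians": 70, "draftsmen": 90},
}

# Total the requirements for all projects per resource group.
required = {}
for needs in projects.values():
    for group, mandays in needs.items():
        required[group] = required.get(group, 0) + mandays

# Groups where requirements exceed resources: the conflicts to be
# resolved by rescheduling, overtime, or subcontracting.
conflicts = {g: required[g] - available[g]
             for g in required if required[g] > available[g]}
print(conflicts)  # {'electricians': 10}
```

Here electricians are over-committed by 10 mandays (60 + 70 > 120), so that shortfall would go to the area engineers for resolution.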