Note: This page contains sample records for the topic "reporting computer system" from the National Library of Energy Beta (NLEBeta). While these samples are representative of the content of NLEBeta, they are not comprehensive nor are they the most current set. We encourage you to perform a real-time search of NLEBeta to obtain the most current and comprehensive results.


1

Information systems and the overview report for computing curricula 2004  

Science Conference Proceedings (OSTI)

My purpose is to inform you of the draft "Overview Report" for Computing Curricula 2004 and the role of Information Systems (IS) in it. This draft report is available for comment. It is the first volume in a computing compendium, referred to as Computing Curricula. ...

John T. Gorgone

2004-12-01T23:59:59.000Z

2

Security Controls for Computer Systems (U): Report of ...  

Science Conference Proceedings (OSTI)

... This first step is essential in order that ... other computing systems, any facilities for security ... management controls and procedures, facility clearance is ...

2013-04-15T23:59:59.000Z

3

9 - Circuits, Systems and Communications - Computer Microvision for MEMS - RLE Progress Report 144  

E-Print Network (OSTI)

9 - Circuits, Systems and Communications - Computer Microvision for MEMS - RLE Progress Report 144. Computer Microvision for MEMS. Academic and Research Staff: Professor Dennis M. Freeman, Michael

4

10 - Circuits, Systems and Communications - Computer Microvision for MEMS - RLE Progress Report 145  

E-Print Network (OSTI)

10 - Circuits, Systems and Communications - Computer Microvision for MEMS - RLE Progress Report 145. Computer Microvision for MEMS. Academic and Research Staff: Professor Dennis M. Freeman Ryu; Support Staff: Janice Balzer. Sponsors: Defense Advanced Research

5

Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study  

SciTech Connect

This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered as a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

Krstulovich, S.F.

1986-11-12T23:59:59.000Z

6

Interactive Computer-Enhanced Remote Viewing System (ICERVS): Subsystem design report - Phase 2  

SciTech Connect

This ICERVS Phase II Subsystem Design Report describes the detailed software design of the Phase II Interactive Computer-Enhanced Remote Viewing System (ICERVS). ICERVS is a computer-based system that provides data acquisition, data visualization, data analysis, and model synthesis to support robotic remediation of hazardous environments. Due to the risks associated with hazardous environments, remediation must be conducted remotely using robotic systems, which, in turn, must rely on 3D models of their workspace to support both task and path planning with collision avoidance. Tools such as ICERVS are vital to accomplish remediation tasks in a safe, efficient manner. The 3D models used by robotic systems are based on solid modeling methods, in which objects are represented by enclosing surfaces (polygons, quadric surfaces, patches, etc.) or collections of primitive solids (cubes, cylinders, etc.). In general, these 3D models must be created and/or verified by actual measurements made in the robotics workspace. However, measurement data is empirical in nature, with typical output being a collection of xyz triplets that represent sample points on some surface(s) in the workspace. As such, empirical data cannot be readily analyzed in terms of geometric representations used in robotic workspace models. The primary objective of ICERVS is to provide a reliable description of a workspace based on dimensional measurement data and to convert that description into 3D models that can be used by robotic systems. ICERVS will thus serve as a critical factor to allow robotic remediation tasks to be performed more effectively (faster, safer) and economically than with present systems.

Smith, D.A.

1994-04-22T23:59:59.000Z
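The conversion the ICERVS abstract describes, from empirical xyz sample points to the geometric surfaces a robotic workspace model needs, can be illustrated with a minimal least-squares surface fit. This is a sketch of the general idea only; the function name and the planar model are illustrative assumptions, not part of the ICERVS software.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane z = a*x + b*y + c to xyz sample points via linear
    least squares, returning (a, b, c). Illustrates turning raw
    measurement triplets into a geometric surface description."""
    pts = np.asarray(points, dtype=float)
    # Design matrix: one row [x, y, 1] per sample point
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

# Points sampled from the plane z = 2x + 3y + 1
pts = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6)]
a, b, c = fit_plane(pts)
```

A real system would fit quadric surfaces and patches as well, and would have to segment the point cloud into surfaces first; the least-squares step above is the common core.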

7

Interactive Computer-Enhanced Remote Viewing System (ICERVS): Final report, November 1994--September 1996  

Science Conference Proceedings (OSTI)

The Interactive Computer-Enhanced Remote Viewing System (ICERVS) is a software tool for complex three-dimensional (3-D) visualization and modeling. Its primary purpose is to facilitate the use of robotic and telerobotic systems in remote and/or hazardous environments, where spatial information is provided by 3-D mapping sensors. ICERVS provides a robust, interactive system for viewing sensor data in 3-D and combines this with interactive geometric modeling capabilities that allow an operator to construct CAD models to match the remote environment. Part I of this report traces the development of ICERVS through three evolutionary phases: (1) development of first-generation software to render orthogonal view displays and wireframe models; (2) expansion of this software to include interactive viewpoint control, surface-shaded graphics, material (scalar and nonscalar) property data, cut/slice planes, color and visibility mapping, and generalized object models; (3) demonstration of ICERVS as a tool for the remediation of underground storage tanks (USTs) and the dismantlement of contaminated processing facilities. Part II of this report details the software design of ICERVS, with particular emphasis on its object-oriented architecture and user interface.

NONE

1997-05-01T23:59:59.000Z

8

Development of Analytical and Computational Methods for the Strategic Power Infrastructure Defense (SPID) System: EPRI/DoD Complex Interactive Networks/Systems Initiative: Second Annual Report  

Science Conference Proceedings (OSTI)

This report details the second-year research accomplishments for one of six research consortia established under the Complex Interactive Networks/Systems Initiative. This particular document discusses analytical and computational methods for the Strategic Power Infrastructure Defense (SPID) System.

2001-06-21T23:59:59.000Z

9

NERSC Computational Systems  

NLE Websites -- All DOE Office Websites (Extended Search)


10

High-End Computer System Performance: Science and Engineering - Final Report  

SciTech Connect

This report summarizes the research conducted as part of the UMD effort of the multi-site PERC project. This project developed and enhanced the Dyninst instrumentation system and the Active Harmony auto-tuning framework.

Hollingsworth, Jeffrey K.

2012-01-27T23:59:59.000Z

11

Massively parallel computing system  

DOE Patents (OSTI)

A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

Benner, R.E.; Gustafson, J.L.; Montry, G.R.

1989-03-01T23:59:59.000Z

12

Computation Directorate 2007 Annual Report  

SciTech Connect

If there is a single word that both characterized 2007 and dominated the thoughts and actions of many Laboratory employees throughout the year, it is transition. Transition refers to the major shift that took place on October 1, when the University of California relinquished management responsibility for Lawrence Livermore National Laboratory (LLNL), and Lawrence Livermore National Security, LLC (LLNS), became the new Laboratory management contractor for the Department of Energy's (DOE's) National Nuclear Security Administration (NNSA). In the 55 years under the University of California, LLNL amassed an extraordinary record of significant accomplishments, clever inventions, and momentous contributions in the service of protecting the nation. This legacy provides the new organization with a built-in history, a tradition of excellence, and a solid set of core competencies from which to build the future. I am proud to note that in the nearly seven years I have had the privilege of leading the Computation Directorate, our talented and dedicated staff has made far-reaching contributions to the legacy and tradition we passed on to LLNS. Our place among the world's leaders in high-performance computing, algorithmic research and development, applications, and information technology (IT) services and support is solid. I am especially gratified to report that through all the transition turmoil, and it has been considerable, the Computation Directorate continues to produce remarkable achievements. Our most important asset--the talented, skilled, and creative people who work in Computation--has continued a long-standing Laboratory tradition of delivering cutting-edge science even in the face of adversity. The scope of those achievements is breathtaking, and in 2007, our accomplishments span an amazing range of topics. 
From making an important contribution to a Nobel Prize-winning effort to creating tools that can detect malicious codes embedded in commercial software; from expanding BlueGene/L, the world's most powerful computer, by 60% and using it to capture the most prestigious prize in the field of computing, to helping create an automated control system for the National Ignition Facility (NIF) that monitors and adjusts more than 60,000 control and diagnostic points; from creating a microarray probe that rapidly detects virulent high-threat organisms, natural or bioterrorist in origin, to replacing large numbers of physical computer servers with small numbers of virtual servers, reducing operating expense by 60%, the people in Computation have been at the center of weighty projects whose impacts are felt across the Laboratory and the DOE community. The accomplishments I just mentioned, and another two dozen or so, make up the stories contained in this report. While they form an exceptionally diverse set of projects and topics, it is what they have in common that excites me. They share the characteristic of being central, often crucial, to the mission-driven business of the Laboratory. Computational science has become fundamental to nearly every aspect of the Laboratory's approach to science and even to the conduct of administration. It is difficult to consider how we would proceed without computing, which occurs at all scales, from handheld and desktop computing to the systems controlling the instruments and mechanisms in the laboratories to the massively parallel supercomputers. The reasons for the dramatic increase in the importance of computing are manifest. Practical, fiscal, or political realities make the traditional approach to science, the cycle of theoretical analysis leading to experimental testing, leading to adjustment of theory, and so on, impossible, impractical, or forbidden. 
How, for example, can we understand the intricate relationship between human activity and weather and climate? We cannot test our hypotheses by experiment, which would require controlled use of the entire earth over centuries. It is only through extremely intricate, detailed computational simulation that we can test our theories, and simulati

Henson, V E; Guse, J A

2008-03-06T23:59:59.000Z

13

Computation of ASR Systems  

E-Print Network (OSTI)

This paper proposes a novel technique to reduce the likelihood computation in ASR systems that use continuous density HMMs. Based on the nature of dynamic features and the numerical properties of Gaussian mixture distributions, we approximate the observation likelihood computation to achieve a speedup. Although the technique does not show appreciable benefit in an isolated word task, it yields significant improvements in continuous speech recognition. For example, 50% of the computation can be saved on the TIMIT database with only a negligible degradation in system performance.

Xiao Li; Jeff Bilmes

2003-01-01T23:59:59.000Z
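The kind of Gaussian-mixture likelihood shortcut this abstract alludes to can be sketched as follows. The pruning criterion here (skip components whose mean lies far from the observation, since their contribution to the mixture sum is negligible) is a generic stand-in, not the paper's actual approximation, and all names are illustrative.

```python
import math

def gmm_loglik(x, weights, means, variances):
    """Exact log-likelihood of a scalar observation x under a
    one-dimensional Gaussian mixture."""
    total = 0.0
    for w, m, v in zip(weights, means, variances):
        total += w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return math.log(total)

def gmm_loglik_pruned(x, weights, means, variances, radius=3.0):
    """Approximate log-likelihood: skip components whose mean is more
    than `radius` standard deviations from x. Far components contribute
    exponentially little, so the sum is nearly unchanged while the
    exp() calls for those components are avoided."""
    total = 0.0
    for w, m, v in zip(weights, means, variances):
        if abs(x - m) > radius * math.sqrt(v):
            continue  # negligible contribution; save the exp()
        total += w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return math.log(total) if total > 0 else float("-inf")
```

With means far apart, the pruned version evaluates only the nearby component yet agrees with the exact value to many decimal places, which is the trade the paper quantifies on TIMIT.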

14

Framework for forensic examination of computer systems.  

E-Print Network (OSTI)

This thesis discusses the features and requirements of a computationally intelligent computer forensic system. By introducing a novel concept, "Case-Relevance", a computationally intelligent forensic framework ...

Gong, Ruibin.

2008-01-01T23:59:59.000Z

15

NSLS Computer Systems  

NLE Websites -- All DOE Office Websites (Extended Search)

The National Synchrotron Light Source facility at Brookhaven National Laboratory in New York consists of two storage rings, one for VUV operating at 800 MeV and one for x rays at 2.8 GeV, and a common injection system comprised of a linear accelerator and a booster ring. Hardware Architecture: The hardware architecture of the present control system follows the current trend seen in many accelerator facilities. It is a two-level distributed system consisting of HP/900 series workstations connected by standard Ethernet to VME-based microprocessor subsystems. All the workstations have local disk and sufficient memory for fast response. Workstations are used as file server, back-up file server, and for program development and other

16

Mathematical and computer modelling reports: Modeling and forecasting energy markets with the intermediate future forecasting system  

Science Conference Proceedings (OSTI)

This paper describes the Intermediate Future Forecasting System (IFFS), which is the model used to forecast integrated energy markets by the U.S. Energy Information Administration. The model contains representations of supply and demand for all of the ...

Frederic H. Murphy; John J. Conti; Susan H. Shaw; Reginald Sanders

1989-09-01T23:59:59.000Z
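The supply and demand representations the IFFS abstract mentions come together in a market-equilibration step: find the price at which supply meets demand. The abstract does not say how IFFS solves this, so the bisection below is a toy stand-in under stated assumptions (monotone supply and demand curves), with all names illustrative.

```python
def clear_market(supply, demand, p_lo, p_hi, tol=1e-9):
    """Find the price where supply(p) == demand(p) by bisection,
    assuming supply is increasing and demand decreasing on [p_lo, p_hi].
    A toy stand-in for the equilibration an integrated energy model
    performs for each fuel market."""
    while p_hi - p_lo > tol:
        mid = 0.5 * (p_lo + p_hi)
        if supply(mid) < demand(mid):
            p_lo = mid  # excess demand: equilibrium price is higher
        else:
            p_hi = mid  # excess supply: equilibrium price is lower
    return 0.5 * (p_lo + p_hi)

# Linear toy curves: supply = 2p, demand = 12 - p, so equilibrium is p = 4
p_star = clear_market(lambda p: 2 * p, lambda p: 12 - p, 0.0, 10.0)
```

A full forecasting system iterates this across interlinked fuel markets until all of them clear simultaneously; the single-market search above is the building block.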

17

Computation Directorate 2008 Annual Report  

SciTech Connect

Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

Crawford, D L

2009-03-25T23:59:59.000Z

18

Computer memory management system  

DOE Patents (OSTI)

A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior by use of a coding protocol which describes when relationships should be maintained and when the relationships should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.

Kirk, III, Whitson John (Greenwood, MO)

2002-01-01T23:59:59.000Z
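The distinction the patent abstract draws, between relationships that keep an object alive and relationships that break automatically so the object can be collected, can be sketched with Python's weak references. This shows only the general strong-link/weak-link idea; the patent's actual pointer encoding and coding protocol are more elaborate, and the class and attribute names here are invented for illustration.

```python
import gc
import weakref

class Node:
    """Toy object whose relationships carry memory-management behavior:
    strong child links own their targets, while the weak back-link to
    the parent breaks automatically when the parent is collected."""
    def __init__(self, name):
        self.name = name
        self.strong_children = []  # strong links: keep children alive
        self.weak_parent = None    # weak link: does not keep parent alive

    def add_child(self, child):
        self.strong_children.append(child)
        child.weak_parent = weakref.ref(self)  # weak back-link to parent

root = Node("root")
leaf = Node("leaf")
root.add_child(leaf)

alive_before = leaf.weak_parent() is not None  # root is still reachable
del root                                       # drop the only strong reference
gc.collect()
alive_after = leaf.weak_parent() is not None   # weak link broke automatically
```

Because the back-link is weak, deleting the last strong reference to `root` lets it be garbage collected even though `leaf` still points at it, which is exactly the behavior strong/weak relationship rules are meant to make automatic.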

19

Annual Report 2011 Computer Science Department  

E-Print Network (OSTI)

Annual Report 2011, Computer Science Department of the Faculty for Mathematics, Computer Science, and Natural Sciences at RWTH Aachen University. Published by: Computer Science Department, RWTH Aachen University. Contents include the Computer Science Summer Party (Sommerfest der Informatik).

Kobbelt, Leif

20

Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources  

SciTech Connect

The UCoMS research cluster has spearheaded three research areas since August 2004: wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts on pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators in their respective areas of expertise, working cooperatively on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.

Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

2012-07-14T23:59:59.000Z



21

2011 Computation Directorate Annual Report  

SciTech Connect

From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. 
Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. 
industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historic

Crawford, D L

2012-04-11T23:59:59.000Z

22

2011 Computation Directorate Annual Report  

SciTech Connect

From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. 
Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile, far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. 
industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry

Crawford, D L

2012-04-11T23:59:59.000Z

23

National Energy Research Scientific Computing Center 2007 Annual Report  

SciTech Connect

This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

2008-10-23T23:59:59.000Z

24

Part III, Section 3, Chapter 1. Computer Microvision for Microelectromechanical Systems Chapter 1. Computer Microvision for Microelectromechanical  

E-Print Network (OSTI)

Part III, Section 3, Chapter 1. Computer Microvision for Microelectromechanical Systems. Academic and Research Staff: Professor ... RLE Progress Report

25

Addition of coal mining regulations to the Computer-Aided Environmental Legislative Data System (CELDS). Final report  

SciTech Connect

Department of Energy (DOE)-specific regulations on mining have been added to the Computer-Aided Environmental Legislative Data System (CELDS), an online, interactive database developed by the US Army Construction Engineering Research Laboratory (CERL). CELDS indexes and abstracts environmental regulations of the federal government and the 50 states. Included now are coal mining regulations for the federal government and those states having Department of Interior approval for their mining programs. The coal mining regulations cover federal regulatory programs for surface mining and underground mining of coal on Federal, Indian, and private lands. A draft thesaurus of mining terms has been developed. This thesaurus, which will be merged into the existing CELDS thesaurus, will be used to index CELDS records which cover coal mining regulations. The thesaurus terms cover both coal mining technology and environmental impacts of coal mining.

Webster, R.D.; Herrick, E.; Grieme, M.T.

1983-07-01T23:59:59.000Z

26

Addition of coal mining regulations to the Computer-Aided Environmental Legislative Data System (CELDS). Final report  

SciTech Connect

Department of Energy (DOE)-specific regulations on mining have been added to the Computer-Aided Environmental Legislative Data System (CELDS), an online, interactive database developed by the U.S. Army Construction Engineering Research Laboratory (CERL). CELDS indexes and abstracts environmental regulations of the Federal government and the 50 states. Included now are coal mining regulations for the Federal government and those states having Department of Interior approval for their mining programs. The coal mining regulations cover Federal regulatory programs for surface mining and underground mining of coal on Federal, Indian, and private lands. A draft thesaurus of mining terms has been developed. This thesaurus, which will be merged into the existing CELDS thesaurus, will be used to index CELDS records which cover coal mining regulations. The thesaurus terms cover both coal mining technology and environmental impacts of coal mining.

Webster, R.D.; Herrick, E.; Grieme, M.T.

1983-07-01T23:59:59.000Z

27

User computer system pilot project  

Science Conference Proceedings (OSTI)

The User Computer System (UCS) is a general purpose unclassified, nonproduction system for Mound users. The UCS pilot project was successfully completed, and the system currently has more than 250 users. Over 100 tables were installed on the UCS for use by subscribers, including tables containing data on employees, budgets, and purchasing. In addition, a UCS training course was developed and implemented.

Eimutis, E.C.

1989-09-06T23:59:59.000Z

28

Annual Report 2010 Computer Science Department  

E-Print Network (OSTI)

Co-operations; excellent talks from the research areas of operations research, exact algorithms, and approximation; Technical Computer Science; Electrical Engineering Fundamentals of Computer Science; Operating Systems

Kobbelt, Leif

29

Enhanced absorption cycle computer model. Final report  

DOE Green Energy (OSTI)

Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the User's Manual.

Grossman, G.; Wilk, M. [Technion-Israel Inst. of Tech., Haifa (Israel). Faculty of Mechanical Engineering

1993-09-01T23:59:59.000Z
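The COP calculation described in the abstract above can be illustrated with a small sketch (a hypothetical single-effect chiller balance in Python, not code from the report; all component duty values are made up):

```python
# Minimal sketch of an absorption-cycle energy balance (illustrative values,
# not from the report): each "unit" carries a heat duty, and the coefficient
# of performance (COP) of a single-effect absorption chiller is the
# evaporator duty divided by the generator (desorber) heat input.

def cop(q_evaporator_kw: float, q_generator_kw: float) -> float:
    """COP of an absorption chiller from component heat duties."""
    return q_evaporator_kw / q_generator_kw

def balanced(q_gen: float, q_evap: float, q_cond: float, q_abs: float,
             tol: float = 1e-6) -> bool:
    """Steady-state check: heat in (generator + evaporator) must equal
    heat out (condenser + absorber), within tolerance."""
    return abs((q_gen + q_evap) - (q_cond + q_abs)) < tol

if __name__ == "__main__":
    # Hypothetical duties in kW, chosen to satisfy the overall balance.
    q_gen, q_evap, q_cond, q_abs = 140.0, 100.0, 110.0, 130.0
    assert balanced(q_gen, q_evap, q_cond, q_abs)
    print(f"COP = {cop(q_evap, q_gen):.3f}")  # 100/140 -> 0.714
```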

30

System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992  

SciTech Connect

The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

Sterling, T. [Universities Space Research Association, Washington, DC (United States); Messina, P. [Jet Propulsion Lab., Pasadena, CA (United States); Chen, M. [Yale Univ., New Haven, CT (United States)] [and others

1993-04-01T23:59:59.000Z

31

Discussion of Intelligent Cloud Computing System  

Science Conference Proceedings (OSTI)

Cloud Computing System (CCS) aims to power the next generation data centers and enables application service providers to lease data center capabilities for deploying applications depending on user Quality of Service (QoS) requirements. Huge investments ... Keywords: cloud computing system, intelligent cloud computing system, data warehouse, cloud computing management information system

Yu Hua Zhang; Jian Zhang; Wei Hua Zhang

2010-10-01T23:59:59.000Z

32

Computer controlled air conditioning systems  

SciTech Connect

This patent describes an improvement in a computer controlled air conditioning system providing for circulation of air through an air conditioned house in contact with concrete walls requiring a humidity within a critical range. The improvement consists of: a computer for processing sensed environmental input data including humidity and oxygen to produce output control signals for affecting the humidity of the air in the house; provision for an air flow circulation path through the house in contact with the concrete walls; sensing responsive to the amount of oxygen in the house for providing input signals to the computer; mixing for combining with the air in the house a variable amount of fresh atmospheric air to supply fresh oxygen; and humidity modifying means for modifying the humidity of the air flowing in the flow path responsive to the control signals.

Dumbeck, R.F.

1986-02-04T23:59:59.000Z

33

Redundant computing for exascale systems.  

SciTech Connect

Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of the cost, and compare it to other proposed methods for fault resilience.

Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Brightwell, Ronald Brian

2010-12-01T23:59:59.000Z
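The benefit described above can be sketched with a toy model (an assumed failure model in Python, not the paper's simulator; the rank-to-node mapping and repair behavior are illustrative):

```python
import random

# Toy model of redundant computing (assumptions, not the paper's method):
# with dual-modular redundancy each rank has a replica, and the application
# is interrupted only when BOTH copies of some rank have failed. Without
# redundancy, every node failure interrupts the application.

def interrupts(n_ranks: int, failures: list, replicated: bool) -> int:
    """Count application interrupts given a sequence of failed node ids."""
    if not replicated:
        return len(failures)          # every failure interrupts the app
    dead = set()
    count = 0
    for node in failures:
        rank = node % n_ranks         # assumption: replica pairs share a rank id
        if rank in dead:              # second copy of this rank lost -> interrupt
            count += 1
            dead.discard(rank)        # assume the pair is repaired/restarted
        else:
            dead.add(rank)            # first copy lost: absorbed, no interrupt
    return count

random.seed(1)
fails = [random.randrange(100_000) for _ in range(200)]
print(interrupts(50_000, fails, replicated=False))  # 200
print(interrupts(50_000, fails, replicated=True))   # far fewer interrupts
```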

34

Argonne's Laboratory computing center - 2007 annual report.  

Science Conference Proceedings (OSTI)

Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

Bair, R.; Pieper, G. W.

2008-05-28T23:59:59.000Z

35

Function distribution in computer system architectures  

Science Conference Proceedings (OSTI)

The levelwise structuring and complexity of a computer system is presented informally and as a general model based upon the notion of abstract machines (processors), processes and interpreters. The important domains of the computer architect are considered ... Keywords: Computer architecture, Computer history, Computer system complexity

Harold W. Lawson, Jr.

1976-01-01T23:59:59.000Z

36

Honeywell Modular Automation System Computer Software Documentation  

SciTech Connect

The purpose of this Computer Software Document (CSWD) is to provide configuration control of the Honeywell Modular Automation System (MAS) in use at the Plutonium Finishing Plant (PFP). This CSWD describes hardware and PFP-developed software for control of stabilization furnaces. The Honeywell software can generate configuration reports for the developed control software. These reports are described in the following section and are attached as addenda. This plan applies to the PFP Engineering Manager, Thermal Stabilization Cognizant Engineers, and the Shift Technical Advisors responsible for the Honeywell MAS software/hardware and administration of the Honeywell System.

STUBBS, A.M.

2000-12-04T23:59:59.000Z

37

Scalable and Energy Efficient Computer Systems - Energy ...  

Technology Marketing Summary: Computer engineers have developed a new design to support construction of large computer systems that perform closer to ...

38

Computational Systems & Software Environment | National Nuclear...  

National Nuclear Security Administration (NNSA)


39

Architecture of the Entropia Distributed Computing System  

Science Conference Proceedings (OSTI)

Distributed Computing, the exploitation of idle cycles on pervasive desktop PC systems offers the opportunity to increase the available computing power by orders of magnitude (10x - 1000x). However, for desktop PC distributed computing ...

Andrew A. Chien

2002-04-01T23:59:59.000Z

40

Benzene Monitor System report  

Science Conference Proceedings (OSTI)

Two systems for monitoring benzene in aqueous streams have been designed and assembled by the Savannah River Technology Center, Analytical Development Section (ADS). These systems were used at TNX to support sampling studies of the full-scale "SRAT/SME/PR" and to provide real-time measurements of benzene in Precipitate Hydrolysis Aqueous (PHA) simulant. This report describes the two ADS Benzene Monitor System (BMS) configurations, provides data on system operation, and reviews the results of scoping tests conducted at TNX. These scoping tests will allow comparison with other benzene measurement options being considered for use in the Defense Waste Processing Facility (DWPF) laboratory. A report detailing the preferred BMS configuration's statistical performance during recent tests has been issued under separate title: Statistical Analyses of the At-line Benzene Monitor Study, SCS-ASG-92-066. The current BMS design, called the At-line Benzene Monitor (ALBM), allows remote measurement of benzene in PHA solutions. The authors have demonstrated the ability to calibrate and operate this system using peanut vials from a standard Hydragard (trademark) sampler. The equipment and materials used to construct the ALBM are similar to those already used in other applications by the DWPF lab. The precision of this system (±0.5% Relative Standard Deviation (RSD) at 1 sigma) is better than the purge-and-trap gas chromatography reference method currently in use. Both BMSs provide a direct measurement of the benzene that can be purged from a solution with no sample pretreatment. Each analysis requires about five minutes per sample, and the system operation requires no special skills or training. The analyzer's computer software can be tailored to provide desired outputs. Use of this system produces no waste stream other than the samples themselves (i.e., no organic extractants).

Livingston, R.R.

1992-10-12T23:59:59.000Z
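The quoted precision figure (±0.5% RSD at 1 sigma) is a relative standard deviation; a minimal sketch of how such a figure is computed from replicate readings (the readings below are hypothetical, not data from the report):

```python
import statistics

# Relative standard deviation (RSD) at 1 sigma, as quoted for the
# At-line Benzene Monitor. The replicate readings below are made up
# for illustration, not measurements from the report.

def rsd_percent(replicates: list) -> float:
    """Sample standard deviation expressed as a percentage of the mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

readings = [101.2, 100.8, 100.9, 101.1, 101.0]  # hypothetical ppm values
print(f"RSD = {rsd_percent(readings):.2f}%")     # about 0.16%
```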



41

NERSC seeks Computational Systems Group Lead  

NLE Websites -- All DOE Office Websites (Extended Search)

NERSC seeks Computational Systems Group Lead. January 6, 2011, by Katie Antypas. Note: This position is now closed. The Computational Systems Group (CSG) provides production support and advanced development for the supercomputer systems at NERSC (National Energy Research Scientific Computing Center). These systems, which include the second fastest supercomputer in the U.S., provide 24x7 computational services for open (unclassified) science to worldwide researchers supported by DOE's Office of Science. Duties/Responsibilities: Manage the Computational Systems Group's staff of approximately 10

42

User's guide for the Data Analysis, Retrieval, and Tabulation System (DARTS), revised edition: A mainframe computer code for generating cross-tabulation reports  

SciTech Connect

A computer system known as the Data Analysis, Retrieval, and Tabulation System (DARTS) was developed by the Energy Systems Division at Argonne National Laboratory to generate tables of descriptive statistics derived from analyses of housing and energy data sources. Through a simple input command, the user can request the preparation of a hierarchical table based on any combination of several hundred of the most commonly analyzed variables. The system was written in the Statistical Analysis System (SAS) language and designed for use on a large-scale IBM mainframe computer.

Anderson, J.L.

1990-10-01T23:59:59.000Z
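A DARTS-style cross-tabulation can be sketched in a few lines (an illustrative Python analogue; DARTS itself was written in SAS, and the housing records below are invented):

```python
from collections import Counter

# Illustrative analogue of a DARTS-style cross-tabulation: count records
# by two categorical variables and print a small two-way table. The
# housing records are made up; DARTS used SAS on an IBM mainframe.

records = [
    {"region": "Northeast", "fuel": "gas"},
    {"region": "Northeast", "fuel": "electric"},
    {"region": "South", "fuel": "gas"},
    {"region": "South", "fuel": "gas"},
]

# Counter keyed by (region, fuel) gives the cell counts of the table.
table = Counter((r["region"], r["fuel"]) for r in records)

regions = sorted({r["region"] for r in records})
fuels = sorted({r["fuel"] for r in records})
print("region    " + "".join(f"{f:>10}" for f in fuels))
for reg in regions:
    print(f"{reg:<10}" + "".join(f"{table[(reg, f)]:>10}" for f in fuels))
```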

43

Cloud computing for dynamic systems  

Science Conference Proceedings (OSTI)

Cloud computing is a fast emerging model for enabling dynamic on-demand computing and IT-based services. It promotes dynamic properties and characteristics such as scalability, agility, flexibility, virtualised and distributed on-demand computing. However, ...

Khaled Sabry

2011-11-01T23:59:59.000Z

44

Finding representative workloads for computer system design  

Science Conference Proceedings (OSTI)

This work explores how improved workload characterization can be used for a better selection of representative workloads within the computer system and processor design process. We find that metrics easily available in modern computer systems provide ...

Jan Lodewijk Bonebakker

2007-03-01T23:59:59.000Z

45

Scientific computations section monthly report, November 1993  

Science Conference Proceedings (OSTI)

This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

Buckner, M.R.

1993-12-30T23:59:59.000Z

46

Scientific Computations on Modern Parallel Vector Systems  

Science Conference Proceedings (OSTI)

Computational scientists have seen a frustrating trend of stagnating application performance despite dramatic increases in the claimed peak capability of high performance computing systems. This trend has been widely attributed to the use of superscalar-based ...

Leonid Oliker; Andrew Canning; Jonathan Carter; John Shalf; Stephane Ethier

2004-11-01T23:59:59.000Z

47

Evaluation of computer-based ultrasonic inservice inspection systems  

SciTech Connect

This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T. [Pacific Northwest Lab., Richland, WA (United States)

1994-03-01T23:59:59.000Z

48

Middleware in Modern High Performance Computing System Architectures  

E-Print Network (OSTI)

A trend in modern high performance computing (HPC) system architectures employs "lean" compute nodes ... continue to reside on compute nodes. Keywords: High Performance Computing, Middleware, Lean Compute Node

Engelmann, Christian

49

AGILA: The Ateneo High Performance Computing System  

E-Print Network (OSTI)

A Beowulf cluster is a low-cost parallel high performance computing system that uses commodity hardware components such as personal computers and standard Ethernet adapters and switches and runs on freely available software such as Linux and LAM-MPI. In this paper the development of the AGILA HPCS, which stands for the Ateneo GigaflopsRange Performance, Linux OS, and Athlon Processors High Performance Computing System, is discussed including its hardware and software configurations and performance evaluation. Keywords High-performance computing, commodity cluster computing, parallel computing, Beowulf-class cluster 1.

Rafael P. Saldaña; Felix P. Muga II; Jerrold J. Garcia; William Emmanuel S. Yu

2000-01-01T23:59:59.000Z

50

Reliability in grid computing systems  

Science Conference Proceedings (OSTI)

... Checkpoint and process migration methods have been long used in high-performance computing environments and a substantial body of work ...

2013-03-30T23:59:59.000Z

51

AGILA: The Ateneo High Performance Computing System  

E-Print Network (OSTI)

A Beowulf cluster is a low-cost parallel high performance computing system that uses commodity hardware components such as personal computers and standard Ethernet adapters and switches and runs on freely available software such as Linux and LAM-MPI. In this paper the development of the AGILA HPCS, which stands for the Ateneo GigaflopsRange Performance, Linux OS, and Athlon Processors High Performance Computing System, is discussed including its hardware and software configurations and performance evaluation. Keywords High-performance computing, commodity cluster computing, parallel computing, Beowulf-class cluster 1. INTRODUCTION In the Philippines today, computing power in the range of gigaflops is not generally available for use in research and development. Conventional supercomputers or high performance computing systems are very expensive and are beyond the budgets of most university research groups especially in developing countries such as the Philippines. A lower cost option...

Rafael P. Saldaña; Felix P. Muga II; Jerrold J. Garcia; William Emmanuel S. Yu

2000-01-01T23:59:59.000Z

52

Workload and network-optimized computing systems  

Science Conference Proceedings (OSTI)

This paper describes a recent system-level trend toward the use of massive on-chip parallelism combined with efficient hardware accelerators and integrated networking to enable new classes of applications and computing-systems functionality. This system ...

D. P. LaPotin; S. Daijavad; C. L. Johnson; S. W. Hunter; K. Ishizaki; H. Franke; H. D. Achilles; D. P. Dumarot; N. A. Greco; B. Davari

2010-01-01T23:59:59.000Z

53

Intelligent Management of the Power Grid: An Anticipatory, Multi-Agent, High Performance Computing Approach: EPRI/DoD Complex Interactive Networks/Systems Initiative: Second Annual Report  

Science Conference Proceedings (OSTI)

This report details the second-year research accomplishments for one of six research consortia established under the Complex Interactive Networks/Systems Initiative. This particular document details an anticipatory, multi-agent, high performance computing approach for intelligent management of the power grid.

2001-06-21T23:59:59.000Z

54

Agent-based accountable grid computing systems  

Science Conference Proceedings (OSTI)

Accountability is an important aspect of any computer system. It assures that every action executed in the system can be traced back to some entity. Accountability is even more crucial for assuring the safety and security of grid systems, given the very ... Keywords: Accountability, Agents, Distributed denial of service attack, Grid computing

Wonjun Lee; Anna Squicciarini; Elisa Bertino

2013-08-01T23:59:59.000Z

55

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

Office of Advanced Scientific Computing Research (ASCR) ...

Coghlan, Susan

2013-01-01T23:59:59.000Z

56

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

at the Argonne Leadership Computing Facility (ALCF) and the ...

Coghlan, Susan

2013-01-01T23:59:59.000Z

57

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

Leadership Computing Facility (ALCF) and the National Energy ... HPC compute cluster into the ALCF Magellan testbed, allowing ...

Coghlan, Susan

2013-01-01T23:59:59.000Z

58

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

of the Argonne Leadership Computing Facility at Argonne ... at the Argonne Leadership Computing Facility (ALCF) and the ...

Coghlan, Susan

2013-01-01T23:59:59.000Z

59

Computer System Retirement Guidelines | Department of Energy  

NLE Websites -- All DOE Office Websites (Extended Search)

Computer System Retirement Guidelines, adapted for use by any site or organization. More Documents & Publications: DOE F 1324.9; Records Management Handbook

60

Computer Algebra Systems - CECM - Simon Fraser University  

E-Print Network (OSTI)

This volume uses several computer algebra systems to "activate" the papers, but principally relies on Maple. There are several reasons for this, but the main...



61

Computer system reliability and nuclear war  

Science Conference Proceedings (OSTI)

Given the devastating consequences of nuclear war, it is appropriate to look at current and planned uses of computers in nuclear weapons command and control systems, and to examine whether these systems can fulfill their intended roles.

Alan Borning

1987-02-01T23:59:59.000Z

62

Computer Systems to Oil Pipeline Transporting  

E-Print Network (OSTI)

Computer systems in pipeline oil transporting ensure that the greatest amount of data can be gathered, analyzed, and acted upon in the shortest amount of time. Most operators now have some form of computer-based monitoring system employing either commercially available or custom-developed software to run the system. This paper presents SCADA systems for oil pipelines in accordance with Romanian environmental regulations.

Chis, Timur

2009-01-01T23:59:59.000Z

63

Summer 1994 Computational Science Workshop. Final report  

SciTech Connect

This report documents the work performed by the University of New Mexico Principal Investigators and Research Assistants while hosting the highly successful Summer 1994 Computational Sciences Workshop in Albuquerque on August 6--11, 1994. Included in this report is a final budget for the workshop, along with a summary of the participants' evaluation of the workshop. The workshop proceedings have been delivered under separate cover. In order to assist in the organization of future workshops, we have also included in this report detailed documentation of the pre- and post-workshop activities associated with this contract. Specifically, we have included a section that documents the advertising performed, along with the manner in which applications were handled. A complete list of the workshop participants appears in this section. Sample letters that were generated while dealing with various commercial entities and departments at the University are also included in a section dealing with workshop logistics. Finally, we have included a section in this report that deals with suggestions for future workshops.

1994-12-31T23:59:59.000Z

64

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported ...

Coghlan, Susan

2013-01-01T23:59:59.000Z

65

Method and system for benchmarking computers  

DOE Patents (OSTI)

A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.

Gustafson, John L. (Ames, IA)

1993-09-14T23:59:59.000Z
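The patented idea of fixing the time budget and measuring progress through a scalable task set can be sketched as follows (a minimal Python illustration; the work unit and budget are arbitrary choices, not taken from the patent):

```python
import time

# Sketch of a fixed-interval benchmark: run a scalable workload for a fixed
# wall-clock budget and rate the machine by how far through the task set it
# progressed, rather than timing a fixed-size job. The work unit below is
# an arbitrary chunk of floating-point additions, chosen for illustration.

def benchmark(budget_seconds: float) -> int:
    """Return the number of work units completed within the time budget."""
    deadline = time.perf_counter() + budget_seconds
    units = 0
    x = 0.0
    while time.perf_counter() < deadline:
        # One "work unit": a small, fixed amount of floating-point work.
        for i in range(10_000):
            x += i * 0.5
        units += 1
    return units

if __name__ == "__main__":
    # Faster machines complete more units in the same interval.
    print(f"progress rating: {benchmark(0.1)} units in 0.1 s")
```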

66

DARPA's adaptive computing systems program  

Science Conference Proceedings (OSTI)

Motivation for DARPA's ACS program will be presented along with original goals and objectives of the program. A brief description of some of the efforts that were initiated and why. A report card of what I think the program did well, and where I feel ...

Jose Munoz

2003-01-01T23:59:59.000Z

67

Console Networks for Major Computer Systems  

SciTech Connect

A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D

1966-07-22T23:59:59.000Z

68

Computational representation of biological systems  

SciTech Connect

Integration of large and diverse biological data sets is a daunting problem facing systems biology researchers. Exploring the complex issues of data validation, integration, and representation, we present a systematic approach for the management and analysis of large biological data sets based on data warehouses. Our system has been implemented in the Bioverse, a framework combining diverse protein information from a variety of knowledge areas such as molecular interactions, pathway localization, protein structure, and protein function.

Frazier, Zach; McDermott, Jason E.; Guerquin, Michal; Samudrala, Ram

2009-04-20T23:59:59.000Z

69

Fault tolerant hypercube computer system architecture  

SciTech Connect

This patent describes a fault-tolerant multi-processor computer system of the hypercube type. It comprises: a plurality of first computing nodes; a first network of message conducting path means for interconnecting the first computing nodes as a hypercube. The first network providing a path for message transfer between the first computing nodes; a first watch dog node; and, a second network of message conducting path means for directly connecting each of the first computing nodes to the first watch dog node independent from the first network. The second network providing an independent path for test message and reconfiguration affecting transfers between respective ones of the first computing nodes and the first watch dog node.

Madan, H.S.; Chow, E.

1989-09-19T23:59:59.000Z
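The hypercube interconnect named in the patent has a simple addressing rule: in a dimension-d cube the 2^d nodes carry d-bit addresses, and two nodes are directly linked exactly when their addresses differ in one bit. A sketch of that rule (illustrative only; the patent's separate watchdog star network is omitted):

```python
# In a hypercube of dimension d, the 2**d nodes are numbered by d-bit
# addresses, and two nodes are directly linked iff their addresses differ
# in exactly one bit. Flipping each bit with XOR enumerates the neighbors.

def neighbors(node: int, dim: int) -> list:
    """Addresses of the dim nodes adjacent to `node` in a dim-cube."""
    return [node ^ (1 << bit) for bit in range(dim)]

print(neighbors(0, 3))  # [1, 2, 4]
print(neighbors(5, 3))  # 0b101 -> [0b100, 0b111, 0b001] = [4, 7, 1]
```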

70

The user in experimental computer systems research  

Science Conference Proceedings (OSTI)

Experimental computer systems research typically ignores the end-user, modeling him, if at all, in overly simple ways. We argue that this (1) results in inadequate performance evaluation of the systems, and (2) ignores opportunities. We summarize our ... Keywords: autonomic systems, human directed adaptation, speculative remote display, user comfort with resource borrowing, user-driven power management, user-driven scheduling

Peter A. Dinda; Gokhan Memik; Robert P. Dick; Bin Lin; Arindam Mallik; Ashish Gupta; Samuel Rossoff

2007-06-01T23:59:59.000Z

71

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

resources. 1. Finding Tropical Cyclones on a Cloud Computing ... 2010. 2. Finding Tropical Cyclones on Clouds, D. Hasenkamp

Coghlan, Susan

2013-01-01T23:59:59.000Z

72

Final technical report for DOE Computational Nanoscience Project: Integrated Multiscale Modeling of Molecular Computing Devices  

Science Conference Proceedings (OSTI)

This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.

Cummings, P. T.

2010-02-08T23:59:59.000Z

73

Computing criticality of lines in power systems  

E-Print Network (OSTI)

We propose a computationally efficient method based on nonlinear optimization to identify critical lines, the failure of which can cause severe blackouts. Our method computes a criticality measure for all lines at once, as opposed to detecting a single vulnerability, and thus provides a global view of the system. This information on line criticality can be used to identify multiple contingencies by selectively exploring combinations of broken lines. The effectiveness of our method is demonstrated on the IEEE 30 and 118 bus systems, where we can very quickly detect the most critical lines in the system and identify severe multiple contingencies.
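The paper's method is nonlinear optimization over the power-flow model; as a much cruder stand-in, the idea of scoring every line at once to get a global view can be illustrated with a pure connectivity proxy. Everything below, including the toy four-bus network, is invented for illustration and is not the authors' algorithm.

```python
from itertools import combinations

def reachable(adj, src):
    # Simple depth-first search returning the set of buses reachable from src.
    seen = {src}
    stack = [src]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def line_criticality(nodes, lines):
    # Proxy criticality: for each line outage, count the bus pairs
    # that become disconnected. Scored for every line at once.
    scores = {}
    for out in lines:
        adj = {n: set() for n in nodes}
        for (a, b) in lines:
            if (a, b) != out:
                adj[a].add(b)
                adj[b].add(a)
        disconnected = sum(1 for (s, t) in combinations(nodes, 2)
                           if t not in reachable(adj, s))
        scores[out] = disconnected
    return scores

# Toy network: a triangle 1-2-3 with bus 4 hanging off bus 3.
nodes = [1, 2, 3, 4]
lines = [(1, 2), (2, 3), (3, 1), (3, 4)]
print(line_criticality(nodes, lines))
```

Here the bridge line (3, 4) scores highest, matching the intuition that radial feeders are the most critical; a real tool would replace the connectivity count with the paper's optimization over power flows.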

Ali Pınar; Adam Reichert; Bernard Lesieutre

2007-01-01T23:59:59.000Z

74

The Magellan Final Report on Cloud Computing  

Science Conference Proceedings (OSTI)

The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid- range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

Coghlan, Susan; Yelick, Katherine

2011-12-21T23:59:59.000Z

75

Honeywell Modular Automation System Computer Software Documentation  

Science Conference Proceedings (OSTI)

This document provides Computer Software Documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control new thermal stabilization furnaces in HA-211 and a vertical denitration calciner in HC-230C-2.

CUNNINGHAM, L.T.

1999-09-27T23:59:59.000Z

76

Monitoring SLAC High Performance UNIX Computing Systems  

SciTech Connect

Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to anticipate possible system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
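A minimal sketch of the script-driven approach, using Python's built-in sqlite3 in place of MySQL so the example is self-contained. The table schema, host names, and metric names are assumptions for illustration, not the paper's actual design.

```python
import sqlite3
import time

# Hypothetical schema for storing Ganglia-style metric samples.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metrics (
    host TEXT, name TEXT, value REAL, ts INTEGER)""")

def record(host, name, value, ts=None):
    # One row per sample; parameters are bound, never string-formatted.
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                 (host, name, value, ts if ts is not None else int(time.time())))

record("node01", "load_one", 0.42, 1000)
record("node01", "load_one", 0.57, 1060)
record("node02", "load_one", 1.10, 1000)

# Unlike a fixed-size round-robin database, every raw sample is retained
# and can be queried exactly, with no consolidation-induced data loss.
avg = conn.execute("SELECT AVG(value) FROM metrics WHERE name='load_one'").fetchone()[0]
print(round(avg, 2))  # → 0.7
```

The trade-off the paper is weighing: RRD bounds disk use by averaging old samples away, while a SQL store keeps full fidelity at the cost of unbounded growth.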

Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

2005-12-15T23:59:59.000Z

77

Intertemporal Computable Equilibrium System (ICES) | Open Energy  

Open Energy Info (EERE)

Intertemporal Computable Equilibrium System (ICES). Agency/Company/Organization: Fondazione Eni Enrico Mattei. Sector: Climate, Energy. Complexity/Ease of Use: Moderate. Website: www.feem.it/getpage.aspx?id=138&sez=Research&padre=18&sub=75&idsub=102. A recursive dynamic general equilibrium model developed with the main but not exclusive purpose of assessing the final welfare implication of climate change impacts on world economies. In addition to climate-change impact

78

Computer Measurement and Automation System for Gas-fired Heating...  

NLE Websites -- All DOE Office Websites (Extended Search)

Title: Computer Measurement and Automation System for Gas-fired Heating Furnace. Publication Type: Journal...

79

Beyond moore computing research challenge workshop report.  

SciTech Connect

We summarize the presentations and break out session discussions from the in-house workshop that was held on 11 July 2013 to acquaint a wider group of Sandians with the Beyond Moore Computing research challenge.

Huey, Mark C. [Perspectives, Inc., Albuquerque, NM]; Aidun, John Bahram

2013-10-01T23:59:59.000Z

80

Integrated system design report  

DOE Green Energy (OSTI)

The primary objective of the integrated system test phase is to demonstrate the commercial potential of a coal fueled diesel engine in its actual operating environment. The integrated system in this project is defined as a coal fueled diesel locomotive. This locomotive, shown on drawing 41D715542, is described in the separate Concept Design Report. The test locomotive will be converted from an existing oil fueled diesel locomotive in three stages, until it nearly emulates the concept locomotive. Design drawings of locomotive components (diesel engine, locomotive, flatcar, etc.) are included.

Not Available

1989-07-01T23:59:59.000Z



81

Development of a Beowulf-Class High Performance Computing System for Computational Science Applications  

E-Print Network (OSTI)

Using Beowulf cluster computing technology, the Ateneo High Performance Computing Group has developed a high performance computing system consisting of eight compute nodes. Called the AGILA HPCS, this Beowulf cluster computer is designed for computational science applications. In this paper, we present the motivation for the AGILA HPCS and some results on its performance evaluation.

Rafael Saldaña; Jerrold Garcia; Felix Muga II; William Yu

2001-01-01T23:59:59.000Z

82

System Overview | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

System Overview. Machine Overview is a reference for the login and compile nodes, I/O nodes, and compute nodes of the BG/Q system. Machine Partitions is a reference for the way that Mira, Vesta and Cetus are partitioned and discusses the network topology of the partitions.

83

COMPUTER SYSTEMS LABORATORY STANFORD ELECTRONICS LABORATORIES  

E-Print Network (OSTI)

of Data 2.1 Performance and Utilization Data 2.2 Failure Data 5 5 6 3. Preliminary Analysis 3.1 Load Profiles 3.2 Failure Profiles 7 3.3 Analysis and Discussion of Preliminary Results Some ReliabilityCOMPUTER SYSTEMS LABORATORY I I STANFORD ELECTRONICS LABORATORIES DEPARTMENT OF ElECTRiCAl

Stanford University

84

SYSTEMS ENGINEERING FOR HIGH PERFORMANCE COMPUTING SOFTWARE: THE HDDA DAGH  

E-Print Network (OSTI)

SYSTEMS ENGINEERING FOR HIGH PERFORMANCE COMPUTING SOFTWARE: THE HDDA DAGH INFRASTRUCTURE systems implementing high performance computing applications. The example which drives the creation in the context of high performance computing software. Applicationof these principleswill be seen

Parashar, Manish

85

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

In order to meet DOE requirements, these features would needthrough the lens of DOE security requirements and report onscience? Can DOE cyber security requirements be met within a

Coghlan, Susan

2013-01-01T23:59:59.000Z

86

PDF ARTICLE: Computer Aided Design Report  

Science Conference Proceedings (OSTI)

Feb 9, 2007 ... This two page summary reports on accomplishments to date of work ... Airfoil Alloys for Industrial Gas Turbines in Coal Fired Environments.

87

Computer-controlled radiation monitoring system  

Science Conference Proceedings (OSTI)

A computer-controlled radiation monitoring system was designed and installed at the Lawrence Livermore National Laboratory's Multiuser Tandem Laboratory (10 MV tandem accelerator from High Voltage Engineering Corporation). The system continuously monitors the photon and neutron radiation environment associated with the facility and automatically suspends accelerator operation if preset radiation levels are exceeded. The system has provided reliable real-time radiation monitoring over the past five years and has been a valuable tool for maintaining personnel exposure as low as reasonably achievable.
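The suspend-on-threshold behavior can be sketched as a simple interlock check. The channel names and setpoint values below are invented for illustration and are not the facility's actual limits or logic.

```python
# Hypothetical interlock: suspend the accelerator whenever any monitored
# radiation channel exceeds its preset limit. Units and limits are assumed.
LIMITS = {"photon_mrem_h": 2.0, "neutron_mrem_h": 1.0}

def interlock(readings: dict) -> bool:
    """Return True if operation may continue, False to suspend."""
    return all(readings[ch] <= limit for ch, limit in LIMITS.items())

print(interlock({"photon_mrem_h": 0.3, "neutron_mrem_h": 0.1}))  # normal levels
print(interlock({"photon_mrem_h": 3.5, "neutron_mrem_h": 0.1}))  # photon over limit
```

A real system would additionally latch the trip (requiring manual reset) and fail safe on a missing or stale reading rather than treating it as in-range.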

Homann, S.G.

1994-09-27T23:59:59.000Z

88

Analog system for computing sparse codes  

DOE Patents (OSTI)

A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and it solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units, using (usually one-way) lateral inhibition, to calculate coefficients representing an input in an overcomplete dictionary.
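A toy version of an LCA iteration, assuming a one-sided soft threshold and a two-atom dictionary; the patent covers a whole family of thresholds and sparsity metrics, so treat the parameter values and dynamics below as an illustrative special case only.

```python
def soft_threshold(u, lam):
    # One-sided (non-negative) soft threshold for simplicity.
    return u - lam if u > lam else 0.0

def lca(b, G, lam=0.3, dt=0.1, steps=500):
    # b: driving input Phi^T x; G: lateral inhibition matrix Phi^T Phi - I.
    # Internal state u evolves while active coefficients a inhibit neighbors.
    n = len(b)
    u = [0.0] * n
    for _ in range(steps):
        a = [soft_threshold(ui, lam) for ui in u]
        for i in range(n):
            inhibition = sum(G[i][j] * a[j] for j in range(n) if j != i)
            u[i] += dt * (b[i] - u[i] - inhibition)
    return [soft_threshold(ui, lam) for ui in u]

# Two unit-norm atoms with inner product 0.6; the input equals atom 0,
# so b = Phi^T x = [1.0, 0.6].
b = [1.0, 0.6]
G = [[0.0, 0.6], [0.6, 0.0]]
print(lca(b, G))  # atom 0 wins the competition; atom 1 is driven to zero
```

The lateral inhibition is what makes the code sparse: the active unit suppresses the correlated one below threshold, so only one coefficient survives.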

Rozell, Christopher John (El Cerrito, CA); Johnson, Don Herrick (Houston, TX); Baraniuk, Richard Gordon (Houston, TX); Olshausen, Bruno A. (San Francisco, CA); Ortman, Robert Lowell (Houston, TX)

2010-08-24T23:59:59.000Z

89

Low Power Dynamic Scheduling for Computing Systems  

E-Print Network (OSTI)

This paper considers energy-aware control for a computing system with two states: "active" and "idle." In the active state, the controller chooses to perform a single task using one of multiple task processing modes. The controller then saves energy by choosing an amount of time for the system to be idle. These decisions affect processing time, energy expenditure, and an abstract attribute vector that can be used to model other criteria of interest (such as processing quality or distortion). The goal is to optimize time average system performance. Applications of this model include a smart phone that makes energy-efficient computation and transmission decisions, a computer that processes tasks subject to rate, quality, and power constraints, and a smart grid energy manager that allocates resources in reaction to a time varying energy price. The solution methodology of this paper uses the theory of optimization for renewal systems developed in our previous work. This paper is written in tutorial form and devel...
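The active/idle trade-off can be illustrated with a brute-force toy: enumerate (mode, idle-time) pairs and keep the one with the lowest time-average power that still meets a throughput floor. This is only a caricature of the paper's renewal-system optimization, and every number below is invented.

```python
# Toy sketch (not the paper's method): one task per renewal frame.
MODES = {            # mode: (seconds per task, joules per task) - assumed
    "fast": (1.0, 8.0),
    "slow": (2.0, 3.0),
}
IDLE_POWER = 0.5      # watts drawn while idle (assumed)
MIN_RATE = 0.25       # required tasks per second (assumed)

def best_policy(idle_options=(0.0, 1.0, 2.0)):
    best = None
    for mode, (t, e) in MODES.items():
        for idle in idle_options:
            cycle = t + idle                      # one renewal frame
            if 1.0 / cycle < MIN_RATE:
                continue                          # violates throughput floor
            avg_power = (e + IDLE_POWER * idle) / cycle
            if best is None or avg_power < best[0]:
                best = (avg_power, mode, idle)
    return best

print(best_policy())  # slow mode plus the longest allowed idle wins here
```

The example shows the qualitative result the abstract points at: stretching work into cheaper modes and deliberate idling lowers time-average power until the rate constraint binds.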

Neely, Michael J

2011-01-01T23:59:59.000Z

90

2009 ANNUAL REPORT electrical and computer engineering  

E-Print Network (OSTI)

efficiency, reduced cost, and the potential to consider installations of solar photovoltaic systems handheld solar battery chargers for small electronic systems. While the project was originally supported, ad hoc and sensor networks, and experimentation and protocol design for com- munication systems

Ayers, Joseph

91

Scientific computations on modern parallel vector systems  

E-Print Network (OSTI)

Computational scientists have seen a frustrating trend of stagnating application performance despite dramatic increases in the claimed peak capability of high performance computing systems. This trend has been widely attributed to the use of superscalar-based commodity components whose architectural designs offer a balance between memory performance, network capability, and execution rate that is poorly matched to the requirements of large-scale numerical computations. Recently, two innovative parallel-vector architectures have become operational: the Japanese Earth Simulator (ES) and the Cray X1. In order to quantify what these modern vector capabilities entail for the scientists that rely on modeling and simulation, it is critical to evaluate this architectural paradigm in the context of demanding computational algorithms. Our evaluation study examines four diverse scientific applications with the potential to run at ultrascale, from the areas of plasma physics, material science, astrophysics, and magnetic fusion. We compare performance between the vector-based ES and X1 and leading superscalar-based platforms: the IBM Power3/4 and the SGI Altix. Our research team was the first international group to conduct a performance evaluation study at the Earth Simulator Center; remote ES access is not available. Results demonstrate that the vector systems achieve excellent performance on our application suite, the highest of any architecture tested to date. However, vectorization of a particle-in-cell code highlights the potential difficulty of expressing irregularly structured algorithms as data-parallel programs.

Leonid Oliker; Andrew Canning; Jonathan Carter; John Shalf; Stephane Ethier

2004-01-01T23:59:59.000Z

92

INCITE Quarterly Report Policy | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

late: The ability to submit jobs for the PI and users of the late project will be disabled. If a report is more than 90 days late: The PI and users of the late project will...

93

The M.U.5 Computer System  

E-Print Network (OSTI)

Describes the design of the MU5 research computer, the aim of which has been to produce a high performance machine whose structure is well suited to the needs of modern high level languages. It is hoped that a computing speed improvement of about 20 times over the 2-3 µs instruction rate of ATLAS will be obtained. In the ten years which have elapsed between the ATLAS and MU5 projects, the speed of logic gates and main storage has increased by a factor of 8:1, and this will result in a commensurate increase in system performance. In order to approach the 20:1 performance target, however, it will be necessary to adopt extensive parallel processing techniques, and to incorporate data buffering systems to compensate for the disparity between processor and storage speeds. (11 refs).

Sumner, F H

1974-01-01T23:59:59.000Z

94

Specialized computer algebra system for application in general relativity  

E-Print Network (OSTI)

A brief characteristic of the specialized computer algebra system GRG_EC intended for symbolic computations in the field of general relativity is given.

S. Tertychniy

2007-04-11T23:59:59.000Z

95

Computerized Accident/Incident Reporting System  

NLE Websites -- All DOE Office Websites (Extended Search)

Computerized Accident/Incident Reporting System (CAIRS). CAIRS Database: The Computerized Accident/Incident Reporting System is a database used to collect and analyze DOE and DOE contractor reports of injuries, illnesses, and other accidents that occur during DOE operations. Injury and Illness Dashboard: The Dashboard provides an alternate interface to CAIRS information. The initial release of the Dashboard allows analysis of composite DOE-wide information and summary information by Program Office and site. Additional data features are under development. CAIRS Registration Form: CAIRS is a Government computer system and, as such, has security requirements that must be followed. Access to the

96

Low Power System Design Techniques for Mobile Computers  

E-Print Network (OSTI)

Portable products such as pagers, cordless and digital cellular telephones, personal audio equipment, and laptop computers are being used increasingly. Because these applications are battery powered, reducing power consumption is vital. In this report we first give the properties of low power design and techniques to exploit them at the hardware level, such as minimizing capacitance, avoiding wasteful activity, and reducing voltage and frequency. We then elaborate on low power system-design techniques in which the main themes are to avoid wasteful activity at the system level and to exploit locality of reference. Finally we review energy reduction techniques in the design of a wireless communication system, including system decomposition, communication and MAC protocols, and low power short range networks. 1 Introduction The requirement of portability of hand-held computers and portable devices places severe restrictions on size and power consumption. Even though battery technology is impr...
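The "reduce voltage and frequency" technique follows from the usual first-order model of dynamic CMOS power, P ≈ C·V²·f: for a fixed cycle count the energy is C·V²·cycles, so lowering the voltage saves energy quadratically even though the run takes longer. A quick illustration, with all values made up:

```python
# Back-of-the-envelope dynamic-power model; C, V, f values are illustrative.
def energy(c, v, f, cycles):
    power = c * v**2 * f          # watts (dynamic CMOS approximation)
    time = cycles / f             # seconds to retire the fixed workload
    return power * time           # joules = c * v^2 * cycles, independent of f

full = energy(c=1e-9, v=1.8, f=200e6, cycles=1e9)   # full speed
half = energy(c=1e-9, v=1.2, f=100e6, cycles=1e9)   # scaled voltage and frequency
print(full, half)  # → 3.24 1.44
```

The frequency cancels out of the energy, which is why voltage scaling, not frequency scaling alone, is the lever that actually extends battery life.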

Paul J.M. Havinga; Gerard J. M. Smit

1997-01-01T23:59:59.000Z

97

Kathy Yelick Co-authors NRC Report on Computer Performance -...  

NLE Websites -- All DOE Office Websites (Extended Search)

the lab's NERSC Division, was a panelist in a March 22 discussion of "The Future of Computer Performance: Game Over or Next Level?" a new report by the National Research Council....

98

National Energy Research Scientific Computing Center 2007 Annual Report  

E-Print Network (OSTI)

by the Director, Office of Science, Office of Advanced Computing for the Office of Science. A Report from the NERSC Washington, D.C.: DOE Office of Science, Vol. 1, July 30,

Hules, John A.

2008-01-01T23:59:59.000Z

99

Architecture and applications of the HEP multiprocessor computer system  

Science Conference Proceedings (OSTI)

The HEP computer system is a large scale scientific parallel computer employing shared-resource MIMD architecture. The hardware and software facilities provided by the system are described, and techniques found useful in programming the system are discussed. 3 references.

Smith, B.J.

1981-01-01T23:59:59.000Z

100

Semiotic Systems, Computers, and the Mind: How Cognition Could Be Computing  

E-Print Network (OSTI)

1 Semiotic Systems, Computers, and the Mind: How Cognition Could Be Computing William J. Rapaport Department of Computer Science and Engineering, Department of Philosophy, Department of Linguistics-2000 rapaport@buffalo.edu http://www.cse.buffalo.edu/~rapaport Keywords: computationalism, semiotic systems

Rapaport, William J.



101

The Magellan Final Report on Cloud Computing  

E-Print Network (OSTI)

J. M. Shalf, and H. Wasserman. NERSC-6 workload analysis andand E. Strohmaier. The NERSC Sustained System Performance (of 2009. Resources and research at NERSC were funded by the

Coghlan, Susan

2013-01-01T23:59:59.000Z

102

Argonne's Laboratory computing resource center : 2006 annual report.  

SciTech Connect

Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. 
Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

2007-05-31T23:59:59.000Z

103

Balanced Decomposition for Power System Simulation on Parallel Computers  

E-Print Network (OSTI)

Balanced Decomposition for Power System Simulation on Parallel Computers Felipe Morales, Hugh parallelization strategy is tested in a Parsytec computer incorpo- rating two PowerXplorer systems, each one System. 1 Introduction Power system analysis is intensive in computational terms 1 . In fact, the power

Rudnick, Hugh

104

2XIIB computer data acquisition system  

SciTech Connect

All major plasma diagnostic measurements from the 2XIIB experiment are recorded, digitized, and stored by the computer data acquisition system. The raw data is then examined, correlated, reduced, and useful portions are quickly retrieved which direct the future conduct of the plasma experiment. This is done in real time and on line while the data is current. The immediate availability of this pertinent data has accelerated the rate at which the 2XII personnel have been able to gain knowledge in the study of plasma containment and fusion interaction. The up time of the experiment is being used much more effectively than ever before. This paper describes the hardware configuration of our data system in relation to various plasma parameters measured, the advantages of powerful software routines to reduce and correlate the data, the present plans for expansion of the system, and the problems we have had to overcome in certain areas to meet our original goals. (auth)

Tyler, G.C.

1975-11-18T23:59:59.000Z

105

Call for Papers Elsevier Journal of Computer and System Sciences  

E-Print Network (OSTI)

at the 14th IEEE International Conference on High Performance Computing and Communications (HPCC-2012) and an ever-increasing demand for practice of high performance computing systems, due to the rapid growth in computing and communications technology. High performance computing systems have moved into the mainstream

Chu, Xiaowen

106

Computer Science Research: Computation Directorate  

Science Conference Proceedings (OSTI)

This report contains short papers in the following areas: large-scale scientific computation; parallel computing; general-purpose numerical algorithms; distributed operating systems and networks; knowledge-based systems; and technology information systems.

Durst, M.J. (ed.); Grupe, K.F. (ed.)

1988-01-01T23:59:59.000Z

107

Challenges: environmental design for pervasive computing systems  

Science Conference Proceedings (OSTI)

We argue that pervasive computing offers not only tremendous opportunities and exciting research challenges but also possible negative environmental impacts, particularly in terms of physical waste and energy consumption. These environmental impacts ... Keywords: environmental impacts, green computing, pervasive computing

Ravi Jain; John Wullert, II

2002-09-01T23:59:59.000Z

108

Computerized Accident Incident Reporting System  

Energy.gov (U.S. Department of Energy (DOE))

The Computerized Accident/Incident Reporting System is a database used to collect and analyze DOE and DOE contractor reports of injuries, illnesses, and other accidents that occur during DOE...

109

Build Safety into the Very Beginning of the Computer System  

Science Conference Proceedings (OSTI)

Build Safety into the Very Beginning of the Computer System. From NIST Tech Beat April 28, 2011. ...

2011-05-10T23:59:59.000Z

110

Reliability-aware scheduling strategy for heterogeneous distributed computing systems  

Science Conference Proceedings (OSTI)

Heterogeneous computing systems are promising computing platforms, since single parallel architecture based systems may not be sufficient to exploit the available parallelism with the running applications. In some cases, heterogeneous distributed computing ... Keywords: Duplication, Heterogeneous distributed systems, Precedence constrained tasks, Reliability, Scheduling algorithm
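A common ingredient in reliability-aware schedulers of this kind is an exponential failure model, where a task of duration t on a processor with failure rate λ succeeds with probability e^(−λt). A toy selection rule under that model might look like the following; the processor names, speeds, and failure rates are invented, and this is not the authors' duplication-based algorithm.

```python
import math

# processor: (speed factor, failures per hour) - assumed values
PROCS = {
    "p1": (1.0, 1e-3),   # slower but more dependable
    "p2": (2.0, 5e-3),   # faster but fails more often
}

def pick(task_hours_at_unit_speed):
    # Choose the processor maximizing exp(-lam * t), where the execution
    # time t shrinks with processor speed.
    best = None
    for name, (speed, lam) in PROCS.items():
        t = task_hours_at_unit_speed / speed
        rel = math.exp(-lam * t)
        if best is None or rel > best[1]:
            best = (name, rel)
    return best

print(pick(10.0))  # the slow, dependable processor wins for this task
```

The example captures the core tension the abstract mentions: a faster processor shortens the exposure window, but a high enough failure rate can still make it the worse choice, and duplication is one way to hedge when no single choice is reliable enough.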

Xiaoyong Tang; Kenli Li; Renfa Li; Bharadwaj Veeravalli

2010-09-01T23:59:59.000Z

111

Dynamic computation migration in DSM systems  

Science Conference Proceedings (OSTI)

We describe dynamic computation migration, the runtime choice between computation and data migration. Dynamic computation migration is useful for concurrent data structures with unpredictable read/write patterns. We implemented it in MCRL, a multithreaded ... Keywords: computation migration, data migration, replication, coherence
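The runtime choice can be caricatured as a byte-count comparison: either ship the data to the computation, or ship the computation (plus its small result) to the data. This is only a sketch of the general idea, not MCRL's actual policy; the cost model and all sizes are invented.

```python
# Hedged cost model: pick whichever direction moves fewer bytes.
def choose(data_bytes, closure_bytes, result_bytes):
    # Data migration ships the whole object to the requesting node;
    # computation migration ships the request/closure and the result back.
    data_cost = data_bytes
    comp_cost = closure_bytes + result_bytes
    return "migrate_computation" if comp_cost < data_cost else "migrate_data"

print(choose(data_bytes=64 * 1024, closure_bytes=256, result_bytes=64))  # big object
print(choose(data_bytes=128, closure_bytes=256, result_bytes=64))        # tiny object
```

For large or write-shared structures, moving the computation avoids both the bulk transfer and subsequent coherence traffic, which is the case the paper targets with its unpredictable read/write patterns.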

Wilson C. Hsieh; M. Frans Kaashoek; William E. Weihl

1996-11-01T23:59:59.000Z

112

Argonne's Laboratory Computing Resource Center : 2005 annual report.  

Science Conference Proceedings (OSTI)

Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. 
Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop comprehensive scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has begun developing a 'path forward' plan for additional computing resources.

Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

2007-06-30T23:59:59.000Z

113

HPCT Xprofiler on BG/P Systems | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

HPCT Xprofiler on BG/P Systems. References: IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P - IBM Redbook describing HPCT and other performance...

114

Computer System, Cluster, and Networking Summer Institute Program...  

NLE Websites -- All DOE Office Websites (Extended Search)

System, Cluster, and Networking Summer Institute Program Description The Computer System, Cluster, and Networking Summer Institute (CSCNSI) is a focused technical enrichment...

115

Argonne's Laboratory Computing Resource Center 2009 annual report.  

Science Conference Proceedings (OSTI)

Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research work horse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, researchers can count on Jazz to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

Bair, R. B. (CLS-CI)

2011-05-13T23:59:59.000Z

116

New and Underutilized Technology: Computer Power Management Systems |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

New and Underutilized Technology: Computer Power Management Systems. October 7, 2013 - 9:08am. The following information outlines key deployment considerations for computer power management systems within the Federal sector. Benefits: Computer power management systems include network-based software that manages computer power consumption by automatically putting computers into standby, hibernation, or another low-energy state without interfering with user productivity or IT functions. Application: Computer power management systems are applicable in most building categories with high computer counts. Key Factors for Deployment: Life-cycle cost-effectiveness studies are recommended prior to deployment. Ranking Criteria: Federal energy savings, cost-effectiveness, and probability of success are
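The core policy this record describes can be sketched in a few lines. This is a hypothetical illustration, not the software of any particular product; the idle threshold, CPU-load guard, state names, and fleet data are invented for the example.

```python
# Hypothetical sketch of the core policy in a network-based computer power
# management system: move a machine into a low-power state after a period of
# user inactivity, while leaving machines with active workloads alone.

IDLE_THRESHOLD_S = 30 * 60   # assumed: suspend after 30 minutes of inactivity
CPU_BUSY_PCT = 20.0          # assumed: never suspend a machine doing real work

def choose_power_state(idle_seconds: float, cpu_load_pct: float) -> str:
    """Return the power state a management server would request."""
    if cpu_load_pct >= CPU_BUSY_PCT:
        return "on"            # active workload: leave the machine alone
    if idle_seconds >= 2 * IDLE_THRESHOLD_S:
        return "hibernate"     # long idle: deepest low-power state
    if idle_seconds >= IDLE_THRESHOLD_S:
        return "standby"       # short idle: fast-resume state
    return "on"

# Example fleet report: (idle seconds, CPU load percent) per machine.
fleet = {"pc-01": (45 * 60, 2.0), "pc-02": (5 * 60, 3.0), "pc-03": (90 * 60, 1.0)}
states = {name: choose_power_state(idle, load) for name, (idle, load) in fleet.items()}
```

A real deployment would gather the idle and load figures over the network and issue the actual standby/hibernate commands through the operating system.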

117

Computer controlled MHD power consolidation and pulse generation system  

DOE Green Energy (OSTI)

The major goal of this research project is to establish the feasibility of a power conversion technology which will permit the direct synthesis of computer-programmable pulse power. Feasibility has been established in this project by demonstration of direct synthesis of commercial-frequency power by means of computer control. The power input to the conversion system is assumed to be a Faraday-connected MHD generator, which may be viewed as a multi-terminal dc source and is simulated for the purpose of this demonstration by a set of dc power supplies. This consolidation/inversion (CI) process will be referred to subsequently as Pulse Amplitude Synthesis and Control (PASC). A secondary goal is to deliver a controller subsystem consisting of a computer, software, and computer interface board which can serve as one of the building blocks for a possible phase II prototype system. This report summarizes the accomplishments and covers the high points of the two-year project. 6 refs., 41 figs.
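The idea behind amplitude synthesis from multi-terminal dc sources can be illustrated very simply: at each instant, switch in the available dc level closest to the desired sinusoidal value. This is only a toy sketch of the concept, not the project's controller code; the level set and sample count are invented.

```python
# Toy illustration of pulse-amplitude synthesis: approximate one period of a
# commercial-frequency sine wave by switching among a small set of fixed,
# normalized DC source levels, as a computer-controlled inverter would.
import math

def one_period(levels, n_samples):
    """Nearest available DC level at each of n_samples points of one sine period."""
    return [min(levels, key=lambda v: abs(v - math.sin(2 * math.pi * k / n_samples)))
            for k in range(n_samples)]

# Five symmetric, normalized DC levels standing in for the dc power supplies.
wave = one_period([-1.0, -0.5, 0.0, 0.5, 1.0], 12)
```

More dc terminals give finer amplitude steps and a lower-distortion synthesized waveform, which is the motivation for consolidating the many Faraday electrodes.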

Johnson, R.; Marcotte, K.; Donnelly, M.

1990-01-01T23:59:59.000Z

118

Honeywell modular automation system computer software documentation  

SciTech Connect

The purpose of this Computer Software Document (CSWD) is to provide configuration control of the Honeywell Modular Automation System (MAS) in use at the Plutonium Finishing Plant (PFP). The Honeywell MAS is used to control the thermal stabilization furnaces in glovebox HA-211. The PFP-developed software is being updated to reflect the Polycube Processing and Unwashed Salt Thermal Stabilization program addition. The polycube processing program was installed per HNF-FMP-02-11162-R2. The functional test of the program was performed in JCS work package 22-02-1031. The unwashed salt item program was installed per HNF-FMP-03-16577-RO. The functional test of the program was completed in JCS work package 22-03-00654.

STUBBS, A.M.

2003-07-02T23:59:59.000Z

119

NERSC System Reports  

NLE Websites -- All DOE Office Websites (Extended Search)

Usage Reports. Batch Job Statistics: see queue wait times, hours used, top users, and other summary statistics for jobs run at NERSC (login required). Parallel Job Statistics (Cray aprun). Hopper Hours Used: hours used per day on Hopper. Edison Hours Used: hours used per day on Edison. Carver Hours Used: hours used per day on Carver. Historical Data. Hopper Job Size Charts: this chart shows the fraction of hours used on Hopper in each of 5 job-core-size bins (2013, 2012, 2011), and the fraction of hours used on Hopper by jobs using greater than 16,384 cores (2013, 2012). Edison Job Size Charts: this chart shows the fraction of hours used on Edison in each of 5
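The job-size-chart statistic described above is straightforward to compute from accounting records. The sketch below is illustrative only; the bin edges are assumptions, not NERSC's actual ones.

```python
# Minimal sketch of a "job size chart" statistic: the fraction of delivered
# core-hours that falls in each job-core-size bin.

BIN_EDGES = [1, 1024, 4096, 16384, 65536]   # assumed lower bound of each bin (cores)

def hours_by_bin(jobs):
    """jobs: iterable of (cores, core_hours). Returns the fraction per bin."""
    totals = [0.0] * len(BIN_EDGES)
    for cores, core_hours in jobs:
        # last bin whose lower edge does not exceed the job's core count
        idx = max(i for i, edge in enumerate(BIN_EDGES) if cores >= edge)
        totals[idx] += core_hours
    grand = sum(totals)
    return [t / grand for t in totals] if grand else totals

fractions = hours_by_bin([(512, 100.0), (2048, 300.0), (20000, 600.0)])
```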

120

CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT  

Science Conference Proceedings (OSTI)

High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4.To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

Corones, James [Krell Institute

2013-09-23T23:59:59.000Z

Note: This page contains sample records for the topic "reporting computer system" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


121

Computer simulation of wind/diesel system operation  

DOE Green Energy (OSTI)

This document reports on a computer code, SOLSTOR W/D, that determines --- for a site's wind energy resources, load requirements, and economic constraints --- the components and sizes for a wind/diesel system that result in the lowest cost of energy. Wind/diesel systems are defined here as electricity generation stations in the 50-kW to 1-MW range that (1) are not connected to another electricity network, (2) use wind energy as the first source of supply to meet demand, and (3) contain sufficient energy storage and/or backup diesel electric generators to compensate for lapses in wind energy. For the same input load, the computer code also determines the best number and size of diesel generators for an isolated diesel-only system, so that comparisons between wind/diesel systems and diesel-only systems can be made. SOLSTOR W/D provides a systematic method to show whether wind/diesel systems can be an attractive means of saving fossil fuel without significantly affecting electricity quality or production cost. 12 refs., 66 figs., 5 tabs.
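The cost-of-energy comparison at the heart of such a code reduces, in its simplest form, to a levelized-cost calculation for each configuration. The numbers below are invented for illustration; the real code also models the wind resource, storage, and dispatch in detail.

```python
# Toy levelized-cost-of-energy comparison in the spirit of SOLSTOR W/D:
# compare a wind/diesel configuration with a diesel-only one serving the
# same annual load. Capital costs are assumed to be already annualized.

def lcoe(annual_capital_usd, annual_fuel_usd, annual_kwh):
    """Levelized cost of energy in $/kWh."""
    return (annual_capital_usd + annual_fuel_usd) / annual_kwh

# Invented figures: wind raises capital cost but displaces diesel fuel.
diesel_only = lcoe(annual_capital_usd=15_000, annual_fuel_usd=85_000, annual_kwh=400_000)
wind_diesel = lcoe(annual_capital_usd=40_000, annual_fuel_usd=35_000, annual_kwh=400_000)
wind_wins = wind_diesel < diesel_only
```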

Not Available

1989-09-01T23:59:59.000Z

122

List scheduling with duplication for heterogeneous computing systems  

Science Conference Proceedings (OSTI)

Effective task scheduling is essential for obtaining high performance in heterogeneous computing systems (HCS). However, finding an effective task schedule in HCS requires consideration of the heterogeneity of computation and communication. To solve ... Keywords: DAG, Duplication, Heterogeneous computing systems, List scheduling
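The "list" half of list scheduling is commonly built on an upward rank: each task's mean compute cost plus the longest path (including communication) to the DAG exit, with tasks scheduled in decreasing rank order. The sketch below shows only that ranking phase under invented costs; the duplication and processor-selection phases of the paper's algorithm are omitted.

```python
# Upward-rank computation for list scheduling on a DAG:
# rank_u(t) = w(t) + max over successors s of (comm(t, s) + rank_u(s)).

def upward_rank(task, succ, mean_cost, comm, memo=None):
    """Recursively compute the upward rank of a task, memoized."""
    memo = {} if memo is None else memo
    if task in memo:
        return memo[task]
    tails = [comm.get((task, s), 0) + upward_rank(s, succ, mean_cost, comm, memo)
             for s in succ.get(task, [])]
    memo[task] = mean_cost[task] + (max(tails) if tails else 0)
    return memo[task]

# Invented example: a simple diamond DAG A -> {B, C} -> D.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
mean_cost = {"A": 10, "B": 5, "C": 8, "D": 3}
comm = {("A", "B"): 2, ("A", "C"): 1, ("B", "D"): 4, ("C", "D"): 2}
order = sorted(mean_cost, key=lambda t: -upward_rank(t, succ, mean_cost, comm))
```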

Xiaoyong Tang; Kenli Li; Guiping Liao; Renfa Li

2010-04-01T23:59:59.000Z

123

TECHNICAL REPORT: Computer Simulation of Selected Windows and Doors  

E-Print Network (OSTI)

TECHNICAL REPORT: Computer Simulation of Selected Windows and Doors According to CEN Method. Five products were simulated: 3 windows and 2 doors. Of the three windows, one was a casement and two were double-hung operator types. The two double-hung windows were essentially the same, except for the sill, which was made

Massachusetts at Amherst, University of

124

PHYSICS, COMPUTER SCIENCE AND MATHEMATICS DIVISION. ANNUAL REPORT. 1 JANUARY - 31 DECEMBER 1979  

E-Print Network (OSTI)

Johnston, and N. Johnston. Computer Graphics 13, 2 (SIGGRAPH) ... Colonias, 1979, UCRL-52824, Computer Science and Applied ... 1979, LBL-9504, Synch: A Computer System for Synchrotron

Lepore Editor, J.V.

2010-01-01T23:59:59.000Z

125

Computer Aided Composition System with Interactive Selective Population Climbing  

Science Conference Proceedings (OSTI)

In this work, we are developing a computer-aided composition system. The system aids a person in composing, for example, cellphone ringtones or background music for home pages or software. The system is implemented with interactive selective population climbing. We suppose ... Keywords: computer aided composition system, interactive selective population climbing, composing model

Hiroshi Hasui

2009-03-01T23:59:59.000Z

126

Compiler-based Memory Optimizations for High Performance Computing Systems.  

E-Print Network (OSTI)

??Parallelism has always been the primary method to achieve higher performance. To advance the computational capabilities of state-of-the-art high performance computing systems, we continue to (more)

Kultursay, Emre

2013-01-01T23:59:59.000Z

127

BOINC: A System for Public-Resource Computing and Storage  

Science Conference Proceedings (OSTI)

BOINC (Berkeley Open Infrastructure for Network Computing) is a software system that makes it easy for scientists to create and operate public-resource computing projects. It supports diverse applications, including those with large storage or communication ...

David P. Anderson

2004-11-01T23:59:59.000Z

128

Transition to cloud computing in healthcare information systems  

E-Print Network (OSTI)

This thesis is a study on the adoption of cloud computing in healthcare information technology industry. It provides a guideline for people who are trying to bring cloud computing into healthcare information systems through ...

Ren, Haiying, S.M. Massachusetts Institute of Technology

2012-01-01T23:59:59.000Z

129

C++ programming techniques for High Performance Computing on systems with  

E-Print Network (OSTI)

C++ programming techniques for High Performance Computing on systems with non-uniform memory access (including ccNUMA) without sacrificing performance. In High Performance Computing (HPC), shared-memory

Sanderson, Yasmine

130

Computer system performance problem detection using time series models  

Science Conference Proceedings (OSTI)

Computer systems require monitoring to detect performance anomalies such as runaway processes, but problem detection and diagnosis is a complex task requiring skilled attention. Although human attention was never ideal for this task, as networks of computers ...
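The paper's premise, flagging performance anomalies such as runaway processes when a metric departs from what a time-series model predicts, can be illustrated with the simplest possible model: a trailing mean and standard deviation window. The window size, threshold, and data below are assumptions for illustration; the actual work uses richer statistical models.

```python
# Minimal time-series anomaly detector: flag a sample that deviates more than
# n_sigma standard deviations from the trailing window of recent samples.
import statistics

def anomalies(series, window=5, n_sigma=3.0):
    """Return the indices of samples flagged as anomalous."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.stdev(hist)
        if sigma > 0 and abs(series[i] - mu) > n_sigma * sigma:
            flagged.append(i)
    return flagged

# Invented CPU-usage trace: sample 7 is a runaway-process spike.
cpu = [10, 11, 9, 10, 12, 11, 10, 95, 11, 10]
spikes = anomalies(cpu)
```

Note one characteristic weakness the simple model shares with fancier ones: once the spike enters the trailing window, it inflates the standard deviation and masks later deviations, which is part of why diagnosis remains hard.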

Peter Hoogenboom; Jay Lepreau

1993-06-01T23:59:59.000Z

131

Protecting Computer Systems Against Power Transients  

Science Conference Proceedings (OSTI)

... protection may be more important than some data loss. ... by a telephone or other network link to ... are buffered by the computer's power supply but ...

2013-05-17T23:59:59.000Z

132

Neumann Receives Computer System Security Award  

Science Conference Proceedings (OSTI)

... in the area of information security and assurance. ... significant long-term contributions to computer security ... trade, and improve the quality of life. ...

2012-12-13T23:59:59.000Z

133

No Pervasive Computing without Intelligent Systems  

Science Conference Proceedings (OSTI)

For pervasive computing ideas to reach, and be used by, the general public, they should fulfil an unmet need. Automation of manual tasks in the home, car, or at work provides a rich environment for new services and applications. Pervasive computing can ...

S. G. Thompson; B. Azvine

2004-07-01T23:59:59.000Z

134

Selective inductive powering system for paper computing  

Science Conference Proceedings (OSTI)

We present a method of selective wireless power transfer for paper computing. The novelty of this method lies in the fact that the power transmitter can be controlled to selectively activate different receivers in the context of wireless power transfer with ... Keywords: paper computing, selective wireless power

Kening Zhu; Hideaki Nii; Owen Noel Newton Fernando; Adrian David Cheok

2011-11-01T23:59:59.000Z

135

Overview of ASC Capability Computing System Governance Model  

SciTech Connect

This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

Doebling, Scott W. [Los Alamos National Laboratory

2012-07-11T23:59:59.000Z

136

A survey of computer systems for expressive music performance  

Science Conference Proceedings (OSTI)

We present a survey of research into automated and semi-automated computer systems for expressive performance of music. We examine the motivation for such systems and then review the majority of the systems developed over the last 25 years. To highlight ... Keywords: Music performance, computer music, generative performance, machine learning

Alexis Kirke; Eduardo Reck Miranda

2009-12-01T23:59:59.000Z

137

Virtual Data System on distributed virtual machines in computational grids  

Science Conference Proceedings (OSTI)

This paper presents the work of building a Grid workflow system on distributed virtual machines. A Grid Virtualisation Engine (GVE) is implemented to manage virtual machines as computing resources for Grid applications. The Virtual Data System ... Keywords: compact muon solenoid, distributed virtual machines, grid computing, grid workflow, high energy physics applications, virtual data systems

Lizhe Wang; Gregor Von Laszewski; Jie Tao; Marcel Kunze

2010-09-01T23:59:59.000Z


139

SPECTR System Operational Test Report  

Science Conference Proceedings (OSTI)

This report provides an overview of the installation of the Small Pressure Cycling Test Rig (SPECTR) and documents the system operational testing performed to demonstrate that it meets the requirements for operation. The system operational testing involved operating the furnace system at design conditions and demonstrating the test article gas supply system using a simulated test article. The furnace and test article systems were demonstrated to meet the design requirements for the Next Generation Nuclear Plant. Therefore, the system is deemed acceptable and is ready for actual test article testing.

W.H. Landman Jr.

2011-08-01T23:59:59.000Z

140

6.823 Computer System Architecture, Spring 2002  

E-Print Network (OSTI)

Emphasizes the relationship among technology, hardware organization, and programming systems in the evolution of computer architecture. Pipelined, out-of-order, and speculative execution. Superscaler, VLIW, vector, and ...

Asanovic, Krste



141

Computational properties of argument systems satisfying graph-theoretic constraints  

Science Conference Proceedings (OSTI)

One difficulty that arises in abstract argument systems is that many natural questions regarding argument acceptability are, in general, computationally intractable having been classified as complete for classes such as np, co-np, and ... Keywords: Argumentation frameworks, Computational complexity, Computational properties of argumentation

Paul E. Dunne

2007-07-01T23:59:59.000Z

142

BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III  

Science Conference Proceedings (OSTI)

This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.

Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.

1981-06-01T23:59:59.000Z

143

Srinivasan Named Head of NERSC's Computational Systems Group  

NLE Websites -- All DOE Office Websites (Extended Search)

Srinivasan Named Head of NERSC's Computational Systems Group. August 31, 2011 | Tags: NERSC. Jay Srinivasan has been selected as the Computational Systems Group Lead in the NERSC Systems Department. In this role, he will supervise the day-to-day operation of all of NERSC's computer systems. Prior to taking on his new assignment, Srinivasan was the team lead for the PDSF cluster that supports Nuclear Physics and High Energy Physics. Srinivasan has more than 15 years of experience in high performance computing, both as a user and administrator. Since joining NERSC in 2001, he has worked on all the large systems from NERSC-3, the IBM/SP2 system called Seaborg, to Hopper, the Cray XE6 that is currently NERSC's

144

Functional requirements for gas characterization system computer software  

DOE Green Energy (OSTI)

This document provides the Functional Requirements for the Computer Software operating the Gas Characterization System (GCS), which monitors the combustible gases in the vapor space of selected tanks. Necessary computer functions are defined to support design, testing, operation, and change control. The GCS requires several individual computers to address the control and data acquisition functions of instruments and sensors. These computers are networked for communication, and must multi-task to accommodate operation in parallel.

Tate, D.D.

1996-01-01T23:59:59.000Z

145

Surveyor / Gadzooks File Systems | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Intrepid / Challenger / Surveyor; Decommissioning of BGP Systems and Resources; Introducing Challenger; Quick Reference Guide; System Overview; Data Transfer; Data Storage & File Systems...

146

Data analysis using the Gnu R system for statistical computation  

Science Conference Proceedings (OSTI)

R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.
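The report's chi-square minimization fits can be illustrated in a few lines. The sketch below is in Python rather than R, since the idea is language-neutral: for a straight-line model y = a + b*x with per-point errors sigma_i, chi-square is the sum of ((y_i - a - b*x_i)/sigma_i)^2, and its minimum has a closed form (weighted least squares). The data are invented; real lattice fits use correlated errors and nonlinear models.

```python
# Closed-form chi-square minimization for a straight-line model y = a + b*x
# with independent per-point errors (standard weighted least squares).

def fit_line(xs, ys, sigmas):
    """Minimize chi^2 for y = a + b*x; returns (a, b, chi2 at the minimum)."""
    w = [1.0 / s**2 for s in sigmas]
    S = sum(w)
    Sx = sum(wi * x for wi, x in zip(w, xs))
    Sy = sum(wi * y for wi, y in zip(w, ys))
    Sxx = sum(wi * x * x for wi, x in zip(w, xs))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    delta = S * Sxx - Sx**2
    a = (Sxx * Sy - Sx * Sxy) / delta
    b = (S * Sxy - Sx * Sy) / delta
    chi2 = sum(wi * (y - a - b * x) ** 2 for wi, x, y in zip(w, xs, ys))
    return a, b, chi2

# Invented data lying exactly on y = 1 + 2x, each point with error 0.1.
a, b, chi2 = fit_line([0, 1, 2, 3], [1.0, 3.0, 5.0, 7.0], [0.1] * 4)
```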

Simone, James; /Fermilab

2011-07-01T23:59:59.000Z

147

Inspection Report, INSPECTION OF SURPLUS COMPUTER EQUIPMENTMANAGEMENT AT THE SAVANNAH RIVER SITE, DOE/IG-0472  

Energy.gov (U.S. Department of Energy (DOE))

By letter dated November 1, 1999, Senator Strom Thurmond advised the Office of Inspector General of an allegation that computer equipment containing over 40 computer hard drives reportedly...

148

Computational Systems & Software Environment | National Nuclear Security  

National Nuclear Security Administration (NNSA)

Computational Systems & Software Environment (CSSE) | National Nuclear Security Administration. Our Programs > Defense Programs > Future Science & Technology Programs > Office of Advanced Simulation and Computing and

149

15.094 Systems Optimization: Models and Computation, Spring 2002  

E-Print Network (OSTI)

A computational and application-oriented introduction to the modeling of large-scale systems in a wide variety of decision-making domains and the optimization of such systems using state-of-the-art optimization software. ...

Freund, Robert Michael

150

Tracking Mobile Users Using User Locality in Mobile Computing Systems  

Science Conference Proceedings (OSTI)

Managing location information of mobile terminals is an important issue in mobile computing systems. The IS-41 and GSM schemes perform inefficiently in the following situations: 1) mobile terminals frequently move to neighboring registration areas, ... Keywords: location update, location query, user locality, Mobile Computing Systems
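The user-locality idea the title refers to can be sketched simply: instead of updating the home register on every move, keep a small working set of recently visited registration areas and skip the costly home update when the user moves within that set. The class below is a hypothetical illustration; the working-set size and eviction rule are assumptions, not the paper's exact scheme.

```python
# Sketch of locality-aware location management: home-register (HLR) updates
# are performed only when a mobile enters a registration area outside its
# current working set of recently visited areas.

class LocalityTracker:
    def __init__(self, max_areas=3):
        self.max_areas = max_areas
        self.working_set = []        # most-recent-first registration areas
        self.home_updates = 0        # count of costly home-register updates

    def move_to(self, area):
        if area in self.working_set:
            self.working_set.remove(area)   # local move: no home update needed
        else:
            self.home_updates += 1          # new area: home register notified
            if len(self.working_set) >= self.max_areas:
                self.working_set.pop()      # evict the least recent area
        self.working_set.insert(0, area)

# A user ping-ponging between neighboring areas A and B, then entering C:
t = LocalityTracker()
for area in ["A", "B", "A", "B", "A", "C"]:
    t.move_to(area)
```

Six moves trigger only three home updates here, whereas a basic update-on-every-move scheme would trigger six.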

1999-09-01T23:59:59.000Z

151

National Energy Research Scientific Computing Center 2007 Annual Report  

E-Print Network (OSTI)

's Office of Advanced Scientific Computing Research, which ... Office of Advanced Scientific Computing Research ... The primary ... of the Advanced Scientific Computing Research (ASCR) program

Hules, John A.

2008-01-01T23:59:59.000Z

152

National Energy Research Scientific Computing Center 2007 Annual Report  

E-Print Network (OSTI)

and Directions in High Performance Computing for the Office ... in the evolution of high performance computing and networks. ... Hectopascals ... High performance computing ... High Performance

Hules, John A.

2008-01-01T23:59:59.000Z

153

The Argonne Leadership Computing Facility 2010 annual report.  

SciTech Connect

Researchers found more ways than ever to conduct transformative science at the Argonne Leadership Computing Facility (ALCF) in 2010. Both familiar initiatives and innovative new programs at the ALCF are now serving a growing, global user community with a wide range of computing needs. The Department of Energy's (DOE) INCITE Program remained vital in providing scientists with major allocations of leadership-class computing resources at the ALCF. For calendar year 2011, 35 projects were awarded 732 million supercomputer processor-hours for computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. Argonne also continued to provide Director's Discretionary allocations - 'start up' awards - for potential future INCITE projects. And DOE's new ASCR Leadership Computing (ALCC) Program allocated resources to 10 ALCF projects, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission, national emergencies, or for broadening the research community capable of using leadership computing resources. While delivering more science today, we've also been laying a solid foundation for high performance computing in the future. After a successful DOE Lehman review, a contract was signed to deliver Mira, the next-generation Blue Gene/Q system, to the ALCF in 2012. The ALCF is working with the 16 projects that were selected for the Early Science Program (ESP) to enable them to be productive as soon as Mira is operational. Preproduction access to Mira will enable ESP projects to adapt their codes to its architecture and collaborate with ALCF staff in shaking down the new system. We expect the 10-petaflops system to stoke economic growth and improve U.S. competitiveness in key areas such as advancing clean energy and addressing global climate change. 
Ultimately, we envision Mira as a stepping-stone to exascale-class computers that will be faster than petascale-class computers by a factor of a thousand. Pete Beckman, who served as the ALCF's Director for the past few years, has been named director of the newly created Exascale Technology and Computing Institute (ETCi). The institute will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems. Just as Pete's leadership propelled the ALCF to great success, we know that ETCi will benefit immensely from his expertise and experience. Without question, the future of supercomputing is certainly in good hands. I would like to thank Pete for all his effort over the past two years, during which he oversaw the establishment of ALCF2, the deployment of the Magellan project, and increases in utilization, availability, and the number of projects using ALCF1. He managed the rapid growth of ALCF staff and made the facility what it is today. All the staff and users are better for Pete's efforts.

Drugan, C. (LCF)

2011-05-09T23:59:59.000Z

154

Computer Science and Computer Information Systems Faculty: M. Branton, H. ElAarag, D. Plante, H. Pulapaka  

E-Print Network (OSTI)

Computer Science and Computer Information Systems Faculty: M. Branton, H. ElAarag, D. Plante, H. Pulapaka The computer science major at Stetson University provides students a flexible curriculum where they can concentrate in one of two defined degree tracks, Computer Science and Computer Information Systems

Miles, Will

155

From Power Laws to Power Grids: A Mathematical and Computational Foundation for Complex Interactive Networks: EPRI/DoD Complex Inter active Networks/Systems Initiative: Second Annual Report  

Science Conference Proceedings (OSTI)

This report details the second-year research accomplishments for one of six research consortia established under the Complex Interactive Networks/Systems Initiative. This particular report focuses on understanding the behavior of large-scale complex interactive networks and investigating their mathematical underpinnings.

2001-06-21T23:59:59.000Z

156

Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.  

Science Conference Proceedings (OSTI)

The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, thanks to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms.
The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance, post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

2012-08-16T23:59:59.000Z

157

Measured Performance of Energy-Efficient Computer Systems  

E-Print Network (OSTI)

The intent of this study is to explore the potential performance of both Energy Star computers/printers and add-on control devices individually, and their expected savings if collectively applied in a typical office building in a hot and humid climate. Recent surveys have shown that the use of personal computer systems in commercial office buildings is expanding rapidly. The energy consumption of such a growing end-use also has a significant impact on the total building power demand. In warmer climates, office equipment energy use has important implications for building cooling loads as well as those directly associated with computing tasks. Recently, the Environmental Protection Agency (EPA) has developed an Energy Star (ES) rating system intended to endorse more efficient equipment. To research the comparative performance of conventional and low-energy computer systems, four Energy Star computer systems and two computer systems equipped with energy saving devices were monitored for power demand. Comparative data on the test results are summarized. In addition, a brief analysis uses the DOE-2.1E computer simulation to examine the impact of the test results and HVAC interactions if generically applied to computer systems in a modern office building in Florida's climate.
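The savings question this study examines reduces, at its simplest, to back-of-envelope energy arithmetic: compare the annual kilowatt-hours of an always-on machine with one that drops to a low-power state when idle. All wattages, hours, and the tariff below are illustrative assumptions, not the study's measured values.

```python
# Back-of-envelope annual energy comparison: always-on PC vs. one that drops
# to a low-power idle state outside active-use hours.

HOURS_PER_YEAR = 8760

def annual_kwh(active_w, idle_w, active_hours):
    """Annual consumption in kWh given active/idle wattages and active hours."""
    return (active_w * active_hours + idle_w * (HOURS_PER_YEAR - active_hours)) / 1000.0

always_on = annual_kwh(active_w=120, idle_w=120, active_hours=2000)  # never sleeps
managed = annual_kwh(active_w=120, idle_w=5, active_hours=2000)      # sleeps at 5 W
saved_kwh = always_on - managed
saved_usd = saved_kwh * 0.10   # at an assumed $0.10/kWh tariff
```

In a hot and humid climate the real benefit is larger than this direct figure, since every saved watt of equipment load also reduces the building's cooling load, which is why the study couples the measurements to a DOE-2.1E simulation.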

Floyd, D. B.; Parker, D. S.

1996-01-01T23:59:59.000Z

158

Computer Science Research Institute 2005 annual report of activities.  

SciTech Connect

This report summarizes the activities of the Computer Science Research Institute (CSRI) at Sandia National Laboratories during the period January 1, 2005 to December 31, 2005. During this period, the CSRI hosted 182 visitors representing 83 universities, companies and laboratories. Of these, 60 were summer students or faculty. The CSRI partially sponsored 2 workshops and also organized and was the primary host for 3 workshops. These 3 CSRI sponsored workshops had 105 participants, 78 from universities, companies and laboratories, and 27 from Sandia. Finally, the CSRI sponsored 12 long-term collaborative research projects and 3 Sabbaticals.

Watts, Bernadette M.; Collis, Samuel Scott; Ceballos, Deanna Rose; Womble, David Eugene

2008-04-01T23:59:59.000Z

159

Computer Science Research Institute 2004 annual report of activities.  

Science Conference Proceedings (OSTI)

This report summarizes the activities of the Computer Science Research Institute (CSRI) at Sandia National Laboratories during the period January 1, 2004 to December 31, 2004. During this period the CSRI hosted 166 visitors representing 81 universities, companies and laboratories. Of these 65 were summer students or faculty. The CSRI partially sponsored 2 workshops and also organized and was the primary host for 4 workshops. These 4 CSRI sponsored workshops had 140 participants--74 from universities, companies and laboratories, and 66 from Sandia. Finally, the CSRI sponsored 14 long-term collaborative research projects and 5 Sabbaticals.

DeLap, Barbara J.; Womble, David Eugene; Ceballos, Deanna Rose

2006-03-01T23:59:59.000Z

160

Computer Science Research Institute 2003 annual report of activities.  

SciTech Connect

This report summarizes the activities of the Computer Science Research Institute (CSRI) at Sandia National Laboratories during the period January 1, 2003 to December 31, 2003. During this period the CSRI hosted 164 visitors representing 78 universities, companies and laboratories. Of these 78 were summer students or faculty members. The CSRI partially sponsored 5 workshops and also organized and was the primary host for 3 workshops. These 3 CSRI sponsored workshops had 178 participants--137 from universities, companies and laboratories, and 41 from Sandia. Finally, the CSRI sponsored 18 long-term collaborative research projects and 5 Sabbaticals.

DeLap, Barbara J.; Womble, David Eugene; Ceballos, Deanna Rose

2006-03-01T23:59:59.000Z



161

The NSTX Trouble Reporting System  

Science Conference Proceedings (OSTI)

An online Trouble Reporting System (TRS) has been introduced at the National Spherical Torus Experiment (NSTX). The TRS is used by NSTX operators to report problems that affect NSTX operations. The purpose of the TRS is to enhance NSTX reliability and maintainability by identifying components, occurrences, and trends that contribute to machine downtime. All NSTX personnel have access to the TRS. The user interface is via a web browser, such as Netscape or Internet Explorer. This web-based design permits access to the TRS from any X-terminal, PC, or Mac. The TRS is based upon a trouble reporting system developed at the DIII-D Tokamak at General Atomics Technologies. This paper will provide a detailed description of the TRS software architecture, user interface, MS SQL server interface, and operational experiences. In addition, sample data from the TRS database will be summarized and presented.

S. Sengupta; G. Oliaro

2002-01-28T23:59:59.000Z

162

Multiscale analysis of nonlinear systems using computational homology  

DOE Green Energy (OSTI)

This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists the major directions of research that were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (a) a clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (b) an investigation of homology as a probe for flow dynamics, and (c) the construction of a new convection apparatus for probing the effects of large aspect ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - Two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology.
(5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - we have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - we have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.
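In its simplest form, the homology analysis described above amounts to counting topological features of sublevel sets of a sampled field. The sketch below (a minimal illustration, not the project's actual software, which handles higher Betti numbers and full cubical complexes) computes the zeroth Betti number, i.e. the number of connected components, of a thresholded 2-D grid via union-find:

```python
import numpy as np

def betti0_sublevel(field, threshold):
    """Count connected components (zeroth Betti number) of the sublevel set
    {field <= threshold} on a 2-D grid, using 4-connectivity and union-find."""
    mask = field <= threshold
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    rows, cols = mask.shape
    for i in range(rows):
        for j in range(cols):
            if mask[i, j]:
                parent.setdefault((i, j), (i, j))
                # Merge with already-visited north and west neighbors
                if i > 0 and mask[i - 1, j]:
                    union((i, j), (i - 1, j))
                if j > 0 and mask[i, j - 1]:
                    union((i, j), (i, j - 1))
    return len({find(p) for p in parent})

# Two separate "wells" in a sampled function give two components at a low
# threshold, and a single component once the threshold joins them.
x = np.linspace(-3, 3, 60)
X, Y = np.meshgrid(x, x)
f = np.minimum((X - 1.5) ** 2 + Y ** 2, (X + 1.5) ** 2 + Y ** 2)
print(betti0_sublevel(f, 0.5))   # -> 2
print(betti0_sublevel(f, 10.0))  # -> 1
```

Tracking how such counts change with the threshold (or with time) is the kind of topological signature the projects above use to distinguish flow patterns and microstructures.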

Konstantin Mischaikow, Rutgers University / Georgia Institute of Technology; Michael Schatz, Georgia Institute of Technology; William Kalies, Florida Atlantic University; Thomas Wanner, George Mason University

2010-05-19T23:59:59.000Z

163

Multiscale analysis of nonlinear systems using computational homology  

DOE Green Energy (OSTI)

This is a collaborative project between the principal investigators. However, as is to be expected, different PIs have greater focus on different aspects of the project. This report lists the major directions of research that were pursued during the funding period: (1) Computational Homology in Fluids - For the computational homology effort in thermal convection, the focus of the work during the first two years of the funding period included: (a) a clear demonstration that homology can sensitively detect the presence or absence of an important flow symmetry, (b) an investigation of homology as a probe for flow dynamics, and (c) the construction of a new convection apparatus for probing the effects of large aspect ratio. (2) Computational Homology in Cardiac Dynamics - We have initiated an effort to test the use of homology in characterizing data from both laboratory experiments and numerical simulations of arrhythmia in the heart. Recently, the use of high speed, high sensitivity digital imaging in conjunction with voltage sensitive fluorescent dyes has enabled researchers to visualize electrical activity on the surface of cardiac tissue, both in vitro and in vivo. (3) Magnetohydrodynamics - A new research direction is to use computational homology to analyze results of large scale simulations of 2D turbulence in the presence of magnetic fields. Such simulations are relevant to the dynamics of black hole accretion disks. The complex flow patterns from simulations exhibit strong qualitative changes as a function of magnetic field strength. Efforts to characterize the pattern changes using Fourier methods and wavelet analysis have been unsuccessful. (4) Granular Flow - Two experts in the area of granular media are studying 2D model experiments of earthquake dynamics where the stress fields can be measured; these stress fields form complex patterns of 'force chains' that may be amenable to analysis using computational homology.
(5) Microstructure Characterization - We extended our previous work on studying the time evolution of patterns associated with phase separation in conserved concentration fields. (6) Probabilistic Homology Validation - work on microstructure characterization is based on numerically studying the homology of certain sublevel sets of a function, whose evolution is described by deterministic or stochastic evolution equations. (7) Computational Homology and Dynamics - Topological methods can be used to rigorously describe the dynamics of nonlinear systems. We are approaching this problem from several perspectives and through a variety of systems. (8) Stress Networks in Polycrystals - we have characterized stress networks in polycrystals. This part of the project is aimed at developing homological metrics which can aid in distinguishing not only microstructures, but also derived mechanical response fields. (9) Microstructure-Controlled Drug Release - This part of the project is concerned with the development of topological metrics in the context of controlled drug delivery systems, such as drug-eluting stents. We are particularly interested in developing metrics which can be used to link the processing stage to the resulting microstructure, and ultimately to the achieved system response in terms of drug release profiles. (10) Microstructure of Fuel Cells - we have been using our computational homology software to analyze the topological structure of the void, metal and ceramic components of a Solid Oxide Fuel Cell.

Konstantin Mischaikow; Michael Schatz; William Kalies; Thomas Wanner

2010-05-24T23:59:59.000Z

164

High Performance Computing Systems and Applications edited by Nikitas J. Dimopoulos; Dept. of Electrical and Computer Engineering, University of  

E-Print Network (OSTI)

Proceedings of the symposium on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. Published November 2001; hardbound; 544 pp.; ISBN 0-7923-7617-X.

Baranoski, Gladimir V. G.

165

Parallel Computing Environments and Methods for Power Distribution System Simulation  

SciTech Connect

The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory, and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

2005-11-10T23:59:59.000Z

166

Toward a new metric for ranking high performance computing systems.  

SciTech Connect

The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate gradient (HPCG) benchmark. HPCG is composed of computations and data access patterns more commonly found in applications. Using HPCG we strive for a better correlation to real scientific application performance and expect to drive computer system design and implementation in directions that will better impact performance improvement.
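The conjugate gradient iteration at the heart of HPCG can be sketched in a few lines. The following is a minimal dense-matrix illustration in Python; the actual benchmark uses a sparse 27-point stencil, a multigrid preconditioner, MPI, and OpenMP, none of which are shown here:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A via conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example: a small SPD system (a 1-D Poisson-like stencil, a toy stand-in for
# the kind of sparse operator HPCG exercises at scale)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # -> True
```

Unlike HPL's dense factorization, each CG iteration is dominated by a sparse matrix-vector product and vector reductions, which is why HPCG stresses memory bandwidth and network latency rather than floating-point peak.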

Heroux, Michael Allen; Dongarra, Jack (University of Tennessee, Knoxville, TN)

2013-06-01T23:59:59.000Z

167

Computer-aided engineering annual report for calendar year 1989  

Science Conference Proceedings (OSTI)

During calendar year 1989, EG&G Idaho completed the initial procurement and implementation of a major new Computer-Aided Engineering (CAE) system. Seventy new workstations and associated engineering applications were installed, over 100 personal computers (PCs) were integrated into the environment, and communications links to the IBM mainframes and the Cray supercomputer were established. The system achieves integration through sophisticated data communications and application interfaces that allow data sharing across the entire environment. Applications available on the system support engineering work related to full three-dimensional (3-D) piping, heating/ventilating/air conditioning (HVAC), structural and steel design, solids modeling and analysis, desktop publishing, and design and drafting, and include automated links to various analysis codes on the Cray supercomputer. The system also provides commonly used engineering tools such as spreadsheets, language compilers, terminal emulation, and file transfer facilities. Although difficult to quantify, recent information has shown that the projected annual productivity improvement to the Idaho National Engineering Laboratory (INEL) will be in excess of $2,000,000. This improvement will be generated in a variety of areas, including improvement in the efficiency of individual users, checkplot production, data management, file transfers, plotting, design-analysis interfaces, and the benefit of full 3-D design. Current plans call for a significant expansion of the CAE system in 1990, with continued expansion through 1993. Additional workstations, system software and utilities, networking facilities, and applications software will be procured and implemented. More than one hundred people will receive training on the various application packages during 1990. Efforts to extend networking throughout the INEL will be continued. 2 refs.

Brockelsby, H.C. Jr.

1990-03-01T23:59:59.000Z

168

Performance tuning for high performance computing systems.  

E-Print Network (OSTI)

A distributed system is composed by integrating loosely coupled software components with the underlying hardware resources, which can be distributed over the standard internet (more)

Pahuja, Himanshu

2011-01-01T23:59:59.000Z

169

CHI '11 Extended Abstracts on Human Factors in Computing Systems  

Science Conference Proceedings (OSTI)

Over the last year or so, we have been blessed with the challenge, the opportunity, and the distinct pleasure of organizing the CHI 2011 Conference on Human Factors in Computing Systems, the premier international conference for the field of human-computer ...

Desney Tan; Bo Begole; Wendy A. Kellogg

2011-05-01T23:59:59.000Z

170

Human-aware computer system design  

Science Conference Proceedings (OSTI)

In this paper, we argue that human-factors studies are critical in building a wide range of dependable systems. In particular, only with a deep understanding of the causes, types, and likelihoods of human mistakes can we build systems that prevent, hide, ...

Ricardo Bianchini; Richard P. Martin; Kiran Nagaraja; Thu D. Nguyen; Fábio Oliveira

2005-06-01T23:59:59.000Z

171

Associative computer: a hybrid connectionistic production system  

Science Conference Proceedings (OSTI)

In this paper, we introduce a connectionistic hybrid production system, which relies on the distributed representation and the usage of associative memories. Benefits of the distributed representation include heuristics resulting from pictogram representation. ... Keywords: Connectionism, Distributed representation, Learning, Problem solving, Production system

Andreas Wichert

2005-06-01T23:59:59.000Z

172

6.033 Computer System Engineering (SMA 5501), Spring 2005  

E-Print Network (OSTI)

Topics on the engineering of computer software and hardware systems: techniques for controlling complexity; strong modularity using client-server design, virtual memory, and threads; networks; atomicity and coordination ...

Balakrishnan, Hari

173

Dynamic self-assembly in living systems as computation  

Science Conference Proceedings (OSTI)

It is our view that much, if not all, of the business of a living system's building and maintaining itself is also a physical form of stochastic computing.

174

The role of computer systems in the nuclear power debate  

Science Conference Proceedings (OSTI)

One of the primary reasons for the current "decline" of nuclear power is that reactors have not operated reliably. This unreliability has raised questions of both safety and economics. Computer systems have been a part of this failure of technology. ...

Kevin W. Bowyer

1980-04-01T23:59:59.000Z

175

Autonomic Computing for Pervasive ICT A Whole-System Perspective  

Science Conference Proceedings (OSTI)

It is unlikely that we can expect to apply traditional centralised management approaches to large-scale pervasive computing scenarios. Approaches that require manual intervention for system management will similarly not be sustainable in the context ...

M. Shackleton; F. Saffre; R. Tateson; E. Bonsma; C. Roadknight

2004-07-01T23:59:59.000Z

176

Ontology-based models in pervasive computing systems  

Science Conference Proceedings (OSTI)

Pervasive computing is by its nature open and extensible, and must integrate the information from a diverse range of sources. This leads to a problem of information exchange, so sub-systems must agree on shared representations. Ontologies potentially ...

Juan Ye; Lorcan Coyle; Simon Dobson; Paddy Nixon

2007-12-01T23:59:59.000Z

177

FY12 Quarter 3 Computing Utilization Report LANL  

Science Conference Proceedings (OSTI)

DSW continues to dominate the capacity workload, with a focus in Q3 on common model baselining runs in preparation for the Annual Assessment Review (AAR) of the weapon systems. There remains unmet demand for higher fidelity simulations and for increased throughput of simulations. Common model baselining activities would benefit from doubling the resolution of the models and running twice as many simulations. Capacity systems were also utilized during the quarter to prepare for upcoming Level 2 milestones. Other notable DSW activities include validation of new physics models and safety studies. The safety team used the capacity resources extensively for projects involving 3D computer simulations for the Furrow series of experiments at DARHT (a Level 2 milestone), fragment impact, surety theme, PANTEX assessments, and the 120-day study. With the more than tripling of classified capacity computing resources through the addition of the Luna system and the safety team's imminent access to the Cielo system, demand has been met for current needs. The safety team has performed successful scaling studies on Luna with jobs of up to 16K PEs showing linear scaling, running the large 3D simulations required for the analysis of Furrow. They will be investigating scaling studies on the Cielo system with the Lustre file system in Q4. Overall average capacity utilization was impacted by negative effects of the LANL Voluntary Separation Program (VSP) at the beginning of Q3, in which programmatic staffing was reduced by 6%, with further losses due to management backfills and attrition, resulting in about 10% fewer users. All classified systems were impacted in April by a planned 2-day red network outage. ASC capacity workload continues to focus on code development, regression testing, and verification and validation (V&V) studies. Significant capacity cycles were used in preparation for a JOWOG in May and several upcoming L2 milestones due in Q4.
A network transition has been underway on the unclassified networks to increase access of all ASC users to the unclassified systems through the Yellow Turquoise Integration (YeTI) project. This will help to alleviate the longstanding shortage of resources for ASC unclassified code development and regression testing, and also make a broader palette of machines available to unclassified ASC users, including PSAAP Alliance users. The Moonlight system will be the first capacity resource to be made available through the YeTI project, and will make available a significant increase in cycles, as well as GPGPU accelerator technology. The Turing and Lobo machines will be decommissioned in the next quarter. ASC projects running on Cielo as part of the CCC-3 include turbulence, hydrodynamics, burn, asteroids, polycrystals, capability and runtime performance improvements, and materials including carbon and silicone.

Wampler, Cheryl L. [Los Alamos National Laboratory; McClellan, Laura Ann [Los Alamos National Laboratory

2012-07-25T23:59:59.000Z

178

Computer-aided visualization and analysis system for sequence evaluation  

DOE Patents (OSTI)

A computer system for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments are improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area and sample sequences in another area on a display device.
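Determining a base from probe fluorescence intensities can be illustrated with a toy argmax rule: at each position, call the base whose probe is brightest, and flag the call as ambiguous when the top two intensities are too close. This is a hypothetical simplification for illustration only, not the patent's actual method, and the 1.2 confidence ratio is an invented parameter:

```python
def call_bases(intensities):
    """For each position, call the base whose probe shows the highest
    fluorescence intensity; emit 'N' when the top two intensities are
    too close to distinguish confidently."""
    bases = "ACGT"
    calls = []
    for pos in intensities:
        ranked = sorted(range(4), key=lambda i: pos[i], reverse=True)
        top, second = pos[ranked[0]], pos[ranked[1]]
        confident = second == 0 or top / second >= 1.2  # assumed threshold
        calls.append(bases[ranked[0]] if confident else "N")
    return "".join(calls)

# Hypothetical per-position intensities for the A, C, G, T probes
probe_data = [
    (900, 40, 30, 20),   # strong A
    (25, 35, 880, 50),   # strong G
    (400, 390, 20, 10),  # A vs C too close -> ambiguous
    (15, 20, 30, 950),   # strong T
]
print(call_bases(probe_data))  # -> "AGNT"
```

The patent's comparative analysis of multiple experiments would correspond to running such calls over reference and sample intensity sets and displaying the resulting sequences side by side.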

Chee, Mark S. (3199 Waverly St., Palo Alto, CA 94306)

1998-08-18T23:59:59.000Z

179

Computer-aided visualization and analysis system for sequence evaluation  

DOE Patents (OSTI)

A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

Chee, Mark S. (Palo Alto, CA)

2001-06-05T23:59:59.000Z

180

Computer-aided visualization and analysis system for sequence evaluation  

DOE Patents (OSTI)

A computer system (1) for analyzing nucleic acid sequences is provided. The computer system is used to perform multiple methods for determining unknown bases by analyzing the fluorescence intensities of hybridized nucleic acid probes. The results of individual experiments may be improved by processing nucleic acid sequences together. Comparative analysis of multiple experiments is also provided by displaying reference sequences in one area (814) and sample sequences in another area (816) on a display device (3).

Chee, Mark S. (Palo Alto, CA)

1999-10-26T23:59:59.000Z



181

Development of a Computer Heating Monitoring System and Its Applications  

E-Print Network (OSTI)

This paper develops a computer heating monitoring system, introduces the components and principles of the monitoring system, and provides a study on its application to residential building heating including analysis of indoor and outdoor air temperature, heating index and energy savings. The results show that the current heating system has a great potential for energy conservation.
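The paper does not specify its heating index here, but a common proxy that a monitoring system can derive from logged outdoor temperatures is heating degree days. A minimal sketch, assuming an 18 °C base temperature (an assumption, not a value from the paper):

```python
def heating_degree_days(daily_mean_temps, base_temp=18.0):
    """Sum (base - T) over days whose outdoor mean temperature falls below
    the base temperature; larger totals indicate greater heating demand."""
    return sum(max(0.0, base_temp - t) for t in daily_mean_temps)

# Five days of hypothetical outdoor mean temperatures (deg C)
temps = [20.0, 15.0, 10.0, 18.0, 5.0]
print(heating_degree_days(temps))  # -> 24.0
```

Normalizing measured heating energy by such an index lets a monitoring system compare consumption across periods with different weather, which is one way the energy-savings analysis above could be carried out.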

Chen, H.; Li, D.; Shen, L.

2006-01-01T23:59:59.000Z

182

NSLS beam line data acquisition and analysis computer system  

SciTech Connect

A versatile computer environment to manage instrumentation alignment and experimental control at NSLS beam lines has been developed. The system is based on a 386/486 personal computer running under a UNIX operating system with X11 Windows. It offers an ideal combination of capability, flexibility, compatibility, and cost. With a single personal computer, the beam line user can run a wide range of scattering and spectroscopy experiments using a multi-tasking data collection program which can interact with CAMAC, GPIB and AT-Bus interfaces, and simultaneously examine and analyze data and communicate with remote network nodes.

Feng-Berman, S.K.; Siddons, D.P.; Berman, L.

1993-11-01T23:59:59.000Z

183

Algorithmic support for commodity-based parallel computing systems.  

SciTech Connect

The Computational Plant or Cplant is a commodity-based distributed-memory supercomputer under development at Sandia National Laboratories. Distributed-memory supercomputers run many parallel programs simultaneously. Users submit their programs to a job queue. When a job is scheduled to run, it is assigned to a set of available processors. Job runtime depends not only on the number of processors but also on the particular set of processors assigned to it. Jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This report introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free list strategy previously used on Cplant. These new allocation strategies are implemented in Release 2.0 of the Cplant System Software that was phased into the Cplant systems at Sandia by May 2002. Experimental results then demonstrated that the average number of communication hops between the processors allocated to a job strongly correlates with the job's completion time. This report also gives processor-allocation algorithms for minimizing the average number of communication hops between the assigned processors for grid architectures. The associated clustering problem is as follows: Given n points in R^d, find k points that minimize their average pairwise L1 distance. Exact and approximate algorithms are given for these optimization problems. One of these algorithms has been implemented on Cplant and will be included in Cplant System Software, Version 2.1, to be released. In more preliminary work, we suggest improvements to the scheduler separate from the allocator.
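The clustering objective stated above (choose k points minimizing their average pairwise L1 distance) can be solved by brute force for small inputs. The sketch below is an exhaustive illustration of the objective, not the report's efficient algorithms, which approximate this at scale:

```python
from itertools import combinations

def l1(p, q):
    """Manhattan (L1) distance between two grid coordinates."""
    return sum(abs(a - b) for a, b in zip(p, q))

def best_cluster(points, k):
    """Exhaustively find the k points with minimum total (equivalently
    average) pairwise L1 distance -- the locality objective behind
    processor allocation on a grid. O(C(n, k)), so toy-sized inputs only."""
    best, best_cost = None, float("inf")
    for subset in combinations(points, k):
        cost = sum(l1(p, q) for p, q in combinations(subset, 2))
        if cost < best_cost:
            best, best_cost = subset, cost
    return best, best_cost

# A tight 3-cluster of free processors plus a distant one on a 2-D grid
pts = [(0, 0), (0, 1), (1, 0), (5, 5)]
cluster, cost = best_cluster(pts, 3)
print(cluster, cost)  # picks the three nearby points; total distance 4
```

Allocating a 3-processor job to the compact cluster rather than any set containing (5, 5) minimizes the average hop count, which, per the experimental results cited above, correlates with job completion time.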

Leung, Vitus Joseph; Bender, Michael A. (State University of New York, Stony Brook, NY); Bunde, David P. (University of Illinois, Urbana, IL); Phillips, Cynthia Ann

2003-10-01T23:59:59.000Z

184

Void fraction system computer software design description  

DOE Green Energy (OSTI)

This document describes the software that controls the void fraction instrument. The format of the document may differ from typical Software Design Reports because it was created with a graphical programming language. Hardware is described in Section 2. The purpose of this document is to describe the software, so the hardware description is brief. Software is described in Section 3. LabVIEW was used to develop the viscometer software, so Section 3 begins with an introduction to LabVIEW. This is followed by a description of the main program. Finally, each Westinghouse-developed subVI (sub-program) is discussed.

Gimera, M.

1995-02-15T23:59:59.000Z

185

Computer modeling and experimental verification of figure-eight-shaped null-flux coil suspension system  

DOE Green Energy (OSTI)

This report discusses the computer modeling and experimental verification of the magnetic forces associated with a figure-eight-shaped null-flux coil suspension system. A set of computer codes, called COILGDWY, was developed on the basis of the dynamic circuit model and verified by means of a laboratory model. The experimental verification was conducted with a rotating PVC drum, the surface of which held various types of figure-eight-shaped null-flux coils that interacted with a stationary permanent magnet. The transient and dynamic magnetic forces between the stationary magnet and the rotating conducting coils were measured and compared with results obtained from the computer model. Good agreement between the experimental results and computer simulations was obtained. The computer model can also be used to calculate magnetic forces in a large-scale magnetic-levitation system.

He, J.L.; Mulcahey, T.M.; Rote, D.M.; Kelly, T.

1994-12-01T23:59:59.000Z

186

Electrical Engineering and Computer Science Department Technical Report  

E-Print Network (OSTI)

VNET/P: Bridging the Cloud and High Performance Computing Through Fast Overlay Networking. Lei Xia et al., 2011. This technical report presents VNET/P, a fast overlay networking system for high performance computing (HPC) on collections of virtual machines (VMs), motivated by the emergence of cloud computing. Keywords: cloud computing, high performance computing, overlay networking.

Shahriar, Selim

187

New Computational Methods for Characterizing Systems Biology of Low Dose  

NLE Websites -- All DOE Office Websites (Extended Search)

New Computational Methods for Characterizing Systems Biology of Low Dose and Adaptive Response. Bahram Parvin, Lawrence Berkeley National Laboratory. Abstract: We present preliminary results on a new computational method for systems biology of adaptive response and low dose effect from transcript and phenotypic data. The underlying concept is that a small subset of genes is triggered for each treatment condition or a phenotypic index. The concept of a small subset of genes translates to the sparsity constraint, which is applied computationally. The main advantages of this technique over traditional statistical methods are (i) direct application of sparsity, (ii) incorporating multi-class and multidimensional phenotypic profiles in one framework, and (iii) hypothesizing interaction networks simultaneously. Our

188

Review: The use of computational intelligence in intrusion detection systems: A review  

Science Conference Proceedings (OSTI)

Intrusion detection based upon computational intelligence is currently attracting considerable interest from the research community. Characteristics of computational intelligence (CI) systems, such as adaptation, fault tolerance, high computational speed ... Keywords: Artificial immune systems, Artificial neural networks, Computational intelligence, Evolutionary computation, Fuzzy systems, Intrusion detection, Soft computing, Survey, Swarm intelligence

Shelly Xiaonan Wu; Wolfgang Banzhaf

2010-01-01T23:59:59.000Z

189

Introduction to the Report "Interlanguages and Synchronic Models of Computation."  

E-Print Network (OSTI)

A novel language system has given rise to promising alternatives to standard formal and processor network models of computation. An interstring, linked with an abstract machine environment, shares sub-expressions, transfers data, and spatially allocates resources for the parallel evaluation of dataflow. Formal models called the α-Ram family are introduced, designed to support interstring programming languages (interlanguages). Distinct from dataflow, graph rewriting, and FPGA models, α-Ram instructions are bit level and execute in situ. They support sequential and parallel languages without the space/time overheads associated with the Turing Machine and λ-calculus, enabling massive programs to be simulated. The devices of one α-Ram model, called the Synchronic A-Ram, are fully connected and simpler than FPGA LUTs. A compiler for an interlanguage called Space has been developed for the Synchronic A-Ram. Space is MIMD, strictly typed, and deterministic. Barring memory allocation and compilation, modules are ref...

Berka, Alexander Victor

2010-01-01T23:59:59.000Z

190

Environmental Systems Research FY-99 Annual Report  

SciTech Connect

The Environmental Systems Research (ESR) Program, a part of the Environmental Systems Research and Analysis (ESRA) Program, was implemented to enhance and augment the technical capabilities of the Idaho National Engineering and Environmental Laboratory (INEEL). The purpose for strengthening technical capabilities of the INEEL is to provide the technical base to serve effectively as the Environmental Management Laboratory for the Department of Energy's Office of Environmental Management (EM). The original portfolio of research activities was assembled after an analysis of the EM technology development and science needs as gathered by the Site Technology Coordination Groups (STCGs) complex-wide. Current EM investments in science and technology throughout the research community were also included in this analysis to avoid duplication of efforts. This is a progress report for the second year of the ESR Program (Fiscal Year 99). A report of activities is presented for the five ESR research investment areas: (a) Transport Aspects of Selective Mass Transport Agents, (b) Chemistry of Environmental Surfaces, (c) Materials Dynamics, (d) Characterization Science, and (e) Computational Simulation of Mechanical and Chemical Systems. In addition to the five technical areas, activities in the Science and Technology Foundations element of the program, e.g., interfaces between ESR and the EM Science Program (EMSP) and the EM Focus Areas, are described.

Miller, D.L.

2000-01-01T23:59:59.000Z

191

Environmental Systems Research, FY-99 Annual Report  

Science Conference Proceedings (OSTI)

The Environmental Systems Research (ESR) Program, a part of the Environmental Systems Research and Analysis (ESRA) Program, was implemented to enhance and augment the technical capabilities of the Idaho National Engineering and Environmental Laboratory (INEEL). The purpose for strengthening technical capabilities of the INEEL is to provide the technical base to serve effectively as the Environmental Management Laboratory for the Department of Energy's Office of Environmental Management (EM). The original portfolio of research activities was assembled after an analysis of the EM technology development and science needs as gathered by the Site Technology Coordination Groups (STCGs) complex-wide. Current EM investments in science and technology throughout the research community were also included in this analysis to avoid duplication of efforts. This is a progress report for the second year of the ESR Program (Fiscal Year 99). A report of activities is presented for the five ESR research investment areas: (a) Transport Aspects of Selective Mass Transport Agents, (b) Chemistry of Environmental Surfaces, (c) Materials Dynamics, (d) Characterization Science, and (e) Computational Simulation of Mechanical and Chemical Systems. In addition to the five technical areas, activities in the Science and Technology Foundations element of the program, e.g., interfaces between ESR and the EM Science Program (EMSP) and the EM Focus Areas, are described.

Miller, David Lynn

2000-01-01T23:59:59.000Z

192

Overview of the DIII-D program computer systems  

Science Conference Proceedings (OSTI)

Computer systems pervade every aspect of the DIII-D National Fusion Research program. This includes real-time systems acquiring experimental data from data acquisition hardware; CPU server systems performing short term and long term data analysis; desktop activities such as word processing, spreadsheets, and scientific paper publication; and systems providing mechanisms for remote collaboration. The DIII-D network ties all of these systems together and connects to the ESNET wide area network. This paper will give an overview of these systems, including their purposes and functionality and how they connect to other systems. Computer systems include seven different types of UNIX systems (HP-UX, REALIX, SunOS, Solaris, Digital UNIX, Ultrix, and IRIX), OpenVMS systems (both VAX and Alpha), Macintosh, Windows 95, and more recently Windows NT systems. Most of the network internally is ethernet with some use of FDDI. A T3 link connects to ESNET and thus to the Internet. Recent upgrades to the network have notably improved its efficiency, but the demand for bandwidth is ever increasing. By means of software and mechanisms still in development, computer systems at remote sites are playing an increasing role both in accessing and analyzing data and even participating in certain controlling aspects for the experiment. The advent of audio/video over the Internet is now presenting a new means for remote sites to participate in the DIII-D program.

McHarg, B.B. Jr.

1997-11-01T23:59:59.000Z

193

Searching the Occurrence Reporting and Processing Systems (ORPS) database  

Science Conference Proceedings (OSTI)

The Occurrence Reporting and Processing System (ORPS) is a computerized method to submit, collect, update, and sign occurrence reports required by US Department of Energy (DOE) Order 5000.3B, Occurrence Reporting and Processing of Operations Information. The basic reason for investigating and reporting the causes of occurrences is to enable the identification of corrective actions to prevent recurrence and, thereby, protect the health and safety of the public, the workers, and the environment. ORPS provides the DOE community with a readily accessible database that contains information about occurrences at DOE facilities, causes of those occurrences, and corrective actions. This information can, therefore, be used to identify and analyze trends in occurrences. The ORPS database resides on a host computer located at the Idaho National Engineering Laboratory (INEL) in Idaho Falls, Idaho. The database can be accessed from any DOE site via computer terminals or personal computers (PCs) that are set up to access ORPS.

Commander, S.L.

1992-11-01T23:59:59.000Z

194

Template based parallel checkpointing in a massively parallel computer system  

DOE Patents (OSTI)

A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in storage and was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
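The rsync-style comparison described above can be sketched in a few lines. This is a minimal illustration, not the patented method: the block size, SHA-1 hashing, and zlib compression are assumptions chosen for the sketch.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # illustrative block size

def block_checksums(data: bytes, block_size: int = BLOCK_SIZE):
    """Split data into fixed-size blocks and checksum each one."""
    return [hashlib.sha1(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def delta_checkpoint(node_state: bytes, template_sums, block_size: int = BLOCK_SIZE):
    """Return only the (index, compressed block) pairs that differ from the
    template checkpoint -- the blocks a node would actually transmit."""
    delta = []
    for idx, chk in enumerate(block_checksums(node_state, block_size)):
        if idx >= len(template_sums) or chk != template_sums[idx]:
            block = node_state[idx * block_size:(idx + 1) * block_size]
            delta.append((idx, zlib.compress(block)))
    return delta

# A node whose state differs from the template in exactly one block
template = bytes(8192)                 # previously produced checkpoint
state = bytearray(template)
state[5000] = 1                        # one byte changed -> one dirty block
delta = delta_checkpoint(bytes(state), block_checksums(template))
```

Only the single changed block (index 1) is compressed and queued for transmission; the unchanged block is skipped entirely, which is the source of the claimed savings.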

Archer, Charles Jens (Rochester, MN); Inglett, Todd Alan (Rochester, MN)

2009-01-13T23:59:59.000Z

195

Computer system design description for SY-101 hydrogen mitigation test project data acquisition and control system (DACS-1)  

DOE Green Energy (OSTI)

Description of the Proposed Activity/REPORTABLE OCCURRENCE or PIAB: This ECN changes the computer system design description support document describing the computer system used to control, monitor and archive the processes and outputs associated with the Hydrogen Mitigation Test Pump installed in SY-101. There is no new activity or procedure associated with the updating of this reference document. The updating of this computer system design description maintains an agreed-upon documentation program initiated within the test program and carried into operations at time of turnover to maintain configuration control as outlined by design authority practicing guidelines. There are no new credible failure modes associated with the updating of information in a support description document. The failure analysis of each change was reviewed at the time of implementation of the Systems Change Request for all the processes changed. This document simply provides a history of implementation and current system status.

Ermi, A.M.

1997-05-01T23:59:59.000Z

196

TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH  

SciTech Connect

The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication to security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

Lee, Hsien-Hsin S

2010-05-11T23:59:59.000Z

197

Clock distribution system for digital computers  

DOE Patents (OSTI)

Apparatus for eliminating, in each clock distribution amplifier of a clock distribution system, sequential pulse catch-up error due to one pulse "overtaking" a prior clock pulse. The apparatus includes timing means to produce a periodic electromagnetic signal with a fundamental frequency having a fundamental frequency component V'_01(t); an array of N signal characteristic detector means, with detector means No. 1 receiving the timing means signal and producing a change-of-state signal V_1(t) in response to receipt of a signal above a predetermined threshold; N substantially identical filter means, one filter means being operatively associated with each detector means, for receiving the change-of-state signal V_n(t) and producing a modified change-of-state signal V'_n(t) (n = 1, ..., N) having a fundamental frequency component that is substantially proportional to V'_01(t - theta_n(t)) with a cumulative phase shift theta_n(t) having a time derivative that may be made uniformly and arbitrarily small; and with the detector means n+1 (1 <= n

Wyman, Robert H. (Brentwood, CA); Loomis, Jr., Herschel H. (Davis, CA)

1981-01-01T23:59:59.000Z

198

Alchemi: A .NET-based Enterprise Grid Computing System  

E-Print Network (OSTI)

Computational grids that couple geographically distributed resources are becoming the de-facto computing platform for solving large-scale problems in science, engineering, and commerce. Software to enable grid computing has been primarily written for Unix-class operating systems, thus severely limiting the ability to effectively utilize the computing resources of the vast majority of Windows-based desktop computers. Addressing Windows-based grid computing is particularly important from the software industry's viewpoint where interest in grids is emerging rapidly. Microsoft's .NET Framework has become near-ubiquitous for implementing commercial distributed systems for Windows-based platforms, positioning it as the ideal platform for grid computing in this context. In this paper we present Alchemi, a .NET-based framework that provides the runtime machinery and programming environment required to construct enterprise/desktop grids and develop grid applications. It allows flexible application composition by supporting an object-oriented application programming model in addition to a file-based job model. Cross-platform support is provided via a web services interface and a flexible execution model supports dedicated and non-dedicated (voluntary) execution by grid nodes.

Akshay Luther; Rajkumar Buyya; Rajiv Ranjan; Srikumar Venugopal

2005-01-01T23:59:59.000Z

199

Software Requirements for a System to Compute Mean Failure Cost  

SciTech Connect

In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain. We also demonstrated this infrastructure through the results of security breakdowns for an e-commerce case. In this paper, we illustrate this infrastructure with an application that supports the computation of the Mean Failure Cost (MFC) for each stakeholder.
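In the authors' related MFC publications, the Mean Failure Cost chains a stakes matrix (ST), a dependency matrix (DP), an impact matrix (IM), and a threat-probability vector (PT) into an expected loss per stakeholder. A minimal sketch of that chain follows; every matrix value here is an invented toy number, since the real values come from stakeholder and threat analysis.

```python
def mat_vec(matrix, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, vec)) for row in matrix]

# Hypothetical toy data: 2 stakeholders, 2 requirements, 2 components, 2 threats
ST = [[100.0, 40.0],   # stakes: loss to stakeholder i if requirement j fails
      [60.0, 80.0]]
DP = [[0.5, 0.5],      # dependency: P(requirement i fails | component j fails)
      [0.2, 0.8]]
IM = [[0.1, 0.0],      # impact: P(component i fails | threat j materializes)
      [0.0, 0.3]]
PT = [0.01, 0.02]      # probability each threat materializes

# Expected loss per stakeholder: MFC = ST . DP . IM . PT
mfc = mat_vec(ST, mat_vec(DP, mat_vec(IM, PT)))
```

Each entry of `mfc` is one stakeholder's mean failure cost; the decomposition lets each matrix be estimated by the party best placed to know it (stakeholders, architects, security analysts).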

Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia]; Abercrombie, Robert K [ORNL]; Sheldon, Frederick T [ORNL]; Mili, Ali [New Jersey Institute of Technology]

2010-01-01T23:59:59.000Z

200

Noncompliance Tracking System Registration and Reporting  

NLE Websites -- All DOE Office Websites (Extended Search)

Noncompliance Tracking System Registration and Reporting Office of Enforcement and Oversight NTS Reporting NTS Registration (For new registration and password changes) REGISTRATION INFORMATION Registrants for the Noncompliance Tracking System (NTS) with an ACTIVE ACCOUNT for the HSS Reporting Systems: Occurrence Reports & Processing System (ORPS), Computerized Accident/Incident Reporting System (CAIRS), Suspect Counterfeit Items (SCI), or the Daily Occurrence (DO) reports can use the same credentials to access NTS. Please access NTS REPORTING. Registrants who DO NOT have an HSS Reporting Systems account, or who have not accessed their account within the past six months MUST REGISTER for a NTS account. Please register at: HSS Reporting Systems Registration. If you need additional information or assistance in registering, please contact HSS User Support.



201

Pollution Prevention Tracking and Reporting System  

Energy.gov (U.S. Department of Energy (DOE))

Welcome to the Department of Energy's Pollution Prevention Tracking and Reporting System (PPTRS). DOE uses this system to collect information about, and assess the performance of, the Department's...

202

Smart Grid System Report | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Smart Grid System Report. This annex presents papers covering each of the 20 metrics identified in Section 2.1. These metric papers were prepared in advance...

203

Evaluation of Computer-Based Procedure System Prototype | Department of Energy

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Evaluation of Computer-Based Procedure System Prototype. This research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which is a research and development (R&D) program sponsored by the Department of Energy (DOE) and performed in close collaboration with industry R&D programs. It provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRS Program serves to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The introduction of advanced technology in existing nuclear power plants

204

Evaluation of Computer-Based Procedure System Prototype | Department of Energy

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Evaluation of Computer-Based Procedure System Prototype. This research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which is a research and development (R&D) program sponsored by the Department of Energy (DOE) and performed in close collaboration with industry R&D programs. It provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRS Program serves to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The introduction of advanced technology in existing nuclear power plants

205

Occurrence Reporting and Processing System  

NLE Websites -- All DOE Office Websites (Extended Search)

for generic implications and operational improvements. The Occurrence Reporting Program directives are DOE Order 232.2, Occurrence Reporting and Processing of Operations...

206

final report for Center for Programming Models for Scalable Parallel Computing  

SciTech Connect

This is the final report of the work on parallel programming patterns that was part of the Center for Programming Models for Scalable Parallel Computing.

Johnson, Ralph E.

2013-04-10T23:59:59.000Z

207

Hybrid approach to failure prediction for advanced computing systems

NLE Websites -- All DOE Office Websites (Extended Search)

Hybrid approach to failure prediction for advanced computing systems. January 8, 2014. "Fault tolerance is no longer an option but a necessity," states Franck Cappello, project manager of research on resilience at the extreme scale at Argonne National Laboratory. "And the ability to reliably predict failures can significantly reduce the overhead of fault-tolerance strategies and the recovery cost." In a special issue article in the International Journal of High Performance Computing Applications, Cappello and his colleagues at Argonne and the University of Illinois at Urbana-Champaign (UIUC) discuss issues in failure prediction and present a new hybrid approach to overcome the limitations of current models. One popular way of building prediction models is to analyze log files,
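The log-analysis half of such a predictor can be caricatured in a few lines: flag a node once its recent warning rate crosses a threshold. This is a toy sketch, not the hybrid model from the article; the log format, window size, and threshold are all invented.

```python
from collections import deque

def predict_failures(events, window=5, threshold=3):
    """events: (node, severity) pairs in time order.
    Flag a node once `threshold` warnings fall inside its last
    `window` events -- a crude rate-based failure predictor."""
    history = {}
    flagged = set()
    for node, severity in events:
        h = history.setdefault(node, deque(maxlen=window))
        h.append(severity)
        if sum(1 for s in h if s == "WARN") >= threshold:
            flagged.add(node)
    return flagged

# Hypothetical log stream: node n1 accumulates warnings, n2 stays healthy
log = [("n1", "INFO"), ("n1", "WARN"), ("n2", "INFO"),
       ("n1", "WARN"), ("n1", "WARN"), ("n2", "INFO")]
risky = predict_failures(log)
```

Real predictors layer signal filtering, event correlation, and learned models on top of this kind of windowed counting, which is exactly where the article's hybrid approach comes in.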

208

The PVM Concurrent Computing System: Evolution, Experiences, and Trends  

E-Print Network (OSTI)

The PVM system, a software framework for heterogeneous concurrent computing in networked environments, has evolved in the past several years into a viable technology for distributed and parallel processing in a variety of disciplines. PVM supports a straightforward but functionally complete message passing model, and is capable of harnessing the combined resources of typically heterogeneous networked computing platforms to deliver high levels of performance and functionality. In this paper, we describe the architecture of the PVM system, and discuss its computing model, the programming interface it supports, auxiliary facilities for process groups and MPP support, and some of the internal implementation techniques employed. Performance issues, dealing primarily with communication overheads, are analyzed, and recent findings as well as experimental enhancements are presented. In order to demonstrate the viability of PVM for large-scale scientific supercomputing, the paper incl...
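The tagged send/receive style of the message passing model PVM supports can be illustrated with ordinary threads and queues standing in for PVM tasks and mailboxes. This is only an analogy: the real PVM C API (pvm_send, pvm_recv, and friends) looks quite different.

```python
import queue
import threading

def worker(inbox, outbox):
    """Receive (tag, payload) messages; square payloads until 'stop'."""
    while True:
        tag, payload = inbox.get()
        if tag == "stop":
            break
        outbox.put(("result", payload * payload))

# One mailbox per direction, mimicking tagged messages between task ids
to_worker, to_master = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(to_worker, to_master))
t.start()
for x in (2, 3, 4):
    to_worker.put(("work", x))
to_worker.put(("stop", None))
t.join()
results = sorted(to_master.get() for _ in range(3))
```

The master/worker shape, where a controlling task farms out tagged work units and collects tagged results, is one of the canonical PVM programming patterns.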

V. S. Sunderam; G. A. Geist; J. Dongarra; R. Manchek

1994-01-01T23:59:59.000Z

209

PC Electronic Data Reporting Option (PEDRO) System  

U.S. Energy Information Administration (EIA)

Windows-based personal computer (PC) running Windows NT, 2000, XP, or Vista; approximately 30 MB of free disk space for the PEDRO system files, plus up to 10 MB for each survey;

210

Computer-aided coordination and overcurrent protection for distribution systems  

Science Conference Proceedings (OSTI)

Overcurrent protection and coordination studies for electrical distribution systems have become much easier to perform with the emergence of several commercially available software programs that run on a personal computer. These programs have built-in libraries of protective device time-current curves, damage curves for cable and transformers, and motor starting curves, thereby facilitating the design of a selectively coordinated protection system which is also well-protected. Additionally, design time when utilizing computers is far less than the previous method of tracing manufacturers' curves on transparent paper. Basic protection and coordination principles are presented in this paper along with several helpful suggestions for designing electrical protection systems. A step-by-step methodology is presented to illustrate the design concepts when using software for selecting and coordinating the protective devices in distribution systems.
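The selectivity check such programs automate can be sketched with the IEEE inverse-time overcurrent characteristic, t = TDS * (A / (M^p - 1) + B) where M is the fault current as a multiple of pickup. The A, B, p constants below are the standard moderately-inverse values from IEEE C37.112; the device settings and fault current are invented for illustration.

```python
def trip_time(current, pickup, tds, A=0.0515, B=0.1140, p=0.02):
    """IEEE moderately-inverse time-overcurrent characteristic (seconds)."""
    M = current / pickup
    if M <= 1.0:
        return float("inf")  # below pickup: relay never operates
    return tds * (A / (M ** p - 1.0) + B)

def coordinated(fault, downstream, upstream, margin=0.3):
    """True if the upstream device waits at least `margin` seconds longer
    than the downstream one for the same fault (a selectivity check)."""
    return trip_time(fault, *upstream) - trip_time(fault, *downstream) >= margin

# Hypothetical settings: (pickup amps, time-dial) for feeder and main breakers
feeder = (400.0, 1.0)
main = (800.0, 2.0)
ok = coordinated(3000.0, feeder, main)
```

Coordination software effectively runs this comparison across the whole range of fault currents and every device pair, which is why it replaced hand-tracing curves on transparent paper.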

Tolbert, L.M.

1995-03-01T23:59:59.000Z

211

Computing Architecture of the ALICE Detector Control System  

E-Print Network (OSTI)

The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, mechanisms for handling the large data amounts, and information exchange with external systems. One of the key operational requirements is an intuitive, error-proof and robust user interface allowing for simple operation of the experiment. At the same time, typical operator tasks, such as trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.

Augustinus, A; Moreno, A; Kurepin, A N; De Cataldo, G; Pinazza, O; Rosinsk, P; Lechman, M; Jirdn, L S

2011-01-01T23:59:59.000Z

212

COMFAR III: Computer Model for Feasibility Analysis and Reporting | Open Energy Info

Open Energy Info (EERE)

COMFAR III: Computer Model for Feasibility Analysis and Reporting. Tool Summary -- Name: COMFAR III: Computer Model for Feasibility Analysis and Reporting. Agency/Company/Organization: United Nations Industrial Development Organization. Focus Area: Industry. Resource Type: Software/modeling tools. User Interface: Desktop Application. Website: www.unido.org/index.php?id=o3470. Languages: Arabic, Chinese, English, French, German, Japanese, Portuguese, Russian, Spanish.

213

Structural Dynamic Systems Computational Techniques

E-Print Network (OSTI)

... Airport Terminal, Liquefied Natural Gas (LNG) Facility in Greece, C-1 Computer Center Building in Japan ... the layout of 212 isolation bearings of one of two identical LNG storage tanks during construction in Greece in 1995. The isolation system is located about 20 m under the ground surface. It supports the steel LNG

Nagarajaiah, Satish

214

Description and validation of a computer based refrigeration system simulator  

Science Conference Proceedings (OSTI)

This paper describes and evaluates the validation of a novel software package which simulates the transient and steady-state operation of whole refrigeration systems of the type used for the storage and processing of food. This software allows practitioners ... Keywords: Computer simulation, Food processing, Modelling, Refrigeration

I. W. Eames; T. Brown; J. A. Evans; G. G. Maidment

2012-07-01T23:59:59.000Z

215

Optimal Inspection First and Last Policies for a Computer System  

Science Conference Proceedings (OSTI)

When a computer system executes successive jobs and processes with random time intervals, it would be impossible or impractical to make some inspections to check faults that occur intermittently in a strict periodic fashion. From such a viewpoint, by ... Keywords: periodic inspection, random inspection, inspection first, inspection last, checking time
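The trade-off the paper analyzes, between strictly periodic checks and checks tied to random job completions, can be illustrated with a crude Monte Carlo estimate of detection delay. This is a toy comparison under invented parameters, not the paper's analytic inspection-first/inspection-last model.

```python
import random

def mean_detection_delay(inspection_times, n=20000, fault_rate=1.0, seed=1):
    """Average delay between a fault (exponential arrival time) and the
    next scheduled inspection -- a toy measure of inspection quality."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        fault = rng.expovariate(fault_rate)
        nxt = next((t for t in inspection_times if t >= fault),
                   inspection_times[-1])
        total += max(nxt - fault, 0.0)
    return total / n

# Strictly periodic schedule with period T = 0.5 over a long horizon
periodic = [0.5 * k for k in range(1, 41)]
delay = mean_detection_delay(periodic)
```

With a period of 0.5 and unit fault rate, the estimated mean delay lands near T/(1 - e^(-T)) - 1, about 0.27; swapping in a randomized schedule lets one compare the policies numerically before attempting the analytic optimization the paper carries out.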

Xufeng Zhao; Syouji Nakamura; Toshio Nakagawa

2012-11-01T23:59:59.000Z

216

PTask: operating system abstractions to manage GPUs as compute devices  

Science Conference Proceedings (OSTI)

We propose a new set of OS abstractions to support GPUs and other accelerator devices as first class computing resources. These new abstractions, collectively called the PTask API, support a dataflow programming model. Because a PTask graph consists ... Keywords: GPGPU, GPUs, OS design, accelerators, dataflow, gestural interface, operating systems

Christopher J. Rossbach; Jon Currey; Mark Silberstein; Baishakhi Ray; Emmett Witchel

2011-10-01T23:59:59.000Z
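The dataflow model the abstract describes can be sketched in miniature: a task fires once all of its input ports hold data, and its output feeds downstream tasks. The class and function names here are hypothetical illustrations, not the PTask API:

```python
# Toy dataflow graph in the spirit of a PTask-style API (names are
# illustrative, not the actual API). Each task fires when all of its
# named inputs are available, and publishes its result under its name.

class Task:
    def __init__(self, name, fn, inputs):
        self.name, self.fn, self.inputs = name, fn, inputs

    def ready(self, values):
        return all(i in values for i in self.inputs)

def run_graph(tasks, sources):
    """Repeatedly fire any task whose inputs are available."""
    values = dict(sources)
    pending = list(tasks)
    while pending:
        for task in list(pending):
            if task.ready(values):
                args = [values[i] for i in task.inputs]
                values[task.name] = task.fn(*args)
                pending.remove(task)
    return values

# A scale -> blur pipeline on a fake "image" (a list of numbers).
scale = Task("scale", lambda img: [2 * p for p in img], ["camera"])
blur = Task("blur", lambda img: [sum(img) / len(img)] * len(img), ["scale"])
result = run_graph([scale, blur], {"camera": [1.0, 2.0, 3.0]})
```

A real GPU dataflow system would additionally manage device buffers and asynchronous execution; this sketch only shows the graph-scheduling idea.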

217

Computer aided design of long-haul optical transmission systems  

Science Conference Proceedings (OSTI)

We present a general overview of the role of computer models in the design and optimization of commercial optical transmission systems. Specifically, we discuss (1) the role of modeling in a commercial setting, (2) achieving the proper balance between ... Keywords: Long-Haul (LH) transmission, Ultra-Long Haul (ULH) transmission, optical communication, optical modeling

James G. Maloney; Brian E. Brewington; Curtis R. Menyuk

2002-06-01T23:59:59.000Z

218

Homepage: High-Performance Computing Systems, HPC-3: High-Performance...  

NLE Websites -- All DOE Office Websites (Extended Search)

667-5243, Fax: 667-7665, MS T080. Computing solutions that work for you. The High-Performance Computing Systems Group provides production...

219

The RISC processor module for FASTBUS computation applications. Final technical report  

SciTech Connect

The FASTBUS system specification for high-energy physics and other data-system applications anticipates the use of multiple, high-performance processor modules for data control and event reduction associated with experiments in the physical sciences. Existing processor designs will be unable to cope with the projected data-reduction and event-handling requirements of the complex experiments planned for the next generation of particle accelerators. Data-handling strategies for experimental physics are evolving from systems based upon a single central computer to those with arrays of high-speed, sophisticated, front-end processing elements. The advent of accelerators such as LEP and LHC, and beyond, is forcing the architecture of these processors toward the simpler RISC designs to enhance both speed and ease of software development. This report describes the prototype development of a FASTBUS RISC Processor Module (FRPM) for use as a standard processing element in FASTBUS data-acquisition systems under a Phase II SBIR grant through the U.S. Department of Energy, Division of Energy Research. The FRPM hosts a reduced instruction set computer--the SPARCengine-2 by Sun Microsystems, Inc.--capable of executing 4.2 million floating point instructions per second with a clock of up to 40 MHz. The prototype FRPM supports a port to the FASTBUS crate segment by way of a standard-logic interface. The FRPM processor operates under a commercially available real-time operating system, and application software can be developed on workstation and mainframe computer systems. We further cover the chronology of the Phase II work, a discussion of the objectives, and our experiences with an ASIC manufacturer in attempting to complete the fabrication of a chip implementing the FASTBUS Master Interface (FMI).

NONE

1996-02-01T23:59:59.000Z

220

Smart Grid System Report | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Smart Grid System Report. This annex presents papers covering each of the 20 metrics identified in Section 2.1. These metric papers were prepared in advance of the main body of the report and collectively form its informational backbone. The list of metrics is derived from the material developed at the Smart Grid Implementation Workshop. The objective of the metric development process was to distill the best ideas into a small number of metrics with a reasonable chance of measurement and assessment.



221

Concurrency in a System for Symbolic and Algebraic Computations  

E-Print Network (OSTI)

As miniaturization of computer components is approaching the limits of physics, researchers in computer architecture are looking for less conventional means to perpetuate Moore's law. Recent trends in hardware have been adding more cores. Consequently, multicore machines are now commodity. To help programmers benefit from Moore's dividend, researchers in programming techniques, tools and languages have been exploring several avenues. A dominant theme is the design and implementation of parallel algorithms. Several programming models have been proposed, but none at the moment seem to be substantially better than others. While general parallel programming is a distinctively challenging task, we believe that scientific computation algorithms display algebraic structures, thanks to the rich mathematical objects they manipulate. The present work aims at exploring the extent to which algebraic properties displayed by computer algebra algorithms may be automatically exploited to take advantage of parallelism in the OpenAxiom scientific computation platform. We designed a runtime system that exploits the ubiquitous parallelism of modern CPUs; the system also scales to multi-machine clusters. By taking advantage of the existing InputForm domain in OpenAxiom and connecting the standard input channel to sockets, we were able to minimize potentially hazardous modifications to the OpenAxiom source while still implementing the desired functionality. Additionally, we designed and implemented FFI extensions to the OpenAxiom core to take advantage of SIMD instructions, particularly SSE2 (Streaming SIMD Extensions 2). The extension allowed us to nearly double the speed of common operations such as multiplying arrays of doubles. We also defined and implemented a foreign function interface for the OpenAxiom system. All of these additions were benchmarked using Berlekamp's algorithm for factorization of polynomials over integers.
While much still remains to be done in parallelizing the algebra to work over many calculation nodes, mathematical annotations remain viable in offloading the burden of parallelizing code from the programmer by substituting a simpler activity.

Mai, Stefan

2009-06-09T23:59:59.000Z
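The SSE2 instructions mentioned above operate on two 64-bit doubles at a time; the pure-Python sketch below mirrors that layout (width-2 chunks plus a scalar remainder) to illustrate the idea only. Real speedups require the actual intrinsics, and this function is a hypothetical illustration, not OpenAxiom code:

```python
# Array multiply processed in SIMD-style chunks. SSE2 handles two doubles
# per instruction, so width=2; the trailing elements fall back to scalar
# code, exactly as a vectorized loop would.

def multiply_simd_style(a, b, width=2):
    out = [0.0] * len(a)
    i = 0
    while i + width <= len(a):       # full "vector" chunks
        for lane in range(width):
            out[i + lane] = a[i + lane] * b[i + lane]
        i += width
    while i < len(a):                # scalar remainder
        out[i] = a[i] * b[i]
        i += 1
    return out
```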

222

Functions and Requirements and Specifications for Replacement of the Computer Automated Surveillance System (CASS)  

SciTech Connect

Functional requirements and specifications document for system to replace tank farm computer automated waste tank surveillance system.

DOUKA, K.C.

2000-05-05T23:59:59.000Z

223

Evaluation of Computer-Based Procedure System Prototype  

SciTech Connect

This research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which is a research and development (R&D) program sponsored by Department of Energy (DOE), performed in close collaboration with industry R&D programs, to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRS program serves to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The introduction of advanced technology in existing nuclear power plants may help to manage the effects of aging systems, structures, and components. In addition, the incorporation of advanced technology in the existing LWR fleet may entice the future workforce, who will be familiar with advanced technology, to work for these utilities rather than more newly built nuclear power plants. Advantages are being sought by developing and deploying technologies that will increase safety and efficiency. One significant opportunity for existing plants to increase efficiency is to phase out the paper-based procedures (PBPs) currently used at most nuclear power plants and replace them, where feasible, with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner that is intended to cover many potential operating scenarios. 
Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant to the task and situation at hand, which can take up valuable time when operators must be responding to the situation and can potentially lead operators down an incorrect response path. Other challenges related to PBPs are the management of multiple procedures, place-keeping, finding the correct procedure for the task at hand, and relying on other sources of additional information to ensure a functional and accurate understanding of the current plant status (Converse, 1995; Fink, Killian, Hanes, & Naser, 2009; Le Blanc & Oxstrand, 2012). The main focus of this report is to describe the research activities conducted to address the remaining two objectives: develop a prototype CBP system based on the identified requirements, and evaluate the CBP prototype. The emphasis will be on the evaluation of an initial CBP prototype at a nuclear power plant.

Johanna Oxstrand; Katya Le Blanc; Seth Hays

2012-09-01T23:59:59.000Z

224

Computational and experimental study of laminar flames. Progress report, September 1, 1990--October 31, 1991  

SciTech Connect

During fiscal year 1991 we have made substantial progress in both the computational and experimental portions of our research. In particular we have continued our study of non-premixed axisymmetric methane-air flames. Computer calculations of multidimensional elliptic flames with two carbon atom chemistry using a shared memory parallel computer are reported for the first time. Also laser spectroscopy of flames utilizing a neodymium laser are also reported. (GHH)

Smooke, M.; Long, M.

1991-12-31T23:59:59.000Z

225

Institute for Scientific Computing Research Fiscal Year 2002 Annual Report  

SciTech Connect

The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory is jointly administered by the Computing Applications and Research Department (CAR) and the University Relations Program (URP), and this joint relationship expresses its mission. An extensively externally networked ISCR cost-effectively expands the level and scope of national computational science expertise available to the Laboratory through CAR. The URP, with its infrastructure for managing six institutes and numerous educational programs at LLNL, assumes much of the logistical burden that is unavoidable in bridging the Laboratory's internal computational research environment with that of the academic community. As large-scale simulations on the parallel platforms of DOE's Advanced Simulation and Computing Initiative (ASCI) become increasingly important to the overall mission of LLNL, the role of the ISCR expands in importance accordingly. Relying primarily on non-permanent staffing, the ISCR complements Laboratory research in areas of the computer and information sciences that are needed at the frontier of Laboratory missions. The ISCR strives to be the ''eyes and ears'' of the Laboratory in the computer and information sciences, keeping the Laboratory aware of and connected to important external advances. It also attempts to be the ''feet and hands,'' carrying those advances into the Laboratory and incorporating them into practice. In addition to conducting research, the ISCR provides continuing education opportunities to Laboratory personnel, in the form of on-site workshops taught by experts on novel software or hardware technologies. The ISCR also seeks to influence the research community external to the Laboratory to pursue Laboratory-related interests and to train the workforce that will be required by the Laboratory.
Part of the performance of this function is interpreting to the external community appropriate (unclassified) aspects of the Laboratory's own contributions to the computer and information sciences--contributions that its unique mission and unique resources give it a unique opportunity and responsibility to make. Of the three principal means of packaging scientific ideas for transfer--people, papers, and software--experience suggests that the most effective means is people. The programs of the ISCR are therefore people-intensive. Finally, the ISCR, together with CAR, confers an organizational identity on the burgeoning computer and information sciences research activity at LLNL and serves as a point of contact within the Laboratory for computer and information scientists from outside.

Keyes, D E; McGraw, J R; Bodtker, L K

2003-03-11T23:59:59.000Z

226

The ATP's Business Reporting System  

Science Conference Proceedings (OSTI)

... participants are reporting significant acceleration of R&D, stimulation of beneficial ... business and economic merit of proposals as well as scientific ...

2011-10-19T23:59:59.000Z

227

Report on the SLC control system  

SciTech Connect

The SLC control system is based on a VAX 11/780 Host computer with approximately 50 microprocessor clusters which provide distributed intelligence and control of all CAMAC interface modules. This paper will present an overview of the system including current status and a description of the software architecture and communication protocols. 8 refs.

Phinney, N.

1985-05-01T23:59:59.000Z

228

Building America Systems Integration Research Annual Report:...  

NLE Websites -- All DOE Office Websites (Extended Search)

Systems Integration Research Annual Report: FY 2012 Prepared for: Building America Building Technologies Program Office of Energy Efficiency and Renewable Energy U.S....

229

Martin Karplus and Computer Modeling for Chemical Systems  

Office of Scientific and Technical Information (OSTI)

Martin Karplus, the Theodore William Richards Professor of Chemistry Emeritus at Harvard, is one of three winners of the 2013 Nobel Prize in chemistry... The 83-year-old Vienna-born theoretical chemist, who is also affiliated with the Université de Strasbourg, Strasbourg, France, is a 1951 graduate of Harvard College and earned his Ph.D. in 1953 at the California Institute of Technology. While there, he worked with two-time Nobel laureate Linus Pauling, whom Karplus described as an important early influence. He shared the Nobel with researchers Michael Levitt of Stanford University and Arieh Warshel of the University of Southern California, Los Angeles. Warshel was once a postdoctoral student of Karplus ...

230

Enabling Green Energy and Propulsion Systems via Direct Noise Computation |  

NLE Websites -- All DOE Office Websites (Extended Search)

High-fidelity simulation of exhaust nozzle under installed configuration. Umesh Paliath, GE Global Research; Joe Insley, Argonne National Laboratory. Enabling Green Energy and Propulsion Systems via Direct Noise Computation. PI Name: Umesh Paliath; PI Email: paliath@ge.com; Institution: GE Global Research; Allocation Program: INCITE; Allocation Hours at ALCF: 105 Million; Year: 2013; Research Domain: Engineering. GE Global Research is using the Argonne Leadership Computing Facility (ALCF) to deliver significant improvements in efficiency, renewables' yield, and lower emissions (noise) for advanced energy and propulsion systems. Understanding the fundamental physics of turbulent mixing has the potential to transform product design for components such as airfoils and

231

Computer program for conducting acoustical analyses of HVAC systems.  

Science Conference Proceedings (OSTI)

An interactive Windows-based computer program has been developed that can be used to conduct complete acoustical analyses of HVAC systems. The program can be used to track sound from an HVAC sound source, such as a fan, to a room. Sound from a single path or from multiple paths between a sound source and a room can be investigated. The program has a full set of editing features that include change

Douglas D. Reynolds; Scott C. Mitchell

1996-01-01T23:59:59.000Z
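The path-tracking calculation such a program performs can be sketched as attenuation bookkeeping in decibels; the source level and per-element attenuations below are illustrative numbers, not values from the program:

```python
import math

def path_level(source_lw_db, attenuations_db):
    """Level at the receiver along one path: source sound power level minus
    the attenuation of each element on the path (duct run, elbow, end
    reflection, room effect ... values here are illustrative)."""
    return source_lw_db - sum(attenuations_db)

def combine_paths(path_levels_db):
    """Combine levels arriving via multiple paths on an energy basis:
    Lp = 10*log10(sum 10^(Li/10))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in path_levels_db))

duct_path = path_level(85.0, [6.0, 3.0, 10.0])   # ducted path: 66 dB
breakout_path = path_level(85.0, [20.0])          # breakout path: 65 dB
total = combine_paths([duct_path, breakout_path])
```

Note that two paths of similar level combine to roughly 3 dB above either one, which is why multi-path analysis matters.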

232

Preoperational test report, primary ventilation system  

SciTech Connect

This represents a preoperational test report for Primary Ventilation Systems, Project W-030. Project W-030 provides a ventilation upgrade for the four Aging Waste Facility tanks. The system provides vapor space filtered venting of tanks AY101, AY102, AZ101, AZ102. The tests verify correct system operation and correct indications displayed by the central Monitor and Control System.

Clifton, F.T.

1997-11-04T23:59:59.000Z

233

Preoperational test report, vent building ventilation system  

Science Conference Proceedings (OSTI)

This represents a preoperational test report for Vent Building Ventilation Systems, Project W-030. Project W-030 provides a ventilation upgrade for the four Aging Waste Facility tanks. The system provides Heating, Ventilation, and Air Conditioning (HVAC) for the W-030 Ventilation Building. The tests verify correct system operation and correct indications displayed by the central Monitor and Control System.

Clifton, F.T.

1997-11-04T23:59:59.000Z

234

The Architecture and Administration of the ATLAS Online Computing System  

E-Print Network (OSTI)

The needs of ATLAS experiment at the upcoming LHC accelerator, CERN, in terms of data transmission rates and processing power require a large cluster of computers (of the order of thousands) administrated and exploited in a coherent and optimal manner. Requirements like stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers distributed in a tree like structure and clients booted from the network. For security reasons, the system should be accessible only through an application gateway and, also to ensure the autonomy of the system, the network services should be provided internally by dedicated machines in synchronization with CERN IT department's central services. The paper describes a small scale implementation of the system architecture that fits the given requirements and constraints. Emphasis will be put on the mechanisms and tools used to net boot the clients via the "Boot With Me" project and to synchronize information within the cluster via t...

Dobson, M; Ertorer, E; Garitaonandia, H; Leahu, L; Leahu, M; Malciu, I M; Panikashvili, E; Topurov, A; Ünel, G; Computing In High Energy and Nuclear Physics

2006-01-01T23:59:59.000Z

235

CFAST Computer Code Application Guidance for Documented Safety Analysis, Final Report  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

CFAST Computer Code Application Guidance for Documented Safety Analysis, Final Report. U.S. Department of Energy, Office of Environment, Safety and Health, 1000 Independence Ave., S.W., Washington, DC 20585-2040, July 2004. FOREWORD: This document provides guidance to Department of Energy (DOE) facility analysts in the use of the CFAST computer software for supporting Documented Safety Analysis applications. Information is provided herein that supplements information found in the CFAST documentation

236

Computer systems and software description for Standard-E+ Hydrogen Monitoring System (SHMS-E+)  

DOE Green Energy (OSTI)

The primary function of the Standard-E+ Hydrogen Monitoring System (SHMS-E+) is to determine tank vapor space gas composition and gas release rate, and to detect gas release events. Characterization of the gas composition is needed for safety analyses. The lower flammability limit, as well as the peak burn temperature and pressure, are dependent upon the gas composition. If there is little or no knowledge about the gas composition, safety analyses utilize compositions that yield the worst case in a deflagration or detonation. Knowledge of the true composition could lead to reductions in the assumptions and therefore there may be a potential for a reduction in controls and work restrictions. Also, knowledge of the actual composition will be required information for the analysis that is needed to remove tanks from the Watch List. Similarly, the rate of generation and release of gases is required information for performing safety analyses, developing controls, designing equipment, and closing safety issues. This report outlines the computer system design layout description for the Standard-E+ Hydrogen Monitoring System.

Tate, D.D.

1997-05-01T23:59:59.000Z
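Why composition matters for the lower flammability limit can be seen from Le Chatelier's mixing rule, a standard approximation not taken from this report; the example fuel split is hypothetical, with commonly tabulated LFL values for hydrogen (about 4 vol%) and methane (about 5 vol%):

```python
# Le Chatelier's rule for the lower flammability limit (LFL) of a fuel mix:
#     LFL_mix = 1 / sum(y_i / LFL_i)
# where y_i are mole fractions of the fuel portion (summing to 1) and LFL_i
# are the pure-component limits in vol%. The fuel split is hypothetical.

def lfl_mixture(fractions):
    """fractions maps fuel name -> (mole fraction of fuel portion, LFL %)."""
    return 1.0 / sum(y / lfl for y, lfl in fractions.values())

mix = {"H2": (0.7, 4.0), "CH4": (0.3, 5.0)}
lfl = lfl_mixture(mix)   # about 4.26 vol% for this 70/30 hydrogen/methane mix
```

This is exactly the sensitivity the abstract describes: a worst-case assumed composition forces conservative safety controls, while a measured composition can justify a less restrictive limit.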

237

Annual Report 2006-07 Electrical & Computer Engineering  

E-Print Network (OSTI)

of networks and distributed systems. It was probably a stroke of luck that my professional life has mirrored ... the power of network and systems thinking. It is fashionable now for educators to refer to the "flatness" ... or even one country? Indeed, the problems facing humanity and our planet are ... The Art of Systems

New Mexico, University of

238

Electromagnetic exploration system. Progress report  

DOE Green Energy (OSTI)

A design for a cost effective, highly flexible, and portable controlled source EM exploration system is presented. The design goals of the CMOS micro-processor based receiver and its companion transmitter are listed. (MHR)

Not Available

1978-11-01T23:59:59.000Z

239

The CDF computing and analysis system: First experience  

SciTech Connect

The Collider Detector at Fermilab (CDF) collaboration records and analyses proton anti-proton interactions with a center-of-mass energy of 2 TeV at the Tevatron. A new collider run, Run II, of the Tevatron started in April. During its more than two year duration the CDF experiment expects to record about 1 PetaByte of data. With its multi-purpose detector and center-of-mass energy at the frontier, the experimental program is large and versatile. The over 500 scientists of CDF will engage in searches for new particles, like the Higgs boson or supersymmetric particles, precision measurement of electroweak parameters, like the mass of the W boson, measurement of top quark parameters, and a large spectrum of B physics. The experiment has taken data and analyzed them in previous runs. For Run II, however, the computing model was changed to incorporate new methodologies, the file format switched, and both data handling and analysis system redesigned to cope with the increased demands. This paper (4-036 at Chep 2001) gives an overview of the CDF Run II compute system with emphasis on areas where the current system does not match initial estimates and projections. For the data handling and analysis system a more detailed description is given.

R. Colombo et al.

2001-11-02T23:59:59.000Z

240

Network of Excellence in Distributed and Dependable Computing Systems  

E-Print Network (OSTI)

This document presents a CaberNet vision of Research and Technology Development (RTD) in Distributed and Dependable systems. It takes as a basis the state-of-the-art (SOTA) Report prepared by John Bates in 1998: this document was commissioned by CaberNet as a first step towards the definition of a roadmap for European research in distributed and dependable systems. This report overviewed the developments in the main areas to which the CaberNet members made outstanding contributions, which were the most important at the time of its preparation, and analysed the most important trends in R&D in those areas

Alexander Romanovsky (ed.)

2004-01-01T23:59:59.000Z



241

EcoSystem: A Set of Grid Computing Tools for a Class of Economic Applications Umakishore Ramachandran, Vladimir Urazov, Namgeun Jeong, Hasnain Mandviwala, David Hilley  

E-Print Network (OSTI)

... and simple to set up a grid of low-cost hardware to produce high computational capacity ... Technical Report GT-CS-07-09. Submitted for publication. Abstract: Computational grids

Ramachandran, Umakishore

242

Maintaining SCALE as a reliable computational system for criticality safety analysis

SciTech Connect

Accurate and reliable computational methods are essential for nuclear criticality safety analyses. The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer code system was originally developed at Oak Ridge National Laboratory (ORNL) to enable users to easily set up and perform criticality safety analyses, as well as shielding, depletion, and heat transfer analyses. Over the fifteen-year life of SCALE, the mainstay of the system has been the criticality safety analysis sequences that have featured the KENO-IV and KENO-V.A Monte Carlo codes and the XSDRNPM one-dimensional discrete-ordinates code. The criticality safety analysis sequences provide automated material and problem-dependent resonance processing for each criticality calculation. This report details configuration management which is essential because SCALE consists of more than 25 computer codes (referred to as modules) that share libraries of commonly used subroutines. Changes to a single subroutine in some cases affect almost every module in SCALE! Controlled access to program source and executables and accurate documentation of modifications are essential to maintaining SCALE as a reliable code system. The modules and subroutine libraries in SCALE are programmed by a staff of approximately ten Code Managers. The SCALE Software Coordinator maintains the SCALE system and is the only person who modifies the production source, executables, and data libraries. All modifications must be authorized by the SCALE Project Leader prior to implementation.

Bowmann, S.M.; Parks, C.V.; Martin, S.K.

1995-04-01T23:59:59.000Z

243

National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

SciTech Connect

As part of the reliability project work, researchers from Vanderbilt University, Fermi National Laboratory and Illinois Institute of Technology developed a real-time, fault-tolerant cluster monitoring framework. The goal for the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely-coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata, e.g. physics parameters such as masses; a system to manage and permit the reuse of templates describing workflows; a system to capture data provenance information; a system to manage produced data; a means of monitoring workflow progress and status; a means of resuming or extending a stopped workflow; and fault-tolerance features to enhance the reliability of running workflows. In summary, these achievements are reported: Implemented a software system to manage parameters. This includes a parameter-set language based on a superset of the JSON data-interchange format, parsers in multiple languages (C++, Python, Ruby), and a web-based interface tool. It also includes a templating system that can produce input text for LQCD applications like MILC. Implemented a monitoring sensor framework in software that is in production on the Fermilab USQCD facility. This includes equipment health, process accounting, MPI/QMP process tracking, and batch system (Torque) job monitoring. All sensor data are available from databases, and various query tools can be used to extract common data patterns and perform ad hoc searches. Common batch-system queries such as job status are available in command-line tools and are used in actual workflow-based production by a subset of Fermilab users. Developed a formal state machine model for scientific workflow and reliability systems.
This includes the use of Vanderbilt's Generic Modeling Environment (GME) tool for code generation for the production of user APIs, code stubs, testing harnesses, and model correctness verification. It is used for creating wrappers around LQCD applications so that they can be integrated into existing workflow systems such as Kepler. Implemented a database system for tracking the state of nodes and jobs managed by the Torque batch systems used at Fermilab. This robust system and various canned queries are used for many tasks, including monitoring the health of the clusters, managing allocated projects, producing accounting reports, and troubleshooting nodes and jobs.

Bapty, Theodore; Dubey, Abhishek

2013-07-18T23:59:59.000Z
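The parameter-set-plus-template idea described above can be sketched as a JSON parameter set expanded into application input text; the parameter names and template format here are hypothetical illustrations, not the project's actual language:

```python
# A JSON parameter set (physics parameters such as masses) is parsed and
# substituted into a template that produces input text for an application.
# Parameter names and the template format are hypothetical.
import json
from string import Template

param_json = '{"beta": 5.6, "mass": 0.025, "lattice": "16x16x16x32"}'
params = json.loads(param_json)

template = Template("beta $beta\nmass $mass\nlattice $lattice\n")
input_text = template.substitute({k: str(v) for k, v in params.items()})
```

Keeping the physics parameters in structured data rather than hand-edited input files is what makes the campaigns reproducible and lets provenance tools record exactly which values produced which data.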

244

Seamless Access to Decentralized Storage Services in Computational Grids via a Virtual File System  

Science Conference Proceedings (OSTI)

This paper describes a novel technique for establishing a virtual file system that allows data to be transferred user-transparently and on-demand across computing and storage servers of a computational grid. Its implementation is based on extensions ... Keywords: computational grid, file system, logical account, network-computing, proxy

Renato J. Figueiredo; Nirav Kapadia; José A. B. Fortes

2004-04-01T23:59:59.000Z
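The on-demand, user-transparent transfer the paper describes can be sketched as a fetch-and-cache proxy; the class below is a toy stand-in, not the paper's implementation, and the "remote store" is just a dict standing in for a data server:

```python
# Toy virtual file system proxy: the first read of a path triggers a
# "remote" fetch; later reads are served from a local cache, so data
# moves across the grid only on demand.

class VirtualFS:
    def __init__(self, remote_store):
        self.remote = remote_store   # stands in for the remote data server
        self.cache = {}
        self.fetches = 0             # count of remote transfers performed

    def read(self, path):
        if path not in self.cache:   # transfer on demand, then cache
            self.fetches += 1
            self.cache[path] = self.remote[path]
        return self.cache[path]

vfs = VirtualFS({"/data/run01.dat": b"event records"})
first = vfs.read("/data/run01.dat")
second = vfs.read("/data/run01.dat")
```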

245

CAD-centric Computation Management System for a Virtual TBM  

SciTech Connect

HyPerComp Inc. in research collaboration with TEXCEL has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well established (third-party) simulation software in various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes which are different for each problem,) VTBM will have a well developed CAD interface, governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation,) CAD-based data interpolation. In Phase-I, we built the CAD-hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer), the regeneration of CAD models based upon computed deflections, are among the other highlights of phase-I activity.

Ramakanth Munipalli; K.Y. Szema; P.Y. Huang; C.M. Rowell; A.Ying; M. Abdou

2011-05-03T23:59:59.000Z

246

ACM curriculum committee report computing programs in small colleges  

Science Conference Proceedings (OSTI)

The Curriculum Committee of the Education Board of ACM has established as an ongoing committee a Small College Group. This committee will make a presentation of its report at this symposium and will provide an opportunity for attendees to comment. Various ...

John Beidler; Richard H. Austing; Lillian N. Cassel

1984-02-01T23:59:59.000Z

247

Evolution of small real-time IBM computer systems  

Science Conference Proceedings (OSTI)

In parallel with the development of data processing applications for computers, effort was directed to other areas in which computers might provide benefits for the user. One early effort was the application of computers to the monitoring and control ...

Thomas J. Harrison; Bruce W. Landeck; Hal K. St. Clair

1981-09-01T23:59:59.000Z

248

Army Energy and Water Reporting System Assessment  

SciTech Connect

There are many areas of desired improvement for the Army Energy and Water Reporting System. The purpose of the system is to serve as a data repository for collecting information from energy managers, which is then compiled into an annual energy report. This document summarizes reported shortcomings of the system and provides several alternative approaches for improving application usability and adding functionality. The U.S. Army has been using the Army Energy and Water Reporting System (AEWRS) for many years to collect and compile energy data from installations, facilitating compliance with Federal and Department of Defense energy management program reporting requirements. In this analysis, staff from Pacific Northwest National Laboratory found that substantial opportunities exist to expand AEWRS functions to better assist the Army in effectively managing energy programs. Army leadership must decide whether it wants to invest in expanding AEWRS capabilities as a web-based, enterprise-wide tool for improving the Army Energy and Water Management Program or simply to maintain a bottom-up reporting tool. This report looks at improving system functionality and user-friendliness from an operational perspective, and also at the system as a tool for increasing program effectiveness. The authors of this report recommend making it easier for energy managers to input accurate data as the top priority for improving AEWRS, followed by improved reporting. The AEWRS user interface is dated and not user friendly, and a new system is recommended. While relatively minor improvements could make the existing system easier to use, significant improvements will be achieved with a user-friendly interface, new architecture, and a design that permits scalability and reliability.
An expanded data set would naturally require additional requirements gathering and a focus on integrating with other existing data sources, thus minimizing manually entered data.

Deprez, Peggy C.; Giardinelli, Michael J.; Burke, John S.; Connell, Linda M.

2011-09-01T23:59:59.000Z

249

Site Controller: A System for Computer-Aided Civil Engineering and Construction  

E-Print Network (OSTI)

A revolution in earthmoving, a $100 billion industry, can be achieved with three components: the GPS location system, sensors and computers in bulldozers, and SITE CONTROLLER, a central computer system that ...

Greenspun, Philip

1993-02-01T23:59:59.000Z

250

How open should an open system be? : essays on mobile computing  

E-Print Network (OSTI)

"Systems" goods-such as computers, telecom networks, and automobiles-are made up of multiple components. This dissertation comprises three essays that study the decisions of system innovators in mobile computing to "open" ...

Boudreau, Kevin J. (Kevin Joseph)

2006-01-01T23:59:59.000Z

251

Reference guide to small cogeneration systems for utilities. Final report  

SciTech Connect

This report covers systems performance and cost data for selected smaller cogeneration systems, which are defined generally as those cogeneration systems in the range below 5 megawatts. The data presented in this guide are expected to be used in two main ways. First, the data can be used to extend the existing DEUS Computer Evaluation Model data base to the smaller cogeneration systems. Second, the data will serve as a general guide to smaller cogeneration systems for use by utility companies and others. The data pertain to the following cogeneration systems: gas turbine with heat recovery boiler, back pressure and extraction/condensing steam turbine, combined cycle, internal combustion (reciprocating) engine, steam bottoming cycle using industrial process exhaust, and gas turbine topping cycle with standard industrial process steam generators. A no-cogeneration base case is included for comparison purposes.

Rodden, R.M.; Boyen, J.L.; Waters, M.H.

1986-02-01T23:59:59.000Z

252

Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Computing and Storage Requirements for FES. J. Candy, General Atomics, San Diego, CA. Presented at the DOE Technical Program Review, Hilton Washington DC/Rockville, Rockville, MD, 19-20 March 2013. Drift waves and tokamak plasma turbulence, and their role in the context of fusion research: plasma performance in tokamaks is limited by turbulent radial transport of both energy and particles; this turbulent transport is caused by drift-wave instabilities, driven by free energy in plasma temperature and density gradients; these instabilities are unavoidable and will persist in a reactor; asymptotic theory identifies various types (ITG, TIM, TEM, ETG, plus electromagnetic variants such as AITG). Also covered: the Fokker-Planck theory of plasma transport. Basic equation still ...

253

ISDSN Sensor System Phase One Test Report  

Science Conference Proceedings (OSTI)

This Phase 1 Test Report documents the test activities and results completed for the Idaho National Laboratory (INL) sensor systems that will be deployed in the meso-scale test bed (MSTB) at Florida International University (FIU), as outlined in the ISDSN-MSTB Test Plan. This report captures the sensor system configuration tested, test parameters, testing procedures, any noted changes from the implementation plan, acquired test data sets, and processed results.

Gail Heath

2011-09-01T23:59:59.000Z

254

Phase II Final Report Computer Optimization of Electron Guns  

SciTech Connect

This program implemented advanced computer optimization into an adaptive mesh, finite element, 3D, charged particle code. The routines can optimize electron gun performance to achieve a specified current, beam size, and perveance. It can also minimize beam ripple and electric field gradients. The magnetics optimization capability allows design of coil geometries and magnetic material configurations to achieve a specified axial magnetic field profile. The optimization control program, built into the charged particle code Beam Optics Analyzer (BOA) utilizes a 3D solid modeling package to modify geometry using design tables. Parameters within the graphical user interface (currents, voltages, etc.) can be directly modified within BOA. The program implemented advanced post processing capability for the optimization routines as well as the user. A Graphical User Interface allows the user to set up goal functions, select variables, establish ranges of variation, and define performance criteria. The optimization capability allowed development of a doubly convergent multiple beam gun that could not be designed using previous techniques.
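The optimization workflow this record describes (a goal function evaluated over gun parameters and minimized by a control program) can be sketched with a generic derivative-free search. The goal function below, its Child-Langmuir-style current scaling, and all numeric targets are invented stand-ins for illustration, not BOA's actual model or API:

```python
def goal(params):
    # Hypothetical goal: hit a target beam current of 1.0 A while penalizing
    # deviation of the gap from a nominal 1.0 cm (a stand-in for beam ripple).
    voltage, gap = params
    current = 0.02 * voltage ** 1.5 / gap ** 2   # Child-Langmuir-like scaling
    return (current - 1.0) ** 2 + (0.1 * (gap - 1.0)) ** 2

def pattern_search(f, x, step=0.5, tol=1e-6):
    # Compass search: probe +/- step along each axis, keep improving moves,
    # and halve the step when no axis move improves the goal.
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                if trial[i] <= 0:            # keep voltage and gap physical
                    continue
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

best, val = pattern_search(goal, [10.0, 2.0])
```

A production optimizer would add constraints (field gradients, beam size) as further penalty terms, which is essentially how multi-objective goal functions are assembled.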

R. Lawrence Ives; Thuc Bui; Hien Tran; Michael Read; Adam Attarian; William Tallis

2011-04-15T23:59:59.000Z

255

Computer optimization of refrigeration systems in a textile plant: A case history  

Science Conference Proceedings (OSTI)

A process computer was installed in a large integrated nylon plant in 1976. This dedicated chilled water management system was designed to optimize the operation of chillers and to reduce their energy costs. The computer system was also configured to ... Keywords: Computer applications, control system analysis, energy control, modeling, optimal search techniques, process control
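The chiller dispatch problem this case history describes can be sketched as a small load-allocation search. The quadratic power curves and the 800-ton plant load below are hypothetical illustrative numbers, not data from the plant in the paper:

```python
# Split a cooling load between two chillers so total electric power is minimized.
# Each chiller's power draw (kW) is modeled as a quadratic in its load (tons);
# the coefficients and the 800-ton plant load are invented for illustration.
def power(curve, load):
    a, b, c = curve
    return a + b * load + c * load ** 2

chiller_1 = (20.0, 0.55, 0.0008)
chiller_2 = (25.0, 0.45, 0.0015)
total_load = 800.0

best_split, best_power = None, float("inf")
for x in range(0, int(total_load) + 1):      # tons assigned to chiller 1
    p = power(chiller_1, x) + power(chiller_2, total_load - x)
    if p < best_power:
        best_split, best_power = x, p
```

A real dispatch system would add minimum-load constraints and on/off sequencing decisions, but the exhaustive search above already shows why unequal splits can beat naive 50/50 loading.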

Chun H. Cho; Nelson Norden

1982-11-01T23:59:59.000Z

256

Integration of sensing and computing in an intelligent decision support system for homeland security defense  

Science Conference Proceedings (OSTI)

We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed ... Keywords: Data routing, Distributed computing, Dynamic programming, Intelligent decision support system, Sensor deployment, Sensor fusion

Qishi Wu; Mengxia Zhu; Nageswara S. V. Rao

2009-04-01T23:59:59.000Z

257

Solar thermal power systems. Summary report  

DOE Green Energy (OSTI)

The work accomplished by the Aerospace Corporation from April 1973 through November 1979 in the mission analysis of solar thermal power systems is summarized. Sponsorship of this effort was initiated by the National Science Foundation, continued by the Energy Research and Development Administration, and most recently directed by the United States Department of Energy, Division of Solar Thermal Systems. Major findings and conclusions are summarized for large power systems, small power systems, solar total energy systems, and solar irrigation systems, as well as special studies in the areas of energy storage, industrial process heat, and solar fuels and chemicals. The various data bases and computer programs utilized in these studies are described, and tables are provided listing financial and solar cost assumptions for each study. An extensive bibliography is included to facilitate review of specific study results and methodology.

Not Available

1980-06-01T23:59:59.000Z

258

Three-dimensional medical imaging: algorithms and computer systems  

Science Conference Proceedings (OSTI)

Keywords: Computer graphics, medical imaging, surface rendering, three-dimensional imaging, volume rendering

M. R. Stytz; G. Frieder; O. Frieder

1991-12-01T23:59:59.000Z

259

HPCT HPM on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

HPM on BG/P Systems References IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P - IBM Redbook describing HPCT and other performance tools High...

260

IBM HPCT on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

IBM HPCT on BG/P Systems References IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P - IBM Redbook describing HPCT and other performance tools...



261

A design of cooperation management system to improve reliability in resource sharing computing environment  

Science Conference Proceedings (OSTI)

Resource sharing computing is a project that realizes high performance computing by utilizing the resources of peers that are connected to the Internet. Resource sharing computing provides a dynamic internet environment where peers can freely participate, ... Keywords: cooperation system, reliability, resource sharing computing

Ji Su Park; Kwang Sik Chung; Jin Gon Shon

2007-05-01T23:59:59.000Z

262

Spacecraft computing systems with high-level specifications and FPGAs  

E-Print Network (OSTI)

A typical modern spacecraft requires computer processing in every major subsystem. The most popular method to carry out these processing requirements involves the use of a primary computer based on microprocessors. To carry ...

Ong, Elwin, 1979-

2006-01-01T23:59:59.000Z

263

ALiCE: A Java-based Grid Computing System  

E-Print Network (OSTI)

A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. This talk is divided into three parts. Firstly, ...

Teo, Yong Meng

264

The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report  

SciTech Connect

In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

2009-10-12T23:59:59.000Z

265

Computer Algebra and Computer Algebra Systems L'alg`ebre - CECM  

E-Print Network (OSTI)

OLEG GOLUBITSKY, Ontario Research Centre for Computer Algebra, London, Ontario, Canada, N6A 5B7. Implementation of Arithmetics in Aldor. Aldor is a...

266

Service life of the reinforced concrete bridge deck in corrosive environments: A soft computing system  

Science Conference Proceedings (OSTI)

In recent years, soft computing techniques have been increasingly applied in many fields of civil engineering due to their capabilities in computation and knowledge processing. In this paper, a soft computing system is developed to estimate the service ... Keywords: α-level optimization, Concrete bridge, Corrosion, Fuzzy random, Fuzzy system, Service life

Jafar Sobhani; Ali Akbar Ramezanianpour

2011-06-01T23:59:59.000Z

267

Human Pacman: a sensing-based mobile entertainment system with ubiquitous computing and tangible interaction  

Science Conference Proceedings (OSTI)

Human Pacman is an interactive ubiquitous and mobile entertainment system that is built upon position and perspective sensing via Global Positioning System and inertia sensors; and tangible human-computer interfacing with the use of Bluetooth and capacitive ... Keywords: computer entertainment, mobile entertainment, tangible interaction, ubiquitous computing

Adrian David Cheok; Siew Wan Fong; Kok Hwee Goh; Xubo Yang; Wei Liu; Farzam Farzbiz

2003-05-01T23:59:59.000Z

268

Computer-aided engineering system for design of sequence arrays and lithographic masks  

DOE Patents (OSTI)

An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks.

Hubbell, Earl A. (Mt. View, CA); Lipshutz, Robert J. (Palo Alto, CA); Morris, Macdonald S. (San Jose, CA); Winkler, James L. (Palo Alto, CA)

1997-01-01T23:59:59.000Z

269

An assessment of accountability policies for large-scale distributed computing systems  

Science Conference Proceedings (OSTI)

Grid computing systems offer resources to solve large-scale computational problems and are thus widely used in a large variety of domains, including computational sciences, energy management, and defense. Accountability in these application domains is ... Keywords: accountability, distributed systems, grid, policies, scalability

Wonjun Lee; Anna C. Squicciarini; Elisa Bertino

2009-04-01T23:59:59.000Z

270

Power Systems Development Facility progress report  

Science Conference Proceedings (OSTI)

This is a report on the progress in design and construction of the Power Systems Development Facility. The topics of the report include background information, descriptions of the advanced gasifier, advanced PFBC, particulate control devices, and fuel cell. The major activities during the past year have been the final stages of design, procurement of major equipment and bulk items, construction of the facility, and the preparation for the operation of the Facility in late 1995.

Rush, R.E.; Hendrix, H.L.; Moore, D.L.; Pinkston, T.E.; Vimalchand, P.; Wheeldon, J.M.

1995-11-01T23:59:59.000Z

271

Geographic Information Systems Applications on an ATM-Based Distributed High Performance Computing System  

E-Print Network (OSTI)

. We present a distributed geographic information system (DGIS) built on a distributed high performance computing environment using a number of software infrastructural building blocks and computational resources interconnected by an ATM-based broadband network. Archiving, access and processing of scientific data are discussed in the context of geographic and environmental applications with special emphasis on the potential for local-area weather, agriculture, soil and land management products. Software technologies such as tiling and caching techniques can be used to optimise storage requirements and response time for applications requiring very large data sets such as multi-channel satellite data. Distributed High Performance Computing hardware technology underpins our proposed system. In particular, we discuss the capabilities of a distributed hardware environment incorporating: high bandwidth communications networks such as Telstra's Experimental Broadband Network (EBN); large capa...
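The tiling-and-caching idea mentioned in this abstract can be sketched as a least-recently-used tile cache. The key layout, capacity, and loader below are invented for illustration:

```python
from collections import OrderedDict

class TileCache:
    """Hypothetical LRU cache for map tiles, keyed by (layer, x, y, zoom)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._tiles = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key, load):
        if key in self._tiles:
            self._tiles.move_to_end(key)       # mark as most recently used
            self.hits += 1
            return self._tiles[key]
        self.misses += 1
        tile = load(key)                       # fetch from remote store on a miss
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)    # evict least recently used tile
        return tile

cache = TileCache(capacity=2)
loads = []
fetch = lambda key: loads.append(key) or ("tile-%r" % (key,))
cache.get(("sat", 0, 0, 3), fetch)
cache.get(("sat", 0, 0, 3), fetch)   # served from cache
cache.get(("sat", 1, 0, 3), fetch)
cache.get(("sat", 2, 0, 3), fetch)   # evicts ("sat", 0, 0, 3)
cache.get(("sat", 0, 0, 3), fetch)   # must be fetched again
```

In a DGIS the `load` callback would be a network fetch over the broadband link, so the hit rate of a cache like this translates directly into response-time savings.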

November Hawick

1997-01-01T23:59:59.000Z

272

Interim report on verification and benchmark testing of the NUFT computer code  

Science Conference Proceedings (OSTI)

This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.
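Verification in the sense used here, comparing a numerical solver against a closed-form solution, can be sketched for the simplest relative of NUFT's equations: one-dimensional transient heat conduction in a semi-infinite solid whose surface temperature is suddenly raised. The grid, time step, and material numbers are illustrative, and the scheme below is a plain explicit finite-difference method, not NUFT's discretization:

```python
import math

def analytical(x, t, alpha, t0, ts):
    # Semi-infinite solid, surface held at ts:
    # T(x, t) = ts + (t0 - ts) * erf(x / (2 * sqrt(alpha * t)))
    return ts + (t0 - ts) * math.erf(x / (2.0 * math.sqrt(alpha * t)))

def numerical(nx, dx, dt, steps, alpha, t0, ts):
    # Explicit finite-difference march with the surface node fixed at ts.
    # Stability requires r = alpha * dt / dx**2 <= 0.5.
    temp = [t0] * nx
    temp[0] = ts
    r = alpha * dt / dx ** 2
    for _ in range(steps):
        new = temp[:]
        for i in range(1, nx - 1):
            new[i] = temp[i] + r * (temp[i - 1] - 2.0 * temp[i] + temp[i + 1])
        temp = new
    return temp

alpha = 1.0e-6                     # thermal diffusivity, m^2/s (soil-like)
t0, ts = 20.0, 80.0                # initial and surface temperatures, deg C
dx, dt, steps = 0.002, 1.0, 600    # r = 0.25, stable; domain far larger than penetration depth
temp = numerical(200, dx, dt, steps, alpha, t0, ts)
max_err = max(abs(temp[i] - analytical(i * dx, steps * dt, alpha, t0, ts))
              for i in range(1, 100))
```

The pattern is the same as the report's: run the code on a problem with a known solution and quantify the discrepancy before trusting it on field-scale problems.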

Lee, K.H.; Nitao, J.J. [Lawrence Livermore National Lab., CA (United States)]; Kulshrestha, A. [Weiss Associates, Emeryville, CA (United States)]

1993-10-01T23:59:59.000Z

273

Physics, Computer Science and Mathematics Division annual report, 1 January-31 December 1983  

SciTech Connect

This report summarizes the research performed in the Physics, Computer Science and Mathematics Division of the Lawrence Berkeley Laboratory during calendar year 1983. The major activity of the Division is research in high-energy physics, both experimental and theoretical, and research and development in associated technologies. A smaller, but still significant, program is in computer science and applied mathematics. During 1983 there were approximately 160 people in the Division active in or supporting high-energy physics research, including about 40 graduate students. In computer science and mathematics, the total staff, including students and faculty, was roughly 50. Because of the creation in late 1983 of a Computing Division at LBL and the transfer of the Computer Science activities to the new Division, this annual report is the last from the Physics, Computer Science and Mathematics Division. In December 1983 the Division reverted to its historic name, the Physics Division. Its future annual reports will document high energy physics activities and also those of its Mathematics Department.

Jackson, J.D.

1984-08-01T23:59:59.000Z

274

Semiotic Systems, Computers, and the Mind: How Cognition Could Be Computing  

Science Conference Proceedings (OSTI)

In this reply to James H. Fetzer's "Minds and Machines: Limits to Simulations of Thought and Action", the author argues that computationalism should not be the view that human cognition is computation, but that it should be the view that cognition simpliciter ...

William J. Rapaport

2012-01-01T23:59:59.000Z

275

PROTOTOUCH a system for prototyping ubiquitous computing environments mediated by touch  

E-Print Network (OSTI)

Computers as we know them are fading into the background. However, interaction modalities developed for "foreground" computer systems are beginning to seep into the day-to-day interactions that people have with each other ...

Cranor, David (John David)

2011-01-01T23:59:59.000Z

276

Finite Volume Based Computer Program for Ground Source Heat Pump System  

SciTech Connect

This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled "Finite Volume Based Computer Program for Ground Source Heat Pump Systems." The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function of time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance. Thus these computer tools simulate the coupled performance of the ground loop and the heat pump.
The price paid for the three-dimensional detail is the large computational times required with GEO3D. The computational times required for GEO2D are reasonable, a few minutes for a 20 year simulation. For a similar simulation, GEO3D takes days of computational time. Because of the small simulation times with GEO2D, a number of attractive features have been added to it. GEO2D has a user friendly interface where inputs and outputs are all handled with GUI (graphical user interface) screens. These GUI screens make the program exceptionally easy to use. To make the program even easier to use a number of standard input options for the most common GSHP situations are provided to the user. For the expert user, the option still exists to enter their own detailed information. To further help designers and GSHP customers make decisions about a GSHP heating and cooling system, cost estimates are made by the program. These cost estimates include a payback period graph to show the user where their GSHP system pays for itself. These GSHP simulation tools should be a benefit to the advancement of GSHP systems.
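The finite-volume treatment of the ground described in this record can be sketched in one radial dimension: control volumes around a borehole, conduction fluxes through cell faces, and a fixed heat-extraction rate at the bore wall. Every number below (ground properties, bore radius, extraction rate) is an illustrative assumption, and the code is a minimal explicit scheme, not GEO2D itself:

```python
import math

# Radial finite-volume grid around a vertical borehole (axisymmetric, per metre of depth).
k, rho, cp = 2.5, 2000.0, 1500.0     # ground conductivity (W/m-K), density (kg/m^3), specific heat (J/kg-K)
r0, r_out, n = 0.06, 5.0, 100        # bore radius, outer radius (m), cell count
dr = (r_out - r0) / n
r = [r0 + (i + 0.5) * dr for i in range(n)]                              # cell-centre radii
vol = [math.pi * ((r[i] + dr / 2) ** 2 - (r[i] - dr / 2) ** 2) for i in range(n)]
temp = [12.0] * n                    # undisturbed ground temperature (deg C)
q_bore = 50.0                        # heat extracted at the bore wall (W per metre of depth)

dt, steps = 900.0, 2880              # 15-minute steps for 30 days; 2*alpha*dt/dr^2 ~ 0.6 < 1, stable
for _ in range(steps):
    new = temp[:]
    for i in range(n):
        # Conduction power INTO cell i through its inner and outer faces (W per metre of depth).
        p_in = -q_bore if i == 0 else \
            k * 2 * math.pi * (r[i] - dr / 2) * (temp[i - 1] - temp[i]) / dr
        p_out = 0.0 if i == n - 1 else \
            k * 2 * math.pi * (r[i] + dr / 2) * (temp[i + 1] - temp[i]) / dr
        new[i] = temp[i] + dt * (p_in + p_out) / (rho * cp * vol[i])
    temp = new
```

Because the same face flux is added to one cell and subtracted from its neighbour, the scheme conserves energy exactly; extending this bookkeeping to 2-D or 3-D cells and coupling it to a heat-pump performance map is what turns it into a GSHP design tool.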

Menart, James A. [Wright State University]

2013-02-22T23:59:59.000Z

277

Recovery Act: Finite Volume Based Computer Program for Ground Source Heat Pump Systems  

SciTech Connect

This report is a compilation of the work that has been done on the grant DE-EE0002805 entitled "Finite Volume Based Computer Program for Ground Source Heat Pump Systems." The goal of this project was to develop a detailed computer simulation tool for GSHP (ground source heat pump) heating and cooling systems. Two such tools were developed as part of this DOE (Department of Energy) grant; the first is a two-dimensional computer program called GEO2D and the second is a three-dimensional computer program called GEO3D. Both of these simulation tools provide an extensive array of results to the user. A unique aspect of both these simulation tools is the complete temperature profile information calculated and presented. Complete temperature profiles throughout the ground, casing, tube wall, and fluid are provided as a function of time. The fluid temperatures from and to the heat pump, as a function of time, are also provided. In addition to temperature information, detailed heat rate information at several locations as a function of time is determined. Heat rates between the heat pump and the building indoor environment, between the working fluid and the heat pump, and between the working fluid and the ground are computed. The heat rates between the ground and the working fluid are calculated as a function of time and position along the ground loop. The heating and cooling loads of the building being fitted with a GSHP are determined with the computer program developed by DOE called ENERGYPLUS. Lastly COP (coefficient of performance) results as a function of time are provided. Both the two-dimensional and three-dimensional computer programs developed as part of this work are based upon a detailed finite volume solution of the energy equation for the ground and ground loop. Real heat pump characteristics are entered into the program and used to model the heat pump performance.
Thus these computer tools simulate the coupled performance of the ground loop and the heat pump. The price paid for the three-dimensional detail is the large computational times required with GEO3D. The computational times required for GEO2D are reasonable, a few minutes for a 20 year simulation. For a similar simulation, GEO3D takes days of computational time. Because of the small simulation times with GEO2D, a number of attractive features have been added to it. GEO2D has a user friendly interface where inputs and outputs are all handled with GUI (graphical user interface) screens. These GUI screens make the program exceptionally easy to use. To make the program even easier to use a number of standard input options for the most common GSHP situations are provided to the user. For the expert user, the option still exists to enter their own detailed information. To further help designers and GSHP customers make decisions about a GSHP heating and cooling system, cost estimates are made by the program. These cost estimates include a payback period graph to show the user where their GSHP system pays for itself. These GSHP simulation tools should be a benefit to the advancement of GSHP systems.

James A Menart, Professor

2013-02-22T23:59:59.000Z

278

Analysis of Hybrid Hydrogen Systems: Final Report  

DOE Green Energy (OSTI)

Report on biomass pathways for hydrogen production and how they can be hybridized to support renewable electricity generation. Two hybrid systems were studied in detail for process feasibility and economic performance. The best-performing system was estimated to produce hydrogen at costs ($1.67/kg) within Department of Energy targets ($2.10/kg) for central biomass-derived hydrogen production while also providing value-added energy services to the electric grid.

Dean, J.; Braun, R.; Munoz, D.; Penev, M.; Kinchin, C.

2010-01-01T23:59:59.000Z

279

Audit Report, Evaluation of Classified Information Systems Security...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Report, Evaluation of Classified Information Systems Security Program, DOEIG-0518 Audit Report, Evaluation of Classified Information Systems Security Program, DOEIG-0518 All...

280

Railcar waste transfer system hydrostatic test report  

SciTech Connect

This Acceptance Test Report (ATR) documents for record purposes the field results, acceptance, and approvals of the completed acceptance test per HNF-SD-W417-ATP-001, "Railcar Waste Transfer System Hydrostatic Test". The test was completed and approved without any problems or exceptions.

Ellingson, S.D.

1997-04-03T23:59:59.000Z



281

Acceptance test report: Backup power system  

SciTech Connect

Acceptance Test Report for construction functional testing of Project W-030 Backup Power System. Project W-030 provides a ventilation upgrade for the four Aging Waste Facility tanks. Backup power includes a single 125 kW diesel generator, three 10 kVA uninterruptible power supply units, and all necessary controls.

Cole, D.B. [Westinghouse Hanford Co., Richland, WA (United States)

1996-01-26T23:59:59.000Z

282

On the Design and Performance of the Maple System - Computer ...  

E-Print Network (OSTI)

a powerful set of facilities for symbolic mathematical computation, portability, and a .... and an external user interface are diff, expand, taylor, type, and coeff (for...

283

High Performance Computing Systems Integration, HPC-5: HPC: LANL...  

NLE Websites -- All DOE Office Websites (Extended Search)

Latest in cluster technologies New technology in High Performance Computing and Simulation HPC-5 provides advanced research, development, testing, and...

284

COMPUTER DESIGN AND OPTIMIZATION OF CRYOGENIC REFRIGERATION SYSTEMS  

E-Print Network (OSTI)

be used to design and optimize refrigeration cycles as well ... COMPUTER DESIGN AND OPTIMIZATION OF CRYOGENIC REFRIGERATION ... Trial Design: Fixed state parameters (bar) ... Refrigeration

green, M.A.

2011-01-01T23:59:59.000Z

285

Advanced Turbine Systems Program. Topical report  

SciTech Connect

The Allison Gas Turbine Division (Allison) of General Motors Corporation conducted the Advanced Turbine Systems (ATS) program feasibility study (Phase I) in accordance with the Morgantown Energy Technology Center's (METC's) contract DE-AC21-86MC23165 A028. This feasibility study was to define and describe a natural gas-fired reference system which would meet the objective of ≥60% overall efficiency, produce nitrogen oxides (NOx) emissions 10% less than the state-of-the-art without post-combustion controls, and achieve a cost of electricity for the Nth system approximately 10% below that of current systems. In addition, the selected natural gas-fired reference system was expected to be adaptable to coal. The Allison proposed reference system feasibility study incorporated Allison's long-term experience from advanced aerospace and military technology programs. This experience base is pertinent and crucial to the success of the ATS program. The existing aeroderivative technology base includes high temperature hot section design capability, single crystal technology, advanced cooling techniques, high temperature ceramics, ultrahigh turbomachinery components design, advanced cycles, and sophisticated computer codes.

1993-03-01T23:59:59.000Z

286

Test report: Princeton Power Systems prototype energy storage system.

SciTech Connect

The Department of Energy Office of Electricity (DOE/OE), Sandia National Laboratory (SNL) and the Base Camp Integration Lab (BCIL) partnered together to incorporate an energy storage system into a microgrid configured Forward Operating Base to reduce the fossil fuel consumption and to ultimately save lives. Energy storage vendors will be sending their systems to SNL Energy Storage Test Pad (ESTP) for functional testing and then to the BCIL for performance evaluation. The technologies that will be tested are electro-chemical energy storage systems comprised of lead acid, lithium-ion or zinc-bromide. Princeton Power Systems has developed an energy storage system that utilizes lithium ion phosphate batteries to save fuel on a military microgrid. This report contains the testing results and some limited analysis of performance of the Princeton Power Systems Prototype Energy Storage System.

Rose, David Martin; Schenkman, Benjamin L.; Borneo, Daniel R.

2013-08-01T23:59:59.000Z

287

Physics, Computer Science and Mathematics Division. Annual report, 1 January-31 December 1979  

SciTech Connect

This annual report describes the research work carried out by the Physics, Computer Science and Mathematics Division during 1979. The major research effort of the Division remained High Energy Particle Physics with emphasis on preparing for experiments to be carried out at PEP. The largest effort in this field was for development and construction of the Time Projection Chamber, a powerful new particle detector. This work took a large fraction of the effort of the physics staff of the Division together with the equivalent of more than a hundred staff members in the Engineering Departments and shops. Research in the Computer Science and Mathematics Department of the Division (CSAM) has been rapidly expanding during the last few years. Cross fertilization of ideas and talents resulting from the diversity of effort in the Physics, Computer Science and Mathematics Division contributed to the software design for the Time Projection Chamber, made by the Computer Science and Applied Mathematics Department.

Lepore, J.V. (ed.)

1980-09-01T23:59:59.000Z

288

Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems  

Science Conference Proceedings (OSTI)

The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. ... Keywords: FPGA-based coprocessors, application acceleration, bioinformatics, biological sequence alignment, high performance reconfigurable computing

Matt Chiu; Martin C. Herbordt

2010-11-01T23:59:59.000Z

289

Institute for Scientific Computing Research Annual Report for Fiscal Year 2003  

SciTech Connect

The University Relations Program (URP) encourages collaborative research between Lawrence Livermore National Laboratory (LLNL) and the University of California campuses. The Institute for Scientific Computing Research (ISCR) actively participates in such collaborative research, and this report details the Fiscal Year 2003 projects jointly served by URP and ISCR.

Keyes, D; McGraw, J

2004-02-12T23:59:59.000Z

290

HARTEX: a safe real-time kernel for distributed computer control systems  

Science Conference Proceedings (OSTI)

A hard real-time kernel is presented for distributed computer control systems (DCCS), highlighting a number of novel features, such as integrated scheduling of hard and soft real-time tasks as well as tasks and resources; high-performance time management ... Keywords: distributed computer control systems, object modelling techniques, real-time communication, real-time operating systems, scheduling algorithms
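The integrated scheduling of hard and soft real-time tasks described above can be illustrated as a two-level priority order in which hard tasks always run before soft ones, with ties broken earliest-deadline-first. This is a minimal sketch, not the HARTEX algorithm; the task-tuple layout and names are assumptions.

```python
import heapq

def schedule(tasks):
    """Order released tasks: hard real-time tasks before soft ones,
    earliest deadline first within each class. `tasks` is a list of
    (name, is_hard, deadline) tuples (hypothetical layout)."""
    heap = [((0 if hard else 1), deadline, name) for name, hard, deadline in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order
```

In a real kernel this ordering would drive preemption at run time rather than produce a static list, but the priority structure is the same.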

C. K. Angelov; I. E. Ivanov; A. Burns

2002-03-01T23:59:59.000Z

291

High-Performance Interconnects and Computing Systems: Quantitative Studies A thesis submitted to the Department of  

E-Print Network (OSTI)

To measure and to predict the performance of parallel computer systems, parallel benchmarks are designed ... High-Performance Interconnects and Computing Systems: Quantitative Studies ... characteristics, parallel programming paradigms used by the applications, and the machine system's architecture ...

Afsahi, Ahmad

292

Vitrification Facility integrated system performance testing report  

Science Conference Proceedings (OSTI)

This report provides a summary of component and system performance testing associated with the Vitrification Facility (VF) following construction turnover. The VF at the West Valley Demonstration Project (WVDP) was designed to convert stored radioactive waste into a stable glass form for eventual disposal in a federal repository. Following an initial Functional and Checkout Testing of Systems (FACTS) Program and subsequent conversion of test stand equipment into the final VF, a testing program was executed to demonstrate successful performance of the components, subsystems, and systems that make up the vitrification process. Systems were started up and brought on line as construction was completed, until integrated system operation could be demonstrated to produce borosilicate glass using nonradioactive waste simulant. Integrated system testing and operation culminated with a successful Operational Readiness Review (ORR) and Department of Energy (DOE) approval to initiate vitrification of high-level waste (HLW) on June 19, 1996. Performance and integrated operational test runs conducted during the test program provided a means for critical examination, observation, and evaluation of the vitrification system. Test data taken for each Test Instruction Procedure (TIP) was used to evaluate component performance against system design and acceptance criteria, while test observations were used to correct, modify, or improve system operation. This process was critical in establishing operating conditions for the entire vitrification process.

Elliott, D.

1997-05-01T23:59:59.000Z

293

Integration of renewable energy sources: reliability-constrained power system planning and operations using computational intelligence  

E-Print Network (OSTI)

Renewable sources of energy such as wind turbine generators and solar panels have attracted much attention because they are environmentally friendly, do not consume fossil fuels, and can enhance a nation's energy security. As a result, increasingly significant amounts of renewable energy are being integrated into conventional power grids. The research reported in this dissertation primarily investigates the reliability-constrained planning and operations of electric power systems including renewable sources of energy by accounting for uncertainty. The major sources of uncertainty in these systems include equipment failures and stochastic variations in time-dependent power sources. Different energy sources have different characteristics in terms of cost, power dispatchability, and environmental impact. For instance, the intermittency of some renewable energy sources may compromise the system reliability when they are integrated into the traditional power grids. Thus, multiple issues should be considered in grid interconnection, including system cost, reliability, and pollutant emissions. Furthermore, due to the high complexity and high nonlinearity of such non-traditional power systems with multiple energy sources, computational intelligence based optimization methods are used to resolve several important and challenging problems in their operations and planning. Meanwhile, probabilistic methods are used for reliability evaluation in reliability-constrained planning and design. The major problems studied in the dissertation include reliability evaluation of power systems with time-dependent energy sources, multi-objective design of hybrid generation systems, risk and cost tradeoff in economic dispatch with wind power penetration, optimal placement of distributed generators and protective devices in power distribution systems, and reliability-based estimation of wind power capacity credit.
These case studies have demonstrated the viability and effectiveness of computational intelligence based methods in dealing with a set of important problems in this research arena.
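Probabilistic reliability evaluation of the kind described above is commonly done by Monte Carlo sampling of generator outages and intermittent output. The sketch below estimates loss-of-load probability for a toy system; the model structure, parameter names, and numbers are illustrative assumptions, not taken from the dissertation.

```python
import random

def lolp_monte_carlo(n_units, unit_mw, forced_outage_rate, wind_profile,
                     load_mw, trials=20000, seed=1):
    """Estimate loss-of-load probability for a hypothetical system of
    identical conventional units plus a time-varying wind contribution
    sampled from `wind_profile` (a list of MW values)."""
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(trials):
        # Each conventional unit is independently forced out of service.
        cap = sum(unit_mw for _ in range(n_units)
                  if rng.random() > forced_outage_rate)
        # Sample one hour of wind output from the given profile.
        cap += rng.choice(wind_profile)
        if cap < load_mw:
            shortfalls += 1
    return shortfalls / trials
```

The same sampling loop generalizes to chronological load curves and correlated wind states, which is where the computational cost that motivates intelligence-based methods comes from.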

Wang, Lingfeng

2008-12-01T23:59:59.000Z

294

Computer system design description for SY-101 hydrogen mitigation test project data acquisition and control system (DACS-1). Revision 1  

DOE Green Energy (OSTI)

This document provides descriptions of components and tasks that are involved in the computer system for the data acquisition and control of the mitigation tests conducted on waste tank SY-101 at the Hanford Nuclear Reservation. The system was designed and implemented by Los alamos National Laboratory and supplied to Westinghouse Hanford Company. The computers (both personal computers and specialized data-taking computers) and the software programs of the system will hereafter collectively be referred to as the DACS (Data Acquisition and Control System).

Truitt, R.W. [Westinghouse Hanford Co., Richland, WA (United States)

1994-08-24T23:59:59.000Z

295

A new hybrid computational intelligence algorithm for optimized vehicle routing applications in geographic information systems.  

E-Print Network (OSTI)

This project explores the application of two developing algorithmic paradigms from the field of computational intelligence towards optimized vehicle routing applications within geographic information systems ...

Rice, Michael Norris

2005-01-01T23:59:59.000Z

296

How Quantum Computers Fail: Quantum Codes, Correlations in Physical Systems, and Noise Accumulation  

E-Print Network (OSTI)

How Quantum Computers Fail: Quantum Codes, Correlations in Physical Systems, and Noise Accumulation. Dedicated to the memory of Itamar Pitowsky. Abstract: The feasibility of computationally superior quantum computers is one of the most exciting and clear-cut scientific questions of our time. The question touches ...

Kalai, Gil

297

Geometry, analysis, and computation in mathematics and applied sciences. Final report  

SciTech Connect

Since 1993, the GANG laboratory has been co-directed by David Hoffman, Rob Kusner and Peter Norman. A great deal of mathematical research has been carried out here by them and by GANG faculty members Franz Pedit and Nate Whitaker. Also new communication tools, such as the GANG Webserver have been developed. GANG has trained and supported nearly a dozen graduate students, and at least half as many undergrads in REU projects.The GANG Seminar continues to thrive, making Amherst a site for short and long term visitors to come to work with the GANG. Some of the highlights of recent or ongoing research at GANG include: CMC surfaces, minimal surfaces, fluid dynamics, harmonic maps, isometric immersions, knot energies, foam structures, high dimensional soap film singularities, elastic curves and surfaces, self-similar curvature evolution, integrable systems and theta functions, fully nonlinear geometric PDE, geometric chemistry and biology. This report is divided into the following sections: (1) geometric variational problems; (2) soliton geometry; (3) embedded minimal surfaces; (4) numerical fluid dynamics and mathematical modeling; (5) GANG graphics and mathematical software; (6) description of the computational and visual analysis facility; and (7) research by undergraduates and GANG graduate seminar.

Kusner, R.B.; Hoffman, D.A.; Norman, P.; Pedit, F.; Whitaker, N.; Oliver, D.

1995-12-31T23:59:59.000Z

298

Large hydrofoil transmission system study. Final report  

SciTech Connect

This report presents the results of a study to determine the performance and physical characteristics of an ac electrical system intended for use as the propulsion system of large hydrofoils. Section I provides a technical description of the system in terms of performance, weight and component characteristics. Section II describes anticipated problems associated with development of the system and its components, and includes activities through design and fabrication, and qualification of an integrated system. Two system configurations are considereed in this study. Configuration I which is the preferred one, consists of two direct-turbine-driven ac generators supplying electrical power to either foilborne or hullborne propeller induction drive motors. Conventional switchgear is used for connecting the generator to either the foil or hull drive motors. Each induction motor supplies power to a fixed pitch propeller via a planetary type gearbox. Speed and power output of each induction motor is controlled by fuel to the turbine. Configuration II consists of independent foilborne and hullborne propulsion systems. For takeoff and foilborne operation, two direct LM2500 turbine-drive ac generators supply electrical power to the foil propeller induction motors. Operation in this mode is identical to Configuration I. Configuration II provides a trade off between greater system weight, against a lower hullborne fuel consumption rate obtained by operating smaller turbines in hullborne mode. This provides, potentially, a greater hullborne range.

1977-10-11T23:59:59.000Z

299

Tehachapi solar thermal system first annual report  

DOE Green Energy (OSTI)

The staff of the Southwest Technology Development Institute (SWTDI), in conjunction with the staff of Industrial Solar Technology (IST), have analyzed the performance, operation, and maintenance of a large solar process heat system in use at the 5,000 inmate California Correctional Institution (CCI) in Tehachapi, CA. This report summarizes the key design features of the solar plant, its construction and maintenance histories through the end of 1991, and the performance data collected at the plant by a dedicated on-site data acquisition system (DAS).

Rosenthal, A. [Southwest Technology Development Inst., Las Cruces, NM (US)

1993-05-01T23:59:59.000Z

300

Noncompliance Tracking System Registration and Reporting | Department of  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Noncompliance Tracking System Registration and Reporting. Registration information: once NTS account access has been granted, registrants for the Noncompliance Tracking System (NTS) with an active account for the HSS Reporting Systems (Occurrence Reports & Processing System (ORPS), Computerized Accident/Incident Reporting System (CAIRS), Suspect Counterfeit Items (SCI), or the Daily Occurrence (DO) reports) can use the same credentials to access NTS; please access NTS Reporting. Registrants who do not have an HSS Reporting Systems account, or who have not accessed their account within the past six months, must register for an NTS account at HSS Reporting Systems Registration.

Note: This page contains sample records for the topic "reporting computer system" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


301

Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report - Phase I

SciTech Connect

Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation {und G} for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G {yields} {und G}, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. 
Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
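The Phase I decomposition into low- and high-cost kernel regions, and the resulting speedup, can be sketched with a cost-threshold partition plus Amdahl's law. The kernel names and numbers below are hypothetical, and Amdahl's law is a standard bound, not the report's own analysis.

```python
def partition_kernels(kernels, threshold):
    """Split profiled kernels into mainly-sequential (low-cost) and
    mainly-parallel (high-cost) regions by their fraction of run time.
    `kernels` is a list of (name, cost_fraction) pairs."""
    low = [name for name, cost in kernels if cost < threshold]
    high = [name for name, cost in kernels if cost >= threshold]
    return low, high

def amdahl_speedup(parallel_fraction, kernel_speedup):
    """Overall speedup when only the parallel fraction is accelerated."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / kernel_speedup)
```

For example, if 95% of run time sits in kernels accelerated 30X, the overall speedup is only about 12X, which illustrates why measured GPU speedups of 20-32X imply that the parallelized kernels dominate run time almost completely.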

Mark S. Schmalz

2011-07-24T23:59:59.000Z

302

Risks to the public in computers and related systems  

Science Conference Proceedings (OSTI)

Edited by Peter G. Neumann (Risks Forum Moderator and Chairman of the ACM Committee on Computers and Public Policy), plus personal contributions by others, as indicated. Opinions expressed are individual rather than organizational, and all of the usual ...

Peter G. Neumann

2004-09-01T23:59:59.000Z

303

CHI '10 Extended Abstracts on Human Factors in Computing Systems  

Science Conference Proceedings (OSTI)

Welcome to CHI 2010! CHI is the leading international conference in Human Computer Interaction and it is the archival material -- especially the papers and notes -- that establishes its academic credentials. We believe the paper and notes published here ...

Elizabeth Mynatt; Don Schoner; Geraldine Fitzpatrick; Scott Hudson; Keith Edwards; Tom Rodden

2010-04-01T23:59:59.000Z

304

HIGH PERFORMANCE INTEGRATION OF DATA PARALLEL FILE SYSTEMS AND COMPUTING  

E-Print Network (OSTI)

[Snippet: front matter (table of contents) from the ORNL Leadership Computing Facility (OLCF) Application Requirements and Strategy report.]

305

Dynamic Mapping in Energy Constrained Heterogeneous Computing Systems  

Science Conference Proceedings (OSTI)

An ad hoc grid is a wireless heterogeneous computing environment without a fixed infrastructure. The wireless devices have different capabilities, have limited battery capacity, support dynamic voltage scaling, and are expected to be used for eight hours ...
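A simple way to picture energy-constrained mapping in such an ad hoc grid is a greedy heuristic: assign each task to the device that can finish it with the least energy while respecting remaining battery. This is an illustrative sketch under assumed field names, not the paper's mapping heuristic.

```python
def map_tasks(tasks, devices):
    """Greedily map (task_name, cycles) pairs onto heterogeneous devices,
    each a dict with 'name', 'joules_per_cycle', and 'battery_j' keys
    (hypothetical model). Battery is drawn down as tasks are assigned."""
    assignment = {}
    for task_name, cycles in tasks:
        best = None
        for dev in devices:
            energy = cycles * dev["joules_per_cycle"]
            # Feasible only if the device has enough battery left.
            if energy <= dev["battery_j"] and (best is None or energy < best[1]):
                best = (dev, energy)
        if best is None:
            raise RuntimeError(f"no device can run {task_name}")
        dev, energy = best
        dev["battery_j"] -= energy
        assignment[task_name] = dev["name"]
    return assignment
```

A dynamic mapper would additionally react to arrivals and voltage-scaling decisions at run time; the feasibility check above is the part the battery constraint adds.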

Jong-Kook Kim; H. J. Siegel; Anthony A. Maciejewski; Rudolf Eigenmann

2005-04-01T23:59:59.000Z

306

Risks to the public in computers and related systems  

Science Conference Proceedings (OSTI)

Edited by Peter G. Neumann (Risks Forum Moderator and Chairman of the ACM Committee on Computers and Public Policy), plus personal contributions by others, as indicated. Opinions expressed are individual rather than organizational, and all of the usual ...

Peter G. Neumann

2005-01-01T23:59:59.000Z

307

Java Performance for Scientific Applications on LLNL Computer Systems  

Science Conference Proceedings (OSTI)

Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

Kapfer, C; Wissink, A

2002-05-10T23:59:59.000Z

308

Physics, computer science and mathematics division. Annual report, 1 January - 31 December 1982  

SciTech Connect

Experimental physics research activities are described under the following headings: research on e/sup +/e/sup -/ annihilation; research at Fermilab; search for effects of a right-handed gauge boson; the particle data center; high energy astrophysics and interdisciplinary experiments; detector and other research and development; publications and reports of other research; computation and communication; and engineering, evaluation, and support operations. Theoretical particle physics research and heavy ion fusion research are described. Also, activities of the Computer Science and Mathematics Department are summarized. Publications are listed. (WHK)

Jackson, J.D.

1983-08-01T23:59:59.000Z

309

Hydrogen energy systems studies. Final technical report  

SciTech Connect

The results of previous studies suggest that the use of hydrogen from natural gas might be an important first step toward a hydrogen economy based on renewables. Because of infrastructure considerations (the difficulty and cost of storing, transmitting and distributing hydrogen), hydrogen produced from natural gas at the end-user's site could be a key feature in the early development of hydrogen energy systems. In the first chapter of this report, the authors assess the technical and economic prospects for small scale technologies for producing hydrogen from natural gas (steam reformers, autothermal reformers and partial oxidation systems), addressing the following questions: (1) What are the performance, cost and emissions of small scale steam reformer technology now on the market? How does this compare to partial oxidation and autothermal systems? (2) How do the performance and cost of reformer technologies depend on scale? What critical technologies limit cost and performance of small scale hydrogen production systems? What are the prospects for potential cost reductions and performance improvements as these technologies advance? (3) How would reductions in the reformer capital cost impact the delivered cost of hydrogen transportation fuel? In the second chapter of this report the authors estimate the potential demand for hydrogen transportation fuel in Southern California.
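Question (3) above, how reformer capital cost flows into delivered hydrogen cost, can be sketched as a simple levelized-cost calculation. All parameter names, the cost structure, and the numbers in the test are illustrative assumptions, not figures from the report.

```python
def delivered_h2_cost(capital_cost, crf, annual_h2_kg,
                      ng_price_per_gj, gj_per_kg_h2, o_and_m_frac=0.05):
    """Toy levelized cost of hydrogen ($/kg) from an onsite reformer:
    annualized capital (via a capital recovery factor, crf) plus O&M,
    spread over annual output, plus natural-gas feedstock cost."""
    annual_capital = capital_cost * (crf + o_and_m_frac)
    fuel = ng_price_per_gj * gj_per_kg_h2
    return annual_capital / annual_h2_kg + fuel
```

Because the capital term is divided by annual throughput, halving reformer capital cost cuts only the first term, which is why small-scale (low-throughput) units are so sensitive to capital cost reductions.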

Ogden, J.M.; Kreutz, T.; Kartha, S.; Iwan, L.

1996-08-13T23:59:59.000Z

310

High Performance Computing Update, June 2009 1. A meeting was held with users and potential users of high performance computing systems in April and this  

E-Print Network (OSTI)

High Performance Computing Update, June 2009. 1. A meeting was held with users and potential users of high performance computing systems in April and this considered a proposal from the Director ... application "advice" and a core system to host and manage high performance computing nodes (or clusters) ...

Sussex, University of

311

Computer-aided engineering system for design of sequence arrays and lithographic masks  

DOE Patents (OSTI)

An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).

Hubbell, Earl A. (Mt. View, CA); Morris, MacDonald S. (San Jose, CA); Winkler, James L. (Palo Alto, CA)

1996-01-01T23:59:59.000Z

312

Computer-aided engineering system for design of sequence arrays and lithographic masks  

DOE Patents (OSTI)

An improved set of computer tools for forming arrays. According to one aspect of the invention, a computer system (100) is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files (104) to design and/or generate lithographic masks (110).

Hubbell, Earl A. (Mt. View, CA); Morris, MacDonald S. (San Jose, CA); Winkler, James L. (Palo Alto, CA)

1999-01-05T23:59:59.000Z

313

Computer-aided engineering system for design of sequence arrays and lithographic masks  

DOE Patents (OSTI)

An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.

Hubbell, E.A.; Morris, M.S.; Winkler, J.L.

1999-01-05T23:59:59.000Z

314

Computer-aided engineering system for design of sequence arrays and lithographic masks  

DOE Patents (OSTI)

An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.

Hubbell, E.A.; Lipshutz, R.J.; Morris, M.S.; Winkler, J.L.

1997-01-14T23:59:59.000Z

315

Computer-aided engineering system for design of sequence arrays and lithographic masks  

DOE Patents (OSTI)

An improved set of computer tools for forming arrays is disclosed. According to one aspect of the invention, a computer system is used to select probes and design the layout of an array of DNA or other polymers with certain beneficial characteristics. According to another aspect of the invention, a computer system uses chip design files to design and/or generate lithographic masks. 14 figs.

Hubbell, E.A.; Morris, M.S.; Winkler, J.L.

1996-11-05T23:59:59.000Z

316

Mathematical Foundations of Quantum Information and Computation and Its Applications to Nano- and Bio-systems  

Science Conference Proceedings (OSTI)

This monograph provides a mathematical foundation to the theory of quantum information and computation, with applications to various open systems including nano and bio systems. It includes introductory material on algorithm, functional analysis, probability ...

Masanori Ohya, I. Volovich

2013-02-01T23:59:59.000Z

317

User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes  

SciTech Connect

This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite of computer codes for Rev. 1 of Systems Assessment Capability performs many functions.

Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.

2004-09-14T23:59:59.000Z

318

SAM: a computer-aided design tool for specifying and analyzing modular, hierarchical systems

Science Conference Proceedings (OSTI)

This paper presents SAM, a computer aided design tool for specifying and analyzing modular, hierarchical systems. SAM is based on Discrete Event System Specification (DEVS) and it uses generic components for specifying coupling relationships among components. ...
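The Discrete Event System Specification (DEVS) style that SAM builds on can be reduced to a next-event simulation loop: pop the earliest event from a future event list, invoke its handler, and let the handler schedule new events. This is a minimal sketch of that loop, not the SAM tool; the event kinds in the test are hypothetical.

```python
import heapq

def simulate(events, handlers, t_end):
    """Minimal discrete-event loop. `events` is a list of (time, kind)
    pairs; `handlers` maps an event kind to a function of the current
    time returning a list of (delay, new_kind) pairs to schedule."""
    fel = list(events)            # future event list
    heapq.heapify(fel)
    log = []
    while fel and fel[0][0] <= t_end:
        t, kind = heapq.heappop(fel)
        log.append((t, kind))
        for dt, new_kind in handlers.get(kind, lambda t: [])(t):
            heapq.heappush(fel, (t + dt, new_kind))
    return log
```

DEVS proper adds formal state, coupling, and time-advance functions on top of this loop; the generic-component coupling the abstract mentions corresponds to routing one component's output events into another's input handlers.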

Arturo I. Concepcion; Stephen J. Schon

1986-12-01T23:59:59.000Z

319

Use of a Real-Time Computer Graphics System in Analysis and Forecasting  

Science Conference Proceedings (OSTI)

Real-time computer graphics systems are being introduced into weather stations throughout the United States. A sample of student forecasters used such a system to solve specific specialized forecasting problems. Results suggest that for some ...

John J. Cahir; John M. Norman; Dale A. Lowry

1981-03-01T23:59:59.000Z

320

Functions and Requirements and Specifications for Replacement of the Computer Automated Surveillance System (CASS)  

SciTech Connect

This functions, requirements and specifications document defines the baseline requirements and criteria for the design, purchase, fabrication, construction, installation, and operation of the system to replace the Computer Automated Surveillance System (CASS) alarm monitoring.

SCAIEF, C.C.

1999-12-16T23:59:59.000Z



321

An object-oriented framework for distributed computational simulation of aerospace propulsion systems  

Science Conference Proceedings (OSTI)

Designing and developing new aerospace propulsion systems is time-consuming and expensive. Computational simulation is a promising means for alleviating this cost, but requires a flexible software simulation system capable of integrating advanced multidisciplinary ...

John A. Reed; Abdollah A. Afjeh

1998-04-01T23:59:59.000Z

322

Modeling water resource systems using a service-oriented computing paradigm  

Science Conference Proceedings (OSTI)

Service-oriented computing is a software engineering paradigm that views complex software systems as an interconnected collection of distributed computational components. Each component has a defined web service interface that allows it to be loosely-coupled ... Keywords: Integrated modeling, Systems analysis, Water management, Web services
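The loose coupling described above, where each model component sits behind a defined service interface, can be sketched as components that exchange dictionaries of named variables. The component names, variables, and coefficients below are illustrative assumptions, not part of the paper.

```python
class Component:
    """A model component behind a minimal service-style interface:
    it consumes a dict of named inputs and returns a dict of named
    outputs, so components can be chained without knowing each
    other's internals."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def invoke(self, inputs):
        return self.fn(inputs)

# Two hypothetical water-resource components, loosely coupled by dicts.
rainfall_runoff = Component("runoff", lambda x: {"flow_m3s": 0.6 * x["rain_mm"]})
reservoir = Component("reservoir", lambda x: {"storage_m3": x["flow_m3s"] * 3600})

# Coupling: the output dict of one service feeds the next.
out = reservoir.invoke(rainfall_runoff.invoke({"rain_mm": 10.0}))
```

In a real service-oriented deployment each `invoke` would be a web service call (e.g. SOAP or REST) across the network; the interface contract, named inputs in and named outputs out, is what stays the same.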

Jonathan L. Goodall; Bella F. Robinson; Anthony M. Castronova

2011-05-01T23:59:59.000Z

323

Preprint of the paper "A BEM Formulation for Computational Design of Grounding Systems in  

E-Print Network (OSTI)

Substation grounding design involves computing the equivalent resistance of the earthing system ... A BE approach for the numerical analysis of substation grounding systems in nonuniform soils is presented ... methods for substation grounding design and computation have been proposed. Most of them are founded on practice

Colominas, Ignasi

324

2010 Smart Grid System Report (February 2012) | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

2010 Smart Grid System Report (February 2012). Section 1302 of Title XIII of the Energy Independence and Security Act of 2007 directs the Secretary of Energy to "...report to Congress concerning the status of smart grid deployments nationwide and any regulatory or government barriers to continued deployment." This document satisfies this directive and represents the second installment of this report to Congress, which is to be updated biennially. Related documents: 2010 Smart Grid System Report (Appendices A and B); 2009 Smart Grid System Report (July 2009); Statement of Patricia Hoffman, Acting Assistant Secretary for Electricity.

325

Modeling and construction of a computer controlled air conditioning system.  

E-Print Network (OSTI)

??As energy efficient devices become more necessary, it is desired to increase the efficiency of air conditioning systems. Current systems use on/off control, where the (more)

Frink, Brandon S.

2007-01-01T23:59:59.000Z

326

National Geoscience Data Repository System. Final report  

SciTech Connect

The American Geological Institute (AGI) has completed the first phase of a study to assess the feasibility of establishing a National Geoscience Data Repository System to capture and preserve valuable geoscientific data. The study was initiated in response to the fact that billions of dollars worth of domestic geological and geophysical data are in jeopardy of being irrevocably lost or destroyed as a consequence of the ongoing downsizing of the US energy and minerals industry. This report focuses on two major issues. First, it documents the types and quantity of data available for contribution to a National Geoscience Data Repository System. Second, it documents the data needs and priorities of potential users of the system. A National Geoscience Data Repository System would serve as an important and valuable source of information for the entire geoscience community for a variety of applications, including environmental protection, water resource management, global change studies, and basic and applied research. The repository system would also contain critical data that would enable domestic energy and minerals companies to expand their exploration and production programs in the United States for improved recovery of domestic oil, gas, and mineral resources.

Schiffries, C.M.; Milling, M.E.

1994-03-01T23:59:59.000Z

327

CHI '07 Extended Abstracts on Human Factors in Computing Systems  

Science Conference Proceedings (OSTI)

Welcome to the CHI 2007 proceedings. We believe the technical papers and notes herein present some of the best current work in the diverse and dynamic field of human-computer interaction (HCI). CHI is the leading HCI conference. Creating the technical ...

Mary Beth Rosson; David Gilmore

2007-04-01T23:59:59.000Z

328

CHI '06 Extended Abstracts on Human Factors in Computing Systems  

Science Conference Proceedings (OSTI)

Welcome to the CHI 2006 Extended Abstracts. We hope that you will enjoy this year's extended abstracts and the changes that we have made. For the first time this year, we encouraged submissions from six different sub-communities of the human-computer ...

Gary Olson; Robin Jeffries

2006-04-01T23:59:59.000Z

329

GRworkbench: A Computational System Based on Differential Geometry  

E-Print Network (OSTI)

We have developed a new tool for numerical work in General Relativity: GRworkbench. While past tools have been ad hoc, GRworkbench closely follows the framework of Differential Geometry to provide a robust and general way of computing on analytically defined space-times. We discuss the relationship between Differential Geometry and C++ classes in GRworkbench, and demonstrate their utility.

Susan M Scott; Benjamin J K Evans; Antony C Searle

2001-11-19T23:59:59.000Z

330

CHI '13 Extended Abstracts on Human Factors in Computing Systems  

Science Conference Proceedings (OSTI)

The CHI Papers and Notes program is continuing to grow along with many of our sister conferences. We are pleased that CHI is still the leading venue for research in human-computer interaction. CHI 2013 continued the use of subcommittees to manage the ...

Wendy E. Mackay; Stephen Brewster; Susanne Bødker

2013-04-01T23:59:59.000Z

331

Toward Codesign in High Performance Computing Systems - 06386705...  

NLE Websites -- All DOE Office Websites (Extended Search)

s for this work. 7. REFERENCES [1] J. Ang et al. High Performance Computing: From Grids and Clouds to Exascale, chapter Exascale Compu...

332

Computational fluid dynamics applications to improve crop production systems  

Science Conference Proceedings (OSTI)

Computational fluid dynamics (CFD), numerical analysis and simulation tools of fluid flow processes have emerged from the development stage and become nowadays a robust design tool. It is widely used to study various transport phenomena which involve ... Keywords: Decision support tools, Greenhouse, Harvesting machines, Sprayers, Tillage

T. Bartzanas; M. Kacira; H. Zhu; S. Karmakar; E. Tamimi; N. Katsoulas; In Bok Lee; C. Kittas

2013-04-01T23:59:59.000Z

333

DisTec: Towards a Distributed System for Telecom Computing  

Science Conference Proceedings (OSTI)

The continued exponential growth in both the volume and the complexity of information, compared with the computing capacity of the silicon-based devices restricted by Moore's Law, is giving birth to a new challenge to the specific requirements of analysts, ...

Shengqi Yang; Bai Wang; Haizhou Zhao; Yuan Gao; Bin Wu

2009-11-01T23:59:59.000Z

334

CEL-1 lighting computer program - programmer's guide. Final report Jan 80-Sep 82  

SciTech Connect

The basic algorithms and program file structure of the CEL-1 (Conversion of Electric Lighting, Version 1.0) lighting computer program are documented. The CEL-1 computer program aids the illumination engineer in designing energy-efficient interior lighting systems. Lighting metrics which may be calculated include illuminance, equivalent sphere illumination, and visual comfort probability. Energy profiles resulting from lighting controls which respond to daylight can be evaluated using CEL-1. This programmer's guide is divided into seven sections: (1) Program Structures; (2) Basic Techniques; (3) Main Program Descriptions; (4) Subprogram Descriptions; (5) Logical Unit Assignments; (6) Compiling the Programs; and (7) Source and Auxiliary Files.

Brackett, W.E.

1983-01-01T23:59:59.000Z
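The point-by-point illuminance a program like CEL-1 tabulates follows the inverse-square cosine law, E = I·cos θ / d². A minimal sketch of that calculation; the luminaire intensity and geometry below are made-up example values, not CEL-1 data:

```python
import math

def illuminance(intensity_cd, source, point):
    """Inverse-square cosine law for a point source over a horizontal
    workplane: E = I * cos(theta) / d**2 (lux, with metres and candela)."""
    dx = [p - s for p, s in zip(point, source)]
    d2 = sum(c * c for c in dx)
    cos_theta = -dx[2] / math.sqrt(d2)   # angle from the downward normal
    return intensity_cd * cos_theta / d2

# hypothetical luminaire 2.5 m above the desk, offset 1 m horizontally
E = illuminance(1200.0, source=(0.0, 0.0, 2.5), point=(1.0, 0.0, 0.0))
print(round(E, 1))  # ~154 lux
```

Real lighting programs sum this contribution over every luminaire and add interreflected components; the single-source term above is the building block.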

335

ANL/MCS-TM-331 Mathematics and Computer Science Division Availability of This Report  

NLE Websites -- All DOE Office Websites (Extended Search)

Performance Analysis of Darshan 2.2.3 on the Cray XE6 Platform. ANL/MCS-TM-331, Mathematics and Computer Science Division. Availability of This Report: This report is available, at no cost, at http://www.osti.gov/bridge. It is also available on paper to the U.S. Department of Energy and its contractors, for a processing fee, from: U.S. Department of Energy, Office of Scientific and Technical Information, P.O. Box 62, Oak Ridge, TN 37831-0062; phone (865) 576-8401; fax (865) 576-5728; reports@adonis.osti.gov. Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor UChicago Argonne, LLC, nor any of their employees or officers, makes any warranty, express

336

Hydrogen, CNG, and HCNG Dispenser System - Prototype Report

NLE Websites -- All DOE Office Websites (Extended Search)

FreedomCAR & Vehicle Technologies Program Advanced Vehicle Testing Activity Hydrogen, CNG, and HCNG Dispenser System - Prototype Report TECHNICAL REPORT Don Karner Scott...

337

EIA - The National Energy Modeling System: An Overview 2003-Report...  

Annual Energy Outlook 2012 (EIA)

The National Energy Modeling System: An Overview 2003. Report chapters (PDF): Preface; Introduction; Overview of NEMS; Carbon Dioxide...

338

Cyber-Physical Systems -Are Computing Foundations Adequate?  

E-Print Network (OSTI)

substantially more effective disaster recovery techniques. Networked building control systems (such as HVAC), high-confidence medical devices and systems, traffic control and safety, advanced automotive systems, process control (electric power, water resources, and communications systems, for example), distributed

California at Berkeley, University of

339

Building integrated photovoltaic systems analysis: Preliminary report  

SciTech Connect

The National Renewable Energy Laboratory (NREL) has estimated that the deployment of photovoltaics (PV) in the commercial buildings sector has the potential to contribute as much as 40 gigawatts peak electrical generation capacity and displace up to 1.1 quads of primary fuel use. A significant portion of this potential exists for smaller buildings under 25,000 square feet (2,300 square meters) in size or two stories or less, providing a strong cross over potential for residential applications as well. To begin to achieve this potential, research is needed to define the appropriate match of PV systems to energy end-uses in the commercial building sector. This report presents preliminary findings for a technical assessment of several alternative paths to integrate PV with building energy systems.

Not Available

1993-08-01T23:59:59.000Z

340

DCE Bio Detection System Final Report  

SciTech Connect

The DCE (DNA Capture Element) Bio-Detection System (Biohound) was conceived, designed, built and tested by PNNL under a MIPR for the US Air Force under the technical direction of Dr. Johnathan Kiel and his team at Brooks City Base in San Antonio Texas. The project was directed toward building a measurement device to take advantage of a unique aptamer based assay developed by the Air Force for detecting biological agents. The assay uses narrow band quantum dots fluorophores, high efficiency fluorescence quenchers, magnetic micro-beads beads and selected aptamers to perform high specificity, high sensitivity detection of targeted biological materials in minutes. This final report summarizes and documents the final configuration of the system delivered to the Air Force in December 2008

Lind, Michael A.; Batishko, Charles R.; Morgen, Gerald P.; Owsley, Stanley L.; Dunham, Glen C.; Warner, Marvin G.; Willett, Jesse A.

2007-12-01T23:59:59.000Z

341

Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation  

SciTech Connect

Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.

Engelmann, Christian [ORNL

2013-01-01T23:59:59.000Z
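The random-noise case described above can be caricatured with a toy bulk-synchronous model: every process computes for a fixed time, is interrupted once per noise period at an independent phase, and a barrier completes when the slowest process finishes. This is an illustrative simplification, not xSim's actual implementation; the function and constants below are invented for the sketch:

```python
import random

def step_time(nprocs, work, period, duration, rng):
    """One bulk-synchronous step: every process computes for `work`
    seconds, then all meet at a barrier. Each process suffers one noise
    interruption of `duration` seconds per `period`, at an independent
    random phase; the barrier completes when the slowest process does.
    (Interruptions simply add to the finish time -- a deliberate
    simplification of the real simulator.)"""
    worst = 0.0
    for _ in range(nprocs):
        phase = rng.uniform(0.0, period)  # first noise event on this node
        hits = 0 if phase >= work else int((work - phase) // period) + 1
        worst = max(worst, work + hits * duration)
    return worst

rng = random.Random(42)
for n in (16, 1024, 65536):
    # with more nodes, the slowest process is ever more likely to be hit
    print(n, step_time(n, work=1.0, period=4.0, duration=0.05, rng=rng))
```

The qualitative effect matches the co-design concern: as the node count grows, the probability that *some* process absorbs noise inside the synchronized window approaches one, so collectives pay the noise penalty almost every step.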

342

Quantum Computing for Computer Scientists

E-Print Network (OSTI)

Quantum Computing for Computer Scientists. Noson S. Yanofsky and Mirco A. Mannucci, © May 2007. Contents include: ... of Vector Spaces; 3 The Leap From Classical to Quantum; 3.1 Classical Deterministic Systems; 3.2 Classical

Yanofsky, Noson S.

343

Fast computation of the performance evaluation of biometric systems: application to multibiometric  

E-Print Network (OSTI)

The performance evaluation of biometric systems is a crucial step when designing and evaluating such systems. The evaluation process uses the Equal Error Rate (EER) metric proposed by the International Organization for Standardization (ISO/IEC). The EER metric is a powerful metric which allows easy comparison and evaluation of biometric systems. However, the computation time of the EER is, most of the time, very intensive. In this paper, we propose a fast method which computes an approximated value of the EER. We illustrate the benefit of the proposed method on two applications: the computation of non-parametric confidence intervals and the use of genetic algorithms to compute the parameters of fusion functions. Experimental results show the superiority of the proposed EER approximation method in terms of computing time, and the interest of its use to reduce the learning of parameters with genetic algorithms. The proposed method opens new perspectives for the development of secure multibiometric systems by speedi...

Giot, Romain; Rosenberger, Christophe

2012-01-01T23:59:59.000Z
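The EER the paper accelerates is conventionally found by sweeping a decision threshold until the false-accept rate (FAR) and false-reject rate (FRR) cross. A brute-force baseline sketch with synthetic score distributions; the paper's own approximation method is not reproduced here:

```python
import numpy as np

def eer(genuine, impostor, n_thresholds=1000):
    """Brute-force Equal Error Rate: sweep a decision threshold and
    return the error rate where FAR and FRR cross."""
    ts = np.linspace(min(genuine.min(), impostor.min()),
                     max(genuine.max(), impostor.max()), n_thresholds)
    far = np.array([(impostor >= t).mean() for t in ts])  # false accepts
    frr = np.array([(genuine < t).mean() for t in ts])    # false rejects
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 5000)   # synthetic matching-pair scores
impostor = rng.normal(0.0, 1.0, 5000)  # synthetic non-matching scores
print(round(eer(genuine, impostor), 3))  # ~0.16 for these distributions
```

The cost the paper attacks is visible here: every threshold touches every score, which is exactly what makes repeated EER evaluation (e.g., inside a genetic-algorithm fitness function) expensive.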

344

Environmental Systems Research Candidates Program--FY2000 Annual report  

SciTech Connect

The Environmental Systems Research Candidates (ESRC) Program, which is scheduled to end September 2001, was established in April 2000 as part of the Environmental Systems Research and Analysis Program at the Idaho National Engineering and Environmental Laboratory (INEEL) to provide key science and technology to meet the clean-up mission of the U.S. Department of Energy Office of Environmental Management, and to perform research and development that will help solve current legacy problems and enhance the INEEL's scientific and technical capability for solving longer-term challenges. This report documents the progress and accomplishments of the ESRC Program from April through September 2000. The ESRC Program consists of 24 tasks subdivided within four research areas: A. Environmental Characterization Science and Technology. This research explores new data acquisition, processing, and interpretation methods that support cleanup and long-term stewardship decisions. B. Subsurface Understanding. This research expands understanding of the biology, chemistry, physics, hydrology, and geology needed to improve models of contamination problems in the earth's subsurface. C. Environmental Computational Modeling. This research develops INEEL computing capability for modeling subsurface contaminants and contaminated facilities. D. Environmental Systems Science and Technology. This research explores novel processes to treat waste and decontaminate facilities. Our accomplishments during FY 2000 include the following: We determined, through analysis of samples taken in and around the INEEL site, that mercury emissions from the INEEL calciner have not raised regional off-INEEL mercury contamination levels above normal background. We have initially demonstrated the use of x-ray fluorescence to image uranium and heavy metal concentrations in soil samples.
We increased our understanding of the subsurface environment, applying mathematical complexity theory to the problem of transport of subsurface contaminants. We upgraded the INEEL's high-speed computer link to offsite supercomputers from T1 (1.5 Mb/s) to DS3 (45 Mb/s). Procurements have initiated a further upgrade to OC3 (155 Mb/s) with additional onsite computational power that should put the INEEL on the Top 500 Supercomputing Sites list. We developed advanced decontamination, decommissioning, and dismantlement techniques, including the Decontamination, Decommissioning, and Remediation Optimal Planning System.

Piet, Steven James

2001-01-01T23:59:59.000Z

345

ON THE DESIGN OF A VERY HIGH-SPEED COMPUTER. Report No. 80  

SciTech Connect

The feasibility of constructing a digital computer about one hundred times faster than present computers, such as ILLIAC, using transistorized circuits and other presently available components and techniques is reported. The results of two design studies are discussed. One involves a minimum of buffer storage in the form of transistor registers, and the other involves a moderate amount of buffer storage in the form of a small-capacity, high-speed, random-access buffer memory. The former design is emphasized because its equipment requirements can be presently met. Two controls are used, arithmetic control and advanced control, as well as buffer storage for instructions and operands, and by such means various units of the computer are kept in simultaneous operation. The relative speed of the proposed computer compared to that of existing machines depends upon the problem. For problems dominated by arithmetic operations, it is estimated that the proposed computer will be 100 to 200 times faster than computers such as ILLIAC. For problems dominated by logical and combinatorial operations, the gain in speed will be at least 50 times. The computer has a random-access word-arrangement memory of 8192 words of 52 bits each with an access time of 1.5 mu sec. The arithmetic unit is designed so that the digits of a multiplier are sensed and acted upon in such a way that the use of the adder is reduced. Also, ''carry registers'' are used in this unit, and carries are assimilated only when necessary. The computer will have an average multiplication time between 3.5 and 4 mu sec, addition times of 0.3 mu sec, and division times of 7 to 20 mu sec. The computer, aside from input-output facilities, will contain approximately 15,400 transistors, 34,000 diodes, and 12,000 resistors. The basic circuits built from these transistors have operation times of 5 to 40 x 10/sup -9/ sec, depending upon the circuit. (auth)

Gillies, D.B.; Meagher, R.E.; Muller, D.E.; McKay, R.W.; Nash, J.P.; Robertson, J.E.; Taub, A.H.

1957-10-01T23:59:59.000Z
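The "carry registers" scheme described above, in which carries are assimilated only when necessary, is essentially carry-save arithmetic: three addends reduce to a sum word and a carry word with no carry propagation, and one ordinary addition assimilates the carries at the end. A sketch of the idea (not the report's actual logic design):

```python
def carry_save_add(x, y, z):
    """Reduce three addends to a (sum, carry) pair without propagating
    carries: the sum word is the bitwise XOR, and the carry word collects
    the majority bits shifted left, held in a separate 'carry register'."""
    s = x ^ y ^ z
    c = ((x & y) | (x & z) | (y & z)) << 1
    return s, c

def multiply(a, b, width=32):
    """Shift-and-add multiplication keeping the running product in
    carry-save form; carries are assimilated only once, at the end,
    by a single conventional addition."""
    s = c = 0
    for i in range(width):
        if (b >> i) & 1:
            s, c = carry_save_add(s, c, a << i)
    return s + c  # final carry assimilation

print(multiply(1234, 5678))  # 7006652
```

Because each carry-save step is carry-propagation-free, its delay is independent of word length, which is what buys the fast multiplication times the report estimates.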

346

Peer-to-peer architectures for exascale computing : LDRD final report.  

Science Conference Proceedings (OSTI)

The goal of this research was to investigate the potential for employing dynamic, decentralized software architectures to achieve reliability in future high-performance computing platforms. These architectures, inspired by peer-to-peer networks such as botnets that already scale to millions of unreliable nodes, hold promise for enabling scientific applications to run usefully on next-generation exascale platforms (~10^18 operations per second). Traditional parallel programming techniques suffer rapid deterioration of performance scaling with growing platform size, as the work of coping with increasingly frequent failures dominates over useful computation. Our studies suggest that new architectures, in which failures are treated as ubiquitous and their effects are considered as simply another controllable source of error in a scientific computation, can remove such obstacles to exascale computing for certain applications. We have developed a simulation framework, as well as a preliminary implementation in a large-scale emulation environment, for exploration of these 'fault-oblivious computing' approaches. High-performance computing (HPC) faces a fundamental problem of increasing total component failure rates due to increasing system sizes, which threaten to degrade system reliability to an unusable level by the time the exascale range is reached (~10^18 operations per second, requiring of order millions of processors). As computer scientists seek a way to scale system software for next-generation exascale machines, it is worth considering peer-to-peer (P2P) architectures that are already capable of supporting 10^6-10^7 unreliable nodes. Exascale platforms will require a different way of looking at systems and software because the machine will likely not be available in its entirety for a meaningful execution time. Realistic estimates of failure rates range from a few times per day to more than once per hour for these platforms.
P2P architectures give us a starting point for crafting applications and system software for exascale. In the context of the Internet, P2P applications (e.g., file sharing, botnets) have already solved this problem for 10^6-10^7 nodes. Usually based on a fractal distributed hash table structure, these systems have proven robust in practice to constant and unpredictable outages, failures, and even subversion. For example, a recent estimate of botnet turnover (i.e., the number of machines leaving and joining) is about 11% per week. Nonetheless, P2P networks remain effective despite these failures: The Conficker botnet has grown to ~5 x 10^6 peers. Unlike today's system software and applications, those for next-generation exascale machines cannot assume a static structure and, to be scalable over millions of nodes, must be decentralized. P2P architectures achieve both, and provide a promising model for 'fault-oblivious computing'. This project aimed to study the dynamics of P2P networks in the context of a design for exascale systems and applications. Having no single point of failure, the most successful P2P architectures are adaptive and self-organizing. While there has been some previous work applying P2P to message passing, little attention has been previously paid to the tightly coupled exascale domain. Typically, the per-node footprint of P2P systems is small, making them ideal for HPC use. The implementation on each peer node cooperates en masse to 'heal' disruptions rather than relying on a controlling 'master' node. Understanding this cooperative behavior from a complex systems viewpoint is essential to predicting useful environments for the inextricably unreliable exascale platforms of the future.
We sought to obtain theoretical insight into the stability and large-scale behavior of candidate architectures, and to work toward leveraging Sandia's Emulytics platform to test promising candidates in a realistic (ultimately ≥10^7 nodes) setting. Our primary example applications are drawn from linear algebra: a Jacobi relaxation s

Vorobeychik, Yevgeniy; Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Donald W.

2010-09-01T23:59:59.000Z
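The distributed hash table structure mentioned above can be illustrated with a minimal consistent-hash ring: when a node fails, only the keys it owned migrate to a successor, which is what lets such networks absorb constant churn. A sketch under invented names and sizes (real DHTs add replication and routing tables on top):

```python
import hashlib
from bisect import bisect_right

def h64(s):
    """Stable 64-bit hash, so runs are reproducible across machines."""
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

class Ring:
    """Minimal consistent-hash ring: each key belongs to the first node
    clockwise from its hash. This is the core structure behind the DHTs
    that let P2P networks survive constant node churn."""
    def __init__(self, nodes):
        self.points = sorted((h64(n), n) for n in nodes)

    def lookup(self, key):
        hashes = [p for p, _ in self.points]
        i = bisect_right(hashes, h64(key)) % len(self.points)
        return self.points[i][1]

    def remove(self, node):  # a node fails or leaves
        self.points = [(p, n) for p, n in self.points if n != node]

ring = Ring([f"node{i}" for i in range(8)])
keys = [f"block{i}" for i in range(1000)]
before = {k: ring.lookup(k) for k in keys}
ring.remove("node3")
after = {k: ring.lookup(k) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
# only the failed node's keys migrate; everything else stays put
print(moved, sum(v == "node3" for v in before.values()))
```

That locality of disruption — a failure perturbs O(1/n) of the key space rather than forcing a global restart — is the property "fault-oblivious computing" seeks to borrow for tightly coupled HPC workloads.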

347

Performance FAQs on BG/Q Systems | Argonne Leadership Computing...  

NLE Websites -- All DOE Office Websites (Extended Search)

Blue Gene/Q Versus Blue Gene/P; Mira/Cetus/Vesta; Intrepid/Challenger/Surveyor; Decommissioning of BG/P Systems and Resources; Introducing Challenger; Quick Reference Guide; System...

348

BG/P File Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

Blue Gene/Q Versus Blue Gene/P; Mira/Cetus/Vesta; Intrepid/Challenger/Surveyor; Decommissioning of BG/P Systems and Resources; Introducing Challenger; Quick Reference Guide; System...

349

A system software architecture for high-end computing  

Science Conference Proceedings (OSTI)

Large MPP systems can neither solve grand-challenge scientific problems nor enable large scale industrial and governmental simulations if they rely on extensions to workstation system software. At Sandia National Laboratories we have developed, with ...

David S. Greenberg; Ron Brightwell; Lee Ann Fisk; Arthur Maccabe; Rolf Riesen

1997-11-01T23:59:59.000Z

350

Data Storage & File Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

File Systems: An overview of the BG/Q file systems available at ALCF. Disk Quota: Information on disk quotas for Mira and Vesta. Using HPSS: HPSS is a data...

351

Surveillance Analysis Computer System (SACS) software design document (SDD)  

SciTech Connect

This document contains the Software Design Description for Phase II of the SACS project, an Impact Level 3Q system.

Glasscock, J.A.

1995-09-01T23:59:59.000Z

352

Model discovery for energy-aware computing systems: An experimental evaluation  

Science Conference Proceedings (OSTI)

We present a model-discovery methodology for energy-aware computing systems that achieves high prediction accuracy. Model discovery, or system identification, is a critical first step in designing advanced controllers that can dynamically manage the ... Keywords: SISO model, energy aware computing system, model discovery methodology, energy performance trade off, multiple inputs multiple outputs model, single input single output model, representative server workload, MIMO model

Zhichao Li; R. Grosu; K. Muppalla; S. A. Smolka; S. D. Stoller; E. Zadok

2011-07-01T23:59:59.000Z
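Model discovery of a SISO system reduces, in its simplest form, to fitting a difference-equation model to measured input/output data by least squares. A sketch with a synthetic first-order plant; the coefficients and noise level are illustrative, not from the paper:

```python
import numpy as np

# Synthetic first-order SISO plant: y[k+1] = a*y[k] + b*u[k] + noise.
rng = np.random.default_rng(1)
a_true, b_true = 0.9, 0.5
u = rng.uniform(-1.0, 1.0, 500)          # excitation input
y = np.zeros(501)
for k in range(500):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# Model discovery by least squares: stack regressors [y[k], u[k]] and
# solve for the coefficients that best predict y[k+1].
X = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(theta.round(2))  # ~ [0.9, 0.5]: the true dynamics are recovered
```

MIMO identification of the kind the paper evaluates follows the same pattern with vector-valued regressors; the critical practical ingredient is a sufficiently exciting input, here supplied by the random excitation signal.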

353

Computer Systems Principles Copyright c2009-10 by Emery Berger and Mark Corner  

E-Print Network (OSTI)

Computer Systems Principles. Copyright © 2009-10 by Emery Berger and Mark Corner. All rights reserved. Contents include: Introduction to Operating Systems (A Brief History of Operating Systems) ... Networking and Distributed Systems (OS abstractions)

Berger, Emery

354

Potassium emission absorption system. Topical report 12  

DOE Green Energy (OSTI)

The Potassium Emission Absorption System is one of the advanced optical diagnostics developed at Mississippi State University to provide support for the demonstration of prototype-scale coal-fired combustion magnetohydrodynamic (MHD) electrical power generation. Intended for application in the upstream of an MHD flow, the system directly measures gas temperature and neutral potassium atom number density through spectroscopic emission absorption techniques. From these measurements the electron density can be inferred from a statistical equilibrium calculation and the electron conductivity in the MHD channel found by use of an electron mobility model. The instrument has been utilized for field test measurements on MHD facilities for almost a decade and has been proven to provide useful measurements as designed for MHD nozzle, channel, and diffuser test sections. The theory of the measurements, a system description, its capabilities, and field test measurement results are reported here. During the development and application of the instrument several technical issues arose which when addressed advanced the state of the art in emission absorption measurement. Studies of these issues are also reported here and include: two-wavelength measurements for particle-laden flows, potassium D-line far wing absorption coefficient, bias in emission absorption measurements arising from dirty windows and misalignments, non-coincident multiwavelength emission absorption sampling errors, and lineshape fitting for boundary layer flow profile information. Although developed for MHD application, the instrument could be applied to any high temperature flow with a resonance line in the 300 to 800 nm range, for instance other types of flames, rocket plumes or low temperature plasmas.

Bauman, L.E.

1995-04-01T23:59:59.000Z

355

Computational complexity

E-Print Network (OSTI)

Computational complexity. In 1965, the year Juris Hartmanis became Chair ... On the computational complexity of algorithms in the Transactions of the American Mathematical Society. The paper ... the best talent to the field. Theoretical computer science was immediately broadened from automata theory

Keinan, Alon

356

A computer system for visual recognition using active knowledge  

Science Conference Proceedings (OSTI)

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of ...

Eugene C. Freuder

1977-08-01T23:59:59.000Z

357

Washington Closure Hanford System Engineer Program FY2010 Annual Report  

SciTech Connect

This report is a summary of the assessments of the vital safety systems (VSS) that are administered under WCH's system engineer program.

J.N. Winters

2010-11-02T23:59:59.000Z

358

System for computer controlled shifting of an automatic transmission  

DOE Patents (OSTI)

In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters, established from a closed-loop control of the clutch hydraulics and the power source torque, on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

Patil, Prabhakar B. (Detroit, MI)

1989-01-01T23:59:59.000Z
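The closed-loop idea in the patent — commanded torque driven by feedback so a controlled variable tracks a target during the shift — can be caricatured with a first-order model and a PI law. Everything below (model constants, gains) is invented for illustration and is not the patent's control law:

```python
def simulate(target=200.0, steps=400, dt=0.01):
    """Toy shift control: a PI law commands power-source torque so that
    speed tracks `target` on a first-order inertia-plus-drag model.
    All constants are invented for illustration."""
    J, drag = 0.2, 0.5        # inertia, speed-proportional load torque
    kp, ki = 2.0, 8.0         # PI gains
    speed = integ = 0.0
    for _ in range(steps):
        err = target - speed
        integ += err * dt
        torque = kp * err + ki * integ   # commanded torque (control variable)
        speed += dt * (torque - drag * speed) / J
    return speed

print(round(simulate(), 1))  # settles at 200.0
```

The patent's actual design adds a second control variable (clutch pressure) and a powertrain model with kinematic constraints, but the feedback structure — measure, compare to target, command torque — is the same loop shown here.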

359

Novel Kinetic 3D MHD Algorithm for High Performance Parallel Computing Systems  

E-Print Network (OSTI)

The impressive progress of kinetic schemes in the solution of gas dynamics problems and the development of effective parallel algorithms for modern high performance parallel computing systems have led to the development of advanced methods for the solution of the magnetohydrodynamics problem in the important area of plasma physics. The novel feature of the method is the formulation of the complex Boltzmann-like distribution function of the kinetic method with the implementation of electromagnetic interaction terms. The numerical method is based on explicit schemes. Due to its logical simplicity and efficiency, the algorithm is easily adapted to modern high performance parallel computer systems, including hybrid computing systems with graphic processors.

B. Chetverushkin; N. D'Ascenzo; V. Saveliev

2013-05-03T23:59:59.000Z

360

Temperature-aware computer systems: opportunities and challenges  

E-Print Network (OSTI)

Published by the IEEE Computer Society In recent years, power density in microprocessors has doubled every three years, and experts expect this rate to increase within one to two generations as feature sizes and frequencies scale faster than operating voltages. Because a microprocessor consumes energy and converts it into heat, the corresponding exponential rise in heat density is creating significant difficulties in maintaining reliability and low manufacturing cost. Any design must remove heat from the surface of the microprocessor die, and for all but the lowest-power designs today, such cooling solutions have become very expensive.

Kevin Skadron; Mircea R. Stan; Wei Huang; Sivakumar Velusamy; David Tarjan

2003-01-01T23:59:59.000Z

Note: This page contains sample records for the topic "reporting computer system" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.


361

TASK XII ANALYTICAL REPORT--SM-1 TRANSIENT ANALYSIS BY ANALOG COMPUTER METHODS  

SciTech Connect

The voltage and frequency response of selected SM-1 plant system parameters to step load changes was analyzed using analog computer measurements. The analog model was that developed for analysis of the SM-2 design. The approach to the analysis, formulation of the model, and analog recordings are presented. The data will be used to prove reliability of the analog model by comparing analog data with test data to be taken at SM-1. (auth)

Barrett, J.A.

1961-05-26T23:59:59.000Z

362

Field demonstrations of communication systems for distribution automation. Final report  

Science Conference Proceedings (OSTI)

This report summarizes the results of a field demonstration of UHF radio links in the 950-MHz portion of the spectrum as a communication medium for automated electric power distribution. Prototype radio hardware was combined with logic and control equipment developed by Westinghouse for power-line carrier automated distribution systems. Transceivers were used in the remote interactive terminals; 3-dBm transceivers were used in the Central Base Station and the three Primary Radio Terminals, which functioned as system repeaters. Pre-installation field-strength measurements were made along selected radials and in spot locations to characterize the field-strength contours that would be encountered during operational testing. A propagation model was developed that accurately predicted the conditions actually recorded. Post-installation measurements were used to further calibrate the computer model. The resulting propagation analysis proved exceedingly effective in characterizing a UHF radio system for digital communication. Data collected during a 10-month operational period support the conclusion that 950-MHz radio is a viable communications medium for utility automated distribution functions. The system consists of 250 remote terminals, each interrogated from three sites, providing 750 paths for communication performance evaluation. The reliability of the prototype radio units used in this project fell below that predicted through MTBF analysis and did not meet utility reliability standards. The logic and control circuitry and the central station required very little maintenance during the test.
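As a point of reference for the propagation modeling described in this record, the standard free-space (Friis) path loss at 950 MHz can be computed directly. The field project calibrated its model against measured contours, so this is only the idealized baseline, not the project's propagation model.

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss in dB: FSPL = 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# Idealized loss over a 1 km link at 950 MHz (real terrain adds more).
loss_1km = free_space_path_loss_db(1000.0, 950e6)
```

Every doubling of distance adds about 6 dB of loss under this model; terrain, foliage, and clutter along the measured radials would add to that.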

Not Available

1981-06-01T23:59:59.000Z

363

Structural analysis of magnetic fusion energy systems in a combined interactive/batch computer environment  

SciTech Connect

A system of computer programs has been developed to aid in the preparation of input data for and the evaluation of output data from finite element structural analyses of magnetic fusion energy devices. The system utilizes the NASTRAN structural analysis computer program and a special set of interactive pre- and post-processor computer programs, and has been designed for use in an environment wherein a time-share computer system is linked to a batch computer system. In such an environment, the analyst must only enter, review and/or manipulate data through interactive terminals linked to the time-share computer system. The primary pre-processor programs include NASDAT, NASERR and TORMAC. NASDAT and TORMAC are used to generate NASTRAN input data. NASERR performs routine error checks on this data. The NASTRAN program is run on a batch computer system using data generated by NASDAT and TORMAC. The primary post-processing programs include NASCMP and NASPOP. NASCMP is used to compress the data initially stored on magnetic tape by NASTRAN so as to facilitate interactive use of the data. NASPOP reads the data stored by NASCMP and reproduces NASTRAN output for selected grid points, elements and/or data types.

Johnson, N.E.; Singhal, M.K.; Walls, J.C.; Gray, W.H.

1979-01-01T23:59:59.000Z

364

Software Verification and Validation Test Report for the HEPA filter Differential Pressure Fan Interlock System  

Science Conference Proceedings (OSTI)

The HEPA Filter Differential Pressure Fan Interlock System PLC ladder logic software was tested using a Software Verification and Validation (V&V) Test Plan as required by the ''Computer Software Quality Assurance Requirements''. The purpose of this document is to report the results of the software qualification.

ERMI, A.M.

2000-09-05T23:59:59.000Z

365

Assessing computational methods and science policy in systems biology.  

E-Print Network (OSTI)

In this thesis, I discuss the development of systems biology and issues in the progression of this science discipline. Traditional molecular biology has been driven (more)

Castillo, Andrea R. (Andrea Redwing)

2009-01-01T23:59:59.000Z

366

The protection of computer and electronic systems against ...  

Science Conference Proceedings (OSTI)

... the above figure, it should remain safe (no fires ... the indirect effects of lightning and power system faults ... a curative effort - learning to live and survive ...

2013-05-17T23:59:59.000Z

367

COMPUTER DESIGN AND OPTIMIZATION OF CRYOGENIC REFRIGERATION SYSTEMS  

E-Print Network (OSTI)

wells, fossil fuel burners, wet cooling towers, and dry cooling towers. Fluid properties are calculated using the water entering the cooling tower. Given the system state

Green, M.A.

2011-01-01T23:59:59.000Z

368

Electrical Engineering (EE) is a diverse discipline encompassing computer and information systems, controls,  

E-Print Network (OSTI)

and multiply excited systems. Concepts in rotating machinery analysis. Direct energy conversion. Prerequisite ... Electrical Engineering (EE) is a diverse discipline encompassing computer and information processing. Programs Available · Electrical Engineering, Bachelor of Science, 131 units · Computer

Rohs, Remo

369

Electrical Engineering (EE) is a diverse discipline encompassing computer and information systems, controls,  

E-Print Network (OSTI)

in singly and multiply excited systems. Concepts in rotating machinery analysis. Direct energy conversion ... Electrical Engineering (EE) is a diverse discipline encompassing computer and information processing. Programs Available · Electrical Engineering, Bachelor of Science, 131 units · Computer

Rohs, Remo

370

Proceedings of the SIGCHI Conference on Human Factors in Computing Systems  

Science Conference Proceedings (OSTI)

Over the last year or so, we have been blessed with the challenge, the opportunity, and the distinct pleasure of organizing the CHI 2011 Conference on Human Factors in Computing Systems, the premier international conference for the field of human-computer ...

Desney Tan; Geraldine Fitzpatrick; Carl Gutwin; Bo Begole; Wendy A. Kellogg

2011-05-01T23:59:59.000Z

371

Visual Analysis of I/O System Behavior for HighEnd Computing  

E-Print Network (OSTI)

at the Argonne Leadership Computing Facility (ALCF). On the ALCF systems, we use the 40-rack Intrepid Blue Gene network. When tracing applications in the ALCF environment, we set up a temporary PVFS2 storage cluster by the ALCF. The extra compute nodes we allocate for the I/O software are accessible only by our application

Islam, M. Saif

372

I-MINDS: A Multiagent System for Intelligent Computer- Supported Collaborative Learning and Classroom Management  

Science Conference Proceedings (OSTI)

I-MINDS provides a computer-supported collaborative learning (CSCL) infrastructure and environment for learners in synchronous learning and classroom management applications for instructors, for large classroom or distance education situations. For supporting ... Keywords: Computer-supported collaborative learning, multiagent system

Leen-Kiat Soh; Nobel Khandaker; Hong Jiang

2008-04-01T23:59:59.000Z

373

Experimental Analysis of Task-based Energy Consumption in Cloud Computing Systems  

E-Print Network (OSTI)

this model, we have conducted extensive experiments to profile the energy consumption in cloud computing ... Experimental Analysis of Task-based Energy Consumption in Cloud Computing Systems, Feifei Chen, John ... is that large cloud data centres consume large amounts of energy and produce significant carbon footprints

Schneider, Jean-Guy

374

A case study of a system-level approach to power-aware computing  

Science Conference Proceedings (OSTI)

This paper introduces a systematic approach to power awareness in mobile, handheld computers. It describes experimental evaluations of several techniques for improving the energy efficiency of a system, ranging from the network level down to the physical ... Keywords: Power-aware, battery properties, dynamic power management, energy-aware, handheld computers, multihop wireless network

Thomas L. Martin; Daniel P. Siewiorek; Asim Smailagic; Matthew Bosworth; Matthew Ettus; Jolin Warren

2003-08-01T23:59:59.000Z

375

The Effect of Heavy-Tailed Job Size Distributions on Computer System Design.

E-Print Network (OSTI)

The Effect of Heavy-Tailed Job Size Distributions on Computer System Design. Mor Harchol-Balter Laboratory for Computer Science MIT, NE43-340 Cambridge, MA 02139 harchol@theory.lcs.mit.edu Abstract Heavy ... physical phenomena to sociological phenomena. Recently heavy-tailed distributions have been discovered
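The heavy-tail effect the paper studies is easy to reproduce: with a Pareto job-size distribution whose shape parameter is close to 1, a tiny fraction of jobs accounts for most of the total work. The parameters below are illustrative, not taken from the paper.

```python
import random

def pareto_sample(alpha, n, seed=1):
    """Draw n Pareto(alpha) job sizes (minimum 1.0) by inverse-transform sampling."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

sizes = pareto_sample(alpha=1.1, n=100_000)
# Fraction of total work contributed by the largest 1% of jobs.
top_share = sum(sorted(sizes)[-1000:]) / sum(sizes)
```

For a shape parameter this close to 1, well over a third of all work typically sits in the top 1% of jobs; an exponential distribution with the same minimum would put only a few percent there, which is why heavy tails change scheduling and load-balancing design.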

Harchol-Balter, Mor

376

Service and Utility Oriented Distributed Computing Systems: Challenges and Opportunities for Modeling and Simulation Communities  

E-Print Network (OSTI)

for Modeling and Simulation Communities Rajkumar Buyya and Anthony Sulistio Grid Computing and Distributed- oriented computing systems such as Data Centers and Grids. We present various case studies on the use by the electrical power grid's pervasiveness and reliability, began exploring the design and development of a new

Buyya, Rajkumar

377

Maximizing Profit in Cloud Computing System via Resource Allocation Hadi Goudarzi and Massoud Pedram  

E-Print Network (OSTI)

With increasing demand for high performance computing and data storage, distributed computing systems have attracted a lot of attention ... IT infrastructure comprising servers, storage, network bandwidth, physical infrastructure, electrical grid ... Resource allocation is one of the most important challenges in the distributed

Pedram, Massoud

378

An analysis of computational workloads for the ORNL Jaguar system  

Science Conference Proceedings (OSTI)

This study presents an analysis of science application workloads for the Jaguar Cray XT5 system during its tenure as a 2.3 petaflop supercomputer at Oak Ridge National Laboratory. Jaguar was the first petascale system to be deployed for open science ... Keywords: applications, cray, exascale, hpc, metrics, ornl, petascale, scaling, science, workload

Wayne Joubert; Shi-Quan Su

2012-06-01T23:59:59.000Z

379

The visual computing of projector-camera systems  

Science Conference Proceedings (OSTI)

This article focuses on real-time image correction techniques that enable projector-camera systems to display images onto screens that are not optimized for projections, such as geometrically complex, colored and textured surfaces. It reviews hardware ... Keywords: GPU rendering, image-correction, projector-camera systems, virtual and augmented reality

Oliver Bimber; Daisuke Iwai; Gordon Wetzstein; Anselm Grundhöfer

2008-08-01T23:59:59.000Z

380

SYSTEM DESIGN AND PERFORMANCE FOR THE RECENT DIII-D NEUTRAL BEAM COMPUTER UPGRADE  

SciTech Connect

OAK-B135 This operating year marks an upgrade to the computer system charged with control and data acquisition for the neutral beam injection heating system at the DIII-D National Fusion Facility, funded by the US Department of Energy and operated by General Atomics (GA). This upgrade represents the third and latest major revision to a system that has been in service for over twenty years. The first control and data acquisition computers were four 16-bit minicomputers running a proprietary operating system; each of the four controlled two ion sources over a dedicated CAMAC highway. In a 1995 upgrade, the system evolved to two 32-bit Motorola minicomputers running a version of UNIX, each controlling four ion sources with two CAMAC highways per CPU. This latest upgrade builds on the same logical organization but makes significant advances in cost, maintainability, and the degree to which the system is open to future modification. The new control and data acquisition system is formed of two 2-GHz Intel Pentium 4 based PCs running the LINUX operating system. Each PC drives two CAMAC serial highways using a combination of Kinetic Systems PCI-standard CAMAC hardware drivers and a low-level software driver written in-house expressly for this device. This paper discusses the overall system design and implementation detail, describing actual operating experience for the initial six months of operation.

PHILLIPS,J.C; PENAFLOR,B.G; PHAM,N.Q; PIGLOWSKI,D.A

2003-10-01T23:59:59.000Z



381

Pentek metal coating removal system: Baseline report  

SciTech Connect

The Pentek coating removal technology was tested and is being evaluated at Florida International University (FIU) as a baseline technology. In conjunction with FIU's evaluation of efficiency and cost, this report covers the evaluation conducted for safety and health issues. It is a commercially available technology and has been used for various projects at locations throughout the country. The Pentek coating removal system consisted of the ROTO-PEEN Scaler, CORNER-CUTTER®, and VAC-PAC®. They are designed to remove coatings from steel, concrete, brick, and wood. The Scaler uses 3M Roto Peen tungsten carbide cutters, while the CORNER-CUTTER® uses solid needles for descaling activities. These hand tools are used with the VAC-PAC® vacuum system to capture dust and debris as the coating is removed. The safety and health evaluation during the testing demonstration focused on two main areas of exposure: dust and noise. Dust exposure was minimal, but noise exposure was significant. Further testing for each exposure is recommended because of the environment where the testing demonstration took place; it is feasible that dust and noise levels will be higher in an enclosed operating environment of different construction. Other areas of concern were arm-hand vibration, whole-body vibration, ergonomics, heat stress, tripping hazards, electrical hazards, machine guarding, and lockout/tagout.

1997-07-31T23:59:59.000Z

382

Performance Technology for Tera-Class Parallel Computers: Evolution of the TAU Performance System  

Science Conference Proceedings (OSTI)

In this project, we proposed to create new technology for performance observation and analysis of large-scale tera-class parallel computer systems and applications.

Allen D. Malony

2005-06-21T23:59:59.000Z

383

A Computational Web Portal for the Distributed Marine Environment Forecast System  

Science Conference Proceedings (OSTI)

This paper describes a prototype computational Web Portal for the Distributed Marine Environment Forecast System (DMEFS). DMEFS is a research framework to develop and operate validated Climate-Weather-Ocean models. The DMEFS portal is implemented as ...

Tomasz Haupt; Purushotham Bangalore; Gregory Henley

2001-06-01T23:59:59.000Z

384

Computer-Aided Dispatch System as a Decision Making Tool in Public and Private Sectors  

E-Print Network (OSTI)

We describe in detail seven distinct areas in both public and private sectors in which a real-time computer-aided dispatch system is applicable to the allocation of scarce resources. Characteristics of a real-time ...

Lee, I-Jen

385

UPC Hardware on BG/P Systems | Argonne Leadership Computing Facility  

NLE Websites -- All DOE Office Websites (Extended Search)

V3.0 Overview of the IBM Blue Gene/P project IBM System Blue Gene Solution: High Performance Computing Toolkit for Blue Gene/P Introduction This section gives a low level...

386

PLA FOR A COMPUTER-BASED CONSULTANT SYSTEM, Nils J. Nilsson

E-Print Network (OSTI)

: Apprentice Electrician Cable Installer Electrician Electrical Technician Electronics and communications and Construction Computer Systems Engineering Electro technology - Electrical, Electronics and Communications expected across 2014-15. Job titles may include the following: Apprentice Builder Apprentice Carpenter

387

Computing the Atmospheric Absorption for the DMSP Operational Linescan System Infrared Channel  

Science Conference Proceedings (OSTI)

An accurate and rapid means is presented for computing the atmospheric absorption for the infrared channel (10.2-12.7 µm) on the Defense Meteorological Satellite Program operational linescan system (OLS) for use in remote sensing studies of ...

Thomas J. Greenwald; Charles J. Drummond

1999-12-01T23:59:59.000Z

388

A Variational Method for Computing Surface Heat Fluxes from ARM Surface Energy and Radiation Balance Systems  

Science Conference Proceedings (OSTI)

A variational method is developed to compute surface fluxes of sensible and latent heat from observed wind, temperature, humidity, and surface energy and radiation budget by the surface energy and radiation balance systems (SERBS). In comparison ...

Qin Xu; Chong-Jian Qiu

1997-01-01T23:59:59.000Z

389

Measurement and modeling of computer reliability as affected by system activity  

Science Conference Proceedings (OSTI)

This paper demonstrates a practical approach to the study of the failure behavior of computer systems. Particular attention is devoted to the analysis of permanent failures. A number of important techniques, which may have general applicability in both ...

R. K. Iyer; D. J. Rossetti; M. C. Hsueh

1986-08-01T23:59:59.000Z

390

Assessing computational methods and science policy in systems biology  

E-Print Network (OSTI)

In this thesis, I discuss the development of systems biology and issues in the progression of this science discipline. Traditional molecular biology has been driven by reductionism with the belief that breaking down a ...

Castillo, Andrea R. (Andrea Redwing)

2009-01-01T23:59:59.000Z

391

High Performance Computing Systems for Autonomous Spaceborne Missions  

Science Conference Proceedings (OSTI)

Future-generation space missions across the solar system to the planets, moons, asteroids, and comets may someday incorporate supercomputers both to expand the range of missions being conducted and to significantly reduce their cost. By performing science ...

Thomas Sterling; Daniel S. Katz; Larry Bergman

2001-08-01T23:59:59.000Z

392

Optical Interconnections within Modern High-performance Computing Systems  

Science Conference Proceedings (OSTI)

Optical technologies are ubiquitous in telecommunications networks and systems, providing multiple wavelength channels of transport at 2.5-10 Gbps data rates over single fiber-optic cables. Market pressures continue to drive the number of wavelength ...

Howard Davidson; Rick Lytel; Nyles Nettleton; Theresa Sze

2000-05-01T23:59:59.000Z

393

8 -Circuits, Systems and Communications Microelectromechanical Systems 8 RLE Progress Report 144  

E-Print Network (OSTI)

MEMS Analysis System and Implementation Sponsors Computer Microvision for Microelectromechanical. This is where computer microvision acts as a good analysis tool to analyze the X, Y, and Z motions of a MEMS device during the development and testing stages of the design process. Computer microvision

394

Occurrence Reporting and Processing System | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Occurrence Reporting and Processing System The Department of Energy's Occurrence Reporting Program provides timely notification to the DOE complex of events that could adversely affect: public or DOE worker health and safety, the environment, national security, DOE's safeguards and security interests, the functioning of DOE facilities, or the Department's reputation. DOE analyzes aggregate occurrence information for generic implications and operational improvements. The Occurrence Reporting Program directives are DOE Order 232.2, Occurrence Reporting and Processing of Operations Information, and DOE Standard DOE-STD-1197-2011, Occurrence Reporting Causal Analysis. Contact Ashley Ruocco for information and assistance on

395

Computational Human Performance Modeling For Alarm System Design  

SciTech Connect

The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations and on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm systems are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators' alarm-handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based approaches that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and the human workload predicted by the system.
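A discrete-event flavor of the alarm-handling idea can be sketched with a single-operator queue: alarms arrive at random and each occupies the operator for a random handling time. This toy model is an assumption for illustration, not the INL researchers' actual simulation.

```python
import random

def simulate_alarms(n_alarms=1000, mean_gap=5.0, mean_handle=4.0, seed=7):
    """Average time an alarm waits before the single operator starts on it.

    Arrival gaps and handling times are exponential (an M/M/1-style toy).
    """
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_alarms):
        t += rng.expovariate(1.0 / mean_gap)   # time between alarms
        arrivals.append(t)
    operator_free_at = 0.0
    total_wait = 0.0
    for arr in arrivals:
        start = max(arr, operator_free_at)     # alarm waits if operator is busy
        total_wait += start - arr
        operator_free_at = start + rng.expovariate(1.0 / mean_handle)
    return total_wait / n_alarms
```

With the operator busy about 80% of the time (mean_handle / mean_gap = 0.8), queueing delay is already substantial; this sensitivity of workload to alarm rates and handling times is exactly what simulation-based alarm-system analysis is after.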

Jacques Hugo

2012-07-01T23:59:59.000Z

396

Reverse Computation for Rollback-based Fault Tolerance in Large Parallel Systems  

SciTech Connect

Reverse computation is presented here as an important future direction in addressing the challenge of fault tolerant execution on very large cluster platforms for parallel computing. As the scale of parallel jobs increases, traditional checkpointing approaches suffer scalability problems ranging from computational slowdowns to high congestion at the persistent stores for checkpoints. Reverse computation can overcome such problems and is also better suited for parallel computing on newer architectures with smaller, cheaper or energy-efficient memories and file systems. Initial evidence for the feasibility of reverse computation in large systems is presented with detailed performance data from a particle simulation scaling to 65,536 processor cores and 950 accelerators (GPUs). Reverse computation is observed to deliver very large gains relative to checkpointing schemes when nodes rely on their host processors/memory to tolerate faults at their accelerators. A comparison between reverse computation and checkpointing with measurements such as cache miss ratios, TLB misses and memory usage indicates that reverse computation is hard to ignore as a future alternative to be pursued in emerging architectures.
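The core trade this abstract describes, storing a small undo record per event instead of a full checkpoint, can be illustrated with a toy reversible simulation. The counter model below is an assumption for illustration, not the ORNL particle code.

```python
import random

def forward(state, rng):
    """Apply one event; return the new state and the undo record."""
    delta = rng.randrange(1, 10)
    return state + delta, delta

def reverse(state, delta):
    """Exact inverse of forward(): no checkpointed state is needed."""
    return state - delta

rng = random.Random(0)
state, undo_log = 0, []
for _ in range(100):
    state, delta = forward(state, rng)
    undo_log.append(delta)        # per-event undo record (a few bytes)
state_after = state
for delta in reversed(undo_log):  # roll the whole run back
    state = reverse(state, delta)
```

Rolling back restores the initial state exactly; the cost is a small per-event log rather than periodic full-state checkpoints to persistent storage, which is the scaling advantage claimed for large parallel systems.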

Perumalla, Kalyan S [ORNL; Park, Alfred J [ORNL

2013-01-01T23:59:59.000Z

397

A Novel Meta Learning System and Its Application to Optimization of Computing Agents' Results  

Science Conference Proceedings (OSTI)

We present a description of our multi-agent system where computational intelligence methods are embodied as software agents. This system is designed in order to allow easy experiments with learning, meta learning, gathering experience based on previous ... Keywords: meta-learning, multi-agent system, ontology, roles, data-mining

Ondrej Kazik; Klara Peskova; Martin Pilat; Roman Neruda

2012-12-01T23:59:59.000Z

398

PARALLEL COMPUTING AIDED DESIGN OF EARTHING SYSTEMS FOR ELECTRICAL SUBSTATIONS IN NON HOMOGENEOUS SOIL  

E-Print Network (OSTI)

and design of grounding systems of electrical substations have been proposed, most of them based on practice. PARALLEL COMPUTING AIDED DESIGN OF EARTHING SYSTEMS FOR ELECTRICAL SUBSTATIONS IN NON HOMOGENEOUS SOIL. Abstract: An accurate design of grounding systems is essential to assure the safety of the persons

Colominas, Ignasi

399

The ISR Argus 500 system - control of the beam transfer power supplies by the Argus 500 computer operators manual  

E-Print Network (OSTI)

The ISR Argus 500 system - control of the beam transfer power supplies by the Argus 500 computer operators manual

Kemp, D

1970-01-01T23:59:59.000Z

400

A Dialogue Concerning Two World Systems: Info-Computational vs. Mechanistic  

E-Print Network (OSTI)

The dialogue develops arguments for and against adopting a new world system, info-computationalist naturalism, which is poised to replace the traditional mechanistic world system. We try to figure out what the info-computational paradigm would mean, in particular its pancomputationalism. We take some steps towards developing the notion of computing that is necessary here, especially in relation to traditional notions. We investigate whether pancomputationalism can possibly provide the basic causal structure of the world, whether the overall research programme appears productive, and whether it can reinvigorate computationalism in the philosophy of mind.

Gordana Dodig-Crnkovic; Vincent C. Mueller

2009-10-26T23:59:59.000Z



401

SEACC: the systems engineering and analysis computer code for small wind systems  

DOE Green Energy (OSTI)

The systems engineering and analysis (SEA) computer program (code) evaluates complete horizontal-axis SWECS performance. Rotor power output as a function of wind speed and energy production in various wind regions are predicted by the code. Efficiencies of components such as the gearbox, electric generators, rectifiers, electronic inverters, and batteries can be included in the evaluation process to reflect complete system performance. Parametric studies can be carried out for blade design characteristics such as airfoil series, taper rate, twist, and pitch setting, and for geometry such as rotor radius, hub radius, number of blades, coning angle, and rotor rpm. Design tradeoffs can also be performed to optimize system configurations for constant-rpm, constant tip-speed-ratio, and rpm-specific rotors. SWECS energy supply as compared to the load demand for each hour of the day and during each season of the year can be assessed by the code if the diurnal wind and load distributions are known. Blade aerodynamic loading information is also available during each run of the code.
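The rotor power curve at the heart of such a code follows the standard relation P = ½ ρ A Cp v³. The power coefficient and air density below are generic assumptions for illustration, not SEACC's component models.

```python
import math

def rotor_power_w(wind_speed_ms, rotor_radius_m, cp=0.4, rho=1.225):
    """Rotor power P = 0.5 * rho * A * Cp * v^3 for a horizontal-axis rotor.

    cp is an assumed overall power coefficient (the Betz limit is ~0.593);
    rho is sea-level air density in kg/m^3.
    """
    swept_area = math.pi * rotor_radius_m ** 2  # A = pi * R^2
    return 0.5 * rho * swept_area * cp * wind_speed_ms ** 3
```

Drivetrain efficiencies (gearbox, generator, rectifier, inverter, battery) multiply onto this figure, which is how a systems code like the one described rolls component losses into net output.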

Tu, P.K.C.; Kertesz, V.

1983-03-01T23:59:59.000Z

402

Federal Energy Management Program: EISA Compliance Tracking System Reports  

NLE Websites -- All DOE Office Websites (Extended Search)

EISA Compliance Tracking System Reports and Data. Requirements by Subject. Requirements by Regulation. Notices & Rules

403

Computational methods for criticality safety analysis within the scale system  

SciTech Connect

The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs.
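At its simplest, the Monte Carlo multiplication factor such codes estimate is the ratio of neutrons produced to neutrons lost per generation. The one-generation toy below (made-up probabilities and a fixed average neutron yield) only illustrates the estimator, not KENO's transport physics.

```python
import random

def one_generation_k(n=100_000, p_fission=0.4, nu=2.43, seed=3):
    """Toy one-generation estimate of k: each absorbed neutron causes fission
    with probability p_fission, and each fission yields nu neutrons on average.
    """
    rng = random.Random(seed)
    fissions = sum(1 for _ in range(n) if rng.random() < p_fission)
    return nu * fissions / n
```

The expected k here is nu * p_fission ≈ 0.97, slightly subcritical; a production code additionally tracks geometry, neutron energy, and scattering, and iterates the estimate over many generations until it converges.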

Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.

1986-01-01T23:59:59.000Z

404

Coordinating government funding of file system and I/O research through the high end computing university research activity  

Science Conference Proceedings (OSTI)

In 2003, the High End Computing Revitalization Task Force designated file systems and I/O as an area in need of national focus. The purpose of the High End Computing Interagency Working Group (HECIWG) is to coordinate government spending on File Systems ... Keywords: file systems, high end computing, storage

Gary Grider; James Nunez; John Bent; Steve Poole; Rob Ross; Evan Felix

2009-01-01T23:59:59.000Z

405

Middleware in Modern High Performance Computing System Architectures

E-Print Network (OSTI)

May 28, 2007. Middleware in Modern High Performance Computing System Architectures. Christian Engelmann, Hong Ong, Stephen L... Talk Outline

Engelmann, Christian

406

Computing Frontier: Distributed Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Computing Frontier: Distributed Computing and Facility Infrastructures. Conveners: Kenneth Bloom (Department of Physics and Astronomy, University of Nebraska-Lincoln) and Richard Gerber (National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory). 1.1 Introduction The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and simulations, and allowing for wide-spread participation of large groups of researchers. For a variety of reasons, these resources have become more distributed over a large geographic area, and some resources are highly specialized computing machines. In this report for the Snowmass Computing Frontier Study, we consider several questions about distributed computing

407

Modelling and computation for designs of multistage heat exchanger systems  

Science Conference Proceedings (OSTI)

A multistage heat exchanger system is formed when it is desired to heat a single cold fluid stream with the help of several available hot streams. Usually only one specific size combination will lead to total minimum cost. The determination of these ... Keywords: Heat Exchangers, multistage, optimisation

A. Malhotra; S. B. Muhaddin

1990-12-01T23:59:59.000Z
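The abstract above observes that exactly one size combination of a multistage exchanger train yields minimum total cost. For a two-stage system the design variable is the interstage temperature, and a brute-force search already exhibits the trade-off. All stream data, cost coefficients, and utility prices below are invented for illustration; the LMTD sizing itself is standard:

```python
import math

def lmtd(dt1, dt2):
    """Log-mean temperature difference; degenerates to dt1 when equal."""
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def stage_cost(t_in, t_out, t_hot, util_price, mcp=10.0, U=0.5, a=800.0, b=0.6):
    """Annualized cost of one stage heating the cold stream t_in -> t_out
    against an isothermal (e.g. condensing) hot utility at t_hot:
    concave capital cost in area plus a linear utility charge on duty."""
    q = mcp * (t_out - t_in)                           # duty, kW
    area = q / (U * lmtd(t_hot - t_in, t_hot - t_out))
    return a * area ** b + util_price * q

def total_cost(t_mid, t_in=300.0, t_out=400.0):
    """Two-stage train: a cheap 430 K utility heats first, a pricier
    520 K utility finishes the job. t_mid is the interstage temperature."""
    return (stage_cost(t_in, t_mid, 430.0, util_price=5.0) +
            stage_cost(t_mid, t_out, 520.0, util_price=20.0))

def best_split(lo=301.0, hi=399.5, step=0.5):
    """Brute-force search for the minimum-cost interstage temperature."""
    best_c, best_t = float("inf"), None
    t = lo
    while t <= hi:
        c = total_cost(t)
        if c < best_c:
            best_c, best_t = c, t
        t += step
    return best_c, best_t

cost, t_mid = best_split()
```

Any arbitrary split (say, the midpoint) costs at least as much as the optimum found by the search.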

408

Development of a numerical computer code and circuit element models for simulation of firing systems  

SciTech Connect

Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MSDOS-based, personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U. C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a non-ideal switch, and which can be added to the existing SPICE circuits without changing the SPICE code itself. The effect of fast rise time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.

Carpenter, K.H. (Kansas State Univ., Manhattan, KS (USA). Dept. of Electrical and Computer Engineering)

1990-07-02T23:59:59.000Z
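The two elements the report adds to SPICE, fuses and saturating inductances, are both state-dependent branch models. As a minimal stand-in (not the report's FIRESET-based model, and with a hypothetical saturation law), here is a forward-Euler transient of a series R-L firing circuit whose differential inductance collapses as current approaches a saturation level:

```python
def l_saturating(i, L0=1e-3, i_sat=5.0):
    """Toy saturating inductance: differential inductance falls off as the
    current approaches and exceeds i_sat (hypothetical functional form)."""
    return L0 / (1.0 + (i / i_sat) ** 2)

def simulate_rl_step(V=10.0, R=2.0, dt=1e-7, t_end=5e-3):
    """Forward-Euler transient of a series R-L circuit driven by a step
    voltage: L(i) * di/dt = V - R*i."""
    i, t, trace = 0.0, 0.0, []
    while t < t_end:
        di = (V - R * i) / l_saturating(i) * dt
        i += di
        t += dt
        trace.append((t, i))
    return i, trace

i_final, trace = simulate_rl_step()
```

Saturation only reshapes the transient; the steady-state current is still set by V/R, which makes a convenient correctness check.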

409

Quick Reference Guide for BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Quick Reference Guide for BG/P Systems. Contents: Hardware Description, Compiling/Linking, Running/Queuing, Libraries/Applications, Performance Tools, Debugging. Hardware Description: Surveyor - 13.6 TF/s 1 rack BG/P (1024 compute nodes/4096 CPUs). Intrepid - 557.1 TF/s 40 rack BG/P (40960 compute nodes/163840 CPUs). Front-end nodes (FENs), or login nodes - Regular Linux-based computers for

410

Ocean energy conversion systems annual research report  

DOE Green Energy (OSTI)

Alternative power cycle concepts to the closed-cycle Rankine are evaluated and those that show potential for delivering power in a cost-effective and environmentally acceptable fashion are explored. Concepts are classified according to the ocean energy resource: thermal, waves, currents, and salinity gradient. Research projects have been funded and reported in each of these areas. The lift of seawater entrained in a vertical steam flow can provide potential energy for a conventional hydraulic turbine conversion system. Quantification of the process and assessment of potential costs must be completed to support concept evaluation. Exploratory development is being completed in thermoelectricity and 2-phase nozzles for other thermal concepts. Wave energy concepts are being evaluated by analysis and model testing with present emphasis on pneumatic turbines and wave focussing. Likewise, several conversion approaches to ocean current energy are being evaluated. The use of salinity resources requires further research in membranes or the development of membraneless processes. Using the thermal resource in a Claude cycle process as a power converter is promising, and a program of R and D and subsystem development has been initiated to provide confirmation of the preliminary conclusion.

Not Available

1981-03-01T23:59:59.000Z

411

Evaluation of Revised Computer-Based Procedure System Prototype  

SciTech Connect

The nuclear power industry is very procedure driven, i.e. almost all activities that take place at a nuclear power plant are conducted by following procedures. The paper-based procedures (PBPs) currently used by the industry do a good job at keeping the industry safe. However, these procedures are most often paired with methods and tools put in place to anticipate, prevent, and catch errors related to hands-on work. These tools are commonly called human performance tools. The drawback with the current implementation of these tools is that the task of performing one procedure becomes time and labor intensive. For example, concurrent and independent verification of procedure steps are required at times, which essentially means that at least two people have to be actively involved in the task. Even though the current use of PBPs and human performance tools are keeping the industry safe, there is room for improvement. The industry could potentially increase their efficiency and safety by replacing their existing PBPs with CBPs. If implemented correctly, the CBP system could reduce the time and focus spent on using the human performance tools. Some of the tools can be completely incorporated in the CBP system in a manner that the performer does not think about the fact that these tools are being used. Examples of these tools are procedure use and adherence, placekeeping, and peer checks. Other tools can be partly integrated in a fashion that reduce the time and labor they require, such as concurrent and independent verification. The incorporation of advanced technology, such as CBP systems, may help to manage the effects of aging systems, structures, and components. The introduction of advanced technology may also make the existing LWR fleet more attractive to the future workforce, which will be of importance when the future workforce will chose between existing fleet and the newly built nuclear power plants.

Katya Le Blanc; Johanna Oxstrand; Cheradan Fikstad

2013-01-01T23:59:59.000Z

412

Fast computation of multi-scale combustion systems  

E-Print Network (OSTI)

In the present work, we illustrate the process of constructing a simplified model for complex multi-scale combustion systems. To this end, reduced models of homogeneous ideal gas mixtures of methane and air are first obtained by the novel Relaxation Redistribution Method (RRM) and thereafter used for the extraction of all the missing variables in a reactive flow simulation with a global reaction model.

Chiavazzo, Eliodoro; Asinari, Pietro

2010-01-01T23:59:59.000Z

413

Center for Power Electronics Systems PROGRESS REPORT  

E-Print Network (OSTI)

PROGRESS REPORT 2010. SMES program: Superconducting magnetic energy storage (SMES) is a way of storing ... TECHNOLOGIES: Standard-Cell Passive IPEMs; Motor and Converter Integration; Control and Sensor Integration; Thermal ... CPES 10 YEAR PROGRESS REPORT 2010, Chapter 1: Introduction. Outlines

Beex, A. A. "Louis"

414

CURRENT STATUS OF DIII-D PLASMA CONTROL SYSTEM COMPUTER UPGRADES  

SciTech Connect

OAK-B135 This paper presents the latest status of the DIII-D Real-Time Digital Plasma Control System (PCS). The primary focus will be on the new computing and data acquisition hardware recently incorporated into the PCS and the added features provided by the new hardware. The upgrade of the PCS real-time computing and data acquisition hardware has been in progress for the past three and a half years. Since the initial proposal to migrate off the old VME i860 based system to more modern computing hardware, the PCS has seen a number of changes and improvements occurring in several planned phases. The final phase of the upgrade was successfully completed earlier this year. This phase included the complete removal of the i860 computers and Traq digitizer hardware and the incorporation of high performance Intel PCI based computers fitted with 32 Channel PCI digitizers from D-TACQ corporation. Distinguishing features of the completed PCS upgrade include a custom real-time data acquisition and control solution based on the Linux Intel computing platform and a scalable multi CPU parallel processing architecture connected by a 2 gigabit per second Myrinet network. Among the many improvements resulting from the upgrade are the higher computing performance (factor 20 over the previous system), connectivity to remote DIII-D diagnostics through Myrinet fiber optic links and plasma diagnostic data displayed in real-time on digital oscilloscopes.

PENAFLOR,B.G; FERRON,J.R; JOHNSON,R.D; PIGLOWSKI,D.A

2003-07-01T23:59:59.000Z

415

Proceedings of the IEEE Conf. on Real-Time Computer Systems and Applications, Hiroshima, Japan, Oct. 1998: Simulation and Tracing of Hybrid Task Sets on Distributed Systems  

E-Print Network (OSTI)

Proceedings of the IEEE Conf. on Real-Time Computer Systems and Applications, Hiroshima, Japan, Oct. 1998. Tokuda et al

Lipari, Giuseppe

416

Conceptual design and systems analysis of photovoltaic power systems. Final report. Volume III(2). Technology  

DOE Green Energy (OSTI)

Conceptual designs were made and analyses were performed on three types of solar photovoltaic power systems. Included were Residential (1 to 10 kW), Intermediate (0.1 to 10 MW), and Central (50 to 1000 MW) Power Systems to be installed in the 1985 to 2000 time period. The following analyses and simulations are covered: residential power system computer simulations, intermediate power systems computer simulation, central power systems computer simulation, array comparative performance, utility economic and margin analyses, and financial analysis methodology.

Pittman, P.F.

1977-05-01T23:59:59.000Z

417

Control System Applicable Use Assessment of the Secure Computing Corporation - Secure Firewall (Sidewinder)  

Science Conference Proceedings (OSTI)

Battelle's National Security & Defense objective is, applying unmatched expertise and unique facilities to deliver homeland security solutions. From detection and protection against weapons of mass destruction to emergency preparedness/response and protection of critical infrastructure, we are working with industry and government to integrate policy, operational, technological, and logistical parameters that will secure a safe future. In an ongoing effort to meet this mission, engagements with industry that are intended to improve operational and technical attributes of commercial solutions that are related to national security initiatives are necessary. This necessity will ensure that capabilities for protecting critical infrastructure assets are considered by commercial entities in their development, design, and deployment lifecycles thus addressing the alignment of identified deficiencies and improvements needed to support national cyber security initiatives. The Secure Firewall (Sidewinder) appliance by Secure Computing was assessed for applicable use in critical infrastructure control system environments, such as electric power, nuclear and other facilities containing critical systems that require augmented protection from cyber threat. The testing was performed in the Pacific Northwest National Laboratory's (PNNL) Electric Infrastructure Operations Center (EIOC). The Secure Firewall was tested in a network configuration that emulates a typical control center network and then evaluated. A number of observations and recommendations are included in this report relating to features currently included in the Secure Firewall that support critical infrastructure security needs.

Hadley, Mark D.; Clements, Samuel L.

2009-01-01T23:59:59.000Z

418

Oak Ridge Leadership Computing Facility User Update: SmartTruck Systems |  

NLE Websites -- All DOE Office Websites (Extended Search)

Oak Ridge Leadership Computing Facility User Update: SmartTruck Systems. Startup zooms to success improving fuel efficiency of long-haul trucks by more than 10 percent. Supercomputing simulations at Oak Ridge National Laboratory enabled SmartTruck Systems engineers to develop the UnderTray System, some components of which are shown here. The system dramatically reduces drag, and increases fuel mileage, in long-haul trucks. Image: Michael Matheson, Oak Ridge National Laboratory (hi-res image)

419

A computer music instrumentarium  

E-Print Network (OSTI)

Chapter 6. COMPUTERS: To Solder or Not to ... Music Models: A Computer Music Instrumentarium ... Interactive Computer Systems ... 101

Oliver La Rosa, Jaime Eduardo

2011-01-01T23:59:59.000Z

420

ETTM: Low Voltage System Protection and Coordination Computer-Based Training Version 1.0  

Science Conference Proceedings (OSTI)

ETTM Low Voltage System Protection and Coordination is a computer-based training module that allows users to access training when desired and review it at their own pace. It provides graphics and limited interactive features to enhance learning. Low voltage power systems are generally classified as AC power systems operating at the 120 to 600 Volt level. Because they often serve a large number of diverse loads in residences, businesses, and power plants, low voltage power systems present a unique set of cha...

2010-11-30T23:59:59.000Z
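Protection coordination of the kind this training module covers reduces to comparing inverse-time overcurrent curves: the downstream (primary) relay must clear a fault before the upstream (backup) relay acts, with an adequate time margin. A sketch using the IEC 60255 standard-inverse characteristic t = TMS * 0.14 / ((I/Is)^0.02 - 1); the pickup settings and time-multiplier values below are hypothetical:

```python
def iec_standard_inverse(i_fault, i_pickup, tms):
    """IEC 60255 standard-inverse overcurrent relay trip time (seconds)."""
    m = i_fault / i_pickup
    if m <= 1.0:
        return float("inf")      # below pickup: the relay never trips
    return tms * 0.14 / (m ** 0.02 - 1.0)

def coordination_margin(i_fault, downstream, upstream):
    """Time margin between the upstream (backup) and downstream (primary)
    relay at a given fault current. Each relay is (i_pickup, tms)."""
    t_down = iec_standard_inverse(i_fault, *downstream)
    t_up = iec_standard_inverse(i_fault, *upstream)
    return t_up - t_down

# Hypothetical feeder: downstream relay picks up at 200 A, upstream at 400 A.
margin = coordination_margin(2000.0, downstream=(200.0, 0.05), upstream=(400.0, 0.2))
```

A common engineering rule is to require a margin of roughly 0.3 s or more at the maximum fault current, which this setting pair satisfies.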



421

WIPP Waste Information System Waste Container Data Report  

E-Print Network (OSTI)

WIPP Waste Information System Waste Container Data Report (RP0360, version 2.6): run 06/06/2008 07:50 by PEARCYM on instance PRD02, 5 pages; container number LASB00411. Columns include Report Date, Site Id, Container Number, Waste Stream, and Data Status Code. Waste Isolation Pilot Plant Waste Container Data Report

422

WIPP Waste Information System Waste Container Data Report  

E-Print Network (OSTI)

WIPP Waste Information System Waste Container Data Report (RP0360, version 2.6): run 06/06/2008 07:49 by PEARCYM on instance PRD02, 5 pages; container number LAS817174. Columns include Report Date, Site Id, Container Number, Waste Stream, and Data Status Code. Waste Isolation Pilot Plant Waste Container Data Report

423

Environmental Systems Research and Analysis FY 2000 Annual Report  

SciTech Connect

The Environmental Systems Research (ESR) Program, a part of the Environmental Systems Research and Analysis (ESRA) Program, was implemented to enhance and augment the technical capabilities of the INEEL. Strengthening the technical capabilities of the INEEL will provide the technical base to serve effectively as the Environmental Management Laboratory for the Office of Environmental Management (EM). This is a progress report for the third year of the ESR Program (FY 2000). A report of activities is presented for the five ESR research investment areas: (1) Transport Aspects of Selective Mass Transport Agents, (2) Chemistry of Environmental Surfaces, (3) Materials Dynamics, (4) Characterization Science, and (5) Computational Simulation of Mechanical and Chemical Systems. In addition to the five technical areas, the report describes activities in the Science and Technology Foundations element of the program, e.g., interfaces between ESR and the EM Science Program (EMSP) and the EM Focus Areas. The five research areas are subdivided into 18 research projects. FY 2000 research in these 18 projects has resulted in more than 50 technical papers that are in print, in press, in review, or in preparation. Additionally, more than 100 presentations were made at professional society meetings nationally and internationally. Work supported by this program was in part responsible for one of our researchers, Dr. Mason Harrup, receiving the Department of Energy's Bright Light and Energy at 23 awards. Significant accomplishments were achieved. Non-Destructive Assay hardware and software was deployed at the INEEL, enhancing the quality and efficiency of TRU waste characterization for shipment. The advanced tensiometer has been employed at numerous sites around the complex to determine hydrologic gradients in variably saturated vadose zones. An ion trap, secondary ion mass spectrometer (IT-SIMS) was designed and fabricated to deploy at the INEEL site to measure the chemical speciation of radionuclides and toxic metals on the surfaces of environmentally significant minerals. The FY 2001 program will have a significantly different structure and research content. This report presents the final summary of projects coming to an end in FY 2000 and is a bridge to the FY 2001 program.

David L. Miller; Castle, Peter Myer; Steven J. Piet

2001-01-01T23:59:59.000Z

424

PHYSICS, COMPUTER SCIENCE AND MATHEMATICS DIVISION. ANNUAL REPORT, 1 JAN. - 31 DEC. 1976  

E-Print Network (OSTI)

Steele, J. Wagner (Computer Science and Applied Mathematics); S. White, Peter M. Wood (Computer Center staff); Distributed Data Management and Computer Networks, LBL-S3lS,

Authors, Various

2010-01-01T23:59:59.000Z

425

Status Report, Essential System Functionality - January 2006 | Department  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Status Report, Essential System Functionality - January 2006. Report on Essential System Functionality: In 2004 and 2005, the Office of Independent Oversight, within the Office of Security and Safety Performance Assurance, performed ten evaluations of essential system functionality (ESF). These ESF reviews are highly technical, detailed engineering evaluations of selected essential systems within one or more facilities at each site. This report summarizes the observations and insights from these reviews. Although most essential safety systems that were reviewed were well maintained, tested, and operated, there were significant weaknesses in some aspects of engineering design and analysis that, for some safety systems,

426

System integration of marketable subsystems (second collection of progress reports)  

DOE Green Energy (OSTI)

These monthly reports, covering the period from February 1978 through June 1978, reflect the progress in five major areas: systems integration of marketable subsystems; development, design, and building of site data acquisition subsystems; development and operation of the central data processing system; operation of the MSFC solar test facility; and systems analysis. Some of these reports are in presentation form (charts).

Not Available

1978-07-01T23:59:59.000Z

427

Prototype solar heating and cooling systems. Monthly progress reports  

DOE Green Energy (OSTI)

This report is a collection of monthly status reports from the AiResearch Manufacturing Company, who is developing eight prototype solar heating and cooling systems under NASA Contract NAS8-32091. This effort calls for the development, manufacture, test, system installation, maintenance, problem resolution, and performance evaluation. The systems are 3-, 25-, and 75-ton size units.

Not Available

1978-10-01T23:59:59.000Z

428

Computational Research Challenges and Opportunities for the Optimization of Fossil Energy Power Generation System  

Science Conference Proceedings (OSTI)

Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.

Zitney, S.E.

2007-06-01T23:59:59.000Z

429

Systems, methods and computer-readable media for modeling cell performance fade of rechargeable electrochemical devices  

SciTech Connect

A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzing the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing the second exchange current density.

Gering, Kevin L

2013-08-27T23:59:59.000Z

430

Systems, methods and computer-readable media to model kinetic performance of rechargeable electrochemical devices  

DOE Patents (OSTI)

A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.

Gering, Kevin L.

2013-01-01T23:59:59.000Z
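The patent abstract above centers on a modified Butler-Volmer (BV) expression for exchange current density. The unmodified starting point is standard electrochemistry; the sketch below evaluates it and, for the symmetric transfer coefficients assumed here, backs the exchange current density out of a single current/overpotential sample. The sigmoid pulse-time terms described in the abstract are not reproduced:

```python
import math

# Physical constants (SI).
F = 96485.332          # Faraday constant, C/mol
R_GAS = 8.314462       # gas constant, J/(mol*K)

def butler_volmer(i0, eta, T=298.15, alpha_a=0.5, alpha_c=0.5):
    """Classic Butler-Volmer current density at overpotential eta (V):
    i = i0 * (exp(alpha_a*F*eta/(R*T)) - exp(-alpha_c*F*eta/(R*T)))."""
    f = F / (R_GAS * T)
    return i0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

def solve_exchange_current(eta, i_meas, **kw):
    """The BV expression is linear in i0, so the exchange current density
    follows directly from one measured current/overpotential sample."""
    return i_meas / butler_volmer(1.0, eta, **kw)

i = butler_volmer(i0=2.0, eta=0.05)
i0_back = solve_exchange_current(0.05, i)
```

The round trip (generate a current at a known i0, then recover i0 from it) is an easy self-consistency check.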

431

Systems, methods and computer readable media for estimating capacity loss in rechargeable electrochemical cells  

DOE Patents (OSTI)

A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples charge characteristics of the electrochemical cell. The computing system periodically determines cell information from the charge characteristics of the electrochemical cell. The computing system also periodically adds a first degradation characteristic from the cell information to a first sigmoid expression, periodically adds a second degradation characteristic from the cell information to a second sigmoid expression and combines the first sigmoid expression and the second sigmoid expression to develop or augment a multiple sigmoid model (MSM) of the electrochemical cell. The MSM may be used to estimate a capacity loss of the electrochemical cell at a desired point in time and analyze other characteristics of the electrochemical cell. The first and second degradation characteristics may be loss of active host sites and loss of free lithium for Li-ion cells.

Gering, Kevin L.

2013-06-18T23:59:59.000Z
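The multiple sigmoid model (MSM) described above sums independent degradation mechanisms, each rising from zero toward a saturating magnitude (e.g. loss of active host sites and loss of free lithium). The functional form and parameters below are assumptions for illustration, not the patented model:

```python
import math

def sigmoid_fade(t, magnitude, rate, order=0.5):
    """One MSM-style degradation term: zero at t = 0, saturating at
    `magnitude` as t grows (hypothetical 2/(1+exp) functional form)."""
    return magnitude * (1.0 - 2.0 / (1.0 + math.exp(rate * t ** order)))

def capacity_loss(t, terms):
    """Multiple sigmoid model: total fractional capacity loss is the sum
    of independent mechanism terms, each (magnitude, rate)."""
    return sum(sigmoid_fade(t, m, r) for m, r in terms)

# Hypothetical parameters: (saturating magnitude, rate) per mechanism.
TERMS = [(0.10, 0.05), (0.15, 0.02)]
loss_early = capacity_loss(10.0, TERMS)
loss_late = capacity_loss(10000.0, TERMS)
```

Because each term saturates at its magnitude, the total loss is bounded by the sum of magnitudes, which gives the model a built-in asymptote along the aging timeline.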

432

An OVAL-based active vulnerability assessment system for enterprise computer networks  

Science Conference Proceedings (OSTI)

Many security problems are caused by vulnerabilities hidden in enterprise computer networks. It is very important for system administrators to have knowledge about the security vulnerabilities. However, current vulnerability assessment methods may encounter ... Keywords: Attack path, Network security, Open vulnerability assessment language, Predicate logic, Relational database management system, Security vulnerability

Xiuzhen Chen; Qinghua Zheng; Xiaohong Guan

2008-11-01T23:59:59.000Z
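At its core, the attack-path analysis the abstract describes composes two relations: network reachability and per-host vulnerability (as an OVAL scan would report it). A minimal breadth-first sketch over a hypothetical three-tier network, not the paper's predicate-logic engine:

```python
from collections import deque

def attack_paths(reachable, vulns, start, target):
    """Find one shortest attack path through an enterprise network model.

    reachable: dict host -> set of hosts it can connect to (topology).
    vulns: set of hosts carrying an exploitable (OVAL-reported) flaw.
    A host becomes compromisable when an already-compromised host can
    reach it and it is vulnerable. Returns a path list, or None.
    """
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        host = path[-1]
        if host == target:
            return path
        for nxt in reachable.get(host, set()):
            if nxt not in seen and nxt in vulns:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

topology = {"attacker": {"web"}, "web": {"app"}, "app": {"db"}}
vulnerable = {"web", "app", "db"}
path = attack_paths(topology, vulnerable, "attacker", "db")
```

Removing any one vulnerability along the chain (patching "app", say) breaks the path, which is exactly the remediation argument such tools support.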

433

Adaptive Schemes for Home-based DSM Systems School of Computing  

E-Print Network (OSTI)

Adaptive Schemes for Home-based DSM Systems M.C. Ng School of Computing National University Home-based consistency model, a variant of lazy release consistency model (LRC), is a recent, we present 2 adaptive schemes for home-based DSM systems: home migration and dynamic adaptation

Wong, Weng Fai

434

A Paired-Orientation Alignment Problem in a Hybrid Tracking System for Computer Assisted Surgery  

Science Conference Proceedings (OSTI)

Coordinate Alignment (CA) is an important problem in hybrid tracking systems involving two or more tracking devices. CA typically associates the measurements from two or more tracking systems with respect to distinct base frames and makes them comparable ... Keywords: Computer assisted surgery (CAS), Coordinate alignment, Paired-Orientation Alignment (POA), Quaternions, Surgical navigation

Hongliang Ren; Peter Kazanzides

2011-08-01T23:59:59.000Z
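In a paired-orientation alignment problem, two trackers report orientations qA and qB of the same body, related by a fixed offset X with qB = X * qA (the model assumed here). A simple estimator, used as a stand-in for the paper's POA solution, averages the sign-aligned relative quaternions qB * conj(qA):

```python
import math

def q_mul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def q_norm(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def align_orientations(pairs):
    """Estimate the fixed offset X in qB ~= X * qA from paired samples by
    sign-aligned averaging of the relative quaternions (q and -q denote
    the same rotation, so each sample is flipped onto the first one)."""
    acc = [0.0] * 4
    ref = None
    for qa, qb in pairs:
        rel = q_mul(qb, q_conj(qa))
        if ref is None:
            ref = rel
        if sum(r * s for r, s in zip(rel, ref)) < 0:
            rel = tuple(-c for c in rel)
        acc = [a + c for a, c in zip(acc, rel)]
    return q_norm(tuple(acc))

# Synthetic check: the true offset X is a 90-degree rotation about z.
s = math.sqrt(0.5)
X = (s, 0.0, 0.0, s)
samples = [((1.0, 0.0, 0.0, 0.0), q_mul(X, (1.0, 0.0, 0.0, 0.0))),
           ((s, s, 0.0, 0.0), q_mul(X, (s, s, 0.0, 0.0)))]
x_est = align_orientations(samples)
```

With noise-free synthetic pairs the estimator recovers the known offset exactly, which makes the fixture a useful sanity test before feeding in real tracker data.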

435

A Computer-Controlled Continuous Air Drying and Flask Sampling System  

Science Conference Proceedings (OSTI)

A computer-controlled continuous air drying and flask sampling system has been developed and is discussed here. This system is set up for taking air samples automatically at remote places. Twenty glass flasks can be connected one by one or in ...

R. E. M. Neubert; L. L. Spijkervet; J. K. Schut; H. A. Been; H. A. J. Meijer

2004-04-01T23:59:59.000Z

436

Research Note: A high performance algorithm for static task scheduling in heterogeneous distributed computing systems  

Science Conference Proceedings (OSTI)

Effective task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems (HeDCSs). However, finding an effective task schedule in HeDCSs requires the consideration of both the heterogeneity of processors and ... Keywords: Directed acyclic graph, Heterogeneous systems, Heuristics, Parallel processing, Task scheduling

Mohammad I. Daoud; Nawwaf Kharma

2008-04-01T23:59:59.000Z
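Heuristics for static task scheduling on heterogeneous processors typically rank the DAG tasks (e.g. by upward rank) and then greedily place each task on the processor giving the earliest finish time, as in HEFT. The toy below follows that pattern; it is not the algorithm proposed in the paper, and all task and communication costs are invented:

```python
def heft_schedule(tasks, succ, comp, comm):
    """Simplified HEFT-style list scheduling for a heterogeneous system.

    tasks: list of task ids; succ: task -> list of successor tasks;
    comp[task][p]: execution time of task on processor p;
    comm[(a, b)]: transfer time when a and b run on different processors.
    Returns task -> (processor, start, finish).
    """
    n_proc = len(next(iter(comp.values())))
    rank = {}

    def upward_rank(t):
        # Average compute cost plus the most expensive downstream chain.
        if t not in rank:
            avg = sum(comp[t]) / n_proc
            rank[t] = avg + max((comm.get((t, s), 0) + upward_rank(s)
                                 for s in succ.get(t, [])), default=0)
        return rank[t]

    for t in tasks:
        upward_rank(t)

    proc_free = [0.0] * n_proc
    sched = {}
    for t in sorted(tasks, key=lambda x: -rank[x]):
        best = None
        for p in range(n_proc):
            # Data from predecessors on other processors pays a comm cost.
            ready = max((sched[d][2] + (0 if sched[d][0] == p else comm.get((d, t), 0))
                         for d in tasks if t in succ.get(d, [])), default=0.0)
            start = max(proc_free[p], ready)
            finish = start + comp[t][p]
            if best is None or finish < best[2]:
                best = (p, start, finish)
        sched[t] = best
        proc_free[best[0]] = best[2]
    return sched

# Tiny DAG: A -> B and A -> C, on two heterogeneous processors.
comp = {"A": [2, 3], "B": [4, 2], "C": [3, 3]}
succ = {"A": ["B", "C"]}
comm = {("A", "B"): 1, ("A", "C"): 1}
sched = heft_schedule(["A", "B", "C"], succ, comp, comm)
```

In this instance B migrates to the processor where it runs fastest despite paying a communication cost, while C stays co-located with A; both finish at time 5.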

437

Inertial-Dissipation Air-Sea Flux Measurements: A Prototype System Using Realtime Spectral Computations  

Science Conference Proceedings (OSTI)

A prototype system for the measurement and computation of airsea fluxes in realtime was tested in the Humidity Exchange Over the Sea (HEXOS) main experiment, HEXMAX. The system used a sonic anemometer/thermometer for wind speed, surface stress ...

C. W. Fairall; J. B. Edson; S. E. Larsen; P. G. Mestayer

1990-06-01T23:59:59.000Z

438

Janus II: a new generation application-driven computer for spin-system simulations  

E-Print Network (OSTI)

This paper describes the architecture, the development and the implementation of Janus II, a new generation application-driven number cruncher optimized for Monte Carlo simulations of spin systems (mainly spin glasses). This domain of computational physics is a recognized grand challenge of high-performance computing: the resources necessary to study in detail theoretical models that can make contact with experimental data are by far beyond those available using commodity computer systems. On the other hand, several specific features of the associated algorithms suggest that unconventional computer architectures, which can be implemented with available electronics technologies, may lead to order of magnitude increases in performance, reducing to acceptable values on human scales the time needed to carry out simulation campaigns that would take centuries on commercially available machines. Janus II is one such machine, recently developed and commissioned, that builds upon and improves on the successful JANUS m...

Baity-Jesi, M; Cruz, A; Fernandez, L A; Gil-Narvion, J M; Gordillo-Guerrero, A; Íñiguez, D; Maiorano, A; Mantovani, F; Marinari, E; Martin-Mayor, V; Monforte-Garcia, J; Sudupe, A Muñoz; Navarro, D; Parisi, G; Perez-Gaviro, S; Pivanti, M; Ricci-Tersenghi, F; Ruiz-Lorenzo, J J; Schifano, S F; Seoane, B; Tarancon, A; Tripiccione, R; Yllanes, D

2013-01-01T23:59:59.000Z

439

2010 Smart Grid System Report Available (February 2012) | Department of  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

2010 Smart Grid System Report Available (February 2012). February 24, 2012 - 2:58pm. The Department of Energy has submitted the 2010 Smart Grid System Report in response to Section 1302 of Title XIII of the Energy Independence and Security Act (EISA), which directs the Secretary of Energy to report to Congress concerning the status of smart grid deployments nationwide and any regulatory or government barriers to continued deployment. This is the second installment of this report to Congress. A smart grid uses digital technology to improve the reliability, security, efficiency, and environmental impact of the electricity system, from large generation through the delivery systems to electricity consumers.

440

The family of standard hydrogen monitoring system computer software design description: Revision 2  

DOE Green Energy (OSTI)

In March 1990, 23 waste tanks at the Hanford Nuclear Reservation were identified as having the potential for the buildup of gas to a flammable or explosive level. As a result of the potential for hydrogen gas buildup, a project was initiated to design a standard hydrogen monitoring system (SHMS) for use at any waste tank to analyze gas samples for hydrogen content. Since it was originally deployed three years ago, two variations of the original system have been developed: the SHMS-B and SHMS-C. All three are currently in operation at the tank farms and will be discussed in this document. To avoid confusion in this document, when a feature is common to all three of the SHMS variants, it will be referred to as "the family of SHMS." When it is specific to only one or two, they will be identified. The purpose of this computer software design document is to provide the following: the computer software requirements specification that documents the essential requirements of the computer software and its external interfaces; the computer software design description; the computer software user documentation for using and maintaining the computer software and any dedicated hardware; and the requirements for computer software design verification and validation.

Bender, R.M.

1994-11-16T23:59:59.000Z



441

Development of Computation Capabilities to Predict the Corrosion Wastage of Boiler Tubes in Advanced Combustion Systems  

NLE Websites -- All DOE Office Websites (Extended Search)

Computation Capabilities to Predict the Corrosion Wastage of Boiler Tubes in Advanced Combustion Systems Background Staged combustion is a method of reducing nitrogen oxide (NOx) emissions in boilers by controlling the combustion mixture of air and fuel. Its process conditions are particularly corrosive to lower furnace walls. Superheaters and/or reheaters are often employed in the upper furnace to reuse hot combustion gases to further raise the

442

Report of the Snowmass 2013 Computing Frontier working group on Lattice Field Theory -- Lattice field theory for the energy and intensity frontiers: Scientific goals and computing needs  

E-Print Network (OSTI)

This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study ("Snowmass"). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.

Blum, T; Holmgren, D; Brower, R; Catterall, S; Christ, N; Kronfeld, A; Kuti, J; Mackenzie, P; Neil, E T; Sharpe, S R; Sugar, R

2013-01-01T23:59:59.000Z

443

Report of the Snowmass 2013 Computing Frontier working group on Lattice Field Theory -- Lattice field theory for the energy and intensity frontiers: Scientific goals and computing needs  

E-Print Network (OSTI)

This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study ("Snowmass"). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.

T. Blum; R. S. Van de Water; D. Holmgren; R. Brower; S. Catterall; N. Christ; A. Kronfeld; J. Kuti; P. Mackenzie; E. T. Neil; S. R. Sharpe; R. Sugar

2013-10-23T23:59:59.000Z

444

Options for improving computing and data system support for HQ USTRANSCOM (Headquarters, US Transportation Command) deployment planning  

Science Conference Proceedings (OSTI)

The Decision Systems Research Section of the Oak Ridge National Laboratory (ORNL) is assisting the Deployment Systems Division of the Headquarters, US Transportation Command (HQ USTRANSCOM) with an evaluation of options for improving the computing and data systems support for deliberate and time-critical joint deployment planning. USTRANSCOM, which is a unified command (i.e., personnel are drawn from all the services), was created in the fall of 1987 to consolidate the functions of the former military transportation operating agencies (the Military Airlift Command, the Military Traffic Management Command, and the Military Sealift Command). An important factor in the creation of USTRANSCOM was the possibility of achieving more efficient joint deployment planning through consolidation of the computing and data systems used by the command's strategic mobility planners and operation center personnel. This report, the third in a series to be produced in the course of ORNL studies for USTRANSCOM, presents options for improving automation support for HQ USTRANSCOM deployment planning. The study covered methods for improving data concepts used in deployment databases, recommendations for extending the life of the Joint Deployment system, and alternatives for integrating HQ USTRANSCOM planning support with systems at MAC, MTMC, and MSC. 36 refs.

Not Available

1988-08-01T23:59:59.000Z

445

3rd Workshop on System-level Virtualization for High Performance Computing (HPCVirt) 2009, Nuremberg, Germany, March 30, 2009  

E-Print Network (OSTI)

3rd Workshop on System-level Virtualization for High Performance Computing (HPCVirt) 2009, Nuremberg, Germany, March 30, 2009. [Slide-deck extraction; only the workshop name, venue, and date are recoverable.]

Engelmann, Christian

446

IEEE TRANSACTIONS ON COMPUTERS, VOL. c-30, NO. 12, DECEMBER 1981 PASM: A Partitionable SIMD/MIMD System  

E-Print Network (OSTI)

IEEE TRANSACTIONS ON COMPUTERS, VOL. C-30, NO. 12, DECEMBER 1981. PASM: A Partitionable SIMD/MIMD System. PASM consists of a parallel computation unit, which contains N processors and N memories. Index terms: multiple-SIMD machines, parallel processing, partitionable computer systems, PASM, recon

447

Project Assessment and Reporting System (PARS II) | Department...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

byDeputySecretaryPoneman17May2010.pdf. More Documents & Publications: Earned Value (EV) Analysis and Project Assessment & Reporting System (PARS II); PARS II Data Quality...

448

Model documentation report: Industrial sector demand module of the national energy modeling system  

SciTech Connect

This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Model. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code. This document serves three purposes. First, it is a reference document providing a detailed description of the NEMS Industrial Model for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its model. Third, it facilitates continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements as future projects.

NONE

1998-01-01T23:59:59.000Z

449

The Development of Gait Training System for Computer-Aided Rehabilitation  

Science Conference Proceedings (OSTI)

This paper reports on the development of a gait training system and its experimental results. The outstanding feature of the system is the constant dynamic adjustment of the tension lifting the trainee based on the information from floor reaction force. ...

Hidetaka Ikeuchi; Satoshi Arakane; Kengo Ohnishi; Keiji Imado; Yukio Saito; Hiroomi Miyagawa

2002-07-01T23:59:59.000Z

450

SUPPORT OF NEW COMPUTER HARDWARE AT LUCH'S MC&A SYSTEM: PROBLEMS AND A SOLUTION  

Science Conference Proceedings (OSTI)

Microsoft Windows NT 4.0 is the only operating system certified in Russia for use in MC&A systems. This paper discusses a solution that allows installation of this outdated operating system on new computers. The solution has been successfully tested and has been in use on Luch's network since March 2008, and it is being recommended to other Russian enterprises for the same purpose. Typically, the software part of a nuclear material control and accounting (MC&A) system consists of an operating system (OS), a database management system (DBMS), the accounting program itself, and a database of nuclear materials. Russian regulations require that the operating system and database for MC&A be certified for information security, and the whole system must pass an accreditation. Historically, the only certified operating system for MC&A continues to be Microsoft Windows NT 4.0 Server/Workstation; attempts to certify newer versions of Windows failed. Luch, like most other Russian sites, uses Microsoft Windows NT 4.0 and SQL Server 6.5. Luch's specialists have developed an application (LuchMAS) for accounting purposes. Starting in about 2004, problems appeared in Luch's accounting system related to the difficulty of installing Windows NT 4.0 on new computers. At first, it was possible to solve the problem by choosing computer equipment compatible with Windows NT 4.0 or by selecting certain operating system settings. Over time, the problem worsened, and now it is almost impossible to install Windows NT 4.0 on new computers. The reason is the lack of hardware drivers in the outdated operating system. The problem was serious enough that it could have affected the long-term sustainability of Luch's MC&A system if adequate alternate measures had not been developed.

Fedoseev, Victor; Shanin, Oleg

2009-07-14T23:59:59.000Z

451

Analysis of fuel savings associated with fuel computers in multifamily buildings. Final report  

SciTech Connect

This research was undertaken to quantify the energy savings associated with the installation of a direct monitoring control system (DMC) on steam heating plants in multi-family buildings located in the New York City metropolitan area. The primary objective was to determine whether fuel consumption was lower in buildings employing a DMC relative to those using the more common indirect monitoring control system (IMC) and, if so, to what extent. The analysis compares the fuel consumption of 442 buildings over 12 months. The type of control system installed in these buildings was either a Heat-Timer (identified as IMC equipment) or a computer-based unit (identified as DMC equipment). IMC provides control by running the boiler for longer or shorter periods depending on outdoor temperature. This system is termed indirect because there is no feedback from indoor (apartment) temperatures to the control. DMC provides control by sensing apartment temperatures. In a typical multifamily building, sensors are hard-wired in between 5 and 10 apartments. The annual savings and simple payback were computed for the DMC buildings by comparing annual fuel consumption among the building groupings. The comparison is based on mean BTUs per degree day consumed annually and normalized for building characteristics, such as equipment maintenance and boiler steady-state efficiency, as well as weather conditions. The average annual energy consumption for the DMC buildings was 14.1 percent less than the annual energy consumption for the IMC buildings. This represents 3,826 gallons of No. 6 fuel oil, or $2,295 at a price of $0.60 per gallon. A base DMC system costs from $8,400 to $10,000 installed, depending on the number of sensors and the complexity of the system. The standard IMC system costs from $2,000 to $3,000 installed. Based on this analysis, the average simple payback is 2.9 or 4.0 years, depending on whether the DMC is part of a new installation (2.9 years) or an upgrade from IMC to DMC (4.0 years).
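The abstract's payback figures can be reproduced with simple arithmetic. The Python sketch below was written for this summary; the use of midpoint installed costs is an assumption of the sketch, not a method stated in the report.

```python
# Reconstruction of the payback arithmetic in the abstract above. The dollar
# figures come from the report; using midpoint installed costs is my assumption.

GALLONS_SAVED = 3826        # annual No. 6 fuel oil savings per DMC building
PRICE_PER_GALLON = 0.60     # $/gallon, as quoted

annual_savings = GALLONS_SAVED * PRICE_PER_GALLON       # ~$2,295.60/yr

dmc_cost = (8400 + 10000) / 2    # midpoint installed cost of a DMC system
imc_cost = (2000 + 3000) / 2     # midpoint installed cost of an IMC system

# Upgrade case: the full DMC cost must be recovered by fuel savings.
payback_upgrade = dmc_cost / annual_savings
# New-installation case: only the incremental cost over an IMC counts.
payback_new = (dmc_cost - imc_cost) / annual_savings

print(f"${annual_savings:.2f}/yr, upgrade {payback_upgrade:.1f} yr, new {payback_new:.1f} yr")
# -> $2295.60/yr, upgrade 4.0 yr, new 2.9 yr
```

The 2.9-year figure falls out only if the DMC is charged at its incremental cost over an IMC system, which is consistent with the report's "new installation" case.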

McNamara, M.; Anderson, J.; Huggins, E. [EME Group, New York, NY (US)

1993-06-01T23:59:59.000Z

452

SIMS Prototype System 4: performance test report  

DOE Green Energy (OSTI)

The results obtained during testing of a self-contained, preassembled air type solar system, designed for installation remote from the dwelling, to provide space heating and hot water are presented. Data analysis is included which documents the system performance and verifies the suitability of SIMS Prototype System 4 for field installation.

Not Available

1978-10-09T23:59:59.000Z

453

Acceptance Test Report for 241-U compressed air system  

SciTech Connect

This Acceptance Test Report (ATR) documents the results of acceptance testing of a newly upgraded compressed air system at 241-U Farm. The system was installed and the test successfully performed under work package 2W-92-01027.

Freeman, R.D.

1994-10-20T23:59:59.000Z

454

A. TECHNICAL PROGRESS REPORT ON METABOLISM IN NERVOUS SYSTEM  

SciTech Connect

Progress is reported in investigations into nucleic acid and protein metabolism in the central nervous system. Data are presented from biochemical and histochemical studies of metabolism in the nervous system of cats. (C.H.)

1961-01-01T23:59:59.000Z

455

Machine Environment FAQs on BG/P Systems | Argonne Leadership Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Machine Environment FAQs on BG/P Systems. What is the /proc filesystem? The CNK OS on the compute nodes does not provide a standard /proc file system with information about the processes running on the compute node. A /jobs directory does, however, exist which provides limited information about

456

DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems  

Science Conference Proceedings (OSTI)

Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.
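The thesis does not publish its trust model's equations here, so the fragment below is only a generic Python illustration of per-manager trust scoring using a standard beta-reputation update; the class and method names are invented for this sketch and are not part of the DualTrust model.

```python
# Illustrative sketch: track the trustworthiness of an autonomic manager
# (rather than each swarming sensor) with a standard beta-reputation score.
# This is a generic technique, not the specific model the thesis defines.

class ManagerTrust:
    """Accumulate good/bad observations of an autonomic manager's behavior."""

    def __init__(self):
        self.good = 1   # Beta prior: one pseudo-observation each way
        self.bad = 1

    def observe(self, behaved_well: bool):
        if behaved_well:
            self.good += 1
        else:
            self.bad += 1

    def score(self) -> float:
        # Expected value of a Beta(good, bad) distribution.
        return self.good / (self.good + self.bad)

t = ManagerTrust()
for outcome in [True, True, True, False]:
    t.observe(outcome)
print(round(t.score(), 2))   # -> 0.67
```

Scoring the few long-lived managers instead of the many ephemeral agents is what makes the approach scalable, as the abstract argues.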

Maiden, Wendy M.

2010-05-01T23:59:59.000Z

457

Computer Aided Fault Tree Analysis System (CAFTA), Version 6.0 Demo  

Science Conference Proceedings (OSTI)

CAFTA is a computer software program used for developing reliability models of large complex systems, using fault tree and event tree methodology. Description: CAFTA is designed to meet the many needs of reliability analysts while performing fault tree/event tree analysis on a system or group of systems. It includes: a Fault Tree Editor for building, updating, and printing fault tree models; an Event Tree Editor for building, ...

2013-02-18T23:59:59.000Z

458

2009 Smart Grid System Report (July 2009)  

Energy.gov (U.S. Department of Energy (DOE))

Section 1302 of Title XIII of the Energy Independence and Security Act of 2007 directs the Secretary of Energy to report to Congress concerning the status of smart grid deployments nationwide and...

459

Hardware and Software Upgrades to DIII-D Main Computer Control System  

Science Conference Proceedings (OSTI)

The complexities of monitoring and controlling the various DIII-D tokamak systems have always required the aid of high-speed computer resources. Because of recent improvements in computing technology, DIII-D has upgraded both hardware and software for the central DIII-D control system. This system is responsible for coordination of all main DIII-D subsystems during a plasma discharge. The replacement of antiquated older hardware has increased reliability and reduced costs both in the initial procurement and eventual maintenance of the system. As expected, upgrading the corresponding computer software has become the more time consuming and expensive part of this upgrade. During this redesign, the main issues focused on making the most of existing in-house codes, speed with which the new system could be brought on-line, the ability to add new features/enhancements, ease of integration with all DIII-D systems and future portability/upgrades. The resulting system has become a template by which other DIII-D systems can follow during similar upgrade paths; in particular DIII-D's main data acquisition system and neutral beam injection (NBI).

Piglowski, D. A.; Penaflor, B.G.; McHarg, Jr., B.B.; Greene, K.L.; Coon, R.M.; Phillips, J.C.

2002-09-01T23:59:59.000Z

460

Manzanita Hybrid Power system Project Final Report  

DOE Green Energy (OSTI)

The Manzanita Indian Reservation is located in southeastern San Diego County, California. The Tribe has long recognized that the Reservation has an abundant wind resource that could be commercially utilized to its benefit, and in 1995 the Tribe established the Manzanita Renewable Energy Office. Through the U.S. Department of Energy's Tribal Energy Program the Band received funds to install a hybrid renewable power system to provide electricity to one of the tribal community buildings, the Manzanita Activities Center (MAC building). The project began September 30, 1999 and was completed March 31, 2005. The system was designed and the equipment supplied by Northern Power Systems, Inc., an engineering company with expertise in renewable hybrid system design and development. Personnel of the National Renewable Energy Laboratory provided technical assistance in system design, and continued to provide technical assistance in system monitoring. The grid-connected renewable hybrid wind/photovoltaic system provides a demonstration of a solar/wind energy hybrid power-generating project on Manzanita Tribal land. During the system design phase, the National Renewable Energy Laboratory estimated that the wind turbine is expected to produce 10,000 kilowatt-hours per year and the solar array 2,000 kilowatt-hours per year. The hybrid system was designed to provide approximately 80 percent of the electricity used annually in the MAC building. The project proposed to demonstrate that this kind of system design would provide highly reliable renewable power for community uses.
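As a rough consistency check on the abstract's numbers: 10,000 kWh/yr of wind plus 2,000 kWh/yr of solar covering about 80 percent of the building load implies an annual MAC building consumption of roughly 15,000 kWh. The implied load is a back-calculation for this summary, not a figure stated in the report.

```python
# Consistency check on the production estimates quoted in the abstract. The
# 10,000 and 2,000 kWh/yr outputs and the ~80% coverage target come from the
# report; the implied building load is my inference.

wind_kwh = 10_000      # NREL estimate: annual wind turbine output
solar_kwh = 2_000      # NREL estimate: annual PV array output
coverage = 0.80        # fraction of the MAC building load the system targets

total_renewable = wind_kwh + solar_kwh
implied_building_load = total_renewable / coverage   # roughly 15,000 kWh/yr

print(total_renewable, round(implied_building_load))   # -> 12000 15000
```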

Trisha Frank

2005-03-31T23:59:59.000Z



461

A new approach in utilizing a computer data acquisition system for criticality safety control  

SciTech Connect

A new approach utilizing a computer data acquisition system is proposed to address many issues associated with criticality safety control. This Criticality Safety Support System (CSSS) utilizes many features of computer and information-processing technology, such as digital pictures, barcodes, and voice data entry, to enhance criticality safety in an R&D environment. Through the on-line data retrieval, data recording, and data management offered by new technology, the CSSS provides a framework for designing new solutions to old problems. This pilot program is the first step in developing the application for the years to come.

Hopkins, H; Song, H; Warren, F

1999-05-06T23:59:59.000Z

462

Condition monitoring through advanced sensor and computational technology : final report (January 2002 to May 2005).  

SciTech Connect

The overall goal of this joint research project was to develop and demonstrate advanced sensors and computational technology for continuous monitoring of the condition of components, structures, and systems in advanced and next-generation nuclear power plants (NPPs). This project included investigating and adapting several advanced sensor technologies from the Korean and US national laboratory research communities, some of which were developed and applied in non-nuclear industries. The project team investigated and developed sophisticated signal processing, noise reduction, and pattern recognition techniques and algorithms. The researchers installed sensors and conducted condition monitoring tests on two test loops, a check valve (an active component) and a piping elbow (a passive component), to demonstrate the feasibility of using advanced sensors and computational technology to achieve the project goal. Acoustic emission (AE) devices, optical fiber sensors, accelerometers, and ultrasonic transducers (UTs) were used to detect the mechanical vibratory response of the check valve and piping elbow in normal and degraded configurations. Chemical sensors were also installed to monitor the water chemistry in the piping elbow test loop. Analysis results of processed sensor data indicate that it is feasible to differentiate between the normal and degraded (with selected degradation mechanisms) configurations of these two components from the acquired sensor signals, but it is questionable whether these methods can reliably identify the level and type of degradation. Additional research and development efforts are needed to refine the differentiation techniques and to reduce the level of uncertainties.
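The report names its techniques only generically (signal processing, noise reduction, pattern recognition), so the snippet below is merely a minimal Python illustration of one classic noise-reduction step of the kind such condition-monitoring pipelines apply before pattern recognition; it is not the project's actual algorithm.

```python
# Illustrative only: a simple boxcar (moving-average) smoother, one classic
# noise-reduction step applied to raw sensor signals before feature extraction.

def moving_average(signal, window):
    """Smooth a 1-D signal with a running boxcar average of the given width."""
    if window < 1 or window > len(signal):
        raise ValueError("window must be between 1 and len(signal)")
    out = []
    running = sum(signal[:window])
    out.append(running / window)
    for i in range(window, len(signal)):
        # Slide the window: add the new sample, drop the oldest one.
        running += signal[i] - signal[i - window]
        out.append(running / window)
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # alternating "vibration" samples
print(moving_average(noisy, 2))          # -> [0.5, 0.5, 0.5, 0.5, 0.5]
```

A window of 2 collapses the alternating component entirely, which is why window width must be chosen against the vibration frequencies of interest.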

Kim, Jung-Taek (Korea Atomic Energy Research Institute, Daejon, Korea); Luk, Vincent K.

2005-05-01T23:59:59.000Z

463

AN ENERGY EFFICIENT WINDOW SYSTEM FINAL REPORT.  

E-Print Network (OSTI)

... greenhouses, passive solar heating and other non-viewing ... passive and active solar heating systems ... the fact remains ... the expense of winter solar heating), Life Cycle Costing ...

Authors, Various

2011-01-01T23:59:59.000Z

464

Microelectromechanical Systems Academic and Research Staff  

E-Print Network (OSTI)

Foundation Career Development Professorship (Freeman). Computer Microvision for MEMS: Academic and Research Staff. Circuits, Systems and Communications: Computer Microvision for Microelectromechanical Systems, RLE Progress Report 143.

465

A grid-enabled MPI : message passing in heterogeneous distributed computing systems.  

SciTech Connect

Application development for high-performance distributed computing systems, or computational grids as they are sometimes called, requires grid-enabled tools that hide mundane aspects of the heterogeneous grid environment without compromising performance. As part of an investigation of these issues, they have developed MPICH-G, a grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers at different sites using the same commands that would be used on a parallel computer. This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus grid toolkit. In this paper, they describe the MPICH-G implementation and present preliminary performance results.
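MPICH-G's point is that MPI send/receive semantics carry over unchanged to a heterogeneous grid. Since running real MPI requires an installed runtime, the Python sketch below mimics the basic distribute-work / gather-results pattern with the standard library's multiprocessing module; the function and variable names are illustrative and are not part of the MPICH-G API.

```python
# Stand-in for the MPI pattern the abstract describes: worker "ranks" compute
# partial results and send them to rank 0, which gathers them (analogous to
# MPI point-to-point sends followed by a gather at the root).

import multiprocessing as mp

def worker(rank, conn):
    # Each rank computes a partial result and sends it back over its pipe,
    # analogous to an MPI_Send from a remote node.
    conn.send((rank, rank * rank))
    conn.close()

def main():
    n_ranks = 4
    parent_conns, procs = [], []
    for rank in range(1, n_ranks):
        parent, child = mp.Pipe()
        p = mp.Process(target=worker, args=(rank, child))
        p.start()
        procs.append(p)
        parent_conns.append(parent)
    # Rank 0 gathers one message per worker, analogous to MPI_Gather.
    results = dict(conn.recv() for conn in parent_conns)
    for p in procs:
        p.join()
    print(sorted(results.items()))   # -> [(1, 1), (2, 4), (3, 9)]

if __name__ == "__main__":
    main()
```

What MPICH-G adds over this toy picture is exactly what the abstract claims: the same program runs unmodified whether the "ranks" live on one parallel computer or on machines at different sites.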

Foster, I.; Karonis, N. T.

2000-11-30T23:59:59.000Z

466

Risø Energy Report 7: Future low carbon energy systems

E-Print Network (OSTI)

Risø Energy Report 7: Future low carbon energy systems. Reprint of summary and recommendations. Risø-R-1651(EN), October 2008. Edited by Hans Larsen and Leif Sønderberg Petersen. Preface: This Risø Energy Report, the seventh of a series that began in 2002, takes as its