Note: This page contains sample records for the topic "large-scale scientific computing" from the National Library of Energy Beta (NLE Beta). While these samples are representative of the content of NLE Beta, they are neither comprehensive nor the most current set. We encourage you to perform a real-time search of NLE Beta to obtain the most current and comprehensive results.


1

Large Scale Computing and Storage Requirements for Advanced Scientific...  

NLE Websites -- All DOE Office Websites (Extended Search)

Large Scale Computing and Storage Requirements for Advanced Scientific Computing Research: Target 2014

2

Exploring Cloud Computing for Large-scale Scientific Applications  

Science Conference Proceedings (OSTI)

This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications, which often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high-performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

Lin, Guang; Han, Binh; Yin, Jian; Gorton, Ian

2013-06-27T23:59:59.000Z
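The infrastructure described in this record dynamically selects HPC hardware across institutions. As a rough illustration of that selection step only (the paper exposes no API; the Site fields, scoring rule, and numbers below are invented for this sketch):

```python
# Hypothetical sketch: pick the best site for a data-intensive job based on
# simple requirements, in the spirit of the infrastructure described above.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    cores: int
    latency_us: float   # interconnect latency
    free_tb: float      # available storage in TB

def select_site(sites, min_cores, data_tb, max_latency_us):
    """Return the eligible site with the most cores, or None."""
    eligible = [s for s in sites
                if s.cores >= min_cores
                and s.free_tb >= data_tb
                and s.latency_us <= max_latency_us]
    return max(eligible, key=lambda s: s.cores, default=None)

sites = [Site("clusterA", 4096, 1.5, 500.0), Site("clusterB", 512, 50.0, 80.0)]
best = select_site(sites, min_cores=1024, data_tb=200.0, max_latency_us=5.0)
print(best.name if best else "no site meets the requirements")
```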

3

Computation in Large-Scale Scientific and Internet Data Applications is a Focus of MMDS 2010  

E-Print Network (OSTI)

The 2010 Workshop on Algorithms for Modern Massive Data Sets (MMDS 2010) was held at Stanford University, June 15--18. The goals of MMDS 2010 were (1) to explore novel techniques for modeling and analyzing massive, high-dimensional, and nonlinearly-structured scientific and Internet data sets; and (2) to bring together computer scientists, statisticians, applied mathematicians, and data analysis practitioners to promote cross-fertilization of ideas. MMDS 2010 followed on the heels of two previous MMDS workshops. The first, MMDS 2006, addressed the complementary perspectives brought by the numerical linear algebra and theoretical computer science communities to matrix algorithms in modern informatics applications; and the second, MMDS 2008, explored more generally fundamental algorithmic and statistical challenges in modern large-scale data analysis.

Mahoney, Michael W

2010-01-01T23:59:59.000Z

4

MTC Envelope: Defining the Capability of Large Scale Computers...  

NLE Websites -- All DOE Office Websites (Extended Search)

MTC Envelope: Defining the Capability of Large Scale Computers in the Context of Parallel Scripting Applications

5

DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific Computing. May 16, 2005 - 12:47pm. WASHINGTON, D.C. -- Secretary of Energy Samuel W. Bodman announced today that DOE's Office of Science is seeking proposals to support innovative, large-scale computational science projects to enable high-impact advances through the use of advanced computers not commonly available in academia or the private sector. Projects currently funded are helping to reduce engine pollution and to improve our understanding of the stars and solar systems and human genetics. Successful proposers will be given the use of substantial computer time and data storage at the department's scientific

6

DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific Computing. May 16, 2005 - 12:47pm. WASHINGTON, D.C. -- Secretary of Energy Samuel W. Bodman announced today that DOE's Office of Science is seeking proposals to support innovative, large-scale computational science projects to enable high-impact advances through the use of advanced computers not commonly available in academia or the private sector. Projects currently funded are helping to reduce engine pollution and to improve our understanding of the stars and solar systems and human genetics. Successful proposers will be given the use of substantial computer time and data storage at the department's scientific

7

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

of Science, Advanced Scientific Computing Research (ASCR) ... Office of Advanced Scientific Computing Research, Facilities ... NP) Office of Advanced Scientific Computing Research (ASCR)

Gerber, Richard A.

2012-01-01T23:59:59.000Z

8

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

of Science, Advanced Scientific Computing Research (ASCR) ... Office of Advanced Scientific Computing Research, Facilities ... Office of Advanced Scientific Computing Research (ASCR), and

Gerber, Richard A.

2011-01-01T23:59:59.000Z

9

Large Scale Computing and Storage Requirements for Fusion Energy...  

NLE Websites -- All DOE Office Websites (Extended Search)

at NERSC ... HPC Requirements Reviews ... Requirements for Science: Target 2014 ... Fusion Energy Sciences (FES) ... Large Scale Computing and Storage Requirements for Fusion Energy...

10

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

outlined in the 2011 DOE Strategic Plan†. † U.S. Department ... strategic plans. Large Scale Computing and Storage Requirements for Nuclear Physics ... DOE

Gerber, Richard A.

2012-01-01T23:59:59.000Z

11

Large Scale Computing and Storage Requirements for Basic Energy...  

NLE Websites -- All DOE Office Websites (Extended Search)

at NERSC ... HPC Requirements Reviews ... Requirements for Science: Target 2014 ... Basic Energy Sciences (BES) ... Large Scale Computing and Storage Requirements for Basic Energy...

12

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

SciTech Connect

The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC's continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called "case studies," of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

Gerber, Richard A.; Wasserman, Harvey J.

2012-03-02T23:59:59.000Z

13

Benchmarking parallel i/o performance for a large scale scientific application on the teragrid  

Science Conference Proceedings (OSTI)

This paper is a report on experiences in benchmarking I/O performance on leading computational facilities on the NSF TeraGrid network with a large scale scientific application. Instead of focusing only on the raw file I/O bandwidth provided by different ...

Frank Löffler; Jian Tao; Gabrielle Allen; Erik Schnetter

2009-08-01T23:59:59.000Z

14

Large Scale Computing and Storage Requirements for Nuclear Physics  

NLE Websites -- All DOE Office Websites (Extended Search)

Office of Science, Office of Advanced Scientific Computing Research (ASCR), Office of Nuclear Physics (NP), and the National Energy Research Scientific Computing Center (NERSC)...

15

Large Scale Computing and Storage Requirements for Biological...  

NLE Websites -- All DOE Office Websites (Extended Search)

of Energy's Office of Biological & Environmental Research and Advanced Scientific Computing Research (ASCR) to elucidate computing requirements for biological and...

16

Large Scale Computing and Storage Requirements for High Energy Physics  

Science Conference Proceedings (OSTI)

The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

Gerber, Richard A.; Wasserman, Harvey

2010-11-24T23:59:59.000Z

17

A Simulator for Large-Scale Parallel Computer Architectures  

Science Conference Proceedings (OSTI)

Efficient design of hardware and software for large-scale parallel execution requires detailed understanding of the interactions between the application, computer, and network. The authors have developed a macro-scale simulator SST/macro that permits ... Keywords: Computer Architecture Simulation, Macro-scale Simulator, Message Passing Interface, Network Congestion, Network Models

Helgi Adalsteinsson; Scott Cranford; David A. Evensky; Joseph P. Kenny; Jackson Mayo; Ali Pinar; Curtis L. Janssen

2010-04-01T23:59:59.000Z

18

Computational challenges in large-scale air pollution modelling  

Science Conference Proceedings (OSTI)

Many difficulties must be overcome when large-scale air pollution models are treated numerically, because the physical and chemical processes in the atmosphere are very fast. This is why it is necessary to use a large space domain in order ... Keywords: air pollution models, finite elements, ordinary differential equations, parallel computational, partial differential equations, quasi-steady-state-approximation

Tzvetan Ostromsky; Wojciech Owczarz; Zahari Zlatev

2001-06-01T23:59:59.000Z

19

Advanced I/O for large-scale scientific applications.  

SciTech Connect

As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for a closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets, staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole, to select data and data features without concern for what files or other storage technologies were employed. All of these features should be attained while maintaining a simple deployment for the science code and eliminating the need for allocation of additional computational resources.

Klasky, Scott (Oak Ridge National Laboratory, Oak Ridge, TN); Schwan, Karsten (Georgia Institute of Technology, Atlanta, GA); Oldfield, Ron A.; Lofstead, Gerald F., II (Georgia Institute of Technology, Atlanta, GA)

2010-01-01T23:59:59.000Z
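The staging techniques this abstract describes move output off the compute path so that I/O overlaps computation. A minimal single-node sketch of that idea, using a plain thread as a stand-in "staging node" and an invented file-naming scheme (the real systems stage across dedicated nodes and networks):

```python
# Sketch of the data-staging idea: the compute loop hands output to a
# staging thread and continues, so I/O overlaps computation.
import pickle
import queue
import threading

stage = queue.Queue(maxsize=8)   # bounded: applies back-pressure if I/O lags

def drain():
    while True:
        item = stage.get()
        if item is None:          # sentinel: shut down the writer
            break
        step, data = item
        with open(f"out_{step:06d}.pkl", "wb") as f:
            pickle.dump(data, f)  # synchronous write, off the critical path
        stage.task_done()

writer = threading.Thread(target=drain, daemon=True)
writer.start()

for step in range(5):
    data = [step * i for i in range(1000)]  # stand-in for simulation output
    stage.put((step, data))                 # returns quickly; write happens later

stage.put(None)
writer.join()
```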

20

Trace-Penalty Minimization for Large-scale Eigenspace Computation  

E-Print Network (OSTI)

Mar 6, 2013 ... U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research (and Basic Energy Sciences) under award number ...



21

Large Scale Computing and Storage Requirements for Biological...  

NLE Websites -- All DOE Office Websites (Extended Search)

Sponsored by: U.S. Department of Energy Office of Science Office of Advanced Scientific Computing Research (ASCR) Office of Biological and Environmental Research (BER) National...

22

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

strategic plans. Large Scale Computing and Storage Requirements for Fusion Energy Sciences ... DOE

Gerber, Richard

2012-01-01T23:59:59.000Z

23

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

Director ... Editors: Richard Gerber, Harvey Wasserman, NERSC User Services Group ... Large Scale ... NERSC

Gerber, Richard A.

2011-01-01T23:59:59.000Z

24

Lightweight computational steering of very large scale molecular dynamics simulations  

Science Conference Proceedings (OSTI)

We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

Beazley, D.M. [Univ. of Utah, Salt Lake City, UT (United States). Dept. of Computer Science; Lomdahl, P.S. [Los Alamos National Lab., NM (United States)

1996-09-01T23:59:59.000Z
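A toy illustration of the lightweight-steering pattern the abstract describes: the simulation polls a command queue between timesteps, so an interactive script can change parameters mid-run. All names here are hypothetical; the actual system embedded Tcl/Perl/Python into production MD codes on the CM-5 and T3D.

```python
# Toy sketch of lightweight computational steering: between timesteps the
# simulation applies pending user commands. The "physics" is a placeholder.
import queue

commands = queue.Queue()
state = {"temperature": 300.0, "dt": 1.0e-3, "step": 0}

def md_step(state):
    state["step"] += 1          # placeholder for a real force/integrate step

def steer(state):
    while not commands.empty():
        key, value = commands.get()
        if key in state:
            print(f"step {state['step']}: set {key} = {value}")
            state[key] = value

commands.put(("temperature", 350.0))   # e.g. issued from a scripting console

for _ in range(3):
    md_step(state)
    steer(state)                        # apply pending user commands
```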

25

Applications of large-scale computation to particle accelerators  

SciTech Connect

The rapid growth in the power of large-scale computers has had a revolutionary effect on the study of charged-particle accelerators that is similar to the impact of smaller computers on everyday life. Before an accelerator is built, it is now the absolute rule to simulate every component and subsystem by computer to establish modes of operation and tolerances. We will bypass the important and fruitful areas of control and operation, and consider only application to design and diagnostic interpretation. Applications of computers can be divided into separate categories including: component design, system design, stability studies, cost optimization, and operating condition simulation. For the purposes of this report, we will choose a few examples from the above categories to illustrate the methods used, and discuss the significance of the work to the project. We also briefly discuss the accelerator project itself. The examples that will be discussed are: The design of accelerator structures for electron-positron linear colliders and circular colliding beam systems, simulation of the wake fields from multibunch electron beams for linear colliders. Particle-in-cell simulation of space-charge dominated beams for an experimental linear induction accelerator for Heavy Ion Fusion.

Herrmannsfeldt, W.B.

1991-05-01T23:59:59.000Z

26

Measuring and tuning energy efficiency on large scale high performance computing platforms.  

Science Conference Proceedings (OSTI)

Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

Laros, James H., III

2011-08-01T23:59:59.000Z
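The report above relies on Cray XT-specific in-situ current and voltage instrumentation, which has no portable equivalent. As a loosely related illustration only, here is how one might measure the energy of a code region on a Linux host that exposes Intel RAPL counters through the powercap interface; the path and its availability vary by machine.

```python
# Hedged sketch: energy of a code region via Linux powercap RAPL (Intel).
# Illustration only -- the report used Cray XT instrumentation, not RAPL.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 counter

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

before = read_uj()
t0 = time.time()
sum(i * i for i in range(10**7))        # stand-in for the workload
joules = (read_uj() - before) / 1e6     # counter wraps eventually; ignored here
print(f"~{joules:.2f} J in {time.time() - t0:.2f} s")
```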

27

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

E-Print Network (OSTI)

BES) Office of Advanced Scientific Computing Research (ASCR) ... of Science, Advanced Scientific Computing Research (ASCR) ... Office of Advanced Scientific Computing Research, Facilities

Gerber, Richard

2012-01-01T23:59:59.000Z

28

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

E-Print Network (OSTI)

of Science, Advanced Scientific Computing Research (ASCR) ... Office of Advanced Scientific Computing Research, Facilities ... Office of Advanced Scientific Computing Research (ASCR), and

DOE Office of Science, Biological and Environmental Research Program Office BER,

2010-01-01T23:59:59.000Z

29

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

Act of 2009 ... Advanced Scientific Computing Research Course ... of Science, Advanced Scientific Computing Research (ASCR) ... and for Advanced Scientific Computing Research, Facilities

Gerber, Richard

2012-01-01T23:59:59.000Z

30

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

Acronyms ... Argonne Leadership Computing Facility ... adaptive ... the Leadership Computing Facilities at Oak Ridge and Argonne

Gerber, Richard A.

2011-01-01T23:59:59.000Z

31

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

Computing and Storage Requirements for High Energy Physics ... for High Energy Physics Computational and Storage ... for High Energy Physics Computational and Storage

Gerber, Richard A.

2011-01-01T23:59:59.000Z

32

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

the application of high performance computing (HPC) to the ... acceleration and high performance computing. He was the ... libraries, and high performance computing. Lee is an active

Gerber, Richard A.

2011-01-01T23:59:59.000Z

33

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

proceedings of High Performance Computing – 2011 (HPC-2011) ... In recent years, high performance computing has become ... NERSC is the primary high-performance computing facility for

Gerber, Richard A.

2012-01-01T23:59:59.000Z

34

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

SciTech Connect

The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues raised by researchers at the workshop.

Gerber, Richard; Wasserman, Harvey

2011-03-31T23:59:59.000Z

35

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

SciTech Connect

The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a brief summary of those relevant to issues raised by researchers at the workshop.

Gerber, Richard; Wasserman, Harvey

2011-03-31T23:59:59.000Z

36

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

in-depth tracking and analysis of job failures, and support ... automatic analysis after batch compute jobs complete.

Gerber, Richard A.

2011-01-01T23:59:59.000Z

37

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

Iowa State University) NERSC Repository: m94 ... 10.2.2.1 Joseph Carlson (LANL) NERSC Repository: m308 ... 10.2.3.1 Scientific Objectives: This NERSC repository supports NP-

Gerber, Richard A.

2012-01-01T23:59:59.000Z

38

The autonomous concurrent strategy for large scale CAE computation  

Science Conference Proceedings (OSTI)

The paper presents the Agent-Oriented technology for running the parallel CAE computation. Fast and effective distributed diffusion scheduling is available that minimizes computation and communication time necessary for task governing and provides transparency ...

P. Uhruski; W. Toporkiewicz; R. Schaefer; M. Grochowski

2006-05-01T23:59:59.000Z

39

Large Scale Computing and Storage Requirements for High Energy Physics  

NLE Websites -- All DOE Office Websites (Extended Search)

For High Energy Physics: Accelerator Physics (P. Spentzouris, Fermilab). Motivation: Accelerators enable many important applications, both in basic research and applied sciences. Different machine attributes are emphasized for different applications: * Different particle beams and operation principles * Different energies and intensities. Accelerator science and technology objectives for all applications: * Achieve higher energy and intensity, faster and cheaper machine design, more reliable operation. This yields a wide spectrum of requirements for very complex instruments; assisting their design and operation requires an equally complex set of computational tools. High Energy Physics priorities: High energy frontier * Use high-energy colliders to discover new particles and

40

On-demand computation of policy based routes for large-scale network simulation  

Science Conference Proceedings (OSTI)

Routing table storage demands pose a significant obstacle for large-scale network simulation. On-demand computation of routes can alleviate those problems for models that do not require representation of routing dynamics. However, policy based routes, ...

Michael Liljenstam; David M. Nicol

2004-12-01T23:59:59.000Z
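The core idea in this record is to avoid storing all routes up front: compute a route on first use and memoize it. A self-contained sketch of that pattern, where a plain BFS stands in for the paper's policy-based route logic and the graph is invented:

```python
# Sketch of on-demand route computation with memoization. A policy-aware
# simulator would replace BFS with policy-constrained path selection.
from collections import deque

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
route_cache = {}

def route(src, dst):
    if (src, dst) in route_cache:
        return route_cache[(src, dst)]
    parent, frontier = {src: None}, deque([src])
    while frontier:                      # breadth-first search
        node = frontier.popleft()
        if node == dst:
            break
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    path, node = [], dst                 # walk parents back to the source
    while node is not None:
        path.append(node)
        node = parent[node]
    route_cache[(src, dst)] = path[::-1]
    return route_cache[(src, dst)]

print(route("A", "D"))   # computed once, then served from the cache
```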



41

An assessment of accountability policies for large-scale distributed computing systems  

Science Conference Proceedings (OSTI)

Grid computing systems offer resources to solve large-scale computational problems and are thus widely used in a large variety of domains, including computational sciences, energy management, and defense. Accountability in these application domains is ... Keywords: accountability, distributed systems, grid, policies, scalability

Wonjun Lee; Anna C. Squicciarini; Elisa Bertino

2009-04-01T23:59:59.000Z

42

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

E-Print Network (OSTI)

Sciences ... Report of the NERSC / BES / ASCR Requirements ... Scientific Computing Center (NERSC) ... Editors: Richard A. Gerber, NERSC; Harvey J. Wasserman, NERSC; Lawrence Berkeley

Gerber, Richard

2012-01-01T23:59:59.000Z

43

Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers  

Science Conference Proceedings (OSTI)

In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore ... Keywords: Hybrid MPI/OpenMP, Memory bandwidth contention time, Multicore supercomputers, Performance modeling

Xingfu Wu, Valerie Taylor

2013-12-01T23:59:59.000Z
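A toy version of the modeling idea in this record: predicted time is compute time plus a memory-bandwidth contention term plus a parameterized communication term. The formula and constants below are illustrative placeholders, not the paper's calibrated model.

```python
# Toy performance model: contention assumes cores on a node share one memory
# bus, and communication is a linear latency/bandwidth model. Illustrative.

def predict_time(t_compute, cores_per_node, bw_node_gbs, bytes_per_core_gb,
                 msgs, latency_s, msg_bytes, net_bw_gbs):
    demand = cores_per_node * bytes_per_core_gb      # GB moved per node
    t_mem = demand / bw_node_gbs                     # serialized memory traffic
    t_comm = msgs * (latency_s + msg_bytes / (net_bw_gbs * 1e9))
    return t_compute + t_mem + t_comm

t = predict_time(t_compute=10.0, cores_per_node=16, bw_node_gbs=50.0,
                 bytes_per_core_gb=2.0, msgs=1000, latency_s=2e-6,
                 msg_bytes=8192, net_bw_gbs=5.0)
print(f"predicted runtime: {t:.3f} s")
```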

44

A scalable messaging system for accelerating discovery from large scale scientific simulations  

SciTech Connect

Emerging scientific and engineering simulations running at scale on leadership-class High End Computing (HEC) environments are producing large volumes of data, which has to be transported and analyzed before any insights can result from these simulations. The complexity and cost (in terms of time and energy) associated with managing and analyzing this data have become significant challenges, and are limiting the impact of these simulations. Recently, data-staging approaches along with in-situ and in-transit analytics have been proposed to address these challenges by offloading I/O and/or moving data processing closer to the data. However, scientists continue to be overwhelmed by the large data volumes and data rates. In this paper we address this latter challenge. Specifically, we propose a highly scalable and low-overhead associative messaging framework that runs on the data staging resources within the HEC platform, and builds on the staging-based online in-situ/in-transit analytics to provide publish/subscribe/notification-type messaging patterns to the scientist. Rather than having to ingest and inspect the data volumes, this messaging system allows scientists to (1) dynamically subscribe to data events of interest, e.g., simple data values or a complex function or simple reduction (max()/min()/avg()) of the data values in a certain region of the application domain is greater/less than a threshold value, or certain spatial/temporal data features or data patterns are detected; (2) define customized in-situ/in-transit actions that are triggered based on the events, such as data visualization or transformation; and (3) get notified when these events occur. The key contribution of this paper is a design and implementation that can support such a messaging abstraction at scale on high-end computing (HEC) systems with minimal overheads. We have implemented and deployed the messaging system on the Jaguar Cray XK6 machines at Oak Ridge National Laboratory and the Lonestar system at the Texas Advanced Computing Center (TACC), and we present the experimental performance evaluation using these HEC platforms in the paper.

Jin, Tong [Rutgers University; Zhang, Fan [Rutgers University; Parashar, Manish [Rutgers University; Klasky, Scott A [ORNL; Podhorszki, Norbert [ORNL; Abbasi, Hasan [ORNL

2012-01-01T23:59:59.000Z
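A minimal, self-contained sketch of the publish/subscribe abstraction this abstract describes: analysis code registers a predicate over in-flight data and is notified when it fires. The function names and callback signature are invented; the real framework runs distributed across staging nodes.

```python
# Toy pub/sub over simulation data: subscribers register (predicate, action)
# pairs; publish() evaluates predicates against each data event.

subscribers = []

def subscribe(predicate, action):
    subscribers.append((predicate, action))

def publish(step, region, values):
    for predicate, action in subscribers:
        if predicate(values):
            action(step, region, values)

# Subscribe: notify when the mean in a region exceeds a threshold.
subscribe(lambda v: sum(v) / len(v) > 0.9,
          lambda step, region, v: print(f"step {step}: hotspot in {region}"))

publish(42, "region(3,7)", [0.95, 0.97, 0.99])   # fires
publish(43, "region(3,7)", [0.10, 0.20, 0.30])   # silent
```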

45

BCS MPI: a new approach in the system software design for large-scale parallel computers

SciTech Connect

Buffered Co-Scheduled (BCS) MPI proposes a new approach to design the communication libraries for large-scale parallel machines. The emphasis of BCS MPI is on the global coordination of a large number of processes rather than on the traditional optimization of the local performance of a pair of communicating processes. BCS MPI delays the interprocessor communication in order to schedule globally the communication pattern, and it is designed on top of a minimal set of collective communication primitives. In this paper we describe a prototype implementation of BCS MPI and its communication protocols. The experimental results, executed on a set of scientific applications representative of the ASCI workload, show that BCS MPI is only marginally slower than the production-level MPI, but much simpler to implement, debug and analyze.

Fernández, J. C. (Juan C.); Petrini, F. (Fabrizio); Frachtenberg, E. (Eitan)

2003-01-01T23:59:59.000Z
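The core BCS idea, buffering point-to-point traffic and exchanging it in one globally scheduled step built on a collective, can be sketched with mpi4py. This is a conceptual toy under that interpretation, not the BCS MPI implementation; run with, e.g., `mpiexec -n 4 python bcs_sketch.py`.

```python
# Sketch of buffered, globally scheduled communication on top of a single
# collective primitive (alltoall). Requires mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

outbox = [[] for _ in range(size)]          # buffered sends, one bin per peer

def bcs_send(dest, payload):
    outbox[dest].append(payload)            # no network traffic yet

def bcs_exchange():
    """Globally scheduled exchange: everyone communicates at once."""
    global outbox
    inbox = comm.alltoall(outbox)
    outbox = [[] for _ in range(size)]
    return inbox

bcs_send((rank + 1) % size, f"hello from {rank}")
received = bcs_exchange()
for src, msgs in enumerate(received):
    for m in msgs:
        print(f"rank {rank} got {m!r} from rank {src}")
```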

46

BCS MPI: a new approach in the software design for large-scale parallel computers  

SciTech Connect

BCS MPI proposes a new approach to design the communication libraries for large scale parallel machines. The emphasis of BCS MPI is on the global coordination of the potentially large number of processes and on the reduction of non-determinism, rather than on the traditional optimization of the local performance of a pair of communicating processes. BCS MPI delays the interprocessor communication in order to schedule globally the communication pattern, and it is designed on top of a minimal set of collective communication primitives. In this paper we describe a prototype implementation of BCS MPI and its communication protocols. The experimental results, executed on a set of scientific applications representative of the ASCI workload, show that BCS MPI is only marginally slower than the production-level MPI, but much simpler to implement, debug and analyze.

Peinador, J. F. (Juan Fernandez); Petrini, F. (Fabrizio)

2003-01-01T23:59:59.000Z

47

Efficient Feature-Driven Visualization of Large-Scale Scientific Data  

Science Conference Proceedings (OSTI)

Very large, complex scientific data acquired in many research areas creates critical challenges for scientists to understand, analyze, and organize their data. The objective of this project is to expand the feature extraction and analysis capabilities to develop powerful and accurate visualization tools that can assist domain scientists with their requirements in multiple phases of scientific discovery. We have recently developed several feature-driven visualization methods for extracting different data characteristics of volumetric datasets. Our results verify the hypothesis in the proposal and will be used to develop additional prototype systems.

Lu, Aidong

2012-12-12T23:59:59.000Z

48

MTC envelope: defining the capability of large scale computers in the context of parallel scripting applications  

Science Conference Proceedings (OSTI)

Many scientific applications can be efficiently expressed with the parallel scripting (many-task computing, MTC) paradigm. These applications are typically composed of several stages of computation, with tasks in different stages coupled by a shared ... Keywords: MTC, distributed file system, parallel scripting application, performance measurements

Zhao Zhang; Daniel S. Katz; Michael Wilde; Justin M. Wozniak; Ian Foster

2013-06-01T23:59:59.000Z

49

Large-scale three-dimensional geothermal reservoir simulation on small computer systems  

DOE Green Energy (OSTI)

The performance of TOUGH2, Lawrence Berkeley Laboratory's general purpose simulator for mass and heat flow and transport, enhanced with the addition of a set of preconditioned conjugate gradient solvers, was tested on three PCs (486-33, 486-66, Pentium-90), a Macintosh Quadra 800, and an IBM RISC 6000 workstation. A two-phase, single porosity, 3-D geothermal reservoir model with 1,411 irregular grid blocks, with production from and injection into the reservoir, was used as the test model. The code modifications to TOUGH2 and its setup in each machine environment are described. Computational work per time step and CPU time requirements are reported for each of the machines used. It is concluded that the current PCs provide the best price/performance platform for running large-scale geothermal field simulations that just a few years ago could only be executed on mainframe computers and high-end workstations.

Antunez, E.; Moridis, G.; Pruess, K.

1995-05-01T23:59:59.000Z
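The solver enhancement mentioned above is a preconditioned conjugate gradient (PCG) method. For reference, a textbook Jacobi-preconditioned CG in NumPy; this is an illustrative sketch of the algorithm class, not the TOUGH2 code.

```python
# Jacobi-preconditioned conjugate gradient for a symmetric positive definite
# system A x = b: the kind of iterative kernel the enhanced TOUGH2 relies on.
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # small SPD test system
b = np.array([1.0, 2.0])
print(pcg(A, b))                            # ~ [0.0909, 0.6364]
```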

50

NEKTAR, SPICE and Vortonics: using federated grids for large scale scientific applications  

Science Conference Proceedings (OSTI)

In response to a joint call from US's NSF and UK's EPSRC for applications that aim to utilize the combined computational resources of the US and UK, three computational science groups from UCL, Tufts and Brown Universities teamed up with a middleware ... Keywords: Co-scheduling, Distributed supercomputers, Federated grids, Interoperability, MPICH-G2, Optical lightpaths

Bruce Boghosian; Peter Coveney; Suchuan Dong; Lucas Finn; Shantenu Jha; George Karniadakis; Nicholas Karonis

2007-09-01T23:59:59.000Z

51

National Energy Research Scientific Computing Center

NLE Websites -- All DOE Office Websites (Extended Search)

Annual Report. This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098. LBNL-49186, December 2001. National Energy Research Scientific Computing Center 2001 Annual Report. NERSC aspires to be a world leader in accelerating scientific discovery through computation. Our vision is to provide high-performance computing tools to tackle science's biggest and most challenging problems, and to play a major role in advancing large-scale computational science and computing technology. The result will be a rate of scientific progress previously unknown. NERSC's mission is to accelerate the pace of scientific discovery in the Department of Energy Office

52

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

Science Conference Proceedings (OSTI)

In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

DOE Office of Science, Biological and Environmental Research Program Office (BER),

2009-09-30T23:59:59.000Z

53

PowerGrid - A Computation Engine for Large-Scale Electric Networks  

Science Conference Proceedings (OSTI)

This Final Report discusses work on an approach for analog emulation of large scale power systems using Analog Behavioral Models (ABMs) and analog devices in PSpice design environment. ABMs are models based on sets of mathematical equations or transfer functions describing the behavior of a circuit element or an analog building block. The ABM concept provides an efficient strategy for feasibility analysis, quick insight of developing top-down design methodology of large systems and model verification prior to full structural design and implementation. Analog emulation in this report uses an electric circuit equivalent of mathematical equations and scaled relationships that describe the states and behavior of a real power system to create its solution trajectory. The speed of analog solutions is as quick as the responses of the circuit itself. Emulation therefore is the representation of desired physical characteristics of a real life object using an electric circuit equivalent. The circuit equivalent has within it, the model of a real system as well as the method of solution. This report presents a methodology of the core computation through development of ABMs for generators, transmission lines and loads. Results of ABMs used for the case of 3, 6, and 14 bus power systems are presented and compared with industrial grade numerical simulators for validation.

Chika Nwankpa

2011-01-31T23:59:59.000Z

54

Computing and Data Infrastructure for Large-Scale Science NERSC and the DOE Science Grid  

E-Print Network (OSTI)

-bandwidth connectivity end to end (high-speed links from site systems to ESnet gateways) ... Storage resources: four ... Collaboration with ESnet for security and directory services ... Initial Science Grid Configuration: NERSC Supercomputing & Large-Scale Storage, PNNL, LBNL, ANL, ESnet, Europe, DOE Science Grid, ORNL, ESnet, MDS, CA, Grid Managed

55

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

E-Print Network (OSTI)

COMPUTING AND STORAGE REQUIREMENTS ... Basic Energy Sciences ... Energy Sciences 8.2.1.4 Computational and Storage ... Computing and Storage Requirements for Basic Energy

Gerber, Richard

2012-01-01T23:59:59.000Z

56

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

providing high-performance computing (HPC) resources to more ... of NERSC, high performance computing (HPC), and NERSC have ... afforded by high performance computing, advanced simulations

Gerber, Richard

2012-01-01T23:59:59.000Z

57

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

E-Print Network (OSTI)

Overview ... Andrew Felmy, PNNL ... The BES Geosciences research ... table (PI, Andrew Felmy, PNNL) and included in the summary ... Sciences Division at PNNL, Chief Scientist for Scientific

Gerber, Richard

2012-01-01T23:59:59.000Z

58

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

E-Print Network (OSTI)

Office (BER), DOE Office of Science, National Energy Research ... Department of Energy, Office of Science, Advanced Scientific ... Directors of the Office of Science, Office of Biological &

DOE Office of Science, Biological and Environmental Research Program Office BER,

2010-01-01T23:59:59.000Z

59

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

Energy Sciences 8.2.3.4 Computational and Storage ... Energy Sciences 13.1.1.4 Computational and Storage ... Energy Sciences 8.2.4.4 Computational and Storage

Gerber, Richard

2012-01-01T23:59:59.000Z

60

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

E-Print Network (OSTI)

manufacturer, or otherwise, does not necessarily constitute ... Program Office (BER), DOE Office of Science, National Energy ... In May 2009, NERSC, DOE's Office of Advanced Scientific

DOE Office of Science, Biological and Environmental Research Program Office BER,

2010-01-01T23:59:59.000Z



61

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

ALCF, ALE, AMR, API, ARRA, ASCR, CGP, CICART ... Alfvén Eigenmode / Energetic Particle Mode ... Argonne Leadership Computing Facility

Gerber, Richard

2012-01-01T23:59:59.000Z

62

Studying how the past is remembered: towards computational history through large scale text mining  

Science Conference Proceedings (OSTI)

History helps us understand the present and even to predict the future to certain extent. Given the huge amount of data about the past, we believe computer science will play an increasingly important role in historical studies, with computational history ... Keywords: computational history, news analysis, temporal analysis

Ching-man Au Yeung; Adam Jatowt

2011-10-01T23:59:59.000Z

63

Large-scale application of some modern CSM methodologies by parallel computation  

E-Print Network (OSTI)

R.A. Uras, M.D. Adley, S. Li ... Mechanical Engineering and Army High Performance Computing ... (e.g. Message Passing Interface, MPI) have increased the use of High Performance Computing (HPC). Several ... Army Engineer Research and Development Center (ERDC) and the Army High Performance Computing Research

Li, Shaofan

64

Measuring and tuning energy efficiency on large scale high performance computing platforms.  

E-Print Network (OSTI)

Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never…

Laros, James Howard III

2012-01-01T23:59:59.000Z

65

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

limit of available NERSC and OLCF computing on heterogeneous ... perspective. Centers like the OLCF have imposed a paradigm ... NTM: Neoclassical Tearing Mode; OLCF: Oak Ridge Leadership

Gerber, Richard

2012-01-01T23:59:59.000Z

66

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

E-Print Network (OSTI)

Labs (LBNL), and the NSLS ... Figure 9-3. Computed ... II at ... MRT NAMD NERSC NGF NIH NSF NSLS OLCF ORNL OS PCET PCM PIMD

Gerber, Richard

2012-01-01T23:59:59.000Z

67

Energy based performance tuning for large scale high performance computing systems  

Science Conference Proceedings (OSTI)

Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. In response to this challenge, we exploit the unique power measurement ... Keywords: energy efficiency, frequency scaling, high performance computing (HPC), power

James H. Laros, III; Kevin T. Pedretti; Suzanne M. Kelly; Wei Shu; Courtenay T. Vaughan

2012-03-01T23:59:59.000Z

68

Computational Issues for Large-Scale Land Surface Data Assimilation Problems  

Science Conference Proceedings (OSTI)

Land surface data assimilation problems are often limited by the high dimensionality of states created by spatial discretization over large high-resolution computational grids. Yet field observations and simulation both confirm that soil moisture ...

Dennis McLaughlin; Yuhua Zhou; Dara Entekhabi; Virat Chatdarong

2006-06-01T23:59:59.000Z

69

Using Computing and Data Grids for Large-Scale Science and Engineering  

Science Conference Proceedings (OSTI)

The term Grid is used to refer to a software system that provides uniform and location-independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids ...

William E. Johnston

2001-08-01T23:59:59.000Z

70

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

E-Print Network (OSTI)

allocate computing time on the OLCF and ALCF systems for 12- ... Allocation of ALCF and OLCF resources are primarily ... Million at NERSC, ALCF, & OLCF ... 10 TB – 160 TB ... <2 with MPI/

DOE Office of Science, Biological and Environmental Research Program Office BER,

2010-01-01T23:59:59.000Z

71

Visualizing large-scale data in educational, behavioral, psychometrical and social sciences: utilities and design patterns of the SEER computational and graphical statistics  

Science Conference Proceedings (OSTI)

This paper introduces a graphical method SEE Repeated-measure data (SEER) to visually analyze data commonly collected in large-scale surveys, market research, biostatistics, and educational and psychological measurement. Many researchers in these disciplines ... Keywords: SEER, computational statistics, data visualization, large-scale data, research methodology, social and behavioral sciences

Christopher WT Chiu; Peter Pashley; Marilyn Seastrom; Peggy Carr

2005-10-01T23:59:59.000Z

72

A novel decomposition and distributed computing approach for the solution of large scale optimization models  

Science Conference Proceedings (OSTI)

Biomass feedstock production is an important component of the biomass-based energy sector. Seasonal and distributed collection of low energy density material creates unique challenges, and optimization of the complete value chain is critical ... Keywords: Agent-based modeling, Biomass feedstock, Computation, Decomposition, Optimization

Yogendra Shastri; Alan Hansen; Luis Rodríguez; K. C. Ting

2011-03-01T23:59:59.000Z

73

Computational and data Grids in large-scale science and engineering  

Science Conference Proceedings (OSTI)

As the practice of science moves beyond the single investigator due to the complexity of the problems that now dominate science, large collaborative and multi-institutional teams are needed to address these problems. In order to support this shift in ... Keywords: DOE science grid, NASA's Information Power Grid (IPG), grid applications, grids, heterogeneous, widely distributed computing

William E. Johnston

2002-10-01T23:59:59.000Z

74

Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing  

SciTech Connect

Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.

Fiala, David J [ORNL; Mueller, Frank [North Carolina State University; Engelmann, Christian [ORNL; Ferreira, Kurt Brian [Sandia National Laboratories (SNL); Brightwell, Ron [Sandia National Laboratories (SNL); Riesen, Rolf [IBM Research, Ireland

2013-01-01T23:59:59.000Z
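The detection model in this abstract compares MPI message data across replicas; with triple redundancy, a majority vote can also correct. A standalone toy of just that comparison logic, using message digests (RedMPI itself works transparently inside the MPI profiling layer; all names below are invented):

```python
# Toy replica-comparison sketch: hash each replica's copy of a message and
# majority-vote on the digests. Two matching replicas out of three detect
# and "correct"; all-distinct digests are detectable but uncorrectable.
import hashlib
from collections import Counter

def digest(buf: bytes) -> str:
    return hashlib.sha256(buf).hexdigest()

def vote(digests):
    """Majority vote across replica digests; None means uncorrectable."""
    winner, count = Counter(digests).most_common(1)[0]
    return winner if count > len(digests) // 2 else None

msg = b"field[0:1024] @ step 17"
replicas = [digest(msg), digest(msg), digest(b"field[0:1024] @ step 1\x97")]
good = vote(replicas)
print("corruption detected" if len(set(replicas)) > 1 else "clean",
      "- corrected" if good else "- uncorrectable")
```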

75

Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing  

SciTech Connect

Faults have become the norm rather than the exception for high-end computing on clusters with 10s/100s of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited for detecting and correcting MPI data that is the result of corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.

Fiala, David J [ORNL; Mueller, Frank [North Carolina State University; Engelmann, Christian [ORNL; Ferreira, Kurt Brian [Sandia National Laboratories (SNL); Brightwell, Ron [Sandia National Laboratories (SNL); Riesen, Rolf [IBM Research, Ireland

2012-07-01T23:59:59.000Z

77

Can Cloud Computing Address the Scientific Computing Requirements...  

NLE Websites -- All DOE Office Websites (Extended Search)

Can Cloud Computing Address the Scientific Computing Requirements for DOE Researchers? Well, Yes, No and Maybe ...

78

GrenchMark: A Framework for Testing Large-Scale Distributed Computing Systems Alexandru Iosup (Delft University of Technology, The Netherlands)  

E-Print Network (OSTI)

computing - Computing as utility (similar to electricity) - Small components, distributed cost of ownership - http://grenchmark.st.ewi.tudelft.nl/ - The GrenchMark framework for testing large-scale distributed systems - Testing Multi-Cluster Grids - Generate and annotate data - Tested in grids, peer-to-peer systems, and heterogeneous clusters - Extensible reference

Iosup, Alexandru

79

Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis  

Science Conference Proceedings (OSTI)

Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running, it can make progress, whereas the whole analysis job fails as soon as one MPI node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

2010-09-30T23:59:59.000Z
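
As a rough illustration of the coordination pattern this abstract describes, a driver farming independent input files out to workers that each run an unchanged sequential analysis, here is a minimal Python sketch. The local process pool stands in for the VMs, and analyze_one_file and the file names are hypothetical placeholders.

    from concurrent.futures import ProcessPoolExecutor, as_completed

    def analyze_one_file(path):
        """Stand-in for the unmodified sequential analysis run inside one
        VM (e.g., scanning one simulation output file for cyclone tracks)."""
        with open(path, "rb") as f:
            data = f.read()
        return path, len(data)  # placeholder "result"

    def run_analysis(paths, workers=4):
        """Driver: farm independent inputs out to workers; a failed worker
        costs only its own task, unlike an MPI job that fails as a whole."""
        results, failures = {}, []
        with ProcessPoolExecutor(max_workers=workers) as pool:
            futures = {pool.submit(analyze_one_file, p): p for p in paths}
            for fut in as_completed(futures):
                try:
                    path, res = fut.result()
                    results[path] = res
                except Exception:
                    failures.append(futures[fut])
        return results, failures

    if __name__ == "__main__":
        done, failed = run_analysis(["run01.nc", "run02.nc"])  # hypothetical files
        print(len(done), "analyzed;", len(failed), "failed")

The failure-handling branch is the point of contrast with MPI that the abstract draws: the loop simply records the lost task and keeps collecting the rest.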

80

Scientific computations on modern parallel vector systems  

E-Print Network (OSTI)

Computational scientists have seen a frustrating trend of stagnating application performance despite dramatic increases in the claimed peak capability of high performance computing systems. This trend has been widely attributed to the use of superscalar-based commodity components whose architectural designs offer a balance between memory performance, network capability, and execution rate that is poorly matched to the requirements of large-scale numerical computations. Recently, two innovative parallel-vector architectures have become operational: the Japanese Earth Simulator (ES) and the Cray X1. In order to quantify what these modern vector capabilities entail for the scientists that rely on modeling and simulation, it is critical to evaluate this architectural paradigm in the context of demanding computational algorithms. Our evaluation study examines four diverse scientific applications with the potential to run at ultrascale, from the areas of plasma physics, material science, astrophysics, and magnetic fusion. We compare performance between the vector-based ES and X1, with leading superscalar-based platforms: the IBM Power3/4 and the SGI Altix. Our research team was the first international group to conduct a performance evaluation study at the Earth Simulator Center; remote ES access is not available. Results demonstrate that the vector systems achieve excellent performance on our application suite – the highest of any architecture tested to date. However, vectorization of a particle-in-cell code highlights the potential difficulty of expressing irregularly structured algorithms as data-parallel programs.

Leonid Oliker; Andrew Canning; Jonathan Carter; John Shalf; Stephane Ethier

2004-01-01T23:59:59.000Z



81

Strategic plan for scientific computing  

SciTech Connect

Computing technology continues to undergo rapid and dramatic changes. Technological improvements in both hardware and software continue to permit analysts to model problems much more realistically than heretofore practicable. New visualization technologies vastly increase our ability to understand the results of those complex models. The mission of SRS is also undergoing very rapid change as a result of international events. While the typical demands of reactor-oriented calculations may decline, environmental regulations require us to study new classes of problems in ever-increasing detail. Hence, the computational workload is actually increasing rapidly. At the same time, budget constraints demand a continued increase in the cost-effectiveness of scientific computing. A comprehensive strategy for scientific computing is required to adapt to these changes and still produce timely solutions to ensure continued safe operation of SRS facilities. An important goal of this strategy is to ensure that productivity gains available with new systems and technologies are truly achieved.

Church, J.P.

1992-05-01T23:59:59.000Z

82

Berkeley Lab Scientific Programs: Computing Sciences  

NLE Websites -- All DOE Office Websites (Extended Search)

data-intensive, international scientific collaborations. National Energy Research Scientific Computing Center (NERSC) Located at Berkeley Lab, NERSC is the flagship...

83

Java Performance for Scientific Applications on LLNL Computer Systems  

Science Conference Proceedings (OSTI)

Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

Kapfer, C; Wissink, A

2002-05-10T23:59:59.000Z

84

Advanced Scientific Computing Research Jobs  

Office of Science (SC) Website

Below is a list of currently open federal employment opportunities in the Office of Science. Prospective applicants should follow the links to the formal position announcements on USAJOBS.gov for more information. Job Title: Computer Scientist, Computer Science Research & Partnerships Division. Office: Advanced Scientific Computing Research. URL: https://www.usajobs.gov/GetJob/ViewDetails/358465200. Vacancy Number: 14-DE-SC-HQ-005. Location:

85

Berkeley Lab Computing Sciences: Accelerating Scientific Discovery  

Science Conference Proceedings (OSTI)

Scientists today rely on advances in computer science, mathematics, and computational science, as well as large-scale computing and networking facilities, to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab's Computing Sciences organization researches, develops, and deploys new tools and technologies to meet these needs and to advance research in such areas as global climate change, combustion, fusion energy, nanotechnology, biology, and astrophysics.

Hules, John A

2008-12-12T23:59:59.000Z

86

In Situ Visualization for Large-Scale Combustion Simulations  

Science Conference Proceedings (OSTI)

As scientific supercomputing moves toward petascale and exascale levels, in situ visualization stands out as a scalable way for scientists to view the data their simulations generate. This full picture is crucial particularly for capturing and understanding ... Keywords: in situ visualization, large-scale simulation, parallel rendering, supercomputing, scalability, computer graphics, graphics and multimedia

Hongfeng Yu; Chaoli Wang; Ray W. Grout; Jacqueline H. Chen; Kwan-Liu Ma

2010-05-01T23:59:59.000Z

87

NERSC: National Energy Research Scientific Computing Center  

NLE Websites -- All DOE Office Websites (Extended Search)

and share massive bio-imaging datasets. ...

88

Computer simulation and scientific visualization  

SciTech Connect

The simulation of processes in engineering and the physical sciences has progressed rapidly over the last several years. With rapid developments in supercomputers, parallel processing, numerical algorithms and software, scientists and engineers are now positioned to quantitatively simulate systems requiring many billions of arithmetic operations. The need to understand and assimilate such massive amounts of data has been a driving force in the development of both hardware and software to create visual representations of the underlying physical systems. In this paper, and the accompanying videotape, the evolution and development of the visualization process in scientific computing will be reviewed. Specific applications and associated imaging hardware and software technology illustrate both the computational needs and the evolving trends. 6 refs.

Weber, D.P.; Moszur, F.M.

1990-01-01T23:59:59.000Z

89

Large Scale Computing Requirements for Basic Energy Sciences (An BES / ASCR / NERSC Workshop) Hilton Washington DC/Rockville Meeting Center, Rockville MD 3D Geophysical Imaging  

NLE Websites -- All DOE Office Websites (Extended Search)

Requirements for Basic Energy Sciences (A BES / ASCR / NERSC Workshop), Hilton Washington DC/Rockville Meeting Center, Rockville MD. 3D Geophysical Modeling and Imaging, G. A. Newman, Lawrence Berkeley National Laboratory, February 9-10, 2010. Talk Outline: * SEAM Geophysical Modeling Project - It's Really Big! * Geophysical Imaging (Seismic & EM) - It's 10 to 100x Bigger! - Reverse Time Migration - Full Waveform Inversion - 3D Imaging & Large Scale Considerations - Offshore Brazil Imaging Example (EM Data Set) * Computational Bottlenecks * Computing Alternatives - GPUs & FPGAs - Issues. Why? So that the resource industry can tackle grand geophysical challenges (subsalt imaging, land acquisition, 4-D, CO2, carbonates ...). SEAM Mission: Advance the science and technology of applied

90

National Energy Research Scientific Computing Center 2007 Annual Report  

E-Print Network (OSTI)

s Office of Advanced Scientific Computing Research, which ... Office of Advanced Scientific Computing Research. The primary ... of the Advanced Scientific Computing Research (ASCR) program

Hules, John A.

2008-01-01T23:59:59.000Z

91

Large-Scale Hydropower  

Energy.gov (U.S. Department of Energy (DOE))

Large-scale hydropower plants are generally developed to produce electricity for government or electric utility projects. These plants are more than 30 MW in size, and there is more than 80,000 MW...

92

Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms  

Science Conference Proceedings (OSTI)

Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-differences (FD) and finite-elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is completely general, applicable to any crystal symmetry and to both metals and insulators alike. We have developed and implemented a full self-consistent Kohn-Sham method, including both total energies and forces for molecular dynamics, and developed a full MPI parallel implementation for large-scale calculations. We have applied the method to the gamut of physical systems, from simple insulating systems with light atoms to complex d- and f-electron systems, requiring large numbers of atomic-orbital enrichments. In every case, the new PU FE method attained the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the current state-of-the-art PW method. Finally, our initial MPI implementation has shown excellent parallel scaling of the most time-critical parts of the code up to 1728 processors, with clear indications of what will be required to achieve comparable scaling for the rest. Having shown that the key remaining disadvantage of real-space methods can in fact be overcome, the work has attracted significant attention, with sixteen invited talks, both domestic and international, so far; two papers published and another in preparation; and three new university and/or national laboratory collaborations, securing external funding to pursue a number of related research directions. Having demonstrated the proof of principle, work now centers on the necessary extensions and optimizations required to bring the prototype method and code delivered here to production applications.

Pask, J E; Sukumar, N; Guney, M; Hu, W

2011-02-28T23:59:59.000Z
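
For orientation, the partition-of-unity enrichment sketched in this abstract augments the classical finite-element basis with known atomic-like functions. In schematic form (our notation, not necessarily the paper's), with N_j the standard FE shape functions and \varphi_k the atomic-orbital enrichments:

    \psi(\mathbf{x}) = \sum_j N_j(\mathbf{x})\, c_j + \sum_j \sum_k N_j(\mathbf{x})\, \varphi_k(\mathbf{x})\, d_{jk},
    \qquad \sum_j N_j(\mathbf{x}) = 1,

where c_j and d_{jk} are the unknown coefficients. Because the enrichments already carry the oscillatory atomic physics, far fewer polynomial degrees of freedom are needed for a given accuracy, which is where the reported order-of-magnitude reductions in basis size come from.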

93

National Energy Research Scientific Computing Center  

NLE Websites -- All DOE Office Websites (Extended Search)

National Energy Research Scientific Computing Center 2004 Annual Report. Cover image: visualization based on a simulation of the density of a fuel pellet after it is injected into a tokamak fusion reactor (see page 40 for more information). Ernest Orlando Lawrence Berkeley National Laboratory, University of California, Berkeley, California 94720. This work was supported by the Director, Office of Science, Office of Advanced Scientific Computing Research of the U.S. Department of Energy under Contract No. DE-AC 03-76SF00098. LBNL-57369, April 2005. Contents: The Year in Perspective; Advances in Computational Science.

94

Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis  

E-Print Network (OSTI)

Chang. Changes in Tropical Cyclone Number, Duration, and ... Simulation of Future Tropical Cyclone Statistics in a High- ... Finding Tropical Cyclones on a Cloud Computing Cluster:

Hasenkamp, Daren

2011-01-01T23:59:59.000Z

95

VACET: Twists and Turns State-of-the-art computational science simulations generate large-scale vector  

E-Print Network (OSTI)

Weak scaling behavior of the CASTRO code on the jaguarpf machine at the OLCF. For the two ... based parallelism, on the jaguarpf machine at the Oak Ridge Leadership Computing Facility (OLCF). A weak scaling

96

National Energy Research Scientific Computing Center (NERSC)...  

NLE Websites -- All DOE Office Websites (Extended Search)

Contract to Cray August 5, 2009 BERKELEY, CA - The Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National...

97

High speed signal and data processing using very large scale integrated (VLSI)/VHSIC general purpose computer systems  

SciTech Connect

The combined requirements of size, weight, throughput, reliability, and testability imposed on signal and data processing systems by new electro-optical sensors cannot be met with conventional architectures or circuit technology. One solution to this problem is described by the authors. This solution is a result of five years of work done to date on the Modular Missile Borne Computer (MMBC) combined with the more recent very high speed integrated circuit (VHSIC) program. 12 references.

Ramseyer, R.; Johnson, M.; Thomas, J.

1982-01-01T23:59:59.000Z

98

Scientific Computing Programs and Projects  

Science Conference Proceedings (OSTI)

... High Performance Computing (HPC) enables work on challenging ...

2010-05-24T23:59:59.000Z

99

Running Large Scale Jobs  

NLE Websites -- All DOE Office Websites (Extended Search)


100

Barbara Helland Advanced Scientific Computing Research NERSC...  

NLE Websites -- All DOE Office Websites (Extended Search)

7-28, 2012. Barbara Helland, Advanced Scientific Computing Research, NERSC-HEP Requirements Review. Science Case Studies drive discussions. Program Requirements Reviews ...



101

Uncertainty Quantification in Scientific Computing  

Science Conference Proceedings (OSTI)

... Mike initiated the DAKOTA effort shortly after joining Sandia in 1994 and ... computing began in 1971 at the Thames Polytechnic in South East London ...

2011-07-25T23:59:59.000Z

102

Exploring HPCS Languages in Scientific Computing  

SciTech Connect

As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

Barrett, Richard F [ORNL; Alam, Sadaf R [ORNL; de Almeida, Valmor F [ORNL; Bernholdt, David E [ORNL; Elwasif, Wael R [ORNL; Kuehn, Jeffery A [ORNL; Poole, Stephen W [ORNL; Shet, Aniruddha G [ORNL

2008-01-01T23:59:59.000Z

103

An evolving infrastructure for scientific computing and the integration of new graphics technology  

SciTech Connect

The National Energy Research Supercomputer Center (NERSC) at the Lawrence Livermore National Laboratory is currently pursuing several projects to implement and integrate new hardware and software technologies. While each of these projects ought to be and is in fact individually justifiable, there is an appealing metaphor for viewing them collectively which provides a simple and memorable way to understand the future direction not only of supercomputing services but of computer centers in general. Once this general direction is understood, it becomes clearer what future computer graphics technologies would be possible and desirable, at least within the context of large scale scientific computing.

Fong, K.W.

1993-02-01T23:59:59.000Z

104

Institute for Scientific Computing Research Fiscal Year 2002 Annual Report  

SciTech Connect

The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory is jointly administered by the Computing Applications and Research Department (CAR) and the University Relations Program (URP), and this joint relationship expresses its mission. An extensively externally networked ISCR cost-effectively expands the level and scope of national computational science expertise available to the Laboratory through CAR. The URP, with its infrastructure for managing six institutes and numerous educational programs at LLNL, assumes much of the logistical burden that is unavoidable in bridging the Laboratory's internal computational research environment with that of the academic community. As large-scale simulations on the parallel platforms of DOE's Advanced Simulation and Computing (ASCI) become increasingly important to the overall mission of LLNL, the role of the ISCR expands in importance, accordingly. Relying primarily on non-permanent staffing, the ISCR complements Laboratory research in areas of the computer and information sciences that are needed at the frontier of Laboratory missions. The ISCR strives to be the ''eyes and ears'' of the Laboratory in the computer and information sciences, in keeping the Laboratory aware of and connected to important external advances. It also attempts to be ''feet and hands,'' in carrying those advances into the Laboratory and incorporating them into practice. In addition to conducting research, the ISCR provides continuing education opportunities to Laboratory personnel, in the form of on-site workshops taught by experts on novel software or hardware technologies. The ISCR also seeks to influence the research community external to the Laboratory to pursue Laboratory-related interests and to train the workforce that will be required by the Laboratory. Part of the performance of this function is interpreting to the external community appropriate (unclassified) aspects of the Laboratory's own contributions to the computer and information sciences--contributions that its unique mission and unique resources give it a unique opportunity and responsibility to make. Of the three principal means of packaging scientific ideas for transfer--people, papers, and software--experience suggests that the most effective means is people. The programs of the ISCR are therefore people-intensive. Finally, the ISCR, together with CAR, confers an organizational identity on the burgeoning computer and information sciences research activity at LLNL and serves as a point of contact within the Laboratory for computer and information scientists from outside.

Keyes, D E; McGraw, J R; Bodtker, L K

2003-03-11T23:59:59.000Z

105

NERSC: National Energy Research Scientific Computing Center  

NLE Websites -- All DOE Office Websites

NERSC: Powering Scientific Discovery Since 1974. Science areas at NERSC include accelerator science, astrophysics, biological sciences, chemistry & materials science, climate & earth science, energy science, engineering science, environmental science, fusion science, math & computer science, and nuclear science. Systems: Edison (Cray XC30), Hopper (Cray XE6), Carver (IBM iDataPlex), PDSF, Genepool, the NERSC Global Filesystem, the HPSS data archive, and Data Transfer Nodes.

106

EAGLES: An interactive environment for scientific computing  

Science Conference Proceedings (OSTI)

The EAGLES Project is creating a computing system and interactive environment for scientific applications using object-oriented software principles. This software concept leads to well defined data interfaces for integrating experiment control with acquisition and analysis codes. Tools for building object-oriented systems for user interfaces and codes are discussed. Also the terms of object-oriented programming are introduced and later defined in the appendix. These terms include objects, methods, messages, encapsulation and inheritance.

Lawver, B.S.; O'Brien, D.W.; Poggio, M.E.; Shectman, R.M.

1987-08-01T23:59:59.000Z
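
To make the appendix's terms concrete, here is a toy Python analogue of the kind of object-oriented structure the abstract describes (all class names are hypothetical, and the EAGLES project itself predates Python):

    class Instrument:
        """Encapsulation: internal state is reachable only through
        methods, the 'messages' an object responds to."""
        def __init__(self, name):
            self._name = name              # private-by-convention state
        def read(self):
            raise NotImplementedError      # subclasses supply behavior

    class Spectrometer(Instrument):
        """Inheritance: a Spectrometer is an Instrument, reusing its
        interface while overriding the read() method."""
        def read(self):
            return {"instrument": self._name, "counts": [0, 3, 7, 1]}

    print(Spectrometer("spec-1").read())

Because acquisition and analysis codes only ever send the read() message, the data interface stays fixed while the implementation behind it can change, which is the design benefit the abstract claims.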

107

EAGLES: An interactive environment for scientific computing  

Science Conference Proceedings (OSTI)

The EAGLES Project is creating a computing system and interactive environment for scientific applications using object-oriented software principles. This software concept leads to well-defined data interfaces for integrating experiment control with acquisition and analysis codes. Tools for building object-oriented systems for user interfaces and codes are discussed. Also the terms of object-oriented programming are introduced and later defined in the appendix. These terms include objects, methods, messages, encapsulation and inheritance.

Lawver, B.S.; O'Brien, D.W.; Poggio, M.E.; Shectman, R.M.

1987-05-11T23:59:59.000Z

108

Large Scale Quantum-mechanical Calculations of Proteins, Nanomaterials...  

NLE Websites -- All DOE Office Websites (Extended Search)

Large Scale Quantum-mechanical Calculations of Proteins, Nanomaterials and Other Large Systems Event Sponsor: Leadership Computing Facility Seminar Start Date: Dec 5 2013 - 2:00pm...

109

Parallel I/O Software Infrastructure for Large-Scale Systems  

NLE Websites -- All DOE Office Websites (Extended Search)

Parallel I/O Software Infrastructure for Large-Scale Systems | Tags: Math & Computer Science. An...

110

Scientific Methods in Computer Science Gordana Dodig-Crnkovic  

E-Print Network (OSTI)

Scientific Methods in Computer Science. Gordana Dodig-Crnkovic, Department of Computer Science. This paper analyzes scientific aspects of Computer Science. First it defines science and scientific method in general. It gives a discussion of relations between science, research, development and technology. The existing

Cunningham, Conrad

111

Computer Science Research: Computation Directorate  

Science Conference Proceedings (OSTI)

This report contains short papers in the following areas: large-scale scientific computation; parallel computing; general-purpose numerical algorithms; distributed operating systems and networks; knowledge-based systems; and technology information systems.

Durst, M.J. (ed.); Grupe, K.F. (ed.)

1988-01-01T23:59:59.000Z

112

National facility for advanced computational science: A sustainable path to scientific discovery  

E-Print Network (OSTI)

Office of Advanced Scientific Computing Research of the U.S. ... Office of Advanced Scientific Computing Research (OASCR) and ... OASCR - Office of Advanced Scientific Computing Research (DOE

2004-01-01T23:59:59.000Z

113

JASMIN: a parallel software infrastructure for scientific computing  

Science Conference Proceedings (OSTI)

The exponential growth of computer power in the last 10 years is now creating a great challenge for parallel programming toward achieving realistic performance in the field of scientific computing. To improve on the traditional program for numerical ... Keywords: J Adaptive Structured Meshes applications INfrastructure (JASMIN), parallel computing, scientific computing

Zeyao Mo; Aiqing Zhang; Xiaolin Cao; Qingkai Liu; Xiaowen Xu; Hengbin An; Wenbing Pei; Shaoping Zhu

2010-12-01T23:59:59.000Z

114

Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures  

SciTech Connect

This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

None

2007-06-27T23:59:59.000Z

115

Scientific Computing Kernels on the Cell Processor  

E-Print Network (OSTI)

Journal of High Performance Computing Applications, 2004. [Conference on High Performance Computing in the Asia Paci?cMeeting on High Performance Computing for Computational

Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

2008-01-01T23:59:59.000Z

116

Energy Department Requests Proposals for Advanced Scientific Computing  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Department Requests Proposals for Advanced Scientific Computing Research. December 27, 2005 - 4:55pm. WASHINGTON, DC - The Department of Energy's Office of Science and the National Nuclear Security Administration (NNSA) have issued a joint Request for Proposals for advanced scientific computing research. DOE expects to fund $67 million annually for three to five years under its Scientific Discovery through Advanced Computing (SciDAC) research program. Scientific computing, including modeling and simulation, has become crucial for research problems that are insoluble by traditional theoretical and experimental approaches, hazardous to study in the laboratory, or time-consuming or expensive to solve by traditional means.

117

NERSC Role in Advanced Scientific Computing Research Katherine Yelick  

NLE Websites -- All DOE Office Websites (Extended Search)

Advanced Scientific Computing Research. Katherine Yelick, NERSC Director. Requirements Workshop. NERSC Mission: The mission of the National Energy Research Scientific Computing Center (NERSC) is to accelerate the pace of scientific discovery by providing high performance computing, information, data, and communications services for all DOE Office of Science (SC) research. Sample Scientific Accomplishments at NERSC: Award-winning software uses massively-parallel supercomputing to map hydrocarbon reservoirs at unprecedented levels of detail (Greg Newman, LBNL). Combustion: Adaptive Mesh Refinement allows simulation of a fuel-flexible low-swirl burner that is orders of magnitude larger & more detailed than traditional reacting flow simulations allow.

118

Data mining techniques for large-scale gene expression analysis  

E-Print Network (OSTI)

Modern computational biology is awash in large-scale data mining problems. Several high-throughput technologies have been developed that enable us, with relative ease and little expense, to evaluate the coordinated expression ...

Palmer, Nathan Patrick

2011-01-01T23:59:59.000Z

119

National Energy Research Scientific Computing Center NERSC Exceeds Reliability  

NLE Websites -- All DOE Office Websites (Extended Search)

Scientific Computing Center: NERSC Exceeds Reliability Standards With Tape-Based Active Archive. Research Facility Accelerates Access to Data while Supporting Exponential Growth. Founded in 1974, the National Energy Research Scientific Computing Center (NERSC) is the primary scientific computing facility for the Office of Science in the U.S. Department of Energy. NERSC is located at Lawrence Berkeley National Laboratory's Oakland Scientific Facility in Oakland, California and is mandated with providing computational resources and expertise for scientific research to about 5,000 scientists at national laboratories and universities across the United States, as well as their international collaborators. A division of Lawrence Berkeley National Laboratory, NERSC supports

120

National Energy Research Scientific Computing Center (NERSC) Awards  

NLE Websites -- All DOE Office Websites (Extended Search)

National Energy Research Scientific Computing Center (NERSC) Awards Supercomputer Contract to Cray. August 5, 2009. BERKELEY, CA - The Department of Energy's (DOE) National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory announced today that a contract for its next generation supercomputing system will be awarded to Cray Inc. The multi-year supercomputing contract includes delivery of a Cray XT5(tm) massively parallel processor supercomputer, which will be upgraded to a future-generation Cray supercomputer. When completed, the new system will deliver a peak performance of more than one petaflops, equivalent to more



121

Scientific Computations on Modern Parallel Vector Systems  

Science Conference Proceedings (OSTI)

Computational scientists have seen a frustrating trend of stagnating application performance despite dramatic increases in the claimed peak capability of high performance computing systems. This trend has been widely attributed to the use of superscalar-based ...

Leonid Oliker; Andrew Canning; Jonathan Carter; John Shalf; Stephane Ethier

2004-11-01T23:59:59.000Z

122

Accelerating Scientific Discovery Through Computation and ...  

Science Conference Proceedings (OSTI)

... in microchannel devices, and the modeling of hydrodynamic dispersion. ... period, Moore's law [51] has increased computing power dramatically, so ...

2010-10-19T23:59:59.000Z

123

New methods of secure outsourcing of scientific computations  

Science Conference Proceedings (OSTI)

In this paper, we present several methods of secure outsourcing of numerical and scientific computations. Current outsourcing techniques are inspired by the numerous problems in computational mathematics, where a solution is obtained in the form of an ... Keywords: Secure cloud computing, Secure outsourcing

Yerzhan N. Seitkulov

2013-07-01T23:59:59.000Z
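
For concreteness, one classic flavor of such outsourcing, a disguise-based scheme for matrix multiplication in the style of Atallah et al. (we do not claim it is the method of the paper above), blinds the inputs with random invertible diagonal matrices so the untrusted server learns neither A nor B yet performs the O(n^3) work:

    import numpy as np

    def outsource_matmul(A, B, server, rng=np.random.default_rng()):
        """Blind A and B with random invertible diagonal matrices, let an
        untrusted `server` multiply the disguised matrices, then unblind.
        Client-side work is O(n^2); the O(n^3) multiply runs remotely."""
        n, m, p = A.shape[0], A.shape[1], B.shape[1]
        d1 = rng.uniform(1, 2, n)   # random positive entries -> invertible
        d2 = rng.uniform(1, 2, m)
        d3 = rng.uniform(1, 2, p)
        A_blind = (d1[:, None] * A) * d2[None, :]      # D1 A D2
        B_blind = (B / d2[:, None]) * d3[None, :]      # D2^{-1} B D3
        C_blind = server(A_blind, B_blind)             # D1 (A B) D3
        return (C_blind / d1[:, None]) / d3[None, :]   # recover A B

    A, B = np.random.rand(3, 4), np.random.rand(4, 2)
    C = outsource_matmul(A, B, server=lambda X, Y: X @ Y)
    assert np.allclose(C, A @ B)

The client does only O(n^2) work to blind and unblind, which is the defining economy of these schemes; their security guarantees are much weaker than cryptographic ones and are the subject of the literature this record belongs to.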

124

Energy Department Requests Proposals for Advanced Scientific Computing  

Office of Science (SC) Website

Energy Department Requests Proposals for Advanced Scientific Computing Research. 12.27.05. WASHINGTON, DC - The Department of Energy's Office of Science and the National Nuclear Security Administration (NNSA) have issued a joint Request for Proposals for advanced scientific computing research. DOE expects to fund $67 million annually for three to five years under its Scientific Discovery through Advanced Computing (SciDAC) research program.

125

Partial Evaluation for Scientific Computing: The Supercomputer Toolkit Experience  

E-Print Network (OSTI)

We describe the key role played by partial evaluation in the Supercomputer Toolkit, a parallel computing system for scientific applications that effectively exploits the vast amount of parallelism exposed by partial ...

Berlin, Andrew

1994-05-01T23:59:59.000Z
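
To illustrate the core idea on a toy scale (this is our own example, far simpler than the Toolkit's compiler, which specialized whole numerical programs), partial evaluation runs the parts of a program that depend only on early-known inputs, leaving a residual program for the rest:

    def make_poly_evaluator(coeffs):
        """Partially evaluate Horner's rule with respect to known
        coefficients: the loop over `coeffs` runs once, at specialization
        time, leaving a straight-line residual function of x alone."""
        src = "0.0"
        for c in coeffs:
            src = f"({src}) * x + {c!r}"
        return eval(f"lambda x: {src}")

    p = make_poly_evaluator([2.0, -3.0, 1.0])   # 2x^2 - 3x + 1
    print(p(4.0))                                # 21.0

The residual lambda is unrolled, branch-free code in x alone, exactly the kind of straight-line computation whose operation-level parallelism the abstract says partial evaluation exposes.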

126

Energy Department Seeks Proposals to Use Scientific Computing...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE's missions," said Secretary Bodman. "This program opens up the world of high-performance computing to a broad array of scientific users. Through the use of these advanced...

127

National Energy Research Scientific Computing Center 2007 Annual Report  

SciTech Connect

This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services as well as activities of NERSC staff.

Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

2008-10-23T23:59:59.000Z

128

Berkeley Lab Computing Sciences: Accelerating Scientific Discovery  

E-Print Network (OSTI)

facilities — NERSC and ESnet — and by conducting applied ... COMPUTATIONAL SCIENCE: ESnet is a reliable, high-bandwidth ... development. NERSC and ESnet staff participate in advanced

Hules, John A

2009-01-01T23:59:59.000Z

129

Exploring Cloud Computing for DOE's Scientific Mission  

NLE Websites -- All DOE Office Websites (Extended Search)

computing is gaining traction in the commercial world, with companies like Amazon, Google, and Yahoo offering pay-to-play cycles to help organizations meet cyclical demands for...

130

Energy Basics: Large-Scale Hydropower  

Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


131

Virtual screening on large scale grids  

Science Conference Proceedings (OSTI)

Large scale grids for in silico drug discovery open opportunities of particular interest to neglected and emerging diseases. In 2005 and 2006, we have been able to deploy large scale virtual docking within the framework of the WISDOM initiative against ... Keywords: Avian influenza, Large scale grids, Malaria, Virtual screening

Nicolas Jacq; Vincent Breton; Hsin-Yen Chen; Li-Yung Ho; Martin Hofmann; Vinod Kasam; Hurng-Chun Lee; Yannick Legré; Simon C. Lin; Astrid Maaß; Emmanuel Medernach; Ivan Merelli; Luciano Milanesi; Giulio Rastelli; Matthieu Reichstadt; Jean Salzemann; Horst Schwichtenberg; Ying-Ta Wu; Marc Zimmermann

2007-05-01T23:59:59.000Z

132

Storage Hierarchy Management for Scientific Computing  

E-Print Network (OSTI)

of the driving forces behind the design of computer systems. As a result, many advances in CPU architecture were ... terabyte tertiary storage system attached to a high-speed computer. The analysis finds that the number of files ... instead of the two separate views of the system studied. This finding was a major motivation of the design

Miller, Ethan L.

133

Storage Hierarchy Management for Scientific Computing  

E-Print Network (OSTI)

the design of computer systems. As a result, many advances in CPU architecture were first developed for high-speed supercomputer systems, keeping them among the fastest computers in the world. However ... system attached to a high-speed computer. The analysis finds that the number of files and average file

Miller, Ethan L.

134

A Tractable Approach to Understanding the Results from Large-Scale 3D Transient  

E-Print Network (OSTI)

problems or NASA's HPCC (High Performance Computing & Communication) grand challenges, can easily ... Introduction: Large-scale simulations of physical phenomena on high performance computing systems (often on mas

Peraire, Jaime

135

Multicore Platforms for Scientific Computing: Cell BE and NVIDIA Tesla  

E-Print Network (OSTI)

Multicore Platforms for Scientific Computing: Cell BE and NVIDIA Tesla J. Fern´andez, M.E. Acacio Tesla computing solutions. The former is a re- cent heterogeneous chip-multiprocessor (CMP) architecture, multicore, Cell BE, NVIDIA Tesla, CUDA 1 Introduction Nowadays, multicore architectures are omnipresent

Acacio, Manuel

136

Matthew R. Norman Scientific Computing Group  

E-Print Network (OSTI)

-present, Porting the Community Atmosphere Model - Spectral Element (CAM-SE) to ORNL's Titan Supercomputer ... National Laboratory, PO BOX 2008 MS6016, Oak Ridge, TN 37831, USA; normanmr@ornl.gov; (865) 576-1757 ... Education ... scale atmospheric simulation code, to run on Oak Ridge Leadership Computing Facility's (OLCF's) Titan super

137

Scientific computations section monthly report, November 1993  

Science Conference Proceedings (OSTI)

This progress report from the Savannah River Technology Center contains abstracts of papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, and plutonium disposition.

Buckner, M.R.

1993-12-30T23:59:59.000Z

138

A Desktop Grid Computing Approach Scientific Computing and Visualization  

E-Print Network (OSTI)

limit the availability of such systems. A more convenient solution, which is becoming more and more ... of desktop computers provides for this solution. In a desktop grid system, the execution of an application ... and Visualization experiments. We present here QADPZ, an open source system for desktop grid computing that have

139

MA50177: Scientific Computing Nuclear Reactor Simulation Generalised Eigenvalue Problems  

E-Print Network (OSTI)

MA50177: Scientific Computing Case Study: Nuclear Reactor Simulation - Generalised Eigenvalue ... of a malfunction or of an accident experimentally, the numerical simulation of nuclear reactors is of utmost ... balance in a nuclear reactor are the two-group neutron diffusion equations -div(K_1 ∇u_1) + (σ_{a,1} + σ_s) u_1 = (1/λ) (...)

Scheichl, Robert
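
In case it helps the reader, discretizing coupled diffusion equations of this kind (a sketch of the standard formulation; the case study's exact notation may differ) leads to a large sparse generalized eigenvalue problem

    A u = λ B u,

where A collects the discretized diffusion and absorption/scattering terms and B the fission-production terms. The eigenvalue of interest, related to the reactor's effective multiplication factor, is typically found by inverse power iteration: repeatedly solve A u^(m+1) = B u^(m) and normalize, so each step costs one sparse linear solve.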

140

Exposing Digital Forgeries in Scientific Images Department of Computer Science  

E-Print Network (OSTI)

that as many as 20% of accepted manuscripts contain figures with inappropriate manipulations, and 1% with fraudulent manipulations. Several scientific editors are considering putting safeguards in place to help ... to be a need for computational techniques that automatically detect common forms of tampering. We describe

Farid, Hany



141

Advanced Scientific Computing Advisory Committee (ASCAC) Homepage | U.S.  

Office of Science (SC) Website

Advanced Scientific Computing Advisory Committee (ASCAC). Exascale Advisory Committee Report, "The Opportunities and Challenges of Exascale Computing" (.pdf, 2.1MB): the Exascale initiative will be significant and transformative for Department of Energy missions; the ASCAC Subcommittee report is available to review. Contact ASCAC: Email: ascr@science.doe.gov, Phone: 301-903-7486. ASCAC DFO: Mrs. Christine Chalk. Committee Managers: Mrs. Melea Baker, Dr. Lucy Nowell. Committee Chair: Dr. Roscoe C. Giles. ASCR AD: J. Steve Binkley. The Advanced Scientific Computing Advisory Committee (ASCAC), established

142

National Energy Research Scientific Computing Center (NERSC) | U.S. DOE  

Office of Science (SC) Website

National Energy Research Scientific Computing Center (NERSC), an Advanced Scientific Computing Research (ASCR) facility alongside the Oak Ridge Leadership Computing Facility (OLCF), the Argonne Leadership Computing Facility (ALCF), the Energy Sciences Network (ESnet), and Research & Evaluation Prototypes (REP); access programs include Innovative & Novel Computational Impact on Theory and Experiment (INCITE) and the ASCR Leadership Computing Challenge (ALCC). Contact: Advanced Scientific Computing Research, U.S. Department of Energy, SC-21/Germantown Building, 1000 Independence Ave., SW, Washington, DC 20585. P: (301) 903-7486. F: (301)

143

Remote visualization of large scale data for ultra-high resolution display environments  

Science Conference Proceedings (OSTI)

ParaView is one of the most widely used scientific tools that support parallel visualization of large scale data. The Scalable Adaptive Graphics Environment (SAGE) is a graphics middleware that enables real-time streaming of ultra-high resolution visual ... Keywords: ParaView, SAGE, large-scale data, remote visualization, ultra-high resolution visualization

Sungwon Nam; Byungil Jeong; Luc Renambot; Andrew Johnson; Kelly Gaither; Jason Leigh

2009-11-01T23:59:59.000Z

144

Large Scale Quantum-mechanical Calculations of Proteins, Nanomaterials and  

NLE Websites -- All DOE Office Websites (Extended Search)

Large Scale Quantum-mechanical Calculations of Proteins, Nanomaterials and Other Large Systems. Event Sponsor: Leadership Computing Facility Seminar. Start Date: Dec 5 2013 - 2:00pm. Building/Room: Building 240/Room 4301. Location: Argonne National Laboratory. Speaker(s): Dmitri G. Fedorov. Speaker(s) Title: National Institute of Advanced Industrial Science and Technology (AIST). Host: Yuri Alexeev. Our approach to large scale calculations is based on fragmenting a molecular system into pieces, and performing quantum-mechanical calculations of these fragments and their pairs in the fragment molecular orbital method (FMO). After a brief summary of the methodology, some typical applications to protein-ligand complexes, chemical reactions in explicit solvent, and nanomaterials (silicon nanowires, zeolites, ...)
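
For reference, the standard two-body FMO expansion (FMO2) assembles the total energy from fragment ("monomer") energies E_I and fragment-pair ("dimer") energies E_IJ:

    E \approx \sum_I E_I + \sum_{I>J} \left( E_{IJ} - E_I - E_J \right),

so only calculations on fragments and their pairs are ever required, which is what lets the method scale to proteins and nanomaterials.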

145

Energy Basics: Large-Scale Hydropower  

Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

These plants are more than 30 MW in size, and there is more than 80,000 MW of installed generation capacity in the United States today. Most large-scale hydropower projects use a...

146

Program Management for Large Scale Engineering Programs  

E-Print Network (OSTI)

The goal of this whitepaper is to summarize the LAI research that applies to program management. The context of most of the research discussed in this whitepaper is large-scale engineering programs, particularly in the ...

Oehmen, Josef

147

Large-Scale Offshore Wind Power  

NLE Websites -- All DOE Office Websites (Extended Search)

Large-Scale Offshore Wind Power in the United States EXECUTIVE SUMMARY September 2010 NOTICE This report was prepared as an account of work sponsored by an agency of the United...

148

Large-Scale Hydrogen Combustion Experiments  

Science Conference Proceedings (OSTI)

Large-scale combustion experiments show that deliberate ignition can limit hydrogen accumulation in reactor containments. The collected data allow accurate evaluation of containment pressures and temperatures associated with hydrogen combustion.

1988-10-18T23:59:59.000Z

149

Large-Scale Dynamics and Global Warming  

Science Conference Proceedings (OSTI)

Predictions of future climate change raise a variety of issues in large-scale atmospheric and oceanic dynamics. Several of these are reviewed in this essay, including the sensitivity of the circulation of the Atlantic Ocean to increasing ...

Isaac M. Held

1993-02-01T23:59:59.000Z

150

A Parallel Euler Approach for Large-Scale Biological Sequence Assembly  

Science Conference Proceedings (OSTI)

Biological sequence assembly is an essential step in sequencing the genomes of organisms. Sequence assembly is very computationally intensive, especially for large-scale sequence assembly. Parallel computing is an effective way to reduce the computing ...

2005-07-01T23:59:59.000Z
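
The "Euler approach" in the title refers to Pevzner-style assembly: reads are broken into k-mers, a de Bruijn graph is built on (k-1)-mers, and the sequence is read off an Eulerian path. A minimal serial sketch in Python (the parallel formulation would partition the k-mer space across processors; sequencing errors and repeats are ignored here):

    from collections import defaultdict

    def assemble(reads, k=4):
        """Build a de Bruijn graph on (k-1)-mers from the reads' unique
        k-mers, then greedily walk the Eulerian path (assumes error-free
        reads and a unique path)."""
        kmers = {r[i:i+k] for r in reads for i in range(len(r) - k + 1)}
        graph, indeg = defaultdict(list), defaultdict(int)
        for kmer in kmers:
            graph[kmer[:-1]].append(kmer[1:])
            indeg[kmer[1:]] += 1
        # the path starts where out-degree exceeds in-degree
        start = next(n for n in list(graph) if len(graph[n]) > indeg[n])
        contig, node = start, start
        while graph[node]:
            node = graph[node].pop()
            contig += node[-1]
        return contig

    print(assemble(["ATGGCG", "GGCGTG", "CGTGCA"], k=4))  # ATGGCGTGCA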

151

The Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA)  

NLE Websites -- All DOE Office Websites (Extended Search)

The Large Scale Biosphere-Atmosphere Experiment in Amazonia (LBA): Overview. The Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) is an international research initiative conducted from 1995-2005 and led by Brazil. The LBA Project encompasses several scientific disciplines, or components. The LBA-ECO component focuses on the question: "How do tropical forest conversion, regrowth, and selective logging influence carbon storage, nutrient dynamics, trace gas fluxes, and the prospect for sustainable land use in Amazonia?" The Amazon rain forest, or Amazonia, is the largest remaining expanse of tropical rain forest on Earth, harboring approximately one-third of all Earth's species. Although the rain forest's area is so large that it

152

Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems  

SciTech Connect

The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

Carey, G.F.; Young, D.M.

1993-12-31T23:59:59.000Z
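
As a toy illustration of the partitioning-plus-iterative-solver theme in this abstract, the following sketch runs conjugate gradients with a block-Jacobi preconditioner, the simplest domain-decomposition-style preconditioner, with each diagonal block playing the role of a subdomain. A production code would factor and apply the blocks in parallel rather than invert them densely.

    import numpy as np

    def block_jacobi_cg(A, b, block, tol=1e-8, maxit=500):
        """CG on SPD A, preconditioned by inverting diagonal blocks of
        size `block` (each block ~ one subdomain solved independently)."""
        n = len(b)
        starts = range(0, n, block)
        inv_blocks = [np.linalg.inv(A[s:s+block, s:s+block]) for s in starts]

        def M_inv(r):                      # apply preconditioner blockwise
            z = np.empty_like(r)
            for s, Binv in zip(starts, inv_blocks):
                z[s:s+block] = Binv @ r[s:s+block]
            return z

        x = np.zeros(n)
        r = b - A @ x
        z = M_inv(r); p = z.copy(); rz = r @ z
        for _ in range(maxit):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

Because each block solve touches only local data, the preconditioner application parallelizes naturally, which is the property the abstract's "partitioning schemes" exploit at scale.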

153

Grid infrastructure to support science portals for large scale instruments.  

SciTech Connect

Soon, a new generation of scientific workbenches will be developed as a collaborative effort among various research institutions in the US. These scientific workbenches will be accessed in the Web via portals. Reusable components are needed to build such portals for different scientific disciplines, allowing uniform desktop access to remote resources. Such components will include tools and services enabling easy collaboration, job submission, job monitoring, component discovery, and persistent object storage. Based on experience gained from Grand Challenge applications for large-scale instruments, we demonstrate how Grid infrastructure components can be used to support the implementation of science portals. The availability of these components will simplify the prototype implementation of a common portal architecture.

von Laszewski, G.; Foster, I.

1999-09-29T23:59:59.000Z

154

U.S. Department of Energy Scientific Discovery through Advanced Computing SciDAC 2010  

E-Print Network (OSTI)

U.S. Department of Energy, Scientific Discovery through Advanced Computing (SciDAC) 2010: Dream beams.

Geddes, Cameron Guy Robinson

155

National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology  

Science Conference Proceedings (OSTI)

National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

Hules, J. [ed.

1996-11-01T23:59:59.000Z

156

Distributed large-scale natural graph factorization  

Science Conference Proceedings (OSTI)

Natural graphs, such as social networks, email graphs, or instant messaging patterns, have become pervasive through the internet. These graphs are massive, often containing hundreds of millions of nodes and billions of edges. While some theoretical models ... Keywords: asynchronous algorithms, distributed optimization, graph algorithms, graph factorization, large-scale machine learning, matrix factorization

Amr Ahmed, Nino Shervashidze, Shravan Narayanamurthy, Vanja Josifovski, Alexander J. Smola

2013-05-01T23:59:59.000Z

157

Scaling Issues for Large-Scale Grids  

E-Print Network (OSTI)

ESnet can play a very important role in the Science Grid. Security aspects of Grids: ESnet can provide ... that will be important and very useful for managing large-scale virtual organization structures. ESnet can provide a rooted and managed namespace, and a place to home

158

A Distribution Oblivious Scalable Approach for Large-Scale Scientific...  

NLE Websites -- All DOE Office Websites (Extended Search)

Develop distribution-independent parallel spatial data structures; develop core parallel region query algorithms whose runtimes are data distribution-independent. Benefit: will enable...

159

Scientific Discovery through Advanced Computing (SciDAC) | U.S. DOE Office  

Office of Science (SC) Website

Scientific Discovery through Advanced Computing (SciDAC), a research program of Advanced Scientific Computing Research (ASCR), U.S. Department of Energy Office of Science.

160

Architecture and applications of the HEP multiprocessor computer system  

Science Conference Proceedings (OSTI)

The HEP computer system is a large scale scientific parallel computer employing shared-resource MIMD architecture. The hardware and software facilities provided by the system are described, and techniques found useful in programming the system are discussed. 3 references.

Smith, B.J.

1981-01-01T23:59:59.000Z



161

Scientific and Computational Challenges of the Fusion Simulation Program (FSP)  

SciTech Connect

This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP), a major national initiative in the United States whose primary objective is to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically confined fusion plasmas that is properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond, together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER, a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e.g., climate modeling), the FSP will need to develop software in close collaboration with computer scientists and applied mathematicians and validate it against experimental data from tokamaks around the world. Specific examples of expected advances needed to enable such a comprehensive integrated modeling capability and possible "co-design" approaches will be discussed.

William M. Tang

2011-02-09T23:59:59.000Z

162

Large-Scale Renewable Energy Guide  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Large-scale RE Guide: Developing Renewable Energy Projects Larger than 10 MWs at Federal Facilities. Introduction and Overview. Brad Gustafson, Federal Energy Management Program (FEMP), Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy. Federal Utility Partnership Working Group, May 22, 2013. FEMP works with key individuals to accomplish energy change within organizations by bringing expertise from all levels of project and policy implementation to enable Federal Agencies to meet energy related goals and to provide energy leadership to the country. FEMP Renewable Energy works to increase the proportion of renewable energy in the Federal government's energy mix.

163

A Topological Framework for the Interactive Exploration of Large Scale Turbulent Combustion  

E-Print Network (OSTI)

A Topological Framework for the Interactive Exploration of Large Scale Turbulent Combustion ... a new topological framework for the analysis of large scale, time-varying, turbulent combustion ... consumption thresholds for an entire time-dependent combustion simulation. By computing augmented merge

Tierny, Julien

164

Strategies to Finance Large-Scale Deployment of Renewable Energy...  

Open Energy Info (EERE)

Strategies to Finance Large-Scale Deployment of Renewable Energy Projects: An Economic Development and Infrastructure Approach

165

Supporting large-scale science with workflows  

Science Conference Proceedings (OSTI)

Current workflow systems support data flow through complex analyses in a distributed environment. The scope of scientific workflow systems could expand to support the entire scientific research cycle, which includes data flow, design flow, and knowledge ... Keywords: needs assessment, scientific workflows

Deana D. Pennington

2007-06-01T23:59:59.000Z

166

Supporting Advanced Scientific Computing Research Basic Energy Sciences Biological  

E-Print Network (OSTI)

Large-Scale Science: DOE's ESnet. William E. Johnston, ESnet Manager and Senior Scientist, DOE Lawrence Berkeley National Laboratory. Describes the approach and architecture for DOE's Energy Sciences Network (ESnet), which is the network that serves the DOE Office of Science community. ESnet's role in the DOE Office of Science: "The Office of Science of the US Dept. of Energy

167

Microsoft Word - The_Advanced_Networks_and_Services_Underpinning_Modern,Large-Scale_Science.SciDAC.v5.doc  

NLE Websites -- All DOE Office Websites (Extended Search)

Advanced Networking and Services Supporting the Science Mission of DOE's Office of Science. William E. Johnston, ESnet Dept. Head and Senior Scientist, Lawrence Berkeley National Laboratory, May 2007. 1 Introduction. In many ways, the dramatic achievements in scientific discovery through advanced computing and the discoveries of the increasingly large-scale instruments, with their enormous data handling and remote collaboration requirements, have been made possible by accompanying accomplishments in high performance networking. As increasingly advanced supercomputers and experimental research facilities have provided researchers powerful tools with unprecedented capabilities, advancements in networks connecting scientists to these tools have made these research facilities available to broader communities

168

Large-Scale PV Integration Study  

DOE Green Energy (OSTI)

This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

2011-07-29T23:59:59.000Z

169

A Topological Framework for the Interactive Exploration of Large Scale Turbulent Combustion  

SciTech Connect

The advent of highly accurate, large scale volumetric simulations has made data analysis and visualization techniques an integral part of the modern scientific process. To develop new insights from raw data, scientists need the ability to define features of interest in a flexible manner and to understand how changes in the feature definition impact the subsequent analysis of the data. Therefore, simply exploring the raw data is not sufficient. This paper presents a new topological framework for the analysis of large scale, time-varying, turbulent combustion simulations. It allows the scientists to explore interactively the complete parameter space of fuel consumption thresholds for an entire time-dependent combustion simulation. By computing augmented merge trees and their corresponding data segmentations, the system allows the user complete flexibility to segment, select, and track burning cells through time thanks to a linked view interface. We developed this technique in the context of low-swirl turbulent pre-mixed flame simulation analysis, where the topological abstractions enable an efficient tracking through time of the burning cells and provide new qualitative and quantitative insights into the dynamics of the combustion process.

Bremer, P; Weber, G; Tierny, J; Pascucci, V; Day, M; Bell, J

2009-09-29T23:59:59.000Z
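A hedged sketch of the segmentation step this abstract describes: for a fixed fuel-consumption threshold, the "burning cells" are the connected components of the superlevel set of the scalar field, which a union-find sweep labels directly (an augmented merge tree encodes this for every threshold at once). The toy field and grid are assumptions; this is not the authors' code.

import numpy as np

def burning_segments(field, thresh):
    # Label connected components of {field >= thresh} (4-neighborhood).
    ny, nx = field.shape
    parent = {}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]   # path halving
            c = parent[c]
        return c

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    active = [(y, x) for y in range(ny) for x in range(nx)
              if field[y, x] >= thresh]
    for c in active:
        parent[c] = c
    for (y, x) in active:
        for nb in ((y - 1, x), (y, x - 1)):  # visit each grid edge once
            if nb in parent:
                union((y, x), nb)
    return {c: find(c) for c in active}      # cell -> component label

rng = np.random.default_rng(0)
f = rng.random((64, 64))                     # toy "fuel consumption" field
seg = burning_segments(f, thresh=0.9)
print(len(set(seg.values())), "components over", len(seg), "burning cells")

Sweeping thresh from high to low and recording where components merge yields the merge tree itself, and tracking component overlap between time steps gives the temporal tracking described above.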

170

Large-Scale Environmental Parameters Associated with Tropical Cyclone Formations in the Western North Pacific  

Science Conference Proceedings (OSTI)

The local environmental conditions associated with 405 tropical cyclone (TC) formations in the western North Pacific during 1990–2001 are examined in this study. Six large-scale parameters are obtained and computed from the NCEP reanalyses with ...

Kevin K. W. Cheung

2004-02-01T23:59:59.000Z

171

Model-constrained optimization methods for reduction of parameterized large-scale systems  

E-Print Network (OSTI)

Most model reduction techniques employ a projection framework that utilizes a reduced-space basis. The basis is usually formed as the span of a set of solutions of the large-scale system, which are computed for selected ...

Bui-Thanh, Tan

2007-01-01T23:59:59.000Z
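A minimal sketch of the projection framework this abstract describes, using POD (SVD of a snapshot matrix) as one common way to form the reduced basis; the thesis's model-constrained optimization approach to basis construction is not reproduced here, and the toy linear system is an assumption.

import numpy as np

rng = np.random.default_rng(1)
n = 400                                     # "large-scale" state dimension
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))

def simulate(x0, steps=50, dt=0.01):
    # Forward-Euler trajectory of dx/dt = A x; columns are snapshots.
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * (A @ x)
        xs.append(x)
    return np.stack(xs, axis=1)

# Basis = span of selected large-scale solutions, truncated via SVD (POD).
snapshots = np.hstack([simulate(rng.standard_normal(n)) for _ in range(5)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 20
V = U[:, :k]

# Galerkin projection: the reduced model evolves k coordinates, not n.
A_r = V.T @ A @ V
x0 = rng.standard_normal(n)
z = V.T @ x0
for _ in range(50):
    z = z + 0.01 * (A_r @ z)

x_full = simulate(x0)[:, -1]
print(np.linalg.norm(V @ z - x_full) / np.linalg.norm(x_full))  # reduction error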

172

Good bones: anthropological scientific collaboration around computed tomography data  

Science Conference Proceedings (OSTI)

We report preliminary results from a socio-technical analysis of scientific collaboration, specifically a loosely connected group of physical anthropology researchers. Working from a combination of interview data and artifact analysis, we identify current ... Keywords: scientific collaboratories, virtual organizations

Andrea H. Tapia; Rosalie Ocker; Mary Beth Rosson; Bridget Blodgett

2011-02-01T23:59:59.000Z

173

Scientific Application Requirements for Leadership Computing at the Exascale  

Science Conference Proceedings (OSTI)

The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease is foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity.

These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops [floating point operations per second], while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a r

Ahern, Sean [ORNL; Alam, Sadaf R [ORNL; Fahey, Mark R [ORNL; Hartman-Baker, Rebecca J [ORNL; Barrett, Richard F [ORNL; Kendall, Ricky A [ORNL; Kothe, Douglas B [ORNL; Mills, Richard T [ORNL; Sankaran, Ramanan [ORNL; Tharrington, Arnold N [ORNL; White III, James B [ORNL

2007-12-01T23:59:59.000Z

174

Large-scale sodium spray fire code validation (SOFICOV) test  

Science Conference Proceedings (OSTI)

A large-scale, sodium, spray fire code validation test was performed in the HEDL 850-m³ Containment System Test Facility (CSTF) as part of the Sodium Spray Fire Code Validation (SOFICOV) program. Six hundred fifty-eight kilograms of sodium was sprayed into an air atmosphere over a period of 2400 s. The sodium spray droplet sizes and spray pattern distribution were estimated. The containment atmosphere temperature and pressure response, containment wall temperature response, and sodium reaction rate with oxygen were measured. These results are compared to post-test predictions made using the SPRAY and NACOM computer codes.

Jeppson, D.W.; Muhlestein, L.D.

1985-01-01T23:59:59.000Z

175

Solving Large-scale Eigenvalue Problems in SciDACApplications  

SciTech Connect

Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods, and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculation. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.

Yang, Chao

2005-06-29T23:59:59.000Z
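The Krylov-subspace starting point of the paper can be illustrated with SciPy's Lanczos-based eigsh; the shift-invert call shows a spectral transformation of the kind that motivates preconditioned alternatives when interior or clustered eigenvalues stall plain Krylov iteration. The 2-D Laplacian test operator is an assumption, not a SciDAC matrix.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse symmetric test operator: 2-D Laplacian via a Kronecker sum.
n = 64
L1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(L1, L1).tocsc()

# Plain Lanczos handles extremal eigenvalues well...
smallest = eigsh(A, k=5, which="SM", return_eigenvectors=False)

# ...while shift-invert targets eigenvalues near sigma, where plain
# Krylov iteration converges slowly.
interior = eigsh(A, k=5, sigma=0.5, which="LM", return_eigenvectors=False)
print(np.sort(smallest), np.sort(interior))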

176

National facility for advanced computational science: A sustainable path to scientific discovery  

E-Print Network (OSTI)

National Energy Research Scientific Computing (NERSC) Center, 1996-present. Services and Systems at NERSC (Oct. 1, 1997 - Dec. 31, 1998). Chief Architect, NERSC Division, Lawrence Berkeley

2004-01-01T23:59:59.000Z

177

Large-scale semidefinite programs in electronic structure calculation  

E-Print Network (OSTI)

National Energy Research Scientific Computing Center (NERSC), and eagle (4 × 375 MHz Power3-II, with a level-two cache of size 8 MB and 2 GB of memory per ...

178

The large scale clustering of radio sources  

E-Print Network (OSTI)

The observed two-point angular correlation function, w(theta), of mJy radio sources exhibits the puzzling feature of a power-law behaviour up to very large (almost 10 degrees) angular scales which cannot be accounted for in the standard hierarchical clustering scenario for any realistic redshift distribution of such sources. After having discarded the possibility that the signal can be explained by a high density local source population, we find no alternatives to assuming that - at variance with all the other extragalactic populations studied so far, and in particular with optically selected quasars - radio sources responsible for the large-scale clustering signal were increasingly less clustered with increasing look-back time, up to at least z=1. The data are accurately accounted for in terms of a bias function which decreases with increasing redshift, mirroring the evolution with cosmic time of the characteristic halo mass, M_{star}, entering the non linear regime. In the framework of the `concordance cosmology', the effective halo mass controlling the bias parameter is found to decrease from about 10^{15} M_{sun}/h at z=0 to the value appropriate for optically selected quasars, 10^{13} M_{sun}/h, at z=1.5. This suggests that, in the redshift range probed by the data, the clustering evolution of radio sources is ruled by the growth of large-scale structure, and that they are associated with the densest environments virializing at any cosmic epoch. The data provide only loose constraints on radio source clustering at z>1 so we cannot rule out the possibility that at these redshifts the clustering evolution of radio sources enters a different regime, perhaps similar to that found for optically selected quasars. The dependence of w(theta) on cosmological parameters is also discussed.

M. Negrello; M. Magliocchetti; G. De Zotti

2006-02-13T23:59:59.000Z
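The two statistics at the heart of this analysis can be written out; in conventional notation (a sketch of the standard definitions, not equations quoted from the paper), the angular correlation function is estimated from pair counts and compared against a power law, with the bias linking source and mass clustering:

\[
\hat{w}(\theta) = \frac{DD(\theta) - 2\,DR(\theta) + RR(\theta)}{RR(\theta)},
\qquad
w(\theta) = A\,\theta^{1-\gamma},
\qquad
w_{\mathrm{src}}(\theta) \simeq b_{\mathrm{eff}}^{2}\, w_{\mathrm{mass}}(\theta),
\]

where DD, DR, and RR count data-data, data-random, and random-random pairs at separation θ, and b_eff is the effective bias tied to the characteristic halo mass discussed above.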

179

Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts  

SciTech Connect

This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

NONE

1997-12-31T23:59:59.000Z

180

The Potential of the Cell Processor for Scientific Computing  

E-Print Network (OSTI)

Journal of High Performance Computing Applications, 2004. Novel Processor Architecture for High Performance Computing. High Performance Computing in the Asia-Pacific Region.

Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

2005-01-01T23:59:59.000Z



181

National Energy Research Scientific Computing Center 2007 Annual Report  

E-Print Network (OSTI)

... and Directions in High Performance Computing for the Office ... in the evolution of high performance computing and networks.

Hules, John A.

2008-01-01T23:59:59.000Z

182

Parallel Index and Query for Large Scale Data Analysis  

SciTech Connect

Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.

Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

2011-07-18T23:59:59.000Z
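The bitmap-index idea underlying FastBit/FastQuery can be sketched generically: precompute one bitmask per value bin, answer a range condition by OR-ing bitmaps, and combine attributes with AND. This Python sketch illustrates the principle only; the binning scheme and class names are assumptions, not FastBit's actual API or compressed encoding.

import numpy as np

class BitmapIndex:
    # Binned (equality-encoded) bitmap index: one boolean mask per bin.
    def __init__(self, values, bins):
        self.edges = np.linspace(values.min(), values.max(), bins + 1)
        which = np.clip(np.digitize(values, self.edges) - 1, 0, bins - 1)
        self.bitmaps = [which == b for b in range(bins)]

    def range_query(self, lo, hi):
        # OR together bitmaps of every bin overlapping [lo, hi).
        hit = np.zeros_like(self.bitmaps[0])
        for bmp, l, r in zip(self.bitmaps, self.edges[:-1], self.edges[1:]):
            if r > lo and l < hi:
                hit |= bmp
        return hit            # bin-granular: recheck the exact predicate after

rng = np.random.default_rng(2)
energy = rng.random(1_000_000)
px = rng.standard_normal(1_000_000)
idx_e, idx_p = BitmapIndex(energy, 64), BitmapIndex(px, 64)

# Compound query ("interesting particles"): AND across attribute indexes.
cand = idx_e.range_query(0.9, 1.0) & idx_p.range_query(2.0, np.inf)
print(cand.sum(), "candidate rows")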

183

Merit Review Procedures for Advanced Scientific Computing Research...  

Office of Science (SC) Website


184

Evaluating the potential of multithreaded platforms for irregular scientific computations  

Science Conference Proceedings (OSTI)

The resurgence of current and upcoming multithreaded architectures and programming models led us to conduct a detailed study to understand the potential of these platforms to increase the performance of data-intensive, irregular scientific applications. ... Keywords: data-intensive applications, irregular scientific applications, multithreaded architectures

Jarek Nieplocha; Andrès Márquez; John Feo; Daniel Chavarría-Miranda; George Chin; Chad Scherrer; Nathaniel Beagley

2007-05-01T23:59:59.000Z

185

Federal Energy Management Program: Large-scale Renewable Energy Projects  

NLE Websites -- All DOE Office Websites (Extended Search)


186

Energy Department Loan Guarantee Would Support Large-Scale Rooftop...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar Power for U.S. Military Housing

187

Locations of Smart Grid Demonstration and Large-Scale Energy...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Locations of Smart Grid Demonstration and Large-Scale Energy Storage Projects: Map of the United States...

188

Planning under uncertainty solving large-scale stochastic linear programs  

Science Conference Proceedings (OSTI)

For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.

Infanger, G. (Stanford Univ., CA (United States). Dept. of Operations Research Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft)

1992-12-01T23:59:59.000Z
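The flavor of the approach can be shown on a tiny two-stage recourse problem: sample scenarios by Monte Carlo and solve the resulting extensive-form LP. At scale, the decomposition and importance-sampling machinery of the paper replaces this brute-force form; the problem data below are invented for illustration.

import numpy as np
from scipy.optimize import linprog

# Stage 1: choose capacity x at unit cost c. Stage 2, per scenario s: buy
# shortfall y_s at recourse cost q so that x + y_s >= d_s (random demand).
# minimize  c*x + (1/S) * sum_s q*y_s
c, q, S = 1.0, 3.0, 200
rng = np.random.default_rng(3)
d = rng.gamma(shape=4.0, scale=2.5, size=S)       # sampled demands

# Variables v = [x, y_1, ..., y_S]; linprog expects A_ub @ v <= b_ub.
cost = np.concatenate(([c], np.full(S, q / S)))
A_ub = np.hstack([-np.ones((S, 1)), -np.eye(S)])  # -x - y_s <= -d_s
b_ub = -d
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print("capacity x* =", res.x[0], "sampled objective =", res.fun)

With S scenarios the extensive form has S+1 variables and S constraints; Benders-style decomposition keeps only the first-stage variables in the master problem, which is what makes the sampled problems in the paper tractable.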

189

Lessons Learned from Large-Scale User Studies: Using Android Market as a Source of Data  

Science Conference Proceedings (OSTI)

User studies with mobile devices have typically been cumbersome, since researchers have had to recruit participants, hand out or configure devices, and offer incentives and rewards. The increasing popularity of application stores has allowed researchers ... Keywords: Application Stores, Computer Science, Large-Scale Study, Mobile Computing, Mobile Devices, Ubiquitous Computing

Denzil Ferreira; Vassilis Kostakos; Anind K. Dey

2012-07-01T23:59:59.000Z

190

What are the Computational Keys to Future Scientific Discoveries...  

NLE Websites -- All DOE Office Websites (Extended Search)

The National Energy Research Scientific Computing Center (NERSC) developed a Data Intensive Computing Pilot. "Many of the big data challenges that have long existed in the particle and high energy physics world...

191

Investigating the Limits of SOAP Performance for Scientific Computing  

Science Conference Proceedings (OSTI)

The growing synergy between Web Services and Grid-based technologies [7] will potentially enable profound, dynamic interactions between scientific applications dispersed in geographic, institutional, and conceptual space. Such deep interoperability requires ...

Kenneth Chiu; Madhusudhan Govindaraju; Randall Bramley

2002-07-01T23:59:59.000Z

192

ROARS: a scalable repository for data intensive scientific computing  

Science Conference Proceedings (OSTI)

As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with ...

Hoang Bui; Peter Bui; Patrick Flynn; Douglas Thain

2010-06-01T23:59:59.000Z

193

Building a Large Scale Climate Data System in Support of HPC Environment  

SciTech Connect

The Earth System Grid Federation (ESG) is a large scale, multi-institutional, interdisciplinary project that aims to provide climate scientists and impact policy makers worldwide a web-based and client-based platform to publish, disseminate, compare and analyze ever increasing climate related data. This paper describes our practical experiences on the design, development and operation of such a system. In particular, we focus on the support of the data lifecycle from a high performance computing (HPC) perspective that is critical to the end-to-end scientific discovery process. We discuss three subjects that interconnect the consumer and producer of scientific datasets: (1) the motivations, complexities and solutions of deep storage access and sharing in a tightly controlled environment; (2) the importance of scalable and flexible data publication/population; and (3) high performance indexing and search of data with geospatial properties. These perceived corner issues collectively contributed to the overall user experience and proved to be as important as any other architectural design considerations. Although the requirements and challenges are rooted and discussed from a climate science domain context, we believe the architectural problems, ideas and solutions discussed in this paper are generally useful and applicable in a larger scope.

Wang, Feiyi [ORNL; Harney, John F [ORNL; Shipman, Galen M [ORNL

2011-01-01T23:59:59.000Z

194

2 Large-Scale Data Sets  

E-Print Network (OSTI)

Computational Science: the use of computer simulation as a tool for greater understanding of the real world, complementing experimentation and theory. Problems are increasingly computationally challenging: large parallel machines are needed to perform calculations, and it is critical to leverage parallelism in all phases. Data access is a huge challenge: using parallelism to obtain performance, finding usable, efficient, portable interfaces, and understanding and tuning I/O. IBM BG/L system. Visualization of entropy in Terascale Supernova Initiative application; image from Kwan-Liu Ma's visualization team at UC Davis. Argonne National

Rob Ross (with thanks to Rob Latham, Rajeev Thakur, Marc Unangst, and Brent Welch)

2009-01-01T23:59:59.000Z

195

Algorithms for Large-Scale Internet Measurements  

E-Print Network (OSTI)

As the Internet has grown in size and importance to society, it has become increasingly difficult to generate global metrics of interest that can be used to verify proposed algorithms or monitor performance. This dissertation tackles the problem by proposing several novel algorithms designed to perform Internet-wide measurements using existing or inexpensive resources. We initially address distance estimation in the Internet, which is used by many distributed applications. We propose a new end-to-end measurement framework called Turbo King (T-King) that uses the existing DNS infrastructure and, when compared to its predecessor King, obtains delay samples without bias in the presence of distant authoritative servers and forwarders, consumes half the bandwidth, and reduces the impact on caches at remote servers by several orders of magnitude. Motivated by recent interest in the literature and our need to find remote DNS nameservers, we next address Internet-wide service discovery by developing IRLscanner, whose main design objectives have been to maximize politeness at remote networks, allow scanning rates that achieve coverage of the Internet in minutes/hours (rather than weeks/months), and significantly reduce administrator complaints. Using IRLscanner and 24-hour scan durations, we perform 20 Internet-wide experiments using 6 different protocols (i.e., DNS, HTTP, SMTP, EPMAP, ICMP and UDP ECHO). We analyze the feedback generated and suggest novel approaches for reducing the amount of blowback during similar studies, which should enable researchers to collect valuable experimental data in the future with significantly fewer hurdles. We finally turn our attention to Intrusion Detection Systems (IDS), which are often tasked with detecting scans and preventing them; however, it is currently unknown how likely an IDS is to detect a given Internet-wide scan pattern and whether there exist sufficiently fast stealth techniques that can remain virtually undetectable at large-scale. To address these questions, we propose a novel model for the windowexpiration rules of popular IDS tools (i.e., Snort and Bro), derive the probability that existing scan patterns (i.e., uniform and sequential) are detected by each of these tools, and prove the existence of stealth-optimal patterns.

Leonard, Derek Anthony

2010-12-01T23:59:59.000Z

196

Advanced Scientific Computing Research (ASCR) Homepage | U.S. DOE Office of  

Office of Science (SC) Website

Advanced Scientific Computing Research (ASCR). ASCR Advisory Committee Exascale Report: Synergistic Challenges in Data-Intensive Science and Exascale Computing, an ASCAC Subcommittee Summary Report. This new report discusses the natural synergies among the challenges facing data-intensive science and exascale computing, including the need for a new scientific workflow.

197

Large Scale Video Conferencing: A Digital Amphitheater  

E-Print Network (OSTI)

Gharai, L.; Perkins, C.S.; Riley, R.; Mankin, A. Proceedings of the 8th International Conference on Distributed Multimedia Systems, San Francisco, CA, USA. Dept. of Computing Science, University of Glasgow.

Gharai, L.; Perkins, C.S.; Riley, R.; Mankin, A.

198

Large-Scale Analyses of Glycosylation in Cellulases  

NLE Websites -- All DOE Office Websites (Extended Search)

Large-Scale Analyses of Glycosylation in Cellulases. Fengfeng Zhou, Victor Olman, and Ying Xu. Computational Systems Biology Laboratory, Department of Biochemistry and Molecular Biology / Institute of Bioinformatics, University of Georgia, Athens, GA 30602-7229, USA; BioEnergy Science Center, Oak Ridge National Laboratory, Oak Ridge, TN 37830-8050, USA. Corresponding author e-mail: xyn@bmb.uga.edu. DOI: 10.1016/S1672-0229(08)60049-2. Cellulases are important glycosyl hydrolases (GHs) that hydrolyze cellulose polymers into smaller oligosaccharides by breaking the cellulose β(1→4) bonds, and they are widely used to produce cellulosic ethanol from plant biomass. N-linked and O-linked glycosylations were proposed to impact the catalytic efficiency, cellulose binding affinity and the stability of cellulases, based on observations

199

An adaptive clustering-based resource discovery scheme for large scale MANETs  

Science Conference Proceedings (OSTI)

An increasing number of smart mobile devices offering the ability to perform various types of ubiquitous computation are emerging as large computer networks of unprecedented scale. Large Scale Mobile Ad Hoc Networks (MANETs) pose strong challenges ...

Saad Al-Ahmadi; Abdullah Al-Dhelaan

2012-04-01T23:59:59.000Z

200

A sequential cooperative game theoretic approach to scheduling multiple large-scale applications in grids  

Science Conference Proceedings (OSTI)

Scheduling large-scale applications in heterogeneous distributed computing systems is a fundamental NP-complete problem that is critical to obtaining good performance and low execution cost. In this paper, we address the scheduling problem of an important ...

Rubing Duan, Radu Prodan, Xiaorong Li

2014-01-01T23:59:59.000Z



201

Large-Scale Software Unit Testing on the Grid Yaohang Li, 2  

E-Print Network (OSTI)

... large-scale and cost-efficient computational grid resources as a software testing test bed to support automated testing. Grid computing is characterized by large-scale sharing and cooperation of dynamically distributed resources ... a grid-based software testing framework to facilitate the automated process of utilizing the grid

Li, Yaohang

202

Cosmology with CMB and Large Scale Structure  

E-Print Network (OSTI)

Best-fitting ΛCDM model as determined from the 5-year WMAP analysis [3]. (b) Frequency distributions of S1/2 for statistically isotropic, Gaussian realizations of the ΛCDM model [3]. The red (solid) histogram shows the frequency distribution for the pixel ACF estimator applied to the whole sky. The blue (dotted) histogram shows the distribution computed with the pixel ACF estimator with the KQ75 sky cut applied. Pixel based...

Ma, Yin-Zhe

2011-07-12T23:59:59.000Z

203

Solving large scale polynomial convex problems on ℓ_1/nuclear norm balls ...

E-Print Network (OSTI)

Oct 24, 2012 ... Solving large scale polynomial convex problems on ℓ_1/nuclear norm balls by randomized first-order algorithms. Aharon Ben-Tal (abental ...

204

Training a Large Scale Classifier with the Quantum Adiabatic Algorithm  

E-Print Network (OSTI)

In a previous publication we proposed discrete global optimization as a method to train a strong binary classifier constructed as a thresholded sum over weak classifiers. Our motivation was to cast the training of a classifier into a format amenable to solution by the quantum adiabatic algorithm. Applying adiabatic quantum computing (AQC) promises to yield solutions that are superior to those which can be achieved with classical heuristic solvers. Interestingly we found that by using heuristic solvers to obtain approximate solutions we could already gain an advantage over the standard method AdaBoost. In this communication we generalize the baseline method to large scale classifier training. By large scale we mean that either the cardinality of the dictionary of candidate weak classifiers or the number of weak learners used in the strong classifier exceed the number of variables that can be handled effectively in a single global optimization. For such situations we propose an iterative and piecewise approach in which a subset of weak classifiers is selected in each iteration via global optimization. The strong classifier is then constructed by concatenating the subsets of weak classifiers. We show in numerical studies that the generalized method again successfully competes with AdaBoost. We also provide theoretical arguments as to why the proposed optimization method, which does not only minimize the empirical loss but also adds L0-norm regularization, is superior to versions of boosting that only minimize the empirical loss. By conducting a Quantum Monte Carlo simulation we gather evidence that the quantum adiabatic algorithm is able to handle a generic training problem efficiently.

Hartmut Neven; Vasil S. Denchev; Geordie Rose; William G. Macready

2009-12-04T23:59:59.000Z
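The discrete optimization at the core of the baseline method can be written as a QUBO over binary weak-classifier weights: a quadratic training loss plus an L0 penalty (which, for binary weights, is just their sum). The sketch below builds that QUBO for a toy stump dictionary and minimizes it by exhaustive search in place of the adiabatic or heuristic solver; all data and parameters are invented for illustration.

import numpy as np
from itertools import product

rng = np.random.default_rng(4)
T, N = 200, 12                         # training examples, weak classifiers
X = rng.standard_normal((T, 6))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(T))

# Toy dictionary: decision stumps h_i(x) in {-1, +1}.
feat = rng.integers(0, 6, size=N)
thr = rng.standard_normal(N)
H = np.sign(X[:, feat] - thr)          # column i holds h_i on all examples

# (1/T)||H w - y||^2 + lam*||w||_0, w binary, as a QUBO: w'Qw + lin'w + const.
lam = 2.0
Q = (H.T @ H) / T
lin = -2.0 * (H.T @ y) / T + lam

best_w, best_e = None, np.inf          # brute force stands in for the AQC solver
for bits in product((0, 1), repeat=N):
    w = np.array(bits)
    e = w @ Q @ w + lin @ w
    if e < best_e:
        best_w, best_e = w, e

pred = np.sign(H @ best_w + 1e-9)      # thresholded vote of selected learners
print("selected:", best_w.sum(), "train accuracy:", (pred == y).mean())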

205

National Energy Research Scientific Computing Center 2007 Annual Report  

E-Print Network (OSTI)

by the Director, Office of Science, Office of Advanced ... Computing for the Office of Science: A Report from the NERSC ... Washington, D.C.: DOE Office of Science, Vol. 1, July 30,

Hules, John A.

2008-01-01T23:59:59.000Z

206

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

Type Ia supernovae, gamma-ray bursts, X-ray bursts and core-collapse ... relativistic jet, making a gamma-ray burst, the luminosity ... those that lead to gamma-ray bursts. The current frontier is

Gerber, Richard A.

2011-01-01T23:59:59.000Z

207

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

present-day experimental fusion devices and in nuclear reactors that ... nuclear energy both for next-generation fission reactors and for fusion reactors

Gerber, Richard A.

2012-01-01T23:59:59.000Z

208

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

personnel from Brookhaven National Lab (BNL), Thomas Jefferson National Accelerator Facility ... the Relativistic Heavy Ion Collider at Brookhaven National Lab. Participation in

Gerber, Richard A.

2012-01-01T23:59:59.000Z

209

Large Scale Computing and Storage Requirements for High Energy Physics  

E-Print Network (OSTI)

at NERSC, Intrepid at ALCF, and Linux clusters. Most of the ... moved to Intrepid at the ALCF. The completion of this task ... Appendix C (acronyms): ALCF, AMR, ASCR, BAO, BELLA, CCSE

Gerber, Richard A.

2011-01-01T23:59:59.000Z

210

Large-Scale Parallel Computing of Cloud Resolving Storm Simulator  

Science Conference Proceedings (OSTI)

A severe thunderstorm is composed of strong convective clouds. In order to perform a simulation of this type of storm, a very fine grid system is necessary to resolve individual convective clouds within a large domain. Since convective clouds are highly ...

Kazuhisa Tsuboki; Atsushi Sakakibara

2002-05-01T23:59:59.000Z

211

Design Proposed for Large-Scale Quantum Computer  

Science Conference Proceedings (OSTI)

... experimental results. A preprint of this paper can be found at http://xxx.lanl.gov/ or by contacting NIST at the above address.]

2012-12-13T23:59:59.000Z

212

Large Scale Computing and Storage Requirements for Nuclear Physics Research  

E-Print Network (OSTI)

fusion, vortices in the crusts of neutron stars, and even dynamics in nonnuclear systems such as cold

Gerber, Richard A.

2012-01-01T23:59:59.000Z

213

Image Galleries of the National Energy Research Scientific Computing Center (NERSC)  

DOE Data Explorer (OSTI)

The National Energy Research Scientific Computing Center (NERSC) is the flagship scientific computing facility for the Office of Science in the U.S. Department of Energy. As one of the largest facilities in the world devoted to providing computational resources and expertise for basic scientific research, NERSC is a world leader in accelerating scientific discovery through computation. NERSC is located at Lawrence Berkeley National Laboratory in Berkeley, California. The more than 3,000 computational scientists who use NERSC perform basic scientific research across a wide range of disciplines. These disciplines include climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors. NERSC provides three image galleries: the visualizations image gallery (visualizations produced at NERSC from datasets resulting from experiments, simulations, or data analysis), the NERSC systems gallery (images and videos of the systems that undergird all NERSC work), and a collection of NERSC logos.

214

A large scale study of text-messaging use  

Science Conference Proceedings (OSTI)

Text messaging has become a popular form of communication with mobile phones worldwide. We present findings from a large scale text messaging study of 70 university students in the United States. We collected almost 60,000 text messages over a period ... Keywords: large-scale study, mobile device, short message service, sms, text messaging, texting

Agathe Battestini; Vidya Setlur; Timothy Sohn

2010-09-01T23:59:59.000Z

215

Large-Scale Eucalyptus Energy Farms and Power Cogeneration1  

E-Print Network (OSTI)

Large-Scale Eucalyptus Energy Farms and Power Cogeneration. Robert C. Noronha. The initiation of a large-scale cogeneration project, especially one that combines construction of the power generation ... supplemental fuel source must be sought if the cogeneration facility will consume more fuel than

Standiford, Richard B.

216

Large-Scale Hydropower Basics | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Large-Scale Hydropower Basics. August 14, 2013. Large-scale hydropower plants are generally developed to produce electricity for government or electric utility projects. These plants are more than 30 megawatts (MW) in size, and there is more than 80,000 MW of installed generation capacity in the United States today. Most large-scale hydropower projects use a dam and a reservoir to retain water from a river. When the stored water is released, it passes through and rotates turbines, which spin generators to produce electricity. Water stored in a reservoir can be accessed quickly for use during times when the demand for electricity is high. Dammed hydropower projects can also be built as power storage facilities.

218

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

the Argonne and Oak Ridge Leadership Computing Facilities ... like the Leadership Computing Facilities at Argonne and Oak Ridge

Antypas, Katie

2013-01-01T23:59:59.000Z

219

Laboratory Directed Research & Development Page National Energy Research Scientific Computing Center  

NLE Websites -- All DOE Office Websites (Extended Search)

T3E Individual Node Optimization. Michael Stewart, SGI/Cray, 4/9/98. Outline: Introduction; T3E Processor; T3E Local Memory; Cache Structure; Optimizing Codes for Cache Usage; Loop Unrolling; Other Useful Optimization Options; References. Introduction: the primary topic will be single processor optimization; most codes on the T3E are dominated by computation; the processor interconnect was specifically designed for high performance codes, unlike the T3E processor; more detailed information is available on the web (see References); Fortran oriented, but C compiler flag equivalents are given.

220

Computing Frontier: Distributed Computing  

NLE Websites -- All DOE Office Websites (Extended Search)

Computing Frontier: Distributed Computing and Facility Infrastructures. Conveners: Kenneth Bloom (Department of Physics and Astronomy, University of Nebraska-Lincoln) and Richard Gerber (National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Laboratory). 1.1 Introduction. The field of particle physics has become increasingly reliant on large-scale computing resources to address the challenges of analyzing large datasets, completing specialized computations and simulations, and allowing for wide-spread participation of large groups of researchers. For a variety of reasons, these resources have become more distributed over a large geographic area, and some resources are highly specialized computing machines. In this report for the Snowmass Computing Frontier Study, we consider several questions about distributed computing



221

NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center  

E-Print Network (OSTI)

NERSC 2011 High Performance Computing Facility Operational Assessment ... by providing high-performance computing, information, data ... deep knowledge of high performance computing to overcome

Antypas, Katie

2013-01-01T23:59:59.000Z

222

Surface Layer Transport of Sulfate Particles in the Western United States by the Large-Scale Wind Field  

Science Conference Proceedings (OSTI)

The transport patterns of fine sulfur aerosols in the western United States are shown. The large-scale resultant horizontal flux was computed in terms of the contribution of the mean flux versus that of the turbulent, or eddy, ...

Lowell L. Ashbaugh; Leonard O. Myrup; Robert G. Flocchini

1984-05-01T23:59:59.000Z
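The split referred to here is the standard Reynolds decomposition of the resultant flux into mean and eddy parts; in conventional notation (not the paper's own equations):

\[
\overline{u q} = \bar{u}\,\bar{q} + \overline{u' q'},
\qquad u' = u - \bar{u}, \quad q' = q - \bar{q},
\]

where u is the wind component, q the sulfate concentration, overbars denote time averages, and the second term is the turbulent (eddy) contribution to the transport.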

223

Using Markov chain analysis to study dynamic behaviour in large-scale grid systems  

Science Conference Proceedings (OSTI)

In large-scale grid systems with decentralized control, the interactions of many service providers and consumers will likely lead to emergent global system behaviours that result in unpredictable, often detrimental, outcomes. This possibility argues ... Keywords: discrete Markov chain, grid computing, perturbation analysis, piece-wise homogenous Markov chain

Christopher Dabrowski; Fern Hunt

2009-01-01T23:59:59.000Z
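The kind of analysis described here can be sketched with a small row-stochastic matrix: compute the stationary distribution, perturb the transition probabilities, and watch the long-run behaviour shift. The three-state "grid provider" model and its numbers are invented for illustration.

import numpy as np

def stationary(P):
    # Stationary distribution: left eigenvector of eigenvalue 1, normalized.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

# Toy states for a service provider: Available, Degraded, Failed.
P = np.array([[0.90, 0.08, 0.02],
              [0.50, 0.40, 0.10],
              [0.20, 0.00, 0.80]])

# Perturbation analysis: move probability mass toward failure and observe
# the long-run fraction of time spent in each state.
for eps in (0.0, 0.05, 0.10):
    Q = P.copy()
    Q[0, 0] -= eps
    Q[0, 2] += eps                    # rows stay stochastic
    print(eps, np.round(stationary(Q), 3))

A piecewise-homogeneous chain, as in the paper, strings together phases that each use a different transition matrix of this form.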

224

DRAM errors in the wild: a large-scale field study  

Science Conference Proceedings (OSTI)

Errors in dynamic random access memory (DRAM) are a common form of hardware failure in modern compute clusters. Failures are costly both in terms of hardware replacement costs and service disruption. While a large body of work exists on DRAM in laboratory ... Keywords: data corruption, dimm, dram, dram reliability, ecc, empirical study, hard error, large-scale systems, memory, soft error

Bianca Schroeder; Eduardo Pinheiro; Wolf-Dietrich Weber

2009-06-01T23:59:59.000Z

225

Hurricane Climatic Fluctuations. Part II: Relation to Large-Scale Circulation  

Science Conference Proceedings (OSTI)

Correlations are computed between interannual fluctuations of hurricane incidence in the Atlantic basin and large-scale patterns of seasonally-averaged sea-level pressure (SLP; 1899–1978), sea-surface temperature (SST; 1899–1967), and 500 mb ...

Lloyd J. Shapiro

1982-08-01T23:59:59.000Z

226

Towards Ontology-based Data Quality Inference in Large-Scale Sensor Networks  

Science Conference Proceedings (OSTI)

This paper presents an ontology-based approach for data quality inference on streaming observation data originating from large-scale sensor networks. We evaluate this approach in the context of an existing river basin monitoring program called the Intelligent ... Keywords: Wireless Sensor Networks, Semantic Web, Distributed Computing

Sam Esswein; Sebastien Goasguen; Chris Post; Jason Hallstrom; David White; Gene Eidson

2012-05-01T23:59:59.000Z

227

National facility for advanced computational science: A sustainable path to scientific discovery  

Science Conference Proceedings (OSTI)

Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

2004-04-02T23:59:59.000Z

228

Software System Design for Large Scale, Spatially-explicit Agroecosystem Modeling  

SciTech Connect

Recently, site-based agroecosystem models have been applied at the regional and state levels to enable comprehensive analyses of the environmental sustainability of food and biofuel production. These large-scale, spatially-explicit simulations present computational challenges in software systems design. Herein, we describe our software system design for large-scale, spatially-explicit agroecosystem modeling and data analysis. First, we describe the software design principles in three major phases: data preparation, high performance simulation, and data management and analysis. Then, we use a case study at a regional intensive modeling area (RIMA) to demonstrate our system implementation and capability.

Wang, Dali [ORNL; Nichols, Dr Jeff A [ORNL; Kang, Shujiang [ORNL; Post, Wilfred M [ORNL; Liu, Sumang [ORNL

2012-01-01T23:59:59.000Z

229

Planning and implementing a large-scale polymer flood  

Science Conference Proceedings (OSTI)

The motive for the Eliasville polymer flood originated while planning a waterflood in this light-oil limestone reservoir. Adverse reservoir waterflood characteristics were identified prior to unitization, and laboratory work was undertaken to demonstrate the benefits of reducing water mobility by increasing water viscosity with several different polyacrylamides. Computer simulations incorporating polymer properties from the laboratory work and known Caddo waterflood performance were used to design the polymer flood. Three injection tests were conducted to determine polymer injectivity. Pressure transient tests were used to measure the in-situ polymer viscosity. One of the injection tests included an off-pattern producing well, which permitted an estimation of polymer retention and incremental oil recovery in a short time. Based on the injection tests and simulation work, a large-scale polymer project was implemented. The optimum slug size required 30,000,000 lb of emulsion polymer. Facilities used to mix and feed this large amount of polymer are described. A low-shear polymer flow control method was developed to ensure maximum fluid viscosity at the formation perforations. Product specifications were verified prior to accepting delivery, and injection fluid quality was monitored in laboratories constructed for the project. Early production response to field-wide polymer injection is comparable to that observed at the off-pattern producing well during the injection test. While the early field response is encouraging, the effects of salt water injection on slug integrity and increased pattern size on oil recovery are still to be determined.

Weiss, W.W.; Baldwin, R.W.

1984-04-01T23:59:59.000Z

230

Dark energy, integrated Sachs-Wolfe effect and large-scale magnetic fields  

E-Print Network (OSTI)

The impact of large-scale magnetic fields on the interplay between the ordinary and integrated Sachs-Wolfe effects is investigated in the presence of a fluctuating dark energy component. The modified initial conditions of the Einstein-Boltzmann hierarchy allow for the simultaneous inclusion of dark energy perturbations and of large-scale magnetic fields. The temperature and polarization angular power spectra are compared with the results obtained in the magnetized version of the (minimal) concordance model. Purported compensation effects arising at large scales are specifically investigated. The fluctuating dark energy component modifies, in a computable manner, the shapes of the 1- and 2-$\sigma$ contours in the parameter space of the magnetized background. The allowed spectral indices and magnetic field intensities turn out to be slightly larger than those determined in the framework of the magnetized concordance model where the dark energy fluctuations are absent.

Massimo Giovannini

2009-07-18T23:59:59.000Z

231

Energy Department Applauds Nation's First Large-Scale Industrial Carbon  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Department Applauds Nation's First Large-Scale Industrial Carbon Capture and Storage Facility. August 24, 2011 - 6:23pm. Washington, D.C. - The U.S. Department of Energy issued the following statement in support of today's groundbreaking for construction of the nation's first large-scale industrial carbon capture and storage (ICCS) facility in Decatur, Illinois. Supported by the 2009 economic stimulus legislation - the American Recovery and Reinvestment Act - the ambitious project will capture and store one million tons of carbon dioxide (CO2) per year produced as the result of processing corn into fuel-grade ethanol from the nearby Archer Daniels Midland biofuels plant. Since all of

232

DOE Completes Large-Scale Carbon Sequestration Project Awards | Department  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Completes Large-Scale Carbon Sequestration Project Awards. November 17, 2008 - 4:58pm. Regional Partner to Demonstrate Safe and Permanent Storage of 2 Million Tons of CO2 at Wyoming Site. WASHINGTON, DC - Completing a series of awards through its Regional Carbon Sequestration Partnership Program, the U.S. Department of Energy (DOE) today awarded $66.9 million to the Big Sky Regional Carbon Sequestration Partnership for the Department's seventh large-scale carbon sequestration project. Led by Montana State University-Bozeman, the Partnership will conduct a large-volume test in the Nugget Sandstone formation to demonstrate the ability of a geologic formation to safely, permanently and economically

233

ARM - Evaluation Product - Vertical Air Motion during Large-Scale  

NLE Websites -- All DOE Office Websites (Extended Search)

Evaluation Product: Vertical Air Motion during Large-Scale Stratiform Rain. Site(s): NIM, SGP. General Description: The Vertical Air Motion during Large-Scale Stratiform Rain (VERVELSR) value-added product (VAP) uses the unique properties of 95-GHz radar Doppler velocity spectra to produce vertical profiles of air motion during low-to-moderate (1-20 mm/hr) rainfall events. It is designed to run at ARM sites that include a W-band ARM cloud radar (WACR) with spectra data processing. The VERVELSR VAP, based on the work of Giangrande et al. (2010), operates by exploiting a resonance effect that occurs in

235

Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Large-Scale Industrial Carbon Capture, Storage Plant Begins Construction. August 24, 2011 - 1:00pm. Washington, DC - Construction activities have begun at an Illinois ethanol plant that will demonstrate carbon capture and storage. The project, sponsored by the U.S. Department of Energy's Office of Fossil Energy, is the first large-scale integrated carbon capture and storage (CCS) demonstration project funded by the American Recovery and Reinvestment Act (ARRA) to move into the construction phase. Led by the Archer Daniels Midland Company (ADM), a member of DOE's Midwest Geological Sequestration Consortium, the Illinois-ICCS project is designed to sequester approximately 2,500 metric tons of carbon dioxide

236

Large-Scale Aspects of the United States Hydrologic Cycle  

Science Conference Proceedings (OSTI)

A large-scale, gridpoint, atmospheric, hydrologic climatology consisting of atmospheric precipitable water, precipitation, atmospheric moisture flux convergence, and a residual evaporation for the conterminous United States is described. A large-...

John O. Roads; Shyh-C. Chen; Alexander K. Guetter; Konstantine P. Georgakakos

1994-09-01T23:59:59.000Z

239

Materialized community ground models for large-scale earthquake simulation  

Science Conference Proceedings (OSTI)

Large-scale earthquake simulation requires source datasets which describe the highly heterogeneous physical characteristics of the earth in the region under simulation. Physical characteristic datasets are the first stage in a simulation pipeline which ...

Steven W. Schlosser; Michael P. Ryan; Ricardo Taborda; Julio López; David R. O'Hallaron; Jacobo Bielak

2008-11-01T23:59:59.000Z

240

Advanced concepts in large-scale network simulation  

Science Conference Proceedings (OSTI)

This tutorial paper reviews existing concepts and future directions in selected areas related to simulation of large-scale networks. It covers specifically topics in traffic modeling, simulation of routing, network emulation, and real-time simulation.

David M. Nicol; Michael Liljenstam; Jason Liu

2005-12-01T23:59:59.000Z

241

Large-Scale Meteorology and Deep Convection during TRMM KWAJEX  

Science Conference Proceedings (OSTI)

An overview of the large-scale behavior of the atmosphere during the Tropical Rainfall Measuring Mission (TRMM) Kwajalein Experiment (KWAJEX) is presented. Sounding and ground radar data collected during KWAJEX, and several routinely available ...

Adam H. Sobel; Sandra E. Yuter; Christopher S. Bretherton; George N. Kiladis

2004-02-01T23:59:59.000Z

242

Infrastructure for large-scale tests in marine autonomy  

E-Print Network (OSTI)

This thesis focuses on the development of infrastructure for research with large-scale autonomous marine vehicle fleets and the design of sampling trajectories for compressive sensing (CS). The newly developed infrastructure ...

Hummel, Robert A. (Robert Andrew)

2012-01-01T23:59:59.000Z

243

Platforms and real options in large-scale engineering systems  

E-Print Network (OSTI)

This thesis introduces a framework and two methodologies that enable engineering management teams to assess the value of real options in programs of large-scale, partially standardized systems implemented a few times over ...

Kalligeros, Konstantinos C., 1976-

2006-01-01T23:59:59.000Z

244

Technoeconomic Evaluation of Large-Scale Electrolytic Hydrogen Production Technologies  

Science Conference Proceedings (OSTI)

Large-scale production of electrolytic hydrogen and oxygen could increase use of baseload and off-peak surplus power. To be competitive, however, water electrolysis will require low-cost electricity.

1985-09-20T23:59:59.000Z

245

Decomposition methods for large scale stochastic and robust optimization problems  

E-Print Network (OSTI)

We propose new decomposition methods for use on broad families of stochastic and robust optimization problems in order to yield tractable approaches for large-scale real world application. We introduce a new type of a ...

Becker, Adrian Bernard Druke

2011-01-01T23:59:59.000Z

246

Student Pages: RFP-Large-Scale Diversion of Water  

NLE Websites -- All DOE Office Websites (Extended Search)

Supplying Our Water Needs (H2O): Request for Proposal - Large-Scale Diversion of Water. U.S. Army Corps of Engineers, Chicago District. Be sure to submit the online sign-off each...

247

On solving large scale polynomial convex problems by randomized ...  

E-Print Network (OSTI)

... applications), the (unimprovable in the large-scale case) rate of convergence of first-order methods (FOMs) ... and eigenvalue decomposition of a matrix from $S^m$.

248

Large-Scale Industrial CCS Projects Selected for Continued Testing |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Large-Scale Industrial CCS Projects Selected for Continued Testing. June 10, 2010 - 1:00pm. Washington, DC - Three Recovery Act funded projects have been selected by the U.S. Department of Energy (DOE) to continue testing large-scale carbon capture and storage (CCS) from industrial sources. The projects - located in Texas, Illinois, and Louisiana - were initially selected for funding in October 2009 as part of a $1.4 billion effort to capture carbon dioxide (CO2) from industrial sources for storage or beneficial use. The first phase of research and development (R&D) included $21.6 million in Recovery Act funding and $22.5 million in private funding for a total initial investment of $44.1 million.

249

DOE Awards First Three Large-Scale Carbon Sequestration Projects |  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Awards First Three Large-Scale Carbon Sequestration Projects. October 9, 2007 - 3:14pm. U.S. Projects Total $318 Million and Further President Bush's Initiatives to Advance Clean Energy Technologies to Confront Climate Change. WASHINGTON, DC - In a major step forward for demonstrating the promise of clean energy technology, U.S. Deputy Secretary of Energy Clay Sell today announced that the Department of Energy (DOE) awarded the first three large-scale carbon sequestration projects in the United States and the largest single set in the world to date. The three projects - Plains Carbon Dioxide Reduction Partnership; Southeast Regional Carbon Sequestration Partnership; and Southwest Regional Partnership for Carbon

250

Institute for Scientific Computing Research Annual Report for Fiscal Year 2003  

SciTech Connect

The University Relations Program (URP) encourages collaborative research between Lawrence Livermore National Laboratory (LLNL) and the University of California campuses. The Institute for Scientific Computing Research (ISCR) actively participates in such collaborative research, and this report details the Fiscal Year 2003 projects jointly served by URP and ISCR.

Keyes, D; McGraw, J

2004-02-12T23:59:59.000Z

251

A distributed computing environment with support for constraint-based task scheduling and scientific experimentation  

SciTech Connect

This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies shows that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
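
A toy Python rendering of the idea of search-based constraint fulfillment follows; the task names and precedence constraints are invented, and the paper itself expresses such constraints in an SQL-like language rather than Python:

    import itertools

    # Hypothetical ordering constraints ("a must precede b") that filter
    # the space of candidate schedules; names are invented for illustration.
    tasks = ["fetch", "simulate", "render", "archive"]
    precedes = [("fetch", "simulate"), ("simulate", "render"),
                ("simulate", "archive")]

    def satisfies(order, constraints):
        position = {t: i for i, t in enumerate(order)}
        return all(position[a] < position[b] for a, b in constraints)

    # Search-based fulfillment, shown here by brute-force enumeration.
    valid = [o for o in itertools.permutations(tasks) if satisfies(o, precedes)]
    print(len(valid), "feasible schedules; one example:", valid[0])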

Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L. [Univ. of Washington, Seattle, WA (United States). Dept. of Computer Science and Engineering

1997-04-01T23:59:59.000Z

252

Parallel computing works  

Science Conference Proceedings (OSTI)

An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

Not Available

1991-10-23T23:59:59.000Z

253

GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY  

SciTech Connect

Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns, mainly driven by temperature gradients, inside the vapor space of a large-scale Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional, transient, momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimation of the flow patterns within the vapor space. One is the reference nominal case. The other is for a negative temperature gradient between the roof inner surface and top grout surface temperatures, intended as the potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that ambient air comes into the vapor space of the vault through the lower-end ventilation hole, and it is heated by a Bénard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations are discussed here.

Lee, S

2008-05-28T23:59:59.000Z

254

Bayesian Uncertainty Quantification for Large Scale Spatial Inverse Problems  

E-Print Network (OSTI)

We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a high-dimensional spatial field. The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources, and provides a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. Karhunen-Loève expansion and the Discrete Cosine Transform were used for dimension reduction of the random spatial field. Furthermore, we used a hierarchical Bayes model to inject multiscale data into the modeling framework. In this Bayesian framework, we have shown that the inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in the total variation norm. The need for multiple evaluations of the forward model on a high-dimensional spatial field (e.g., in the context of MCMC), together with the high dimensionality of the posterior, results in many computational challenges. We developed a two-stage reversible jump MCMC method that screens out bad proposals in an inexpensive first stage. Channelized spatial fields were represented by facies boundaries and variogram-based spatial fields within each facies. Using a level-set-based approach, the shape of the channel boundaries was updated with dynamic data using a Bayesian hierarchical model in which the number of points representing the channel boundaries is assumed to be unknown. Statistical emulators of a large-scale spatial field were introduced to avoid the expensive likelihood calculation, which contains the forward simulator, at each iteration of the MCMC step. To build the emulator, the original spatial field was represented by a low-dimensional parameterization using the Discrete Cosine Transform (DCT), and the Bayesian approach to multivariate adaptive regression splines (BMARS) was then used to emulate the simulator. Various numerical results are presented for simulated as well as real data.
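
For orientation, the Karhunen-Loève dimension reduction mentioned in the abstract truncates the random field to a few modes (generic notation, not the authors' exact formulation):

$$\kappa(x,\omega)\;\approx\;\mu(x)+\sum_{k=1}^{K}\sqrt{\lambda_k}\,\theta_k(\omega)\,\phi_k(x),\qquad \pi(\theta\mid d)\;\propto\;L(d\mid\theta)\,\pi_0(\theta),$$

where $(\lambda_k,\phi_k)$ are eigenpairs of the covariance operator of the field and $\theta=(\theta_1,\dots,\theta_K)$ are uncorrelated random coefficients. Each likelihood evaluation $L(d\mid\theta)$ requires a forward-model run, which is why the two-stage MCMC and the BMARS emulator are introduced to reduce cost.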

Mondal, Anirban

2011-08-01T23:59:59.000Z

255

Breakthrough Large-Scale Industrial Project Begins Carbon Capture and  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Breakthrough Large-Scale Industrial Project Begins Carbon Capture and Utilization. January 25, 2013 - 12:00pm. Washington, DC - A breakthrough carbon capture, utilization, and storage (CCUS) project in Texas has begun capturing carbon dioxide (CO2) and piping it to an oilfield for use in enhanced oil recovery (EOR). The project at Air Products and Chemicals' hydrogen production facility in Port Arthur, Texas, is significant for demonstrating both the effectiveness and commercial viability of CCUS technology as an option in helping mitigate atmospheric CO2 emissions. Funded in part through the American Recovery and Reinvestment Act (ARRA), the project is managed by the U.S.

257

COMMENTS OF THE LARGE-SCALE SOLAR ASSOCIATION TO DEPARTMENT  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

COMMENTS OF THE LARGE-SCALE SOLAR ASSOCIATION TO DEPARTMENT OF ENERGY'S RAPID RESPONSE TEAM FOR TRANSMISSION'S REQUEST FOR INFORMATION. Submitted by electronic mail to: Lamont.Jackson@hq.doe.gov. The Large-scale Solar Association appreciates this opportunity to respond to the Department of Energy's (DOE) Rapid Response Team for Transmission's (RRTT) Request for Information. We applaud the DOE for creating the RRTT and continuing to advance the efforts already made under the Memorandum of Understanding (MOU) entered into by nine Federal agencies in 2009 to expedite electric transmission construction. We also applaud the federal and state agencies that have expanded the Renewable Energy Policy Group and the Renewable Energy Action Team in California to focus on transmission, and hope that the tremendous

258

Nevada Weatherizes Large-Scale Complex | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Nevada Weatherizes Large-Scale Complex. July 1, 2010 - 10:11am. What does this project do? This nonprofit weatherized a 22-unit low-income multifamily complex, reducing the building's duct leakage from 90 percent to just 5 percent. The weatherization program of the Rural Nevada Development Corporation (RNDC) reached a recent success in its eleven-county territory. "That is one big savings and is why I am proud of this project," says Dru Simerson, RNDC Weatherization Manager. RNDC's crew replaced all windows and 17 furnaces and installed floor

259

Acts -- A collection of high performing software tools for scientific computing  

Science Conference Proceedings (OSTI)

During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Further, many new discoveries depend on high performance computer simulations to satisfy their demands for large computational resources and short response time. The Advanced CompuTational Software (ACTS) Collection brings together a number of general-purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Collection promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS. It also highlights the tools that are in demand by climate and weather modelers.

Drummond, L.A.; Marques, O.A.

2002-11-01T23:59:59.000Z

260

Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing  

SciTech Connect

This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

Khaleel, Mohammad A.

2009-10-01T23:59:59.000Z

261

Metal hydride based isotope separation: Large-scale operations  

DOE Green Energy (OSTI)

A program to develop a metal hydride based hydrogen isotope separation process began at the Savannah River Laboratory in 1980. This semi-continuous gas chromatographic separation process will be used in new tritium facilities at the Savannah River Site. A tritium production unit is scheduled to start operation in 1993. An experimental, large-scale unit is currently being tested using protium and deuterium. Operation of the large-scale unit has demonstrated separation of mixed hydrogen isotopes (55% protium and 45% deuterium), resulting in protium and deuterium product streams with purities better than 99.5%. 3 refs., 4 figs.

Horen, A.S.; Lee, Myung W.

1991-01-01T23:59:59.000Z

263

Development of high performance scientific components for interoperability of computing packages  

Science Conference Proceedings (OSTI)

Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. Chemistry algorithms are difficult and time-consuming to develop; integration of large quantum chemistry packages allows resource sharing and thus avoids reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

Gulabani, Teena Pratap

2008-12-01T23:59:59.000Z

264

Lessons from Large-Scale Renewable Energy Integration Studies: Preprint  

Science Conference Proceedings (OSTI)

In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

Bird, L.; Milligan, M.

2012-06-01T23:59:59.000Z

265

A holonic approach to model and deploy large scale simulations  

Science Conference Proceedings (OSTI)

Multi-Agent Based Simulations (MABS) for real-world problems may require a large number of agents. A possible solution is to distribute the simulation in multiple machines. Thus, we are forced to consider how Large Scale MABS can be deployed in order ...

Sebastian Rodriguez; Vincent Hilaire; Abder Koukam

2006-05-01T23:59:59.000Z

266

A root cause localization model for large scale systems  

Science Conference Proceedings (OSTI)

Root cause localization, the process of identifying the source of problems in a system using purely external observations, is a significant challenge in many large-scale systems. In this paper, we propose an abstract model that captures the common issues ...

Emre Kiciman; Lakshminarayanan Subramanian

2005-06-01T23:59:59.000Z

267

Predictive discrete latent factor models for large scale dyadic data  

Science Conference Proceedings (OSTI)

We propose a novel statistical method to predict large scale dyadic response variables in the presence of covariate information. Our approach simultaneously incorporates the effect of covariates and estimates local structure that is induced by interactions ... Keywords: co-clustering, dyadic data, generalized linear regression, latent factor modeling
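
Schematically, a model of this kind couples a generalized linear regression term with a latent co-cluster effect (generic notation; the paper's exact formulation may differ):

$$g\!\left(\mathbb{E}\!\left[y_{ij}\right]\right)=\beta^{\top}x_{ij}+\delta_{I(i),\,J(j)},$$

where $y_{ij}$ is the response for dyad $(i,j)$, $x_{ij}$ its covariates, $g$ a link function, and $\delta_{I(i),J(j)}$ a block effect indexed by the latent row cluster $I(i)$ and column cluster $J(j)$ estimated by co-clustering.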

Deepak Agarwal; Srujana Merugu

2007-08-01T23:59:59.000Z

268

Believability in simplifications of large scale physically based simulation  

Science Conference Proceedings (OSTI)

We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: ... Keywords: 3D graphics and realism, animation, physically based simulation

Donghui Han; Shu-wei Hsu; Ann McNamara; John Keyser

2013-08-01T23:59:59.000Z

269

The cube: a very large-scale interactive engagement space  

Science Conference Proceedings (OSTI)

"The Cube" is a unique facility that combines 48 large multi-touch screens and very large-scale projection surfaces to form one of the world's largest interactive learning and engagement spaces. The Cube facility is part of the Queensland University ... Keywords: interactive wall displays, multi-touch, very large displays

Markus Rittenbruch, Andrew Sorensen, Jared Donovan, Debra Polson, Michael Docherty, Jeff Jones

2013-10-01T23:59:59.000Z

270

Scientific Grand Challenges: Crosscutting Technologies for Computing at the Exascale - February 2-4, 2010, Washington, D.C.  

SciTech Connect

The goal of the "Scientific Grand Challenges - Crosscutting Technologies for Computing at the Exascale" workshop in February 2010, jointly sponsored by the U.S. Department of Energy's Office of Advanced Scientific Computing Research and the National Nuclear Security Administration, was to identify the elements of a research and development agenda that will address these challenges and create a comprehensive exascale computing environment. This exascale computing environment will enable the science applications identified in the eight workshops previously held in the Scientific Grand Challenges Workshop Series.

Khaleel, Mohammad A.

2011-02-06T23:59:59.000Z

271

Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)  

Science Conference Proceedings (OSTI)

This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with the Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clusters, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web visualization framework leverages various Web technologies, including WebGL, JavaScript, Java and Flash, to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem important to the nation's scientific progress, and SLAC researchers routinely generate massive amounts of data and frequently collaborate with other researchers located around the world. Thus SLAC is an ideal teammate through which to develop, test and deploy this technology. The nature of the datasets generated by simulations performed at SLAC presented unique visualization challenges, especially when dealing with higher-order elements, which were addressed during this Phase II. During this Phase II, we have developed a strong platform for collaborative visualization based on ParaView. We have developed and deployed a ParaView Web visualization framework that can be used for effective collaboration over the Web. Collaborating and visualizing over the Web presents the community with unique opportunities for sharing and accessing visualization and HPC resources that were hitherto either inaccessible or difficult to use. The technology we developed here will alleviate both these issues as it becomes widely deployed and adopted.

William J. Schroeder

2011-11-13T23:59:59.000Z

272

MiniGhost : a miniapp for exploring boundary exchange strategies using stencil computations in scientific parallel computing.  

Science Conference Proceedings (OSTI)

A broad range of scientific computation involves the use of difference stencils. In a parallel computing environment, this computation is typically implemented by decomposing the spacial domain, inducing a 'halo exchange' of process-owned boundary data. This approach adheres to the Bulk Synchronous Parallel (BSP) model. Because commonly available architectures provide strong inter-node bandwidth relative to latency costs, many codes 'bulk up' these messages by aggregating data into a message as a means of reducing the number of messages. A renewed focus on non-traditional architectures and architecture features provides new opportunities for exploring alternatives to this programming approach. In this report we describe miniGhost, a 'miniapp' designed for exploration of the capabilities of current as well as emerging and future architectures within the context of these sorts of applications. MiniGhost joins the suite of miniapps developed as part of the Mantevo project.
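
The halo-exchange pattern described above can be sketched with mpi4py on a one-dimensional decomposition; this is a minimal illustration of the general idea, not miniGhost's actual code, and the grid size, values, and stencil are invented:

    # Minimal halo exchange sketch (run with: mpirun -n 4 python halo.py).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank owns a slab of a 1-D domain plus one ghost cell per side.
    n_local = 8
    u = np.zeros(n_local + 2)
    u[1:-1] = rank  # interior values; u[0] and u[-1] are ghost cells

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange boundary cells with neighbors; sends to MPI.PROC_NULL are
    # no-ops, so ghost cells on the physical boundary keep their value (0).
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

    # With ghosts filled, a 3-point stencil can update all interior cells.
    u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])

Aggregating several variables into one message before sending, as the abstract describes, amounts to packing their boundary faces into a single buffer before the Sendrecv calls.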

Barrett, Richard Frederick; Heroux, Michael Allen; Vaughan, Courtenay Thomas

2012-04-01T23:59:59.000Z

273

Large-Scale Renewable Energy Development on Public Lands  

NLE Websites -- All DOE Office Websites (Extended Search)

Large-Scale Renewable Energy Development on Public Lands. Presentation by Boyan Kovacic (boyan.kovacic@ee.doe.gov), Federal Energy Management Program, 5/2/12. Outline: BLM renewable energy drivers; BLM renewable energy programs; BLM permitting and revenues; case studies; withdrawn military land. The slides include a glossary of acronyms (BLM: Bureau of Land Management; CSP: Concentrating Solar Power; DOE: Department of Energy; DOI: Department of Interior; EA: Environmental Assessment; EIS: Environmental Impact Statement; NEPA: National Environmental Policy Act; and others).

274

Large-Scale Renewable Energy Development on Public Lands  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Large-Scale Renewable Energy Large-Scale Renewable Energy Development on Public Lands Boyan Kovacic boyan.kovacic@ee.doe.gov 5/2/12 2 | FEDERAL ENERGY MANAGEMENT PROGRAM femp.energy.gov * BLM RE Drivers * BLM RE Programs * BLM Permitting and Revenues * Case Studies * Withdrawn Military Land Outline 3 | FEDERAL ENERGY MANAGEMENT PROGRAM femp.energy.gov BLM: Bureau of Land Management BO: Biological Opinion CSP: Concentrating Solar Power DOE: Department of Energy DOI: Department of Interior EA: Environmental Assessment EIS: Environmental Impact Statement FONSI: Finding of No Significant Impact FS: U.S. Forrest Service IM: Instruction Memorandum MPDS: Maximum Potential Development Scenario NEPA: National Environmental Policy Act NOI: Notice of Intent NOP: Notice to Proceed

275

Large scale meteorological influence during the Geysers 1979 field experiment  

DOE Green Energy (OSTI)

A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

Barr, S.

1980-01-01T23:59:59.000Z

276

Safety aspects of large-scale handling of hydrogen  

DOE Green Energy (OSTI)

Since the decade of the 1950s, there has been a large increase in the quantity of hydrogen, especially liquid hydrogen, that has been produced, transported, and used. The technology of hydrogen, as it relates to safety, has also developed at the same time. The possible sources of hazards that can arise in the large-scale handling of hydrogen are recognized, and for the most part, sufficiently understood. These hazard sources are briefly discussed. 26 refs., 4 figs.

Edeskuty, F.J.; Stewart, W.F.

1988-01-01T23:59:59.000Z

277

The Phoenix series large scale LNG pool fire experiments.  

SciTech Connect

The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) was conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
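
The flame height to fire diameter correlations described above are conventionally written against a nondimensional heat release rate; in the standard fire-science form (generic notation, not necessarily the report's exact convention):

$$Q^{*}=\frac{\dot{Q}}{\rho_{\infty}\,c_{p}\,T_{\infty}\,\sqrt{g\,D}\;D^{2}},\qquad \frac{H_{f}}{D}=f\!\left(Q^{*}\right),$$

where $\dot{Q}$ is the heat release rate, $D$ the pool (or burner) diameter, $H_f$ the flame height, and $\rho_\infty$, $c_p$, $T_\infty$ the ambient air density, specific heat, and temperature; correlations fitted at burner scale can then be extrapolated to large pool fires.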

Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

2010-12-01T23:59:59.000Z

278

Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators  

E-Print Network (OSTI)

A new generation of laser wakefield accelerators, supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modeling for further understanding of the underlying physics and identification of optimal regimes, but large scale modeling of these scenarios is computationally heavy and requires efficient use of state-of-the-art Petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed / shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modeling of LWFA, demonstrating speedups of over 1 order of magni...

Fonseca, Ricardo A; Fiúza, Frederico; Davidson, Asher; Tsung, Frank S; Mori, Warren B; Silva, Luís O

2013-01-01T23:59:59.000Z

279

A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE  

SciTech Connect

The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. They present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; BOLLEN, JOHAN [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory

2007-01-30T23:59:59.000Z

280

DOE Supercomputing Resources Available for Advancing Scientific  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Supercomputing Resources Available for Advancing Scientific Breakthroughs. April 15, 2009 - 12:00am. Washington, DC - The U.S. Department of Energy (DOE) announced today it is accepting proposals for a program to support high-impact scientific advances through the use of some of the world's most powerful supercomputers located at DOE national laboratories. Approximately 1.3 billion supercomputer processor-hours will be awarded in 2010 through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program for large-scale, computationally intensive projects addressing some of the toughest challenges in science and engineering. Researchers are currently using supercomputing time under this year's

281

Advanced Scientific Computing Research User Facilities | U.S. DOE Office of  

Office of Science (SC) Website

The Advanced Scientific Computing Research program supports the operation of the following national scientific user facilities: Energy Sciences Network (ESnet): The Energy Sciences Network, or ESnet, is the Department of Energy's high-speed network that provides the high-bandwidth, reliable connections that link scientists at national laboratories, universities and

282

PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING  

Science Conference Proceedings (OSTI)

Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i.e., Newtonian or non-Newtonian). The most important properties for testing with Newtonian slurries are the Archimedes number distribution and the particle concentration. For some test objectives, the shear strength is important. In the testing to collect data for CFD V and V and CFD comparison, the liquid density and liquid viscosity are important. In the high temperature testing, the liquid density and liquid viscosity are important. The Archimedes number distribution combines effects of particle size distribution, solid-liquid density difference, and kinematic viscosity. The most important properties for testing with non-Newtonian slurries are the slurry yield stress, the slurry consistency, and the shear strength. The solid-liquid density difference and the particle size are also important. It is also important to match multiple properties within the same simulant to achieve behavior representative of the waste. Other properties such as particle shape, concentration, surface charge, and size distribution breadth, as well as slurry cohesiveness and adhesiveness, liquid pH and ionic strength also influence the simulant properties either directly or through other physical properties such as yield stress.
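
The Archimedes number that the report treats as a governing parameter is conventionally defined as (standard form; the report's exact convention is not reproduced here):

$$Ar=\frac{g\,d^{3}\,\rho_{\ell}\,(\rho_{s}-\rho_{\ell})}{\mu^{2}}=\frac{g\,d^{3}}{\nu^{2}}\,\frac{\rho_{s}-\rho_{\ell}}{\rho_{\ell}},$$

where $d$ is the particle diameter, $\rho_s$ and $\rho_\ell$ the solid and liquid densities, $\mu$ the dynamic viscosity, and $\nu=\mu/\rho_\ell$ the kinematic viscosity; evaluating $Ar$ across the particle size distribution gives the Archimedes number distribution that a simulant must match.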

Koopman, D.; Martino, C.; Poirier, M.

2012-04-26T23:59:59.000Z

283

Smart Libraries: Best SQE Practices for Libraries with an Emphasis on Scientific Computing  

Science Conference Proceedings (OSTI)

As scientific computing applications grow in complexity, more and more functionality is being packaged in independently developed libraries. Worse, as the computing environments in which these applications run grow in complexity, it gets easier to make mistakes in building, installing and using libraries as well as the applications that depend on them. Unfortunately, SQA standards so far developed focus primarily on applications, not libraries. We show that SQA standards for libraries differ from applications in many respects. We introduce and describe a variety of practices aimed at minimizing the likelihood of making mistakes in using libraries and at maximizing users' ability to diagnose and correct them when they occur. We introduce the term Smart Library to refer to a library that is developed with these basic principles in mind. We draw upon specific examples from existing products we believe incorporate smart features: MPI, a parallel message passing library, and HDF5 and SAF, both of which are parallel I/O libraries supporting scientific computing applications. We conclude with a narrative of some real-world experiences in using smart libraries with Ale3d, VisIt and SAF.

Miller, M C; Reus, J F; Matzke, R P; Koziol, Q A; Cheng, A P

2004-12-15T23:59:59.000Z

284

PHASE TRANSITION GENERATED COSMOLOGICAL MAGNETIC FIELD AT LARGE SCALES  

SciTech Connect

We constrain a primordial magnetic field (PMF) generated during a phase transition (PT) using the big bang nucleosynthesis bound on the relativistic energy density. The amplitude of the PMF at large scales is determined by the shape of the PMF spectrum outside its maximal correlation length scale. Even if the amplitude of the PMF at 1 Mpc is small, PT-generated PMFs can leave observable signatures in the potentially detectable relic gravitational wave background if a large enough fraction (1%-10%) of the thermal energy is converted into the PMF.

Kahniashvili, Tina [McWilliams Center for Cosmology and Department of Physics, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213 (United States); Tevzadze, Alexander G. [Abastumani Astrophysical Observatory, Ilia State University, 2A Kazbegi Ave., Tbilisi 0160 (Georgia); Ratra, Bharat, E-mail: tinatin@phys.ksu.edu, E-mail: aleko@tevza.org, E-mail: ratra@phys.ksu.edu [Department of Physics, Kansas State University, 116 Cardwell Hall, Manhattan, KS 66506 (United States)

2011-01-10T23:59:59.000Z

285

Solar cycle variations of large scale flows in the Sun  

E-Print Network (OSTI)

Using data from the Michelson Doppler Imager (MDI) instrument on board the Solar and Heliospheric Observatory (SOHO), we study the large-scale velocity fields in the outer part of the solar convection zone using the ring diagram technique. We use observations from four different times to study possible temporal variations in flow velocity. We find definite changes in both the zonal and meridional components of the flows. The amplitude of the zonal flow appears to increase with solar activity and the flow pattern also shifts towards lower latitude with time.

Sarbani Basu; H. M. Antia

2000-01-17T23:59:59.000Z

286

Large Scale Deployment of Renewables for Electricity Generation  

E-Print Network (OSTI)

-cellulose material. Anaerobic digestion or gasification of biomass produces gas that can be used in similar applications to natural gas. Small-scale biogas production is now a well-established technology and large-scale application is in the advanced stages of development. The possibility of using biogas in fuel cells exists, but there are a number of technical difficulties that remain to be overcome in this area. Source: www.britishbiogen.co.uk and WEA (2000). 5 All figures refer to electricity. Where necessary...

Neuhoff, Karsten

2006-03-14T23:59:59.000Z

287

Large scale obscuration and related climate effects open literature bibliography  

SciTech Connect

Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

1994-05-01T23:59:59.000Z

288

Large Scale GSHP as Alternative Energy for American Farmers Geothermal  

Open Energy Info (EERE)

Large Scale GSHP as Alternative Energy for American Farmers Geothermal Project. Last modified on July 22, 2011. Project Title: Large Scale GSHP as Alternative Energy for American Farmers. Project Type / Topic 1: Recovery Act - Geothermal Technologies Program: Ground Source Heat Pumps. Project Type / Topic 2: Topic Area 1: Technology Demonstration Projects. Project Description: We propose a large scale demonstration of solar assisted GSHP systems on two poultry farms in mid-Missouri. The heating load of Farm A with 4 barns will be 510 tons and Farm B with 5 barns will be 440 tons. Solar assisted GSHP systems will be installed, and a new utility business model will be applied to both farms. Farm A will be constructed with commercial products in order to bring immediate impact to the industry. Farm B will also have a thermal energy storage system installed, and improved solar collectors will be used. A comprehensive energy analysis and economic study will be conducted.

289

DOE's Office of Science Seeks Proposals for Expanded Large-Scale...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

of new energy technologies." "This unique program opens up the world of high-performance computing to a broad array of new scientific users," Bodman said. "Through the use of...

290

Large Scale Geothermal Exchange System for Residential, Office and Retail  

Open Energy Info (EERE)

Large Scale Geothermal Exchange System for Residential, Office and Retail Development Geothermal Project. Last modified on July 22, 2011. Project Title: Large Scale Geothermal Exchange System for Residential, Office and Retail Development. Project Type / Topic 1: Recovery Act - Geothermal Technologies Program: Ground Source Heat Pumps. Project Type / Topic 2: Topic Area 1: Technology Demonstration Projects. Project Description: RiverHeath will be a new neighborhood, with residences, shops, restaurants, and offices. The design incorporates walking trails, community gardens, green roofs, and innovative stormwater controls. A major component of the project is our reliance on renewable energy. One legacy of the land's industrial past is an onsite hydro-electric facility which formerly powered the paper factories. The onsite hydro is being refurbished and will furnish 100% of the project's electricity demand.

291

Breakthrough Large-Scale Industrial Project Begins Carbon Capture and  

NLE Websites -- All DOE Office Websites (Extended Search)

28, 2013. Breakthrough Large-Scale Industrial Project Begins Carbon Capture and Utilization. DOE-Supported Project in Texas Demonstrates Viability of CCUS Technology. Washington, D.C. - A breakthrough carbon capture, utilization, and storage (CCUS) project in Texas has begun capturing carbon dioxide (CO2) and piping it to an oilfield for use in enhanced oil recovery (EOR). The project at Air Products and Chemicals' hydrogen production facility in Port Arthur, Texas, is significant for demonstrating both the effectiveness and commercial viability of CCUS technology as an option in helping mitigate atmospheric CO2 emissions. Funded in part through the American Recovery and Reinvestment Act (ARRA), the project is managed by the U.S.

292

Cosmological Simulations for Large-Scale Sky Surveys | Argonne Leadership  

NLE Websites -- All DOE Office Websites (Extended Search)

[Image caption: Instantaneous velocity magnitude in a flow through an open valve in a valve/piston assembly. Christos Altantzis, MIT, and Martin Schmitt, LAV; all images generated from their work at LAV.] Cosmological Simulations for Large-Scale Sky Surveys. PI Name: Christos Frouzakis. PI Email: frouzakis@lav.mavt.ethz.ch. Institution: Swiss Federal Institute of Technology Zurich. Allocation Program: INCITE. Allocation Hours at ALCF: 100 Million. Year: 2014. Research Domain: Chemistry. The combustion of coal and petroleum-based fuels supplies most of the energy needed to meet the world's transportation and power generation demands. To address the anticipated petroleum shortage, along with increasing energy

293

A New Scalable Directory Architecture for Large-Scale Multiprocessors  

E-Print Network (OSTI)

The memory overhead introduced by directories constitutes a major hurdle in the scalability of cc-NUMA architectures, which makes the shared-memory paradigm unfeasible for very large-scale systems. This work is focused on improving the scalability of shared-memory multiprocessors by significantly reducing the size of the directory. We propose multilayer clustering as an effective approach to reduce the directory-entry width. Detailed evaluation for 64 processors shows that using this approach we can drastically reduce the memory overhead, while suffering a performance degradation very similar to previous compressed schemes (such as Coarse Vector). In addition, a novel two-level directory architecture is proposed in order to eliminate the penalty caused by these compressed directories. This organization consists of a small Full-Map firstlevel directory (which provides precise information for the most recently referenced lines) and a compressed secondlevel directory (which provides in-ex...
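
The overhead argument is easy to make concrete. A back-of-envelope sketch follows, with the 64-byte line size and 4-processor cluster size chosen purely for illustration (they are not parameters taken from the paper):

```python
# Directory bits per cache line, as a fraction of the line itself, for a
# full-map presence vector versus a coarse-vector compression. Multilayer
# clustering compresses the entry further; its exact encoding is not
# reproduced here.
LINE_BITS = 64 * 8          # assumed 64-byte cache line

def overhead(dir_bits):
    return dir_bits / LINE_BITS

for p in (64, 256, 1024):
    full_map = p            # one presence bit per processor
    coarse4  = p // 4       # one bit per 4-processor cluster
    print(f"P={p:5d}  full-map {overhead(full_map):6.1%}  "
          f"coarse-4 {overhead(coarse4):6.1%}")
```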

Manuel E. Acacio; José González; José M. García; José Duato

2001-01-01T23:59:59.000Z

294

Atypical Behavior Identification in Large Scale Network Traffic  

SciTech Connect

Cyber analysts are faced with the daunting challenge of identifying exploits and threats within potentially billions of daily records of network traffic. Enterprise-wide cyber traffic involves hundreds of millions of distinct IP addresses and results in data sets ranging from terabytes to petabytes of raw data. Creating behavioral models and identifying trends based on those models requires data intensive architectures and techniques that can scale as data volume increases. Analysts need scalable visualization methods that foster interactive exploration of data and enable identification of behavioral anomalies. Developers must carefully consider application design, storage, processing, and display to provide usability and interactivity with large-scale data. We present an application that highlights atypical behavior in enterprise network flow records. This is accomplished by utilizing data intensive architectures to store the data, aggregation techniques to optimize data access, statistical techniques to characterize behavior, and a visual analytic environment to render the behavioral trends, highlight atypical activity, and allow for exploration.
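
As a deliberately simplified illustration of the statistical step, the sketch below flags hosts whose flow counts sit far from a robust population baseline. This is a generic median/MAD outlier test, not the system described in the abstract; the threshold and the toy counts are assumptions.

```python
import numpy as np

def atypical(counts, k=3.5):
    """Flag entries deviating from the median by more than k robust
    (MAD-based) deviations; 0.6745 rescales MAD to a normal sigma."""
    counts = np.asarray(counts, dtype=float)
    med = np.median(counts)
    mad = np.median(np.abs(counts - med))
    mad = mad if mad > 0 else 1.0          # guard against zero spread
    return np.abs(0.6745 * (counts - med) / mad) > k

flows = [120, 135, 128, 119, 131, 9800, 124]   # toy per-host flow counts
print(atypical(flows))                         # only the 9800 host flags
```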

Best, Daniel M.; Hafen, Ryan P.; Olsen, Bryan K.; Pike, William A.

2011-10-23T23:59:59.000Z

295

Nuclear-pumped lasers for large-scale applications  

SciTech Connect

Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to demonstrate the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 8 figs., 5 tabs.

Anderson, R.E.; Leonard, E.M.; Shea, R.F.; Berggren, R.R.

1989-05-01T23:59:59.000Z

296

Unified architecture for large-scale attested metering  

E-Print Network (OSTI)

We introduce a secure architecture called an attested meter for advanced metering that supports large-scale deployments, flexible configurations, and enhanced protection for consumer privacy and metering integrity. Our study starts with a threat analysis for advanced metering networks and formulates protection requirements for those threats. The attested meter satisfies these through a unified set of system interfaces based on virtual machines and attestation for the software agents of various parties that use the meter. We argue that this combination provides a well-adapted architecture for advanced metering and we take a step towards demonstrating its feasibility with a prototype implementation based on the Trusted Platform Module (TPM) and Xen Virtual Machine Monitor (VMM). This is the first effort to use virtual machines and attestation in an advanced meter.
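
The measurement idea underneath such attestation can be sketched in a few lines: each software component is hashed into a tamper-evident register so that a verifier who knows the expected stack can recompute and compare. This mimics TPM PCR-extend semantics in plain Python; quote signing, nonces, and the Xen measurement hooks are all omitted, and the component names are made up.

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: register' = H(register || H(measurement))."""
    return hashlib.sha256(
        register + hashlib.sha256(measurement).digest()).digest()

pcr = b"\x00" * 32                      # registers start zeroed
for component in (b"vmm-image", b"meter-agent", b"billing-agent"):
    pcr = extend(pcr, component)        # order-sensitive measurement chain

# A verifier recomputes the same chain from known-good components and
# compares it against the (signed) value quoted by the meter.
print(pcr.hex())
```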

Michael Lemay; George Gross; Carl A. Gunter; Sanjam Garg

2007-01-01T23:59:59.000Z

297

Modeling The Large Scale Bias of Neutral Hydrogen  

E-Print Network (OSTI)

We present analytical estimates of the large scale bias of neutral Hydrogen (HI) based on the Halo Occupation Distribution formalism. We use a simple, non-parametric model which monotonically relates the total mass of a halo with its HI mass at zero redshift; for earlier times we assume limiting models for the HI density parameter evolution, consistent with the data presently available, as well as two main scenarios for the evolution of our HI mass - Halo mass relation. We find that both the linear and the first non-linear bias terms exhibit a remarkable evolution with redshift, regardless of the specific limiting model assumed for the HI evolution. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not appreciably affect the measurement of the HI Power Spectrum.
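
A monotonic halo-mass-to-HI-mass relation of the kind described can be realized non-parametrically by rank matching. The sketch below is a generic abundance-matching stand-in, with synthetic mass distributions in place of the paper's model:

```python
import numpy as np

def monotonic_match(halo_mass, sample_mhi):
    """Assign HI masses to halos by rank so that M_halo -> M_HI is
    monotonic; sample_mhi(n) draws n HI masses from an assumed
    distribution (a placeholder for the real HI mass function)."""
    order = np.argsort(halo_mass)[::-1]               # most massive first
    mhi = np.empty(halo_mass.shape, dtype=float)
    mhi[order] = np.sort(sample_mhi(halo_mass.size))[::-1]
    return mhi

rng = np.random.default_rng(0)
halos = rng.lognormal(mean=27.0, sigma=1.0, size=1000)    # toy halo masses
mhi = monotonic_match(halos, lambda n: rng.lognormal(21.0, 1.2, n))
assert (np.diff(mhi[np.argsort(halos)]) >= 0).all()       # monotone check
```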

Marin, Felipe; Seo, Hee-Jong; Vallinotto, Alberto

2009-01-01T23:59:59.000Z

298

Hydrogen-combustion analyses of large-scale tests  

DOE Green Energy (OSTI)

This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters.
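
One standard bounding quantity in this class of analysis is the adiabatic, isochoric, complete-combustion (AICC) pressure, which for an ideal gas follows from P2/P1 = (n2 T2)/(n1 T1) at fixed volume. The sketch below evaluates that ratio; the temperature and mole-ratio values are assumed placeholders, not numbers from the report.

```python
def aicc_pressure(p1, t1, t2, mole_ratio=1.0):
    """Ideal-gas AICC peak pressure: P2 = P1 * (n2/n1) * (T2/T1).
    p1 in kPa, t1/t2 in K, mole_ratio = moles after / moles before."""
    return p1 * mole_ratio * (t2 / t1)

# Placeholder values for a lean hydrogen-air mixture (illustrative only;
# combustion slightly reduces the mole count since 2 H2 + O2 -> 2 H2O).
print(aicc_pressure(p1=101.3, t1=300.0, t2=1400.0, mole_ratio=0.97))
```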

Gido, R.G.; Koestel, A.

1986-01-01T23:59:59.000Z

299

Detecting and mitigating abnormal events in large scale networks: budget constrained placement on smart grids  

Science Conference Proceedings (OSTI)

Several scenarios exist in the modern interconnected world which call for an efficient network interdiction algorithm. Applications are varied, including various monitoring and load shedding applications on large smart energy grids, computer network security, preventing the spread of Internet worms and malware, policing international smuggling networks, and controlling the spread of diseases. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs, specifically focusing on the sensor/switch placement problem for large-scale energy grids. Many of these questions turn out to be computationally hard to tackle. We present a particular form of the interdiction question which is practically relevant and which we show as computationally tractable. A polynomial-time algorithm will be presented for solving this problem.
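
One classical special case that is computationally tractable, finding a cheapest set of edges whose removal disconnects a source from a sink, reduces to a minimum s-t cut by max-flow/min-cut duality. The NetworkX sketch below illustrates that reduction on a toy grid fragment; it is not the specific polynomial-time algorithm of the paper, and the graph and costs are made up.

```python
import networkx as nx

G = nx.DiGraph()
for u, v, cost in [("gen", "a", 3), ("gen", "b", 2), ("a", "load", 2),
                   ("b", "load", 3), ("a", "b", 1)]:
    G.add_edge(u, v, capacity=cost)        # interdiction cost as capacity

cut_cost, (src_side, sink_side) = nx.minimum_cut(G, "gen", "load")
cut_edges = [(u, v) for u, v in G.edges
             if u in src_side and v in sink_side]
print(cut_cost, cut_edges)   # cheapest edges to monitor or switch off
```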

Santhi, Nandakishore [Los Alamos National Laboratory]; Pan, Feng [Los Alamos National Laboratory]

2010-10-19T23:59:59.000Z

300

Environmental Consequences of Large-Scale Deployment of New Energy Systems  

SciTech Connect

This project's scientific goal was to achieve better understanding of where land cover change may mitigate climate change, accounting for both direct climate effects as well as the impacts on the global carbon cycle. As tools for investigating this problem, several models of different complexities were used: an offline land model, a standard coupled climate model, and a model in which coupled carbon-climate interactions were explicitly represented. Results from all model simulations were qualitatively similar: climate mitigation projects involving large-scale re-growth of forests are predicted to be beneficial in mitigating future CO{sub 2}-induced global warming if these are carried out in the tropical latitudes, to be largely ineffectual if conducted in temperate latitudes, and to be counterproductive if implemented at high latitudes. Details of the quantitative differences in these predictions which are exhibited by the chosen climate models also are discussed.

Phillips, T J

2007-02-23T23:59:59.000Z



301

Environmental Consequences of Large-Scale Deployment of New Energy Systems  

DOE Green Energy (OSTI)

This project's scientific goal was to achieve better understanding of where land cover change may mitigate climate change, accounting for both direct climate effects as well as the impacts on the global carbon cycle. As tools for investigating this problem, several models of different complexities were used: an offline land model, a standard coupled climate model, and a model in which coupled carbon-climate interactions were explicitly represented. Results from all model simulations were qualitatively similar: climate mitigation projects involving large-scale re-growth of forests are predicted to be beneficial in mitigating future CO{sub 2}-induced global warming if these are carried out in the tropical latitudes, to be largely ineffectual if conducted in temperate latitudes, and to be counterproductive if implemented at high latitudes. Details of the quantitative differences in these predictions which are exhibited by the chosen climate models also are discussed.

Phillips, T J

2007-02-23T23:59:59.000Z

302

High Performance Scientific and Engineering Computing: Proceedings of the International Fortwihr Conference on Hpsec, Munich, March 16-18, 1998, 1st edition  

Science Conference Proceedings (OSTI)

From the Publisher: This volume contains the proceedings of an international conference on high performance scientific and engineering computing held in Munich in March 1998 and organized by FORTWIHR, the Bavarian Consortium for High Performance Scientific ...

Hans-Joachim Bungartz; F. Durst; C. Zenger

1999-01-01T23:59:59.000Z

303

On the Velocity in the Effective Field Theory of Large Scale Structures  

E-Print Network (OSTI)

We compute the renormalized two-point functions of density, divergence and vorticity of the velocity in the Effective Field Theory of Large Scale Structures. We show that the mass-weighted velocity, as opposed to the volume-weighted velocity, is the natural variable to use. We then prove that, because of momentum and mass conservation, the corrections from short scales to the large-scale power spectra of density, divergence and vorticity must start at order $k^{4}$. For the vorticity this constitutes the leading term. Exact (approximate) self-similarity of an Einstein-de Sitter ($\Lambda$CDM) background fixes the time dependence so that the vorticity power spectrum at leading order is uniquely determined, up to a normalization, by the symmetries of the problem. Focusing on density and velocity divergence, we show that the current formulation of the theory does not have enough counterterms to cancel all divergences. At the lowest order, the missing terms are a new stochastic noise and a heat conduction term in the continuity equation. For an Einstein-de Sitter universe, we show that all three renormalized cross- and auto-correlation functions have the same structure but different numerical coefficients, which we compute. Using momentum instead of velocity, one can re-absorb the new terms and work with an uncorrected continuity equation, but at the cost of having uncancelled IR divergences in equal-time correlators and a more complicated perturbation theory.
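
In compact form, the statement about the vorticity spectrum reads as below. This is a schematic transcription of the abstract, with A the single free normalization; the time exponent is left symbolic, since the abstract does not quote its value.

```latex
% Momentum and mass conservation force short-scale corrections to start
% at k^4; for vorticity this is the leading term, so at leading order
P_{w}(k,\tau) \;=\; A\,\tau^{\,n_{w}}\,k^{4}\,, \qquad k \ll k_{\mathrm{NL}},
% with the exponent n_w fixed by the exact (approximate) self-similarity
% of the Einstein-de Sitter ($\Lambda$CDM) background.
```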

Lorenzo Mercolli; Enrico Pajer

2013-07-11T23:59:59.000Z

304

Confidence in ASCI scientific simulations  

SciTech Connect

The US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program's examination of needs for focused validation and verification (V and V). These V and V activities will ensure that 100 TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles for model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge the role for computing and simulation, this examination establishes a foundation for the extension of the traditional processes to include verification and scientific software development that results in the notional framework known as Sargent's Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V and V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V and V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms that are employed by ALEGRA make it a subject for V and V that should typify what is required for many ASCI simulations.

Ang, J.A.; Trucano, T.G. [Sandia National Labs., Albuquerque, NM (United States); Luginbuhl, D.R. [Dept. of Energy, Washington, DC (United States)

1998-06-01T23:59:59.000Z

305

Optimal and hierarchical clustering of large-scale hybrid networks for scientific mapping  

Science Conference Proceedings (OSTI)

Previous studies have shown that hybrid clustering methods based on textual and citation information outperforms clustering methods that use only one of these components. However, former methods focus on the vector space model. In this paper we apply ... Keywords: Bibliometric analysis, Modularity optimization, Network analysis, Optimal and hierarchical clustering, Text mining
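
For orientation, the modularity-optimization step on a plain (single-layer) graph looks as follows with NetworkX's greedy agglomerative heuristic; constructing the hybrid text-plus-citation network, the paper's actual contribution, is not reproduced here.

```python
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()                        # stand-in citation graph
parts = community.greedy_modularity_communities(G)
print(len(parts), round(community.modularity(G, parts), 3))
```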

Xinhai Liu; Wolfgang Glänzel; Bart Moor

2012-05-01T23:59:59.000Z

306

Using cross-layer adaptations for dynamic data management in large scale coupled scientific workflows  

Science Conference Proceedings (OSTI)

As system scales and application complexity grow, managing and processing simulation data has become a significant challenge. While recent approaches based on data staging and in-situ/in-transit data processing are promising, dynamic data volumes and ... Keywords: coupled simulation workflows, cross-layer adaptation, data management, in-situ/in-transit, staging

Tong Jin, Fan Zhang, Qian Sun, Hoang Bui, Manish Parashar, Hongfeng Yu, Scott Klasky, Norbert Podhorszki, Hasan Abbasi

2013-11-01T23:59:59.000Z

307

FlexIO: Location Flexible Execution of In Situ Analytics for Large Scale Scientific Applications  

E-Print Network (OSTI)

@cc.gatech.edu) Scott Klasky (klasky@ornl.gov) Web: https://research.cc.gatech.edu/sdmatcercs/ http://www.olcf

Epema, Dick H.J.

308

A preview and exploratory technique for large-scale scientific simulations  

Science Conference Proceedings (OSTI)

Successful in-situ and remote visualization solutions must have minimal storage requirements and account for only a small percentage of supercomputing time. One solution that meets these requirements is to store a compact intermediate representation ...

Anna Tikhonova; Hongfeng Yu; Carlos D. Correa; Jacqueline H. Chen; Kwan-Liu Ma

2011-04-01T23:59:59.000Z

309

Large-Scale Spray Releases: Additional Aerosol Test Results  

SciTech Connect

One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). In tests at both scales, simulants were used to mimic the relevant physical properties projected for actual WTP process streams.

Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

2013-08-01T23:59:59.000Z

310

Ferroelectric opening switches for large-scale pulsed power drivers.  

DOE Green Energy (OSTI)

Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000 X higher energy density than VDT and has a great number of other substantial advantages, but it has all but been neglected for all of these decades. The uniform explanation for neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes that provides an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma - this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap - this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilovolts and tens of kiloamperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional applications in optics and transducer applications. Emphasis is on use of high performance ferroelectrics with the objective of developing an opening switch that would be suitable for large scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of these unexpected discoveries have led to new research directions to address challenges.

Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

2009-11-01T23:59:59.000Z

311

Enabling large-scale next-generation sequence assembly with Blacklight  

Science Conference Proceedings (OSTI)

A variety of extremely challenging biological sequence analyses were conducted on the XSEDE large shared memory resource Blacklight, using current bioinformatics tools and encompassing a wide range of scientific applications. These include genomic sequence ... Keywords: NGS, RNA-Seq, data-intensive computing, de novo assembly, genome, high-performance computing, metagenome, primates, shared memory, transcriptome

M. Brian Couger; Lenore Pipes; Philip D. Blood; Christopher E. Mason

2013-07-01T23:59:59.000Z

312

Model Abstraction Techniques for Large-Scale Power Systems  

E-Print Network (OSTI)

Report on System Simulation using High Performance Computing. Prepared by New Mexico Tech, New Mexico. Application of High Performance Computing to Electric Power System Modeling, Simulation and Analysis, Task Two.

313

DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestratio...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestration Projects. May 6, 2008 - 11:30am.

314

Energy Department Awards $66.7 Million for Large-Scale Carbon...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Department Awards $66.7 Million for Large-Scale Carbon Sequestration Project. December 18, 2007 - 4:58pm.

315

The Distinction between Large-Scale and Mesoscale Contribution to Severe Convection: A Case Study Example  

Science Conference Proceedings (OSTI)

Using a case study of a relatively modest severe weather event as an example, a framework for understanding the large-scale-mesoscale interaction is developed and discussed. Large-scale processes are limited, by definition, to those which are ...

Charles A. Doswell III

1987-03-01T23:59:59.000Z

316

A New Scalable Directory Architecture for Large-Scale Multiprocessors  

E-Print Network (OSTI)

The memory overhead introduced by directories constitutes a major hurdle in the scalability of cc-NUMA architectures, which makes the shared-memory paradigm unfeasible for very large-scale systems. This work is focused on improving the scalability of shared-memory multiprocessors by significantly reducing the size of the directory. We propose multilayer clustering as an effective approach to reduce the directory-entry width. Detailed evaluation for 64 processors shows that using this approach we can drastically reduce the memory overhead, while suffering a performance degradation very similar to previous compressed schemes (such as Coarse Vector). In addition, a novel two-level directory architecture is proposed in order to eliminate the penalty caused by these compressed directories. This organization consists of a small Full-Map firstlevel directory (which provides precise information for the most recently referenced lines) and a compressed secondlevel directory (which provides in-excess information). Results show that a system with this directory architecture can achieve the same performance as a multiprocessor with a big and non-scalable Full-Map directory, with a very significant reduction of the memory overhead.

Manuel E. Acacio; José González; José M. García

2001-01-01T23:59:59.000Z

317

LARGE SCALE METHOD FOR THE PRODUCTION AND PURIFICATION OF CURIUM  

DOE Patents (OSTI)

A large-scale process for production and purification of Cm/sup 242/ is described. Aluminum slugs containing Am are irradiated and declad in a NaOH--NaNO/sub 3/ solution at 85 to 100 deg C. The resulting slurry is filtered and washed with NaOH, NH/sub 4/OH, and H/sub 2/O. Recovery of Cm from filtrate and washings is effected by an Fe(OH)/sub 3/ precipitation. The precipitates are then combined and dissolved in HCl and refractory oxides centrifuged out. These oxides are then fused with Na/sub 2/CO/sub 3/ and dissolved in HCl. The solution is evaporated and LiCl solution added. The Cm, rare earths, and anionic impurities are adsorbed on a strong-base anion exchange resin. Impurities are eluted with LiCl--HCl solution; rare earths and Cm are eluted by HCl. Other ion exchange steps further purify the Cm. The Cm is then precipitated as fluoride and used in this form or further purified and processed. (T.R.H.)

Higgins, G.H.; Crane, W.W.T.

1959-05-19T23:59:59.000Z

318

Safety aspects of large-scale combustion of hydrogen  

DOE Green Energy (OSTI)

Recent hydrogen-safety investigations have studied the possible large-scale effects from phenomena such as the accumulation of combustible hydrogen-air mixtures in large, confined volumes. Of particular interest are safe methods for the disposal of the hydrogen and the pressures which can arise from its confined combustion. Consequently, tests of the confined combustion of hydrogen-air mixtures were conducted in a 2100 m/sup 3/ volume. These tests show that continuous combustion, as the hydrogen is generated, is a safe method for its disposal. It also has been seen that, for hydrogen concentrations up to 13 vol %, it is possible to predict maximum pressures that can occur upon ignition of premixed hydrogen-air atmospheres. In addition, information has been obtained concerning the survivability of the equipment that is needed to recover from an accident involving hydrogen combustion. An accident that involved the inadvertent mixing of hydrogen and oxygen gases in a tube trailer gave evidence that under the proper conditions hydrogen combustion can transition to a detonation. If detonation occurs, the pressures which can be experienced are much higher, although short in duration.

Edeskuty, F.J.; Haugh, J.J.; Thompson, R.T.

1986-01-01T23:59:59.000Z

319

Enabling Large-Scale Deliberation Using Attention-Mediation Metrics  

Science Conference Proceedings (OSTI)

Humanity now finds itself faced with a range of highly complex and controversial challenges--such as climate change, the spread of disease, international security, scientific collaborations, product development, and so on--that call upon us to bring ... Keywords: Argumentation, Deliberation, Metrics

Mark Klein

2012-10-01T23:59:59.000Z

320

Large-Scale Spray Releases: Initial Aerosol Test Results  

Science Conference Proceedings (OSTI)

One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. Orifices included round holes and rectangular slots. The round holes ranged in size from 0.2 to 4.46 mm. The slots ranged from (width × length) 0.3 × 5 to 2.74 × 76.2 mm. Most slots were oriented longitudinally along the pipe, but some were oriented circumferentially. In addition, a limited number of multi-hole test pieces were tested in an attempt to assess the impact of a more complex breach. Much of the testing was conducted at pressures of 200 and 380 psi, but some tests were conducted at 100 psi. Testing the largest postulated breaches was deemed impractical because of the large size of some of the WTP equipment. The purpose of this report is to present the experimental results and analyses for the aerosol measurements obtained in the large-scale test stand. The report includes a description of the simulants used and their properties, equipment and operations, data analysis methodology, and test results. The results of tests investigating the role of slurry particles in plugging of small breaches are reported in Mahoney et al. 2012a. The results of the aerosol measurements in the small-scale test stand are reported in Mahoney et al. (2012b).

Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

2012-12-01T23:59:59.000Z



321

Large-Scale Data Challenges in Future Power Grids  

Science Conference Proceedings (OSTI)

This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustments based on real time data. Even though millions of sensors such as phasor measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing systems can satisfy all the requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality of service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL's FPGI and was shown to be able to integrate huge amounts of data from a large number of sensors and a diverse set of applications.

Yin, Jian; Sharma, Poorva; Gorton, Ian; Akyol, Bora A.

2013-03-25T23:59:59.000Z

322

Parallel Stochastic Gradient Algorithms for Large-Scale Matrix  

E-Print Network (OSTI)

Apr 26, 2011 ... For example, on the Netflix Prize data set, prior art computes rating predictions in approximately 4 hours, while Jellyfish solves the same ...
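
The underlying computation is stochastic gradient descent over the observed entries of a low-rank factorization. A minimal serial sketch follows; Jellyfish's contribution is a lock-free parallel schedule for exactly these updates, none of which is reproduced here, and the data are synthetic stand-ins for ratings.

```python
import numpy as np

# min over L, R of sum over observed (i, j) of (A_ij - L_i . R_j)^2 + reg
rng = np.random.default_rng(1)
m, n, r = 100, 80, 5
L = 0.1 * rng.standard_normal((m, r))
R = 0.1 * rng.standard_normal((n, r))
obs = [(rng.integers(m), rng.integers(n), rng.standard_normal())
       for _ in range(5000)]

lr, reg = 0.05, 1e-3
for epoch in range(20):
    for i, j, a in obs:
        err = a - L[i] @ R[j]
        L[i], R[j] = (L[i] + lr * (err * R[j] - reg * L[i]),
                      R[j] + lr * (err * L[i] - reg * R[j]))

rmse = np.sqrt(np.mean([(a - L[i] @ R[j]) ** 2 for i, j, a in obs]))
print(f"training RMSE after 20 epochs: {rmse:.3f}")
```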

323

Accelerating Satellite Image Based Large-Scale Settlement Detection with GPU  

Science Conference Proceedings (OSTI)

Computer vision algorithms for image analysis are often computationally demanding. Application of such algorithms on large image databases, such as the high-resolution satellite imagery covering the entire land surface, can easily saturate the computational capabilities of conventional CPUs. There is a great demand for vision algorithms running on high performance computing (HPC) architecture capable of processing petascale image data. We exploit the parallel processing capability of GPUs to present a GPU-friendly algorithm for robust and efficient detection of settlements from large-scale high-resolution satellite imagery. Feature descriptor generation is an expensive but key step in automated scene analysis. To address this challenge, we present GPU implementations for three different feature descriptors: multiscale Histogram of Oriented Gradients (HOG), Gray Level Co-Occurrence Matrix (GLCM) contrast, and local pixel intensity statistics. We perform extensive experimental evaluations of our implementation using diverse and large image datasets. Our GPU implementation of the feature descriptor algorithms results in speedups of 220 times compared to the CPU version. We present a highly efficient settlement detection system running on a multi-GPU architecture capable of extracting human settlement regions from city-scale, sub-meter spatial resolution aerial imagery spanning roughly 1200 sq. kilometers in just 56 seconds, with detection accuracy close to 90%. This remarkable speedup, with high detection accuracy maintained, demonstrates that such computational advances hold the solution for petascale image analysis challenges.
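
Of the three descriptors, GLCM contrast is the easiest to show compactly: contrast = sum over (i, j) of (i - j)^2 P(i, j), where P is the normalized co-occurrence matrix of quantized gray levels. The plain-NumPy sketch below is a CPU stand-in for the GPU kernel; the quantization level count and pixel offset are assumptions.

```python
import numpy as np

def glcm_contrast(img, dx=1, dy=0, levels=8):
    """GLCM contrast for a single non-negative pixel offset (dx, dy)."""
    q = (img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)                 # quantized gray levels
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                               # neighbors at the offset
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)    # co-occurrence counts
    glcm /= glcm.sum()                            # normalize to P(i, j)
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()

img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(glcm_contrast(img))
```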

Patlolla, Dilip Reddy [ORNL]; Cheriyadat, Anil M [ORNL]; Weaver, Jeanette E [ORNL]; Bright, Eddie A [ORNL]

2012-01-01T23:59:59.000Z

324

Horizontal Entrainment and Detrainment in Large-Scale Eddies  

Science Conference Proceedings (OSTI)

We compute the evolution of disturbances on a circularly symmetric eddy having uniform vorticity in a central core, in a surrounding annulus, and in the irrotational exterior water mass. This vortex is known to be (Kelvin-Helmholtz) unstable when ...

Melvin E. Stern

1987-10-01T23:59:59.000Z

325

Efficient random coordinate descent algorithms for large-scale ...  

E-Print Network (OSTI)

rate of convergence for the expected values of the objective function. We also ... complexity of computing the direction d_ij is O(n_i + n_j) (see [16,30] for a de-.
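
From the visible fragment, each step updates only a pair of (block) coordinates along a direction d_ij that preserves the linear coupling constraint, which is why its cost is O(n_i + n_j). A toy sketch on a separable quadratic with a single sum constraint follows, with all problem data assumed:

```python
import numpy as np

# Toy randomized 2-coordinate descent for
#   min 0.5 * ||x - c||^2   subject to   sum(x) == sum(c).
# Each step moves along e_i - e_j, which keeps the constraint satisfied
# while touching only two coordinates.
rng = np.random.default_rng(0)
n = 1000
c = rng.standard_normal(n)
x = np.full(n, c.sum() / n)                # feasible starting point

for _ in range(20_000):
    i, j = rng.choice(n, size=2, replace=False)
    g = (x[i] - c[i]) - (x[j] - c[j])      # derivative along e_i - e_j
    x[i] -= g / 2.0                        # exact line minimum for this
    x[j] += g / 2.0                        # quadratic; sum(x) unchanged
print(abs(x.sum() - c.sum()), np.linalg.norm(x - c))  # residual shrinks
```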

326

ANL/ALCF/ESP-13/14 NAMD - The Engine for Large-Scale Classical MD  

NLE Websites -- All DOE Office Websites (Extended Search)

NAMD - The Engine for Large-Scale Classical MD Simulations of Biomolecular Systems Based on a Polarizable Force Field. ALCF-2 Early Science Program Technical Report, Argonne Leadership Computing Facility. This report is available, at no cost, at http://www.osti.gov/bridge.

327

Large scale test simulations using the Virtual Environment for Test Optimization (VETO)  

SciTech Connect

The Virtual Environment for Test Optimization (VETO) is a set of simulation tools under development at Sandia to enable test engineers to do computer simulations of tests. The tool set utilizes analysis codes and test information to optimize design parameters and to provide an accurate model of the test environment, which aids in maximizing test performance, training, and safety. Previous VETO effort has included the development of two structural dynamics simulation modules that provide design and optimization tools for modal and vibration testing. These modules have allowed test engineers to model and simulate complex laboratory testing, to evaluate dynamic response behavior, and to investigate system testability. Further development of the VETO tool set will address the accurate modeling of large scale field test environments at Sandia. These field test environments provide weapon system certification capabilities and have different simulation requirements than those of laboratory testing.

Klenke, S.E.; Heffelfinger, S.R.; Bell, H.J.; Shierling, C.L.

1997-10-01T23:59:59.000Z

328

Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design  

SciTech Connect

A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q{sub e} values of a waveguide loaded cavity in the next-generation accelerator design. They present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full-scale cavity design are outlined.
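
For the simplest of the three, the method of successive linear problems, each iterate solves a linear generalized eigenproblem in T and its derivative and applies a Newton-like update. The dense toy sketch below illustrates the idea; it is not the NRRIT projection algorithm, and the rational test problem is made up.

```python
import numpy as np
from scipy.linalg import eig

def slp(T, Tp, lam0, tol=1e-10, maxit=50):
    """Method of successive linear problems for T(lam) x = 0: at each
    iterate solve T(l) x = mu T'(l) x and update l <- l - mu using the
    smallest-|mu| eigenvalue."""
    lam = lam0
    for _ in range(maxit):
        mu, X = eig(T(lam), Tp(lam))
        k = np.argmin(np.abs(mu))
        lam = lam - mu[k]
        if abs(mu[k]) < tol:
            break
    return lam, X[:, k]

# Made-up rational test problem: T(l) = A - l*I + (l / (1 + l)) * B.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
B = 0.1 * rng.standard_normal((6, 6))
I = np.eye(6)
T  = lambda l: A - l * I + (l / (1 + l)) * B
Tp = lambda l: -I + (1.0 / (1 + l) ** 2) * B
print(slp(T, Tp, lam0=0.5)[0])
```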

Liao, Ben-Shan; Bai, Zhaojun; /UC, Davis; Lee, Lie-Quan; Ko, Kwok; /SLAC

2006-09-28T23:59:59.000Z

329

Energy Consumption Models and Predictions for Large-Scale Systems  

Science Conference Proceedings (OSTI)

Responsible, efficient and well-planned power consumption is becoming a necessity for monetary returns and scalability of computing infrastructures. While there are numerous sources from which power data can be obtained, analyzing this data is an intrinsically ... Keywords: Energy model, Grid'5000, distributed systems

Taghrid Samak, Christine Morin, David Bailey

2013-05-01T23:59:59.000Z

330

The Oceanic Response to Large-Scale Atmospheric Disturbances  

Science Conference Proceedings (OSTI)

This paper is an analytical and numerical study of the response of the ocean to the fluctuating component of the wind stress as computed from twice-daily weather maps for the period 1973 to 1976. The results are described in terms of (time) mean ...

J. Willebrand; S. G. H. Philander; R. C. Pacanowski

1980-03-01T23:59:59.000Z

331

A Critical Look at Design, Verification, and Validation of Large Scale Simulations  

E-Print Network (OSTI)

two papers on the Department of Energy's Accelerated Scientific Computing Initiative or ASCI ... a nuclear weapon has crashed in the center of this holiest of all Islamic sites. Although the weapon ... The city was filled with people making the hajj. Because of the pilgrimage, there is no estimate of the number

Stevenson, D. E. "Steve"

332

A CRITICAL LOOK AT DESIGN, VERIFICATION, AND VALIDATION OF LARGE SCALE SIMULATIONS  

E-Print Network (OSTI)

's Accelerated Scientific Computing Initiative or ASCI. These two papers come from two respected authors, John ... This just in. From Makkah, Saudi Arabia. An FA-18 fighter carrying a tactical nuclear weapon has crashed ... experts cannot explain how this explosion could have occurred. The city was filled with people making

Stevenson, D. E. "Steve"

333

NETL: News Release - DOE Awards First Three Large-Scale Carbon...  

NLE Websites -- All DOE Office Websites (Extended Search)

9, 2007. DOE Awards First Three Large-Scale Carbon Sequestration Projects. U.S. Projects Total $318 Million and Further President Bush's Initiatives to Advance Clean Energy...

334

Large-scale solar projects in the United States have made great...  

NLE Websites -- All DOE Office Websites (Extended Search)

the United States have made great progress in delivering competitively priced renewable electricity. September 2013. The price at which electricity from large-scale solar power...

335

A Topological Framework for the Interactive Exploration of Large Scale Turbulent Combustion  

E-Print Network (OSTI)

comparison of terascale combustion simulation data. Mathe- ... premixed hydrogen flames. Combustion and Flame, [7] J. L. ... of Large Scale Turbulent Combustion, Peer-Timo Bremer,

Bremer, Peer-Timo

2010-01-01T23:59:59.000Z

336

Technical Report: A practical method for solving large-scale TRS

E-Print Network (OSTI)

Technical Report. University of Patras, Department of Mathematics, GR-265 04 Patras, Greece. http://www.math.upatras.gr/. A practical method for solving large-scale TRS.

337

Agent Based Modeling of large- scale socio-technical metal networks  

Science Conference Proceedings (OSTI)

Agent Based Modeling of large-scale socio-technical metal networks. Delft University of Technology, 17-02-10. Dr. Igor Nikolic, A.

338

ESS 2012 Peer Review - Nitrogen-Oxygen Battery for Large Scale...  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

2012. Frank Delnick, David Ingersoll, Karen Waldrip, Peter Feibelman. Nitrogen-Oxygen Battery: A Transformational Architecture for Large Scale Energy Storage. Power Sources...

339

NIChE Workshop on Materials for Large-Scale Energy ...  

Science Conference Proceedings (OSTI)

... Workshop on Materials for Large-Scale Energy Storage. Purpose: This workshop will delve into the end-use applications and market drivers for large ...

2010-10-05T23:59:59.000Z

340

NREL: News - NREL Offers an Open-Source Solution for Large-Scale...  

NLE Websites -- All DOE Office Websites (Extended Search)

News Release NR-3613. NREL Offers an Open-Source Solution for Large-Scale Energy Data Collection and Analysis. June 18, 2013. The Energy Department's National...



341

System aspects of large scale implementation of a photovoltaic power plant.  

E-Print Network (OSTI)

In this thesis the static and dynamic behavior of large-scale grid-connected PV power plants is analyzed. A model of a 15 MW…

Ruiz, Álvaro

2011-01-01T23:59:59.000Z

342

Workshop report on large-scale matrix diagonalization methods in chemistry theory institute  

SciTech Connect

The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint for the success in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10{sup 4} and 10{sup 9}, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of
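
For orientation, the Davidson iteration that dominated this space expands a subspace with diagonally preconditioned residuals and extracts Ritz pairs at each step. A bare-bones sketch for the lowest eigenpair of a symmetric, diagonally dominant matrix follows; production chemistry codes add blocking, restarts, and stronger preconditioners.

```python
import numpy as np

def davidson(A, tol=1e-8, maxit=100):
    """Bare-bones Davidson for the lowest eigenpair of a symmetric,
    diagonally dominant matrix: Rayleigh-Ritz in a growing subspace,
    expanded by the diagonally preconditioned residual."""
    n = A.shape[0]
    V = np.zeros((n, maxit + 1))
    V[0, 0] = 1.0                              # start from a unit vector
    diagA = np.diag(A)
    for m in range(1, maxit + 1):
        Vm = V[:, :m]
        theta, s = np.linalg.eigh(Vm.T @ A @ Vm)
        theta, s = theta[0], s[:, 0]           # lowest Ritz pair
        u = Vm @ s
        r = A @ u - theta * u                  # residual
        if np.linalg.norm(r) < tol:
            break
        t = r / (theta - diagA + 1e-12)        # Davidson correction
        t -= Vm @ (Vm.T @ t)                   # orthogonalize
        V[:, m] = t / np.linalg.norm(t)
    return theta, u

rng = np.random.default_rng(0)
n = 500
A = np.diag(np.arange(1.0, n + 1)) + 1e-3 * rng.standard_normal((n, n))
A = (A + A.T) / 2
print(davidson(A)[0])        # close to the smallest eigenvalue (~1.0)
```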

Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S. [eds.

1996-10-01T23:59:59.000Z

343

NESC-VII: Fracture Mechanics Analyses of WPS Experiments on Large-scale Cruciform Specimen  

SciTech Connect

This paper describes numerical analyses performed to simulate warm pre-stress (WPS) experiments conducted with large-scale cruciform specimens within the Network for Evaluation of Structural Components (NESC-VII) project. NESC-VII is a European cooperative action in support of WPS application in reactor pressure vessel (RPV) integrity assessment. The project aims at evaluation of the influence of WPS when assessing the structural integrity of RPVs. Advanced fracture mechanics models will be developed and applied to validate experiments concerning the effect of different WPS scenarios on RPV components. The Oak Ridge National Laboratory (ORNL), USA, contributes to Work Package-2 (Analyses of WPS experiments) within the NESC-VII network. A series of WPS type experiments on large-scale cruciform specimens have been conducted at CEA Saclay, France, within the framework of the NESC-VII project. This paper first describes NESC-VII feasibility test analyses conducted at ORNL. Very good agreement was achieved between AREVA NP SAS and ORNL. Further analyses were conducted to evaluate the NESC-VII WPS tests conducted under Load-Cool-Transient-Fracture (LCTF) and Load-Cool-Fracture (LCF) conditions. The objective of this work is to provide a definitive quantification of WPS effects when assessing the structural integrity of reactor pressure vessels. This information will be utilized to further validate, refine, and improve the WPS models that are being used in probabilistic fracture mechanics computer codes now in use by the NRC staff in their effort to develop risk-informed updates to Title 10 of the U.S. Code of Federal Regulations (CFR), Part 50, Appendix G.

Yin, Shengjun [ORNL; Williams, Paul T [ORNL; Bass, Bennett Richard [ORNL

2011-01-01T23:59:59.000Z

344

A multiperiod optimization model to schedule large-scale petroleum development projects  

E-Print Network (OSTI)

This dissertation solves an optimization problem in the area of scheduling large-scale petroleum development projects under several resource constraints. The dissertation focuses on the application of a metaheuristic search, the Genetic Algorithm (GA), in solving the problem. The GA is a global search method inspired by natural evolution. The method is widely applied to solve complex and sizable problems that are difficult to solve using exact optimization methods. A classical resource allocation problem in operations research, known as the Knapsack Problem (KP), is used for the formulation of the problem. The present work was motivated by a petroleum development scheduling problem in which large-scale investment projects are to be selected subject to a number of resource constraints over several periods. The constraints may arise from limitations in various resources such as capital budgets, operating budgets, and drilling rigs. The model also accounts for a number of assumptions and business rules encountered in the application that motivated this work. The model uses an economic performance objective to maximize the sum of the Net Present Value (NPV) of selected projects over a planning horizon subject to constraints involving discrete time-dependent variables. Computational experiments with 30 projects illustrate the performance of the model. The application example is only illustrative of the model and does not reveal real data. A greedy algorithm was first utilized to construct an initial estimate of the objective function. The GA was implemented to improve the solution and to investigate resource constraints and their effect on the assets' value. The timing and order of investment decisions under constraints have a prominent effect on the economic performance of the assets. The application of an integrated optimization model provides a means to maximize the financial value of the assets, efficiently allocate limited resources, and analyze more scheduling alternatives in less time.

Husni, Mohammed Hamza

2008-12-01T23:59:59.000Z
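
The GA-on-knapsack formulation described above can be illustrated with a toy, single-period version; the dissertation's model is multiperiod with several resource constraints, and the NPVs, costs, and budget below are invented for the sketch.

    import random

    npv    = [12, 7, 9, 14, 5, 11, 8]    # made-up project NPVs
    cost   = [ 4, 3, 5,  6, 2,  5, 4]    # made-up capital requirements
    budget = 15

    def fitness(bits):
        spent = sum(c for b, c in zip(bits, cost) if b)
        if spent > budget:                # infeasible selection: zero fitness
            return 0
        return sum(v for b, v in zip(bits, npv) if b)

    def ga(pop_size=40, generations=200, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in npv] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]              # elitist selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(npv))    # one-point crossover
                child = [1 - g if random.random() < p_mut else g
                         for g in a[:cut] + b[cut:]]   # bit-flip mutation
                children.append(child)
            pop = parents + children
        best = max(pop, key=fitness)
        return best, fitness(best)

    print(ga())

A multiperiod variant would replace each bit with a start-period index and check every period's budgets, but the select/cross/mutate loop stays the same.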

345

Formation of large-scale structures in ablative Kelvin-Helmholtz instability  

SciTech Connect

In this research, we studied numerically the nonlinear evolution of the Kelvin-Helmholtz instability (KHI) with and without thermal conduction, known respectively as the ablative KHI (AKHI) and the classical KHI (CKHI). The second order thermal conduction term with a variable thermal conductivity coefficient is added to the energy equation of the Euler equations in the AKHI to investigate the effect of thermal conduction on the evolution of large and small scale structures within the shear layer which separates the fluids of different velocities. The inviscid hyperbolic flux of the Euler equations is computed via the classical fifth order weighted essentially nonoscillatory finite difference scheme, and the temperature is solved by an implicit fourth order finite difference scheme with variable coefficients in the second order parabolic term to avoid the severe time step restriction imposed by the stability of the numerical scheme. As opposed to the CKHI, fine scale structures such as vortical structures are suppressed from forming in the AKHI due to the dissipative nature of the second order thermal conduction term. With a single-mode sinusoidal interface perturbation, the results of simulations show that the growth of higher harmonics is effectively suppressed and the flow is stabilized by the thermal conduction. With a two-mode sinusoidal interface perturbation, the vortex pairing is strengthened by the thermal conduction, which would allow the formation of large-scale structures and enhance the mixing of materials. In summary, our numerical studies show that thermal conduction can have a strong influence on the nonlinear evolution of the KHI. Thus, it should be included in applications where thermal conduction plays an important role, such as the formation of large-scale structures in high energy density physics and astrophysics.

Wang, L. F. [SMCE, China University of Mining and Technology, Beijing 100083 (China); CAPT, Peking University, Beijing 100871 (China) and LCP, Institute of Applied Physics and Computational Mathematics, Beijing 100088 (China); Department of Mathematics, Hong Kong Baptist University, Kowloon Tong (Hong Kong); Ye, W. H.; He, X. T. [CAPT, Peking University, Beijing 100871 (China) and LCP, Institute of Applied Physics and Computational Mathematics, Beijing 100088 (China); Department of Physics, Zhejiang University, Hangzhou 310027 (China); Don, Wai-Sun [Department of Mathematics, Hong Kong Baptist University, Kowloon Tong (Hong Kong); Sheng, Z. M. [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, CAS, Beijing 100190 (China) and Department of Physics, Shanghai Jiao Tong University, Shanghai 200240 (China); Li, Y. J. [SMCE, China University of Mining and Technology, Beijing 100083 (China)

2010-12-15T23:59:59.000Z
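
The reason the authors treat the conduction term implicitly is the parabolic time-step restriction of explicit schemes; a minimal sketch makes the point. This 1D, constant-coefficient, backward-Euler step is a stand-in assumption, the paper's actual scheme being fourth order with a variable conductivity.

    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import spsolve

    # One backward-Euler conduction step: (I - dt*kappa*D2) T_new = T_old.
    # dt here is far above the explicit stability limit dx^2/(2*kappa),
    # yet the implicit solve remains stable.
    n = 200
    dx, dt, kappa = 1.0 / n, 1e-2, 1.0
    x = np.linspace(0.0, 1.0, n)
    T = np.exp(-((x - 0.5) / 0.05) ** 2)              # initial temperature bump
    D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
    A = identity(n) - dt * kappa * D2
    T = spsolve(A.tocsc(), T)                         # stable despite the large dt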

346

LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS  

DOE Green Energy (OSTI)

Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells, running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a “hydrogen economy.” The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.

James E. O'Brien

2010-08-01T23:59:59.000Z
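
The quoted laboratory figures allow a rough specific-energy check. The arithmetic below assumes the 5500 L/hr is at standard conditions and that the full 15 kW is the electric input; both are assumptions rather than statements from the record.

    power_kw    = 15.0
    h2_l_per_hr = 5500.0
    kwh_per_nm3 = power_kw / (h2_l_per_hr / 1000.0)    # electric energy per Nm^3
    print(f"{kwh_per_nm3:.2f} kWh(e) per Nm^3 of H2")  # ~2.7 kWh/Nm^3

The result sits below the roughly 3.5 kWh/Nm^3 higher-heating-value content of hydrogen because part of the dissociation energy is supplied as high-temperature process heat rather than electricity, which is the efficiency advantage the abstract describes.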

347

Reconfigurable middleware architectures for large scale sensor networks  

SciTech Connect

Wireless sensor networks, in an effort to be energy efficient, typically lack the high-level abstractions of advanced programming languages. Though stark, the dichotomy between these two paradigms can be overcome. The SENSIX software framework, described in this dissertation, uniquely integrates constraint-dominated wireless sensor networks with the flexibility of object-oriented programming models, without violating the principles of either. Though these two computing paradigms are contradictory in many ways, SENSIX bridges them to yield a dynamic middleware abstraction unifying low-level resource-aware task reconfiguration and high-level object recomposition.

Brennan, Sean M.

2010-03-01T23:59:59.000Z

348

Certification of version 1.2 of the PORFLO-3 code for the WHC scientific and engineering computational center  

SciTech Connect

Version 1.2 of the PORFLO-3 code has been migrated from the Hanford Cray computer to workstations in the WHC Scientific and Engineering Computational Center. The workstation-based configuration and acceptance testing are inherited from the Cray-based configuration. The purpose of this report is to document differences in the new configuration as compared to the parent Cray configuration, and to summarize some of the acceptance test results, which have shown that the migrated code is functioning correctly in the new environment.

Kline, N.W.

1994-12-29T23:59:59.000Z

349

Interactive exploration and analysis of large scale turbulent combustion using topology-based data segmentation  

E-Print Network (OSTI)

Large-scale simulations are increasingly being used to study complex scientific and engineering phenomena. As a result, advanced visualization and data analysis are also becoming an integral part of the scientific process. Often, a key step in extracting insight from these large simulations involves the definition, extraction, and evaluation of features in the space and time coordinates of the solution. However, in many applications these features involve a range of parameters and decisions that will affect the quality and direction of the analysis. Examples include particular level sets of a specific scalar field, or local inequalities between derived quantities. A critical step in the analysis is to understand how these arbitrary parameters/decisions impact the statistical properties of the features, since such a characterization will help to evaluate the conclusions of the analysis as a whole. We present a new topological framework that in a single pass extracts and encodes entire families of possible feature definitions as well as their statistical properties. For each time step we construct a hierarchical merge tree, a highly compact yet flexible feature representation. While this data structure is more than two orders of magnitude smaller than the raw simulation data, it allows us to extract a set of features for any given parameter selection in a post-processing step. Furthermore, we augment the trees with additional attributes, making it possible to gather a large number of useful global, local, and conditional statistics that would otherwise be extremely difficult to compile. We also use this representation to create tracking graphs that describe the temporal evolution of the features. Our system provides a linked-view interface to explore the time-evolution of the graph interactively alongside the segmentation, thus making it possible to perform extensive data analysis in a very efficient manner. We demonstrate our framework

Peer-timo Bremer; Gunther H. Weber; Julien Tierny; Valerio Pascucci; Marcus S. Day; John B. Bell

2011-01-01T23:59:59.000Z
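
A full merge tree encodes the features for every threshold at once; the sketch below extracts just one slice of that family, the connected superlevel-set components at a single threshold, with a small union-find pass. The grid, threshold, and 4-connectivity are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def superlevel_features(field, threshold):
        """Label connected components of {field >= threshold} on a 2D grid."""
        parent = {}

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]      # path compression
                a = parent[a]
            return a

        for idx in zip(*np.nonzero(field >= threshold)):
            parent[idx] = idx
        for (i, j) in list(parent):                # union 4-connected neighbors
            for nb in ((i - 1, j), (i, j - 1)):
                if nb in parent:
                    parent[find((i, j))] = find(nb)
        labels = {}
        for idx in parent:
            labels.setdefault(find(idx), []).append(idx)
        return labels                              # feature id -> grid cells

    field = np.random.rand(64, 64)
    print(len(superlevel_features(field, 0.9)), "features at threshold 0.9")

Repeating this for every threshold and recording where components merge is exactly the information the hierarchical merge tree stores in compact form.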

350

Large-Scale Continuous Subgraph Queries on Streams  

SciTech Connect

Graph pattern matching involves finding exact or approximate matches for a query subgraph in a larger graph. It has been studied extensively and has strong applications in domains such as computer vision, computational biology, social networks, security and finance. The problem of exact graph pattern matching is often described in terms of subgraph isomorphism, which is NP-complete. The exponential growth in streaming data from online social networks, news and video streams and the continual need for situational awareness motivates a solution for finding patterns in streaming updates. This is also the prime driver for the real-time analytics market. Development of incremental algorithms for graph pattern matching on streaming inputs to a continually evolving graph is a nascent area of research. Some of the challenges associated with this problem are the same as found in continuous query (CQ) evaluation on streaming databases. This paper reviews some of the representative work from the exhaustively researched field of CQ systems and identifies important semantics, constraints and architectural features that are also appropriate for HPC systems performing real-time graph analytics. For each of these features we present a brief discussion of the challenge encountered in the database realm and the approach to its solution, and state their relevance in a high-performance, streaming graph processing framework.

Choudhury, Sutanay; Holder, Larry; Chin, George; Feo, John T.

2011-11-30T23:59:59.000Z
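
The locality idea behind incremental continuous subgraph queries can be sketched with NetworkX: on each streaming edge, search only the subgraph induced by the new edge's neighborhood rather than the whole graph. The triangle query and the tiny edge stream are invented; this is a sketch of the general idea, not the system architecture the paper surveys.

    import networkx as nx
    from networkx.algorithms import isomorphism

    query = nx.cycle_graph(3)          # continuous query: find triangles
    G = nx.Graph()

    def on_new_edge(u, v):
        G.add_edge(u, v)
        hood = {u, v} | set(G.neighbors(u)) | set(G.neighbors(v))
        local = G.subgraph(hood)       # only re-check near the new edge
        return isomorphism.GraphMatcher(local, query).subgraph_is_isomorphic()

    for u, v in [(1, 2), (2, 3), (3, 1), (3, 4)]:
        if on_new_edge(u, v):
            print(f"pattern present after edge ({u}, {v})")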

351

Scientific Grand Challenges: Challenges in Climate Change Science and the Role of Computing at the Extreme Scale  

SciTech Connect

The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER) in partnership with the Office of Advanced Scientific Computing Research (ASCR) held a workshop on the challenges in climate change science and the role of computing at the extreme scale, November 6-7, 2008, in Bethesda, Maryland. At the workshop, participants identified the scientific challenges facing the field of climate science and outlined the research directions of highest priority that should be pursued to meet these challenges. Representatives from the national and international climate change research community as well as representatives from the high-performance computing community attended the workshop. This group represented a broad mix of expertise. Of the 99 participants, 6 were from international institutions. Before the workshop, each of the four panels prepared a white paper, which provided the starting place for the workshop discussions. These four panels of workshop attendees devoted their efforts to the following themes: Model Development and Integrated Assessment; Algorithms and Computational Environment; Decadal Predictability and Prediction; Data, Visualization, and Computing Productivity. The recommendations of the panels are summarized in the body of this report.

Khaleel, Mohammad A.; Johnson, Gary M.; Washington, Warren M.

2009-07-02T23:59:59.000Z

352

High performance threaded data streaming for large scale simulations  

E-Print Network (OSTI)

We have developed a threaded parallel data streaming approach using Logistical Networking (LN) to transfer multi-terabyte simulation data from computers at NERSC to our local analysis/visualization cluster, as the simulation executes, with negligible overhead. Data transfer experiments show that this concurrent data transfer approach is preferable to writing to local disk and later transferring the data for post-processing. Our algorithms are network aware, and can stream data at up to 97 Mb/s on a 100 Mb/s link from CA to NJ during a live simulation, using less than 5% CPU overhead at NERSC. This method is the first step in setting up a pipeline for simulation workflow and data management.

Viraj Bhat; Scott Klasky; Scott Atchley; Micah Beck; Doug Mccune; Manish Parashar

2004-01-01T23:59:59.000Z
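
The overlap of computation with transfer that yields the negligible overhead above is a producer-consumer pattern; a minimal sketch follows. The buffer size, step count, and the sleep standing in for a network send are assumptions, and the real system streams over Logistical Networking rather than a toy sink.

    import queue
    import threading
    import time

    buffers = queue.Queue(maxsize=8)      # bounded: producer blocks if the net lags

    def simulation(steps=20):
        for _ in range(steps):
            buffers.put(bytes(1 << 20))   # one step's output, handed off in RAM
        buffers.put(None)                 # end-of-stream marker

    def streamer():
        while (data := buffers.get()) is not None:
            time.sleep(0.01)              # stand-in for a network send

    threads = [threading.Thread(target=simulation),
               threading.Thread(target=streamer)]
    for t in threads: t.start()
    for t in threads: t.join()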

353

Advanced coal gasifier designs using large-scale simulations  

Science Conference Proceedings (OSTI)

Porting of the legacy code MFIX to a high performance computer (HPC) and the use of high resolution simulations for the design of a coal gasifier are described here. MFIX is based on a continuum multiphase flow model that considers gas and solids to form interpenetrating continua. Low resolution simulations of a commercial scale gasifier with a validated MFIX model revealed interesting physical phenomena with implications for the gasifier design, which prompted the study reported here. To be predictive, the simulations need to model the spatiotemporal variations in gas and solids volume fractions, velocities, and temperatures, with any associated phase change and chemical reactions. These processes occur at various time- and length-scales, requiring very high spatial resolution and a large number of iterations with small time-steps. We were able to perform perhaps the largest known simulations of gas-solids reacting flows, providing detailed information about the gas-solids flow structure and the pressure, temperature and species distribution in the gasifier. One key finding is the new features of the coal jet trajectory revealed with the high spatial resolution, which provide information on the accuracy of the lower resolution simulations. Methodologies for effectively combining high and low resolution simulations for design studies must be developed. From a computational science perspective, we found that global communication has to be reduced to achieve scalability to thousands of cores, hybrid parallelization is required to effectively utilize the multicore chips, and the wait time in the batch queue significantly increases the actual time-to-solution. From our experience, development is required in the following areas: efficient solvers for heterogeneous, massively parallel systems; data analysis tools to extract information from large data sets; and programming environments for easily porting legacy codes to HPC.

Syamlal, M [National Energy Technology Laboratory (NETL); Guenther, Chris [National Energy Technology Laboratory (NETL); Gel, Aytekin [Aeolus Research Inc.; Pannala, Sreekanth [ORNL

2009-01-01T23:59:59.000Z

354

RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND  

Science Conference Proceedings (OSTI)

Our project is a multi-institutional research effort that adopts an interplay of reliability, availability, and serviceability (RAS) aspects to solve resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC systems; reliability issues and mitigation techniques, including for GPGPU-based HPC systems; and HPC resilience runtime and tools.

Chokchai "Box" Leangsuksun

2011-05-31T23:59:59.000Z

355

Introduction to a Large-Scale Biogas Plant in a Dairy Farm  

Science Conference Proceedings (OSTI)

This article describes a large-scale biogas plant in a dairy farm located in the Tongzhou District of Beijing. It has a treatment capacity of 30 t of manure and 30 t of wastewater per day, a total of 60 t/d, with a residence time of 20 days. Input material ... Keywords: Large scale biogas plant, CHP, Biogas storage within digestor

Xiaolin Fan; Zifu Li; Tingting Wang; Fubin Yin; Xin Jin

2010-12-01T23:59:59.000Z
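
The quoted throughput and residence time imply the digester working volume directly, assuming the mixed feed has a density near 1 t/m^3 (an assumption, not a figure from the article):

    throughput_t_per_day = 60.0                    # 30 t manure + 30 t wastewater
    residence_days = 20.0
    feed_m3_per_day = throughput_t_per_day / 1.0   # assume ~1 t/m^3 density
    working_volume = feed_m3_per_day * residence_days
    print(f"approximate working volume: {working_volume:.0f} m^3")   # ~1200 m^3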

356

Level-of-detail rendering of large-scale irregular volume datasets using particles  

Science Conference Proceedings (OSTI)

This paper describes a level-of-detail rendering technique for large-scale irregular volume datasets. It is well known that the memory bandwidth consumed by visibility sorting becomes the limiting factor when carrying out volume rendering of such datasets. ... Keywords: large-scale irregular volume, level-of-detail, volume rendering of unstructured meshes

Takuma Kawamura; Naohisa Sakamoto; Koji Koyamada

2010-09-01T23:59:59.000Z

357

Structural fatigue assessment and management of large-scale port logistics equipments  

Science Conference Proceedings (OSTI)

With the growth of port enterprises, intensive research has increasingly addressed the structural fatigue assessment and management of port logistics equipment. However, relevant work on large-scale port logistics equipment is still ... Keywords: S-N curve, crack formation, crack propagation life, fatigue assessment, fracture mechanics, gantry cranes, large-scale port logistics equipment, structural safety assessment

Yuan Liu; Weijian Mi; Huiqiang Zheng

2008-11-01T23:59:59.000Z

358

A time management optimization framework for large-scale distributed hardware-in-the-loop simulation  

Science Conference Proceedings (OSTI)

Large-scale distributed HIL (hardware-in-the-loop) simulation is an important and indispensable method for testing and verifying complex engineering systems. An important necessary condition for realizing HIL simulation is that the speedup ratio of full-speed ... Keywords: hardware-in-the-loop simulation, large-scale distributed simulation, optimization framework, speedup ratio of simulation, time management

Wei Dong

2013-05-01T23:59:59.000Z

359

Large scale continuous visual event recognition using max-margin Hough transformation framework  

Science Conference Proceedings (OSTI)

In this paper we propose a novel method for continuous visual event recognition (CVER) on a large scale video dataset using max-margin Hough transformation framework. Due to high scalability, diverse real environmental state and wide scene variability ... Keywords: Continuous visual event, Event detection, Large scale, Max-margin Hough transform

Bhaskar Chakraborty; Jordi González; F. Xavier Roca

2013-10-01T23:59:59.000Z

360

A study of dynamic meta-learning for failure prediction in large-scale systems  

Science Conference Proceedings (OSTI)

Despite years of study on failure prediction, it remains an open problem, especially in large-scale systems composed of vast amount of components. In this paper, we present a dynamic meta-learning framework for failure prediction. It intends to not only ... Keywords: Blue Gene, Dynamic techniques, Failure prediction, Large-scale systems, Meta-learning

Zhiling Lan; Jiexing Gu; Ziming Zheng; Rajeev Thakur; Susan Coghlan

2010-06-01T23:59:59.000Z



361

Online job provisioning for large scale science experiments over an optical grid infrastructure  

Science Conference Proceedings (OSTI)

Many emerging science experiments require that the massive data generated by big instruments be accessible and analyzed by a large number of geographically dispersed users. Such large scale science experiments are enabled by an Optical Grid infrastructure ... Keywords: WDM network, grid, job provisioning, large scale science experiment, resource co-scheduling

Xiang Yu; Chunming Qiao; Dantong Yu

2009-04-01T23:59:59.000Z

362

The Roles of Mean Meridional Motions and Large-Scale Eddies in Zonally Averaged Circulations  

Science Conference Proceedings (OSTI)

A hierarchy of zonally averaged atmospheric models is used to study the role of mean meridional motions and large-scale eddies in determining the zonal climate. Five models are developed: a radiative-convective equilibrium model (no large-scale ...

Karl E. Taylor

1980-01-01T23:59:59.000Z

363

Large-Scale Integration of Deferrable Demand and Renewable Energy Sources  

E-Print Network (OSTI)

This work presents a model for assessing the impacts of the large-scale integration of renewable energy sources. In order to accurately assess the impacts of renewable energy integration and demand response integration ...

Oren, Shmuel S.

364

U.S. Energy Infrastructure Investment: Large-Scale Integrated Smart Grid  

E-Print Network (OSTI)

U.S. Energy Infrastructure Investment: Large-Scale Integrated Smart Grid Solutions with High Penetration of Renewable Resources and Dispersed Generation. Much attention is being given to smart grid development in the U.S. and around ...

365

Uncertainty quantification for large-scale ocean circulation predictions.  

SciTech Connect

Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits a discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO{sub 2} forcing. We develop a methodology that performs uncertainty quantification in the presence of limited data that have a discontinuous character. Our approach is two-fold. First, we detect the discontinuity location with a Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve location in the presence of arbitrarily distributed input parameter values. Second, we develop a spectral approach that relies on Polynomial Chaos (PC) expansions on each side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification and propagation. The methodology is tested on synthetic examples of discontinuous data with adjustable sharpness and structure.

Safta, Cosmin; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik

2010-09-01T23:59:59.000Z
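
The averaged-PC idea, one expansion per side of the discontinuity, can be shown in one dimension. The sketch below assumes the discontinuity location is already known (the paper infers it by Bayesian methods) and uses Legendre fits, the natural PC basis for uniform inputs; the synthetic model and polynomial degrees are invented.

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = np.where(x < 0.3, np.sin(2 * x), 2.0 + 0.5 * x)  # discontinuous data
    split = 0.3                                          # assumed known here

    left = x < split
    c_left = legendre.legfit(x[left], y[left], deg=4)    # PC fit, left side
    c_right = legendre.legfit(x[~left], y[~left], deg=4) # PC fit, right side

    def surrogate(xq):
        xq = np.asarray(xq, dtype=float)
        return np.where(xq < split,
                        legendre.legval(xq, c_left),
                        legendre.legval(xq, c_right))

    print(surrogate([0.0, 0.5]))   # smooth on each side, jump preserved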

366

Large-scale three-dimensional geothermal reservoir simulation on PCs  

DOE Green Energy (OSTI)

TOUGH2, Lawrence Berkeley Laboratory's general purpose simulator for mass and heat flow and transport, was enhanced with the addition of a set of preconditioned conjugate gradient solvers and ported to a PC. The code was applied to a number of large 3-D geothermal reservoir problems with up to 10,000 grid blocks. Four test problems were investigated. The first two involved a single-phase liquid system, and a two-phase system with regular Cartesian grids. The last two involved a two-phase field problem with irregular gridding with production from and injection into a single porosity reservoir, and a fractured reservoir. The code modifications to TOUGH2 and its setup in the PC environment are described. Algorithms suitable for solving large matrices that are generally non-symmetric and non-positive definite are reviewed. Computational work per time step and CPU time requirements are reported as a function of problem size. The excessive execution time and storage requirements of the direct solver in TOUGH2 limit the size of manageable 3-D reservoir problems to a few hundred grid blocks. The conjugate gradient solvers significantly reduced the execution time and storage requirements, making possible the execution of considerably larger problems (10,000+ grid blocks). It is concluded that current PCs provide an economical platform for running large-scale geothermal field simulations that just a few years ago could only be executed on mainframe computers.

Antunez, E.; Moridis, G.; Pruess, K.

1994-01-01T23:59:59.000Z
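
The kind of preconditioned iterative solve that replaced the direct solver is easy to demonstrate with SciPy. Because the abstract notes the matrices are generally nonsymmetric, the sketch uses Bi-CGSTAB with a Jacobi preconditioner on a made-up diagonally dominant model problem; it is not the solver package the report describes.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import LinearOperator, bicgstab

    n = 10_000
    A = diags([-1.0, 4.0, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    inv_diag = 1.0 / A.diagonal()
    M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)  # Jacobi

    x, info = bicgstab(A, b, M=M)
    print("converged" if info == 0 else f"info={info}",
          "| residual:", np.linalg.norm(A @ x - b))

Memory stays O(n) for the matrix diagonals plus a few work vectors, versus the fill-in of a direct factorization, which is the storage saving the abstract reports.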

367

Large-scale functional models of visual cortex for remote sensing  

SciTech Connect

Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring {approx}1 petaflop of computation. In a year, the retina delivers {approx}1 petapixel to the brain, leading to massive opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

Brumby, Steven P [Los Alamos National Laboratory; Kenyon, Garrett [Los Alamos National Laboratory; Rasmussen, Craig E [Los Alamos National Laboratory; Swaminarayan, Sriram [Los Alamos National Laboratory; Bettencourt, Luis [Los Alamos National Laboratory; Landecker, Will [PORTLAND STATE UNIV.

2009-01-01T23:59:59.000Z

368

Large-scale three-dimensional geothermal reservoir simulation on PCs  

Science Conference Proceedings (OSTI)

TOUGH2, Lawrence Berkeley Laboratory's general purpose simulator for mass and heat flow and transport, was enhanced with the addition of a set of preconditioned conjugate gradient solvers and ported to a PC. The code was applied to a number of large 3-D geothermal reservoir problems with up to 10,000 grid blocks. Four test problems were investigated. The first two involved a single-phase liquid system, and a two-phase system with regular Cartesian grids. The last two involved a two-phase field problem with irregular gridding with production from and injection into a single porosity reservoir, and a fractured reservoir. The code modifications to TOUGH2 and its setup in the PC environment are described. Algorithms suitable for solving large matrices that are generally non-symmetric and non-positive definite are reviewed. Computational work per time step and CPU time requirements are reported as a function of problem size. The excessive execution time and storage requirements of the direct solver in TOUGH2 limit the size of manageable 3-D reservoir problems to a few hundred grid blocks. The conjugate gradient solvers significantly reduced the execution time and storage requirements, making possible the execution of considerably larger problems (10,000+ grid blocks). It is concluded that current PCs provide an economical platform for running large-scale geothermal field simulations that just a few years ago could only be executed on mainframe computers.

Antunez, Emilio; Moridis, George; Pruess, Karsten

1994-01-20T23:59:59.000Z

369

Large-scale Renewable Energy Projects (Larger than 10 MWs) | Department of  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Large-scale Renewable Energy Projects (Larger than 10 MWs). October 7, 2013. Renewable energy projects larger than 10 megawatts (MW) are complex and typically require private-sector financing. The Federal Energy Management Program (FEMP) developed a guide to help Federal agencies, and the developers and financiers that work with them, to successfully install these projects at Federal facilities. The Large-Scale Renewable Energy Guide: Developing Renewable Energy Projects Larger than 10 MWs at Federal Facilities: A Practical Guide to Getting Large-Scale Renewable Energy Projects Financed with Private Capital provides a framework to allow the Federal Government, private developers, and financiers to work in a

370

Hungary-Employment Impacts of a Large-Scale Deep Building Retrofit  

Open Energy Info (EERE)

Hungary-Employment Impacts of a Large-Scale Deep Building Retrofit Programme. Agency/Company/Organization: European Climate Foundation. Sector: Energy. Focus Area: Energy Efficiency, Buildings, Building Energy Efficiency. Topics: Co-benefits assessment, Background analysis. Resource Type: Publications. Website: http://3csep.ceu.hu/sites/defa Country: Hungary. UN Region: Eastern Europe. "The goal of the present research was to gauge the net employment impacts of a large-scale deep building energy-efficiency renovation programme in

371

Strategies to Finance Large-Scale Deployment of Renewable Energy Projects:  

Open Energy Info (EERE)

Strategies to Finance Large-Scale Deployment of Renewable Energy Projects: An Economic Development and Infrastructure Approach. Agency/Company/Organization: International Energy Agency (IEA). Sector: Energy. Focus Area: Renewable Energy. Topics: Finance, Implementation, Policies/deployment programs. Resource Type: Publications. Website: iea-retd.org/archives/publications/finance-re Cost: Free. Language: English.

372

Advanced Scientific Computing Research

E-Print Network (OSTI)

on integrating new software for the science applications which researchers run on high performance computing platforms. One of the key challenges in high performance computing is to ensure that the software which

373

The FES Scientific Discovery through Advanced Computing (SciDAC) Program  

E-Print Network (OSTI)

and researchers are expected to be leaders in the efficient and productive use of High Performance Computing

374

Computing for Perturbative QCD - A Snowmass White Paper  

E-Print Network (OSTI)

We present a study on high-performance computing and large-scale distributed computing for perturbative QCD calculations.

Christian Bauer; Zvi Bern; Radja Boughezal; John Campbell; Neil Christensen; Lance Dixon; Thomas Gehrmann; Stefan Hoeche; Junichi Kanzaki; Alexander Mitov; Pavel Nadolsky; Fredrick Olness; Michael Peskin; Frank Petriello; Stefano Pozzorini; Laura Reina; Frank Siegert; Doreen Wackeroth; Jonathan Walsh; Ciaran Williams; Markus Wobisch

2013-09-13T23:59:59.000Z

375

EcoG: A Power-Efficient GPU Cluster Architecture for Scientific Computing  

Science Conference Proceedings (OSTI)

Researchers built the EcoG GPU-based cluster to show that a system can be designed around GPU computing and still be power efficient.

Mike Showerman; Jeremy Enos; Craig Steffen; Sean Treichler; William Gropp; Wen-mei W. Hwu

2011-01-01T23:59:59.000Z

376

Autonomy-Oriented Computing (AOC): The nature and implications of a paradigm for self-organized computing  

E-Print Network (OSTI)

Facing the increasing needs for large-scale, robust, adaptive, and distributed/decentralized computing capabilities [1, 5] from such fields as Web intelligence, scientific and social computing, Internet commerce, and pervasive computing, an unconventional bottom-up paradigm, based on the notions of Autonomy-Oriented Computing (AOC) and self-organization in open complex systems, offers new opportunities for developing promising architectures, methods, and technologies. The goal of this paper is to describe the key concepts in this computing paradigm, and furthermore, discuss some of the fundamental principles and mechanisms for obtaining self-organized computing solutions.

Jiming Liu

2008-01-01T23:59:59.000Z

377

Linearly Scaling 3D Fragment Method for Large-Scale Electronic Structure Calculations  

Science Conference Proceedings (OSTI)

We present a new linearly scaling three-dimensional fragment (LS3DF) method for large scale ab initio electronic structure calculations. LS3DF is based on a divide-and-conquer approach, which incorporates a novel patching scheme that effectively cancels out the artificial boundary effects due to the subdivision of the system. As a consequence, the LS3DF program yields essentially the same results as direct density functional theory (DFT) calculations. The fragments of the LS3DF algorithm can be calculated separately with different groups of processors. This leads to almost perfect parallelization on tens of thousands of processors. After code optimization, we were able to achieve 35.1 Tflop/s, which is 39% of the theoretical speed on 17,280 Cray XT4 processor cores. Our 13,824-atom ZnTeO alloy calculation runs 400 times faster than a direct DFT calculation, even presuming that the direct DFT calculation can scale well up to 17,280 processor cores. These results demonstrate the applicability of the LS3DF method to material simulations, the advantage of using linearly scaling algorithms over conventional O(N{sup 3}) methods, and the potential for petascale computation using the LS3DF method.

Wang, Lin-Wang; Lee, Byounghak; Shan, Hongzhang; Zhao, Zhengji; Meza, Juan; Strohmaier, Erich; Bailey, David H.

2008-07-01T23:59:59.000Z

378

Linear scaling 3D fragment method for large-scale electronic structure calculations  

Science Conference Proceedings (OSTI)

We present a new linearly scaling three-dimensional fragment (LS3DF) method for large scale ab initio electronic structure calculations. LS3DF is based on a divide-and-conquer approach, which incorporates a novel patching scheme that effectively cancels out the artificial boundary effects due to the subdivision of the system. As a consequence, the LS3DF program yields essentially the same results as direct density functional theory (DFT) calculations. The fragments of the LS3DF algorithm can be calculated separately with different groups of processors. This leads to almost perfect parallelization on tens of thousands of processors. After code optimization, we were able to achieve 35.1 Tflop/s, which is 39% of the theoretical speed on 17,280 Cray XT4 processor cores. Our 13,824-atom ZnTeO alloy calculation runs 400 times faster than a direct DFT calculation, even presuming that the direct DFT calculation can scale well up to 17,280 processor cores. These results demonstrate the applicability of the LS3DF method to material simulations, the advantage of using linearly scaling algorithms over conventional O(N{sup 3}) methods, and the potential for petascale computation using the LS3DF method.

Wang, Lin-Wang; Wang, Lin-Wang; Lee, Byounghak; Shan, HongZhang; Zhao, Zhengji; Meza, Juan; Strohmaier, Erich; Bailey, David

2008-07-11T23:59:59.000Z

379

Creating science-driven computer architecture: A new path to scientific leadership  

SciTech Connect

This document proposes a multi-site strategy for creating a new class of computing capability for the U.S. by undertaking the research and development necessary to build supercomputers optimized for science in partnership with the American computer industry.

McCurdy, C. William; Stevens, Rick; Simon, Horst; Kramer, William; Bailey, David; Johnston, William; Catlett, Charlie; Lusk, Rusty; Morgan, Thomas; Meza, Juan; Banda, Michael; Leighton, James; Hules, John

2002-10-14T23:59:59.000Z

380

Estimating Large-Scale Precipitation Minus Evapotranspiration from GRACE Satellite Gravity Measurements  

Science Conference Proceedings (OSTI)

Currently, observations of key components of the earth's large-scale water and energy budgets are sparse or even nonexistent. One key component, precipitation minus evapotranspiration (P - ET), remains largely unmeasured due to the absence of ...

Sean Swenson; John Wahr

2006-04-01T23:59:59.000Z



381

Observed Large-Scale Structures and Diabatic Heating and Drying Profiles during TWP-ICE  

Science Conference Proceedings (OSTI)

This study documents the characteristics of the large-scale structures and diabatic heating and drying profiles observed during the Tropical Warm Pool–International Cloud Experiment (TWP-ICE), which was conducted in January–February 2006 in ...

Shaocheng Xie; Timothy Hume; Christian Jakob; Stephen A. Klein; Renata B. McCoy; Minghua Zhang

2010-01-01T23:59:59.000Z

382

Horizontal Structure and Seasonality of Large-Scale Circulations Associated with Submonthly Tropical Convection  

Science Conference Proceedings (OSTI)

The relationship between deep tropical convection and large-scale atmospheric circulation in the 6–30-day period range is examined. Regression relationships between filtered outgoing longwave radiation at various locations in the Tropics and 200- ...

George N. Kiladis; Klaus M. Weickmann

1997-09-01T23:59:59.000Z

383

Conjugate-Gradient Methods for Large-Scale Minimization in Meteorology  

Science Conference Proceedings (OSTI)

During the last few years new meteorological variational analysis methods have evolved, requiring large-scale minimization of a nonlinear objective function described in terms of discrete variables. The conjugate-gradient method was found to ...

I. M. Navon; David M. Legler

1987-08-01T23:59:59.000Z

384

How Well Do Large-Scale Models Reproduce Regional Hydrological Extremes in Europe?  

Science Conference Proceedings (OSTI)

This paper presents a new methodology for assessing the ability of gridded hydrological models to reproduce large-scale hydrological high and low flow events (as a proxy for hydrological extremes) as described by catalogues of historical droughts [...

Christel Prudhomme; Simon Parry; Jamie Hannaford; Douglas B. Clark; Stefan Hagemann; Frank Voss

2011-12-01T23:59:59.000Z

385

Energy Department Awards $66.7 Million for Large-Scale Carbon Sequestration  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Department Awards $66.7 Million for Large-Scale Carbon Sequestration Project. December 18, 2007. Regional Partner to Demonstrate Safe and Permanent Storage of One Million Tons of CO2 at Illinois Site. WASHINGTON, DC - Following closely on the heels of three recent awards through the Department of Energy's (DOE) Regional Carbon Sequestration Partnership Program, DOE today awarded $66.7 million to the Midwest Geological Sequestration Consortium (MGSC) for the Department's fourth large-scale carbon sequestration project. The Partnership led by the Illinois State Geological Survey will conduct large volume tests in the Illinois Basin to demonstrate the ability of a geologic formation to

386

DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestration  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

DOE Awards $126.6 Million for Two More Large-Scale Carbon Sequestration Projects. May 6, 2008. Projects in California and Ohio Join Four Others in Effort to Drastically Reduce Greenhouse Gas Emissions. WASHINGTON, DC - The U.S. Department of Energy (DOE) today announced awards of more than $126.6 million to the West Coast Regional Carbon Sequestration Partnership (WESTCARB) and the Midwest Regional Carbon Sequestration Partnership (MRCSP) for the Department's fifth and sixth large-scale carbon sequestration projects. These industry partnerships, which are part of DOE's Regional Carbon Sequestration Partnership, will conduct large volume tests in California and Ohio to demonstrate the ability of a geologic

387

First U.S. Large-Scale CO2 Storage Project Advances | Department of Energy  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

First U.S. Large-Scale CO2 Storage Project Advances. April 6, 2009. Washington, DC - Drilling nears completion for the first large-scale carbon dioxide (CO2) injection well in the United States for CO2 sequestration. This project will be used to demonstrate that CO2 emitted from industrial sources - such as coal-fired power plants - can be stored in deep geologic formations to mitigate large quantities of greenhouse gas emissions. The Archer Daniels Midland Company (ADM) hosted an event April 6 for a CO2 injection test at their Decatur, Ill. ethanol facility. The injection well is being drilled into the Mount Simon Sandstone to a depth more than a mile beneath the surface. This is the first drilling into the sandstone geology

388

Large-Scale Residential Energy Efficiency Programs Based on CFLs | Open  

Open Energy Info (EERE)

Large-Scale Residential Energy Efficiency Programs Based on CFLs. Agency/Company/Organization: Energy Sector Management Assistance Program of the World Bank. Sector: Energy. Focus Area: Energy Efficiency, Buildings. Topics: Implementation, Policies/deployment programs. Website: www.esmap.org/filez/pubs/216201021421_CFL_Toolkit_Web_Version_021610_R Overview: "The World Bank Group and its Energy Sector Management Assistance Programme (ESMAP) have produced a toolkit for efficient lighting programmes, based on compact fluorescent lamps, that compiles and shares operational (design,

389

ARM - PI Product - Large Scale Ice Water Path and 3-D Ice Water Content  

NLE Websites -- All DOE Office Websites (Extended Search)

PI Product: Large Scale Ice Water Path and 3-D Ice Water Content. Site(s): SGP, TWP. General Description: Cloud ice water concentration is one of the most important, yet poorly observed, cloud properties. Developing physical parameterizations used in general circulation models through single-column modeling is one of the key foci of the ARM program. In addition to the vertical profiles of temperature, water vapor and condensed water at the model grids, large-scale horizontal advective tendencies of these variables are also required as forcing terms in the single-column models. Observed horizontal advection of condensed water has not been available because the

390

The Nonlinear Response of the Atmosphere to Large-Scale Mechanical and Thermal Forcing  

Science Conference Proceedings (OSTI)

The subject of large-scale mountain waves is reviewed briefly. Existing mountain wave theory based on a linear system is shown to give an inadequate description of the balance of angular momentum. The response of the atmosphere to mechanical ...

Guo-Xiong Wu

1984-08-01T23:59:59.000Z

391

Comparing Large-Scale Hydrological Model Simulations to Observed Runoff Percentiles in Europe  

Science Conference Proceedings (OSTI)

Large-scale hydrological models describing the terrestrial water balance at continental and global scales are increasingly being used in earth system modeling and climate impact assessments. However, because of incomplete process understanding and ...

Lukas Gudmundsson; Lena M. Tallaksen; Kerstin Stahl; Douglas B. Clark; Egon Dumont; Stefan Hagemann; Nathalie Bertrand; Dieter Gerten; Jens Heinke; Naota Hanasaki; Frank Voss; Sujan Koirala

2012-04-01T23:59:59.000Z

392

Impact of Large Scale Energy Efficiency Programs On Consumer Tariffs and Utility Finances in India  

E-Print Network (OSTI)

... and are added to the utility's rate base. Large-scale EE ... (2009a, 2009b, 2009c). ... utility's rate base, and the utility ... to the grid at a higher rate if the utility does not face ...

Abhyankar, Nikit

2011-01-01T23:59:59.000Z

393

Sensitivity of Tropical Convection to Sea Surface Temperature in the Absence of Large-Scale Flow  

Science Conference Proceedings (OSTI)

The response of convection to changing sea surface temperature (SST) in the absence of large-scale flow is examined, using a three-dimensional cloud resolving model. The model includes a five-category bulk microphysical scheme representing snow, ...

Adrian M. Tompkins; George C. Craig

1999-02-01T23:59:59.000Z

394

Potential Climatic Impacts and Reliability of Very Large-Scale Wind Farms  

E-Print Network (OSTI)

Meeting future world energy needs while addressing climate change requires large-scale deployment of low or zero greenhouse gas (GHG) emission technologies such as wind energy. The widespread availability of wind power has ...

Wang, Chien

395

A case study in meta-simulation design and performance analysis for large-scale networks  

Science Conference Proceedings (OSTI)

Simulation and emulation techniques are fundamental to aid the process of large-scale protocol design and network operations. However, the results from these techniques are often view with a great deal of skepticism from the networking community. Criticisms ...

David Bauer; Garrett Yaun; Christopher D. Carothers; Murat Yuksel; Shivkumar Kalyanaraman

2004-12-01T23:59:59.000Z

396

Tropical Instability Wave Variability in the Pacific and Its Relation to Large-Scale Currents  

Science Conference Proceedings (OSTI)

Shipboard acoustic Doppler current profiler (ADCP)-derived zonal currents from 170° to 110°W are assembled into composite seasonal and ENSO cycles to produce detailed representations of large-scale ocean flow regimes that favor tropical ...

Eric S. Johnson; Jeffrey A. Proehl

2004-10-01T23:59:59.000Z

397

On the Completeness of Multi-Variate Optimum Interpolation for Large-Scale Meteorological Analysis  

Science Conference Proceedings (OSTI)

The Baer-Tribbia nonlinear modal initialization method implies that large-scale meteorological analyses should focus on analysis of slow mode fields. An idealized multi-variate optimum interpolation analysis is shown to produce grid point results ...

Norman A. Phillips

1982-10-01T23:59:59.000Z

398

A Hybrid Kalman Filter Algorithm for Large-Scale Atmospheric Chemistry Data Assimilation  

Science Conference Proceedings (OSTI)

In the past, a number of algorithms have been introduced to solve data assimilation problems for large-scale applications. Here, several Kalman filters, coupled to the European Operational Smog (EUROS) atmospheric chemistry transport model, are ...

R. G. Hanea; G. J. M. Velders; A. J. Segers; M. Verlaan; A. W. Heemink

2007-01-01T23:59:59.000Z
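
The building block shared by the filters compared in the paper is the textbook predict/update cycle; the large-scale variants differ mainly in how they approximate the covariance P (e.g., reduced-rank factorizations). The matrices below are invented for a toy two-state example, not taken from the EUROS application.

    import numpy as np

    def kalman_step(x, P, F, Q, H, R, z):
        """One predict/update cycle of the standard Kalman filter."""
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R                           # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (z - H @ x)                       # update state
        P = (np.eye(len(x)) - K @ H) @ P              # update covariance
        return x, P

    F = np.array([[1.0, 1.0], [0.0, 1.0]])            # toy dynamics
    H = np.array([[1.0, 0.0]])                        # observe first state
    Q, R = 0.01 * np.eye(2), np.array([[0.5]])
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, F, Q, H, R, z=np.array([1.2]))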

399

Energy Transmission by Barotropic Rossby Waves across Large-Scale Topography  

Science Conference Proceedings (OSTI)

An analytical study investigates the energy transmission by free, barotropic, linear Rossby waves across a large scale bottom topography when topographic and beta-effects have the same order of magnitude. In open ocean regions which are not ...

Bernard Barnier

1984-02-01T23:59:59.000Z

400

Large-Scale Atmospheric Forcing by Southeast Pacific Boundary Layer Clouds: A Regional Model Study  

Science Conference Proceedings (OSTI)

A regional model is used to study the radiative effect of boundary layer clouds over the southeast Pacific on large-scale atmosphere circulation during August–October 1999. With the standard settings, the model simulates reasonably well the large-...

Yuqing Wang; Shang-Ping Xie; Bin Wang; Haiming Xu

2005-04-01T23:59:59.000Z



401

Aerosols in Amazonia: Natural biogenic particles and large scale biomass burning impacts  

Science Conference Proceedings (OSTI)

The Large Scale Biosphere Atmosphere Experiment in Amazonia (LBA) is a long term (20 years) research effort aimed at understanding the functioning of the Amazonian ecosystem. In particular

2013-01-01T23:59:59.000Z

402

On the Identification of the Large-Scale Properties of Tropical Convection using Cloud Regimes  

Science Conference Proceedings (OSTI)

The use of cloud regimes in identifying tropical convection and the associated large-scale atmospheric properties is investigated. The regimes are derived by applying cluster analysis to satellite retrievals of daytime-averaged frequency ...

Jackson Tan; Christian Jakob; Todd P. Lane

403

Large-Scale Vertical and Horizontal Circulation in the North Atlantic Ocean  

Science Conference Proceedings (OSTI)

Observations of large-scale hydrography, air–sea forcing, and regional circulation from numerous studies are combined by inverse methods to determine the basin-scale circulation, average diapycnal mixing, and adjustments to air–sea forcing of the ...

Rick Lumpkin; Kevin Speer

2003-09-01T23:59:59.000Z

404

National large-scale Urban True Orthophoto Mapping and its standard initiative  

Science Conference Proceedings (OSTI)

This document highlights the current project activities, published and unpublished research contributions, successes and challenges from March 2005 through December 2005, and plans for the coming years on the project entitled "National Large-Scale ...

Guoqing Zhou; Wenhan Xie; Susan Benjamin; Robin G. Fegeas; John Simmers; Hap Cluff; Y. Lei; Jeanne Foust

2006-05-01T23:59:59.000Z

405

Large Scale Soft X-ray Loops And Their Magnetic Chirality In Both Hemispheres  

E-Print Network (OSTI)

The magnetic chirality in the solar atmosphere has been studied based on soft X-ray and magnetic field observations. It is found that some large-scale twisted soft X-ray loop systems persist for several months in the solar atmosphere, before the disappearance of the corresponding background large-scale magnetic field. This provides observational evidence of the helicity of the large-scale magnetic field in the solar atmosphere, and of its reversal relative to the hemispheric helicity rule in both hemispheres over solar cycles. The transfer of magnetic helicity from the subatmosphere is consistent with the formation of large-scale twisted soft X-ray loops in both solar hemispheres.

Zhang, Hongqi; Gao, Yu; Su, Jiangtao; Sokoloff, D D; Kuzanyan, K

2010-01-01T23:59:59.000Z

406

Impact of Large Scale Energy Efficiency Programs On Consumer Tariffs and Utility Finances in India  

E-Print Network (OSTI)

... as sources of low-cost baseload power. 4.6.3 Large-Scale EE ... b is the variable cost of baseload power purchases, and L is ... but simply avoids baseload power purchases. Utilities that ...

Abhyankar, Nikit

2011-01-01T23:59:59.000Z

407

Summertime Precipitation Variability over South America: Role of the Large-Scale Circulation  

Science Conference Proceedings (OSTI)

The observed large-scale circulation mechanisms associated with summertime precipitation variability over South America are investigated. Particular attention is paid to the Altiplano where a close relationship has been observed between rainfall ...

J. D. Lenters; K. H. Cook

1999-03-01T23:59:59.000Z

408

Sensitivity Study of Regional Climate Model Simulations to Large-Scale Nudging Parameters  

Science Conference Proceedings (OSTI)

Previous studies with nested regional climate models (RCMs) have shown that large-scale spectral nudging (SN) seems to be a powerful method to correct RCMs’ weaknesses such as internal variability, intermittent divergence in phase space (IDPS), ...

Adelina Alexandru; Ramon de Elia; René Laprise; Leo Separovic; Sébastien Biner

2009-05-01T23:59:59.000Z

409

Interannual Variability of Tropical Cyclones in the Australian Region: Role of Large-Scale Environment  

Science Conference Proceedings (OSTI)

This study investigates the role of large-scale environmental factors, notably sea surface temperature (SST), low-level relative vorticity, and deep-tropospheric vertical wind shear, in the interannual variability of November–April tropical ...

Hamish A. Ramsay; Lance M. Leslie; Peter J. Lamb; Michael B. Richman; Mark Leplastrier

2008-03-01T23:59:59.000Z

410

A steady-state L-mode tokamak fusion reactor : large scale and minimum scale.  

E-Print Network (OSTI)

We perform extensive analysis on the physics of L-mode tokamak fusion reactors to identify (1) a favorable parameter space for a large scale steady-state reactor ...

Reed, Mark W. (Mark Wilbert)

2010-01-01T23:59:59.000Z

411

In-situ sampling of a large-scale particle simulation for interactive visualization and analysis  

Science Conference Proceedings (OSTI)

We describe a simulation-time random sampling of a large-scale particle simulation, the RoadRunner Universe MC3 cosmological simulation, for interactive post-analysis and visualization. Simulation data generation rates will continue to be ...

J. Woodring; J. Ahrens; J. Figg; J. Wendelberger; S. Habib; K. Heitmann

2011-06-01T23:59:59.000Z

412

LARGE SCALE PERMEABILITY TEST OF THE GRANITE IN THE STRIPA MINE AND THERMAL CONDUCTIVITY TEST  

E-Print Network (OSTI)

No. 2 LARGE SCALE PERMEABILITY TEST OF THE GRANITE IN THE STRIPA MINE AND THERMAL CONDUCTIVITY TEST ... Lars Lundstrom and Hakan ... SUMMARY REPORT ... Background ... TEST SITE ... Layout of test places

Lundstrom, L.

2011-01-01T23:59:59.000Z

413

Anisotropic mesoscopic traffic simulation approach to support large-scale traffic and logistic modeling and analysis  

Science Conference Proceedings (OSTI)

Large-scale traffic and transportation logistics analysis requires a realistic, dynamic depiction of network traffic conditions. In the past decades, vehicular traffic simulation approaches have been increasingly developed and applied to describe ...

Ye Tian; Yi-Chang Chiu

2011-12-01T23:59:59.000Z

414

The Dynamics of Large-Scale Cyclogenesis over the North Pacific Ocean  

Science Conference Proceedings (OSTI)

Earlier studies of persistent large-scale flow anomalies have been extended, with the aim of identifying the primary mechanisms for persistent anomaly development. In this study the focus is on wintertime cases of persistent cyclonic flow ...

Robert X. Black; Randall M. Dole

1993-02-01T23:59:59.000Z

415

On the Ocean’s Large-Scale Circulation near the Limit of No Vertical Mixing  

Science Conference Proceedings (OSTI)

By convention, the ocean’s large-scale circulation is assumed to be a thermohaline overturning driven by the addition and extraction of buoyancy at the surface and vertical mixing in the interior. Previous work suggests that the overturning ...

J. R. Toggweiler; B. Samuels

1998-09-01T23:59:59.000Z

416

Some Correlations between the Large-Scale Meridional Eddy Momentum Transport and Zonal Mean Quantities  

Science Conference Proceedings (OSTI)

An empirical study has been made which compares the large-scale meridional eddy momentum transport with some selected zonal mean quantities by calculating correlations between them as a function of time lag and latitude. The basic dataset was the ...

Anne Leach

1984-01-01T23:59:59.000Z

417

Influence of Forced Large-Scale Atmospheric Patterns on Surface Air Temperature in China  

Science Conference Proceedings (OSTI)

The seasonality of the influence of the tropical Pacific sea surface temperature (SST)-forced large-scale atmospheric patterns on the surface air temperature (SAT) over China is investigated for the period from 1969 to 2001. Both observations and ...

Xiaojing Jia; Hai Lin

2011-03-01T23:59:59.000Z

418

Explosive Cyclogenesis and Large-Scale Circulation Changes: Implications for Atmospheric Blocking  

Science Conference Proceedings (OSTI)

Large-scale circulation changes attending explosive surface cyclogenesis are quantitatively examined in two cases selected from recent winter seasons. Both cases feature a rapidly deepening surface cyclone over the western Atlantic Ocean, but ...

Stephen J. Colucci

1985-12-01T23:59:59.000Z

419

Tropical Cyclone Track Characteristics as a Function of Large-Scale Circulation Anomalies  

Science Conference Proceedings (OSTI)

Factors that contribute to intraseasonal variability in western North Pacific tropical cyclone track types are investigated. It is hypothesized that the 700-mb large-scale circulation can affect tropical cyclone track characteristics by enhancing ...

Patrick A. Harr; Russell L. Elsberry

1991-06-01T23:59:59.000Z

420

Agent Based Dynamic Service Synthesis in Large-Scale Open Environments: Experiences from the Agentcities Testbed  

Science Conference Proceedings (OSTI)

The notion of autonomous agents populating large-scale open environments, such as the public Internet, that are able to dynamically discover one another, interact and synthesise new software applications or results has become one of the key technology ...

Steven Willmott; Simon Thompson; David Bonnefoy; Patricia Charlton; Ion Constantinescu; Jonathan Dale; Tianning Zhang

2004-07-01T23:59:59.000Z

421

Large-Scale Heat and Moisture Budgets over the ASTEX Region  

Science Conference Proceedings (OSTI)

Rawinsonde data collected from the Atlantic Stratocumulus Transition Experiment (ASTEX) were used to investigate the mean and temporal characteristics of large-scale heat and moisture budgets for a 2-week period in June 1992. During this period a ...

Paul E. Ciesielski; Wayne H. Schubert; Richard H. Johnson

1999-09-01T23:59:59.000Z

422

Dependence of Large-Scale Precipitation Climatologies on Temporal and Spatial Sampling  

Science Conference Proceedings (OSTI)

Large-scale observed precipitation climatologies are needed for a variety of purposes in the fields of climate and environmental modeling. Although new satellite-derived precipitation estimates offer the prospect of near-global climatologies ...

Mike Hulme; Mark New

1997-05-01T23:59:59.000Z

423

Aerosol-Induced Large-Scale Variability in Precipitation over the Tropical Atlantic  

Science Conference Proceedings (OSTI)

Multiyear satellite observations are used to document a relationship between the large-scale variability in precipitation over the tropical Atlantic and aerosol traced to African sources. During boreal winter and spring there is a significant ...

Jingfeng Huang; Chidong Zhang; Joseph M. Prospero

2009-10-01T23:59:59.000Z

424

An Idealized Prototype for Large-Scale Land–Atmosphere Coupling  

Science Conference Proceedings (OSTI)

A process-based, semianalytic prototype model for understanding large-scale land–atmosphere coupling is developed here. The metric for quantifying the coupling is the sensitivity of precipitation P to soil moisture W, dP/dW. For a range of prototype ...

Benjamin R. Lintner; Pierre Gentine; Kirsten L. Findell; Fabio D’Andrea; Adam H. Sobel; Guido D. Salvucci

2013-04-01T23:59:59.000Z

425

Built-in data-flow integration testing in large-scale component-based systems  

Science Conference Proceedings (OSTI)

Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from ...

Éric Piel; Alberto Gonzalez-Sanchez; Hans-Gerhard Gross

2010-11-01T23:59:59.000Z

426

Large-Scale Vertical Eddy Diffusion in the Main Pycnocline of the Central North Pacific  

Science Conference Proceedings (OSTI)

Indirect procedures are used to estimate the latitudinal distribution of the large-scale vertical eddy diffusivity coefficient in the main pycnocline from the interannual change in the T, S structure of the water column in the central midlatitude ...

Warren White; Robert Bernstein

1981-04-01T23:59:59.000Z

427

The MicroGrid: A scientific tool for modeling Computational Grids  

Science Conference Proceedings (OSTI)

The complexity and dynamic nature of the Internet (and the emerging Computational Grid) demand that middleware and applications adapt to the changes in configuration and availability of resources. However, to the best of our knowledge there are no simulation ...

H. J. Song; X. Liu; D. Jakobsen; R. Bhagwan; X. Zhang; K. Taura; A. Chien

2000-08-01T23:59:59.000Z

428

Attached Scientific Processors for Chemical Computations: A Report to the Chemistry Community  

E-Print Network (OSTI)

emitter-coupled logic), TTL (transistor-transistor logic), ... of the switching speeds of ECL, TTL and N-MOS is 1, 5 and 50 ... delay. Most current computers use TTL logic, very high speed

Ostlund, Neil S.

2012-01-01T23:59:59.000Z

429

Energy Department Seeks Proposals to Use Scientific Computing Resources at Lawrence Berkeley, Oak Ridge National Laboratories  

Energy.gov (U.S. Department of Energy (DOE))

WASHINGTON, DC -- Secretary of Energy Samuel W. Bodman announced today that DOE's Office of Science is seeking proposals to support computational science projects to enable high-impact advances...

430

Design of large-scale agricultural wireless sensor networks: email from the vineyard  

Science Conference Proceedings (OSTI)

We describe the design and implementation of a large-scale Wireless Sensor Network (WSN) for agriculture monitoring. As part of validation, we have deployed a prototype of 64 sensors to monitor a commercial vineyard. The system provides ... Keywords: WSN testbed, agricultural WSNs, agriculture monitoring, commercial vineyards, data collection, data storage, geographical coverage, large-scale WSNs, spatial resolution, vineyeard monitoring, wireless networks, wireless sensor networks

Christine Jardak; Krisakorn Rerkrai; Aleksandar Kovacevic; Janne Riihijarvi; Petri Mahonen

2010-08-01T23:59:59.000Z

431

The role of large-scale, extratropical dynamics in climate change  

SciTech Connect

The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

Shepherd, T.G. [ed.

1994-02-01T23:59:59.000Z

432

LogGOPSim: simulating large-scale applications in the LogGOPS model  

Science Conference Proceedings (OSTI)

We introduce LogGOPSim---a fast simulation framework for parallel algorithms at large-scale. LogGOPSim utilizes a slightly extended version of the well-known LogGPS model in combination with full MPI message matching semantics and detailed simulation ... Keywords: LogGOPS model, LogGP, LogGPS, LogP, collective operations, large-scale performance, message passing interface, simulation
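For readers unfamiliar with the model family: LogGP charges a latency L, a per-message CPU overhead o, an inter-message gap g, and a per-byte gap G. Below is a minimal Python sketch of the textbook LogGP point-to-point cost, not LogGOPSim's full LogGPS matching semantics.

    def loggp_send_time(k, L, o, G):
        # One k-byte message: sender overhead, per-byte injection cost,
        # wire latency, then receiver overhead. The gap parameter g
        # (omitted here) only limits back-to-back message injection.
        return o + (k - 1) * G + L + o

    # Example: 1 KiB message with L = 5 us, o = 1 us, G = 0.01 us/byte
    # t = loggp_send_time(1024, 5.0, 1.0, 0.01)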

Torsten Hoefler; Timo Schneider; Andrew Lumsdaine

2010-06-01T23:59:59.000Z

433

Structuring of Large-scale Complex Hybrid Systems: from Illustrative Analysis toward Modelization  

Science Conference Proceedings (OSTI)

System structuring is paramount to the development of large-scale complex hybrid systems (LCHS). However, there is no well-established and effective methodology for the structuring of LCHS. Using the approach of illustrating and abstracting, this paper ... Keywords: autonomous system, block-diagram-based model, distributed system, hierarchical system, large-scale complex hybrid system (LCHS), multiple gradation, nested system, nesting, perception–decision link, system geometry, system modelization, system structuring

Huaglory Tianfield

2001-02-01T23:59:59.000Z

434

Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research  

Science Conference Proceedings (OSTI)

Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems, and (4) design, situational awareness and control of complex networks. The program elements consist of a group of Complex Networked Systems Research Institutes (CNSRI), tightly coupled to an associated individual-investigator-based Complex Networked Systems Basic Research (CNSBR) program. The CNSRI's will be principally located at the DOE National Laboratories and are responsible for identifying research priorities, developing and maintaining a networked systems modeling and simulation software infrastructure, operating summer schools, workshops and conferences and coordinating with the CNSBR individual investigators. The CNSBR individual investigator projects will focus on specific challenges for networked systems. Relevancy of CNSBR research to DOE needs will be assured through the strong coupling provided between the CNSBR grants and the CNSRI's.

Brown, D L

2009-05-01T23:59:59.000Z

435

Statistical Power and Performance Modeling for Optimizing the Energy Efficiency of Scientific Computing  

Science Conference Proceedings (OSTI)

High-performance computing (HPC) has become an indispensable resource in science and engineering, and it has oftentimes been referred to as the "third pillar" of science, along with theory and experimentation. Performance tuning is a key aspect in utilizing ... Keywords: energy-efficiency tuning, green supercomputing, regression modeling
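As a hedged illustration of the kind of statistical modeling the abstract refers to (the paper's actual predictors and model form are not shown here), one can regress measured node power on performance counters with ordinary least squares:

    import numpy as np

    # Hypothetical per-interval measurements (made-up numbers): columns
    # might be CPU utilization, memory bandwidth, instructions retired.
    X = np.array([[0.2, 1.1, 3.0],
                  [0.5, 2.0, 5.5],
                  [0.9, 3.2, 8.1],
                  [0.7, 2.5, 6.4]])
    y = np.array([110.0, 160.0, 230.0, 195.0])  # node power in watts

    A = np.column_stack([np.ones(len(X)), X])   # add an intercept term
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    predicted = A @ coef                        # modeled power per interval

Such a fitted model can then be evaluated at candidate operating points (for example, different frequency settings) to steer energy-efficiency tuning.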

Balaji Subramaniam; Wu-chun Feng

2010-12-01T23:59:59.000Z

436

Emergence of learning in computer-supported, large-scale collective dynamics: a research agenda  

Science Conference Proceedings (OSTI)

Seen through the lens of complexity theory, past CSCL research may largely be characterized as small-scale (i.e., small-group) collective dynamics. While this research tradition is substantive and meaningful in its own right, we propose a line of inquiry ...

Manu Kapur; David Hung; Michael Jacobson; John Voiklis; Charles K. Kinzer; Chen Der-Thanq Victor

2007-07-01T23:59:59.000Z

437

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

National Ignition Facility (NIF) coming online, this is the ... of SRS/2wp instabilities in NIF relevant regimes. However, ... parameters relevant to NIF. There are important questions

Gerber, Richard

2012-01-01T23:59:59.000Z

438

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

E-Print Network (OSTI)

mp138, m526 Randall Cygan, Sandia National Laboratories ... James ... m744, m1036 Normand Modine, Sandia National Laboratories ... m783 Habib Najm, Sandia National Laboratory

Gerber, Richard

2012-01-01T23:59:59.000Z

439

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

which there is no existing project at NERSC (see Chapter 9). ... NERSC Project ID (Repo) | NERSC Project Title | Principal Investigator

Gerber, Richard

2012-01-01T23:59:59.000Z

440

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

E-Print Network (OSTI)

Simulation performed at NERSC by Professor K. H. Ackermann ... Supernovae. Simulation done at NERSC and LLNL by Professor ... Report of the Joint BER / NERSC Workshop Conducted May 7-8,

DOE Office of Science, Biological and Environmental Research Program Office BER,

2010-01-01T23:59:59.000Z

441

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

E-Print Network (OSTI)

time on the OLCF and ALCF systems for 12-24 projects ... portfolios. Allocation of ALCF and OLCF resources are ... the memory limitation on the ALCF IBM BG/P (“Blue Gene”) machine

DOE Office of Science, Biological and Environmental Research Program Office BER,

2010-01-01T23:59:59.000Z

442

Architecture for a large-scale ion-trap quantum computer  

Science Conference Proceedings (OSTI)

... Preprint quant-ph/0205094 at <http://xxx.lanl.gov> (2002). 26. ... Preprint quant-ph/0112084 at <http://xxx.lanl.gov> (2001). 32. ...

2010-09-15T23:59:59.000Z

443

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

E-Print Network (OSTI)

National Laboratory (PNNL) Contributors: Bruce Palmer, ... Tartakovsky, Yilin Fang (PNNL); Paul Meakin, Idaho National ... Another code, TE2THYS, a PNNL (Pacific Northwest National

DOE Office of Science, Biological and Environmental Research Program Office BER,

2010-01-01T23:59:59.000Z

444

Large Scale Computing and Storage Requirements for Biological and Environmental Research  

E-Print Network (OSTI)

problems such as biofuel production, bioremediation, and ... optimization for biofuel production we will be able to

DOE Office of Science, Biological and Environmental Research Program Office BER,

2010-01-01T23:59:59.000Z

445

Large Scale Computing and Storage Requirements for Basic Energy Sciences Research  

E-Print Network (OSTI)

NERSC and Jaguar at the OLCF. Methodological advances allow ... NAMD NERSC NGF NIH NSF NSLS OLCF ORNL OS PCET PCM PIMD PNNL

Gerber, Richard

2012-01-01T23:59:59.000Z

446

Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research  

E-Print Network (OSTI)

testing allocations at NCCS and ALCF; and pending production ... Acronyms: AE/EPM ALCF ALE AMR API ARRA ASCR CGP CICART

Gerber, Richard

2012-01-01T23:59:59.000Z

447

Diagnostic Downscaling of Large-Scale Wind Fields to Compute Local-Scale Trajectories  

Science Conference Proceedings (OSTI)

This paper describes a simple method, based on routine meteorological data, to produce high-resolution wind analyses throughout the planetary boundary layer (PBL). It is a new way to interpolate wind measurements. According to this method, high-...
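The abstract does not spell out the interpolation scheme, so as a stand-in the sketch below shows generic inverse-distance weighting of station winds to a target point; this illustrates wind-field interpolation in general and is not the paper's specific method:

    import numpy as np

    def idw_wind(stations, winds, target, power=2.0):
        # stations: (N, 2) coordinates; winds: (N, 2) u/v components;
        # target: (2,) point. Weight each observation by 1/distance^power.
        d = np.linalg.norm(stations - target, axis=1)
        if np.any(d == 0):               # target coincides with a station
            return winds[np.argmin(d)]
        w = 1.0 / d**power
        return (w[:, None] * winds).sum(axis=0) / w.sum()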

Andreas Stohl; Kathrin Baumann; Gerhard Wotawa; Matthias Langer; Bruno Neininger; Martin Piringer; Herbert Formayer

1997-07-01T23:59:59.000Z

448

Scientific Software  

NLE Websites -- All DOE Office Websites (Extended Search)

Scientists and researchers at the APS develop custom scientific software to help with acquisition and analysis of beamline data. Several packages are available for a variety of platforms and uses, in categories including General Diffraction, Powder Diffraction, Crystallography, Synchrotron Radiation / Optical Elements, Time-Resolved EXAFS, Visualization / Data Processing, and Detector Controls. General Diffraction: FPRIME/Absorb provides utilities for computing approximate x-ray scattering cross sections (f, f' and f") for individual elements using the Cromer & Liberman

449

A Sequential Cooperative Game Theoretic Approach to Storage-Aware Scheduling of Multiple Large-Scale Workflow Applications in Grids  

Science Conference Proceedings (OSTI)

Scheduling large-scale applications in heterogeneous Grid and Cloud systems is a fundamental NP-complete problem, and the schedule chosen determines both the performance and the execution cost obtained. We address the problem of scheduling an important class of large-scale Grid applications ...

Rubing Duan; Radu Prodan; Xiaorong Li

2012-09-01T23:59:59.000Z

450

Common Effects of Acidic Activators on Large-Scale Chromatin Structure and Transcription  

E-Print Network (OSTI)

Large-scale chromatin decondensation has been observed after the targeting of certain acidic activators to heterochromatic chromatin domains. Acidic activators are often modular, with two or more separable transcriptional activation domains. Whether these smaller regions are sufficient for all functions of the activators has not been demonstrated. We adapted an inducible heterodimerization system to allow systematic dissection of the function of acidic activators, individual subdomains within these activators, and short acidic-hydrophobic peptide motifs within these subdomains. Here, we demonstrate that large-scale chromatin decondensation activity is a general property of acidic activators. Moreover, this activity maps to the same acidic activator subdomains and acidic-hydrophobic peptide motifs that are responsible for transcriptional activation. Two copies of a mutant peptide motif of VP16 (viral protein 16) possess large-scale chromatin decondensation activity but minimal transcriptional activity, and a synthetic acidic-hydrophobic peptide motif had large-scale chromatin decondensation activity comparable to the strongest full-length acidic activator but no transcriptional activity. Therefore, the general property of large-scale chromatin decondensation shared by most acidic activators is not simply a direct result of transcription per se but is most likely the result of the concerted action of coactivator proteins recruited by the activators' short acidic-hydrophobic peptide motifs.

Anne E. Carpenter; Sevinci Memedula; Matthew J. Plutz; Andrew S. Belmont

2004-01-01T23:59:59.000Z

451

Solar Power in the Desert: Are the current large-scale solar developments really improving California’s environment?  

E-Print Network (OSTI)

from large-scale solar steam generator systems ... Persistence ... of water as steam power generators. The largest of these

Allen, Michael F.; McHughen, Alan

2011-01-01T23:59:59.000Z

452

VP 100: New Facility in Boston to Test Large-Scale Wind Blades | Department  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

VP 100: New Facility in Boston to Test Large-Scale Wind Blades. July 23, 2010 - 1:19pm. Boston's Wind Technology Testing Center, funded in part with Recovery Act funds, will be first in the U.S. to test blades up to 300 feet long. | Photo Courtesy of Massachusetts Clean Energy Center. Stephen Graff, former writer and editor for Energy Empowers, EERE. America's first-of-its-kind wind blade testing facility - capable of testing a blade as long as a football field - almost never was. Because of funding woes, the Massachusetts Clean Energy Center (MassCEC),

453

Total Cost per MWh for all common large-scale power generation sources | 

Open Energy Info (EERE)

In the US DOE, are there calculations for the real cost of energy, considering the negative, socialized costs of all commercial large-scale power generation sources? I am talking about the cost of mountaintop removal for coal mined that way, the trip to the power plant, the sludge pond or ash heap, the cost of the gas out of the stack, toxification of the lakes and streams, and plant decommissioning costs. For nuclear you are talking about managing the waste in perpetuity, the plant decommissioning costs, and so on. What I am trying to get at is the 'real cost' per MWh or kWh for the various sources. I suspect that the costs commonly quoted for fossil fuels and nuclear are

454

Regional climate consequences of large-scale cool roof and photovoltaic array deployment  

NLE Websites -- All DOE Office Websites (Extended Search)

Regional climate consequences of large-scale cool roof and photovoltaic array deployment. Environ. Res. Lett. 6 (2011) 034001 (9pp). doi:10.1088/1748-9326/6/3/034001. Dev Millstein and Surabi Menon, Lawrence

455

U.S. Signs International Fusion Energy Agreement; Large-Scale, Clean Fusion  

NLE Websites -- All DOE Office Websites (Extended Search)

U.S. Signs International Fusion Energy Agreement; Large-Scale, Clean Fusion Energy Project to Begin Construction. Office of Science, U.S. Department of Energy. November 21, 2006. PARIS, FRANCE - Representing the United States, Dr. Raymond L. Orbach, Under Secretary for Science of the U.S. Department of Energy (DOE), today joined counterparts from China, the European Union, India, Japan, the

456

Clean Energy Solutions Large Scale CHP and Fuel Cells Program | Department  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Clean Energy Solutions Large Scale CHP and Fuel Cells Program. Eligibility: Commercial, Fed. Government, Industrial, Institutional, Local Government, Nonprofit, State Government. Savings Category: Commercial Heating & Cooling, Manufacturing, Buying & Making Electricity, Alternative Fuel Vehicles, Hydrogen & Fuel Cells. Maximum Rebate: CHP, $3,000,000 or 30% of project costs; Fuel Cells, $3,000,000 or 45% of project costs. Program Info: Start Date 01/17/2013; State: New Jersey; Program Type: State Grant Program. Rebate Amount: CHP greater than 1 MW up to 3 MW, $0.55/watt; CHP > 3 MW, $0.35/watt; Fuel Cells > 1 MW with waste heat utilization, $2.00/watt; Fuel Cells > 1 MW without waste heat utilization, $1.50/watt

457

Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar  

Energy.gov (U.S. Department of Energy (DOE)) Indexed Site

Energy Department Loan Guarantee Would Support Large-Scale Rooftop Solar Power for U.S. Military Housing. September 7, 2011 - 2:10pm. Washington D.C. - U.S. Energy Secretary Steven Chu today announced the offer of a conditional commitment for a partial guarantee of a $344 million loan that will support the SolarStrong Project, which is expected to be a record expansion of residential rooftop solar power in the United States. Under the SolarStrong Project, SolarCity Corporation will install, own and operate up to 160,000 rooftop solar installations on as many as 124 U.S. military bases in up to 33 states. SolarCity expects the project to fund approximately 750 construction jobs over five years and 28 full time

458

Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media  

Science Conference Proceedings (OSTI)

This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation, and coarse-scale simulation based on auxiliary hydrodynamic models such as single equivalent continuum and dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs.

Ababou, R.

1991-08-01T23:59:59.000Z

459

Observation of femtosecond laser-induced nanostructure-covered large scale waves on metals  

SciTech Connect

Following femtosecond (fs) laser pulse irradiation, we produce a type of periodic surface structure with a period tens of times greater than the laser wavelength and densely covered by an iterating pattern that consists of stripes of nanostructures and microscale cellular structures. The morphology of this large scale wave pattern crucially depends on laser fluence and the number of laser pulses, but not on the laser wavelength. Our study suggests that this large scale wave is initiated by fs laser induced surface unevenness followed by periodically distributed nonuniform surface heating from fs pulse irradiation.

Hwang, Taek Yong; Guo Chunlei [Institute of Optics, University of Rochester, Rochester, New York 14627 (United States)

2011-04-15T23:59:59.000Z

460

Ultra high energy cosmic rays and the large scale structure of the galactic magnetic field  

E-Print Network (OSTI)

We study the deflection of ultra high energy cosmic ray protons in different models of the regular galactic magnetic field. Such particles have gyroradii well in excess of 1 kpc, and their propagation in the galaxy reflects only the large scale structure of the galactic magnetic field. A future large experimental sample of cosmic rays with energy above 10^19 eV could be used to study the large scale structure of the galactic magnetic field if such cosmic rays are indeed charged nuclei accelerated at powerful astrophysical objects and if the distribution of their sources is not fully isotropic.

Todor Stanev

1996-07-17T23:59:59.000Z

461

Variability of Load and Net Load in Case of Large Scale Distributed Wind Power  

Science Conference Proceedings (OSTI)

Large scale wind power production and its variability is one of the major inputs to wind integration studies. This paper analyses measured data from large scale wind power production. Comparisons of variability are made across several variables: time scale (10-60 minute ramp rates), number of wind farms, and simulated vs. modeled data. Ramp rates for wind power production, load (total system load), and net load (load minus wind power production) demonstrate how wind power increases the net load variability. Wind power will also change the timing of daily ramps.
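The two derived series in this study are straightforward to compute. A minimal Python sketch, assuming time series sampled on a fixed interval (names are illustrative, not from the paper):

    import numpy as np

    def net_load_and_ramps(load, wind, steps=1):
        # Net load is system load minus wind production; a ramp is the
        # change over `steps` intervals (steps=1 for 10-min ramps on
        # 10-min data, steps=6 for hourly ramps).
        net = load - wind
        ramps = net[steps:] - net[:-steps]
        return net, ramps

    # e.g. net, hourly_ramps = net_load_and_ramps(load_mw, wind_mw, steps=6)

Comparing the spread of the ramp distribution for load alone against net load shows directly how much wind adds to the variability the rest of the system must balance.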

Holttinen, H.; Kiviluoma, J.; Estanqueiro, A.; Gomez-Lazaro, E.; Rawn, B.; Dobschinski, J.; Meibom, P.; Lannoye, E.; Aigner, T.; Wan, Y. H.; Milligan, M.

2011-01-01T23:59:59.000Z

462

The MicroGrid: a Scientific Tool for Modeling Computational Grids  

E-Print Network (OSTI)

The complexity and dynamic nature of the Internet (and the emerging Computational Grid) demand that middleware and applications adapt to the changes in configuration and availability of resources. However, to the best of our knowledge there are no simulation tools which support systematic exploration of dynamic Grid software (or Grid resource) behavior. We describe our vision and initial efforts to build tools to meet these needs. Our MicroGrid simulation tools enable Globus applications to be run in arbitrary virtual grid resource environments, enabling broad experimentation. We describe the design of these tools, and their validation on microbenchmarks, the NAS parallel benchmarks, and an entire Grid application. These validation experiments show that the MicroGrid can match actual experiments within a few percent (2% to 4%).

H. J. Song; X. Liu; D. Jakobsen; R. Bhagwan; X. Zhang; K. Taura; A. Chien

2000-01-01T23:59:59.000Z

463

The Research Alliance in Math and Science program is sponsored by the Mathematical, Information, and Computational Sciences Division, Office of Advanced Scientific Computing Research, U.S. Department of Energy. The work was performed at the Oak Ridge Nati  

E-Print Network (OSTI)

, and Computational Sciences Division, Office of Advanced Scientific Computing Research, U.S. Department of Energy ... Contract No. De-AC05-00OR22725. This work has been authored by a contractor of the U.S. Government; accordingly, the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce

464

The Research Alliance in Math and Science program is sponsored by the Mathematical, Information, and Computational Sciences Division, Office of Advanced Scientific Computing Research, U.S. Department of Energy. The work was performed at the Oak Ridge Nati  

E-Print Network (OSTI)

, and Computational Sciences Division, Office of Advanced Scientific Computing Research, U.S. Department of Energy ... NATIONAL LABORATORY, U.S. DEPARTMENT OF ENERGY ... Improving the Manageability of OSCAR, Selima Rollins, City ... Contract No. De-AC05-00OR22725. This work has been authored by a contractor of the U.S. Government

465

Accelerating scientific discovery : 2007 annual report.  

Science Conference Proceedings (OSTI)

As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the secret to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance to those using the Blue Gene/L and optimizing user applications. Both the Catalyst and Applications Performance Engineering and Data Analytics (APEDA) teams support the users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed. Our expertise in high-end scientific computing enables us to provide guidance for applications that are transitioning to petascale as well as to produce software that facilitates their development, such as the MPICH library, which provides a portable and efficient implementation of the MPI standard--the prevalent programming model for large-scale scientific applications--and the PETSc toolkit that provides a programming paradigm that eases the development of many scientific applications on high-end computers.

Beckman, P.; Dave, P.; Drugan, C.

2008-11-14T23:59:59.000Z

466

Supporting ad-hoc re-planning and shareability at large-scale events  

Science Conference Proceedings (OSTI)

In this paper we present results from a research and development project focusing on the use of mobile phones at a music festival. Our aim is to explore how the festival experience can be enhanced with the introduction of mobile services. Two questions ... Keywords: coordination, ethnography, festival, groups, interaction design, large-scale event, mobile service, mobility, planning

Sarah Lindström; Mårten Pettersson

2010-11-01T23:59:59.000Z

467

Large-scale byzantine fault tolerance: safe but not always live  

Science Conference Proceedings (OSTI)

The overall correctness of large-scale systems composed of many groups of replicas executing BFT protocols scales poorly with the number of groups. This is because the probability of at least one group being compromised (more than 1/3 faulty replicas) ...
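The scaling argument can be made concrete. If one group is compromised with probability p, then with g independent groups the probability that at least one is compromised is 1 - (1 - p)^g, which tends to 1 as g grows. A small Python sketch under the simplifying assumption of independent, identically sized groups (the paper's analysis may be more refined):

    from math import comb

    def group_compromise_prob(n, f):
        # P(more than n/3 of n replicas are faulty) when each replica
        # fails independently with probability f (binomial tail).
        t = n // 3
        return sum(comb(n, k) * f**k * (1 - f)**(n - k)
                   for k in range(t + 1, n + 1))

    def system_compromise_prob(p_group, g):
        # P(at least one of g independent groups is compromised).
        return 1 - (1 - p_group) ** g

    # e.g. system_compromise_prob(group_compromise_prob(10, 0.2), 1000)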

Rodrigo Rodrigues; Petr Kouznetsov; Bobby Bhattacharjee

2007-06-01T23:59:59.000Z

468

IDO: intelligent data outsourcing with improved RAID reconstruction performance in large-scale data centers  

Science Conference Proceedings (OSTI)

Dealing with disk failures has become an increasingly common task for system administrators in the face of high disk failure rates in large-scale data centers consisting of hundreds of thousands of disks. Thus, achieving fast recovery from disk failures ...

Suzhen Wu; Hong Jiang; Bo Mao

2012-12-01T23:59:59.000Z

469

Investigating self-similarity and heavy-tailed distributions on a large-scale experimental facility  

Science Conference Proceedings (OSTI)

After the seminal work by Taqqu et al. relating self-similarity to heavy-tailed distributions, a number of research articles verified that aggregated Internet traffic time series show self-similarity and that Internet attributes, like Web file sizes ... Keywords: heavy-tailed distributions, large-scale experiments, monitoring, network traffic, self-similarity
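One standard diagnostic behind such verifications, offered here as a generic illustration rather than the authors' methodology, is the aggregated-variance estimate of the Hurst parameter H: for a self-similar series, the variance of the m-aggregated series scales like m^(2H-2), so H can be read off a log-log fit.

    import numpy as np

    def hurst_aggregated_variance(x, levels=(1, 2, 4, 8, 16, 32)):
        # Fit log Var(X^(m)) against log m; the slope is 2H - 2.
        logs_m, logs_v = [], []
        for m in levels:
            n = len(x) // m
            agg = x[:n * m].reshape(n, m).mean(axis=1)  # m-aggregation
            logs_m.append(np.log(m))
            logs_v.append(np.log(agg.var()))
        slope, _ = np.polyfit(logs_m, logs_v, 1)
        return 1 + slope / 2.0

Values of H well above 0.5 indicate long-range dependence of the kind reported for aggregated Internet traffic.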

Patrick Loiseau; Paulo Gonçalves; Guillaume Dewaele; Pierre Borgnat; Patrice Abry; Pascale Vicat-Blanc Primet

2010-08-01T23:59:59.000Z

470

Synthesis and control on large scale multi-touch sensing displays  

Science Conference Proceedings (OSTI)

In this paper, we describe our experience in musical interface design for a large scale, high-resolution, multi-touch display surface. We provide an overview of historical and present-day context in multi-touch audio interaction, and describe our approach ... Keywords: bi-manual, dynamic patching, multi-touch, multi-user, synthesis, tactile, touch

Philip L. Davidson; Jefferson Y. Han

2006-06-01T23:59:59.000Z

471

No large scale curvature perturbations during the waterfall phase transition of hybrid inflation  

Science Conference Proceedings (OSTI)

In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical backreactions to terminate inflation. If one considers only the classical evolution of the system, we show that the highly blue-tilted entropy perturbations induce highly blue-tilted large scale curvature perturbations during the waterfall phase transition which dominate over the original adiabatic curvature perturbations. However, we show that the quantum backreactions of the waterfall field inhomogeneities produced during the phase transition dominate completely over the classical backreactions. The cumulative quantum backreactions of very small scale tachyonic modes terminate inflation very efficiently and shut off the curvature perturbation evolution during the waterfall phase transition. This indicates that the standard hybrid inflation model is safe under large scale curvature perturbations during the waterfall phase transition.

Abolhasani, Ali Akbar [Department of Physics, Sharif University of Technology, Tehran (Iran, Islamic Republic of); School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Firouzjahi, Hassan [School of Physics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

2011-03-15T23:59:59.000Z

472

An Analysis of Klemp–Wilhelmson Schemes as Applied to Large-Scale Wave Modes  

Science Conference Proceedings (OSTI)

The use of Klemp–Wilhelmson (KW) time splitting for large-scale and global modeling is assessed through a series of von Neumann accuracy and stability analyses. Two variations of the KW splitting are evaluated in particular: the original acoustic-...

Kevin C. Viner; Craig C. Epifanio

2008-12-01T23:59:59.000Z

473

Efficient data management in a large-scale epidemiology research project  

Science Conference Proceedings (OSTI)

This article describes the concept of a 'Central Data Management' (CDM) and its implementation within the large-scale population-based medical research project 'Personalized Medicine'. The CDM can be summarized as a conjunction of data capturing, ... Keywords: Central Data Management, Electronic Case Report Forms, Electronic Data Capture, Individualized medicine, Personalized Medicine

Jens Meyer; Stefan Ostrzinski; Daniel Fredrich; Christoph Havemann; Janina Krafczyk; Wolfgang Hoffmann

2012-09-01T23:59:59.000Z

474

Overlay networks for task allocation and coordination in large-scale networks of cooperative agents  

Science Conference Proceedings (OSTI)

This paper proposes a novel method for scheduling and allocating atomic and complex tasks in large-scale networks of homogeneous or heterogeneous cooperative agents. Our method encapsulates the concepts of searching, task allocation and scheduling seamlessly ... Keywords: Cooperation, Cooperative agents, Coordination, Distributed constraint processing, Task and resource allocation

Panagiotis Karagiannis; George Vouros; Kostas Stergiou; Nikolaos Samaras

2012-01-01T23:59:59.000Z

475

Proceedings of the First International Workshop on Data Dissemination for Large Scale Complex Critical Infrastructures  

Science Conference Proceedings (OSTI)

Welcome to Valencia and to the first edition of the workshop on Data Distribution for Large-scale Complex Critical Infrastructures (DD4LCCI 2010). This workshop aims at providing a forum for researchers and engineers in academia and industry to foster ...

Christian Esposito; Aniruddha Gokhale; Domenico Cotroneo; Douglas C. Schmidt

2010-04-01T23:59:59.000Z

476

A practical ontology for the large-scale modeling of scholarly artifacts and their usage  

Science Conference Proceedings (OSTI)

The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. ... Keywords: resource description framework and schema, semantic networks, web ontology language

Marko A. Rodriguez; Johan Bollen; Herbert Van de Sompel

2007-06-01T23:59:59.000Z

477

Large-scale pattern growth of graphene films for stretchable transparent electrodes  

E-Print Network (OSTI)

of these highly conducting and transparent electrodes in flexible, stretchable, foldable electronics [8,9]. ... Graphene growth provides high-quality multilayer graphene samples interacting strongly with their substrates ... method to grow and transfer high-quality stretchable graphene films on a large scale using CVD on nickel

Kim, Philip

478

PIV Studies of Large Scale Structures in the Near Field of Small Aspect Ratio Elliptic Jets  

Science Conference Proceedings (OSTI)

The near flow field of small aspect ratio elliptic turbulent free jets (issuing from nozzle and orifice) was experimentally studied using a 2D PIV. Two point velocity correlations in these jets revealed the extent and orientation of the large scale structures ... Keywords: Axis switching, Elliptic jet, PIV, Spatial filtering, Two point correlation

G. Ramesh; L. Venkatakrishnan; A. Prabhu

2006-01-01T23:59:59.000Z

479

Sensory experience modifies spontaneous state dynamics in a large-scale barrel cortical model  

Science Conference Proceedings (OSTI)

Experimental evidence suggests that spontaneous neuronal activity may shape and be shaped by sensory experience. However, we lack information on how sensory experience modulates the underlying synaptic dynamics and how such modulation influences the ... Keywords: Barrel cortex, Large-scale model, STDP, Spontaneous dynamics

Elena Phoka; Mark Wildie; Simon R. Schultz; Mauricio Barahona

2012-10-01T23:59:59.000Z

480

Statistical Characteristics of the Large-Scale Response of Coastal Sea Level to Atmospheric Forcing  

Science Conference Proceedings (OSTI)

As part of a study of the large-scale response of coastal sea level to atmospheric forcing along the west coast of North America during June–September 1973, Halliwell and Allen calculate space- and time-lagged cross-correlation coefficients ...

J. S. Allen; D. W. Denbo

1984-06-01T23:59:59.000Z

481

Hierarchical visibility for guaranteed search in large-scale outdoor terrain  

Science Conference Proceedings (OSTI)

Searching for moving targets in large environments is a challenging task that is relevant in several problem domains, such as capturing an invader in a camp, guarding security facilities, and searching for victims in large-scale search and rescue scenarios. ... Keywords: Exploration, Guaranteed search, HRI, Human–robot interaction, Moving target search, Path planning, Pursuit-evasion, Task allocation

A. Kleiner; A. Kolling; M. Lewis; K. Sycara

2013-01-01T23:59:59.000Z

482

Linearly scaling 3D fragment method for large-scale electronic structure calculations  

Science Conference Proceedings (OSTI)

We present a new linearly scaling three-dimensional fragment (LS3DF) method for large scale ab initio electronic structure calculations. LS3DF is based on a divide-and-conquer approach, which incorporates a novel patching scheme that effectively ...
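As background on how such divide-and-conquer patching can work, one schematic form of the fragment summation (consistent with the published LS3DF description; treat the exact sign pattern as an assumption here, with superscripts labeling fragment extents along x, y, z) is

    \[
    E_{\mathrm{total}} \approx \sum_{k}
      \left( E_k^{222} - E_k^{221} - E_k^{212} - E_k^{122}
           + E_k^{211} + E_k^{121} + E_k^{112} - E_k^{111} \right),
    \]

where the sum runs over fragment corners k; the alternating signs count each region of space exactly once, so the artificial boundary (surface) contributions of overlapping fragments cancel.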

Lin-Wang Wang; Byounghak Lee; Hongzhang Shan; Zhengji Zhao; Juan Meza; Erich Strohmaier; David H. Bailey

2008-11-01T23:59:59.000Z

483

Tunable Fano resonance in large scale polymer-dielectric slab photonic crystals  

Science Conference Proceedings (OSTI)

Using interference lithography and a deposition technique, we have fabricated a large-scale quasi-one-dimensional polymer-dielectric photonic crystal that provides a sharp and deep Fano resonance in the transmission spectrum of the PC at normal incidence. Due ... Keywords: Interference lithography, Optical switch, Photonic crystals, Polymer, Tunable filter

Reza Asadi; Shahin Bagheri; Mahdi Khaje; Mohammad Malekmohammad; Mohammad-Taghi Tavassoly

2012-09-01T23:59:59.000Z

484

Determining the Mean, Large-Scale Circulation of the Atlantic with the Adjoint Method  

Science Conference Proceedings (OSTI)

A new model approach based on the adjoint formalism and aimed at assimilating large sets of hydrographic data is presented. The goal of the model calculations is to obtain the mean, large-scale ocean circulation together with coefficients of iso- ...

Reiner Schlitzer

1993-09-01T23:59:59.000Z

485

Large-Scale Dynamics of the Meiyu-Baiu Rainband: Environmental Forcing by the Westerly Jet  

Science Conference Proceedings (OSTI)

Meiyu-baiu is the major rainy season from central China to Japan brought by a zonally elongated rainband from June to mid-July. Large-scale characteristics and environmental forcing of this important phenomenon are investigated based on a ...

Takeaki Sampe; Shang-Ping Xie

2010-01-01T23:59:59.000Z

486

A steady-state L-mode tokamak fusion reactor : large scale and minimum scale  

E-Print Network (OSTI)

We perform extensive analysis on the physics of L-mode tokamak fusion reactors to identify (1) a favorable parameter space for a large scale steady-state reactor and (2) an operating point for a minimum scale steady-state ...

Reed, Mark W. (Mark Wilbert)

2010-01-01T23:59:59.000Z

487

Time efficient fabrication of ultra large scale nano dot arrays using electron beam lithography  

Science Conference Proceedings (OSTI)

An astonishingly simple yet versatile alternative method for the creation of ultra large scale nano dot arrays [1-3] utilising the fact that exposure in electron beam lithography (EBL) is performed by addressing single pixels with defined distances is ... Keywords: Electron beam lithography, Nano dot, Patterning, Photonic crystal, Plasmonics

Jochen Grebing; Jürgen Faßbender; Artur Erbe

2012-09-01T23:59:59.000Z

488

Large-scale Probabilistic Forecasting in Energy Systems using Sparse Gaussian Conditional Random Fields  

E-Print Network (OSTI)

pricing. Although it is known that probabilistic forecasts (which give a distribution over possible future ... Large-scale Probabilistic Forecasting in Energy Systems using Sparse Gaussian Conditional Random Fields, Matt Wytock and J. Zico Kolter. Abstract: Short-term forecasting is a ubiquitous practice

Kolter, J. Zico

489

An automatic water management system for large-scale rice paddy fields  

Science Conference Proceedings (OSTI)

An automatic water management system for large-scale paddy fields has been developed. Its purposes are to supply the paddy fields with water or drain water from them automatically, to decrease water consumption, and to achieve a good harvest. To ... Keywords: estimating mean water level, optimal water allocation, paddy field, predict field consumption, prediction of growth stages, water level control

Teruji Sekozawa

2010-10-01T23:59:59.000Z

490

An Integrated Docking Pipeline for the Prediction of Large-Scale Protein-Protein Interactions  

E-Print Network (OSTI)

An Integrated Docking Pipeline for the Prediction of Large-Scale Protein-Protein Interactions, Xin ... In this study, we developed a protein-protein docking pipeline (PPDP) that integrates a variety of state ... studies. In this study, we developed a protein-protein docking pipeline by integrat

491

Interactive remote large-scale data visualization via prioritized multi-resolution streaming  

Science Conference Proceedings (OSTI)

The simulations that run on petascale and future exascale supercomputers pose a difficult challenge for scientists to visualize and analyze their results remotely. They are limited in their ability to interactively visualize their data mainly due to ... Keywords: data intensive supercomputing, distance visualization, large scale data, remote visualization, visualization systems

James P. Ahrens; Jonathan Woodring; David E. DeMarle; John Patchett; Mathew Maltrud

2009-11-01T23:59:59.000Z

492

An analytical framework for particle and volume data of large-scale combustion simulations  

Science Conference Proceedings (OSTI)

This paper presents a framework to enable parallel data analyses and visualizations that combine both Lagrangian particle data and Eulerian field data of large-scale combustion simulations. Our framework is characterized by a new range query based design ... Keywords: data transformation and representation, feature extraction and tracking, scalability issues

Franz Sauer; Hongfeng Yu; Kwan-Liu Ma

2013-11-01T23:59:59.000Z

493

MicroTCA implementation of synchronous Ethernet-Based DAQ systems for large scale experiments  

E-Print Network (OSTI)

the form of a tank filled with liquid argon maintained at about 87 K by a cryogenic system. An electric ... the calculation of the track coordinates in 2 dimensions. The third dimension is given by the measurement ... software. Proposals of such very large scale Liquid Argon Detectors foresee the use of Liquefied Natural Gas

Paris-Sud XI, Université de

494

Fountain Codes Based Distributed Storage Algorithms for Large-Scale Wireless Sensor Networks  

Science Conference Proceedings (OSTI)

We consider large-scale networks with n nodes, out of which k are in possession (e.g., have