Sample records for large-scale scientific computing

  1. Large Scale Computing and Storage Requirements for Advanced Scientific

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  2. Multiscale Modeling and Simulation of Fluid... (Large-Scale Scientific Computations'07, Sozopol, Bulgaria, June 6, 2007)

    E-Print Network [OSTI]

    Popov, Peter

    Topics: flow in porous media (soil, porous rocks, etc.); elasticity problems in composite materials (adobe, concrete). Presentation outline: brief overview of upscaling methods in deformable porous media; fluid upscaling of flow in deformable porous media.

  3. A Distribution Oblivious Scalable Approach for Large-Scale Scientific...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Distribution Oblivious Scalable Approach for Large-Scale Scientific Data Processing June 12, 2013 Problem Statement: Runtimes of scientific data processing (SDP) methods vary...

  4. Computational Diagnostics based on Large Scale Gene

    E-Print Network [OSTI]

    Spang, Rainer

    Computational Diagnostics based on Large Scale Gene Expression Profiles using MCMC. Key elements: data loadings, singular values, and expression levels of “super genes” (an orthogonal matrix). Given the few profiles with known diagnosis, the uncertainty about the right model is high.

  5. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect (OSTI)

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes (FIG)

    2005-01-01

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster, and better algorithms for its analysis and examination. The growing availability of large-scale computing grids, coupled with high-performance networking, is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that must be run frequently and that require large amounts of computing resources to complete in a reasonable time. The data for these computations are very large, and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations and the nature of the data and parallelism, and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.
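
    The all-vs-all comparisons described above are embarrassingly parallel: every pair of sequences can be scored independently, which is what makes grid distribution effective. A minimal sketch of that decomposition (the scoring function and worker pool are illustrative stand-ins, not SEED project code):

```python
# Sketch of the "embarrassingly parallel" structure behind large-scale
# sequence comparison: all-vs-all pairs are independent, so they can be
# sharded across grid workers. The scoring function and pool here are
# illustrative stand-ins, not SEED project code.
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def identity_score(a, b):
    """Toy similarity: fraction of matching positions (stand-in for a real aligner)."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

def all_vs_all(seqs, workers=4):
    """Score every unordered pair of sequences, fanning pairs out to a worker pool."""
    pairs = list(combinations(seqs, 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = pool.map(lambda p: identity_score(seqs[p[0]], seqs[p[1]]), pairs)
    return dict(zip(pairs, scores))

genomes = {"g1": "ACGTACGT", "g2": "ACGTTCGT", "g3": "TTTTACGT"}
result = all_vs_all(genomes)
```

    In the grid setting each worker would be a compute node and the scoring function a full sequence-comparison tool; the fan-out/collect structure is the same.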

  6. Sandia Energy - Computational Fluid Dynamics & Large-Scale Uncertainty...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  7. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources, NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR), and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings, and all of the action items are aligned with NERSC strategic plans.

  8. Large-Scale GPU-Equipped High-Performance Compute Nodes (Global Scientific Information and Computing Center, Tokyo Institute of Technology)

    E-Print Network [OSTI]

    Furui, Sadaoki

    GPU-equipped high-performance compute nodes; high-speed network interconnect; high-speed and highly reliable storage systems; low power consumption and green operation; system and application software.

  9. Large Scale Computing and Storage Requirements for High Energy Physics

    SciTech Connect (OSTI)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five-year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.
The report includes a section that describes efforts already underway or planned at NERSC that address requirements collected at the workshop. NERSC has many initiatives in progress that address key workshop findings and are aligned with NERSC's strategic plans.

  10. Passive Network Performance Estimation for Large-Scale, Data-Intensive Computing

    E-Print Network [OSTI]

    Weissman, Jon

    Distributed computing applications are increasingly utilizing distributed data sources, but data access cost is unpredictable. For data-intensive scientific workflows [3], [4], data access cost is significant, so it is important to consider data access cost when launching data-intensive computing applications.

  11. GridMate: A Portable Simulation Environment for Large-Scale Adaptive Scientific Applications

    E-Print Network [OSTI]

    Li, Xiaolin "Andy"

    In this paper, we present a portable simulation environment, GridMate, for large-scale adaptive scientific applications in multi-site Grid environments. GridMate is a discrete...

  12. Advanced I/O for large-scale scientific applications.

    SciTech Connect (OSTI)

    Klasky, Scott (Oak Ridge National Laboratory, Oak Ridge, TN); Schwan, Karsten (Georgia Institute of Technology, Atlanta, GA); Oldfield, Ron A.; Lofstead, Gerald F., II (Georgia Institute of Technology, Atlanta, GA)

    2010-01-01

    As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for a closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets, staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole, to select data and data features without concern for what files or other storage technologies were employed. 
All of these features should be attained while maintaining a simple deployment for the science code and eliminating the need for allocation of additional computational resources.
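
    The data-staging idea above (decoupling the simulation's output from slow storage) can be sketched with a buffer drained by a background thread. This is only a minimal illustration of the concept, assuming an in-memory sink; real staging systems are far more elaborate:

```python
# Minimal sketch of the "data staging" idea from the abstract: the simulation
# hands output to an in-memory buffer and keeps computing while a background
# thread drains the buffer to storage. Names here are illustrative.
import io
import queue
import threading

class StagingWriter:
    def __init__(self, sink):
        self.sink = sink                          # file-like "storage" target
        self.q = queue.Queue()
        self.t = threading.Thread(target=self._drain, daemon=True)
        self.t.start()

    def write(self, chunk):
        self.q.put(chunk)                         # returns immediately; compute is not blocked

    def _drain(self):
        while True:
            chunk = self.q.get()
            if chunk is None:                     # sentinel: no more output
                break
            self.sink.write(chunk)                # slow synchronous IO happens here

    def close(self):
        self.q.put(None)
        self.t.join()                             # wait for the drain to finish

storage = io.BytesIO()
w = StagingWriter(storage)
for step in range(5):
    w.write(f"timestep {step}\n".encode())        # simulation keeps running between writes
w.close()
```

    The single queue preserves write order, which stands in for the "properly managed" synchronous interaction with storage that the abstract emphasizes.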

  13. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect (OSTI)

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.
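
    The trade-off measured above can be illustrated with a back-of-envelope model. All constants and scaling assumptions here (dynamic power ~ f^3, compute time ~ 1/f) are invented for illustration and are not taken from the Cray XT study:

```python
# Back-of-envelope model of the frequency-tuning trade-off the report measures
# empirically. Assumes (simplistically) dynamic power scales as f^3 and the
# compute-bound fraction of runtime scales as 1/f; all constants are invented
# for illustration and are NOT taken from the Cray XT study.
def energy(f_rel, compute_frac, p_static=100.0, p_dyn=100.0, t_base=1.0):
    """Return (energy in J, runtime in s) at relative frequency f_rel for a job
    whose compute-bound fraction slows as 1/f while IO/network waits do not."""
    t = t_base * (compute_frac / f_rel + (1.0 - compute_frac))
    p = p_static + p_dyn * f_rel ** 3
    return p * t, t

e_full, t_full = energy(1.0, compute_frac=0.3)   # communication-heavy job at full clock
e_low, t_low = energy(0.7, compute_frac=0.3)     # same job, downclocked to 70%
savings = 1.0 - e_low / e_full                   # ~24% energy saved for ~13% more runtime
```

    The qualitative point matches the abstract: for jobs dominated by communication or IO, downclocking saves substantial energy with modest runtime impact.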

  14. Classical Control of Large-Scale Quantum Computers

    E-Print Network [OSTI]

    Simon J. Devitt

    2014-05-20

    The accelerated development of quantum technology has reached a pivotal point. Early in 2014, several results were published demonstrating that several experimental technologies are now accurate enough to satisfy the requirements of fault-tolerant, error-corrected quantum computation. While many technological and experimental issues still need to be solved, the ability of experimental systems to achieve error rates low enough to satisfy the fault-tolerant threshold for several error correction models is a tremendous milestone. Consequently, it is now a good time for the computer science and classical engineering community to examine the classical problems associated with compiling quantum algorithms and implementing them on future quantum hardware. In this paper, we review the basic operational rules of a topological quantum computing architecture and outline one of the most important classical problems that needs to be solved: the decoding of error correction data for a large-scale quantum computer. We endeavour to present these problems independently from the underlying physics, as much of this work can be effectively solved by non-experts in quantum information or quantum mechanics.
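
    The decoding problem named above is, at its core, classical: map a measured syndrome to a correction. A minimal sketch using a 3-bit repetition code (not the topological codes the paper treats) shows the shape of that classical task:

```python
# The decoding problem in miniature: a 3-bit repetition code, where a measured
# syndrome is mapped to a classical correction. Surface-code decoders (e.g.
# minimum-weight perfect matching) solve a vastly larger version of this
# lookup in real time; this sketch shows only the classical control structure.
def syndrome(bits):
    """Parity checks on neighbouring bits of a 3-bit codeword."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Precomputed decoder table: syndrome -> index of the single bit to flip
DECODE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    """Apply the correction the decoder infers from the syndrome."""
    flip = DECODE[syndrome(bits)]
    out = list(bits)
    if flip is not None:
        out[flip] ^= 1
    return out
```

    As the paper notes, nothing here requires quantum mechanics: the decoder consumes classical measurement data and emits classical corrections.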

  15. Computational study of large-scale p-Median problems

    E-Print Network [OSTI]

    ...techniques to the simplex method for the solution of large-scale instances... instances up to 5535 nodes and 666639 arcs, arising from an industrial... For each node v ∈ TF ∪ AF we build a “layered” graph rooted in v, where layer...

  16. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    SciTech Connect (OSTI)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings.
This report includes a brief summary of those relevant to issues raised by researchers at the workshop.

  17. Architecture for a large-scale ion-trap quantum computer

    E-Print Network [OSTI]

    D. Kielpinski, C. Monroe & D. J. Wineland. The required operations have been individually experimentally demonstrated in this system... the quantum CCD... to build up a large-scale quantum computer... there exist theoretical and technical obstacles to scaling up the approach.

  18. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    E-Print Network [OSTI]

    Gerber, Richard A.

    2012-01-01

  19. Surface codes: Towards practical large-scale quantum computation

    E-Print Network [OSTI]

    Austin G. Fowler; Matteo Mariantoni; John M. Martinis; Andrew N. Cleland

    2012-10-27

    This article provides an introduction to surface code quantum computing. We first estimate the size and speed of a surface code quantum computer. We then introduce the concept of the stabilizer, using two qubits, and extend this concept to stabilizers acting on a two-dimensional array of physical qubits, on which we implement the surface code. We next describe how logical qubits are formed in the surface code array and give numerical estimates of their fault-tolerance. We outline how logical qubits are physically moved on the array, how qubit braid transformations are constructed, and how a braid between two logical qubits is equivalent to a controlled-NOT. We then describe the single-qubit Hadamard, S and T operators, completing the set of required gates for a universal quantum computer. We conclude by briefly discussing physical implementations of the surface code. We include a number of appendices in which we provide supplementary information to the main text.

  20. Large Scale Computing and Storage Requirements for Basic Energy Sciences

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

  1. Cryogenic Control Architecture for Large-Scale Quantum Computing

    E-Print Network [OSTI]

    J. M. Hornibrook; J. I. Colless; I. D. Conway Lamb; S. J. Pauka; H. Lu; A. C. Gossard; J. D. Watson; G. C. Gardner; S. Fallahi; M. J. Manfra; D. J. Reilly

    2014-09-08

    Solid-state qubits have recently advanced to the level that enables them, in-principle, to be scaled-up into fault-tolerant quantum computers. As these physical qubits continue to advance, meeting the challenge of realising a quantum machine will also require the engineering of new classical hardware and control architectures with complexity far beyond the systems used in today's few-qubit experiments. Here, we report a micro-architecture for controlling and reading out qubits during the execution of a quantum algorithm such as an error correcting code. We demonstrate the basic principles of this architecture in a configuration that distributes components of the control system across different temperature stages of a dilution refrigerator, as determined by the available cooling power. The combined setup includes a cryogenic field-programmable gate array (FPGA) controlling a switching matrix at 20 millikelvin which, in turn, manipulates a semiconductor qubit.

  2. Stability Analysis of Large-Scale Incompressible Flow Calculations on Massively Parallel Computers

    E-Print Network [OSTI]

    ...disturbances aligned with the associated eigenvectors will grow. The Cayley transformation, coupled...

  3. Personal Workspace for Large-Scale Data-Driven Computational Experiment

    E-Print Network [OSTI]

    Plale, Beth

    A user's personal workspace is a virtual repository of the user's data products; its conceptual space is organized... As the scale and complexity of data-driven computational science...

  4. Cloud Computing for Large-Scale Complex IT Systems The Proposers

    E-Print Network [OSTI]

    St Andrews, University of

    There are no obvious UK partners that would bring additional cloud computing expertise beyond LSCITS consortium members Bristol and St Andrews, both of whom have LSCITS PhD students working in cloud...

  5. Fluid computation of the performance-energy trade-off in large scale Markov models

    E-Print Network [OSTI]

    Imperial College, London

    ...energy consumption while maintaining multiple service level agreements. We show how the fluid analysis naturally leads to a constrained global optimisation problem.

  6. Studying the energy efficiency of large-scale computer systems requires models of the relationship

    E-Print Network [OSTI]

    Rivoire, Suzanne

    Studying the energy efficiency of large-scale computer systems requires models of the relationship between usage and power consumption; accordingly, a substantial body of literature models system-level power. Results from multi-node clusters using embedded, laptop, desktop, and server processors demonstrate the need for such models...

  7. G-NetMon: A GPU-accelerated Network Performance Monitoring System for Large Scale Scientific Collaborations

    SciTech Connect (OSTI)

    Wu, Wenji; DeMar, Phil; Holmgren, Don; Singh, Amitoj; Pordes, Ruth; /Fermilab

    2011-08-01

    At Fermilab, we have prototyped a GPU-accelerated network performance monitoring system, called G-NetMon, to support large-scale scientific collaborations. Our system exploits the data parallelism that exists within network flow data to provide fast analysis of bulk data movement between Fermilab and collaboration sites. Experiments demonstrate that our G-NetMon can rapidly detect sub-optimal bulk data movements.
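
    The data parallelism the abstract exploits comes from flow records being independent of one another. A CPU sketch of that per-site reduction (field names and the rate floor are illustrative assumptions, not G-NetMon's actual schema):

```python
# CPU sketch of the data-parallel core of flow-record analysis: every flow
# record is independent, so per-site byte counts can be reduced in parallel
# (on a GPU, roughly one thread per record). Field names and the rate floor
# are illustrative, not G-NetMon's actual schema.
from collections import defaultdict

def site_throughput(flows, window_s):
    """Aggregate bytes per remote site over a measurement window; return Gb/s."""
    by_site = defaultdict(int)
    for rec in flows:                  # the reduction a GPU would parallelize
        by_site[rec["site"]] += rec["bytes"]
    return {s: b * 8 / window_s / 1e9 for s, b in by_site.items()}

def suboptimal(rates, floor_gbps=1.0):
    """Flag collaboration sites whose bulk transfer rate falls below a floor."""
    return sorted(s for s, r in rates.items() if r < floor_gbps)

flows = [
    {"site": "CERN", "bytes": 40_000_000_000},
    {"site": "CERN", "bytes": 35_000_000_000},
    {"site": "BNL", "bytes": 2_000_000_000},
]
rates = site_throughput(flows, window_s=60)
```

    Detecting sub-optimal bulk data movement then reduces to comparing each site's aggregated rate against an expected floor.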

  8. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    SciTech Connect (OSTI)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  9. DOE's Office of Science Seeks Proposals for Expanded Large-Scale Scientific

    Energy Savers [EERE]

  10. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    SciTech Connect (OSTI)

    DOE Office of Science, Biological and Environmental Research Program Office (BER),

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  11. Very Large Scale Computations of the Free Energies of Eight Low-Lying Structures of Arginine in the Gas Phase

    E-Print Network [OSTI]

    Simons, Jack

    The free energies of five canonical and three zwitterionic low-lying structures of the arginine molecule... state-of-the-art parallel computers have been used. The electronic energy and Gibbs free energy...

  12. PowerGrid - A Computation Engine for Large-Scale Electric Networks

    SciTech Connect (OSTI)

    Chika Nwankpa

    2011-01-31

    This Final Report discusses work on an approach for analog emulation of large scale power systems using Analog Behavioral Models (ABMs) and analog devices in PSpice design environment. ABMs are models based on sets of mathematical equations or transfer functions describing the behavior of a circuit element or an analog building block. The ABM concept provides an efficient strategy for feasibility analysis, quick insight of developing top-down design methodology of large systems and model verification prior to full structural design and implementation. Analog emulation in this report uses an electric circuit equivalent of mathematical equations and scaled relationships that describe the states and behavior of a real power system to create its solution trajectory. The speed of analog solutions is as quick as the responses of the circuit itself. Emulation therefore is the representation of desired physical characteristics of a real life object using an electric circuit equivalent. The circuit equivalent has within it, the model of a real system as well as the method of solution. This report presents a methodology of the core computation through development of ABMs for generators, transmission lines and loads. Results of ABMs used for the case of 3, 6, and 14 bus power systems are presented and compared with industrial grade numerical simulators for validation.
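
    The ABMs described above embed the network equations of generators, lines, and loads in analog circuitry. As a purely numerical stand-in for "an electric circuit equivalent of mathematical equations," the sketch below solves the DC power-flow approximation for a tiny 3-bus system; susceptances and injections are made-up illustrative values, not taken from the report:

```python
# The report's ABMs embed network equations in analog circuitry; as a purely
# numerical stand-in, this solves the DC power-flow approximation
# B * theta = P for a 3-bus system (bus 0 is the slack bus). Susceptances
# and injections are made-up illustrative values, not from the report.
def solve2x2(a, b, c, d, p, q):
    """Solve [[a, b], [c, d]] @ [x, y] = [p, q] by Cramer's rule."""
    det = a * d - b * c
    return (p * d - b * q) / det, (a * q - p * c) / det

b01 = b02 = b12 = 10.0           # per-unit line susceptances
p1, p2 = -0.5, -0.3              # per-unit loads at buses 1 and 2

# Reduced susceptance matrix with the slack bus removed (theta0 = 0)
th1, th2 = solve2x2(b01 + b12, -b12, -b12, b02 + b12, p1, p2)

flow01 = b01 * (0.0 - th1)       # line flows recovered from the bus angles
flow02 = b02 * (0.0 - th2)
flow12 = b12 * (th1 - th2)
```

    In the analog emulator, this algebra is carried out by the circuit itself, so the "solution time" is just the circuit's response time.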

  13. Scalable, efficient ion-photon coupling with phase Fresnel lenses for large-scale quantum computing

    E-Print Network [OSTI]

    E. W. Streed; B. G. Norton; J. J. Chapman; D. Kielpinski

    2008-05-16

    Efficient ion-photon coupling is an important component for large-scale ion-trap quantum computing. We propose that arrays of phase Fresnel lenses (PFLs) are a favorable optical coupling technology to match with multi-zone ion traps. Both are scalable technologies based on conventional micro-fabrication techniques. The large numerical apertures (NAs) possible with PFLs can reduce the readout time for ion qubits. PFLs also provide good coherent ion-photon coupling by matching a large fraction of an ion's emission pattern to a single optical propagation mode (TEM00). To this end we have optically characterized a large numerical aperture phase Fresnel lens (NA=0.64) designed for use at 369.5 nm, the principal fluorescence detection transition for Yb+ ions. A diffraction-limited spot w0=350+/-15 nm (1/e^2 waist) with mode quality M^2= 1.08+/-0.05 was measured with this PFL. From this we estimate the minimum expected free space coherent ion-photon coupling to be 0.64%, which is twice the best previous experimental measurement using a conventional multi-element lens. We also evaluate two techniques for improving the entanglement fidelity between the ion state and photon polarization with large numerical aperture lenses.

  14. Fault prophet : a fault injection tool for large scale computer systems

    E-Print Network [OSTI]

    Tchwella, Tal

    2014-01-01

    In this thesis, I designed and implemented a fault injection tool, to study the impact of soft errors for large scale systems. Fault injection is used as a mechanism to simulate soft errors, measure the output variability ...

  15. As new computer architectures are developed to exploit large-scale data-level parallelism, techniques are

    E-Print Network [OSTI]

    Wills, Linda Mary

    Abstract: As new computer architectures are developed to exploit large-scale data-level parallelism, techniques are needed for data-parallel applications such as convolution, discrete cosine transform (DCT), and motion estimation [1]. These applications usually have tremendous potential for parallelism, in that they include a large percentage of independent operations.

  16. 2006 DOE INCITE Supercomputing Allocations Proposal Title: "Development and Correlations of Large Scale Computational Tools for

    E-Print Network [OSTI]

    Knowles, David William

    Principal Investigators represented include Todd Michal (The Boeing Company; Engineering Physics), Ronald Waltz (General Atomics; co-investigators Jeff Candy, General Atomics, and Mark Fahey, Oak Ridge), and Martin Karplus (Harvard; "Molecular dynamics of molecular motors").

  17. Large Scale Production Computing and Storage Requirements for High Energy Physics: Target 2017

    E-Print Network [OSTI]

    Gerber, Richard

    2014-01-01T23:59:59.000Z

    in the use of High Performance Computing (HPC) and in fact ... NERSC is the primary high-performance computing facility for ... three major High Performance Computing Centers: NERSC and

  18. Large-scale application of some modern CSM methodologies by parallel computation

    E-Print Network [OSTI]

    Li, Shaofan

    R.A. Uras, M.D. Adley, S. Li (Mechanical Engineering and Army High Performance Computing ...). Message-passing libraries (e.g. the Message Passing Interface, MPI) have increased the use of High Performance Computing (HPC). Several ... Army Engineer Research and Development Center (ERDC) and the Army High Performance Computing Research ...

  19. Computer Energy Modeling Techniques for Simulating Large Scale Correctional Institutes in Texas

    E-Print Network [OSTI]

    Heneghan, T.; Haberl, J. S.; Saman, N.; Bou-Saada, T. E.

    1996-01-01T23:59:59.000Z

    Building energy simulation programs have seen increasing use for evaluating energy consumption and energy conservation retrofits in buildings. Utilization of computer simulation programs for large facilities with multiple buildings, however...

  20. National Energy Research Scientific Computing Center (NERSC)...

    Office of Science (SC) Website

    News: NERSC/LBL Study Finds No Evidence of Heartbleed Attacks Before the Bug Was Made Public. Recent Requirement Workshops: Large Scale Computing and Storage...

  1. Large-scale simulations of complex physical systems

    SciTech Connect (OSTI)

    Belic, A. [Scientific Computing Laboratory, Institute of Physics, Pregrevica 118, 11080 Belgrade (Serbia and Montenegro)

    2007-04-23T23:59:59.000Z

    Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.

  2. Advanced Scientific Computing Research Computer Science

    E-Print Network [OSTI]

    Geddes, Cameron Guy Robinson

    Advanced Scientific Computing Research, Computer Science, FY 2006 Accomplishment: HDF5-Fast... fundamental Computer Science technologies and their application in production scientific research tools. Our technology (index, query, storage and retrieval) and the use of such technology in computational and computer...

  3. DFT modeling of adsorption onto uranium metal using large-scale parallel computing

    SciTech Connect (OSTI)

    Davis, N.; Rizwan, U. [Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana-Champaign, Urbana, IL (United States)

    2013-07-01T23:59:59.000Z

    There is a dearth of atomistic simulations involving the surface chemistry of γ-uranium, which is of interest as the key fuel component of a breeder-burner stage in future fuel cycles. Recent availability of high-performance computing hardware and software has rendered extended quantum chemical surface simulations involving actinides feasible. With that motivation, data for bulk and surface γ-phase uranium metal are calculated with the plane-wave pseudopotential density functional theory method. Chemisorption of atomic hydrogen and oxygen on several un-relaxed low-index faces of γ-uranium is considered. The optimal adsorption sites (calculated cohesive energies) on the (100), (110), and (111) faces are found to be the one-coordinated top site (8.8 eV), four-coordinated center site (9.9 eV), and one-coordinated top1 site (7.9 eV), respectively, for oxygen; and the four-coordinated center site (2.7 eV), four-coordinated center site (3.1 eV), and three-coordinated top2 site (3.2 eV) for hydrogen. (authors)

  4. Advanced Scientific Computing Research Computer Science

    E-Print Network [OSTI]

    Geddes, Cameron Guy Robinson

    Advanced Scientific Computing Research, Computer Science, FY 2006 Accomplishment: High Performance ... collections of scientific data. In recent years, much of the work in computer and computational science has ... It is generally accepted that as sciences move into the tera- and peta-scale regimes, one ...

  5. Advanced Scientific Computing Research

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  6. Edison Electrifies Scientific Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    NERSC recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be...

  7. Materialized community ground models for large-scale earthquake simulation

    E-Print Network [OSTI]

    Shewchuk, Jonathan

    Materialized community ground models for large-scale earthquake simulation, Steven W. Schlosser. ... ground motion simulations, in which ground model datasets are fully materialized into octrees stored as a service: techniques in which scientific computation and storage services become more tightly intertwined.

  8. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    SciTech Connect (OSTI)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-09-30T23:59:59.000Z

    Extensive computing power has been used to tackle issues such as climate changes, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running, it can make progress, whereas the whole MPI analysis job fails as soon as a single node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.
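    The coordination pattern the abstract describes, wrapping a sequential analysis code and farming independent input chunks out to workers, can be sketched as follows. This is a hedged stand-in: a local process pool plays the role of the VMs, and the threshold count is a hypothetical proxy for the cyclone detector, not the authors' program.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze_chunk(chunk):
    """Stand-in for the sequential analysis program run inside one VM:
    count pressure readings below a (hypothetical) storm-candidate threshold."""
    return sum(1 for pressure in chunk if pressure < 990.0)

def parallel_analysis(chunks):
    """Coordinator: ship each input chunk to a worker, gather partial results."""
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(analyze_chunk, chunks))

if __name__ == "__main__":
    data = [[1005.0, 985.0, 1010.0], [970.0, 1002.0], [988.0, 995.0, 960.0]]
    print(parallel_analysis(data))
```

    The key design point carried over from the paper is that each worker runs the unmodified sequential analysis; only the chunk distribution and result gathering are parallel.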

  9. The SCIRun Parallel Scientific Computing Problem Solving Environment Christopher R. Johnson

    E-Print Network [OSTI]

    Parker, Steven G.

    The primary purpose of SCIRun is to enable the user to interactively control the construction, debugging, and steering of large-scale scientific computations. SCIRun uses a dataflow programming model and enables scientists to modify geometric models and interactively change ...

  10. Large scale disease prediction

    E-Print Network [OSTI]

    Schmid, Patrick R. (Patrick Raphael)

    2008-01-01T23:59:59.000Z

    The objective of this thesis is to present the foundation of an automated large-scale disease prediction system. Unlike previous work that has typically focused on a small self-contained dataset, we explore the possibility ...

  11. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    E-Print Network [OSTI]

    Hasenkamp, Daren

    2011-01-01T23:59:59.000Z

    M. Cusumano, Cloud computing and SaaS as new computing ... Market-Oriented Cloud Computing: Vision, Hype, and Reality ... Open-Source Cloud-Computing System. In Proceedings of the ...

  12. APPLIED MATHEMATICS AND SCIENTIFIC COMPUTING

    E-Print Network [OSTI]

    Rogina, Mladen

    APPLIED MATHEMATICS AND SCIENTIFIC COMPUTING, Brijuni, Croatia, June 23-27, 2003 (figure: Runge's example). Organized by: Department of Mathematics, University of Zagreb, Croatia. Miljenko Marušić, chairman; Krešimir Veselić; Andro Mikelić. Sponsors: Ministry of Science and Technology, Croatia; CV Sistemi d...

  13. IT Licentiate theses Scientific Computing on Hybrid

    E-Print Network [OSTI]

    Flener, Pierre

    IT Licentiate theses 2013-002: Scientific Computing on Hybrid Architectures, Marcus Holm, Uppsala ... Licentiate of Philosophy in Scientific Computing. (c) Marcus Holm 2013. ISSN 1404-5117. Printed by the Department ...

  14. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    SciTech Connect (OSTI)

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28T23:59:59.000Z

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in now ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-differences (FD) and finite-elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. 
The method developed here is completely general, applicable to any crystal symmetry and to both metals and insulators alike. We have developed and implemented a full self-consistent Kohn-Sham method, including both total energies and forces for molecular dynamics, and developed a full MPI parallel implementation for large-scale calculations. We have applied the method to the gamut of physical systems, from simple insulating systems with light atoms to complex d- and f-electron systems, requiring large numbers of atomic-orbital enrichments. In every case, the new PU FE method attained the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the current state-of-the-art PW method. Finally, our initial MPI implementation has shown excellent parallel scaling of the most time-critical parts of the code up to 1728 processors, with clear indications of what will be required to achieve comparable scaling for the rest. Having shown that the key remaining disadvantage of real-space methods can in fact be overcome, the work has attracted significant attention: with sixteen invited talks, both domestic and international, so far; two papers published and another in preparation; and three new university and/or national laboratory collaborations, securing external funding to pursue a number of related research directions. Having demonstrated the proof of principle, work now centers on the necessary extensions and optimizations required to bring the prototype method and code delivered here to production applications.
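    The partition-of-unity property the method builds on can be illustrated with a textbook 1D sketch (not the authors' code): standard finite-element hat functions sum to one everywhere, so multiplying them by a known local solution (e.g., an atomic-like exponential standing in for an orbital) yields an enriched basis that reproduces that solution exactly:

```python
import math

def hat(x, i, nodes):
    """Value at x of the i-th piecewise-linear FE hat function on the given nodes."""
    xi = nodes[i]
    if i > 0 and nodes[i - 1] <= x <= xi:
        return (x - nodes[i - 1]) / (xi - nodes[i - 1])
    if i < len(nodes) - 1 and xi <= x <= nodes[i + 1]:
        return (nodes[i + 1] - x) / (nodes[i + 1] - xi)
    return 0.0

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
u = lambda x: math.exp(-3.0 * x)    # hypothetical known local "atomic" function

def enriched(x):
    # Enriched interpolant sum_i hat_i(x) * u(x): exact, because the hats
    # form a partition of unity (they sum to 1 at every point).
    return sum(hat(x, i, nodes) * u(x) for i in range(len(nodes)))
```

    This is why building known atomic physics into the basis cuts the degrees of freedom: functions the basis can already represent exactly need no extra resolution.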

  15. SIAM Conference on Parallel Processing for Scientific Computing - March 12-14, 2008

    SciTech Connect (OSTI)

    None

    2008-09-08T23:59:59.000Z

    The themes of the 2008 conference included, but were not limited to: Programming languages, models, and compilation techniques; The transition to ubiquitous multicore/manycore processors; Scientific computing on special-purpose processors (Cell, GPUs, etc.); Architecture-aware algorithms; From scalable algorithms to scalable software; Tools for software development and performance evaluation; Global perspectives on HPC; Parallel computing in industry; Distributed/grid computing; Fault tolerance; Parallel visualization and large scale data management; and The future of parallel architectures.

  16. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    E-Print Network [OSTI]

    Laboratory, USA. {dhasenkamp, asim, mfwehner, kwu}@lbl.gov. ABSTRACT: Extensive computing power has been used ... temperatures. The National Oceanographic Data Center, for example, compiled data from a survey buoy in the Gulf ...

  17. analysis scientific computing: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Scientific Computing and Imaging Institute (SCI Institute), Computer Technologies and Information Sciences Websites. Summary: SCI Institute Scientific...

  18. Scientific Discovery Learning with Computer Simulations Scientific Discovery Learning with Computer

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Abstract: Scientific discovery learning is a highly self-directed and constructivistic form of learning. A computer simulation is a type of computer-based environment that is very...

  19. Large scale tracking algorithms.

    SciTech Connect (OSTI)

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01T23:59:59.000Z

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied to detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
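    The combinatorial explosion noted above is easy to make concrete: if each surviving hypothesis branches every frame into one child per possible target-to-measurement assignment, the hypothesis count grows exponentially with the number of frames, which is why practical trackers prune to a fixed beam. A toy counting sketch (an illustrative model, not one of the report's algorithms):

```python
import math

def naive_hypothesis_count(num_targets, num_frames):
    """With all num_targets! one-to-one target/measurement assignments possible
    per frame and no pruning, the count is (num_targets!) ** num_frames."""
    return math.factorial(num_targets) ** num_frames

def pruned_hypothesis_count(num_targets, num_frames, beam=100):
    """Same branching, but only the `beam` best hypotheses survive each frame."""
    count = 1
    for _ in range(num_frames):
        count = min(count * math.factorial(num_targets), beam)
    return count

# Example: 5 targets over 10 frames is 120**10 hypotheses without pruning,
# but stays bounded at the beam width with pruning.
```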

  20. advanced scientific computing: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Advanced Scientific Computing Research, Computer Science, Plasma Physics and Fusion Websites. Summary: Advanced Scientific Computing...

  1. Scientific Foundations of Computer Graphics Thomas Larsson

    E-Print Network [OSTI]

    Larsson, Thomas

    Scientific Foundations of Computer Graphics, Thomas Larsson, Department of Computer Engineering ... methodological framework and research methods? In this paper, the nature of computer graphics is discussed from a theory-of-science perspective. The research methods of computer graphics are discussed, and reasons ...

  2. Parallel I/O Software Infrastructure for Large-Scale Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel I/O Software Infrastructure for Large-Scale Systems | Tags: Math & Computer Science. An...

  3. PROCEEDINGS OF THE RIKEN BNL RESEARCH CENTER WORKSHOP ON LARGE SCALE COMPUTATIONS IN NUCLEAR PHYSICS USING THE QCDOC, SEPTEMBER 26 - 28, 2002.

    SciTech Connect (OSTI)

    AOKI,Y.; BALTZ,A.; CREUTZ,M.; GYULASSY,M.; OHTA,S.

    2002-09-26T23:59:59.000Z

    The massively parallel computer QCDOC (QCD On a Chip) of the RIKEN BNL Research Center (RBRC) will provide ten-teraflop peak performance for lattice gauge calculations. Lattice groups from both Columbia University and RBRC, with assistance from IBM, jointly handled the design of the QCDOC. RIKEN has provided $5 million in funding to complete the machine in 2003. Some fraction of this computer (perhaps as much as 10%) might be made available for large-scale computations in areas of theoretical nuclear physics other than lattice gauge theory. The purpose of this workshop was to investigate the feasibility and possibility of using a supercomputer such as the QCDOC for lattice, general nuclear theory, and other calculations. The lattice applications to nuclear physics that can be investigated with the QCDOC are varied: for example, the light hadron spectrum, finite-temperature QCD, and kaon (ΔI = 1/2 and CP violation) and nucleon (the structure of the proton) matrix elements, to name a few. There are also other topics in theoretical nuclear physics that are currently limited by computer resources. Among these are ab initio calculations of nuclear structure for light nuclei (e.g. up to A ≈ 8 nuclei), nuclear shell model calculations, nuclear hydrodynamics, heavy-ion cascade and other transport calculations for RHIC, and nuclear astrophysics topics such as exploding supernovae. The physics topics were quite varied, ranging from simulations of stellar collapse by Douglas Swesty to detailed shell model calculations by David Dean, Takaharu Otsuka, and Noritaka Shimizu. Going outside traditional nuclear physics, James Davenport discussed molecular dynamics simulations and Shailesh Chandrasekharan presented a class of algorithms for simulating a wide variety of fermionic problems. Four speakers addressed various aspects of theory and computational modeling for relativistic heavy ion reactions at RHIC. 
Scott Pratt and Steffen Bass gave general overviews of how qualitatively different types of physical processes evolve temporally in heavy ion reactions. Denes Molnar concentrated on the application of hydrodynamics, and Alex Krasnitz on a classical Yang-Mills field theory for the initial phase. We were pleasantly surprised by the excellence of the talks and the substantial interest from all parties. The diversity of the audience forced the speakers to give their talks at an understandable level, which was highly appreciated. One particular bonus of the discussions could be the application of highly developed three-dimensional astrophysics hydrodynamics codes to heavy ion reactions.

  4. Second-order adjoint sensitivity analysis procedure (SO-ASAP) for computing exactly and efficiently first- and second-order sensitivities in large-scale linear systems: I. Computational methodology

    E-Print Network [OSTI]

    Dan G. Cacuci

    2014-11-22T23:59:59.000Z

    This work presents the second-order forward and adjoint sensitivity analysis procedures (SO-FSAP and SO-ASAP) for computing exactly and efficiently the second-order functional derivatives of physical (engineering, biological, etc.) system responses to the system's model parameters. The definition of system parameters used in this work includes all computational input data, correlations, initial and/or boundary conditions, etc. For a physical system comprising N parameters and M responses, the SO-FSAP requires a total of 0.5N^2 + 1.5N large-scale computations for obtaining all of the first- and second-order sensitivities, for all M system responses. On the other hand, the SO-ASAP requires a total of 2N + 1 large-scale computations for obtaining all of the first- and second-order sensitivities, for one functional-type system response. Therefore, the SO-FSAP should be used when M is much larger than N, while the SO-ASAP should be used when N is much larger than M. The original SO-ASAP presented in this work should enable the hitherto very difficult, if not intractable, exact computation of all of the second-order response sensitivities (i.e., functional Gateaux derivatives) for large systems involving many parameters, as usually encountered in practice. Very importantly, the implementation of the SO-ASAP requires very little additional effort beyond the construction of the adjoint sensitivity system needed for computing the first-order sensitivities.
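    The operation counts quoted in the abstract determine which procedure to prefer. A small sketch of the comparison for N parameters and M responses (the SO-ASAP total below multiplies the per-response cost by M, an assumption made explicit here):

```python
def so_fsap_cost(n):
    """Large-scale computations for all 1st/2nd-order sensitivities, all responses."""
    return 0.5 * n * n + 1.5 * n

def so_asap_cost(n, m):
    """2N+1 large-scale computations per functional-type response, times M responses."""
    return (2 * n + 1) * m

# Many parameters, few responses: the adjoint procedure (SO-ASAP) is far cheaper.
print(so_fsap_cost(1000))      # 501500.0
print(so_asap_cost(1000, 2))   # 4002
# Few parameters, many responses: the forward procedure (SO-FSAP) wins instead.
print(so_fsap_cost(2))         # 5.0
print(so_asap_cost(2, 1000))   # 5000
```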

  5. National Energy Research Scientific Computing Center

    E-Print Network [OSTI]

    Geddes, Cameron Guy Robinson

    National Energy Research Scientific Computing Center (NERSC) Visualization Tools and Techniques: dual IR4 graphics accelerators; dual GigE channels to HPSS (use hsi to move data). Alternative implementation: SGI's Vizserver, which uses escher's graphics hardware to accelerate rendering.

  6. Scalable Cache Memory Design for Large-Scale SMT Architectures

    E-Print Network [OSTI]

    Mudawa, Muhamed F.

    Scalable Cache Memory Design for Large-Scale SMT Architectures, Muhamed F. Mudawar, Computer Science. ... in existing SMT and superscalar processors is optimized for latency, but not for bandwidth. The size of the L1 is not suitable for future large-scale SMT processors, which will demand high-bandwidth instruction and data ...

  7. Large-scale Intelligent Transporation Systems simulation

    SciTech Connect (OSTI)

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01T23:59:59.000Z

    A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe-vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
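    The in-vehicle optimal route planning and the TMC's link-time advisories described above interact in a simple loop: a vehicle replans a shortest path whenever advertised link travel times change. A minimal sketch with a hypothetical network and travel times, using Dijkstra's algorithm as a stand-in for the planner:

```python
import heapq

def shortest_route(links, src, dst):
    """Dijkstra over current link travel times (the in-vehicle planner)."""
    dist, prev = {src: 0.0}, {}
    queue = [(0.0, src)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in links.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(queue, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Vehicles initially route A->B->D; a TMC congestion advisory on link A->B
# raises its travel time, and replanning diverts traffic via C.
links = {"A": [("B", 5.0), ("C", 4.0)], "B": [("D", 2.0)], "C": [("D", 4.0)]}
route, cost = shortest_route(links, "A", "D")        # (["A", "B", "D"], 7.0)
links["A"] = [("B", 15.0), ("C", 4.0)]               # advisory update
route2, cost2 = shortest_route(links, "A", "D")      # (["A", "C", "D"], 8.0)
```

    In the prototype each vehicle is its own process; here a single function call per vehicle stands in for that behavior model.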

  8. Scientific Computing Kernels on the Cell Processor

    SciTech Connect (OSTI)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04T23:59:59.000Z

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
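    The kind of performance model described, predicting kernel runtime from its flop count and memory traffic against peak machine rates, can be caricatured by a simple bound of the type such models build on. The peak figures below are illustrative placeholders, not the paper's measured Cell numbers:

```python
def model_time(flops, bytes_moved, peak_gflops, peak_gbs):
    """A kernel can run no faster than either its compute time or its
    memory-traffic time; model the runtime as the larger of the two."""
    return max(flops / (peak_gflops * 1e9), bytes_moved / (peak_gbs * 1e9))

# Sparse matrix-vector multiply is bandwidth-bound: roughly 2 flops per
# 12 bytes moved (value + index + vector access, a common rule of thumb).
nnz = 10_000_000
compute_t = 2 * nnz / 200e9                 # hypothetical 200 GFLOP/s peak
memory_t = 12 * nnz / 25e9                  # hypothetical 25 GB/s peak
t = model_time(2 * nnz, 12 * nnz, peak_gflops=200.0, peak_gbs=25.0)
```

    For this kernel the memory term dominates, matching the paper's observation that SpMV-like kernels stress bandwidth rather than raw flops.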

  9. Visualization of Large-Scale Distributed Data

    E-Print Network [OSTI]

    Johnson, Andrew

    Visualization of Large-Scale Distributed Data, Jason Leigh, Andrew Johnson, Luc Renambot. ... tools that are now considered the "lenses" for examining large-scale data ... the representation of data and the interactive manipulation and querying of the visualization. Large-scale data ...

  10. Data mining techniques for large-scale gene expression analysis

    E-Print Network [OSTI]

    Palmer, Nathan Patrick

    2011-01-01T23:59:59.000Z

    Modern computational biology is awash in large-scale data mining problems. Several high-throughput technologies have been developed that enable us, with relative ease and little expense, to evaluate the coordinated expression ...

  11. NERSC Role in Advanced Scientific Computing Research Katherine...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The mission of the National Energy Research Scientific Computing Center (NERSC) is to accelerate the pace of scientific discovery by providing high performance computing, information, data, and communications services for all DOE...

  12. accelerating scientific computations: Topics by E-print Network

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CSTN-131: Implementing Stereo Vision of GPU-Accelerated Scientific Simulations using ... Computer Technologies and Information Sciences Websites. Summary: Computational Science...

  13. MPI: The Complete Reference Scientific and Engineering Computation

    E-Print Network [OSTI]

    Lu, Paul

    MPI: The Complete Reference. Scientific and Engineering Computation series, Janusz Kowalik, Editor. ... Manchek, and Vaidy Sunderam, 1994. Enabling Technologies for Petaflops Computing by Thomas Sterling, Paul...

  14. Large-Scale Experiment of Co-allocation Strategies for Peer-to-Peer SuperComputing in P2P-MPI

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Illkirch, France. stephane.genaud@loria.fr, choopan@icps.u-strasbg.fr. Abstract: High performance computing ... designed to be simple for the user, and it meets the main high performance computing requirement, which is locality

  15. Advanced Scientific Computing Research Network Requirements

    SciTech Connect (OSTI)

    Dart, Eli; Tierney, Brian

    2013-03-08T23:59:59.000Z

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  16. Large-Scale Renewable Energy Guide Webinar

    Broader source: Energy.gov [DOE]

    This webinar introduces the "Large Scale Renewable Energy Guide" and provides an overview of this important FEMP guide, which describes FEMP's approach to large-scale renewable energy projects and provides guidance to Federal agencies and the private sector on how to develop a common process for large-scale renewable projects.

  17. Conundrum of the Large Scale Streaming

    E-Print Network [OSTI]

    T. M. Malm

    1999-09-12T23:59:59.000Z

    The etiology of the large scale peculiar velocity (large scale streaming motion) of clusters would increasingly seem more tenuous, within the context of the gravitational instability hypothesis. Are there any alternative testable models possibly accounting for such large scale streaming of clusters?

  18. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    SciTech Connect (OSTI)

    None

    2007-06-27T23:59:59.000Z

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  19. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect (OSTI)

Boehm, Swen [ORNL]; Elwasif, Wael R [ORNL]; Naughton, Thomas J, III [ORNL]; Vallee, Geoffroy R [ORNL]

    2014-01-01T23:59:59.000Z

High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large-scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large-scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.
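The fine-grained launch pattern this record targets - many small, independent simulations draining one coarse-grained allocation - can be sketched generically. This is a plain-Python illustration, not the authors' Cray/ALPS runtime; `simulate` is a hypothetical stand-in for one discrete simulation instance:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed, steps=1000):
    """Hypothetical stand-in for one small, discrete simulation run."""
    x = seed
    for _ in range(steps):
        x = (1103515245 * x + 12345) % (1 << 31)  # toy LCG "dynamics"
    return seed, x

# A campaign of 8 loosely coupled runs, drained by 4 workers
# (the "nodes" of a single coarse-grained allocation).
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(simulate, range(8)))

print(sorted(results))  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```

The point of the pattern is that the allocation is requested once, while the many small runs are scheduled inside it, avoiding one batch-system launch per simulation.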

  20. Computer Science Issues for Large Scale Applications

    E-Print Network [OSTI]

    Allen, Gabrielle

E. Schnetter, E. Seidel, H. Shinkai, D. Shoemaker, B. Szilagyi, R. Takahashi, J. Winicour, Towards ...

  1. Advanced Scientific Computing Research Network Requirements

    E-Print Network [OSTI]

    Dart, Eli

    2014-01-01T23:59:59.000Z

that have a high-performance computing (HPC) component (with an emphasis on high-performance computing facilities) ... develop and deploy high-performance computing hardware and ...

  2. DLFM library tools for large scale dynamic applications.

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

DLFM library tools for large scale dynamic applications. Large scale Python and other dynamic applications may spend huge ...

  3. Microfluidic Large-Scale Integration: The Evolution

    E-Print Network [OSTI]

    Quake, Stephen R.

Microfluidic Large-Scale Integration: The Evolution of Design Rules for Biological Automation. Keywords: polydimethylsiloxane. Abstract: Microfluidic large-scale integration (mLSI) refers to the development of microfluidic ... are discussed. Several microfluidic components used as building blocks to create effective, complex, and highly ...

  4. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    SciTech Connect (OSTI)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17T23:59:59.000Z

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  5. Developing A Grid Portal For Large-scale Reservoir Studies

    E-Print Network [OSTI]

    Allen, Gabrielle

Developing A Grid Portal For Large-scale Reservoir Studies. Center for Computation & Technology ... uncertainty. · Advantages of grid technology · Proposed Solution of the UCoMS Team · What is a Portal? · UCoMS ... of reservoir uncertainty... Petroleum drilling consists of many uncertainties. Main objective is to optimize ...

  6. Writing and Publishing Scientific Articles in Computer Science

    E-Print Network [OSTI]

    Wladmir Cardoso Brandăo

    2015-06-01T23:59:59.000Z

    Over 15 years of teaching, advising students and coordinating scientific research activities and projects in computer science, we have observed the difficulties of students to write scientific papers to present the results of their research practices. In addition, they repeatedly have doubts about the publishing process. In this article we propose a conceptual framework to support the writing and publishing of scientific papers in computer science, providing a kind of guide for computer science students to effectively present the results of their research practices, particularly for experimental research.

  7. SCIENTIFIC & COMPUTATIONAL CHALLENGES OF THE FUSION SIMULATION PROJECT (FSP)

    E-Print Network [OSTI]

SCIENTIFIC & COMPUTATIONAL CHALLENGES OF THE FUSION SIMULATION PROJECT (FSP), SciDAC 2008 CONFERENCE ... of the Scientific and Technological Feasibility of Fusion Power · ITER is a truly dramatic step. For the first time ... used in ITER will be the same as those required in a power plant, but additional R&D will be needed ...

  8. National Energy Research Scientific Computing Center 2007 Annual Report

    SciTech Connect (OSTI)

    Hules, John A.; Bashor, Jon; Wang, Ucilia; Yarris, Lynn; Preuss, Paul

    2008-10-23T23:59:59.000Z

This report presents highlights of the research conducted on NERSC computers in a variety of scientific disciplines during the year 2007. It also reports on changes and upgrades to NERSC's systems and services, as well as activities of NERSC staff.

  9. Some applications of pipelining techniques in parallel scientific computing 

    E-Print Network [OSTI]

    Deng, Yuanhua

    1996-01-01T23:59:59.000Z

    In this thesis, we study the applicability of pipelining techniques to the development of parallel algorithms for scientific computation. General principles for pipelining techniques are discussed and two applications, Gram-Schmidt orthogonalization...

  11. Energy Department Seeks Proposals to Use Scientific Computing...

    Energy Savers [EERE]

    DOE's missions," said Secretary Bodman. "This program opens up the world of high-performance computing to a broad array of scientific users. Through the use of these advanced...

  12. Introduction to Scientific Computing, Part I C. David Sherrill

    E-Print Network [OSTI]

    Sherrill, David

programs (Photoshop); Web browsers; games. Scientific Computing: complex programs (10^6 lines, perhaps) ... Instruction: an elementary, low-level command that the CPU understands. Each CPU has an "instruction set" ...

  13. Theoretical Tools for Large Scale Structure

    E-Print Network [OSTI]

    J. R. Bond; L. Kofman; D. Pogosyan; J. Wadsley

    1998-10-06T23:59:59.000Z

We review the main theoretical aspects of the structure formation paradigm which impinge upon wide angle surveys: the early universe generation of gravitational metric fluctuations from quantum noise in scalar inflaton fields; the well understood and computed linear regime of CMB anisotropy and large scale structure (LSS) generation; the weakly nonlinear regime, where higher order perturbation theory works well, and where the cosmic web picture operates, describing an interconnected LSS of clusters bridged by filaments, with membranes as the intrafilament webbing. Current CMB+LSS data favour the simplest inflation-based $\Lambda$CDM models, with a primordial spectral index within about 5% of scale invariant and $\Omega_\Lambda \approx 2/3$, similar to that inferred from SNIa observations, and with open CDM models strongly disfavoured. The attack on the nonlinear regime with a variety of N-body and gas codes is described, as are the excursion set and peak-patch semianalytic approaches to object collapse. The ingredients are mixed together in an illustrative gasdynamical simulation of dense supercluster formation.

  14. Program Management for Large Scale Engineering Programs

    E-Print Network [OSTI]

    Oehmen, Josef

    The goal of this whitepaper is to summarize the LAI research that applies to program management. The context of most of the research discussed in this whitepaper are large-scale engineering programs, particularly in the ...

  15. Scientific Computations section monthly report September 1993

    SciTech Connect (OSTI)

    Buckner, M.R.

    1993-11-01T23:59:59.000Z

This progress report covers computational work performed in the areas of thermal analysis, applied statistics, applied physics, and thermal hydraulics.

  16. Multicore Platforms for Scientific Computing: Cell BE and NVIDIA Tesla

    E-Print Network [OSTI]

    Acacio, Manuel

Multicore Platforms for Scientific Computing: Cell BE and NVIDIA Tesla. J. Fernández, M.E. Acacio ... Tesla computing solutions. The former is a recent heterogeneous chip-multiprocessor (CMP) architecture ... Keywords: multicore, Cell BE, NVIDIA Tesla, CUDA. 1 Introduction. Nowadays, multicore architectures are omnipresent ...

  17. DOE Office of Advanced Scientific Computing Research

    E-Print Network [OSTI]

Facilities: Leadership Computing; National Energy Research Scientific Computing Center (NERSC); High ... 1. Energy efficiency: Creating more energy-efficient circuit, power, and cooling technologies. 2. Interconnect technology: Increasing the performance and energy efficiency of data movement. 3. Memory ...

  18. Large Scale Periodicity in Redshift Distribution

    E-Print Network [OSTI]

    K. Bajan; M. Biernacka; P. Flin; W. Godlowski; V. Pervushin; A. Zorin

    2004-08-30T23:59:59.000Z

We review previous studies of the discretisation of galaxy and quasar redshifts. We also present investigations of the large-scale periodicity detected by pencil-beam observations, which revealed a 128 (1/h) Mpc period, afterwards confirmed with supercluster studies. We present the theoretical possibility of obtaining such a periodicity using a toy model. We solved the Kepler problem, i.e. the equation of motion of a particle with null energy moving in the uniform, expanding Universe, described by the FLRW metric. It is possible to obtain theoretically a separation between large-scale structures similar to the observed one.

  19. Power-aware applications for scientific cluster and distributed computing

    E-Print Network [OSTI]

    Abdurachmanov, David; Eulisse, Giulio; Grosso, Paola; Hillegas, Curtis; Holzman, Burt; Klous, Sander; Knight, Robert; Muzaffar, Shahzad

    2014-01-01T23:59:59.000Z

    The aggregate power use of computing hardware is an important cost factor in scientific cluster and distributed computing systems. The Worldwide LHC Computing Grid (WLCG) is a major example of such a distributed computing system, used primarily for high throughput computing (HTC) applications. It has a computing capacity and power consumption rivaling that of the largest supercomputers. The computing capacity required from this system is also expected to grow over the next decade. Optimizing the power utilization and cost of such systems is thus of great interest. A number of trends currently underway will provide new opportunities for power-aware optimizations. We discuss how power-aware software applications and scheduling might be used to reduce power consumption, both as autonomous entities and as part of a (globally) distributed system. As concrete examples of computing centers we provide information on the large HEP-focused Tier-1 at FNAL, and the Tigress High Performance Computing Center at Princeton U...

  20. A Topological Framework for the Interactive Exploration of Large Scale Turbulent Combustion

    E-Print Network [OSTI]

    Knowles, David William

A Topological Framework for the Interactive Exploration of Large Scale Turbulent Combustion ... a new topological framework for the analysis of large scale, time-varying, turbulent combustion ... consumption thresholds for an entire time-dependent combustion simulation. By computing augmented merge ...

  1. Towards Automatic Incorporation of Search engines into a Large-Scale Metasearch Engine

    E-Print Network [OSTI]

    Meng, Weiyi

Towards Automatic Incorporation of Search Engines into a Large-Scale Metasearch Engine. Zonghuan Wu ... Dept. of Computer Science, Univ. of Illinois at Chicago, yu@cs.uic.edu. Abstract: A metasearch engine supports unified access to multiple component search engines. To build a very large-scale metasearch engine that can ...

  2. GCP: Gossip-based Code Propagation for Large-scale Mobile Wireless Sensor Network

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

GCP: Gossip-based Code Propagation for Large-scale Mobile Wireless Sensor Networks. Yann Busnel ... -Marie Kermarrec. Extended abstract. Wireless sensor networks (WSN) are in a plentiful expansion. They are expected ... transmission. Keywords: wireless sensor network, mobile computing, large scale, diffusion, software update.

  3. Scientific computations section monthly report, November 1993

    SciTech Connect (OSTI)

    Buckner, M.R.

    1993-12-30T23:59:59.000Z

    This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

  4. Large-Scale Manifold Learning Ameet Talwalkar

    E-Print Network [OSTI]

    California at Irvine, University of

Large-Scale Manifold Learning. Ameet Talwalkar, Courant Institute, New York, NY ... on spectral decomposition, we first analyze two approximate spectral decomposition techniques for large ... -dimensional embeddings for two large face datasets: CMU-PIE (35 thousand faces) and a web dataset (18 million faces). Our ...

  5. Network Coding for Large Scale Content Distribution

    E-Print Network [OSTI]

    Keinan, Alon

Network Coding for Large Scale Content Distribution. IEEE Infocom 2005. Christos Gkantsidis ... We propose a new scheme for content distribution of large files that is based on network coding. With network coding, each node of the distribution network is able to generate and transmit encoded blocks ...

  6. Can Cloud Computing Address the Scientific Computing Requirements...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    for meeting the ever-increasing computational needs of scientists, Department of Energy researchers have issued a report stating that the cloud computing model is useful, but...

  7. Materials Science and Materials Chemistry for Large Scale Electrochemi...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Materials Science and Materials Chemistry for Large Scale Electrochemical Energy Storage: From Transportation to Electrical Grid

  8. Overcoming the Barrier to Achieving Large-Scale Production -...

    Office of Environmental Management (EM)

Overcoming the Barrier to Achieving Large-Scale Production - A Case Study. This presentation summarizes the ...

  9. Overcoming the Barrier to Achieving Large-Scale Production -...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

Overcoming the Barriers to Achieving Large-Scale Production - A Case Study (Semprius). From concept to large-scale production, one manufacturer tells the story and ...

  10. TRACE-PENALTY MINIMIZATION FOR LARGE-SCALE ...

    E-Print Network [OSTI]

    2014-02-07T23:59:59.000Z

AMS subject classification. ... Scientific Discovery through Advanced Computing (SciDAC) program funded by U.S. Department of Energy, Office of Science, Advanced Scientific Computing ... ScaLAPACK [3] library for distributed-memory parallel computers, the parallel efficiency of ...

  11. MA50177: Scientific Computing Nuclear Reactor Simulation Generalised Eigenvalue Problems

    E-Print Network [OSTI]

    Wirosoetisno, Djoko

MA50177: Scientific Computing. Case Study: Nuclear Reactor Simulation ­ Generalised Eigenvalue Problems ... of a malfunction or of an accident experimentally, the numerical simulation of nuclear reactors is of utmost ... balance in a nuclear reactor are the two-group neutron diffusion equations: -div(K_1 grad u_1) + (Sigma_{a,1} + Sigma_s) u_1 = ...

  12. Supporting Advanced Scientific Computing Research Basic Energy Sciences Biological

    E-Print Network [OSTI]

Supporting Advanced Scientific Computing Research · Basic Energy Sciences · Biological and Environmental Research · Fusion Energy Sciences · High Energy Physics · Nuclear Physics ... Code: http://code.google.com/p/net-almanac/ ­ Beta release this week. Contact Information: Jon Dugan ...

  13. A Generic Grid Interface for Parallel and Adaptive Scientific Computing.

    E-Print Network [OSTI]

    Kornhuber, Ralf

A Generic Grid Interface for Parallel and Adaptive Scientific Computing. Part I: Abstract Framework ... definition of a grid for algorithms solving partial differential equations. Unlike previous approaches [2, 3], our grids have a hierarchical structure. This makes them suitable for geometric multigrid ...

  14. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Willcox, Karen [MIT] [MIT; Marzouk, Youssef [MIT] [MIT

    2013-11-12T23:59:59.000Z

The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
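The "reduce then sample" idea can be illustrated on a toy 1-D inverse problem. This is a hedged sketch, not the SAGUARO codes: `full_model`, `reduced_model`, and the Metropolis settings are all invented for illustration; the point is only that sampling runs against the cheap surrogate instead of the expensive forward model.

```python
import math
import random

def full_model(theta):
    """Stand-in for an expensive forward simulation."""
    return theta + 0.2 * math.sin(theta)

def reduced_model(theta):
    """Cheap surrogate, evaluated thousands of times in place of full_model."""
    return theta  # crude reduced-order approximation

def log_post(theta, y, model, sigma=0.1):
    # Gaussian likelihood with a flat prior
    return -0.5 * ((y - model(theta)) / sigma) ** 2

def metropolis(y, model, n=3000, step=0.3, seed=0):
    """Random-walk Metropolis targeting the (surrogate) posterior."""
    rng = random.Random(seed)
    theta, lp = 0.0, log_post(0.0, y, model)
    chain = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, y, model)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

y_obs = full_model(0.5)                        # synthetic observation
chain = metropolis(y_obs, reduced_model)       # sample the surrogate posterior
mean = sum(chain[1000:]) / len(chain[1000:])   # posterior mean after burn-in
```

Every likelihood evaluation here costs one `reduced_model` call; replacing it with `full_model` would recover the exact posterior at full simulation cost, which is the trade-off the abstract describes.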

  15. Ikarus: Large-Scale Participatory Sensing at High Altitudes Michael von Kaenel, Philipp Sommer, and Roger Wattenhofer

    E-Print Network [OSTI]

    Ikarus: Large-Scale Participatory Sensing at High Altitudes Michael von Kaenel, Philipp Sommer, and Roger Wattenhofer Computer Engineering and Networks Laboratory ETH Zurich, Switzerland {vkaenemi,sommer

  16. Instruction-Level Characterization of Scientific Computing Applications Using Hardware Performance Counters

    SciTech Connect (OSTI)

    Luo, Y.; Cameron, K.W.

    1998-11-24T23:59:59.000Z

    Workload characterization has been proven an essential tool to architecture design and performance evaluation in both scientific and commercial computing areas. Traditional workload characterization techniques include FLOPS rate, cache miss ratios, CPI (cycles per instruction or IPC, instructions per cycle) etc. With the complexity of sophisticated modern superscalar microprocessors, these traditional characterization techniques are not powerful enough to pinpoint the performance bottleneck of an application on a specific microprocessor. They are also incapable of immediately demonstrating the potential performance benefit of any architectural or functional improvement in a new processor design. To solve these problems, many people rely on simulators, which have substantial constraints especially on large-scale scientific computing applications. This paper presents a new technique of characterizing applications at the instruction level using hardware performance counters. It has the advantage of collecting instruction-level characteristics in a few runs virtually without overhead or slowdown. A variety of instruction counts can be utilized to calculate some average abstract workload parameters corresponding to microprocessor pipelines or functional units. Based on the microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. In particular, the analysis results can provide some insight to the problem that only a small percentage of processor peak performance can be achieved even for many very cache-friendly codes. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. Eventually, these abstract parameters can lead to the creation of an analytical microprocessor pipeline model and memory hierarchy model.
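The counter-derived characterization described above can be sketched in a few lines. This is a hypothetical example: the counter names and totals are invented, and a real tool would read them from hardware (e.g. via a counter library) per run; only the arithmetic of deriving CPI/IPC and the instruction mix is shown.

```python
def characterize(counters):
    """Derive CPI/IPC and a per-class instruction mix from raw counter totals."""
    cycles, instrs = counters["cycles"], counters["instructions"]
    # Fraction of all retired instructions in each functional-unit class
    mix = {cls: counters[cls] / instrs
           for cls in ("fp_ops", "loads", "stores", "branches")
           if cls in counters}
    return {"cpi": cycles / instrs, "ipc": instrs / cycles, "mix": mix}

profile = characterize({
    "cycles": 2_000_000, "instructions": 1_000_000,
    "fp_ops": 250_000, "loads": 300_000,
    "stores": 100_000, "branches": 150_000,
})
print(profile["cpi"], profile["mix"]["fp_ops"])  # -> 2.0 0.25
```

Comparing such abstract parameters against a processor's issue widths and functional-unit counts is what lets the paper's approach locate bottlenecks without simulation.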

  17. Model-constrained optimization methods for reduction of parameterized large-scale systems

    E-Print Network [OSTI]

    Bui-Thanh, Tan

    2007-01-01T23:59:59.000Z

    Most model reduction techniques employ a projection framework that utilizes a reduced-space basis. The basis is usually formed as the span of a set of solutions of the large-scale system, which are computed for selected ...

  19. The Potential of the Cell Processor for Scientific Computing

    SciTech Connect (OSTI)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14T23:59:59.000Z

The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  20. 1 National Roadmap Committee for Large-Scale Research Facilities the netherlands' roadmap for large-scale research facilities

    E-Print Network [OSTI]

    Horn, David

National Roadmap Committee for Large-Scale Research Facilities: the Netherlands' roadmap for large-scale research facilities (... by Roselinde Supheert). The Netherlands ...

  1. Large-Scale PV Integration Study

    SciTech Connect (OSTI)

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29T23:59:59.000Z

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  2. Autonomie Large Scale Deployment | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  3. Power-aware applications for scientific cluster and distributed computing

    E-Print Network [OSTI]

    David Abdurachmanov; Peter Elmer; Giulio Eulisse; Paola Grosso; Curtis Hillegas; Burt Holzman; Ruben L. Janssen; Sander Klous; Robert Knight; Shahzad Muzaffar

    2014-10-22T23:59:59.000Z

    The aggregate power use of computing hardware is an important cost factor in scientific cluster and distributed computing systems. The Worldwide LHC Computing Grid (WLCG) is a major example of such a distributed computing system, used primarily for high throughput computing (HTC) applications. It has a computing capacity and power consumption rivaling that of the largest supercomputers. The computing capacity required from this system is also expected to grow over the next decade. Optimizing the power utilization and cost of such systems is thus of great interest. A number of trends currently underway will provide new opportunities for power-aware optimizations. We discuss how power-aware software applications and scheduling might be used to reduce power consumption, both as autonomous entities and as part of a (globally) distributed system. As concrete examples of computing centers we provide information on the large HEP-focused Tier-1 at FNAL, and the Tigress High Performance Computing Center at Princeton University, which provides HPC resources in a university context.
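One power-aware scheduling idea of the kind the abstract alludes to can be sketched as follows. This is a hedged illustration, not WLCG or FNAL software: the frequency/power settings and the linear runtime model are invented assumptions; the sketch only shows choosing, per job, the operating point that minimizes energy = power x runtime.

```python
def best_setting(work, settings):
    """Pick the (freq_ghz, watts) setting minimizing energy for a job.

    work: abstract job size; runtime is modeled as work / freq (an assumption).
    settings: list of (freq_ghz, watts) operating points for a node.
    """
    def energy(setting):
        freq, watts = setting
        return watts * (work / freq)  # joules = watts * seconds
    return min(settings, key=energy)

# Hypothetical operating points: low frequency is slow but frugal,
# high frequency is fast but disproportionately power-hungry.
settings = [(1.2, 60.0), (2.0, 95.0), (3.0, 160.0)]
print(best_setting(3600.0, settings))  # -> (2.0, 95.0)
```

For the numbers above the middle setting wins: running faster saves time but the superlinear power growth costs more total energy, while the slowest setting stretches runtime enough to erase its power savings.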

  4. Matrix Computations & Scientific Computing Seminar Organizer: James Demmel & Ming Gu

    E-Print Network [OSTI]

    California at Berkeley, University of

Large-scale Eigenvalue Problems in Nuclei Structure Calculation. One of the emerging computational approaches in nuclear physics is the configuration interaction (CI) method for solving the nuclear many-body problem. Like other ... for achieving good performance in nuclear CI calculations.

  5. Recovery Act - Large Scale SWNT Purification and Solubilization

    SciTech Connect (OSTI)

    Michael Gemano; Dr. Linda B. McGown

    2010-10-07T23:59:59.000Z

    The goal of this Phase I project was to establish a quantitative foundation for development of binary G-gels for large-scale, commercial processing of SWNTs and to develop scientific insight into the underlying mechanisms of solubilization, selectivity and alignment. In order to accomplish this, we performed systematic studies to determine the effects of G-gel composition and experimental conditions that will enable us to achieve our goals that include (1) preparation of ultra-high purity SWNTs from low-quality, commercial SWNT starting materials, (2) separation of MWNTs from SWNTs, (3) bulk, non-destructive solubilization of individual SWNTs in aqueous solution at high concentrations (10-100 mg/mL) without sonication or centrifugation, (4) tunable enrichment of subpopulations of the SWNTs based on metallic vs. semiconductor properties, diameter, or chirality and (5) alignment of individual SWNTs.

  6. Capacitor placement and real time control in large-scale unbalanced distribution systems: Numerical studies

    SciTech Connect (OSTI)

    Wang, J.C.; Chiang, H.D.; Miu, K.N. [Cornell Univ., Ithaca, NY (United States). School of Electrical Engineering; Darling, G. [NYSEG Corp., Binghamton, NY (United States). Distribution System Dept.

    1997-04-01T23:59:59.000Z

    A novel solution algorithm for capacitor placement and real-time control in real large-scale unbalanced distribution systems is evaluated and implemented to determine the number, locations, sizes, types and control schemes of capacitors to be placed on large-scale unbalanced distribution systems. A detailed numerical study regarding the solution algorithm in large scale unbalanced distribution systems is undertaken. Promising numerical results on both 292 bus and 394 bus real unbalanced distribution systems containing unbalanced loads and phasing and various types of transformers are presented. The computational performance for the capacitor control problem under load variations is encouraging.

  7. Parallel Index and Query for Large Scale Data Analysis

    SciTech Connect (OSTI)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18T23:59:59.000Z

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
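    The binned-bitmap idea behind FastBit-style indexing can be sketched in a few lines. This is a toy illustration of the technique (precomputed per-bin bitmaps, OR-ed for a range query, then an exact check of only the candidate rows), not the FastQuery/FastBit API:

```python
import numpy as np

def build_bitmap_index(values, bin_edges):
    # One boolean "bitmap" per bin: a range query then reduces to
    # OR-ing a few precomputed bitmaps instead of rescanning raw data.
    bin_ids = np.digitize(values, bin_edges)
    return {b: bin_ids == b for b in np.unique(bin_ids)}

def range_query(index, values, lo, hi, bin_edges):
    # Coarse candidate selection via bitmaps, then an exact check of
    # only the candidate rows (the two boundary bins are inexact).
    lo_bin, hi_bin = np.digitize([lo, hi], bin_edges)
    candidates = np.zeros(len(values), dtype=bool)
    for b, bitmap in index.items():
        if lo_bin <= b <= hi_bin:
            candidates |= bitmap
    rows = np.flatnonzero(candidates)
    return rows[(values[rows] >= lo) & (values[rows] <= hi)]
```

    The payoff in a real system comes from compressing the bitmaps and answering queries without touching most of the raw data.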

  8. Can Cloud Computing Address the Scientific Computing Requirements for DOE

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  9. Large-scale simulations of reionization

    SciTech Connect (OSTI)

    Kohler, Katharina; /JILA, Boulder /Fermilab; Gnedin, Nickolay Y.; /Fermilab; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01T23:59:59.000Z

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280h{sup -1} Mpc with 10h{sup -1} Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-{alpha} forest.
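    The clumping-factor approach can be made concrete: a small-scale simulation supplies C = <n^2>/<n>^2 per coarse cell, which then multiplies the mean-density recombination rate in the coarse radiative-transfer step. A minimal sketch (hypothetical density field, not the paper's code):

```python
import numpy as np

def clumping_factor(density):
    # C = <n^2> / <n>^2: exactly 1 for a uniform field, > 1 when clumpy.
    # The mean recombination rate in a coarse cell scales as C * <n>^2,
    # so C carries the sub-cell structure into the coarse calculation.
    n = np.asarray(density, dtype=float)
    return float(np.mean(n**2) / np.mean(n)**2)
```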

  10. Large-Scale Renewable Energy Projects (Larger than 10 MWs) |...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Large-Scale Renewable Energy Projects (Larger than 10 MWs). Renewable energy projects larger than 10 megawatts (MW) are...

  11. BLM and Forest Service Consider Large-Scale Geothermal Leasing...

    Energy Savers [EERE]

    BLM and Forest Service Consider Large-Scale Geothermal Leasing. June 18, 2008 - 4:29pm. In an effort to encourage...

  12. FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy...

    Office of Environmental Management (EM)

    FEMP Helps Federal Facilities Develop Large-Scale Renewable Energy Projects. August 21, 2013 - 12:00am...

  13. Building a Large Scale Climate Data System in Support of HPC Environment

    SciTech Connect (OSTI)

    Wang, Feiyi [ORNL] [ORNL; Harney, John F [ORNL] [ORNL; Shipman, Galen M [ORNL] [ORNL

    2011-01-01T23:59:59.000Z

    The Earth System Grid Federation (ESG) is a large scale, multi-institutional, interdisciplinary project that aims to provide climate scientists and impact policy makers worldwide a web-based and client-based platform to publish, disseminate, compare and analyze ever increasing climate related data. This paper describes our practical experiences on the design, development and operation of such a system. In particular, we focus on the support of the data lifecycle from a high performance computing (HPC) perspective that is critical to the end-to-end scientific discovery process. We discuss three subjects that interconnect the consumer and producer of scientific datasets: (1) the motivations, complexities and solutions of deep storage access and sharing in a tightly controlled environment; (2) the importance of scalable and flexible data publication/population; and (3) high performance indexing and search of data with geospatial properties. These perceived corner issues collectively contributed to the overall user experience and proved to be as important as any other architectural design considerations. Although the requirements and challenges are rooted and discussed from a climate science domain context, we believe the architectural problems, ideas and solutions discussed in this paper are generally useful and applicable in a larger scope.

  14. Planning under uncertainty solving large-scale stochastic linear programs

    SciTech Connect (OSTI)

    Infanger, G. (Stanford Univ., CA (United States). Dept. of Operations Research Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft)

    1992-12-01T23:59:59.000Z

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but up to recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
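    The Monte Carlo sampling idea can be illustrated on a toy two-stage problem with recourse (a newsvendor-style model chosen here for brevity; the paper combines sampling with Benders-style decomposition on genuinely large LPs, and all parameter values below are illustrative assumptions):

```python
import random

def recourse_cost(x, demand, price=5.0, salvage=1.0, cost=3.0):
    # Second stage: sell min(x, demand) at `price`, salvage leftovers.
    sold = min(x, demand)
    return cost * x - price * sold - salvage * (x - sold)

def saa_solve(candidates, n_samples=20000, seed=0):
    # Sample-average approximation: replace the expectation over the
    # random demand with a Monte Carlo average over a fixed sample set,
    # then pick the first-stage decision with the lowest average cost.
    rng = random.Random(seed)
    demands = [max(rng.gauss(100, 20), 0.0) for _ in range(n_samples)]
    def avg_cost(x):
        return sum(recourse_cost(x, d) for d in demands) / len(demands)
    return min(candidates, key=avg_cost)
```

    Using one common sample set for every candidate decision (common random numbers) is what makes the comparison between first-stage choices stable.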

  15. Molecular Science Computing Facility Scientific Challenges: Linking Across Scales

    SciTech Connect (OSTI)

    De Jong, Wibe A.; Windus, Theresa L.

    2005-07-01T23:59:59.000Z

    The purpose of this document is to define the evolving science drivers for performing environmental molecular research at the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) and to provide guidance associated with the next-generation high-performance computing center that must be developed at EMSL's Molecular Science Computing Facility (MSCF) in order to address this critical research. The MSCF is the pre-eminent computing facility, supported by the U.S. Department of Energy's (DOE's) Office of Biological and Environmental Research (BER), tailored to provide the fastest time-to-solution for current computational challenges in chemistry and biology, as well as providing the means for broad research in the molecular and environmental sciences. The MSCF provides integral resources and expertise to emerging EMSL Scientific Grand Challenges and Collaborative Access Teams that are designed to leverage the multiple integrated research capabilities of EMSL, thereby creating a synergy between computation and experiment to address environmental molecular science challenges critical to DOE and the nation.

  16. Large-scale simulations of fluctuating biological membranes Andrea Pasqua,1,a

    E-Print Network [OSTI]

    Oster, George

    in their computational demands, these approaches are still limited in the scope of fluctuations and response they can feasibly capture. Extending computer simulations to examine large-scale behaviors such as aggregation ... response to a prodding nanorod. © 2010 American Institute of Physics. doi:10.1063/1.3382349

  17. Decomposition Methods for Large Scale LP Decoding

    E-Print Network [OSTI]

    2012-04-02T23:59:59.000Z

    that have extreme reliability requirements. While suitably ...... parity-check codes. Electronic Notes in Theoretical Computer Science, 74(0):97–104, 2003.

  18. ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK

    SciTech Connect (OSTI)

    Lee, S; Richard Dimenna, R; Robert Leishear, R; David Stefanko, D

    2007-03-28T23:59:59.000Z

    Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.
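    The velocity-criterion idea can be illustrated with the textbook far-field decay law for a turbulent round jet, u_c(x) ≈ B·u0·d/x. The constant B ≈ 6 and the critical suspension velocity below are illustrative assumptions, not the correlation actually used for Tank 18:

```python
def jet_centerline_velocity(u0, d, x, B=6.0):
    # Self-similar far-field decay of a turbulent round jet:
    # u_c ~ B * u0 * d / x beyond the potential core (~B diameters);
    # inside the core the centerline velocity is still ~u0.
    if x <= B * d:
        return u0
    return B * u0 * d / x

def max_cleaning_radius(u0, d, u_crit, B=6.0):
    # Largest distance at which the jet still meets a critical
    # suspension velocity u_crit (the "velocity criterion").
    return B * u0 * d / u_crit
```

    Inverting the decay law this way is how a single-point velocity criterion turns CFD (or measured) jet velocities into a predicted sludge-clearing footprint.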

  19. Strategies to Finance Large-Scale Deployment of Renewable Energy...

    Open Energy Info (EERE)

    Strategies to Finance Large-Scale Deployment of Renewable Energy Projects: An Economic Development and Infrastructure Approach. Tool Summary: LAUNCH TOOL...

  20. Efficient random coordinate descent algorithms for large-scale ...

    E-Print Network [OSTI]

    2013-05-04T23:59:59.000Z

    Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization. Andrei Patrascu · Ion Necoara.

  1. Optimization Online - Large-Scale Linear Programming Techniques ...

    E-Print Network [OSTI]

    Michael Wagner

    2002-02-12T23:59:59.000Z

    Feb 12, 2002 ... Large-Scale Linear Programming Techniques for the Design of Protein Folding Potentials. Michael Wagner (mwagner ***at*** odu.edu)

  2. ORNL, CINCINNATI partner to develop commercial large-scale additive...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ORNL, CINCINNATI partner to develop commercial large-scale additive manufacturing system. (From left) David Danielson, the Energy Department's...

  3. Advancing Cellulosic Ethanol for Large Scale Sustainable Transportation

    E-Print Network [OSTI]

    Wyman, C

    2007-01-01T23:59:59.000Z

    Advancing Cellulosic Ethanol for Large Scale Sustainable Transportation. By Lee Lynd, Dartmouth. Ethanol, ethyl alcohol, fermentation ethanol, or just “...

  4. Large Scale GSHP as Alternative Energy for American Farmers Geothermal...

    Open Energy Info (EERE)

    Large Scale GSHP as Alternative Energy for American Farmers Geothermal Project. Last modified on July 22, 2011.

  5. ORNL demonstrates first large-scale graphene fabrication | ornl...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ORNL demonstrates first large-scale graphene composite fabrication. ORNL's ultrastrong graphene features layers of graphene and polymers and is...

  6. Optimization Online - A fictitious play approach to large-scale ...

    E-Print Network [OSTI]

    Theodore Lambert

    2004-08-01T23:59:59.000Z

    Aug 1, 2004 ... A fictitious play approach to large-scale optimization. Theodore Lambert (tlambert ***at*** tmcc.edu) Marina A. Epelman (mepelman ***at*** ...

  7. Effects of Volcanism, Crustal Thickness, and Large Scale Faulting...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Effects of Volcanism, Crustal Thickness, and Large Scale Faulting on the Development and Evolution of Geothermal Systems: Collaborative Project in Chile.

  8. Solving large scale polynomial convex problems on \\ell_1/nuclear ...

    E-Print Network [OSTI]

    Aharon Ben-Tal

    2012-10-24T23:59:59.000Z

    Oct 24, 2012 ... Solving large scale polynomial convex problems on \\ell_1/nuclear norm balls by randomized first-order algorithms. Aharon Ben-Tal (abental ...

  9. Training a Large Scale Classifier with the Quantum Adiabatic Algorithm

    E-Print Network [OSTI]

    Hartmut Neven; Vasil S. Denchev; Geordie Rose; William G. Macready

    2009-12-04T23:59:59.000Z

    In a previous publication we proposed discrete global optimization as a method to train a strong binary classifier constructed as a thresholded sum over weak classifiers. Our motivation was to cast the training of a classifier into a format amenable to solution by the quantum adiabatic algorithm. Applying adiabatic quantum computing (AQC) promises to yield solutions that are superior to those which can be achieved with classical heuristic solvers. Interestingly we found that by using heuristic solvers to obtain approximate solutions we could already gain an advantage over the standard method AdaBoost. In this communication we generalize the baseline method to large scale classifier training. By large scale we mean that either the cardinality of the dictionary of candidate weak classifiers or the number of weak learners used in the strong classifier exceed the number of variables that can be handled effectively in a single global optimization. For such situations we propose an iterative and piecewise approach in which a subset of weak classifiers is selected in each iteration via global optimization. The strong classifier is then constructed by concatenating the subsets of weak classifiers. We show in numerical studies that the generalized method again successfully competes with AdaBoost. We also provide theoretical arguments as to why the proposed optimization method, which does not only minimize the empirical loss but also adds L0-norm regularization, is superior to versions of boosting that only minimize the empirical loss. By conducting a Quantum Monte Carlo simulation we gather evidence that the quantum adiabatic algorithm is able to handle a generic training problem efficiently.
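    The training formulation can be sketched as a QUBO: with weak-classifier outputs H in {-1, +1} and binary weights w, minimizing ||Hw - y||^2 + λ||w||_0 expands into a quadratic objective in binary variables, the form an adiabatic solver accepts. A brute-force toy with hypothetical data (at scale, a heuristic or quantum solver replaces the enumeration):

```python
import itertools
import numpy as np

def qubo_train(H, y, lam):
    # Choose binary weights w over weak classifiers by minimizing
    # ||H @ w - y||^2 + lam * ||w||_0. Expanding the square gives a
    # quadratic (QUBO) objective in w; here we simply enumerate all
    # 2^n_weak assignments, which is only feasible for tiny n_weak.
    n_weak = H.shape[1]
    best_w, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n_weak):
        w = np.array(bits)
        e = np.sum((H @ w - y) ** 2) + lam * w.sum()
        if e < best_e:
            best_w, best_e = w, e
    return best_w
```

    The λ term is the L0 regularization the abstract credits for improved generalization over plain loss minimization.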

  10. Climate impacts of a large-scale biofuels expansion*

    E-Print Network [OSTI]

    Climate impacts of a large-scale biofuels expansion. Willow Hallgren, C. Adam Schlosser, Erwan Monier, David ... March 2013. A global biofuels program will potentially lead to intense pressures on land supply

  11. Measuring Similarity in Large-scale Folksonomies Giovanni Quattrone1

    E-Print Network [OSTI]

    Ferrara, Emilio

    Measuring Similarity in Large-scale Folksonomies. Giovanni Quattrone, Emilio Ferrara, Pasquale ... by power law distributions of tags, over which commonly used similarity metrics, including the Jaccard ... to capture similarity in large-scale folksonomies, that is based on a mutual reinforcement principle: that is
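    The Jaccard coefficient the abstract refers to, for two tagged resources, is |A ∩ B| / |A ∪ B| over their tag sets:

```python
def jaccard(tags_a, tags_b):
    # |A n B| / |A u B|: 1.0 for identical tag sets, 0.0 for disjoint.
    # Under heavy-tailed tag distributions, a few very popular tags can
    # dominate the intersection, which motivates alternative metrics.
    a, b = set(tags_a), set(tags_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```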

  12. Attack Containment Framework for Large-Scale Critical Infrastructures

    E-Print Network [OSTI]

    Nahrstedt, Klara

    Attack Containment Framework for Large-Scale Critical Infrastructures. Hoang Nguyen. We present an attack containment framework against value-changing attacks in large-scale critical infrastructures ... a structure, called attack container, which captures the trust behavior of a group of nodes and assists

  13. POWER SYSTEMS STABILITY WITH LARGE-SCALE WIND POWER PENETRATION

    E-Print Network [OSTI]

    Bak-Jensen, Birgitte

    of offshore wind farms, wind power fluctuations may introduce several challenges to reliable power system behaviour due to natural wind fluctuations. The rapid power fluctuations from the large-scale wind farms ... an Automatic Generation Control (AGC) system which includes large-scale wind farms for long-term stability simulation

  14. Large-Scale Eucalyptus Energy Farms and Power Cogeneration1

    E-Print Network [OSTI]

    Standiford, Richard B.

    Large-Scale Eucalyptus Energy Farms and Power Cogeneration. Robert C. Noronla. The initiation of a large-scale cogeneration project, especially one that combines construction of the power generation ... supplemental fuel source must be sought if the cogeneration facility will consume more fuel than

  15. Scientific Application Requirements for Leadership Computing at the Exascale

    SciTech Connect (OSTI)

    Ahern, Sean [ORNL; Alam, Sadaf R [ORNL; Fahey, Mark R [ORNL; Hartman-Baker, Rebecca J [ORNL; Barrett, Richard F [ORNL; Kendall, Ricky A [ORNL; Kothe, Douglas B [ORNL; Mills, Richard T [ORNL; Sankaran, Ramanan [ORNL; Tharrington, Arnold N [ORNL; White III, James B [ORNL

    2007-12-01T23:59:59.000Z

    The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10{sup 18} floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which beg further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, possess fault tolerance, exploit asynchronism, and are power-consumption aware. On the other hand, we must also provide application scientists with the ability to develop software without having to become experts in the computer science components. Numerical algorithms are scattered broadly across science domains, with no one particular algorithm being ubiquitous and no one algorithm going unused. Structured grids and dense linear algebra continue to dominate, but other algorithm categories will become more common. A significant increase is projected for Monte Carlo algorithms, unstructured grids, sparse linear algebra, and particle methods, and a relative decrease is foreseen in fast Fourier transforms. These projections reflect the expectation of much higher architecture concurrency and the resulting need for very high scalability. The new algorithm categories that application scientists expect to be increasingly important in the next decade include adaptive mesh refinement, implicit nonlinear systems, data assimilation, agent-based methods, parameter continuation, and optimization. The attributes of leadership computing systems expected to increase most in priority over the next decade are (in order of importance) interconnect bandwidth, memory bandwidth, mean time to interrupt, memory latency, and interconnect latency. The attributes expected to decrease most in relative priority are disk latency, archival storage capacity, disk bandwidth, wide area network bandwidth, and local storage capacity. These choices by application developers reflect the expected needs of applications or the expected reality of available hardware. One interpretation is that the increasing priorities reflect the desire to increase computational efficiency to take advantage of increasing peak flops (floating point operations per second), while the decreasing priorities reflect the expectation that computational efficiency will not increase. Per-core requirements appear to be relatively static, while aggregate requirements will grow with the system. This projection is consistent with a r

  16. Automating large-scale LEMUF calculations

    SciTech Connect (OSTI)

    Picard, R.R. (Los Alamos National Lab., Los Alamos, NM (US))

    1992-05-01T23:59:59.000Z

    To better understand material unaccounted for (MUFs) and, in some cases, to comply with formal regulatory requirements, many facilities are paying increasing attention to software for MUF evaluation. Activities related to improving understanding of MUFs are generic (including the identification, by name, of individual measured values and individual special nuclear material (SNM) items in a data base, and the handling of a wide variety of accounting problems) as well as facility-specific (including interfacing a facility's data base to a computational engine and subsequent uses of that engine). Los Alamos efforts to develop a practical engine are reviewed and some of the lessons learned during that development are described in this paper.

  17. A TWO-STAGE APPROACH TO SOLVING LARGE-SCALE OPTIMAL POWER FLOWS

    E-Print Network [OSTI]

    Gross, George

    Felix F. Wu, George Gross, James ... The problem is formulated as an unconstrained minimization problem using penalty functions and is solved ... and Sasson and Merrill [3]. The size and the extensive amount of computation involved in solving th

  18. Parallel Implementation of a Large-Scale 3-D Air Pollution Model

    E-Print Network [OSTI]

    Ostromsky, Tzvetan

    Tzvetan Ostromsky and Zahari ..., 4000 Roskilde, Denmark; zz@dmu.dk; http://www.dmu.dk/AtmosphericEnvironment. Abstract: Air pollution ... and analyzed. Keywords: air pollution model, system of PDEs, parallel algorithm, shared memory computer

  19. Development and Deployment of a Large-Scale Flower Recognition Mobile App

    E-Print Network [OSTI]

    engine and relies on computer vision recognition technology. The mobile phone app is available free. Anelia Angelova, NEC Labs. ...eration of user generated content, especially from mobile devices, there is a need to develop

  20. National Energy Research Scientific Computing Center 2007 Annual Report

    E-Print Network [OSTI]

    Hules, John A.

    2008-01-01T23:59:59.000Z

    and Directions in High Performance Computing for the Office ... in the evolution of high performance computing and networks.

  1. Stabilization of Large Scale Structure by Adhesive Gravitational Clustering

    E-Print Network [OSTI]

    Thomas Buchert

    1999-08-13T23:59:59.000Z

    The interplay between gravitational and dispersive forces in a multi-streamed medium leads to an effect which is exposed in the present note as the genuine driving force of stabilization of large-scale structure. The conception of `adhesive gravitational clustering' is advanced to interlock the fairly well-understood epoch of formation of large-scale structure and the onset of virialization into objects that are dynamically in equilibrium with their large-scale structure environment. The classical `adhesion model' is opposed to a class of more general models traced from the physical origin of adhesion in kinetic theory.
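    The classical adhesion model the note refers to evolves the velocity field by Burgers' equation, ∂_t u + u ∂_x u = ν ∂_x² u with small ν, so that streams stick ("adhere") where they cross instead of passing through each other. A minimal periodic finite-difference sketch with illustrative parameters (first-order upwind advection, assuming u > 0 everywhere; not the paper's formulation):

```python
import numpy as np

def burgers_step(u, dx, dt, nu):
    # du/dt = -u du/dx + nu d2u/dx2 on a periodic grid:
    # first-order upwind advection (valid for u > 0) plus explicit
    # diffusion, whose small nu plays the role of "adhesion".
    adv = u * (u - np.roll(u, 1)) / dx
    diff = nu * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u + dt * (diff - adv)

def evolve(u0, dx, dt, nu, steps):
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = burgers_step(u, dx, dt, nu)
    return u
```

    Starting from a smooth profile, the advection term steepens gradients toward a shock while the viscosity keeps the solution single-valued, which is exactly the stabilization mechanism the abstract describes.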

  2. Schema-Independent and Schema-Friendly Scientific Metadata Management1 Scott Jensen and Beth Plale

    E-Print Network [OSTI]

    Plale, Beth

    The activities provided to a scientist can include access to public data repositories, large-scale computational ... Computational science is creating a deluge of data, and being able to reuse this data requires detailed descriptive metadata. Scientific communities have developed detailed metadata schemas to describe data

  3. Exploration of large scale manufacturing of polydimethylsiloxane (PDMS) microfluidic devices

    E-Print Network [OSTI]

    Hum, Philip W. (Philip Wing-Jung)

    2006-01-01T23:59:59.000Z

    Discussion of the current manufacturing process of polydimethylsiloxane (PDMS) parts and the emergence of PDMS use in biomedical microfluidic devices addresses the need to develop large scale manufacturing processes for ...

  4. How Three Retail Buyers Source Large-Scale Solar Electricity

    Office of Energy Efficiency and Renewable Energy (EERE)

    Large-scale, non-utility solar power purchase agreements (PPAs) are still a rarity despite the growing popularity of PPAs across the country. In this webinar, participants will learn more about how...

  5. Parallel Stochastic Gradient Algorithms for Large-Scale Matrix ...

    E-Print Network [OSTI]

    2013-03-21T23:59:59.000Z

    parallel implementation that admits a speed-up nearly proportional to the ... On large-scale matrix completion tasks, Jellyfish is orders of magnitude more ... get a consistent build of NNLS with mex optimizations at the time of this submission.
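    The underlying computation can be sketched with a serial stochastic-gradient matrix-completion loop: factor M ≈ U Vᵀ and update one observed entry at a time. This generic sketch illustrates the objective, not Jellyfish's lock-free parallel scheduling; the problem sizes and hyperparameters are illustrative:

```python
import numpy as np

def sgd_complete(shape, observed, rank=2, lr=0.05, epochs=500, seed=0):
    # Fit M ~ U @ V.T from a dict {(i, j): value} of observed entries
    # by taking a stochastic gradient step of the squared error on one
    # entry at a time; only the touched row factors are updated, which
    # is what makes the method amenable to parallel, lock-free variants.
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((shape[0], rank))
    V = 0.1 * rng.standard_normal((shape[1], rank))
    for _ in range(epochs):
        for (i, j), m in observed.items():
            err = m - U[i] @ V[j]
            U[i], V[j] = U[i] + lr * err * V[j], V[j] + lr * err * U[i]
    return U, V
```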

  6. Interference management techniques in large-scale wireless networks 

    E-Print Network [OSTI]

    Luo, Yi

    2015-06-29T23:59:59.000Z

    In this thesis, advanced interference management techniques are designed and evaluated for large-scale wireless networks with realistic assumptions, such as signal propagation loss, random node distribution and ...

  7. Channel Meander Migration in Large-Scale Physical Model Study 

    E-Print Network [OSTI]

    Yeh, Po Hung

    2010-10-12T23:59:59.000Z

    A set of large-scale laboratory experiments were conducted to study channel meander migration. Factors affecting the migration of banklines, including the ratio of curvature to channel width, bend angle, and the Froude ...

  8. Chemical engineers design, control and optimize large-scale chemical,

    E-Print Network [OSTI]

    Rohs, Remo

    Chemical engineers design, control and optimize large-scale chemical, physicochemical ... Biochemical, Environmental, Petroleum Engineering and Nanotechnology. Chemical Engineering: Bachelor of Science, 131 units · Chemical Engineering (Petroleum): Bachelor of Science, 136 units.

  9. Infrastructure for large-scale tests in marine autonomy

    E-Print Network [OSTI]

    Hummel, Robert A. (Robert Andrew)

    2012-01-01T23:59:59.000Z

    This thesis focuses on the development of infrastructure for research with large-scale autonomous marine vehicle fleets and the design of sampling trajectories for compressive sensing (CS). The newly developed infrastructure ...

  10. Platforms and real options in large-scale engineering systems

    E-Print Network [OSTI]

    Kalligeros, Konstantinos C., 1976-

    2006-01-01T23:59:59.000Z

    This thesis introduces a framework and two methodologies that enable engineering management teams to assess the value of real options in programs of large-scale, partially standardized systems implemented a few times over ...

  11. Large-scale magnetic fields in the inflationary universe

    E-Print Network [OSTI]

    Kazuharu Bamba; Misao Sasaki

    2006-11-22T23:59:59.000Z

    The generation of large-scale magnetic fields is studied in inflationary cosmology. We consider the violation of the conformal invariance of the Maxwell field by dilatonic as well as non-minimal gravitational couplings. We derive a general formula for the spectrum of large-scale magnetic fields for a general form of the coupling term and the formula for the spectral index. The result tells us clearly the (necessary) condition for the generation of magnetic fields with sufficiently large amplitude.

  12. Streamflow forecasting for large-scale hydrologic systems

    E-Print Network [OSTI]

    Awwad, Haitham Munir

    1991-01-01T23:59:59.000Z

    Streamflow Forecasting for Large-Scale Hydrologic Systems. A Thesis by Haitham Munir Awwad, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1991. Major Subject: Civil Engineering. Approved as to style and content by: Juan B. Valdes (Chair of Committee), Ralph A. Wurbs (Member), Marshall J. Mc...

  13. Large Scale Computing and Storage Requirements for High Energy Physics

    E-Print Network [OSTI]

    Gerber, Richard A.

    2011-01-01T23:59:59.000Z

    Type Ia supernovae, gamma-ray bursts, X-ray bursts and corerelativistic jet, making a gamma-ray burst, the luminositythose that lead to gamma-ray bursts. The current frontier is

  14. Opportunistic Evolution: Efficient Evolutionary Computation on Large-Scale

    E-Print Network [OSTI]

    George Mason University

    , ironically, a hybrid of the island and distributed evaluation models, in combination with either ... We examine opportunistic evolution, a variation of master-slave distributed ... implementation of opportunistic evolution may be used in conjunction with either a generational or

  15. Large Scale Computing and Storage Requirements for Basic Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  16. Large Scale Computing and Storage Requirements for High Energy Physics

    E-Print Network [OSTI]

    Gerber, Richard A.

    2011-01-01T23:59:59.000Z

    second resulting from a thermonuclear explosion of materialresult from the thermonuclear burning of a carbon-oxygensensitive to how the thermonuclear runaway is ignited (

  17. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    E-Print Network [OSTI]

    Gerber, Richard A.

    2012-01-01T23:59:59.000Z

    ... in the process of thermonuclear incineration of their ... core-collapse and thermonuclear events to test predictions ... processes. In contrast to thermonuclear supernova modeling, ...

  18. Large Scale Production Computing and Storage Requirements for...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... research. Final Report (PDF version). Date and Location: April 29-30, 2014, Hyatt Regency Bethesda, One Bethesda Metro Center (7400 Wisconsin Ave), Bethesda, Maryland, USA 20814 ...

  19. Large Scale Computing and Storage Requirements for Nuclear Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... for Nuclear Physics: Target 2014. May 26-27, 2011, Hyatt Regency Bethesda, One Bethesda Metro Center (7400 Wisconsin Ave), Bethesda, Maryland, USA 20814. Final ...

  20. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    E-Print Network [OSTI]

    Gerber, Richard A.

    2012-01-01T23:59:59.000Z

    fusion, vortices in the crusts of neutron stars, and even dynamics in nonnuclear systems such as cold

  1. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    E-Print Network [OSTI]

    Gerber, Richard A.

    2012-01-01T23:59:59.000Z

    ... Physics Continuous Electron Beam Accelerator Facility, is ... as the Continuous Electron Beam Accelerator Facility (CEBAF) ...

  2. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    E-Print Network [OSTI]

    Gerber, Richard A.

    2012-01-01T23:59:59.000Z

    ... neutrino matrix. Neutrinoless double beta decay experiments, ... process called neutrinoless double beta decay in nuclei, ...

  3. Large Scale Parallel Computing and Scalability Study for Surface Combatant

    E-Print Network [OSTI]

    Yang, Jianming

    52242-1585 ... boundary method (IBM). For this case, V4 ... blended k-ε/k-ω (BKW), ... WF simulations on 615K ... deforming control volume [2]. The turbulence modeling is performed using BKW or anisotropic Reynolds stress (ARS) ...

  4. Large Scale Computing and Storage Requirements for Basic Energy...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Sciences: An BES/ASCR/NERSC Workshop, February 9-10, 2010 ... Workshop Logistics: workshop location, directions, and registration information are included here ...

  5. Large Scale Computing and Storage Requirements for High Energy Physics

    E-Print Network [OSTI]

    Gerber, Richard A.

    2011-01-01T23:59:59.000Z

    ... number modeling of Type Ia supernovae. I. Hydrodynamics. ... number modeling of Type Ia supernovae. II. Energy evolution. ... Mach number modeling of Type Ia supernovae. III. Reactions.

  6. Sandia Energy - Computational Fluid Dynamics & Large-Scale Uncertainty

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  7. Large Scale Computing and Storage Requirements for Biological and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  8. Large Scale Computing and Storage Requirements for Biological and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  9. Large Scale Computing and Storage Requirements for Fusion Energy Sciences

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  10. Large Scale Computing and Storage Requirements for Nuclear Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  11. Large Scale Production Computing and Storage Requirements for Nuclear

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  12. Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  13. International Scientific Conference Computer Science'2006 Building skills for the knowledge society

    E-Print Network [OSTI]

    Boyer, Edmond

    ... of highly-qualified staff, lack of new scientific equipment, etc. On the other side, a growing demand ... Building skills for the knowledge society, and the focus on ICT and e-business skills, innovation and knowledge management in organizations. It highlights ...

  14. A Taxonomy of Scientific Workflow Systems for Grid Computing Jia Yu and Rajkumar Buyya*

    E-Print Network [OSTI]

    Melbourne, University of

    Jia Yu and Rajkumar Buyya, Grid Computing and Distributed Systems (GRIDS) Laboratory, Department of Computer Science and Software Engineering ... on major functions and architectural styles of Grid workflow systems. In Section 3, we map the proposed ...

  15. Advanced Scientific Computing Research Funding Profile by Subprogram

    E-Print Network [OSTI]

    ... results in mathematics, high performance computing and advanced networks ... applications. High-performance computing provides a new window for researchers to observe the natural world ... in applied mathematics, computer science and high-performance networks and providing the high-performance ...

  16. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect (OSTI)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01T23:59:59.000Z

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
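    One of the analyses the abstract names, assessing tradeoffs between system criteria, amounts to finding the non-dominated solutions among candidate configurations. The sketch below is illustrative only (the function name and the two-criteria cost/risk setup are ours, not the report's): it filters a set of (cost, risk) pairs down to the Pareto front.

    ```python
    def pareto_front(points):
        """Return the non-dominated (cost, risk) pairs: a point survives
        unless some other point is no worse on both criteria."""
        front = []
        for p in points:
            dominated = any(q != p and q[0] <= p[0] and q[1] <= p[1]
                            for q in points)
            if not dominated:
                front.append(p)
        return front

    candidates = [(1, 9), (3, 3), (5, 5), (2, 7), (9, 1)]
    front = pareto_front(candidates)   # (5, 5) is dominated by (3, 3)
    ```

    Real HSoS analyses add many more criteria and uncertainty over the model parameters, but the dominance test generalizes directly.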

  17. SIMULATING LARGE-SCALE STRUCTURE FORMATION FOR BSI POWER SPECTRA

    E-Print Network [OSTI]

    V. Mueller

    1995-05-30T23:59:59.000Z

    A double inflationary model provides perturbation spectra with enhanced power at large scales (Broken Scale Invariant perturbations -- BSI), leading to a promising scenario for the formation of cosmic structures. We describe a series of high-resolution PM simulations with a model for the thermodynamic evolution of baryons in which we are capable of identifying 'galaxy' halos with a reasonable mass spectrum and following the genesis of large and super-large scale structures. The power spectra and correlation functions of 'galaxies' are compared with reconstructed power spectra of the CfA catalogue and the correlation functions of the Las Campanas Deep Redshift Survey.
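    For readers unfamiliar with the measurement the abstract compares against the CfA catalogue, a power spectrum of a density field is just the squared magnitude of its Fourier modes. A toy 1-D sketch of the estimator (our own illustration, not the paper's PM code), using a naive O(n²) DFT:

    ```python
    import cmath
    import math

    def power_spectrum(density):
        """Naive O(n^2) DFT power spectrum |d_k|^2 / n of a 1-D field."""
        n = len(density)
        return [abs(sum(density[j] * cmath.exp(-2j * math.pi * k * j / n)
                        for j in range(n))) ** 2 / n
                for k in range(n // 2 + 1)]

    # a single plane wave concentrates all its power in one wavenumber bin
    field = [math.cos(2 * math.pi * 3 * j / 64) for j in range(64)]
    spec = power_spectrum(field)
    peak = max(range(len(spec)), key=spec.__getitem__)
    ```

    BSI spectra of the kind the paper studies would show up here as extra power in the low-k bins relative to a scale-invariant spectrum.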

  18. Grid Computing in the Collider Detector at Fermilab (CDF) scientific experiment

    E-Print Network [OSTI]

    Douglas P. Benjamin

    2008-10-20T23:59:59.000Z

    The computing model for the Collider Detector at Fermilab (CDF) scientific experiment has evolved since the beginning of the experiment. Initially, CDF computing comprised dedicated resources located in computer farms around the world. With the widespread acceptance of grid computing in High Energy Physics, CDF computing has migrated to using grid computing extensively. CDF uses computing grids around the world. Each computing grid has required different solutions. The use of portals as interfaces to the collaboration's computing resources has proven to be an extremely useful technique, allowing CDF physicists to migrate transparently from using dedicated computer farms to using computing located in grid farms, often away from Fermilab. Grid computing at CDF continues to evolve as grid standards and practices change.

  19. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect (OSTI)

    William J. Schroeder

    2011-11-13T23:59:59.000Z

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory.
SLAC has a computationally-intensive problem important to the nation's scientific progress, as described shortly. Further, SLAC researchers routinely generate massive amounts of data, and frequently collaborate with other researchers located around the world. Thus SLAC is an ideal teammate through which to develop, test and deploy this technology. The nature of the datasets generated by simulations performed at SLAC presented unique visualization challenges, especially when dealing with higher-order elements, that were addressed during this Phase II. During this Phase II, we have developed a strong platform for collaborative visualization based on ParaView. We have developed and deployed a ParaView Web Visualization framework that can be used for effective collaboration over the Web. Collaborating and visualizing over the Web presents the community with unique opportunities for sharing and accessing visualization and HPC resources that hitherto were either inaccessible or difficult to use. The technology we developed here will alleviate both these issues as it becomes widely deployed and adopted.

  20. Parallel computing works

    SciTech Connect (OSTI)

    Not Available

    1991-10-23T23:59:59.000Z

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  1. International Scientific Conference Computer Science'2008 Near-Native Protein Folding

    E-Print Network [OSTI]

    Fidanova, Stefka

    Stefka Fidanova ... The protein folding problem is a fundamental problem in computational molecular biology. The high resolution 3 ... After that, the folding problem is defined as an optimization problem. Keywords: protein folding

  2. Cloud computing security: the scientific challenge, and a survey of solutions

    E-Print Network [OSTI]

    Ryan, Mark

    Mark D. Ryan, University of Birmingham, January 28, 2013. Abstract: We briefly survey issues in cloud computing security. The fact that data is shared with the cloud service provider is identified as the core scientific problem ...

  3. Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU

    E-Print Network [OSTI]

    Haque, Imran S

    2009-01-01T23:59:59.000Z

    Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our su...
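    MemtestG80's actual GPU test patterns are described in the paper; the sketch below only illustrates the general write-pattern/read-back idea behind such memory testers, exercised here on a host-side Python bytearray rather than device memory (the function names are ours):

    ```python
    def memtest_pass(buf, pattern):
        """Fill `buf` with a repeating byte pattern, read it back, and
        return offsets whose contents no longer match (error candidates)."""
        for i in range(len(buf)):
            buf[i] = pattern[i % len(pattern)]
        return [i for i in range(len(buf))
                if buf[i] != pattern[i % len(pattern)]]

    def walking_ones(buf):
        """Classic walking-ones sweep: one pass per bit position, so every
        byte is checked with each single bit set in turn."""
        errors = []
        for bit in range(8):
            errors += memtest_pass(buf, bytes([1 << bit]))
        return errors

    buf = bytearray(4096)        # stand-in for a region of card memory
    errors = walking_ones(buf)   # healthy host RAM: no mismatches
    ```

    On real hardware the interesting cases are soft errors that flip bits between the write and the read, which is why such tests are run continuously over large buffers.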

  4. Large Scale Simulation of Tor: Modelling a Global Passive Adversary

    E-Print Network [OSTI]

    Blott, Stephen

    Gavin O'Gorman and Stephen ... Implementing global passive adversary attacks on currently deployed low latency anonymous networks ... designs have been developed which attempt to apply mixes to low latency traffic. The most widely ...

  5. Large Scale Energy Storage: From Nanomaterials to Large Systems

    E-Print Network [OSTI]

    Fisher, Frank

    Wednesday, October 26, 2011, Babbio ... energy storage devices. Specifically, this talk discusses 1) the challenges for grid scale ... of emergent technologies with ultralow costs ... on new energy storage materials and mechanisms. Dr. Jun Liu ...

  6. No Large Scale Curvature Perturbations during Waterfall of Hybrid Inflation

    E-Print Network [OSTI]

    Ali Akbar Abolhasani; Hassan Firouzjahi

    2011-01-18T23:59:59.000Z

    In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the classical evolution of the system, we show that the highly blue-tilted entropy perturbations induce highly blue-tilted large scale curvature perturbations during the waterfall phase transition which dominate over the original adiabatic curvature perturbations. However, we show that the quantum back-reactions of the waterfall field inhomogeneities produced during the phase transition dominate completely over the classical back-reactions. The cumulative quantum back-reactions of very small scale tachyonic modes terminate inflation very efficiently and shut off the curvature perturbation evolution during the waterfall phase transition. This indicates that the standard hybrid inflation model is safe from large scale curvature perturbations during the waterfall phase transition.

  7. ORNL 2013-G00021/tcc Large Scale Graphene Production

    E-Print Network [OSTI]

    ORNL 2013-G00021/tcc, 02.2013. UT-B ID 201102606. Technology Summary: Graphene is an emerging one-atom-thick carbon material which has the potential for a wide range ... research, graphene has quickly attained the status of a wonder nanomaterial and continued to draw ...

  8. Seamlessly Integrating Software & Hardware Modelling for Large-Scale Systems

    E-Print Network [OSTI]

    Zhao, Yuxiao

    ... Engineering, with the mathematical modelling approach, Modelica, to address the software/hardware integration problem. The environment and hardware components are modelled in Modelica and integrated ... Keywords: software-hardware codesign, large-scale systems, Behavior Engineering, Modelica.

  9. Large Scale Spatial Augmented Reality for Design and Prototyping

    E-Print Network [OSTI]

    Thomas, Bruce

    Chapter 10. Michael R. Marner, Ross ... Augmented Reality allows the appearance of physical objects to be transformed using projected light ... commercial and personal use. This chapter explores how large Spatial Augmented Reality systems can be applied ...

  10. Determining Identifiable Parameterizations for Large-scale Physical Models in

    E-Print Network [OSTI]

    Van den Hof, Paul

    ... /Novem (Dutch Government). ISAPP (Integrated Systems Approach to Petroleum Production) is a joint project ... as applied in the field of petroleum reservoir engineering. Starting from a large-scale, physics-based model ... models in petroleum reservoir engineering. Petroleum reservoir engineering is concerned with maximizing ...

  11. Modeling emergent large-scale structures of barchan dune fields

    E-Print Network [OSTI]

    Claudin, Philippe

    S. Worman, A.B. Murray, R ... that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing ... for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements ...

  12. Large-Scale Linear Programming Techniques for the Design of ...

    E-Print Network [OSTI]

    2002-02-05T23:59:59.000Z

    Feb 5, 2002 ... We present large-scale optimization techniques to model the energy function that underlies the folding process of ... which we will refer to from now on, we get a system A^T y ? b ... Although we don't want to rule out that a so- ... What we believe is interesting in this context is that the results from ...

  13. Spatial Energy Balancing in Large-scale Wireless Multihop Networks

    E-Print Network [OSTI]

    de Veciana, Gustavo

    Seung Jun Baek and Gustavo de ... is on optimizing trade-offs between the energy cost of spreading traffic and the improved spatial balance of energy. We propose a parameterized family of energy balancing strategies for grids and approximate ...

  14. Materials Availability Expands the Opportunity for Large-Scale

    E-Print Network [OSTI]

    Kammen, Daniel M.

    Cyrus W ... of Chemistry, University of California, Berkeley, California 94720; Materials Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Department of Materials Science and Engineering ...

  15. On solving large scale polynomial convex problems by randomized ...

    E-Print Network [OSTI]

    2013-03-24T23:59:59.000Z

    Mar 24, 2013 ... We show that for large-scale problems with favourable geometry, this ... adjustable "aggressive" stepsize policy [8]; up to this policy, this is nothing but SMP with Pz ... building this representation is O(1)km2 a.o. We build this ...

  16. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect (OSTI)

    Bird, L.; Milligan, M.

    2012-06-01T23:59:59.000Z

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  17. Load Distribution in Large Scale Network Monitoring Infrastructures

    E-Print Network [OSTI]

    Politčcnica de Catalunya, Universitat

    Josep Sanjuàs-Cuxart, Pere ... to build a scalable, distributed passive network monitoring system that can run several arbitrary ... the principal research challenges behind building a distributed network monitoring system to support ...

  18. Large-scale tidal fields on primordial density perturbations ?

    E-Print Network [OSTI]

    Alejandro Gonzalez

    1997-02-17T23:59:59.000Z

    We calculate the strength of the tidal field produced by the large-scale density field acting on primordial density perturbations in power-law models. By analysing changes in the orientation of the deformation tensor resulting from smoothing the density field on different mass scales, we show that the large-scale tidal field can strongly affect the morphology and orientation of density peaks. The strength of the tidal field is measured as a function of the distance to the peak and of the spectral index. We detected evidence that two populations of perturbations seem to coexist: one with a misalignment between the main axes of their inertia and deformation tensors, which would lead to angular momentum acquisition and morphological changes. For the second population, the perturbations are found nearly aligned in the direction of the tidal field, which would imprint low angular momentum on them and would allow an alignment of structures such as those reported between clusters of galaxies in filaments, and between galaxies in clusters. Evidence is presented that the correlation between the orientation of perturbations and the large-scale density field could be a common property of Gaussian density fields with spectral indexes $n < 0$. We argue that alignment of structures can be used to probe the flatness of the spectrum on large scales but it cannot determine the exact value of the spectral index.
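    The misalignment the authors measure is an angle between principal axes of two symmetric tensors (inertia versus deformation). A minimal 2-D sketch of that geometry (our own illustration; the paper works with 3-D tensors of smoothed Gaussian fields):

    ```python
    import math

    def principal_axis(a, b, c):
        """Orientation (radians) of the major axis of the symmetric
        2x2 tensor [[a, b], [b, c]]: tan(2*theta) = 2b / (a - c)."""
        return 0.5 * math.atan2(2 * b, a - c)

    def misalignment(t1, t2):
        """Angle in [0, pi/2] between the principal axes of two tensors,
        each given as its (a, b, c) components."""
        d = abs(principal_axis(*t1) - principal_axis(*t2)) % math.pi
        return min(d, math.pi - d)

    # a tensor stretched along x versus one stretched along y: 90 degrees
    angle = misalignment((2.0, 0.0, 1.0), (1.0, 0.0, 2.0))
    ```

    A large angle corresponds to the first population (torqued, angular-momentum-acquiring peaks); an angle near zero to the tidally aligned second population.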

  19. Chemical engineers design, control and optimize large-scale chemical,

    E-Print Network [OSTI]

    Rohs, Remo

    ... Emphasis in Nanotechnology · Chemical Engineering Emphasis in Petroleum Engineering · Chemical Engineering ... Chemical engineers design, control and optimize large-scale chemical, physicochemical ... and electronics fields. Chemical Engineers are employed in areas as diverse as the chemical, materials, energy ...

  20. Chemical engineers design, control and optimize large-scale chemical,

    E-Print Network [OSTI]

    Rohs, Remo

    · Chemical Engineering (Nanotechnology), Bachelor of Science, 131 units · Chemical Engineering (Petroleum ... Chemical engineers design, control and optimize large-scale chemical, physicochemical ... and electronics fields. Chemical Engineers are employed in areas as diverse as the chemical, pharmaceutical ...

  1. Chemical engineers design, control and optimize large-scale chemical,

    E-Print Network [OSTI]

    Rohs, Remo

    ... in Nanotechnology · Chemical Engineering Emphasis in Petroleum Engineering · Chemical Engineering Emphasis in Polymers ... Chemical engineers design, control and optimize large-scale chemical, physicochemical ... and electronics fields. Chemical Engineers are employed in areas as diverse as the chemical, pharmaceutical ...

  2. IFIP/IEEE International Conference on Very Large Scale Integration

    E-Print Network [OSTI]

    Pierre, Laurence

    22nd IFIP/IEEE International Conference on Very Large Scale Integration, VLSI-SoC 2014, October 6-8, 2014, Playa del Carmen, Mexico, Iberostar Tucán and Quetzal Hotel. General Chairs: Arturo Sarmiento Reyes ... -Signal IC Design · 3-D Integration · Physical Design · SoC Design for Variability, Reliability, Fault ...

  3. Reduced-Order Models of Zero-Net Mass-Flux Jets for Large-Scale Flow Control Simulations

    E-Print Network [OSTI]

    Mittal, Rajat

    Reni Raju ... computational tools are well suited for modeling the dynamics of zero-net mass-flux actuators ... Zero-net mass-flux (ZNMF) actuators or "synthetic jets" have potential ...

  4. High-Precision Floating-Point Arithmetic in Scientific Computation

    SciTech Connect (OSTI)

    Bailey, David H.

    2004-12-31T23:59:59.000Z

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required: some of these applications require roughly twice this level; others require four times; while still others require hundreds or more digits to obtain numerically meaningful results. Such calculations have been facilitated by new high-precision software packages that include high-level language translation modules to minimize the conversion effort. These activities have yielded a number of interesting new scientific results in fields as diverse as quantum theory, climate modeling and experimental mathematics, a few of which are described in this article. Such developments suggest that in the future, the numeric precision used for a scientific computation may be as important to the program design as are the algorithms and data structures.
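    A minimal illustration of the precision wall the abstract describes, using Python's standard decimal module rather than the specialized high-precision packages the article surveys:

    ```python
    from decimal import Decimal, getcontext

    # IEEE 64-bit doubles carry ~16 significant digits, so a term this
    # small is lost entirely: 1e-17 is below the ulp of 1.0
    lost = (1.0 + 1e-17) - 1.0        # exactly 0.0 in double precision

    # the same computation carried out at 40 decimal digits retains it
    getcontext().prec = 40
    kept = (Decimal(1) + Decimal("1e-17")) - Decimal(1)
    ```

    Dedicated double-double and arbitrary-precision libraries follow the same idea but at far higher performance, which is what makes the applications the article describes practical.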

  5. Advanced Scientific Computing Research User Facilities | U.S...

    Office of Science (SC) Website

    research projects that are funded by the DOE Office of Science and require high performance computing support are eligible to apply to use NERSC resources. Projects that are not...

  6. Certainty in Stockpile Computing: Recommending a Verification and Validation Program for Scientific Software

    SciTech Connect (OSTI)

    Lee, J.R.

    1998-11-01T23:59:59.000Z

    As computing assumes a more central role in managing the nuclear stockpile, the consequences of an erroneous computer simulation could be severe. Computational failures are common in other endeavors and have caused project failures, significant economic loss, and loss of life. This report examines the causes of software failure and proposes steps to mitigate them. A formal verification and validation program for scientific software is recommended and described.

  7. A Stochastic Quasi-Newton Method for Large-Scale Optimization

    E-Print Network [OSTI]

    2015-02-17T23:59:59.000Z

    Department of Industrial Engineering and Management Sciences ... Office of Advanced Scientific Computing Research, Applied Mathematics program ... Examples include computer network traffic, web search, online advertisement, and sensor ... and stochastic gradient descent (SGD) method are used in the literature to denote ...
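    Since the snippet invokes stochastic gradient descent (SGD), here is a minimal SGD sketch on a noiseless least-squares problem (our own toy example, not the paper's quasi-Newton method):

    ```python
    import random

    def sgd_least_squares(data, lr=0.05, epochs=200, seed=0):
        """Fit y ~ w*x + b by SGD: visit the samples in random order and
        step down the gradient of each sample's squared error."""
        rng = random.Random(seed)
        w = b = 0.0
        for _ in range(epochs):
            for x, y in rng.sample(data, len(data)):
                err = (w * x + b) - y    # residual on this one sample
                w -= lr * err * x
                b -= lr * err
        return w, b

    # noiseless line y = 2x + 1; SGD recovers the slope and intercept
    data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
    w, b = sgd_least_squares(data)
    ```

    Quasi-Newton variants like the one in the paper replace the scalar step with a curvature-aware direction, which matters on badly conditioned problems.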

  8. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect (OSTI)

    RODRIGUEZ, MARKO A. [Los Alamos National Laboratory; BOLLEN, JOHAN [Los Alamos National Laboratory; VAN DE SOMPEL, HERBERT [Los Alamos National Laboratory

    2007-01-30T23:59:59.000Z

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which renders the scholarly process amenable to statistical analysis and computational support. The article presents the ontology, discusses its instantiation, and provides some example inference rules for calculating various scholarly artifact metrics.

  9. National facility for advanced computational science: A sustainable path to scientific discovery

    SciTech Connect (OSTI)

    Simon, Horst; Kramer, William; Saphir, William; Shalf, John; Bailey, David; Oliker, Leonid; Banda, Michael; McCurdy, C. William; Hules, John; Canning, Andrew; Day, Marc; Colella, Philip; Serafini, David; Wehner, Michael; Nugent, Peter

    2004-04-02T23:59:59.000Z

    Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

  10. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect (OSTI)

    Ghattas, Omar [The University of Texas at Austin] [The University of Texas at Austin

    2013-10-15T23:59:59.000Z

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
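
    Both the "reduce then sample" and "sample then reduce" strategies above ultimately drive a Markov chain Monte Carlo sampler over the posterior. For orientation, a minimal random-walk Metropolis sampler for a one-dimensional Gaussian target is sketched below (a generic textbook illustration with a hypothetical target and proposal, not the SAGUARO machinery):

```python
import math
import random

random.seed(1)

def log_post(x):
    # Hypothetical 1-D posterior: standard normal, up to a constant.
    return -0.5 * x * x

samples = []
x = 0.0
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)        # random-walk proposal
    delta = log_post(prop) - log_post(x)
    # Accept with probability min(1, post(prop)/post(x)).
    if delta >= 0 or random.random() < math.exp(delta):
        x = prop
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # roughly 0 and 1
```

    In the large-scale inverse-problem setting, each `log_post` evaluation involves a forward simulation, which is exactly why the report's surrogate and Hessian-exploiting accelerations matter.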

  11. A subspace, interior, and conjugate gradient method for large-scale bound-constrained minimization problems

    SciTech Connect (OSTI)

    Branch, M.A.; Coleman, T.F.; Li, Y.

    1999-09-01T23:59:59.000Z

    A subspace adaptation of the Coleman-Li trust region and interior method is proposed for solving large-scale bound-constrained minimization problems. This method can be implemented with either sparse Cholesky factorization or conjugate gradient computation. Under reasonable conditions the convergence properties of this subspace trust region method are as strong as those of its full-space version. Computational performance on various large test problems is reported; advantages of the approach are demonstrated. The experience indicates that the proposed method represents an efficient way to solve large bound-constrained minimization problems.
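
    The abstract notes that the method can be implemented with conjugate gradient computation. For reference, the textbook unpreconditioned conjugate gradient iteration for a symmetric positive definite system can be sketched as follows (a generic sketch, not the subspace trust-region implementation):

```python
def cg(A, b, tol=1e-10, max_iter=100):
    """Textbook conjugate gradient for A x = b with A symmetric positive
    definite. A is a list of rows; vectors are plain lists."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x (x = 0 initially)
    p = r[:]
    rs = sum(ri * ri for ri in r)  # squared residual norm
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 SPD example: the solution is [1, 1].
x = cg([[4.0, 1.0], [1.0, 3.0]], [5.0, 4.0])
print(x)
```

    In the trust-region setting the same iteration is applied to the (possibly reduced) quadratic model, terminating early when the step leaves the trust region or a bound.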

  12. Alignments of Galaxy Group Shapes with Large Scale Structure

    E-Print Network [OSTI]

    Paz, Dante J; Merchán, Manuel; Padilla, Nelson

    2011-01-01T23:59:59.000Z

    In this paper we analyse the alignment of galaxy groups with the surrounding large scale structure traced by spectroscopic galaxies from the Sloan Digital Sky Survey Data Release 7. We characterise these alignments by means of an extension of the classical two-point cross-correlation function, developed by Paz et al. 2008 (arXiv:0804.4477, MNRAS 389 1127). We find a strong alignment signal between the projected major axis of group shapes and the surrounding galaxy distribution up to scales of 30 Mpc/h. This observed anisotropy signal becomes larger as the galaxy group mass increases, in excellent agreement with the corresponding predicted alignment obtained from mock catalogues and LCDM cosmological simulations. These measurements provide new direct evidence of the adequacy of the gravitational instability picture to describe the large-scale structure formation of our Universe.

  13. Diffuse Gamma-Ray Emission from Large Scale Structures

    E-Print Network [OSTI]

    Dobardzic, Aleksandra

    2012-01-01T23:59:59.000Z

    For more than a decade now the complete origin of the diffuse gamma-ray emission background (EGRB) has been unknown. Major components like unresolved star-forming galaxies (making ...) ... 10 GeV. Moreover, we show that, even though the gamma-ray emission arising from structure formation shocks at galaxy clusters is below previous estimates, these large scale shocks can still give an important, and even dominant at high energies, contribution to the EGRB. Future detections of cluster gamma-ray emission would make our upper limit on the extragalactic gamma-ray emission from the structure-formation process a firm prediction, and give us deeper insight into the evolution of these large scale shocks.

  14. Quantum noise in large-scale coherent nonlinear photonic circuits

    E-Print Network [OSTI]

    Charles Santori; Jason S. Pelc; Raymond G. Beausoleil; Nikolas Tezak; Ryan Hamerly; Hideo Mabuchi

    2014-05-27T23:59:59.000Z

    A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasi-probability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total, and functions as a 4-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important property for scalability.

  15. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect (OSTI)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29T23:59:59.000Z

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.

  16. Stochastic Ordering of Interferences in Large-scale Wireless Networks

    E-Print Network [OSTI]

    Lee, Junghoon

    2012-01-01T23:59:59.000Z

    Stochastic orders are binary relations defined on probability distributions which capture intuitive notions like being larger or being more variable. This paper introduces stochastic ordering of interference distributions in large-scale networks modeled as point processes. Interference is the main performance-limiting factor in most wireless networks, so it is important to understand its statistics. Since closed-form results for the distribution of interference in such networks are available only in limited cases, the interferences of networks are compared using stochastic orders, even when closed-form expressions for the interferences are not tractable. We show that the interference from a large-scale network depends on the fading distributions with respect to the stochastic Laplace transform order. A condition on path-loss models is also established under which stochastic ordering holds between interferences. Stochastic ordering of interferences between different networks is also shown. Monte Carlo simulations are us...

  17. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect (OSTI)

    Rajamony, Ram

    2014-11-20T23:59:59.000Z

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
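
    As a concrete, if greatly simplified, picture of indicating when a performance fault has been detected, a trailing-window threshold detector over a stream of performance metrics might look like this (an illustrative sketch with made-up metrics; PHM's actual diagnostics are not specified at this level in the report):

```python
import statistics

def find_anomalies(samples, window=20, threshold=3.0):
    """Flag indices whose value deviates from the trailing window's mean
    by more than `threshold` standard deviations. A generic sketch of
    threshold-based performance-anomaly detection, not PHM's algorithm."""
    flagged = []
    for i in range(window, len(samples)):
        hist = samples[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.stdev(hist)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Steady throughput with one degraded measurement injected at index 30.
metrics = [100.0 + 0.5 * (i % 3) for i in range(60)]
metrics[30] = 60.0   # injected fault
print(find_anomalies(metrics))
```

    Distinguishing a system anomaly from resource contention, as the report describes, requires correlating such flags with co-scheduled jobs rather than a single metric stream.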

  18. Predictions for Scientific Computing Fifty Years From Now

    E-Print Network [OSTI]

    Li, Tiejun

    ... automobiles, airplanes, spacecraft, computers, nuclear power, nuclear weapons, plastics, antibiotics, and genetic engineering? I believe that the explanation of our special position in history may be that it is not so special after all, because history tends not to last very long. This argument has been called

  19. PNNL pushing scientific discovery through data intensive computing breakthroughs

    ScienceCinema (OSTI)

    Deborah Gracio; David Koppenaal; Ruby Leung

    2012-12-31T23:59:59.000Z

    The Pacific Northwest National Laboratory's approach to data intensive computing (DIC) is focused on three key research areas: hybrid hardware architectures, software architectures, and analytic algorithms. Advancements in these areas will help to address, and solve, DIC issues associated with capturing, managing, analyzing and understanding, in near real time, data at volumes and rates that push the frontiers of current technologies.

  20. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect (OSTI)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01T23:59:59.000Z

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from accidental and, even more importantly, intentional spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data is much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards from a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.

  1. Suppression of large-scale perturbations by stiff solid

    E-Print Network [OSTI]

    Vladimír Balek; Matej Škovran

    2015-01-28T23:59:59.000Z

    Evolution of large-scale scalar perturbations in the presence of stiff solid (solid with pressure to energy density ratio > 1/3) is studied. If the solid dominated the dynamics of the universe long enough, the perturbations could end up suppressed by as much as several orders of magnitude. To avoid too steep large-angle power spectrum of CMB, radiation must have prevailed over the solid long enough before recombination.

  3. Computational methods for stealth design

    SciTech Connect (OSTI)

    Cable, V.P. (Lockheed Advanced Development Co., Sunland, CA (United States))

    1992-08-01T23:59:59.000Z

    A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.

  4. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect (OSTI)

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01T23:59:59.000Z

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
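
    A common way to combine multiple user-defined criteria into a single site ranking is a weighted sum over normalized criteria values; the sketch below illustrates that idea (criteria names, weights, and scores are hypothetical, and the report's actual optimization algorithm may differ):

```python
def site_score(criteria, weights):
    """Weighted sum of normalized criteria values (each scaled to [0, 1],
    higher is better). A generic multi-criteria scoring sketch, not the
    tool's actual algorithm."""
    return sum(weights[name] * criteria[name] for name in weights)

# Hypothetical candidate sites with made-up normalized scores.
sites = {
    "site_a": {"solar_resource": 0.9, "grid_proximity": 0.4, "low_env_impact": 0.7},
    "site_b": {"solar_resource": 0.7, "grid_proximity": 0.9, "low_env_impact": 0.8},
}
weights = {"solar_resource": 0.5, "grid_proximity": 0.3, "low_env_impact": 0.2}

best = max(sites, key=lambda s: site_score(sites[s], weights))
print(best)
```

    Letting stakeholders adjust the weights interactively is what makes such a tool "user-driven" in the sense the report describes.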

  5. Primordial quantum nonequilibrium and large-scale cosmic anomalies

    E-Print Network [OSTI]

    Samuel Colin; Antony Valentini

    2014-07-31T23:59:59.000Z

    We study incomplete relaxation to quantum equilibrium at long wavelengths, during a pre-inflationary phase, as a possible explanation for the reported large-scale anomalies in the cosmic microwave background (CMB). Our scenario makes use of the de Broglie-Bohm pilot-wave formulation of quantum theory, in which the Born probability rule has a dynamical origin. The large-scale power deficit could arise from incomplete relaxation for the amplitudes of the primordial perturbations. We show, by numerical simulations for a spectator scalar field, that if the pre-inflationary era is radiation dominated then the deficit in the emerging power spectrum will have a characteristic shape (an inverse-tangent dependence on wavenumber k, with oscillations). It is found that our scenario is able to produce a power deficit in the observed region and of the observed (approximate) magnitude for an appropriate choice of cosmological parameters. We also discuss the large-scale anisotropy, which could arise from incomplete relaxation for the phases of the primordial perturbations. We present numerical simulations for phase relaxation, and we show how to define characteristic scales for amplitude and phase nonequilibrium. The extent to which the data might support our scenario is left as a question for future work. Our results suggest that we have a potentially viable model that might explain two apparently independent cosmic anomalies by means of a single mechanism.

  6. NERSC 2011: High Performance Computing Facility Operational Assessment for the National Energy Research Scientific Computing Center

    E-Print Network [OSTI]

    Antypas, Katie

    2013-01-01T23:59:59.000Z

    NERSC 2011 High Performance Computing Facility Operational ... by providing high-performance computing, information, data, ... deep knowledge of high performance computing to overcome ...

  7. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    SciTech Connect (OSTI)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26T23:59:59.000Z

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. 
The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i.e., Newtonian or non-Newtonian). The most important properties for testing with Newtonian slurries are the Archimedes number distribution and the particle concentration. For some test objectives, the shear strength is important. In the testing to collect data for CFD V and V and CFD comparison, the liquid density and liquid viscosity are important. In the high temperature testing, the liquid density and liquid viscosity are important. The Archimedes number distribution combines effects of particle size distribution, solid-liquid density difference, and kinematic viscosity. The most important properties for testing with non-Newtonian slurries are the slurry yield stress, the slurry consistency, and the shear strength. The solid-liquid density difference and the particle size are also important. It is also important to match multiple properties within the same simulant to achieve behavior representative of the waste. Other properties such as particle shape, concentration, surface charge, and size distribution breadth, as well as slurry cohesiveness and adhesiveness, liquid pH and ionic strength also influence the simulant properties either directly or through other physical properties such as yield stress.
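
    Since the Archimedes number is singled out above as a governing property for Newtonian slurries, it may help to recall one common definition, Ar = g d^3 (rho_s - rho_l) / (rho_l nu^2), sketched here with illustrative values (the report's exact convention and parameter ranges are not reproduced):

```python
def archimedes(d, rho_s, rho_l, nu, g=9.81):
    """Archimedes number Ar = g * d^3 * (rho_s - rho_l) / (rho_l * nu^2),
    one common definition; the report's convention may differ.
    d: particle diameter [m]; rho_s, rho_l: solid and liquid density
    [kg/m^3]; nu: kinematic viscosity [m^2/s]."""
    return g * d**3 * (rho_s - rho_l) / (rho_l * nu**2)

# 10-micron particle in a water-like liquid (illustrative numbers only).
ar = archimedes(d=10e-6, rho_s=2500.0, rho_l=1000.0, nu=1e-6)
print(ar)  # about 0.0147
```

    Because d enters cubed, a modest spread in particle size produces a wide Archimedes number distribution, which is why matching the distribution rather than a single value matters for simulant fidelity.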

  8. Improving Energy Efficiency of GPU based General-Purpose Scientific Computing through

    E-Print Network [OSTI]

    Deng, Zhigang

    Improving Energy Efficiency of GPU based General-Purpose Scientific Computing through Automated ... challenge. In this paper, we propose a novel framework to improve the energy efficiency of GPU-based General ... configurations to improve the energy efficiency of any given GPGPU program. Through preliminary empirical ...

  9. The Portable Extensible Toolkit for Scientific computing Day 1: Usage and Algorithms

    E-Print Network [OSTI]

    The Portable Extensible Toolkit for Scientific computing, Day 1: Usage and Algorithms. Jed Brown (ETH Zürich), PETSc day 1, CSCS, 2010-05-10 ... Same code runs performantly on a laptop · No iPhone support

  10. Savannah River National Laboratory (SRNL) Scientific Computing Where We Have Been And

    E-Print Network [OSTI]

    Valtorta, Marco

    Savannah River National Laboratory (SRNL) - Scientific Computing - Where We Have Been And Where We ... 1961: University of Georgia founded the Savannah River Ecology Laboratory (SREL) to study effects ... National Laboratory and Hanford Site) · SRS workforce: Approximately 8,000 - Prime contractor (about 58 ...

  11. ITER · UltraScale Scientific Computing Capability · Joint Dark Energy Mission

    E-Print Network [OSTI]

    ITER · UltraScale Scientific Computing Capability · Joint Dark Energy Mission · Linac Coherent Light Source Upgrade · eRHIC · Fusion Energy Contingency · HFIR Second Cold Source · Integrated Beam Experiment ... Introduction 8 · Prioritization Process 9 · A Benchmark

  12. VAX/VMS file protection on the STC (Scientific and Technical Computing) VAXES

    SciTech Connect (OSTI)

    Not Available

    1988-06-01T23:59:59.000Z

    This manual is a guide to using the file protection mechanisms available on the Martin Marietta Energy Systems, Inc. Scientific and Technical Computing (STC) System VAXes. User identification codes (UICs) and general identifiers are discussed as a basis for understanding UIC-based and access control list (ACL) protection. 5 figs.

  13. A Scientific and Engineering Computing Cluster Focusing on

    E-Print Network [OSTI]

    Mohanty, Saraju P.

    A Scientific and Engineering Computing Cluster Focusing on the Modeling ... faculty cover all time and length scales · ~50 researchers · Combustion chemistry · Material fatigue ... Cross-Disciplinary Expertise · Chemistry - Bagus · Engineering - Boetcher (M&EE) - Borden - Cundari

  14. INTERNATIONAL JOURNAL OF c 2011 Institute for Scientific NUMERICAL ANALYSIS AND MODELING Computing and Information

    E-Print Network [OSTI]

    Bürger, Raimund

    -dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This model ... accepted spatially one-dimensional sedimentation model [35] gives rise to one scalar, nonlinear hyperbolic ... INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, c 2011 Institute for Scientific Computing and Information

  15. INTERNATIONAL JOURNAL OF c 2012 Institute for Scientific NUMERICAL ANALYSIS AND MODELING Computing and Information

    E-Print Network [OSTI]

    Bürger, Raimund

    -dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This model ... accepted spatially one-dimensional sedimentation model [35] gives rise to one scalar, nonlinear hyperbolic ... INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, c 2012 Institute for Scientific Computing and Information

  16. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    E-Print Network [OSTI]

    David Abdurachmanov; Brian Bockelman; Peter Elmer; Giulio Eulisse; Robert Knight; Shahzad Muzaffar

    2014-10-10T23:59:59.000Z

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  17. Failure as a Service (FaaS): A Cloud Service for Large-Scale, Online Failure Drills

    E-Print Network [OSTI]

    University of California at Irvine

    Gunawi, Thanh Do, Joseph M. Hellerstein, Ion Stoica, Dhruba Borthakur, Jesse Robbins ... Electrical Engineering ... of Wisconsin, Madison · Facebook · Opscode. Abstract: Cloud computing is pervasive, but cloud service outages still ... One main reason why major outages still occur is that there are many unknown large-scale failure

  18. An efficient algorithm for load balancing of transformers and feeders by switch operation in large scale distribution systems

    SciTech Connect (OSTI)

    Aoki, K. (Hiroshima Univ., Higashihiroshima (JP)); Kuwabara, H. (Kinki Univ., Kure (JP)); Satoh, T. (Hiroshima Univ., Higashihiroshima (JP)); Kanezashi, M. (Aichi Institute of Technology, Toyota (JP))

    1988-10-01T23:59:59.000Z

    This paper presents a systematic and practical algorithm for load balancing of transformers and feeders by automatic sectionalizing switch operation in large scale distribution systems of radial type. The algorithm is developed by extending an approximation algorithm for load transfer between a desired pair of transformers. The algorithm proposed here is applicable to operations not only in the normal state, but also in scheduled and failure outage states. Computational experience on a real large scale system has indicated that the algorithm is valid and effective for practical operations.

  19. Generation of large-scale winds in horizontally anisotropic convection

    E-Print Network [OSTI]

    von Hardenberg, J; Provenzale, A; Spiegel, E A

    2015-01-01T23:59:59.000Z

    We simulate three-dimensional, horizontally periodic Rayleigh-Bénard convection between free-slip horizontal plates, rotating about a horizontal axis. When both the temperature difference between the plates and the rotation rate are sufficiently large, a strong horizontal wind is generated that is perpendicular to both the rotation vector and the gravity vector. The wind is turbulent, large-scale, and vertically sheared. Horizontal anisotropy, engendered here by rotation, appears necessary for such wind generation. Most of the kinetic energy of the flow resides in the wind, and the vertical turbulent heat flux is much lower on average than when there is no wind.

  20. Solar cycle variations of large scale flows in the Sun

    E-Print Network [OSTI]

    Sarbani Basu; H. M. Antia

    2000-01-17T23:59:59.000Z

    Using data from the Michelson Doppler Imager (MDI) instrument on board the Solar and Heliospheric Observatory (SOHO), we study the large-scale velocity fields in the outer part of the solar convection zone using the ring diagram technique. We use observations from four different times to study possible temporal variations in flow velocity. We find definite changes in both the zonal and meridional components of the flows. The amplitude of the zonal flow appears to increase with solar activity and the flow pattern also shifts towards lower latitude with time.

  1. Statistical analysis of large-scale structure in the Universe

    E-Print Network [OSTI]

    Martin Kerscher

    1999-12-15T23:59:59.000Z

    Methods for the statistical characterization of the large-scale structure in the Universe will be the main topic of the present text. The focus is on geometrical methods, mainly Minkowski functionals and the J-function. Their relations to standard methods used in cosmology and spatial statistics and their application to cosmological datasets will be discussed. This work is not only meant as a short review for cosmologists, but also attempts to illustrate these morphological methods and to make them accessible to scientists from other fields. Consequently, a short introduction to the standard picture of cosmology is given.

  2. Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop

    Broader source: Energy.gov [DOE]

    ATP3 (Algae Testbed Public-Private Partnership) is hosting the Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop on November 2–6, 2015, at the Arizona Center for Algae Technology and Innovation in Mesa, Arizona. Topics will include practical applications of growing and managing microalgal cultures at production scale (such as methods for handling cultures, screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies, and the analysis of lipids, proteins, and carbohydrates). Related training will include hands-on laboratory and field opportunities.

  3. Large-Scale Anisotropy of EGRET Gamma Ray Sources

    E-Print Network [OSTI]

    Luis Anchordoqui; Thomas McCauley; Thomas Paul; Olaf Reimer; Diego F. Torres

    2005-06-24T23:59:59.000Z

    In the course of its operation, the EGRET experiment detected high-energy gamma ray sources at energies above 100 MeV over the whole sky. In this communication, we search for large-scale anisotropy patterns among the catalogued EGRET sources using an expansion in spherical harmonics, accounting for EGRET's highly non-uniform exposure. We find significant excess in the quadrupole and octopole moments. This is consistent with the hypothesis that, in addition to the galactic plane, a second mid-latitude (5^{\circ} < |b| < 30^{\circ}) population, perhaps associated with the Gould belt, contributes to the gamma ray flux above 100 MeV.
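
    The kind of anisotropy statistic used here can be illustrated with a stripped-down sketch (hypothetical toy code, not the paper's analysis: it ignores EGRET's non-uniform exposure and tests only the dipole rather than the full spherical-harmonic expansion):

```python
import math
import random

def dipole_amplitude(directions):
    """Mean resultant length of unit vectors on the sphere:
    ~1/sqrt(N) for an isotropic catalogue, approaching 1 for a clustered one."""
    n = len(directions)
    sx = sum(d[0] for d in directions)
    sy = sum(d[1] for d in directions)
    sz = sum(d[2] for d in directions)
    return math.sqrt(sx * sx + sy * sy + sz * sz) / n

def random_direction(rng):
    # Uniform point on the unit sphere (z uniform in [-1, 1], azimuth uniform).
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(42)
iso = [random_direction(rng) for _ in range(5000)]
# Folding all sources into one hemisphere induces an obvious dipole excess.
hemi = [(x, y, abs(z)) for (x, y, z) in iso]
print(dipole_amplitude(iso))   # close to 1/sqrt(5000) ~ 0.014
print(dipole_amplitude(hemi))  # ~0.5
```

    The significant quadrupole and octopole moments reported above generalize this scalar to the l = 2 and l = 3 terms of a spherical-harmonic expansion, with the exposure map folded into the estimator.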

  4. Robust Morphological Measures for Large-Scale Structure

    E-Print Network [OSTI]

    T. Buchert

    1994-12-17T23:59:59.000Z

    A complete family of statistical descriptors for the morphology of large-scale structure based on Minkowski functionals is presented. These robust and significant measures can be used to characterize the local and global morphology of spatial patterns formed by a coverage of point sets which represent galaxy samples. Basic properties of these measures are highlighted and their relation to the 'genus statistics' is discussed. Test models like a Poissonian point process and samples generated from a Voronoi model are put into perspective.

  5. Large scale obscuration and related climate effects open literature bibliography

    SciTech Connect (OSTI)

    Russell, N.A.; Geitgey, J.; Behl, Y.K.; Zak, B.D.

    1994-05-01T23:59:59.000Z

    Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

  6. Large-Scale Liquid Hydrogen Handling Equipment | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site


  7. Large-scale anisotropy in stably stratified rotating flows

    SciTech Connect (OSTI)

    Marino, Dr. Raffaele [National Center for Atmospheric Research (NCAR); Mininni, Dr. Pablo D. [Universidad de Buenos Aires, Argentina; Rosenberg, Duane L [ORNL; Pouquet, Dr. Annick [National Center for Atmospheric Research (NCAR)

    2014-01-01T23:59:59.000Z

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  8. Optimal operation of large-scale power systems

    SciTech Connect (OSTI)

    Lee, K.Y.; Ortiz, J.L.; Mohtadi, M.A.; Park, Y.M.

    1988-05-01T23:59:59.000Z

    This paper presents a method for the optimal operation of large-scale power systems similar to the one utilized by the Houston Lighting and Power Company. The main objective is to minimize the system fuel costs while maintaining acceptable system performance in terms of limits on generator real and reactive power outputs, transformer tap settings, and bus voltage levels. Minimizing the fuel costs of such large-scale systems enhances the performance of optimal real-power generation allocation and of optimal power flow, resulting in an economic dispatch. The gradient projection method (GPM) is utilized in solving the optimization problems. It is an iterative numerical procedure for finding an extremum of a function of several variables that must satisfy various constraining relations; among its other advantages, it requires neither penalty functions nor Lagrange multipliers. Mathematical models are developed to represent the sensitivity relationships between dependent and control variables for both real- and reactive-power optimization procedures, thus eliminating the use of B-coefficients. Data provided by the Houston Lighting and Power Company are used to demonstrate the effectiveness of the proposed procedures.
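
    The gradient projection idea can be sketched in a few lines (a toy illustration under assumed separable quadratic fuel costs and box limits on generator outputs; the variable names are illustrative, not taken from the paper): each iterate takes a gradient step and is projected back onto the feasible box, so neither penalty functions nor Lagrange multipliers are required.

```python
def project(x, lo, hi):
    # Projection onto box constraints (e.g., generator output limits).
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def gradient_projection(grad, x0, lo, hi, step=0.1, iters=500):
    """Minimize a smooth cost over a box by projected gradient steps."""
    x = project(x0, lo, hi)
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Toy dispatch: minimize sum_i a_i * (P_i - c_i)^2 subject to lo <= P <= hi.
# Each term's unconstrained optimum c_i lies outside the box, so the solution
# pins the first unit at its upper limit and the second at its lower limit.
a = [1.0, 2.0]
c = [5.0, 1.0]
lo, hi = [0.0, 2.0], [4.0, 6.0]
fuel_grad = lambda P: [2.0 * ai * (Pi - ci) for ai, Pi, ci in zip(a, P, c)]
P = gradient_projection(fuel_grad, [1.0, 3.0], lo, hi)
print(P)  # -> [4.0, 2.0]
```

    In the paper's setting the gradient comes from sensitivity models of the network rather than a closed-form cost, but the projection step plays the same role of enforcing operating limits at every iteration.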

  9. Skewness and Kurtosis in Large-Scale Cosmic Fields

    E-Print Network [OSTI]

    F. Bernardeau

    1993-12-13T23:59:59.000Z

    In this paper, I present the calculation of the third and fourth moments of the distribution functions of both the large-scale density and the large-scale divergence of the velocity field, $\theta$. These calculations are made by means of perturbative calculations assuming Gaussian initial conditions and are expected to be valid in the linear or quasi-linear regime. The moments are derived for a top-hat window function and for any cosmological parameters $\Omega$ and $\Lambda$. It turns out that the dependence on $\Lambda$ is always very weak, whereas the moments of the distribution function of the divergence are strongly dependent on $\Omega$. A method to measure $\Omega$ using the skewness of this field has already been presented by Bernardeau et al. (1993). I show here that the simultaneous measurement of the skewness and the kurtosis allows one to test the validity of the gravitational instability scenario hypothesis. Indeed there is a combination of the first three moments of $\theta$ that is almost independent of the cosmological parameters $\Omega$ and $\Lambda$, $$\frac{\left(\langle\theta^4\rangle-3\langle\theta^2\rangle^2\right)\langle\theta^2\rangle}{\langle\theta^3\rangle^2}\approx 1.5,$$ (the value quoted is valid when the index of the power spectrum at the filtering scale is close to -1), so that any cosmic velocity field created by gravitational instabilities should verify such a property.
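
    The moment combination discussed above can be checked numerically on any skewed distribution. The sketch below is a toy stand-in (an exponential deviate rather than a perturbation-theory velocity divergence) that computes the hierarchical combination of the second, third, and fourth central moments from a sample:

```python
import random

def central_moments(xs, kmax=4):
    """Sample central moments mu_k = <(x - <x>)^k> for k = 2..kmax."""
    n = len(xs)
    mean = sum(xs) / n
    return {k: sum((x - mean) ** k for x in xs) / n for k in range(2, kmax + 1)}

def moment_combination(mu2, mu3, mu4):
    # (<x^4> - 3<x^2>^2) <x^2> / <x^3>^2 for a zero-mean field.
    return (mu4 - 3.0 * mu2 ** 2) * mu2 / mu3 ** 2

# Exact check: the Exp(1) distribution has central moments mu2=1, mu3=2, mu4=9,
# so the combination is (9 - 3) * 1 / 4 = 1.5.
print(moment_combination(1.0, 2.0, 9.0))  # -> 1.5

# The same quantity estimated from a large sample of exponential deviates.
rng = random.Random(7)
sample = [rng.expovariate(1.0) for _ in range(400000)]
m = central_moments(sample)
print(moment_combination(m[2], m[3], m[4]))  # close to 1.5
```

    Bernardeau's point is that for the velocity divergence this combination is fixed by gravitational instability nearly independently of $\Omega$ and $\Lambda$, which is what makes it usable as a consistency test.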

  10. Development of high performance scientific components for interoperability of computing packages

    SciTech Connect (OSTI)

    Gulabani, Teena Pratap

    2008-12-01T23:59:59.000Z

    Three major high-performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. Chemistry algorithms are difficult and time-consuming to develop; integrating large quantum chemistry packages allows resource sharing and thus avoids reinventing the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component-Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  11. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    SciTech Connect (OSTI)

    Khaleel, Mohammad A.

    2009-10-01T23:59:59.000Z

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  12. Acts -- A collection of high performing software tools for scientific computing

    SciTech Connect (OSTI)

    Drummond, L.A.; Marques, O.A.

    2002-11-01T23:59:59.000Z

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Further, many new discoveries depend on high performance computer simulations to satisfy their demands for large computational resources and short response time. The Advanced CompuTational Software (ACTS) Collection brings together a number of general-purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Collection promotes code portability, reusability, reduction of duplicate efforts, and tool maturity. This paper presents a brief introduction to the functionality available in ACTS. It also highlights the tools that are in demand by climate and weather modelers.

  13. Scientific Grand Challenges: Crosscutting Technologies for Computing at the Exascale - February 2-4, 2010, Washington, D.C.

    SciTech Connect (OSTI)

    Khaleel, Mohammad A.

    2011-02-06T23:59:59.000Z

    The goal of the "Scientific Grand Challenges - Crosscutting Technologies for Computing at the Exascale" workshop in February 2010, jointly sponsored by the U.S. Department of Energy’s Office of Advanced Scientific Computing Research and the National Nuclear Security Administration, was to identify the elements of a research and development agenda that will address these challenges and create a comprehensive exascale computing environment. This exascale computing environment will enable the science applications identified in the eight previously held Scientific Grand Challenges Workshop Series.

  14. DOE's Office of Science Seeks Proposals for Expanded Large-Scale...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    of new energy technologies." "This unique program opens up the world of high-performance computing to a broad array of new scientific users," Bodman said. "Through the use of...

  15. Atypical Behavior Identification in Large Scale Network Traffic

    SciTech Connect (OSTI)

    Best, Daniel M.; Hafen, Ryan P.; Olsen, Bryan K.; Pike, William A.

    2011-10-23T23:59:59.000Z

    Cyber analysts are faced with the daunting challenge of identifying exploits and threats within potentially billions of daily records of network traffic. Enterprise-wide cyber traffic involves hundreds of millions of distinct IP addresses and results in data sets ranging from terabytes to petabytes of raw data. Creating behavioral models and identifying trends based on those models requires data intensive architectures and techniques that can scale as data volume increases. Analysts need scalable visualization methods that foster interactive exploration of data and enable identification of behavioral anomalies. Developers must carefully consider application design, storage, processing, and display to provide usability and interactivity with large-scale data. We present an application that highlights atypical behavior in enterprise network flow records. This is accomplished by utilizing data intensive architectures to store the data, aggregation techniques to optimize data access, statistical techniques to characterize behavior, and a visual analytic environment to render the behavioral trends, highlight atypical activity, and allow for exploration.
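
    As a minimal sketch of the "statistical techniques to characterize behavior" step (an illustrative example, not the method actually used in the application described above), a robust modified z-score can flag flow counts that deviate strongly from an entity's typical behavior:

```python
import statistics

def robust_anomalies(counts, threshold=3.5):
    """Indices of counts whose modified z-score exceeds the threshold.
    Median and MAD are used so the baseline is not skewed by the anomalies."""
    med = statistics.median(counts)
    mad = statistics.median(abs(c - med) for c in counts)
    if mad == 0:
        return []  # no spread: nothing can be called atypical
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Hourly connection counts for one host; the spike at index 5 is atypical.
counts = [120, 131, 118, 125, 122, 940, 127, 119, 124, 130]
print(robust_anomalies(counts))  # -> [5]
```

    At enterprise scale the same idea is applied per entity and per time window on top of the aggregation and storage layers, rather than to a single in-memory list.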

  16. Transition from Large-Scale to Small-Scale Dynamo

    SciTech Connect (OSTI)

    Ponty, Y. [Universite de Nice Sophia-Antipolis, CNRS, Observatoire de la Cote d'Azur, B.P. 4229, Nice cedex 04 (France); Plunian, F. [Institut des Sciences de la Terre, CNRS, Universite Joseph Fourier, B.P. 53, 38041 Grenoble cedex 09 (France)

    2011-04-15T23:59:59.000Z

    The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher-order effects. Beyond the onset we find that this generalized α effect scales as O(Rm^{-1}), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo action occurs even if the large-scale field is artificially suppressed.

  17. The XMM/Megacam-VST/VIRMOS Large Scale Structure Survey

    E-Print Network [OSTI]

    M. Pierre

    2000-11-08T23:59:59.000Z

    The objective of the XMM-LSS Survey is to map the large scale structure of the universe, as highlighted by clusters and groups of galaxies, out to a redshift of about 1, over a single 8x8 sq.deg. area. For the first time, this will reveal the topology of the distribution of the deep potential wells and provide statistical measurements at truly cosmological distances. In addition, clusters identified via their X-ray properties will form the basis for the first uniformly-selected, multi-wavelength survey of the evolution of clusters and individual cluster galaxies as a function of redshift. The survey will also address the very important question of the QSO distribution within the cosmic web.

  18. High Metallicity, Photoionised Gas in Intergalactic Large-Scale Filaments

    E-Print Network [OSTI]

    Bastien Aracil; Todd M. Tripp; David V. Bowen; Jason X. Prochaska; Hsiao-Wen Chen; Brenda L. Frye

    2006-08-21T23:59:59.000Z

    We present high-resolution UV spectra of absorption-line systems toward the low-z QSO HS0624+6907 (z=0.3700). Coupled with spectroscopic galaxy redshifts, we find that many of these absorbers are intergalactic gas clouds distributed within large-scale structures. The gas is cool and photoionised. STIS data reveal a cluster of 13 HI Lyman alpha lines within a 1000 km/s interval at z=0.0635. We find 10 galaxies at this redshift with impact parameters ranging from 135 h^-1 kpc to 1.37 h^-1 Mpc. We attribute the HI Lya absorption to intragroup medium gas, possibly from a large-scale filament viewed along its long axis. Remarkably, the metallicity is near-solar, [M/H] = -0.05 +/- 0.4 (2 sigma uncertainty), yet the nearest galaxy which might pollute the IGM is at least 135 h_70^-1 kpc away. Tidal stripping from nearby galaxies appears to be the most likely origin of this highly enriched, cool gas. More than six Abell galaxy clusters are found within 4 degrees of the sight line, suggesting that the QSO line of sight passes near a node in the cosmic web. At z~0.077, we find absorption systems as well as galaxies at the redshift of the nearby clusters Abell 564 and Abell 559. We conclude that the sight line pierces a filament of gas and galaxies feeding into these clusters. The absorber at z_abs = 0.07573 associated with Abell 564/559 also has a high metallicity with [C/H] > -0.6, but again the closest galaxy is relatively far from the sight line (293 h^-1 kpc).

  19. Large-scale fabrication and assembly of carbon nanotubes via nanopelleting

    E-Print Network [OSTI]

    El Aguizy, Tarek A., 1977-

    2004-01-01T23:59:59.000Z

    Widespread use of carbon nanotubes is predicated on the development of robust large-scale manufacturing techniques. There remain, however, few feasible methods for the large-scale handling of aligned and geometrically ...

  20. Influence of Western North Pacific Tropical Cyclones on Their Large-Scale Environment

    E-Print Network [OSTI]

    Sobel, Adam

    water vapor, and sea surface temperature (SST)] on an index of TC activity [accumulated cyclone energy]. The study examines the influence of western North Pacific tropical cyclones (TCs) on their large-scale environment by lag regressing various large-scale climate

  1. Large Scale Approximate Inference and Experimental Design for Sparse Linear Models

    E-Print Network [OSTI]

    Seeger, Matthias

    Large Scale Approximate Inference and Experimental Design for Sparse Linear Models. Matthias W. Seeger (MPI BioCyb), 27 June 2008. Slide topics include algorithms and magnetic resonance imaging sequences.

  2. LiveBench-2: Large-Scale Automated Evaluation of Protein Structure Prediction Servers

    E-Print Network [OSTI]

    Fischer, Daniel

    LiveBench-2 differs from other evaluation experiments because it is a large-scale and fully automated procedure. To keep pace with server development, we present the results of the second large-scale evaluation of protein structure prediction servers.

  3. Large-Scale Spray Releases: Additional Aerosol Test Results

    SciTech Connect (OSTI)

    Daniel, Richard C.; Gauglitz, Phillip A.; Burns, Carolyn A.; Fountain, Matthew S.; Shimskey, Rick W.; Billing, Justin M.; Bontha, Jagannadha R.; Kurath, Dean E.; Jenks, Jeromy WJ; MacFarlan, Paul J.; Mahoney, Lenna A.

    2013-08-01T23:59:59.000Z

    One of the events postulated in the hazard analysis for the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak event involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids that behave as a Newtonian fluid. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and in processing facilities across the DOE complex. To expand the data set upon which the WTP accident and safety analyses were based, an aerosol spray leak testing program was conducted by Pacific Northwest National Laboratory (PNNL). PNNL’s test program addressed two key technical areas to improve the WTP methodology (Larson and Allen 2010). The first technical area was to quantify the role of slurry particles in small breaches where slurry particles may plug the hole and prevent high-pressure sprays. The results from an effort to address this first technical area can be found in Mahoney et al. (2012a). The second technical area was to determine aerosol droplet size distribution and total droplet volume from prototypic breaches and fluids, including sprays from larger breaches and sprays of slurries for which literature data are mostly absent. To address the second technical area, the testing program collected aerosol generation data at two scales, commonly referred to as small-scale and large-scale testing. The small-scale testing and resultant data are described in Mahoney et al. (2012b), and the large-scale testing and resultant data are presented in Schonewill et al. (2012). 
In tests at both scales, simulants were used to mimic the relevant physical properties projected for actual WTP process streams.

  4. Ferroelectric opening switches for large-scale pulsed power drivers.

    SciTech Connect (OSTI)

    Brennecka, Geoffrey L.; Rudys, Joseph Matthew; Reed, Kim Warren; Pena, Gary Edward; Tuttle, Bruce Andrew; Glover, Steven Frank

    2009-11-01T23:59:59.000Z

    Fast electrical energy storage or Voltage-Driven Technology (VDT) has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage or Current-Driven Technology (CDT) is characterized by 10,000 X higher energy density than VDT and has a great number of other substantial advantages, but it has all but been neglected for all of these decades. The uniform explanation for neglect of CDT technology is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes that provides an opening function when the current wave front propagates through to the output end of the plasma and fully magnetizes the plasma - this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap - this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilovolts and tens of kiloamperes of interest in pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional applications in optics and transducers. Emphasis is on the use of high-performance ferroelectrics with the objective of developing an opening switch suitable for large-scale pulsed power applications. Over the course of exploring this new ground, we have discovered new behaviors and properties of these materials that were heretofore unknown. Some of these unexpected discoveries have led to new research directions to address challenges.

  5. MiniGhost : a miniapp for exploring boundary exchange strategies using stencil computations in scientific parallel computing.

    SciTech Connect (OSTI)

    Barrett, Richard Frederick; Heroux, Michael Allen; Vaughan, Courtenay Thomas

    2012-04-01T23:59:59.000Z

    A broad range of scientific computation involves the use of difference stencils. In a parallel computing environment, this computation is typically implemented by decomposing the spatial domain, inducing a 'halo exchange' of process-owned boundary data. This approach adheres to the Bulk Synchronous Parallel (BSP) model. Because commonly available architectures provide strong inter-node bandwidth relative to latency costs, many codes 'bulk up' these messages, aggregating data as a means of reducing the number of messages. A renewed focus on non-traditional architectures and architecture features provides new opportunities for exploring alternatives to this programming approach. In this report we describe miniGhost, a 'miniapp' designed for exploration of the capabilities of current as well as emerging and future architectures within the context of these sorts of applications. MiniGhost joins the suite of miniapps developed as part of the Mantevo project.
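
    The halo-exchange pattern miniGhost exercises can be reduced to a 1D three-point stencil on two 'ranks' (plain Python lists stand in for MPI messages; all names here are illustrative). Each step communicates boundary values into ghost cells, then updates interior points, mirroring the BSP structure described above:

```python
def exchange_halos(subdomains):
    # Fill each rank's ghost cells from its neighbor's boundary interior points.
    for left, right in zip(subdomains, subdomains[1:]):
        right[0] = left[-2]   # neighbor's last interior -> my left ghost
        left[-1] = right[1]   # neighbor's first interior -> my right ghost

def stencil_step(sub):
    # 3-point averaging stencil over interior points; ghosts are read-only here.
    sub[1:-1] = [0.25 * sub[i-1] + 0.5 * sub[i] + 0.25 * sub[i+1]
                 for i in range(1, len(sub) - 1)]

def serial_steps(field, nsteps):
    # Reference: the same stencil on the undecomposed domain (zero-value ends).
    f = list(field)
    for _ in range(nsteps):
        ext = [0.0] + f + [0.0]
        f = [0.25 * ext[i-1] + 0.5 * ext[i] + 0.25 * ext[i+1]
             for i in range(1, len(ext) - 1)]
    return f

# Global field split across two ranks, each padded with one ghost cell per side.
g = [0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0]
a = [0.0] + g[:4] + [g[4]]
b = [g[3]] + g[4:] + [0.0]
for _ in range(3):
    exchange_halos([a, b])   # communicate before each compute step (BSP style)
    stencil_step(a)
    stencil_step(b)
print(a[1:-1] + b[1:-1] == serial_steps(g, 3))  # decomposed run matches serial
```

    Aggregating several variables' boundary data into one exchange per step is the 'bulking up' optimization the report discusses; miniGhost makes that strategy, among others, swappable for experimentation.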

  6. Fractal Approach to Large-Scale Galaxy Distribution

    E-Print Network [OSTI]

    Yurij Baryshev; Pekka Teerikorpi

    2005-05-10T23:59:59.000Z

    We present a review of the history and the present state of the fractal approach to the large-scale distribution of galaxies. The angular correlation function was used as a general instrument for structure analysis. It was realized later that a normalization condition for the reduced correlation function estimator results in distorted values for both R_{hom} and the fractal dimension D. Moreover, according to a theorem on projections of fractals, galaxy angular catalogues cannot be used for detecting a structure with fractal dimension D>2. For this, 3-d maps are required, and indeed modern extensive redshift-based 3-d maps have revealed the 'hidden' fractal dimension of about 2, and have confirmed superclustering at scales even up to 500 Mpc (e.g. the Sloan Great Wall). On scales where the fractal analysis is possible in completely embedded spheres, a power-law density field has been found. The fractal dimension D = 2.2 +- 0.2 was directly obtained from 3-d maps, and R_{hom} has expanded from 10 Mpc to scales approaching 100 Mpc. In concordance with the 3-d map results, modern all-sky galaxy counts in the interval 10^m - 15^m give a 0.44m law which corresponds to D=2.2 within a radius of 100h^{-1}_{100} Mpc. We emphasize that the fractal mass-radius law of galaxy clustering has become a key phenomenon in observational cosmology.
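
    A toy illustration of the mass-radius relation N(<r) ~ r^D underlying this review (illustrative code, not the conditional estimators applied to real catalogues): counting neighbors within growing spheres and fitting the slope of log N(<r) versus log r recovers D close to 2 for points confined to a plane, the value quoted above for the galaxy distribution.

```python
import math
import random

def mass_radius_dimension(points, center, radii):
    """Fit D in N(<r) ~ r^D by least squares in log-log space."""
    logs = []
    for r in radii:
        count = sum(1 for p in points if math.dist(p, center) < r)
        if count > 0:
            logs.append((math.log(r), math.log(count)))
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

rng = random.Random(1)
# Points uniform on a plane embedded in 3-d: a sheet-like set with D = 2.
plane = [(rng.uniform(-1, 1), rng.uniform(-1, 1), 0.0) for _ in range(20000)]
radii = [0.1 * k for k in range(1, 10)]
D = mass_radius_dimension(plane, (0.0, 0.0, 0.0), radii)
print(D)  # close to 2
```

    Real analyses average this conditional count over many centers and restrict radii to completely embedded spheres, as the review notes, but the fitted log-log slope is the same quantity.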

  7. An improved voltage control on large-scale power system

    SciTech Connect (OSTI)

    Vu, H.; Pruvot, P.; Launay, C.; Harmand, Y. [Electricite de France, Clamart (France). Study and Research Div.] [Electricite de France, Clamart (France). Study and Research Div.

    1996-08-01T23:59:59.000Z

    To achieve better voltage-var control in the electric power transmission system, different facilities are used. Generators are equipped with automatic voltage regulators to cope with sudden and random voltage changes caused by natural load fluctuations or failures. Other devices like capacitors, inductors, and transformers with on-load tap changers are installed on the network. Faced with the evolution of the network and operating conditions, electricity utilities are increasingly interested in overall, coherent control systems, automatic or not. These systems are expected to coordinate the actions of local facilities for better voltage control (more stable and faster reaction) inside different areas of the network in case of greater voltage and var variations. They also afford better use of existing reactive resources, and installation of new devices can be avoided, saving investment. With this frame of mind, EDF has designed a system called Co-ordinated Secondary Voltage Control (CSVC). It is an automatic closed-loop system with a dynamic of a few minutes. It takes into account the network conditions (topology, loads), the voltage limits and the generator operating constraints. This paper presents recent improvements which allow the CSVC to control the voltage profile and different kinds of reactive means on a large-scale power system. Furthermore, it presents a solution to spread investment costs over several years through a gradually extended deployment.

  8. Large-scale star formation in the Magellanic Clouds

    E-Print Network [OSTI]

    Jochen M. Braun

    2001-08-03T23:59:59.000Z

    In this contribution I will present the current status of our project of stellar population analyses and spatial information of both Magellanic Clouds (MCs). The Magellanic Clouds - especially the LMC with its large size and small depth (<300 pc) - are suitable laboratories and testing grounds for theoretical models of star formation. With distance moduli of 18.5 and 18.9 mag for the LMC and SMC, respectively, and small galactic extinction, their stellar content can be studied in detail from the most massive stars of the youngest populations (<25 Myr) connected to H-alpha emission down to the low-mass end of about 1/10 of a solar mass. Based on broad-band photometry (U,B,V), I present results for the supergiant shell (SGS) SMC1, some regions at the LMC east side incl. LMC2 showing different overlapping young populations, the region around N171 with its large and varying colour excess, and LMC4. This best-studied SGS shows a coeval population aged about 12 Myr with little age spread and no correlation with distance from LMC4's centre. I will show that the available data are not compatible with many of the proposed scenarios like SSPSF or a central trigger (like a cluster or GRB), while a large-scale trigger like the bow-shock of the rotating LMC can do the job.

  9. Testing Inflation with Large Scale Structure: Connecting Hopes with Reality

    E-Print Network [OSTI]

    Marcelo Alvarez; Tobias Baldauf; J. Richard Bond; Neal Dalal; Roland de Putter; Olivier Doré; Daniel Green; Chris Hirata; Zhiqi Huang; Dragan Huterer; Donghui Jeong; Matthew C. Johnson; Elisabeth Krause; Marilena Loverde; Joel Meyers; P. Daniel Meerburg; Leonardo Senatore; Sarah Shandera; Eva Silverstein; Anže Slosar; Kendrick Smith; Matias Zaldarriaga; Valentin Assassi; Jonathan Braden; Amir Hajian; Takeshi Kobayashi; George Stein; Alexander van Engelen

    2014-12-15T23:59:59.000Z

    The statistics of primordial curvature fluctuations are our window into the period of inflation, where these fluctuations were generated. To date, the cosmic microwave background has been the dominant source of information about these perturbations. Large-scale structure, however, is where drastic improvements should originate. In this paper, we explain the theoretical motivations for pursuing such measurements and the challenges that lie ahead. In particular, we discuss and identify theoretical targets regarding the measurement of primordial non-Gaussianity. We argue that when quantified in terms of the local (equilateral) template amplitude $f_{\\rm NL}^{\\rm loc}$ ($f_{\\rm NL}^{\\rm eq}$), natural target levels of sensitivity are $\\Delta f_{\\rm NL}^{\\rm loc, eq.} \\simeq 1$. We highlight that such levels are within reach of future surveys by measuring 2-, 3- and 4-point statistics of the galaxy spatial distribution. This paper summarizes a workshop held at CITA (University of Toronto) on October 23-24, 2014.

  10. Giant radio galaxies - II. Tracers of large-scale structure

    E-Print Network [OSTI]

    Malarecki, J M; Saripalli, L; Staveley-Smith, L; Subrahmanyan, R

    2015-01-01T23:59:59.000Z

    We have carried out optical spectroscopy with the Anglo-Australian Telescope for 24,726 objects surrounding a sample of 19 Giant Radio Galaxies (GRGs) selected to have redshifts in the range 0.05 to 0.15 and projected linear sizes from 0.8 to 3.2 Mpc. Such radio galaxies are ideal candidates to study the Warm-Hot Intergalactic Medium (WHIM) because their radio lobes extend beyond the ISM and halos of their host galaxies, and into the tenuous IGM. We were able to measure redshifts for 9,076 galaxies. Radio imaging of each GRG, including high-sensitivity, wideband radio observations from the Australia Telescope Compact Array for 12 GRGs and host optical spectra (presented in a previous paper, Malarecki et al. 2013), is used in conjunction with the surrounding galaxy redshifts to trace large-scale structure. We find that the mean galaxy number overdensity in volumes of ~700 Mpc$^3$ near the GRG host galaxies is ~70 indicating an overdense but non-virialized environment. A Fourier component analysis is used to qu...

  11. Large Scale Obscuration and Related Climate Effects Workshop: Proceedings

    SciTech Connect (OSTI)

    Zak, B.D.; Russell, N.A.; Church, H.W.; Einfeld, W.; Yoon, D.; Behl, Y.K. [eds.

    1994-05-01T23:59:59.000Z

    A Workshop on Large Scale Obscuration and Related Climate Effects was held 29--31 January, 1992, in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly-available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

  12. Large-Scale Molecular Dynamics Simulations for Highly Parallel Infrastructures

    E-Print Network [OSTI]

    Pazúriková, Jana

    2014-01-01T23:59:59.000Z

    Computational chemistry allows researchers to experiment in silico: by running computer simulations of biological or chemical processes of interest. Molecular dynamics with a molecular-mechanics model of interactions simulates the N-body problem of atoms: it computes the movements of atoms according to Newtonian physics and empirical descriptions of atomic electrostatic interactions. These simulations require high-performance computing resources, as evaluations within each step are computationally demanding and billions of steps are needed to reach interesting timescales. Current methods decompose the spatial domain of the problem and calculate on parallel/distributed infrastructures. Even the methods with the highest strong scaling hit a limit at half a million cores: they are not able to cut the time to result if provided with more processors. At the dawn of exascale computing with massively parallel computational resources, we want to increase the level of parallelism by incorporating parallel-in-time comput...
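    The time stepping the abstract alludes to can be sketched with velocity-Verlet integration; the harmonic spring below is a hypothetical stand-in for the empirical force fields real MD codes use:

```python
# Minimal sketch of an MD integration loop: velocity-Verlet time stepping
# for two particles bound by a harmonic spring (a hypothetical stand-in
# for the empirical force-field evaluations done at every step in real MD).

def forces(x, k=1.0, x0=1.0):
    """Spring force on each of the two particles at positions x[0], x[1]."""
    f = k * ((x[1] - x[0]) - x0)
    return [f, -f]

def velocity_verlet(x, v, m, dt, nsteps):
    f = forces(x)
    for _ in range(nsteps):
        v = [vi + 0.5 * dt * fi / mi for vi, fi, mi in zip(v, f, m)]
        x = [xi + dt * vi for xi, vi in zip(x, v)]
        f = forces(x)                      # the expensive call in real MD
        v = [vi + 0.5 * dt * fi / mi for vi, fi, mi in zip(v, f, m)]
    return x, v

def energy(x, v, k=1.0, x0=1.0):
    return 0.5 * sum(vi * vi for vi in v) + 0.5 * k * ((x[1] - x[0]) - x0) ** 2

# Start slightly stretched; symplectic integration keeps energy nearly constant.
x, v = velocity_verlet([0.0, 1.2], [0.0, 0.0], [1.0, 1.0], dt=0.01, nsteps=1000)
print(abs(energy(x, v) - 0.02) < 1e-4)     # True: energy conserved
```

Spatial-domain decomposition, as described in the abstract, parallelizes the force evaluation inside each step; parallel-in-time methods instead try to compute many such steps concurrently.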

  13. Model Abstraction Techniques for Large-Scale Power Systems

    E-Print Network [OSTI]

    Report on System Simulation using High Performance Computing Prepared by New Mexico Tech New Mexico: Application of High Performance Computing to Electric Power System Modeling, Simulation and Analysis Task Two

  14. A Distribution Oblivious Scalable Approach for Large-Scale Scientific Data

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  15. Large-Scale Spray Releases: Initial Aerosol Test Results

    SciTech Connect (OSTI)

    Schonewill, Philip P.; Gauglitz, Phillip A.; Bontha, Jagannadha R.; Daniel, Richard C.; Kurath, Dean E.; Adkins, Harold E.; Billing, Justin M.; Burns, Carolyn A.; Davis, James M.; Enderlin, Carl W.; Fischer, Christopher M.; Jenks, Jeromy WJ; Lukins, Craig D.; MacFarlan, Paul J.; Shutthanandan, Janani I.; Smith, Dennese M.

    2012-12-01T23:59:59.000Z

    One of the events postulated in the hazard analysis at the Waste Treatment and Immobilization Plant (WTP) and other U.S. Department of Energy (DOE) nuclear facilities is a breach in process piping that produces aerosols with droplet sizes in the respirable range. The current approach for predicting the size and concentration of aerosols produced in a spray leak involves extrapolating from correlations reported in the literature. These correlations are based on results obtained from small engineered spray nozzles using pure liquids with Newtonian fluid behavior. The narrow ranges of physical properties on which the correlations are based do not cover the wide range of slurries and viscous materials that will be processed in the WTP and across processing facilities in the DOE complex. Two key technical areas were identified where testing results were needed to improve the technical basis by reducing the uncertainty due to extrapolating existing literature results. The first technical need was to quantify the role of slurry particles in small breaches where the slurry particles may plug and result in substantially reduced, or even negligible, respirable fraction formed by high-pressure sprays. The second technical need was to determine the aerosol droplet size distribution and volume from prototypic breaches and fluids, specifically including sprays from larger breaches with slurries where data from the literature are scarce. To address these technical areas, small- and large-scale test stands were constructed and operated with simulants to determine aerosol release fractions and generation rates from a range of breach sizes and geometries. The properties of the simulants represented the range of properties expected in the WTP process streams and included water, sodium salt solutions, slurries containing boehmite or gibbsite, and a hazardous chemical simulant. The effect of anti-foam agents was assessed with most of the simulants. 
Orifices included round holes and rectangular slots. The round holes ranged in size from 0.2 to 4.46 mm. The slots ranged from (width × length) 0.3 × 5 to 2.74 × 76.2 mm. Most slots were oriented longitudinally along the pipe, but some were oriented circumferentially. In addition, a limited number of multi-hole test pieces were tested in an attempt to assess the impact of a more complex breach. Much of the testing was conducted at pressures of 200 and 380 psi, but some tests were conducted at 100 psi. Testing the largest postulated breaches was deemed impractical because of the large size of some of the WTP equipment. The purpose of this report is to present the experimental results and analyses for the aerosol measurements obtained in the large-scale test stand. The report includes a description of the simulants used and their properties, equipment and operations, data analysis methodology, and test results. The results of tests investigating the role of slurry particles in plugging of small breaches are reported in Mahoney et al. (2012a). The results of the aerosol measurements in the small-scale test stand are reported in Mahoney et al. (2012b).

  16. Smart Libraries: Best SQE Practices for Libraries with an Emphasis on Scientific Computing

    SciTech Connect (OSTI)

    Miller, M C; Reus, J F; Matzke, R P; Koziol, Q A; Cheng, A P

    2004-12-15T23:59:59.000Z

    As scientific computing applications grow in complexity, more and more functionality is being packaged in independently developed libraries. Worse, as the computing environments in which these applications run grow in complexity, it gets easier to make mistakes in building, installing and using libraries as well as the applications that depend on them. Unfortunately, SQA standards so far developed focus primarily on applications, not libraries. We show that SQA standards for libraries differ from applications in many respects. We introduce and describe a variety of practices aimed at minimizing the likelihood of making mistakes in using libraries and at maximizing users' ability to diagnose and correct them when they occur. We introduce the term Smart Library to refer to a library that is developed with these basic principles in mind. We draw upon specific examples from existing products we believe incorporate smart features: MPI, a parallel message passing library, and HDF5 and SAF, both of which are parallel I/O libraries supporting scientific computing applications. We conclude with a narrative of some real-world experiences in using smart libraries with Ale3d, VisIt and SAF.

  17. Inflationary tensor fossils in large-scale structure

    E-Print Network [OSTI]

    Emanuela Dimastrogiovanni; Matteo Fasiello; Donghui Jeong; Marc Kamionkowski

    2014-07-30T23:59:59.000Z

    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  18. Large-Scale Data Challenges in Future Power Grids

    SciTech Connect (OSTI)

    Yin, Jian; Sharma, Poorva; Gorton, Ian; Akyol, Bora A.

    2013-03-25T23:59:59.000Z

    This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustments based on real time data. Even though millions of sensors such as phasor measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing systems can satisfy all the requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality of service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL’s FPGI and was shown to be able to integrate huge amounts of data from a large number of sensors and a diverse set of applications.

  19. POET (parallel object-oriented environment and toolkit) and frameworks for scientific distributed computing

    SciTech Connect (OSTI)

    Armstrong, R.; Cheung, A.

    1997-01-01T23:59:59.000Z

    Frameworks for parallel computing have recently become popular as a means for preserving parallel algorithms as reusable components. Frameworks for parallel computing in general, and POET in particular, focus on finding ways to orchestrate and facilitate cooperation between components that implement the parallel algorithms. Since performance is a key requirement for POET applications, CORBA or CORBA-like systems are eschewed for a SPMD message-passing architecture common to the world of distributed-parallel computing. Though the system is written in C++ for portability, the behavior of POET is more like a classical framework, such as Smalltalk. POET seeks to be a general platform for scientific parallel algorithm components which can be modified, linked, mixed and matched to a user's specification. The purpose of this work is to identify a means for parallel code reuse and to make parallel computing more accessible to scientists whose expertise is outside the field of parallel computing. The POET framework provides two things: (1) an object model for parallel components that allows cooperation without being restrictive; (2) services that allow components to access and manage user data and message-passing facilities, etc. This work has evolved through application of a series of real distributed-parallel scientific problems. The paper focuses on what is required for parallel components to cooperate and at the same time remain "black boxes" that users can drop into the frame without having to know the exquisite details of message-passing, data layout, etc. The paper walks through a specific example of a chemically reacting flow application. The example is implemented in POET and the authors identify component cooperation, usability and reusability in an anecdotal fashion.

  20. Distortive Effects of Initial-Based Name Disambiguation on Measurements of Large-Scale Coauthorship Networks

    E-Print Network [OSTI]

    Kim, Jinseok

    2015-01-01T23:59:59.000Z

    Scholars have often relied on name initials to resolve name ambiguities in large-scale coauthorship network research. This approach bears the risk of incorrectly merging or splitting author identities. The use of initial-based disambiguation has been justified by the assumption that such errors would not affect research findings too much. This paper tests this assumption by analyzing coauthorship networks from five academic fields - biology, computer science, nanoscience, neuroscience, and physics - and an interdisciplinary journal, PNAS. Name instances in datasets of this study were disambiguated based on heuristics gained from previous algorithmic disambiguation solutions. We use disambiguated data as a proxy of ground-truth to test the performance of three types of initial-based disambiguation. Our results show that initial-based disambiguation can misrepresent statistical properties of coauthorship networks: it deflates the number of unique authors, number of components, average shortest paths, clustering ...
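    The merging error the abstract describes is easy to reproduce; the sketch below (all names invented for illustration) collapses full names to a first-initial key and shows how the count of unique authors deflates:

```python
from collections import defaultdict

# Sketch of the distortion the paper measures: reducing full names to
# "first initial + surname" merges distinct authors into one identity,
# deflating the number of unique authors in the network.

full_names = ["Jinseok Kim", "Jaehyun Kim", "Jinseok Kim",
              "Maria Garcia", "Manuel Garcia", "Ada Lovelace"]

def initial_key(name):
    first, last = name.split()
    return f"{first[0]}. {last}"        # e.g. "J. Kim"

merged = defaultdict(set)
for name in full_names:
    merged[initial_key(name)].add(name)

print(len(set(full_names)))             # → 5 distinct full names
print(len(merged))                      # → 3 initial-based identities
```

Here "Jinseok Kim" and "Jaehyun Kim" both collapse to "J. Kim", and the two Garcias to "M. Garcia", so five authors appear as three nodes; in a real network this also distorts component counts, path lengths, and clustering.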

  1. A first large-scale flood inundation forecasting model

    SciTech Connect (OSTI)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04T23:59:59.000Z

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) compared to an observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.
However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead time performances notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.

  2. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    E-Print Network [OSTI]

    Andreeva, J; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D

    2012-01-01T23:59:59.000Z

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provides an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Comp...

  3. A Topological Framework for the Interactive Exploration of Large Scale Turbulent Combustion

    E-Print Network [OSTI]

    Bremer, Peer-Timo

    2010-01-01T23:59:59.000Z

    comparison of terascale combustion simulation data. Mathe-premixed hydrogen flames. Combustion and Flame, [7] J. L. of Large Scale Turbulent Combustion Peer-Timo Bremer 1 ,

  4. Performance Engineering: Understanding and Improving the Performance of Large-Scale Codes

    E-Print Network [OSTI]

    2008-01-01T23:59:59.000Z

    An API for Runtime Code Patching,” Journal of Highof the Conference on Code Generation and Optimization,Performance of Large-Scale Codes David H. Bailey 1 , Robert

  5. A Large-Scale Sentiment Analysis for Yahoo! Answers Onur Kucuktunc

    E-Print Network [OSTI]

    Ferhatosmanoglu, Hakan

    and Behavioral Sciences]: Psychology, Sociology General Terms Design, Experimentation, Human Factors, MeasurementA Large-Scale Sentiment Analysis for Yahoo! Answers Onur Kucuktunc The Ohio State University

  6. a min-max regret robust optimization approach for large scale full ...

    E-Print Network [OSTI]

    admin

    2007-07-20T23:59:59.000Z

    the full-factorial scenario design of data uncertainty. The proposed algorithm is shown to be efficient for solving large-scale min-max regret robust optimization ...
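    The abstract is truncated, but the min-max regret criterion it names can be illustrated on a toy instance (all costs invented): for each decision, the regret under a scenario is its cost minus the best cost achievable in that scenario, and the robust choice minimizes the worst-case regret:

```python
# Tiny illustration of min-max regret robust optimization over a
# full-factorial scenario set. Costs are invented for illustration.

costs = {                      # costs[decision][scenario]
    "A": [10, 15, 12],
    "B": [12, 11, 15],
    "C": [13, 12, 11],
}

scenarios = range(3)
# Best achievable cost in each scenario, over all decisions: [10, 11, 11]
best = [min(costs[d][s] for d in costs) for s in scenarios]

# Worst-case (maximum) regret of each decision across scenarios.
max_regret = {d: max(costs[d][s] - best[s] for s in scenarios)
              for d in costs}

robust = min(max_regret, key=max_regret.get)
print(robust, max_regret[robust])       # → C 3
```

Decision "C" is never the cheapest in every scenario, but its worst-case regret (3) beats "A" and "B" (both 4); large-scale versions of this problem need specialized algorithms because the scenario set grows exponentially.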

  7. Computer Assisted Parallel Program Generation

    E-Print Network [OSTI]

    Kawata, Shigeo

    2015-01-01T23:59:59.000Z

    Parallel computation is widely employed in scientific research, engineering activities and product development. Writing a parallel program is not always a simple task, depending on the problem being solved. Large-scale scientific computing, huge data analyses and precise visualizations, for example, require parallel computation, and parallel computing needs parallelization techniques. In this chapter a parallel program generation support is discussed, and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovations in science and engineering, and contributes to enriching our society and our life toward a programming-free environment in computing science. Research on problem-solving environments (PSEs) started in the 1970s to enhance programming power. P-NCAS is one of these PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  8. Final Technical Report - Center for Technology for Advanced Scientific Component Software (TASCS)

    SciTech Connect (OSTI)

    Sussman, Alan [University of Maryland]

    2014-10-21T23:59:59.000Z

    This is a final technical report for the University of Maryland work in the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS). The Maryland work focused on software tools for coupling parallel software components built using the Common Component Architecture (CCA) APIs. Those tools are based on the Maryland InterComm software framework that has been used in multiple computational science applications to build large-scale simulations of complex physical systems that employ multiple separately developed codes.

  9. Using LINUX/Mac Computers for Scientific Computing The easiest way to save, edit, compile, and run a program, save the output,

    E-Print Network [OSTI]

    Gardner, Carl

    Using LINUX/Mac Computers for Scientific Computing The easiest way to save, edit, compile, and run a program, save the output, and then graph the output using MATLAB is to work on a LINUX or Mac (Mac OS X for saving, editing, compiling, and running program.c, and graphing the output with MATLAB and Fig

  10. LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS

    SciTech Connect (OSTI)

    James E. O'Brien

    2010-08-01T23:59:59.000Z

    Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells, running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a “hydrogen economy.” The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.
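    The abstract's laboratory figures (15 kW, more than 5500 L/hr of hydrogen) can be sanity-checked with a back-of-envelope calculation; the standard-conditions molar volume used below is an assumption, since the abstract does not state the basis of the volume figure:

```python
# Back-of-envelope check on the abstract's figures: 15 kW of electricity
# producing > 5500 L/hr of hydrogen. Assuming the volume is quoted at
# standard conditions (22.414 L/mol, an assumed basis), the electrical
# energy per mole comes out below the full enthalpy of water splitting
# (ΔH ≈ 286 kJ/mol HHV), the balance being supplied as process heat.

POWER_W = 15e3                 # electrolysis stack electrical power
RATE_L_PER_HR = 5500           # hydrogen production rate from the abstract
MOLAR_VOLUME_L = 22.414        # ideal gas at 0 °C, 1 atm (assumption)
DELTA_H_J_PER_MOL = 286e3      # higher heating value of hydrogen

mol_per_s = RATE_L_PER_HR / MOLAR_VOLUME_L / 3600
elec_J_per_mol = POWER_W / mol_per_s

print(round(elec_J_per_mol / 1e3))          # → 220 kJ/mol of electricity
print(elec_J_per_mol < DELTA_H_J_PER_MOL)   # → True
```

The electricity-only figure of roughly 220 kJ/mol sits below the 286 kJ/mol (HHV) needed to split water, consistent with the abstract's point that high-temperature process heat supplies part of the energy and lifts the overall thermal-to-hydrogen efficiency.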

  11. NESC-VII: Fracture Mechanics Analyses of WPS Experiments on Large-scale Cruciform Specimen

    SciTech Connect (OSTI)

    Yin, Shengjun [ORNL; Williams, Paul T [ORNL; Bass, Bennett Richard [ORNL

    2011-01-01T23:59:59.000Z

    This paper describes numerical analyses performed to simulate warm pre-stress (WPS) experiments conducted with large-scale cruciform specimens within the Network for Evaluation of Structural Components (NESC-VII) project. NESC-VII is a European cooperative action in support of WPS application in reactor pressure vessel (RPV) integrity assessment. The project aims to evaluate the influence of WPS when assessing the structural integrity of RPVs. Advanced fracture mechanics models will be developed and applied to validate experiments concerning the effect of different WPS scenarios on RPV components. The Oak Ridge National Laboratory (ORNL), USA contributes to the Work Package-2 (Analyses of WPS experiments) within the NESC-VII network. A series of WPS type experiments on large-scale cruciform specimens have been conducted at CEA Saclay, France, within the framework of the NESC-VII project. This paper first describes NESC-VII feasibility test analyses conducted at ORNL. Very good agreement was achieved between AREVA NP SAS and ORNL. Further analyses were conducted to evaluate the NESC-VII WPS tests conducted under Load-Cool-Transient-Fracture (LCTF) and Load-Cool-Fracture (LCF) conditions. The objective of this work is to provide a definitive quantification of WPS effects when assessing the structural integrity of reactor pressure vessels. This information will be utilized to further validate, refine, and improve the WPS models that are being used in probabilistic fracture mechanics computer codes now in use by the NRC staff in their effort to develop risk-informed updates to Title 10 of the U.S. Code of Federal Regulations (CFR), Part 50, Appendix G.

  12. RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND

    SciTech Connect (OSTI)

    Chokchai "Box" Leangsuksun

    2011-05-31T23:59:59.000Z

    Our project is a multi-institutional research effort that adopts an interplay of RELIABILITY, AVAILABILITY, and SERVICEABILITY (RAS) aspects to solve resilience issues in high-end scientific computing for the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.

  13. Computing Resources | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    is dedicated to large-scale computation and builds on Argonne's strengths in high-performance computing software, advanced hardware architectures and applications expertise. It...

  14. Model Reduction of Large-Scale Dynamical A. Antoulas1

    E-Print Network [OSTI]

    Van Dooren, Paul

    , and structural response of high-rise buildings to wind and earthquakes), electronic circuit simulation systems in which real- time prognostics and prediction are required, constraints on compute power, memory response characteristics. Two main themes can be identified among several methodologies: (a) balancing

  15. Atmospheric perturbations of large-scale nuclear war

    SciTech Connect (OSTI)

    Malone, R.C.

    1985-01-01T23:59:59.000Z

    Computer simulations of the injection into the atmosphere of a large quantity of smoke following a nuclear war are described. The focus is on what might happen to the smoke after it enters the atmosphere and what changes, or perturbations, could be induced in the atmospheric structure and circulation by the presence of a large quantity of smoke. 4 refs., 7 figs. (ACR)

  16. Future Generation Computer Systems 16 (1999) 920 An extensible information model for shared scientific data collections

    E-Print Network [OSTI]

    Gupta, Amarnath

    1999-01-01T23:59:59.000Z

    scientific data collections Amarnath Gupta , Chaitanya Baru San Diego Supercomputer Center, University

  17. Reconfiguration-Assisted Charging in Large-Scale Lithium-ion Battery Systems

    E-Print Network [OSTI]

    Reconfiguration-Assisted Charging in Large-Scale Lithium-ion Battery Systems Liang He1 , Linghe, TX, USA ABSTRACT Large-scale Lithium-ion batteries are widely adopted in many systems and heterogeneous discharging conditions, cells in the battery system may have different statuses

  18. Room-temperature stationary sodium-ion batteries for large-scale electric energy storage

    E-Print Network [OSTI]

    Wang, Wei Hua

    Room-temperature stationary sodium-ion batteries for large-scale electric energy storage Huilin Pan attention particularly in large-scale electric energy storage applications for renewable energy and smart storage system in the near future. Broader context With the rapid development of renewable energy sources

  19. An Energy-Efficient Framework for Large-Scale Parallel Storage Systems

    E-Print Network [OSTI]

    Qin, Xiao

    An Energy-Efficient Framework for Large-Scale Parallel Storage Systems Ziliang Zong, Matt Briggs-scale and energy-efficient parallel storage systems. To validate the efficiency of the proposed framework, a buffer that this new framework can significantly improve the energy efficiency of large-scale parallel storage systems

  20. Wireless Ventilation Control for Large-Scale Systems: the Mining Industrial Case

    E-Print Network [OSTI]

    Boyer, Edmond

    Wireless Ventilation Control for Large-Scale Systems: the Mining Industrial Case E. Witrant1,, A. D, for large scale systems with high environmental impact: the mining ventilation control systems. Ventilation). We propose a new model for underground ventilation. The main components of the system dynamics

  1. Parallelisation of the revised simplex method for general large scale LP problems

    E-Print Network [OSTI]

    Hall, Julian

    Parallelisation of the revised simplex method for general large scale LP problems. Julian Hall, School of Mathematics, University of Edinburgh, August 9-10 2005. Overview: The (standard and revised) simplex method

  2. Random Features for Large-Scale Kernel Machines Intel Research Seattle

    E-Print Network [OSTI]

    Kim, Tae-Kyun

    Random Features for Large-Scale Kernel Machines Ali Rahimi Intel Research Seattle Seattle, WA 98105 products of the transformed data are approximately equal to those in the feature space of a user specified on their ability to approximate various radial basis kernels, and show that in large-scale classification
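
    The record above concerns approximating kernel machines with randomized feature maps. A rough sketch of that general idea (not the authors' code; the sizes and `gamma` value below are arbitrary) draws random Fourier features for an RBF kernel so that inner products of the mapped data approximate the kernel:

    ```python
    import numpy as np

    def rff_features(X, n_features=500, gamma=1.0, seed=0):
        """Random Fourier features: inner products of the mapped data
        approximate the RBF kernel exp(-gamma * ||x - y||^2)."""
        rng = np.random.default_rng(seed)
        # Frequencies sampled from the Fourier transform of the RBF kernel
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(3, 5))
    Z = rff_features(X, n_features=20000, gamma=0.5)
    approx = Z @ Z.T                                       # approximate kernel matrix
    exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
    print(np.abs(approx - exact).max())                    # small approximation error
    ```

    A linear classifier trained on `Z` then stands in for a non-linear kernel SVM at large scale, which is the thrust of the record.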

  3. Hash-SVM: Scalable Kernel Machines for Large-Scale Visual Classification , Shih-Fu Chang

    E-Print Network [OSTI]

    Chang, Shih-Fu

    Hash-SVM: Scalable Kernel Machines for Large-Scale Visual Classification Yadong Mu , Gang Hua , Wei the efficiency of non-linear kernel SVM in very large scale visual classification problems. Our key idea be transformed into solving a linear SVM over the hash bits. The proposed Hash-SVM enjoys dramatic storage cost

  4. Automatic Construction of Large-Scale Regular Expression Matching Engines on FPGA

    E-Print Network [OSTI]

    Prasanna, Viktor K.

    Automatic Construction of Large-Scale Regular Expression Matching Engines on FPGA Yi-Hua E. Yang@usc.edu, prasanna@usc.edu Abstract--We present algorithms for implementing large-scale regular expression matching (REM) on FPGA. Based on the proposed algorithms, we develop tools that first transform regular

  5. Hamming embedding and weak geometric consistency for large scale image search

    E-Print Network [OSTI]

    Verbeek, Jakob

    Hamming embedding and weak geometric consistency for large scale image search. Hervé Jégou, Matthijs improves recent methods for large scale image search. State-of-the-art methods build on the bag large datasets. Experiments performed on a dataset of one million of images show a significant

  6. Large-Scale FPGA-based Convolutional Networks Clement Farabet1

    E-Print Network [OSTI]

    LeCun, Yann

    Large-Scale FPGA-based Convolutional Networks Clément Farabet1 , Yann LeCun1 , Koray Kavukcuoglu1, New Haven, USA. Chapter in Machine Learning on Very Large Data Sets, edited by Ron Bekkerman, Mikhail Bilenko, and John Langford, Cambridge University Press, 2011. May 2, 2011

  7. Evaluation of Segmentation Techniques for Inventory Management in Large Scale Multi-Item Inventory Systems1

    E-Print Network [OSTI]

    Rossetti, Manuel D.

    1 Evaluation of Segmentation Techniques for Inventory Management in Large Scale Multi-Item Inventory Systems1 Manuel D. Rossetti2 , Ph. D., P. E. Department of Industrial Engineering University of their inventory policies in a large-scale multi-item inventory system. Conventional inventory segmentation

  8. Electronic Properties of Large-scale Graphene Films Chemical Vapor Synthesized on Nickel and on Copper

    E-Print Network [OSTI]

    Chen, Yong P.

    transport properties of graphene films grown on Ni and Cu. Sample Preparation: the synthesis of large-scale graphene films grown by chemical vapor synthesis on Ni and Cu, and then transferred to SiO2

  9. QA-Pagelet: Data Preparation Techniques for Large Scale Data Analysis of the Deep Web

    E-Print Network [OSTI]

    Liu, Ling

    1 QA-Pagelet: Data Preparation Techniques for Large Scale Data Analysis of the Deep Web James data preparation technique for large scale data analysis of the Deep Web. To support QA the Deep Web. Two unique features of the Thor framework are (1) the novel page clustering for grouping

  10. QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web

    E-Print Network [OSTI]

    Caverlee, James

    QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web James the QA-Pagelet as a fundamental data preparation technique for large-scale data analysis of the Deep Web-Pagelets from the Deep Web. Two unique features of the Thor framework are 1) the novel page clustering

  11. LARGE-SCALE MOVEMENT PATTERNS OF MALE LOGGERHEAD SEA TURTLES (CARETTA CARETTA)

    E-Print Network [OSTI]

    LARGE-SCALE MOVEMENT PATTERNS OF MALE LOGGERHEAD SEA TURTLES (CARETTA CARETTA) IN SHARK BAY Management Title of Thesis: Large-Scale Movement Patterns of Male Loggerhead Sea Turtles (Caretta caretta) in Shark Bay, Australia Report No. 524 Examining Committee: Chair: Christine Gruman Master of Resource

  12. LETTER doi:10.1038/nature11727 Large-scale nanophotonic phased array

    E-Print Network [OSTI]

    Reif, Rafael

    and astronomy1. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation

  13. PENMAN Upper Model Building a LargeScale Knowledge Base for Machine Translation

    E-Print Network [OSTI]

    Knight, Kevin

    Abstract: PENMAN Upper Model, Building a Large-Scale Knowledge Base for Machine Translation ... engineer to build up an index to a KB in a second language, such as Spanish or Japanese. USC is a three-site collaborative effort to build a large-scale knowledge-based machine translation system

  14. The Anatomy of a Large-Scale Hypertextual Web Search Sergey Brin and Lawrence Page

    E-Print Network [OSTI]

    Matwin, Stan

    This paper addresses this question of how to build a practical large-scale system which can exploit of googol, or 10^100, and fits well with our goal of building very large-scale search engines. 1.1 Web Search Engines -- Scaling Up: 1994 - 2000 Search engine technology has had to scale dramatically to keep up
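
    Brin and Page's system ranks pages by link structure. A minimal power-iteration PageRank (a toy sketch of the core idea, not the production algorithm the paper describes) looks like:

    ```python
    import numpy as np

    def pagerank(links, d=0.85, tol=1e-10):
        """Power-iteration PageRank over an adjacency dict {page: [outlinks]}.
        Assumes every page appears as a key; dangling pages spread rank evenly."""
        pages = sorted(links)
        idx = {p: i for i, p in enumerate(pages)}
        n = len(pages)
        r = np.full(n, 1.0 / n)
        while True:
            nxt = np.full(n, (1.0 - d) / n)    # teleportation term
            for p, outs in links.items():
                if outs:
                    share = d * r[idx[p]] / len(outs)
                    for q in outs:
                        nxt[idx[q]] += share
                else:                           # dangling page
                    nxt += d * r[idx[p]] / n
            if np.abs(nxt - r).sum() < tol:
                return dict(zip(pages, nxt))
            r = nxt

    ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
    print(max(ranks, key=ranks.get))  # c: it collects links from both a and b
    ```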

  15. Harvesting Clean Energy How California Can Deploy Large-Scale Renewable

    E-Print Network [OSTI]

    Kammen, Daniel M.

    Harvesting Clean Energy How California Can Deploy Large-Scale Renewable Energy Projects Harvesting Clean Energy: How California Can Deploy Large-Scale Renewable Energy Projects on Appropriate acres of impaired lands in the Westlands Water District in the Central Valley may soon have

  16. Large-Scale Integration of Deferrable Demand and Renewable Energy Sources

    E-Print Network [OSTI]

    Oren, Shmuel S.

    1 Large-Scale Integration of Deferrable Demand and Renewable Energy Sources Anthony Papavasiliou. In order to accurately assess the impacts of renewable energy integration and demand response integration model for assessing the impacts of the large-scale integration of renewable energy sources

  17. Optimal Selection of AC Cables for Large Scale Offshore Wind Farms

    E-Print Network [OSTI]

    Hu, Weihao

    Optimal Selection of AC Cables for Large Scale Offshore Wind Farms Peng Hou, Weihao Hu, Zhe Chen@et.aau.dk, whu@iet.aau.dk, zch@iet.aau.dk Abstract--The investment of large scale offshore wind farms is high the operational requirements of the offshore wind farms and the connected power systems. In this paper, a new cost

  18. Impacts of Large-Scale Wind Generators Penetration on the Voltage Stability of Power Systems

    E-Print Network [OSTI]

    Pota, Himanshu Roy

    development of wind energy technology and the current world-wide status of grid-connected as well as stand

  19. Autonomous and Energy-Aware Management of Large-Scale Cloud Infrastructures

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Autonomous and Energy-Aware Management of Large-Scale Cloud Infrastructures. Eugen Feller. (i.e. self-organization and healing); (3) energy-awareness. However, existing open-source cloud management, and energy-aware resource management frameworks for large-scale cloud infrastructures. Particularly, a novel

  20. Large Scale Wind Turbine Siting Map Report NJ Department of Environmental Protection

    E-Print Network [OSTI]

    Holberton, Rebecca L.

    Large Scale Wind Turbine Siting Map Report NJ Department of Environmental Protection September 8 Jersey Department of Environmental Protection's (NJDEP) "Large Scale Wind Turbine Siting Map Management rules to address the development and permitting of wind turbines in the coastal zone

  1. Parallel domain decomposition for simulation of large-scale power grids

    E-Print Network [OSTI]

    Mohanram, Kartik

    of large-scale linear circuits such as power grids. DD techniques that use non-overlapping and overlap that with the proposed parallel DD framework, existing linear circuit simulators can be extended to handle large-scale can be solved independently in parallel using standard techniques for linear system analysis

  2. Evaluating the Potential for Large-Scale Biodiesel Deployments in a Global Context

    E-Print Network [OSTI]

    Wisconsin at Madison, University of

    Evaluating the Potential for Large-Scale Biodiesel Deployments in a Global Context, by Matthew Johnston. All rights reserved. ... on the subject of biodiesel, but I can only hope she takes comfort knowing how much I appreciate everything she

  3. A Protocol for the Atomic Capture of Multiple Molecules on Large Scale Platforms

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    A Protocol for the Atomic Capture of Multiple Molecules on Large Scale Platforms Marin Bertier services. Envisioned over largely distributed and highly dynamic platforms, expressing this coordination ... coordination of services. However, the execution of such programs over large scale platforms raises several

  4. Feasibility of Large-Scale Ocean CO2 Sequestration

    SciTech Connect (OSTI)

    Peter Brewer

    2008-08-31T23:59:59.000Z

    Scientific knowledge of natural clathrate hydrates has grown enormously over the past decade, with spectacular new findings of large exposures of complex hydrates on the sea floor, the development of new tools for examining the solid phase in situ, significant progress in modeling natural hydrate systems, and the discovery of exotic hydrates associated with sea floor venting of liquid CO{sub 2}. Major unresolved questions remain about the role of hydrates in response to climate change today, and correlations between the hydrate reservoir of Earth and the stable isotopic evidence of massive hydrate dissociation in the geologic past. The examination of hydrates as a possible energy resource is proceeding apace for the subpermafrost accumulations in the Arctic, but serious questions remain about the viability of marine hydrates as an economic resource. New and energetic explorations by nations such as India and China are quickly uncovering large hydrate findings on their continental shelves. In this report we detail research carried out in the period October 1, 2007 through September 30, 2008. The primary body of work is contained in a formal publication attached as Appendix 1 to this report. In brief we have surveyed the recent literature with respect to the natural occurrence of clathrate hydrates (with a special emphasis on methane hydrates), the tools used to investigate them and their potential as a new source of natural gas for energy production.

  5. Large-scale functional models of visual cortex for remote sensing

    SciTech Connect (OSTI)

    Brumby, Steven P [Los Alamos National Laboratory; Kenyon, Garrett [Los Alamos National Laboratory; Rasmussen, Craig E [Los Alamos National Laboratory; Swaminarayan, Sriram [Los Alamos National Laboratory; Bettencourt, Luis [Los Alamos National Laboratory; Landecker, Will [PORTLAND STATE UNIV.

    2009-01-01T23:59:59.000Z

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  6. Reconfigurable middleware architectures for large scale sensor networks

    SciTech Connect (OSTI)

    Brennan, Sean M.

    2010-03-01T23:59:59.000Z

    Wireless sensor networks, in an effort to be energy efficient, typically lack the high-level abstractions of advanced programming languages. Though strong, the dichotomy between these two paradigms can be overcome. The SENSIX software framework, described in this dissertation, uniquely integrates constraint-dominated wireless sensor networks with the flexibility of object-oriented programming models, without violating the principles of either. Though these two computing paradigms are contradictory in many ways, SENSIX bridges them to yield a dynamic middleware abstraction unifying low-level resource-aware task reconfiguration and high-level object recomposition.

  7. Cosmological Simulations for Large-Scale Sky Surveys | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  8. The Cielo Petascale Capability Supercomputer: Providing Large-Scale

    Office of Scientific and Technical Information (OSTI)


  9. Large-Scale Continuous Subgraph Queries on Streams

    SciTech Connect (OSTI)

    Choudhury, Sutanay; Holder, Larry; Chin, George; Feo, John T.

    2011-11-30T23:59:59.000Z

    Graph pattern matching involves finding exact or approximate matches for a query subgraph in a larger graph. It has been studied extensively and has strong applications in domains such as computer vision, computational biology, social networks, security and finance. The problem of exact graph pattern matching is often described in terms of subgraph isomorphism which is NP-complete. The exponential growth in streaming data from online social networks, news and video streams and the continual need for situational awareness motivates a solution for finding patterns in streaming updates. This is also the prime driver for the real-time analytics market. Development of incremental algorithms for graph pattern matching on streaming inputs to a continually evolving graph is a nascent area of research. Some of the challenges associated with this problem are the same as found in continuous query (CQ) evaluation on streaming databases. This paper reviews some of the representative work from the exhaustively researched field of CQ systems and identifies important semantics, constraints and architectural features that are also appropriate for HPC systems performing real-time graph analytics. For each of these features we present a brief discussion of the challenge encountered in the database realm, the approach to the solution and state their relevance in a high-performance, streaming graph processing framework.
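
    The abstract frames exact graph pattern matching as NP-complete subgraph isomorphism. A brute-force checker makes that problem statement concrete (a toy sketch usable only for tiny queries; the paper's subject is incremental matching on streams, which this does not attempt):

    ```python
    from itertools import permutations

    def has_subgraph(graph_edges, query_edges):
        """Brute-force exact matching: is there an injective mapping of
        query nodes into graph nodes preserving every directed query edge?
        Exponential in query size, so only for tiny queries."""
        gnodes = sorted({u for e in graph_edges for u in e})
        qnodes = sorted({u for e in query_edges for u in e})
        gset = set(graph_edges)
        for perm in permutations(gnodes, len(qnodes)):
            m = dict(zip(qnodes, perm))
            if all((m[u], m[v]) in gset for u, v in query_edges):
                return True
        return False

    # Directed triangle query against a 4-node graph
    G = [(1, 2), (2, 3), (3, 1), (3, 4)]
    Q = [("a", "b"), ("b", "c"), ("c", "a")]
    print(has_subgraph(G, Q))                         # True: 1->2->3->1
    print(has_subgraph(G, [("a", "b"), ("b", "a")]))  # False: no 2-cycle
    ```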

  10. 978-1-4799-4394-4/14/$31.00 c 2014 IEEE Towards Energy Proportionality for Large-Scale Latency-Critical Workloads

    E-Print Network [OSTI]

    Kozyrakis, Christos

    Stanford University; Google, Inc. Abstract: Reducing the energy footprint of warehouse-scale computer (WSC) systems is key to their affordability, yet difficult to achieve in practice. The lack of energy proportionality

  11. The Role of Planning in Grid Computing Jim Blythe, Ewa Deelman, Yolanda Gil, Carl Kesselman, Amit Agarwal, Gaurang Mehta,

    E-Print Network [OSTI]

    Blythe, Jim

    a high-quality solution. We describe an implemented test case in gravitational wave interferometry to harness the power of large numbers of heterogeneous, distributed resources: computing resources, data these resources to solve complex large- scale problems. Scientific communities ranging from high- energy physics

  12. The Role of Planning in Grid Computing Jim Blythe, Ewa Deelman, Yolanda Gil, Carl Kesselman, Amit Agarwal, Gaurang Mehta,

    E-Print Network [OSTI]

    Gil, Yolanda

    describe an implemented test case in gravitational wave interferometry and show how the planner Grid computing (Foster & Kesselman 99, Foster et al. 01) promises users the ability to harness these resources to solve complex large-scale problems. Scientific communities ranging from high- energy physics

  13. Reactive power planning of large-scale systems

    SciTech Connect (OSTI)

    Burchett, R.C.; Happ, H.H.; Vierath, D.R.

    1983-01-01T23:59:59.000Z

    This paper discusses short-term operations planning applications in reactive power management involving existing equipment. Reactive power planning involves the sizing and siting of additional reactive support equipment in order to satisfy system voltage constraints (minimum and maximum limits) under both normal and contingency conditions. The use of the Optimal Power Flow (OPF) and the VARPLAN computer codes for operations planning are examined. The OPF software can be used to determine if reactive outputs from nearby generators are scheduled properly, and to confirm that parallel transformers have been properly set. A major benefit of the system planning software VARPLAN is the ability to simultaneously consider both normal and contingency conditions, while adding a minimal amount of new reactive power. Applications to long-term system planning of new reactive power sources are described.

  14. Methods for Large Scale Hydraulic Fracture Monitoring

    E-Print Network [OSTI]

    Ely, Gregory

    2013-01-01T23:59:59.000Z

    In this paper we propose computationally efficient and robust methods for estimating the moment tensor and location of micro-seismic event(s) for large search volumes. Our contribution is two-fold. First, we propose a novel joint-complexity measure, namely the sum of nuclear norms which while imposing sparsity on the number of fractures (locations) over a large spatial volume, also captures the rank-1 nature of the induced wavefield pattern. This wavefield pattern is modeled as the outer-product of the source signature with the amplitude pattern across the receivers from a seismic source. A rank-1 factorization of the estimated wavefield pattern at each location can therefore be used to estimate the seismic moment tensor using the knowledge of the array geometry. In contrast to existing work this approach allows us to drop any other assumption on the source signature. Second, we exploit the recently proposed first-order incremental projection algorithms for a fast and efficient implementation of the resulting...
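
    The key modeling step above is that the wavefield pattern is the outer product of a source signature with a per-receiver amplitude pattern, recoverable by rank-1 factorization. A small synthetic illustration (hypothetical sizes and noise level; a truncated SVD stands in for the factorization step, not the authors' incremental algorithm):

    ```python
    import numpy as np

    # Synthetic rank-1 "wavefield": source signature (outer) receiver amplitudes
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    signature = np.sin(40 * t) * np.exp(-8 * t)      # toy source wavelet
    amplitudes = rng.uniform(0.2, 1.0, size=12)      # hypothetical 12 receivers

    data = np.outer(signature, amplitudes)
    data += 0.005 * rng.normal(size=data.shape)      # small measurement noise

    # Best rank-1 approximation: the leading singular triplet
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    amp_est = Vt[0]                                  # estimated amplitude pattern
    sig_est = U[:, 0] * s[0]                         # estimated source signature

    # The factors are recovered up to one scalar (including sign)
    scale = amplitudes @ amp_est / (amp_est @ amp_est)
    print(np.abs(amplitudes - scale * amp_est).max())  # close to zero
    ```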

  15. Uncertainty quantification for large-scale ocean circulation predictions.

    SciTech Connect (OSTI)

    Safta, Cosmin; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik

    2010-09-01T23:59:59.000Z

    Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We develop a methodology that performs uncertainty quantification in the presence of limited data that have discontinuous character. Our approach is two-fold. First we detect the discontinuity location with a Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve location in presence of arbitrarily distributed input parameter values. Furthermore, we developed a spectral approach that relies on Polynomial Chaos (PC) expansions on each side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification and propagation. The methodology is tested on synthetic examples of discontinuous data with adjustable sharpness and structure.
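
    The two ingredients above, locating a discontinuity and building separate spectral surrogates on each side of it, can be caricatured in one dimension (a toy sketch with an invented jump location; a gap heuristic replaces the Bayesian detection, and linear fits stand in for the Polynomial Chaos expansions):

    ```python
    import numpy as np

    # Toy forward model with a jump at x0 = 0.3, a stand-in for the
    # stream function's discontinuous response (all numbers invented)
    x0 = 0.3
    f = lambda x: np.where(x < x0, 1.0 + x, -2.0 + 0.5 * x)

    rng = np.random.default_rng(0)
    xs = rng.uniform(0.0, 1.0, 40)        # sparse "model runs"
    ys = f(xs)

    # Locate the jump as the largest gap between consecutive sorted samples
    order = np.argsort(xs)
    k = np.argmax(np.abs(np.diff(ys[order])))
    x0_est = 0.5 * (xs[order][k] + xs[order][k + 1])

    # Separate surrogates (here just linear fits) on each side of the estimate
    left = xs < x0_est
    cl = np.polyfit(xs[left], ys[left], 1)      # recovers slope 1, intercept 1
    cr = np.polyfit(xs[~left], ys[~left], 1)    # recovers slope 0.5, intercept -2
    print(np.allclose(cl, [1.0, 1.0]), np.allclose(cr, [0.5, -2.0]))
    ```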

  16. Aquifer sensitivity assessment modeling at a large scale

    SciTech Connect (OSTI)

    Berg, R.C.; Abert, C.C. (Illinois State Geological Survey, Champaign, IL (United States))

    1994-03-01T23:59:59.000Z

    A 480 square-mile region within Will County, northeastern Illinois was used as a test region for an evaluation of the sensitivity of aquifers to contamination. An aquifer sensitivity model was developed using a Geographic Information System (GIS) with ARC/INFO software to overlay and combine several data layers. Many of the input data layers were developed using 2-dimensional surface modeling (Interactive Surface Modeling (ISM)) and 3-dimensional volume modeling (Geologic Modeling Program (GMP)) computer software. Most of the input data layers (drift thickness, thickness of sand and gravel, depth to first aquifer) were derived from interpolation of descriptive logs for water wells and engineering borings from their study area. A total of 2,984 logs were used to produce these maps. The components used for the authors' model are (1) depth to sand and gravel or bedrock, (2) thickness of the uppermost sand and gravel aquifer, (3) drift thickness, and (4) absence or presence of uppermost bedrock aquifer. The model is an improvement over many aquifer sensitivity models because it combines specific information on depth to the uppermost sand and gravel aquifer with information on the thickness of the uppermost sand and gravel aquifer. The manipulation of the source maps according to rules-based assumptions results in a colored aquifer sensitivity map for the Will County study area. This colored map differentiates 42 aquifer sensitivity map areas by using line patterns within colors. The county-scale model results in an aquifer sensitivity map that can be a useful tool for making land-use planning decisions regarding aquifer protection and management of groundwater resources.
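
    The model combines gridded data layers with rule-based scoring. A minimal raster-overlay sketch in NumPy (the break points and scoring rules below are invented for illustration and are not the ISM/GMP model's actual rules):

    ```python
    import numpy as np

    # Hypothetical 4x4 rasters (one value per grid cell) standing in for the
    # interpolated data layers described in the abstract
    rng = np.random.default_rng(2)
    depth_to_aquifer = rng.uniform(0, 100, (4, 4))   # feet below surface
    aquifer_thickness = rng.uniform(0, 60, (4, 4))   # feet of sand and gravel
    bedrock_aquifer = rng.random((4, 4)) > 0.5       # bedrock aquifer present?

    # Invented rule base: shallower and thicker aquifers, and a present
    # bedrock aquifer, raise the sensitivity score
    score = (
        (2 - np.digitize(depth_to_aquifer, [20, 50]))   # shallow -> 2, deep -> 0
        + np.digitize(aquifer_thickness, [15, 30])      # thick -> 2, thin -> 0
        + bedrock_aquifer.astype(int)                   # present -> 1
    )
    sensitivity = np.select([score >= 4, score >= 2], ["high", "moderate"],
                            default="low")
    print(sensitivity.shape)  # (4, 4)
    ```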

  17. National facility for advanced computational science: A sustainable path to scientific discovery

    E-Print Network [OSTI]

    2004-01-01T23:59:59.000Z


  18. Optimal capacitor placement, replacement and control in large-scale unbalanced distribution systems: System solution algorithms and numerical studies

    SciTech Connect (OSTI)

    Chiang, H.D.; Wang, J.C.; Tong, J. [Cornell Univ., Ithaca, NY (United States). School of Electrical Engineering; Darling, G. [NYSEG Corp., Binghamton, NY (United States). Distribution System Dept.

    1995-02-01T23:59:59.000Z

    This paper develops an effective and yet practical solution methodology for optimal capacitor placement, replacement and control in large-scale unbalanced, general radial or loop distribution systems. The solution methodology can optimally determine (i) the locations to install (or replace, or remove) capacitors, (ii) the types and sizes of capacitors to be installed (or replaced) and, during each load level, (iii) the control schemes for each capacitor in the nodes of a general three-phase unbalanced distribution system such that a desired objective function is minimized while the load constraints, network constraints and operational constraints at different load levels are satisfied. The solution methodology is based on a combination of the simulated annealing technique and the greedy search technique in order to achieve computational speed and high quality of solutions. Both the numerical and implementational aspects of the solution methodology are detailed. Analysis of the computational complexity of the solution algorithm indicates that the algorithm is also effective for large-scale distribution systems in terms of computational efforts. Test results on a realistic, unbalanced distribution network, a 291-bus with 77 laterals, 305 distribution lines and 6 transformers, with varying loading conditions, are presented with promising results. The robustness of the solution methodology under varying loading conditions is also investigated.
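
    The methodology combines simulated annealing with greedy search. The annealing half can be sketched on a toy placement problem (the objective below is a made-up shortfall-plus-cost function, not a power-flow model, and the greedy component is omitted):

    ```python
    import math, random

    # Toy stand-in objective: reactive-power shortfall at each bus plus a
    # fixed cost per installed capacitor (invented numbers, not a power flow)
    demand = [3.0, 1.0, 4.0, 0.5, 2.5]      # MVAr needed at each bus
    cap_size, cap_cost = 2.0, 1.0           # one capacitor size, unit cost

    def cost(placement):
        shortfall = sum(max(0.0, d - cap_size * p)
                        for d, p in zip(demand, placement))
        return shortfall + cap_cost * sum(placement)

    def anneal(n, steps=5000, t0=2.0, seed=0):
        rng = random.Random(seed)
        x = [0] * n
        c = cost(x)
        best, best_c = x[:], c
        for k in range(steps):
            t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
            y = x[:]
            y[rng.randrange(n)] ^= 1                 # flip one placement decision
            cy = cost(y)
            if cy < c or rng.random() < math.exp((c - cy) / t):
                x, c = y, cy
                if c < best_c:
                    best, best_c = x[:], c
        return best, best_c

    placement, total = anneal(len(demand))
    print(placement, total)
    ```

    Accepting some uphill flips early (at high temperature) is what lets the search escape local optima that a pure greedy pass would get stuck in.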

  19. Computer-Aided Brains: Scientific American

    E-Print Network [OSTI]

    Salvucci, Dario D.

    Scientific American Mind, October 2005, Head Lines | Mind & Brain: Computer-Aided Brains, by Brad Stenger. For years, innovators have tried to devise computerized gadgetry to aid the brain. Advances have

  20. Active Set Algorithm for Large-Scale Continuous Knapsack Problems with Application to Topology Optimization Problems

    E-Print Network [OSTI]

    Tavakoli, Ruhollah

    2010-01-01T23:59:59.000Z

    The structure of many real-world optimization problems includes minimization of a nonlinear (or quadratic) functional subject to bound and singly linear constraints (in the form of either equality or bilateral inequality), which are commonly called continuous knapsack problems. Since there are efficient methods to solve large-scale bound constrained nonlinear programs, it is desirable to adapt these methods to solve knapsack problems, while preserving their efficiency and convergence theories. The goal of this paper is to introduce a general framework to extend a box-constrained optimization solver to solve knapsack problems. This framework includes two main ingredients, both O(n) methods in terms of the computational cost and required memory: the projection onto the knapsack constraints and the null-space manipulation of the related linear constraint. The main focus of this work is on the extension of the Hager-Zhang active set algorithm (SIAM J. Optim. 2006, pp. 526--557). The main reasons for this ch...
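
    The first ingredient, projection onto the knapsack set {x : lo <= x <= hi, a.x = b}, is commonly computed by bisection on the Lagrange multiplier of the linear constraint. A sketch of that classic device (not the Hager-Zhang O(n) algorithm itself):

    ```python
    import numpy as np

    def project_knapsack(y, a, b, lo, hi, iters=100):
        """Euclidean projection of y onto {x : lo <= x <= hi, a @ x = b},
        by bisection on the multiplier of the linear constraint."""
        x = lambda lam: np.clip(y - lam * a, lo, hi)
        lam_lo, lam_hi = -1e6, 1e6      # a @ x(lam) is nonincreasing in lam
        for _ in range(iters):
            lam = 0.5 * (lam_lo + lam_hi)
            if a @ x(lam) > b:
                lam_lo = lam
            else:
                lam_hi = lam
        return x(0.5 * (lam_lo + lam_hi))

    y = np.array([0.9, 0.6, 0.3])
    p = project_knapsack(y, a=np.ones(3), b=1.0, lo=0.0, hi=1.0)
    print(p.round(4), round(p.sum(), 6))  # feasible point summing to 1
    ```

    Each multiplier evaluation costs O(n), which is the kind of per-step complexity the framework above insists on.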

  1. On large-scale nonlinear programming techniques for solving optimal control problems

    SciTech Connect (OSTI)

    Faco, J.L.D.

    1994-12-31T23:59:59.000Z

    The formulation of decision problems by Optimal Control Theory allows the consideration of their dynamic structure and parameter estimation. This paper deals with techniques for choosing directions in the iterative solution of discrete-time optimal control problems. A unified formulation incorporates nonlinear performance criteria and dynamic equations, time delays, bounded state and control variables, a free planning horizon and a variable initial state vector. In general such problems are characterized by a large number of variables, especially when arising from discretization of continuous-time optimal control or calculus of variations problems. In a GRG context the staircase structure of the Jacobian matrix of the dynamic equations is exploited in the choice of basic and superbasic variables and when changes of basis occur along the process. The search directions of the bound-constrained nonlinear programming problem in the reduced space of the superbasic variables are computed by large-scale NLP techniques. A modified Polak-Ribiere conjugate gradient method and a limited-storage quasi-Newton BFGS method are analyzed, and modifications to deal with the bounds on the variables are suggested based on projected gradient devices with specific line searches. Some practical models are presented for electric generation planning and fishery management, and the application of the code GRECO - Gradient REduit pour la Commande Optimale - is discussed.
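
    The projected gradient devices mentioned for handling bounds reduce, in their simplest form, to clipping each gradient step back into the box. A minimal sketch on a toy quadratic (fixed step size; the conjugate gradient and quasi-Newton machinery of the paper is omitted):

    ```python
    import numpy as np

    def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
        """Steepest descent with each iterate clipped back into the box
        lo <= x <= hi (the simplest projected gradient device)."""
        x = np.clip(np.asarray(x0, dtype=float), lo, hi)
        for _ in range(iters):
            x = np.clip(x - step * grad(x), lo, hi)
        return x

    # Minimize (x0 - 3)^2 + (x1 + 1)^2 subject to 0 <= x <= 2
    grad = lambda x: 2.0 * (x - np.array([3.0, -1.0]))
    x_star = projected_gradient(grad, [1.0, 1.0], lo=0.0, hi=2.0)
    print(x_star)  # both bounds active: [2. 0.]
    ```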

  2. LARGE-SCALE UNSTEADINESS IN A TWO-DIMENSIONAL DIFFUSER: NUMERICAL STUDY TOWARD ACTIVE SEPARATION CONTROL

    E-Print Network [OSTI]

    Colonius, Tim

    of Technology, Pasadena, California 91125 ABSTRACT We develop a reduced order model for large-scale unsteadiness mass injection can pinch off vortices with a smaller size; accordingly, their convective velocity

  3. The Association of Large-Scale Climate Variability and Teleconnections on Wind Energy Resource

    E-Print Network [OSTI]

    The Association of Large-Scale Climate Variability and Teleconnections on Wind Energy Resource over Europe and its Intermittency. Pascal Kriesche* and Adam Schlosser* Abstract

  4. Programmable window : a large-scale transparent electronic display using SPD film

    E-Print Network [OSTI]

    Ramos, Martin (Ramos Rizo-Patron)

    2004-01-01T23:59:59.000Z

    This research demonstrates that Suspended Particle Device (SPD) film is a viable option for the development of large-scale transparent display systems. The thesis analyzes the SPD film from an architectural display application ...

  5. A multiperiod optimization model to schedule large-scale petroleum development projects

    E-Print Network [OSTI]

    Husni, Mohammed Hamza

    2009-05-15T23:59:59.000Z

    This dissertation solves an optimization problem in the area of scheduling large-scale petroleum development projects under several resources constraints. The dissertation focuses on the application of a metaheuristic search Genetic Algorithm (GA...

  6. Large scale oceanic circulation and fluxes of freshwater, heat, nutrients and oxygen

    E-Print Network [OSTI]

    Ganachaud, Alexandre Similien, 1970-

    2000-01-01T23:59:59.000Z

    A new, global inversion is used to estimate the large scale oceanic circulation based on the World Ocean Circulation Experiment and Java Australia Dynamic Experiment hydrographic data. A linear inverse "box" model is used ...

  7. Large-Scale Evacuation Network Model for Transporting Evacuees with Multiple Priorities

    E-Print Network [OSTI]

    Na, Hyeong Suk

    2014-05-01T23:59:59.000Z

    There are increasing numbers of natural disasters occurring worldwide, particularly in populated areas. Such events affect a large number of people causing injuries and fatalities. With ever increasing damage being caused by large-scale natural...

  8. A randomized Mirror-Prox method for solving structured large-scale ...

    E-Print Network [OSTI]

    2011-12-06T23:59:59.000Z

    Dec 6, 2011 ... value optimization, large-scale problems, matrix exponentiation .... conclusions are demonstrated by numerical evidence: for solving problems (up to ...... To build such a procedure, we can specify T = T(?) in such a way.

  9. LARGE SCALE PERMEABILITY TEST OF THE GRANITE IN THE STRIPA MINE AND THERMAL CONDUCTIVITY TEST

    E-Print Network [OSTI]

    Lundstrom, L.

    2011-01-01T23:59:59.000Z

    No. 2 LARGE SCALE PERMEABILITY TEST OF THE GRANITE IN THE STRIPA MINE AND THERMAL CONDUCTIVITY TEST Lars Lundstrom and Hakan SUMMARY REPORT Background TEST SITE Layout of test places

  10. Census: Location-Aware Membership Management for Large-Scale Distributed Systems

    E-Print Network [OSTI]

    Cowling, James Alexander

    We present Census, a platform for building large-scale distributed applications. Census provides a membership service and a multicast mechanism. The membership service provides every node with a consistent view of the ...

  11. Large-Scale Mapping and Validation of Escherichia coli Transcriptional Regulation

    E-Print Network [OSTI]

    Collins, James J.

    Large-Scale Mapping and Validation of Escherichia coli Transcriptional Regulation from a Compendium Bioinformatics Program, Boston University, Boston, Massachusetts, United States of America, 2 Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America, 3 Boston

  12. A statistical learning framework for data mining of large-scale systems : algorithms, implementation, and applications

    E-Print Network [OSTI]

    Tsou, Ching-Huei, 1973-

    2007-01-01T23:59:59.000Z

    A machine learning framework is presented that supports data mining and statistical modeling of systems that are monitored by large-scale sensor networks. The proposed algorithm is novel in that it takes both observations ...

  13. Metal Catalyzed sp2 Bonded Carbon - Large-scale Graphene Synthesis...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Metal Catalyzed sp2 Bonded Carbon - Large-scale Graphene Synthesis and Beyond. December 1, 2009 at 3pm, 36-428. Peter Sutter, Center for Functional Nanomaterials. Abstract:...

  14. Probing the imprint of interacting dark energy on very large scales

    E-Print Network [OSTI]

    Duniya, Didam; Maartens, Roy

    2015-01-01T23:59:59.000Z

    The observed galaxy power spectrum acquires relativistic corrections from lightcone effects, and these corrections grow on very large scales. Future galaxy surveys in optical, infrared and radio bands will probe increasingly large wavelength modes and reach higher redshifts. In order to exploit the new data on large scales, an accurate analysis requires inclusion of the relativistic effects. This is especially the case for primordial non-Gaussianity and for extending tests of dark energy models to horizon scales. Here we investigate the latter, focusing on models where the dark energy interacts non-gravitationally with dark matter. Interaction in the dark sector can also lead to large-scale deviations in the power spectrum. If the relativistic effects are ignored, the imprint of interacting dark energy will be incorrectly identified and thus lead to a bias in constraints on interacting dark energy on very large scales.

  15. Chemical engineers design, control and optimize large-scale chemical, physicochemical and

    E-Print Network [OSTI]

    Rohs, Remo

    , Biochemical, Environmental, Petroleum Engineering and Nanotechnology. CHEMICAL & MATERIALS SCIENCE CHE OVERVIEW of Science 131 units · Chemical Engineering (Petroleum) Bachelor of Science 136 units · Chemical Engineering Chemical engineers design, control and optimize large-scale chemical, physicochemical

  16. Bridging the Gap Between Commissioning Measures and Large Scale Retrofits in Existing Buildings

    E-Print Network [OSTI]

    Bynum, J.; Jones, A.; Claridge, D.E.

    2011-01-01T23:59:59.000Z

    Most often commissioning of existing buildings seeks to reduce a building's energy consumption by implementation of operational changes via the existing equipment. In contrast, large scale capital retrofits seek to make major changes...

  17. Bridging the Gap Between Commissioning Measures and Large Scale Retrofits in Existing Buildings

    E-Print Network [OSTI]

    Bynum, J.; Jones, A.; Claridge, D. E.

    Most often commissioning of existing buildings seeks to reduce a building’s energy consumption by implementation of operational changes via the existing equipment. In contrast, large scale capital retrofits seek to make major changes to the systems...

  18. Membraneless hydrogen bromine laminar flow battery for large-scale energy storage

    E-Print Network [OSTI]

    Braff, William Allan

    2014-01-01T23:59:59.000Z

    Electrochemical energy storage systems have been considered for a range of potential large-scale energy storage applications. These applications vary widely, both in the order of magnitude of energy storage that is required ...

  19. Potential Climatic Impacts and Reliability of Very Large-Scale Wind Farms

    E-Print Network [OSTI]

    Wang, Chien

    Meeting future world energy needs while addressing climate change requires large-scale deployment of low or zero greenhouse gas (GHG) emission technologies such as wind energy. The widespread availability of wind power has ...

  20. The role of large-scale, extratropical dynamics in climate change

    SciTech Connect (OSTI)

    Shepherd, T.G. [ed.

    1994-02-01T23:59:59.000Z

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop`s University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  1. Large scale test rig for flow visualization and leakage measurement of labyrinth seals

    E-Print Network [OSTI]

    Broussard, Daniel Harold

    1991-01-01T23:59:59.000Z

    LARGE SCALE TEST RIG FOR FLOW VISUALIZATION AND LEAKAGE MEASUREMENT OF LABYRINTH SEALS. A Thesis by DANIEL HAROLD BROUSSARD, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, December 1991. Major Subject: Mechanical Engineering. Approved as to style and content by: David L. Rhode...

  2. Optimal capacitor placement, replacement and control in large-scale unbalanced distribution systems: System solution algorithm and numerical studies

    SciTech Connect (OSTI)

    Chiang, H.D.; Wang, J.C.; Tong, J. [Cornell Univ., Ithaca, NY (United States). School of Electrical Engineering; Darling, G. [NYSEG Corp., Binghamton, NY (United States). Distribution System Dept.

    1994-12-31T23:59:59.000Z

    This paper develops an effective and yet practical solution methodology for optimal capacitor placement, replacement and control in large-scale unbalanced, general radial or loop distribution systems. The solution methodology can optimally determine (1) the locations to install (or replace, or remove) capacitors, (2) the types and sizes of capacitors to be installed (or replaced) and, during each load level, (3) the control schemes for each capacitor in the nodes of a general three-phase unbalanced distribution system such that a desired objective function is minimized while the load constraints, network constraints and operational constraints at different load levels are satisfied. The solution methodology is based on a combination of the simulated annealing technique and the greedy search technique in order to achieve computational speed and high quality of solutions. Both the numerical and implementational aspects of the solution methodology are detailed. Analysis of the computational complexity of the solution algorithm indicated that the algorithm is also effective for large-scale distribution systems in terms of computational effort. Test results on a realistic, unbalanced distribution network, a 291-bus system with 77 laterals, 305 distribution lines and 6 transformers, with varying loading conditions, are presented with promising results. The robustness of the solution methodology under varying loading conditions is also investigated.
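    The simulated annealing component described above can be illustrated on a toy placement problem. This is a hedged, generic annealing loop, not the paper's implementation: the cost function, the candidate-bus count, and the `flip_one` move are hypothetical stand-ins for the distribution-system objective and neighborhood:

```python
import math, random

def simulated_anneal(cost, x0, neighbor, t0=1.0, cooling=0.95, iters=2000, seed=0):
    """Generic annealing loop: always accept an improvement; accept an
    uphill move with probability exp(-increase / temperature)."""
    rng = random.Random(seed)
    x, cx = x0, cost(x0)
    best, cbest = x, cx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= cx or rng.random() < math.exp((cx - cy) / t):
            x, cx = y, cy
            if cx < cbest:
                best, cbest = x, cx
        t *= cooling  # geometric cooling schedule
    return best, cbest

def flip_one(x, rng):
    """Neighborhood move: toggle one randomly chosen placement bit."""
    y = list(x)
    i = rng.randrange(len(y))
    y[i] ^= 1
    return y
```

    For instance, with eight hypothetical candidate buses and the toy objective `(sum(bits) - 3)**2` (place exactly three capacitors), the loop drives the cost to zero.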

  3. A decomposition approach to optimal reactive power dispatch in large-scale power systems

    SciTech Connect (OSTI)

    Deeb, N.I.

    1989-01-01T23:59:59.000Z

    Power systems network operation is aimed at reducing system losses and minimizing the operational cost while satisfying performance requirements in normal and contingency situations. In this project, the procedure for the reactive power optimization has the solutions for investment and operation subproblems. The global solution is an iterative process between these two subproblems using the Bender decomposition method. In the investment subproblem decisions for the capacity and location of new reactive sources are made. These decisions are used in the optimization of the system operation. The outstanding features of the proposed method are that it does not require any matrix inversion, will save computation time and memory space, and can be implemented on very large scale power systems. The method employs a linearized objective function and constraints, and is based on adjusting control variables which are tap positions of transformers and reactive power injections. Linear programming is used to calculate voltage increments, which would minimize transmission losses, and adjustments of control variables would be obtained by a modified Jacobian matrix. This approach would greatly simplify the application of Dantzig-Wolfe decomposition method for solving the operation subproblem. According to the mathematical features of the Dantzig-Wolfe method, a multi-area approach is implemented and system equations are decomposed into a master problem and several subproblems. The master problem is formed by constraints, which represent linking transmission lines between areas. Two updated techniques are incorporated in the method to enhance the optimization process, which would save additional computation time and memory space. The proposed method is applied to the IEEE-30 bus system, a 60-bus system, a 180-bus system and a 1200-bus system, and numerical results are presented.

  4. A Systematic Approach to the Design of a Large Scale Detritiation System for Controlled Thermonuclear Fusion Experiments

    E-Print Network [OSTI]

    A Systematic Approach to the Design of a Large Scale Detritiation System for Controlled Thermonuclear Fusion Experiments

  5. A membrane-free lithium/polysulfide semi-liquid battery for large-scale energy storage

    E-Print Network [OSTI]

    Cui, Yi

    A membrane-free lithium/polysulfide semi-liquid battery for large-scale energy storage. Yuan Yang, Guangyuan Zheng and Yi Cui. Large-scale energy storage represents a key challenge for renewable energy; we develop a new lithium/polysulfide (Li/PS) semi-liquid battery for large-scale energy storage

  6. Scientific Grand Challenges: Challenges in Climate Change Science and the Role of Computing at the Extreme Scale

    SciTech Connect (OSTI)

    Khaleel, Mohammad A.; Johnson, Gary M.; Washington, Warren M.

    2009-07-02T23:59:59.000Z

    The U.S. Department of Energy (DOE) Office of Biological and Environmental Research (BER), in partnership with the Office of Advanced Scientific Computing Research (ASCR), held a workshop on the challenges in climate change science and the role of computing at the extreme scale, November 6-7, 2008, in Bethesda, Maryland. At the workshop, participants identified the scientific challenges facing the field of climate science and outlined the research directions of highest priority that should be pursued to meet these challenges. Representatives from the national and international climate change research community as well as representatives from the high-performance computing community attended the workshop. This group represented a broad mix of expertise. Of the 99 participants, 6 were from international institutions. Before the workshop, each of the four panels prepared a white paper, which provided the starting place for the workshop discussions. These four panels of workshop attendees devoted their efforts to the following themes: Model Development and Integrated Assessment; Algorithms and Computational Environment; Decadal Predictability and Prediction; Data, Visualization, and Computing Productivity. The recommendations of the panels are summarized in the body of this report.

  7. A Minimal Model for Large-scale Epitaxial Growth Kinetics of Graphene

    E-Print Network [OSTI]

    Jiang, Huijun

    2015-01-01T23:59:59.000Z

    Epitaxial growth via chemical vapor deposition is considered to be the most promising way towards synthesizing large-area graphene with high quality. However, it remains a big theoretical challenge to reveal growth kinetics with atomic-scale energetic and large-scale spatial information included. Here, we propose a minimal kinetic Monte Carlo model to address such an issue on an active catalyst surface with graphene/substrate lattice mismatch, which enables us to perform large-scale simulations of the growth kinetics over a two-dimensional surface with growth fronts of complex shapes. A geometry-determined large-scale growth mechanism is revealed, where the rate-dominating event is found to be $C_{1}$-attachment for concave growth front segments and $C_{5}$-attachment for others. This growth mechanism leads to an interesting time-resolved growth behavior which is well consistent with that observed in a recent scanning tunneling microscopy experiment.
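    The elementary step underlying such a model can be sketched generically. This is the textbook rejection-free (n-fold way / Gillespie) KMC step, not the paper's graphene-specific event catalog; in that setting the `rates` list would be populated with the C1- and C5-attachment rates of the current growth front:

```python
import math, random

def kmc_step(rates, rng):
    """One rejection-free KMC step: choose event i with probability
    rates[i]/sum(rates), and draw an exponential waiting time with
    mean 1/sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for chosen, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt
```

    With two events of rates 1 and 3, the second event is selected about 75% of the time, and simulated time advances by exponential increments.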

  8. What Will the Neighbors Think? Building Large-Scale Science Projects Around the World

    ScienceCinema (OSTI)

    Craig Jones, Christian Mrotzek, Nobu Toge and Doug Sarno

    2010-01-08T23:59:59.000Z

    Public participation is an essential ingredient for turning the International Linear Collider into a reality. Wherever the proposed particle accelerator is sited in the world, its neighbors -- in any country -- will have something to say about hosting a 35-kilometer-long collider in their backyards. When it comes to building large-scale physics projects, almost every laboratory has a story to tell. Three case studies from Japan, Germany and the US will be presented to examine how community relations are handled in different parts of the world. How do particle physics laboratories interact with their local communities? How do neighbors react to building large-scale projects in each region? How can the lessons learned from past experiences help in building the next big project? These and other questions will be discussed to engage the audience in an active dialogue about how a large-scale project like the ILC can be a good neighbor.

  9. Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.

    SciTech Connect (OSTI)

    Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken; Gentile, Ann C. [Sandia National Laboratories, Albuquerque, NM; Lloyd, Levi; Minnich, Ronald G.; Vanderveen, Keith; Van Randwyk, Jamie A; Rudish, Don W.

    2008-10-01T23:59:59.000Z

    We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.

  10. Bringing large-scale multiple genome analysis one step closer: ScalaBLAST and beyond

    SciTech Connect (OSTI)

    Oehmen, Christopher S.; Sofia, Heidi J.; Baxter, Douglas; Szeto, Ernest; Hugenholtz, Philip; Kyrpides, Nikos; Markowitz, Victor; Straatsma, Tjerk P.

    2007-06-01T23:59:59.000Z

    Genome sequence comparisons of exponentially growing data sets form the foundation for the comparative analysis tools provided by community biological data resources such as the Integrated Microbial Genomes (IMG) system at the Joint Genome Institute (JGI). We present an example of how ScalaBLAST, a high-throughput sequence analysis program, harnesses high-performance computing to perform sequence analysis, a critical component of maintaining a state-of-the-art sequence data repository. The Integrated Microbial Genomes (IMG) system [1] is a data management and analysis platform for microbial genomes hosted at the JGI. IMG contains both draft and complete JGI genomes integrated with other publicly available microbial genomes of all three domains of life. IMG provides tools and viewers for interactive analysis of genomes, genes and functions, individually or in a comparative context. Most of these tools are based on pre-computed pairwise sequence similarities involving millions of genes. These computations are becoming prohibitively time-consuming with the rapid increase in the number of newly sequenced genomes incorporated into IMG and the need to refresh the content of IMG regularly in order to reflect changes in the annotations of existing genomes. Thus, building IMG 2.0 (released on December 1st, 2006) entailed reloading from NCBI's RefSeq all the genomes in the previous version of IMG (IMG 1.6, as of September 1st, 2006) together with 1,541 new public microbial, viral and eukaryal genomes, bringing the total of IMG genomes to 2,301. A critical part of building IMG 2.0 involved using PNNL's ScalaBLAST software to compute pairwise similarities for over 2.2 million genes in under 26 hours on 1,000 processors, thus illustrating the impact that new-generation bioinformatics tools are poised to make in biology.
The BLAST algorithm [2, 3] is a familiar bioinformatics application for computing sequence similarity and has become a workhorse in large-scale genomics projects. The rapid growth of genome resources such as IMG cannot be sustained without more powerful tools such as ScalaBLAST that make more effective use of large-scale computing resources to perform the core BLAST calculations. ScalaBLAST is a high-performance computing algorithm designed to give high-throughput BLAST results on high-end supercomputers. Other parallel sequence comparison applications have been developed [4-6]; however, problems with scaling generally prevent these applications from being used for very large searches. ScalaBLAST [7] is the first BLAST application to be highly scalable in both the size of the database and the number of processors, on high-end hardware and on commodity clusters. ScalaBLAST achieves high throughput by parsing a large collection of query sequences into independent subgroups. These smaller tasks are assigned to independent process groups. Efficient scaling is achieved by (transparently to the user) sharing only one copy of the target database across all processors using the Global Array toolkit [8, 9], which provides a software implementation of a shared-memory interface. ScalaBLAST was initially deployed on the 1,960-processor MPP2 cluster in the William R. Wiley Environmental Molecular Sciences Laboratory at Pacific Northwest National Laboratory, and has since been ported to a variety of Linux-based clusters and shared-memory architectures, including SGI Altix, AMD Opteron, and Intel Xeon-based clusters. Future targets include IBM BlueGene, Cray, and SGI Altix XE architectures. The importance of performing high-throughput calculations rapidly lies in the rate of growth of sequence data.
For a genome sequencing center to provide multiple-genome comparison capabilities, it must keep pace with the exponentially growing collection of protein data, both from its own genomes and from public genome information. As sequence data continues to grow exponentially, this challenge will only increase with time. Solving the BLAST throughput challenge for centralized data resources like IMG has the poten
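The throughput strategy described above (split the query set into independent subgroups, each scanning one shared copy of the target database) can be mimicked in miniature. This is a hedged sketch only: a toy shared-k-mer score stands in for a real BLAST score, and a round-robin split stands in for ScalaBLAST's scheduler and Global Array machinery:

```python
def partition(queries, n_groups):
    """Round-robin split of queries into independent, near-equal work lists."""
    groups = [[] for _ in range(n_groups)]
    for i, q in enumerate(queries):
        groups[i % n_groups].append(q)
    return groups

def kmer_score(a, b, k=3):
    """Toy similarity: number of shared k-mers (stand-in for a BLAST score)."""
    kmers = lambda s: {s[i:i + k] for i in range(len(s) - k + 1)}
    return len(kmers(a) & kmers(b))

def process_group(group, database):
    """Each process group scans the same shared database independently,
    returning the best hit per query."""
    return {q: max(database, key=lambda t: kmer_score(q, t)) for q in group}
```

Because the groups share no state, their result dictionaries can be computed on separate processors and merged at the end, which is the essence of the query-partitioning design.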

  11. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    SciTech Connect (OSTI)

    Karbach, Carsten; Frings, Wolfgang

    2013-02-20T23:59:59.000Z

    This document is the final scientific report of the project DE-SC000120 ("A Scalable Development Environment for Peta-Scale Computing"). The objective of this project is the extension of the Parallel Tools Platform (PTP) for applying it to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to tolerate high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. E.g., applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status.
A set of statistics, a list of running and queued jobs, as well as a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. Monitoring peta-scale systems especially deals with presenting the large amount of status data in a useful manner. Users require the ability to select arbitrary levels of detail. The monitoring views have to provide a quick overview of the system state, but also need to allow for zooming into specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS and LoadLeveler, which have to be supported by both the monitoring and the job controlling component. Finally, PTP needs to be designed as generically as possible, so that it can be extended for future batch systems.

  12. Diffuse Pionic Gamma-Ray Emission from Large Scale Structures in the FERMI Era

    E-Print Network [OSTI]

    Aleksandra Dobardzic; Tijana Prodanovic

    2014-04-08T23:59:59.000Z

    For more than a decade now the complete origin of the diffuse gamma-ray emission background (EGRB) has been unknown. Major components like unresolved star-forming galaxies (making SFCR gamma-ray emission) are weak (above the observed EGRB) in some cases; in others, some of our models can provide a good fit to the observed EGRB. More importantly, we show that these large-scale shocks could still give an important contribution to the EGRB, especially at high energies. Future detections of cluster gamma-ray emission would help place tighter constraints on our models and give us a better insight into large-scale shocks forming around them.

  13. Theory of large-scale turbulent transport of chemically active pollutants

    SciTech Connect (OSTI)

    Chefranov, S.G.

    1986-01-01T23:59:59.000Z

    This paper shows that ordered Turing structures may be produced in the large-scale turbulent mixing of chemically active pollutants as a result of statistical instability of the spatially homogeneous state. Threshold values are obtained for the variance of a random non-Gaussian velocity field, beyond which this statistical instability is realized even in two-component systems with quadratically nonlinear kinetics. The possibility for the formation of large-scale spatially non-homogeneous concentration distributions of chemically active pollutants by this mechanism is examined.

  14. Accelerating scientific discovery : 2007 annual report.

    SciTech Connect (OSTI)

    Beckman, P.; Dave, P.; Drugan, C.

    2008-11-14T23:59:59.000Z

    As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the secret to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance to those using the Blue Gene/L and optimizing user applications. Both the Catalyst and the Applications Performance Engineering and Data Analytics (APEDA) teams support the users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed.
Our expertise in high-end scientific computing enables us to provide guidance for applications that are transitioning to petascale as well as to produce software that facilitates their development, such as the MPICH library, which provides a portable and efficient implementation of the MPI standard--the prevalent programming model for large-scale scientific applications--and the PETSc toolkit that provides a programming paradigm that eases the development of many scientific applications on high-end computers.

  15. Cosmological parameters from observational data on the large scale structure of the Universe

    E-Print Network [OSTI]

    B. Novosyadlyj; R. Durrer; S. Apunevych

    2000-09-29T23:59:59.000Z

    The observational data on the large scale structure (LSS) of the Universe are used to determine cosmological parameters within the class of adiabatic inflationary models. We show that a mixed dark matter model with cosmological constant ($\Lambda$MDM model) and parameters $\Omega_m=0.37^{+0.25}_{-0.15}$, $\Omega_{\Lambda}=0.69^{+0.15}_{-0.20}$, $\Omega_{\

  16. Large-Scale Fabrication, 3D Tomography, and Lithium-Ion Battery Application of Porous Silicon

    E-Print Network [OSTI]

    Zhou, Chongwu

    Large-Scale Fabrication, 3D Tomography, and Lithium-Ion Battery Application of Porous Silicon, United States *S Supporting Information ABSTRACT: Recently, silicon-based lithium-ion battery anodes have for the next-generation lithium-ion batteries with enhanced capacity and energy density. KEYWORDS: Cost

  17. INORGANIC NANOPARTICLES AS PHASE-CHANGE MATERIALS FOR LARGE-SCALE THERMAL ENERGY STORAGE

    E-Print Network [OSTI]

    Pennycook, Steve

    INORGANIC NANOPARTICLES AS PHASE-CHANGE MATERIALS FOR LARGE- SCALE THERMAL ENERGY STORAGE Miroslaw storage performance. The expected immediate outcome of this effort is the demonstration of high-energy generation at high efficiency could revolutionize the development of solar energy. Nanoparticle-based phase

  18. Comparison of large-scale field-aligned currents under sunlit and dark ionospheric conditions

    E-Print Network [OSTI]

    Higuchi, Tomoyuki

    geomagnetic activity as measured by the IMF BZ component. Result 1 can be partially explained in terms. Comparison of large-scale field-aligned currents under sunlit and dark ionospheric conditions S-aligned currents (FACs) under sunlit and dark ionospheric conditions. A total of ~74,000 auroral oval crossings

  19. SEQUENCING TECHNOLOGIES Microdroplet-based PCR enrichment for large-scale

    E-Print Network [OSTI]

    Rosenberg, Noah

    . Genet. 22 Oct 2009 (doi:10.1016/j.ajhg.2009.09.017) In case­control association studies, imputation.1126/science.1181498) Genome sequencing for large-scale population-based studies requires technologies generated in this study is expected to be a useful resource for examining the molecular characteristics

  20. A Climatology of Tropical Anvil and Its Relationship to the Large-Scale Circulation 

    E-Print Network [OSTI]

    Li, Wei

    2011-02-22T23:59:59.000Z

    of anvil formation, and to provide a more realistic assessment of the radiative impact of tropical anvil on the large-scale circulation. Based on 10 years (1998-2007) of observations, anvil observed by the Tropical Rainfall Measuring Mission (TRMM...

  1. Effects of large-scale distribution of wind energy in and around Europe

    E-Print Network [OSTI]

    Effects of large-scale distribution of wind energy in and around Europe Gregor Giebel Niels Gylling energy in Europe? · Distribution of wind energy all over Europe leads to smoothing of the wind power energy can easily supply up to ~20% of the European demand. At this stage, · Less than 13% of the wind

  2. A parallel revised simplex solver for large scale block angular LP problems

    E-Print Network [OSTI]

    Hall, Julian

    A parallel revised simplex solver for large scale block angular LP problems Julian Hall and Edmund Smith School of Mathematics University of Edinburgh 29th July 2010 A parallel revised simplex solver · Revised simplex method for BALP problems Basis matrix and its inversion Solution of linear systems

  3. Energy Evaluation of PMCMTP for Large-Scale Wireless Sensor Networks

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    slots inside each Personal Area Network (PAN)), · Energy balancing and saving to prolong network lifetime. Energy Evaluation of PMCMTP for Large-Scale Wireless Sensor Networks Jamila Ben Slimane, Ye-Qiong Song, Anis Koubâa§¶ and Mounir Frikha Sup'Com-MEDIATRON, City of Communication Technologies, 2083

  4. Large Scale Volume Rendering in Immersive Environments with Direct Manipulation Widgets

    E-Print Network [OSTI]

    Kniss, Joe Michael

    Large Scale Volume Rendering in Immersive Environments with Direct Manipulation Widgets Master's Thesis Proposal Joe Michael Kniss May 15, 2001 1 Introduction 1.1 Thesis Statement Parallel rendering exploration. 1.2 Motivation Direct volume rendering has proven to be an important visualization tool

  5. Fast Solver for Large Scale Eddy Current Non-Destructive Evaluation Problems

    E-Print Network [OSTI]

    Fast Solver for Large Scale Eddy Current Non-Destructive Evaluation Problems Naiguang Lei Advisor: Lalita Udpa Thursday, July 31st, 2014 9:00-11:00am, EB2219 Abstract Eddy current testing plays a very important role in non-destructive evaluation. In eddy current testing, an alternating magnetic field source generates induced currents, called eddy currents, in an electrically

  6. CHANGES OF SYSTEM OPERATION COSTS DUE TO LARGE-SCALE WIND INTEGRATION

    E-Print Network [OSTI]

    Model Institute of Energy Economics and the Rational Use of EnergyIER Changes of System Operation CostsCHANGES OF SYSTEM OPERATION COSTS DUE TO LARGE-SCALE WIND INTEGRATION Derk Jan SWIDER1 , Rüdiger-Essen, Germany 3 Risoe International Laboratory, Denmark Business and Policy Track: Integrating wind

  7. Large-scale flow of geofluids at the Dead Sea Rift H. Gvirtzmana,*, E. Stanislavskyb

    E-Print Network [OSTI]

    Gvirtzman, Haim

    that has caused large-scale migration of brine and hydrocarbons at the Dead Sea Rift. Numerical simulations flow directions. The first is a density-driven migration of brine through deep aquifers from the rift reserved. Keywords: Groundwater; Brine; Hydrocarbons; Rift; Dead Sea; Modeling 1. The Dead Sea Rift

  8. Page Digest for Large-Scale Web Services Daniel Rocco, David Buttler, Ling Liu

    E-Print Network [OSTI]

    Rocco, Daniel

    Page Digest for Large-Scale Web Services Daniel Rocco, David Buttler, Ling Liu Georgia Institute this storage and processing efficiently. In this paper, we introduce Page Digest, a mechanism for efficient storage and processing of Web documents. The Page Digest design encourages a clean separation

  9. Large-scale hierarchical optimization for online advertising and wind farm planning

    E-Print Network [OSTI]

    Eskenazi, Maxine

    Large-scale hierarchical optimization for online advertising and wind farm planning Konstantin (particularly, spon- sored search) and wind farm turbine-layout planning. Whereas very different in specifics annealing and integer linear programming as our principled approach. Wind farm layout optimization

  10. Mining Induced Seismicity -Monitoring of a Large Scale Salt Cavern Collapse

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Mining Induced Seismicity - Monitoring of a Large Scale Salt Cavern Collapse E. Klein* (Ineris), I ground failure phenomenon induced by old underground mining works, a field experiment was undertaken in collaboration with the SOLVAY mining company: a solution mine was instrumented in 2004 previously to its

  11. I/O-Conscious Data Preparation for Large-Scale Web Search Engines

    E-Print Network [OSTI]

    Chiueh, Tzi-cker

    a general technique for efficiently carrying out large sets of simple transformation or querying. I/O-Conscious Data Preparation for Large-Scale Web Search Engines Maxim Lifantsev Tzi-cker Chiueh of the transformation and querying operations that work with the data. This data and processing partitioning is natu

  12. Large-scale Probabilistic Forecasting in Energy Systems using Sparse Gaussian Conditional Random Fields

    E-Print Network [OSTI]

    Kolter, J. Zico

    -Gaussian case using the copula transform. On a wind power forecasting task, we show that this probabilisticLarge-scale Probabilistic Forecasting in Energy Systems using Sparse Gaussian Conditional Random high-dimensional conditional Gaussian distributions to forecasting wind power and extend it to the non

  13. Simulating the Power Consumption of Large-Scale Sensor Network Applications

    E-Print Network [OSTI]

    Simulating the Power Consumption of Large-Scale Sensor Network Applications Victor Shnayder, Mark of the most important aspects of sensor application design: that of power consumption. While simple approximations of overall power usage can be derived from estimates of node duty cycle and communication rates

  14. Microfluidic very large scale integration (mVLSI) with integrated micromechanical valves{

    E-Print Network [OSTI]

    Quake, Stephen R.

    Microfluidic very large scale integration (mVLSI) with integrated micromechanical valves{ Ismail40258k Microfluidic chips with a high density of control elements are required to improve device and accessible high-density microfluidic chips, we have fabricated a monolithic PDMS valve architecture

  15. Parameter identification in large-scale models for oil and gas production

    E-Print Network [OSTI]

    Van den Hof, Paul

    Parameter identification in large-scale models for oil and gas production Jorn F.M. Van Doren: Models used for model-based (long-term) operations as monitoring, control and optimization of oil and gas information to the identification problem. These options are illustrated with examples taken from oil and gas

  16. PNNL-SA-XXXXX Ultra Large-Scale Power System Control and

    E-Print Network [OSTI]

    Low, Steven H.

    PNNL-SA-XXXXX Ultra Large-Scale Power System Control and Coordination Architecture A Strategic Institute of Technology Rick Geiger Utilities and Smart Grid Cisco Systems PNNL Richland, Washington 99352 1.0 Introduction Electric power grids

  17. Large-Scale Analysis of Individual and Task Differences in Search Result Page Examination Strategies

    E-Print Network [OSTI]

    Dumais, Susan

    Large-Scale Analysis of Individual and Task Differences in Search Result Page Examination users examine results which are similar to those observed in small-scale studies. Our findings have differences on search result page examination strategies is important in develop- ing improved search engines

  18. A steady-state L-mode tokamak fusion reactor : large scale and minimum scale

    E-Print Network [OSTI]

    Reed, Mark W. (Mark Wilbert)

    2010-01-01T23:59:59.000Z

    We perform extensive analysis on the physics of L-mode tokamak fusion reactors to identify (1) a favorable parameter space for a large scale steady-state reactor and (2) an operating point for a minimum scale steady-state ...

  19. A Large-scale Benchmark Study of Existing Algorithms for Taxonomy-Independent

    E-Print Network [OSTI]

    Slatton, Clint

    A Large-scale Benchmark Study of Existing Algorithms for Taxonomy-Independent Microbial Community sequencing technology have created new opportunities to probe the hidden world of microbes. Taxonomy: pyrosequencing, 16S rRNA, taxonomy independent analysis, massive data, clustering, microbial diversity estimation

  20. Environmental impacts of large-scale grid-connected ground-mounted PV installations

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Environmental impacts of large-scale grid-connected ground-mounted PV installations Antoine Beylota-scale ground-mounted PV installations by considering a life-cycle approach. The methodology is based. Mobile PV installations with dual-axis trackers show the largest impact potential on ecosystem quality

  1. A Scalable Model for Energy Load Balancing in Large-scale Sensor Networks

    E-Print Network [OSTI]

    de Veciana, Gustavo

    A Scalable Model for Energy Load Balancing in Large-scale Sensor Networks Seung Jun Baek we consider how one might achieve more balanced energy burdens across the network by spreading sinks change their locations to balance the energy burdens incurred accross the network nodes [1

  2. Funding for Large-Scale Sustainable Energy Projects Combining Expert Opinions to Support Decisions

    E-Print Network [OSTI]

    Mountziaris, T. J.

    Funding for Large-Scale Sustainable Energy Projects Combining Expert Opinions to Support Decisions for a sustainable energy future? Three teams, UMass, Harvard, and FEEM (Fondazione Eni Enrico Mattei), share a goal technologies to fund for optimal success for a sustainable energy future. Progress and Results · Created models

  3. Polymeric Electro-optic Modulators: From Chromophore Design to Integration with Semiconductor Very Large Scale Integration

    E-Print Network [OSTI]

    Polymeric Electro-optic Modulators: From Chromophore Design to Integration with Semiconductor Very Large Scale Integration Electronics and Silica Fiber Optics L. Dalton, A. Harper, A. Ren, F. Wang, G California, Los Angeles, California 90089-1661 Chromophores with optimized second-order optical nonlinearity

  4. Wavelet Analysis for a New Multiresolution Model for Large-Scale Textured Terrains

    E-Print Network [OSTI]

    Illes Balears, Universitat de les

    Wavelet Analysis for a New Multiresolution Model for Large-Scale Textured Terrains María José transmission of both geometry and textures of a terrain model. Wavelet Multiresolution Analysis is applied. An innovative texture synthesis process based on Wavelet classification is used in the reconstruction

  5. Self-Organizing Fault-Tolerant Topology Control in Large-Scale Three-Dimensional

    E-Print Network [OSTI]

    Wang, Yu

    be deployed in three-dimensional (3D) space, such as under water wireless sensor networks in ocean or mobile to investigate self-organizing fault-tolerant topology control protocols for large- scale 3D wireless networks networks. Our simulation confirms our theoretical proofs for all proposed 3D topologies. Categories

  6. Chimera: Large-Scale Classification using Machine Learning, Rules, and Crowdsourcing

    E-Print Network [OSTI]

    Doan, AnHai

    Chimera: Large-Scale Classification using Machine Learning, Rules, and Crowdsourcing Chong Sun1 has been published on how this is done in practice. In this paper we describe Chimera, our solution solutions cease to work. We describe how Chimera employs a combination of learning, rules (created by in

  7. Large Scale Distribution of Stochastic Control Algorithms for Gas Storage Constantinos Makassikis, Stephane Vialle

    E-Print Network [OSTI]

    Vialle, Stéphane

    Large Scale Distribution of Stochastic Control Algorithms for Gas Storage Valuation Constantinos algorithm which is applied to gas storage valuation, and presents its experimental performances on two PC achieved in the field of gas storage valuation (see [2, 3] for example). As a result, many different price

  8. IEEE TRANSACTIONS ON POWER SYSTEMS 1 Large-Scale Integration of Deferrable

    E-Print Network [OSTI]

    Oren, Shmuel S.

    . Index Terms--Load management, power generation scheduling, wind power generation. I. INTRODUCTION on power system operations it is necessary to represent the balancing oper- ations of the remaining gridIEEE TRANSACTIONS ON POWER SYSTEMS 1 Large-Scale Integration of Deferrable Demand and Renewable

  9. Facility Location under Demand Uncertainty: Response to a Large-scale Bioterror Attack

    E-Print Network [OSTI]

    Dessouky, Maged

    Facility Location under Demand Uncertainty: Response to a Large-scale Bioterror Attack Abstract In the event of a catastrophic bio-terror attack, major urban centers need to efficiently distribute large of a hypothetical anthrax attack in Los Angeles County. Keywords: Capacitated facility location, distance

  10. Critical Perspectives on Large-Scale Distributed Applications and Production Grids

    E-Print Network [OSTI]

    Weissman, Jon

    not progressed in phase. Progress in the next phase and generation of distributed applications will require that can seamlessly utilize distributed infrastructures in an extensible and scalable fashion. We believeCritical Perspectives on Large-Scale Distributed Applications and Production Grids Shantenu Jha1

  11. Challenges and Opportunities in Large-Scale Deployment of Automated Energy Consumption

    E-Print Network [OSTI]

    Mohsenian-Rad, Hamed

    to the locational marginal price (LMP) at that bus. We show that a key challenge in large-scale deployment of ECS, locational marginal price. I. INTRODUCTION Real-time and time-of-use electricity pricing models can- edge among users on how to respond to time-varying prices and the lack of effective home automation

  12. Large-Scale Patent Classification with Min-Max Modular Support Vector Machines

    E-Print Network [OSTI]

    Lu, Bao-Liang

    Large-Scale Patent Classification with Min-Max Modular Support Vector Machines Xiao-Lei Chu, Chao Ma, Jing Li, Bao-Liang Lu Senior Member, IEEE, Masao Utiyama, and Hitoshi Isahara Abstract-- Patent-world patent classification typically exceeds one million, and this number increases every year. An effective

  13. Comparative Analysis of Balanced Winnow and SVM in Large Scale Patent Categorization

    E-Print Network [OSTI]

    Steels, Luc

    Comparative Analysis of Balanced Winnow and SVM in Large Scale Patent Categorization Katrien Beuls techniques, a collection of 1.2 million patent applications is used to build a classifier that is able). Contrary to SVM, Balanced Winnow is frequently applied in today's patent categorization systems. Results

  14. ON THE ROLE OF THE LARGE-SCALE MAGNETIC RECONNECTION IN THE CORONAL HEATING

    E-Print Network [OSTI]

    Pevtsov, Alexei A.

    emerging active region. We demonstrate that the effects of remote heating can be seen at significantON THE ROLE OF THE LARGE-SCALE MAGNETIC RECONNECTION IN THE CORONAL HEATING Alexei A. Pevtsov(1 changes in the magnetic connectivity, may play a role in coronal heating. To demonstrate the validity

  15. Time series modeling and large scale global solar radiation forecasting from geostationary satellites data

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    1 Time series modeling and large scale global solar radiation forecasting from geostationary global solar radiation. In this paper, we use geostationary satellites data to generate 2-D time series of solar radiation for the next hour. The results presented in this paper relate to a particular territory

  16. Crude closure dynamics through large scale statistical theories Marcus J. Grote and Andrew J. Majda

    E-Print Network [OSTI]

    Majda, Andrew J.

    Crude closure dynamics through large scale statistical theories Marcus J. Grote and Andrew J. Majda 10012-1185 Received 22 January 1997; accepted 9 July 1997 Crude closure algorithms based on equilibrium on equilibrium energy-enstrophy statistical theory, or two parameters, the energy and circulation, for crude

  17. Introduction to a Large-Scale General Purpose Ground Truth Database: Methodology,

    E-Print Network [OSTI]

    Zhu, Song Chun

    is to build up a publicly accessible annotated image database with over 1,000,000 images and more than 200, to make the database general enough to be used for different evaluation tasks, we need to build up in a universal way. To the best of our knowledge, there has not been much previous work on building a large scale

  18. Large-Scale Urban Modeling by Combining Ground Level Panoramic and Aerial Imagery

    E-Print Network [OSTI]

    Shahabi, Cyrus

    building or part of a building. Due to error propagation, they are difficult to scale up to model aerial image, we can identify the footprints (up to a common scale) of the buildings, including of multiple tall buildings. Existing methods for large-scale modeling mostly depend on remote sensing

  19. Automated Data Verification in a Large-scale Citizen Science Project: a Case Study

    E-Print Network [OSTI]

    Wong, Weng-Keen

    Automated Data Verification in a Large-scale Citizen Science Project: a Case Study Jun Yu1 , Steve,jag73}@cornell.edu Abstract-- Although citizen science projects can engage a very large number with eBird, which is a broad-scale citizen science project to collect bird observations, has shown

  20. Distributed Sampling-Based Roadmap of Trees for Large-Scale Motion Planning

    E-Print Network [OSTI]

    Kavraki, Lydia E.

    Distributed Sampling-Based Roadmap of Trees for Large-Scale Motion Planning Erion Plaku and Lydia E of the Sampling-based Roadmap of Trees (SRT) algorithm using a decentralized master-client scheme. The distributed that similar speedups can be obtained with several hundred processors. Index Terms-- motion planning, roadmap

  1. Mining for Statistical Models of Availability in Large-Scale Distributed Systems

    E-Print Network [OSTI]

    Kondo, Derrick

    Mining for Statistical Models of Availability in Large-Scale Distributed Systems: An Empirical and Telecommunication Systems (MASCOTS 2009) B. Javadi (INRIA) Statistical Models of Availability MASCOTS 2009 1 / 34) Statistical Models of Availability MASCOTS 2009 2 / 34 #12;Introduction and Motivation P2P, Grid, Cloud

  2. Simulation-based optimization of communication protocols for large-scale wireless sensor networks1

    E-Print Network [OSTI]

    Maróti, Miklós

    1 Simulation-based optimization of communication protocols for large-scale wireless sensor networks--The design of reliable, dynamic, fault-tolerant services in wireless sensor networks is a big challenge everyday life more comfortable, e.g. Intelligent Spaces [3]. These sensor networks often use distributed

  3. Very Large Scale Open Wireless Sensor Network Testbed Clement Burin des Rosiers2

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    SensLAB Very Large Scale Open Wireless Sensor Network Testbed Clément Burin des Rosiers2 wireless sensor network protocols and applications. SensLAB's main and most important goal is to offer examples to illustrate the use of the SensLAB testbed. Key words: Wireless Sensor Network, Testbed, Radio

  4. SensLAB: a Very Large Scale Open Wireless Sensor Network Testbed

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    SensLAB: a Very Large Scale Open Wireless Sensor Network Testbed C. Burin des Rosiers2 , G. Chelius- tations of scalable wireless sensor network protocols and applications. SensLAB's main and most important demonstration examples to illustrate the use of the SensLAB testbed. Keywords: Wireless Sensor Network, Testbed

  5. Identification of Market Power in Large-Scale Electric Energy Markets Bernard C. Lesieutre

    E-Print Network [OSTI]

    Identification of Market Power in Large-Scale Electric Energy Markets Bernard C. Lesieutre Hyung and competitive operation of centrally- dispatched electricity markets. Traditional measures for market power demand and reserve requirements, a centrally-dispatched electricity market provides a transparent

  6. Rationale Support for Maintenance of Large Scale Systems Janet E. Burge and David C. Brown

    E-Print Network [OSTI]

    Brown, David C.

    of developing the software in the first place [19]. One reason for this is that maintenance is a long and expensive phase of the software life-cycle, and it is especially difficult for large-scale systems. To support maintenance, we are developing the SEURAT (Software Engineering Using RATionale) system to support

  7. LARGE SCALE DIRECT SHEAR TESTING WITH TIRE BALES By: Christopher J. LaRocque1

    E-Print Network [OSTI]

    Zornberg, Jorge G.

    2 , Advisor Abstract: There are growing environmental interests in the utilization of recycled tireLARGE SCALE DIRECT SHEAR TESTING WITH TIRE BALES By: Christopher J. LaRocque1 and Jorge G. Zornberg bales for civil engineering applications. Due to their lightweight and free-draining properties, tire

  8. U.S. Energy Infrastructure Investment: Large-Scale Integrated Smart Grid

    E-Print Network [OSTI]

    research on challenges facing the electric power industry and educating the next generation of powerU.S. Energy Infrastructure Investment: Large-Scale Integrated Smart Grid Solutions with High Penetration of Renewable Resources, Dispersed Generation, and Customer Participation White Paper Power Systems

  9. Analysis and Management of Heterogeneous User Mobility in Large-scale Downlink Systems

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    Analysis and Management of Heterogeneous User Mobility in Large-scale Downlink Systems Axel Müller§, Emil Björnson§, Romain Couillet, and Mérouane Debbah§ Intel Mobile Communications, Sophia Antipolis, France ACCESS Linnaeus Centre, Signal Processing Lab, KTH Royal Institute of Technology, Sweden

  10. Performance of Hybrid Methods for Large-Scale Unconstrained Optimization as Applied

    E-Print Network [OSTI]

    Navon, Michael

    Performance of Hybrid Methods for Large-Scale Unconstrained Optimization as Applied to Models. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms­1231, 2003 Key words: energy minimization; proteins; loops; hybrid method; truncated Newton; dielectric

  11. A LARGE SCALE CONTINUUM-DISCRETE NUMERICAL MODELLING: APPLICATION TO OVERBURDEN DAMAGE OF A SALT CAVERN

    E-Print Network [OSTI]

    Boyer, Edmond

    A LARGE SCALE CONTINUUM-DISCRETE NUMERICAL MODELLING: APPLICATION TO OVERBURDEN DAMAGE OF A SALT damage on top of an underground solution mining, an in-situ experiment is undertaken above a salt cavity in the Lorraine region (NE of France). The overburden overlying the salt cavity is characterized by a competent

  12. CrowdSC: Building Smart Cities with Large Scale Citizen Participation

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    CrowdSC: Building Smart Cities with Large Scale Citizen Participation Karim Benouaret1 , Raman/Inria/Université de Lorraine, Villers-lès-Nancy, France Abstract – An elegant way to make cities smarter would CrowdSC, an effective crowdsourcing framework designed for smarter cities. We show that it is possible

  13. THE PREV AIR SYSTEM, AN OPERATIONAL SYSTEM FOR LARGE SCALE AIR QUALITY FORECASTS OVER EUROPE; APPLICATIONS

    E-Print Network [OSTI]

    Paris-Sud XI, Université de

    THE PREV AIR SYSTEM, AN OPERATIONAL SYSTEM FOR LARGE SCALE AIR QUALITY FORECASTS OVER EUROPE Author ABSTRACT Since Summer 2003, the PREV'AIR system has been delivering through the Internet1 daily air quality forecasts over Europe. This is the visible part of a wider collaborative project

  14. A Programming Model for Context-Aware Applications in Large-Scale Pervasive Systems

    E-Print Network [OSTI]

    Dustdar, Schahram

    .g. pervasive health-care, city traffic monitoring, environmental monitoring, smart grids). These large- scale, and smart grids. These systems differ significantly from conventional context-aware systems, which focus. Examples of such trends are pervasive health-care, city traffic scheduling, environmental monitoring

  15. Impacts of Shortwave Penetration Depth on Large-Scale Ocean Circulation and Heat Transport

    E-Print Network [OSTI]

    Gnanadesikan, Anand

    Impacts of Shortwave Penetration Depth on Large-Scale Ocean Circulation and Heat Transport COLM independent parameter- izations that use ocean color to estimate the penetration depth of shortwave radiation. This study offers a way to evaluate the changes in irradiance penetration depths in coupled ocean

  16. Onix: A Distributed Control Platform for Large-scale Production Networks Teemu Koponen

    E-Print Network [OSTI]

    Onix: A Distributed Control Platform for Large-scale Production Networks Teemu Koponen , Martin on top of which a network control plane can be implemented as a distributed system. Control planes written within Onix operate on a global view of the network, and use basic state distribution primitives

  17. A SPECULATIVE FRAMEWORK FOR THE APPLICATION OF ARTIFICIAL INTELLIGENCE TO LARGE SCALE INTERCONNECTED POWER SYSTEMS

    E-Print Network [OSTI]

    Hartley, Roger

    INTERCONNECTED POWER SYSTEMS By Nadipuram R. Prasad Satish J. Ranade Electrical Engineering Department New Mexico) technologies to the operation and control of large scale interconnected electric power systems. A fundamental issue discussed in this paper is the control structure of power systems. An evaluation of the control

  18. Large-scale molecular dynamics simulation of magnetic properties of amorphous iron under pressure

    E-Print Network [OSTI]

    ) Enhanced refrigerant capacity and magnetic entropy flattening using a two-amorphous FeZrB(Cu) composite. Large-scale molecular dynamics simulation of magnetic properties of amorphous iron under pressure Appl. Phys. Lett. 99, 232501 (2011) Nonlinear motion of magnetic vortex under alternating

  19. Energy, water and large-scale patterns of reptile and amphibian species richness in Europe

    E-Print Network [OSTI]

    Rodríguez, Miguel Ángel

    Energy, water and large-scale patterns of reptile and amphibian species richness in Europe Miguel Á and amphibian species richness in Europe and 11 environmental variables related to five hypotheses, an estimate of plant biomass generated through satellite remote sensing, both described similar proportions

  20. Large-scale hybrid poplar production economics: 1995 Alexandria, Minnesota establishment cost and management

    SciTech Connect (OSTI)

    Downing, M. [Oak Ridge National Lab., TN (United States); Langseth, D. [WesMinn Resource Conservation and Development District, Alexandria, MN (United States); Stoffel, R. [Minnesota Dept. of Natural Resources, Alexandria, MN (United States); Kroll, T. [Minnesota Dept. of Natural Resources, St. Paul, MN (United States). Forestry Div.

    1996-12-31T23:59:59.000Z

    The purpose of this project was to track and monitor costs of planting, maintaining, and monitoring large scale commercial plantings of hybrid poplar in Minnesota. These costs assist potential growers and purchasers of this resource in determining the ways in which supply and demand may be secured through developing markets.

  1. Modeling the large-scale water balance impact of different irrigation systems

    E-Print Network [OSTI]

    Evans, Jason

    Modeling the large-scale water balance impact of different irrigation systems J. P. Evans1 and B. F precipitation causes the Turkish government to invest in modernizing its own irrigation systems balance impact of different irrigation systems, Water Resour. Res., 44, W08448, doi:10.1029/2007WR006671

  2. Large-Scale Errors and Mesoscale Predictability in Pacific Northwest Snowstorms DALE R. DURRAN

    E-Print Network [OSTI]

    Large-Scale Errors and Mesoscale Predictability in Pacific Northwest Snowstorms DALE R. DURRAN The development of mesoscale numerical weather prediction (NWP) models over the last two decades has made- search communities. Nevertheless, the predictability of the mesoscale features captured in such forecasts

  3. Large-Scale Oceanographic Constraints on the Distribution of Melting and Freezing under Ice Shelves

    E-Print Network [OSTI]

    Gnanadesikan, Anand

    Large-Scale Oceanographic Constraints on the Distribution of Melting and Freezing under Ice Shelves received 10 October 2007, in final form 11 March 2008) ABSTRACT Previous studies suggest that ice shelves. Introduction Fifty percent of the Antarctic coastline is fringed by ice shelves (floating extensions

  4. Optimization and Large Scale Learning Optimization lies at the heart of almost every machine

    E-Print Network [OSTI]

    Optimization and Large Scale Learning Optimization lies at the heart of almost every machine these facets requires optimization techniques tailored to not only respect them but to aggressively exploit by looking at the recent book [1] (MIT Press, 2011), or at the following workshops: (i) "Optimization

  5. Categorised Ethical Guidelines for Large Scale Mobile HCI Donald McMillan

    E-Print Network [OSTI]

    Chalmers, Matthew

    University of Glasgow, UK matthew.chalmers@glasgow.ac.uk ABSTRACT The recent rise in large scale trials of community consensus can leave researchers unsure as to how to run a study which meets their ethical in Greenfield's Everyware book [18]. High-level guidelines such as `do no harm' and `default to harmlessness

  6. Design Considerations for a Large-Scale Wireless Sensor Network for Substation Monitoring

    E-Print Network [OSTI]

    Nasipuri, Asis

    Design Considerations for a Large-Scale Wireless Sensor Network for Substation Monitoring. Asis Nasipuri, University City Blvd., Charlotte, NC 28223; Luke Van der Zel and Bienvenido Rodriguez, Substations Group, EPRI ... effective monitoring applications for the substation using low-cost wireless sensor nodes that can sustain ...

  7. The impact of large scale biomass production on ozone air pollution in Joost B. Beltman a

    E-Print Network [OSTI]

    Utrecht, Universiteit

    The impact of large scale biomass production on ozone air pollution in Europe. Joost B. Beltman ... by up to 25% and 40%. Air pollution mitigation strategies should consider land use management. Keywords: Poplar. Tropospheric ozone contributes to the removal of air pollutants from ...

  8. Communications via Systems-on-Chips Clustering in Large-Scaled Sensor Networks

    E-Print Network [OSTI]

    Fan, Jeffrey

    node, the large-scaled sensor network is proposed to be transformed into a maze diagram by a user ... of data, including temperature, humidity, pressure, noise levels, vehicular movement, etc. ... to the functionalities of a highly scaled VLSI silicon chip with multi-cored environments. In other words, each SoC has ...

  9. Impact of Wind Shear and Tower Shadow Effects on Power System with Large Scale Wind Power

    E-Print Network [OSTI]

    Hu, Weihao

    Impact of Wind Shear and Tower Shadow Effects on Power System with Large Scale Wind Power ... to wind speed variations, the wind shear and the tower shadow effects. The fluctuating power may be able ... DIgSILENT/PowerFactory. In this paper, the impacts of wind shear and tower shadow effects on the small signal stability of power systems ...
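    As background for this record: the wind shear effect is often represented with a power-law vertical wind profile. The sketch below uses that textbook formula, not necessarily the model employed in the paper, and the shear exponent value is a hypothetical illustration:

    ```python
    def shear_speed(v_hub: float, h: float, h_hub: float, alpha: float = 0.2) -> float:
        """Power-law wind shear profile: wind speed at height h, given the
        hub-height speed v_hub and a shear exponent alpha (terrain dependent)."""
        return v_hub * (h / h_hub) ** alpha

    # A blade element 40 m above an 80 m hub sees faster wind than one 40 m
    # below it, so aerodynamic torque oscillates as the rotor turns.
    top = shear_speed(10.0, 120.0, 80.0)
    bottom = shear_speed(10.0, 40.0, 80.0)
    ```

    Tower shadow adds a further dip in the effective wind speed each time a blade passes the tower; together the two effects produce the periodic power fluctuation the paper studies.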

  10. Advanced modeling of large-scale oxy-fuel combustion processes

    E-Print Network [OSTI]

    Yin, Chungen

    Advanced modeling of large-scale oxy-fuel combustion processes Chungen Yin Department of Energy Technology, Aalborg University, DK-9220 Aalborg, Denmark, chy@et.aau.dk Introduction Oxy-fuel combustion simulations of various oxy- fuel combustion processes and experimental validation. Result · A new weighted

  11. Does the Budget Surplus Justify Large-Scale Tax Cuts?: Updates and Extensions

    E-Print Network [OSTI]

    Sadoulet, Elisabeth

    Does the Budget Surplus Justify Large-Scale Tax Cuts?: Updates and Extensions. Alan J. Auerbach ... agreed should not be used for tax cuts. All of the remaining "on-budget" surplus was due to implausible ... of the on-budget surplus was due to accumulations in government trust funds for Medicare and pensions, which ...

  12. Domain Controlled Architecture A New Approach for Large Scale Software Integrated Automotive Systems

    E-Print Network [OSTI]

    Kühnhauser, Winfried

    Domain Controlled Architecture A New Approach for Large Scale Software Integrated Automotive Scale Software Integration, LSSI, Automotive Real Time, Multi-core, Many-core, Embedded Automo- tive mobility domain. The automotive in- dustry is confronted with a rising system complexity and several

  13. China's changing landscape during the 1990s: Large-scale land transformations estimated with satellite data

    E-Print Network [OSTI]

    China's changing landscape during the 1990s: Large-scale land transformations estimated January 2005. [1] Land-cover changes in China are being powered by demand for food for its growing increased by 2.99 million hectares and urban areas increased by 0.82 million hectares. In northern China

  14. Smart Home in a Box: A Large Scale Smart Home Deployment

    E-Print Network [OSTI]

    Cook, Diane J.

    Smart Home in a Box: A Large Scale Smart Home Deployment. Aaron S. CRANDALL a and Diane J. COOK a,1 ... systems. This work summarizes some of the existing works and introduces the Smart Home in a Box (SHiB) Project. The upcoming SHiB Project targets building 100 smart homes in several kinds of living spaces ...

  15. Invited Applications Paper Detecting Large-Scale System Problems by Mining Console Logs

    E-Print Network [OSTI]

    Xu, Wei

    Invited Applications Paper: Detecting Large-Scale System Problems by Mining Console Logs. Wei Xu ... Researchers and operators have been using all kinds of monitoring data, from the simplest numerical metrics ... Operators, charged with fixing the problem, are usually ...
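    The record above concerns mining console logs for system problems. The paper's actual pipeline is not reproduced here; as a toy sketch, one common first step is masking variable fields so that log lines collapse into message templates whose counts can serve as features (the regexes and names below are illustrative assumptions):

    ```python
    import re
    from collections import Counter

    def template(line: str) -> str:
        """Mask variable fields (hex ids, IP-like tokens, decimal numbers)
        so that lines with the same fixed text map to one template."""
        line = re.sub(r'\b0x[0-9a-fA-F]+\b', '<HEX>', line)
        line = re.sub(r'\b\d+(?:\.\d+){3}\b', '<IP>', line)
        line = re.sub(r'\b\d+\b', '<NUM>', line)
        return line.strip()

    def count_vector(lines):
        """Message-count feature vector: template -> number of occurrences."""
        return Counter(template(l) for l in lines)

    logs = ["open block 123", "open block 456", "error at 0xdeadbeef"]
    v = count_vector(logs)
    ```

    Anomaly detection can then flag vectors whose template mix deviates from the norm.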

  16. Platform-of-Platforms: A Modular, Integrated Resource Framework for Large-Scale Services

    E-Print Network [OSTI]

    Weissman, Jon

    Platform-of-Platforms: A Modular, Integrated Resource Framework for Large-Scale Services Rahul there has been a great deal of research ac- tivity in the development of diverse network service platforms-tier resource platform may be a natural fit for such multi-tier network services. Figure 1: Hierarchical

  17. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    SciTech Connect (OSTI)

    Ramamurthy, Byravamurthy [University of Nebraska-Lincoln

    2014-05-05T23:59:59.000Z

    In this project, we developed scheduling frameworks for dynamic bandwidth demands for large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers, and a book chapter. We also addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks, publishing several conference and journal papers on this topic, as well as the problems of joint allocation of computing, storage, and networking resources in Grid/Cloud networks, and we proposed energy-efficient mechanisms for operating optical WDM networks.
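    The ILP, Tabu Search, and Genetic Algorithm formulations from the project are not reproduced here. As a toy sketch of the underlying admission problem, the code below greedily places each bandwidth request at its earliest feasible slot on a single link (the time-slotted model and all parameter names are illustrative assumptions):

    ```python
    from typing import List, Tuple

    Reservation = Tuple[int, int, float]  # (start, end, bandwidth)

    def admissible(reservations: List[Reservation], start: int, end: int,
                   demand: float, capacity: float) -> bool:
        """True if a reservation [start, end) of `demand` units fits on a
        link of `capacity`, given the existing reservations."""
        # Peak usage within [start, end) can only change at `start` or at the
        # start of an overlapping reservation, so checking those points suffices.
        points = {start} | {s for s, _, _ in reservations if start <= s < end}
        for t in points:
            in_use = sum(b for s, e, b in reservations if s <= t < e)
            if in_use + demand > capacity:
                return False
        return True

    def schedule(requests, capacity):
        """Greedy first-fit: slide each request's start forward until it fits
        within its [earliest, deadline] window, else reject it."""
        accepted: List[Reservation] = []
        for earliest, deadline, duration, demand in requests:
            t = earliest
            while t + duration <= deadline:
                if admissible(accepted, t, t + duration, demand, capacity):
                    accepted.append((t, t + duration, demand))
                    break
                t += 1  # advance one time slot
        return accepted

    # Two identical requests on a 10-unit link: the second cannot overlap the
    # first, so it slides to start at t = 5.
    res = schedule([(0, 10, 5, 6), (0, 10, 5, 6)], capacity=10)
    ```

    A production scheduler must also pick paths through a multi-link topology, which is what motivates the ILP and metaheuristic approaches mentioned above.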

  18. Scientific Discovery through Advanced Computing (SciDAC) | U.S. DOE Office

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  19. Using Computers For Scientific Work c 1996 C.T.J. Dodson

    E-Print Network [OSTI]

    Dodson, C.T.J.

    simple hypertext documents for the World Wide Web. Updates! If you are using these materials for learning ... quality reports of scientific work. The software we shall need is a web browser, a spreadsheet package ... be called for: Wordprocessing. On a PC, WordPad, Word, WordPerfect, etc. are standard wordprocessing packages

  20. (865) 574-6185, mccoydd@ornl.gov Advanced Scientific Computing Research

    E-Print Network [OSTI]

    Pennycook, Steve

    on integrating new software for the science applications which researchers run on high performance computing platforms. One of the key challenges in high performance computing is to ensure that the software which

  1. Path2Models: large-scale generation of computational models from biochemical pathway maps

    E-Print Network [OSTI]

    2013-01-01T23:59:59.000Z

    Swainston N, Dada JO, Khan F, Pir P, Simeonidis E, Spasić I ... D, Simeonidis E, Lanthaler K, Pir P, Lu C, Swainston N, Dunn

  2. Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research

    E-Print Network [OSTI]

    Gerber, Richard

    2012-01-01T23:59:59.000Z

    of focus include magnetohydrodynamics, plasma turbulence and ... systems, with a focus on the physics of plasmas in magnetic ... as well as space plasmas. The focus of his work is on

  3. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    E-Print Network [OSTI]

    Gerber, Richard

    2012-01-01T23:59:59.000Z

    Coherent Light Source (LCLS). d) Architectures with large ... Coherent Light Source (LCLS) at SLAC National Accelerator ... to chart new directions. At LCLS, the short duration of hard

  4. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    E-Print Network [OSTI]

    Gerber, Richard

    2014-01-01T23:59:59.000Z

    Requirements for Fusion Energy Sciences: Target 2017 ... Requirements for Fusion Energy Sciences: Target ... and Context. DOE's Fusion Energy Sciences program

  5. Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research

    E-Print Network [OSTI]

    Gerber, Richard

    2012-01-01T23:59:59.000Z

    simulations of fusion and energy systems with unprecedented ... Requirements for Fusion Energy Sciences 14 General ... and Storage Requirements for Fusion Energy Sciences

  6. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    E-Print Network [OSTI]

    Gerber, Richard

    2012-01-01T23:59:59.000Z

    geological structure and fluids, which relates to the grand challenge of integrated characterization,

  7. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    E-Print Network [OSTI]

    Gerber, Richard

    2012-01-01T23:59:59.000Z

    photovoltaics; hydrogen storage; ultrathin epitaxial films ... storage to obtain an accurate power spectrum, especially if the relatively rapid vibrational behavior of hydrogen

  8. Computer Energy Modeling Techniques for Simulation Large Scale Correctional Institutes in Texas

    E-Print Network [OSTI]

    Heneghan, T.; Haberl, J. S.; Saman, N.; Bou-Saada, T. E.

    1996-01-01T23:59:59.000Z

    using the DOE-2.1E building energy simulation program to model a 1,000-bed case study correctional unit located in Texas. INTRODUCTION The Texas Department of Criminal Justice (TDCJ) Stephenson unit located in Cuero, Texas was ... building energy simulation program (LBL 1980; 1981; 1982; 1989; 1994). The second part of the project included evaluating the energy consumption of this prototype unit. This paper presents a methodology that may be used to view and improve simulation...

  9. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    E-Print Network [OSTI]

    Gerber, Richard

    2012-01-01T23:59:59.000Z

    Electronic Structure Calculations mp261 Multiscale Simulations of Particle-, Molecule-Surface Interactions, Simulations of nanowires: Structure, Dynamics, Electronic Structure Calculations. Lin-Wang Wang 10 M Multiscale Simulations of Particle-, Molecule-Surface Interactions, simulations of nanowires: Structure, dynamics,

  10. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    E-Print Network [OSTI]

    Gerber, Richard

    2014-01-01T23:59:59.000Z

    in the Swarthmore Spheromak Experiment showing ... of the Swarthmore Spheromak Experiment (SSX) ... the Swarthmore Spheromak Experiment is shown

  11. A Large Scale Test of Computational Protein Design: Folding and Stability of Nine Completely Redesigned

    E-Print Network [OSTI]

    Baker, David

    Baker, Department of Biochemistry, University of Washington, Seattle, WA 98195, USA; Howard Hughes

  12. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    E-Print Network [OSTI]

    DOE Office of Science, Biological and Environmental Research Program Office BER,

    2010-01-01T23:59:59.000Z

    climate and earth system models, based on theoretical ... emerging class of Earth System Models that include detailed ... of integrated earth system model predictions requires

  13. Large Scale Computing and Storage Requirements for Fusion Energy Sciences Research

    E-Print Network [OSTI]

    Gerber, Richard

    2012-01-01T23:59:59.000Z

    basic plasma science, including both burning plasma and low temperature plasma science and engineering, to enhance economic

  14. Multinet Bayesian network models for large-scale transcriptome integration in computational medicine

    E-Print Network [OSTI]

    Lin, Tiffany J

    2012-01-01T23:59:59.000Z

    Motivation: This work utilizes the closed loop Bayesian network framework for predictive medicine via integrative analysis of publicly available gene expression findings pertaining to various diseases and analyzes the ...

  15. Improving the Performance of Uintah: A Large-Scale Adaptive Meshing Computational

    E-Print Network [OSTI]

    Utah, University of

    a software system in which fundamental chemistry and engineering physics are fully coupled with nonlinear ... with a force much larger than expected leaving behind a 70 foot crater. Fortunately no one was hurt. Why did ... to convective and radiative heat fluxes from a fire which heats the container and the PBX. After some amount

  16. Large-Scale Computational Screening of Zeolites for Ethane/Ethene

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  17. Harvey Wasserman! Large Scale Computing and Storage Requirements for High Energy Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)


  18. The FES Scientific Discovery through Advanced Computing (SciDAC) Program

    E-Print Network [OSTI]

    and researchers are expected to be leaders in the efficient and productive use of High Performance Computing

  19. Opportunities and Challenges for Running Scientific Workflows on the Cloud School of Computer Science and Engineering

    E-Print Network [OSTI]

    Lu, Shiyong

    of Computer Science and Engineering, Univ. of Electronic Science and Technology of China, Chengdu, China, yongzh04@gmail.com. Xubo Fei, Department of Computer Science, Wayne State University, Detroit, USA, xubo@wayne.edu. Ioan Raicu, Department of Computer Science, Illinois Institute of Technology, Chicago, USA, iraicu

  20. APPLICATIONS OF CFD METHOD TO GAS MIXING ANALYSIS IN A LARGE-SCALED TANK

    SciTech Connect (OSTI)

    Lee, S; Richard Dimenna, R

    2007-03-19T23:59:59.000Z

    The computational fluid dynamics (CFD) modeling technique was applied to the estimation of maximum benzene concentration for the vapor space inside a large-scaled and high-level radioactive waste tank at Savannah River site (SRS). The objective of the work was to perform the calculations for the benzene mixing behavior in the vapor space of Tank 48 and its impact on the local concentration of benzene. The calculations were used to evaluate the degree to which purge air mixes with benzene evolving from the liquid surface and its ability to prevent an unacceptable concentration of benzene from forming. The analysis was focused on changing the tank operating conditions to establish internal recirculation and changing the benzene evolution rate from the liquid surface. The model used a three-dimensional momentum coupled with multi-species transport. The calculations included potential operating conditions for air inlet and exhaust flows, recirculation flow rate, and benzene evolution rate with prototypic tank geometry. The flow conditions are assumed to be fully turbulent since Reynolds numbers for typical operating conditions are in the range of 20,000 to 70,000 based on the inlet conditions of the air purge system. A standard two-equation turbulence model was used. The modeling results for the typical gas mixing problems available in the literature were compared and verified through comparisons with the test results. The benchmarking results showed that the predictions are in good agreement with the analytical solutions and literature data. Additional sensitivity calculations included a reduced benzene evolution rate, reduced air inlet and exhaust flow, and forced internal recirculation. The modeling results showed that the vapor space was fairly well mixed and that benzene concentrations were relatively low when forced recirculation and 72 cfm ventilation air through the tank boundary were imposed. 
For the same 72 cfm air inlet flow but without forced recirculation, the heavier benzene gas was stratified. The results demonstrated that benzene concentrations were relatively low for typical operating configurations and conditions. Detailed results and the cases considered in the calculations will be discussed here.
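    The abstract quotes inlet Reynolds numbers of 20,000 to 70,000. A back-of-the-envelope check shows how a 72 cfm purge flow lands in that range; the duct diameter and air viscosity below are illustrative assumptions, not values taken from the report:

    ```python
    import math

    CFM_TO_M3S = 0.000471947       # 1 cfm in m^3/s
    flow_m3s = 72 * CFM_TO_M3S     # 72 cfm purge air flow
    duct_d = 0.10                  # assumed inlet duct diameter, m
    nu_air = 1.5e-5                # kinematic viscosity of air, m^2/s

    area = math.pi * (duct_d / 2.0) ** 2
    velocity = flow_m3s / area     # mean inlet velocity, m/s
    reynolds = velocity * duct_d / nu_air

    print(f"Re = {reynolds:,.0f}")  # ~29,000 under these assumptions
    ```

    Since this Re is far above the roughly 4,000 transition threshold for internal flows, the fully turbulent assumption and a standard two-equation turbulence model are reasonable modeling choices.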