National Library of Energy BETA

Sample records for applied mathematics computer

  1. Applied & Computational Mathematics Challenges for the Design and Control of Dynamic Energy Systems

    SciTech Connect (OSTI)

    Brown, D L; Burns, J A; Collis, S; Grosh, J; Jacobson, C A; Johansen, H; Mezic, I; Narayanan, S; Wetter, M

    2011-03-10

    The Energy Independence and Security Act of 2007 (EISA) was passed with the goal 'to move the United States toward greater energy independence and security.' Energy security and independence cannot be achieved unless the United States addresses the issue of energy consumption in the building sector and significantly reduces energy consumption in buildings. Commercial and residential buildings account for approximately 40% of U.S. energy consumption (more than twice the total energy consumption of the entire U.S. automobile and light truck fleet) and 50% of U.S. CO2 emissions. A 50%-80% improvement in building energy efficiency, in both new construction and in retrofitting existing buildings, could significantly reduce U.S. energy consumption and mitigate climate change. Reaching these aggressive building efficiency goals will not happen without significant Federal investments in the computational and mathematical sciences. Applied and computational mathematics are required to enable the development of algorithms and tools to design, control and optimize energy efficient buildings. The challenge has been issued by the U.S. Secretary of Energy, Dr. Steven Chu (emphasis added): 'We need to do more transformational research at DOE including computer design tools for commercial and residential buildings that enable reductions in energy consumption of up to 80 percent with investments that will pay for themselves in less than 10 years.' On July 8-9, 2010 a team of technical experts from industry, government and academia was assembled in Arlington, Virginia to identify the challenges associated with developing and deploying new computational methodologies and tools that will address building energy efficiency. These experts concluded that investments in fundamental applied and computational mathematics will be required to build enabling technology that can be used to realize the target of 80% reductions in energy consumption. In addition, they found that there are tools and technologies that can be assembled and deployed in the short term - the next 3-5 years - to significantly reduce the cost and time of delivering moderate energy savings in the U.S. building stock. Simulation tools, which are a core strength of current DOE computational research programs, provide only part of the answer by providing a basis for simulation-enabled design. New investments will be required within a broad dynamics and control research agenda that must focus on dynamics, control, optimization and simulation of multi-scale energy systems during design and operation. U.S. investments in high performance and high productivity computing (HP2C) should be leveraged and coupled with advances in dynamics and control to impact both the existing building stock, through retrofits, and new construction. The essential R&D areas requiring investment are: (1) Characterizing the Dynamics of Multi-scale Energy Systems; (2) Control and Optimization Methodologies of Multi-scale Energy Systems Under Uncertainty; and (3) Multiscale Modeling and Simulation Enabled Design and Operation. The concept of using design- and control-specific computational tools is a new idea for the building industry. The potential payoffs in terms of accelerated design cycle times, performance optimization and optimal supervisory control to obtain and maintain energy savings are huge.
    Recent advances in computational power, computer science, and mathematical algorithms offer the foundations to address the control problems presented by the complex dynamics of whole building systems. The key areas for focus and associated metrics with targets for establishing competitiveness in energy efficient building design and operation are: (1) Scalability - Current methodology and tools can provide design guidance for very low energy buildings in weeks to months; what is needed is hours to days. A 50X improvement is needed. (2) Installation and commissioning - Current methodology and tools can target a three month window for commissioning

  2. Computational physics and applied mathematics capability review June 8-10, 2010

    SciTech Connect (OSTI)

    Lee, Stephen R

    2010-01-01

    Los Alamos National Laboratory will review its Computational Physics and Applied Mathematics (CPAM) capabilities in 2010. The goals of capability reviews are to assess the quality of science, technology, and engineering (STE) performed by the capability, evaluate the integration of this capability across the Laboratory and within the scientific community, examine the relevance of this capability to the Laboratory's programs, and provide advice on the current and future directions of this capability. This is the first such review for CPAM, which has a long and unique history at the Laboratory, starting from the inception of the Laboratory in 1943. The CPAM capability covers an extremely broad technical area at Los Alamos, encompassing a wide array of disciplines, research topics, and organizations. A vast array of technical disciplines and activities are included in this capability, from general numerical modeling, to coupled multi-physics simulations, to detailed domain science activities in mathematics, methods, and algorithms. The CPAM capability involves over 12 different technical divisions and a majority of our programmatic and scientific activities. To make this large scope tractable, the CPAM capability is broken into the following six technical 'themes.' These themes represent technical slices through the CPAM capability and collect critical core competencies of the Laboratory, each of which contributes to the capability (and each of which is divided into multiple additional elements in the detailed descriptions of the themes in subsequent sections), as follows. Theme 1: Computational Fluid Dynamics - This theme speaks to the vast array of scientific capabilities for the simulation of fluids under shocks, low-speed flow, and turbulent conditions - which are key, historical, and fundamental strengths of the Laboratory. Theme 2: Partial Differential Equations - The technical scope of this theme is the applied mathematics and numerical solution of partial differential equations (broadly defined) in a variety of settings, including particle transport, solvers, and plasma physics. Theme 3: Monte Carlo - Monte Carlo was invented at Los Alamos. This theme discusses these vitally important methods and their application in everything from particle transport, to condensed matter theory, to biology. Theme 4: Molecular Dynamics - This theme describes the widespread use of molecular dynamics for a variety of important applications, including nuclear energy, materials science, and biological modeling. Theme 5: Discrete Event Simulation - The technical scope of this theme represents a class of complex system evolutions governed by the action of discrete events. Examples include network, communication, vehicle traffic, and epidemiology modeling. Theme 6: Integrated Codes - This theme discusses integrated applications (comprised of all of the supporting science represented in Themes 1-5) that are of strategic importance to the Laboratory and the nation. The Laboratory has approximately 10 million source lines of code in over 100 different such strategically important applications. Of these themes, four will be reviewed during the 2010 review cycle: Themes 1, 2, 3, and 6. Because these reviews occur every three years, Themes 4 and 5 will be reviewed in 2013, along with Theme 6 (which will be reviewed during each review, owing to this theme's role as an integrator of the supporting science represented by the other five themes).
Yearly written status reports will be provided to the CPAM Committee Chair during off-cycle years.

  3. Mathematical and Computational Epidemiology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematical and Computational Epidemiology (MCEpi), Los Alamos National Laboratory. Research areas: agent-based modeling; mixing patterns and social networks; mathematical epidemiology; social internet research; uncertainty quantification. Quantifying model uncertainty in agent-based simulations for

  4. Experimental Mathematics and Computational Statistics

    SciTech Connect (OSTI)

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

  5. Applied Mathematics | U.S. DOE Office of Science (SC)

    Office of Science (SC) Website

    Applied Mathematics program page, Advanced Scientific Computing Research (ASCR), U.S. Department of Energy Office of Science.

  6. Computational physics and applied mathematics capability review June 8-10, 2010 (Advance materials to committee members)

    SciTech Connect (OSTI)

    Lee, Stephen R

    2010-01-01

    Los Alamos National Laboratory will review its Computational Physics and Applied Mathematics (CPAM) capabilities in 2010. The goals of capability reviews are to assess the quality of science, technology, and engineering (STE) performed by the capability, evaluate the integration of this capability across the Laboratory and within the scientific community, examine the relevance of this capability to the Laboratory's programs, and provide advice on the current and future directions of this capability. This is the first such review for CPAM, which has a long and unique history at the Laboratory, starting from the inception of the Laboratory in 1943. The CPAM capability covers an extremely broad technical area at Los Alamos, encompassing a wide array of disciplines, research topics, and organizations. A vast array of technical disciplines and activities are included in this capability, from general numerical modeling, to coupled multi-physics simulations, to detailed domain science activities in mathematics, methods, and algorithms. The CPAM capability involves over 12 different technical divisions and a majority of our programmatic and scientific activities. To make this large scope tractable, the CPAM capability is broken into the following six technical 'themes.' These themes represent technical slices through the CPAM capability and collect critical core competencies of the Laboratory, each of which contributes to the capability (and each of which is divided into multiple additional elements in the detailed descriptions of the themes in subsequent sections): (1) Computational Fluid Dynamics - This theme speaks to the vast array of scientific capabilities for the simulation of fluids under shocks, low-speed flow, and turbulent conditions - which are key, historical, and fundamental strengths of the Laboratory; (2) Partial Differential Equations - The technical scope of this theme is the applied mathematics and numerical solution of partial differential equations (broadly defined) in a variety of settings, including particle transport, solvers, and plasma physics; (3) Monte Carlo - Monte Carlo was invented at Los Alamos, and this theme discusses these vitally important methods and their application in everything from particle transport, to condensed matter theory, to biology; (4) Molecular Dynamics - This theme describes the widespread use of molecular dynamics for a variety of important applications, including nuclear energy, materials science, and biological modeling; (5) Discrete Event Simulation - The technical scope of this theme represents a class of complex system evolutions governed by the action of discrete events. Examples include network, communication, vehicle traffic, and epidemiology modeling; and (6) Integrated Codes - This theme discusses integrated applications (comprised of all of the supporting science represented in Themes 1-5) that are of strategic importance to the Laboratory and the nation. The Laboratory has approximately 10 million source lines of code in over 100 different such strategically important applications. Of these themes, four will be reviewed during the 2010 review cycle: Themes 1, 2, 3, and 6. Because these capability reviews occur every three years, Themes 4 and 5 will be reviewed in 2013, along with Theme 6 (which will be reviewed during each review, owing to this theme's role as an integrator of the supporting science represented by the other 5 themes).
Yearly written status reports will be provided to the Capability Review Committee Chair during off-cycle years.

  7. Applied Mathematics Conferences and Workshops | U.S. DOE Office of Science

    Office of Science (SC) Website

    Applied Mathematics Conferences and Workshops page, Advanced Scientific Computing Research (ASCR), U.S. Department of Energy Office of Science.

  8. Applied & Computational Math

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied & Computational Math, Sandia Energy.

  9. Information Science, Computing, Applied Math

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Capabilities: Information Science, Computing, Applied Math. National security ...

  10. Applied Computer Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied Computer Science (CCS-7): innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale. Group Leader: Linn Collins; Deputy Group Leader (Acting): Bryan Lally. Climate modeling visualization: results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code, showing the temperature of ocean currents using a green and blue color scale.

  11. Information Science, Computing, Applied Math

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Capabilities: Information Science, Computing, Applied Math. National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS); High Performance Computing (HPC); Extreme Scale Computing, Co-design.

  12. Applied Computer Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

  13. Applied & Computational Math

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied & Computational Math, Sandia Energy.

  14. The Applied Mathematics for Power Systems (AMPS) (Technical Report) |

    Office of Scientific and Technical Information (OSTI)

    SciTech Connect Technical Report: The Applied Mathematics for Power Systems (AMPS). Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to

  15. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

    SciTech Connect (OSTI)

    Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen

    2012-01-01

    Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

  16. January 2013 Most Viewed Documents for Mathematics And Computing...

    Office of Scientific and Technical Information (OSTI)

    January 2013 Most Viewed Documents for Mathematics And Computing Cybersecurity through Real-Time Distributed Control Systems Kisner, Roger A ORNL; Manges, Wayne W ORNL; ...

  17. Most Viewed Documents - Mathematics and Computing | OSTI, US...

    Office of Scientific and Technical Information (OSTI)

    - Mathematics and Computing Metaphors for cyber security. Moore, Judy Hennessey; Parrott, Lori K.; Karas, Thomas H. (2008) Staggered-grid finite-difference acoustic modeling with ...

  18. Applying computationally efficient schemes for biogeochemical cycles

    Office of Scientific and Technical Information (OSTI)

    SciTech Connect Technical Report: Applying computationally efficient schemes for biogeochemical cycles (ACES4BGC). NCAR contributed to the ACES4BGC project through software engineering work on aerosol model implementation, build system and script changes, coupler enhancements for biogeochemical tracers, improvements to the Community Land Model (CLM) code and

  19. Applied Mathematics Conferences and Workshops | U.S. DOE Office...

    Office of Science (SC) Website

    ... file (13KB); Workshop Report .pdf file (222KB). Multiscale Mathematics Initiative: A Roadmap - workshops to consider the scientific needs and mathematical challenges for multiscale...

  20. Physics, Computer Science and Mathematics Division. Annual report, 1 January-31 December 1979

    SciTech Connect (OSTI)

    Lepore, J.V.

    1980-09-01

    This annual report describes the research work carried out by the Physics, Computer Science and Mathematics Division during 1979. The major research effort of the Division remained High Energy Particle Physics with emphasis on preparing for experiments to be carried out at PEP. The largest effort in this field was for development and construction of the Time Projection Chamber, a powerful new particle detector. This work took a large fraction of the effort of the physics staff of the Division together with the equivalent of more than a hundred staff members in the Engineering Departments and shops. Research in the Computer Science and Mathematics Department of the Division (CSAM) has been rapidly expanding during the last few years. Cross fertilization of ideas and talents resulting from the diversity of effort in the Physics, Computer Science and Mathematics Division contributed to the software design for the Time Projection Chamber, made by the Computer Science and Applied Mathematics Department.

  1. Physics, Computer Science and Mathematics Division annual report, 1 January-31 December 1983

    SciTech Connect (OSTI)

    Jackson, J.D.

    1984-08-01

    This report summarizes the research performed in the Physics, Computer Science and Mathematics Division of the Lawrence Berkeley Laboratory during calendar year 1983. The major activity of the Division is research in high-energy physics, both experimental and theoretical, and research and development in associated technologies. A smaller, but still significant, program is in computer science and applied mathematics. During 1983 there were approximately 160 people in the Division active in or supporting high-energy physics research, including about 40 graduate students. In computer science and mathematics, the total staff, including students and faculty, was roughly 50. Because of the creation in late 1983 of a Computing Division at LBL and the transfer of the Computer Science activities to the new Division, this annual report is the last from the Physics, Computer Science and Mathematics Division. In December 1983 the Division reverted to its historic name, the Physics Division. Its future annual reports will document high energy physics activities and also those of its Mathematics Department.

  2. September 2013 Most Viewed Documents for Mathematics And Computing | OSTI,

    Office of Scientific and Technical Information (OSTI)

    US Dept of Energy, Office of Scientific and Technical Information September 2013 Most Viewed Documents for Mathematics And Computing Science Subject Feed Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 169 Lecture notes for introduction to safety and health Biele, F. (1992) 57 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 50 Computational procedures for determining

  3. Five-Laboratory Conference on Computational Mathematics - 2005, Vienna,

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Five-Laboratory Conference on Computational Mathematics - 2005, Vienna, Austria | National Nuclear Security Administration.

  4. September 2015 Most Viewed Documents for Mathematics And Computing | OSTI,

    Office of Scientific and Technical Information (OSTI)

    US Dept of Energy, Office of Scientific and Technical Information September 2015 Most Viewed Documents for Mathematics And Computing Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 1049 Lecture notes for introduction to safety and health Biele, F. (1992) 333 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 286 Ferrite Measurement in Austenitic and Duplex Stainless Steel Castings -

  5. December 2015 Most Viewed Documents for Mathematics And Computing | OSTI,

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    US Dept of Energy, Office of Scientific and Technical Information Mathematics And Computing Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 1446 Automotive vehicle sensors Sheen, S.H.; Raptis, A.C.; Moscynski, M.J. (1995) 373 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 365 Lecture notes for introduction to safety and health Biele, F. (1992) 324 Ferrite Measurement in Austenitic and

  6. Most Viewed Documents for Mathematics and Computing: December 2014 | OSTI,

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    US Dept of Energy, Office of Scientific and Technical Information Most Viewed Documents for Mathematics and Computing: December 2014 Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 322 Levenberg--Marquardt algorithm: implementation and theory More, J.J. (1977) 64 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 51 Lecture notes for introduction to safety and health Biele, F. (1992) 50

  7. Most Viewed Documents for Mathematics and Computing: September 2014 | OSTI,

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    US Dept of Energy, Office of Scientific and Technical Information for Mathematics and Computing: September 2014 Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 193 Lecture notes for introduction to safety and health Biele, F. (1992) 56 Mort User's Manual: For use with the Management Oversight and Risk Tree analytical logic diagram Knox, N.W.; Eicher, R.W. (1992) 51 Levenberg--Marquardt algorithm: implementation and theory More, J.J.

  8. Computing and Computational Sciences Directorate - Computer Science and

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Science and Mathematics Division: The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, applied mathematics, and intelligent systems. Our mission includes basic research in computational sciences and application of advanced computing systems, computational, mathematical and analysis techniques to the solution of scientific problems of national importance. We seek to work

  9. Computational Advances in Applied Energy | Department of Energy

    Office of Environmental Management (EM)

    Computational Advances in Applied Energy (PDF: Friedmann-LLNL-SEAB.10.11.pdf). More Documents & Publications: Director's Perspective by George Miller; Fact Sheet: Collaboration of Oak Ridge, Argonne, and Livermore (CORAL); QER - Comment of Canadian Hydropower Association.

  10. Apply for the Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Apply for the Parallel Computing Summer Research Internship: creating next-generation leaders in HPC research and applications development. Program Co-Leads: Robert (Bob) Robey, Gabriel Rockefeller, Hai Ah Nam. Professional Staff Assistant: Nicole Aguilar Garcia, (505) 665-3048. The current application deadline is February 5, 2016, with notification by early March 2016. Who can apply? Upper division undergraduate

  11. Progress report No. 56, October 1, 1979-September 30, 1980. [Courant Mathematics and Computing Lab., New York Univ

    SciTech Connect (OSTI)

    1980-10-01

    Research during the period is sketched in a series of abstract-length summaries. The forte of the Laboratory lies in the development and analysis of mathematical models and efficient computing methods for the rapid solution of technological problems of interest to DOE, in particular, the detailed calculation on large computers of complicated fluid flows in which reactions and heat conduction may be taking place. The research program of the Laboratory encompasses two broad categories: analytical and numerical methods, which include applied analysis, computational mathematics, and numerical methods for partial differential equations, and advanced computer concepts, which include software engineering, distributed systems, and high-performance systems. Lists of seminars and publications are included. (RWR)

  12. Name Center for Applied Scientific Computing month day, 1998

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Bosl, Art Mirin, Phil Duffy, Lawrence Livermore National Lab, Climate and Carbon Cycle Modeling Group, Center for Applied Scientific Computing, April 24, 2003: 'High Resolution Climate Simulation and Regional Water Supplies.' High-performance computing for climate modeling as a planning tool. Global warming is here - so now what? How will climate change really affect societies? Effects of global climate change are local; some effects of climate change can be mitigated; requires accurate

  13. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect (OSTI)

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  14. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect (OSTI)

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, 'The DOE Program in HPCC'), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  15. GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION...

    Office of Scientific and Technical Information (OSTI)

    PC-1D installation manual and user's guide Basore, P.A. 14 SOLAR ENERGY; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; 42 ENGINEERING; CHARGE...

  16. Most Viewed Documents for Mathematics and Computing: September...

    Office of Scientific and Technical Information (OSTI)

    for configuration management using computer aided software engineering (CASE) tools Smith, P.R.; Sarfaty, R. (1993) 26 Ferrite Measurement in Austenitic and Duplex Stainless ...

  17. KNUPP,PATRICK 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING...

    Office of Scientific and Technical Information (OSTI)

    DIFFERENTIAL EQUATIONS; VERIFICATION; COMPUTER CODES; NUMERICAL SOLUTION; FLUID MECHANICS A procedure for code Verification by the Method of Manufactured Solutions (MMS) is...

  18. April 2013 Most Viewed Documents for Mathematics And Computing...

    Office of Scientific and Technical Information (OSTI)

    R.A. (1997) 69 Computational procedures for determining parameters in Ramberg-Osgood elastoplastic model based on modulus and damping versus strain Ueng, Tzou-Shin; Chen, ...

  19. December 2015 Most Viewed Documents for Mathematics And Computing...

    Office of Scientific and Technical Information (OSTI)

    Dept. of Chemical Engineering; Yarbro, S.L. Los Alamos National Lab., NM (United States) (1997) 66 Computational procedures for determining parameters in Ramberg-Osgood ...

  20. June 2015 Most Viewed Documents for Mathematics And Computing...

    Office of Scientific and Technical Information (OSTI)

    Including an examination of the Department of Energy's position on quality management Bennett, C.T. (1994) 74 Computational procedures for determining parameters in Ramberg-Osgood ...

  1. September 2015 Most Viewed Documents for Mathematics And Computing...

    Office of Scientific and Technical Information (OSTI)

    Not Available (1987) 89 Computational procedures for determining parameters in Ramberg-Osgood elastoplastic model based on modulus and damping versus strain Ueng, Tzou-Shin; Chen, ...

  2. March 2014 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dept of Energy, Office of Scientific and Technical Information March 2014 Most Viewed Documents for Mathematics And Computing Science Subject Feed Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 291 Ten Problems in Experimental Mathematics Bailey, David H.; Borwein, Jonathan M.; Kapoor, Vishaal; Weisstein, Eric (2004) 101 The Effects of Nuclear Weapons Glasstone, Samuel (1964) 72 Levenberg--Marquardt algorithm: implementation and

  3. July 2013 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dept of Energy, Office of Scientific and Technical Information July 2013 Most Viewed Documents for Mathematics And Computing Science Subject Feed Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 567 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 89 Lecture notes for introduction to safety and health Biele, F. (1992) 78 Computational procedures for determining parameters

  4. June 2014 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dept of Energy, Office of Scientific and Technical Information June 2014 Most Viewed Documents for Mathematics And Computing Science Subject Feed Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 337 The Effects of Nuclear Weapons Glasstone, Samuel (1964) 71 Levenberg--Marquardt algorithm: implementation and theory More, J.J. (1977) 68 Computational procedures for determining parameters in Ramberg-Osgood elastoplastic

  5. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

    SciTech Connect (OSTI)

    1997-12-31

    The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, with the increasing importance of environmental issues, global change, remediation of waste sites, declining domestic energy sources and an increasing reliance on producing the most out of established oil reservoirs.

  6. Webinar "Applying High Performance Computing to Engine Design...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Batteries; electricity transmission; Smart Grid; environment; biology; computational ... Education. Highlights: Gasoline Compression Ignition; Mihai Anitescu on Electric Grids ...

  7. NREL: Computational Science Home Page

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

  8. April 2013 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information April 2013 Most Viewed Documents for Mathematics And Computing Science Subject Feed Publications in biomedical and environmental sciences programs, 1981 Moody, J.B. (comp.) (1982) 306 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 159 Lecture notes for introduction to safety and health Biele, F. (1992) 138 Analytical considerations in the code qualification of

  9. January 2013 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dept of Energy, Office of Scientific and Technical Information January 2013 Most Viewed Documents for Mathematics And Computing Cybersecurity through Real-Time Distributed Control Systems Kisner, Roger A [ORNL]; Manges, Wayne W [ORNL]; MacIntyre, Lawrence Paul [ORNL]; Nutaro, James J [ORNL]; Munro Jr, John K [ORNL]; Ewing, Paul D [ORNL]; Howlader, Mostofa [ORNL]; Kuruganti, Phani Teja [ORNL]; Wallace, Richard M [ORNL]; Olama, Mohammed M [ORNL] REACTOR ANALYSIS AND VIRTUAL CONTROL ENVIRONMENT

  10. June 2015 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dept of Energy, Office of Scientific and Technical Information June 2015 Most Viewed Documents for Mathematics And Computing Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 833 Lecture notes for introduction to safety and health Biele, F. (1992) 256 Systems engineering management plans. Rodriguez, Tamara S. (2009) 218 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 216 Ferrite

  11. March 2015 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dept of Energy, Office of Scientific and Technical Information March 2015 Most Viewed Documents for Mathematics And Computing Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 1019 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 183 Lecture notes for introduction to safety and health Biele, F. (1992) 172 Mort User's Manual: For use with the Management Oversight and Risk Tree analytical logic

  12. Physics, computer science and mathematics division. Annual report, 1 January - 31 December 1982

    SciTech Connect (OSTI)

    Jackson, J.D.

    1983-08-01

    Experimental physics research activities are described under the following headings: research on e+e- annihilation; research at Fermilab; search for effects of a right-handed gauge boson; the particle data center; high energy astrophysics and interdisciplinary experiments; detector and other research and development; publications and reports of other research; computation and communication; and engineering, evaluation, and support operations. Theoretical particle physics research and heavy ion fusion research are described. Also, activities of the Computer Science and Mathematics Department are summarized. Publications are listed. (WHK)

  13. Previous Computer Science Award Announcements | U.S. DOE Office...

    Office of Science (SC) Website

    Previous Computer Science Award Announcements, Advanced Scientific Computing Research (ASCR), U.S. DOE Office of Science.

  14. Applications for Postdoctoral Fellowship in Computational Science...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    at Berkeley Lab due November 26 October 15, 2012 by Francesca Verdier Researchers in computer science, applied mathematics or any computational science discipline who have...

  15. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    SciTech Connect (OSTI)

    Haber, Eldad

    2014-03-17

    The focus of research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at zero frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were applied to the problem of image registration.
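    For orientation only (a generic sketch, not taken from the report): electromagnetic inversion of the kind listed above is commonly posed as a regularized, PDE-constrained data-fitting problem. The symbols below are assumptions introduced for illustration (m for the conductivity/model parameters, F for the Maxwell forward map in the DC-resistivity or low-frequency regime, d for the observed data, and alpha and R for the regularization weight and functional):

        \min_{m} \; \Phi(m) \;=\; \tfrac{1}{2}\,\lVert F(m) - d \rVert_2^2 \;+\; \alpha\, R(m), \qquad \text{subject to } m_{\mathrm{lo}} \le m \le m_{\mathrm{hi}}.

    Multilevel optimization methods and adaptive meshes of the sort described in the report are typically aimed at making each evaluation of F(m) and its sensitivities cheap enough for such an iterative inversion loop to be practical.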

  16. Mathematical Applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematical Applications. Mathematica: Mathematica is a fully integrated environment for technical computing. It performs symbolic manipulation of equations, integrals, differential equations and almost any mathematical expression. Matlab: MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

  17. Webinar "Applying High Performance Computing to Engine Design Using

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Webinar "Applying High Performance Computing to Engine Design Using Supercomputers" | Argonne National Laboratory. Description: video from the February 25, 2016 Convergent Science/Argonne National Laboratory webinar "Applying High Performance Computing to Engine Design using Supercomputers," featuring Janardhan Kodavasal of Argonne National Laboratory. Speakers: Janardhan Kodavasal, Argonne National Laboratory. Duration: 52:26. Topic: Energy

  18. Apply

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Application Process: bringing together top space science students with internationally recognized researchers at Los Alamos in an educational and collaborative atmosphere. Contacts: Director Misa Cowee; Administrative Assistant Mary Wubbena. Applications for the 2016 summer school are now closed. Applications were due on February 5, 2016. PLEASE NOTE: After the 2016 session, the program will not be offered again until 2018. Before applying, check your

  19. A Multifaceted Mathematical Approach for Complex Systems

    SciTech Connect (OSTI)

    Alexander, F.; Anitescu, M.; Bell, J.; Brown, D.; Ferris, M.; Luskin, M.; Mehrotra, S.; Moser, B.; Pinar, A.; Tartakovsky, A.; Willcox, K.; Wright, S.; Zavala, V.

    2012-03-07

    Applied mathematics has an important role to play in developing the tools needed for the analysis, simulation, and optimization of complex problems. These efforts require the development of the mathematical foundations for scientific discovery, engineering design, and risk analysis based on a sound integrated approach for the understanding of complex systems. However, maximizing the impact of applied mathematics on these challenges requires a novel perspective on approaching the mathematical enterprise. Previous reports that have surveyed the DOE's research needs in applied mathematics have played a key role in defining research directions with the community. Although these reports have had significant impact, accurately assessing current research needs requires an evaluation of today's challenges against the backdrop of recent advances in applied mathematics and computing. To address these needs, the DOE Applied Mathematics Program sponsored a Workshop for Mathematics for the Analysis, Simulation and Optimization of Complex Systems on September 13-14, 2011. The workshop had approximately 50 participants from both the national labs and academia. The goal of the workshop was to identify new research areas in applied mathematics that will complement and enhance the existing DOE ASCR Applied Mathematics Program efforts that are needed to address problems associated with complex systems. This report describes recommendations from the workshop and subsequent analysis of the workshop findings by the organizing committee.

  20. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATION...

    Office of Scientific and Technical Information (OSTI)

    cell via a so-called in-cell reconstruction process. ... on Mathematics and Computational Methods Applied to Nuclear ... FLOW; COMPUTERIZED SIMULATION; EFFICIENCY; FLUID ...

  1. ACES4BGC Applying Computationally Efficient Schemes for BioGeochemical Cycles

    Office of Scientific and Technical Information (OSTI)

    ACES4BGC Applying Computationally Efficient Schemes for BioGeochemical Cycles. Principal Investigator: Forrest M. Hoffman (ORNL). Co-Investigators: Pavel B. Bochev (SNL), Philip J. Cameron-Smith (LLNL), Richard C. Easter, Jr. (PNNL), Scott M. Elliott (LANL), Steven J. Ghan (PNNL), Xiaohong Liu (formerly PNNL, U. Wyoming), Robert B. Lowrie (LANL), Donald D. Lucas (LLNL), Po-Lun Ma (PN

  2. Vehicle Technologies Office Merit Review 2015: Applied Integrated Computational Materials Engineering (ICME) for New Propulsion Materials

    Broader source: Energy.gov [DOE]

    Presentation given by Oak Ridge National Laboratory at 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about Applied...

  3. SC e-journals, Computer Science

    Office of Scientific and Technical Information (OSTI)

    & Mathematical Organization Theory Computational Complexity Computational Economics Computational Management ... Technology EURASIP Journal on Information Security ...

  4. Applied Mathematics and Plasma Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Electron density simulation: electron density from an orbital-free quantum molecular dynamics simulation for a warm dense plasma of deuterium at a density of 10 g/cc and...

  5. Mathematical and Computational Epidemiology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    for forecasting the spread of infectious diseases and understanding human behavior using social media. Faces of Science: Sara Del Valle (video, 1:03). We provide decision...

  6. Computing Sciences

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and...

  7. Computing and Computational Sciences Directorate - Divisions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CCSD Divisions: Computational Sciences and Engineering; Computer Sciences and Mathematics; Information Technology Services; Joint Institute for Computational Sciences; National Center for Computational Sciences

  8. Software and High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Software and High Performance Computing: providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844. Software: computational physics, computer science, applied mathematics, statistics and the

  9. Parallel computing works

    SciTech Connect (OSTI)

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  10. Applications for Postdoctoral Fellowship in Computational Science at

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Postdoctoral Fellowship in Computational Science at Berkeley Lab: applications due November 26. October 15, 2012, by Francesca Verdier. Researchers in computer science, applied mathematics or any computational science discipline who have received their Ph.D. within the last three years are encouraged to apply for the Luis W. Alvarez Postdoctoral Fellowship in Computational Science at Lawrence

  11. Computer Science Program | U.S. DOE Office of Science (SC)

    Office of Science (SC) Website

    Computer Science program page, Advanced Scientific Computing Research (ASCR), U.S. DOE Office of Science.

  12. Previous Computer Science Award Announcements | U.S. DOE Office of Science

    Office of Science (SC) Website

    Previous Computer Science Award Announcements, Advanced Scientific Computing Research (ASCR), U.S. DOE Office of Science.

  13. Collaborative Mathematical Workbench Eliot Feibush, Matthew Milano,

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Collaborative Mathematical Workbench - Eliot Feibush, Matthew Milano, Benjamin Phillips, Andrew Zwicker, and James Morgan | Princeton Plasma Physics Lab. This invention enables modifying and analyzing numerical data by applying custom programs and graphically displaying the input and the result. The invention allows groups of users to interactively share their data and interactions among a number of computers for effective collaboration. The

  14. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  15. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    SciTech Connect (OSTI)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-11-15

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, separation errors of less than 5% are observed in the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative-log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
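    For orientation only (a generic sketch, not the authors' exact formulation): the polychromatic material-decomposition model that such full-spectral approaches build on can be written with assumed symbols S_i(E) for the effective spectrum of measurement i, a_w and a_b for the water and bone fraction images, mu_w(E) and mu_b(E) for the known material attenuation functions, and P_i for the line projection of measurement i:

        y_i \;\approx\; h_i(a_w, a_b) \;=\; \int S_i(E)\, \exp\!\big( -[P_i a_w]\,\mu_w(E) - [P_i a_b]\,\mu_b(E) \big)\, dE .

    Under a Gaussian noise model with unknown variance \sigma^2 assigned directly to the N projections, a joint MAP estimate then minimizes a cost of the general form

        J(a_w, a_b, \sigma^2) \;=\; \frac{1}{2\sigma^2} \sum_i \big( y_i - h_i(a_w, a_b) \big)^2 \;+\; \frac{N}{2}\ln\sigma^2 \;-\; \ln p(a_w, a_b, \sigma^2),

    which is nonquadratic in the fraction images; this is the kind of problem for which the abstract proposes a monotone conjugate-gradient scheme with suboptimal descent steps.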

  16. Mathematical Statisticians

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematical Statisticians The U.S. Energy Information Administration (EIA) within the Department of Energy has forged a world-class information program that stresses quality, teamwork, and employee growth. In support of our program, we offer a variety of professional positions, including the Mathematical Statistician, whose work is associated with the design, implementation and evaluation of statistical methods. Responsibilities: Mathematical Statisticians perform or participate in one or

  17. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  18. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  19. How To Apply

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CSCNSI How To Apply How to Apply for Computer System, Cluster, and Networking Summer Institute Emphasizes practical skills development Contact Leader Stephan Eidenbenz (505)...

  20. Unsolicited Projects in 2012: Research in Computer Architecture, Modeling,

    Office of Science (SC) Website

    and Evolving MPI for Exascale | U.S. DOE Office of Science (SC) 2: Research in Computer Architecture, Modeling, and Evolving MPI for Exascale Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities

    1. Mathematical Perspectives

      SciTech Connect (OSTI)

      Glimm, J.

      2009-10-14

      Progress for the past decade or so has been extraordinary. The solutions of Fermat's Last Theorem [11] and of the Poincaré Conjecture [1] have resolved two of the most outstanding challenges to mathematics. In both cases, deep and advanced theories and whole subfields of mathematics came into play and were developed further as part of the solutions. And still the future is wide open. Six of the original seven problems from the Clay Foundation challenge remain open, and the 23 DARPA challenge problems are open. Entire new branches of mathematics have been developed, including financial mathematics and the connection between geometry and string theory, proposed to solve the problems of quantized gravity. New solutions of the Einstein equations, inspired by shock wave theory, suggest a cosmology model which fits the accelerating expansion of the universe, possibly eliminating assumptions of 'dark matter'. Intellectual challenges and opportunities for mathematics are greater than ever. The role of mathematics in society continues to grow; with this growth come new opportunities and some growing pains; each will be analyzed here. We see a broadening of the intellectual and professional opportunities and responsibilities for mathematicians. These trends are also occurring across all of science. The response can be at the level of the professional societies, which can work to deepen their interactions, not only within the mathematical sciences, but also with other scientific societies. At a deeper level, the choices to be made will come from individual mathematicians. Here, of course, the individual choices will be varied, and we argue for respect and support for this diversity of responses. In such a manner, we hope to preserve the best of the present while welcoming the best of the new.

    2. Topological one-way quantum computation on verified logical cluster...

      Office of Scientific and Technical Information (OSTI)

      Subject: 71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; 97 MATHEMATICAL METHODS AND COMPUTING; CALCULATION METHODS; ERRORS; MATHEMATICAL LOGIC; NOISE; QUANTUM COMPUTERS; ...

    3. A Systematic Comprehensive Computational Model for Stake Estimation in Mission Assurance: Applying Cyber Security Econometrics System (CSES) to Mission Assurance Analysis Protocol (MAAP)

      SciTech Connect (OSTI)

      Abercrombie, Robert K; Sheldon, Frederick T; Grimaila, Michael R

      2010-01-01

      In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we discuss how this infrastructure can be used in the domain of mission assurance, defined as the full life-cycle engineering process to identify and mitigate design, production, test, and field support deficiencies that threaten mission success. We address the opportunity to apply the Cyberspace Security Econometrics System (CSES) to the Carnegie Mellon University Software Engineering Institute's Mission Assurance Analysis Protocol (MAAP) in this context.
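      As a rough illustration of the kind of stakeholder-loss estimate CSES produces, the sketch below chains invented stakes, dependency, impact, and threat-probability tables into an expected loss per stakeholder. It is a hypothetical toy, not the authors' tool or data.

```python
# Hedged toy sketch (not the authors' tool): a mean-failure-cost style of
# computation in the spirit of CSES -- expected loss per stakeholder obtained
# by chaining stakes, dependency, impact, and threat data. All numbers invented.
import numpy as np

stakeholders = ["operator", "mission owner", "end user"]

# Stakes: cost (k$) to each stakeholder if a security requirement fails.
stakes = np.array([[50, 20],
                   [80, 60],
                   [10, 40]], float)          # 3 stakeholders x 2 requirements

# Dependency: probability a requirement fails given a component is compromised.
dependency = np.array([[0.9, 0.2, 0.1],
                       [0.1, 0.7, 0.5]])      # 2 requirements x 3 components

# Impact: probability a component is compromised given a threat materializes.
impact = np.array([[0.6, 0.0],
                   [0.3, 0.4],
                   [0.1, 0.8]])               # 3 components x 2 threat classes

threat_prob = np.array([0.05, 0.02])          # probability of each threat class

mean_failure_cost = stakes @ dependency @ impact @ threat_prob
for name, cost in zip(stakeholders, mean_failure_cost):
    print(f"{name:>13}: expected loss ~ {cost:.2f} k$ per period")
```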

    4. Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. ! Application and System Memory Use, Configuration, and Problems on Bassi Richard Gerber Lawrence Berkeley National Laboratory NERSC User Services ScicomP 13 Garching bei München, Germany, July 17, 2007 ScicomP 13, July 17, 2007, Garching Overview * About Bassi * Memory on Bassi * Large Page Memory (It's Great!) * System Configuration * Large Page

    5. Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computations - Sandia Energy Energy Search Icon Sandia Home Locations Contact Us Employee Locator Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs Advanced Nuclear

    6. ASCR Workshop on Quantum Computing for Science

      SciTech Connect (OSTI)

      Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

      2015-06-01

      This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

    7. Science at ALCF | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The form factor for the decay of a kaon into a pion and two leptons Lattice QCD Paul Mackenzie Allocation Program: INCITE Allocation Hours: 180 Million Science at ALCF Allocation Program - Any - INCITE ALCC ESP Director's Discretionary Year Year -Year 2008 2009 2010 2011 2012 2013 2014 2015 2016 Research Domain - Any - Physics Mathematics Computer Science Chemistry Earth Science Energy Technologies Materials Science Engineering Biological Sciences Apply sort descending An example of a Category 5

    8. Chameleon: A Computer Science Testbed as Application of Cloud...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Chameleon: A Computer Science Testbed as Application of Cloud Computing Event Sponsor: Mathematics and Computing Science Brownbag Lunch Start Date: Dec 15 2015 - 12:00pm Building...

    9. Browse by Discipline -- E-print Network Subject Pathways: Mathematics...

      Office of Scientific and Technical Information (OSTI)

      ... Alfaro, Manuel - Departamento de Matemáticas, Universidad de Zaragoza Algebraic Number Theory Archives Applied Algebra Group at Linz Argonne National Laboratory, Mathematics and ...

    10. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos Computing

    11. Engineering Physics and Mathematics Division progress report for period ending September 30, 1987

      SciTech Connect (OSTI)

      Not Available

      1987-12-01

      This report provides an archival record of the activities of the Engineering Physics and Mathematics Division during the period June 30, 1985 through September 30, 1987. Work in Mathematical Sciences continues to include applied mathematics research, statistics research, and computer science. Nuclear-data measurements and evaluations continue for fusion reactors, fission reactors, and other nuclear systems. Also discussed are long-standing studies of fission-reactor shields through experiments and related analysis, of accelerator shielding, and of fusion-reactor neutronics. Work in Machine Intelligence continues to feature the development of an autonomous robot. The last descriptive part of this report reflects the work in our Engineering Physics Information Center, which again concentrates primarily upon radiation-shielding methods and related data.

    12. Mathematical Formulation Requirements and Specifications for the Process Models

      SciTech Connect (OSTI)

      Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.

      2010-11-01

      The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments naturally generates a suite of conceptual models that span a range of process complexity, potentially coupling hydrological, biogeochemical, geomechanical, and thermal processes. The Platform will use ensembles of these simulations to quantify the associated uncertainty, sensitivity, and risk. The Process Models task within the HPC Simulator focuses on the mathematical descriptions of the relevant physical processes.

    13. High-precision arithmetic in mathematical physics

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Bailey, David H.; Borwein, Jonathan M.

      2015-05-12

      For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics and highlights the facilities required to support future computation, in light of emerging developments in computer architecture.
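      A minimal illustration of the article's point, using the third-party mpmath package (an assumption; the article itself is not tied to any particular library): a cancellation-prone expression that loses every digit in IEEE double precision but is computed easily at 50-digit working precision.

```python
# Illustrative sketch (not from the article): contrast IEEE double precision
# with arbitrary-precision arithmetic via mpmath for the cancellation-prone
# expression (1 - cos(x)) / x**2, whose true value tends to 1/2 as x -> 0.
import math
from mpmath import mp, mpf, cos as mpcos

x = 1e-8

# 64-bit IEEE double: 1 - cos(x) suffers catastrophic cancellation here.
double_result = (1.0 - math.cos(x)) / x**2
print("double precision  :", double_result)      # typically 0.0 -- digits lost

# 50-digit working precision: the cancellation is harmless.
mp.dps = 50
xm = mpf("1e-8")
high_prec = (1 - mpcos(xm)) / xm**2
print("50-digit precision:", high_prec)          # close to 1/2, x**2/24 term resolved
print("Taylor reference  :", 0.5 - x * x / 24)   # series check: 1/2 - x^2/24
```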

    14. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      scour-tracc-cfd TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical
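      As a toy illustration of the kind of mathematical model the overview above refers to (not TRACC code), the sketch below advances a one-dimensional advection-diffusion equation with a simple explicit finite-difference scheme; all parameters are arbitrary.

```python
# Toy illustration: the simplest kind of CFD model, a 1D advection-diffusion
# equation u_t + c u_x = nu u_xx on a periodic domain, solved with an explicit
# upwind/central finite-difference scheme. Parameters are invented.
import numpy as np

nx, L = 200, 1.0
dx = L / nx
c, nu = 1.0, 0.005                              # advection speed, diffusivity
dt = 0.4 * min(dx / c, dx * dx / (2 * nu))      # stable step for both terms

x = np.linspace(0.0, L, nx, endpoint=False)
u = np.exp(-200 * (x - 0.2) ** 2)               # initial Gaussian pulse

for _ in range(400):
    adv = -c * (u - np.roll(u, 1)) / dx                           # upwind advection
    diff = nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central diffusion
    u = u + dt * (adv + diff)

print("pulse peak after transport:", round(float(u.max()), 3),
      "at x =", round(float(x[np.argmax(u)]), 3))
```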

    15. General Mathematical and Computing System Routines

      Energy Science and Technology Software Center (OSTI)

      1999-04-20

      GO is a 32-bit genetic optimization driver that runs under Windows. It is an optimization scheme used to solve large combinatorial problems using "genetic" algorithms. GO is a genetic optimization driver: it must be linked with a user-supplied process model before it can be used. The link is made through a text file that transfers data to and from the user-supplied process model. A user interface allows optimization parameters to be entered, edited, and saved. It also allows the user to display results as the optimization proceeds or at a later time.
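      GO itself is a Windows driver linked to a user-supplied process model through a text file; the sketch below only illustrates the generic genetic-optimization loop such a driver runs (selection, crossover, mutation), with a made-up Python function standing in for the process model.

```python
# Generic sketch of the genetic-optimization idea behind a driver like GO
# (illustrative only): GO exchanges data with a user-supplied process model
# through a text file, whereas here the "process model" is just a function.
import math
import random

random.seed(1)

def process_model(x):
    """Stand-in for the user-supplied model: a bumpy 1D cost to minimize."""
    return (x - 3.2) ** 2 + 2.0 * math.sin(3.0 * x)

def genetic_minimize(cost, lo=-10.0, hi=10.0, pop_size=30, generations=60,
                     mutation_rate=0.2, mutation_scale=0.5):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=cost)[: pop_size // 2]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                          # arithmetic crossover
            if random.random() < mutation_rate:            # Gaussian mutation
                child += random.gauss(0.0, mutation_scale)
            children.append(min(max(child, lo), hi))       # keep within bounds
        pop = parents + children
    return min(pop, key=cost)

best = genetic_minimize(process_model)
print("best x:", round(best, 3), " cost:", round(process_model(best), 4))
```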

    16. How To Apply

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      How To Apply How to Apply for Computer System, Cluster, and Networking Summer Institute Emphasizes practical skills development Contacts Program Lead Carolyn Connor (505) 665-9891 Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email The 2016 application process will commence January 5 through February 13, 2016. Applicants must be U.S. citizens. Required Materials Current resume Official university transcript (with Spring courses posted and/or a copy of Spring 2016

    17. Computing and Computational Sciences Directorate - Joint Institute for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Joint Institute for Computational Sciences To help realize the full potential of new-generation computers for advancing scientific discovery, the University of Tennessee (UT) and Oak Ridge National Laboratory (ORNL) have created the Joint Institute for Computational Sciences (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and focuses these skills on

    18. Mathematical models for risk assessment

      SciTech Connect (OSTI)

      Zaikin, S.A.

      1995-12-01

      The use of mathematical models in risk assessment leads to a better understanding of many aspects of chemical exposure and supports more informed decisions. Our project ISCRA (Integrated Systems of Complex Risk Assessment) aims to create integrated systems of algorithms for predicting the effects of pollutant exposure on human and environmental health and to apply them to environmental monitoring and decision-making. The mathematical model "MASTER" (Mathematical Algorithm of SimulaTion of Environmental Risk) comprises a complex of algorithmic blocks and is intended to predict the danger that pollutant exposure poses to human and environmental health. Model LIMES (LIMits EStimation) is developed to predict safe concentrations of pollutants in the environment, both for isolated exposure and for combined exposure at a specific location. Model QUANT (QUANtity of Toxicant) is a multicompartmental physiological pharmacokinetic model describing the absorption, distribution, fate, metabolism, and elimination of pollutants in the bodies of different human population groups as a result of different kinds of exposure. The decision support system CLEVER (Complex LEVEl of Risk) predicts the probability and severity of adverse effects of pollutant exposure on human health. The system is based on data from epidemiological and experimental research and includes several mathematical models for analyzing "dose-time-response" relations, together with information about clinical symptoms of diseases. Model CEP (Combination Effect Prognosis) contains probabilistic algorithms for forecasting the combined effect of several environmental pollution factors acting simultaneously. The program predicts the effect of independent exposure to two or more factors and the strengthening or weakening of effects depending on factor interactions.
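      As a hedged illustration of the compartmental style of model the QUANT description refers to, the sketch below integrates a minimal one-compartment absorption/elimination model; the rate constants and dose are invented, and this is not the ISCRA implementation.

```python
# Hedged illustration: a minimal one-compartment pharmacokinetic model of the
# kind the QUANT block describes (absorption and elimination of a pollutant).
# Rate constants and dose are invented; this is not the ISCRA code.
import numpy as np
from scipy.integrate import solve_ivp

k_abs, k_elim = 0.8, 0.15          # assumed absorption / elimination rates (1/h)
dose = 10.0                        # arbitrary dose units at the exposure site

def rhs(t, y):
    gut, body = y                  # amount at absorption site, amount in body
    return [-k_abs * gut, k_abs * gut - k_elim * body]

sol = solve_ivp(rhs, (0.0, 48.0), [dose, 0.0], dense_output=True)
t = np.linspace(0.0, 48.0, 7)
print("body burden over 48 h:", np.round(sol.sol(t)[1], 3))
```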

    19. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Extreme Scale Computing, Co-design Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs ...

    20. Sandia Energy - Computations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computations Home Transportation Energy Predictive Simulation of Engines Reacting Flow Applied Math & Software Computations ComputationsAshley Otero2015-10-30T02:18:51+00:00...

    1. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    2. The Applied Mathematics for Power Systems (AMPS) (Technical Report...

      Office of Scientific and Technical Information (OSTI)

      sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their...

    3. Applied combustion

      SciTech Connect (OSTI)

      1993-12-31

      From the title, the reader is led to expect a broad practical treatise on combustion and combustion devices. Remarkably, for a book of modest dimension, the author is able to deliver. The text is organized into 12 Chapters, broadly treating three major areas: combustion fundamentals -- introduction (Ch. 1), thermodynamics (Ch. 2), fluid mechanics (Ch. 7), and kinetics (Ch. 8); fuels -- coal, municipal solid waste, and other solid fuels (Ch. 4), liquid (Ch. 5) and gaseous (Ch. 6) fuels; and combustion devices -- fuel cells (Ch. 3), boilers (Ch. 4), Otto (Ch. 10), diesel (Ch. 11), and Wankel (Ch. 10) engines and gas turbines (Ch. 12). Although each topic could warrant a complete text on its own, the author addresses each of these major themes with reasonable thoroughness. Also, the book is well documented with a bibliography, references, a good index, and many helpful tables and appendices. In short, Applied Combustion does admirably fulfill the author`s goal for a wide engineering science introduction to the general subject of combustion.

    4. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    5. Proceedings of the Computational Needs for the Next Generation...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      the operation and planning of the electric power system. The attached papers from these experts highlight mathematical and computational problems relevant for potential power...

    6. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

      SciTech Connect (OSTI)

      Corones, James

      2013-09-23

      High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4.To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

    7. An Overview of High Performance Computing and Challenges for the Future

      ScienceCinema (OSTI)

      Google Tech Talks

      2009-09-01

      In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms are needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra, University of Tennessee, Oak Ridge National Laboratory, University of Manchester. Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.

    8. New DOE Program Funds $20 Million for Mathematics Research | Department of

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Energy Program Funds $20 Million for Mathematics Research New DOE Program Funds $20 Million for Mathematics Research August 4, 2005 - 2:37pm Addthis WASHINGTON, DC - Under a new program funded by the Department of Energy's Office of Science, researchers will use mathematics to help solve problems such as the production of clean energy, pollution cleanup, manufacturing ever smaller computer chips, and making new "nanomaterials." Thirteen major research awards totaling $20 million

    9. Engineering Physics and Mathematics Division progress report for period ending December 31, 1994

      SciTech Connect (OSTI)

      Sincovec, R.F.

      1995-07-01

      This report provides a record of the research activities of the Engineering Physics and Mathematics Division for the period January 1, 1993, through December 31, 1994. This report is the final archival record of the EPM Division. On October 1, 1994, ORELA was transferred to Physics Division and on January 1, 1995, the Engineering Physics and Mathematics Division and the Computer Applications Division reorganized to form the Computer Science and Mathematics Division and the Computational Physics and Engineering Division. Earlier reports in this series are identified on the previous pages, along with the progress reports describing ORNL`s research in the mathematical sciences prior to 1984 when those activities moved into the Engineering Physics and Mathematics Division.

    10. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... Alfaro, Manuel - Departamento de Matemáticas, Universidad de Zaragoza Algebraic Number Theory Archives Applied Algebra Group at Linz Argonne National Laboratory, Mathematics and ...

    11. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    12. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing ? from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in finance and related industries Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance. 4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework.
From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team are engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.
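As a hedged sketch of the kind of Monte Carlo credit-risk calculation the last talk describes, the code below simulates portfolio losses under a one-factor Gaussian-copula model and reads off a tail quantile as an economic-capital proxy. The portfolio, correlation, and confidence level are invented, and each scenario is independent, which is what makes the computation embarrassingly parallel.

```python
# Hedged sketch: one-factor Gaussian-copula portfolio loss simulation of the
# kind described above. Portfolio data, default probabilities, correlation,
# and the 99.9% level are invented; independent scenarios could be farmed
# out across a distributed-memory cluster exactly as the talk suggests.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

n_obligors, n_scenarios = 200, 50_000
pd = np.full(n_obligors, 0.02)                 # assumed default probabilities
exposure = rng.uniform(0.5, 2.0, n_obligors)   # assumed loss given default
rho = 0.15                                     # systematic-factor correlation
threshold = norm.ppf(pd)                       # default threshold per obligor

Z = rng.standard_normal((n_scenarios, 1))              # systematic factor
eps = rng.standard_normal((n_scenarios, n_obligors))   # idiosyncratic noise
asset = np.sqrt(rho) * Z + np.sqrt(1.0 - rho) * eps
losses = (asset < threshold).astype(float) @ exposure  # loss per scenario

print("expected loss:", round(float(losses.mean()), 2))
print("99.9% loss quantile (economic-capital proxy):",
      round(float(np.quantile(losses, 0.999)), 2))
```

In a cluster setting, the scenario loop would simply be split into chunks per worker and the loss samples gathered before taking the quantile.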

    14. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A; panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial WorldFred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN.3. Opportunities for gLite in finance and related industriesAdam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd.gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance communities compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK University and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications first conference in computational Finance.4. From Monte Carlo to Wall Street Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. 
From a HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated date becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification which is particularly compelling in these days. While Monte Carlo simulation is a very versatile tool it is not always the preferred solution for the pricing of complex products like multi asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires specialized BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank. He was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff is heading the Financial Computing unit in the ZKB Financial Engineering division. He and his team is engineering and operating high performance cluster applications for computationally intensive problems in financial risk management.

    15. Mathematical modeling of a Fermilab helium liquefier coldbox

      SciTech Connect (OSTI)

      Geynisman, M.G.; Walker, R.J.

      1995-12-01

      The Fermilab Central Helium Liquefier (CHL) facility is operated 24 hours a day to supply 4.6 K helium to the Fermilab Tevatron superconducting proton-antiproton collider ring and to recover warm return gases. The centerpieces of the CHL are two independent cold boxes rated at 4000 and 5400 liters/hour with LN2 precooling. These coldboxes are Claude-cycle machines with identical heat exchanger trains but different turbo-expanders. The Tevatron cryogenic system's demand for a higher helium supply from CHL was the driving force to investigate installing an expansion engine in place of the Joule-Thomson valve. A mathematical model was developed to describe the thermo- and gas-dynamic processes for the equipment included in the helium coldbox. The model is based on a finite element approach, as opposed to a global-variables approach, thus providing higher accuracy and convergence stability. Though the coefficients used in the thermo- and gas-dynamic equations are unique to a given coldbox, the general approach, the equations, the methods of computation, and most of the subroutines written in FORTRAN can be readily applied to different coldboxes. The simulation results are compared against actual operating data to demonstrate the applicability of the model.
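
      For orientation only, the following sketch marches a two-stream, co-current heat-exchange energy balance element by element along an exchanger, which is the flavor of finite-element bookkeeping such a coldbox model performs (the real model in the report couples many exchangers, turbo-expanders, and gas-dynamic equations). Every number below (UA, flow rates, inlet temperatures) is an arbitrary placeholder, not data from CHL.

          import numpy as np

          n, L = 200, 4.0              # number of elements, exchanger length (m)
          dx = L / n
          UA = 800.0                   # overall conductance, W/K (placeholder)
          m_h, cp_h = 0.05, 5193.0     # warm helium stream: kg/s, J/(kg K)
          m_c, cp_c = 0.05, 5193.0     # cold return stream: kg/s, J/(kg K)

          Th, Tc = 300.0, 80.0         # inlet temperatures, K (placeholders)
          for _ in range(n):
              q = (UA / L) * (Th - Tc) * dx   # heat exchanged in this element, W
              Th -= q / (m_h * cp_h)          # warm stream cools
              Tc += q / (m_c * cp_c)          # cold stream warms
          print(f"outlet temperatures: Th = {Th:.1f} K, Tc = {Tc:.1f} K")

      A real coldbox model replaces this explicit march with the coupled nonlinear thermo- and gas-dynamic equations described in the abstract and iterates them to convergence against the boundary conditions.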

    16. Multiscale Mathematics For Plasma Kinetics Spanning Multiple...

      Office of Scientific and Technical Information (OSTI)

      Technical Report: Multiscale Mathematics For Plasma Kinetics Spanning Multiple Collisionality Regimes Citation Details In-Document Search Title: Multiscale Mathematics For Plasma...

    17. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In This Section: Computation & Simulation. Extensive combinatorial results and ongoing basic...

    18. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G. )

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    19. ADVANCED SCIENTIFIC COMPUTING ADVISORY COMMITTEE, Monday, July...

      Office of Science (SC) Website

      The meeting is open to the public. To access the call: Dial Toll-Free Number: 866-740-1260 ... Break 3:30 PM-4:00 PM Center for Applied Mathematics for Energy Research ApplicationS ...

    20. Extreme Scale Computing, Co-Design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Extreme Scale Computing, Co-design Publications Publications Ramon Ravelo, Qi An, Timothy C. Germann, and Brad Lee Holian, ...

    1. Research | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Link to the ASCR Computer Science Web Page APPLIED MATHEMATICS The Applied Mathematics ... Link to the ASCR Applied Mathematics Web Page NEXT GENERATION NETWORKING FOR SCIENCE ...

    2. LANL scientists named SIAM Fellows for their contributions to mathematics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      LANL scientists named SIAM Fellows for their contributions to mathematics. James M. "Mac" Hyman, Alan S. Perelson, David H. Sharp and Burton B. "Burt" Wendroff are new Fellows of the Society for Industrial and Applied Mathematics. May 4, 2009. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multi-disciplines from bioscience,

    3. Applied Research Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Research Center (ARC), Jefferson Lab. Site sections: ARC Home, Consortium News, EH&S Reports, ARC Resources, Commercial Tenants, ARC Brochure, Library, Conference Room.

    4. Mathematical modeling and computer simulation of processes in energy systems

      SciTech Connect (OSTI)

      Hanjalic, K.C. )

      1990-01-01

      This book is divided into the following chapters: 1. Modeling techniques and tools (fundamental concepts of modeling); 2. Fluid flow, heat and mass transfer, chemical reactions, and combustion; 3. Processes in energy equipment and plant components (boilers, steam and gas turbines, IC engines, heat exchangers, pumps and compressors, nuclear reactors, steam generators and separators, energy transport equipment, energy convertors, etc.); 4. New thermal energy conversion technologies (MHD, coal gasification and liquefaction, fluidized-bed combustion, pulse-combustors, multistage combustion, etc.); 5. Combined cycles and plants, cogeneration; 6. Dynamics of energy systems and their components; 7. Integrated approach to energy systems modeling; and 8. Application of modeling in energy expert systems.

    5. September 2013 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      selection in conceptual design Kleban, Stephen D.; ... GIS DATABASE FOR NEW MEXICO OIL PRODUCERS Martha ... II: A finite element data model Schoof, L.A.; Yarberry, ...

    6. March 2014 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      D.; Bezburuah, R.; Ding, J. (1991) 18 > Communication of emergency public warnings: A social science perspective and state-of-the-art assessment Mileti, D.S. (Colorado State ...

    7. June 2014 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      D.; Bezburuah, R.; Ding, J. (1991) 22 > Communication of emergency public warnings: A social science perspective and state-of-the-art assessment Mileti, D.S. (Colorado State ...

    8. Most Viewed Documents for Mathematics and Computing: December...

      Office of Scientific and Technical Information (OSTI)

      Thermal and Plasma Processes Dept. (1997) 26 Review of zirconium-zircaloy pyrophoricity Cooper, T.D. (1984) 26 SMART BRIDGE: A tool for estimating the military load classification ...

    9. March 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Dash, Z.; Kelkar, S. (1988) 50 Review of zirconium-zircaloy pyrophoricity Cooper, T.D. (1984) 50 U235: a gamma ray analysis code for uranium isotopic determination Clark, D. (1997) ...

    10. July 2013 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Bickford, J.C. (1997) 34 > Review of zirconium-zircaloy pyrophoricity Cooper, T.D. (1984) 33 > Description of DASSL: a differential/algebraic system solver Petzold, L.R. ...

    11. GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION...

      Office of Scientific and Technical Information (OSTI)

      ENERGY; LMFBR TYPE REACTORS; NUCLEAR POWER; PHYSICS; BREEDER REACTORS; CARBONACEOUS MATERIALS; DOCUMENT TYPES; ENERGY; ENERGY SOURCES; EPITHERMAL REACTORS; FAST REACTORS; FBR...

    12. SCIENCE; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING...

      Office of Scientific and Technical Information (OSTI)

      ZIRCONIUM ALLOYS; ZIRCONIUM BASE ALLOYS 360100* -- Metals & Alloys; 570000 -- Health & Safety Massive zirconium metal scrap can be handled, shipped, and stored with no...

    13. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      You can read more about the positions and apply at jobs.lbl.gov: Bioinformatics High Performance Computing Consultant (job number: 73194) and Software Developer for High...

    14. Computing and Computational Sciences Directorate - Information Technology

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences and Engineering The Computational Sciences and Engineering Division (CSED) is ORNL's premier source of basic and applied research in the field of data sciences and knowledge discovery. CSED's science agenda is focused on research and development related to knowledge discovery enabled by the explosive growth in the availability, size, and variability of dynamic and disparate data sources. This science agenda encompasses data sciences as well as advanced modeling and

    15. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

      The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical (SET) careers. Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introduction to computers course.

    16. Present and Future Computing Requirements for PETSc

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Future Computing Requirements for PETSc Jed Brown jedbrown@mcs.anl.gov Mathematics and Computer Science Division, Argonne National Laboratory Department of Computer Science, University of Colorado Boulder NERSC ASCR Requirements for 2017 2014-01-15 Extending PETSc's Hierarchically Nested Solvers ANL Lois C. McInnes, Barry Smith, Jed Brown, Satish Balay UChicago Matt Knepley IIT Hong Zhang LBL Mark Adams Linear solvers, nonlinear solvers, time integrators, optimization methods (merged TAO)

    17. Applied Energy Programs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Energy Programs: Los Alamos is using its world-class scientific capabilities to enhance national energy security by developing energy sources with limited environmental impact and by improving the efficiency and reliability of the energy infrastructure. Contact Us: Program Director Melissa Fox (505) 665-0896. The Applied Energy Program Office serves as the hub connecting the Laboratory's scientific and technical resources to DOE sponsors, DoD programs, and to

    18. Applied Math & Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Math & Software - Sandia Energy ... Applied Math & Software Home, Transportation ...

    19. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute nodes: a more detailed hierarchical map of the topology of a compute node is available on the page. Last edited: 2016-02-01 08:07:08

    20. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop....

    1. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute. Purpose: The Computer System,...

    2. DOE Fundamentals Handbook: Mathematics, Volume 1

      SciTech Connect (OSTI)

      Not Available

      1992-06-01

      The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information will provide personnel with a foundation for understanding and performing basic mathematical calculations that are associated with various DOE nuclear facility operations.

    3. DOE Fundamentals Handbook: Mathematics, Volume 2

      SciTech Connect (OSTI)

      Not Available

      1992-06-01

      The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors provide operators, maintenance personnel, and the technical staff with the necessary fundamentals training to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information will provide personnel with a foundation for understanding and performing basic mathematical calculations that are associated with various DOE nuclear facility operations.

    4. Mathematics and biology: The interface, challenges and opportunities

      SciTech Connect (OSTI)

      Levin, S.A. )

      1992-06-01

      The interface between mathematics and biology has long been a rich area of research, with mutual benefit to each supporting discipline. Traditional areas of investigation, such as population genetics, ecology, neurobiology, and 3-D reconstructions, have flourished, despite a rather meager environment for the funding of such work. In the past twenty years, the kind and scope of such interactions between mathematicians and biologists have changed dramatically, reaching out to encompass areas of both biology and mathematics that previously had not benefited. At the same time, with the closer integration of theory and experiment, and the increased reliance on high-speed computation, the costs of such research grew, though not the opportunities for funding. The perception became reinforced, both within the research community and at funding agencies, that although these interactions were expanding, they were not doing so at the rate necessary to meet the opportunities and needs. A workshop was held in Washington, DC, between April 28 and May 3, 1990 which drew together a broadly based group of researchers to synthesize conclusions from a group of working papers and extended discussions. The result is the report presented here, which we hope will provide a guide and stimulus to research in mathematical and computational biology for at least the next decade. The report identifies a number of grand challenges, representing a broad consensus among the participants.

    5. Using Mira to Design Cleaner Engines | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Using Mira to Design Cleaner Engines. Event Sponsor: Mathematics and Computing Science - LANS Seminar. Start Date: Oct 28 2015 - 3:00pm. Building/Room: Building 240, Room 4301...

    6. New DOE Office of Science support for CAMERA to develop computational

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      mathematics for experimental facilities research. September 22, 2015. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov. Experimental science is evolving. With the advent of new technology, scientific facilities are collecting data at

    7. Mathematics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (1868-1942) JSTOR Contains the backfiles of many core academic journals Zentralblatt MATH The ZBMATH Online Database covers 1826-present Organizations American Institute of...

    8. DOE Applied Math Summit | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      DOE Applied Math Summit. Advanced Scientific Computing Research (ASCR) ... ASCR Workshops and Conferences > DOE Applied Math Summit ...

    9. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Earth Science: We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental health, cleaner energy, and national security. Contact Us: Group Leader Carl Gable; Deputy Group Leader Gilles Bussod. Hari Viswanathan inspects a microfluidic cell used to study the extraction of hydrocarbon fuels from a complex fracture network. EES-16's Subsurface Flow

    10. Computing and Computational Sciences Directorate - Computer Science...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      AWARD Winners: Jess Gehin; Jackie Isaacs; Douglas Kothe; Debbie McCoy; Bonnie Nestor; John Turner; Gilbert Weigand Organization(s): Nuclear Technology Program; Computing and...

    11. Apply for the Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and applications development Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant...

    12. Mathematical models of cocurrent spray drying

      SciTech Connect (OSTI)

      Negiz, A.; Lagergren, E.S.; Cinar, A.

      1995-10-01

      A steady state mathematical model for a cocurrent spray dryer is developed. The model includes the mass, momentum, and energy balances for a single drying droplet as well as the total energy and mass balances of the drying medium. A log normal droplet size distribution is assumed to hold at the exit of the twin-fluid atomizer located at the top of the drying chamber. The discretization of this log normal distribution with a certain number of bins yields a system of nonlinear coupled first-order differential equations as a function of the axial distance of the drying chamber. This system of equations is used to compute the axial changes in droplet diameter, density, velocity, moisture, and temperature for the droplets at each representative bin. Furthermore, the distributions of important process parameters such as droplet moisture content, diameter, density, and temperature are also obtainable along the length of the chamber. On the basis of the developed model, a constrained nonlinear optimization problem is solved, where the exit particle moisture content is minimized with respect to the process inputs subjected to a fixed mean particle diameter at the chamber exit. Response surface studies based on empirical models are also performed to illustrate the effectiveness of these techniques in achieving the optimal solution when an a priori model is not available. The structure of empirical models obtained from the model is shown to be in agreement with the structure of the empirical models obtained from the experimental studies.
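
      The discretization step described above, binning a log-normal droplet size distribution so that each bin contributes one representative droplet to the coupled ODE system, can be sketched as follows. The geometric mean diameter, geometric standard deviation, and number of bins are invented for illustration and are not the paper's values.

          import numpy as np
          from scipy.stats import lognorm

          d_g, sigma_g = 50e-6, 1.8      # geometric mean diameter (m) and geometric std dev (assumed)
          mu, sigma = np.log(d_g), np.log(sigma_g)

          n_bins = 10
          edges = np.exp(np.linspace(mu - 3 * sigma, mu + 3 * sigma, n_bins + 1))
          d_rep = np.sqrt(edges[:-1] * edges[1:])        # geometric midpoint diameter of each bin

          cdf = lognorm.cdf(edges, s=sigma, scale=np.exp(mu))
          weights = np.diff(cdf)
          weights /= weights.sum()                       # number fraction assigned to each bin

          # each (d_rep[i], weights[i]) pair seeds one set of droplet ODEs
          # (diameter, density, velocity, moisture, temperature vs. axial distance)
          for d, w in zip(d_rep, weights):
              print(f"bin: d = {d * 1e6:6.1f} um, number fraction = {w:.3f}")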

    13. Apply for Beamtime

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Apply for Beamtime. Friday, 28 August 2009. Available Beamlines: Determine which ALS beamlines are suitable for your experiment. To do this, you can review the ALS Beamlines Directory, contact the appropriate beamline scientist listed on the Directory, and/or contact the e-mail address given on the page (obscured there by spam protection). Log in to the ALSHub user portal. For More Information About the Types of Proposals: To learn

    14. Applied Science/Techniques

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Science/Techniques. The ALS is an excellent incubator of new scientific techniques and instrumentation. Many of the technical advances that make the ALS a world-class soft x-ray facility are developed at the ALS itself. The optical components in use at the ALS (mirrors and lenses optimized for x-ray wavelengths) require incredibly high-precision surfaces and patterns (often formed through extreme ultraviolet lithography at the ALS) and must undergo rigorous

    15. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources: The TRACC Computational Clusters. With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    16. Yuri Alexeev | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Yuri Alexeev Assistant Computational Scientist Yury Alekseev Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 1126 Argonne IL, 60439 630-252-0157 yuri@alcf.anl.gov Yuri Alexeev is an Assistant Computational Scientist at the Argonne Leadership Computing Facility where he applies his skills, knowledge and experience for using and enabling computational methods in chemistry and biology for high-performance computing on next-generation high-performance computers. Yuri is

    17. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes. Quad-core AMD Opteron processor. Compute Node Configuration: 9,572 nodes; 1 quad-core AMD 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB DDR3 800 MHz memory per node. Peak Gflop rate: 9.2 Gflops/core, 36.8 Gflops/node, 352 Tflops for the entire machine. Each core has its own L1 and L2 caches, of 64 KB and 512 KB respectively; a 2 MB L3 cache is shared among the 4 cores. Compute Node Software: By default the compute nodes run a restricted low-overhead
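
      The peak-rate figures quoted above follow from simple multiplication, as the short check below shows (numbers taken directly from the listing).

          nodes = 9_572
          cores_per_node = 4
          gflops_per_core = 9.2

          gflops_per_node = gflops_per_core * cores_per_node      # 36.8 Gflops/node
          total_tflops = gflops_per_node * nodes / 1_000.0        # ~352 Tflops overall
          print(gflops_per_node, round(total_tflops, 1))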

    18. ACM TOMS replicated computational results initiative

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Heroux, Michael Allen

      2015-06-03

      In this study, the scientific community relies on the peer review process for assuring the quality of published material, the goal of which is to build a body of work we can trust. Computational journals such as The ACM Transactions on Mathematical Software (TOMS) use this process for rigorously promoting the clarity and completeness of content, and citation of prior work. At the same time, it is unusual to independently confirm computational results.

    19. ACM TOMS replicated computational results initiative

      SciTech Connect (OSTI)

      Heroux, Michael Allen

      2015-06-03

      In this study, the scientific community relies on the peer review process for assuring the quality of published material, the goal of which is to build a body of work we can trust. Computational journals such as The ACM Transactions on Mathematical Software (TOMS) use this process for rigorously promoting the clarity and completeness of content, and citation of prior work. At the same time, it is unusual to independently confirm computational results.

    20. Mark Hereld | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Hereld Manager, Visualization and Data Analysis Mark Hereld Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 4139 Argonne, IL 60439 630-252-4170 hereld@mcs.anl.gov Mark Hereld is the ALCF's Visualization and Data Analysis Manager. He is also a member of the research staff in Argonne's Mathematics and Computer Science Division and a Senior Fellow of the Computation Institute with a joint appointment at the University of Chicago. His work in understanding simulation on future

    1. Barracuda® Computational Particle Fluid Dynamics (CPFD®) Software |

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Department of Energy: Barracuda® Computational Particle Fluid Dynamics (CPFD®) Software. Innovative Software Program Extends the Capabilities of CFD by Modeling Solid Particle Movement. Invented at the Los Alamos Scientific Laboratory in the 1950s and '60s, computational fluid dynamics (CFD) is a mathematical expression of the physics of the movements of fluids (liquids and gases). CFD computer software simulates real-world

    2. Apply for Technical Assistance

      Office of Environmental Management (EM)

      Apply for Technical Assistance: Use this online form to request technical assistance from the DOE Office of Indian Energy for planning and implementing energy projects on tribal lands. To help us determine whether your request fits within the program's scope and can be addressed with available resources, please provide the information below and then click on "Submit Request." Only requests from federally recognized Indian Tribes, bands, nations, tribal energy resource development

    3. Applied Modern Physics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Modern Physics: From the first bionic eye to airport scanners that detect liquid explosives, our expertise in developing advanced diagnostics results in real-world innovations. Contact Us: Group Leader (acting) Larry Schultz; Deputy Group Leader John George; Group Office (505) 665-2545. QkarD: Quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer. A history of excellence in the development and use of

    4. ARM - Facility News Article

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Research (ASCR) has announced a new funding opportunity in Applied Mathematics entitled "Mathematical Multifaceted Integrated Capability Centers (MMICCs)"...

    5. A novel mathematical model for controllable near-field electrospinning

      SciTech Connect (OSTI)

      Ru, Changhai E-mail: luojun@shu.edu.cn; Robotics and Microsystems Center, Soochow University, Suzhou 215021 ; Chen, Jie; Shao, Zhushuai; Pang, Ming; Luo, Jun E-mail: luojun@shu.edu.cn

      2014-01-15

      Near-field electrospinning (NFES) offers better controllability than conventional electrospinning. However, due to the lack of a guiding theoretical model, precise deposition of micro/nano fibers could only be accomplished by experience. To analyze the behavior of the charged jet in NFES using a mathematical model, the momentum balance equation was simplified and a new expression relating jet cross-sectional radius to axial position was derived. Using this new expression and the mass conservation equation, expressions for jet cross-sectional radius and velocity were derived in terms of axial position and initial jet acceleration in the form of exponential functions. Based on slender-body theory and the Giesekus model, a quadratic equation for the initial jet acceleration was obtained. With the proposed model, the diameter and velocity of polymer fibers in NFES can be accurately predicted, and mathematical analysis rather than experimental methods can be applied to study the effects of the process parameters. Moreover, the movement velocity of the collector stage can be regulated by the mathematical model rather than by experience. Therefore, the model proposed in this paper provides important guidance for the precise deposition of polymer fibers.

    6. Apply for Beamtime

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Apply for Beamtime. Available Beamlines: Determine which ALS beamlines are suitable for your experiment. To do this, you can review the ALS Beamlines Directory, contact the appropriate beamline scientist listed on the Directory, and/or contact the e-mail address given on the page (obscured there by spam protection). Log in to the ALSHub user portal. For More Information About the Types of Proposals: To learn more about the three different types of

    7. Applied Science/Techniques

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Science/Techniques. The ALS is an excellent incubator of new scientific techniques and instrumentation. Many of the technical advances that make the ALS a world-class soft x-ray facility are developed at the ALS itself. The optical components in use at the ALS (mirrors and lenses optimized for x-ray wavelengths) require incredibly high-precision surfaces and patterns (often formed through extreme ultraviolet lithography at the ALS) and must undergo rigorous calibration and testing

    8. Apply for Beamtime

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Apply for Beamtime. Available Beamlines: Determine which ALS beamlines are suitable for your experiment. To do this, you can review the ALS Beamlines Directory, contact the appropriate beamline scientist listed on the Directory, and/or contact the e-mail address given on the page (obscured there by spam protection). Log in to the ALSHub user portal. For More Information About the Types of Proposals: To learn more about the three different types of

    9. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    10. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use by a US citizen and so configured, and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B 174. Use

    11. Conference on Non-linear Phenomena in Mathematical Physics: Dedicated...

      Office of Scientific and Technical Information (OSTI)

      current trends of nonlinear phenomena in mathematical physics, but also served as an awareness session on current women's contributions to mathematics. Authors:...

    12. ORISE: Applied health physics projects

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied health physics projects The Oak Ridge Institute for Science and Education (ORISE) provides applied health physics services to government agencies needing technical support ...

    13. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Extreme Scale Computing, Co-design. Computational co-design may facilitate revolutionary designs in the next generation of supercomputers. Expertise: Tim Germann (Physics and Chemistry of Materials); Allen McPherson (Energy and Infrastructure Analysis); Turab Lookman (Physics of Condensed Matter and Complex Systems). Computational co-design involves developing the interacting components of a

    14. Impact analysis on a massively parallel computer

      SciTech Connect (OSTI)

      Zacharia, T.; Aramayo, G.A.

      1994-06-01

      Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

    15. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes. There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent, except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1 GB, you should specify that in your job submission and the batch system will run your job on an

    16. Browse by Discipline -- E-print Network Subject Pathways: Computer

      Office of Scientific and Technical Information (OSTI)

      Technologies and Information Sciences -- Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy. Haack, Jeff (Jeff Haack) - Department of Mathematics, University of Texas at Austin; Haagerup, Uffe (Uffe Haagerup) - Department of Mathematics and Computer Science, University of Southern Denmark; Haak, Bernhard (Bernhard Haak) - Institut de Mathematiques de Bordeaux,

    17. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes. Quad-core AMD Opteron processor. Compute Node Configuration: 9,572 nodes; 1 quad-core AMD 'Budapest' 2.3 GHz processor per node; 4 cores per node (38,288 total cores); 8 GB...

    18. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Exascale Computing CoDEx Project: A Hardware/Software Codesign Environment for the Exascale Era The next decade will see a rapid evolution of HPC node architectures as power and cooling constraints are limiting increases in microprocessor clock speeds and constraining data movement. Applications and algorithms will need to change and adapt as node architectures evolve. A key element of the strategy as we move forward is the co-design of applications, architectures and programming

    19. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world’s highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab’s Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that makes it all possible.

    20. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

      SciTech Connect (OSTI)

      Du, Qiang

      2014-11-12

      The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well-known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
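
      The key point above, that the nonlocal bond-based force converges to the classical elastic term E u'' for smooth deformations and small horizons, can be checked numerically with a few lines. This is a generic 1D bond-based sketch under assumed parameters (micromodulus c = 2E/delta^2, a sinusoidal test displacement); it is not code from the project.

          import numpy as np

          L_bar, n = 1.0, 2001
          dx = L_bar / (n - 1)
          x = np.linspace(0.0, L_bar, n)
          m = 6                                  # neighbors per side inside the horizon
          delta = m * dx                         # peridynamic horizon
          E = 1.0
          c = 2.0 * E / delta**2                 # common 1D bond-based micromodulus

          u = np.sin(2.0 * np.pi * x)            # smooth test displacement field

          # nonlocal internal force density: sum of bond forces over the horizon
          f = np.zeros(n)
          interior = slice(m, n - m)
          for k in range(1, m + 1):
              xi = k * dx
              w = dx if k < m else 0.5 * dx      # trapezoidal weight at the horizon edge
              f[interior] += c * (u[m + k:n - m + k] - u[m:n - m]) / xi * w  # neighbor at +xi
              f[interior] += c * (u[m - k:n - m - k] - u[m:n - m]) / xi * w  # neighbor at -xi

          # classical elasticity gives E * u'' = -E (2*pi)^2 sin(2*pi*x)
          f_classical = -E * (2.0 * np.pi) ** 2 * np.sin(2.0 * np.pi * x[interior])
          print("relative difference:",
                np.abs(f[interior] - f_classical).max() / np.abs(f_classical).max())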

    1. ACM TOMS replicated computational results initiative (Journal Article) |

      Office of Scientific and Technical Information (OSTI)

      SciTech Connect Journal Article: ACM TOMS replicated computational results initiative. In this study, the scientific community relies on the peer review process for assuring the quality of published material, the goal of which is to build a body of work we can trust. Computational journals such as The ACM Transactions on Mathematical

    2. Climate Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Mirin, A A

      2007-02-05

      The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

    3. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    4. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home › About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and Computational Sciences Directorate Michael Bartell Chief Information Officer Information Technologies Services Division Jim Hack Director, Climate Science Institute National Center for Computational Sciences Shaun Gleason Division Director Computational Sciences and Engineering Barney Maccabe Division Director Computer Science

    5. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes. Compute Node Configuration: 6,384 nodes; 2 twelve-core AMD 'MagnyCours' 2.1 GHz processors per node; 24 cores per node (153,216 total cores); 32 GB DDR3 1333 MHz memory per node (6,000 nodes); 64 GB DDR3 1333 MHz memory per node (384 nodes). Peak Gflop/s rate: 8.4 Gflops/core, 201.6 Gflops/node, 1.28 Petaflops for the entire machine. Each core has its own L1 and L2 caches, of 64 KB and 512 KB respectively; one 6-MB

    6. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable in driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for "Springback Predictability" and with the Federal Aviation Administration (FAA) for the "Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris." In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    7. MULTISCALE MATHEMATICS FOR BIOMASS CONVERSION TO RENEWABLE HYDROGEN

      SciTech Connect (OSTI)

      Vlachos, Dionisios; Plechac, Petr; Katsoulakis, Markos

      2013-09-05

      The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
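
      As a toy illustration of the kinetic Monte Carlo ingredient in goal (i), the sketch below runs a well-mixed (mean-field) stochastic simulation of adsorption, desorption, and reaction on a catalytic surface using Gillespie's algorithm. It deliberately omits the spatial lattice and coarse-graining that the project actually develops, and all rate constants are invented.

          import numpy as np

          rng = np.random.default_rng(1)

          n_sites = 1000
          n_occupied = 0
          k_ads, k_des, k_rxn = 1.0, 0.5, 0.2   # per-site rate constants (invented)

          t, t_end = 0.0, 50.0
          while t < t_end:
              rates = np.array([
                  k_ads * (n_sites - n_occupied),   # adsorption onto an empty site
                  k_des * n_occupied,               # desorption of an adsorbate
                  k_rxn * n_occupied,               # surface reaction consuming an adsorbate
              ])
              total = rates.sum()
              if total == 0.0:
                  break
              t += rng.exponential(1.0 / total)     # waiting time to the next event
              event = rng.choice(3, p=rates / total)
              n_occupied += 1 if event == 0 else -1

          # mean-field steady state for comparison: theta = k_ads / (k_ads + k_des + k_rxn)
          print("simulated coverage:", n_occupied / n_sites)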

    8. Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks

      SciTech Connect (OSTI)

      Ziaul Huque

      2007-08-31

      This is the final technical report for the project titled 'Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks'. The aim of the project was to develop an efficient chemistry model for combustion simulations. The reduced chemistry model was developed mathematically without the need for extensive knowledge of the chemistry involved. To aid in the development of the model, Neural Networks (NN) were used via a new network topology known as Non-linear Principal Components Analysis (NPCA). A commonly used Multilayer Perceptron Neural Network (MLP-NN) was modified to implement NPCA-NN. The training rate of NPCA-NN was improved with the Generalized Regression Neural Network (GRNN), which is based on kernel smoothing techniques. Kernel smoothing provides a simple way of finding structure in a data set without the imposition of a parametric model. The trajectory data of the reaction mechanism was generated based on the optimization techniques of the genetic algorithm (GA). The NPCA-NN algorithm was then used for the reduction of the Dimethyl Ether (DME) mechanism. DME is a recently developed fuel made from natural gas (and other feedstocks such as coal, biomass, and urban wastes), which can be used in compression ignition engines as a substitute for diesel. An in-house two-dimensional Computational Fluid Dynamics (CFD) code was developed based on a meshfree technique and a time-marching solution algorithm. The project also provided valuable research experience to two graduate students.
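
      The kernel-smoothing estimator at the heart of a GRNN is essentially Nadaraya-Watson regression: the prediction at a query point is a Gaussian-kernel weighted average of the training targets. The sketch below shows that estimator on synthetic 1D data; it is a generic illustration, not the NPCA-NN pipeline of the report, and the bandwidth and data are assumed.

          import numpy as np

          def grnn_predict(x_train, y_train, x_query, bandwidth):
              """Nadaraya-Watson kernel-smoothing regression (the GRNN estimator)."""
              d2 = (x_query[:, None] - x_train[None, :]) ** 2     # pairwise squared distances
              w = np.exp(-d2 / (2.0 * bandwidth ** 2))            # Gaussian kernel weights
              return (w @ y_train) / w.sum(axis=1)                # weighted average of targets

          rng = np.random.default_rng(0)
          x = rng.uniform(0.0, 1.0, 200)
          y = np.sin(2.0 * np.pi * x) + 0.1 * rng.normal(size=200)
          x_query = np.linspace(0.0, 1.0, 5)
          print(grnn_predict(x, y, x_query, bandwidth=0.05))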

    9. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Resources This page is the repository for sundry items of information relevant to general computing on BooNE. If you have a question or problem that isn't answered here, or a suggestion for improving this page or the information on it, please mail boone-computing@fnal.gov and we'll do our best to address any issues. Note about this page Some links on this page point to www.everything2.com, and are meant to give an idea about a concept or thing without necessarily wading through a whole website

    10. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

      SciTech Connect (OSTI)

      Luttman, A.

      2012-03-30

      The main (long-term) goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed using an objective function built from expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from the observations and priors. A mathematical formulation for computing flow fields is set up, and its minimizer is computed. An application to oceanic flow based on sea surface temperature is presented.
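
      One classical variational method of this kind is Horn-Schunck optical flow, which minimizes a data-fidelity term plus a smoothness prior to recover a velocity field from a pair of images. The sketch below is a bare-bones version of that standard scheme, offered only to make the idea concrete; it is not the authors' formulation, and the smoothing weight, iteration count, and wrap-around boundary handling are simplifications.

          import numpy as np

          def horn_schunck(im1, im2, alpha=10.0, n_iter=200):
              """Estimate a (u, v) velocity field from two images by variational optimization."""
              im1, im2 = im1.astype(float), im2.astype(float)
              Ix = 0.5 * (np.gradient(im1, axis=1) + np.gradient(im2, axis=1))  # spatial gradients
              Iy = 0.5 * (np.gradient(im1, axis=0) + np.gradient(im2, axis=0))
              It = im2 - im1                                                    # temporal difference
              u = np.zeros_like(im1)
              v = np.zeros_like(im1)
              for _ in range(n_iter):
                  # 4-neighbor averages (wrap-around boundaries for brevity)
                  u_avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) + np.roll(u, 1, 1) + np.roll(u, -1, 1))
                  v_avg = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) + np.roll(v, 1, 1) + np.roll(v, -1, 1))
                  num = Ix * u_avg + Iy * v_avg + It
                  den = alpha ** 2 + Ix ** 2 + Iy ** 2
                  u = u_avg - Ix * num / den
                  v = v_avg - Iy * num / den
              return u, v

          # usage (hypothetical sea-surface-temperature frames): u, v = horn_schunck(frame_t0, frame_t1)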

    11. Applied Optoelectronics | Open Energy Information

      Open Energy Info (EERE)

      optical semiconductor devices, packaged optical components, optical subsystems, laser transmitters, and fiber optic transceivers. References: Applied Optoelectronics1...

    12. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research: Discovering, developing, and deploying computational and networking capabilities to analyze, model,...

    13. Energy Department Announces Ten New Projects to Apply High-Performance

      Energy Savers [EERE]

      Computing to Manufacturing Challenges | Department of Energy. February 17, 2016 - 9:30am. The Energy Department today announced $3 million for ten new projects that will enable private-sector companies to use high-performance computing resources at the department's national laboratories to tackle

    14. Sandia Energy - Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science. Home > Energy Research > Advanced Scientific Computing Research (ASCR) > Computational Science ...

    15. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    16. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Sciences and Engineering Division is a major research division at the Department of Energy's Oak Ridge National Laboratory. CSED develops and applies creative information technology and modeling and simulation research solutions for National Security and National Energy Infrastructure needs. The mission of the Computational Sciences and Engineering Division is to enhance the country's capabilities in achieving important objectives in the areas of national defense, homeland

    17. Applied Math PI Meet | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Applied Math PI Meet. Advanced Scientific Computing Research (ASCR) ... ASCR Workshops and Conferences > Applied Math PI Meet ...

    18. Apply

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Unofficial transcripts are acceptable. If transcripts are not in English, provide a translation. If grades are not in the U.S.-traditional lettered (A,B,C), or GPA (out of 4.0)...

    19. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Events Computing Events Spotlighting the most advanced scientific and technical applications in the world! Featuring exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver. Supercomputing Conference 13 Denver, Colorado November 17-22, 2013 Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international

    20. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report - Phase I

      SciTech Connect (OSTI)

      Mark S. Schmalz

      2011-07-24

      Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation {und G} for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G {yields} {und G}, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
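
      To make the kernel-graph idea above concrete, here is a minimal, hypothetical sketch (not the project's actual tooling) of partitioning profiled kernels between CPU and GPU in the spirit of the G {yields} {und G} mapping; all kernel names, cost fractions, and the threshold are illustrative assumptions.

```python
# Illustrative sketch only: represent an application as a set of profiled
# kernels and map each one to 'gpu' or 'cpu'. Kernel names, cost fractions,
# and the cost threshold are hypothetical placeholders.

KERNELS = {
    # name: (fraction_of_runtime, data_parallel)
    "read_input":   (0.02, False),
    "pair_forces":  (0.55, True),
    "integrate":    (0.30, True),
    "write_output": (0.13, False),
}

def map_to_architecture(kernels, cost_threshold=0.10):
    """Place data-parallel, high-cost kernels on the GPU; leave the rest on the CPU."""
    return {
        name: "gpu" if (parallel and cost >= cost_threshold) else "cpu"
        for name, (cost, parallel) in kernels.items()
    }

if __name__ == "__main__":
    for kernel, target in map_to_architecture(KERNELS).items():
        print(f"{kernel:12s} -> {target}")
```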

    1. Idaho Science, Technology, Engineering and Mathematics Overview

      ScienceCinema (OSTI)

      None

      2013-05-28

      Idaho National Laboratory has been instrumental in establishing the Idaho Science, Technology, Engineering and Mathematics initiative -- i-STEM, which brings together industry, educators, government and other partners to provide K-12 teachers with support, materials and opportunities to improve STEM instruction and increase student interest in technical careers. You can learn more about INL's education programs at http://www.facebook.com/idahonationallaboratory.

    2. Applied Materials | Open Energy Information

      Open Energy Info (EERE)

      Name: Applied Materials Address: 3050 Bowers Avenue Place: Santa Clara, California Zip: 95054 Sector: Solar Website: www.appliedmaterials.com...

    3. Sandia Energy - Applied Turbulent Combustion

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      submodels that bridge fundamental energy sciences with applied device engineering and optimization. Turbulent-combustion-lab1-300x218 Complementary burner facilities with...

    4. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing at JLab: Accelerator Controls, CAD, CDEV, CODA, Computer Center, High Performance Computing, Scientific Computing, JLab Computer Silo.

    5. Mathematical modeling of silica deposition in Tongonan-I reinjection wells, Philippines

      SciTech Connect (OSTI)

      Malate, R.C.M.; O`Sullivan, M.J.

      1993-10-01

      Mathematical models of silica deposition are derived using the method of characteristics for the problem of variable rate injection into a well producing radially symmetric flow. Solutions are developed using the first order rate equation of silica deposition suggested by Rimstidt and Barnes (1980). The changes in porosity and permeability resulting from deposition are included in the models. The models developed are successfully applied in simulating the changes in injection capacity in some of the reinjection wells in Tongonan geothermal field, Philippines.
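
      As a rough illustration of the kind of first-order kinetics referred to above, the sketch below integrates dC/dt = -k (C - C_eq) for the silica concentration of an injected fluid; the rate constant, concentrations, and time span are made-up placeholders, and the full models additionally couple the deposited mass to porosity and permeability changes around the well.

```python
# Rough illustration only: first-order silica deposition kinetics of the form
# dC/dt = -k (C - C_eq), integrated analytically for a parcel of injected fluid.
# The rate constant, concentrations, and time span are hypothetical values.
import numpy as np

k    = 1.0e-5          # first-order rate constant, 1/s (hypothetical)
C_eq = 500.0           # equilibrium silica solubility, mg/kg (hypothetical)
C0   = 900.0           # silica concentration of the injected fluid, mg/kg (hypothetical)

t = np.linspace(0.0, 7 * 24 * 3600.0, 200)       # one week of injection
C = C_eq + (C0 - C_eq) * np.exp(-k * t)          # solution of dC/dt = -k (C - C_eq)
deposited = C0 - C                               # silica dropped out of solution, mg/kg

# In the full models, this deposited mass is what drives the porosity and
# permeability changes that reduce injection capacity.
print(f"concentration after 1 week: {C[-1]:.1f} mg/kg")
print(f"silica deposited per kg of fluid: {deposited[-1]:.1f} mg")
```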

    6. Artificial intelligence technologies applied to terrain analysis

      SciTech Connect (OSTI)

      Wright, J.C. ); Powell, D.R. )

      1990-01-01

      The US Army Training and Doctrine Command is currently developing, in cooperation with Los Alamos National Laboratory, a Corps-level combat simulation to support military analytical studies. This model emphasizes high-resolution modeling of the command and control processes, with particular attention to architectural considerations that enable extension of the model. A planned future extension is the inclusion of a computer-based planning capability for command echelons that can be dynamically invoked during the execution of the model. Command and control is the process through which the activities of military forces are directed, coordinated, and controlled to achieve the stated mission. To perform command and control, the commander must understand the mission, perform terrain analysis, and understand his own situation and capabilities as well as the enemy situation and the enemy's probable actions. To support computer-based planning, data structures must be available to support the computer's ability to 'understand' the mission, terrain, own capabilities, and enemy situation. The availability of digitized terrain makes it feasible to apply artificial intelligence technologies to emulate the terrain analysis process, producing data structures for use in planning. The work done thus far to support the understanding of terrain is the topic of this paper. 13 refs., 5 figs., 6 tabs.

    7. Profile for Timothy C. Germann

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      materials Computational Physics and Applied Mathematics Computational Co-Design Molecular dynamics Accelerated Molecular Dynamics (AMD) Agent-based applications ...

    8. Debugging and Correctness Tools Working Session | U.S. DOE Office...

      Office of Science (SC) Website

      Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I...

    9. Challenges | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I...

    10. Awards | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Next Generation Networking Scientific Discovery through Advanced...

    11. High performance computing and communications: Advancing the frontiers of information technology

      SciTech Connect (OSTI)

      1997-12-31

      This report, which supplements the President`s Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

    12. Computation Directorate 2008 Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2009-03-25

      Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

    13. Bridging the Gap between Fundamental Physics and Chemistry and Applied

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Models for HCCI Engines | Department of Energy Bridging the Gap between Fundamental Physics and Chemistry and Applied Models for HCCI Engines 2005 Diesel Engine Emissions Reduction (DEER) Conference Presentations and Posters 2005_deer_assanis.pdf More Documents & Publications Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines Modeling of HCCI and PCCI

    14. Applied Sedimentology | Open Energy Information

      Open Energy Info (EERE)

      Sedimentology OpenEI Reference Library Book: Applied Sedimentology Author R.C. Salley Published Academic Press, 2000 DOI Not Provided...

    15. Computing Resources | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Resources Mira Cetus and Vesta Visualization Cluster Data and Networking Software JLSE Computing Resources Theory and Computing Sciences Building Argonne's Theory and Computing Sciences (TCS) building houses a wide variety of computing systems including some of the most powerful supercomputers in the world. The facility has 25,000 square feet of raised computer floor space and a pair of redundant 20 megavolt amperes electrical feeds from a 90 megawatt substation. The building also

    16. ORISE: Applied health physics projects

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied health physics projects The Oak Ridge Institute for Science and Education (ORISE) provides applied health physics services to government agencies needing technical support for decommissioning projects. Whether the need is assistance with the development of technical basis documents or advice on how to identify, measure and assess the presence of radiological materials, ORISE can help determine the best course for an environmental cleanup project. Our key areas of expertise include fuel

    17. Validating Computer-Designed Proteins for Vaccines

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      apply to a variety of other vaccine targets, such as human immunodeficiency virus and influenza. Wanted: Dead or Computed As strange as it sounds, most vaccines are composed of...

    18. Bringing Advanced Computational Techniques to Energy Research

      SciTech Connect (OSTI)

      Mitchell, Julie C

      2012-11-17

      Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

    19. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC INL Logo Home High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    20. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      User Defined Images Archive APEX Home R & D Exascale Computing CAL Computer Architecture Lab The goal of the Computer Architecture Laboratory (CAL) is to engage in...

    1. Computing and Computational Sciences Directorate - Computer Science and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematics Division Supercomputing Oak Ridge National Laboratory is home to several of the world's most powerful supercomputing resources. Each of these resources is dedicated to delivering high-impact science results for the researchers that utilize them. For more information about each of these systems, please visit the following: Titan Kraken Gaea

    2. Quantum Computing: Solving Complex Problems

      ScienceCinema (OSTI)

      DiVincenzo, David [IBM Watson Research Center

      2009-09-01

      One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

    3. Guide to Preventing Computer Software Piracy

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2001-07-12

      Guide to Preventing Computer Software Piracy It is the intent of the Department of Energy (DOE) to issue guidance in accordance with Federal CIO Council recommendations and in compliance with Executive Order 13103. The guidance in this document is based on the CIO Council's recommendations in reference to computer software piracy, and applies to all DOE elements. Canceled by DOE N 205.18

    4. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts of papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include engineering modeling and process simulation, criticality methods and analysis, and plutonium disposition.

    5. Mathematical and Statistical Opportunities in Cyber Security (Technical

      Office of Scientific and Technical Information (OSTI)

      Report) | SciTech Connect Title: Mathematical and Statistical Opportunities in Cyber Security The role of mathematics in a complex system such as the Internet has yet to be deeply explored. In this paper, we summarize some of the important and pressing problems in cyber security from the viewpoint of open science environments. We start by posing the question 'What fundamental problems exist

    6. Summer of Applied Geophysical Experience

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Summer of Applied Geophysical Experience (SAGE) 2016 - Our 34th Year! SAGE is a 3-4 week research and education program in exploration geophysics for graduate and undergraduate students and working professionals based in Santa Fe, NM, U.S.A. Application deadline March 27, 2016, 5:00pm MDT SAGE students, faculty, teaching assistants, and visiting scientists acquire, process and interpret reflection/refraction seismic, magnetotelluric (MT)/electromagnetic (EM), ground penetrating radar (GPR),

    7. SCIENCE ON SATURDAY- "Disastrous Equations: The Role of Mathematics...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      On Saturday MBG Auditorium SCIENCE ON SATURDAY- "Disastrous Equations: The Role of Mathematics in Understanding Tsunami" Professor J. Douglas Wright, Associate Professor...

    8. Integrated Mathematical Modeling Software Series of Vehicle Propulsion...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Mathematical Modeling Software Series of Vehicle Propulsion System: (1) Tractive Effort (T ... and Performance Data Collection and Analysis Program WORKSHOP REPORT: Trucks and ...

    9. Conference on Non-linear Phenomena in Mathematical Physics: Dedicated...

      Office of Scientific and Technical Information (OSTI)

      Institute, Toronto, Canada September 18-20, 2008. Sponsors: Association for Women in Mathematics, Inc. and The Fields Institute Citation Details In-Document Search Title:...

    10. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      In the early 2000s, members of Fermilab's Computing Division looked ahead to experiments like those at the Large Hadron Collider, which would collect more data than any computing ...

    11. computation | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

    12. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

    13. computing | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

    14. Mira Computational Readiness Assessment | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility Mira Computational Readiness Assessment: Assess your project's computational readiness for Mira. A review of the following computational readiness points in relation to scaling, porting, I/O, memory

    15. Internal combustion engines; Applied thermosciences

      SciTech Connect (OSTI)

      Ferguson, C.R.

      1985-01-01

      Focusing on thermodynamic analysis - from the requisite first law to more sophisticated applications - and engine design, this book is an introduction to internal combustion engines and their mechanics. It covers the many types of internal combustion engines, including spark ignition, compression ignition, and stratified charge engines, and examines processes, keeping equations of state simple by assuming constant specific heats. Equations are limited to heat engines and later applied to combustion engines. Topics include realistic equations of state, stoichiometry, predictions of chemical equilibrium, engine performance criteria, and friction, which is discussed in terms of the hydrodynamic theory of lubrication and experimental methods such as dimensional analysis.
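
      As a small worked example of the constant-specific-heats treatment described above, the following computes the ideal air-standard Otto cycle thermal efficiency, eta = 1 - r^(1 - gamma); the compression ratios chosen are arbitrary illustrative values.

```python
# Worked example in the spirit of a constant-specific-heats analysis: the
# air-standard Otto cycle thermal efficiency as a function of compression ratio.
def otto_efficiency(compression_ratio: float, gamma: float = 1.4) -> float:
    """Ideal air-standard Otto cycle efficiency, eta = 1 - r**(1 - gamma)."""
    return 1.0 - compression_ratio ** (1.0 - gamma)

if __name__ == "__main__":
    for r in (8, 10, 12):   # illustrative compression ratios
        print(f"r = {r:2d}: eta = {otto_efficiency(r):.3f}")
```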

    16. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J. (Rochester, MN); Megerian, Mark G. (Rochester, MN); Ratterman, Joseph D. (Rochester, MN); Smith, Brian E. (Rochester, MN)

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
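
      A minimal sketch of the routing idea, assuming a toy four-node machine with two hypothetical link sets; it is not the patented implementation, only an illustration of dropping a defective link from the first network and falling back to a path through the second.

```python
# Illustrative sketch only (not the patented implementation): two independent
# link sets over the same compute nodes; drop a defective link from network 1
# and find an alternate route through network 2 with breadth-first search.
from collections import deque

NETWORK_1 = {("n0", "n1"), ("n1", "n2"), ("n2", "n3")}   # hypothetical first network
NETWORK_2 = {("n0", "n2"), ("n2", "n1"), ("n1", "n3")}   # hypothetical second network

def neighbors(node, links):
    for a, b in links:
        if a == node:
            yield b
        elif b == node:
            yield a

def route(src, dst, links):
    """Breadth-first search for a path from src to dst over undirected links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in neighbors(path[-1], links):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

defective = ("n1", "n2")
print("network 1 without defective link:", route("n0", "n3", NETWORK_1 - {defective}))
print("fallback through network 2:      ", route("n0", "n3", NETWORK_2))
```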

    17. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    18. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    19. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Security NERSC Computer Security NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or...

    20. Mathematical and computational modeling of the diffraction problems by discrete singularities method

      SciTech Connect (OSTI)

      Nesvit, K. V.

      2014-11-12

      The main objective of this study is to reduce the boundary-value problems of scattering and diffraction of waves on plane-parallel structures to singular or hypersingular integral equations. For these cases we use the method of parametric representations of integral and pseudo-differential operators. Numerical results for the model scattering problems on periodic and boundary gratings, and also on gratings above a flat screen reflector, are presented in this paper.
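
      For orientation, the generic textbook form of a Cauchy-type singular integral equation, with a dominant singular part and a regular kernel, is shown below; the paper's own equations, including the hypersingular cases, may differ in detail.

```latex
% Generic Cauchy-type singular integral equation on a contour L (textbook form,
% shown for orientation only): dominant singular part plus regular kernel K(x,t).
a(x)\,\varphi(x)
  + \frac{b(x)}{\pi}\,\mathrm{p.v.}\!\int_{L}\frac{\varphi(t)}{t-x}\,dt
  + \int_{L} K(x,t)\,\varphi(t)\,dt
  = f(x), \qquad x \in L
```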

    1. Antaki, G.A. 22 NUCLEAR REACTOR TECHNOLOGY; 99 MATHEMATICS, COMPUTERS...

      Office of Scientific and Technical Information (OSTI)

      PIPES; DYNAMIC LOADS; ANALYTIC FUNCTIONS; ANALYTICAL SOLUTION; STRESSES; REGULATIONS; SEISMIC EFFECTS; STRESS ANALYSIS; EPRI; STANDARDS The paper addresses several analytical...

    2. CNMS D Jun-Qiang Lu Computer Science and Mathematics Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      DISCOVERY SEMINAR SERIES Abstract: The pursuit of spintronics ultimately depends on our ability to steer spin currents and detect or flip their polarization. ...

    3. Applied Materials Wind Turbine | Open Energy Information

      Open Energy Info (EERE)

      Wind Turbine Name Applied Materials Wind Turbine Facility Applied Materials Sector Wind energy Facility Type Community Wind Facility Status In Service...

    4. Building America Expert Meeting: Recommendations for Applying...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Recommendations for Applying Water Heaters in Combination Space and Domestic Water Heating Systems Building America Expert Meeting: Recommendations for Applying Water Heaters in ...

    5. Applied Ventures LLC | Open Energy Information

      Open Energy Info (EERE)

      Applied Ventures LLC Name: Applied Ventures LLC Address: 3050 Bowers Avenue Place: Santa Clara, California Zip: 95054 Region: Southern CA Area Product: Venture capital. Number...

    6. Applied Intellectual Capital AIC | Open Energy Information

      Open Energy Info (EERE)

      Intellectual Capital AIC Name: Applied Intellectual Capital (AIC) Place: California Zip: 94501-1010 Product: Applied Intellectual Capital (AIC) was...

    7. Cosmic Reionization On Computers | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      its Cosmic Reionization On Computers (CROC) project, using the Adaptive Refinement Tree (ART) code as its main simulation tool. An important objective of this research is to make...

    8. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    9. Computers-BSA.ppt

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers - Boy Scout Troop 405. What is a computer? Is this a computer? Charles Babbage, father of the computer, designed mechanical calculators in the 1830s to reduce human error. A computer has an input device, memory to store instructions and results, a processor, and an output device. Vacuum tubes: Edison (1883) and Lee de Forest (1906) discovered that vacuum tubes could serve as electrical switches and amplifiers. A switch can be ON (1) or OFF (0); electronic computers use Boolean (George Boole) logic

    10. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    11. Extreme Scale Computing, Co-Design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math » Extreme Scale Computing, Co-design » Publications Ramon Ravelo, Qi An, Timothy C. Germann, and Brad Lee Holian, "Large-scale molecular dynamics simulations of shock induced plasticity in tantalum single crystals," AIP Conference Proceedings 1426, 1263-1266 (2012). Frank J. Cherne, Guy Dimonte, and Timothy C. Germann, "Richtmyer-Meshkov instability examined with large-scale molecular dynamics simulations," AIP

    12. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state-of-the-art of high performance computing in the financial sector, and provide insight into how different types of Grid computing - from local clusters to global networks - are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20min each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with powerpoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry Michael Yoo, Managing Director, Head of the Technical Council, UBS Presentation will describe the key business challenges driving the need for HPC solutions, describe the means in which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the Financial Industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World - Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse Grid computing gets mentions in the press for community programs starting last decade with "Seti@Home". Government, national and supranational initiatives in grid receive some press. One of the IT-industries' best-kept secrets is the use of grid computing by commercial organizations with spectacular results. Grid Computing and its evolution into Application Virtualization is discussed and how this is key to the next generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. 
Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class Bsc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege o

    13. Theory & Computation > Research > The Energy Materials Center...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Theory & Computation In This Section Computation & Simulation Theory & Computation Computation & Simulation...

    14. Applied Math PI Meet | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Applied Math PI Meet Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources ASCR Discovery Monthly News Roundup News Archives ASCR Program Documents ASCR Workshops and Conferences Workshops & Conferences Archive DOE Simulations Summit Scientific Grand Challenges Workshop Series SciDAC Conferences HPC Operations Review and Best

    15. DOE Applied Math Summit | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      DOE Applied Math Summit Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources ASCR Discovery Monthly News Roundup News Archives ASCR Program Documents ASCR Workshops and Conferences Workshops & Conferences Archive DOE Simulations Summit Scientific Grand Challenges Workshop Series SciDAC Conferences HPC Operations Review and Best

    16. Applied Math PI Meet Talks | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      ASCR Workshops and Conferences » Applied Math PI Meet Talks Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources ASCR Discovery Monthly News Roundup News Archives ASCR Program Documents ASCR Workshops and Conferences Workshops & Conferences Archive DOE Simulations Summit Scientific Grand Challenges Workshop Series SciDAC

    17. Program Managers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      security. managers Advanced Scientific Computing Applied Mathematics: Pieter Swart, T-5 Computer Science: Pat McCormick, CCS-1 Computational Partnerships: Galen Shipman, CCS-7...

    18. Now Accepting Applications for Alvarez Fellowship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      by Lawrence Berkeley National Laboratory's Computing Sciences Directorate. Researchers in computer science, applied mathematics or any computational science discipline who have...

    19. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

      SciTech Connect (OSTI)

      Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.

      2012-05-04

      The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated to these findings is the low coverage provided by these four problems, and somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
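
      A minimal sketch of the bookkeeping behind such a rate-of-convergence estimate, assuming errors measured against an analytical solution on two mesh spacings; the numbers are placeholders, not HIGRAD results.

```python
# Minimal sketch of an observed-order-of-convergence estimate: given errors
# e1, e2 against an analytical solution on meshes with spacings h1 > h2, the
# observed order is p = ln(e1/e2) / ln(h1/h2). The values below are made up.
import math

def observed_order(e1: float, e2: float, h1: float, h2: float) -> float:
    return math.log(e1 / e2) / math.log(h1 / h2)

if __name__ == "__main__":
    # Hypothetical L2 errors from a grid-doubling study of a second-order scheme.
    print(f"p = {observed_order(e1=4.0e-3, e2=1.0e-3, h1=0.02, h2=0.01):.2f}")
```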

    20. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw (Los Alamos, NM); Gokhale, Maya B. (Los Alamos, NM); McCabe, Kevin Peter (Los Alamos, NM)

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    1. HISTORY OF THE ENGINEERING PHYSICS AND MATHEMATICS DIVISION 1955-1993

      SciTech Connect (OSTI)

      Maskewitz, B.F.

      2001-09-14

      A review of division progress reports noting significant events and findings of the Applied Nuclear Physics, Neutron Physics, Engineering Physics, and then Engineering Physics and Mathematics divisions from 1955 to 1993 was prepared for use in developing a history of the Oak Ridge National Laboratory in celebration of its 50th year. The research resulted in an accumulation of historic material and photographs covering 38 years of effort, and the decision was made to publish a brief history of the division. The history begins with a detailed account of the founding of the Applied Nuclear Physics Division in 1955 and continues through the name change to the Neutron Physics Division in the late 1950s. The material thereafter is presented in decades--the sixties, seventies, and eighties--and ends as we enter the nineties.

    2. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at
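
      A toy illustration of the parallel-computing idea described above, using Python's standard multiprocessing module to split one sum across several worker processes; real HEP workloads run on clusters and grids rather than a single workstation.

```python
# Toy illustration: divide a large computation into independent pieces and let
# several worker processes handle them in parallel on one machine.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print("sum of squares below 1,000,000:", total)
```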

    3. Stochastic Robust Mathematical Programming Model for Power System

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Optimization | Argonne National Laboratory Stochastic Robust Mathematical Programming Model for Power System Optimization Title Stochastic Robust Mathematical Programming Model for Power System Optimization Publication Type Journal Article Year of Publication 2015 Authors Liu, C, Lee, C, Chen, H, Merhotra, S Journal IEEE Transactions on Power Systems Volume PP Issue 99 Date Published 02112015 ISSN 0885-8950 Keywords distributionally robust, power system optimization, robust optimization,

    4. New Mathematical Method Reveals Where Genes Switch On or Off

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New Mathematical Method Reveals Where Genes Switch On or Off "Compressed sensing" determines atomic-level energy potentials with accuracy approaching experimental measurement February 22, 2012 John Hules, JAHules@lbl.gov, +1 510 486 6008 Figure 1. Helix-turn-helix (HTH) proteins are the most widely distributed family of DNA-binding proteins, occurring in all biological kingdoms. This image shows a lambda repressor HTH
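
      For readers unfamiliar with compressed sensing, the sketch below shows a generic sparse-recovery routine (orthogonal matching pursuit) on synthetic data; it illustrates the "few measurements, sparse answer" idea only and is not the method or data of the work described above.

```python
# Generic compressed-sensing illustration: orthogonal matching pursuit recovers
# a sparse vector x from far fewer measurements y = A @ x than unknowns.
import numpy as np

def omp(A, y, sparsity):
    """Greedily recover a `sparsity`-sparse x such that y ~= A @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 100))              # 30 measurements, 100 unknowns
    x_true = np.zeros(100)
    x_true[[5, 42, 77]] = [1.5, -2.0, 0.7]          # synthetic sparse signal
    x_hat = omp(A, A @ x_true, sparsity=3)
    print("recovered nonzero indices:", np.flatnonzero(np.abs(x_hat) > 1e-8))
```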

    5. Next-Generation Wireless Instrumentation Integrated with Mathematical

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Modeling for Aluminum Production | Department of Energy Next-Generation Wireless Instrumentation Integrated with Mathematical Modeling for Aluminum Production Monitoring Electrolytic Cell Anode Current Increases Current and Energy Efficiency In 2011, five-and-a-half-million tons of aluminum were produced in the United States. Over two-million tons were produced in smelters, large

    6. Mathematical Models Shed New Light on Cancer Mutations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical Models Shed New Light on Cancer Mutations Calculations Run at NERSC Pinpoint Rare Mutants More Quickly November 3, 2014 Contact: David Cameron, 617.432.0441, david_cameron@hms.harvard.edu Heat map of the average magnitude of interaction energies projected onto a structural representation of SH2 domains (white) in complex with phosphopeptide (green). SH2 (Src Homology 2) is a protein domain found in many

    7. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    8. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    9. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.

    10. Getting Computer Accounts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Accounts When you first arrive at the lab, you will be presented with lots of forms that must be read and signed in order to get an ID and computer access. You must ensure...

    11. Computational nuclear quantum many-body problem: The UNEDF project

      SciTech Connect (OSTI)

      Fann, George I [ORNL

      2013-01-01

      The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

    12. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    13. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    14. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    15. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
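
      A minimal sketch of the logbook concept (event history, text search, undo of a selected past event); it is purely illustrative and not the patented design, and the class and method names are assumptions made for the example.

```python
# Minimal sketch of a computing-environment logbook: record events with an
# undo callback, search the history, and undo a selected past event.
class Logbook:
    def __init__(self):
        self.history = []                    # list of (description, undo_callback)

    def log(self, description, undo_callback):
        self.history.append((description, undo_callback))

    def search(self, text):
        return [d for d, _ in self.history if text in d]

    def undo(self, description):
        for i, (d, undo_cb) in enumerate(self.history):
            if d == description:
                undo_cb()                    # revert the effect of the event
                del self.history[i]
                return True
        return False

if __name__ == "__main__":
    settings = {"threads": 8}
    book = Logbook()
    book.log("set threads=8", lambda: settings.pop("threads", None))
    print(book.search("threads"))            # ['set threads=8']
    book.undo("set threads=8")
    print(settings)                           # {}
```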

    16. Computer_Vision

      Energy Science and Technology Software Center (OSTI)

      2002-10-04

      The Computer_Vision software performs object recognition using a novel multi-scale characterization and matching algorithm. To understand the multi-scale characterization and matching software, it is first necessary to understand some details of the Computer Vision (CV) Project. This project has focused on providing algorithms and software that provide an end-to-end toolset for image processing applications. At a high-level, this end-to-end toolset focuses on 7 coy steps. The first steps are geometric transformations. 1) Image Segmentation. Thismore » step essentially classifies pixels in foe input image as either being of interest or not of interest. We have also used GENIE segmentation output for this Image Segmentation step. 2 Contour Extraction (patent submitted). This takes the output of Step I and extracts contours for the blobs consisting of pixels of interest. 3) Constrained Delaunay Triangulation. This is a well-known geometric transformation that creates triangles inside the contours. 4 Chordal Axis Transform (CAT) . This patented geometric transformation takes the triangulation output from Step 3 and creates a concise and accurate structural representation of a contour. From the CAT, we create a linguistic string, with associated metrical information, that provides a detailed structural representation of a contour. 5.) Normalization. This takes an attributed linguistic string output from Step 4 and balances it. This ensures that the linguistic representation accurately represents the major sections of the contour. Steps 6 and 7 are implemented by the multi-scale characterization and matching software. 6) Multi scale Characterization. This takes as input the attributed linguistic string output from Normalization. Rules from a context free grammar are applied in reverse to create a tree-like representation for each contour. For example, one of the grammar’s rules is L -> (LL ). When an (LL) is seen in a string, a parent node is created that points to the four child symbols ‘(‘ , ‘L’ , ‘L’, and ‘)‘ . Levels in the tree can then be thought of as coarser (towards the root) or finer (towards the leaves) representations of the same contours. 7.) Multi scale Matching. Having a multi-scale characterization allows us to compare objects at a coarser level before matching at finer levels of detail. Matching at a coarser level not only increases the speed of the matching process (you’re comparing fewer symbols) , but also increases accuracy since small variations along contours do not significantly detract from two objects’ similarity.« less

    17. Apply for Beam Time | Advanced Photon Source

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      All About Proposals Users Home Apply for Beam Time Deadlines Proposal Types Concepts, Definitions, and Help My APS Portal My APS Portal Apply for Beam Time Next Proposal Deadline...

    18. How to Apply for the ENERGY STAR®

      Broader source: Energy.gov [DOE]

      Join us to learn about applying for ENERGY STAR Certification in Portfolio Manager. Understand the value of the ENERGY STAR certification, see the step-by-step process of applying, and gain tips to...

    19. Mathematical Deductions from Some Rules Concerning High-Energy Total Cross Sections

      DOE R&D Accomplishments [OSTI]

      Yang, C. N.

      1962-07-23

      Mathematical implications of the Pomeranchuk rule and the Pomeranchuk- Okun rule are discussed. (auth)

    20. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    1. Workshop in computational molecular biology, April 15, 1991--April 14, 1994

      SciTech Connect (OSTI)

      Tavare, S.

      1995-04-12

      Funds from this award were used to support the Workshop in Computational Molecular Biology at the '91 Symposium entitled Interface: Computing Science and Statistics, Seattle, Washington, April 21, 1991; the Workshop in Statistical Issues in Molecular Biology held at Stanford, California, August 8, 1993; and the Session on Population Genetics, a part of the 56th Annual Meeting, Institute of Mathematical Statistics, San Francisco, California, August 9, 1993.

    2. DOE Issues Funding Opportunity for Advanced Computational and Modeling Research for the Electric Power System

      Broader source: Energy.gov [DOE]

      The objective of this Funding Opportunity Announcement (FOA) is to leverage scientific advancements in mathematics and computation for application to power system models and software tools, with the long-term goal of enabling real-time protection and control based on wide-area sensor measurements.

    3. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... - Department of Mathematics, Massachusetts Institute of Technology (MIT) Vogel, Curtis (Curtis Vogel) - Department of Mathematical Sciences, Montana State University Vogel, ...

    4. A posteriori error estimate for a Lagrangian method applied to...

      Office of Scientific and Technical Information (OSTI)

      Resource Relation: Conference: Proposed for presentation at the Conference on Analysis, ... Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS ...

    5. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

      SciTech Connect (OSTI)

      Izzuddin, Nur; Sunarsih,; Priyanto, Agoes

      2015-05-15

      For a vessel operating in the open sea, a marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft offers a new methodology for self-propulsion tests that track fuel savings in real time. This paper presents such a real-time marine diesel engine simulator system for tracking the actual performance of a ship through a computer-simulated model. Mathematical models of the marine diesel engine and the propeller are used in the simulation to estimate fuel rate, engine rotating speed, and propeller thrust and torque, and thus to achieve the target vessel speed. The inputs and outputs form a real-time control system of fuel-saving rate and propeller rotating speed that represents the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted with a vessel model to validate the simulator, which was then used to evaluate fuel savings by incorporating a new mathematical model of the turbocharger. The control system developed will allow users to analyze different vessel-speed conditions to obtain better characteristics and hence optimize the fuel-saving rate.
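
      The abstract does not reproduce the engine-propeller equations. As a hedged orientation only, the standard open-water propeller relations below show the kind of thrust/torque evaluation such a simulator performs at each time step; the diameter, density, and KT/KQ fits are illustrative placeholders, not values from the paper.

      # Hedged sketch: standard open-water propeller relations often used in
      # engine-propeller simulators (coefficients below are made-up placeholders).
      RHO = 1025.0      # sea-water density, kg/m^3
      D = 0.25          # model propeller diameter, m (hypothetical)

      def thrust_and_torque(n, Va, kt=lambda J: 0.45 - 0.38 * J,
                            kq=lambda J: 0.06 - 0.05 * J):
          """n: propeller revolutions per second, Va: advance speed in m/s."""
          J = Va / (n * D)                      # advance ratio
          T = kt(J) * RHO * n**2 * D**4         # thrust, N
          Q = kq(J) * RHO * n**2 * D**5         # torque, N*m
          return T, Q

      print(thrust_and_torque(n=15.0, Va=1.2))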

    6. Derivation of an Applied Nonlinear Schroedinger Equation.

      SciTech Connect (OSTI)

      Pitts, Todd Alan; Laine, Mark Richard; Schwarz, Jens; Rambo, Patrick K.; Karelitz, David B.

      2015-01-01

      We derive from first principles a mathematical physics model useful for understanding nonlinear optical propagation (including filamentation). All assumptions necessary for the development are clearly explained. We include the Kerr effect, Raman scattering, and ionization (as well as linear and nonlinear shock, diffraction and dispersion). We explain the phenomenological sub-models and each assumption required to arrive at a complete and consistent theoretical description. The development includes the relationship between shock and ionization and demonstrates why inclusion of Drude model impedance effects alters the nature of the shock operator. Unclassified Unlimited Release
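
      The report's full model is not reproduced in this abstract. For orientation only, the canonical cubic nonlinear Schroedinger equation for a slowly varying envelope A(z, t), to which Kerr, Raman, ionization, and shock terms are added in derivations of this kind, can be written as (standard textbook form, not the report's exact operator):

      \[
        i\,\frac{\partial A}{\partial z}
        + \frac{1}{2k_0}\,\nabla_{\perp}^{2} A
        - \frac{k''}{2}\,\frac{\partial^{2} A}{\partial t^{2}}
        + k_0 n_2 \lvert A\rvert^{2} A = 0,
      \]

      where k_0 is the carrier wavenumber, k'' the group-velocity dispersion, and n_2 the Kerr nonlinear index.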

    7. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing...

    8. Modules and methods for all photonic computing

      DOE Patents [OSTI]

      Schultz, David R. (Knoxville, TN); Ma, Chao Hung (Oak Ridge, TN)

      2001-01-01

      A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
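
      The claim describes recirculating light through an element whose response encodes one iteration of a PDE solver. As a hedged, purely electronic analogy of that "predetermined number of iterations" structure (not the optical implementation), a Jacobi relaxation of Laplace's equation looks like this:

      import numpy as np

      # Hedged analogy: one "recirculation" = one Jacobi sweep for Laplace's equation.
      def jacobi_iterations(u, n_iter):
          """u: 2-D array with boundary values fixed; interior relaxed n_iter times."""
          for _ in range(n_iter):                      # predetermined number of iterations
              u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                      u[1:-1, :-2] + u[1:-1, 2:])
          return u

      grid = np.zeros((32, 32))
      grid[0, :] = 1.0          # "encoded input data" on one boundary
      print(jacobi_iterations(grid, n_iter=200)[16, 16])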

    9. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

      SciTech Connect (OSTI)

      Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

      2011-03-18

      As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

    10. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    11. Mathematical modeling of mass transfer during centrifugal filtration of polydisperse suspensions

      SciTech Connect (OSTI)

      V.F. Pozhidaev; Y.B. Rubinshtein; G.Y. Golberg; S.A. Osadchii

      2009-07-15

      A mass-transfer equation is proposed on the basis of a model; for given boundary conditions, its solution yields an analytical relationship between the extraction of the solid phase of a suspension into the centrifuge effluent and the fineness of the particles. This is of particular importance for a new trend in the use of filtering centrifuges: concentrating coal slurries by extracting into the centrifuge effluent the finest particles, whose ash content is substantially higher than that of the coarser size classes. Results are presented for production studies carried out at an operating plant (the Neryungrinskaya Enrichment Factory); they confirmed the adequacy of the proposed mathematical model, with agreement between computed and experimental data within the experimental error (no more than 3%). The model can be used to predict the results of suspension separation by centrifugal filtration.

    12. A Hygrothermal Risk Analysis Applied to Residential Unvented Attics

      SciTech Connect (OSTI)

      Pallin, Simon B; Kehrer, Manfred

      2013-01-01

      A residential building constructed with an unvented attic is a common roof assembly in the United States. The expected hygrothermal performance and service life of the roof are difficult to estimate due to a number of varying parameters. Typical parameters expected to vary are the climate, direction, and slope of the roof as well as the radiation properties of the surface material. Further influential parameters are indoor moisture excess, air leakage through the attic floor, and leakage from the air-handling unit and ventilation ducts. In addition, the type of building materials, such as the insulation material and closed- or open-cell spray polyurethane foam, will influence the future performance of the roof. Development of a simulation model of the roof assembly enables a risk and sensitivity analysis in which the most important varying parameters for the hygrothermal performance can be determined. The model is designed to perform probabilistic simulations using mathematical and hygrothermal calculation tools. The varying input parameters can be chosen from existing measurements, simulations, or standards. An analysis is applied to determine the risk of consequences such as mold growth, rot, or increased energy demand of the HVAC unit. Furthermore, the future performance of the roof can be simulated in different climates to facilitate the design of an efficient and reliable roof construction with the most suitable technical solution and to determine the most appropriate building materials for a given climate.
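
      A hedged sketch of the probabilistic workflow described above: sample the varying inputs, run a deterministic attic model, and count exceedances of a damage criterion. The stand-in model, parameter ranges, and threshold are placeholders, not the authors' hygrothermal tools.

      import random

      # Hedged sketch of the probabilistic workflow; the "attic model" and mold
      # threshold below are placeholders, not the authors' hygrothermal model.
      def attic_model(indoor_moisture, air_leakage, surface_absorptivity):
          """Stand-in for a deterministic hygrothermal simulation; returns a
          relative-humidity-like score at the roof deck."""
          return 0.5 * indoor_moisture + 0.3 * air_leakage + 0.2 * surface_absorptivity

      def risk_of_mold(n_samples=10_000, threshold=0.8):
          hits = 0
          for _ in range(n_samples):
              score = attic_model(indoor_moisture=random.uniform(0.3, 1.0),
                                  air_leakage=random.uniform(0.0, 1.0),
                                  surface_absorptivity=random.uniform(0.2, 0.9))
              hits += score > threshold
          return hits / n_samples

      print(f"estimated probability of exceeding the mold criterion: {risk_of_mold():.3f}")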

    13. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing NERSC Flips Switch on New Flagship Supercomputer January 31, 2014 Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421 The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    14. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy required to compute is to reduce the power usage on a node. One way to accomplish this is by lowering the frequency at which the CPU operates. However, reducing the clock speed increases the time to solution, creating a potential tradeoff. NERSC continues to examine how such methods impact its operations and its
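
      A toy model of the tradeoff described above, assuming dynamic power grows roughly with the cube of frequency while runtime scales inversely with frequency; the constants are illustrative only, not NERSC measurements.

      # Toy model of the DVFS tradeoff described above (constants are illustrative).
      def energy_to_solution(freq_ghz, work_cycles=1e12, static_watts=20.0, k_dyn=10.0):
          """Energy (J) = (static + dynamic) power * runtime, with dynamic ~ k*f^3."""
          runtime_s = work_cycles / (freq_ghz * 1e9)
          power_w = static_watts + k_dyn * freq_ghz**3
          return power_w * runtime_s

      for f in (1.2, 1.6, 2.0, 2.4):
          print(f"{f:.1f} GHz -> {energy_to_solution(f):.0f} J")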

    15. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    16. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      All Other Editions Are Obsolete UNITED STATES DEPARTMENT OF ENERGY ANNOUNCEMENT OF COMPUTER SOFTWARE OMB Control Number 1910-1400 (OMB Burden Disclosure Statement is on last...

    17. Applied geodesy (Book) | SciTech Connect

      Office of Scientific and Technical Information (OSTI)

      Book: Applied geodesy Citation Details In-Document Search Title: Applied geodesy This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are

    18. Apply for Your First NERSC Allocation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Apply for Your First Allocation Apply for Your First NERSC Allocation Initial Steps Needed to Apply for Your First NERSC Allocation All work done at NERSC must be within the DOE Office of Science mission. See the Mission descriptions for each office at Allocations Overview and Eligibility. Prospective Principal Investigators without a NERSC login need to fill out two forms: The online ERCAP Access Request Form. If you wish to designate another person to fill out the request form you may

    19. Luis W. Alvarez Postdoctoral Fellowship in Computing Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Luis W. Alvarez Postdoctoral Fellowship in Computing Sciences November 1, 2014 by Francesca Verdier Applications are now being accepted for the Luis W. Alvarez Postdoctoral Fellowship in Computing Sciences and are due November 24. Apply at https://lbl.taleo.net/careersection/2/jobdetail.ftl?lang=en&job=80004. This fellowship provides recent graduates (within the past three years) opportunities to work on some of the most important

    20. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership

      Office of Scientific and Technical Information (OSTI)

      Project Annual Report (Technical Report) | SciTech Connect Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report Citation Details In-Document Search Title: Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report The Applying Computationally Efficient Schemes for BioGeochemical Cycles ACES4BGC Project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of

    1. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    2. Applied Field Research Initiative Attenuation Based Remedies

      Office of Environmental Management (EM)

      Laboratory (SRNL), the initiative is a collaborative effort that leverages DOE invest- ments in applied research and basic science and the work of the site contractors to...

    3. Applied Materials Inc AMAT | Open Energy Information

      Open Energy Info (EERE)

      manufacturer of equipment used in solar (silicon, thin-film, BIPV), semiconductor, and LCD markets. References: Applied Materials Inc (AMAT)1 This article is a stub. You can...

    4. Applied Quantum Technology AQT | Open Energy Information

      Open Energy Info (EERE)

      Quantum Technology AQT Jump to: navigation, search Name: Applied Quantum Technology (AQT) Place: Santa Clara, California Zip: 95054 Product: California-based manufacturer of CIGS...

    5. Applied Energy Management | Open Energy Information

      Open Energy Info (EERE)

      Energy Management Jump to: navigation, search Name: Applied Energy Management Place: Huntersville, North Carolina Zip: 28078 Sector: Efficiency, Renewable Energy Product: North...

    6. Computer Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
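
      A hedged sketch of the bookkeeping such an allocator performs (a table of per-processor health and assignment consulted on each request); the data structures and policy below are hypothetical, not the actual CPA implementation.

      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class Processor:
          node_id: int
          healthy: bool = True
          job: Optional[str] = None          # None means the processor is free

      @dataclass
      class Allocator:
          procs: List[Processor] = field(default_factory=list)

          def allocate(self, job: str, count: int):
              """Assign `count` healthy, free processors to `job`, or none at all."""
              free = [p for p in self.procs if p.healthy and p.job is None]
              if len(free) < count:
                  return []                  # all-or-nothing, like a batch allocator
              for p in free[:count]:
                  p.job = job
              return [p.node_id for p in free[:count]]

          def release(self, job: str):
              for p in self.procs:
                  if p.job == job:
                      p.job = None

      alloc = Allocator([Processor(i) for i in range(8)])
      print(alloc.allocate("job-42", 4))     # e.g. [0, 1, 2, 3]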

    7. Nuclear Facilities and Applied Technologies at Sandia

      SciTech Connect (OSTI)

      Wheeler, Dave; Kaiser, Krista; Martin, Lonnie; Hanson, Don; Harms, Gary; Quirk, Tom

      2014-11-28

      The Nuclear Facilities and Applied Technologies organization at Sandia National Laboratories Technical Area Five (TA-V) is the leader in advancing nuclear technologies through applied radiation science and unique nuclear environments. This video describes the organization's capabilities, facilities, and culture.

    8. 2015 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

      SciTech Connect (OSTI)

      Runnels, Scott Robert; Caldwell, Wendy; Brown, Barton Jed; Pederson, Clark; Brown, Justin; Burrill, Daniel; Feinblum, David; Hyde, David; Levick, Nathan; Lyngaas, Isaac; Maeng, Brad; Reed, Richard LeRoy; Sarno-Smith, Lois; Shohet, Gil; Skarda, Jinhie; Stevens, Josey; Zeppetello, Lucas; Grossman-Ponemon, Benjamin; Bottini, Joseph Larkin; Loudon, Tyson Shane; VanGessel, Francis Gilbert; Nagaraj, Sriram; Price, Jacob

      2015-10-15

      The two primary purposes of LANL's Computational Physics Student Summer Workshop are (1) to educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) to entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high temperature plasma physics, radiation transport and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, more structured foundations for LANL interaction with universities in computational physics are needed; historically, interactions rely heavily on individuals' personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL's involvement in it. This report includes both the background for the program and the reports from the students.

    9. Researchers develop a new mathematical tool for analyzing and evaluating

      National Nuclear Security Administration (NNSA)

      nuclear material | National Nuclear Security Administration

    10. Argonne's Laboratory Computing Resource Center : 2005 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

      2007-06-30

      Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop comprehensive scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has begun developing a 'path forward' plan for additional computing resources.

    11. Innovative mathematical modeling in environmental remediation

      SciTech Connect (OSTI)

      Yeh, Gour T.; Gwo, Jin Ping; Siegel, Malcolm D.; Li, Ming-Hsu; Fang, Yilin; Zhang, Fan; Luo, Wensui; Yabusaki, Steven B.

      2013-05-01

      There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been mainly used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are described in sufficient detail. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first one involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration; it showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation.
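
      For context on the "Kd simplification" criticized above: in that approach adsorption enters transport only through a constant retardation factor (standard textbook relation, not the authors' reaction-based model),

      \[
        R \;=\; 1 + \frac{\rho_b}{\theta}\,K_d ,
      \]

      where \rho_b is the bulk density, \theta the volumetric water content, and K_d the distribution coefficient; the contaminant then migrates at the pore-water velocity divided by R.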

    12. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic "machine," is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994) Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that on the one hand people can and do abuse computer systems and on the other hand people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services and information. The latter can be exemplified by violation of privacy, health hazards and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people and data. Information technology is the concept preferred today. It extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world".

    13. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course offered at the University of California, Berkeley. The course is being taught by UC Berkeley professor and LBNL Faculty Scientist Jim Demmel. CS267 is broadcast live over the internet and all NERSC users are invited to monitor the broadcast course, but course credit is available only to student registered for the

    14. 2011 Computation Directorate Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2012-04-11

      From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s-all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling for missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile-far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. 
In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global market place by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

    15. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to fully embrace the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    16. Theory, Modeling and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      modeling and simulation will be enhanced not only by the wealth of data available from MaRIE but by the increased computational capacity made possible by the advent of extreme...

    17. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Annual Report 2012, Argonne Leadership Computing Facility. Contents: Director's Message, 1; About ALCF, 2; Introducing Mira

    18. Pi in Applied Optics | GE Global Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Inside the Applied Optics Lab II Click to email this to a friend (Opens in new window) Share on Facebook (Opens in new window) Click to share (Opens in new window) Click to share...

    19. Apply to the Cyclotron Institute REU Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      an advanced physicschemistry course. To apply for the REU Program, complete the 3 steps below: Fill out the on-line 2016 Cyclotron Institute REU Application Note: You will be...

    20. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. image showing detailed turbulence simulation, Rayleigh-Taylor Turbulence imaging: the largest turbulence simulations to date Advanced multi-scale modeling Turbulence datasets Density iso-surfaces

    1. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    2. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD 16 core, 2.3 GHz, 32 GB processors. See also Computing Resources.

    3. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing PROGRAM PLAN FY09 October 2008 ASC Focal Point Robert Meisner, Director DOE/NNSA NA-121.2 202-586-0908 Program Plan Focal Point for NA-121.2 Njema Frazier DOE/NNSA NA-121.2 202-586-5789 A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary, 1; I. Introduction

    4. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL. Fun fact: Most systems require air conditioning or chilled water to cool super powerful

    5. SAGE, Summer of Applied Geophysical Experience

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SAGE, the Summer of Applied Geophysical Experience Application deadline: March 27, 2016, 5:00 pm MDT Contacts Institute Director Reinhard Friedel-Los Alamos SAGE Co-Director W. Scott Baldridge-Los Alamos SAGE Co-Director Larry Braile-Purdue University Professional Staff Assistant Georgia Sanchez (505) 665-0855 Email Application process for SAGE 2016 is now open. U.S.

    6. LANSCE | Lujan Center | Apply for Beamtime

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Apply for Beamtime LANSCE User Resources Tips for a Successful Proposal Step 1: Apply for Beam Time 1. Select an Instrument and a Local Contact 2. Submit Your Proposal Step 2: Before You Arrive 1. Complete the LANSCE User Facility Agreement Questionnaire 2. Arrange for Site Access 3. Prepare for Your Experiment: Contact Lujan Experiment Coordinator to arrange shipping of your samples. Talk to the beamline scientist about any electrical equipment you might bring. 4. Complete your training Step 3:

    7. How to Apply | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Postdoctoral Research Awards » How to Apply How to Apply Online Application Available at www.zintellect.com/Posting/Details/853 Application deadline May 7, 2015. Familiarize yourself with the benefits, obligations, eligibility requirements, and evaluation criteria. Familiarize yourself with the requirements and obligations to determine whether your education and professional goals are well aligned with the EERE Postdoctoral Research Awards. Read the Evaluation Criteria that will be used to

    8. Effects of Relativity Lead to"Warp Speed" Computations

      SciTech Connect (OSTI)

      Vay, J.-L.

      2007-11-01

      A scientist at Lawrence Berkeley National Laboratory has discovered that a previously unnoticed consequence of Einstein's special theory of relativity can lead to speedups of computer calculations by orders of magnitude when applied to the computer modeling of a certain class of physical systems. This new finding offers the possibility of tackling some problems in a much shorter time and with far more precision than was possible before, as well as studying some configurations in every detail for the first time. The basis of Einstein's theory is the principle of relativity, which states that the laws of physics are the same for all observers, whether the 'observer' is a turtle 'racing' with a rabbit, or a beam of particles moving at near light speed. From the invariance of the laws of physics, one may be tempted to infer that the complexity of a system is independent of the motion of the observer, and consequently, a computer simulation will require the same number of mathematical operations, independently of the reference frame that is used for the calculation. Length contraction and time dilation are well-known consequences of the special theory of relativity which lead to very counterintuitive effects. An alien observing human activity through a telescope in a spaceship traveling in the vicinity of the earth near the speed of light would see everything flattened in the direction of propagation of its spaceship (for him, the earth would have the shape of a pancake), while all motions on earth would appear extremely slow, slowed almost to a standstill. Conversely, a space scientist observing the alien through a telescope based on earth would see a flattened alien, slowed almost to a standstill, in a flattened spaceship. Meanwhile, an astronaut sitting in a spaceship moving at some lower velocity than the alien spaceship with regard to earth might see both the alien spaceship and the earth flattened in the same proportion and the motion unfolding in each of them at the same speed. Let us now assume that each protagonist (the alien, the space scientist and the astronaut) is to run a computer simulation describing the motion of all of them in a single calculation. In order to model a physical system on a computer, scientists often divide space and time into small chunks. Since the computer must calculate some things for each chunk, having a large system containing numerous small chunks translates to long calculations requiring many computational steps on supercomputers. Let us assume that each protagonist of our intergalactic story uses the space and time slicing as described and chooses to perform the calculation in its own frame of reference. For the alien and the space scientist, the slicing of space and time results in an exceedingly large number of chunks, due to the wide disparity of spatial and time scales needed to describe both their own environment and motion together with the other extremely flattened environment and slowed motion. Since the disparity of scales is reduced for the astronaut, who is traveling at an intermediate velocity, the number of computer operations needed to complete the calculation in his frame of reference will be significantly lower, possibly by many orders of magnitude.
Analogously, the new discovery at Lawrence Berkeley National Laboratory shows that there exists a frame of reference minimizing the number of computational operations needed to study beams of particles or light (lasers) interacting at, or near, light speed with other particles or with surrounding structures. Speedups ranging from ten to a million times or more are predicted for the modeling of beams interacting with electron clouds, such as those in the upcoming Large Hadron Collider 'atom smasher' accelerator at CERN (Switzerland), and in free electron lasers and tabletop laser wakefield accelerators. The discovery has surprised many physicists and was received initially with much skepticism. It sounded too much like a 'free lunch'. Yet, the demonstration of a speedup of a stunning one thousand times in a te
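
      A hedged numerical illustration of the frame dependence discussed above: the Lorentz factor gamma = 1/sqrt(1 - beta^2) sets how strongly lengths contract and times dilate, and therefore how wide the range of scales a simulation must resolve becomes in a given frame (standard special relativity; the article's specific speedup estimates are not reproduced here).

      import math

      # Standard Lorentz factor; used here only to illustrate how frame choice
      # changes the range of length/time scales a simulation must resolve.
      def gamma(beta):
          return 1.0 / math.sqrt(1.0 - beta**2)

      for beta in (0.9, 0.99, 0.999, 0.9999):
          g = gamma(beta)
          print(f"beta = {beta}: gamma = {g:8.1f}  (lengths contract / times dilate by ~{g:.0f}x)")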

    9. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

      SciTech Connect (OSTI)

      Jablonowski, Christiane

      2015-07-14

      The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
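
      A toy illustration of the kind of refinement criterion adaptive mesh refinement relies on: flag cells where a local gradient exceeds a threshold and mark them for refinement. This is a generic 1-D sketch, not the Chombo AMR interface or the project's tagging strategy.

      # Hedged toy illustration of a refinement criterion of the kind AMR codes use:
      # flag 1-D cells where the local gradient exceeds a threshold (not Chombo's API).
      def flag_for_refinement(values, dx, threshold):
          flags = [False] * len(values)
          for i in range(1, len(values) - 1):
              grad = abs(values[i + 1] - values[i - 1]) / (2 * dx)
              flags[i] = grad > threshold
          return flags

      field = [0.0] * 20 + [1.0] * 20          # a sharp front in the middle
      print(flag_for_refinement(field, dx=1.0, threshold=0.25))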

    10. Partial Support of Meeting of the Board on Mathematical Sciences and Their Applications

      SciTech Connect (OSTI)

      Weidman, Scott

      2014-08-31

      During the performance period, BMSA released the following major reports: Transforming Combustion Research through Cyberinfrastructure (2011); Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification (2012); Fueling Innovation and Discovery: The Mathematical Sciences in the 21st Century (2012); Aging and the Macroeconomy: Long-Term Implications of an Older Population (2012); The Mathematical Sciences in 2025 (2013); Frontiers in Massive Data Analysis (2013); and Developing a 21st Century Global Library for Mathematics Research (2014).

    11. Discretionary Allocation Request | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Discretionary Allocation Request Welcome to the Director's Discretionary Allocation request page. Director's Discretionary Allocations are "start up" awards of compute hours given by the ALCF to projects that can demonstrate a need for leadership-class resources. Awards are made year round to industry, academia, laboratories and others. Duration is three or six months. To apply for an allocation, please complete the following form. The ALCF allocation team will contact you within 2

    12. ALCF Technical Reports | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ALCF Technical Reports: Yao Zhang, Prasanna Balaprakash, Jiayuan Meng, Vitali Morozov, Scott Parker, Kalyan Kumaran, "Raexplore: Enabling Rapid, Automated Architecture Exploration for Full Applications," Argonne Leadership Computing Facility, December 2014. view Samara

    13. advanced simulation and computing | National Nuclear Security

      National Nuclear Security Administration (NNSA)

      Administration simulation and computing | National Nuclear Security Administration

    14. high performance computing | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      performance computing | National Nuclear Security Administration

    15. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

      SciTech Connect (OSTI)

      Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith

      2008-09-01

      The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
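
      A toy illustration of the coarse-graining idea behind "computational renormalization": evolve a 1-D majority-vote cellular automaton and block-average groups of cells into a reduced state. The rule and block size are illustrative, not those analyzed in the report.

      import random

      # Toy illustration of block coarse-graining for a 1-D cellular automaton;
      # the rule and block size are illustrative, not those studied in the report.
      def majority_step(state):
          n = len(state)
          return [1 if state[(i - 1) % n] + state[i] + state[(i + 1) % n] >= 2 else 0
                  for i in range(n)]

      def coarse_grain(state, block=3):
          """Replace each block of `block` cells by its majority value."""
          return [1 if sum(state[i:i + block]) * 2 > block else 0
                  for i in range(0, len(state), block)]

      fine = [random.randint(0, 1) for _ in range(90)]
      for _ in range(5):
          fine = majority_step(fine)
      print("coarse-grained state:", coarse_grain(fine))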

    16. Computational model, method, and system for kinetically-tailoring multi-drug chemotherapy for individuals

      DOE Patents [OSTI]

      Gardner, Shea Nicole (San Leandro, CA)

      2007-10-23

      A method and system for tailoring treatment regimens to individual patients with diseased cells exhibiting evolution of resistance to such treatments. A mathematical model is provided which models rates of population change of proliferating and quiescent diseased cells using cell kinetics and evolution of resistance of the diseased cells, and pharmacokinetic and pharmacodynamic models. Cell kinetic parameters are obtained from an individual patient and applied to the mathematical model to solve for a plurality of treatment regimens, each having a quantitative efficacy value associated therewith. A treatment regimen may then be selected from the plurality of treatment options based on the efficacy value.
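
      A hedged sketch of a generic two-compartment model in the spirit described (proliferating and quiescent populations exchanging cells, with a drug-kill term acting on proliferating cells); all rate constants are placeholders, not the patented model or any patient-derived kinetics.

      # Hedged sketch of a generic two-compartment (proliferating P / quiescent Q)
      # model with a drug-kill term; parameters are placeholders, not the patent's.
      def step(P, Q, dt, growth=0.03, to_quiescent=0.01, to_proliferating=0.005,
               drug_kill=0.02):
          dP = (growth * P - to_quiescent * P + to_proliferating * Q - drug_kill * P) * dt
          dQ = (to_quiescent * P - to_proliferating * Q) * dt
          return P + dP, Q + dQ

      P, Q = 1e6, 2e5
      for day in range(30):
          P, Q = step(P, Q, dt=1.0)
      print(f"after 30 days: proliferating={P:.3e}, quiescent={Q:.3e}")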

    17. Can Cloud Computing Address the Scientific Computing Requirements for DOE

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers? Well, Yes, No and Maybe January 30, 2012 Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849 Magellan at NERSC After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

    18. Computing and Computational Sciences Directorate - National Center for

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Sciences Home National Center for Computational Sciences The National Center for Computational Sciences (NCCS), formed in 1992, is home to two of Oak Ridge National Laboratory's (ORNL's) high-performance computing projects-the Oak Ridge Leadership Computing Facility (OLCF) and the National Climate-Computing Research Center (NCRC). The OLCF (www.olcf.ornl.gov) was established at ORNL in 2004 with the mission of standing up a supercomputer 100 times more powerful than the leading

    19. Students showcase research at 19th Supercomputing Challenge Expo...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      is to increase knowledge of science and computing, expose students and teachers to computers and applied mathematics, and instill enthusiasm for science. April 14, 2009 Los...

    20. Students descend on Los Alamos National Laboratory April 26-27...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      is to increase knowledge of science and computing, expose students and teachers to computers and applied mathematics. April 21, 2010 Los Alamos National Laboratory sits on top...

    1. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    2. Mathematical Modeling of Microbial Community Dynamics: A Methodological Review

      SciTech Connect (OSTI)

      Song, Hyun-Seob; Cannon, William R.; Beliaev, Alex S.; Konopka, Allan

      2014-10-17

      Microorganisms in nature form diverse communities that dynamically change in structure and function in response to environmental variations. As a complex adaptive system, microbial communities show higher-order properties that are not present in individual microbes, but arise from their interactions. Predictive mathematical models not only help to understand the underlying principles of the dynamics and emergent properties of natural and synthetic microbial communities, but also provide key knowledge required for engineering them. In this article, we provide an overview of mathematical tools that include not only current mainstream approaches, but also less traditional approaches that, in our opinion, can be potentially useful. We discuss a broad range of methods ranging from low-resolution supra-organismal to high-resolution individual-based modeling. Particularly, we highlight the integrative approaches that synergistically combine disparate methods. In conclusion, we provide our outlook for the key aspects that should be further developed to move microbial community modeling towards greater predictive power.

    3. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of researchers being able to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-a-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    4. SC e-journals, Materials Science

      Office of Scientific and Technical Information (OSTI)

      Materials Science Acta Materialia Advanced Composite Materials Advanced Energy Materials Advanced Engineering Materials Advanced Functional Materials Advanced Materials Advanced Powder Technology Advances in Materials Science and Engineering - OAJ Annual Review of Materials Research Applied Composite Materials Applied Mathematical Modelling Applied Mathematics & Computation Applied Physics A Applied Physics B Applied Surface Science Archives of Computational Materials Science and Surface

    5. Browse by Discipline -- E-print Network Subject Pathways: Mathematics --

      Office of Scientific and Technical Information (OSTI)

      Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy. Tesfatsion, Leigh (Leigh Tesfatsion) - Departments of Economics & Mathematics, Iowa State University Tinsley, Matt (Matt Tinsley) - School of Biological and Environmental Sciences, University of Stirling Toor, Saqib (Saqib Toor) - Department of Energy Technology, Aalborg University Tsao, Tsu-Chin (Tsu-Chin Tsao) - Mechanical and

    6. EEG and MEG source localization using recursively applied (RAP) MUSIC

      SciTech Connect (OSTI)

      Mosher, J.C.; Leahy, R.M.

      1996-12-31

      The multiple signal classification (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization,' are easily extracted using the associated principal vectors.
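
      The sketch below illustrates the principal-correlation metric that such subspace methods scan: it computes the cosine of the smallest principal angle between an estimated signal subspace and a candidate one-dipole model subspace. The random forward model and noise level are assumptions for demonstration only; this is not the full RAP MUSIC recursion.

      import numpy as np

      def subspace_correlation(A, B):
          # Largest principal correlation (cosine of the smallest principal
          # angle) between the column spaces of A and B.
          Qa, _ = np.linalg.qr(A)
          Qb, _ = np.linalg.qr(B)
          return np.linalg.svd(Qa.T @ Qb, compute_uv=False).max()

      rng = np.random.default_rng(1)
      n_sensors, n_sources, n_samples = 32, 2, 500

      # Hypothetical gain (lead-field) vectors and source time courses.
      gains = rng.standard_normal((n_sensors, n_sources))
      sources = rng.standard_normal((n_sources, n_samples))
      data = gains @ sources + 0.05 * rng.standard_normal((n_sensors, n_samples))

      # Signal subspace: leading left singular vectors of the data matrix.
      U, _, _ = np.linalg.svd(data, full_matrices=False)
      signal_subspace = U[:, :n_sources]

      # Scan candidate single-dipole models: a true gain column should score
      # close to 1, a random dipole noticeably lower.
      for name, g in [("true source 0", gains[:, [0]]),
                      ("random dipole", rng.standard_normal((n_sensors, 1)))]:
          print(name, round(float(subspace_correlation(signal_subspace, g)), 3))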

    7. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse...

    8. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      Computer simulation. OpenEI Reference Library. Web Site: Computer simulation. Author: wikipedia. Published: wikipedia, 2013. DOI Not Provided...

    9. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computer room, which is an open room about three-fourths the size of a football field. The Strategic Computing Complex (SCC) at the Los Alamos National Laboratory...

    10. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.

    11. A Compact Code for Simulations of Quantum Error Correction in Classical Computers

      SciTech Connect (OSTI)

      Nyman, Peter

      2009-03-10

      This study considers implementations of error correction in a simulation language on a classical computer. Error correction will be necessary in quantum computing and quantum information. We will give some examples of the implementations of some error correction codes. These implementations will be made in a more general quantum simulation language on a classical computer, in the language Mathematica. The intention of this research is to develop a programming language that is able to make simulations of all quantum algorithms and error corrections in the same framework. The program code implemented on a classical computer will provide a connection between the mathematical formulation of quantum mechanics and computational methods. This gives us a clear, uncomplicated language for the implementations of algorithms.

    12. Uniform insulation applied-B ion diode

      DOE Patents [OSTI]

      Seidel, David B. (Albuquerque, NM); Slutz, Stephen A. (Albuquerque, NM)

      1988-01-01

      An applied-B field extraction ion diode has uniform insulation over an anode surface for increased efficiency. When the uniform insulation is accomplished with anode coils, and a charge-exchange foil is properly placed, the ions may be focused at a point on the z axis.

    13. How to Apply for Senior Executive positions

      Broader source: Energy.gov [DOE]

      To apply vacancies for SENIOR EXECUTIVE SERVICE (SES) , SENIOR LEVEL (SL), SCIENTIFIC AND PROFESSIONAL (ST) positions within the Department of Energy please visit OPM's website: http://www.usajobs.gov. From this site, you may download announcements for vacancies of interest to you.

    14. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
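
      A minimal Python sketch of the idea follows, under the assumption of a small tree with made-up latencies (it is not the patented implementation): when the pulse arrives, each node adopts its pre-computed root-to-node latency as its time base.

      from dataclasses import dataclass, field

      @dataclass
      class ComputeNode:
          node_id: int
          latency_from_root: float      # microseconds, calculated before the barriers
          time_base: float = 0.0
          children: list = field(default_factory=list)

      def broadcast_pulse(node):
          # The wakeup unit wakes the pulse waiter; the node adopts its measured
          # root-to-node latency as its time base, then forwards the pulse.
          node.time_base = node.latency_from_root
          for child in node.children:
              broadcast_pulse(child)

      # Hypothetical three-node tree: root -> {node 1, node 2}.
      root = ComputeNode(0, 0.0)
      root.children = [ComputeNode(1, 1.5), ComputeNode(2, 2.25)]

      # After all nodes have entered the global barrier, the root sends the pulse.
      broadcast_pulse(root)
      for n in [root] + root.children:
          print(f"node {n.node_id}: time base = {n.time_base} us")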

    15. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    16. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed to both natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. The user in the process of answering the LAVA/CS questionnaire identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored both on-line or on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards, storms, fires, power abnormalities, water and accidental maintenance damage; and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.

    17. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    18. Programs for attracting under-represented minority students to graduate school and research careers in computational science. Final report for period October 1, 1995 - September 30, 1997

      SciTech Connect (OSTI)

      Turner, James C. Jr.; Mason, Thomas; Guerrieri, Bruno

      1997-10-01

      Programs have been established at Florida A & M University to attract minority students to research careers in mathematics and computational science. The primary goal of the program was to increase the number of such students studying computational science via an interactive multimedia learning environment. One mechanism used for meeting this goal was the development of educational modules. This academic-year program, established within the mathematics department at Florida A&M University, introduced students to computational science projects using high-performance computers. Additional activities were conducted during the summer; these included workshops, meetings, and lectures. Through the exposure this program provides to scientific ideas and research in computational science, participants are likely to go on to apply tools from this interdisciplinary field successfully.

    19. Sandia National Laboratories: Advanced Simulation and Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ASC Advanced Simulation and Computing Computational Systems & Software Environment Crack Modeling The Computational Systems & Software Environment program builds integrated,...

    20. Browse by Discipline -- E-print Network Subject Pathways: Mathematics...

      Office of Scientific and Technical Information (OSTI)

      Yaakobi, Eitan (Eitan Yaakobi) - Department of Electrical Engineering, California Institute of Technology Yagan, Osman (Osman Yagan) - Department of Electrical and Computer ...

    1. A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus

      SciTech Connect (OSTI)

      Raustad, Richard A.

      2013-01-01

      This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described in detail.

    2. Visitor Hanford Computer Access Request - Hanford Site

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Visitor Hanford Computer Access Request

    3. Magellan: A Cloud Computing Testbed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Magellan: A Cloud Computing Testbed. Cloud computing is gaining a foothold in the business world, but can clouds meet the specialized needs of scientists? That was one of the questions NERSC's Magellan cloud computing testbed explored between 2009 and 2011. The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office

    4. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... - Department of Mathematics, Kutztown University of Pennsylvania McMullen, Curtis T. (Curtis T. McMullen) - Department of Mathematics, Harvard University McNamara, Peter ...

    5. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... Brakocevic) - Department of Mathematics and Statistics, McGill University Brand, Neal (Neal Brand) - Department of Mathematics, University of North Texas Brandolese, Lorenzo ...

    6. Applied Cathode Enhancement and Robustness Technologies (ACERT)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Cathode Enhancement and Robustness Technologies (ACERT). Nathan Moody, Principal Investigator (PI). Our project team, a part of Los Alamos National Laboratory (LANL), is comprised of world-leading experts from the fields of accelerator design & testing, chemical synthesis of nanomaterials, and shielding applications of nanomaterials.

    7. Argonne Training Program on Extreme-Scale Computing Scheduled for July

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      31-August 12, 2016 | Argonne Leadership Computing Facility. Author: Brian Grabowski. ARGONNE, Ill., January 25, 2016 - Computational scientists now have the opportunity to apply for the upcoming Argonne Training Program on Extreme-Scale Computing (ATPESC), to take place from July 31-August 12, 2016. With the challenges posed

    8. The future of mathematical communication. Final technical report

      SciTech Connect (OSTI)

      Christy, J.

      1994-12-31

      One of the first fruits of cooperation with LBL was the use of the MBone (Multi-Cast Backbone) to broadcast the Conference on the Future of Mathematical Communication, held at MSRI November 30--December 3, 1994. Late last fall, MSRI brought together more than 150 mathematicians, librarians, software developers, representatives of scholarly societies, and both commercial and not-for-profit publishers to discuss the revolution in scholarly communication brought about by digital technology. The conference was funded by the Department of Energy, the National Science Foundation, and the Paul and Gabriella Rosenbaum Foundation. It focused on the impact of the technological revolution on mathematics, but necessarily included issues of a much wider scope. There were talks on electronic publishing, collaboration across the Internet, economic and intellectual property issues, and various new technologies which promise to carry the revolution forward. There were panel discussions of electronic documents in mathematics, the unique nature of electronic journals, technological tools, and the role of scholarly societies. There were focus groups on Developing Countries, K-12 Education, Libraries, and TeX. The meeting also embodied the promises of the revolution; it was multicast over the MBone channel of the Internet to hundreds of sites around the world and much information on the conference will be available on their World Wide Web server at the URL http://www.msri.org/fmc. The authors have received many comments about the meeting indicating that it has had a profound impact on how the community thinks about how scientists can communicate and make their work public.

    9. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.

    10. GPU Computational Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computational Screening of Carbon Capture Materials J. Kim 1 , A Koniges 1 , R. Martin 1 , M. Haranczyk 1 , J. Swisher 2 , and B. Smit 1,2 1 Lawrence Berkeley National Laboratory, Berkeley, CA 94720 2 Department of Chemical Engineering, University of California, Berkeley, Berkeley, CA 94720 E-mail: jihankim@lbl.gov Abstract. In order to reduce the current costs associated with carbon capture technologies, novel materials such as zeolites and metal-organic frameworks that are based on

    11. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cloud Computing Services - Sandia Energy

    12. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing - Sandia Energy

    13. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Anti-HIV antibody: Software optimized on Mira advances design of mini-proteins for medicines, materials. Scientists at the University of Washington are using Mira to virtually design unique, artificial peptides, or short proteins. Celebrating 10 years: 10 science highlights celebrating 10 years of the Argonne Leadership Computing Facility. To celebrate our 10th anniversary, we're highlighting 10 science accomplishments since we opened our doors. Bill Gropp works with students during

    14. From Federal Computer Week:

      National Nuclear Security Administration (NNSA)

      Federal Computer Week: Energy agency launches performance-based pay system By Richard W. Walker Published on March 27, 2008 The Energy Department's National Nuclear Security Administration has launched a new performance- based pay system involving about 2,000 of its 2,500 employees. NNSA officials described the effort as a pilot project that will test the feasibility of the new system, which collapses the traditional 15 General Schedule pay bands into broader pay bands. The new structure

    15. Computed Tomography Status

      DOE R&D Accomplishments [OSTI]

      Hansche, B. D.

      1983-01-01

      Computed tomography (CT) is a relatively new radiographic technique which has become widely used in the medical field, where it is better known as computerized axial tomographic (CAT) scanning. This technique is also being adopted by the industrial radiographic community, although the greater range of densities, variation in sample sizes, plus the possible requirement for finer resolution make it difficult to duplicate the excellent results that the medical scanners have achieved.

    16. AECU-4439 PHYSICS AND MATHEMATICS HYDRODYNAMIC ASPECTS OF BOILING HEAT

      Office of Scientific and Technical Information (OSTI)

      AECU-4439. Physics and Mathematics. Hydrodynamic Aspects of Boiling Heat Transfer (thesis), by Novak Zuber, June 1959. Research Laboratory (Los Angeles), Ramo-Wooldridge Corporation, and University of California, Los Angeles, California. United States Atomic Energy Commission, Technical Information Service.

    17. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed is from computer code simulations describing airborne contaminant transport. The three evaluation programs were MONGO (John Tonry, MIT, Cambridge, MA, 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence subsequent work and this report describe the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    18. Applied Energy Programs, SPO-AE: LANL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Kevin Ott 505-663-5537 Program Administrator Jutta Kayser 505-663-5649 Program Manager Karl Jonietz 505-663-5539 Program Manager Melissa Fox 505-663-5538 Budget Analyst Fawn Gore 505-665-0224 The Applied Energy Program Office (SPO-AE) manages Los Alamos National Laboratory programs funded by the Department of Energy's Offices of Energy Efficiency/Renewable Energy, Electricity Delivery and Energy Reliability, and Fossil Energy. With energy use increasing across the nation and the world, Los

    19. Apply for a Job | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      FAQs: Answers to frequently asked questions about applying for a job at Argonne. A Note About Privacy: We do not ask you for personally identifiable information such as birthdate, social security number, or driver's license number. To ensure your privacy, please do not include such information in the documents that you upload to the system. A Note About File Size: Our application system has a file size limit of 820KB. While this is sufficient for the vast majority of documents, we have found that

    20. SUMO, System performance assessment for a high-level nuclear waste repository: Mathematical models

      SciTech Connect (OSTI)

      Eslinger, P.W.; Miley, T.B.; Engel, D.W.; Chamberlain, P.J. II

      1992-09-01

      Following completion of the preliminary risk assessment of the potential Yucca Mountain Site by Pacific Northwest Laboratory (PNL) in 1988, the Office of Civilian Radioactive Waste Management (OCRWM) of the US Department of Energy (DOE) requested the Performance Assessment Scientific Support (PASS) Program at PNL to develop an integrated system model and computer code that provides performance and risk assessment analysis capabilities for a potential high-level nuclear waste repository. The system model that has been developed addresses the cumulative radionuclide release criteria established by the US Environmental Protection Agency (EPA) and estimates population risks in terms of dose to humans. The system model embodied in the SUMO (System Unsaturated Model) code will also allow benchmarking of other models being developed for the Yucca Mountain Project. The system model has three natural divisions: (1) source term, (2) far-field transport, and (3) dose to humans. This document gives a detailed description of the mathematics of each of these three divisions. Each of the governing equations employed is based on modeling assumptions that are widely accepted within the scientific community.

    1. Semiconductor Device Analysis on Personal Computers

      Energy Science and Technology Software Center (OSTI)

      1993-02-08

      PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.

    2. Hybrid soft computing systems: Industrial and commercial applications

      SciTech Connect (OSTI)

      Bonissone, P.P.; Chen, Y.T.; Goebel, K.; Khedkar, P.S.

      1999-09-01

      Soft computing (SC) is an association of computing methodologies that includes as its principal members fuzzy logic, neurocomputing, evolutionary computing and probabilistic computing. The authors present a collection of methods and tools that can be used to perform diagnostics, estimation, and control. These tools are a great match for real-world applications that are characterized by imprecise, uncertain data and incomplete domain knowledge. The authors outline the advantages of applying SC techniques and in particular the synergy derived from the use of hybrid SC systems. They illustrate some combinations of hybrid SC systems, such as fuzzy logic controllers (FLC's) tuned by neural networks (NN's) and evolutionary computing (EC), NN's tuned by EC or FLC's, and EC controlled by FLC's. The authors discuss three successful real-world examples of SC applications to industrial equipment diagnostics, freight train control, and residential property valuation.

    3. High Performance Computing at the Oak Ridge Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the Way; Resources for Users. Our Mission: World's most powerful computing facility; Nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; Nation's most diverse energy

    4. Discrete Mathematical Approaches to Graph-Based Traffic Analysis

      SciTech Connect (OSTI)

      Joslyn, Cliff A.; Cowley, Wendy E.; Hogan, Emilie A.; Olsen, Bryan K.

      2014-04-01

      Modern cyber defense and analytics require general, formal models of cyber systems. Multi-scale network models are prime candidates for such formalisms, using discrete mathematical methods based in hierarchically-structured directed multigraphs which also include rich sets of labels. An exemplar of an application of such an approach is traffic analysis, that is, observing and analyzing connections between clients, servers, hosts, and actors within IP networks, over time, to identify characteristic or suspicious patterns. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. In this paper, we consider traffic analysis of NetFlow using both basic graph statistics and two new mathematical measures involving labeled degree distributions and time interval overlap measures. We do all of this over the VAST test data set of 96M synthetic NetFlow graph edges, against which we can identify characteristic patterns of simulated ground-truth network attacks.
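
      The snippet below sketches one of the simpler measures mentioned, a labeled out-degree distribution over a directed multigraph of flow records; the record fields and values are hypothetical, not the VAST or NetFlow schema.

      from collections import Counter, defaultdict

      flows = [  # (source IP, destination IP, protocol label)
          ("10.0.0.1", "10.0.0.9", "http"),
          ("10.0.0.1", "10.0.0.9", "http"),
          ("10.0.0.2", "10.0.0.9", "ssh"),
          ("10.0.0.9", "10.0.0.3", "dns"),
      ]

      # Out-degree per node, split by edge label (multigraph: parallel edges count).
      labeled_out_degree = defaultdict(Counter)
      for src, dst, label in flows:
          labeled_out_degree[src][label] += 1

      # Degree distribution: how many nodes have a given (label, degree) pair.
      distribution = Counter()
      for node, counts in labeled_out_degree.items():
          for label, degree in counts.items():
              distribution[(label, degree)] += 1

      print(dict(distribution))   # e.g. {('http', 2): 1, ('ssh', 1): 1, ('dns', 1): 1}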

    5. 2009 Applied and Environmental Microbiology GRC

      SciTech Connect (OSTI)

      Nicole Dubilier

      2009-07-12

      The topic of the 2009 Gordon Conference on Applied and Environmental Microbiology is: From Single Cells to the Environment. The Conference will present and discuss cutting-edge research on applied and environmental microbiology with a focus on understanding interactions between microorganisms and the environment at levels ranging from single cells to complex communities. The Conference will feature a wide range of topics such as single cell techniques (including genomics, imaging, and NanoSIMS), microbial diversity at scales ranging from clonal to global, environmental 'meta-omics', biodegradation and bioremediation, metal - microbe interactions, animal microbiomes and symbioses. The Conference will bring together investigators who are at the forefront of their field, and will provide opportunities for junior scientists and graduate students to present their work in poster format and exchange ideas with leaders in the field. Some poster presenters will be selected for short talks. The collegial atmosphere of this Conference, with extensive discussion sessions as well as opportunities for informal gatherings in the afternoons and evenings, provides an ideal setting for scientists from different disciplines to exchange ideas, brainstorm and discuss cross-disciplinary collaborations.

    6. Closed loop computer control for an automatic transmission

      DOE Patents [OSTI]

      Patil, Prabhakar B.

      1989-01-01

      In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters established from a closed loop control of the clutch hydraulic pressure and the power source torque on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.
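
      As a loose illustration of closed loop feedback control of a shift (not the patented compensation strategy), the sketch below uses a hypothetical first-order clutch-pressure-to-torque model and PI gains to make the output torque track a reference during a simulated ratio change.

      def simulate_shift(dt=0.001, duration=1.0, kp=0.8, ki=12.0, tau=0.05, gain=3.0):
          torque, integral = 0.0, 0.0
          history = []
          for step in range(int(duration / dt)):
              t = step * dt
              target = 200.0 if t < 0.2 else 150.0   # desired output torque (N*m)
              error = target - torque
              integral += error * dt
              pressure = kp * error + ki * integral  # PI control of clutch pressure
              # First-order plant: torque follows (gain * pressure) with lag tau.
              torque += dt * (gain * pressure - torque) / tau
              history.append((t, torque))
          return history

      trace = simulate_shift()
      print(f"output torque at t = 0.5 s: {trace[500][1]:.1f} N*m")  # settles near 150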

    7. System for computer controlled shifting of an automatic transmission

      DOE Patents [OSTI]

      Patil, Prabhakar B.

      1989-01-01

      In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters established from a closed loop control of the clutch hydraulic pressure and the power source torque on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

    8. An exact general remeshing scheme applied to physically conservative voxelization

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Powell, Devon; Abel, Tom

      2015-05-21

      We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as physically conservative voxelization. At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara [48], who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements.
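
      The two-dimensional Python sketch below illustrates the successive-clipping idea in its classical planar form (Sutherland-Hodgman) together with a zeroth-moment (area) integral; the paper's method operates on polyhedra with planar-graph bookkeeping and higher moments, so this is only an analogue.

      def clip_halfplane(poly, a, b, c):
          # Keep the part of polygon `poly` with a*x + b*y + c >= 0.
          out = []
          for i, p in enumerate(poly):
              q = poly[(i + 1) % len(poly)]
              fp, fq = a * p[0] + b * p[1] + c, a * q[0] + b * q[1] + c
              if fp >= 0:
                  out.append(p)
              if (fp >= 0) != (fq >= 0):                      # edge crosses the line
                  t = fp / (fp - fq)
                  out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
          return out

      def area(poly):
          # Shoelace formula: zeroth moment of the clipped region.
          return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                               - poly[(i + 1) % len(poly)][0] * poly[i][1]
                               for i in range(len(poly))))

      # Unit square clipped against the half-planes x >= 0.25 and y <= 0.75.
      square = [(0, 0), (1, 0), (1, 1), (0, 1)]
      clipped = clip_halfplane(square, 1, 0, -0.25)   # x - 0.25 >= 0
      clipped = clip_halfplane(clipped, 0, -1, 0.75)  # 0.75 - y >= 0
      print(round(area(clipped), 4))                  # 0.75 * 0.75 = 0.5625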

    9. [Computer Science and Telecommunications Board activities

      SciTech Connect (OSTI)

      Blumenthal, M.S.

      1993-02-23

      The board considers technical and policy issues pertaining to computer science, telecommunications, and associated technologies. Functions include providing a base of expertise for these fields in NRC, monitoring and promoting the health of these fields, initiating studies of these fields as critical resources and sources of national economic strength, responding to requests for advice, and fostering interaction among these technologies and other areas of pure and applied science and technology. This document describes the board's major accomplishments, current programs, other sponsored activities, cooperative ventures, and plans and prospects.

    10. Multiprocessor computing for images

      SciTech Connect (OSTI)

      Cantoni, V. ); Levialdi, S. )

      1988-08-01

      A review of image processing systems developed until now is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years producing different generations of machines. Each generation may be characterized by the hardware architecture, the programmability features and the relative application areas. The need for multiprocessing hierarchical systems is discussed focusing on pyramidal architectures. Their computational paradigms, their virtual and physical implementation, their programming and software requirements, and capabilities by means of suitable languages, are discussed.

    11. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Aerodynamics using STAR-CCM+ for CFD Analysis. March 21-22, 2012, Argonne, Illinois. Dr. Steven Lottes. A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands-on tutorials. CD-adapco will issue

    12. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost® 3 on the TRACC Cluster. Oct. 21-22, 2010, Argonne TRACC. Dr. Cezary Bojanowski, Dr. Ronald F. Kulak. The LS-PrePost Introductory Course was held October 21-22, 2010 at TRACC in West Chicago with interactive participation on-site as well as remotely via the Internet. Intended primarily for finite element analysts with

    13. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, William C.

      1998-01-01

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them.

    14. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, W.C.

      1998-03-17

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs.

    15. Scanning computed confocal imager

      DOE Patents [OSTI]

      George, John S. (Los Alamos, NM)

      2000-03-14

      There is provided a confocal imager comprising a light source emitting a light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs the scanned light onto a target, and passes light reflected from the target to a video capturing device for receiving the reflected light and transferring a digital image of the reflected light to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter means and captures light passed through the target.

    16. Ultra-high resolution computed tomography imaging

      DOE Patents [OSTI]

      Paulus, Michael J. (Knoxville, TN); Sari-Sarraf, Hamed (Knoxville, TN); Tobin, Jr., Kenneth William (Harriman, TN); Gleason, Shaun S. (Knoxville, TN); Thomas, Jr., Clarence E. (Knoxville, TN)

      2002-01-01

      A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
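
      The one-dimensional Python sketch below illustrates the deconvolution-correction step in a generic, hedged form: a regularized inverse filter built from a made-up Gaussian transfer function, not the patent's experimentally determined one. The restored profile should track the sharp feature more closely than the blurred one.

      import numpy as np

      def deconvolve(projection, psf, eps=1e-4):
          # Regularized inverse filter: divide by the transfer function in
          # frequency space, damping frequencies where the response is tiny.
          P = np.fft.fft(projection)
          H = np.fft.fft(psf, n=projection.size)
          return np.real(np.fft.ifft(P * np.conj(H) / (np.abs(H) ** 2 + eps)))

      n = 256
      x = np.arange(n)
      true_profile = ((x > 100) & (x < 156)).astype(float)     # sharp test feature

      # Hypothetical detector blur: a normalized Gaussian point-spread function.
      psf = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)
      psf /= psf.sum()
      psf = np.roll(psf, -n // 2)                              # center it at index 0

      blurred = np.real(np.fft.ifft(np.fft.fft(true_profile) * np.fft.fft(psf)))
      restored = deconvolve(blurred, psf)
      for name, y in [("blurred", blurred), ("restored", restored)]:
          print(name, "RMS error:",
                round(float(np.sqrt(np.mean((y - true_profile) ** 2))), 4))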

    17. Introduction to High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Introduction to High Performance Computing. June 10, 2013. Downloads: Gerber-HPC-2.pdf...

    18. Computer Wallpaper | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Wallpaper We've incorporated the tagline, Creating Materials and Energy Solutions, into a computer wallpaper so you can display it on your desktop as a constant reminder....

    19. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers. New facility and methods support conserving water and creating recycled products. Using reverse osmosis to "super purify" water allows the system to reuse water and cool down our powerful yet thirsty computers. January 30, 2014. LANL's Sanitary Effluent Reclamation Facility, key to reducing the Lab's discharge of liquid. Millions of gallons of industrial

    20. Fermilab | Science at Fermilab | Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing is indispensable to science at Fermilab. High-energy physics experiments generate an astounding amount of data that physicists need to store, analyze and communicate with others. Cutting-edge technology allows scientists to work quickly and efficiently to advance our understanding of the world. Fermilab's Computing Division is recognized for its expertise in handling huge amounts of data, its success in high-speed parallel computing and its willingness to take its craft in

    1. History | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Leadership Computing The Argonne Leadership Computing Facility (ALCF) was established at Argonne National Laboratory in 2004 as part of a U.S. Department of Energy (DOE) initiative dedicated to enabling leading-edge computational capabilities to advance fundamental discovery and understanding in a broad range of scientific and engineering disciplines. Supported by the Advanced Scientific Computing Research (ASCR) program within DOE's Office of Science, the ALCF is one half of the DOE Leadership

    2. Peak fitting applied to low-resolution enrichment measurements

      SciTech Connect (OSTI)

      Bracken, D.; McKown, T.; Sprinkle, J.K. Jr.; Gunnink, R.; Kartoshov, M.; Kuropatwinski, J.; Raphina, G.; Sokolov, G.

      1998-12-01

      Materials accounting at bulk processing facilities that handle low enriched uranium consists primarily of weight and uranium enrichment measurements. Most low enriched uranium processing facilities draw separate materials balances for each enrichment handled at the facility. The enrichment measurement determines the isotopic abundance of the {sup 235}U, thereby determining the proper strata for the item, while the weight measurement generates the primary accounting value for the item. Enrichment measurements using the passive gamma radiation from uranium were developed for use in US facilities a few decades ago. In the US, the use of low-resolution detectors was favored because they cost less, are lighter and more robust, and don't require the use of liquid nitrogen. When these techniques were exported to Europe, however, difficulties were encountered. Two of the possible root causes were discovered to be inaccurate knowledge of the container wall thickness and higher levels of minor isotopes of uranium introduced by the use of reactor returns in the enrichment plants. The minor isotopes cause an increase in the Compton continuum under the 185.7 keV assay peak and the observance of interfering 238.6 keV gamma rays. The solution selected to address these problems was to rely on the slower, more costly, high-resolution gamma ray detectors when the low-resolution method failed. Recently, these gamma ray based enrichment measurement techniques have been applied to Russian origin material. The presence of interfering gamma radiation from minor isotopes was confirmed. However, with the advent of fast portable computers, it is now possible to apply more sophisticated analysis techniques to the low-resolution data in the field. Explicit corrections for Compton background, gamma rays from {sup 236}U daughters, and the attenuation caused by thick containers can be part of the least squares fitting routine. Preliminary results from field measurements in Kazakhstan will be discussed.
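
      A simplified illustration of the fitting idea appears below: a Gaussian 185.7 keV peak on a linear continuum is fit by least squares to synthetic Poisson data. The peak shape, background model, and all parameters are assumptions; the fielded analysis also corrects explicitly for {sup 236}U-daughter gamma rays and container attenuation.

      import numpy as np
      from scipy.optimize import curve_fit

      def peak_plus_background(e, amplitude, centroid, sigma, slope, intercept):
          # Gaussian assay peak on a linear Compton continuum.
          return amplitude * np.exp(-0.5 * ((e - centroid) / sigma) ** 2) + slope * e + intercept

      energies = np.linspace(170.0, 200.0, 120)                  # keV
      true_params = (5000.0, 185.7, 1.8, -8.0, 2500.0)
      rng = np.random.default_rng(7)
      counts = rng.poisson(peak_plus_background(energies, *true_params)).astype(float)

      p0 = (counts.max() - counts.min(), 186.0, 2.0, 0.0, counts.min())
      popt, pcov = curve_fit(peak_plus_background, energies, counts, p0=p0)
      net_area = popt[0] * popt[2] * np.sqrt(2.0 * np.pi)        # net peak area
      print(f"fitted centroid {popt[1]:.2f} keV, net 185.7 keV area {net_area:.0f} counts")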

    3. Browse by Discipline -- E-print Network Subject Pathways: Mathematics...

      Office of Scientific and Technical Information (OSTI)

      Ilic, Marija D. (Marija D. Ilic) - Department of Electrical and Computer Engineering, Carnegie Mellon University ...

    4. Browse by Discipline -- E-print Network Subject Pathways: Mathematics...

      Office of Scientific and Technical Information (OSTI)

      Elkashlan, Maged (Maged Elkashlan) - School of Electronic Engineering and Computer Science, Queen Mary, University of London Erdogan, ...

    5. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    6. FY 1990 Applied Sciences Branch annual report

      SciTech Connect (OSTI)

      Keyes, B.M.; Dippo, P.C.

      1991-11-01

      The Applied Sciences Branch actively supports the advancement of DOE/SERI goals for the development and implementation of the solar photovoltaic technology. The primary focus of the laboratories is to provide state-of-the-art analytical capabilities for materials and device characterization and fabrication. The branch houses a comprehensive facility which is capable of providing information on the full range of photovoltaic components. A major objective of the branch is to aggressively pursue collaborative research with other government laboratories, universities, and industrial firms for the advancement of photovoltaic technologies. Members of the branch disseminate research findings to the technical community in publications and presentations. This report contains information on surface and interface analysis, materials characterization and development, electro-optical characterization, module testing and performance, surface interactions, and FTIR spectroscopy.

    7. Distributed Design and Analysis of Computer Experiments

      Energy Science and Technology Software Center (OSTI)

      2002-11-11

      DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
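
      The sketch below shows a generic Latin hypercube sampler of the kind used to generate stratified input points for such studies; it is not DDACE's API, only an illustration with hypothetical variable ranges.

      import numpy as np

      def latin_hypercube(n_samples, ranges, rng=None):
          # Return an (n_samples, n_vars) array: each variable's range is divided
          # into n_samples equal-probability bins and one point is drawn per bin.
          rng = np.random.default_rng(rng)
          n_vars = len(ranges)
          unit = np.empty((n_samples, n_vars))
          for j in range(n_vars):
              bins = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
              unit[:, j] = rng.permutation(bins)        # shuffle bin order per variable
          lows = np.array([lo for lo, hi in ranges])
          highs = np.array([hi for lo, hi in ranges])
          return lows + unit * (highs - lows)

      # Example: vary a temperature and two material parameters for a simulation study.
      design = latin_hypercube(8, [(250.0, 400.0), (0.1, 0.5), (1e3, 5e3)], rng=42)
      print(design.round(3))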

    8. Noise tolerant spatiotemporal chaos computing

      SciTech Connect (OSTI)

      Kia, Behnam; Kia, Sarvenaz; Ditto, William L.; Lindner, John F.; Sinha, Sudeshna

      2014-12-01

      We introduce and design a noise-tolerant chaos computing system based on a coupled map lattice (CML) and the noise-reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single-map chaos computing system. In this CML-based approach to computing, the local noise from different nodes of the lattice diffuses across the lattice under the coupled dynamics, and the noise contributions attenuate one another, resulting in a system with less noise content and a more robust chaos computing architecture.
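
      As a rough illustration of the noise-diffusion argument (not the authors' specific system or parameters), the following sketch injects independent noise at every node of a diffusively coupled logistic-map lattice and compares the effective noise level of a single uncoupled map with that of the coupled lattice after one update.

      import numpy as np

      rng = np.random.default_rng(1)

      def logistic(x, r=4.0):
          return r * x * (1.0 - x)

      def cml_step(x, eps, noise_std):
          """One update of a diffusively coupled logistic-map lattice on a ring.
          Independent noise is injected at every node, then the diffusive coupling
          mixes neighbouring nodes so the noise contributions partially cancel."""
          fx = logistic(x) + rng.normal(0.0, noise_std, size=x.shape)
          return (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

      n_nodes, noise_std, trials = 64, 0.02, 500
      dev_single, dev_coupled = [], []
      for _ in range(trials):
          x0 = np.full(n_nodes, 0.3)                 # identical initial state everywhere
          clean = logistic(x0)                       # noise-free reference after one step
          dev_single.append(np.std(cml_step(x0, eps=0.0, noise_std=noise_std) - clean))
          dev_coupled.append(np.std(cml_step(x0, eps=0.5, noise_std=noise_std) - clean))

      print("noise level, uncoupled map  :", np.mean(dev_single))    # about 0.020
      print("noise level, coupled lattice:", np.mean(dev_coupled))   # smaller, about 0.012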

    9. Mathematical model of testing of pipeline integrity by thermal fields

      SciTech Connect (OSTI)

      Vaganova, Nataliia

      2014-11-18

      Testing of thermal fields at the ground surface above a pipeline is considered. One method to obtain and investigate an ideal thermal field in different environments is direct numerical simulation of the heat transfer processes, taking into account the most important physical factors. In this paper a mathematical model of heat propagation from an underground source is described, accounting for physical factors such as filtration of water in the soil and solar radiation. Thermal processes are considered in a three-dimensional domain in which the heat source is a pipeline with constant temperature and a non-uniformly insulated shell (with 'damages'). This problem leads to the solution of the heat conduction equation with nonlinear boundary conditions. Approaches to the analysis of the resulting thermal fields for detecting damage are considered.
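
      The full model in the paper is three-dimensional and includes water filtration and solar radiation; as a toy illustration of the underlying heat-conduction calculation only, the sketch below advances a two-dimensional explicit finite-difference solution around a constant-temperature pipe and reads off the near-surface temperature anomaly. All geometry and material values are invented.

      import numpy as np

      # Toy 2-D heat conduction: soil cross-section with a warm pipe buried in it.
      nx, nz, dx = 81, 41, 0.1            # grid points and spacing [m]
      alpha = 1.0e-6                      # soil thermal diffusivity [m^2/s] (illustrative)
      dt = 0.2 * dx**2 / alpha            # stable explicit time step
      T = np.full((nz, nx), 10.0)         # initial soil temperature [C]

      pipe_z, pipe_x, T_pipe = 20, 40, 60.0    # pipe location (grid indices) and temperature

      for _ in range(20000):
          T[pipe_z, pipe_x] = T_pipe                       # constant-temperature source
          lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                 np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
          T = T + alpha * dt * lap
          T[0, :] = 10.0                                   # ground surface held at ambient
          T[-1, :], T[:, 0], T[:, -1] = 10.0, 10.0, 10.0   # far-field boundaries

      # The measurable signal: temperature anomaly just below the surface above the pipe.
      print("near-surface anomaly above pipe [C]:", T[1, pipe_x] - 10.0)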

    10. Mathematical modelling of post combustion in Dofasco's KOBM

      SciTech Connect (OSTI)

      Gou, H.; Irons, G.A.; Lu, W.K.

      1992-01-01

      In the AISI Direct Steelmaking program, trials were undertaken in Dofasco's 300 Tonne KOBM to examine post combustion. To support this work, a two-dimensional turbulent mathematical model has been developed to describe gas flow, combustion reactions and heat transfer (radiation and convection) in converter-type steelmaking processes. Gaseous flow patterns, temperature and heat flux distributions in the furnace were calculated with this model. Key findings are: the post combustion ratio is determined from the rates of oxygen supply and oxygen used for decarburization, with the remainder available for post combustion, i.e. it is deducible from a mass balance calculation; comparison between the heat transfer fluxes calculated with the model and those measured industrially indicates that the conventionally defined heat transfer efficiency over-estimates the heat recovered by the bath by about 20%; and the location of the combustion zone can be controlled, to a certain extent, by adjusting the lance practice.

    11. Mathematical modelling of post combustion in Dofasco's KOBM

      SciTech Connect (OSTI)

      Gou, H.; Irons, G.A.; Lu, W.K.

      1992-12-31

      In the AISI Direct Steelmaking program, trials were undertaken in Dofasco's 300 Tonne KOBM to examine post combustion. To support this work, a two-dimensional turbulent mathematical model has been developed to describe gas flow, combustion reactions and heat transfer (radiation and convection) in converter-type steelmaking processes. Gaseous flow patterns, temperature and heat flux distributions in the furnace were calculated with this model. Key findings are: the post combustion ratio is determined from the rates of oxygen supply and oxygen used for decarburization, with the remainder available for post combustion, i.e. it is deducible from a mass balance calculation; comparison between the heat transfer fluxes calculated with the model and those measured industrially indicates that the conventionally defined heat transfer efficiency over-estimates the heat recovered by the bath by about 20%; and the location of the combustion zone can be controlled, to a certain extent, by adjusting the lance practice.

    12. AMRITA -- A computational facility

      SciTech Connect (OSTI)

      Shepherd, J.E.; Quirk, J.J.

      1998-02-23

      Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

    13. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior through a coding protocol that describes when relationships should be maintained and when they should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives that define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.
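
      A loose analogy to the strong/weak relationship idea (not the patented pointer system) can be sketched with Python's reference counting and the standard weakref module: strong links keep an object alive, while weak links observe it without preventing garbage collection.

      import weakref

      class Node:
          """A data-model object whose lifetime is governed by strong references."""
          def __init__(self, name):
              self.name = name
              self.children = []          # strong links: children stay alive with the parent
          def __repr__(self):
              return f"Node({self.name})"

      parent = Node("model")
      child = Node("attachment")
      parent.children.append(child)       # strong relationship: keeps child alive

      observer = weakref.ref(child)       # weak relationship: does NOT keep child alive
      print(observer())                   # -> Node(attachment)

      # "Breaking the strong link" makes the child collectable; the weak link
      # reports the deletion instead of leaking the object.
      parent.children.clear()
      del child
      print(observer())                   # -> None once the object has been collected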

    14. Rational Catalyst Design Applied to Development of Advanced Oxidation...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Rational Catalyst Design Applied to Development of Advanced Oxidation Catalysts for Diesel Emission Control Rational Catalyst Design Applied to Development of Advanced Oxidation ...

    15. Energy Department Extends Deadline to Apply for START Tribal...

      Energy Savers [EERE]

      Extends Deadline to Apply for START Tribal Renewable Energy Project Development Assistance to May 22, 2015 Energy Department Extends Deadline to Apply for START Tribal Renewable...

    16. Tritium research activities in Safety and Tritium Applied Research...

      Office of Environmental Management (EM)

      research activities in Safety and Tritium Applied Research (STAR) facility, Idaho National Laboratory Tritium research activities in Safety and Tritium Applied Research (STAR)...

    17. James Webb Space Telescope: PM Lessons Applied - Eric Smith,...

      Energy Savers [EERE]

      James Webb Space Telescope: PM Lessons Applied - Eric Smith, Deputy Program Director, NASA James Webb Space Telescope: PM Lessons Applied - Eric Smith, Deputy Program Director,...

    18. Opportunities to Apply Phase Change Materials to Building Enclosures...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Opportunities to Apply Phase Change Materials to Building Enclosures Webinar Opportunities to Apply Phase Change Materials to Building Enclosures Webinar Slides from the Building...

    19. Applying physics, teamwork to fusion energy science | Princeton Plasma

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Physics Lab Applying physics, teamwork to fusion energy science American Fusion News Category: Massachusetts Institute of Technology (MIT) Link: Applying physics, teamwork to fusion energy science

    20. 2008 Annual Merit Review Results Summary - 2. Applied Battery...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      2. Applied Battery Research 2008 Annual Merit Review Results Summary - 2. Applied Battery Research DOE Vehicle Technologies Annual Merit Review PDF icon 2008meritreview2.pdf...

    1. Advanced Multivariate Analysis Tools Applied to Surface Analysis...

      Office of Scientific and Technical Information (OSTI)

      Advanced Multivariate Analysis Tools Applied to Surface Analysis. Citation Details In-Document Search Title: Advanced Multivariate Analysis Tools Applied to Surface Analysis. No...

    2. Statistical and Domain Analytics Applied to PV Module Lifetime...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Statistical and Domain Analytics Applied to PV Module Lifetime and Degradation Science Statistical and Domain Analytics Applied to PV Module Lifetime and Degradation Science...

    3. Optical Diagnostics and Modeling Tools Applied to Diesel HCCI...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Optical Diagnostics and Modeling Tools Applied to Diesel HCCI Optical Diagnostics and Modeling Tools Applied to Diesel HCCI 2002 DEER Conference Presentation: Caterpillar Engine...

    4. Magnetic relaxometry as applied to sensitive cancer detection...

      Office of Scientific and Technical Information (OSTI)

      relaxometry as applied to sensitive cancer detection and localization Title: Magnetic relaxometry as applied to sensitive cancer detection and localization Here we describe ...

    5. Applying the Battery Ownership Model in Pursuit of Optimal Battery...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Applying the Battery Ownership Model in Pursuit of Optimal Battery Use Strategies Applying the Battery Ownership Model in Pursuit of Optimal Battery Use Strategies 2012 DOE ...

    6. Engineering Physics and Mathematics Division progress report for period ending August 31, 1989

      SciTech Connect (OSTI)

      Not Available

      1989-12-01

      This paper contains abstracts on research performed at the Engineering Physics and Mathematics Division of Oak Ridge National Laboratory. The areas covered are: mathematical science; nuclear-data measurement and evaluation; intelligent systems; nuclear analysis and shielding; and Engineering Physics Information Center. (LSP)

    7. Intro to computer programming, no computer required! | Argonne Leadership

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Facility Intro to computer programming, no computer required! Author: Laura Wolf January 6, 2016 Pairing the volunteers with interested schools was the easy part. School administrators and teachers alike were delighted to have Argonne National Laboratory volunteers visit and help guide their Hour of Code activities last December. In all, Argonne's Educational Programs department helped place 44 volunteers in Chicago

    8. Other World Computing | Open Energy Information

      Open Energy Info (EERE)

      World Computing Jump to: navigation, search Name Other World Computing Facility Other World Computing Sector Wind energy Facility Type Community Wind Facility Status In Service...

    9. CLAMR (Compute Language Adaptive Mesh Refinement)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CLAMR (Compute Language Adaptive Mesh Refinement) CLAMR (Compute Language Adaptive Mesh Refinement) CLAMR (Compute Language Adaptive Mesh Refinement) is being developed as a DOE...

    10. Advanced Computational Methods for Security Constrained Financial Transmission Rights

      SciTech Connect (OSTI)

      Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

      2012-07-26

      Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective of maximizing the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
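
      The paper's reformulation and NDS solver are not reproduced here. Purely to illustrate the underlying problem class (maximizing FTR social welfare subject to linearized flow limits), the following sketch sets up a tiny auction as a linear program with scipy.optimize.linprog; all bid prices, shift factors and limits are invented.

      import numpy as np
      from scipy.optimize import linprog

      # Tiny illustrative FTR auction: 3 bids, 2 monitored lines (numbers are made up).
      bid_price = np.array([12.0, 8.0, 5.0])      # $/MW offered for each FTR bid
      bid_cap   = np.array([100.0, 80.0, 150.0])  # MW requested per bid
      # Power-transfer distribution factors of each bid on each monitored line.
      ptdf = np.array([[0.5, 0.3, -0.2],
                       [0.1, 0.6,  0.4]])
      line_limit = np.array([60.0, 70.0])         # MW flow limits

      # Maximize sum(price * award)  ==  minimize -price @ x, with flow and bid-cap limits.
      res = linprog(c=-bid_price,
                    A_ub=np.vstack([ptdf, -ptdf]),            # |flow| <= limit on each line
                    b_ub=np.concatenate([line_limit, line_limit]),
                    bounds=[(0.0, cap) for cap in bid_cap])

      print("awarded MW per bid:", np.round(res.x, 1))
      print("FTR social welfare: $", round(-res.fun, 2))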

    11. Mathematical simulation of the amplification of 1790-nm laser radiation in a nuclear-excited He-Ar plasma containing nanoclusters of uranium compounds

      SciTech Connect (OSTI)

      Kosarev, V A; Kuznetsova, E E

      2014-02-28

      The possibility of applying dusty active media in nuclear-pumped lasers has been considered. The amplification of 1790-nm radiation in a nuclear-excited dusty He-Ar plasma is studied by mathematical simulation. The influence of nanoclusters on the component composition of the medium and the kinetics of the processes occurring in it is analysed using a specially developed kinetic model, including 72 components and more than 400 reactions. An analysis of the results indicates that amplification can in principle be achieved in an active He-Ar laser medium containing 10-nm nanoclusters of metallic uranium and uranium dioxide. (lasers)

    12. Study of trajectories and combustion of fuel-oil droplets in the combustion chamber of a power-plant boiler with the use of a mathematical model

      SciTech Connect (OSTI)

      Enyakin, Yu.P.; Usman, Yu.M.

      1988-03-01

      A mathematical model was developed to permit study of the behavior of fuel-oil droplets in a combustion chamber, and results are presented from a computer calculation performed for the 300-MW model TGMP-314P boiler of a power plant. The program written to perform the calculations was organized so that the first stage would entail calculation of the combustion (vaporization) of a droplet of liquid fuel. The program then provided for a sudden decrease in the mass of the fuel particle, simulating rupture of the coke shell and ejection of some of the liquid. The program then considered the combustion of a hollow coke particle. Physicochemical parameters characteristic of fuel oil M-100 were introduced in the program in the first stage of computations, while parameters characteristic of the coke particle associated with an unburned fuel-oil droplet were included in the second stage.

    13. Computational Fluid Dynamics Library

      Energy Science and Technology Software Center (OSTI)

      2005-03-04

      CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
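
      CFDLib itself is a large multi-block library; to make the cell-centered finite-volume idea concrete, here is a one-dimensional Python sketch (independent of CFDLib) that advances the linear advection equation with first-order upwind fluxes on a uniform mesh of control volumes.

      import numpy as np

      # 1-D finite-volume solver for u_t + a u_x = 0 with first-order upwind fluxes.
      n_cells, a, cfl = 200, 1.0, 0.9
      dx = 1.0 / n_cells
      dt = cfl * dx / abs(a)
      x = (np.arange(n_cells) + 0.5) * dx              # cell-center coordinates
      u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)    # square pulse initial condition

      for _ in range(int(0.5 / dt)):                   # advect for t = 0.5
          # flux[i] is the upwind flux through the left face of cell i (periodic mesh)
          flux = a * np.roll(u, 1) if a > 0 else a * u
          # conservative update: change of cell average = net flux through its faces
          u = u - dt / dx * (np.roll(flux, -1) - flux)

      print("pulse peak is now near x =", x[np.argmax(u)])   # roughly 0.2 + a * 0.5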

    14. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Bioinformatics Computing Consultant Position Available Bioinformatics Computing Consultant Position Available October 31, 2011 by Katie Antypas NERSC and the Joint Genome Institute (JGI) are searching for two individuals who can help biologists exploit advanced computing platforms. JGI provides production sequencing and genomics for the Department of Energy. These activities are critical to the DOE missions in areas related to clean energy generation and environmental characterization and

    15. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Parallel Computing Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email The Parallel Computing Summer Research Internship is an intense 10 week program aimed at providing students with a solid foundation in modern high performance

    16. computational-fluid-dynamics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents (course, date, location):
      Advanced Hydraulic and Aerodynamic Analysis Using CFD -- March 27-28, 2013 -- Argonne TRACC, Argonne, IL
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis -- March 21-22, 2012 -- Argonne TRACC, Argonne, IL
      Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis -- March 30-31, 2011 -- Argonne TRACC, Argonne, IL
      Computational Hydraulics for Transportation Workshop -- September 23-24, 2009 -- Argonne TRACC, West Chicago, IL

    17. Careers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Careers at Argonne Looking for a unique opportunity to work at the forefront of high-performance computing? At the Argonne Leadership Computing Facility, we are helping to redefine what's possible in computational science. With some of the most powerful supercomputers in the world and a talented and diverse team of experts, we enable researchers to pursue groundbreaking discoveries that would otherwise not be possible. Check out our open positions below. For the most current listing of

    18. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      should have basic experience with a scientific computing language, such as C, C++, Fortran and with the LINUX operating system. Duration & Location The program will last ten...

    19. Tukey | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Documentation Feedback Please provide feedback to help guide us as we continue to build documentation for our new computing resource. Feedback Form Tukey The primary purpose of...

    20. QBox | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computers. Obtaining Qbox http://eslab.ucdavis.edu/software/qbox Building Qbox for Blue Gene/Q Qbox requires the standard math libraries plus the Xerces-C http:...

    1. Thrusts in High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Exascale computers (1000x Hopper) in next decade: - Manycore processors using graphics, games, embedded cores, or other low power designs offer 100x in power efficiency -...

    2. Advanced Simulation and Computing Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The SSP mission is to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality. ASC works in partnership with computer ...

    3. Institutional computing (IC) information session

      SciTech Connect (OSTI)

      Koch, Kenneth R; Lally, Bryan R

      2011-01-19

      The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.

    4. Measures of agreement between computation and experiment:validation metrics.

      SciTech Connect (OSTI)

      Barone, Matthew Franklin; Oberkampf, William Louis

      2005-08-01

      With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
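
      As a stripped-down illustration of the confidence-interval idea (not the exact metric constructed in the report), the sketch below compares model predictions with the mean of repeated experimental measurements and reports the estimated model error together with a t-based 95% confidence interval on that error; all numbers are invented.

      import numpy as np
      from scipy import stats

      # Replicate experimental measurements at a few settings of the control variable,
      # and the corresponding model predictions (all values are illustrative).
      measurements = {          # control value -> replicate measurements
          1.0: [10.2, 9.8, 10.1, 10.4],
          2.0: [19.5, 20.3, 19.9, 20.1],
          3.0: [31.0, 30.2, 30.6, 30.9],
      }
      model = {1.0: 10.5, 2.0: 19.6, 3.0: 29.8}

      for ctrl, reps in measurements.items():
          reps = np.asarray(reps, dtype=float)
          n = reps.size
          error = model[ctrl] - reps.mean()                 # estimated model error
          half = stats.t.ppf(0.975, df=n - 1) * reps.std(ddof=1) / np.sqrt(n)
          print(f"x={ctrl}: error = {error:+.2f}  "
                f"(95% CI on the error: [{error - half:+.2f}, {error + half:+.2f}])")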

    5. Manufacturing Energy and Carbon Footprint - Sector: Computer...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computers, Electronics and Electrical Equipment (NAICS 334, 335) Process Energy ... Carbon Footprint Sector: Computers, Electronics and Electrical Equipment (NAICS 334, ...

    6. Mathematical modeling of postcombustion in a KOBM converter

      SciTech Connect (OSTI)

      Gou, H.; Irons, G.A.; Lu, W.-K. )

      1993-02-01

      A mathematical model has been developed to describe gas flow, combustion reactions, and heat transfer in converter-type steelmaking processes. The k-ε two-equation turbulence model, a finite reaction model, and the DeMarco-Lockwood flux model have been incorporated in this model to deal with the turbulent flow, postcombustion reactions, and radiation heat transfer. Local gaseous flow patterns, temperature, and heat flux distributions were calculated for a 300 tonne Kloeckner Oxygen Blowing Maximilianshuette (KOBM) converter. Comparison between the heat-transfer fluxes calculated based on the model and those measured industrially indicates the validity of the model in this application. The postcombustion has been found to be determined by the decarburization rate (DCR), which is directly related to the hardness of blowing, and not by the entrainment of surrounding gas into the oxygen jet, as previously reported. The model revealed that about 20 pct of what is normally considered to be recovered heat has actually been lost through the vessel wall and to the lance. It is shown that this model is useful in studying the detailed mechanisms of postcombustion to optimize operations in converter-type steelmaking processes.
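
      The post-combustion ratio (PCR) is conventionally defined from the off-gas composition as CO2/(CO + CO2). As a small illustration of the mass-balance argument in the abstract, the sketch below backs the PCR out of an assumed oxygen supply rate and decarburization rate; the stoichiometry is standard, but all figures are invented.

      # Illustrative post-combustion ratio (PCR) from an oxygen mass balance.
      # Stoichiometry used:  C + 1/2 O2 -> CO   (decarburization)
      #                      CO + 1/2 O2 -> CO2 (post combustion)
      # All rates below are invented for illustration (kmol/min).

      o2_supply = 30.0          # oxygen blown through the lance
      dcr = 50.0                # decarburization rate: carbon leaving the bath as CO

      o2_for_decarb = 0.5 * dcr                 # oxygen consumed making CO
      o2_for_postcomb = o2_supply - o2_for_decarb
      co2 = 2.0 * o2_for_postcomb               # each 1/2 O2 converts one CO to CO2
      co = dcr - co2                            # remaining carbon leaves as CO

      pcr = co2 / (co + co2)
      print(f"CO2 = {co2:.1f}, CO = {co:.1f}, PCR = {pcr:.2f}")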

    7. Applying Human Factors during the SIS Life Cycle

      SciTech Connect (OSTI)

      Avery, K.

      2010-05-05

      Safety Instrumented Systems (SIS) are widely used in U.S. Department of Energy (DOE) nonreactor nuclear facilities for safety-critical applications. Although use of SIS technology and computer-based digital controls can improve performance and safety, it potentially introduces additional complexities, such as failure modes that are not readily detectable. Either automated actions or manual (operator) actions may be required to complete the safety instrumented function to place the process in a safe state or mitigate a hazard in response to an alarm or indication. DOE will issue a new standard, Application of Safety Instrumented Systems Used at DOE Nonreactor Nuclear Facilities, to provide guidance for the design, procurement, installation, testing, maintenance, operation, and quality assurance of SIS used in safety significant functions at DOE nonreactor nuclear facilities. The DOE standard focuses on utilizing the process industry consensus standard, American National Standards Institute/International Society of Automation (ANSI/ISA) 84.00.01, Functional Safety: Safety Instrumented Systems for the Process Industry Sector, to support reliable SIS design throughout the DOE complex. SIS design must take into account human-machine interfaces and their limitations and follow good human factors engineering (HFE) practices. HFE encompasses many diverse areas (e.g., information display, user-system interaction, alarm management, operator response, control room design, and system maintainability), which affect all aspects of system development and modification. This paper presents how HFE processes and principles apply throughout the SIS life cycle to support the design and use of SIS at DOE nonreactor nuclear facilities.

    8. OPTIONS for systemic change in mathematics, science, and technology education: Scientist/teacher partnerships

      SciTech Connect (OSTI)

      Glantz, C.S.; Fayette, L.

      1994-01-01

      Options is a US Department of Energy/Pacific Northwest Laboratory (DOE/PNL) project whose goal is to assist Washington and Oregon middle schools having high percentages of students historically underrepresented in mathematics, science, and technology. The goal is to ensure that all students receive high-quality mathematics, science, and technology education throughout their middle school years. Teams of scientists work with teams of teachers from participating OPTIONS schools to initiate significant change in the manner in which science, mathematics, and technology are taught. As part of this effort, PNL scientists team up with teachers to develop curricula.

    9. Guidebook to excellence, 1994: A directory of federal resources for mathematics and science education improvement

      SciTech Connect (OSTI)

      Not Available

      1994-04-01

      The purpose of this Guidebook to Excellence is to assist educators, parents, and students across the country in attaining the National Education Goals, particularly Goal 4: By the year 2000, US students will be first in the world in science and mathematics achievement. The Guidebook will help make the education community aware of the Federal Government's extensive commitment to mathematics and science education. Sixteen Federal agencies collaborated with the Eisenhower National Clearinghouse to produce this publication. Although the Guidebook contains valuable information for anyone involved in mathematics and science education, its focus is on the elementary and secondary levels.

    10. Can Cloud Computing Address the Scientific Computing Requirements...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      achieve energy efficiency levels comparable to commercial cloud centers. Cloud is a business model and can be applied at DOE supercomputing centers. The progress of the...

    11. Size-dependent fluorescence of bioaerosols: Mathematical model using fluorescing and absorbing molecules in bacteria

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Hill, Steven C.; Williamson, Chatt C.; Doughty, David C.; Pan, Yong-Le; Santarpia, Joshua L.; Hill, Hanna H.

      2015-02-02

      This paper uses a mathematical model of fluorescent biological particles composed of bacteria and/or proteins (mostly as in Hill et al., 2013 [23]) to investigate the size-dependence of the total fluorescence emitted in all directions. The model applies to particles which have negligible reabsorption of fluorescence within the particle. The specific particles modeled here are composed of ovalbumin and of a generic Bacillus. The particles need not be spherical, and in some cases need not be homogeneous. However, the results calculated in this paper are for spherical homogeneous particles. Light absorbing and fluorescing molecules included in the model are amino acids, nucleic acids, and several coenzymes. Here the excitation wavelength is 266 nm. The emission range, 300 to 370 nm, encompasses the fluorescence of tryptophan. The fluorescence cross section (C_F) is calculated and compared with one set of published measured values. We investigate power law (A d^y) approximations to C_F, where d is diameter, and A and y are parameters adjusted to fit the data, and examine how y varies with d and composition, including the fraction as water. The particle's fluorescence efficiency (Q_F = C_F/geometric cross-section) can be written for homogeneous particles as Q_abs R_F, where Q_abs is the absorption efficiency, and R_F, the fraction of the absorbed light emitted as fluorescence, is independent of size and shape. When Q_F is plotted vs. m_i d or m_i(m_r - 1)d, where m = m_r + i m_i is the complex refractive index, the plots for different fractions of water in the particle tend to overlap.

    12. Size-dependent fluorescence of bioaerosols: Mathematical model using fluorescing and absorbing molecules in bacteria

      SciTech Connect (OSTI)

      Hill, Steven C.; Williamson, Chatt C.; Doughty, David C.; Pan, Yong-Le; Santarpia, Joshua L.; Hill, Hanna H.

      2015-02-02

      This paper uses a mathematical model of fluorescent biological particles composed of bacteria and/or proteins (mostly as in Hill et al., 2013 [23]) to investigate the size-dependence of the total fluorescence emitted in all directions. The model applies to particles which have negligible reabsorption of fluorescence within the particle. The specific particles modeled here are composed of ovalbumin and of a generic Bacillus. The particles need not be spherical, and in some cases need not be homogeneous. However, the results calculated in this paper are for spherical homogeneous particles. Light absorbing and fluorescing molecules included in the model are amino acids, nucleic acids, and several coenzymes. Here the excitation wavelength is 266 nm. The emission range, 300 to 370 nm, encompasses the fluorescence of tryptophan. The fluorescence cross section (C_F) is calculated and compared with one set of published measured values. We investigate power law (A d^y) approximations to C_F, where d is diameter, and A and y are parameters adjusted to fit the data, and examine how y varies with d and composition, including the fraction as water. The particle's fluorescence efficiency (Q_F = C_F/geometric cross-section) can be written for homogeneous particles as Q_abs R_F, where Q_abs is the absorption efficiency, and R_F, the fraction of the absorbed light emitted as fluorescence, is independent of size and shape. When Q_F is plotted vs. m_i d or m_i(m_r - 1)d, where m = m_r + i m_i is the complex refractive index, the plots for different fractions of water in the particle tend to overlap.
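
      To illustrate the power-law fit A d^y mentioned in the abstract, one can estimate A and y by a straight-line fit in log-log space; the cross-section values below are synthetic, not the paper's data.

      import numpy as np

      # Synthetic fluorescence cross-sections C_F versus particle diameter d
      # (values invented purely to illustrate fitting the power law C_F = A * d**y).
      d = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # diameter, micrometres
      cf = 3.0e-12 * d**2.7 * (1 + 0.05 * np.random.default_rng(2).normal(size=d.size))

      # Fit log(C_F) = log(A) + y * log(d) with a least-squares straight line.
      y_exp, logA = np.polyfit(np.log(d), np.log(cf), 1)
      print(f"fitted exponent y = {y_exp:.2f}, prefactor A = {np.exp(logA):.2e}")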

    13. Browse by Discipline -- E-print Network Subject Pathways: Mathematics --

      Office of Scientific and Technical Information (OSTI)

      Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy L M N O P Q R S T U V W X Y Z Karabasoglu, Orkun (Orkun Karabasoglu) - Department of Electrical and Computer Engineering, Carnegie Mellon University Karlsson, Anette M. (Anette M. Karlsson) - Department of Mechanical Engineering, University of Delaware Kerekes, Tamas (Tamas Kerekes) - Department of Energy Technology, Aalborg University Kherani,

    14. Browse by Discipline -- E-print Network Subject Pathways: Mathematics --

      Office of Scientific and Technical Information (OSTI)

      Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy M N O P Q R S T U V W X Y Z Lazzaro, John (John Lazzaro) - Department of Electrical Engineering and Computer Sciences, University of California at Berkeley Lee, Tonghun (Tonghun Lee) - Department of Mechanical Engineering, Michigan State University Li, Ying (Ying Li) - Department of Mechanical Engineering, University of Wisconsin-Milwaukee

    15. Browse by Discipline -- E-print Network Subject Pathways: Mathematics --

      Office of Scientific and Technical Information (OSTI)

      Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy S T U V W X Y Z Sangiovanni-Vincentelli, Alberto (Alberto Sangiovanni-Vincentelli) - Department of Electrical Engineering and Computer Sciences,University of California at Berkeley Schaefer, Laura A. (Laura A. Schaefer) - Department of Mechanical Engineering and Materials Science, University of Pittsburgh Schaltz, Erik (Erik Schaltz) -

    16. Browse by Discipline -- E-print Network Subject Pathways: Mathematics --

      Office of Scientific and Technical Information (OSTI)

      Energy, science, and technology for the research community -- hosted by the Office of Scientific and Technical Information, U.S. Department of Energy Z Zhai, John Z. (John Z. Zhai) - Department of Civil, Environmental, and Architectural Engineering, University of Colorado at Boulder Zhao, Tianshou (Tianshou Zhao) - Department of Mechanical Engineering, Hong Kong University of Science and Technology Zhong, Lin (Lin Zhong) - Department of Electrical and Computer Engineering, Rice University

    17. Computational thermal, chemical, fluid, and solid mechanics for geosystems management.

      SciTech Connect (OSTI)

      Davison, Scott; Alger, Nicholas; Turner, Daniel Zack; Subia, Samuel Ramirez; Carnes, Brian; Martinez, Mario J.; Notz, Patrick K.; Klise, Katherine A.; Stone, Charles Michael; Field, Richard V., Jr.; Newell, Pania; Jove-Colon, Carlos F.; Red-Horse, John Robert; Bishop, Joseph E.; Dewers, Thomas A.; Hopkins, Polly L.; Mesh, Mikhail; Bean, James E.; Moffat, Harry K.; Yoon, Hongkyu

      2011-09-01

      This document summarizes research performed under the SNL LDRD entitled - Computational Mechanics for Geosystems Management to Support the Energy and Natural Resources Mission. The main accomplishment was development of a foundational SNL capability for computational thermal, chemical, fluid, and solid mechanics analysis of geosystems. The code was developed within the SNL Sierra software system. This report summarizes the capabilities of the simulation code and the supporting research and development conducted under this LDRD. The main goal of this project was the development of a foundational capability for coupled thermal, hydrological, mechanical, chemical (THMC) simulation of heterogeneous geosystems utilizing massively parallel processing. To solve these complex issues, this project integrated research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled solution control and framework architecture. This report summarizes and demonstrates the capabilities that were developed together with the supporting research underlying the models. Key accomplishments are: (1) General capability for modeling nonisothermal, multiphase, multicomponent flow in heterogeneous porous geologic materials; (2) General capability to model multiphase reactive transport of species in heterogeneous porous media; (3) Constitutive models for describing real, general geomaterials under multiphase conditions utilizing laboratory data; (4) General capability to couple nonisothermal reactive flow with geomechanics (THMC); (5) Phase behavior thermodynamics for the CO2-H2O-NaCl system. General implementation enables modeling of other fluid mixtures. Adaptive look-up tables enable thermodynamic capability to other simulators; (6) Capability for statistical modeling of heterogeneity in geologic materials; and (7) Simulator utilizes unstructured grids on parallel processing computers.

    18. X-Ray Photoelectron Spectroscopy (XPS) Applied to Soot & What...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Photoelectron Spectroscopy (XPS) Applied to Soot & What It Can Do for You X-Ray Photoelectron Spectroscopy (XPS) Applied to Soot & What It Can Do for You Presentation given at DEER...

    19. DOE - Office of Legacy Management -- Case School of Applied Science...

      Office of Legacy Management (LM)

      Case School of Applied Science Ohio State University - OH 0-01 FUSRAP Considered Sites Site: Case School of Applied Science, Ohio State University (OH.0-01 ) Eliminated from...

    20. Oregon Learning About and Applying for Water Rights Webpage ...

      Open Energy Info (EERE)

      Learning About and Applying for Water Rights Webpage Jump to: navigation, search OpenEI Reference LibraryAdd to library Web Site: Oregon Learning About and Applying for Water...

    1. Aachen University of Applied Sciences | Open Energy Information

      Open Energy Info (EERE)

      Aachen University of Applied Sciences Place: Germany Sector: Services Product: General Financial & Legal Services ( Academic Research foundation ) References: Aachen...

    2. Applied Process Engineering Laborotory APEL | Open Energy Information

      Open Energy Info (EERE)

      Engineering Laborotory (APEL) Place: United States Sector: Services Product: General Financial & Legal Services ( Private family-controlled ) References: Applied Process...

    3. Applying for PMCDP/FPD Certification (initial) | Department of Energy

      Energy Savers [EERE]

      Services » Career Development (PMCDP) » Applying for PMCDP/FPD Certification (initial) Applying for PMCDP/FPD Certification (initial) Certification applicants are nominated by their respective Program Secretarial Office (PSO) to apply for FPD certification - candidates may not apply without program sponsorship. Each participating program has a dedicated point of contact (POC) whose role is to support the FPD applicant in preparing their certification package. First time applicants, as well as

    4. Attenuation-Based Remedies in the Subsurface Applied Field Research

      Energy Savers [EERE]

      Initiative (ABRS AFRI) | Department of Energy Attenuation-Based Remedies in the Subsurface Applied Field Research Initiative (ABRS AFRI) Attenuation-Based Remedies in the Subsurface Applied Field Research Initiative (ABRS AFRI) Attenuation-Based Remedies in the Subsurface Applied Field Research Initiative (ABRS AFRI) Located at the Savannah River Site in Aiken, South Carolina, the Attenuation-Based Remedies in the Subsurface Applied Field Research Initiative (ABRS AFRI) was established to

    5. Vehicle Technologies Office: Applied Battery Research | Department of

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Energy Applied Battery Research Vehicle Technologies Office: Applied Battery Research Applied battery research addresses the barriers facing the lithium-ion systems that are closest to meeting the technical energy and power requirements for hybrid electric vehicle (HEV) and electric vehicle (EV) applications. In addition, applied battery research concentrates on technology transfer to ensure that the research results and lessons learned are effectively provided to U.S. automotive and battery

    6. Oak Ridge National Laboratory - Computing and Computational Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Directorate Oak Ridge to acquire next generation supercomputer Oak Ridge to acquire next generation supercomputer The U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility (OLCF) has signed a contract with IBM to bring a next-generation supercomputer to Oak Ridge National Laboratory (ORNL). The OLCF's new hybrid CPU/GPU computing system, Summit, will be delivered in 2017. (more) Links Department of Energy Consortium for Advanced Simulation of Light Water Reactors Extreme

    7. Review of Natural Phenomena Hazards (NPH) Requirements Currently Applied to

      Office of Environmental Management (EM)

      the Thomas Jefferson National Accelerator Facility (TJNAF) | Department of Energy Review of Natural Phenomena Hazards (NPH) Requirements Currently Applied to the Thomas Jefferson National Accelerator Facility (TJNAF) Review of Natural Phenomena Hazards (NPH) Requirements Currently Applied to the Thomas Jefferson National Accelerator Facility (TJNAF) Review of Natural Phenomena Hazards (NPH) Requirements Currently Applied to the Thomas Jefferson National Accelerator Facility (TJNAF) By:

    8. Overview of Applied Battery Research | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      10 DOE Vehicle Technologies and Hydrogen Programs Annual Merit Review and Peer Evaluation Meeting, June 7-11, 2010 -- Washington D.C. PDF icon es014_henriksen_2010_o.pdf More Documents & Publications Overview of Applied Battery Research Overview and Progress of the Applied Battery Research (ABR) Activity Overview and Progress of the Applied Battery Research (ABR) Activity

    9. SCIENCE ON SATURDAY- "Disastrous Equations: The Role of Mathematics in

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Understanding Tsunami" | Princeton Plasma Physics Lab 26, 2013, 9:30am Science On Saturday MBG Auditorium SCIENCE ON SATURDAY- "Disastrous Equations: The Role of Mathematics in Understanding Tsunami" Professor J. Douglas Wright, Associate Professor Department of Mathematics, Drexel University Presentation: PDF icon SOS26JAN2013_JDWright.pdf Science on Saturday is a series of lectures given by scientists, mathematicians, and other professionals involved in cutting-edge

    10. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      F G H I J K L M N O P Q R S T U V W X Y Z E, Weinan (Weinan E) - Department of Mathematics, Princeton University Ealy, Clifton (Clifton Ealy) - Department of Mathematics, Western ...

    11. Power throttling of collections of computing elements

      DOE Patents [OSTI]

      Bellofatto, Ralph E. (Ridgefield, CT); Coteus, Paul W. (Yorktown Heights, NY); Crumley, Paul G. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Gooding, Thomas M. (Rochester, MN); Haring, Rudolf A. (Cortlandt Manor, NY); Megerian, Mark G. (Rochester, MN); Ohmacht, Martin (Yorktown Heights, NY); Reed, Don D. (Mantorville, MN); Swetz, Richard A. (Mahopac, NY); Takken, Todd (Brewster, NY)

      2011-08-16

      An apparatus and method for controlling power usage in a computer includes a plurality of computers communicating with a local control device, and a power source supplying power to the local control device and the computer. A plurality of sensors communicate with the computer for ascertaining power usage of the computer, and a system control device communicates with the computer for controlling power usage of the computer.
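
      As a rough sketch of the idea (hypothetical names and numbers, not the patented apparatus): sensors report per-node power, and a local control device lowers the cap of the busiest nodes until the aggregate fits within the supply budget.

      # Illustrative power-throttling loop for a collection of compute nodes.
      # Names and numbers are hypothetical; this is not the patented apparatus.

      def read_power_sensors(nodes):
          """Stand-in for hardware sensors: returns measured watts per node."""
          return {name: watts for name, watts in nodes.items()}

      def throttle(nodes, budget_watts, step=0.9):
          """Local control device: scale down the busiest nodes until the
          aggregate measured power fits within the supply budget."""
          readings = read_power_sensors(nodes)
          while sum(readings.values()) > budget_watts:
              busiest = max(readings, key=readings.get)
              readings[busiest] *= step          # e.g. lower a frequency/voltage cap
          return readings

      nodes = {"node0": 320.0, "node1": 410.0, "node2": 290.0}
      caps = throttle(nodes, budget_watts=900.0)
      print("power caps after throttling:", {k: round(v, 1) for k, v in caps.items()})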

    12. About the Deputy Director: Short Scientific Biography

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lab Deputy Director Horst Simon Horst Simon is an internationally recognized expert in computer science and applied mathematics and the Deputy Director of Lawrence Berkeley...

    13. Microsoft PowerPoint - Polansky-Exascale-Salishan-20100426

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      through modeling and simulation through - Continued excellence in applied mathematics and computer science research - Strengthening new and established modes for cross-...

    14. Programs & User Facilities

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      understanding of energy and matter and advance national, economic, and energy security Advanced Scientific Computing Research Applied Mathematics Co-Design Centers Exascale...

    15. Sandia National Laboratories: Careers: Students & Postdocs: Internship...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Year-round and summer Who can apply Graduate and undergraduate students in computer science, mathematics, engineering, and related disciplines, as well as advanced high school...

    16. Maria K. Y. Chan | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computation Institute, The University of Chicago Education BSc, Physics and Applied Mathematics, University of California at Los Angeles PhD, Physics, Massachusetts Institute of...

    17. 1

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      August 3, 2015 Drawing on expertise from astrophysics, applied mathematics, fluid mechanics, data management, and computer science, a interdisciplinary multi-institution...

    18. Visit our website

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      to name just a few--- through the exploration of materials science, physics, chemistry, engineering, applied mathematics, and high performance computing. The Ames...

    19. Computational Needs for the Next Generation Electric Grid Proceedings

      SciTech Connect (OSTI)

      Birman, Kenneth; Ganesh, Lakshmi; Renessee, Robbert van; Ferris, Michael; Hofmann, Andreas; Williams, Brian; Sztipanovits, Janos; Hemingway, Graham; University, Vanderbilt; Bose, Anjan; Stivastava, Anurag; Grijalva, Santiago; Grijalva, Santiago; Ryan, Sarah M.; McCalley, James D.; Woodruff, David L.; Xiong, Jinjun; Acar, Emrah; Agrawal, Bhavna; Conn, Andrew R.; Ditlow, Gary; Feldmann, Peter; Finkler, Ulrich; Gaucher, Brian; Gupta, Anshul; Heng, Fook-Luen; Kalagnanam, Jayant R; Koc, Ali; Kung, David; Phan, Dung; Singhee, Amith; Smith, Basil

      2011-10-05

      The April 2011 DOE workshop, 'Computational Needs for the Next Generation Electric Grid', was the culmination of a year-long process to bring together some of the Nation's leading researchers and experts to identify computational challenges associated with the operation and planning of the electric power system. The attached papers provide a journey into these experts' insights, highlighting a class of mathematical and computational problems relevant for potential power systems research. While each paper defines a specific problem area, there were several recurrent themes. First, the breadth and depth of power system data has expanded tremendously over the past decade. This provides the potential for new control approaches and operator tools that can enhance system efficiencies and improve reliability. However, the large volume of data poses its own challenges, and could benefit from application of advances in computer networking and architecture, as well as database structures. Second, the computational complexity of the underlying system problems is growing. Transmitting electricity from clean, domestic energy resources in remote regions to urban consumers, for example, requires broader, regional planning over multi-decade time horizons. Yet, it may also mean operational focus on local solutions and shorter timescales, as reactive power and system dynamics (including fast switching and controls) play an increasingly critical role in achieving stability and ultimately reliability. The expected growth in reliance on variable renewable sources of electricity generation places an exclamation point on both of these observations, and highlights the need for new focus in areas such as stochastic optimization to accommodate the increased uncertainty that is occurring in both planning and operations. Application of research advances in algorithms (especially related to optimization techniques and uncertainty quantification) could accelerate power system software tool performance, i.e. speed to solution, and enhance applicability for new and existing real-time operation and control approaches, as well as large-scale planning analysis. Finally, models are becoming increasingly essential for improved decision-making across the electric system, from resource forecasting to adaptive real-time controls to online dynamics analysis. The importance of data is thus reinforced by their inescapable role in validating high-fidelity models that lead to deeper system understanding. Traditional boundaries (reflecting geographic, institutional, and market differences) are becoming blurred, and thus, it is increasingly important to address these seams in model formulation and utilization to ensure accuracy in the results and achieve predictability necessary for reliable operations. Each paper also embodies the philosophy that our energy challenges require interdisciplinary solutions - drawing on the latest developments in fields such as mathematics, computation, economics, as well as power systems. In this vein, the workshop should be viewed not as the end product, but the beginning of what DOE seeks to establish as a vibrant, on-going dialogue among these various communities. Bridging communication gaps among these communities will yield opportunities for innovation and advancement. The papers and workshop discussion provide the opportunity to learn from experts on the current state-of-the-art on computational approaches for electric power systems, and where one may focus to accelerate progress.
It has been extremely valuable to me as I better understand this space, and consider future programmatic activities. I am confident that you too will enjoy the discussion, and certainly learn from the many experts. I would like to thank the authors of the papers for sharing their perspectives, as well as the paper discussants, session recorders, and participants. The meeting would not have been as successful without your commitment and engagement. I also would like to thank Joe Eto and Bob Thomas for their vision and leadership in bringing together such a well-structured and productive forum.

    20. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

      SciTech Connect (OSTI)

      Kashyap, Vinay L.; Siemiginowska, Aneta [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Van Dyk, David A.; Xu Jin [Department of Statistics, University of California, Irvine, CA 92697-1250 (United States); Connors, Alanna [Eureka Scientific, 2452 Delmer Street, Suite 100, Oakland, CA 94602-3017 (United States); Freeman, Peter E. [Department of Statistics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States); Zezas, Andreas, E-mail: vkashyap@cfa.harvard.ed, E-mail: asiemiginowska@cfa.harvard.ed, E-mail: dvd@ics.uci.ed, E-mail: jinx@ics.uci.ed, E-mail: aconnors@eurekabayes.co, E-mail: pfreeman@cmu.ed, E-mail: azezas@cfa.harvard.ed [Physics Department, University of Crete, P.O. Box 2208, GR-710 03, Heraklion, Crete (Greece)

      2010-08-10

      A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
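
      A minimal numerical illustration of the recipe described above, using Poisson counts and scipy with invented numbers: first fix the detection threshold from the acceptable Type I error for the background alone, then find the smallest source intensity whose detection probability reaches the required power (1 - Type II error).

      import numpy as np
      from scipy.stats import poisson

      background = 3.0        # expected background counts (invented)
      alpha = 0.01            # acceptable Type I error (false positives)
      beta = 0.50             # acceptable Type II error (false negatives)

      # Step 1: detection threshold = smallest count n with P(N >= n | background) <= alpha.
      threshold = int(poisson.ppf(1 - alpha, background)) + 1

      # Step 2: upper limit = smallest source intensity detected with probability >= 1 - beta.
      def detection_probability(source):
          return poisson.sf(threshold - 1, background + source)   # P(N >= threshold)

      intensities = np.arange(0.0, 30.0, 0.01)
      upper_limit = next(s for s in intensities if detection_probability(s) >= 1 - beta)

      print(f"detection threshold: {threshold} counts")
      print(f"upper limit (alpha={alpha}, beta={beta}): {upper_limit:.2f} counts")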

    1. Is ""predictability"" in computational sciences a myth?

      SciTech Connect (OSTI)

      Hemez, Francois M [Los Alamos National Laboratory

      2011-01-31

      Within the last two decades, Modeling and Simulation (M&S) has become the tool of choice to investigate the behavior of complex phenomena. Successes encountered in 'hard' sciences are prompting interest in applying a similar approach to Computational Social Sciences in support, for example, of national security applications faced by the Intelligence Community (IC). This manuscript attempts to contribute to the debate on the relevance of M&S to IC problems by offering an overview of what it takes to reach 'predictability' in computational sciences. Even though models developed in 'soft' and 'hard' sciences are different, useful analogies can be drawn. The starting point is to view numerical simulations as 'filters' capable of representing information only within specific length, time or energy bandwidths. This simplified view leads to the discussion of resolving versus modeling, which motivates the need for sub-scale modeling. The role that modeling assumptions play in 'hiding' our lack-of-knowledge about sub-scale phenomena is explained, which leads to discussing uncertainty in simulations. It is argued that the uncertainty caused by resolution and modeling assumptions should be dealt with differently than uncertainty due to randomness or variability. The corollary is that a predictive capability cannot be defined solely as accuracy, or the ability of predictions to match the available physical observations. We propose that 'predictability' is the demonstration that predictions from a class of 'equivalent' models are as consistent as possible. Equivalency stems from defining models that share a minimum requirement of accuracy, while being equally robust to the sources of lack-of-knowledge in the problem. Examples in computational physics and engineering are given to illustrate the discussion.

    2. SSRL Computer Account Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SSRL/LCLS Computer Account Request Form, August 2009. Fill in this form and sign the security statement mentioned at the bottom of the page to obtain an account. Fields requested: name, institution, mailing address, email address, and telephone.

    3. Computing at SSRL Home Page

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The contents you are looking for have moved. You will be redirected to the new location automatically in 5 seconds. Please bookmark the correct page at http://www-ssrl.slac.stanford.edu/content/staff-resources/computer-networking-group.

    4. Events | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2:00 PM. Finding Multiple Local Minima of Computationally Expensive Simulations. Jeffery Larson, Postdoctoral Appointee, MCS. Building 240, Room 4301.

    5. Present and Future Computing Requirements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cosmology SciDAC-3 Project: Ann Almgren (LBNL), Nick Gnedin (FNAL), Dave Higdon (LANL), Rob Ross (ANL), Martin White (UC Berkeley/LBNL). Large Scale Production Computing and Storage...

    6. SSRL Computer Account Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SSRL/LCLS Computer Account Request Form, August 2009. Fill in this form and sign the security statement mentioned at the bottom of the page to obtain an account.

    7. Synchrotron-based X-ray computed tomography during compression loading of cellular materials

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Cordes, Nikolaus L.; Henderson, Kevin; Stannard, Tyler; Williams, Jason J.; Xiao, Xianghui; Robinson, Mathew W. C.; Schaedler, Tobias A.; Chawla, Nikhilesh; Patterson, Brian M.

      2015-04-29

      Three-dimensional X-ray computed tomography (CT) of in situ dynamic processes provides internal snapshot images as a function of time. Tomograms are mathematically reconstructed from a series of radiographs taken in rapid succession as the specimen is rotated in small angular increments. In addition to spatial resolution, temporal resolution, which indicates how close together in time two distinct tomograms can be acquired, is important. Tomograms taken in rapid succession allow detailed analyses of internal processes that cannot be obtained by other means. This article describes the state of the art for such measurements acquired using synchrotron radiation as the X-ray source.
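
      The reconstruction step described above can be sketched with standard tools; the toy phantom, the 1-degree angular step, and the use of scikit-image's radon/iradon routines below are illustrative assumptions, not the authors' acquisition or reconstruction pipeline.

      import numpy as np
      from skimage.transform import radon, iradon   # scikit-image

      # Illustrative sketch: simulate radiographs of a simple phantom at small
      # angular increments, then reconstruct a tomogram by filtered back-projection.
      phantom = np.zeros((128, 128))
      phantom[40:90, 50:80] = 1.0                   # a block stands in for the specimen

      angles = np.arange(0.0, 180.0, 1.0)           # 1-degree angular increments
      sinogram = radon(phantom, theta=angles)       # one column per projection angle
      tomogram = iradon(sinogram, theta=angles)     # default ramp-filtered back-projection

      print(sinogram.shape, tomogram.shape)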

    8. Synchrotron-based X-ray computed tomography during compression loading of cellular materials

      SciTech Connect (OSTI)

      Cordes, Nikolaus L.; Henderson, Kevin; Stannard, Tyler; Williams, Jason J.; Xiao, Xianghui; Robinson, Mathew W. C.; Schaedler, Tobias A.; Chawla, Nikhilesh; Patterson, Brian M.

      2015-04-29

      Three-dimensional X-ray computed tomography (CT) of in situ dynamic processes provides internal snapshot images as a function of time. Tomograms are mathematically reconstructed from a series of radiographs taken in rapid succession as the specimen is rotated in small angular increments. In addition to spatial resolution, temporal resolution, which indicates how close together in time two distinct tomograms can be acquired, is important. Tomograms taken in rapid succession allow detailed analyses of internal processes that cannot be obtained by other means. This article describes the state of the art for such measurements acquired using synchrotron radiation as the X-ray source.

    9. Final Report for the grant "Applied Geometry" (DOE DE-FG02-04ER25657)

      SciTech Connect (OSTI)

      Prof. Mathieu Desbrun

      2009-05-20

      The primary purpose of this 3-year DOE-funded research effort, now completed, was to develop consistent theoretical foundations for computations on discrete geometry, in order to realize the promise of predictive and scalable management of the large geometric datasets handled routinely in the applied sciences. Geometry (be it simple 3D shapes or higher-dimensional manifolds) is a central and challenging issue from the modeling and computational perspective in several sciences, such as mechanics, biology, molecular dynamics, and geophysics, as well as in engineering. From digital maps of our world and virtual car-crash simulation to predictive animation of carbon nanotubes and trajectory design of space missions, knowing how to process and animate digital geometry is key in many cross-disciplinary research areas.

    10. Computer Assisted Virtual Environment - CAVE

      ScienceCinema (OSTI)

      Erickson, Phillip; Podgorney, Robert; Weingartner, Shawn; Whiting, Eric

      2014-06-09

      Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

    11. Secure computing for the 'Everyman'

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      If implemented on a wide scale, quantum key distribution technology could ensure truly secure commerce, banking, communications and data transfer. September 2, 2014. This small device developed at Los Alamos National Laboratory uses the truly random spin of light particles, as defined by the laws of quantum mechanics, to generate a random number for use in a cryptographic key that can be used to securely transmit information.

    12. Computational Hydraulics for Transportation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Transportation Workshop, Sept. 23-24, 2009, Argonne TRACC, Dr. Steven Lottes. The Transportation Research and Analysis Computing Center at Argonne National Laboratory will hold a workshop on the use of computational hydraulics for transportation applications. The goals of the workshop are to bring together people who are using or would benefit from the use of high performance cluster...

    13. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      If you have questions or comments regarding any of our research and development activities, how to work with ORNL and the Computational Sciences and Engineering (CSE) Division, or the content of this website, please contact one of the following people. For questions regarding CSE technologies and capabilities, job opportunities, working with ORNL and the CSE Division, intellectual property, etc., contact Shaun S. Gleason, Ph.D., Division Director, Computational Sciences and Engineering.

    14. Mira | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mira, Argonne's 10-petaflops IBM Blue Gene/Q system, is one of the fastest supercomputers, capable of 10 quadrillion calculations per second. With this computing power, Mira can do in one day what it would take...

    15. Cooley | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The primary purpose of Cooley is to analyze and visualize data produced on Mira. Equipped with state-of-the-art graphics processing units (GPUs), Cooley converts computational data from Mira...

    16. LAMMPS | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      LAMMPS is a general-purpose molecular dynamics software package for massively parallel computers. It is written in an exceptionally clean style that makes it one of the most popular codes for users to extend and...

    17. Automatic computation of transfer functions

      DOE Patents [OSTI]

      Atcitty, Stanley; Watson, Luke Dale

      2015-04-14

      Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is an electrical, mechanical, electromechanical, electrochemical, or electromagnetic system. A netlist in the form of a matrix comprises data indicative of the elements in the physical system, the values of those elements, and the structure of the system. Transfer functions for the physical system are then computed based upon the netlist.
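
      A minimal sketch of the general idea, not the patented method: the netlist row format, node numbering, and the series-R/shunt-C example below are illustrative assumptions. A nodal admittance matrix is assembled from the netlist and the voltage transfer function H(s) = V_out(s)/V_in(s) is evaluated numerically.

      import numpy as np

      # Netlist rows are (kind, node_a, node_b, value); node 0 is ground,
      # node 1 is the input.  Example: 1 kOhm series resistor, 1 uF shunt capacitor.
      netlist = [
          ("R", 1, 2, 1.0e3),
          ("C", 2, 0, 1.0e-6),
      ]
      n_nodes = 2                          # non-ground nodes

      def admittance(kind, value, s):
          return 1.0 / value if kind == "R" else s * value   # R or C stamp

      def transfer_function(s, in_node=1, out_node=2):
          Y = np.zeros((n_nodes, n_nodes), dtype=complex)
          for kind, a, b, value in netlist:
              y = admittance(kind, value, s)
              for i in (a, b):
                  if i:                                      # skip ground
                      Y[i - 1, i - 1] += y
              if a and b:
                  Y[a - 1, b - 1] -= y
                  Y[b - 1, a - 1] -= y
          # Treat the input node as a fixed 1 V source: move its column to the RHS.
          keep = [i for i in range(n_nodes) if i != in_node - 1]
          rhs = -Y[np.ix_(keep, [in_node - 1])].ravel() * 1.0
          v = np.linalg.solve(Y[np.ix_(keep, keep)], rhs)
          return v[keep.index(out_node - 1)]

      for f in (10.0, 159.2, 10_000.0):    # Hz; 159.2 Hz is the RC corner frequency
          print(f, abs(transfer_function(2j * np.pi * f)))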

    18. Mathematical model for oil slick transport and mixing in rivers. Special report

      SciTech Connect (OSTI)

      Shen, H.T.; Yapa, P.D.; Wang, D.S.; Yang, X.Q.

      1993-08-01

      The growing concern over the impacts of oil spills on aquatic environments has led to the development of many computer models for simulating the transport and spreading of oil slicks in surface waters. Almost all of these models were developed for coastal environments; the few existing river models considered only the movement of surface oil slicks. In this study a two-layer model, ROSS2, is developed for simulating oil spills in rivers. This model considers the oil in the river to consist of a surface slick and suspended oil droplets entrained over the depth of the flow. The oil transformation processes considered in the model include advection, mechanical spreading, turbulent diffusion and mixing, evaporation, dissolution, emulsification, shoreline deposition and sinking. The model can be used for simulating instantaneous or continuous spills either on or under the water surface, in rivers with or without an ice cover. The model has been implemented for the Ohio-Monongahela-Allegheny river system and the upper St. Lawrence River. This report describes the model formulation and implementation. A case study is presented along with detailed explanations of the program structure and its input and output. Although it was developed for simulating oil spills, the model can be applied to spills of other hazardous materials. Keywords: Computer models, Oil spills, Oil slicks, Rivers.
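
      As a toy illustration of the advection and mixing terms only (ROSS2 itself is a two-layer model with many more processes, and the reach length, flow velocity, and mixing coefficient below are invented for the example), a 1-D explicit advection-diffusion sketch:

      import numpy as np

      # Upwind advection plus central-difference diffusion of a depth-averaged
      # slick concentration along a straight river reach.
      L, nx = 10_000.0, 400                # reach length [m], grid cells
      dx = L / nx
      u, D = 0.5, 5.0                      # flow velocity [m/s], mixing coefficient [m^2/s]
      dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # stable explicit time step

      c = np.zeros(nx)
      c[20:30] = 1.0                       # instantaneous spill near the upstream end

      def step(c):
          adv = -u * (c - np.roll(c, 1)) / dx                       # upwind advection
          dif = D * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
          c_new = c + dt * (adv + dif)
          c_new[0] = c_new[-1] = 0.0                                # open boundaries
          return c_new

      for _ in range(500):
          c = step(c)
      print(f"peak concentration after {500 * dt:.0f} s: {c.max():.3f}")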

    19. Browse by Discipline -- E-print Network Subject Pathways: Mathematics --

      Office of Scientific and Technical Information (OSTI)

      Energy, science, and technology for the research community, hosted by the Office of Scientific and Technical Information, U.S. Department of Energy. Individual researcher listings include Van Zee, John W. (Department of Chemical Engineering, University of South Carolina) and Varaiya, Pravin (Department of Electrical Engineering and Computer Sciences, University of California at Berkeley).

    20. ITP Chemicals: Technology Roadmap for Computational Fluid Dynamics, January 1999

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      ITP Chemicals: Technology Roadmap for Computational Fluid Dynamics, January 1999 (cfd_roadmap.pdf). More Documents & Publications: 3-D Combustion Simulation Strategy Status, Future Potential, and Application Issues; A Workshop to Identify Research Needs and Impacts in Predictive Simulation for Internal Combustion Engines (PreSICE); Vehicle Technologies Office Merit Review 2015: Large Eddy Simulation (LES) Applied to Advanced...