National Library of Energy BETA

Sample records for applied mathematics computer

  1. Applied Mathematics Conferences and Workshops | U.S. DOE Office...

    Office of Science (SC) Website

    Applied Mathematics Applied Mathematics Conferences And Workshops Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Applied Mathematics ...

  2. Applied & Computational Mathematics Challenges for the Design and Control of Dynamic Energy Systems

    SciTech Connect (OSTI)

    Brown, D L; Burns, J A; Collis, S; Grosh, J; Jacobson, C A; Johansen, H; Mezic, I; Narayanan, S; Wetter, M

    2011-03-10

    The Energy Independence and Security Act of 2007 (EISA) was passed with the goal 'to move the United States toward greater energy independence and security.' Energy security and independence cannot be achieved unless the United States addresses the issue of energy consumption in the building sector and significantly reduces energy consumption in buildings. Commercial and residential buildings account for approximately 40% of U.S. energy consumption and 50% of U.S. CO2 emissions, more than twice the total energy consumption of the entire U.S. automobile and light truck fleet. A 50%-80% improvement in building energy efficiency, in both new construction and in retrofitting existing buildings, could significantly reduce U.S. energy consumption and mitigate climate change. Reaching these aggressive building efficiency goals will not happen without significant Federal investments in the computational and mathematical sciences. Applied and computational mathematics are required to enable the development of algorithms and tools to design, control, and optimize energy-efficient buildings. The challenge has been issued by the U.S. Secretary of Energy, Dr. Steven Chu (emphasis added): 'We need to do more transformational research at DOE including computer design tools for commercial and residential buildings that enable reductions in energy consumption of up to 80 percent with investments that will pay for themselves in less than 10 years.' On July 8-9, 2010, a team of technical experts from industry, government, and academia was assembled in Arlington, Virginia to identify the challenges associated with developing and deploying new computational methodologies and tools that will address building energy efficiency. These experts concluded that investments in fundamental applied and computational mathematics will be required to build enabling technology that can be used to realize the target of 80% reductions in energy consumption. 
In addition, the experts found that there are tools and technologies that can be assembled and deployed in the short term (the next 3-5 years) to significantly reduce the cost and time of delivering moderate energy savings in the U.S. building stock. Simulation tools, which are a core strength of current DOE computational research programs, provide only part of the answer by providing a basis for simulation-enabled design. New investments will be required within a broad dynamics-and-control research agenda, which must focus on dynamics, control, optimization, and simulation of multi-scale energy systems during design and operation. U.S. investments in high performance and high productivity computing (HP2C) should be leveraged and coupled with advances in dynamics and control to impact both the existing building stock, through retrofits, and new construction. The essential R&D areas requiring investment are: (1) Characterizing the Dynamics of Multi-scale Energy Systems; (2) Control and Optimization Methodologies for Multi-scale Energy Systems Under Uncertainty; and (3) Multi-scale Modeling and Simulation Enabled Design and Operation. The concept of using design- and control-specific computational tools is a new idea for the building industry. The potential payoffs in terms of accelerated design cycle times, performance optimization, and optimal supervisory control to obtain and maintain energy savings are huge. Recent advances in computational power, computer science, and mathematical algorithms offer the foundations to address the control problems presented by the complex dynamics of whole-building systems. The key areas for focus, and the associated metrics with targets for establishing competitiveness in energy-efficient building design and operation, are: (1) Scalability - Current methodology and tools can provide design guidance for very low energy buildings in weeks to months; what is needed is hours to days. A 50X improvement is needed. 
(2) Installation and commissioning - Current methodology and tools can target a three-month window for commissioning of building subsystems; what is needed is one week. A 10X improvement is needed. (3) Quality - Current design tools can achieve 30% accuracy; what is needed to make design decisions is 5%, with quantification of uncertainty. A 5X improvement is needed. These challenges cannot be overcome by raw computational power alone; they require the development of new algorithms. Here, 'algorithms' means much more than simulating the building physics: the algorithms must capture the building, its associated control systems, and the entire set of dynamics. They must represent computationally new mathematical approaches to modeling, simulation, optimization, and control of large multi-scale dynamic systems, and bring these elements to bear on industry through simulation-enabled design approaches.

  3. Computational physics and applied mathematics capability review June 8-10, 2010

    SciTech Connect (OSTI)

    Lee, Stephen R

    2010-01-01

    Los Alamos National Laboratory will review its Computational Physics and Applied Mathematics (CPAM) capabilities in 2010. The goals of capability reviews are to assess the quality of science, technology, and engineering (STE) performed by the capability, evaluate the integration of this capability across the Laboratory and within the scientific community, examine the relevance of this capability to the Laboratory's programs, and provide advice on the current and future directions of this capability. This is the first such review for CPAM, which has a long and unique history at the Laboratory, starting from the inception of the Laboratory in 1943. The CPAM capability covers an extremely broad technical area at Los Alamos, encompassing a wide array of disciplines, research topics, and organizations. A vast array of technical disciplines and activities are included in this capability, from general numerical modeling, to coupled multi-physics simulations, to detailed domain science activities in mathematics, methods, and algorithms. The CPAM capability involves over 12 different technical divisions and a majority of our programmatic and scientific activities. To make this large scope tractable, the CPAM capability is broken into the following six technical 'themes.' These themes represent technical slices through the CPAM capability and collect critical core competencies of the Laboratory, each of which contributes to the capability (and each of which is divided into multiple additional elements in the detailed descriptions of the themes in subsequent sections), as follows. Theme 1: Computational Fluid Dynamics - This theme speaks to the vast array of scientific capabilities for the simulation of fluids under shocks, low-speed flow, and turbulent conditions - which are key, historical, and fundamental strengths of the Laboratory. 
Theme 2: Partial Differential Equations - The technical scope of this theme is the applied mathematics and numerical solution of partial differential equations (broadly defined) in a variety of settings, including particle transport, solvers, and plasma physics. Theme 3: Monte Carlo - Monte Carlo was invented at Los Alamos. This theme discusses these vitally important methods and their application in everything from particle transport, to condensed matter theory, to biology. Theme 4: Molecular Dynamics - This theme describes the widespread use of molecular dynamics for a variety of important applications, including nuclear energy, materials science, and biological modeling. Theme 5: Discrete Event Simulation - The technical scope of this theme represents a class of complex system evolutions governed by the action of discrete events. Examples include network, communication, vehicle traffic, and epidemiology modeling. Theme 6: Integrated Codes - This theme discusses integrated applications (comprised of all of the supporting science represented in Themes 1-5) that are of strategic importance to the Laboratory and the nation. The Laboratory has approximately 10 million source lines of code in over 100 different such strategically important applications. Of these themes, four will be reviewed during the 2010 review cycle: Themes 1, 2, 3, and 6. Because these reviews occur every three years, Themes 4 and 5 will be reviewed in 2013, along with Theme 6 (which will be reviewed during each review, owing to this theme's role as an integrator of the supporting science represented by the other five themes). Yearly written status reports will be provided to the CPAM Committee Chair during off-cycle years.
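The Monte Carlo methods named in Theme 3 can be illustrated with a minimal sketch (illustrative only, not drawn from the record): estimating pi by sampling points uniformly in the unit square and counting the fraction that land inside the quarter circle. The function name and sample count are choices for this example.

```python
import random

def monte_carlo_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by uniform sampling: the fraction of random points
    (x, y) in [0, 1)^2 with x^2 + y^2 <= 1 approaches pi/4."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

estimate = monte_carlo_pi(100_000)
```

The statistical error shrinks like 1/sqrt(n), which is why Monte Carlo pays off mainly in high-dimensional settings such as particle transport, where grid-based quadrature is infeasible.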

  4. Mathematical and Computational Epidemiology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematical and Computational Epidemiology Search Site submit Contacts | Sponsors Mathematical and Computational Epidemiology Los Alamos National Laboratory Menu About Contact Sponsors Research Agent-based Modeling Mixing Patterns, Social Networks Mathematical Epidemiology Social Internet Research Uncertainty Quantification Publications People Mathematical and Computational Epidemiology (MCEpi) Quantifying model uncertainty in agent-based simulations for

  5. Experimental Mathematics and Computational Statistics

    SciTech Connect (OSTI)

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.
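One canonical intersection of the two fields alluded to in the abstract is testing whether the digits of a computed mathematical quantity behave like uniform random digits. A minimal sketch (illustrative only; the function name and the choice of 2**10000 as the test quantity are assumptions for this example) is a chi-square statistic over decimal digit counts:

```python
from collections import Counter

def chi_square_uniform(digits: str) -> float:
    """Chi-square statistic for the hypothesis that the decimal digits
    0-9 occur with equal frequency (9 degrees of freedom)."""
    counts = Counter(digits)
    n = len(digits)
    expected = n / 10
    return sum(
        (counts.get(str(d), 0) - expected) ** 2 / expected
        for d in range(10)
    )

# About 3,000 digits of a large power of two; under the uniformity
# hypothesis the statistic should be near 9 and rarely above ~27.
stat = chi_square_uniform(str(2 ** 10000))
```

A perfectly balanced digit string gives a statistic of exactly 0, while a string of a single repeated digit scores very high, so the statistic cleanly separates the two extremes.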

  6. Applied Mathematics | U.S. DOE Office of Science (SC)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied Mathematics Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Applied Mathematics Conferences And Workshops Computer Science Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources Contact Information Advanced Scientific Computing Research U.S. Department of

  7. Applied Mathematics and Plasma Physics

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied Mathematics and Plasma Physics Maintaining mathematics, theory, modeling, and simulation capabilities in a broad set of areas Leadership Group Leader Pieter Swart Email Deputy Group Leader (Acting) Luis Chacon Email Contact Us Administrator Charlotte Lehman Email Electron density simulation Electron density from an orbital-free quantum molecular dynamics simulation for a warm dense plasma of deuterium at density 10 g/cc and temperature 10 eV. Mathematical, theory, modeling, and

  8. Computational physics and applied mathematics capability review June 8-10, 2010 (Advance materials to committee members)

    SciTech Connect (OSTI)

    Lee, Stephen R

    2010-01-01

    Los Alamos National Laboratory will review its Computational Physics and Applied Mathematics (CPAM) capabilities in 2010. The goals of capability reviews are to assess the quality of science, technology, and engineering (STE) performed by the capability, evaluate the integration of this capability across the Laboratory and within the scientific community, examine the relevance of this capability to the Laboratory's programs, and provide advice on the current and future directions of this capability. This is the first such review for CPAM, which has a long and unique history at the Laboratory, starting from the inception of the Laboratory in 1943. The CPAM capability covers an extremely broad technical area at Los Alamos, encompassing a wide array of disciplines, research topics, and organizations. A vast array of technical disciplines and activities are included in this capability, from general numerical modeling, to coupled multi-physics simulations, to detailed domain science activities in mathematics, methods, and algorithms. The CPAM capability involves over 12 different technical divisions and a majority of our programmatic and scientific activities. To make this large scope tractable, the CPAM capability is broken into the following six technical 'themes.' 
These themes represent technical slices through the CPAM capability and collect critical core competencies of the Laboratory, each of which contributes to the capability (and each of which is divided into multiple additional elements in the detailed descriptions of the themes in subsequent sections): (1) Computational Fluid Dynamics - This theme speaks to the vast array of scientific capabilities for the simulation of fluids under shocks, low-speed flow, and turbulent conditions - which are key, historical, and fundamental strengths of the Laboratory; (2) Partial Differential Equations - The technical scope of this theme is the applied mathematics and numerical solution of partial differential equations (broadly defined) in a variety of settings, including particle transport, solvers, and plasma physics; (3) Monte Carlo - Monte Carlo was invented at Los Alamos, and this theme discusses these vitally important methods and their application in everything from particle transport, to condensed matter theory, to biology; (4) Molecular Dynamics - This theme describes the widespread use of molecular dynamics for a variety of important applications, including nuclear energy, materials science, and biological modeling; (5) Discrete Event Simulation - The technical scope of this theme represents a class of complex system evolutions governed by the action of discrete events. Examples include network, communication, vehicle traffic, and epidemiology modeling; and (6) Integrated Codes - This theme discusses integrated applications (comprised of all of the supporting science represented in Themes 1-5) that are of strategic importance to the Laboratory and the nation. The Laboratory has approximately 10 million source lines of code in over 100 different such strategically important applications. Of these themes, four will be reviewed during the 2010 review cycle: Themes 1, 2, 3, and 6. 
Because these capability reviews occur every three years, Themes 4 and 5 will be reviewed in 2013, along with Theme 6 (which will be reviewed during each review, owing to this theme's role as an integrator of the supporting science represented by the other five themes). Yearly written status reports will be provided to the Capability Review Committee Chair during off-cycle years.

  9. Applied Mathematics Conferences and Workshops | U.S. DOE Office of Science

    Office of Science (SC) Website

    (SC) Applied Mathematics » Applied Mathematics Conferences And Workshops Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Applied Mathematics Conferences And Workshops Computer Science Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources Contact Information

  10. Applied & Computational Math

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    & Computational Math - Sandia Energy Search Sandia Home Locations Contact Us ... Applied & Computational Math Home Energy ...

  11. Information Science, Computing, Applied Math

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Information Science, Computing, Applied Math /science-innovation/_assets/images/icon-science.jpg Information Science, Computing, Applied Math National security depends on science ...

  12. Mathematics and Computer Science Division | Argonne National...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematics and Computer Science Division To help solve some of the nation's most critical scientific problems, the Mathematics and Computer Science (MCS) Division at Argonne ...

  13. Applied Computer Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied Computer Science Innovative co-design of applications, algorithms, and architectures in order to enable scientific simulations at extreme scale Leadership Group Leader Linn Collins Email Deputy Group Leader (Acting) Bryan Lally Email Climate modeling visualization Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and blue color scale. These colors were

  14. The Applied Mathematics for Power Systems (AMPS) (Technical Report...

    Office of Scientific and Technical Information (OSTI)

    ... Subject: 24 POWER TRANSMISSION AND DISTRIBUTION; 99 GENERAL AND MISCELLANEOUS MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; 97 MATHEMATICAL METHODS AND COMPUTING; ALGORITHMS; ...

  15. Information Science, Computing, Applied Math

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Information Science, Computing, Applied Math /science-innovation/_assets/images/icon-science.jpg Information Science, Computing, Applied Math National security depends on science and technology. The United States relies on Los Alamos National Laboratory for the best of both. No place on Earth pursues a broader array of world-class scientific endeavors. Computer, Computational, and Statistical Sciences (CCS)» High Performance Computing (HPC)» Extreme Scale Computing, Co-design» supercomputing

  16. Applied Computer Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Results from a climate simulation computed using the Model for Prediction Across Scales (MPAS) code. This visualization shows the temperature of ocean currents using a green and ...

  17. Applying computationally efficient schemes for biogeochemical...

    Office of Scientific and Technical Information (OSTI)

    Sponsoring Org: USDOE Office of Science (SC) Country of Publication: United States Language: English Subject: 54 ENVIRONMENTAL SCIENCES; 97 MATHEMATICS AND COMPUTING Word Cloud ...

  18. The Applied Mathematics for Power Systems (AMPS) (Technical Report) |

    Office of Scientific and Technical Information (OSTI)

    SciTech Connect Technical Report: The Applied Mathematics for Power Systems (AMPS) Citation Details In-Document Search Title: The Applied Mathematics for Power Systems (AMPS) Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to

  19. TriBITS lifecycle model. Version 1.0, a lean/agile software lifecycle model for research-based computational science and engineering and applied mathematical software.

    SciTech Connect (OSTI)

    Willenbring, James M.; Bartlett, Roscoe Ainsworth; Heroux, Michael Allen

    2012-01-01

    Software lifecycles are becoming an increasingly important issue for computational science and engineering (CSE) software. The process by which a piece of CSE software begins life as a set of research requirements and then matures into a trusted high-quality capability is both commonplace and extremely challenging. Although an implicit lifecycle is obviously being used in any effort, the challenges of this process - respecting the competing needs of research vs. production - cannot be overstated. Here we describe a proposal for a well-defined software lifecycle process based on modern Lean/Agile software engineering principles. What we propose is appropriate for many CSE software projects that are initially heavily focused on research but also are expected to eventually produce usable high-quality capabilities. The model is related to TriBITS, a build, integration and testing system, which serves as a strong foundation for this lifecycle model, and aspects of this lifecycle model are ingrained in the TriBITS system. Here, we advocate three to four phases or maturity levels that address the appropriate handling of many issues associated with the transition from research to production software. The goals of this lifecycle model are to better communicate maturity levels with customers and to help to identify and promote Software Engineering (SE) practices that will help to improve productivity and produce better software. An important collection of software in this domain is Trilinos, which is used as the motivation and the initial target for this lifecycle model. However, many other related and similar CSE (and non-CSE) software projects can also make good use of this lifecycle model, especially those that use the TriBITS system. Indeed this lifecycle process, if followed, will enable large-scale sustainable integration of many complex CSE software efforts across several institutions.

  20. Computing and Computational Sciences Directorate - Computer Science...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Science and Mathematics Division The Computer Science and Mathematics Division (CSMD) is ORNL's premier source of basic and applied research in high-performance computing, ...

  1. Applying computationally efficient schemes for biogeochemical cycles

    Office of Scientific and Technical Information (OSTI)

    (ACES4BGC) (Technical Report) | SciTech Connect Applying computationally efficient schemes for biogeochemical cycles (ACES4BGC) Citation Details In-Document Search Title: Applying computationally efficient schemes for biogeochemical cycles (ACES4BGC) NCAR contributed to the ACES4BGC project through software engineering work on aerosol model implementation, build system and script changes, coupler enhancements for biogeochemical tracers, improvements to the Community Land Model (CLM) code and

  2. Physics, Computer Science and Mathematics Division. Annual report, 1 January-31 December 1979

    SciTech Connect (OSTI)

    Lepore, J.V.

    1980-09-01

    This annual report describes the research work carried out by the Physics, Computer Science and Mathematics Division during 1979. The major research effort of the Division remained High Energy Particle Physics with emphasis on preparing for experiments to be carried out at PEP. The largest effort in this field was for development and construction of the Time Projection Chamber, a powerful new particle detector. This work took a large fraction of the effort of the physics staff of the Division together with the equivalent of more than a hundred staff members in the Engineering Departments and shops. Research in the Computer Science and Applied Mathematics Department of the Division (CSAM) has been rapidly expanding during the last few years. Cross fertilization of ideas and talents resulting from the diversity of effort in the Physics, Computer Science and Mathematics Division contributed to the software design for the Time Projection Chamber, made by the Computer Science and Applied Mathematics Department.

  3. Physics, Computer Science and Mathematics Division. Annual report, January 1-December 31, 1980

    SciTech Connect (OSTI)

    Birge, R.W.

    1981-12-01

    Research in the physics, computer science, and mathematics division is described for the year 1980. While the division's major effort remains in high energy particle physics, there is a continually growing program in computer science and applied mathematics. Experimental programs are reported in e+e- annihilation, muon and neutrino reactions at FNAL, search for effects of a right-handed gauge boson, limits on neutrino oscillations from muon-decay neutrinos, strong interaction experiments at FNAL, strong interaction experiments at BNL, particle data center, Barrelet moment analysis of pi-N scattering data, astrophysics and astronomy, earth sciences, and instrument development and engineering for high energy physics. In theoretical physics research, studies included particle physics and accelerator physics. Computer science and mathematics research included analytical and numerical methods, information analysis techniques, advanced computer concepts, and environmental and epidemiological studies. (GHT)

  4. Physics, Computer Science and Mathematics Division annual report, 1 January-31 December 1983

    SciTech Connect (OSTI)

    Jackson, J.D.

    1984-08-01

    This report summarizes the research performed in the Physics, Computer Science and Mathematics Division of the Lawrence Berkeley Laboratory during calendar year 1983. The major activity of the Division is research in high-energy physics, both experimental and theoretical, and research and development in associated technologies. A smaller, but still significant, program is in computer science and applied mathematics. During 1983 there were approximately 160 people in the Division active in or supporting high-energy physics research, including about 40 graduate students. In computer science and mathematics, the total staff, including students and faculty, was roughly 50. Because of the creation in late 1983 of a Computing Division at LBL and the transfer of the Computer Science activities to the new Division, this annual report is the last from the Physics, Computer Science and Mathematics Division. In December 1983 the Division reverted to its historic name, the Physics Division. Its future annual reports will document high energy physics activities and also those of its Mathematics Department.

  5. September 2013 Most Viewed Documents for Mathematics And Computing | OSTI,

    Office of Scientific and Technical Information (OSTI)

    US Dept of Energy, Office of Scientific and Technical Information September 2013 Most Viewed Documents for Mathematics And Computing Science Subject Feed Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 169 Lecture notes for introduction to safety and health Biele, F. (1992) 57 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 50 Computational procedures for determining

  6. September 2015 Most Viewed Documents for Mathematics And Computing | OSTI,

    Office of Scientific and Technical Information (OSTI)

    US Dept of Energy, Office of Scientific and Technical Information September 2015 Most Viewed Documents for Mathematics And Computing Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 1049 Lecture notes for introduction to safety and health Biele, F. (1992) 333 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 286 Ferrite Measurement in Austenitic and Duplex Stainless Steel Castings -

  7. Most Viewed Documents for Mathematics and Computing: December 2014 | OSTI,

    Office of Scientific and Technical Information (OSTI)

    US Dept of Energy, Office of Scientific and Technical Information Most Viewed Documents for Mathematics and Computing: December 2014 Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 322 Levenberg-Marquardt algorithm: implementation and theory More, J.J. (1977) 64 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 51 Lecture notes for introduction to safety and health Biele, F. (1992) 50
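The frequently viewed Levenberg-Marquardt reference (More, 1977) describes damped Gauss-Newton steps for nonlinear least squares. A minimal one-parameter sketch (illustrative only, not More's reference implementation; the model y = exp(a*x) and all names are choices for this example):

```python
import math

def levenberg_marquardt_1p(xs, ys, a0=0.0, lam=1e-3, iters=100):
    """Fit y = exp(a*x) by Levenberg-Marquardt: take damped
    Gauss-Newton steps, shrinking the damping factor lam after a
    successful step and growing it after a rejected one."""
    def cost(a):
        return sum((math.exp(a * x) - y) ** 2 for x, y in zip(xs, ys))

    a, c = a0, cost(a0)
    for _ in range(iters):
        # Residuals r_i = exp(a x_i) - y_i, Jacobian J_i = x_i exp(a x_i)
        jtj = sum((x * math.exp(a * x)) ** 2 for x in xs)
        jtr = sum(x * math.exp(a * x) * (math.exp(a * x) - y)
                  for x, y in zip(xs, ys))
        delta = -jtr / (jtj + lam)   # damped normal equation (scalar case)
        if cost(a + delta) < c:      # step improved the fit: accept it
            a, c, lam = a + delta, cost(a + delta), lam / 10
        else:                        # step made things worse: damp harder
            lam *= 10
        if abs(delta) < 1e-12:
            break
    return a

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]
a_fit = levenberg_marquardt_1p(xs, ys)
```

Large lam makes the step behave like cautious gradient descent; small lam recovers the fast Gauss-Newton step, which is the trade-off the method navigates automatically.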

  8. Webinar "Applying High Performance Computing to Engine Design...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Webinar "Applying High Performance Computing to Engine Design Using Supercomputers" Share ... Study Benefits of Bioenergy Crop Integration Video: Biofuel technology at Argonne

  9. Most Viewed Documents for Mathematics and Computing: September 2014 | OSTI,

    Office of Scientific and Technical Information (OSTI)

    US Dept of Energy, Office of Scientific and Technical Information for Mathematics and Computing: September 2014 Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 193 Lecture notes for introduction to safety and health Biele, F. (1992) 56 Mort User's Manual: For use with the Management Oversight and Risk Tree analytical logic diagram Knox, N.W.; Eicher, R.W. (1992) 51 Levenberg-Marquardt algorithm: implementation and theory More, J.J.

  10. December 2015 Most Viewed Documents for Mathematics And Computing | OSTI,

    Office of Scientific and Technical Information (OSTI)

    US Dept of Energy, Office of Scientific and Technical Information December 2015 Most Viewed Documents for Mathematics And Computing Process Equipment Cost Estimation, Final Report H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 1446 Automotive vehicle sensors Sheen, S.H.; Raptis, A.C.; Moscynski, M.J. (1995) 373 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 365 Lecture notes for introduction to safety and health Biele, F. (1992) 324

  11. Apply for the Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel Computing » How to Apply Apply for the Parallel Computing Summer Research Internship Creating next-generation leaders in HPC research and applications development Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nicole Aguilar Garcia (505) 665-3048 Email Current application deadline is February 5, 2016 with notification by early March 2016. Who can apply? Upper division undergraduate

  12. Name Center for Applied Scientific Computing month day, 1998

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Bosl, Art Mirin, Phil Duffy Lawrence Livermore National Lab Climate and Carbon Cycle Modeling Group Center for Applied Scientific Computing April 24, 2003 High Resolution Climate Simulation and Regional Water Supplies WJB 2 CASC/CCCM High-Performance Computing for Climate Modeling as a Planning Tool GLOBAL WARMING IS HERE!! ... so now what? How will climate change really affect societies? Effects of global climate change are local Some effects of climate change can be mitigated Requires accurate

  13. Progress report No. 56, October 1, 1979-September 30, 1980. [Courant Mathematics and Computing Lab., New York Univ.]

    SciTech Connect (OSTI)

    1980-10-01

    Research during the period is sketched in a series of abstract-length summaries. The forte of the Laboratory lies in the development and analysis of mathematical models and efficient computing methods for the rapid solution of technological problems of interest to DOE, in particular, the detailed calculation on large computers of complicated fluid flows in which reactions and heat conduction may be taking place. The research program of the Laboratory encompasses two broad categories: analytical and numerical methods, which include applied analysis, computational mathematics, and numerical methods for partial differential equations, and advanced computer concepts, which include software engineering, distributed systems, and high-performance systems. Lists of seminars and publications are included. (RWR)

  14. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    SciTech Connect (OSTI)

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  15. Department of Energy: MICS (Mathematical, Information, and Computational Sciences Division). High performance computing and communications program

    SciTech Connect (OSTI)

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  16. GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION...

    Office of Scientific and Technical Information (OSTI)

    PC-1D installation manual and user's guide, Basore, P.A. 14 SOLAR ENERGY; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; 42 ENGINEERING; CHARGE...

  17. KNUPP,PATRICK 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING...

    Office of Scientific and Technical Information (OSTI)

    DIFFERENTIAL EQUATIONS; VERIFICATION; COMPUTER CODES; NUMERICAL SOLUTION; FLUID MECHANICS A procedure for code Verification by the Method of Manufactured Solutions (MMS) is...

  18. September 2013 Most Viewed Documents for Mathematics And Computing...

    Office of Scientific and Technical Information (OSTI)

    to quantitative, Altenbach, T.J. (1995) 50. Computational procedures for determining ... Ueng, Tzou-Shin; Chen, Jian-Chu (1992) 50. Analytical considerations in the code ...

  19. March 2014 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information. Most Viewed Documents for Mathematics And Computing Science Subject Feed: Process Equipment Cost Estimation, Final Report, H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 291. Ten Problems in Experimental Mathematics, Bailey, David H.; Borwein, Jonathan M.; Kapoor, Vishaal; Weisstein, Eric (2004) 101. The Effects of Nuclear Weapons, Glasstone, Samuel (1964) 72. Levenberg--Marquardt algorithm: implementation and

  20. July 2013 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information. July 2013 Most Viewed Documents for Mathematics And Computing Science Subject Feed: Process Equipment Cost Estimation, Final Report, H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 567. A comparison of risk assessment techniques from qualitative to quantitative, Altenbach, T.J. (1995) 89. Lecture notes for introduction to safety and health, Biele, F. (1992) 78. Computational procedures for determining parameters

  1. June 2014 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information. June 2014 Most Viewed Documents for Mathematics And Computing Science Subject Feed: Process Equipment Cost Estimation, Final Report, H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 337. The Effects of Nuclear Weapons, Glasstone, Samuel (1964) 71. Levenberg--Marquardt algorithm: implementation and theory, More, J.J. (1977) 68. Computational procedures for determining parameters in Ramberg-Osgood elastoplastic

  2. NREL: Computational Science Home Page

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    high-performance computing, computational science, applied mathematics, scientific data management, visualization, and informatics. NREL is home to the largest high performance...

  3. Fourth SIAM conference on mathematical and computational issues in the geosciences: Final program and abstracts

    SciTech Connect (OSTI)

    1997-12-31

    The conference focused on computational and modeling issues in the geosciences. Of the geosciences, problems associated with phenomena occurring in the earth's subsurface were best represented. Topics in this area included petroleum recovery, ground water contamination and remediation, seismic imaging, parameter estimation, upscaling, geostatistical heterogeneity, reservoir and aquifer characterization, optimal well placement and pumping strategies, and geochemistry. Additional sessions were devoted to the atmosphere, surface water and oceans. The central mathematical themes included computational algorithms and numerical analysis, parallel computing, mathematical analysis of partial differential equations, statistical and stochastic methods, optimization, inversion, homogenization and renormalization. The problem areas discussed at this conference are of considerable national importance, with the increasing importance of environmental issues, global change, remediation of waste sites, declining domestic energy sources and an increasing reliance on producing the most out of established oil reservoirs.

  4. March 2016 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information. Most Viewed Documents for Mathematics And Computing: Process Equipment Cost Estimation, Final Report, H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 2444. Automotive vehicle sensors, Sheen, S.H.; Raptis, A.C.; Moscynski, M.J. (1995) 726. A comparison of risk assessment techniques from qualitative to quantitative, Altenbach, T.J. (1995) 560. Ferrite Measurement in Austenitic and Duplex Stainless Steel Castings - Literature Review, Lundin, C.D.; Zhou, G.;

  5. April 2013 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information. April 2013 Most Viewed Documents for Mathematics And Computing Science Subject Feed: Publications in biomedical and environmental sciences programs, 1981, Moody, J.B. (comp.) (1982) 306. A comparison of risk assessment techniques from qualitative to quantitative, Altenbach, T.J. (1995) 159. Lecture notes for introduction to safety and health, Biele, F. (1992) 138. Analytical considerations in the code qualification of

  6. January 2013 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information January 2013 Most Viewed Documents for Mathematics And Computing Cybersecurity through Real-Time Distributed Control Systems Kisner, Roger A [ORNL]; Manges, Wayne W [ORNL]; MacIntyre, Lawrence Paul [ORNL]; Nutaro, James J [ORNL]; Munro Jr, John K [ORNL]; Ewing, Paul D [ORNL]; Howlader, Mostofa [ORNL]; Kuruganti, Phani Teja [ORNL]; Wallace, Richard M [ORNL]; Olama, Mohammed M [ORNL] REACTOR ANALYSIS AND VIRTUAL CONTROL ENVIRONMENT

  7. June 2015 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information. June 2015 Most Viewed Documents for Mathematics And Computing: Process Equipment Cost Estimation, Final Report, H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 833. Lecture notes for introduction to safety and health, Biele, F. (1992) 256. Systems engineering management plans, Rodriguez, Tamara S. (2009) 218. A comparison of risk assessment techniques from qualitative to quantitative, Altenbach, T.J. (1995) 216. Ferrite

  8. March 2015 Most Viewed Documents for Mathematics And Computing | OSTI, US

    Office of Scientific and Technical Information (OSTI)

    Dept of Energy, Office of Scientific and Technical Information. Most Viewed Documents for Mathematics And Computing: Process Equipment Cost Estimation, Final Report, H.P. Loh; Jennifer Lyons; Charles W. White, III (2002) 1019. A comparison of risk assessment techniques from qualitative to quantitative, Altenbach, T.J. (1995) 183. Lecture notes for introduction to safety and health, Biele, F. (1992) 172. Mort User's Manual: For use with the Management Oversight and Risk Tree analytical logic

  9. Physics, computer science and mathematics division. Annual report, 1 January - 31 December 1982

    SciTech Connect (OSTI)

    Jackson, J.D.

    1983-08-01

    Experimental physics research activities are described under the following headings: research on e⁺e⁻ annihilation; research at Fermilab; search for effects of a right-handed gauge boson; the particle data center; high energy astrophysics and interdisciplinary experiments; detector and other research and development; publications and reports of other research; computation and communication; and engineering, evaluation, and support operations. Theoretical particle physics research and heavy ion fusion research are described. Also, activities of the Computer Science and Mathematics Department are summarized. Publications are listed. (WHK)

  10. Mathematical Applications

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mathematical Applications » Mathematica: Mathematica is a fully integrated environment for technical computing. It performs symbolic manipulation of equations, integrals, ...

  11. Previous Computer Science Award Announcements | U.S. DOE Office...

    Office of Science (SC) Website

    Previous Computer Science Award Announcements Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop ...

  12. High-performance Computing Applied to Semantic Databases

    SciTech Connect (OSTI)

    Goodman, Eric L.; Jimenez, Edward; Mizell, David W.; al-Saffar, Sinan; Adolf, Robert D.; Haglin, David J.

    2011-06-02

    To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.
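The first of the three pieces named in the abstract, dictionary encoding, replaces each RDF term with a compact integer ID so that triples can be stored and joined as integer tuples. A minimal sketch of the idea follows; the names `Dictionary` and `encode_triple` are invented for this illustration and are not identifiers from the Cray XMT implementation described in the record.

```python
# Illustrative sketch of dictionary encoding for RDF triples.
# Names here are assumptions of this example, not the paper's API.

class Dictionary:
    """Bidirectional term <-> integer-ID mapping."""
    def __init__(self):
        self.term_to_id = {}
        self.id_to_term = []

    def encode(self, term):
        # Assign the next free ID the first time a term is seen.
        if term not in self.term_to_id:
            self.term_to_id[term] = len(self.id_to_term)
            self.id_to_term.append(term)
        return self.term_to_id[term]

    def decode(self, ident):
        return self.id_to_term[ident]

def encode_triple(d, s, p, o):
    """Encode a (subject, predicate, object) triple as integer IDs."""
    return (d.encode(s), d.encode(p), d.encode(o))

d = Dictionary()
t1 = encode_triple(d, "ex:alice", "foaf:knows", "ex:bob")
t2 = encode_triple(d, "ex:bob", "foaf:knows", "ex:alice")
# Shared terms reuse IDs, so the joins needed by RDFS inference and
# query processing become integer comparisons rather than string ones.
```

At the scale the abstract reports (20 billion triples held in memory), the point of the encoding is that fixed-width integers are far cheaper to store and compare than IRIs; how this step is parallelized on the XMT's shared memory is beyond this sketch.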

  13. High-performance computing applied to semantic databases.

    SciTech Connect (OSTI)

    al-Saffar, Sinan; Jimenez, Edward Steven, Jr.; Adolf, Robert; Haglin, David; Goodman, Eric L.; Mizell, David

    2010-12-01

    To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.

  14. Computer Science Program | U.S. DOE Office of Science (SC)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Science Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop ...

  15. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    SciTech Connect (OSTI)

    Haber, Eldad

    2014-03-17

    The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  16. Webinar "Applying High Performance Computing to Engine Design Using Supercomputers" | Argonne National Laboratory

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Video from the February 25, 2016 Convergent Science/Argonne National Laboratory webinar "Applying High Performance Computing to Engine Design Using Supercomputers," featuring Janardhan Kodavasal of Argonne National Laboratory. Speakers: Janardhan Kodavasal, Argonne National Laboratory. Duration: 52:26. Topic: Energy

  17. Apply

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applied Studies and Technology (AS&T): DOE established the Environmental Sciences Laboratory (ESL) in Grand Junction, Colorado, in 1991 to support its programs. ESL scientists perform applied research and laboratory-scale demonstrations of soil and groundwater remediation and treatment technologies. Capabilities: installation, monitoring, and operation of permeable reactive barriers; research of permeable

  18. Unsolicited Projects in 2012: Research in Computer Architecture...

    Office of Science (SC) Website

    Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I ...

  19. Apply

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Apply Application Process Bringing together top space science students with internationally recognized researchers at Los Alamos in an educational and collaborative atmosphere. ...

  20. A Multifaceted Mathematical Approach for Complex Systems

    SciTech Connect (OSTI)

    Alexander, F.; Anitescu, M.; Bell, J.; Brown, D.; Ferris, M.; Luskin, M.; Mehrotra, S.; Moser, B.; Pinar, A.; Tartakovsky, A.; Willcox, K.; Wright, S.; Zavala, V.

    2012-03-07

    Applied mathematics has an important role to play in developing the tools needed for the analysis, simulation, and optimization of complex problems. These efforts require the development of the mathematical foundations for scientific discovery, engineering design, and risk analysis based on a sound integrated approach for the understanding of complex systems. However, maximizing the impact of applied mathematics on these challenges requires a novel perspective on approaching the mathematical enterprise. Previous reports that have surveyed the DOE's research needs in applied mathematics have played a key role in defining research directions with the community. Although these reports have had significant impact, accurately assessing current research needs requires an evaluation of today's challenges against the backdrop of recent advances in applied mathematics and computing. To address these needs, the DOE Applied Mathematics Program sponsored a Workshop for Mathematics for the Analysis, Simulation and Optimization of Complex Systems on September 13-14, 2011. The workshop had approximately 50 participants from both the national labs and academia. The goal of the workshop was to identify new research areas in applied mathematics that will complement and enhance the existing DOE ASCR Applied Mathematics Program efforts that are needed to address problems associated with complex systems. This report describes recommendations from the workshop and subsequent analysis of the workshop findings by the organizing committee.

  1. Computing and Computational Sciences Directorate - Computer Science...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computer Science and Mathematics Division Citation: For exemplary administrative secretarial support to the Computer Science and Mathematics Division and to the ORNL ...

  2. Optimization methods of the net emission computation applied to cylindrical sodium vapor plasma

    SciTech Connect (OSTI)

    Hadj Salah, S.; Hajji, S.; Ben Hamida, M. B.; Charrada, K.

    2015-01-15

    An optimization method based on a physical analysis of the temperature profile and of the different terms in the radiative transfer equation is developed to reduce the computation time of the net emission. This method has been applied to a cylindrical discharge in sodium vapor. Numerical results show a relative error in spectral flux density values of less than 5% with respect to an exact solution, while the computation time is about 10 orders of magnitude smaller. This method is followed by a spectral method based on the rearrangement of the line profiles. Results are shown for a Lorentzian profile; they demonstrate a relative error of less than 10% with respect to the reference method and a gain in computation time of about 20 orders of magnitude.

  3. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATION...

    Office of Scientific and Technical Information (OSTI)

    Resource Relation: Conference: International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Eng., Rio de Janeiro, Brazil, 05182011, 05122011 ...

  4. Computing and Computational Sciences Directorate - Divisions

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CCSD Divisions: Computational Sciences and Engineering, Computer Sciences and Mathematics, Information Technology Services, Joint Institute for Computational Sciences, National Center

  5. Vehicle Technologies Office Merit Review 2015: Applied Integrated Computational Materials Engineering (ICME) for New Propulsion Materials

    Broader source: Energy.gov [DOE]

    Presentation given by Oak Ridge National Laboratory at 2015 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Office Annual Merit Review and Peer Evaluation Meeting about Applied...

  6. Mathematical and Computational Epidemiology

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    for forecasting the spread of infectious diseases and understanding human behavior using social media Sara Del Valle 1:03 Faces of Science: Sara Del Valle We provide decision...

  7. Software and High Performance Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Software and High Performance Computing: Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Contact: Kathleen McDonald, Head of Intellectual Property, Business Development Executive, Richard P. Feynman Center for Innovation, (505) 667-5844. Software: Computational physics, computer science, applied mathematics, statistics and the

  8. Parallel computing works

    SciTech Connect (OSTI)

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.
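The abstract notes that C³P devised performance models for parallel machines. As a minimal illustration of what such a model looks like, here is Amdahl's law, a generic speedup model chosen for this sketch rather than taken from the C³P report:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: predicted speedup on n_procs processors when
    serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even a 5% serial fraction caps the achievable speedup below 20x,
# no matter how many processors are added.
```

Models of this kind bound what adding processors can buy; the hypercube-era work described above also had to weigh communication costs and problem scaling, which this one-line model ignores.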

  9. Applications for Postdoctoral Fellowship in Computational Science at Berkeley Lab due November 26

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    October 15, 2012, by Francesca Verdier. Researchers in computer science, applied mathematics, or any computational science discipline who have received their Ph.D. within the last three years are encouraged to apply for the Luis W. Alvarez Postdoctoral Fellowship in Computational Science at Lawrence

  10. Previous Computer Science Award Announcements | U.S. DOE Office of Science (SC)

    Office of Science (SC) Website

    Previous Computer Science Award Announcements. Advanced Scientific Computing Research (ASCR): ASCR Home, About, Research, Applied Mathematics, Computer Science, Exascale Tools Workshop, Programming Challenges Workshop, Architectures I Workshop, Architectures II Workshop, Next Generation Networking, Scientific Discovery through Advanced Computing (SciDAC), ASCR SBIR-STTR, Facilities, Science Highlights, Benefits of ASCR, Funding Opportunities, Advanced Scientific Computing

  11. Advanced Scientific Computing Research (ASCR)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Applied Mathematics, and in SciDAC partnerships that link ASCR programs to activities throughout the Office of Science including BES, BER, and FES. Applied Mathematics Research ...

  12. Sandia National Laboratories: Careers: Computer Science

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Advanced software research & development Collaborative technologies Computational science and mathematics High-performance computing Visualization and scientific computing Advanced ...

  13. Sandian Named Fellow of the Society for Industrial and Applied...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Named Fellow of the Society for Industrial and Applied Mathematics - Sandia Energy ... Sandian Named Fellow of the Society for Industrial and Applied Mathematics. Home / Research & ...

  14. Computing and Computational Sciences Directorate - Joint Institute...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    (JICS). JICS combines the experience and expertise in theoretical and computational science and engineering, computer science, and mathematics in these two institutions and ...

  15. computers

    National Nuclear Security Administration (NNSA)

    Each successive generation of computing system has provided greater computing power and energy efficiency.

    CTS-1 clusters will support NNSA's Life Extension Program and...

  16. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    SciTech Connect (OSTI)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-11-15

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without counting the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models counting the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials.
It is also necessary to have accurate spectrum information about the source-detector system. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. Conclusions: The proposed approach is a statistical reconstruction approach based on a nonlinear forward model counting the full beam polychromaticity and applied directly to the projections without taking the negative-log. Compared to the approaches based on linear forward models and to the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
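The MAP estimation step described in this abstract minimizes a nonquadratic cost with a monotone conjugate gradient method. A much-simplified sketch of the underlying idea follows, assuming (unlike the paper) a linear forward model and a quadratic prior, so that plain conjugate gradient on the normal equations suffices; `map_estimate` is an invented name, not the paper's.

```python
# Hedged sketch, not the paper's algorithm: MAP estimation for a
# *linear* Gaussian model, min_x ||y - A x||^2 + lam ||x||^2, whose
# solution satisfies the normal equations (A^T A + lam I) x = A^T y,
# solved here by plain conjugate gradient.
import numpy as np

def map_estimate(A, y, lam, iters=100, tol=1e-12):
    n = A.shape[1]
    M = A.T @ A + lam * np.eye(n)   # symmetric positive-definite system
    b = A.T @ y
    x = np.zeros(n)
    r = b - M @ x                   # residual of the normal equations
    if np.linalg.norm(r) < tol:
        return x
    p = r.copy()
    for _ in range(iters):
        Mp = M @ p
        rr = r @ r
        alpha = rr / (p @ Mp)       # exact line search along p
        x = x + alpha * p
        r = r - alpha * Mp
        if np.linalg.norm(r) < tol:
            break
        p = r + ((r @ r) / rr) * p  # next conjugate search direction
    return x
```

The paper's actual cost is nonquadratic because the polychromatic forward model is nonlinear and the noise variance is estimated jointly; this sketch only shows the Gaussian-noise MAP skeleton that the full method generalizes.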

  17. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing: Providing world-class high performance computing capability that enables unsurpassed solutions to complex problems of strategic national interest. Health, Space, Computing, Energy, Earth, Materials, Science, Technology, The Lab. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico with the Jemez mountains as a backdrop to research and innovation covering multi-disciplines from bioscience, sustainable

  18. Parallel Programming with MPI | Argonne Leadership Computing...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel Programming with MPI. Event Sponsor: Mathematics and Computer Science Division. ... The Mathematics and Computer Science division of ...

  19. Computing Sciences

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Sciences Our Vision National User Facilities Research Areas In Focus Global Solutions ⇒ Navigate Section Our Vision National User Facilities Research Areas In Focus Global Solutions Computational Research Division The Computational Research Division conducts research and development in mathematical modeling and simulation, algorithm design, data storage, management and analysis, computer system architecture and high-performance software implementation. Scientific Networking

  20. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... Software Computations Uncertainty Quantification Stochastic About CRF Transportation Energy Consortiums Engine Combustion Heavy Duty Heavy Duty Low-Temperature & Diesel Combustion ...

  1. Computer

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    I. INTRODUCTION This paper presents several computational tools required for processing images of a heavy ion beam and estimating the magnetic field within a plasma. The...

  2. computers

    National Nuclear Security Administration (NNSA)

    California.

    Retired computers used for cybersecurity research at Sandia National...

  3. How To Apply

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    CSCNSI » How To Apply: How to Apply for the Computer System, Cluster, and Networking Summer Institute. Emphasizes practical skills development. Contact: Leader Stephan Eidenbenz (505)...

  4. Computing

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Office of Advanced Scientific Computing Research in the Department of Energy Office of Science under contract number DE-AC02-05CH11231. Application and System Memory Use, ...

  5. Unsolicited Projects in 2012: Research in Computer Architecture, Modeling,

    Office of Science (SC) Website

    and Evolving MPI for Exascale | U.S. DOE Office of Science (SC) 2: Research in Computer Architecture, Modeling, and Evolving MPI for Exascale Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I Workshop External link Architectures II Workshop External link Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) ASCR SBIR-STTR Facilities

  6. Mathematical Perspectives

    SciTech Connect (OSTI)

    Glimm, J.

    2009-10-14

    Progress for the past decade or so has been extraordinary. The solution of Fermat's Last Theorem [11] and of the Poincare Conjecture [1] have resolved two of the most outstanding challenges to mathematics. For both cases, deep and advanced theories and whole subfields of mathematics came into play and were developed further as part of the solutions. And still the future is wide open. Six of the original seven problems from the Clay Foundation challenge remain open, and the 23 DARPA challenge problems are open. Entire new branches of mathematics have been developed, including financial mathematics and the connection between geometry and string theory, proposed to solve the problems of quantized gravity. New solutions of the Einstein equations, inspired by shock wave theory, suggest a cosmology model which fits the accelerating expansion of the universe, possibly eliminating assumptions of 'dark matter'. Intellectual challenges and opportunities for mathematics are greater than ever. The role of mathematics in society continues to grow; with this growth come new opportunities and some growing pains; each will be analyzed here. We see a broadening of the intellectual and professional opportunities and responsibilities for mathematicians. These trends are also occurring across all of science. The response can be at the level of the professional societies, which can work to deepen their interactions, not only within the mathematical sciences, but also with other scientific societies. At a deeper level, the choices to be made will come from individual mathematicians. Here, of course, the individual choices will be varied, and we argue for respect and support for this diversity of responses. In such a manner, we hope to preserve the best of the present while welcoming the best of the new.

  7. A Systematic Comprehensive Computational Model for Stake Estimation in Mission Assurance: Applying Cyber Security Econometrics System (CSES) to Mission Assurance Analysis Protocol (MAAP)

    SciTech Connect (OSTI)

    Abercrombie, Robert K; Sheldon, Frederick T; Grimaila, Michael R

    2010-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper, we discuss how this infrastructure can be used in the subject domain of mission assurance, defined as the full life-cycle engineering process to identify and mitigate design, production, test, and field support deficiencies that threaten mission success. We address the opportunity to apply the Cyberspace Security Econometrics System (CSES) to the Carnegie Mellon University Software Engineering Institute's Mission Assurance Analysis Protocol (MAAP) in this context.

  8. Computations

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computations - Sandia Energy. Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs Advanced Nuclear

  9. Scientific Discovery through Advanced Computing (SciDAC) | U.S. DOE Office

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of Science (SC) Research » Scientific Discovery through Advanced Computing (SciDAC) Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Next Generation Networking Scientific Discovery through Advanced Computing (SciDAC) Co-Design SciDAC Institutes ASCR SBIR-STTR Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources Contact Information Advanced

  10. Present and Future Computing Requirements for PETSc

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    and Future Computing Requirements for PETSc Jed Brown jedbrown@mcs.anl.gov Mathematics and Computer Science Division, Argonne National Laboratory Department of Computer Science, ...

  11. A posteriori error estimate for a Lagrangian method applied to...

    Office of Scientific and Technical Information (OSTI)

    Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; 99 GENERAL AND MISCELLANEOUS/MATHEMATICS, COMPUTING, ...

  12. Cielo Computational Environment Usage Model With Mappings to...

    Office of Scientific and Technical Information (OSTI)

    Sponsoring Org: DOE/LANL Country of Publication: United States Language: English Subject: Computer Hardware; Mathematics & Computing (97); AVAILABILITY; LANL; LAWRENCE LIVERMORE ...

  13. Mathematical Statisticians

    U.S. Energy Information Administration (EIA) Indexed Site

    work is associated with the design, implementation and evaluation of statistical methods... packages, research, select, and apply ... to frames development, selection, and ...

  14. New DOE Office of Science support for CAMERA to develop computational...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    to develop computational mathematics for experimental facilities research New DOE Office of Science support for CAMERA to develop computational mathematics for experimental ...

  15. ASCR Workshop on Quantum Computing for Science

    SciTech Connect (OSTI)

    Aspuru-Guzik, Alan; Van Dam, Wim; Farhi, Edward; Gaitan, Frank; Humble, Travis; Jordan, Stephen; Landahl, Andrew J; Love, Peter; Lucas, Robert; Preskill, John; Muller, Richard P.; Svore, Krysta; Wiebe, Nathan; Williams, Carl

    2015-06-01

    This report details the findings of the DOE ASCR Workshop on Quantum Computing for Science that was organized to assess the viability of quantum computing technologies to meet the computational requirements of the DOE’s science and energy mission, and to identify the potential impact of quantum technologies. The workshop was held on February 17-18, 2015, in Bethesda, MD, to solicit input from members of the quantum computing community. The workshop considered models of quantum computation and programming environments, physical science applications relevant to DOE's science mission as well as quantum simulation, and applied mathematics topics including potential quantum algorithms for linear algebra, graph theory, and machine learning. This report summarizes these perspectives into an outlook on the opportunities for quantum computing to impact problems relevant to the DOE’s mission as well as the additional research required to bring quantum computing to the point where it can have such impact.

  16. PNNL: Staff Search - Fundamental & Computational Sciences Directorate

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Divisions Advanced Computing, Mathematics & Data Atmospheric Sciences & Global Change Biological Sciences Physical Sciences User Facilities Environmental Molecular Sciences ...

  17. Browse by Discipline -- E-print Network Subject Pathways: Computer...

    Office of Scientific and Technical Information (OSTI)

    ... - Mathematics and Computer Science Division, Argonne National Laboratory Fish, Alexander (Alexander Fish) - School of Mathematics and Statistics, University of Sydney Fisher, ...

  18. Topological one-way quantum computation on verified logical cluster...

    Office of Scientific and Technical Information (OSTI)

    Language: English Subject: 71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; 97 MATHEMATICAL METHODS AND COMPUTING; CALCULATION METHODS; ERRORS; MATHEMATICAL LOGIC; NOISE; ...

  19. Predictive Capability Maturity Model for computational modeling...

    Office of Scientific and Technical Information (OSTI)

    Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; 99 GENERAL AND MISCELLANEOUS/MATHEMATICS, COMPUTING, ...

  20. Predictive Capability Maturity Model for computational modeling...

    Office of Scientific and Technical Information (OSTI)

    ... Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; 99 GENERAL AND MISCELLANEOUS/MATHEMATICS, COMPUTING, ...

    1. Chameleon: A Computer Science Testbed as Application of Cloud...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Chameleon: A Computer Science Testbed as Application of Cloud Computing Event Sponsor: Mathematics and Computing Science Brownbag Lunch Start Date: Dec 15 2015 - 12:00pm Building...

    2. Browse by Discipline -- E-print Network Subject Pathways: Mathematics...

      Office of Scientific and Technical Information (OSTI)

      ... Alfaro, Manuel - Departamento de Matemáticas, Universidad de Zaragoza Algebraic Number Theory Archives Applied Algebra Group at Linz Argonne National Laboratory, Mathematics and ...

    3. Computing Videos

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Videos Computing

    4. Computer, Computational, and Statistical Sciences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Directed Research and Development (LDRD) Defense Advanced Research Projects Agency (DARPA) Defense Threat Reduction Agency (DTRA) Research Applied Computer Science Co-design ...

    5. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... de Física, Applied Physics Institute for Mathematics and its Applications Iowa State University, Department of Statistics Isaac Newton Institute for Mathematical Sciences

    6. Engineering Physics and Mathematics Division progress report for period ending September 30, 1987

      SciTech Connect (OSTI)

      Not Available

      1987-12-01

      This report provides an archival record of the activities of the Engineering Physics and Mathematics Division during the period June 30, 1985 through September 30, 1987. Work in Mathematical Sciences continues to include applied mathematics research, statistics research, and computer science. Nuclear-data measurements and evaluations continue for fusion reactors, fission reactors, and other nuclear systems. Also discussed are long-standing studies of fission-reactor shields through experiments and related analysis, of accelerator shielding, and of fusion-reactor neutronics. Work in Machine Intelligence continues to feature the development of an autonomous robot. The last descriptive part of this report reflects the work in our Engineering Physics Information Center, which again concentrates primarily upon radiation-shielding methods and related data.

    7. Mathematical Formulation Requirements and Specifications for the Process Models

      SciTech Connect (OSTI)

      Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.

      2010-11-01

      The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. 
The graded and iterative approach to assessments naturally generates a suite of conceptual models that span a range of process complexity, potentially coupling hydrological, biogeochemical, geomechanical, and thermal processes. The Platform will use ensembles of these simulations to quantify the associated uncertainty, sensitivity, and risk. The Process Models task within the HPC Simulator focuses on the mathematical descriptions of the relevant physical processes.

    8. High-precision arithmetic in mathematical physics

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Bailey, David H.; Borwein, Jonathan M.

      2015-05-12

      For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation, in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
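      The precision levels the abstract contrasts can be demonstrated with Python's standard-library `decimal` module (a minimal sketch of the general idea; the article concerns high-precision facilities broadly, not this particular library):

```python
from decimal import Decimal, getcontext

# Summing 0.1 ten times in IEEE 64-bit binary floating point
# accumulates representation error: 0.1 has no exact binary form.
double_sum = sum([0.1] * 10)
print(double_sum)            # slightly off from 1.0

# With 50 decimal digits of working precision the same sum is exact.
getcontext().prec = 50
decimal_sum = sum([Decimal("0.1")] * 10)
print(decimal_sum)           # exactly 1.0
```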

    9. Computational Fluid Dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      scour-tracc-cfd TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Fluid Dynamics Overview of CFD: Video Clip with Audio Computational fluid dynamics (CFD) research uses mathematical and computational models of flowing fluids to describe and predict fluid response in problems of interest, such as the flow of air around a moving vehicle or the flow of water and sediment in a river. Coupled with appropriate and prototypical
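      As an illustrative toy instance of the kind of mathematical model this entry describes (my sketch, not TRACC's software), a first-order upwind finite-difference scheme for one-dimensional linear advection, u_t + a u_x = 0:

```python
# First-order upwind scheme for u_t + a*u_x = 0 on a periodic domain.
# Stable (and monotone) when the CFL number a*dt/dx is at most 1.
def advect(u, a, dx, dt, steps):
    c = a * dt / dx                      # CFL number
    assert 0 <= c <= 1, "unstable time step"
    for _ in range(steps):
        # u[i-1] wraps to u[-1] at i = 0, giving periodic boundaries.
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u

# A square pulse transported to the right; total "mass" is conserved.
n, dx, a, dt = 100, 0.01, 1.0, 0.005     # CFL = 0.5
u0 = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]
u1 = advect(u0, a, dx, dt, steps=40)
print(sum(u1) * dx)                      # ~0.2, same as the initial mass
```

The scheme smears the pulse (numerical diffusion) but conserves its integral exactly on a periodic grid, a property worth checking in any such discretization.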

    10. DOE Applied Math Summit | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      American Geophysical Union (AGU), External link 2000 Florida Ave, N.W. Washington, D.C. 20009 A New Paradigm for DOE Applied Mathematics: The Applied Mathematics program has ...

    11. General Mathematical and Computing System Routines

      Energy Science and Technology Software Center (OSTI)

      1999-04-20

      GO is a 32-bit genetic optimization driver that runs under Windows. It is an optimization scheme used to solve large combinatorial problems using "genetic" algorithms. GO is a genetic optimization driver: it must be linked with a user-supplied process model before it can be used. The link is made through a text file that transfers data to and from the user-supplied process model. A user interface allows optimization parameters to be entered, edited, and saved. It also allows the user to display results as the optimization proceeds or at a later time.
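      GO's internals are not described beyond the driver/model split, but the generic genetic-algorithm loop such a driver implements can be sketched as follows (all names and operator choices here are illustrative assumptions, not GO's actual interface):

```python
import random

def genetic_minimize(fitness, n_params, pop_size=40, generations=60):
    """Minimal genetic optimizer: keep the fitter half of the
    population, then refill it with blend-crossover children
    perturbed by Gaussian mutation."""
    random.seed(0)                       # deterministic for the demo
    pop = [[random.uniform(-5, 5) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = elite + children
    return min(pop, key=fitness)

# Usage: the "process model" here is just the sphere function,
# whose minimum is at the origin.
best = genetic_minimize(lambda p: sum(x * x for x in p), n_params=3)
print(best)
```

In GO itself, the fitness evaluation would be replaced by the text-file round trip to the user-supplied process model.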

    12. Measures of agreement between computation and experiment:validation...

      Office of Scientific and Technical Information (OSTI)

      and safety assessment, improved methods are needed for comparing computational ... EXPERIMENTS Uncertainty-Mathematical models.; Validation-Simulation.; Experimental design. ...

    13. How To Apply

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      How To Apply How to Apply for Computer System, Cluster, and Networking Summer Institute Emphasizes practical skills development Contacts Program Lead Carolyn Connor (505) 665-9891 Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email The 2016 application process will run from January 5 through February 13, 2016. Applicants must be U.S. citizens. Required Materials Current resume Official university transcript (with Spring courses posted and/or a copy of Spring 2016

    14. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      6 Computational Earth Science We develop and apply a range of high-performance computational methods and software tools to Earth science projects in support of environmental ...

    15. Computational Fluid Dynamics & Large-Scale Uncertainty Quantification...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Computational Fluid Dynamics & Large-Scale Uncertainty Quantification for Wind Energy A team of Sandia experts in aerospace engineering, scientific computing, and mathematics ...

    16. Measures of agreement between computation and experiment : validation...

      Office of Scientific and Technical Information (OSTI)

      Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; BENCH-SCALE EXPERIMENTS; COMPUTER CALCULATIONS; ...

    17. MG-RAST in "the cloud" | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      MG-RAST in "the cloud" Event Sponsor: Mathematics and Computer Science Division Seminar ... data uploaded and analyzed in the past few years posing numerous computational challenges. ...

    18. Derivative-free optimization for parameter estimation in computational...

      Office of Scientific and Technical Information (OSTI)

      Journal Article: Derivative-free optimization for parameter estimation in computational nuclear physics Citation Details ... RADIATION PHYSICS; 97 MATHEMATICS, COMPUTING, AND ...

    19. Sandia Computational Mathematician Receives DOE's EO Lawrence...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Pavel Bochev (in Sandia's Computational Mathematics Dept.) has received an EO Lawrence Award for his pioneering theoretical and practical advances in numerical methods for partial ...

    20. Mathematical models for risk assessment

      SciTech Connect (OSTI)

      Zaikin, S.A.

      1995-12-01

      The use of mathematical models in risk assessment results in the proper understanding of many aspects of chemical exposure and allows more informed decisions to be made. Our project ISCRA (Integrated Systems of Complex Risk Assessment) has the aim to create integrated systems of algorithms for prediction of pollutants' exposure on human and environmental health and to apply them for environmental monitoring and decision-making. Mathematical model "MASTER" (Mathematical Algorithm of SimulaTion of Environmental Risk) represents the complex of algorithmic blocks and is intended for the prediction of danger of pollutants' exposure for human and environmental risk. Model LIMES (LIMits EStimation) is developed for prognosis of safe concentrations of pollutants in the environment, both in the case of isolated exposure and in the case of complex exposure, for a concrete location. Model QUANT (QUANtity of Toxicant) represents the multicompartmental physiological pharmacokinetic model describing absorption, distribution, fate, metabolism, and elimination of pollutants in the body of different groups of the human population as a result of different kinds of exposure. Decision support system CLEVER (Complex LEVEl of Risk) predicts the probability and the degree of development of unfavourable effects as a result of exposure of a pollutant on human health. The system is based on the data of epidemiological and experimental research and includes several mathematical models for analysis of "dose-time-response" relations and information about clinical symptoms of diseases. Model CEP (Combination Effect Prognosis) contains probabilistic algorithms for forecasting the effect of simultaneous impact of several factors polluting the environment. The result of the program's work is the prediction of an independent exposure of two or more factors, and intensification or weakening of exposure depending on the factors' interactions.
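      The "dose-time-response" relations this abstract mentions are commonly modeled with a sigmoid such as the Hill (four-parameter logistic) curve; the following is a generic sketch of that standard form, not ISCRA's actual algorithms (all parameter names are illustrative):

```python
def logistic_response(dose, ec50, hill, floor=0.0, ceiling=1.0):
    """Four-parameter logistic (Hill) dose-response curve:
    response rises from `floor` to `ceiling`, crossing the
    midpoint at dose == ec50; `hill` controls the steepness."""
    return floor + (ceiling - floor) / (1.0 + (ec50 / dose) ** hill)

# At the EC50 the response is exactly halfway between floor and ceiling.
print(logistic_response(dose=10.0, ec50=10.0, hill=2.0))  # 0.5
```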

    1. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Extreme Scale Computing, Co-design Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs ...

    2. Exploratory Experimentation and Computation

      SciTech Connect (OSTI)

      Bailey, David H.; Borwein, Jonathan M.

      2010-02-25

      We believe the mathematical research community is facing a great challenge to re-evaluate the role of proof in light of recent developments. On one hand, the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet, has provided marvelous resources to the research mathematician. On the other hand, the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the classification of finite simple groups has raised questions as to how we can better ensure the integrity of modern mathematics. Yet as the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished.

    3. ACES4BGC Applying Computationally Efficient...

      Office of Scientific and Technical Information (OSTI)

      ... evolving remote aerosol system. 1.1.2 Dimethyl ... for predicting the response of plants to climate change. ... processes on a Leadership Class machine ...

    4. Applied combustion

      SciTech Connect (OSTI)

      1993-12-31

      From the title, the reader is led to expect a broad practical treatise on combustion and combustion devices. Remarkably, for a book of modest dimension, the author is able to deliver. The text is organized into 12 Chapters, broadly treating three major areas: combustion fundamentals -- introduction (Ch. 1), thermodynamics (Ch. 2), fluid mechanics (Ch. 7), and kinetics (Ch. 8); fuels -- coal, municipal solid waste, and other solid fuels (Ch. 4), liquid (Ch. 5) and gaseous (Ch. 6) fuels; and combustion devices -- fuel cells (Ch. 3), boilers (Ch. 4), Otto (Ch. 10), diesel (Ch. 11), and Wankel (Ch. 10) engines and gas turbines (Ch. 12). Although each topic could warrant a complete text on its own, the author addresses each of these major themes with reasonable thoroughness. Also, the book is well documented with a bibliography, references, a good index, and many helpful tables and appendices. In short, Applied Combustion does admirably fulfill the author's goal for a wide engineering science introduction to the general subject of combustion.

    5. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      System, Cluster, and Networking Summer Institute New Mexico Consortium and Los Alamos National Laboratory HOW TO APPLY Applications will be accepted JANUARY 5 - FEBRUARY 13, 2016 Computing and Information Technology undergraduate students are encouraged to apply. Must be a U.S. citizen. * Submit a current resume; * Official University Transcript (with spring courses posted and/or a copy of spring 2016 schedule) 3.0 GPA minimum; * One Letter of Recommendation from a Faculty Member; and * Letter of

    6. Computational Science and Engineering

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Science and Engineering NETL's Computational Science and Engineering competency consists of conducting applied scientific research and developing physics-based simulation models, methods, and tools to support the development and deployment of novel process and equipment designs. Research includes advanced computations to generate information beyond the reach of experiments alone by integrating experimental and computational sciences across different length and time scales. Specific

    7. Microsoft Word - 4-Carter.doc

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Program in Applied and Computational Mathematics Engineering Quadrangle Olden Street ... of Mechanical and Aerospace Engineering and Applied and Computational Mathematics. ...

    8. developing-compute-efficient

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Developing Compute-efficient, Quality Models with LS-PrePost 3 on the TRACC Cluster Oct. ... with an emphasis on applying these capabilities to build computationally efficient models. ...

    9. Uncertainty quantification and multiscale mathematics. (Conference...

      Office of Scientific and Technical Information (OSTI)

      quantification and multiscale mathematics. Citation Details In-Document Search Title: Uncertainty quantification and multiscale mathematics. Authors: Trucano, Timothy Guy ...

    10. Uncertainty quantification and multiscale mathematics. (Conference...

      Office of Scientific and Technical Information (OSTI)

      Uncertainty quantification and multiscale mathematics. Citation Details In-Document Search Title: Uncertainty quantification and multiscale mathematics. No abstract prepared. ...

    11. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

      SciTech Connect (OSTI)

      Corones, James

      2013-09-23

      High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data and address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

    12. Scientific Computing at Los Alamos National Laboratory (Conference...

      Office of Scientific and Technical Information (OSTI)

      States Research Org: Los Alamos National Laboratory (LANL) Sponsoring Org: DOE/LANL Country of Publication: United States Language: English Subject: Mathematics & Computing (97

    13. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATION...

      Office of Scientific and Technical Information (OSTI)

      ... Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 97 MATHEMATICAL METHODS AND COMPUTING; ACCURACY; ALGORITHMS; COMPRESSIBLE FLOW; COMPUTERIZED ...

    14. Proceedings of the Computational Needs for the Next Generation...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      the operation and planning of the electric power system. The attached papers from these experts highlight mathematical and computational problems relevant for potential power...

    15. An Overview of High Performance Computing and Challenges for the Future

      ScienceCinema (OSTI)

      Google Tech Talks

      2009-09-01

      In this talk we examine how high performance computing has changed over the last 10 years and look toward the future in terms of trends. These changes have had and will continue to have a major impact on our software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. We will focus on the redesign of software to fit multicore architectures. Speaker: Jack Dongarra University of Tennessee Oak Ridge National Laboratory University of Manchester Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980. He worked at the Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee, has the position of a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced-computer architectures, programming methodology, and tools for parallel computers. 
His research includes the development, testing and documentation of high quality mathematical software. He has contributed to the design and implementation of the following open source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI. He has published approximately 200 articles, papers, reports and technical memoranda and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high performance computers using innovative approaches. He is a Fellow of the AAAS, ACM, and the IEEE and a member of the National Academy of Engineering.

    16. Customizable Computing at Datacenter Scale | Argonne Leadership Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Facility Customizable Computing at Datacenter Scale Event Sponsor: Mathematics and Computer Science Division Seminar Start Date: May 2 2016 - 10:00am Building/Room: Building 240/Room 1416 Location: Argonne National Laboratory Speaker(s): Jason Cong Speaker(s) Title: UCLA Host: Marc Snir Customizable computing has been of interest to the research community for over three decades. The interest has intensified in the recent years as the power and energy become a significant limiting factor to

    17. New DOE Program Funds $20 Million for Mathematics Research | Department of

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Energy Program Funds $20 Million for Mathematics Research New DOE Program Funds $20 Million for Mathematics Research August 4, 2005 - 2:37pm Addthis WASHINGTON, DC - Under a new program funded by the Department of Energy's Office of Science, researchers will use mathematics to help solve problems such as the production of clean energy, pollution cleanup, manufacturing ever smaller computer chips, and making new "nanomaterials." Thirteen major research awards totaling $20 million

    18. Engineering Physics and Mathematics Division progress report for period ending December 31, 1994

      SciTech Connect (OSTI)

      Sincovec, R.F.

      1995-07-01

      This report provides a record of the research activities of the Engineering Physics and Mathematics Division for the period January 1, 1993, through December 31, 1994. This report is the final archival record of the EPM Division. On October 1, 1994, ORELA was transferred to the Physics Division, and on January 1, 1995, the Engineering Physics and Mathematics Division and the Computer Applications Division reorganized to form the Computer Science and Mathematics Division and the Computational Physics and Engineering Division. Earlier reports in this series are identified on the previous pages, along with the progress reports describing ORNL's research in the mathematical sciences prior to 1984, when those activities moved into the Engineering Physics and Mathematics Division.

    19. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... Alfaro, Manuel - Departamento de Matemáticas, Universidad de Zaragoza Algebraic Number Theory Archives Applied Algebra Group at Linz Argonne National Laboratory, Mathematics and ...

    20. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2005-11-01

      The Brookhaven Computational Science Center brings together researchers in biology, chemistry, physics, and medicine with applied mathematicians and computer scientists to exploit the remarkable opportunities for scientific discovery which have been enabled by modern computers. These opportunities are especially great in computational biology and nanoscience, but extend throughout science and technology and include, for example, nuclear and high energy physics, astrophysics, materials and chemical science, sustainable energy, environment, and homeland security. To achieve our goals we have established a close alliance with applied mathematicians and computer scientists at Stony Brook and Columbia Universities.

    1. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high performance computing in the financial sector, and provide insight into how different types of Grid computing – from local clusters to global networks – are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and will assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore. 
Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs starting last decade with "SETI@home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations with spectacular results. The talk discusses Grid computing and its evolution into application virtualization, and how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration. 
His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege of being a summer student at CERN. 3. Opportunities for gLite in finance and related industries. Adam Vile, Head of Grid, HPC and Technical Computing, Excelian Ltd. gLite, the Grid software developed by the EGEE project, has been exceedingly successful as an enabling infrastructure, and has been a massive success in bringing together scientific and technical communities to provide the compute power to address previously incomputable problems. Not so in the finance industry. In its current form gLite would be a business disabler. There are other middleware tools that solve the finance community's compute problems much better. Things are moving on, however. There are moves afoot in the open source community to evolve the technology to address other, more sophisticated needs such as utility and interactive computing. In this talk, I will describe how Excelian is providing Grid consultancy services for the finance community and how, through its relationship to the EGEE project, Excelian is helping to identify and exploit opportunities as the research and business worlds converge. Because of the strong third-party presence in the finance industry, such opportunities are few and far between, but they are there, especially as we expand sideways into related verticals such as the smaller hedge funds and energy companies. This talk will give an overview of the barriers to adoption of gLite in the finance industry and highlight some of the opportunities offered in this and related industries as the ideas around Grid mature. Speaker Bio: Dr Adam Vile is a senior consultant and head of the Grid and HPC practice at Excelian, a consultancy that focuses on financial markets professional services. 
He has spent many years in investment banking, as a developer, project manager and architect in both front and back office. Before joining Excelian he was senior Grid and HPC architect at Barclays Capital. Prior to joining investment banking, Adam spent a number of years lecturing in IT and mathematics at a UK university and maintains links with academia through lectures, research and through validation and steering of postgraduate courses. He is a chartered mathematician and was the conference chair of the Institute of Mathematics and its Applications' first conference in computational finance. 4. From Monte Carlo to Wall Street. Daniel Egloff, Head of Financial Engineering Computing Unit, Zürich Cantonal Bank. High performance computing techniques provide new means to solve computationally hard problems in the financial service industry. First I consider Monte Carlo simulation and illustrate how it can be used to implement a sophisticated credit risk management and economic capital framework. From an HPC perspective, basic Monte Carlo simulation is embarrassingly parallel and can be implemented efficiently on distributed memory clusters. Additional difficulties arise for adaptive variance reduction schemes, if the information content in a sample is very small, and if the amount of simulated data becomes huge such that incremental processing algorithms are indispensable. We discuss the business value of an advanced credit risk quantification, which is particularly compelling these days. While Monte Carlo simulation is a very versatile tool, it is not always the preferred solution for the pricing of complex products like multi-asset options, structured products, or credit derivatives. As a second application I show how operator methods can be used to develop a pricing framework. 
The scalability of operator methods relies heavily on optimized dense matrix-matrix multiplications and requires BLAS level-3 implementations provided by specialized FPGA or GPU boards. Speaker Bio: Daniel Egloff studied mathematics, theoretical physics, and computer science at the University of Zurich and the ETH Zurich. He holds a PhD in Mathematics from the University of Fribourg, Switzerland. After his PhD he started to work for a large Swiss insurance company in the area of asset and liability management. He continued his professional career in the consulting industry. At KPMG and Arthur Andersen he consulted for international clients and implemented quantitative risk management solutions for financial institutions and insurance companies. In 2002 he joined Zurich Cantonal Bank, where he was assigned to develop and implement credit portfolio risk and economic capital methodologies. He built up a competence center for high performance and cluster computing. Currently, Daniel Egloff heads the Financial Computing unit in the ZKB Financial Engineering division. He and his team engineer and operate high performance cluster applications for computationally intensive problems in financial risk management.
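The "embarrassingly parallel" structure of Monte Carlo credit-risk simulation described in the abstract can be sketched as follows. This is our own toy model, not the ZKB framework: obligor defaults are independent Bernoulli draws, and a thread pool stands in for a distributed-memory cluster; all names and parameters below are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(seed, n_paths, n_obligors=100, p_default=0.02, exposure=1.0):
    """Simulate one independent chunk of portfolio-loss scenarios.

    Each worker gets its own seed, so chunks can run anywhere (here a
    thread; on a cluster, a node) with no communication until the final
    aggregation -- the embarrassingly parallel structure the talk notes.
    """
    rng = np.random.default_rng(seed)
    defaults = rng.random((n_paths, n_obligors)) < p_default
    return defaults.sum(axis=1) * exposure  # loss per simulated scenario

def parallel_loss_quantile(total_paths=100_000, n_workers=4, q=0.99):
    """Estimate a high loss quantile (an economic-capital-style figure)
    by fanning chunks out to workers and pooling the results."""
    chunk = total_paths // n_workers
    with ThreadPoolExecutor(n_workers) as pool:
        losses = np.concatenate(
            list(pool.map(lambda s: simulate_chunk(s, chunk), range(n_workers)))
        )
    return float(np.quantile(losses, q))
```

The adaptive variance-reduction and incremental-processing difficulties the abstract raises arise precisely where this independence breaks down, e.g. when workers must share information between batches.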

    2. Computing for Finance

      SciTech Connect (OSTI)

      2010-03-24


    3. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06


    4. Mathematical modeling of a Fermilab helium liquefier coldbox

      SciTech Connect (OSTI)

      Geynisman, M.G.; Walker, R.J.

      1995-12-01

      The Fermilab Central Helium Liquefier (CHL) facility is operated 24 hours a day to supply 4.6 K helium for the Fermilab Tevatron superconducting proton-antiproton collider ring and to recover warm return gases. The centerpieces of the CHL are two independent cold boxes rated at 4000 and 5400 liters/hour with LN{sub 2} precool. These coldboxes use a Claude cycle and have identical heat exchanger trains, but different turbo-expanders. The Tevatron cryogenic system's demand for a higher helium supply from CHL was the driving force behind investigating the installation of an expansion engine in place of the Joule-Thomson valve. A mathematical model was developed to describe the thermo- and gas-dynamic processes for the equipment included in the helium coldbox. The model is based on a finite element approach, as opposed to a global variables approach, thus providing higher accuracy and convergence stability. Though the coefficients used in the thermo- and gas-dynamic equations are unique to a given coldbox, the general approach, the equations, the methods of computation, and most of the subroutines, written in FORTRAN, can be readily applied to different coldboxes. The simulation results are compared against actual operating data to demonstrate the applicability of the model.
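The element-by-element strategy the abstract contrasts with a "global variables" model can be illustrated with a toy marching scheme (our own sketch, not the CHL model, and in Python rather than the FORTRAN the record mentions): divide a parallel-flow heat exchanger into finite segments and apply a local energy balance in each. Every name and parameter below is an illustrative assumption.

```python
def parallel_flow_outlets(Th_in, Tc_in, UA, Ch, Cc, n=1000):
    """Outlet temperatures of a parallel-flow heat exchanger.

    Instead of one global effectiveness formula, march through n finite
    segments, applying a local energy balance in each:
        q_seg = (UA/n) * (Th - Tc)
    Local coefficients (here just UA) could vary segment by segment,
    which is what gives the element-wise approach its flexibility.

    Th_in, Tc_in : hot/cold inlet temperatures
    UA           : overall conductance-area product
    Ch, Cc       : hot/cold stream heat-capacity rates
    """
    Th, Tc = Th_in, Tc_in
    dUA = UA / n
    for _ in range(n):
        q = dUA * (Th - Tc)  # heat transferred in this segment
        Th -= q / Ch         # hot stream cools
        Tc += q / Cc         # cold stream warms
    return Th, Tc
```

Energy is conserved segment by segment (Ch*Th + Cc*Tc is invariant), and for constant coefficients the result converges to the closed-form effectiveness-NTU solution as n grows; the real coldbox model couples many such elements with gas-dynamic relations and turbo-expander models.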

    5. Profiles List

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Anderson-Cook, Christine Computational Physics and Applied Mathematics Information Science ... Bennett, Katrina Computational Physics and Applied Mathematics Computer and Computational ...

    6. Multiscale Mathematics For Plasma Kinetics Spanning Multiple...

      Office of Scientific and Technical Information (OSTI)

      Technical Report: Multiscale Mathematics For Plasma Kinetics Spanning Multiple Collisionality Regimes Citation Details In-Document Search Title: Multiscale Mathematics For Plasma ...

    7. Computation & Simulation > Theory & Computation > Research >...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

it. Click above to view. In This Section Computation & Simulation Computation & Simulation Extensive combinatorial results and ongoing basic...

    8. ADVANCED SCIENTIFIC COMPUTING ADVISORY COMMITTEEMonday, July...

      Office of Science (SC) Website

      Kathy Yelick, Lawrence Berkeley National Laboratory 3:15 PM-3:30 PM Break 3:30 PM-4:00 PM Center for Applied Mathematics for Energy Research ApplicationS (CAMERA) .pdf file (1.8MB) ...

    9. Quantum steady computation

      SciTech Connect (OSTI)

      Castagnoli, G. )

      1991-08-10

      This paper reports that current conceptions of quantum mechanical computers inherit from conventional digital machines two apparently interacting features, machine imperfection and temporal development of the computational process. On account of machine imperfection, the process would become ideally reversible only in the limiting case of zero speed. Therefore the process is irreversible in practice and cannot be considered to be a fundamental quantum one. By giving up classical features and using a linear, reversible and non-sequential representation of the computational process - not realizable in classical machines - the process can be identified with the mathematical form of a quantum steady state. This form of steady quantum computation would seem to have an important bearing on the notion of cognition.

    10. Extreme Scale Computing, Co-Design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math Extreme Scale Computing, Co-design Publications Publications Ramon Ravelo, Qi An, Timothy C. Germann, and Brad Lee Holian, ...

    11. CASL-U-2015-0165-000

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      International Conference on Mathematics and Computation (M&C), Supercomputing ... International Conference on Mathematics and Computational Methods Applied to ...

    12. Engineering Physics and Mathematics Division progress report for period ending June 30, 1985

      SciTech Connect (OSTI)

      Not Available

      1986-02-01

      The report is divided into eight sections: (1) nuclear data measurements and evaluation; (2) systems analysis and shielding; (3) applied physics and fusion reactor analysis; (4) mathematical modeling and intelligent control; (5) reliability and human factors research; (6) applied risk and decision analysis; (7) information analysis and data management; and (8) mathematical sciences. Each section then consists of abstracts of presented or published papers. (WRF)

    13. Applied Research Center

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Applied Research Center ARC Home Consortium News EH&S Reports ARC Resources Commercial Tenants ARC Brochure Library Conference Room Applied Research Center front view Applied Research

    14. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      You can read more about the positions and apply at jobs.lbl.gov: Bioinformatics High Performance Computing Consultant (job number: 73194) and Software Developer for High...

    15. Theory, Simulation, and Computation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ADTSC Theory, Simulation, and Computation Supporting the Laboratory's overarching strategy to provide cutting-edge tools to guide and interpret experiments and further our fundamental understanding and predictive capabilities for complex systems. Theory, modeling, informatics Suites of experiment data High performance computing, simulation, visualization Contacts Associate Director John Sarrao Deputy Associate Director Paul Dotson Directorate Office (505) 667-6645 Email Applying the Scientific

    16. SCIENCE; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING...

      Office of Scientific and Technical Information (OSTI)

      ZIRCONIUM ALLOYS; ZIRCONIUM BASE ALLOYS 360100* -- Metals & Alloys; 570000 -- Health & Safety Massive zirconium metal scrap can be handled, shipped, and stored with no...

    17. July 2013 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Petzold, L.R. (1982) 29 > Conduction heat transfer solutions VanSant, J.H. (1983) 29 > ... C.T. (1994) 26 > Monte Carlo fundamentals Brown, F.B.; Sutton, T.M. (1996) 24 ...

    18. March 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      G.A. (1995) 53 FEHM: finite element heat and mass transfer code Zyvoloski, G.; Dash, Z.; ... Clark, D. (1997) 46 Monte Carlo fundamentals Brown, F.B.; Sutton, T.M. (1996) ...

    19. Most Viewed Documents for Mathematics and Computing: September...

      Office of Scientific and Technical Information (OSTI)

      C.N.; Paddock, R.A. (1997) 47 Conduction heat transfer solutions VanSant, J.H. (1983) 36 ... C.A. (comps.) (1980) 23 Monte Carlo fundamentals Brown, F.B.; Sutton, T.M. (1996) 22 ...

    20. Most Viewed Documents for Mathematics and Computing: December...

      Office of Scientific and Technical Information (OSTI)

      Petzold, L.R. (1982) 33 Monte Carlo fundamentals Brown, F.B.; Sutton, T.M. (1996) 31 ... N.W.; Eicher, R.W. (1992) 31 Conduction heat transfer solutions VanSant, J.H. (1983) 28 ...

    1. March 2014 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Kinetic theory approach Gidaspow, D.; Bezburuah, R.; Ding, J. (1991) 18 > Communication of emergency public warnings: A social science perspective and state-of-the-art assessment ...

    2. June 2014 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Kinetic theory approach Gidaspow, D.; Bezburuah, R.; Ding, J. (1991) 22 > Communication of emergency public warnings: A social science perspective and state-of-the-art assessment ...

    3. GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION...

      Office of Scientific and Technical Information (OSTI)

      ENERGY; LMFBR TYPE REACTORS; NUCLEAR POWER; PHYSICS; BREEDER REACTORS; CARBONACEOUS MATERIALS; DOCUMENT TYPES; ENERGY; ENERGY SOURCES; EPITHERMAL REACTORS; FAST REACTORS; FBR...

    4. Most Viewed Documents - Mathematics and Computing | OSTI, US...

      Office of Scientific and Technical Information (OSTI)

      Metaphors for cyber security. Moore, Judy Hennessey; Parrott, Lori K.; Karas, Thomas H. (2008) Staggered-grid finite-difference acoustic modeling with the Time-Domain Atmospheric ...

    5. Mathematical modeling and computer simulation of processes in energy systems

      SciTech Connect (OSTI)

      Hanjalic, K.C. )

      1990-01-01

This book is divided into the following chapters: 1. Modeling techniques and tools (fundamental concepts of modeling); 2. Fluid flow, heat and mass transfer, chemical reactions, and combustion; 3. Processes in energy equipment and plant components (boilers, steam and gas turbines, IC engines, heat exchangers, pumps and compressors, nuclear reactors, steam generators and separators, energy transport equipment, energy convertors, etc.); 4. New thermal energy conversion technologies (MHD, coal gasification and liquefaction, fluidized-bed combustion, pulse-combustors, multistage combustion, etc.); 5. Combined cycles and plants, cogeneration; 6. Dynamics of energy systems and their components; 7. Integrated approach to energy systems modeling; and 8. Application of modeling in energy expert systems.

    6. June 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Rodriguez, Tamara S. (2009) 218 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 216 Ferrite Measurement in Austenitic and Duplex ...

    7. September 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      notes for introduction to safety and health Biele, F. (1992) 333 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 286 Ferrite ...

    8. December 2015 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      sensors Sheen, S.H.; Raptis, A.C.; Moscynski, M.J. (1995) 373 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 365 Lecture ...

    9. April 2013 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Publications in biomedical and environmental sciences programs, 1981 Moody, J.B. (comp.) (1982) 306 > A comparison of risk assessment techniques from qualitative to quantitative ...

    10. March 2016 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      sensors Sheen, S.H.; Raptis, A.C.; Moscynski, M.J. (1995) 726 A comparison of risk assessment techniques from qualitative to quantitative Altenbach, T.J. (1995) 560 Ferrite ...

    11. January 2013 Most Viewed Documents for Mathematics And Computing...

      Office of Scientific and Technical Information (OSTI)

      Cybersecurity through Real-Time Distributed Control Systems Kisner, Roger A ORNL; ... M ORNL REACTOR ANALYSIS AND VIRTUAL CONTROL ENVIRONMENT (RAVEN) FY12 REPORT Cristian ...

    12. Introduction to computers: Reference guide

      SciTech Connect (OSTI)

      Ligon, F.V.

      1995-04-01

The "Introduction to Computers" program establishes formal partnerships with local school districts and community-based organizations, introduces computer literacy to precollege students and their parents, and encourages students to pursue Scientific, Mathematical, Engineering, and Technical (SET) careers. Hands-on assignments are given in each class, reinforcing the lesson taught. In addition, the program is designed to broaden the knowledge base of teachers in scientific/technical concepts, and Brookhaven National Laboratory continues to act as a liaison, offering educational outreach to diverse community organizations and groups. This manual contains the teacher's lesson plans and the student documentation for this introductory computer course.

    13. Applied Math & Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Math & Software - Sandia Energy Applied Math & Software HomeTransportation ...

    14. Applied Energy Programs

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (DoD) programs at Los Alamos, and to industry through the Laboratory's Technology Transfer Division. The Applied Energy programs encompass the broad set of energy focus areas:...

    15. Now Accepting Applications for Alvarez Fellowship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers in computer science, applied mathematics or any computational science ... rely on advances in computer science, mathematics, and computational science, as well as ...

    16. Compute nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Compute nodes Compute nodes Click here to see a more detailed hierarchical map of the topology of a compute node. Last edited: 2016-04-29 11:35:0

    17. Computer System,

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

undergraduate summer institute http://isti.lanl.gov (Educational Prog) 2016 Computer System, Cluster, and Networking Summer Institute Purpose The Computer System,...

    18. Exascale Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      DesignForward FastForward CAL Partnerships Shifter: User Defined Images Archive APEX Home R & D Exascale Computing Exascale Computing Moving forward into the exascale era, ...

    19. DOE Fundamentals Handbook: Mathematics, Volume 1

      SciTech Connect (OSTI)

      Not Available

      1992-06-01

The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the fundamentals training necessary to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information provides personnel with a foundation for understanding and performing the basic mathematical calculations associated with various DOE nuclear facility operations.

    20. DOE Fundamentals Handbook: Mathematics, Volume 2

      SciTech Connect (OSTI)

      Not Available

      1992-06-01

The Mathematics Fundamentals Handbook was developed to assist nuclear facility operating contractors in providing operators, maintenance personnel, and the technical staff with the fundamentals training necessary to ensure a basic understanding of mathematics and its application to facility operation. The handbook includes a review of introductory mathematics and the concepts and functional use of algebra, geometry, trigonometry, and calculus. Word problems, equations, calculations, and practical exercises that require the use of each of the mathematical concepts are also presented. This information provides personnel with a foundation for understanding and performing the basic mathematical calculations associated with various DOE nuclear facility operations.

    1. Using Mira to Design Cleaner Engines | Argonne Leadership Computing...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Using Mira to Design Cleaner Engines Event Sponsor: Mathematics and Computing Science - LANS Seminar Start Date: Oct 28 2015 - 3:00pm BuildingRoom: Building 240Room 4301...

    2. Mathematics and biology: The interface, challenges and opportunities

      SciTech Connect (OSTI)

      Levin, S.A. )

      1992-06-01

      The interface between mathematics and biology has long been a rich area of research, with mutual benefit to each supporting discipline. Traditional areas of investigation, such as population genetics, ecology, neurobiology, and 3-D reconstructions, have flourished, despite a rather meager environment for the funding of such work. In the past twenty years, the kind and scope of such interactions between mathematicians and biologists have changed dramatically, reaching out to encompass areas of both biology and mathematics that previously had not benefited. At the same time, with the closer integration of theory and experiment, and the increased reliance on high-speed computation, the costs of such research grew, though not the opportunities for funding. The perception became reinforced, both within the research community and at funding agencies, that although these interactions were expanding, they were not doing so at the rate necessary to meet the opportunities and needs. A workshop was held in Washington, DC, between April 28 and May 3, 1990 which drew together a broadly based group of researchers to synthesize conclusions from a group of working papers and extended discussions. The result is the report presented here, which we hope will provide a guide and stimulus to research in mathematical and computational biology for at least the next decade. The report identifies a number of grand challenges, representing a broad consensus among the participants.

    3. Applied Science/Techniques

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Applied Science/Techniques Print The ALS is an excellent incubator of new scientific techniques and instrumentation. Many of the technical advances that make the ALS a world-class...

    4. New DOE Office of Science support for CAMERA to develop computational

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

mathematics for experimental facilities research September 22, 2015 Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov Experimental science is evolving. With the advent of new technology, scientific facilities are collecting data at

    5. Mathematics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      (1868-1942) JSTOR Contains the backfiles of many core academic journals Zentralblatt MATH The ZBMATH Online Database covers 1826-present Organizations American Institute of...

    6. Computational Modeling | Bioenergy | NREL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Modeling NREL uses computational modeling to increase the efficiency of biomass conversion by rational design using multiscale modeling, applying theoretical approaches, and testing scientific hypotheses. model of enzymes wrapping on cellulose; colorful circular structures entwined through blue strands Cellulosomes are complexes of protein scaffolds and enzymes that are highly effective in decomposing biomass. This is a snapshot of a coarse-grain model of complex cellulosome

    7. Computing Information

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information From here you can find information relating to: Obtaining the right computer accounts. Using NIC terminals. Using BooNE's Computing Resources, including: Choosing your desktop. Kerberos. AFS. Printing. Recommended applications for various common tasks. Running CPU- or IO-intensive programs (batch jobs) Commonly encountered problems Computing support within BooNE Bringing a computer to FNAL, or purchasing a new one. Laptops. The Computer Security Program Plan for MiniBooNE The

    8. Apply for the Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and applications development Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant...

    9. Computational Advances in Applied Energy | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      PDF icon Friedmann-LLNL-SEAB.10.11.pdf More Documents & Publications Director's Perspective by George Miller Fact Sheet: Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) ...

    10. Apply for Beamtime

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Apply for Beamtime Print Friday, 28 August 2009 13:23 Available Beamlines Determine which ALS beamlines are suitable for your experiment. To do this, you can review the ALS Beamlines Directory and/or contact the appropriate beamline scientist listed on the Directory. Log In to the ALSHub user portal ALSHub Login For More Information About the Types of Proposals To learn

    11. Mathematical models of cocurrent spray drying

      SciTech Connect (OSTI)

      Negiz, A.; Lagergren, E.S.; Cinar, A.

      1995-10-01

A steady-state mathematical model for a cocurrent spray dryer is developed. The model includes the mass, momentum, and energy balances for a single drying droplet as well as the total energy and mass balances of the drying medium. A log-normal droplet size distribution is assumed to hold at the exit of the twin-fluid atomizer located at the top of the drying chamber. Discretizing this log-normal distribution into a number of bins yields a system of nonlinear coupled first-order differential equations in the axial distance along the drying chamber. This system of equations is used to compute the axial changes in droplet diameter, density, velocity, moisture, and temperature for the droplets in each representative bin. Furthermore, the distributions of important process parameters such as droplet moisture content, diameter, density, and temperature are also obtainable along the length of the chamber. On the basis of the developed model, a constrained nonlinear optimization problem is solved in which the exit particle moisture content is minimized with respect to the process inputs, subject to a fixed mean particle diameter at the chamber exit. Response surface studies based on empirical models are also performed to illustrate the effectiveness of these techniques in achieving the optimal solution when an a priori model is not available. The structure of the empirical models derived from the mechanistic model is shown to agree with that of the empirical models obtained from the experimental studies.
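The binning step the abstract describes can be sketched as follows (a minimal illustration; the bin count, the ±3σ span, and the example geometric mean and spread are assumptions, not values from the paper). Each resulting bin would then carry its own set of ODEs for diameter, density, velocity, moisture, and temperature.

```python
import math

def lognormal_bins(d_g, sigma_g, n_bins=5):
    """Discretize a log-normal size distribution into n_bins representative
    bins spanning +/- 3 standard deviations in ln-space.

    d_g: geometric mean diameter; sigma_g: geometric standard deviation (>1).
    Returns (diameters, weights) with the weights renormalized to sum to 1.
    """
    mu, s = math.log(d_g), math.log(sigma_g)
    edges = [mu + s * (-3.0 + 6.0 * i / n_bins) for i in range(n_bins + 1)]

    def cdf(x):  # normal CDF evaluated in ln-space
        return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

    # Representative diameter: ln-space midpoint of each bin.
    diameters = [math.exp(0.5 * (edges[i] + edges[i + 1])) for i in range(n_bins)]
    raw = [cdf(edges[i + 1]) - cdf(edges[i]) for i in range(n_bins)]
    total = sum(raw)
    return diameters, [wt / total for wt in raw]

# Hypothetical atomizer: 50 um geometric mean diameter, geometric spread 1.8.
d, w = lognormal_bins(d_g=50.0, sigma_g=1.8, n_bins=5)
```

The middle bin's representative diameter recovers the geometric mean, and the weights sum to one, so the discrete population reproduces the assumed distribution.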

    12. Computing Resources

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cluster-Image TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computing Resources The TRACC Computational Clusters With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster that has now been named Phoenix. Zephyr was acquired from Atipa technologies, and it is a 92-node system with each node having two AMD

    13. Collaborative Mathematical Workbench Eliot Feibush, Matthew Milano...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Collaborative Mathematical Workbench Eliot Feibush, Matthew Milano, Benjamin Phillips, Andrew Zwicker, and James Morgan This invention enables modifying and analyzing numerical...

    14. Applied Cathode Enhancement and

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Applied Cathode Enhancement and Robustness Technologies (ACERT) Team Our project team, part of Los Alamos National Laboratory (LANL), comprises world-leading experts from the fields of accelerator design & testing, chemical synthesis of nanomaterials (quantum dots), and shielding applications of nanomaterials (graphene and other atomically thin sheets). Our goal is to develop and demonstrate 'designer' cold cathode electron sources with tunable parameters (bandgap, efficiency, optical

    15. Los Alamos National Laboratory to host Supercomputing Challenge...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and teachers to computers and applied mathematics; and instill enthusiasm for science in ... and teachers to computers and applied mathematics; and instill enthusiasm for science in ...

    16. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Compute Nodes Compute Nodes Quad-core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB DDR3 800 MHz memory per node Peak Gflop rate 9.2 Gflops/core 36.8 Gflops/node 352 Tflops for the entire machine Each core has its own L1 and L2 caches, 64 KB and 512 KB respectively; a 2 MB L3 cache is shared among the 4 cores Compute Node Software By default the compute nodes run a restricted low-overhead
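The peak figures quoted above are internally consistent, as a quick arithmetic check shows:

```python
# Peak-rate arithmetic for the node configuration listed above.
cores_per_node = 4
gflops_per_core = 9.2
nodes = 9572

gflops_per_node = cores_per_node * gflops_per_core  # matches the quoted 36.8
machine_tflops = nodes * gflops_per_node / 1000.0   # ~352 Tflops for the machine

print(gflops_per_node, machine_tflops)
```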

    17. Vitali Morozov | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Vitali Morozov Principal Application Performance Engineer Vitali Morozov Argonne National Laboratory 9700 South Cass Avenue Building 240 - Rm. 1127 Argonne, IL 60439 630 252-7068 morozov@anl.gov Vitali Morozov is a Principal Application Performance Engineer at the ALCF. He received his B.S. in Mathematics from Novosibirsk State University, and a Ph.D. in Computer Science from Ershov's Institute for Informatics Systems, Novosibirsk, Russia. At Argonne since 2001, he has been working on computer

    18. ACM TOMS replicated computational results initiative

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Heroux, Michael Allen

      2015-06-03

The scientific community relies on the peer review process to assure the quality of published material, the goal being to build a body of work we can trust. Computational journals such as the ACM Transactions on Mathematical Software (TOMS) use this process to rigorously promote the clarity and completeness of content and the citation of prior work. At the same time, it is unusual to independently confirm computational results.

    19. ACM TOMS replicated computational results initiative

      SciTech Connect (OSTI)

      Heroux, Michael Allen

      2015-06-03

The scientific community relies on the peer review process to assure the quality of published material, the goal being to build a body of work we can trust. Computational journals such as the ACM Transactions on Mathematical Software (TOMS) use this process to rigorously promote the clarity and completeness of content and the citation of prior work. At the same time, it is unusual to independently confirm computational results.

    20. Profile for Sara Y. Del Valle

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Biosciences Biosecurity Modeling of viral disease dynamics Epidemiology modeling Computational Physics and Applied Mathematics Mathematics Monte Carlo methods Discrete event ...

    1. SciDAC Institutes | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics ... FASTMath - Frameworks, Algorithms and Scalable Technologies for Mathematics http:...

    2. Applied ALARA techniques

      SciTech Connect (OSTI)

      Waggoner, L.O.

      1998-02-05

The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down, and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different from the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes cleanup of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread, and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady, and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share the lessons learned on how Hanford is accomplishing radiological work.

    3. Apply for Beamtime

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Apply for Beamtime Print Available Beamlines Determine which ALS beamlines are suitable for your experiment. To do this, you can review the ALS Beamlines Directory and/or contact the appropriate beamline scientist listed on the Directory. Log In to the ALSHub user portal ALSHub Login For More Information About the Types of Proposals To learn more about the three different types of

    5. Search for: All records | SciTech Connect

      Office of Scientific and Technical Information (OSTI)

      Filter Results Filter by Subject mathematics and computing (2) applied mathematics (1) big data (1) computational science (1) computer science (1) condensed matter physics, ...

    6. A novel mathematical model for controllable near-field electrospinning

      SciTech Connect (OSTI)

Ru, Changhai; Chen, Jie; Shao, Zhushuai; Pang, Ming; Luo, Jun (Robotics and Microsystems Center, Soochow University, Suzhou 215021; corresponding e-mail: luojun@shu.edu.cn)

      2014-01-15

Near-field electrospinning (NFES) offers better controllability than conventional electrospinning. However, for lack of a guiding theoretical model, precise deposition of micro/nano fibers could only be accomplished by experience. To analyze the behavior of the charged jet in NFES with a mathematical model, the momentum balance equation was simplified and a new expression relating jet cross-sectional radius to axial position was derived. Using this new expression and the mass conservation equation, expressions for the jet cross-sectional radius and velocity were derived in terms of axial position and initial jet acceleration in the form of exponential functions. Based on slender-body theory and the Giesekus model, a quadratic equation for the initial jet acceleration was obtained. With the proposed model, the diameter and velocity of polymer fibers in NFES can be accurately predicted, and mathematical analysis rather than experimental methods can be applied to study the effects of the process parameters in NFES. Moreover, the movement velocity of the collector stage can be regulated by the mathematical model rather than by experience. The model proposed in this paper therefore provides important guidance for the precise deposition of polymer fibers.
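The role mass conservation plays in such a derivation can be illustrated with a small sketch (the exponential velocity profile and all numbers here are assumptions for illustration, not the paper's actual expressions): for an incompressible jet the volumetric flow π·R²·v is constant along the axis, so if velocity grows exponentially, the radius must decay at half the exponent.

```python
import math

# Assumed exponential velocity profile for illustration only.
def jet_velocity(z, v0, k):
    return v0 * math.exp(k * z)

# Radius implied by mass conservation: pi * R^2 * v = const.
def jet_radius(z, r0, k):
    return r0 * math.exp(-0.5 * k * z)

r0, v0, k = 100e-6, 0.5, 800.0   # hypothetical nozzle radius (m), speed (m/s), rate (1/m)
z = 2e-3                         # 2 mm below the nozzle
q0 = math.pi * r0 ** 2 * v0
qz = math.pi * jet_radius(z, r0, k) ** 2 * jet_velocity(z, v0, k)
# q0 and qz agree: the thinning jet exactly compensates the acceleration
```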

    7. In the News | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne National Laboratory researchers are applying the power of high-performance computing, combined with sophisticated experiments, to refine plans for sodium-cooled fast reactors. ...

    8. Computing Events

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Laboratory (pdf) DOENNSA Laboratories Fulfill National Mission with Trinity and Cielo Petascale Computers (pdf) Exascale Co-design Center for Materials in Extreme...

    9. Computational Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Advanced Materials Laboratory Center for Integrated Nanotechnologies Combustion Research Facility Computational Science Research Institute Joint BioEnergy Institute About EC News ...

    10. Computer Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Cite Seer Department of Energy provided open access science research citations in chemistry, physics, materials, engineering, and computer science IEEE Xplore Full text...

    11. Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Security All JLF participants must fully comply with all LLNL computer security regulations and procedures. A laptop entering or leaving B-174 for the sole use of a US citizen, so configured and requiring no IP address, need not be registered for use in the JLF. By September 2009, it is expected that computers for use by Foreign National Investigators will have no special provisions. Notify maricle1@llnl.gov of all other computers entering, leaving, or being moved within B-174. Use

    12. Computing and Computational Sciences Directorate - Contacts

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Home About Us Contacts Jeff Nichols Associate Laboratory Director Computing and Computational Sciences Becky Verastegui Directorate Operations Manager Computing and...

    13. ORISE: Applied health physics projects

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied health physics projects The Oak Ridge Institute for Science and Education (ORISE) provides applied health physics services to government agencies needing technical support ...

    14. Development of the Mathematics of Learning Curve Models for Evaluating...

      Office of Scientific and Technical Information (OSTI)

      of the Mathematics of Learning Curve Models for Evaluating Small Modular Reactor Economics Citation Details In-Document Search Title: Development of the Mathematics of Learning ...

    15. SCIENCE ON SATURDAY- "Disastrous Equations: The Role of Mathematics...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SCIENCE ON SATURDAY- "Disastrous Equations: The Role of Mathematics in Understanding Tsunami" Professor J. Douglas Wright, Associate Professor Department of Mathematics, Drexel ...

    16. Conference on Non-linear Phenomena in Mathematical Physics: Dedicated...

      Office of Scientific and Technical Information (OSTI)

      current trends of nonlinear phenomena in mathematical physics, but also served as an awareness session on current women's contributions to mathematics. Authors:...

    17. Extreme Scale Computing, Co-design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math » Extreme Scale Computing, Co-design Extreme Scale Computing, Co-design Computational co-design may facilitate revolutionary designs in the next generation of supercomputers. Get Expertise Tim Germann Physics and Chemistry of Materials Email Allen McPherson Energy and Infrastructure Analysis Email Turab Lookman Physics and Condensed Matter and Complex Systems Email Computational co-design involves developing the interacting components of a

    18. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes There are currently 2632 nodes available on PDSF. The compute (batch) nodes at PDSF are heterogeneous, reflecting the periodic procurement of new nodes (and the eventual retirement of old nodes). From the user's perspective they are essentially all equivalent, except that some have more memory per job slot. If your jobs have memory requirements beyond the default maximum of 1.1 GB, you should specify that in your job submission and the batch system will run your job on an

    19. Impact analysis on a massively parallel computer

      SciTech Connect (OSTI)

      Zacharia, T.; Aramayo, G.A.

      1994-06-01

      Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

    20. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nodes Quad-Core AMD Opteron processor Compute Node Configuration 9,572 nodes 1 quad-core AMD 'Budapest' 2.3 GHz processor per node 4 cores per node (38,288 total cores) 8 GB...

    1. Mathematical and Statistical Opportunities in Cyber Security

      Office of Scientific and Technical Information (OSTI)

      Mathematical and Statistical Opportunities in Cyber Security. Juan Meza, Scott Campbell, David Bailey. Abstract: The role of mathematics in a complex system such as the Internet has yet to be deeply explored. In this paper, we summarize some of the important and pressing problems in cyber security from the viewpoint of open science environments. We start by posing the question "What fundamental problems exist within cyber security research that can be helped by advanced

    2. LHC Computing

      SciTech Connect (OSTI)

      Lincoln, Don

      2015-07-28

      The LHC is the world's highest energy particle accelerator and scientists use it to record an unprecedented amount of data. This data is recorded in electronic format and it requires an enormous computational infrastructure to convert the raw data into conclusions about the fundamental rules that govern matter. In this video, Fermilab's Dr. Don Lincoln gives us a sense of just how much data is involved and the incredible computer resources that make it all possible.

    3. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

      SciTech Connect (OSTI)

      Du, Qiang

      2014-11-12

      The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. 
One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next-generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative-free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
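      The nonlocal, derivative-free force model that makes peridynamics attractive here can be illustrated with a minimal 1-D bond-based sketch. The grid, horizon, micromodulus `c`, and loading below are illustrative assumptions, not the project's formulation:

```python
# Minimal 1-D bond-based peridynamics sketch (an illustration of the
# nonlocal force model described above, not the project's formulation).
# Every node interacts with all nodes within a horizon delta; the
# pairwise force is proportional to the bond stretch. Grid spacing,
# micromodulus c, and the loading are assumed for illustration.
def peridynamic_forces(x, u, delta, c, dx):
    """Nonlocal internal force density at each node (no spatial derivatives)."""
    n = len(x)
    f = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xi = x[j] - x[i]                  # bond in the reference configuration
            if abs(xi) > delta:
                continue                      # j lies outside i's horizon
            eta = u[j] - u[i]                 # relative displacement across the bond
            s = (abs(xi + eta) - abs(xi)) / abs(xi)   # bond stretch
            direction = 1.0 if (xi + eta) > 0 else -1.0
            f[i] += c * s * direction * dx    # volume-weighted pairwise force
    return f

x = [0.1 * i for i in range(11)]              # 1-D reference grid
u = [0.01 * xi for xi in x]                   # homogeneous 1% stretch
f = peridynamic_forces(x, u, delta=0.25, c=1.0, dx=0.1)
print(abs(f[5]) < 1e-12)                      # interior forces balance under uniform strain
```

      Because the force is an integral over a horizon rather than a spatial derivative, the same formula remains well defined across a crack, which is the property the abstract credits for handling fracture.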

    4. ACM TOMS replicated computational results initiative (Journal Article) |

      Office of Scientific and Technical Information (OSTI)

      SciTech Connect Journal Article: ACM TOMS replicated computational results initiative Citation Details In-Document Search This content will become publicly available on June 3, 2016 Title: ACM TOMS replicated computational results initiative In this study, the scientific community relies on the peer review process for assuring the quality of published material, the goal of which is to build a body of work we can trust. Computational journals such as The ACM Transactions on Mathematical

    5. Climate Modeling using High-Performance Computing

      SciTech Connect (OSTI)

      Mirin, A A

      2007-02-05

      The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

    6. Computational mechanics

      SciTech Connect (OSTI)

      Goudreau, G.L.

      1993-03-01

      The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

    7. Compute Nodes

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Nodes Compute Nodes MC-proc.png Compute Node Configuration 6,384 nodes 2 twelve-core AMD 'MagnyCours' 2.1-GHz processors per node (see die image to the right and schematic below) 24 cores per node (153,216 total cores) 32 GB DDR3 1333-MHz memory per node (6,000 nodes) 64 GB DDR3 1333-MHz memory per node (384 nodes) Peak Gflop/s rate: 8.4 Gflops/core 201.6 Gflops/node 1.28 Peta-flops for the entire machine Each core has its own L1 and L2 caches, with 64 KB and 512KB respectively One 6-MB
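      The quoted per-node and machine peak rates follow directly from the per-core rate and the node counts; a quick arithmetic check:

```python
# Quick check that the quoted peak rates are consistent with the
# core counts given above.
gflops_per_core = 8.4
cores_per_node = 24
nodes = 6384

gflops_per_node = gflops_per_core * cores_per_node   # ~201.6 Gflop/s per node
peak_pflops = gflops_per_node * nodes / 1.0e6        # Gflop/s -> Pflop/s

print(round(gflops_per_node, 1), round(peak_pflops, 2))
```

      The product comes to about 1.287 Pflop/s, matching the quoted 1.28 Pflop/s figure up to rounding.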

    8. Applied Optoelectronics | Open Energy Information

      Open Energy Info (EERE)

      optical semiconductor devices, packaged optical components, optical subsystems, laser transmitters, and fiber optic transceivers. References: Applied Optoelectronics1...

    9. Computational mechanics

      SciTech Connect (OSTI)

      Raboin, P J

      1998-01-01

      The Computational Mechanics thrust area is a vital and growing facet of the Mechanical Engineering Department at Lawrence Livermore National Laboratory (LLNL). This work supports the development of computational analysis tools in the areas of structural mechanics and heat transfer. Over 75 analysts depend on thrust area-supported software running on a variety of computing platforms to meet the demands of LLNL programs. Interactions with the Department of Defense (DOD) High Performance Computing and Modernization Program and the Defense Special Weapons Agency are of special importance as they support our ParaDyn project in its development of new parallel capabilities for DYNA3D. Working with DOD customers has been invaluable to driving this technology in directions mutually beneficial to the Department of Energy. Other projects associated with the Computational Mechanics thrust area include work with the Partnership for a New Generation Vehicle (PNGV) for ''Springback Predictability'' and with the Federal Aviation Administration (FAA) for the ''Development of Methodologies for Evaluating Containment and Mitigation of Uncontained Engine Debris.'' In this report for FY-97, there are five articles detailing three code development activities and two projects that synthesized new code capabilities with new analytic research in damage/failure and biomechanics. The articles this year are: (1) Energy- and Momentum-Conserving Rigid-Body Contact for NIKE3D and DYNA3D; (2) Computational Modeling of Prosthetics: A New Approach to Implant Design; (3) Characterization of Laser-Induced Mechanical Failure Damage of Optical Components; (4) Parallel Algorithm Research for Solid Mechanics Applications Using Finite Element Analysis; and (5) An Accurate One-Step Elasto-Plasticity Algorithm for Shell Elements in DYNA3D.

    10. Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks

      SciTech Connect (OSTI)

      Ziaul Huque

      2007-08-31

      This is the final technical report for the project titled 'Mathematically Reduced Chemical Reaction Mechanism Using Neural Networks'. The aim of the project was to develop an efficient chemistry model for combustion simulations. The reduced chemistry model was developed mathematically, without the need for extensive knowledge of the chemistry involved. To aid in the development of the model, neural networks (NNs) were used via a new network topology known as Non-linear Principal Components Analysis (NPCA). A commonly used Multilayer Perceptron Neural Network (MLP-NN) was modified to implement NPCA-NN. The training rate of NPCA-NN was improved with the Generalized Regression Neural Network (GRNN), based on kernel smoothing techniques. Kernel smoothing provides a simple way of finding structure in a data set without imposing a parametric model. The trajectory data of the reaction mechanism were generated using genetic algorithm (GA) optimization techniques. The NPCA-NN algorithm was then used for the reduction of the Dimethyl Ether (DME) mechanism. DME is a recently developed fuel made from natural gas (and other feedstocks such as coal, biomass, and urban wastes) that can be used in compression-ignition engines as a substitute for diesel. An in-house two-dimensional Computational Fluid Dynamics (CFD) code was developed based on a meshfree technique and a time-marching solution algorithm. The project also provided valuable research experience to two graduate students.
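      The kernel-smoothing idea behind GRNN can be shown with a Nadaraya-Watson estimator: a prediction is a kernel-weighted average of training targets, so local structure emerges without a parametric fit. The Gaussian kernel, bandwidth, and toy data below are assumptions for illustration only:

```python
import math

# Nadaraya-Watson kernel regression: a hedged sketch of the kernel-
# smoothing idea behind GRNN. The Gaussian kernel, bandwidth sigma,
# and the toy quadratic data are assumptions for illustration only.
def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    """Predict y as a kernel-weighted average of the training targets."""
    weights = [math.exp(-((x_query - xi) ** 2) / (2.0 * sigma ** 2))
               for xi in x_train]
    return sum(w * yi for w, yi in zip(weights, y_train)) / sum(weights)

# Noise-free samples of y = x^2: the estimator recovers the local
# structure without ever fitting a parametric model.
xs = [0.1 * i for i in range(21)]
ys = [x * x for x in xs]
print(grnn_predict(xs, ys, 1.0))   # close to 1.0 (= 1.0 ** 2)
```

      The bandwidth `sigma` plays the role the abstract assigns to kernel smoothing: it controls how much neighboring trajectory data influences each prediction.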

    11. MULTISCALE MATHEMATICS FOR BIOMASS CONVERSION TO RENEWABLE HYDROGEN

      SciTech Connect (OSTI)

      Vlachos, Dionisios; Plechac, Petr; Katsoulakis, Markos

      2013-09-05

      The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
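      The kinetic Monte Carlo machinery of goal (i) rests on a standard stochastic-simulation step: draw an exponential waiting time from the total event propensity, then fire an event. A toy first-order surface reaction (hypothetical rate and species, not the project's model) illustrates it:

```python
import math
import random

# Illustrative kinetic Monte Carlo (stochastic simulation) loop for a
# toy first-order surface reaction A -> B. The rate and species are
# hypothetical; this sketches the kernel that coarse-grained KMC
# builds on, not the project's simulator.
def kmc_run(n_a, rate_per_site, t_end, seed=1):
    """Fire A -> B events with exponential waiting times until t_end."""
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        total_rate = rate_per_site * n_a                  # propensity of the only event type
        dt = -math.log(1.0 - rng.random()) / total_rate   # exponential waiting time
        if t + dt > t_end:
            break
        t += dt
        n_a -= 1                                          # one A converts to B
    return t, n_a

t, remaining = kmc_run(n_a=1000, rate_per_site=1.0, t_end=1.0)
print(remaining)   # on average about 1000 * exp(-1) ~ 368 survive
```

      Coarse-graining, as described above, replaces per-site events like this with aggregated events over cells, which shrinks the propensity list that each step must scan.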

    12. Computational Methods for Analyzing Fluid Flow Dynamics from Digital Imagery

      SciTech Connect (OSTI)

      Luttman, A.

      2012-03-30

      The main long-term goal of this work is to perform computational dynamics analysis and quantify uncertainty from vector fields computed directly from measured data. Global analysis based on observed spatiotemporal evolution is performed using an objective function built on expected physics and informed scientific priors, variational optimization to compute vector fields from measured data, and transport analysis proceeding from observations and priors. A mathematical formulation for computing flow fields is set up, and the minimizer of the resulting problem is computed. An application to oceanic flow based on sea surface temperature is presented.
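      A minimal version of the variational idea, computing a flow velocity as the minimizer of an objective based on brightness constancy, can be sketched in one dimension. The synthetic signal, the known velocity, and the least-squares closed form are illustrative assumptions, not the paper's oceanic-flow method:

```python
import math

# Minimal sketch of estimating a flow field from imagery via an
# objective-function (least-squares) formulation: with brightness
# constancy I_t + v * I_x = 0, the minimizer of sum (I_t + v * I_x)^2
# is v = -sum(I_x * I_t) / sum(I_x^2). The 1-D "frames" below are a
# synthetic signal translated by a known velocity; all of this is an
# illustration only.
def frame(t, n=200, v_true=2.0):
    """Synthetic 1-D image: a Gaussian bump advected at v_true pixels/frame."""
    return [math.exp(-((i - 60.0 - v_true * t) ** 2) / 50.0) for i in range(n)]

f0, f1 = frame(0), frame(1)
ix = [(f0[i + 1] - f0[i - 1]) / 2.0 for i in range(1, len(f0) - 1)]  # spatial gradient
it = [f1[i] - f0[i] for i in range(1, len(f0) - 1)]                  # temporal difference
v_est = -sum(gx * gt for gx, gt in zip(ix, it)) / sum(gx * gx for gx in ix)
print(v_est)   # close to the true velocity of 2.0
```

      The full method described above generalizes this closed-form scalar to a spatially varying vector field, with physics-based priors regularizing the optimization.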

    13. Energy Department Announces Ten New Projects to Apply High-Performance

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing to Manufacturing Challenges | Department of Energy Ten New Projects to Apply High-Performance Computing to Manufacturing Challenges Energy Department Announces Ten New Projects to Apply High-Performance Computing to Manufacturing Challenges February 17, 2016 - 9:30am Addthis The Energy Department today announced $3 million for ten new projects that will enable private-sector companies to use high-performance computing resources at the department's national laboratories to tackle

    14. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, ... The DOE Office of Science's Advanced Scientific Computing Research (ASCR) program ...

    15. Energy Department Announces Ten New Projects to Apply High-Performance...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Ten New Projects to Apply High-Performance Computing to Manufacturing Challenges Energy ... initiative pairs leading clean energy technology companies with the world-class ...

    16. Apply

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Unofficial transcripts are acceptable. If transcripts are not in English, provide a translation. If grades are not in the traditional U.S. letter format (A, B, C) or GPA (out of 4.0)...

    17. Computing at JLab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      JLab --- Accelerator Controls CAD CDEV CODA Computer Center High Performance Computing Scientific Computing JLab Computer Silo maintained by webmaster@jlab.org...

    18. Science at ALCF | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Three-dimensional view of shock reflection in a square tube. First-Principles Simulations of High-Speed Combustion and Detonation, Alexei Khokhlov. Allocation Program: INCITE. Allocation Hours: 140 Million.

    19. Computational Sciences and Engineering Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The Computational Sciences and Engineering Division is a major research division at the Department of Energy's Oak Ridge National Laboratory. CSED develops and applies creative information technology and modeling and simulation research solutions for National Security and National Energy Infrastructure needs. The mission of the Computational Sciences and Engineering Division is to enhance the country's capabilities in achieving important objectives in the areas of national defense, homeland

    20. Argonne's Laboratory computing center - 2007 annual report.

      SciTech Connect (OSTI)

      Bair, R.; Pieper, G. W.

      2008-05-28

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. 
Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    1. Fermilab | Science at Fermilab | Computing | Grid Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Grid Computing Center interior. Grid Computing Center interior. Computing Grid Computing As high-energy physics experiments grow larger in scope, they require more computing power to process and analyze data. Laboratories purchase rooms full of computer nodes for experiments to use. But many experiments need even more capacity during peak periods. And some experiments do not need to use all of their computing power all of the time. In the early 2000s, members of Fermilab's Computing Division

    2. RATIO COMPUTER

      DOE Patents [OSTI]

      Post, R.F.

      1958-11-11

      An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals, each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of the input signals, depending upon the relation of the input to the fixed signal in the first-mentioned channel.
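      A numerical sketch of the feedback principle the patent describes: a shared gain is servoed until the reference channel's output matches the fixed comparison voltage, which forces k = v_ref / v_c; the signal channel, using the same gain, then outputs a voltage proportional to the quotient. The loop rate and voltages are illustrative assumptions:

```python
# Numerical sketch of the patent's feedback principle (gains, voltages,
# and loop rate are illustrative assumptions). A shared gain k is
# servoed until the reference channel's output k * v_c matches the
# constant comparison voltage v_ref, forcing k = v_ref / v_c; the
# signal channel, using the same gain, then outputs v_a * v_ref / v_c.
def ratio_computer(v_a, v_c, v_ref=1.0, steps=2000, rate=0.05):
    k = 1.0                          # variable feedback gain shared by both channels
    for _ in range(steps):
        error = v_ref - k * v_c      # difference signal from the comparison channel
        k += rate * error            # negative feedback drives the error to zero
    return k * v_a                   # output proportional to the quotient v_a / v_c

print(ratio_computer(6.0, 2.0))      # ~3.0, i.e. 6.0 / 2.0
```

      Making the comparison voltage one of the inputs (v_ref = v_b) turns the same loop into a multiplier, since the output becomes v_a * v_b / v_c.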

    3. CASL-U-2015-0158-000

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      - Joint International Conference on Mathematics and Computation (M&C), Supercomputing ... Int. Conf. Mathematics and Computational Methods Applied to Nuclear Science & Engineering ...

    4. CASL-U-2015-0159-000 The Implementation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      - Joint International Conference on Mathematics and Computation (M&C), Supercomputing ... Int. Conf. Mathematics and Computational Methods Applied to Nuclear Science & Engineering ...

    5. CASL-U-2015-0035-000 High Fidelity Modeling of Pellet-Clad Interaction

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      - Joint International Conference on Mathematics and Computation (M&C), Supercomputing ... of the International Conference on Mathematics and Computational Methods Applied to ...

    6. Applied Materials | Open Energy Information

      Open Energy Info (EERE)

      Jump to: navigation, search Name: Applied Materials Address: 3050 Bowers Avenue Place: Santa Clara, California Zip: 95054 Sector: Solar Website: www.appliedmaterials.com...

    7. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report - Phase I

      SciTech Connect (OSTI)

      Mark S. Schmalz

      2011-07-24

      Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G -> G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. 
Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.

    8. Idaho Science, Technology, Engineering and Mathematics Overview

      ScienceCinema (OSTI)

      None

      2013-05-28

      Idaho National Laboratory has been instrumental in establishing the Idaho Science, Technology, Engineering and Mathematics initiative -- i-STEM, which brings together industry, educators, government and other partners to provide K-12 teachers with support, materials and opportunities to improve STEM instruction and increase student interest in technical careers. You can learn more about INL's education programs at http://www.facebook.com/idahonationallaboratory.

    9. Applied Materials | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Applied Materials Argonne's nanocomposite charge drain coatings represent a significant breakthrough in the effort to develop microelectromechanical systems, or MEMS. Argonne is a leading technology developer with the advanced manufacturing industry and government sponsors and clients. The emphasis is on applied technology demonstration that often

    10. Artificial intelligence technologies applied to terrain analysis

      SciTech Connect (OSTI)

Wright, J.C.; Powell, D.R.

      1990-01-01

The US Army Training and Doctrine Command is currently developing, in cooperation with Los Alamos National Laboratory, a Corps-level combat simulation to support military analytical studies. This model emphasizes high-resolution modeling of the command and control processes, with particular attention to architectural considerations that enable extension of the model. A planned future extension is the inclusion of a computer-based planning capability for command echelons that can be dynamically invoked during the execution of the model. Command and control is the process through which the activities of military forces are directed, coordinated, and controlled to achieve the stated mission. To perform command and control, the commander must understand the mission, perform terrain analysis, and understand his own situation and capabilities as well as the enemy situation and the enemy's probable actions. To support computer-based planning, data structures must be available to support the computer's ability to "understand" the mission, terrain, own capabilities, and enemy situation. The availability of digitized terrain makes it feasible to apply artificial intelligence technologies to emulate the terrain analysis process, producing data structures for use in planning. The work done thus far to support the understanding of terrain is the topic of this paper. 13 refs., 5 figs., 6 tabs.

    11. Mathematical modeling of silica deposition in Tongonan-I reinjection wells, Philippines

      SciTech Connect (OSTI)

Malate, R.C.M.; O'Sullivan, M.J.

      1993-10-01

      Mathematical models of silica deposition are derived using the method of characteristics for the problem of variable rate injection into a well producing radially symmetric flow. Solutions are developed using the first order rate equation of silica deposition suggested by Rimstidt and Barnes (1980). The changes in porosity and permeability resulting from deposition are included in the models. The models developed are successfully applied in simulating the changes in injection capacity in some of the reinjection wells in Tongonan geothermal field, Philippines.
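A minimal sketch of the kind of first-order deposition model described above, integrating dC/dt = -k(C - C_eq) and tracking the resulting porosity and permeability change. The rate constant, concentrations, initial porosity, and the Kozeny-Carman-style permeability scaling used here are illustrative assumptions, not the paper's calibrated values.

```python
def silica_deposition(c_in, c_eq, k, t_end, dt=1.0, phi0=0.1):
    """Integrate a first-order silica deposition rate, dC/dt = -k (C - C_eq),
    and track the porosity lost to the deposited solid (illustrative units:
    concentrations in kg/m^3, k in 1/s)."""
    rho_silica = 2650.0  # kg/m^3, approximate density of deposited silica
    c, phi, t = c_in, phi0, 0.0
    while t < t_end:
        rate = k * (c - c_eq)            # mass deposited per unit volume/time
        c -= rate * dt                   # silica leaves solution
        phi -= (rate * dt) / rho_silica  # pore volume filled by solid (approx.)
        t += dt
    # Kozeny-Carman style scaling for the permeability reduction
    k_ratio = (phi / phi0) ** 3 * ((1 - phi0) / (1 - phi)) ** 2
    return c, phi, k_ratio
```

Running this with a supersaturated inlet (c_in > c_eq) drives the concentration toward equilibrium while porosity and the permeability ratio both decline, the qualitative behavior the models above capture for injection-capacity loss.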

    12. SciDAC Conferences

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Conferences Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Next Generation Networking Scientific Discovery through ...

    13. Programming Challenges Presentations | U.S. DOE Office of Science...

      Office of Science (SC) Website

      Programming Challenges Presentations Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming ...

    14. Programming Challenges Workshop | U.S. DOE Office of Science...

      Office of Science (SC) Website

      Programming Challenges Workshop Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming ...

    15. Awards | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Next Generation Networking Scientific Discovery through Advanced...

    16. High performance computing and communications: Advancing the frontiers of information technology

      SciTech Connect (OSTI)

      1997-12-31

This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

    17. Multiscale Mathematics For Plasma Kinetics Spanning Multiple...

      Office of Scientific and Technical Information (OSTI)

      Angeles Sponsoring Org: USDOE Office of Science (SC), Advanced Scientific Computing ... Coulomb collisions; Monte Carlo; Direct Simulation Monte Carlo; stochastic ...

    18. Applied Sedimentology | Open Energy Information

      Open Energy Info (EERE)

      Sedimentology Jump to: navigation, search OpenEI Reference LibraryAdd to library Book: Applied Sedimentology Author R.C. Salley Published Academic Press, 2000 DOI Not Provided...

    19. Physical Chemistry and Applied Spectroscopy

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

PCS Physical Chemistry and Applied Spectroscopy We perform basic and applied research in support of the Laboratory's national security mission and serve a wide range of customers. Contact Us Group Leader Kirk Rector Deputy Group Leader Jeff Pietryga Group Office (505) 667-7121 Postdoctoral researcher Young-Shin Park characterizing emission spectra of LEDs in the Los Alamos National Laboratory optical laboratory.

    20. ORISE: Applied health physics projects

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied health physics projects The Oak Ridge Institute for Science and Education (ORISE) provides applied health physics services to government agencies needing technical support for decommissioning projects. Whether the need is assistance with the development of technical basis documents or advice on how to identify, measure and assess the presence of radiological materials, ORISE can help determine the best course for an environmental cleanup project. Our key areas of expertise include fuel

    1. Computer Modeling of Carbon Metabolism Enables Biofuel Engineering (Fact Sheet)

      SciTech Connect (OSTI)

      Not Available

      2011-09-01

      In an effort to reduce the cost of biofuels, the National Renewable Energy Laboratory (NREL) has merged biochemistry with modern computing and mathematics. The result is a model of carbon metabolism that will help researchers understand and engineer the process of photosynthesis for optimal biofuel production.

    2. The Role of Mathematical Methods in Efficiency Calibration and Uncertainty Estimation in Gamma Based Non-Destructive Assay - 12311

      SciTech Connect (OSTI)

      Venkataraman, R.; Nakazawa, D.

      2012-07-01

      Mathematical methods are being increasingly employed in the efficiency calibration of gamma based systems for non-destructive assay (NDA) of radioactive waste and for the estimation of the Total Measurement Uncertainty (TMU). Recently, ASTM (American Society for Testing and Materials) released a standard guide for use of modeling passive gamma measurements. This is a testimony to the common use and increasing acceptance of mathematical techniques in the calibration and characterization of NDA systems. Mathematical methods offer flexibility and cost savings in terms of rapidly incorporating calibrations for multiple container types, geometries, and matrix types in a new waste assay system or a system that may already be operational. Mathematical methods are also useful in modeling heterogeneous matrices and non-uniform activity distributions. In compliance with good practice, if a computational method is used in waste assay (or in any other radiological application), it must be validated or benchmarked using representative measurements. In this paper, applications involving mathematical methods in gamma based NDA systems are discussed with several examples. The application examples are from NDA systems that were recently calibrated and performance tested. Measurement based verification results are presented. Mathematical methods play an important role in the efficiency calibration of gamma based NDA systems. This is especially true when the measurement program involves a wide variety of complex item geometries and matrix combinations for which the development of physical standards may be impractical. Mathematical methods offer a cost effective means to perform TMU campaigns. Good practice demands that all mathematical estimates be benchmarked and validated using representative sets of measurements. (authors)
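As one illustration of how a TMU might be assembled, independent relative uncertainty components are commonly combined in quadrature. The component names and 1-sigma values below are hypothetical, not taken from the systems described above.

```python
import math

def total_measurement_uncertainty(components):
    """Combine independent relative uncertainties (1-sigma) in quadrature."""
    return math.sqrt(sum(u * u for u in components.values()))

# Hypothetical 1-sigma relative uncertainties for a gamma-based assay
components = {
    "calibration_model":   0.08,
    "matrix_density":      0.12,
    "source_position":     0.10,
    "counting_statistics": 0.03,
}
print(f"TMU = {total_measurement_uncertainty(components):.1%}")  # → TMU = 17.8%
```

Quadrature combination is only valid when the components are independent; correlated effects (e.g., matrix density and source position in a heterogeneous drum) would need a covariance term or a measurement-based bound of the kind the paper's benchmarking provides.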

    3. Computation Directorate 2008 Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2009-03-25

      Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

    4. Bringing Advanced Computational Techniques to Energy Research

      SciTech Connect (OSTI)

      Mitchell, Julie C

      2012-11-17

      Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

    5. Validating Computer-Designed Proteins for Vaccines

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      apply to a variety of other vaccine targets, such as human immunodeficiency virus and influenza. Wanted: Dead or Computed As strange as it sounds, most vaccines are composed of...

    6. High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      HPC INL Logo Home High-Performance Computing INL's high-performance computing center provides general use scientific computing capabilities to support the lab's efforts in advanced...

    7. Quantum Computing: Solving Complex Problems

      ScienceCinema (OSTI)

      DiVincenzo, David [IBM Watson Research Center

      2009-09-01

      One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

    8. Guide to Preventing Computer Software Piracy

      Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]

      2001-07-12

      Guide to Preventing Computer Software Piracy It is the intent of the Department of Energy (DOE) to issue guidance in accordance with Federal CIO Council recommendations and in compliance with Executive Order 13103. The guidance in this document is based on the CIO Council's recommendations in reference to computer software piracy, and applies to all DOE elements. Canceled by DOE N 205.18

    9. CRC handbook of applied thermodynamics

      SciTech Connect (OSTI)

Palmer, D.A. (Research and Development Dept.)

      1987-01-01

      The emphasis of this book is on applied thermodynamics, featuring the stage of development of a process rather than the logical development of thermodynamic principles. It is organized according to the types of problems encountered in industry, such as probing research, process assessment, and process development. The applied principles presented can be used in most areas of industry including oil and gas production and processing, chemical processing, power generation, polymer production, food processing, synthetic fuels production, specialty chemicals and pharmaceuticals production, bioengineered processes, etc.

    10. A mathematical framework for multiscale science and engineering : the variational multiscale method and interscale transfer operators.

      SciTech Connect (OSTI)

      Wagner, Gregory John; Collis, Samuel Scott; Templeton, Jeremy Alan; Lehoucq, Richard B.; Parks, Michael L.; Jones, Reese E.; Silling, Stewart Andrew; Scovazzi, Guglielmo; Bochev, Pavel B.

      2007-10-01

      This report is a collection of documents written as part of the Laboratory Directed Research and Development (LDRD) project A Mathematical Framework for Multiscale Science and Engineering: The Variational Multiscale Method and Interscale Transfer Operators. We present developments in two categories of multiscale mathematics and analysis. The first, continuum-to-continuum (CtC) multiscale, includes problems that allow application of the same continuum model at all scales with the primary barrier to simulation being computing resources. The second, atomistic-to-continuum (AtC) multiscale, represents applications where detailed physics at the atomistic or molecular level must be simulated to resolve the small scales, but the effect on and coupling to the continuum level is frequently unclear.

    11. Scientific computations section monthly report, November 1993

      SciTech Connect (OSTI)

      Buckner, M.R.

      1993-12-30

      This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.

    12. Researcher, Los Alamos National Laboratory - Applied Physics...

      National Nuclear Security Administration (NNSA)

      Applied Physics Division | National Nuclear Security Administration Facebook Twitter ... Researcher, Los Alamos National Laboratory - Applied Physics Division Stephen Becker ...

    13. Summer of Applied Geophysical Experience

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

Summer of Applied Geophysical Experience (SAGE) 2016 - Our 34th Year! SAGE is a 3-4 week research and education program in exploration geophysics for graduate and undergraduate students and working professionals, based in Santa Fe, NM, U.S.A. Application deadline: March 27, 2016, 5:00pm MDT. SAGE students, faculty, teaching assistants, and visiting scientists acquire, process and interpret reflection/refraction seismic, magnetotelluric (MT)/electromagnetic (EM), ground penetrating radar (GPR),

    14. Computation of multi-material interactions using point method

      SciTech Connect (OSTI)

      Zhang, Duan Z; Ma, Xia; Giguere, Paul T

      2009-01-01

Calculations of fluid flows are often based on an Eulerian description, while calculations of solid deformations are often based on a Lagrangian description of the material. When Eulerian descriptions are applied to problems of solid deformation, state variables such as stress and damage must be advected, causing significant numerical diffusion error. When Lagrangian methods are applied to problems involving large solid deformations or fluid flows, mesh distortion and entanglement are significant sources of error and often lead to failure of the calculation. There are significant difficulties for either method when applied to problems involving large deformation of solids. To address these difficulties, the particle-in-cell (PIC) method was introduced in the 1960s. In this method, Eulerian meshes stay fixed and Lagrangian particles move through the Eulerian meshes during the material deformation. Since its introduction, many improvements to the method have been made. The work of Sulsky et al. (1995, Comput. Phys. Commun., v. 87, p. 236) provides a mathematical foundation for an improved version of the PIC method, the material point method (MPM). The unique advantages of the MPM have led to many attempts to apply the method to problems involving the interaction of different materials, such as fluid-structure interactions. These are multiphase flow or multimaterial deformation problems, in which pressures, material densities and volume fractions are determined by satisfying the continuity constraint. However, due to the difference in the approximations between the material point method and the Eulerian method, erroneous results for pressure will be obtained if the scheme used in Eulerian methods for multiphase flows is applied unchanged. To resolve this issue, we introduce a numerical scheme that satisfies the continuity requirement to higher order of accuracy in the sense of weak solutions for the continuity equations. Numerical examples are given to demonstrate the new scheme.
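The particle-to-grid transfer at the heart of PIC/MPM can be sketched in one dimension with linear (hat-function) shape functions. The grid spacing and particle data below are illustrative, and this basic transfer step is not the authors' higher-order continuity scheme.

```python
def particles_to_grid(x_p, m_p, v_p, n_nodes, dx):
    """Project particle mass and momentum onto a 1-D grid using linear
    shape functions (the basic PIC/MPM transfer step)."""
    mass = [0.0] * n_nodes
    momentum = [0.0] * n_nodes
    for x, m, v in zip(x_p, m_p, v_p):
        i = int(x / dx)              # left node of the containing cell
        w = x / dx - i               # linear weight toward the right node
        mass[i]     += m * (1 - w)
        mass[i + 1] += m * w
        momentum[i]     += m * v * (1 - w)
        momentum[i + 1] += m * v * w
    # Grid velocity wherever mass was received
    vel = [p / q if q > 0 else 0.0 for p, q in zip(momentum, mass)]
    return mass, momentum, vel

# Two particles of unit mass at cell midpoints of a 3-node grid
mass, momentum, vel = particles_to_grid(
    [0.25, 0.75], [1.0, 1.0], [2.0, 4.0], n_nodes=3, dx=0.5)
```

Because the transfer uses partition-of-unity weights, total mass and momentum are conserved exactly on the grid, which is the property the weak-form continuity argument above builds on.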

    15. NetMOD Version 2.0 Mathematical Framework

      SciTech Connect (OSTI)

      Merchant, Bion J.; Young, Christopher J.; Chael, Eric P.

      2015-08-01

      NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic, hydroacoustic and infrasonic networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probabilities of signal detection at each station and event detection across the network of stations can be computed given a detection threshold. The purpose of this document is to clearly and comprehensively present the mathematical framework used by NetMOD, the software package developed by Sandia National Laboratories to assess the monitoring capability of ground-based sensor networks. Many of the NetMOD equations used for simulations are inherited from the NetSim network capability assessment package developed in the late 1980s by SAIC (Sereno et al., 1990).
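The final step described above, turning per-station SNRs into a network-level detection probability, can be sketched as follows. The SNR values, the logistic per-station detection curve, and the two-station network criterion are illustrative assumptions, not NetMOD's actual geophysical or statistical models.

```python
import math
from itertools import product

def detection_probability(snr_db, threshold_db=10.0, width_db=2.0):
    """Per-station detection probability, modeled as a logistic curve
    centered on the detection threshold (illustrative)."""
    return 1.0 / (1.0 + math.exp(-(snr_db - threshold_db) / width_db))

def network_detects(p_list, min_stations=2):
    """Probability that at least `min_stations` of the independent
    stations detect, summed over all detect/miss outcomes."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(p_list)):
        if sum(outcome) >= min_stations:
            prob = 1.0
            for p, hit in zip(p_list, outcome):
                prob *= p if hit else (1 - p)
            total += prob
    return total

station_snrs = [14.0, 11.0, 8.0]  # hypothetical per-station SNRs in dB
p = [detection_probability(s) for s in station_snrs]
print(f"network P(detect) = {network_detects(p):.3f}")  # → network P(detect) = 0.658
```

The exhaustive sum over outcomes is fine for a handful of stations; a real network-capability code would use a recursive or Poisson-binomial evaluation to stay tractable for large networks.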

    16. Quantum mechanics problems in observer's mathematics

      SciTech Connect (OSTI)

      Khots, Boris; Khots, Dmitriy

      2012-11-06

This work considers the ontology, guiding equation, Schrodinger's equation, relation to the Born Rule, and the conditional wave function of a subsystem in a setting of arithmetic, algebra and topology provided by Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of continuum, but locally coincide with the standard fields. Certain results and communications pertaining to solutions of these problems are provided. In particular, we prove the following theorems: Theorem I (Two-slit interference). Let Ψ₁ be a wave from slit 1, Ψ₂ a wave from slit 2, and Ψ = Ψ₁ + Ψ₂. Then the probability of Ψ being a wave equals 0.5. Theorem II (k-bodies solution). For Wₙ from the m-observer point of view with m > log₁₀((2 × 10^(2n) − 1)^(2k) + 1), the probability of the standard expression of the Hamiltonian variation is less than 1 and depends on n, m, k.

    17. Mathematical and Statistical Opportunities in Cyber Security (Technical Report)

      Office of Scientific and Technical Information (OSTI)

The role of mathematics in a complex system such as the Internet has yet to be deeply explored. In this paper, we summarize some of the important and pressing problems in cyber security from the viewpoint of open science environments. We start by posing the question 'What fundamental problems exist

    18. Conference on Non-linear Phenomena in Mathematical Physics: Dedicated...

      Office of Scientific and Technical Information (OSTI)

Institute, Toronto, Canada, September 18-20, 2008. Sponsors: Association for Women in Mathematics, Inc. and The Fields Institute.

    19. Internal combustion engines; Applied thermosciences

      SciTech Connect (OSTI)

      Ferguson, C.R.

      1985-01-01

Focusing on thermodynamic analysis - from the requisite first law to more sophisticated applications - and engine design, this book is an introduction to internal combustion engines and their mechanics. It covers the many types of internal combustion engines, including spark ignition, compression ignition, and stratified charge engines, and examines processes, keeping equations of state simple by assuming constant specific heats. Equations are limited to heat engines and later applied to combustion engines. Topics include realistic equations of state, stoichiometry, predictions of chemical equilibrium, engine performance criteria, and friction, which is discussed in terms of the hydrodynamic theory of lubrication and experimental methods such as dimensional analysis.

    20. Applications of Parallel Computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computers Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course...

    1. Computer hardware fault administration

      DOE Patents [OSTI]

      Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

      2010-09-14

      Computer hardware fault administration carried out in a parallel computer, where the parallel computer includes a plurality of compute nodes. The compute nodes are coupled for data communications by at least two independent data communications networks, where each data communications network includes data communications links connected to the compute nodes. Typical embodiments carry out hardware fault administration by identifying a location of a defective link in the first data communications network of the parallel computer and routing communications data around the defective link through the second data communications network of the parallel computer.
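The patented idea, identifying a defective link in the first network and routing communications around it through the second, independent network, can be sketched as follows. The two small adjacency maps and the breadth-first routing are illustrative stand-ins for the machine's actual interconnects, not the patent's implementation.

```python
from collections import deque

def route(adjacency, defective, src, dst):
    """Breadth-first route from src to dst that avoids defective links
    (links stored as frozensets so direction does not matter)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in adjacency.get(node, ()):
            if frozenset((node, nbr)) not in defective and nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None  # no route exists around the defective links

# Two independent networks over the same four compute nodes
net_a = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # one chain topology
net_b = {0: [2], 2: [0, 3], 3: [2, 1], 1: [3]}   # an independent chain
defective = {frozenset((1, 2))}                  # bad link in network A

# Try the first network; fall back to the second when no route survives
path = route(net_a, defective, 0, 3) or route(net_b, set(), 0, 3)
print(path)  # → [0, 2, 3]
```

With the (1, 2) link marked defective, no path through network A survives, so traffic falls back to network B, the fault-administration behavior the claim describes.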

    2. advanced simulation and computing

      National Nuclear Security Administration (NNSA)

      Each successive generation of computing system has provided greater computing power and energy efficiency.

      CTS-1 clusters will support NNSA's Life Extension Program and...

    3. Energy Aware Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Partnerships Shifter: User Defined Images Archive APEX Home R & D Energy Aware Computing Energy Aware Computing Dynamic Frequency Scaling One means to lower the energy ...

    4. Molecular Science Computing | EMSL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational and state-of-the-art experimental tools, providing a cross-disciplinary environment to further research. Additional Information Computing user policies Partners...

    5. Building America Expert Meeting: Recommendations for Applying...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Recommendations for Applying Water Heaters in Combination Space and Domestic Water Heating Systems Building America Expert Meeting: Recommendations for Applying Water Heaters in ...

    6. Applied Ventures LLC | Open Energy Information

      Open Energy Info (EERE)

      Applied Ventures LLC Name: Applied Ventures LLC Address: 3050 Bowers Avenue Place: Santa Clara, California Zip: 95054 Region: Southern CA Area Product: Venture capital. Number...

    7. Applied Materials Wind Turbine | Open Energy Information

      Open Energy Info (EERE)

      Wind Turbine Jump to: navigation, search Name Applied Materials Wind Turbine Facility Applied Materials Sector Wind energy Facility Type Community Wind Facility Status In Service...

    8. Applied Intellectual Capital AIC | Open Energy Information

      Open Energy Info (EERE)

      Intellectual Capital AIC Jump to: navigation, search Name: Applied Intellectual Capital (AIC) Place: California Zip: 94501-1010 Product: Applied Intellectual Capital (AIC) was...

    9. CNMS D Jun-Qiang Lu Computer Science and Mathematics Division

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

DISCOVERY SEMINAR SERIES Abstract The pursuit of spintronics ultimately depends on our ability to steer spin currents and detect or flip their polarization. ...

    10. Antaki, G.A. 22 NUCLEAR REACTOR TECHNOLOGY; 99 MATHEMATICS, COMPUTERS...

      Office of Scientific and Technical Information (OSTI)

      PIPES; DYNAMIC LOADS; ANALYTIC FUNCTIONS; ANALYTICAL SOLUTION; STRESSES; REGULATIONS; SEISMIC EFFECTS; STRESS ANALYSIS; EPRI; STANDARDS The paper addresses several analytical...

    11. Mathematical and computational modeling of the diffraction problems by discrete singularities method

      SciTech Connect (OSTI)

      Nesvit, K. V.

      2014-11-12

The main objective of this study is to reduce the boundary-value problems of scattering and diffraction of waves on plane-parallel structures to singular or hypersingular integral equations. For these cases we use the method of parametric representations of integral and pseudo-differential operators. Numerical results for the model scattering problems on periodic and boundary gratings, and also on gratings above a flat screen reflector, are presented in this paper.

    12. Alice Koniges

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      a PhD in Applied and Computational Mathematics at Princeton University, Alice Koniges ... Alice Koniges, Invited Career Seminar, UC Berkeley: Women in Mathematics, October 21, ...

    13. Dr. Thomas F. Russell | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      His scientific background is in applied and computational mathematics, particularly in ... His degrees are in mathematics, from Princeton University (A.B.) and the University of ...

    14. CASL-U-2015-0177-000 A Modified Moving Least

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      - Joint International Conference on Mathematics and Computation (M&C), Supercomputing ... Society for Industrial and Applied Mathematics, Philadelphia, PA, third edition, ...

    15. Computing and Computational Sciences Directorate - Information...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      cost-effective, state-of-the-art computing capabilities for research and development. ... communicates and manages strategy, policy and finance across the portfolio of IT assets. ...

    16. Applied Math PI Meet | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Applied Math PI Meet Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources ASCR Discovery Monthly News Roundup News Archives ASCR Program Documents ASCR Workshops and Conferences Workshops & Conferences Archive DOE Simulations Summit Scientific Grand Challenges Workshop Series SciDAC Conferences HPC Operations Review and Best

    17. DOE Applied Math Summit | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      DOE Applied Math Summit Advanced Scientific Computing Research (ASCR) ASCR Home About Research Facilities Science Highlights Benefits of ASCR Funding Opportunities Advanced Scientific Computing Advisory Committee (ASCAC) Community Resources ASCR Discovery Monthly News Roundup News Archives ASCR Program Documents ASCR Workshops and Conferences Workshops & Conferences Archive DOE Simulations Summit Scientific Grand Challenges Workshop Series SciDAC Conferences HPC Operations Review and Best

    18. Applied Math PI Meet Talks | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      ASCR Workshops and Conferences » Applied Math PI Meet Talks

    19. Extreme Scale Computing, Co-Design

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Information Science, Computing, Applied Math » Extreme Scale Computing, Co-design » Publications Publications Ramon Ravelo, Qi An, Timothy C. Germann, and Brad Lee Holian, "Large-scale molecular dynamics simulations of shock induced plasticity in tantalum single crystals," AIP Conference Proceedings 1426, 1263-1266 (2012). Frank J. Cherne, Guy Dimonte, and Timothy C. Germann, "Richtmyer-Meshkov instability examined with large-scale molecular dynamics simulations," AIP

    20. Argonne's Laboratory computing resource center : 2006 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

      2007-05-31

      Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. 
Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has framed a 'path forward' for additional computing resources.

    1. Computing for Finance

      ScienceCinema (OSTI)

      None

      2011-10-06

      The finance sector is one of the driving forces for the use of distributed or Grid computing for business purposes. The speakers will review the state of the art of high-performance computing in the financial sector, and provide insight into how different types of Grid computing, from local clusters to global networks, are being applied to financial applications. They will also describe the use of software and techniques from physics, such as Monte Carlo simulations, in the financial world. There will be four talks of 20 minutes each. The talk abstracts and speaker bios are listed below. This will be followed by a Q&A panel session with the speakers. From 19:00 onwards there will be a networking cocktail for audience and speakers. This is an EGEE / CERN openlab event organized in collaboration with the regional business network rezonance.ch. A webcast of the event will be made available for subsequent viewing, along with PowerPoint material presented by the speakers. Attendance is free and open to all. Registration is mandatory via www.rezonance.ch, including for CERN staff. 1. Overview of High Performance Computing in the Financial Industry. Michael Yoo, Managing Director, Head of the Technical Council, UBS. The presentation will describe the key business challenges driving the need for HPC solutions, describe the means by which those challenges are being addressed within UBS (such as GRID) as well as the limitations of some of these solutions, and assess some of the newer HPC technologies which may also play a role in the financial industry in the future. Speaker Bio: Michael originally joined the former Swiss Bank Corporation in 1994 in New York as a developer on a large data warehouse project. In 1996 he left SBC and took a role with Fidelity Investments in Boston. Unable to stay away for long, he returned to SBC in 1997 while working for Perot Systems in Singapore.
      Finally, in 1998 he formally returned to UBS in Stamford following the merger with SBC and has remained with UBS for the past 9 years. During his tenure at UBS, he has had a number of leadership roles within IT in development, support and architecture. In 2006 Michael relocated to Switzerland to take up his current role as head of the UBS IB Technical Council, responsible for the overall technology strategy and vision of the Investment Bank. One of Michael's key responsibilities is to manage the UBS High Performance Computing Research Lab, and he has been involved in a number of initiatives in the HPC space. 2. Grid in the Commercial World. Fred Gedling, Chief Technology Officer EMEA and Senior Vice President Global Services, DataSynapse. Grid computing gets mentions in the press for community programs, starting last decade with "SETI@home". Government, national and supranational initiatives in grid receive some press. One of the IT industry's best-kept secrets is the use of grid computing by commercial organizations, with spectacular results. Grid computing and its evolution into application virtualization are discussed, along with how this is key to the next-generation data center. Speaker Bio: Fred Gedling holds the joint roles of Chief Technology Officer for EMEA and Senior Vice President of Global Services at DataSynapse, a global provider of application virtualisation software. Based in London and working closely with organisations seeking to optimise their IT infrastructures, Fred offers unique insights into the technology of virtualisation as well as the methodology of establishing ROI and rapid deployment to the immediate advantage of the business. Fred has more than fifteen years' experience of enterprise middleware and high-performance infrastructures. Prior to DataSynapse he worked in high-performance CRM middleware and was the CTO EMEA for New Era of Networks (NEON) during the rapid growth of Enterprise Application Integration.
      His 25-year career in technology also includes management positions at Goldman Sachs and Stratus Computer. Fred holds a First Class BSc (Hons) degree in Physics with Astrophysics from the University of Leeds and had the privilege o
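      The talks above mention applying Monte Carlo techniques from physics to finance. As a minimal illustration (not drawn from any of the talks; the option parameters and function name are made up), the sketch below prices a European call option by simulating terminal stock prices under geometric Brownian motion:

```python
import math
import random

def mc_european_call(s0, strike, rate, vol, maturity, n_paths, seed=42):
    """Estimate a European call price by simulating terminal prices under
    geometric Brownian motion and discounting the mean payoff."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)                  # standard normal draw
        s_t = s0 * math.exp(drift + diffusion * z)
        total += max(s_t - strike, 0.0)          # call payoff at maturity
    return math.exp(-rate * maturity) * total / n_paths

price = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, 100_000)
```

With enough paths the estimate approaches the closed-form Black-Scholes value (about 10.45 for these parameters); the sampling error shrinks like one over the square root of the path count.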

    2. DOE Advanced Scientific Advisory Committee (ASCAC): Workforce...

      Office of Scientific and Technical Information (OSTI)

      Experts in the ASCR relevant Computing Sciences, which encompass a range of disciplines including Computer Science, Applied Mathematics, Statistics and domain Computational ...

    3. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

      SciTech Connect (OSTI)

      Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.

      2012-05-04

      The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems, and the somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
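      The verification study above compares numerical and analytical solutions at successive mesh refinements and estimates rates of convergence. That estimate can be sketched as follows (the error values are illustrative, not taken from the report):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed order of accuracy from errors on two meshes whose
    spacings differ by the given refinement ratio r:
        p = log(E_coarse / E_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Illustrative errors that drop by 4x per mesh halving (second order):
errors = [1.6e-2, 4.0e-3, 1.0e-3]
orders = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
```

For a second-order scheme, halving the mesh spacing should cut the error by roughly a factor of four, giving an observed order near 2.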

    4. Polymorphous computing fabric

      DOE Patents [OSTI]

      Wolinski, Christophe Czeslaw; Gokhale, Maya B.; McCabe, Kevin Peter

      2011-01-18

      Fabric-based computing systems and methods are disclosed. A fabric-based computing system can include a polymorphous computing fabric that can be customized on a per application basis and a host processor in communication with said polymorphous computing fabric. The polymorphous computing fabric includes a cellular architecture that can be highly parameterized to enable a customized synthesis of fabric instances for a variety of enhanced application performances thereof. A global memory concept can also be included that provides the host processor random access to all variables and instructions associated with the polymorphous computing fabric.

    5. HISTORY OF THE ENGINEERING PHYSICS AND MATHEMATICS DIVISION 1955-1993

      SciTech Connect (OSTI)

      Maskewitz, B.F.

      2001-09-14

      A review of division progress reports noting significant events and findings of the Applied Nuclear Physics, Neutron Physics, Engineering Physics, and then Engineering Physics and Mathematics divisions from 1955 to 1993 was prepared for use in developing a history of the Oak Ridge National Laboratory in celebration of its 50th year. The research resulted in an accumulation of historic material and photographs covering 38 years of effort, and the decision was made to publish a brief history of the division. The history begins with a detailed account of the founding of the Applied Nuclear Physics Division in 1955 and continues through the name change to the Neutron Physics Division in the late 1950s. The material thereafter is presented in decades--the sixties, seventies, and eighties--and ends as we enter the nineties.

    6. Fermilab | Science at Fermilab | Computing | High-performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Lattice QCD Farm at the Grid Computing Center at Fermilab. Lattice QCD Farm at the Grid Computing Center at Fermilab. Computing High-performance Computing A workstation computer can perform billions of multiplication and addition operations each second. High-performance parallel computing becomes necessary when computations become too large or too long to complete on a single such machine. In parallel computing, computations are divided up so that many computers can work on the same problem at

    7. Computational nuclear quantum many-body problem: The UNEDF project

      SciTech Connect (OSTI)

      Fann, George I [ORNL

      2013-01-01

      The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

    8. New Mathematical Method Reveals Where Genes Switch On or Off

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      New Mathematical Method Reveals Where Genes Switch On or Off New Mathematical Method Reveals Where Genes Switch On or Off "Compressed sensing" determines atomic-level energy potentials with accuracy approaching experimental measurement February 22, 2012 John Hules, JAHules@lbl.gov, +1 510 486 6008 Figure 1. Helix-turn-helix (HTH) proteins are the most widely distributed family of DNA-binding proteins, occurring in all biological kingdoms. This image shows a lambda repressor HTH

    9. Development of the Mathematics of Learning Curve Models for Evaluating

      Office of Scientific and Technical Information (OSTI)

      Small Modular Reactor Economics (Technical Report) | SciTech Connect Development of the Mathematics of Learning Curve Models for Evaluating Small Modular Reactor Economics Citation Details In-Document Search Title: Development of the Mathematics of Learning Curve Models for Evaluating Small Modular Reactor Economics This report documents the efforts to perform dynamic model validation on the Eastern Interconnection (EI) by modeling governor deadband. An on-peak EI dynamic model is modified
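      The record above concerns learning-curve models for reactor economics. A common starting point is Wright's learning curve, sketched below (a generic textbook model, not necessarily the one developed in the report; the cost figures and learning rate are illustrative):

```python
import math

def unit_cost(first_unit_cost, n, learning_rate):
    """Wright's learning-curve model: each doubling of cumulative
    production multiplies unit cost by the learning rate, i.e.
        C_n = C_1 * n ** log2(learning_rate)."""
    b = math.log(learning_rate) / math.log(2.0)
    return first_unit_cost * n ** b

c1 = 1000.0
c2 = unit_cost(c1, 2, 0.9)   # second unit: 90% of the first
c4 = unit_cost(c1, 4, 0.9)   # fourth unit: 81% of the first
```

A 90% learning rate thus implies a 10% unit-cost reduction at every doubling of cumulative output.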

    10. Next-Generation Wireless Instrumentation Integrated with Mathematical

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Modeling for Aluminum Production | Department of Energy Next-Generation Wireless Instrumentation Integrated with Mathematical Modeling for Aluminum Production Next-Generation Wireless Instrumentation Integrated with Mathematical Modeling for Aluminum Production Monitoring Electrolytic Cell Anode Current Increases Current and Energy Efficiency In 2011, five-and-a-half-million tons of aluminum were produced in the United States. Over two-million tons were produced in smelters, large

    11. Mathematical Models Shed New Light on Cancer Mutations

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Mathematical Models Shed New Light on Cancer Mutations Mathematical Models Shed New Light on Cancer Mutations Calculations Run at NERSC Pinpoint Rare Mutants More Quickly November 3, 2014 Contact: David Cameron, 617.432.0441, david_cameron@hms.harvard.edu cancermutations3 Heat map of the average magnitude of interaction energies projected onto a structural representation of SH2 domains (white) in complex with phosphopeptide (green). SH2 (Src Homology 2) is a protein domain found in many

    12. Synthetic Ecology of Microbes: Mathematical Models and Applications

      Office of Scientific and Technical Information (OSTI)

      (Journal Article) | SciTech Connect Synthetic Ecology of Microbes: Mathematical Models and Applications Citation Details In-Document Search Title: Synthetic Ecology of Microbes: Mathematical Models and Applications Authors: Zomorrodi, Ali R. ; Segrè, Daniel Publication Date: 2016-02-01 OSTI Identifier: 1251757 Grant/Contract Number: SC0012627 Type: Published Article Journal Name: Journal of Molecular Biology Additional Journal Information: Journal Volume: 428; Journal Issue: 5 PB; Related

    13. Computers for Learning

      Broader source: Energy.gov [DOE]

      Through Executive Order 12999, the Computers for Learning Program was established to provide Federal agencies a quick and easy system for donating excess and surplus computer equipment to schools...

    14. Cognitive Computing for Security.

      SciTech Connect (OSTI)

      Debenedictis, Erik; Rothganger, Fredrick; Aimone, James Bradley; Marinella, Matthew; Evans, Brian Robert; Warrender, Christina E.; Mickel, Patrick

      2015-12-01

      Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of hybrid of general purpose/ne uromorphic computer architecture, with an emphasis on potential implementation with memristors.

    15. Computers in Commercial Buildings

      U.S. Energy Information Administration (EIA) Indexed Site

      Government-owned buildings of all types had, on average, more than one computer per person (1,104 computers per thousand employees). They also had a fairly high ratio of...

    16. Apply for Beam Time | Advanced Photon Source

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      All About Proposals Users Home Apply for Beam Time Deadlines Proposal Types Concepts, Definitions, and Help My APS Portal My APS Portal Apply for Beam Time Next Proposal Deadline...

    17. How to Apply for the ENERGY STAR®

      Broader source: Energy.gov [DOE]

      Join us to learn about applying for ENERGY STAR Certification in Portfolio Manager. Understand the value of the ENERGY STAR certification, see the step-by-step process of applying, and gain tips to...

    18. Advanced Scientific Computing Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Advanced Scientific Computing Research Advanced Scientific Computing Research Discovering, developing, and deploying computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to the Department of Energy. Get Expertise Pieter Swart (505) 665 9437 Email Pat McCormick (505) 665-0201 Email Dave Higdon (505) 667-2091 Email Fulfilling the potential of emerging computing systems and architectures beyond today's tools and techniques to deliver

    19. Computational Structural Mechanics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC RESEARCH Computational Fluid Dynamics Computational Structural Mechanics Transportation Systems Modeling Computational Structural Mechanics Overview of CSM Computational structural mechanics is a well-established methodology for the design and analysis of many components and structures found in the transportation field. Modern finite-element models (FEMs) play a major role in these evaluations, and sophisticated software, such as the commercially available LS-DYNA® code, is

    20. Computing environment logbook

      DOE Patents [OSTI]

      Osbourn, Gordon C; Bouchard, Ann M

      2012-09-18

      A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
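      The patent abstract above describes logging events, searching the history, and undoing selected past events. A minimal sketch of that behavior, assuming each logged event carries its own inverse action (the class and method names are hypothetical, not from the patent):

```python
class Logbook:
    """Minimal event logbook: records events, searches the history,
    and can undo a selected past event via its stored inverse."""

    def __init__(self):
        self._history = []   # entries: {"event", "undo", "undone"}

    def log(self, description, undo):
        self._history.append({"event": description, "undo": undo, "undone": False})

    def search(self, term):
        """Return past events matching the term that are not yet undone."""
        return [e for e in self._history if term in e["event"] and not e["undone"]]

    def undo(self, entry):
        entry["undo"]()        # run the inverse action
        entry["undone"] = True

# Usage: log a state change together with its inverse, then undo it.
state = {"x": 0}
book = Logbook()
state["x"] = 5
book.log("set x to 5", lambda: state.update(x=0))
matches = book.search("set x")
book.undo(matches[0])
```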

    1. BNL ATLAS Grid Computing

      ScienceCinema (OSTI)

      Michael Ernst

      2010-01-08

      As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

    2. Computer_Vision

      Energy Science and Technology Software Center (OSTI)

      2002-10-04

      The Computer_Vision software performs object recognition using a novel multi-scale characterization and matching algorithm. To understand the multi-scale characterization and matching software, it is first necessary to understand some details of the Computer Vision (CV) Project. This project has focused on providing algorithms and software that provide an end-to-end toolset for image processing applications. At a high level, this end-to-end toolset focuses on 7 key steps. The first steps are geometric transformations. 1) Image Segmentation. This step essentially classifies pixels in the input image as either being of interest or not of interest. We have also used GENIE segmentation output for this Image Segmentation step. 2) Contour Extraction (patent submitted). This takes the output of Step 1 and extracts contours for the blobs consisting of pixels of interest. 3) Constrained Delaunay Triangulation. This is a well-known geometric transformation that creates triangles inside the contours. 4) Chordal Axis Transform (CAT). This patented geometric transformation takes the triangulation output from Step 3 and creates a concise and accurate structural representation of a contour. From the CAT, we create a linguistic string, with associated metrical information, that provides a detailed structural representation of a contour. 5) Normalization. This takes an attributed linguistic string output from Step 4 and balances it. This ensures that the linguistic representation accurately represents the major sections of the contour. Steps 6 and 7 are implemented by the multi-scale characterization and matching software. 6) Multi-scale Characterization. This takes as input the attributed linguistic string output from Normalization. Rules from a context-free grammar are applied in reverse to create a tree-like representation for each contour. For example, one of the grammar's rules is L -> (LL).
      When an (LL) is seen in a string, a parent node is created that points to the four child symbols '(', 'L', 'L', and ')'. Levels in the tree can then be thought of as coarser (towards the root) or finer (towards the leaves) representations of the same contours. 7) Multi-scale Matching. Having a multi-scale characterization allows us to compare objects at a coarser level before matching at finer levels of detail. Matching at a coarser level not only increases the speed of the matching process (fewer symbols are compared) but also increases accuracy, since small variations along contours do not significantly detract from two objects' similarity.
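      The reverse grammar application described in Step 6 can be sketched as repeatedly collapsing the rule's right-hand side; each pass yields the next-coarser level of the multi-scale representation (the input string below is illustrative):

```python
def coarsen(s):
    """Apply the rule L -> (LL) in reverse: every occurrence of '(LL)'
    collapses to a single parent symbol 'L'."""
    return s.replace("(LL)", "L")

# Build successively coarser levels until the string stops changing.
levels = ["((LL)(LL))"]
while True:
    nxt = coarsen(levels[-1])
    if nxt == levels[-1]:
        break
    levels.append(nxt)
# levels: ['((LL)(LL))', '(LL)', 'L']
```

Reading the list from the end back to the start walks the tree from its root (coarsest) down to the leaves (finest), which is what lets matching start coarse and refine only when needed.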

    3. Parallel computing in enterprise modeling.

      SciTech Connect (OSTI)

      Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

      2008-08-01

      This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is needed to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

    4. Scalable optical quantum computer

      SciTech Connect (OSTI)

      Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

      2014-12-31

      A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

    5. Workshop in computational molecular biology, April 15, 1991--April 14, 1994

      SciTech Connect (OSTI)

      Tavare, S.

      1995-04-12

      Funds from this award were used to support the Workshop in Computational Molecular Biology; the 1991 Symposium entitled Interface: Computing Science and Statistics, Seattle, Washington, April 21, 1991; the Workshop on Statistical Issues in Molecular Biology held at Stanford, California, August 8, 1993; and the Session on Population Genetics, part of the 56th Annual Meeting of the Institute of Mathematical Statistics, San Francisco, California, August 9, 1993.

    6. DOE Issues Funding Opportunity for Advanced Computational and Modeling Research for the Electric Power System

      Broader source: Energy.gov [DOE]

      The objective of this Funding Opportunity Announcement (FOA) is to leverage scientific advancements in mathematics and computation for application to power system models and software tools, with the long-term goal of enabling real-time protection and control based on wide-area sensor measurements.

    7. Derivation of an Applied Nonlinear Schroedinger Equation.

      SciTech Connect (OSTI)

      Pitts, Todd Alan; Laine, Mark Richard; Schwarz, Jens; Rambo, Patrick K.; Karelitz, David B.

      2015-01-01

      We derive from first principles a mathematical physics model useful for understanding nonlinear optical propagation (including filamentation). All assumptions necessary for the development are clearly explained. We include the Kerr effect, Raman scattering, and ionization (as well as linear and nonlinear shock, diffraction and dispersion). We explain the phenomenological sub-models and each assumption required to arrive at a complete and consistent theoretical description. The development includes the relationship between shock and ionization and demonstrates why inclusion of Drude model impedance effects alters the nature of the shock operator. Unclassified Unlimited Release
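      For orientation, the canonical nonlinear Schrödinger equation for an optical envelope A(z, t), keeping only diffraction, group-velocity dispersion, and the Kerr term, reads (this is a standard textbook form, not the report's full model, which adds Raman scattering, ionization, and shock):

```latex
\frac{\partial A}{\partial z}
  = \frac{i}{2 k_0}\,\nabla_{\perp}^{2} A
  - \frac{i k''}{2}\,\frac{\partial^{2} A}{\partial t^{2}}
  + i k_0 n_2 \lvert A \rvert^{2} A ,
```

      where \(k_0\) is the carrier wavenumber, \(k''\) the group-velocity-dispersion coefficient, and \(n_2\) the Kerr (intensity-dependent index) coefficient. The balance between the diffraction term and the self-focusing Kerr term is what drives filamentation.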

    8. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... - Department of Mathematics, Massachusetts Institute of Technology (MIT) Vogel, Curtis (Curtis Vogel) - Department of Mathematical Sciences, Montana State University Vogel, ...

    9. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      Department of Mathematics and Statistics, Smith College Haasdonk, Bernard (Bernard ... Department of Mathematics and Statistics, Smith College Hennig, Christian (Christian ...

    10. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

      SciTech Connect (OSTI)

      Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

      2011-03-18

      As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

    11. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

      SciTech Connect (OSTI)

      Izzuddin, Nur; Sunarsih,; Priyanto, Agoes

      2015-05-15

      As a vessel operates in the open seas, a marine diesel engine simulator whose engine rotation is controlled and transmitted through the propeller shaft offers a new methodology for self-propulsion tests that track fuel savings in real time. Considering this circumstance, this paper presents a real-time marine diesel engine simulator system to track the real performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate fuel rate, engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The input and output form a real-time control system of fuel saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm waters were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate fuel saving by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will be beneficial for users analyzing different vessel speeds to obtain better characteristics and hence optimize the fuel saving rate.
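      The engine-propeller matching underlying such a simulator is often first approximated by the cubic propeller law, sketched below (a generic rule of thumb for fixed-pitch propellers, not the paper's actual model; the rated values are made up):

```python
def propeller_law_power(rated_power_kw, rated_rpm, rpm):
    """Cubic propeller law: shaft power absorbed by a fixed-pitch
    propeller scales roughly with the cube of shaft speed."""
    return rated_power_kw * (rpm / rated_rpm) ** 3

# Running at 80% of rated shaft speed needs only ~51% of rated power,
# which is why modest speed reductions yield large fuel savings.
p = propeller_law_power(10_000.0, 100.0, 80.0)
```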

    12. Sandia Energy - High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing Home Energy Research Advanced Scientific Computing Research (ASCR) High Performance Computing ...

    13. A Hygrothermal Risk Analysis Applied to Residential Unvented Attics

      SciTech Connect (OSTI)

      Pallin, Simon B; Kehrer, Manfred

      2013-01-01

      A residential building constructed with an unvented attic is a common roof assembly in the United States. The expected hygrothermal performance and service life of the roof are difficult to estimate due to a number of varying parameters. Typical parameters expected to vary are the climate, direction, and slope of the roof, as well as the radiation properties of the surface material. Further influential parameters are indoor moisture excess, air leakage through the attic floor, and leakage from the air-handling unit and ventilation ducts. In addition, the choice of building materials, such as the insulation material and closed- or open-cell spray polyurethane foam, will influence the future performance of the roof. Development of a simulation model of the roof assembly enables a risk and sensitivity analysis in which the most important varying parameters for hygrothermal performance can be determined. The model is designed to perform probabilistic simulations using mathematical and hygrothermal calculation tools. The varying input parameters can be chosen from existing measurements, simulations, or standards. An analysis is applied to determine the risk of consequences such as mold growth, rot, or increased energy demand of the HVAC unit. Furthermore, the future performance of the roof can be simulated in different climates to facilitate the design of an efficient and reliable roof construction with the most suitable technical solution and the most appropriate building materials for a given climate.
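The probabilistic workflow described above (sample varying inputs, run the hygrothermal model, assess the risk of consequences such as mold growth) can be sketched in miniature. The surrogate response model, parameter ranges, and threshold here are purely illustrative assumptions, not the authors' model:

```python
import random

# Minimal Monte Carlo sketch of the probabilistic approach described above:
# sample varying inputs, run a (here, trivially simplified) hygrothermal
# response model, and estimate the risk of a consequence such as mold growth.
# The response model and threshold below are illustrative, not from the paper.
random.seed(1)

def mold_risk(trials=10_000):
    failures = 0
    for _ in range(trials):
        indoor_moisture = random.uniform(1.0, 4.0)   # g/m^3 moisture excess
        air_leakage = random.uniform(0.0, 0.5)       # normalized leakage rate
        # Hypothetical response: a surrogate "wetness index" of the attic.
        wetness = 0.2 * indoor_moisture + 1.5 * air_leakage
        if wetness > 0.9:  # illustrative mold-growth threshold
            failures += 1
    return failures / trials

risk = mold_risk()
```

A real study would replace the one-line surrogate with a transient hygrothermal simulation and would also vary climate, roof orientation, and material properties, as the abstract notes.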

    14. Modules and methods for all photonic computing

      DOE Patents [OSTI]

      Schultz, David R.; Ma, Chao Hung

      2001-01-01

      A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
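A software analogue of the patent's steps (encode input data, recirculate it through one iterative update for a predetermined number of iterations, then collect the result) is a fixed-iteration relaxation solver. The sketch below uses Jacobi relaxation for the 1D Laplace equation as a stand-in for the optical element's iterative algorithm; it illustrates the recirculation concept only, not the optical implementation:

```python
# Software analogue of the patent's scheme: input data is "encoded" as an
# initial field, then passed repeatedly through one iterative update (here,
# Jacobi relaxation for the 1D Laplace equation) a fixed number of times.
def jacobi_laplace_1d(boundary_left, boundary_right, n_interior, iterations):
    # "Encode" the input data: an initial guess of zeros between fixed boundaries.
    u = [boundary_left] + [0.0] * n_interior + [boundary_right]
    for _ in range(iterations):  # the predetermined number of recirculations
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2.0
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u  # "collect" the iterative solution

# The steady-state solution is linear between the boundary values.
u = jacobi_laplace_1d(0.0, 1.0, 7, iterations=500)
```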

    15. COMPUTATIONAL SCIENCE CENTER

      SciTech Connect (OSTI)

      DAVENPORT, J.

      2006-11-01

      Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory hosts the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers with 12,288 processors each. There are two at Brookhaven: one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing, and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization.
The CSC was formed to bring together researchers in these areas and to provide a focal point for the development of computational expertise at the Laboratory. These efforts will connect to and support the Department of Energy's long range plans to provide Leadership class computing to researchers throughout the Nation. Recruitment for six new positions at Stony Brook to strengthen its computational science programs is underway. We expect some of these to be held jointly with BNL.

    16. Applied Field Research Initiative Deep Vadose Zone

      Office of Environmental Management (EM)

      Applied Field Research Initiative Deep Vadose Zone Located on the Hanford Site in Richland, Washington, the Deep Vadose Zone Applied Field Research Initiative (DVZ AFRI) was established to protect water resources by addressing the challenge of preventing contamination in the deep vadose zone from reaching groundwater. Led by the Pacific Northwest National Laboratory, the Initiative is a collaborative effort that leverages Department of Energy (DOE) investments in basic science and applied

    17. Apply for Your First NERSC Allocation

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Apply for Your First NERSC Allocation: Initial steps needed to apply for your first NERSC allocation. All work done at NERSC must be within the DOE Office of Science mission. See the mission descriptions for each office at Allocations Overview and Eligibility. Prospective Principal Investigators without a NERSC login need to fill out two forms: the online ERCAP Access Request Form. If you wish to designate another person to fill out the request form you may

    18. Liquid Cooling v. Air Cooling Evaluation in the Maui High-Performance Computing Center

      Broader source: Energy.gov [DOE]

      Study evaluates the energy efficiency of a new, liquid-cooled computing system applied in a retrofit project compared to the previously used air-cooled system.

    19. NERSC Computer Security

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      NERSC Computer Security: NERSC computer security efforts are aimed at protecting NERSC systems and its users' intellectual property from unauthorized access or modification. Among NERSC's security goals are: 1. To protect NERSC systems from unauthorized access. 2. To prevent the interruption of services to its users. 3. To prevent misuse or abuse of NERSC resources. Security Incidents: If you think there has been a computer security incident, you should contact NERSC Security as soon as

    20. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Edison Electrifies Scientific Computing: NERSC Flips Switch on New Flagship Supercomputer. January 31, 2014. Contact: Margie Wylie, mwylie@lbl.gov, +1 510 486 7421. The National Energy Research Scientific Computing (NERSC) Center recently accepted "Edison," a new flagship supercomputer designed for scientific productivity. Named in honor of American inventor Thomas Alva Edison, the Cray XC30 will be dedicated in a ceremony held at the Department of

    1. Computational Earth Science

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Astrophysics Consortium 3 - Supernovae, Gamma-Ray Bursts and Nucleosynthesis (Technical Report): Final project report for UCSC's participation in the Computational Astrophysics Consortium - Supernovae, Gamma-Ray Bursts and Nucleosynthesis. The report of the entire Consortium is appended as an appendix.

    2. Computer Architecture Lab

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The goal of the Computer Architecture Laboratory (CAL) is to engage in research and development into energy-efficient and effective processor and memory architectures for DOE's Exascale program. CAL coordinates hardware architecture R&D activities across the DOE. CAL is a joint NNSA/SC activity involving Sandia National Laboratories (CAL-Sandia) and

    3. Applied Materials Inc AMAT | Open Energy Information

      Open Energy Info (EERE)

      Manufacturer of equipment used in solar (silicon, thin-film, BIPV), semiconductor, and LCD markets. References: Applied Materials Inc (AMAT).

    4. Applied Energy Management | Open Energy Information

      Open Energy Info (EERE)

      Name: Applied Energy Management. Place: Huntersville, North Carolina. Zip: 28078. Sector: Efficiency, Renewable Energy. Product: North...

    5. Applied Quantum Technology AQT | Open Energy Information

      Open Energy Info (EERE)

      Name: Applied Quantum Technology (AQT). Place: Santa Clara, California. Zip: 95054. Product: California-based manufacturer of CIGS...

    6. Mathematical modeling of mass transfer during centrifugal filtration of polydisperse suspensions

      SciTech Connect (OSTI)

      V.F. Pozhidaev; Y.B. Rubinshtein; G.Y. Golberg; S.A. Osadchii

      2009-07-15

      A mass-transfer equation is suggested on the basis of a model; its solution for given boundary conditions makes it possible to derive, in analytical form, a relationship between the extraction of the solid phase of a suspension into the centrifuge effluent and the fineness of the particles. This is of particular importance in connection with a new trend in the use of filtering centrifuges: concentration of coal slurries by extraction into the centrifuge effluent of the finest particles, whose ash content is substantially higher than that of the coarser classes. Results are presented for production studies under conditions at an active establishment (the Neryungrinskaya Enrichment Factory); these results confirmed the adequacy of the proposed mathematical model, with agreement between computed and experimental data within the limits of experimental error (no more than 3%). The model can be used to predict the results of suspension separation by centrifugal filtration.

    7. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... for use in Advanced Strategic Computing codes Theory and modeling of dense plasmas in ICF and astrophysics environments Theory and modeling of astrophysics in support of NASA ...

    8. Personal Computer Inventory System

      Energy Science and Technology Software Center (OSTI)

      1993-10-04

      PCIS is a database software system that is used to maintain a personal computer hardware and software inventory, track transfers of hardware and software, and provide reports.

    9. Nuclear Computational Low Energy Initiative (NUCLEI) | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Nuclear Computational Low Energy Initiative (NUCLEI) FWP/Project Description: We propose to advance large-scale nuclear physics computations to dramatically increase our understanding of nuclear structure and reactions and the properties of nucleonic matter. Quantum Monte Carlo, Configuration Interaction, Coupled Cluster, and Density Functional codes have been developed and scaled efficiently to the largest computers available, and we propose to work closely with applied mathematicians and

    10. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership

      Office of Scientific and Technical Information (OSTI)

      Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report (Technical Report): The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) Project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of

    11. 60 Years of Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      60 Years of Computing

    12. Nuclear Facilities and Applied Technologies at Sandia

      SciTech Connect (OSTI)

      Wheeler, Dave; Kaiser, Krista; Martin, Lonnie; Hanson, Don; Harms, Gary; Quirk, Tom

      2014-11-28

      The Nuclear Facilities and Applied Technologies organization at Sandia National Laboratories Technical Area Five (TA-V) is the leader in advancing nuclear technologies through applied radiation science and unique nuclear environments. This video describes the organization's capabilities, facilities, and culture.

    13. ELECTRONIC DIGITAL COMPUTER

      DOE Patents [OSTI]

      Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.

      1957-10-01

      The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
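The "von Seidel" successive-approximation scheme the patent relies on is now usually called the Gauss-Seidel method. A minimal software sketch of that method (a modern analogue for illustration, not the patent's electronic implementation):

```python
# Gauss-Seidel iteration for A x = b, a software analogue of the
# successive-approximation scheme described in the patent abstract.
# Converges for diagonally dominant systems, among others.
def gauss_seidel(A, b, iterations=100, tol=1e-10):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        max_delta = 0.0
        for i in range(n):
            # Use the newest available values of x (the hallmark of Gauss-Seidel).
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            max_delta = max(max_delta, abs(new_xi - x[i]))
            x[i] = new_xi
        if max_delta < tol:
            break
    return x

# Example: a diagonally dominant 2x2 system (4x + y = 9, 2x + 3y = 13).
A = [[4.0, 1.0], [2.0, 3.0]]
b = [9.0, 13.0]
x = gauss_seidel(A, b)  # converges to x = 1.4, y = 3.4
```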

    14. Compute Processor Allocator

      Energy Science and Technology Software Center (OSTI)

      2004-03-01

      The Compute Processor Allocator (CPA) provides an efficient and reliable mechanism for managing and allotting processors in a massively parallel (MP) computer. It maintains information in a database on the health, configuration, and allocation of each processor. This persistent information is factored into each allocation decision. The CPA runs in a distributed fashion to avoid a single point of failure.
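The allocation pattern described (persistent per-processor health and allocation state consulted on every decision) can be sketched as follows; all class and field names are hypothetical, not the actual CPA code:

```python
# Toy sketch of an allocator that, like the CPA described above, factors
# persistent per-processor state (health, current allocation) into each
# allocation decision. All names and fields here are hypothetical.
class ProcessorAllocator:
    def __init__(self, node_ids):
        # Stand-in for the persistent database of processor state.
        self.state = {n: {"healthy": True, "job": None} for n in node_ids}

    def mark_unhealthy(self, node_id):
        self.state[node_id]["healthy"] = False

    def allocate(self, job_id, count):
        free = [n for n, s in self.state.items()
                if s["healthy"] and s["job"] is None]
        if len(free) < count:
            return None  # not enough healthy, idle processors
        chosen = free[:count]
        for n in chosen:
            self.state[n]["job"] = job_id
        return chosen

    def release(self, job_id):
        for s in self.state.values():
            if s["job"] == job_id:
                s["job"] = None

alloc = ProcessorAllocator(range(8))
alloc.mark_unhealthy(0)
nodes = alloc.allocate("job-42", 4)  # skips the unhealthy node 0
```

The real CPA additionally runs distributed to avoid a single point of failure, which this single-process sketch does not attempt to show.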

    15. Indirection and computer security.

      SciTech Connect (OSTI)

      Berg, Michael J.

      2011-09-01

      The discipline of computer science is built on indirection. David Wheeler famously said, 'All problems in computer science can be solved by another layer of indirection. But that usually will create another problem'. We propose that every computer security vulnerability is yet another problem created by the indirections in system designs and that focusing on the indirections involved is a better way to design, evaluate, and compare security solutions. We are not proposing that indirection be avoided when solving problems, but that understanding the relationships between indirections and vulnerabilities is key to securing computer systems. Using this perspective, we analyze common vulnerabilities that plague our computer systems, consider the effectiveness of currently available security solutions, and propose several new security solutions.

    16. 2015 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

      SciTech Connect (OSTI)

      Runnels, Scott Robert; Caldwell, Wendy; Brown, Barton Jed; Pederson, Clark; Brown, Justin; Burrill, Daniel; Feinblum, David; Hyde, David; Levick, Nathan; Lyngaas, Isaac; Maeng, Brad; Reed, Richard LeRoy; Sarno-Smith, Lois; Shohet, Gil; Skarda, Jinhie; Stevens, Josey; Zeppetello, Lucas; Grossman-Ponemon, Benjamin; Bottini, Joseph Larkin; Loudon, Tyson Shane; VanGessel, Francis Gilbert; Nagaraj, Sriram; Price, Jacob

      2015-10-15

      The two primary purposes of LANL's Computational Physics Student Summer Workshop are (1) to educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) to entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high temperature plasma physics, radiation transport, and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, more structured foundations for LANL interaction with universities in computational physics are needed; historically, interactions have relied heavily on individuals' personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL's involvement in it. This report includes both the background for the program and the reports from the students.

    17. Pi in Applied Optics | GE Global Research

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      The sPI Cam: Inside the Applied Optics Lab II. The sPI Cam visits the Applied Optics Lab to see how Mark Meyers, a physicist and optical engineer at GE Global Research, uses Pi.

    18. Argonne's Laboratory Computing Resource Center : 2005 annual report.

      SciTech Connect (OSTI)

      Bair, R. B.; Coghlan, S. C.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

      2007-06-30

      Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. 
Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to develop comprehensive scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific reach and performance of Argonne's computational applications. Furthermore, recognizing that Jazz is fully subscribed, with considerable unmet demand, the LCRC has begun developing a 'path forward' plan for additional computing resources.

    19. Computers as tools

      SciTech Connect (OSTI)

      Eriksson, I.V.

      1994-12-31

      The following message was recently posted on a bulletin board and clearly shows the relevance of the conference theme: "The computer and digital networks seem poised to change whole regions of human activity -- how we record knowledge, communicate, learn, work, understand ourselves and the world. What's the best framework for understanding this digitalization, or virtualization, of seemingly everything? ... Clearly, symbolic tools like the alphabet, book, and mechanical clock have changed some of our most fundamental notions -- self, identity, mind, nature, time, space. Can we say what the computer, a purely symbolic 'machine,' is doing to our thinking in these areas? Or is it too early to say, given how much more powerful and less expensive the technology seems destined to become in the next few decades?" (Verity, 1994). Computers certainly affect our lives and way of thinking, but what have computers to do with ethics? A narrow approach would be that, on the one hand, people can and do abuse computer systems and, on the other hand, people can be abused by them. Well-known examples of the former are computer crimes such as the theft of money, services, and information. The latter can be exemplified by violation of privacy, health hazards, and computer monitoring. Broadening the concept from computers to information systems (ISs) and information technology (IT) gives a wider perspective. Computers are just the hardware part of information systems, which also include software, people, and data. Information technology is the concept preferred today; it extends to communication, which is an essential part of information processing. Now let us repeat the question: What has IT to do with ethics? Verity mentioned changes in "how we record knowledge, communicate, learn, work, understand ourselves and the world."

    20. Innovative mathematical modeling in environmental remediation

      SciTech Connect (OSTI)

      Yeh, Gour T.; Gwo, Jin Ping; Siegel, Malcolm D.; Li, Ming-Hsu; Fang, Yilin; Zhang, Fan; Luo, Wensui; Yabusaki, Steven B.

      2013-05-01

      There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been used mainly in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model, with example applications to environmental remediation problems. Theoretical bases are described in sufficient detail. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration; it showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. 
The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation.

    1. Overview of the NMSEA applied research program

      SciTech Connect (OSTI)

      Stickney, B.; Wilson, A.

      1980-01-01

      Recently the NMSEA has seen the need to augment its other informational programs with a program of in-house applied research. The reasoning behind this move is presented here, along with an accounting of past research activities.

    2. SAGE, Summer of Applied Geophysical Experience

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SAGE, the Summer of Applied Geophysical Experience: a National Science Foundation Research Experiences for Undergraduates program. Contacts: Institute Director Reinhard Friedel (Los Alamos); SAGE Co-Director W. Scott Baldridge (Los Alamos); SAGE Co-Director Larry Braile (Purdue University); Professional Staff Assistant Georgia Sanchez, (505) 665-0855. U.S. undergraduates

    3. How to Apply | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      How to Apply: the online application is available at www.zintellect.com/Posting/Details/1997. Application deadline: May 20, 2016. Familiarize yourself with the benefits, obligations, eligibility requirements, and evaluation criteria to determine whether your education and professional goals are well aligned with the EERE Postdoctoral Research Awards. Read the Evaluation Criteria that will be used to

    4. Apply to the Cyclotron Institute REU Program

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Apply Now Applying for the 2016 NSF-REU Nuclear Physics and Nuclear Chemistry Program at the Cyclotron Institute (APPLICATION DEADLINE IS FRIDAY, FEBRUARY 5th, 2016) Eligibility: Applicants must be US citizens or have permanent resident status. Applicants must have undergraduate status at the time of the program. (Students planning to receive a degree by May 2016 are not eligible). Applicants must have completed an introductory physics/chemistry course and have completed or be enrolled in an

    5. 2011 Computation Directorate Annual Report

      SciTech Connect (OSTI)

      Crawford, D L

      2012-04-11

      From its founding in 1952 until today, Lawrence Livermore National Laboratory (LLNL) has made significant strategic investments to develop high performance computing (HPC) and its application to national security and basic science. Now, 60 years later, the Computation Directorate and its myriad resources and capabilities have become a key enabler for LLNL programs and an integral part of the effort to support our nation's nuclear deterrent and, more broadly, national security. In addition, the technological innovation HPC makes possible is seen as vital to the nation's economic vitality. LLNL, along with other national laboratories, is working to make supercomputing capabilities and expertise available to industry to boost the nation's global competitiveness. LLNL is on the brink of an exciting milestone with the 2012 deployment of Sequoia, the National Nuclear Security Administration's (NNSA's) 20-petaFLOP/s resource that will apply uncertainty quantification to weapons science. Sequoia will bring LLNL's total computing power to more than 23 petaFLOP/s, all brought to bear on basic science and national security needs. The computing systems at LLNL provide game-changing capabilities. Sequoia and other next-generation platforms will enable predictive simulation in the coming decade and leverage industry trends, such as massively parallel and multicore processors, to run petascale applications. Efficient petascale computing necessitates refining accuracy in materials property data, improving models for known physical processes, identifying and then modeling missing physics, quantifying uncertainty, and enhancing the performance of complex models and algorithms in macroscale simulation codes. Nearly 15 years ago, NNSA's Accelerated Strategic Computing Initiative (ASCI), now called the Advanced Simulation and Computing (ASC) Program, was the critical element needed to shift from test-based confidence to science-based confidence. 
Specifically, ASCI/ASC accelerated the development of simulation capabilities necessary to ensure confidence in the nuclear stockpile-far exceeding what might have been achieved in the absence of a focused initiative. While stockpile stewardship research pushed LLNL scientists to develop new computer codes, better simulation methods, and improved visualization technologies, this work also stimulated the exploration of HPC applications beyond the standard sponsor base. As LLNL advances to a petascale platform and pursues exascale computing (1,000 times faster than Sequoia), ASC will be paramount to achieving predictive simulation and uncertainty quantification. Predictive simulation and quantifying the uncertainty of numerical predictions where little-to-no data exists demands exascale computing and represents an expanding area of scientific research important not only to nuclear weapons, but to nuclear attribution, nuclear reactor design, and understanding global climate issues, among other fields. Aside from these lofty goals and challenges, computing at LLNL is anything but 'business as usual.' International competition in supercomputing is nothing new, but the HPC community is now operating in an expanded, more aggressive climate of global competitiveness. More countries understand how science and technology research and development are inextricably linked to economic prosperity, and they are aggressively pursuing ways to integrate HPC technologies into their native industrial and consumer products. In the interest of the nation's economic security and the science and technology that underpins it, LLNL is expanding its portfolio and forging new collaborations. We must ensure that HPC remains an asymmetric engine of innovation for the Laboratory and for the U.S. and, in doing so, protect our research and development dynamism and the prosperity it makes possible. One untapped area of opportunity LLNL is pursuing is to help U.S. 
industry understand how supercomputing can benefit their business. Industrial investment in HPC applications has historically been limited by the prohibitive cost of entry, the inaccessibility of software to run the powerful systems, and the years it takes to grow the expertise to develop codes and run them in an optimal way. LLNL is helping industry better compete in the global marketplace by providing access to some of the world's most powerful computing systems, the tools to run them, and the experts who are adept at using them. Our scientists are collaborating side by side with industrial partners to develop solutions to some of industry's toughest problems. The goal of the Livermore Valley Open Campus High Performance Computing Innovation Center is to allow American industry the opportunity to harness the power of supercomputing by leveraging the scientific and computational expertise at LLNL in order to gain a competitive advantage in the global economy.

    6. Present and Future Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Important for DOE Energy Frontier Mission 2 * TH HEP is new ... & PDSF (studies based on usage for end of Sep 2012 - Nov ... framework (Sherpa), and a library for the computation of ...

    7. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne Leadership Computing Facility Annual Report 2012. Contents: Director's Message; About ALCF; Introducing Mira

    8. Cloud computing security.

      SciTech Connect (OSTI)

      Shin, Dongwan; Claycomb, William R.; Urias, Vincent E.

      2010-10-01

      Cloud computing is a paradigm rapidly being embraced by government and industry as a solution for cost-savings, scalability, and collaboration. While a multitude of applications and services are available commercially for cloud-based solutions, research in this area has yet to address the full spectrum of potential challenges facing cloud computing. This tutorial aims to provide researchers with a fundamental understanding of cloud computing, with the goals of identifying a broad range of potential research topics, and inspiring a new surge in research to address current issues. We will also discuss real implementations of research-oriented cloud computing systems for both academia and government, including configuration options, hardware issues, challenges, and solutions.

    9. Edison Electrifies Scientific Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... Deployment of Edison was made possible in part by funding from DOE's Office of Science and the DARPA High Productivity Computing Systems program. DOE's Office of Science is the ...

    10. Computing | Department of Energy

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Computing Fun fact: Most systems require air conditioning or chilled water to cool super powerful supercomputers, but the Olympus supercomputer at Pacific Northwest National Laboratory is cooled by the location's 65 degree groundwater. Traditional cooling systems could cost up to $61,000 in electricity each year, but this more efficient setup uses 70 percent less energy. | Photo courtesy of PNNL.

    11. Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Argonne National Laboratory | 9700 South Cass Avenue | Argonne, IL 60439 | www.anl.gov | September 2013 alcf_keyfacts_fs_0913 Key facts about the Argonne Leadership Computing Facility User support and services Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. Catalysts are computational scientists with domain expertise who work directly with project principal investigators to maximize discovery and reduce time-to-solution.

    12. New TRACC Cluster Computer

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      TRACC Cluster Computer With the addition of a new cluster called Zephyr that was made operational in September of this year (2012), TRACC now offers two clusters to choose from: Zephyr and our original cluster, which has now been named Phoenix. Zephyr was acquired from Atipa Technologies, and it is a 92-node system, with each node having two 16-core, 2.3 GHz AMD processors and 32 GB of memory. See also Computing Resources.

    13. Computational Physics and Methods

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      2 Computational Physics and Methods Performing innovative simulations of physics phenomena on tomorrow's scientific computing platforms Growth and emissivity of young galaxy hosting a supermassive black hole as calculated in cosmological code ENZO and post-processed with radiative transfer code AURORA. image showing detailed turbulence simulation, Rayleigh-Taylor Turbulence imaging: the largest turbulence simulations to date Advanced multi-scale modeling Turbulence datasets Density iso-surfaces

    14. Advanced Simulation and Computing

      National Nuclear Security Administration (NNSA)

      NA-ASC-117R-09-Vol.1-Rev.0 Advanced Simulation and Computing PROGRAM PLAN FY09 October 2008 ASC Focal Point Robert Meisner, Director DOE/NNSA NA-121.2 202-586-0908 Program Plan Focal Point for NA-121.2 Njema Frazier DOE/NNSA NA-121.2 202-586-5789 A Publication of the Office of Advanced Simulation & Computing, NNSA Defense Programs. Contents: Executive Summary; I. Introduction

    15. Compute Reservation Request Form

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Compute Reservation Request Form Compute Reservation Request Form Users can request a scheduled reservation of machine resources if their jobs have special needs that cannot be accommodated through the regular batch system. A reservation brings some portion of the machine to a specific user or project for an agreed upon duration. Typically this is used for interactive debugging at scale or real time processing linked to some experiment or event. It is not intended to be used to guarantee fast

    16. Effects of Relativity Lead to"Warp Speed" Computations

      SciTech Connect (OSTI)

      Vay, J.-L.

      2007-11-01

      A scientist at Lawrence Berkeley National Laboratory has discovered that a previously unnoticed consequence of Einstein's special theory of relativity can lead to speedup of computer calculations by orders of magnitude when applied to the computer modeling of a certain class of physical systems. This new finding offers the possibility of tackling some problems in a much shorter time and with far more precision than was possible before, as well as studying some configurations in every detail for the first time. The basis of Einstein's theory is the principle of relativity, which states that the laws of physics are the same for all observers, whether the 'observer' is a turtle 'racing' with a rabbit, or a beam of particles moving at near light speed. From the invariance of the laws of physics, one may be tempted to infer that the complexity of a system is independent of the motion of the observer, and consequently, a computer simulation will require the same number of mathematical operations, independently of the reference frame that is used for the calculation. Length contraction and time dilation are well known consequences of the special theory of relativity which lead to very counterintuitive effects. An alien observing human activity through a telescope in a spaceship traveling in the vicinity of the earth near the speed of light would see everything flattened in the direction of propagation of its spaceship (for him, the earth would have the shape of a pancake), while all motions on earth would appear extremely slow, slowed almost to a standstill. Conversely, a space scientist observing the alien through a telescope based on earth would see a flattened alien, slowed almost to a standstill, in a flattened spaceship. 
Meanwhile, an astronaut sitting in a spaceship moving at some lower velocity than the alien spaceship with regard to earth might see both the alien spaceship and the earth flattened in the same proportion and the motion unfolding in each of them at the same speed. Let us now assume that each protagonist (the alien, the space scientist and the astronaut) is to run a computer simulation describing the motion of all of them in a single calculation. In order to model a physical system on a computer, scientists often divide space and time into small chunks. Since the computer must calculate some things for each chunk, having a large system containing numerous small chunks translates to long calculations requiring many computational steps on supercomputers. Let us assume that each protagonist of our intergalactic story uses the space and time slicing as described and chooses to perform the calculation in its own frame of reference. For the alien and the space scientist, the slicing of space and time results in an exceedingly large number of chunks, due to the wide disparity of spatial and time scales needed to describe both their own environment and motion together with the other extremely flattened environment and slowed motion. Since the disparity of scales is reduced for the astronaut, who is traveling at an intermediate velocity, the number of computer operations needed to complete the calculation in his frame of reference will be significantly lower, possibly by many orders of magnitude. Analogously, the new discovery at Lawrence Berkeley National Laboratory shows that there exists a frame of reference minimizing the number of computational operations needed for studying the interaction of beams of particles or light (lasers) interacting at, or near, light speed with other particles or with surrounding structures. 
Speedups ranging from ten to a million times or more are predicted for the modeling of beams interacting with electron clouds, such as those in the upcoming Large Hadron Collider 'atom smasher' accelerator at CERN (Switzerland), and in free electron lasers and tabletop laser wakefield accelerators. The discovery has surprised many physicists and was received initially with much skepticism. It sounded too much like a 'free lunch'. Yet, the demonstration of a speedup of a stunning one thousand times in a te
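The frame-of-reference argument above can be illustrated with a toy cost model. The cost function and numbers below are assumptions for illustration only, not the published analysis; one term stands for each protagonist's contracted or dilated scales, and the sketch merely shows that an intermediate frame minimizes the total number of chunks:

```python
# Toy model: simulating two systems whose relative Lorentz factor is gamma_rel.
# In a frame boosted by gamma_frame, one system's scales shrink roughly like
# gamma_rel / gamma_frame while the other's grow like gamma_frame, so a crude
# cost (number of space-time chunks) is the sum of the squared scale ratios.
def toy_cost(gamma_frame, gamma_rel):
    return (gamma_rel / gamma_frame) ** 2 + gamma_frame ** 2

gamma_rel = 1000.0                       # hypothetical relative Lorentz factor
frames = [1.1 ** k for k in range(150)]  # candidate frame boosts from 1 upward
best = min(frames, key=lambda g: toy_cost(g, gamma_rel))
speedup = toy_cost(1.0, gamma_rel) / toy_cost(best, gamma_rel)
# The minimum sits near sqrt(gamma_rel) -- an intermediate frame, like the
# astronaut's -- and the saving over the lab frame grows with gamma_rel.
```

In this toy model the optimal boost is close to the square root of the relative Lorentz factor and the cost saving over the lab frame is large, echoing the orders-of-magnitude speedups described above.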

    17. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

      SciTech Connect (OSTI)

      Jablonowski, Christiane

      2015-07-14

      The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. 
The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.
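As a minimal illustration of the mesh-adaptation idea discussed above (a hypothetical one-dimensional sketch, not the Chombo API or the cubed-sphere solvers), cells can be flagged by a gradient criterion and refined by inserting midpoints:

```python
def flag_for_refinement(values, threshold):
    """Flag cells whose neighbor-to-neighbor jump exceeds threshold (toy criterion)."""
    flags = [False] * len(values)
    for i in range(len(values) - 1):
        if abs(values[i + 1] - values[i]) > threshold:
            flags[i] = flags[i + 1] = True
    return flags

def refine(xs, values, flags):
    """One level of refinement: insert a linearly interpolated midpoint in each
    interval whose two endpoints are both flagged."""
    new_xs, new_vals = [], []
    for i in range(len(xs) - 1):
        new_xs.append(xs[i])
        new_vals.append(values[i])
        if flags[i] and flags[i + 1]:
            new_xs.append(0.5 * (xs[i] + xs[i + 1]))
            new_vals.append(0.5 * (values[i] + values[i + 1]))
    new_xs.append(xs[-1])
    new_vals.append(values[-1])
    return new_xs, new_vals

# Coarse grid with a sharp front between x=1 and x=2: only that interval is refined.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
vals = [0.0, 0.0, 1.0, 1.0, 1.0]
fine_xs, fine_vals = refine(xs, vals, flag_for_refinement(vals, threshold=0.5))
```

Real AMR libraries refine patches of cells recursively and manage the coarse-fine interfaces; the sketch only shows the flag-then-refine cycle that drives both static and dynamic adaptation.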

    18. Discretionary Allocation Request | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Discretionary Allocation Request Welcome to the Director's Discretionary Allocation request page. Director's Discretionary Allocations are "start up" awards of compute hours given by the ALCF to projects that can demonstrate a need for leadership-class resources. Awards are made year round to industry, academia, laboratories and others. Duration is three or six months. To apply for an allocation, please complete the following form. The ALCF allocation team will contact you within 2

    19. Computational model, method, and system for kinetically-tailoring multi-drug chemotherapy for individuals

      DOE Patents [OSTI]

      Gardner, Shea Nicole

      2007-10-23

      A method and system for tailoring treatment regimens to individual patients with diseased cells exhibiting evolution of resistance to such treatments. A mathematical model is provided which models rates of population change of proliferating and quiescent diseased cells using cell kinetics and evolution of resistance of the diseased cells, and pharmacokinetic and pharmacodynamic models. Cell kinetic parameters are obtained from an individual patient and applied to the mathematical model to solve for a plurality of treatment regimens, each having a quantitative efficacy value associated therewith. A treatment regimen may then be selected from the plurality of treatment options based on the efficacy value.
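The patent abstract does not give its equations; as a loose illustration of kinetically comparing regimens, here is a hypothetical two-compartment (proliferating/quiescent) model with invented rate constants, integrated with a simple Euler step. None of the parameters or the `simulate` function come from the patent:

```python
def simulate(regimen, days=28.0, dt=0.01):
    """Hypothetical proliferating (P) / quiescent (Q) tumor model under a weekly
    dosing schedule. regimen is a list of (start_day_in_week, duration_days)
    dosing windows; the returned value is the final tumor burden P + Q."""
    r, k_pq, k_qp = 0.3, 0.1, 0.05   # growth and P<->Q exchange rates (1/day)
    kill_p, kill_q = 2.0, 0.2        # drug kill rates; proliferating cells are more sensitive
    P, Q, t = 1.0, 0.5, 0.0
    while t < days:
        dosing = any(start <= t % 7 < start + length for start, length in regimen)
        dP = (r - k_pq - (kill_p if dosing else 0.0)) * P + k_qp * Q
        dQ = k_pq * P - (k_qp + (kill_q if dosing else 0.0)) * Q
        P, Q, t = max(P + dt * dP, 0.0), max(Q + dt * dQ, 0.0), t + dt
    return P + Q

# Same total drug exposure, different schedules: one 1-day pulse per week
# versus two half-day pulses; each regimen gets an efficacy value (final burden).
burden_a = simulate([(0.0, 1.0)])
burden_b = simulate([(0.0, 0.5), (3.5, 0.5)])
```

Ranking candidate regimens by such an efficacy value, with parameters fitted to an individual patient, is the selection step the abstract describes.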

    20. Intro to computer programming, no computer required! | Argonne...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      ... "Computational thinking requires you to think in abstractions," said Papka, who spoke to computer science and computer-aided design students at Kaneland High School in Maple Park about ...

    1. Partial Support of Meeting of the Board on Mathematical Sciences and Their Applications

      SciTech Connect (OSTI)

      Weidman, Scott

      2014-08-31

      During the performance period, BMSA released the following major reports: Transforming Combustion Research through Cyberinfrastructure (2011); Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification (2012); Fueling Innovation and Discovery: The Mathematical Sciences in the 21st Century (2012); Aging and the Macroeconomy: Long-Term Implications of an Older Population (2012); The Mathematical Sciences in 2025 (2013); Frontiers in Massive Data Analysis (2013); and Developing a 21st Century Global Library for Mathematics Research (2014).

    2. computing | National Nuclear Security Administration

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computing NNSA Announces Procurement of Penguin Computing Clusters to Support Stockpile Stewardship at National Labs The National Nuclear Security Administration's (NNSA's) Lawrence Livermore National Laboratory today announced the awarding of a subcontract to Penguin Computing - a leading developer of high-performance Linux cluster computing systems based in Silicon Valley - to bolster computing for stockpile

    3. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

      SciTech Connect (OSTI)

      Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith

      2008-09-01

      The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
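A minimal sketch of the coarse-graining step in this spirit follows, using a generic majority-vote cellular automaton and block-majority renormalization; the actual Sandia models and renormalization scheme are not specified here, so everything below is an illustrative stand-in:

```python
import random

def step_majority_ca(state):
    """One synchronous step of a majority-vote cellular automaton on a ring."""
    n = len(state)
    return [1 if state[i - 1] + state[i] + state[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def renormalize(state, block=3):
    """Block-majority coarse-graining: one renormalization-group step that maps
    the fine lattice onto a coarser one while preserving local majorities."""
    return [1 if 2 * sum(state[i:i + block]) > block else 0
            for i in range(0, len(state), block)]

# Evolve a random fine lattice, then coarse-grain it: the reduced model has one
# third the cells but keeps the large-scale (domain) structure.
random.seed(0)
fine = [random.randint(0, 1) for _ in range(81)]
for _ in range(10):
    fine = step_majority_ca(fine)
coarse = renormalize(fine)
```

The predictivity question studied in the report is whether dynamics on the coarse lattice track the coarse-grained dynamics of the fine lattice.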

    4. Can Cloud Computing Address the Scientific Computing Requirements for DOE

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Researchers? Well, Yes, No and Maybe Can Cloud Computing Address the Scientific Computing Requirements for DOE Researchers? Well, Yes, No and Maybe Can Cloud Computing Address the Scientific Computing Requirements for DOE Researchers? Well, Yes, No and Maybe January 30, 2012 Jon Bashor, Jbashor@lbl.gov, +1 510-486-5849 Magellan1.jpg Magellan at NERSC After a two-year study of the feasibility of cloud computing systems for meeting the ever-increasing computational needs of scientists,

    5. Featured Announcements

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      October 15, 2012 by Francesca Verdier Researchers in computer science, applied mathematics or any computational science discipline who have received their Ph.D. within the last ...

    6. Program Managers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applied Mathematics: Pieter Swart, T-5 Computer Science: Pat McCormick, CCS-1 Computational Partnerships: Galen Shipman, CCS-7 Basic Energy Sciences Materials Sciences & ...

    7. Unsolicited Projects in 2011: Research in Execution Models |...

      Office of Science (SC) Website

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I ...

    8. Dr Steve Binkley | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      He has conducted research in theoretical chemistry, materials science, computer science, applied mathematics, and microelectronics. At Sandia, Dr. Binkley managed computer science, ...

    9. ASCR X-Stack Portfolio | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      ASCR X-Stack Portfolio Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges ...

    10. X-Stack Software Research | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      X-Stack Software Research Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges ...

    11. aa | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming Challenges Workshop Architectures I ...

    12. Challenges to be Addressed | U.S. DOE Office of Science (SC)

      Office of Science (SC) Website

      Challenges to be Addressed Advanced Scientific Computing Research (ASCR) ASCR Home About Research Applied Mathematics Computer Science Exascale Tools Workshop Programming ...

    13. in High Performance Computing Computer System, Cluster, and Networking...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      iSSH v. Auditd: Intrusion Detection in High Performance Computing Computer System, Cluster, and Networking Summer Institute David Karns, New Mexico State University Katy Protin,...

    14. EEG and MEG source localization using recursively applied (RAP) MUSIC

      SciTech Connect (OSTI)

      Mosher, J.C.; Leahy, R.M.

      1996-12-31

      The multiple signal characterization (MUSIC) algorithm locates multiple asynchronous dipolar sources from electroencephalography (EEG) and magnetoencephalography (MEG) data. A signal subspace is estimated from the data, then the algorithm scans a single dipole model through a three-dimensional head volume and computes projections onto this subspace. To locate the sources, the user must search the head volume for local peaks in the projection metric. Here we describe a novel extension of this approach which we refer to as RAP (Recursively APplied) MUSIC. This new procedure automatically extracts the locations of the sources through a recursive use of subspace projections, which uses the metric of principal correlations as a multidimensional form of correlation analysis between the model subspace and the data subspace. The dipolar orientations, a form of 'diverse polarization,' are easily extracted using the associated principal vectors.
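The recursion described above can be sketched as follows. This is a simplified stand-in for the authors' formulation: the signal subspace is a list of orthonormal vectors, the "subspace correlation" is the projected norm of a normalized source topography, and the deflation step simply removes the found topography's direction from the basis (the published method uses a more careful projection of both subspaces):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

def project_onto(basis, v):
    """Orthogonal projection of v onto span(basis); basis assumed orthonormal."""
    out = [0.0] * len(v)
    for b in basis:
        c = dot(b, v)
        out = [o + c * bi for o, bi in zip(out, b)]
    return out

def subspace_correlation(basis, lead_field):
    """MUSIC-style metric: fraction of a normalized source topography lying
    in the signal subspace (1.0 means a perfect fit)."""
    g = [x / norm(lead_field) for x in lead_field]
    return norm(project_onto(basis, g))

def rap_music_scan(basis, candidates, n_sources):
    """Toy RAP-MUSIC: pick the best-correlated candidate topography, deflate
    the subspace by projecting out its direction, and repeat."""
    found = []
    for _ in range(n_sources):
        best = max(candidates, key=lambda g: subspace_correlation(basis, g))
        found.append(best)
        u = [x / norm(best) for x in best]
        new_basis = []
        for b in basis:
            r = [bi - dot(b, u) * ui for bi, ui in zip(b, u)]
            if norm(r) > 1e-9:
                new_basis.append([x / norm(r) for x in r])
        basis = new_basis
    return found

# Toy example: a signal subspace spanned by the first two coordinate axes is
# explained, one source at a time, by the matching candidate topographies.
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
candidates = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
sources = rap_music_scan(basis, candidates, n_sources=2)
```

The point of the recursion is that each found source is removed before the next scan, so the user never has to hunt for multiple local peaks by hand.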

    15. SC e-journals, Materials Science

      Office of Scientific and Technical Information (OSTI)

      Materials Science Acta Materialia Advanced Composite Materials Advanced Energy Materials Advanced Engineering Materials Advanced Functional Materials Advanced Materials Advanced Powder Technology Advances in Materials Science and Engineering - OAJ Annual Review of Materials Research Applied Composite Materials Applied Mathematical Modelling Applied Mathematics & Computation Applied Physics A Applied Physics B Applied Surface Science Archives of Computational Materials Science and Surface

    16. Method of applying coatings to substrates

      DOE Patents [OSTI]

      Hendricks, Charles D.

      1991-01-01

      A method for applying novel coatings to substrates is provided. The ends of a multiplicity of rods of different materials are melted by focused beams of laser light. Individual electric fields are applied to each of the molten rod ends, thereby ejecting charged particles that include droplets, atomic clusters, molecules, and atoms. The charged particles are separately transported, by the accelerations provided by electric potentials produced by an electrode structure, to substrates where they combine and form the coatings. Layered and thickness-graded coatings, comprised of hitherto unavailable compositions, are provided.

    17. Extensible Computational Chemistry Environment

      Energy Science and Technology Software Center (OSTI)

      2012-08-09

      ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework enabling scientists to efficiently set up calculations and store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory construction to solve the problem of enabling researchers to effectively utilize complex computational chemistry codes and massively parallel high performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem solving environment for all phases of computational chemistry research: setting up calculations with sophisticated GUI and direct manipulation visualization tools, submitting and monitoring calculations on remote high performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis including creating publication quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

    18. Mathematical Modeling of Microbial Community Dynamics: A Methodological Review

      SciTech Connect (OSTI)

      Song, Hyun-Seob; Cannon, William R.; Beliaev, Alex S.; Konopka, Allan

      2014-10-17

      Microorganisms in nature form diverse communities that dynamically change in structure and function in response to environmental variations. As a complex adaptive system, microbial communities show higher-order properties that are not present in individual microbes, but arise from their interactions. Predictive mathematical models not only help to understand the underlying principles of the dynamics and emergent properties of natural and synthetic microbial communities, but also provide key knowledge required for engineering them. In this article, we provide an overview of mathematical tools that include not only current mainstream approaches, but also less traditional approaches that, in our opinion, can be potentially useful. We discuss a broad range of methods ranging from low-resolution supra-organismal to high-resolution individual-based modeling. Particularly, we highlight the integrative approaches that synergistically combine disparate methods. In conclusion, we provide our outlook for the key aspects that should be further developed to move microbial community modeling towards greater predictive power.

    19. computers | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Sandia donates 242 computers to northern California schools Sandia National Laboratories electronics technologist Mitch Williams prepares the disassembly of 242 computers for ...

    20. Careers | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      At the Argonne Leadership Computing Facility, we are helping to redefine what's possible in computational science. With some of the most powerful supercomputers in the world and a ...

    1. Computer simulation | Open Energy Information

      Open Energy Info (EERE)

      Computer simulation Jump to: navigation, search OpenEI Reference LibraryAdd to library Web Site: Computer simulation Author wikipedia Published wikipedia, 2013 DOI Not Provided...

    2. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse ...

    3. Human-computer interface

      DOE Patents [OSTI]

      Anderson, Thomas G.

      2004-12-21

      The present invention provides a method of human-computer interfacing. Force feedback allows intuitive navigation and control near a boundary between regions in a computer-represented space. For example, the method allows a user to interact with a virtual craft, then push through the windshield of the craft to interact with the virtual world surrounding the craft. As another example, the method allows a user to feel transitions between different control domains of a computer representation of a space. The method can provide for force feedback that increases as a user's locus of interaction moves near a boundary, then perceptibly changes (e.g., abruptly drops or changes direction) when the boundary is traversed.
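A toy force profile in the spirit of the last sentence (the shape and constants are hypothetical, not taken from the patent): force rises as the locus of interaction approaches the boundary and drops to zero once it is traversed:

```python
def boundary_force(distance, crossed, ramp=1.0, max_force=5.0):
    """Toy force-feedback profile: magnitude grows as the locus of interaction
    nears the boundary (distance -> 0), capped at max_force, and drops abruptly
    to zero once the boundary has been traversed."""
    if crossed:
        return 0.0
    return min(max_force, ramp / max(distance, ramp / max_force))

# Approaching the boundary the user feels a growing force; after pushing
# through (e.g., through the virtual windshield), the force vanishes.
approach = [boundary_force(d, crossed=False) for d in (4.0, 2.0, 1.0, 0.5, 0.0)]
after_crossing = boundary_force(0.0, crossed=True)
```

The abrupt change at traversal is what lets the user feel the transition between control domains rather than see it.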

    4. How to Apply for Senior Executive positions

      Broader source: Energy.gov [DOE]

      To apply for SENIOR EXECUTIVE SERVICE (SES), SENIOR LEVEL (SL), and SCIENTIFIC AND PROFESSIONAL (ST) vacancies within the Department of Energy, please visit OPM's website: http://www.usajobs.gov. From this site, you may download announcements for vacancies of interest to you.

    5. Uniform insulation applied-B ion diode

      DOE Patents [OSTI]

      Seidel, David B.; Slutz, Stephen A.

      1988-01-01

      An applied-B field extraction ion diode has uniform insulation over an anode surface for increased efficiency. When the uniform insulation is accomplished with anode coils, and a charge-exchange foil is properly placed, the ions may be focused at a point on the z axis.

    6. Mathematical treatment of isotopologue and isotopomer speciation and fractionation in biochemical kinetics

      SciTech Connect (OSTI)

      Maggi, F.M.; Riley, W.J.

      2009-11-01

      We present a mathematical treatment of the kinetic equations that describe isotopologue and isotopomer speciation and fractionation during enzyme-catalyzed biochemical reactions. These equations, presented here with the name GEBIK (general equations for biochemical isotope kinetics) and GEBIF (general equations for biochemical isotope fractionation), take into account microbial biomass and enzyme dynamics, reaction stoichiometry, isotope substitution number, and isotope location within each isotopologue and isotopomer. In addition to solving the complete GEBIK and GEBIF, we also present and discuss two approximations to the full solutions under the assumption of biomass-free and enzyme steady-state, and under the quasi-steady-state assumption as applied to the complexation rate. The complete and approximate approaches are applied to observations of biological denitrification in soils. Our analysis highlights that the full GEBIK and GEBIF provide a more accurate description of concentrations and isotopic compositions of substrates and products throughout the reaction than do the approximate forms. We demonstrate that the isotopic effects of a biochemical reaction depend, in the most general case, on substrate and complex concentrations and, therefore, the fractionation factor is a function of time. We also demonstrate that inverse isotopic effects can occur for values of the fractionation factor smaller than 1, and that reactions that do not discriminate isotopes do not necessarily imply a fractionation factor equal to 1.
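To illustrate the time-dependent fractionation factor discussed above, here is a toy explicit enzyme-complex model with two isotopologues. The rate constants are invented and this is not the GEBIK/GEBIF parameterization; it only shows that, with explicit complex dynamics, the instantaneous fractionation factor drifts in time rather than staying fixed at a constant value:

```python
def simulate_fractionation(steps=5000, dt=1e-3):
    """Hypothetical explicit enzyme-complex kinetics for a light (L) and heavy
    (H) isotopologue competing for one enzyme pool, integrated with Euler steps.
    Returns the instantaneous fractionation factor at each step."""
    k1, km1 = 10.0, 1.0              # complexation / decomplexation (same for both)
    k2_light, k2_heavy = 1.0, 0.97   # catalysis; the heavy isotopologue reacts slower
    e_tot = 0.1                      # total enzyme
    L, H, CL, CH = 1.0, 0.02, 0.0, 0.0
    alphas = []
    for _ in range(steps):
        E = e_tot - CL - CH          # free enzyme
        dCL = k1 * E * L - (km1 + k2_light) * CL
        dCH = k1 * E * H - (km1 + k2_heavy) * CH
        L += dt * (km1 * CL - k1 * E * L)
        H += dt * (km1 * CH - k1 * E * H)
        CL += dt * dCL
        CH += dt * dCH
        # Instantaneous fractionation factor: ratio of specific catalytic rates.
        alphas.append((k2_heavy * CH / H) / (k2_light * CL / L))
    return alphas

alphas = simulate_fractionation()
```

During the initial transient the factor reflects only the binding step; as the complexes relax toward quasi-steady state it shifts, which is the substrate- and complex-concentration dependence the abstract emphasizes.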

    7. A Compact Code for Simulations of Quantum Error Correction in Classical Computers

      SciTech Connect (OSTI)

      Nyman, Peter

      2009-03-10

      This study considers implementations of quantum error correction in a simulation language on a classical computer. Error correction is necessary in quantum computing and quantum information processing. We give examples of implementations of several error correction codes, carried out in a general quantum simulation language written in Mathematica and running on a classical computer. The intention of this research is to develop a programming language able to simulate all quantum algorithms and error corrections within the same framework. The program code, implemented on a classical computer, provides a connection between the mathematical formulation of quantum mechanics and computational methods, giving a clear, uncomplicated language for implementing algorithms.
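      The Mathematica framework described above is not shown in this record. As a minimal stand-in, here is a Python/NumPy sketch of the three-qubit bit-flip repetition code correcting a single X error; a real simulator would measure stabilizer syndromes rather than inspecting amplitudes directly:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X (bit flip)

def kron_all(*ops):
    """Tensor product of single-qubit operators into a 3-qubit operator."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def encode(alpha, beta):
    """Encode alpha|0> + beta|1> as alpha|000> + beta|111>."""
    state = np.zeros(8)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def apply_x(state, qubit):
    """Bit-flip error on one of the three physical qubits."""
    return kron_all(*[X if i == qubit else I2 for i in range(3)]) @ state

def majority_correct(state):
    """Toy decoder: snap every basis state to the nearest codeword by
    majority vote (valid when at most one X error has occurred)."""
    out = np.zeros_like(state)
    for idx, amp in enumerate(state):
        ones = bin(idx).count("1")
        out[0b111 if ones >= 2 else 0b000] += amp
    return out

psi = encode(0.6, 0.8)
recovered = majority_correct(apply_x(psi, 1))  # corrupt qubit 1, then decode
assert np.allclose(recovered, psi)
```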

    8. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2015-01-27

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
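      Setting aside the patent's pulse and barrier machinery, the latency bookkeeping it describes amounts to accumulating per-link latency from the root down the tree. A minimal sketch (tree shape and 1.5-unit link latencies are hypothetical):

```python
from collections import deque

def time_bases(tree, latency, root=0):
    """Breadth-first walk from the root, setting each node's time base
    to its cumulative root-to-node data transmission latency."""
    base = {root: 0.0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in tree.get(node, []):
            base[child] = base[node] + latency[(node, child)]
            queue.append(child)
    return base

# A 7-node binary tree with uniform 1.5-unit links (hypothetical values).
tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
latency = {(p, c): 1.5 for p, kids in tree.items() for c in kids}
bases = time_bases(tree, latency)
assert bases[0] == 0.0 and bases[6] == 3.0
```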

    9. Synchronizing compute node time bases in a parallel computer

      DOE Patents [OSTI]

      Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

      2014-12-30

      Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

    10. Computer Security Risk Assessment

      Energy Science and Technology Software Center (OSTI)

      1992-02-11

      LAVA/CS (LAVA for Computer Security) is an application of the Los Alamos Vulnerability Assessment (LAVA) methodology specific to computer and information security. The software serves as a generic tool for identifying vulnerabilities in computer and information security safeguards systems. Although it does not perform a full risk assessment, the results from its analysis may provide valuable insights into security problems. LAVA/CS assumes that the system is exposed to both natural and environmental hazards and to deliberate malevolent actions by either insiders or outsiders. The user, in the process of answering the LAVA/CS questionnaire, identifies missing safeguards in 34 areas ranging from password management to personnel security and internal audit practices. Specific safeguards protecting a generic set of assets (or targets) from a generic set of threats (or adversaries) are considered. There are four generic assets: the facility, the organization's environment; the hardware, all computer-related hardware; the software, the information in machine-readable form stored both on-line and on transportable media; and the documents and displays, the information in human-readable form stored as hard-copy materials (manuals, reports, listings in full-size or microform), film, and screen displays. Two generic threats are considered: natural and environmental hazards (storms, fires, power abnormalities, water and accidental maintenance damage); and on-site human threats, both intentional and accidental acts attributable to a perpetrator on the facility's premises.

    11. MHD computations for stellarators

      SciTech Connect (OSTI)

      Johnson, J.L.

      1985-12-01

      Considerable progress has been made in the development of computational techniques for studying the magnetohydrodynamic equilibrium and stability properties of three-dimensional configurations. Several different approaches have evolved to the point where comparison of results determined with different techniques shows good agreement. 55 refs., 7 figs.

    12. Programs for attracting under-represented minority students to graduate school and research careers in computational science. Final report for period October 1, 1995 - September 30, 1997

      SciTech Connect (OSTI)

      Turner, James C. Jr.; Mason, Thomas; Guerrieri, Bruno

      1997-10-01

      Programs have been established at Florida A&M University to attract minority students to research careers in mathematics and computational science. The primary goal of the program was to increase the number of such students studying computational science via an interactive multimedia learning environment. One mechanism used for meeting this goal was the development of educational modules. This academic-year program, established within the mathematics department at Florida A&M University, introduced students to computational science projects using high-performance computers. Additional activities were conducted during the summer, including workshops, meetings, and lectures. Through the exposure this program provided to scientific ideas and research in computational science, participants are well positioned to apply tools from this interdisciplinary field successfully.

    13. Summer of Applied Geophysical Experience Reading List

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Summer of Applied Geophysical Experience Reading List A National Science Foundation Research Experiences for Undergraduates program Contacts Institute Director Reinhard Friedel-Los Alamos SAGE Co-Director W. Scott Baldridge-Los Alamos SAGE Co-Director Larry Braile-Purdue University Professional Staff Assistant Georgia Sanchez (505) 665-0855 Keller, R., Khan, M. A., Morgan, P., et al., 1991, A Comparative Study of the Rio Grande and Kenya rifts, Tectonophys.,

    14. Applied Cathode Enhancement and Robustness Technologies (ACERT)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Accelerators, Electrodynamics » ACERT Applied Cathode Enhancement and Robustness Technologies (ACERT) World-leading experts from the fields of accelerator design & testing, chemical synthesis of nanomaterials, and shielding application of nanomaterials. Nathan Moody, Principal Investigator (PI). Our project team, part of Los Alamos National Laboratory (LANL), is composed of world-leading experts from the fields of accelerator design & testing,

    15. A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus

      SciTech Connect (OSTI)

      Raustad, Richard A.

      2013-01-01

      This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described in detail.
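      The paper's model equations are not reproduced in this record. EnergyPlus component models of this kind commonly scale rated capacity and efficiency by empirical biquadratic performance curves in two temperatures; a minimal evaluation sketch (coefficients and operating point hypothetical):

```python
def biquadratic(x, y, coeffs):
    """EnergyPlus-style biquadratic performance-curve modifier:
    c1 + c2*x + c3*x^2 + c4*y + c5*y^2 + c6*x*y,
    e.g. x = indoor wet-bulb temperature, y = outdoor dry-bulb temperature."""
    c1, c2, c3, c4, c5, c6 = coeffs
    return c1 + c2 * x + c3 * x**2 + c4 * y + c5 * y**2 + c6 * x * y

# Hypothetical coefficients chosen so the modifier is 1.0 at the rating point.
coeffs = (1.0, 0.0, 0.0, 0.0, 0.0, 0.0)
assert biquadratic(19.4, 35.0, coeffs) == 1.0
```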

    16. Browse by Discipline -- E-print Network Subject Pathways: Mathematics...

      Office of Scientific and Technical Information (OSTI)

      Z Yaakobi, Eitan (Eitan Yaakobi) - Department of Electrical Engineering, California Institute of Technology Yagan, Osman (Osman Yagan) - Department of Electrical and Computer ...

    17. SCC: The Strategic Computing Complex

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      SCC: The Strategic Computing Complex The Strategic Computing Complex (SCC) is a secured supercomputing facility that supports the calculation, modeling, simulation, and visualization of complex nuclear weapons data in support of the Stockpile Stewardship Program. The 300,000-square-foot, vault-type building features an unobstructed 43,500-square-foot computer room, an open room about three-fourths the size of a football field.

    18. Magellan: A Cloud Computing Testbed

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Magellan: A Cloud Computing Testbed Cloud computing is gaining a foothold in the business world, but can clouds meet the specialized needs of scientists? That was one of the questions NERSC's Magellan cloud computing testbed explored between 2009 and 2011. The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office

    19. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... - Department of Mathematics, Kutztown University of Pennsylvania McMullen, Curtis T.(Curtis T.McMullen).- Department of Mathematics, Harvard University McNamara, Peter ...

    20. Browse by Discipline -- E-print Network Subject Pathways: Computer...

      Office of Scientific and Technical Information (OSTI)

      ... Brakocevic) - Department of Mathematics and Statistics, McGill University Brand, Neal (Neal Brand) - Department of Mathematics, University of North Texas Brandolese, Lorenzo ...

    1. Computer Algebra System

      Energy Science and Technology Software Center (OSTI)

      1992-05-04

      DOE-MACSYMA (Project MAC's SYmbolic MAnipulation system) is a large computer programming system written in LISP. With DOE-MACSYMA the user can differentiate, integrate, take limits, solve systems of linear or polynomial equations, factor polynomials, expand functions in Laurent or Taylor series, solve differential equations (using direct or transform methods), compute Poisson series, plot curves, and manipulate matrices and tensors. A language similar to ALGOL-60 permits users to write their own programs for transforming symbolic expressions. Franz Lisp OPUS 38 provides the environment for the Encore, Celerity, and DEC VAX11 UNIX, SUN(OPUS) versions under UNIX and the Alliant version under Concentrix. Kyoto Common Lisp (KCL) provides the environment for the SUN(KCL), Convex, and IBM PC under UNIX and Data General under AOS/VS.
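      MACSYMA's LISP internals are not shown in this record. The core idea behind symbolic differentiation in such systems, recursive rewriting of an expression tree, can be sketched in a few lines of Python (a toy term representation, not MACSYMA's):

```python
# Expressions are nested tuples: ('const', c), ('x',), ('+', a, b), ('*', a, b).
def d(expr):
    """Symbolic derivative with respect to x by recursive rule application."""
    op = expr[0]
    if op == 'const':
        return ('const', 0)
    if op == 'x':
        return ('const', 1)
    if op == '+':                      # sum rule
        return ('+', d(expr[1]), d(expr[2]))
    if op == '*':                      # product rule
        u, v = expr[1], expr[2]
        return ('+', ('*', d(u), v), ('*', u, d(v)))
    raise ValueError(f"unknown operator: {op}")

def evaluate(expr, x):
    """Numerically evaluate an expression tree at a given x."""
    op = expr[0]
    if op == 'const':
        return expr[1]
    if op == 'x':
        return x
    if op == '+':
        return evaluate(expr[1], x) + evaluate(expr[2], x)
    if op == '*':
        return evaluate(expr[1], x) * evaluate(expr[2], x)

# d/dx [x * x] = 2x; check numerically at x = 3.
expr = ('*', ('x',), ('x',))
assert evaluate(d(expr), 3) == 6
```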

    2. computational fluid dynamics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      computational fluid dynamics - Sandia Energy Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs

    3. GPU Computational Screening

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      GPU Computational Screening of Carbon Capture Materials J. Kim 1 , A Koniges 1 , R. Martin 1 , M. Haranczyk 1 , J. Swisher 2 , and B. Smit 1,2 1 Lawrence Berkeley National Laboratory, Berkeley, CA 94720 2 Department of Chemical Engineering, University of California, Berkeley, Berkeley, CA 94720 E-mail: jihankim@lbl.gov Abstract. In order to reduce the current costs associated with carbon capture technologies, novel materials such as zeolites and metal-organic frameworks that are based on

    4. Cloud Computing Services

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Services - Sandia Energy Energy & Climate Secure & Sustainable Energy Future Stationary Power Energy Conversion Efficiency Solar Energy Wind Energy Water Power Supercritical CO2 Geothermal Natural Gas Safety, Security & Resilience of the Energy Infrastructure Energy Storage Nuclear Power & Engineering Grid Modernization Battery Testing Nuclear Fuel Cycle Defense Waste Management Programs Advanced

    5. Development of computer graphics

      SciTech Connect (OSTI)

      Nuttall, H.E.

      1989-07-01

      The purpose of this project was to screen and evaluate three graphics packages as to their suitability for displaying concentration contour graphs. The information to be displayed comes from computer code simulations describing airborne contaminant transport. The three evaluated programs were MONGO (John Tonry, MIT, Cambridge, MA 02139), Mathematica (Wolfram Research Inc.), and NCSA Image (National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign). After a preliminary investigation of each package, NCSA Image appeared to be significantly superior for generating the desired concentration contour graphs. Hence subsequent work, described in this report, covers the implementation and testing of NCSA Image on both Apple Mac II and Sun 4 computers. NCSA Image includes several utilities (Layout, DataScope, HDF, and PalEdit) which were used in this study and installed on Dr. Ted Yamada's Mac II computer. Dr. Yamada provided two sets of air pollution plume data which were displayed using NCSA Image. Both sets were animated into a sequential expanding plume series.

    6. The future of mathematical communication. Final technical report

      SciTech Connect (OSTI)

      Christy, J.

      1994-12-31

      One of the first fruits of cooperation with LBL was the use of the MBone (Multi-Cast Backbone) to broadcast the Conference on the Future of Mathematical Communication, held at MSRI November 30--December 3, 1994. Late last fall, MSRI brought together more than 150 mathematicians, librarians, software developers, representatives of scholarly societies, and both commercial and not-for-profit publishers to discuss the revolution in scholarly communication brought about by digital technology. The conference was funded by the Department of Energy, the National Science Foundation, and the Paul and Gabriella Rosenbaum Foundation. It focused on the impact of the technological revolution on mathematics, but necessarily included issues of a much wider scope. There were talks on electronic publishing, collaboration across the Internet, economic and intellectual property issues, and various new technologies which promise to carry the revolution forward. There were panel discussions of electronic documents in mathematics, the unique nature of electronic journals, technological tools, and the role of scholarly societies. There were focus groups on Developing Countries, K-12 Education, Libraries, and TeX. The meeting also embodied the promises of the revolution; it was multicast over the MBone channel of the Internet to hundreds of sites around the world and much information on the conference will be available on their World Wide Web server at the URL http://www.msri.org/fmc. The authors have received many comments about the meeting indicating that it has had a profound impact on how the community thinks about how scientists can communicate and make their work public.

    7. Applied Energy Programs, SPO-AE: LANL

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Kevin Ott 505-663-5537 Program Administrator Jutta Kayser 505-663-5649 Program Manager Karl Jonietz 505-663-5539 Program Manager Melissa Fox 505-663-5538 Budget Analyst Fawn Gore 505-665-0224 The Applied Energy Program Office (SPO-AE) manages Los Alamos National Laboratory programs funded by the Department of Energy's Offices of Energy Efficiency/Renewable Energy, Electricity Delivery and Energy Reliability, and Fossil Energy. With energy use increasing across the nation and the world, Los

    8. Apply for a Job | Argonne National Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      FAQs: Answers to frequently asked questions about applying for a job at Argonne. A Note About Privacy: We do not ask you for personally identifiable information such as birthdate, social security number, or driver's license number. To ensure your privacy, please do not include such information in the documents that you upload to the system. A Note About File Size: Our application system has a file size limit of 820KB. While this is sufficient for the vast majority of documents, we have found that

    9. IPM: A Post-MPI Programming Model | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      IPM: A Post-MPI Programming Model Event Sponsor: Mathematics and Computer Science Division LANS Seminar Start Date: Apr 19 2016 - 3:00pm Building/Room: Building 240/Room 1406-1407 Location: Argonne National Laboratory Speaker(s): Barry Smith Junchao Zhang Speaker(s) Title: Computational Mathematicians, ANL-MCS Event Website: http://www.mcs.anl.gov/research/LANS/events/listn/ The MPI parallel programming model has been a very successful parallel programming model for over twenty years. Though

    10. 2009 Applied and Environmental Microbiology GRC

      SciTech Connect (OSTI)

      Nicole Dubilier

      2009-07-12

      The topic of the 2009 Gordon Conference on Applied and Environmental Microbiology is: From Single Cells to the Environment. The Conference will present and discuss cutting-edge research on applied and environmental microbiology with a focus on understanding interactions between microorganisms and the environment at levels ranging from single cells to complex communities. The Conference will feature a wide range of topics such as single cell techniques (including genomics, imaging, and NanoSIMS), microbial diversity at scales ranging from clonal to global, environmental 'meta-omics', biodegradation and bioremediation, metal-microbe interactions, animal microbiomes and symbioses. The Conference will bring together investigators who are at the forefront of their field, and will provide opportunities for junior scientists and graduate students to present their work in poster format and exchange ideas with leaders in the field. Some poster presenters will be selected for short talks. The collegial atmosphere of this Conference, with extensive discussion sessions as well as opportunities for informal gatherings in the afternoons and evenings, provides an ideal setting for scientists from different disciplines to exchange ideas, brainstorm and discuss cross-disciplinary collaborations.

    11. Semiconductor Device Analysis on Personal Computers

      Energy Science and Technology Software Center (OSTI)

      1993-02-08

      PC-1D models the internal operation of bipolar semiconductor devices by solving for the concentrations and quasi-one-dimensional flow of electrons and holes resulting from either electrical or optical excitation. PC-1D uses the same detailed physical models incorporated in mainframe computer programs, yet runs efficiently on personal computers. PC-1D was originally developed with DOE funding to analyze solar cells. That continues to be its primary mode of usage, with registered copies in regular use at more than 100 locations worldwide. The program has been successfully applied to the analysis of silicon, gallium-arsenide, and indium-phosphide solar cells. The program is also suitable for modeling bipolar transistors and diodes, including heterojunction devices. Its easy-to-use graphical interface makes it useful as a teaching tool as well.
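      PC-1D solves the full carrier-transport equations, which are beyond a short example. For a flavor of the underlying device physics, the textbook built-in-potential estimate for an abrupt silicon p-n junction can be sketched as follows (room temperature and an intrinsic concentration of about 9.65e9 cm^-3 are assumed; doping values are hypothetical):

```python
import math

def built_in_potential(Na, Nd, ni=9.65e9, kT_q=0.02585):
    """Built-in potential of an abrupt silicon p-n junction at ~300 K:
    Vbi = (kT/q) * ln(Na * Nd / ni^2), with doping densities in cm^-3."""
    return kT_q * math.log(Na * Nd / ni**2)

# A moderately doped junction: Na = 1e17, Nd = 1e15 cm^-3.
v = built_in_potential(Na=1e17, Nd=1e15)
assert 0.65 < v < 0.75  # roughly 0.7 V, as expected for silicon
```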

    12. Hybrid soft computing systems: Industrial and commercial applications

      SciTech Connect (OSTI)

      Bonissone, P.P.; Chen, Y.T.; Goebel, K.; Khedkar, P.S.

      1999-09-01

      Soft computing (SC) is an association of computing methodologies that includes as its principal members fuzzy logic, neurocomputing, evolutionary computing and probabilistic computing. The authors present a collection of methods and tools that can be used to perform diagnostics, estimation, and control. These tools are a great match for real-world applications that are characterized by imprecise, uncertain data and incomplete domain knowledge. The authors outline the advantages of applying SC techniques and in particular the synergy derived from the use of hybrid SC systems. They illustrate some combinations of hybrid SC systems, such as fuzzy logic controllers (FLC's) tuned by neural networks (NN's) and evolutionary computing (EC), NN's tuned by EC or FLC's, and EC controlled by FLC's. The authors discuss three successful real-world examples of SC applications to industrial equipment diagnostics, freight train control, and residential property valuation.
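      As a toy illustration of one soft-computing ingredient named above, here is a minimal fuzzy controller in Python with a hypothetical three-rule base and weighted-average defuzzification; it is not the authors' freight-train controller, only a sketch of the fuzzy-logic mechanism:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_brake(speed_error):
    """Three hypothetical rules: IF error negative THEN brake low;
    IF error near zero THEN brake mid; IF error positive THEN brake high.
    Crisp output by weighted average of rule output levels."""
    mu_neg = tri(speed_error, -10, -5, 0)
    mu_zero = tri(speed_error, -5, 0, 5)
    mu_pos = tri(speed_error, 0, 5, 10)
    outputs = {0.0: mu_neg, 0.5: mu_zero, 1.0: mu_pos}
    num = sum(level * mu for level, mu in outputs.items())
    den = sum(outputs.values())
    return num / den if den else 0.0

assert fuzzy_brake(0) == 0.5   # fully "zero" error -> mid braking
assert fuzzy_brake(5) == 1.0   # fully "positive" error -> high braking
```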

    13. High Performance Computing at the Oak Ridge Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      High Performance Computing at the Oak Ridge Leadership Computing Facility. Outline: Our Mission; Computer Systems: Present, Past, Future; Challenges Along the Way; Resources for Users. Our Mission: World's most powerful computing facility; nation's largest concentration of open source materials research; $1.3B budget; 4,250 employees; 3,900 research guests annually; $350 million invested in modernization; nation's most diverse energy

    14. An exact general remeshing scheme applied to physically conservative voxelization

      DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

      Powell, Devon; Abel, Tom

      2015-05-21

      We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as physically conservative voxelization. At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara [48], who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements.
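      The paper's 3D clipping of convex polyhedra (and its C implementation) is not reproduced here. Its well-known 2D analogue, Sutherland-Hodgman clipping of a polygon against each edge of a convex clipper, can be sketched in Python:

```python
def clip(subject, clipper):
    """Sutherland-Hodgman: successively clip `subject` against each
    directed edge of a counter-clockwise convex `clipper` polygon."""
    def inside(p, a, b):
        # Point p is on or to the left of directed edge a->b.
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b.
        den = (p[0]-q[0])*(a[1]-b[1]) - (p[1]-q[1])*(a[0]-b[0])
        t = ((p[0]-a[0])*(a[1]-b[1]) - (p[1]-a[1])*(a[0]-b[0])) / den
        return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))
    out = list(subject)
    for a, b in zip(clipper, clipper[1:] + clipper[:1]):
        pts, out = out, []
        for p, q in zip(pts, pts[1:] + pts[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
    return out

def area(poly):
    """Shoelace formula for polygon area."""
    return 0.5 * abs(sum(p[0]*q[1] - q[0]*p[1]
                         for p, q in zip(poly, poly[1:] + poly[:1])))

# Unit square clipped by a square shifted by (0.5, 0.5): overlap area 0.25.
sq = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
assert abs(area(clip(sq, shifted)) - 0.25) < 1e-9
```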

    15. What is behind small deviations of quantum mechanics theory from experiments? Observer's mathematics point of view

      SciTech Connect (OSTI)

      Khots, Boris; Khots, Dmitriy

      2014-12-10

      Certain results that have been predicted by Quantum Mechanics (QM) theory are not always supported by experiments. This defines a deep crisis in contemporary physics and, in particular, quantum mechanics. We believe that, in fact, the mathematical apparatus employed within today's physics is a possible reason. In particular, we consider the concept of infinity that exists in today's mathematics as the root cause of this problem. We have created Observer's Mathematics that offers an alternative to contemporary mathematics. This paper is an attempt to relay how Observer's Mathematics may explain some of the contradictions in QM theory results. We consider the Hamiltonian Mechanics, Newton equation, Schrodinger equation, two slit interference, wave-particle duality for single photons, uncertainty principle, Dirac equations for free electron in a setting of arithmetic, algebra, and topology provided by Observer's Mathematics (see www.mathrelativity.com). Certain results and communications pertaining to solution of these problems are provided.

    16. SUMO, System performance assessment for a high-level nuclear waste repository: Mathematical models

      SciTech Connect (OSTI)

      Eslinger, P.W.; Miley, T.B.; Engel, D.W.; Chamberlain, P.J. II

      1992-09-01

      Following completion of the preliminary risk assessment of the potential Yucca Mountain Site by Pacific Northwest Laboratory (PNL) in 1988, the Office of Civilian Radioactive Waste Management (OCRWM) of the US Department of Energy (DOE) requested the Performance Assessment Scientific Support (PASS) Program at PNL to develop an integrated system model and computer code that provides performance and risk assessment analysis capabilities for a potential high-level nuclear waste repository. The system model that has been developed addresses the cumulative radionuclide release criteria established by the US Environmental Protection Agency (EPA) and estimates population risks in terms of dose to humans. The system model embodied in the SUMO (System Unsaturated Model) code will also allow benchmarking of other models being developed for the Yucca Mountain Project. The system model has three natural divisions: (1) source term, (2) far-field transport, and (3) dose to humans. This document gives a detailed description of the mathematics of each of these three divisions. Each of the governing equations employed is based on modeling assumptions that are widely accepted within the scientific community.

    17. Discrete Mathematical Approaches to Graph-Based Traffic Analysis

      SciTech Connect (OSTI)

      Joslyn, Cliff A.; Cowley, Wendy E.; Hogan, Emilie A.; Olsen, Bryan K.

      2014-04-01

      Modern cyber defense and analytics require general, formal models of cyber systems. Multi-scale network models are prime candidates for such formalisms, using discrete mathematical methods based in hierarchically-structured directed multigraphs which also include rich sets of labels. An exemplar of an application of such an approach is traffic analysis, that is, observing and analyzing connections between clients, servers, hosts, and actors within IP networks, over time, to identify characteristic or suspicious patterns. Toward that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. In this paper, we consider traffic analysis of NetFlow using both basic graph statistics and two new mathematical measures involving labeled degree distributions and time interval overlap measures. We do all of this over the VAST test data set of 96M synthetic NetFlow graph edges, against which we can identify characteristic patterns of simulated ground-truth network attacks.
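      As a minimal illustration of a labeled degree distribution of the kind described above (the flow records below are hypothetical, not the VAST data set):

```python
from collections import Counter

# Hypothetical flow records: (source IP, destination IP, protocol label).
flows = [
    ("10.0.0.1", "10.0.0.9", "http"),
    ("10.0.0.1", "10.0.0.9", "http"),
    ("10.0.0.2", "10.0.0.9", "ssh"),
    ("10.0.0.1", "10.0.0.3", "dns"),
]

def labeled_out_degree(flows):
    """Count outgoing edges per (source, label) pair: a labeled out-degree
    distribution usable as a traffic-analysis feature."""
    deg = Counter()
    for src, _dst, label in flows:
        deg[(src, label)] += 1
    return deg

deg = labeled_out_degree(flows)
assert deg[("10.0.0.1", "http")] == 2
assert deg[("10.0.0.2", "ssh")] == 1
```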

    18. [Computer Science and Telecommunications Board activities

      SciTech Connect (OSTI)

      Blumenthal, M.S.

      1993-02-23

      The board considers technical and policy issues pertaining to computer science, telecommunications, and associated technologies. Its functions include providing a base of expertise in these fields for the NRC, monitoring and promoting the health of these fields, initiating studies of these fields as critical resources and sources of national economic strength, responding to requests for advice, and fostering interaction among these technologies and other areas of pure and applied science and technology. This document describes the board's major accomplishments, current programs, other sponsored activities, cooperative ventures, and plans and prospects.

    19. System for computer controlled shifting of an automatic transmission

      DOE Patents [OSTI]

      Patil, Prabhakar B.

      1989-01-01

      In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during, and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters, established from closed-loop control of the clutch hydraulic pressure and the power source torque, on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.
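      The patent's powertrain model is far more detailed than can be shown here. As a toy illustration of closed-loop feedback of the kind described, a discrete proportional controller driving a first-order lag (a stand-in for clutch-pressure dynamics) toward a setpoint:

```python
def simulate_shift(setpoint=1.0, kp=0.8, steps=50, dt=0.1, tau=0.5):
    """Discrete-time proportional feedback on a first-order plant
    dy/dt = (u - y) / tau, with control u = kp * (setpoint - y)."""
    y, history = 0.0, []
    for _ in range(steps):
        u = kp * (setpoint - y)      # proportional feedback law
        y += dt * (u - y) / tau      # Euler step of the plant
        history.append(y)
    return history

out = simulate_shift()
assert abs(out[-1] - out[-2]) < 1e-3          # response has settled
assert abs(out[-1] - 0.8 / 1.8) < 0.01        # P-only steady-state offset
```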

    20. Closed loop computer control for an automatic transmission

      DOE Patents [OSTI]

      Patil, Prabhakar B.

      1989-01-01

      In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during, and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a closed-loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters, established from closed-loop control of the clutch hydraulic pressure and the power source torque, on the shift quality. This computer simulation is also used to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

    1. Computational Electronics and Electromagnetics

      SciTech Connect (OSTI)

      DeFord, J.F.

      1993-03-01

      The Computational Electronics and Electromagnetics thrust area is a focal point for computer modeling activities in electronics and electromagnetics in the Electronics Engineering Department of Lawrence Livermore National Laboratory (LLNL). Traditionally, they have focused their efforts in technical areas of importance to existing and developing LLNL programs, and this continues to form the basis for much of their research. A relatively new and increasingly important emphasis for the thrust area is the formation of partnerships with industry and the application of their simulation technology and expertise to the solution of problems faced by industry. The activities of the thrust area fall into three broad categories: (1) the development of theoretical and computational models of electronic and electromagnetic phenomena, (2) the development of useful and robust software tools based on these models, and (3) the application of these tools to programmatic and industrial problems. In FY-92, they worked on projects in all of the areas outlined above. The object of their work on numerical electromagnetic algorithms continues to be the improvement of time-domain algorithms for electromagnetic simulation on unstructured conforming grids. The thrust area is also investigating various technologies for conforming-grid mesh generation to simplify the application of their advanced field solvers to design problems involving complicated geometries. They are developing a major code suite based on the three-dimensional (3-D), conforming-grid, time-domain code DSI3D. They continue to maintain and distribute the 3-D, finite-difference time-domain (FDTD) code TSAR, which is installed at several dozen university, government, and industry sites.

    2. Scanning computed confocal imager

      DOE Patents [OSTI]

      George, John S. (Los Alamos, NM)

      2000-03-14

      There is provided a confocal imager comprising a light source emitting light, with a light modulator in optical communication with the light source for varying the spatial and temporal pattern of the light. A beam splitter receives the scanned light, directs the scanned light onto a target, and passes light reflected from the target to a video capturing device, which receives the reflected light and transfers a digital image of the reflected light to a computer for creating a virtual aperture and outputting the digital image. In a transmissive mode of operation the invention omits the beam splitter and captures light passed through the target.

    3. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, W.C.

      1998-03-17

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers is disclosed. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them. 5 figs.

    4. computational-hydraulics

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      and Aerodynamics using STAR-CCM+ for CFD Analysis March 21-22, 2012 Argonne, Illinois Dr. Steven Lottes A training course in the use of computational hydraulics and aerodynamics CFD software using CD-adapco's STAR-CCM+ for analysis will be held at TRACC from March 21-22, 2012. The course assumes a basic knowledge of fluid mechanics and will make extensive use of hands-on tutorials. CD-adapco will issue

    5. Computer generated holographic microtags

      DOE Patents [OSTI]

      Sweatt, William C.

      1998-01-01

      A microlithographic tag comprising an array of individual computer generated holographic patches having feature sizes between 250 and 75 nanometers. The tag is a composite hologram made up of the individual holographic patches and contains identifying information when read out with a laser of the proper wavelength and at the proper angles of probing and reading. The patches are fabricated in a steep angle Littrow readout geometry to maximize returns in the -1 diffracted order. The tags are useful as anti-counterfeiting markers because of the extreme difficulty in reproducing them.

    6. Multiprocessor computing for images

      SciTech Connect (OSTI)

      Cantoni, V.; Levialdi, S.

      1988-08-01

      A review of image processing systems developed to date is given, highlighting the weak points of such systems and the trends that have dictated their evolution through the years, producing different generations of machines. Each generation may be characterized by its hardware architecture, its programmability features, and the relative application areas. The need for multiprocessing hierarchical systems is discussed, focusing on pyramidal architectures. Their computational paradigms, virtual and physical implementations, programming and software requirements, and the capabilities offered by suitable languages are discussed.

    7. Announcement of Computer Software

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      F 241.4 (10-01) (Replaces ESTSC F1 and ESTSC F2) All Other Editions Are Obsolete UNITED STATES DEPARTMENT OF ENERGY ANNOUNCEMENT OF COMPUTER SOFTWARE OMB Control Number 1910-1400 (OMB Burden Disclosure Statement is on last page of Instructions) Record Status (Select One): New Package Software Revision H. Description/Abstract PART I: STI SOFTWARE DESCRIPTION A. Software Title SHORT NAME OR ACRONYM KEYWORDS IN CONTEXT (KWIC) TITLE B. Developer(s) E-MAIL ADDRESS(ES) C. Site Product Number 1. DOE

    8. Ultra-high resolution computed tomography imaging

      DOE Patents [OSTI]

      Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

      2002-01-01

      A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180.degree., and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
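The deconvolution step in the abstract above can be illustrated with a 1-D profile: divide in the frequency domain by the transfer function, with regularization to keep near-zero frequencies from blowing up. The Gaussian blur kernel, profile, and regularization constant below are illustrative assumptions, not the patent's experimentally measured transfer function.

```python
# Hedged sketch: Wiener-style regularized deconvolution of a 1-D projection.
import numpy as np

def deconvolve(blurred, kernel, eps=1e-3):
    """Regularized FFT deconvolution: conj(H) / (|H|^2 + eps) stands in for 1/H."""
    H = np.fft.fft(kernel)
    B = np.fft.fft(blurred)
    return np.real(np.fft.ifft(B * np.conj(H) / (np.abs(H) ** 2 + eps)))

n = 128
x = np.arange(n)
profile = ((x > 50) & (x < 78)).astype(float)   # sharp-edged object profile
d = np.minimum(x, n - x)                        # wrap-around distance to index 0
kernel = np.exp(-0.5 * (d / 2.0) ** 2)          # assumed Gaussian detector blur
kernel /= kernel.sum()                          # unit-area kernel, no net gain
blurred = np.real(np.fft.ifft(np.fft.fft(profile) * np.fft.fft(kernel)))
restored = deconvolve(blurred, kernel)
```

The restored profile recovers edge sharpness the blur removed; in the patented method this correction is applied to each projection before the cone-beam reconstruction.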

    9. Peak fitting applied to low-resolution enrichment measurements

      SciTech Connect (OSTI)

      Bracken, D.; McKown, T.; Sprinkle, J.K. Jr.; Gunnink, R.; Kartoshov, M.; Kuropatwinski, J.; Raphina, G.; Sokolov, G.

      1998-12-01

      Materials accounting at bulk processing facilities that handle low enriched uranium consists primarily of weight and uranium enrichment measurements. Most low enriched uranium processing facilities draw separate materials balances for each enrichment handled at the facility. The enrichment measurement determines the isotopic abundance of the {sup 235}U, thereby determining the proper strata for the item, while the weight measurement generates the primary accounting value for the item. Enrichment measurements using the passive gamma radiation from uranium were developed for use in US facilities a few decades ago. In the US, the use of low-resolution detectors was favored because they cost less, are lighter and more robust, and don't require the use of liquid nitrogen. When these techniques were exported to Europe, however, difficulties were encountered. Two of the possible root causes were discovered to be inaccurate knowledge of the container wall thickness and higher levels of minor isotopes of uranium introduced by the use of reactor returns in the enrichment plants. The minor isotopes cause an increase in the Compton continuum under the 185.7 keV assay peak and the observance of interfering 238.6 keV gamma rays. The solution selected to address these problems was to rely on the slower, more costly, high-resolution gamma ray detectors when the low-resolution method failed. Recently, these gamma ray based enrichment measurement techniques have been applied to Russian origin material. The presence of interfering gamma radiation from minor isotopes was confirmed. However, with the advent of fast portable computers, it is now possible to apply more sophisticated analysis techniques to the low-resolution data in the field. Explicit corrections for Compton background, gamma rays from {sup 236}U daughters, and the attenuation caused by thick containers can be part of the least squares fitting routine. 
Preliminary results from field measurements in Kazakhstan will be discussed.
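The least squares fitting idea above can be sketched simply: with a known detector response shape, the net 185.7 keV peak amplitude and a linear Compton background can be fit simultaneously by linear least squares. The channel range, peak width, and count rates below are illustrative assumptions, not field data from the measurements described.

```python
# Hedged sketch: linear least-squares fit of (fixed-shape 185.7 keV Gaussian
# peak) + (linear Compton background) to a simulated low-resolution spectrum.
import numpy as np

rng = np.random.default_rng(0)
chan = np.arange(150.0, 220.0)                            # keV per channel (assumed)
peak_shape = np.exp(-0.5 * ((chan - 185.7) / 6.0) ** 2)   # assumed detector response
true_amp, bg0, bg1 = 500.0, 80.0, -0.2
counts = true_amp * peak_shape + bg0 + bg1 * (chan - 150.0)
counts = rng.poisson(counts).astype(float)                # counting statistics

# Design matrix columns: peak shape, constant background, linear background.
A = np.column_stack([peak_shape, np.ones_like(chan), chan - 150.0])
coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
net_peak_area = coef[0] * peak_shape.sum()                # net counts in assay peak
```

Extra columns for interfering lines (e.g. a 238.6 keV response) or attenuation corrections extend the same design matrix, which is the sense in which explicit corrections "can be part of the least squares fitting routine."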

    10. FY 1990 Applied Sciences Branch annual report

      SciTech Connect (OSTI)

      Keyes, B.M.; Dippo, P.C.

      1991-11-01

      The Applied Sciences Branch actively supports the advancement of DOE/SERI goals for the development and implementation of the solar photovoltaic technology. The primary focus of the laboratories is to provide state-of-the-art analytical capabilities for materials and device characterization and fabrication. The branch houses a comprehensive facility which is capable of providing information on the full range of photovoltaic components. A major objective of the branch is to aggressively pursue collaborative research with other government laboratories, universities, and industrial firms for the advancement of photovoltaic technologies. Members of the branch disseminate research findings to the technical community in publications and presentations. This report contains information on surface and interface analysis, materials characterization, development, electro-optical characterization module testing and performance, surface interactions and FTIR spectroscopy.

    11. Computer Wallpaper | The Ames Laboratory

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computer Wallpaper We've incorporated the tagline, Creating Materials and Energy Solutions, into a computer wallpaper so you can display it on your desktop as a constant reminder....

    12. Introduction to High Performance Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Introduction to High Performance Computing Introduction to High Performance Computing June 10, 2013 Photo on 7 30 12 at 7.10 AM Downloads Download File Gerber-HPC-2.pdf...

    13. Fermilab | Science at Fermilab | Computing

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computing Computing is indispensable to science at Fermilab. High-energy physics experiments generate an astounding amount of data that physicists need to store, analyze and communicate with others. Cutting-edge technology allows scientists to work quickly and efficiently to advance our understanding of the world . Fermilab's Computing Division is recognized for its expertise in handling huge amounts of data, its success in high-speed parallel computing and its willingness to take its craft in

    14. Super recycled water: quenching computers

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Super recycled water: quenching computers Super recycled water: quenching computers New facility and methods support conserving water and creating recycled products. Using reverse osmosis to "super purify" water allows the system to reuse water and cool down our powerful yet thirsty computers. January 30, 2014 Super recycled water: quenching computers LANL's Sanitary Effluent Reclamation Facility, key to reducing the Lab's discharge of liquid. Millions of gallons of industrial

    15. Development of probabilistic multimedia multipathway computer codes.

      SciTech Connect (OSTI)

      Yu, C.; LePoire, D.; Gnanapragasam, E.; Arnish, J.; Kamboj, S.; Biwer, B. M.; Cheng, J.-J.; Zielen, A. J.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Sallo, A., III.; Peterson, H., Jr.; Williams, W. A.; Environmental Assessment; NRC; EM

      2002-01-01

      The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
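The probabilistic-analysis idea in steps (3)-(6) can be sketched as Monte Carlo propagation: sample the uncertain inputs from their distributions and summarize the resulting dose distribution instead of reporting one deterministic number. The two-parameter "dose" model and the distributions below are illustrative assumptions, not RESRAD's actual pathway equations.

```python
# Hedged sketch: Monte Carlo propagation of input-parameter uncertainty
# through a toy dose model (not RESRAD's pathway equations).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
# Assumed uncertain inputs: soil concentration (lognormal), transfer factor (uniform).
conc = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n)   # e.g. pCi/g
transfer = rng.uniform(0.5, 1.5, size=n)                      # dimensionless

dose = 0.01 * conc * transfer          # toy dose model, arbitrary units

mean_dose = dose.mean()                # central estimate
p95_dose = np.percentile(dose, 95)     # upper-tail value a regulator might use
```

The spread between the mean and the 95th percentile is the kind of output-uncertainty information the probabilistic RESRAD modules are designed to provide.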

    16. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    17. Computing architecture for autonomous microgrids

      DOE Patents [OSTI]

      Goldsmith, Steven Y.

      2015-09-29

      A computing architecture that facilitates autonomously controlling operations of a microgrid is described herein. A microgrid network includes numerous computing devices that execute intelligent agents, each of which is assigned to a particular entity (load, source, storage device, or switch) in the microgrid. The intelligent agents can execute in accordance with predefined protocols to collectively perform computations that facilitate uninterrupted control of the microgrid.

    18. Browse by Discipline -- E-print Network Subject Pathways: Mathematics...

      Office of Scientific and Technical Information (OSTI)

      J K L M N O P Q R S T U V W X Y Z Ilic, Marija D. (Marija D. Ilic) - Department of Electrical and Computer Engineering, Carnegie Mellon University Go back to Individual Researchers ...

    19. Browse by Discipline -- E-print Network Subject Pathways: Mathematics...

      Office of Scientific and Technical Information (OSTI)

      F G H I J K L M N O P Q R S T U V W X Y Z Elkashlan, Maged (Maged Elkashlan) - School of Electronic Engineering and Computer Science, Queen Mary, University of London Erdogan, ...

    20. Applying physics, teamwork to fusion energy science | Princeton...

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Applying physics, teamwork to fusion energy science American Fusion News Category: Massachusetts Institute of Technology (MIT) Link: Applying physics, teamwork to fusion energy science

    1. D&D Toolbox Project - Technology Demonstration of Fixatives Applied...

      Office of Environmental Management (EM)

      platform(s) was demonstrated at the hot cell mockup facility at the FIU's Applied ... Demonstration of Fixatives Applied to Hot Cell Facilities via Remote Sprayer Platforms ...

    2. Energy Department Extends Deadline to Apply for START Tribal...

      Energy Savers [EERE]

      Extends Deadline to Apply for START Tribal Renewable Energy Project Development Assistance to May 22, 2015 Energy Department Extends Deadline to Apply for START Tribal Renewable...

    3. Applying the Battery Ownership Model in Pursuit of Optimal Battery...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Applying the Battery Ownership Model in Pursuit of Optimal Battery Use Strategies Applying the Battery Ownership Model in Pursuit of Optimal Battery Use Strategies 2012 DOE ...

    4. Rational Catalyst Design Applied to Development of Advanced Oxidation...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Rational Catalyst Design Applied to Development of Advanced Oxidation Catalysts for Diesel Emission Control Rational Catalyst Design Applied to Development of Advanced Oxidation ...

    5. James Webb Space Telescope: PM Lessons Applied - Eric Smith,...

      Energy Savers [EERE]

      James Webb Space Telescope: PM Lessons Applied - Eric Smith, Deputy Program Director, NASA James Webb Space Telescope: PM Lessons Applied - Eric Smith, Deputy Program Director,...

    6. Energy Department Announces Up to $14 Million for Applying Landscape...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Up to 14 Million for Applying Landscape Design to Cellulosic Bioenergy Energy Department Announces Up to 14 Million for Applying Landscape Design to Cellulosic Bioenergy October ...

    7. Large Eddy Simulation (LES) Applied to Advanced Engine Combustion...

      Broader source: Energy.gov (indexed) [DOE]

      Large Eddy Simulation (LES) Applied to Advanced Engine Combustion Research Large Eddy Simulation (LES) Applied to Low-Temperature and Diesel Engine Combustion Research Vehicle ...

    8. The Smart Grid Experience: Applying Results, Reaching Beyond...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Grid Experience: Applying Results, Reaching Beyond - Summary of Conference Proceedings (December 2014) The Smart Grid Experience: Applying Results, Reaching Beyond - Summary of ...

    9. Tritium research activities in Safety and Tritium Applied Research...

      Office of Environmental Management (EM)

      research activities in Safety and Tritium Applied Research (STAR) facility, Idaho National Laboratory Tritium research activities in Safety and Tritium Applied Research (STAR)...

    10. An Evaluation of the Nonlinearity Correction Applied to Atmospheric...

      Office of Scientific and Technical Information (OSTI)

      An Evaluation of the Nonlinearity Correction Applied to Atmospheric Emitted Radiance ... Title: An Evaluation of the Nonlinearity Correction Applied to Atmospheric Emitted ...

    11. Statistical and Domain Analytics Applied to PV Module Lifetime...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      Statistical and Domain Analytics Applied to PV Module Lifetime and Degradation Science Statistical and Domain Analytics Applied to PV Module Lifetime and Degradation Science ...

    12. Distributed Design and Analysis of Computer Experiments

      Energy Science and Technology Software Center (OSTI)

      2002-11-11

      DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
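One of the sampling techniques such a design-of-experiments library typically provides is Latin hypercube sampling, which places exactly one sample in each of n equal-probability strata per variable. The sketch below is a generic Python illustration of the idea; DDACE's actual C++ API is not shown, and the temperature-range scaling is an invented example.

```python
# Hedged sketch: Latin hypercube sampling on the unit cube, then scaling
# one column to a physical range (illustrative, not the DDACE interface).
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Return an (n_samples, n_vars) LHS design on [0, 1): one point per stratum."""
    u = rng.random((n_samples, n_vars))        # jitter within each stratum
    out = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        perm = rng.permutation(n_samples)      # shuffle stratum order per variable
        out[:, j] = (perm + u[:, j]) / n_samples
    return out

rng = np.random.default_rng(1)
design = latin_hypercube(10, 3, rng)
temps = 300.0 + 100.0 * design[:, 0]           # e.g. scale column 0 to 300-400 K
```

Each row of `design` becomes one set of input values for a run of the user's application code; the ten runs together cover every decile of every variable.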

    13. Noise tolerant spatiotemporal chaos computing

      SciTech Connect (OSTI)

      Kia, Behnam; Kia, Sarvenaz; Ditto, William L.; Lindner, John F.; Sinha, Sudeshna

      2014-12-01

      We introduce and design a noise tolerant chaos computing system based on a coupled map lattice (CML) and the noise reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single map chaos computing system. In this CML based approach to computing, under the coupled dynamics, the local noise from different nodes of the lattice diffuses across the lattice, and the noise contributions attenuate one another, resulting in a system with less noise content and a more robust chaos computing architecture.
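The noise-averaging mechanism the abstract describes can be sketched directly: a diffusive coupling step mixes each node with its neighbors, so independent per-node noise partially cancels. The logistic-map dynamics, lattice size, and coupling strength below are illustrative choices, not the paper's parameters.

```python
# Hedged sketch: one coupled-map-lattice update, comparing the spread of
# per-node noise with and without diffusive coupling (illustrative values).
import numpy as np

def cml_step(x, r=4.0, eps=0.3):
    """One CML update: logistic map locally, then nearest-neighbor diffusive mix."""
    f = r * x * (1.0 - x)                                  # chaotic local dynamics
    return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(7)
x = 0.3 + 0.01 * rng.standard_normal(64)   # lattice state with independent local noise
f_uncoupled = 4.0 * x * (1.0 - x)          # update with no coupling (single-map case)
x_coupled = cml_step(x)                    # update with diffusive coupling
noise_uncoupled = f_uncoupled.std()
noise_coupled = x_coupled.std()            # coupling averages the noise down
```

Because the coupling step is a convex (doubly stochastic) average over neighbors, the spread of independent node noise after the coupled update is strictly smaller than after uncoupled updates, which is the attenuation effect exploited for robust chaos computing.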

    14. AMRITA -- A computational facility

      SciTech Connect (OSTI)

      Shepherd, J.E.; Quirk, J.J.

      1998-02-23

      Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems, generates the plots shown; outputs the LATEX to typeset the notes; performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.

    15. Computer memory management system

      DOE Patents [OSTI]

      Kirk, III, Whitson John

      2002-01-01

      A computer memory management system utilizing a memory structure system of "intelligent" pointers in which information related to the use status of the memory structure is designed into the pointer. Through this pointer system, the present invention provides essentially automatic memory management (often referred to as garbage collection) by allowing relationships between objects to have definite memory management behavior through a coding protocol which describes when relationships should be maintained and when the relationships should be broken. In one aspect, the present invention allows automatic breaking of strong links to facilitate object garbage collection, coupled with relationship adjectives which define deletion of associated objects. In another aspect, the present invention includes simple-to-use infinite undo/redo functionality in that it has the capability, through a simple function call, to undo all of the changes made to a data model since the previous 'valid state' was noted.
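The strong-vs-breakable link distinction above has a familiar analogue in standard Python: a weak reference records a relationship without keeping its target alive, so the link "breaks" automatically when the last strong reference goes away. This sketch illustrates only the concept; the patented pointer encoding and coding protocol are not shown.

```python
# Hedged sketch: a breakable parent link via weakref (concept illustration only).
import gc
import weakref

class Node:
    def __init__(self, name):
        self.name = name
        self._parent = None                # weak (breakable) link, set via property

    @property
    def parent(self):
        return self._parent() if self._parent is not None else None

    @parent.setter
    def parent(self, node):
        # Store a weak reference: the child never prevents parent collection.
        self._parent = weakref.ref(node)

parent = Node("root")
child = Node("leaf")
child.parent = parent
alive_before = child.parent is not None    # link resolves while parent is alive
del parent                                 # drop the only strong reference
gc.collect()
alive_after = child.parent is not None     # weak link broke automatically
```

The "relationship adjectives" of the patent go further, letting the relationship itself declare whether deleting one object should delete its associates.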

    16. Mathematical modelling of post combustion in Dofasco's KOBM

      SciTech Connect (OSTI)

      Gou, H.; Irons, G.A.; Lu, W.K.

      1992-01-01

      In the AISI Direct Steelmaking program, trials were undertaken in Dofasco's 300 Tonne KOBM to examine post combustion. To support this work, a two-dimensional turbulent mathematical model has been developed to describe gas flow, combustion reactions and heat transfer (radiation and convection) in converter-type steelmaking processes. Gaseous flow patterns, temperature and heat flux distributions in the furnace were calculated with this model. Key findings are: the post combustion ratio is determined from the rates of oxygen supply, oxygen used for decarburization, and the remainder available for post combustion, i.e., it is deducible from a mass balance calculation; comparison between the heat transfer fluxes calculated with the model and those measured industrially indicates that the conventionally defined heat transfer efficiency over-estimates the heat recovered by the bath by about 20%; and the location of the combustion zone can be controlled, to a certain extent, by adjusting the lance practice.
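The mass-balance point above can be made concrete: oxygen not consumed by decarburization is available to post-combust CO to CO2, so the post combustion ratio follows from the supply rates alone. The sketch below assumes the common gas-phase definition PCR = CO2/(CO + CO2) and invented rates in mol/s; the paper's exact definition and numbers are not reproduced here.

```python
# Hedged sketch: post combustion ratio from an oxygen mass balance
# (assumed definition PCR = CO2 / (CO + CO2); rates are illustrative).
def post_combustion_ratio(o2_supply, o2_decarb):
    """PCR deduced from oxygen supply and oxygen used for decarburization."""
    # Decarburization: C + 1/2 O2 -> CO, so CO produced = 2 * O2 used there.
    co_produced = 2.0 * o2_decarb
    # Post combustion: CO + 1/2 O2 -> CO2, with the remaining oxygen.
    o2_post = o2_supply - o2_decarb
    co2 = min(2.0 * o2_post, co_produced)
    co = co_produced - co2
    return co2 / (co + co2)

pcr = post_combustion_ratio(o2_supply=6.0, o2_decarb=5.0)   # modest excess oxygen
```

With 6 mol/s of oxygen supplied and 5 mol/s consumed by decarburization, the remaining 1 mol/s burns 2 mol/s of the 10 mol/s CO stream, giving a PCR of 0.2 under these assumptions.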

    17. Mathematical modelling of post combustion in Dofasco's KOBM

      SciTech Connect (OSTI)

      Gou, H.; Irons, G.A.; Lu, W.K.

      1992-12-31

      In the AISI Direct Steelmaking program, trials were undertaken in Dofasco's 300 Tonne KOBM to examine post combustion. To support this work, a two-dimensional turbulent mathematical model has been developed to describe gas flow, combustion reactions and heat transfer (radiation and convection) in converter-type steelmaking processes. Gaseous flow patterns, temperature and heat flux distributions in the furnace were calculated with this model. Key findings are: the post combustion ratio is determined from the rates of oxygen supply, oxygen used for decarburization, and the remainder available for post combustion, i.e., it is deducible from a mass balance calculation; comparison between the heat transfer fluxes calculated with the model and those measured industrially indicates that the conventionally defined heat transfer efficiency over-estimates the heat recovered by the bath by about 20%; and the location of the combustion zone can be controlled, to a certain extent, by adjusting the lance practice.

    18. Mathematical model of testing of pipeline integrity by thermal fields

      SciTech Connect (OSTI)

      Vaganova, Nataliia

      2014-11-18

      Thermal fields testing at the ground surface above a pipeline is considered. One method to obtain and investigate an ideal thermal field in different environments is direct numerical simulation of the heat transfer processes, taking into account the most important physical factors. In the paper a mathematical model of heat propagation from an underground source is described, accounting for physical factors such as water filtration in soil and solar radiation. Thermal processes are considered in a 3D domain in which the heat source is a pipeline at constant temperature with a non-uniformly insulated shell (with 'damages'). This problem leads to the solution of a heat diffusion equation with nonlinear boundary conditions. Approaches to the analysis of thermal fields for detecting damage are considered.
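The core of such a simulation, heat spreading from a constant-temperature buried source to the ground surface, can be sketched with an explicit finite-difference solve of the heat equation. Grid size, diffusivity, and boundary values below are illustrative; the filtration and solar-radiation terms and the nonlinear boundary conditions of the paper are omitted.

```python
# Hedged sketch: 2-D explicit finite-difference heat diffusion from a
# constant-temperature buried source (all constants illustrative).
import numpy as np

nx, ny = 40, 40
alpha, dx, dt = 1e-2, 0.1, 0.1       # diffusivity, grid spacing, time step
assert alpha * dt / dx**2 <= 0.25    # explicit 2-D stability condition

T = np.zeros((nx, ny))               # soil initially at 0 (relative) degrees
src = (slice(18, 22), slice(18, 22)) # buried pipeline cross-section
for _ in range(500):
    T[src] = 60.0                    # pipeline held at constant temperature
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T += alpha * dt * lap            # forward-Euler diffusion step
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 0.0   # far-field boundary
surface_profile = T[:, -2]           # "ground surface" row above the source
```

The shape of `surface_profile` is the ideal surface thermal field the testing method inspects; insulation damage would show up as local perturbations of this profile.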

    19. Mathematical modeling of stormwater pollution in a tidal embayment

      SciTech Connect (OSTI)

      Najjar, K.F.

      1989-01-01

      It has been recognized for many years that stormwater runoff provides a transport mechanism for non-point pollutants into the nation's waterways. As more watershed areas continue to urbanize, greater increases in pollutant loadings will continue to impact the water quality of the receiving water bodies. In many instances, the pollutant impact exceeds the assimilative capacity of the receiving water. To estimate the potential impacts of stormwater pollution, mathematical models are constructed. In this dissertation, mathematical models have been constructed to estimate the non-point pollutant loadings from an urbanizing area as well as to model the assimilative capacity of the receiving tidal embayment system. The models are capable of simulating the hydrologic aspects as well as the water quality cycles of the system as a function of urbanization. In determining the response of the receiving water system to stormwater loadings, the change in receiving water quality is modeled spatially as well as temporally. The overall model is composed of three subsystem models: a stormwater model, a hydrodynamic tidal model, and a receiving water quality model. Construction of the stormwater model is based on STORM (Storage, Treatment, Overflow, Runoff Model) by the US Army Corps of Engineers. A groundwater component has been added to adjust the model for application to the study area, Lakes Bay, New Jersey. The tidal model is developed from a pseudo two-dimensional approach. The methodology utilizes the link-node concept to simulate the embayment system. Solutions to the equations of motion and continuity are obtained using a finite difference method. The receiving water quality model is a two-dimensional, time-variable water quality model based on a finite segment approach.

    20. Sandia National Laboratories: Advanced Simulation and Computing:

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Computational Systems & Software Environment Computational Systems & Software Environment Advanced Simulation and Computing Computational Systems & Software Environment Integrated Codes Physics & Engineering Models Verification & Validation Facilities Operation & User Support Research & Collaboration Contact ASC Advanced Simulation and Computing Computational Systems & Software Environment Crack Modeling The Computational Systems & Software Environment

    1. Engineering Physics and Mathematics Division progress report for period ending August 31, 1989

      SciTech Connect (OSTI)

      Not Available

      1989-12-01

      This paper contains abstracts on research performed at the Engineering Physics and Mathematics Division of Oak Ridge National Laboratory. The areas covered are: mathematical science; nuclear-data measurement and evaluation; intelligent systems; nuclear analysis and shielding; and Engineering Physics Information Center. (LSP)

    2. Slide 1

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Group Computational Science and Mathematics Division Computational Information and ... video databases - Computational mathematics framework * Capability: - Coupled ...

    3. Paul C. Messina | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      He led the Computational and Computer Science component of Caltech's research project funded by the Academic Strategic Alliances Program of the Accelerated Strategic Computing ...

    4. CLAMR (Compute Language Adaptive Mesh Refinement)

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      CLAMR (Compute Language Adaptive Mesh Refinement) CLAMR (Compute Language Adaptive Mesh Refinement) CLAMR (Compute Language Adaptive Mesh Refinement) is being developed as a DOE...

    5. Other World Computing | Open Energy Information

      Open Energy Info (EERE)

      World Computing Jump to: navigation, search Name Other World Computing Facility Other World Computing Sector Wind energy Facility Type Community Wind Facility Status In Service...

    6. Advanced Computational Methods for Security Constrained Financial Transmission Rights

      SciTech Connect (OSTI)

      Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria; Zhou, Ning; Huang, Zhenyu

      2012-07-26

      Financial Transmission Rights (FTRs) are financial insurance tools that help power market participants reduce price risks associated with transmission congestion. FTRs are issued by solving a constrained optimization problem whose objective is to maximize FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal, or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, an innovative mathematical reformulation of the FTR problem is first presented which dramatically improves the computational efficiency of the optimization problem. A novel non-linear dynamic system (NDS) approach is then proposed to solve the reformulated problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers such as CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is comparable to, and in some cases outperforms, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
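
      For scale, the LP formulation the authors benchmark against can be sketched with SciPy on a toy two-line, two-bid system. The bids, PTDF matrix, and line limits below are invented for illustration and are not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy FTR auction: maximize bid-weighted awards subject to line flow limits.
# All numbers are hypothetical, not WECC or IEEE test-system data.
bids = np.array([30.0, 25.0])        # $/MW bid for each FTR request
ptdf = np.array([[0.6, 0.4],         # power transfer distribution factors:
                 [0.4, 0.6]])        # rows = lines, columns = FTR requests
limits = np.array([100.0, 80.0])     # line flow limits (MW)

# linprog minimizes, so negate the social-welfare objective.
res = linprog(c=-bids,
              A_ub=ptdf, b_ub=limits,        # security (flow) constraints
              bounds=[(0, 150), (0, 150)])   # requested FTR amounts (MW)
awards = res.x                               # optimal awards per request
```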

    7. Mathematical simulation of the amplification of 1790-nm laser radiation in a nuclear-excited He-Ar plasma containing nanoclusters of uranium compounds

      SciTech Connect (OSTI)

      Kosarev, V A; Kuznetsova, E E

      2014-02-28

      The possibility of applying dusty active media in nuclear-pumped lasers has been considered. The amplification of 1790-nm radiation in a nuclear-excited dusty He-Ar plasma is studied by mathematical simulation. The influence of nanoclusters on the component composition of the medium and the kinetics of the processes occurring in it is analysed using a specially developed kinetic model, including 72 components and more than 400 reactions. An analysis of the results indicates that amplification can in principle be implemented in an active He-Ar laser medium containing 10-nm nanoclusters of metallic uranium and uranium dioxide. (lasers)

    8. PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

      SciTech Connect (OSTI)

      Joubert, Wayne; Kothe, Douglas B; Nam, Hai Ah

      2009-12-01

      In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will become necessarily heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption promises to be substantial, not unlike the change to the message-passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. 
The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken. 
This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

    9. Computational Fluid Dynamics Library

      Energy Science and Technology Software Center (OSTI)

      2005-03-04

      CFDLib05 is the Los Alamos Computational Fluid Dynamics LIBrary. This is a collection of hydrocodes using a common data structure and a common numerical method, for problems ranging from single-field, incompressible flow, to multi-species, multi-field, compressible flow. The data structure is multi-block, with a so-called structured grid in each block. The numerical method is a Finite-Volume scheme employing a state vector that is fully cell-centered. This means that the integral form of the conservation laws is solved on the physical domain that is represented by a mesh of control volumes. The typical control volume is an arbitrary quadrilateral in 2D and an arbitrary hexahedron in 3D. The Finite-Volume scheme is for time-unsteady flow and remains well coupled by means of time and space centered fluxes; if a steady state solution is required, the problem is integrated forward in time until the user is satisfied that the state is stationary.
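
      A minimal cell-centered finite-volume update of the kind the library generalizes can be sketched for 1-D linear advection with an upwind flux. The grid, speed, and initial condition here are illustrative assumptions, not CFDLib code.

```python
import numpy as np

# Hedged 1-D sketch of a cell-centered finite-volume scheme: linear
# advection, first-order upwind flux, periodic domain. Parameters are
# illustrative assumptions only.
n, a = 100, 1.0                      # number of cells, advection speed
dx, dt = 1.0 / n, 0.5 / n            # CFL number a*dt/dx = 0.5 (stable)
x = (np.arange(n) + 0.5) * dx        # cell centers
u = np.exp(-200.0 * (x - 0.3) ** 2)  # cell-averaged state vector
mass0 = u.sum() * dx                 # total mass, conserved by the scheme

for _ in range(50):
    f_left = a * np.roll(u, 1)           # upwind flux through each left face
    u = u - dt / dx * (a * u - f_left)   # u_i -= dt/dx*(F_{i+1/2} - F_{i-1/2})
```

Because the update moves the same flux out of one cell and into its neighbour, total mass is conserved to rounding error, the discrete analogue of solving the integral form of the conservation laws.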

    10. Python and computer vision

      SciTech Connect (OSTI)

      Doak, J. E.; Prasad, Lakshman

      2002-01-01

      This paper discusses the use of Python in a computer vision (CV) project. We begin by providing background information on the specific approach to CV employed by the project. This includes a brief discussion of Constrained Delaunay Triangulation (CDT), the Chordal Axis Transform (CAT), shape feature extraction and syntactic characterization, and normalization of strings representing objects. (The terms 'object' and 'blob' are used interchangeably, both referring to an entity extracted from an image.) The rest of the paper focuses on the use of Python in three critical areas: (1) interactions with a MySQL database, (2) rapid prototyping of algorithms, and (3) gluing together all components of the project including existing C and C++ modules. For (1), we provide a schema definition and discuss how the various tables interact to represent objects in the database as tree structures. (2) focuses on an algorithm to create a hierarchical representation of an object, given its string representation, and an algorithm to match unknown objects against objects in a database. And finally, (3) discusses the use of Boost Python to interact with the pre-existing C and C++ code that creates the CDTs and CATs, performs shape feature extraction and syntactic characterization, and normalizes object strings. The paper concludes with a vision of the future use of Python for the CV project.
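
      The matching step in (2), comparing an unknown object's normalized string against database strings, can be sketched with the standard library's difflib. The feature alphabet and database entries below are invented stand-ins, not the project's actual encoding.

```python
from difflib import SequenceMatcher

# Hypothetical shape-string database: each blob is encoded as a string of
# shape-feature codes (L=line, C=curve, S=spur -- an invented alphabet).
database = {
    "wrench":  "LLCSLLCS",
    "gear":    "CSCSCSCS",
    "bracket": "LLLSLLLS",
}

def best_match(unknown):
    """Return the database object whose shape string is most similar."""
    return max(database,
               key=lambda name: SequenceMatcher(None, unknown,
                                                database[name]).ratio())

match = best_match("LLCSLLCC")   # one code off from "wrench"
```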

    11. Study of trajectories and combustion of fuel-oil droplets in the combustion chamber of a power-plant boiler with the use of a mathematical model

      SciTech Connect (OSTI)

      Enyakin, Yu.P.; Usman, Yu.M.

      1988-03-01

      A mathematical model was developed to permit study of the behavior of fuel-oil droplets in a combustion chamber, and results are presented from a computer calculation performed for the 300-MW model TGMP-314P boiler of a power plant. The program written to perform the calculations was organized so that the first stage would entail calculation of the combustion (vaporization) of a droplet of liquid fuel. The program then provided for a sudden decrease in the mass of the fuel particle, simulating rupture of the coke shell and ejection of some of the liquid. The program then considered the combustion of a hollow coke particle. Physicochemical parameters characteristic of fuel oil M-100 were introduced in the program in the first stage of computations, while parameters characteristic of the coke particle associated with an unburned fuel-oil droplet were included in the second stage.
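
      The first stage of such a calculation, droplet vaporization, is classically described by the d²-law, under which the squared diameter decreases linearly in time: D²(t) = D₀² − Kt. A hedged sketch with assumed constants (not the paper's fuel-oil M-100 or TGMP-314P parameters):

```python
import math

# Hedged d^2-law sketch for droplet vaporization. D0 and K are assumed
# illustrative values, not parameters from the paper.
D0 = 100e-6   # initial droplet diameter (m)
K = 1.0e-7    # evaporation constant (m^2/s)

def diameter(t):
    """Droplet diameter at time t; zero after complete vaporization."""
    d2 = D0 ** 2 - K * t
    return math.sqrt(d2) if d2 > 0.0 else 0.0

lifetime = D0 ** 2 / K   # time to complete vaporization (s)
```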

    12. Bioinformatics Computing Consultant Position Available

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Bioinformatics Computing Consultant Position Available Bioinformatics Computing Consultant Position Available October 31, 2011 by Katie Antypas NERSC and the Joint Genome Institute (JGI) are searching for two individuals who can help biologists exploit advanced computing platforms. JGI provides production sequencing and genomics for the Department of Energy. These activities are critical to the DOE missions in areas related to clean energy generation and environmental characterization and

    13. computational-fluid-dynamics-training

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      Table of Contents Date Location Advanced Hydraulic and Aerodynamic Analysis Using CFD March 27-28, 2013 Argonne TRACC Argonne, IL Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis March 21-22, 2012 Argonne TRACC Argonne, IL Computational Hydraulics and Aerodynamics using STAR-CCM+ for CFD Analysis March 30-31, 2011 Argonne TRACC Argonne, IL Computational Hydraulics for Transportation Workshop September 23-24, 2009 Argonne TRACC West Chicago, IL

    14. computation | National Nuclear Security Administration

      National Nuclear Security Administration (NNSA)

      Livermore National Laboratory (LLNL), announced her retirement last week after 15 years of leading Livermore's Computation Directorate. "Dona has successfully led a ...

    15. Parallel Computing Summer Research Internship

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      should have basic experience with a scientific computing language, such as C, C++, or Fortran, and with the Linux operating system. Duration & Location The program will last ten...

    16. History | Argonne Leadership Computing Facility

      Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

      dedicated to enabling leading-edge computational capabilities to advance fundamental ... (ASCR) program within DOE's Office of Science, the ALCF is one half of the DOE ...

    17. Institutional computing (IC) information session

      SciTech Connect (OSTI)

      Koch, Kenneth R; Lally, Bryan R

      2011-01-19

      The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.

    18. Hanford Meteorological Station computer codes: Volume 2, The PROD computer code

      SciTech Connect (OSTI)

      Andrews, G.L.; Buck, J.W.

      1987-09-01

      At the end of each work shift (day, swing, and graveyard), the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, issues a forecast of the 200-ft-level wind speed and direction and the weather for use at B Plant and PUREX. These forecasts are called production forecasts. The PROD computer code is used to archive these production forecasts and apply quality assurance checks to the forecasts. The code accesses an input file, which contains the previous forecast's date and shift number, and an output file, which contains the production forecasts for the current month. A data entry form consisting of 20 fields is included in the program. The fields must be filled in by the user. The information entered is appended to the current production monthly forecast file, which provides an archive for the production forecasts. This volume describes the implementation and operation of the PROD computer code at the HMS.
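
      The archiving-with-quality-assurance pattern the PROD code implements can be sketched in Python. The field names, value ranges, and CSV layout below are invented stand-ins for the real 20-field data entry form.

```python
import csv
import io

# Hypothetical sketch of PROD-style forecast archiving: validate a shift
# forecast record, then append it to the monthly archive. Fields and limits
# are invented, not the HMS code's actual form.
VALID_SHIFTS = {"day", "swing", "graveyard"}

def check_forecast(rec):
    """Minimal quality-assurance checks before archiving."""
    if rec["shift"] not in VALID_SHIFTS:
        raise ValueError("unknown shift: %s" % rec["shift"])
    if not 0 <= float(rec["wind_speed_mph"]) <= 150:
        raise ValueError("wind speed out of range")
    if not 0 <= float(rec["wind_dir_deg"]) < 360:
        raise ValueError("wind direction out of range")

def append_forecast(archive, rec):
    """Append one production forecast to the monthly archive (CSV here)."""
    check_forecast(rec)
    csv.writer(archive).writerow(
        [rec["date"], rec["shift"], rec["wind_speed_mph"], rec["wind_dir_deg"]])

archive = io.StringIO()   # stands in for the monthly forecast file
append_forecast(archive, {"date": "1987-09-01", "shift": "day",
                          "wind_speed_mph": "12", "wind_dir_deg": "270"})
```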

    19. Applying Human Factors during the SIS Life Cycle

      SciTech Connect (OSTI)

      Avery, K.

      2010-05-05

      Safety Instrumented Systems (SIS) are widely used in U.S. Department of Energy (DOE) nonreactor nuclear facilities for safety-critical applications. Although use of SIS technology and computer-based digital controls can improve performance and safety, it potentially introduces additional complexities, such as failure modes that are not readily detectable. Either automated actions or manual (operator) actions may be required to complete the safety instrumented function to place the process in a safe state or mitigate a hazard in response to an alarm or indication. DOE will issue a new standard, Application of Safety Instrumented Systems Used at DOE Nonreactor Nuclear Facilities, to provide guidance for the design, procurement, installation, testing, maintenance, operation, and quality assurance of SIS used in safety significant functions at DOE nonreactor nuclear facilities. The DOE standard focuses on utilizing the process industry consensus standard, American National Standards Institute/International Society of Automation (ANSI/ISA) 84.00.01, Functional Safety: Safety Instrumented Systems for the Process Industry Sector, to support reliable SIS design throughout the DOE complex. SIS design must take into account human-machine interfaces and their limitations and follow good human factors engineering (HFE) practices. HFE encompasses many diverse areas (e.g., information display, user-system interaction, alarm management, operator response, control room design, and system maintainability), which affect all aspects of system development and modification. This paper presents how HFE processes and principles apply throughout the SIS life cycle to support the design and use of SIS at DOE nonreactor nuclear facilities.

    20. Integrated Computational Materials Engineering (ICME) for Mg...

      Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

      More Documents & Publications Integrated Computational Materials Engineering (ICME) for Mg: International Pilot Project Integrated Computational Materials Engineering (ICME) for ...