V-136: Oracle Critical Patch Update Advisory - April 2013
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
April 17, 2013 - 1:46am. PROBLEM: Oracle Critical Patch...
V-095: Oracle Java Flaws Let Remote Users Execute Arbitrary Code
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
February 20, 2013 - 12:38am. PROBLEM: Oracle...
T-537: Oracle Critical Patch Update Advisory - January 2011
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
January 19, 2011 - 7:11am.
U-191: Oracle Java Multiple Vulnerabilities | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
June 14, 2012 - 7:00am. PROBLEM: Multiple vulnerabilities have been reported in Oracle Java, which can be exploited by malicious local users. PLATFORM: Oracle Java JDK 1.7.x / 7.x; Oracle Java JRE 1.7.x / 7.x; Sun Java JDK 1.5.x; Sun Java JDK 1.6.x / 6.x; Sun Java JRE 1.4.x; Sun Java JRE 1.5.x / 5.x; Sun Java JRE 1.6.x / 6.x; Sun Java SDK 1.4.x. ABSTRACT: The Critical Patch Update for Java SE also includes
V-181: Oracle Java SE Critical Patch Update Advisory - June 2013 | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
June 19, 2013 - 1:06am. PROBLEM: Oracle Java SE Critical Patch Update Advisory - June 2013. PLATFORM: Version(s) 5.0 Update 45, 6 Update 45, 7 Update 21, and prior versions. ABSTRACT: Multiple vulnerabilities were reported in Oracle Java. REFERENCE LINKS: Oracle Java SE Critical Patch Update June 2013; SecurityTracker Alert ID: 1028679
V-185: Apache OpenOffice SDK Oracle Java JavaDoc Spoofing Vulnerability | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
June 25, 2013 - 12:41am. PROBLEM: Apache OpenOffice SDK Oracle Java JavaDoc Spoofing Vulnerability. PLATFORM: Apache OpenOffice SDK 3.x. ABSTRACT: Apache has acknowledged a vulnerability in Apache OpenOffice SDK. REFERENCE LINKS: Apache OpenOffice; Secunia Advisory SA53963; Secunia Advisory SA53846; CVE-2013-1571. IMPACT ASSESSMENT:
Energy Science and Technology Software Center (OSTI)
2007-06-01
The Oracle Management Tool Suite is used to automatically manage Oracle based systems. This includes startup and shutdown of databases and application servers as well as backup, space management, workload management and log file management.
Oracle Financials PIA, Bechtel Jacobs Company, LLC
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Oracle Financials PIA, Bechtel Jacobs Company, LLC (PDF)
V-051: Oracle Solaris Java Multiple Vulnerabilities | Department...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Related Articles: U-191: Oracle Java Multiple Vulnerabilities; U-105: Oracle Java SE Critical Patch Update Advisory; T-576: Oracle Solaris Adobe Flash Player Multiple Vulnerabilities
Tomcat, Oracle & XML Web Archive
Energy Science and Technology Software Center (OSTI)
2008-01-01
The TOX (Tomcat Oracle & XML) web archive is a foundation for development of HTTP-based applications using Tomcat (or some other servlet container) and an Oracle RDBMS. Use of TOX requires coding primarily in PL/SQL, JavaScript, and XSLT, but also in HTML, CSS and potentially Java. Coded in Java and PL/SQL itself, TOX provides the foundation for more complex applications to be built.
Oracle, Arizona: Energy Resources | Open Energy Information
Open Energy Information (Open El) [EERE & EIA]
Oracle, Arizona: Energy Resources. Coordinates: 32.6109054, -110.7709348
U-019: Oracle Critical Patch Update Advisory - October 2011
Starting with the October 2011 Critical Patch Update, security vulnerability fixes for proprietary components of Oracle Linux will be announced in Oracle Critical Patch Updates.
V-004: Oracle Critical Patch Update Advisory - October 2012
Starting with the October 2012 Critical Patch Update, security vulnerability fixes for proprietary components of Oracle Linux will be announced in Oracle Critical Patch Updates.
U-233: Oracle Database INDEXTYPE CTXSYS.CONTEXT Bug Lets Remote...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Related Articles: U-083: Oracle Critical Patch Update Advisory - January 2012; V-067: Oracle Java Flaw Lets Remote Users Execute Arbitrary Code; T-576: Oracle Solaris Adobe Flash Player...
Oracle accrual plans from requirements to implementation
Rivera, Christine K
2009-01-01
Implementing any new business software can be an intimidating prospect, and this paper is intended to offer some insight into how to approach this challenge with some fundamental rules for success. Los Alamos National Laboratory (LANL) had undergone an original ERP implementation of HRMS, Oracle Advanced Benefits, Worker Self Service, Manager Self Service, Project Accounting, Financials and PO, and recently completed a project to implement Oracle Payroll, Time and Labor, and Accrual Plans. This paper will describe some of the important lessons that can be applied to any implementation as a whole, and then specifically how this knowledge was applied to the design and deployment of Oracle Accrual Plans for LANL. Finally, the functionality available in Oracle Accrual Plans will be described in detail, as well as the detailed setups that were utilized at LANL.
Oracle Database DBFS Hierarchical Storage Overview
Rivenes, A
2011-07-25
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory creates large numbers of images during each shot cycle for the analysis of optics, target inspection and target diagnostics. These images must be readily accessible once they are created and available for the 30 year lifetime of the facility. The Livermore Computing Center (LC) runs a High Performance Storage System (HPSS) that is capable of storing NIF's estimated 1 petabyte of diagnostic images at a fraction of what it would cost NIF to operate its own automated tape library. With Oracle 11g Release 2 database, it is now possible to create an application transparent, hierarchical storage system using the LC's HPSS. Using the Oracle DBMS_LOB and DBMS_DBFS_HS packages a SecureFile LOB can now be archived to storage outside of the database and accessed seamlessly through a DBFS 'link'. NIF has chosen to use this technology to implement a hierarchical store for its image-based SecureFile LOBs. Using a modified external store and DBFS links, files are written to and read from a disk 'staging area' using Oracle's backup utility. Database external procedure calls invoke OS based scripts to manage a staging area and the transfer of the backup files between the staging area and the Lab's HPSS.
U-105: Oracle Java SE Critical Patch Update Advisory
Multiple vulnerabilities were reported in Oracle Java SE. A remote user can execute arbitrary code on the target system. A remote user can cause denial of service conditions.
U-150: Oracle Critical Patch Update Advisory - April 2012
Critical Patch Updates are the primary means of releasing security fixes for Oracle products to customers with valid support contracts. They are released on the Tuesday closest to the 17th day of January, April, July and October.
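The release rule quoted above ("the Tuesday closest to the 17th day of January, April, July and October") is mechanical, so the scheduled date can be computed. A minimal Python sketch; the function name `cpu_date` is ours, and this is a calendar calculation only, not an official Oracle schedule:

```python
from datetime import date, timedelta

def cpu_date(year, month):
    """Tuesday closest to the 17th of a CPU month (January, April, July, October)."""
    assert month in (1, 4, 7, 10)
    seventeenth = date(year, month, 17)
    # weekday(): Monday = 0, Tuesday = 1, ...
    forward = seventeenth + timedelta(days=(1 - seventeenth.weekday()) % 7)
    backward = forward - timedelta(days=7)
    # pick whichever Tuesday lies nearer the 17th
    if forward - seventeenth <= seventeenth - backward:
        return forward
    return backward

print(cpu_date(2012, 4))   # 2012-04-17 (the 17th itself fell on a Tuesday)
```

For example, the rule puts the October 2012 update on the 16th and the July 2011 update on the 19th, since in those months the 17th was not a Tuesday.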
T-672: Oracle Critical Patch Update Advisory - July 2011
Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply CPU fixes as soon as possible. This Critical Patch Update contains 78 new security fixes across all product families.
U-215: Oracle Critical Patch Update Advisory - July 2012
Critical Patch Updates are the primary means of releasing security fixes for Oracle products to customers with valid support contracts. They are released on the Tuesday closest to the 17th day of January, April, July and October.
Oracle Applications Patch Administration Tool (PAT) Beta Version
Energy Science and Technology Software Center (OSTI)
2002-01-04
PAT is a Patch Administration Tool that provides analysis, tracking, and management of Oracle Application patches. Capabilities include: Patch Data Maintenance -- track which Oracle Application patches have been applied to which database instance and machine; Patch Analysis -- capture text files (readme.txt and driver files), with comparison detail for forms, reports, PL/SQL packages, SQL scripts, and JSP modules, and parse and load the current applptch.txt (10.7) or load patch data from Oracle Application database patch tables (11i); Display Analysis -- compare a patch to be applied against the currently installed Appl_top code versions; Patch Detail -- analyze and display one Oracle Application module patch; Patch Management -- automatic queueing and execution of patches, parameter maintenance (settings for the directory structure of the Oracle Application appl_top), validation data maintenance (machine names and instances to patch), scheduling a patch for later execution, running a patch immediately, and reviewing the patch logs; and Patch Management Reports.
V-142: Oracle Java Reflection API Flaw Lets Remote Users Execute...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
V-142: Oracle Java Reflection API Flaw Lets Remote Users Execute Arbitrary Code. April 25, 2013 - 12:14am...
V-189: Oracle VirtualBox 'tracepath' Bug Lets Local Guest Users...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
V-189: Oracle VirtualBox 'tracepath' Bug Lets Local Guest Users Deny Service on the Target Host
T-535: Oracle Critical Patch Update - January 2011
This Critical Patch Update Pre-Release Announcement provides advance information about the Oracle Critical Patch Update for January 2011, which will be released on Tuesday, January 18, 2011. While this Pre-Release Announcement is as accurate as possible at the time of publication, the information it contains may change before publication of the Critical Patch Update Advisory. A Critical Patch Update is a collection of patches for multiple security vulnerabilities. This Critical Patch Update contains 66 new security vulnerability fixes across hundreds of Oracle products. Some of the vulnerabilities addressed in this Critical Patch Update affect multiple products. Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply Critical Patch Update fixes as soon as possible.
U-234: Oracle MySQL User Login Security Bypass Vulnerability
Oracle MySQL is prone to a security bypass vulnerability. Attackers can exploit this issue to bypass certain security restrictions.
NPDES compliance monitoring report: Oracle Ridge Mine, San Manuel, Arizona. Draft report
Stevens, J.
1992-11-03
This report presents the findings of a compliance evaluation inspection of the Oracle Ridge Copper Mine near San Manuel, Arizona, conducted on August 17, 1992. It is part of a series of inspections of uncontrolled discharges of mine drainage.
T-561: IBM and Oracle Java Binary Floating-Point Number Conversion Denial of Service Vulnerability
IBM and Oracle Java products contain a vulnerability that could allow an unauthenticated, remote attacker to cause a denial of service (DoS) condition on a targeted system.
T-558: Oracle Java SE and Java for Business Critical Patch Update Advisory - February 2011
This Critical Patch Update contains 21 new security fixes for Oracle Java SE and Java for Business. 19 of these vulnerabilities may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Problem: Scarcity of clean water leads to disease, death and often international tension. In many parts of the world, access to potable water is limited. The clean water supply...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Tech Transfer Success Stories, 2012. Problem: Optical coatings are ubiquitous, appearing on items that range from electronic devices, photographic lenses, and windows to aircraft sensors, photovoltaic cells, and lightweight plastic goggles for troops in the field. The coatings are applied to materials such as glass and ceramics, which protect or alter the way the material reflects and transmits light. However, the two main methods of applying these coatings - sputtering and chemical vapor
Tool For Editing Structured Query Language Text Within ORACLE Forms Applications
Energy Science and Technology Software Center (OSTI)
1991-02-01
SQTTEXT is an ORACLE SQL*Forms application that allows a programmer to view and edit all the Structured Query Language (SQL) text for a given application on one screen. This application is an outgrowth of the prototyping of an on-line system dictionary for the Worldwide Household Goods Information system for Transportation-Modernization decision support system being prototyped by the Oak Ridge National Laboratory, but it can be applied to all SQL*Forms software development, debugging, and maintenance.
Oracle inequalities for SVMs that are based on random entropy numbers
Steinwart, Ingo
2009-01-01
In this paper we present a new technique for bounding local Rademacher averages of function classes induced by a loss function and a reproducing kernel Hilbert space (RKHS). At the heart of this technique lies the observation that certain expectations of random entropy numbers can be bounded by the eigenvalues of the integral operator associated to the RKHS. We then work out the details of the new technique by establishing two new oracle inequalities for SVMs, which complement and generalize previous results.
T-641: Oracle Java SE Critical Patch Update Advisory - June 2011
Office of Energy Efficiency and Renewable Energy (EERE)
This Critical Patch Update contains 17 new security fixes for Oracle Java SE - 5 apply to client and server deployments of Java SE, 11 apply to client deployments of Java SE only, and 1 applies to server deployments of Java SE only. All of these vulnerabilities may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password. Oracle CVSS scores assume that a user running a Java applet or Java Web Start application has administrator privileges (typical on Windows). Where the user does not run with administrator privileges (typical on Solaris and Linux), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability are "Partial" instead of "Complete", and the corresponding CVSS Base score is 7.5 instead of 10. For issues in Deployment, fixes are only made available for JDK and JRE 6. Users should use Java Web Start in JRE 6 and the new Java Plug-in introduced in 6 Update 10. CVE-2011-0862, CVE-2011-0873, CVE-2011-0815, CVE-2011-0817, CVE-2011-0863, CVE-2011-0864, CVE-2011-0802, CVE-2011-0814, CVE-2011-0871, CVE-2011-0786, CVE-2011-0788, CVE-2011-0866, CVE-2011-0868, CVE-2011-0872, CVE-2011-0867, CVE-2011-0869, and CVE-2011-0865
V-199: Solaris Bugs Let Local Users Gain Root Privileges, Remote...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
recommends applying the July Critical Patch Update. Related Articles: V-181: Oracle Java SE Critical Patch Update Advisory - June 2013; V-051: Oracle Solaris Java Multiple...
Crowdsourcing Initiative Seeks Buildings-Related Problems to Solve | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
June 30, 2015 - 9:00am. Calling all building technology innovators! The Building Technologies Office is partnering with the successful SunShot Catalyst crowdsourcing initiative to identify and solve problems related to software development, data, and/or automation. In the first, "Ideation" phase of the initiative, those
Open Problems, Solved Problems !
U.S. Department of Energy (DOE) all webpages (Extended Search)
Open Problems, Solved Problems and Non-Problems in DOE's Big Data. Kathy Yelick, Professor of Electrical Engineering and Computer Sciences, University of California at B...
This presentation by Amgad Elgowainy of Argonne National Laboratory was given at the DOE Hydrogen Compression, Storage, and Dispensing Workshop in March 2013. csd_workshop_11_elgowainy.pdf (767.06 KB). More Documents & Publications: Hydrogen Delivery Analysis Models; Overview of Station Analysis Tools Developed in Support of H2USA Webinar; Overview of Station Analysis Tools Developed in Support of H2USA; DOE Analysis Related to H2USA
Approach to optimize single turbocharger
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
OPOWER RFI Response. OPOWER submits these comments to the Department of Energy in response to the recently issued Request for Information on smart grid implementation challenges. In particular, OPOWER writes to comment on the importance of effective customer engagement in smart grid policy making. OPOWER RFI Response (51.98 KB). More Documents & Publications: Insights from Smart Meters: The Potential for Peak Hour Savings from Behavior-Based Programs; Voices of Experience:
V-111: Multiple vulnerabilities have been reported in Puppet...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
vulnerable system. SOLUTION: Update to a fixed version. Related Articles: V-090: Adobe Flash Player AIR Multiple Vulnerabilities; V-083: Oracle Java Multiple...
Statistics Show Bearing Problems Cause the Majority of Wind Turbine Gearbox Failures | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
September 17, 2015 - 12:29pm. In the past, the wind energy industry has been relatively conservative in terms of data sharing, especially with the general public, which has inhibited the research community's efforts to identify and mitigate the premature failures of wind turbine
Tackling Energy Problems For America's Tribal Nations | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
June 20, 2012 - 6:07pm. Julia Bovey, First Wind; Tracey LeBeau; Neil Kiely, First Wind; and Bob Springer (NREL) at First Wind's new Rollins project near Lincoln, Maine. Tracey A. LeBeau, Former Director, Office of Indian Energy Policy
U.S. Department of Energy (DOE) all webpages (Extended Search)
Known Problems: No Open Issues. There are currently no open issues with Euclid. Last edited: 2016-04-29 11:34:51
U.S. Department of Energy (DOE) all webpages (Extended Search)
Known Problems: Viewing entries posted in 2001. There are no blog entries. Last edited: 2016-04-29 11:34:51
V-083: Oracle Java Multiple Vulnerabilities
A Critical Patch Update is a collection of patches for multiple security vulnerabilities. The Critical Patch Update for Java SE also includes non-security fixes. Critical Patch Updates are cumulative and each advisory describes only the security fixes added since the previous Critical Patch Update and Security Alert.
The Guderley problem revisited
Ramsey, Scott D [Los Alamos National Laboratory; Kamm, James R [Los Alamos National Laboratory; Bolstad, John H [NON LANL
2009-01-01
The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.
Sandia National Laboratories Problem
U.S. Department of Energy (DOE) all webpages (Extended Search)
Problem Natural disasters such as Hurricane Katrina in New Orleans and the tsunami in Japan in 2011 create emergency situations that must be dealt with quickly and effectively in...
Bicriteria network design problems
Marathe, M.V.; Ravi, R.; Sundaram, R.; Ravi, S.S.; Rosenkrantz, D.J.; Hunt, H.B. III
1997-11-20
The authors study a general class of bicriteria network design problems. A generic problem in this class is as follows: Given an undirected graph and two minimization objectives (under different cost functions), with a budget specified on the first, find a subgraph from a given subgraph class that minimizes the second objective subject to the budget on the first. They consider three different criteria -- the total edge cost, the diameter and the maximum degree of the network. Here, they present the first polynomial-time approximation algorithms for a large class of bicriteria network design problems for the above mentioned criteria. The following general types of results are presented. First, they develop a framework for bicriteria problems and their approximations. Second, when the two criteria are the same they present a black box parametric search technique. This black box takes in as input an (approximation) algorithm for the single-criterion situation and generates an approximation algorithm for the bicriteria case with only a constant factor loss in the performance guarantee. Third, when the two criteria are the diameter and the total edge costs they use a cluster based approach to devise approximation algorithms. The solutions violate both the criteria by a logarithmic factor. Finally, for the class of treewidth-bounded graphs, they provide pseudopolynomial-time algorithms for a number of bicriteria problems using dynamic programming. The authors show how these pseudopolynomial-time algorithms can be converted to fully polynomial-time approximation schemes using a scaling technique.
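The budgeted formulation above can be made concrete with a brute-force toy, not the paper's approximation algorithms: among spanning trees whose total edge cost respects a budget on the first criterion, pick one minimizing the maximum vertex degree as the second criterion. The graph data here is invented for illustration.

```python
from itertools import combinations

# edges of a small undirected graph: (u, v, cost) -- illustrative data only
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (0, 2, 2), (1, 3, 2), (0, 3, 3)]
n = 4

def is_spanning_tree(tree):
    """True if the n-1 chosen edges connect all n vertices without a cycle."""
    if len(tree) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # adding this edge would close a cycle
        parent[ru] = rv
    return True

def best_tree(budget):
    """Among spanning trees with total cost <= budget, minimize max degree."""
    best, best_deg = None, None
    for tree in combinations(edges, n - 1):
        if not is_spanning_tree(tree):
            continue
        if sum(c for _, _, c in tree) > budget:
            continue              # violates the budgeted (first) criterion
        deg = [0] * n
        for u, v, _ in tree:
            deg[u] += 1
            deg[v] += 1
        if best_deg is None or max(deg) < best_deg:
            best, best_deg = tree, max(deg)
    return best, best_deg
```

With a budget of 3 only the unit-cost path survives (max degree 2); with a budget of 2 no spanning tree is feasible, illustrating how the budget constrains the trade-off.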
Emery, V.J.; Kivelson, S.A.
1993-12-31
In the past few years there has been a resurgence of interest in dynamical impurity problems, as a result of developments in the theory of correlated electron systems. The general dynamical impurity problem is a set of conduction electrons interacting with an impurity which has internal degrees of freedom. The simplest and earliest example, the Kondo problem, has attracted interest since the mid-sixties not only because of its physical importance but also as an example of a model displaying logarithmic divergences order by order in perturbation theory. It provided one of the earliest applications of the renormalization group method, which is designed to deal with just such a situation. As we shall see, the antiferromagnetic Kondo model is controlled by a strong-coupling fixed point, and the essence of the renormalization group solution is to carry out the global renormalization numerically starting from the original (weak-coupling) Hamiltonian. In these lectures, we shall describe an alternative route in which we identify an exactly solvable model which renormalizes to the same fixed point as the original dynamical impurity problem. This approach is akin to determining the critical behavior at a second order phase transition point by solving any model in a given universality class.
The inhibiting bisection problem.
Pinar, Ali
2010-11-01
Given a graph where each vertex is assigned a generation or consumption volume, we try to bisect the graph so that each part has a significant generation/consumption mismatch, and the cutsize of the bisection is small. Our motivation comes from the vulnerability analysis of distribution systems such as the electric power system. We show that the constrained version of the problem, where we place either the cutsize or the mismatch significance as a constraint and optimize the other, is NP-complete, and provide an integer programming formulation. We also propose an alternative relaxed formulation, which can trade-off between the two objectives and show that the alternative formulation of the problem can be solved in polynomial time by a maximum flow solver. Our experiments with benchmark electric power systems validate the effectiveness of our methods.
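The constrained version described above (bound the cutsize, maximize the mismatch) can be illustrated by exhaustive search on a toy instance; this is not the paper's integer-programming or max-flow formulation, and the volumes and edges below are invented.

```python
from itertools import combinations

# vertex -> net volume (positive = generation, negative = consumption)
volume = {0: 5, 1: 3, 2: -4, 3: -2, 4: 1, 5: -3}
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5)]

def best_bisection(max_cut):
    """Constrained form: cutsize <= max_cut, maximize one part's net mismatch."""
    nodes = sorted(volume)
    best, best_mismatch = None, -1
    for k in range(1, len(nodes)):
        for part in combinations(nodes, k):
            side = set(part)
            cut = sum(1 for u, v in edges if (u in side) != (v in side))
            if cut > max_cut:
                continue
            mismatch = abs(sum(volume[v] for v in side))
            if mismatch > best_mismatch:
                best, best_mismatch = side, mismatch
    return best, best_mismatch
```

Tightening the cut budget from 7 to 2 forces the search away from the most unbalanced split (mismatch 9, cut 4) to a cheaper one (mismatch 8, cut 2), which is the tension the paper formalizes.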
Sandia National Laboratories Problem
U.S. Department of Energy (DOE) all webpages (Extended Search)
Sandia National Laboratories Problem Natural disasters such as Hurricane Katrina in New Orleans and the tsunami in Japan in 2011 create emergency situations that must be dealt with quickly and effectively in order to minimize injury and loss of life. Simulating such events before they occur can help emergency responders fine-tune their preparations. To create the most accurate modeling scenarios, exercise planners need to know critical details of the event, such as infrastructure damage and
U.S. Department of Energy (DOE) all webpages (Extended Search)
GRAND CHALLENGE PROBLEMS. Time is the biggest issue. Materials typically become critical in a matter of months, but solutions take years or decades to develop and implement. Our first two grand challenges address this discrepancy. Anticipating Which Materials May Go Critical: In an ideal world, users of materials would anticipate supply-chain disruptions before they occur. They would undertake activities to manage the risks of disruption, including R&D to diversify and increase supplies or to
The Inhibiting Bisection Problem
Pinar, Ali; Fogel, Yonatan; Lesieutre, Bernard
2006-12-18
Given a graph where each vertex is assigned a generation or consumption volume, we try to bisect the graph so that each part has a significant generation/consumption mismatch, and the cutsize of the bisection is small. Our motivation comes from the vulnerability analysis of distribution systems such as the electric power system. We show that the constrained version of the problem, where we place either the cutsize or the mismatch significance as a constraint and optimize the other, is NP-complete, and provide an integer programming formulation. We also propose an alternative relaxed formulation, which can trade-off between the two objectives and show that the alternative formulation of the problem can be solved in polynomial time by a maximum flow solver. Our experiments with benchmark electric power systems validate the effectiveness of our methods.
Kovac, F.M.
1995-12-31
The 21PF overpack has had severe metal corrosion and stress corrosion cracking (SCC) for many years. The US Department of Transportation (DOT) and the US Nuclear Regulatory Commission (NRC) have disallowed the use of overpacks containing high chloride foam. Corrosion and SCC of 21PF overpacks have been documented and papers have been presented at conferences about these issues. Regulatory agencies have restricted 21PF overpack use and have requested data to determine if phenolic foam overpacks not meeting original design specifications will be authorized for continued use. This paper details some of the problems experienced by users and relates actions of the DOT and NRC concerning these packages. Industry is working to correct deficiencies, but if they are not successful, the entire uranium enrichment industry will be severely impacted.
Smoothing of mixed complementarity problems
Gabriel, S.A.; More, J.J.
1995-09-01
The authors introduce a smoothing approach to the mixed complementarity problem, and study the limiting behavior of a path defined by approximate minimizers of a nonlinear least squares problem. The main result guarantees that, under a mild regularity condition, limit points of the iterates are solutions to the mixed complementarity problem. The analysis is applicable to a wide variety of algorithms suitable for large-scale mixed complementarity problems.
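The smoothing-path idea above can be sketched in one dimension: replace the nonsmooth complementarity condition min(x, F(x)) = 0 by a smooth approximation, solve for a decreasing sequence of smoothing parameters, and follow the resulting path of approximate solutions. The map F and all constants here are our own toy choices; the paper treats general large-scale mixed complementarity problems.

```python
import math

def F(x):
    # toy monotone map; the NCP  x >= 0, F(x) >= 0, x * F(x) = 0  has solution x = 2
    return x - 2.0

def phi(a, b, mu):
    # smooth approximation of min(a, b); exact in the limit mu -> 0
    return 0.5 * (a + b - math.sqrt((a - b) ** 2 + 4.0 * mu * mu))

def solve_ncp(x=1.0):
    # follow the smoothing path: solve phi(x, F(x), mu) = 0 for decreasing mu
    for mu in (1.0, 0.1, 1e-2, 1e-4, 1e-8):
        for _ in range(50):
            g = phi(x, F(x), mu)
            if abs(g) < 1e-12:
                break
            h = 1e-7                                   # forward-difference Newton step
            dg = (phi(x + h, F(x + h), mu) - g) / h
            x -= g / dg
    return x

x = solve_ncp()
```

Each intermediate root sits on the smoothing path, and as the parameter shrinks the iterates converge to the complementarity solution, mirroring the limiting behavior the abstract describes.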
About an Optimal Visiting Problem
Bagagiolo, Fabio Benetton, Michela
2012-02-15
In this paper we are concerned with the optimal control problem consisting of minimizing the time for reaching (visiting) a fixed number of target sets, in particular more than one target. Such a problem is of course reminiscent of the famous 'Traveling Salesman Problem' and brings all its computational difficulties. Our aim is to apply the dynamic programming technique in order to characterize the value function of the problem as the unique viscosity solution of a suitable Hamilton-Jacobi equation. We introduce some 'external' variables, one per target, which keep in memory whether the corresponding target is already visited or not, and we transform the visiting problem into a suitable Mayer problem. This fact allows us to overcome the lack of a Dynamic Programming Principle for the original problem. The external variables evolve with a hysteresis law and the Hamilton-Jacobi equation turns out to be discontinuous.
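The 'external' visited/not-visited variables above are, in discrete form, a visited-set bitmask. A small graph analogue (Dijkstra over (node, visited-mask) states, with invented travel times; the paper itself works in the continuous viscosity-solution setting) shows the mechanism:

```python
import heapq

# symmetric travel times on nodes 0..4 -- illustrative data only
dist = {(0, 1): 2, (0, 2): 1, (1, 2): 1, (1, 3): 3, (2, 3): 1, (2, 4): 4, (3, 4): 1}
def d(u, v):
    return dist.get((u, v)) or dist.get((v, u))

targets = [1, 3, 4]          # sets to visit; one "external" bit per target

def min_visit_time(start=0):
    """Minimal travel time to visit every target, starting from `start`."""
    full = (1 << len(targets)) - 1
    def mark(mask, node):
        for i, t in enumerate(targets):
            if node == t:
                mask |= 1 << i   # flip the external variable for a reached target
        return mask
    pq = [(0, start, mark(0, start))]
    best = {}
    while pq:
        time, u, mask = heapq.heappop(pq)
        if mask == full:
            return time          # first full-mask pop is optimal (Dijkstra)
        if best.get((u, mask), float("inf")) <= time:
            continue
        best[(u, mask)] = time
        for v in range(5):
            w = d(u, v)
            if w is not None:
                heapq.heappush(pq, (time + w, v, mark(mask, v)))
    return None
```

Augmenting the state with the mask restores a dynamic programming principle, just as the paper's external variables turn the visiting problem into a Mayer problem.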
U.S. Department of Energy (DOE) all webpages (Extended Search)
They can reduce the size and weight of existing next-generation smart grid power electronics systems, allowing greater application in such areas as weapons systems and pulsed...
U.S. Department of Energy (DOE) all webpages (Extended Search)
In addition, Sandia's method is compatible with conventional spray processing and, ... process include high-definition flat panel displays, sensor coatings for both ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
energy solutions, Sandia and Princeton Power Systems have teamed up to develop the Demand Response Inverter (DRI). Innovative Edge The DRI is a power flow control system...
U.S. Department of Energy (DOE) all webpages (Extended Search)
But what if the batteries had the ability to recharge themselves? What if they were covered by a thin photovoltaic (PV) film that could harvest energy from the sun? Just as on ...
Quantum Computing: Solving Complex Problems
DiVincenzo, David [IBM Watson Research Center
2009-09-01
One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.
Surrogate Guderley Test Problem Definition
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-07-06
The surrogate Guderley problem (SGP) is a 'spherical shock tube' (or 'spherical driven implosion') designed to ease the notoriously subtle initialization of the true Guderley problem, while still maintaining a high degree of fidelity. In this problem (similar to the Guderley problem), an infinitely strong shock wave forms and converges in one-dimensional (1D) cylindrical or spherical symmetry through a polytropic gas with arbitrary adiabatic index γ, uniform density ρ₀, zero velocity, and negligible pre-shock pressure and specific internal energy (SIE). This shock proceeds to focus on the point or axis of symmetry at r = 0 (resulting in ostensibly infinite pressure, velocity, etc.) and reflect back out into the incoming perturbed gas.
Challenge problems for artificial intelligence
Selman, B.; Brooks, R.A.; Dean, T.
1996-12-31
AI textbooks and papers often discuss the big questions, such as 'how to reason with uncertainty', 'how to reason efficiently', or 'how to improve performance through learning.' It is more difficult, however, to find descriptions of concrete problems or challenges that are still ambitious and interesting, yet not so open-ended. The goal of this panel is to formulate a set of such challenge problems for the field. Each panelist was asked to formulate one or more challenges. The emphasis is on problems for which there is a good chance that they will be resolved within the next five to ten years.
Sour landfill gas problem solved
Nagl, G.; Cantrall, R.
1996-05-01
In Broward County, Fla., near Pompano Beach, Waste Management of North America (WMNA, a subsidiary of WMX Technologies, Oak Brook, IL) operates the Central Sanitary Landfill and Recycling Center, which includes the country's largest landfill gas-to-energy plant. The landfill consists of three collection sites: one site is closed, one is currently receiving garbage, and one will open in the future. Approximately 9 million standard cubic feet (scf) per day of landfill gas is collected from approximately 300 wells spread over the 250-acre landfill. With a dramatic increase of sulfur-containing waste coming to the South Florida landfill following Hurricane Andrew, odors related to hydrogen sulfide became a serious problem. The very high H₂S concentrations caused severe odor problems in the surrounding residential area, corrosion problems in the compressors, and sulfur dioxide (SO₂) emission problems in the exhaust gas from the turbine generators. However, in a matter of weeks, an innovative desulfurization unit helped calm the landfill operator's fears.
Substation automation problems and possibilities
Smith, H.L.
1996-10-01
The evolutionary growth in the use and application of microprocessors in substations has brought the industry to the point of considering integrated substation protection, control, and monitoring systems. An integrated system holds the promise of greatly reducing the design, documentation, and implementation cost for the substation control, protection, and monitoring systems. This article examines the technical development path and the present implementation problems.
T-537: Oracle Critical Patch Update Advisory- January 2011
A Critical Patch Update is a collection of patches for multiple security vulnerabilities. It also includes non-security fixes that are required because of interdependencies by those security patches. Critical Patch Updates are cumulative.
T-605: Oracle Critical Patch Update Advisory- April 2011
A Critical Patch Update is a collection of patches for multiple security vulnerabilities. It also includes non-security fixes that are required because of interdependencies by those security patches. Critical Patch Updates are cumulative.
U-014: Oracle Java Runtime Environment (JRE) Multiple Flaws Let...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
SDK and JRE 1.4.233 and prior ABSTRACT: A remote user can create a Java applet or Java Web Start application that, when loaded by the target user, will access or modify data or...
Retrofitting and the mu Problem
Green, Daniel; Weigand, Timo; /SLAC /Stanford U., Phys. Dept.
2010-08-26
One of the challenges of supersymmetry (SUSY) breaking and mediation is generating a μ term consistent with the requirements of electro-weak symmetry breaking. The most common approach to the problem is to generate the μ term through a SUSY breaking F-term. Often these models produce unacceptably large Bμ terms as a result. We present an alternate approach, where the μ term is generated directly by non-perturbative effects. The same non-perturbative effect also retrofits the model of SUSY breaking in such a way that μ is at the same scale as the masses of the Standard Model superpartners. Because the μ term is not directly generated by SUSY breaking effects, there is no associated Bμ problem. These results are demonstrated in a toy model where a stringy instanton generates μ.
Solving the Dark Matter Problem
Baltz, Ted
2009-09-01
Cosmological observations have firmly established that the majority of matter in the universe is of an unknown type, called 'dark matter'. A compelling hypothesis is that the dark matter consists of weakly interacting massive particles (WIMPs) in the mass range around 100 GeV. If the WIMP hypothesis is correct, such particles could be created and studied at accelerators. Furthermore they could be directly detected as the primary component of our galaxy. Solving the dark matter problem requires that the connection be made between the two. We describe some theoretical and experimental avenues that might lead to this connection.
Common Air Conditioner Problems | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Common Air Conditioner Problems. A refrigerant leak is one common air conditioning problem (photo courtesy of ©iStockphoto/BanksPhotos). One of the most common air conditioning problems is improper operation. If your air conditioner is on, be sure to close your home's windows and outside doors. For room air conditioners, isolate the room or a group of...
Quantum simulations of physics problems
Somma, R. D.; Ortiz, G.; Knill, E. H.; Gubernatis, J. E.
2003-01-01
If a large Quantum Computer (QC) existed today, what type of physical problems could we efficiently simulate on it that we could not efficiently simulate on a classical Turing machine? In this paper we argue that a QC could solve some relevant physical 'questions' more efficiently. The existence of one-to-one mappings between different algebras of observables or between different Hilbert spaces allow us to represent and imitate any physical system by any other one (e.g., a bosonic system by a spin-1/2 system). We explain how these mappings can be performed, and we show quantum networks useful for the efficient evaluation of some physical properties, such as correlation functions and energy spectra.
Inconsistent Investment and Consumption Problems
Kronborg, Morten Tolver; Steffensen, Mogens
2015-06-15
In a traditional Black–Scholes market we develop a verification theorem for a general class of investment and consumption problems where the standard dynamic programming principle does not hold. The theorem is an extension of the standard Hamilton–Jacobi–Bellman equation in the form of a system of non-linear differential equations. We derive the optimal investment and consumption strategy for a mean-variance investor without pre-commitment endowed with labor income. In the case of constant risk aversion it turns out that the optimal amount of money to invest in stocks is independent of wealth. The optimal consumption strategy is given as a deterministic bang-bang strategy. In order to have a more realistic model we allow the risk aversion to be time and state dependent. Of special interest is the case where the risk aversion is inversely proportional to present wealth plus the financial value of future labor income net of consumption. Using the verification theorem we give a detailed analysis of this problem. It turns out that the optimal amount of money to invest in stocks is given by a linear function of wealth plus the financial value of future labor income net of consumption. The optimal consumption strategy is again given as a deterministic bang-bang strategy. We also calculate, for a general time and state dependent risk aversion function, the optimal investment and consumption strategy for a mean-standard deviation investor without pre-commitment. In that case, it turns out that it is optimal to take no risk at all.
Student's algorithm solves real-world problem
U.S. Department of Energy (DOE) all webpages (Extended Search)
Student's algorithm solves real-world problem. Supercomputing Challenge: students learn how to use powerful computers to analyze, model, and solve real-world problems. April 3, 2012. Jordon Medlock of Albuquerque's Manzano High School won the 2012 Lab-sponsored Supercomputing Challenge by creating a computer algorithm that automates the process of
design problem | OpenEI Community
Open Energy Information (Open El) [EERE & EIA]
design problem Home Dc's picture Submitted by Dc(266) Contributor 15 November, 2013 - 13:26 Living Walls ancient building system architect biomimicry building technology cooling cu...
PCx: Optimization Problem Solver | Argonne National Laboratory
U.S. Department of Energy (DOE) all webpages (Extended Search)
programming problems. Windows 95 version includes a user-friendly graphical interface Java graphical interface is available for all environments Source code is available and...
Statewide Power Problems May Affect SSRL
U.S. Department of Energy (DOE) all webpages (Extended Search)
Statewide Power Problems May Affect SSRL The power crisis affecting California and the northwestern US may have some implication for SSRL users during the current run. As the...
Engineering report standard hydrogen monitoring system problems
Golberg, R.L.
1996-09-25
This engineering report documents moisture problems found during sampling of dome-space vapors for hydrogen in the storage tanks, and recommends a solution.
Approximate resolution of hard numbering problems
Bailleux, O.; Chabrier, J.J.
1996-12-31
We present a new method for estimating the number of solutions of constraint satisfaction problems (CSPs). We use a stochastic forward checking algorithm to draw a sample of paths from a search tree. From this sample, we compute two values related to the number of solutions of a CSP instance: first, an unbiased estimate; second, a lower bound with an arbitrarily low error probability. We describe applications to the Boolean satisfiability problem and the Queens problem, and give experimental results for these problems.
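Random probing of a search tree in the style of Knuth's tree-size estimator gives exactly this kind of unbiased estimate. A minimal sketch for the Queens problem (our illustration of the underlying idea, not the authors' stochastic forward checking code):

```python
import random

def knuth_estimate(n, rng):
    # One random root-to-leaf probe of the n-queens search tree.
    # The product of branching factors along the path is an unbiased
    # estimate of the number of complete solutions.
    cols, est = [], 1
    for row in range(n):
        valid = [c for c in range(n)
                 if all(c != pc and abs(c - pc) != row - pr
                        for pr, pc in enumerate(cols))]
        if not valid:
            return 0          # dead end: this probe contributes zero
        est *= len(valid)
        cols.append(rng.choice(valid))
    return est

rng = random.Random(0)
samples = 20000
mean = sum(knuth_estimate(5, rng) for _ in range(samples)) / samples
# The sample mean converges to the true count (10 for 5-queens).
```

Averaging many cheap probes trades exact counting for a statistical estimate, which is the essence of the approach described in the abstract.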
Frequency Instability Problems in North American Interconnections
U.S. Department of Energy (DOE) all webpages (Extended Search)
Instability Problems in North American Interconnections Prepared by: Energy Sector Planning and Analysis (ESPA) ... would make the situation worse during an emergency event. ...
Integrated network design and scheduling problems :
Nurre, Sarah G.; Carlson, Jeffrey J.
2014-01-01
We consider the class of integrated network design and scheduling (INDS) problems. These problems focus on selecting and scheduling operations that will change the characteristics of a network, while being specifically concerned with the performance of the network over time. Motivating applications of INDS problems include infrastructure restoration after extreme events and building humanitarian distribution supply chains. While similar models have been proposed, no one has performed an extensive review of INDS problems covering their complexity, network and scheduling characteristics, information, and solution methods. We examine INDS problems under a parallel identical machine scheduling environment where the performance of the network is evaluated by solving classic network optimization problems. We show that all considered INDS problems are NP-hard and propose a novel heuristic dispatching rule algorithm that selects and schedules sets of arcs based on their interactions in the network. We present computational analysis based on realistic data sets representing the infrastructures of coastal New Hanover County, North Carolina, lower Manhattan, New York, and a realistic artificial community, CLARC County. These tests demonstrate the importance of a dispatching rule for arriving at near-optimal solutions during real-time decision making. We extend INDS problems to incorporate release dates, which represent the earliest an operation can be performed, and flexible release dates through the introduction of specialized machines that can perform work to move a release date earlier in time. An online optimization setting is explored where the release date of a component is not known.
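A dispatching rule of this general flavor can be sketched as greedy list scheduling on parallel identical machines. This toy version scores tasks by a simple benefit-to-duration ratio; the paper's rule scores sets of arcs by their network interactions, so treat the scoring function here as a placeholder:

```python
import heapq

def dispatch(tasks, m):
    # Greedy dispatching rule: repeatedly assign the highest-scoring
    # remaining task to the earliest-free of m identical machines.
    order = sorted(tasks, key=lambda t: t["benefit"] / t["duration"],
                   reverse=True)
    machines = [0.0] * m            # next-free time of each machine
    heapq.heapify(machines)
    schedule = []
    for t in order:
        start = heapq.heappop(machines)
        finish = start + t["duration"]
        schedule.append((t["name"], start, finish))
        heapq.heappush(machines, finish)
    return schedule
```

With two machines and tasks a (benefit 10, duration 2), b (9, 3), c (1, 1), the rule runs a and b immediately and slots c behind a, which illustrates why such rules are cheap enough for real-time decision making.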
AMRH and High Energy Reinicke Problem
Shestakov, A I; Greenough, J A
2001-05-14
The authors describe AMRH results on a version of the Reinicke problem specified by the verification and validation (V&V) group of LLNL's A-Div. The simulation models a point explosion with heat conduction. The problem specification requires that the heat conduction be replaced with diffusive radiation transport. The matter and radiation energy densities are tightly coupled.
Mitigating PQ Problems in Legacy Data Centers
Ilinets, Boris; /SLAC
2011-06-01
The conclusions of this presentation are: (1) problems with power quality (PQ) in legacy data centers still exist and need to be mitigated; (2) harmonics generated by non-linear IT load can be lowered by passive, active, and hybrid cancellation methods; (3) a harmonic study is necessary to find the best way to treat PQ problems; (4) active harmonic filters (AHFs) and harmonic cancellation transformers proved to be very efficient in mitigating PQ problems; and (5) it is important that IT leaders partner with electrical engineering to prepare ROI statements justifying many of these expenditures.
SIENA Customer Problem Statement and Requirements
L. Sauer; R. Clay; C. Adams; H. Walther; B. Allan; R. Mariano; C. Poore; B. Whiteside; B. Boughton; J. Dike; E. Hoffman; R. Hogan; C. LeGall
2000-08-01
This document describes the problem domain and functional requirements of the SIENA framework. The software requirements and system architecture of SIENA are specified in separate documents (the SIENA Software Requirements Specification and the SIENA Software Architecture, respectively). While this version of the document describes the problems and captures the requirements within the Analysis domain (concentrating on finite element models), it is our intention to subsequently expand this document to describe problems and capture requirements from the Design and Manufacturing domains. In addition, SIENA is designed to be extensible, supporting and integrating elements from the other domains (see the SIENA Software Architecture document).
Creative problem solving at Rocky Reach
Bickford, B.M.; Garrison, D.H.
1997-04-01
Tainter gate inspection and thrust bearing cooling system problems at the 1287-MW Rocky Reach hydroelectric project on the Columbia River in Washington are described. Gate inspection was initiated in response to a failure of similar gates at Folsom Dam. The approach involved measuring the actual forces on the gates and comparing them to original model study parameters, rather than the traditional method of building a hydraulic model. Measurement and visual inspection was completed in one day and had no effect on migration flows. Two problems with the thrust bearing cooling system are described. First, whenever a generating unit was taken off line, cooling water continued circulating and lowered oil temperatures. The second problem involved silt buildup in flow measuring device tubes on the cooling water system. Modifications to correct cooling system problems and associated costs are outlined.
Modeling the black hole excision problem
Szilagyi, B.; Winicour, J.; Kreiss, H.-O.
2005-05-15
We analyze the excision strategy for simulating black holes. The problem is modeled by the propagation of quasilinear waves in a 1-dimensional spatial region with timelike outer boundary, spacelike inner boundary and a horizon in between. Proofs of well-posed evolution and boundary algorithms for a second differential order treatment of the system are given for the separate pieces underlying the finite-difference problem. These are implemented in a numerical code which gives accurate long term simulations of the quasilinear excision problem. Excitation of long wavelength exponential modes, which are latent in the problem, are suppressed using conservation laws for the discretized system. The techniques are designed to apply directly to recent codes for the Einstein equations based upon the harmonic formulation.
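The key structural feature of the model problem, that all characteristics leave the domain through the inner (excision) boundary so no boundary condition is needed there, can be imitated with a toy first-order advection sketch. This is our illustration, not the authors' second-order quasilinear code:

```python
import math

# Toy excision model: left-moving advection u_t - u_x = 0 on [0, 1].
# Characteristics exit through the inner boundary at x = 0, so the
# upwind scheme needs no boundary condition there; data is imposed
# only at the outer boundary x = 1.
N, cfl = 200, 0.8
dx = 1.0 / (N - 1)
dt = cfl * dx
# Gaussian pulse initially centered at x = 0.7, moving left toward x = 0.
u = [math.exp(-200.0 * (i * dx - 0.7) ** 2) for i in range(N)]
t = 0.0
while t < 0.5:
    prev = u[:]
    for i in range(N - 1):        # upwind update uses the right neighbor
        u[i] = prev[i] + cfl * (prev[i + 1] - prev[i])
    u[-1] = 0.0                   # outer boundary: no incoming signal
    t += dt
peak = max(range(N), key=lambda i: u[i]) * dx
```

Because each update is a convex combination for CFL <= 1, the solution stays bounded with no condition at the excision point, which is the discrete analogue of the well-posedness the paper establishes for the full system.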
Frequency Instability Problems in North American Interconnections
U.S. Department of Energy (DOE) all webpages (Extended Search)
Frequency Instability Problems in North American Interconnections. May 1, 2011. DOE/NETL-2011/1473.
Geological problems in radioactive waste isolation
Witherspoon, P.A.
1991-01-01
The problem of isolating radioactive wastes from the biosphere presents specialists in the earth sciences with some of the most complicated problems they have ever encountered. This is especially true for high level waste (HLW), which must be isolated underground, away from the biosphere, for thousands of years. Essentially every country that is generating electricity in nuclear power plants is faced with the problem of isolating the radioactive wastes that are produced. The general consensus is that this can be accomplished by selecting an appropriate geologic setting and carefully designing the rock repository. Much new technology is being developed to solve the problems that have been raised, and there is a continuing need to publish the results of new developments for the benefit of all concerned. The 28th International Geologic Congress, held July 9-19, 1989 in Washington, DC, provided an opportunity for earth scientists to gather for detailed discussions of these problems. Workshop W3B on the subject 'Geological Problems in Radioactive Waste Isolation: A World Wide Review' was organized by Paul A. Witherspoon and Ghislain de Marsily and convened July 15-16, 1989. Reports from 19 countries have been gathered for this publication. Individual papers have been cataloged separately.
Thick diffusion limit boundary layer test problems
Bailey, T. S.; Warsa, J. S.; Chang, J. H.; Adams, M. L.
2013-07-01
We develop two simple test problems that quantify the behavior of computational transport solutions in the presence of boundary layers that are not resolved by the spatial grid. In particular we study the quantitative effects of 'contamination' terms that, according to previous asymptotic analyses, may have a detrimental effect on the solutions obtained by both discontinuous finite element (DFEM) and characteristic-method (CM) spatial discretizations, at least for boundary layers caused by azimuthally asymmetric incident intensities. Few numerical results have illustrated the effects of this contamination, and none have quantified it to our knowledge. Our test problems use leading-order analytic solutions that should be equal to zero in the problem interior, which means the observed interior solution is the error introduced by the contamination terms. Results from DFEM solutions demonstrate that the contamination terms can cause error propagation into the problem interior for both orthogonal and non-orthogonal grids, and that this error is much worse for non-orthogonal grids. This behavior is consistent with the predictions of previous analyses. We conclude that these boundary layer test problems and their variants are useful tools for the study of errors that are introduced by unresolved boundary layers in diffusive transport problems. (authors)
Lovejoy, S.C.; Whirley, R.G.
1990-10-10
This manual describes in detail the solution of ten example problems using the explicit nonlinear finite element code DYNA3D. The sample problems include solid, shell, and beam element types, and a variety of linear and nonlinear material models. For each example, there is first an engineering description of the physical problem to be studied. Next, the analytical techniques incorporated in the model are discussed and key features of DYNA3D are highlighted. INGRID commands used to generate the mesh are listed, and sample plots from the DYNA3D analysis are given. Finally, there is a description of the TAURUS post-processing commands used to generate the plots of the solution. This set of example problems is useful in verifying the installation of DYNA3D on a new computer system. In addition, these documented analyses illustrate the application of DYNA3D to a variety of engineering problems, and thus this manual should be helpful to new analysts getting started with DYNA3D. 7 refs., 56 figs., 9 tabs.
Particle physics confronts the solar neutrino problem
Pal, P.B.
1991-06-01
This review has four parts. In Part I, we describe the reactions that produce neutrinos in the sun and the expected flux of those neutrinos on the earth. We then discuss the detection of these neutrinos, and how the results obtained differ from theoretical expectations, leading to what is known as the solar neutrino problem. In Part II, we show how neutrino oscillations can provide a solution to the solar neutrino problem. This includes vacuum oscillations as well as matter-enhanced oscillations. In Part III, we discuss the possibility of time variation of the neutrino flux and how a magnetic moment of the neutrino can solve the problem. We also discuss particle physics models which can give rise to the required values of magnetic moments. In Part IV, we present some concluding remarks and an outlook for the near future.
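For the vacuum-oscillation part of the story, the standard two-flavor transition probability is easy to evaluate; the parameter values used below are purely illustrative, not fits from the review:

```python
import math

def p_oscillation(theta, dm2_ev2, L_km, E_gev):
    # Standard two-flavor vacuum oscillation probability:
    #   P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
    # theta: mixing angle, dm2: mass-squared splitting,
    # L: baseline, E: neutrino energy.
    return (math.sin(2.0 * theta) ** 2
            * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2)
```

At maximal mixing (theta = pi/4) the transition probability reaches 1 at suitable baselines, while it vanishes identically for zero mixing angle, which is why the measured solar deficit constrains both parameters jointly.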
Transport Test Problems for Hybrid Methods Development
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.; McDonald, Benjamin S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Motor operated valves problems tests and simulations
Pinier, D.; Haas, J.L.
1996-12-01
An analysis of two refusals of operation of the EAS recirculation shutoff valves enabled two distinct problems to be identified on the motorized valves. First, the calculation methods for the operating torques of valves in use in the power plants are not conservative enough, which results in the misadjustment of the torque limiters installed on their motorizations. The second problem concerns the pressure locking phenomenon: a number of valves may entrap a pressure exceeding the in-line pressure between the disks, which may cause jamming of the valve. To settle the first problem, EDF determined the friction coefficients and the efficiency of the valve and its actuator through general and specific tests and models, and defined a new calculation method. To solve the second problem, EDF identified the valves whose technology enables pressure to be entrapped (tests and numerical simulations carried out in the Research and Development Division confirm the possibility of a 'boiler' effect), determined the necessary modifications, and developed and tested anti-boiler-effect systems.
Solving the problems of infectious waste disposal
Hoffman, S.L.; Cabral, N.J.
1989-06-01
Lawmakers are increasing pressure to ensure safe, appropriate disposal of infectious waste. This article discusses the problems, the regulatory climate, innovative approaches, and how to pay for them. The paper discusses the regulatory definition of infectious waste, federal and state regulations, and project finance.
The scattering problem for nonlocal potentials
Zolotarev, V A
2014-11-30
We solve the direct and inverse scattering problems for integro-differential operators which are one-dimensional perturbations of the self-adjoint second derivative operator on the half-axis. We also describe the scattering data for this class of operators. Bibliography: 28 titles.
Aleph Field Solver Challenge Problem Results Summary.
Hooper, Russell; Moore, Stan Gerald
2015-01-01
Aleph models continuum electrostatic and steady and transient thermal fields using a finite-element method. Much work has gone into expanding the core solver capability to support enriched modeling consisting of multiple interacting fields, special boundary conditions, and two-way interfacial coupling with particles modeled using Aleph's complementary particle-in-cell capability. This report provides quantitative evidence for correct implementation of Aleph's field solver via order-of-convergence assessments on a collection of problems of increasing complexity. It is intended to provide Aleph with a pedigree and to establish a basis for confidence in results for more challenging problems important to Sandia's mission that Aleph was specifically designed to address.
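In its simplest form, an order-of-convergence assessment compares errors at two grid resolutions; this is a generic Richardson-style estimate, not Aleph's actual verification harness:

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    # Observed order of accuracy from errors at two resolutions:
    #   p = log(e_coarse / e_fine) / log(refinement_ratio)
    # A method of formal order p should reproduce p as the grid refines.
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Example: a second-order method roughly quarters the error when h halves.
p = observed_order(1.0e-2, 2.5e-3)   # observed order close to 2.0
```

Matching the observed order against the discretization's formal order on progressively harder problems is the kind of quantitative evidence of correct implementation the report describes.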
Heavy crudes, stocks pose desalting problems
Bartley, D.
1982-02-02
The design of electrostatic desalters for crudes lighter than 30 API is well established and is no longer considered a problem. However, since 1970, the number of desalting applications involving heavy crudes (less than 20 API), syncrudes, and residual fuels has increased markedly. These stocks present unique problems that require additional design considerations. All produced crude oils, including synthetic crude from shale, tar sands, and coal liquefaction, contain impurities that adversely affect production and refining processes, the equipment used in these processes, and the final products. The most common of these impurities are water, salt, solids, metals, and sulfur. The desalting process consists of (1) adding water with a low salt content (preferably fresh) to the feedstock; (2) adequately mixing this added water with the feedstock, which already contains some quantities of salty water, sediment, and/or crystalline salt; and (3) extracting as much water as possible from the feedstock.
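The three-step process can be summarized with an idealized salt balance. All numbers below are invented for illustration, and the model assumes perfect mixing of wash water with the entrained brine:

```python
# Idealized single-stage desalter salt balance (illustrative numbers only).
w_in, s_in = 0.005, 50.0    # inlet water cut (0.5%) and its salinity (arbitrary units)
w_add = 0.05                # fresh wash water added: 5% of crude volume
w_out = 0.003               # water remaining after electrostatic separation (0.3%)

# Step 2 (mixing): fresh water dilutes the brine carried by the crude.
s_mix = s_in * w_in / (w_in + w_add)
# Step 3 (extraction): only the residual water carries salt downstream.
salt_in = s_in * w_in
salt_out = s_mix * w_out
removal = 1.0 - salt_out / salt_in   # fraction of inlet salt removed
```

Under perfect mixing the removal fraction reduces to 1 - w_out / (w_in + w_add), which makes explicit why heavy stocks are harder: poor mixing and difficult water extraction raise the effective w_out.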
Scalable Adaptive Multilevel Solvers for Multiphysics Problems
Xu, Jinchao
2014-12-01
In this project, we investigated adaptive, parallel, and multilevel methods for numerical modeling of various real-world applications, including Magnetohydrodynamics (MHD), complex fluids, Electromagnetism, Navier-Stokes equations, and reservoir simulation. First, we have designed improved mathematical models and numerical discretizaitons for viscoelastic fluids and MHD. Second, we have derived new a posteriori error estimators and extended the applicability of adaptivity to various problems. Third, we have developed multilevel solvers for solving scalar partial differential equations (PDEs) as well as coupled systems of PDEs, especially on unstructured grids. Moreover, we have integrated the study between adaptive method and multilevel methods, and made significant efforts and advances in adaptive multilevel methods of the multi-physics problems.
Are shorted pipeline casings a problem
Gibson, W.F.
1994-11-01
The pipeline industry has many road and railroad crossings with casings which have been in service for more than 50 years without exhibiting any major problems, regardless of whether the casing is shorted to or isolated from the carrier pipe. The use of smart pigging and continual visual inspection when retrieving a cased pipeline segment have shown that whether shorted or isolated, casings have no significant bearing on the presence or absence of corrosion on the carrier pipe.
Ergonomics problems and solutions in biotechnology laboratories
Coward, T.W.; Stengel, J.W.; Fellingham-Gilbert, P.
1995-03-01
The multi-functional successful ergonomics program currently implemented at Lawrence Livermore National Laboratory (LLNL) will be presented with special emphasis on recent findings in the Biotechnology laboratory environment. In addition to a discussion of more traditional computer-related repetitive stress injuries and associated statistics, the presentation will cover identification of ergonomic problems in laboratory functions such as pipetting, radiation shielding, and microscope work. Techniques to alleviate symptoms and prevent future injuries will be presented.
Diabaticity of nuclear motion: problems and perspectives
Nazarewicz, W. [Joint Inst. for Heavy Ion Research, Oak Ridge, TN (United States)]
1992-12-31
The assumption of adiabatic motion lies at the foundation of many models of nuclear collective motion. To what extent can nuclear modes be treated adiabatically? Due to the richness and complexity of the nuclear many-body problem, there is no unique answer to this question. The challenges of nuclear collective dynamics invite exciting interactions between several areas of physics, such as nuclear structure, field theory, nonlinear dynamics, transport theory, and quantum chaos.
CMI Grand Challenge Problems | Critical Materials Institute
U.S. Department of Energy (DOE) all webpages (Extended Search)
CMI Grand Challenge Problems Time is the biggest issue. Materials typically become critical in a matter of months, but solutions take years or decades to develop and implement. Our first two grand challenges address this discrepancy. Anticipating Which Materials May Go Critical In an ideal world, users of materials would anticipate supply-chain disruptions before they occur. They would undertake activities to manage the risks of disruption, including R&D to diversify and increase supplies or
Riemke, Richard Allan
2001-09-01
The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics and 3D neutron kinetics. Assessment, verification, and validation of the 3D capability in RELAP5-3D are discussed in the literature. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D, as well as their resolution.
Riemke, Richard Allan
2002-09-01
The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics and 3D neutron kinetics. Assessment, verification, and validation of the 3D capability in RELAP5-3D are discussed in the literature. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D, as well as their resolution.
Exact Overlaps in the Kondo Problem (Journal Article) | DOE PAGES
Office of Scientific and Technical Information (OSTI)
Exact Overlaps in the Kondo Problem. Authors: Lukyanov, Sergei L.; Saleur, Hubert; Jacobsen, Jesper L.; Vasseur, Romain ...
Approaching Problems in Particle and Nuclear Physics with Time...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Approaching Problems in Particle and Nuclear Physics with Time-Dependent Quantum Mechanics (Wednesday, Jan 20)
Statistics Show Bearing Problems Cause the Majority of Wind Turbine...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Statistics Show Bearing Problems Cause the Majority of Wind Turbine Gearbox Failures September ...
Solving a Class of Nonlinear Eigenvalue Problems by Newton's...
Office of Scientific and Technical Information (OSTI)
We examine the possibility of using the standard Newton's method for solving a class of ... Keywords: nonlinear eigenvalue problem, Newton's method.
Problems with propagation and time evolution in f ( T ) gravity...
Office of Scientific and Technical Information (OSTI)
Problems with propagation and time evolution in f(T) gravity. Authors: ...
CrowdPhase: crowdsourcing the phase problem
Jorda, Julien; Sawaya, Michael R.; Yeates, Todd O.
2014-06-01
The idea of attacking the phase problem by crowdsourcing is introduced. Using an interactive, multi-player, web-based system, participants work simultaneously to select phase sets that correspond to better electron-density maps in order to solve low-resolution phasing problems. The human mind innately excels at some complex tasks that are difficult to solve using computers alone. For complex problems amenable to parallelization, strategies can be developed to exploit human intelligence in a collective form: such approaches are sometimes referred to as ‘crowdsourcing’. Here, a first attempt at a crowdsourced approach for low-resolution ab initio phasing in macromolecular crystallography is proposed. A collaborative online game named CrowdPhase was designed, which relies on a human-powered genetic algorithm, where players control the selection mechanism during the evolutionary process. The algorithm starts from a population of ‘individuals’, each with a random genetic makeup, in this case a map prepared from a random set of phases, and tries to cause the population to evolve towards individuals with better phases based on Darwinian survival of the fittest. Players apply their pattern-recognition capabilities to evaluate the electron-density maps generated from these sets of phases and to select the fittest individuals. A user-friendly interface, a training stage and a competitive scoring system foster a network of well trained players who can guide the genetic algorithm towards better solutions from generation to generation via gameplay. CrowdPhase was applied to two synthetic low-resolution phasing puzzles and it was shown that players could successfully obtain phase sets in the 30° phase error range and corresponding molecular envelopes showing agreement with the low-resolution models. The successful preliminary studies suggest that with further development the crowdsourcing approach could fill a gap in current crystallographic methods by making it
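The selection-driven evolution described above can be sketched as a conventional genetic algorithm in which a synthetic fitness function stands in for the players' visual ranking of electron-density maps. All parameters and the phase-error fitness here are illustrative assumptions, not CrowdPhase's actual scoring:

```python
import math
import random

random.seed(1)

N_PHASES = 32          # reflections whose phases we seek (hypothetical size)
POP_SIZE = 40
GENERATIONS = 60

# Hidden "true" phases stand in for the unknown crystallographic answer;
# in CrowdPhase the players judge maps instead of comparing to a truth.
TRUE = [random.uniform(0, 2 * math.pi) for _ in range(N_PHASES)]

def phase_error(individual):
    """Mean absolute circular difference to the true phases, in radians."""
    total = 0.0
    for p, t in zip(individual, TRUE):
        d = abs(p - t) % (2 * math.pi)
        total += min(d, 2 * math.pi - d)
    return total / len(individual)

def random_individual():
    return [random.uniform(0, 2 * math.pi) for _ in range(N_PHASES)]

def crossover(a, b):
    # Uniform crossover: each phase inherited from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.1, step=0.5):
    return [p + random.gauss(0, step) if random.random() < rate else p
            for p in ind]

population = [random_individual() for _ in range(POP_SIZE)]
initial_best = min(phase_error(ind) for ind in population)

for _ in range(GENERATIONS):
    population.sort(key=phase_error)          # "players" rank the maps
    survivors = population[:POP_SIZE // 2]    # fittest half is kept
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

final_best = min(phase_error(ind) for ind in population)
print(f"best phase error: {initial_best:.2f} -> {final_best:.2f} rad")
```

Because the fittest individuals are carried over each generation, the best phase error is non-increasing from generation to generation, mirroring the "survival of the fittest" selection the abstract describes.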
Studies in nonlinear problems of energy
Matkowsky, B.J.
1992-07-01
Emphasis has been on combustion and flame propagation. The research program covered modeling, analysis and computation of combustion phenomena, with emphasis on the transition from laminar to turbulent combustion. Nonlinear dynamics and pattern formation were investigated in the transition. Stability of combustion waves, and transitions to complex waves, are described. Combustion waves possess large activation energies, so that chemical reactions are significant only in thin layers, or reaction zones. In the limit of infinite activation energy, the zones shrink to moving surfaces (fronts) which must be found during the analysis, so that these are moving free-boundary problems. The analytical studies are carried out for the limiting case with fronts, while the numerical studies are carried out for finite, though large, activation energy. Accurate resolution of the solution in the reaction zones is essential; otherwise false predictions of dynamics are possible. Since the reaction zones move, adaptive pseudo-spectral methods were developed. The approach is based on a synergism of analytical and computational methods. The numerical computations build on and extend the analytical information. Furthermore, analytical solutions serve as benchmarks for testing the accuracy of the computation. Finally, ideas from analysis (singular perturbation theory) have induced new approaches to computations, and the computational results suggest new analysis to be considered. Among the recent interesting results was spatio-temporal chaos in combustion. One goal is extension of the adaptive pseudo-spectral methods to adaptive domain decomposition methods. Efforts have begun to develop such methods for problems with multiple reaction zones, corresponding to problems with more complex, and more realistic, chemistry. Other topics included stochastics, oscillators, hysteretic Josephson junctions, DC SQUIDs, Markov jumps, the laser with saturable absorber, chemical physics, Brownian motion, and combustion synthesis.
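The pseudo-spectral machinery mentioned above rests on spectral differentiation: transform to Fourier space, multiply by ik, transform back. A minimal periodic-grid sketch, with a pure-stdlib DFT standing in for an FFT library; the test function and grid size are illustrative, not taken from the research program:

```python
import cmath
import math

# Spectral differentiation on a periodic grid on [0, 2*pi).
N = 16
x = [2.0 * math.pi * j / N for j in range(N)]
f = [math.sin(xj) for xj in x]

def dft(a):
    # Forward transform: F_k = sum_j a_j exp(-2*pi*i*j*k/N)
    return [sum(a[j] * cmath.exp(-2.0 * math.pi * 1j * j * k / N)
                for j in range(N)) for k in range(N)]

def idft(A):
    return [sum(A[k] * cmath.exp(2.0 * math.pi * 1j * j * k / N)
                for k in range(N)) / N for j in range(N)]

def wavenumber(k):
    # Map the DFT index to the signed wavenumber in [-N/2, N/2).
    return k if k < N // 2 else k - N

F = dft(f)
G = [1j * wavenumber(k) * F[k] for k in range(N)]   # differentiate in k-space
df = [v.real for v in idft(G)]

# For f = sin(x) the spectral derivative reproduces cos(x) to roundoff.
err = max(abs(d - math.cos(xj)) for d, xj in zip(df, x))
print(f"max spectral derivative error: {err:.2e}")
```

In a combustion solver the same operation is applied on a grid that is adaptively mapped to concentrate points in the thin, moving reaction zones; the fixed uniform grid here shows only the differentiation step.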
PCI Capability Development and Challenge Problem Progress
U.S. Department of Energy (DOE) all webpages (Extended Search)
PCI Capability Development and Challenge Problem Progress. Joe Rashid (ANATECH Corp), Brian D. Wirth (University of Tennessee), Rich Williamson (Idaho National Laboratory). CASL-U-2016-1086-000. Outline: state of the art of PCI and fuel performance codes (FPCs); FPCs' compatibility with utilities' needs - what are the gaps, and can BISON close them?; PCI capability development: BISON progress to date; BISON as a Phase-2 product - will it fulfill its promise?
Analytical solutions to matrix diffusion problems
Kekäläinen, Pekka
2014-10-06
We report an analytical method to solve, in a few cases of practical interest, the equations which have traditionally been proposed for the matrix diffusion problem. In matrix diffusion, elements dissolved in ground water can penetrate the porous rock surrounding the advective flow paths. In the context of radioactive waste repositories this phenomenon provides a mechanism by which the area of rock surface in contact with advecting elements is greatly enhanced, and it can thus be an important delay mechanism. The cases solved are relevant for laboratory as well as for in situ experiments. Solutions are given as integral representations well suited for easy numerical evaluation.
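For orientation, the simplest member of this family of solutions is one-dimensional diffusion into a semi-infinite matrix with a constant concentration held at the fracture wall, C(x, t) = C0 · erfc(x / (2√(Dt))). A sketch with illustrative parameter values, not the specific cases solved in the paper:

```python
import math

def matrix_concentration(x, t, D=1e-11, C0=1.0):
    """Concentration at depth x (m) into a semi-infinite porous matrix
    after time t (s), with concentration C0 held constant at the
    fracture wall: C(x, t) = C0 * erfc(x / (2*sqrt(D*t))).
    D is an illustrative pore-diffusion coefficient (m^2/s)."""
    return C0 * math.erfc(x / (2.0 * math.sqrt(D * t)))

YEAR = 3.15e7  # seconds per year (approximate)

# Penetration profile after 10 years, at depths of 0 to 4 mm.
profile = [matrix_concentration(x * 1e-3, 10 * YEAR) for x in range(5)]
print(["%.3f" % c for c in profile])
```

The concentration decays monotonically with depth into the matrix, which is exactly the retention mechanism the abstract describes: solute leaves the advective flow path and is temporarily stored in the rock pores.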
Public problems: Still waiting on the marketplace for solutions
Gover, J.; Carayannis, E.; Huray, P.
1997-10-01
This report addresses the need for government-sponsored R&D to target real public problems, motivated by the requirement that a public benefit of the money spent be demonstrated. The areas identified as not receiving appropriate attention, resulting in unmet public needs, include healthcare costs, the costs and benefits of regulations, infrastructure problems, defense spending misaligned with foreign policy objectives, crime, the impact of energy on the environment, education, low-productivity-growth industry sectors, income distribution, aging, the propagation of disease, and the policy changes needed to address these problems.
DYNA3D Non-reflecting Boundary Conditions - Test Problems
Zywicz, E
2006-09-28
Two verification problems were developed to test non-reflecting boundary segments in DYNA3D (Whirley and Engelmann, 1993). The problems simulate 1-D wave propagation in a semi-infinite rod using a finite length rod and non-reflecting boundary conditions. One problem examines pure pressure wave propagation, and the other problem explores pure shear wave propagation. In both problems the non-reflecting boundary segments yield results that differ only slightly (less than 6%) during a short duration from their corresponding theoretical solutions. The errors appear to be due to the inability to generate a true step-function compressive wave in the pressure wave propagation problem and due to segment integration inaccuracies in the shear wave propagation problem. These problems serve as verification problems and as regression test problems for DYNA3D.
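The setup can be mimicked in a few lines: a 1-D leapfrog scheme run at Courant number c·Δt/Δx = 1, where the characteristic update u(N) ← u(N−1) passes an outgoing pulse without reflection. This is a toy analogue only; DYNA3D's actual non-reflecting segments are impedance-based and show the small transient errors described above, whereas at CFL = 1 the discrete absorption is essentially exact:

```python
import math

# 1-D wave equation u_tt = c^2 u_xx on a finite rod, leapfrog scheme.
N = 200                 # grid intervals
STEPS = 400             # enough steps for the pulse to leave the rod

def pulse(i):
    """Compact rightward-travelling pulse centred near cell 40."""
    return math.exp(-0.5 * ((i - 40) / 8.0) ** 2) if 10 <= i <= 70 else 0.0

u_old = [pulse(i) for i in range(N + 1)]       # t = 0
u = [pulse(i - 1) for i in range(N + 1)]       # t = dt: shifted one cell right

for _ in range(STEPS - 1):
    u_new = [0.0] * (N + 1)
    for i in range(1, N):
        u_new[i] = u[i + 1] + u[i - 1] - u_old[i]   # leapfrog at CFL = 1
    u_new[0] = 0.0             # fixed left end (pulse never returns to it)
    u_new[N] = u[N - 1]        # non-reflecting right end: pure outflow
    u_old, u = u, u_new

residual = max(abs(v) for v in u)
print(f"residual amplitude after pulse exit: {residual:.2e}")
```

After the pulse has crossed the right boundary, the field in the rod drops to roundoff level; replacing the outflow update with a rigid end (u_new[N] = 0) would instead reflect the pulse back at full amplitude.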
Engineering problems of tandem-mirror reactors
Moir, R.W.; Barr, W.L.; Boghosian, B.M.
1981-10-22
We have completed a comparative evaluation of several end plug configurations for tandem mirror fusion reactors with thermal barriers. The axi-cell configuration has been selected for further study and will be the basis for a detailed conceptual design study to be carried out over the next two years. The axi-cell end plug has a simple mirror cell produced by two circular coils, followed by a transition coil and a yin-yang pair, which provides MHD stability. This paper discusses some of the many engineering problems facing the designer. We estimated the direct cost to be $2/We. Assuming total (direct and indirect) costs to be twice this number, we need to reduce total costs by factors between 1.7 and 2.3 to compete with the levelized cost of electricity of future LWRs. These reductions may be possible by designing magnets producing over 20 T, made possible by combinations of superconducting and normal-conducting coils, as well as by improvements in the performance and cost of neutral beam and microwave power systems. Scientific and technological understanding and innovation are needed in the area of thermal barrier pumping - a process by which unwanted particles are removed (pumped) from certain regions of velocity and real space in the end plug. Removal of exhaust fuel ions, fusion ash and impurities by action of a halo plasma and plasma dump in the mirror end region is another challenging engineering problem discussed in this paper.
The PHEV Charging Infrastructure Planning (PCIP) Problem
Dashora, Yogesh; Barnes, J. Wesley; Pillai, Rekha S; Combs, Todd E; Hilliard, Michael R; Chinthavali, Madhu Sudhan
2010-01-01
Increasing debate over a gasoline-independent future and the reduction of greenhouse gas (GHG) emissions has led to a surge in plug-in hybrid electric vehicles (PHEVs) being developed around the world. The majority of PHEV-related research has been directed at improving engine and battery operations, studying future PHEV impacts on the grid, and projecting future PHEV charging infrastructure requirements. Due to the limited all-electric range of PHEVs, a daytime PHEV charging infrastructure will be required for most PHEV daily usage. In this paper, for the first time, we present a mixed integer mathematical programming model to solve the PHEV charging infrastructure planning (PCIP) problem for organizations with thousands of people working within a defined geographic location and parking lots well suited to charging station installations. Our case study, based on the Oak Ridge National Laboratory (ORNL) campus, produced encouraging results, indicating the viability of the modeling approach and substantiating the importance of considering both employee convenience and appropriate grid connections in the PCIP problem.
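The core of such a planning model is a covering decision: choose a cheapest set of parking lots whose charging stations serve every employee zone. A brute-force sketch of that core follows; the lot names, costs, and coverage sets are hypothetical, and the actual PCIP model is a far richer mixed integer program with grid-connection and employee-convenience terms:

```python
from itertools import combinations

# Candidate parking lots: (installation cost, employee zones served
# within a short walk). All values are made up for illustration.
LOTS = {
    "A": (120, {"z1", "z2"}),
    "B": (80,  {"z2", "z3"}),
    "C": (100, {"z3", "z4"}),
    "D": (160, {"z1", "z4"}),
}
ZONES = {"z1", "z2", "z3", "z4"}

def solve_pcip():
    """Enumerate lot subsets and keep the cheapest one whose combined
    coverage spans every employee zone (set-cover core of the model)."""
    best_cost, best_pick = float("inf"), None
    for r in range(1, len(LOTS) + 1):
        for pick in combinations(LOTS, r):
            covered = set().union(*(LOTS[p][1] for p in pick))
            cost = sum(LOTS[p][0] for p in pick)
            if covered == ZONES and cost < best_cost:
                best_cost, best_pick = cost, pick
    return best_cost, set(best_pick)

cost, chosen = solve_pcip()
print(cost, sorted(chosen))
```

Enumeration is only viable for a handful of candidate lots; a campus-scale instance is why the paper formulates the problem as a mixed integer program for a dedicated solver.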
Permafrost problems as they affect gas pipelines (the frost heave problem)
Lipsett, G.B.
1980-01-01
The major problems associated with the construction of a large diameter gas pipeline in a permafrost region are outlined in this presentation. Data pertains to the design and construction of the Alaska Highway Gas Pipeline Project. One of the main problems is maintaining the permafrost in its frozen state. Large diameter pipelines operating at high capacity are heat generators. Therefore, it is necessary to refrigerate the gas to ensure that it remains below 0°C at all points in the pipeline system. The pipeline also passes through unfrozen ground where the potential for frost heave exists. The conditions under which frost heave occurs are listed. The extent and location of potential frost heave problem areas must be determined and a frost heave prediction method must be established before construction begins. Another task involves development of design criteria for the pipeline/soil interaction analysis. Remedial methods for use during the operational phase are also discussed. (DMC)
Current problems in plasma spray processing
Berndt, C.C.; Brindley, W.; Goland, A.N.; Herman, H.; Houck, D.L.; Jones, K.; Miller, R.A.; Neiser, R.; Riggs, W.; Sampath, S.; Smith, M.; Spanne, P.
1991-12-31
This detailed report summarizes 8 contributions from a thermal spray conference that was held in late 1991 at Brookhaven National Laboratory (Upton, Long Island, NY, USA). The subject of ``Plasma Spray Processing`` is presented under subject headings of Plasma-particle interactions, Deposit formation dynamics, Thermal properties of thermal barrier coatings, Mechanical properties of coatings, Feed stock materials, Porosity: An integrated approach, Manufacture of intermetallic coatings, and Synchrotron x-ray microtomographic methods for thermal spray materials. Each section is intended to present a concise statement of a specific practical and/or scientific problem, then describe current work that is being performed to investigate this area, and finally to suggest areas of research that may be fertile for future activity.
Municipal solid waste (garbage): problems and benefits
Stillman, G.I.
1983-05-01
The average person in the USA generates from 3 1/2 to 7 lb of garbage per day. The combustible portion of garbage consists primarily of paper products, plastics, textiles, and wood. Problems connected with energy production from municipal solid waste (garbage), and the social, economic, and environmental factors associated with this technology, are discussed. Two methods for using garbage as a fuel for a combustion process are discussed: one processes the garbage to produce a fuel that is superior to raw garbage; the other burns it directly - the mass-burning approach. The involvement of the Power Authority of the State of New York in garbage-to-energy technology is discussed.
New tools attack Permian basin stimulation problems
Ely, J.W.; Schubarth, S.K.; Wolters, B.C.; Kromer, C.
1992-06-08
This paper reports that profitable stimulation treatments in the Permian basin of the southwestern U.S. combine new tools with technology and fluids previously available. A wide selection of fracturing fluids and techniques needs to be considered to solve the varied problems associated with stimulating hydrocarbon reservoirs that are at diverse depths, temperatures, pressures, and lithologies. The Permian basin of West Texas and New Mexico is the most fertile ground in the U.S. for some of the newer stimulation technologies. In this basin, these new tools and techniques have been applied in many older producing areas that previously were treated with more conventional stimulation techniques, including acidizing and conventional fracturing procedures.
Stochastic inverse problems: Models and metrics
Sabbagh, Elias H.; Sabbagh, Harold A.; Murphy, R. Kim; Aldrin, John C.; Annis, Charles; Knopp, Jeremy S.
2015-03-31
In past work, we introduced model-based inverse methods, and applied them to problems in which the anomaly could be reasonably modeled by simple canonical shapes, such as rectangular solids. In these cases the parameters to be inverted would be length, width and height, as well as the occasional probe lift-off or rotation. We are now developing a formulation that allows more flexibility in modeling complex flaws. The idea consists of expanding the flaw in a sequence of basis functions, and then solving for the expansion coefficients of this sequence, which are modeled as independent random variables, uniformly distributed over their range of values. There are a number of applications of such modeling: 1. Connected cracks and multiple half-moons, which we have noted in a POD set. Ideally we would like to distinguish connected cracks from one long shallow crack. 2. Cracks of irregular profile and shape which have appeared in cold work holes during bolt-hole eddy-current inspection. One side of such cracks is much deeper than the other. 3. L- or C-shaped crack profiles at the surface, examples of which have been seen in bolt-hole cracks. By formulating problems in a stochastic sense, we are able to leverage the stochastic global optimization algorithms in NLSE, which is resident in VIC-3D®, to answer questions of global minimization and to compute confidence bounds using the sensitivity coefficient that we get from NLSE. We will also address the issue of surrogate functions which are used during the inversion process, and how they contribute to the quality of the estimation of the bounds.
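The basis-expansion idea can be illustrated with a toy forward model: a flaw profile expanded in a few basis functions, coefficients treated as independent uniform random variables, and a crude random search standing in for the stochastic global optimizer in NLSE. All names, basis choices and numbers here are hypothetical:

```python
import random

random.seed(7)

# Hypothetical flaw depth profile sampled at 16 surface points, expanded
# in three simple polynomial basis functions with unknown coefficients.
XS = [i / 15.0 for i in range(16)]
BASIS = [lambda x: 1.0, lambda x: x, lambda x: x * x]
TRUE_COEFFS = [0.2, -0.5, 0.9]

def profile(coeffs, x):
    return sum(c * b(x) for c, b in zip(coeffs, BASIS))

# Synthetic "measurement" generated from the true coefficients.
OBSERVED = [profile(TRUE_COEFFS, x) for x in XS]

def misfit(coeffs):
    """Sum of squared differences between modelled and observed profiles."""
    return sum((profile(coeffs, x) - y) ** 2 for x, y in zip(XS, OBSERVED))

# Coefficients modelled as independent uniform random variables on [-1, 1];
# keep the best of many random draws (a crude stand-in for a stochastic
# global optimizer).
best = min((tuple(random.uniform(-1, 1) for _ in BASIS)
            for _ in range(20000)), key=misfit)

print(f"best misfit: {misfit(best):.4f}, coefficients: {best}")
```

The true coefficient vector drives the misfit to zero by construction; the stochastic search explores the coefficient box and keeps whatever draw best explains the observation, which is the skeleton of the global-minimization question the abstract raises.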
U-082: McAfee SaaS 'myCIOScn.dll' ActiveX Control Lets Remote...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Lets Remote Users Execute Arbitrary Code January 17, 2012 - 1:00pm PROBLEM: PHP Null Pointer Dereference in zend_strndup() Lets Local Users Deny Service PLATFORM: PHP...
V-224: Google Chrome Multiple Vulnerabilities | Department of...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
V-224: Google Chrome Multiple Vulnerabilities August 22, 2013 - 1:05am PROBLEM: Multiple vulnerabilities have been reported in...
V-121: Google Chrome Multiple Vulnerabilities | Department of...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
V-121: Google Chrome Multiple Vulnerabilities March 28, 2013 - 12:29am PROBLEM: Google Chrome Multiple Vulnerabilities PLATFORM:...
V-207: Wireshark Multiple Denial of Service Vulnerabilities ...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
V-207: Wireshark Multiple Denial of Service Vulnerabilities July 31, 2013 - 1:59am PROBLEM: Multiple vulnerabilities...
V-145: IBM Tivoli Federated Identity Manager Products Java Multiple...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
V-145: IBM Tivoli Federated Identity Manager Products Java Multiple Vulnerabilities April 30, 2013 - 12:09am PROBLEM: IBM Tivoli Federated Identity Manager Products Java ...
U-228: BlackBerry Tablet OS Flash Player Multiple Vulnerabilities...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
PROBLEM: BlackBerry Tablet OS Flash Player Multiple Vulnerabilities PLATFORM: Adobe Flash Player versions included with BlackBerry PlayBook tablet software versions...
U-277: Google Chrome Multiple Flaws Let Remote Users Execute...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
PROBLEM: Google Chrome Multiple Flaws Let Remote Users Execute Arbitrary Code PLATFORM: Version(s): prior to 22.0.1229.92 ABSTRACT: Several vulnerabilities were...
U-237: Mozilla Firefox CVE-2012-1950 Address Bar URI Spoofing...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
PROBLEM: Mozilla Firefox CVE-2012-1950 Address Bar URI Spoofing Vulnerability PLATFORM: Version(s): Mozilla Firefox 6 - 12 ABSTRACT: To exploit this issue, an attacker...
Open problems in condensed matter physics, 1987 (Conference)...
Office of Scientific and Technical Information (OSTI)
Conference: Open problems in condensed matter physics, 1987. The 1970's and 1980's can be ...
Quantum mechanics problems in observer's mathematics
Khots, Boris; Khots, Dmitriy
2012-11-06
This work considers the ontology, guiding equation, Schrödinger's equation, relation to the Born rule, and the conditional wave function of a subsystem in a setting of arithmetic, algebra and topology provided by Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of continuum, but locally coincide with the standard fields. Certain results and communications pertaining to solutions of these problems are provided. In particular, we prove the following theorems: Theorem I (two-slit interference). Let Ψ1 be a wave from slit 1, Ψ2 a wave from slit 2, and Ψ = Ψ1 + Ψ2. Then the probability of Ψ being a wave equals 0.5. Theorem II (k-bodies solution). For Wn from the m-observer point of view with m > log10((2×10^(2n) − 1)^(2k) + 1), the probability of the standard expression of the Hamiltonian variation is less than 1 and depends on n, m, k.
Fundamental Scientific Problems in Magnetic Recording
Schulthess, T.C.; Miller, M.K.
2007-06-27
Magnetic data storage technology is presently leading the high-tech industry in advancing device integration, doubling the storage density every 12 months. To continue these advancements and to achieve terabit-per-square-inch recording densities, new approaches to store and access data will be needed in about 3-5 years. In this project, a collaboration between Oak Ridge National Laboratory (ORNL), the Center for Materials for Information Technology (MINT) at the University of Alabama (UA), Imago Scientific Instruments, and Seagate Technologies was undertaken to address the fundamental scientific problems confronted by the industry in meeting the upcoming challenges. The areas that were the focus of this study were to: (1) develop atom probe tomography for atomic-scale imaging of magnetic heterostructures used in magnetic data storage technology; (2) develop first-principles-based tools for the study of exchange bias, aimed at finding new antiferromagnetic materials to reduce the thickness of the pinning layer in the read head; (3) develop high-moment magnetic materials and tools to study magnetic switching in nanostructures, aimed at developing improved writers for high-anisotropy magnetic storage media.
Possible problems in ENDF/B-VI.r8
Brown, D; Hedstrom, G
2003-10-30
This document lists the problems that we encountered in processing ENDF/B-VI.r8 that we suspect are problems with ENDF/B-VI.r8 itself. It also contains a comparison of linear interpolation methods. Finally, this document proposes an alternative to the current scheme of reporting problems to the ENDF community.
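Since the document compares interpolation methods, here is a sketch of two of the ENDF-6 interpolation laws (lin-lin, INT=2, and log-log, INT=5) applied between two tabulated cross-section points; the 1/√E-like example values are illustrative, not taken from ENDF/B-VI.r8:

```python
import math

def interp_linlin(x, x1, y1, x2, y2):
    """ENDF-6 interpolation law INT=2: y linear in x."""
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def interp_loglog(x, x1, y1, x2, y2):
    """ENDF-6 interpolation law INT=5: log(y) linear in log(x)."""
    t = (math.log(x) - math.log(x1)) / (math.log(x2) - math.log(x1))
    return math.exp(math.log(y1) + t * (math.log(y2) - math.log(y1)))

# A 1/v-like cross section (y = 1/sqrt(x)) tabulated at two energies:
x1, x2 = 1.0, 100.0
y1, y2 = 1.0, 0.1
x = 10.0
print(interp_linlin(x, x1, y1, x2, y2))   # overestimates 1/sqrt(10)
print(interp_loglog(x, x1, y1, x2, y2))   # exact for a pure power law
```

The gap between the two values at the midpoint energy shows why a processing code must honour the interpolation law recorded in the evaluation: applying lin-lin where log-log was intended silently inflates the interpolated cross section.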
A class of ejecta transport test problems
Hammerberg, James E; Buttler, William T; Oro, David M; Rousculp, Christopher L; Morris, Christopher; Mariam, Fesseha G
2011-01-31
Hydro code implementations of ejecta dynamics at shocked interfaces presume a source distribution function of particulate masses and velocities, f0(m, v; t). Some of the properties of this source distribution function have been determined from extensive Taylor and supported-wave experiments on shock-loaded Sn interfaces of varying surface and subsurface morphology. Such experiments measure the mass moment of f0 under vacuum conditions, assuming weak particle-particle interaction and, usually, fully inelastic capture by piezoelectric diagnostic probes. Recently, planar Sn experiments in He, Ar, and Kr gas atmospheres have been carried out to provide transport data both for machined surfaces and for coated surfaces. A hydro code model of ejecta transport usually specifies a criterion for the instantaneous temporal appearance of ejecta with source distribution f0(m, v; t0). Under the further assumption of separability, f0(m, v; t0) = f1(m) f2(v), the motion of particles under the influence of gas dynamic forces is calculated. For the situation of non-interacting particulates interacting with a gas via drag forces, with the assumption of separability and simplified approximations to the Reynolds-number dependence of the drag coefficient, the dynamical equation for the time evolution of the distribution function, f(r, v, m; t), can be resolved as a one-dimensional integral which can be compared to a direct hydro simulation as a test problem. Such solutions can also be used for preliminary analysis of experimental data. We report solutions for several shape-dependent drag coefficients and analyze the results of recent planar dsh experiments in Ar and Xe.
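Under the separability assumption, a transport sketch reduces to sampling particles from f1(m)·f2(v) and relaxing each velocity analytically under a drag law. The Stokes-drag closure, material constants and source distributions below are illustrative assumptions, not the shape-dependent drag coefficients of the paper:

```python
import math
import random

random.seed(3)

# Sample particles from a separable source f0(m, v) = f1(m) * f2(v):
# hypothetical log-uniform masses and uniform launch velocities.
def sample_particle():
    m = 10 ** random.uniform(-12, -9)        # mass, kg
    v = random.uniform(1000.0, 3000.0)       # launch velocity, m/s
    return m, v

def stokes_tau(m, mu=2e-5, rho=7300.0):
    """Stokes relaxation time tau = m / (3*pi*mu*d) for a sphere of
    density rho (Sn-like), with diameter d derived from the mass.
    Illustrative low-Reynolds-number drag law only."""
    d = (6.0 * m / (math.pi * rho)) ** (1.0 / 3.0)
    return m / (3.0 * math.pi * mu * d)

V_GAS = 0.0   # stationary ambient gas
particles = [sample_particle() for _ in range(100)]

def velocity(m, v0, t):
    """Exact solution of dv/dt = -(v - V_GAS)/tau."""
    return V_GAS + (v0 - V_GAS) * math.exp(-t / stokes_tau(m))

t = 1e-4  # seconds after ejection
mean_v0 = sum(v for _, v in particles) / len(particles)
mean_vt = sum(velocity(m, v, t) for m, v in particles) / len(particles)
print(f"mean velocity: {mean_v0:.0f} -> {mean_vt:.0f} m/s")
```

Because the relaxation time grows with particle mass, light particles decelerate toward the gas velocity first, which is the mass-velocity sorting that gas-atmosphere transport experiments are designed to measure.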
Numerical solution of control problems governed by nonlinear differential equations
Heinkenschloss, M.
1994-12-31
In this presentation the author investigates an iterative method for the solution of optimal control problems. These problems are formulated as constrained optimization problems, with constraints arising from the state equation and in the form of bound constraints on the control. The method uses the special structure of the problem arising from the bound constraint and the state equation. It is derived from SQP methods and projected Newton methods and combines the advantages of both. The bound constraint is satisfied by all iterates using a projection; the nonlinear state equation is satisfied in the limit. Only a linearized state equation has to be solved in every iteration. The linearized problems are solved using multilevel methods and GMRES.
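The projection idea can be shown on a toy bound-constrained quadratic: every iterate is mapped back into the box, so the bound constraint holds at each step while the iteration converges. This sketch uses plain projected gradient steps rather than the SQP/projected-Newton hybrid of the presentation, and all numbers are illustrative:

```python
# Minimise J(u) = 0.5 * sum((u_i - a_i)^2) subject to LO <= u_i <= HI.
# The unconstrained minimiser is u = A; components of A outside the box
# should end up clipped to the nearest bound.
A = [-2.0, 0.3, 1.7, 5.0]      # per-component unconstrained minimiser
LO, HI = 0.0, 2.0

def project(u):
    """Componentwise projection onto the box [LO, HI]."""
    return [min(max(ui, LO), HI) for ui in u]

def grad(u):
    return [ui - ai for ui, ai in zip(u, A)]

u = [1.0] * len(A)             # feasible starting point
step = 0.5
for _ in range(200):
    u = project([ui - step * gi for ui, gi in zip(u, grad(u))])

print(u)
```

The limit is the projection of A onto the box: interior components reach their unconstrained values, while the components at -2.0 and 5.0 sit on the bounds, exactly the active-set behaviour a projected Newton method exploits.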
On the computational complexity of sequence design problems
Hart, W.E. [Sandia National Labs., Albuquerque, NM (United States)]
1997-12-01
Inverse protein folding concerns the identification of an amino acid sequence that folds to a given structure. Sequence design problems attempt to avoid the apparent difficulty of inverse protein folding by defining an energy that can be minimized to find protein-like sequences. We evaluate the practical relevance of two sequence design problems by analyzing their computational complexity. We show that the canonical method of sequence design is intractable and describe approximation algorithms for this problem. We also describe an efficient algorithm that exactly solves the grand canonical method. Our analysis shows how sequence design problems can fail to reduce the difficulty of the inverse protein folding problem and highlights the need to analyze these problems to evaluate their practical relevance. 10 refs., 8 figs.
On the computational complexity of sequence design problems
Hart, W.E. [Sandia National Labs., Albuquerque, NM (United States). Algorithms and Discrete Mathematics Dept.
1996-12-31
Inverse protein folding concerns the identification of an amino acid sequence that folds to a given structure. Sequence design problems attempt to avoid the apparent difficulty of inverse protein folding by defining an energy that can be minimized to find protein-like sequences. The authors evaluate the practical relevance of two sequence design problems by analyzing their computation complexity. They show that the canonical method of sequence design is intractable, and describe approximation algorithms for this problem. The authors also describe an efficient algorithm that exactly solves the grand canonical method. The analysis shows how sequence design problems can fail to reduce the difficulty of the inverse protein folding problem, and highlights the need to analyze these problems to evaluate their practical relevance.
On parameterization of the inverse problem for estimating aquifer...
Office of Scientific and Technical Information (OSTI)
Title: On parameterization of the inverse problem for estimating aquifer properties using tracer data Authors: Kowalsky, M.B.; Finsterle, S.; Commer, M.; Williams, K.H.; ...
FELIX: advances in modeling forward and inverse icesheet problems...
Office of Scientific and Technical Information (OSTI)
icesheet problems. Authors: Perego, Mauro; Eldred, Michael S.; Gunzburger, Max; Salinger, Andrew G.; Kalashnikova, Irina; Ju, L.; Hoffman, M.; Leng, W.; Price, S.; ...
Crowdsourcing Initiative Seeks Buildings-Related Problems to...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
The Building Technologies Office is partnering with the successful SunShot Catalyst crowdsourcing initiative to identify and solve problems related to software development, data, ...
European Geothermal Drilling Experience-Problem Areas and Case...
Office of Scientific and Technical Information (OSTI)
Drilling Experience-Problem Areas and Case Studies Baron, G.; Ungemach, P. 15 GEOTHERMAL ENERGY; BOREHOLES; DRILLING; EVALUATION; EXPLORATION; GEOTHERMAL RESOURCES; ITALY;...
Tesla Tackling Problem of Power Storage: Chamberlain - Joint...
U.S. Department of Energy (DOE) all webpages (Extended Search)
May 1, 2015, Videos Tesla Tackling Problem of Power Storage: Chamberlain Jeff Chamberlain and Bloomberg's David Gura speak on Bloomberg West discussing the potential global impact ...
History, Applications, Numerical Values and Problems with the...
Numerical Values and Problems with the Calculation of EROI (Energy Return on Energy Investment) Professor Charles Hall State University of NY College of Environmental Science and...
Synthetic fossil fuel technologies: health problems and intersociety...
Conference: Synthetic fossil fuel technologies: health problems and intersociety cooperation Citation Details In-Document Search Title: Synthetic fossil fuel technologies: health ...
Trending and root cause analysis of TWRS radiological problem reports
Brown, R.L.
1997-07-31
This document provides a uniform method for trending and performing root cause analysis for radiological problem reports at Tank Waste Remediation System (TWRS).
Using Energy-Filtered TEM to Solve Practical Materials Problems...
Title: Using Energy-Filtered TEM to Solve Practical Materials Problems With Inspirations from Gareth Thomas. Abstract not provided. Authors: Sugar, Joshua Daniel ; El Gabaly ...
Collins, David
2010-05-15
A general framework for regarding oracle-assisted quantum algorithms as tools for discriminating among unitary transformations is described. This framework is applied to the Deutsch-Jozsa problem, and all possible quantum algorithms which solve the problem with certainty using oracle unitaries in a particular form are derived. It is also used to show that any quantum algorithm that solves the Deutsch-Jozsa problem starting with a quantum system in a particular class of initial, thermal equilibrium-based states of the type encountered in solution-state NMR can only succeed with greater probability than a classical algorithm when the problem size n exceeds approximately 10^5.
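The one-query separation the abstract refers to can be illustrated with a small classical simulation of the standard Deutsch-Jozsa circuit. This is an illustrative sketch only, not the paper's framework; the function name `deutsch_jozsa`, the phase-oracle encoding, and the example functions are assumptions:

```python
# Minimal NumPy simulation of Deutsch-Jozsa: one oracle "query" decides
# whether a promise function f on n bits is constant or balanced.
import numpy as np

def deutsch_jozsa(f, n):
    """Return 'constant' or 'balanced' for a promise function f on n bits."""
    N = 2 ** n
    # Uniform superposition over all n-bit inputs (Hadamards on |0...0>).
    state = np.full(N, 1.0 / np.sqrt(N))
    # Phase oracle: |x> -> (-1)^f(x) |x>, applied once.
    state = np.array([(-1) ** f(x) * s for x, s in enumerate(state)])
    # Final Hadamard layer: the amplitude of |0...0> is the mean of the phases.
    amp0 = state.sum() / np.sqrt(N)
    # Constant f gives |amp0| = 1; balanced f gives amp0 = 0.
    return "constant" if abs(amp0) > 0.5 else "balanced"

assert deutsch_jozsa(lambda x: 0, 4) == "constant"
assert deutsch_jozsa(lambda x: x & 1, 4) == "balanced"  # parity of last bit
```

The sketch measures only the amplitude of the all-zeros outcome, which is exactly the quantity the circuit's interference computes; a classical deterministic algorithm would need 2^(n-1) + 1 queries in the worst case.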
On a Highly Nonlinear Self-Obstacle Optimal Control Problem
Di Donato, Daniela; Mugnai, Dimitri
2015-10-15
We consider a non-quadratic optimal control problem associated to a nonlinear elliptic variational inequality, where the obstacle is the control itself. We show that, for a fixed desired profile, there exists an optimal solution which is not far from it. Detailed characterizations of the optimal solution are given, also in terms of approximating problems.
Domain wall and isocurvature perturbation problems in axion models
Kawasaki, Masahiro; Yoshino, Kazuyoshi; Yanagida, Tsutomu T. E-mail: tsutomu.tyanagida@ipmu.jp
2013-11-01
Axion models have two serious cosmological problems, the domain wall and isocurvature perturbation problems. In order to solve these problems we investigate Linde's model, in which the field value of the Peccei-Quinn (PQ) scalar is large during inflation. In this model the fluctuations of the PQ field grow after inflation through parametric resonance, and stable axionic strings may be produced, which results in the domain wall problem. We study the formation of axionic strings using lattice simulations. It is found that in chaotic inflation the axion model is free from both the domain wall and the isocurvature perturbation problems if the initial misalignment angle θ_a is smaller than O(10^−2). Furthermore, axions can also account for the dark matter for a breaking scale v ≃ 10^12–10^16 GeV and a Hubble parameter during inflation H_inf ≲ 10^11–10^12 GeV in general inflation models.
Shell Element Verification & Regression Problems for DYNA3D
Zywicz, E
2008-02-01
A series of quasi-static regression/verification problems were developed for the triangular and quadrilateral shell element formulations contained in Lawrence Livermore National Laboratory's explicit finite element program DYNA3D. Each regression problem imposes both displacement- and force-type boundary conditions to probe the five independent nodal degrees of freedom employed in the targeted formulation. When applicable, the finite element results are compared with small-strain linear-elastic closed-form reference solutions to verify select aspects of the formulation's implementation. Although all problems in the suite depict the same geometry, material behavior, and loading conditions, each problem represents a unique combination of shell formulation, stabilization method, and integration rule. Collectively, the thirty-six new regression problems in the test suite cover nine different shell formulations, three hourglass stabilization methods, and three families of through-thickness integration rules.
Russian Doll Search for solving Constraint Optimization problems
Verfaillie, G.; Lemaitre, M.
1996-12-31
Although the Constraint Satisfaction framework has been extended to deal with Constraint Optimization problems, optimization is far more complex than satisfaction. One of the causes of the inefficiency of complete tree search methods, like Depth First Branch and Bound, lies in the poor quality of the lower bound on the global valuation of a partial assignment, even when using Forward Checking techniques. In this paper, we introduce the Russian Doll Search algorithm, which replaces one search by n successive searches on nested subproblems (n being the number of problem variables), records the results of each search, and uses them later, when solving larger subproblems, to improve the lower bound on the global valuation of any partial assignment. On small random problems and on large real scheduling problems, this algorithm yields surprisingly good results, which improve greatly as the problems get more constrained and as the bandwidth of the variable ordering used diminishes.
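The idea of n nested searches whose recorded optima tighten later lower bounds can be sketched on a toy weighted CSP. This is a rough illustration under assumptions of mine (the cost signature, the branch-and-bound details, and the toy instance are not the authors' code):

```python
def russian_doll_search(domains, cost):
    """Minimise total pairwise cost over complete assignments.

    domains: list of candidate-value lists, one per variable.
    cost(i, vi, j, vj): nonnegative penalty for the pair i < j (assumed API).
    """
    n = len(domains)
    best_suffix = [0] * (n + 1)          # best_suffix[i] = optimum of vars i..n-1
    for start in range(n - 1, -1, -1):   # n nested searches, smallest suffix first
        best = float("inf")
        stack = [(start, [], 0)]         # (next variable, partial assignment, cost)
        while stack:
            i, assign, acc = stack.pop()
            if i == n:
                best = min(best, acc)
                continue
            for v in domains[i]:
                extra = sum(cost(j, assign[j - start], i, v)
                            for j in range(start, i))
                # Lower bound = partial cost + recorded optimum of vars i+1..n-1.
                if acc + extra + best_suffix[i + 1] < best:
                    stack.append((i + 1, assign + [v], acc + extra))
        best_suffix[start] = best
    return best_suffix[0]

# Toy instance: 3 mutually "different-value" variables but only 2 values,
# so at least one clash is unavoidable (minimum cost 1).
clash = lambda i, vi, j, vj: 1 if vi == vj else 0
assert russian_doll_search([[0, 1]] * 3, clash) == 1
```

The key line is the pruning test: without `best_suffix[i + 1]` the bound would only count the cost already incurred, which is exactly the weakness of plain Depth First Branch and Bound that the abstract describes.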
Workshops and problems for benchmarking eddy current codes
Turner, L.R.; Davey, K.; Ida, N.; Rodger, D.; Kameari, A.; Bossavit, A.; Emson, C.R.I.
1988-08-01
A series of six workshops was held in 1986 and 1987 to compare eddy current codes, using six benchmark problems. The problems included transient and steady-state ac magnetic fields, close and far boundary conditions, magnetic and non-magnetic materials. All the problems were based either on experiments or on geometries that can be solved analytically. The workshops and solutions to the problems are described. Results show that many different methods and formulations give satisfactory solutions, and that in many cases reduced dimensionality or coarse discretization can give acceptable results while reducing the computer time required. A second two-year series of TEAM (Testing Electromagnetic Analysis Methods) workshops, using six more problems, is underway. 12 refs., 15 figs., 4 tabs.
Various forms of indexing HDMR for modelling multivariate classification problems
Aksu, Çağrı; Tunga, M. Alper
2014-12-10
The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy of partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems with real-world data. Mostly, we do not know all possible class values in the domain of the given problem; that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems from the UCI Machine Learning Repository were used, and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.
Nonlinear eigenvalue problems in Density Functional Theory calculations
Fattebert, J
2009-08-28
Developed in the 1960s by W. Kohn and coauthors, Density Functional Theory (DFT) is a very popular quantum model for first-principles simulations in chemistry and materials science. It allows calculations of systems made of hundreds of atoms. Indeed, DFT reduces the 3N-dimensional Schroedinger electronic structure problem to the search for a ground-state electronic density in 3D. In practice it leads to the search for N electronic wave functions that solve an energy minimization problem in 3D, or equivalently to the solution of an eigenvalue problem with a nonlinear operator.
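The "eigenvalue problem with a nonlinear operator" can be illustrated with a toy self-consistent field (SCF) iteration, where the operator depends on the density it produces. A minimal sketch, assuming a 1D discrete Laplacian, a density-dependent term α·diag(ρ), and simple mixing (all of which are illustrative assumptions, not the report's method):

```python
import numpy as np

n, alpha = 50, 1.0
# 1D Dirichlet Laplacian as the linear part of the Hamiltonian.
lap = ((np.diag(np.full(n, 2.0))
        - np.diag(np.ones(n - 1), 1)
        - np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2)
rho = np.full(n, 1.0 / n)                # initial density guess
for _ in range(100):
    H = lap + np.diag(alpha * rho)       # the operator depends on the density
    w, U = np.linalg.eigh(H)             # solve the *linearised* eigenproblem
    psi = U[:, 0]                        # its ground state
    rho = 0.5 * rho + 0.5 * psi ** 2     # simple mixing for stability
residual = np.linalg.norm((lap + np.diag(alpha * rho)) @ psi - w[0] * psi)
```

At self-consistency the density fed into H equals the density produced by its ground state, so `residual` is small; each pass is an ordinary symmetric eigensolve, and the nonlinearity lives entirely in the outer loop.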
Robust Consumption-Investment Problem on Infinite Horizon
Zawisza, Dariusz
2015-12-15
In our paper we consider an infinite horizon consumption-investment problem under model misspecification in a general stochastic factor model. We formulate the problem as a stochastic game and characterize the saddle point and the value function of that game using an ODE of semilinear type, for which we prove an existence and uniqueness theorem. This equation is of interest in its own right, since it generalizes many other equations arising in various infinite horizon optimization problems.
A Schwarz alternating procedure for singular perturbation problems
Garbey, M.; Kaper, H.G.
1994-12-31
The authors show that the Schwarz alternating procedure offers a good algorithm for the numerical solution of singular perturbation problems, provided the domain decomposition is properly designed to resolve the boundary and transition layers. They give sharp estimates for the optimal position of the domain boundaries and present convergence rates of the algorithm for various second-order singular perturbation problems. The splitting of the operator is domain-dependent, and the iterative solution of each subproblem is based on a modified asymptotic expansion of the operator. They show that this asymptotic-induced method leads to a family of efficient massively parallel algorithms and report on implementation results for a turning-point problem and a combustion problem.
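A minimal numerical sketch of the alternating procedure on the model singular perturbation problem -ε u″ + u = 1, u(0) = u(1) = 0, whose solution has boundary layers at both ends: two overlapping subdomains exchange interface traces until the iteration settles. The grid sizes, the overlap [0.4, 0.6], and the iteration count are illustrative assumptions, not the authors' asymptotic-induced method:

```python
import numpy as np

def solve_subdomain(a, b, ua, ub, eps, m):
    """Central-difference solve of -eps*u'' + u = 1 on (a, b), u(a)=ua, u(b)=ub."""
    h = (b - a) / (m + 1)
    A = np.zeros((m, m))
    rhs = np.ones(m)
    for i in range(m):
        A[i, i] = 2 * eps / h ** 2 + 1.0
        if i > 0:
            A[i, i - 1] = -eps / h ** 2
        if i < m - 1:
            A[i, i + 1] = -eps / h ** 2
    rhs[0] += eps / h ** 2 * ua           # fold Dirichlet data into the RHS
    rhs[-1] += eps / h ** 2 * ub
    x = np.linspace(a, b, m + 2)
    return x, np.concatenate(([ua], np.linalg.solve(A, rhs), [ub]))

eps = 1e-3
# Overlapping subdomains [0, 0.6] and [0.4, 1] each resolve one boundary layer.
gL = 0.0                                  # initial guess for u(0.6)
for _ in range(10):                       # Schwarz alternation
    xL, uL = solve_subdomain(0.0, 0.6, 0.0, gL, eps, 199)
    gR = np.interp(0.4, xL, uL)           # pass the trace at x = 0.4 rightwards
    xR, uR = solve_subdomain(0.4, 1.0, gR, 0.0, eps, 199)
    gL = np.interp(0.6, xR, uR)           # pass the trace at x = 0.6 leftwards

exact = lambda x: 1 - np.cosh((x - 0.5) / np.sqrt(eps)) / np.cosh(0.5 / np.sqrt(eps))
err = abs(np.interp(0.5, xL, uL) - exact(0.5))
```

Because the layers have width O(√ε) and the overlap is much wider, the interface values converge after only a few sweeps; the abstract's point about placing subdomain boundaries to resolve the layers is exactly what the choice of [0.4, 0.6] encodes here.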
Domain decomposition methods for solving an image problem
Tsui, W.K.; Tong, C.S.
1994-12-31
The domain decomposition method is a technique to break up a problem so that the ensuing subproblems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential equations, which makes them particularly useful for solving elliptic problems. In this paper, the authors apply the so-called covering preconditioner, which is based on information about the operator under investigation and is therefore suitable for various kinds of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem: extracting an original image that has been degraded by a known convolution process and additive Gaussian noise.
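The flavour of such a restoration solve can be sketched with plain (unpreconditioned) conjugate gradients on a tiny 1D analogue, solving the Tikhonov-regularised normal equations (AᵀA + λI)x = Aᵀb for a known blur A. The kernel, sizes, and λ are illustrative assumptions; the paper's covering preconditioner and domain decomposition are not reproduced here:

```python
import numpy as np

np.random.seed(1)
n = 64
kernel = np.array([0.25, 0.5, 0.25])      # assumed blur (convolution) kernel
A = sum(np.diag(np.full(n - abs(k - 1), kernel[k]), k - 1) for k in range(3))
x_true = (np.arange(n) % 8 < 4).astype(float)    # piecewise-constant "image"
b = A @ x_true + 0.001 * np.random.randn(n)      # blurred + noisy data
lam = 1e-3
M = A.T @ A + lam * np.eye(n)             # SPD normal-equations operator
rhs = A.T @ b

x = np.zeros(n)
r = rhs - M @ x
p = r.copy()
for _ in range(500):                      # plain conjugate gradients
    Mp = M @ p
    alpha = (r @ r) / (p @ Mp)
    x += alpha * p
    r_new = r - alpha * Mp
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    if np.linalg.norm(r) < 1e-10:
        break
```

The iteration count needed here reflects the poor conditioning of AᵀA + λI; a good preconditioner (such as the covering preconditioner the authors study) aims to cut that count substantially.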
ALCF's new data science program targets "big data" problems ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
ALCF's new data science program targets "big data" problems Author: Laura Wolf April 1, 2016 The Argonne Leadership ...
Accelerating PDE-Constrained Optimization Problems using Adaptive...
Accelerating PDE-Constrained Optimization Problems using Adaptive Reduced-Order Models January 15, 2016 10:30AM to 11:30AM Presenter Matthew Zahr, Wilkinson Interviewee Location...
Simulation and Analysis of Converging Shock Wave Test Problems
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-06-21
Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.
"Upcycling": A Green Solution to the Problem of Plastic - Energy...
"Upcycling": A Green Solution to the Problem of ... At Argonne, chemist Vilas Pol has devised an environmentally green method that breaks down ...
The problem of living in a world contaminated with chemicals
Metcalf, R.L.
1990-12-31
The proliferation of xenobiotic chemicals in the global environment poses living problems for each of us aboard "spaceship earth." Seven case studies are presented that illustrate the magnitude of the problem that can result from waiting to identify toxic hazards until there have been decades of "human guinea pig" exposure. 25 refs., 5 tabs.
Capacitated arc routing problem and its extensions in waste collection
Fadzli, Mohammad; Najwa, Nurul; Luis, Martino
2015-05-15
Capacitated arc routing problem (CARP) is the youngest branch of graph-theoretic routing, focusing on solving edge/arc routing for optimality. For many years, operations research was devoted to CARP's counterpart, the vehicle routing problem (VRP), which does not fit several real cases such as waste collection and road maintenance. In this paper, we highlight several extensions of the capacitated arc routing problem (CARP) that represent the real-life problem of vehicle operation in waste collection. By design, CARP finds a set of routes for vehicles that satisfies all preset constraints, such that all vehicles start and end at a depot and service a set of demands on edges (or arcs) exactly once without exceeding capacity, so that the total fleet cost is minimized. We also address the differentiation between CARP and VRP in waste collection. Several issues are discussed, including stochastic demands and time window problems, to show the complexity and importance of CARP in the related industry. A mathematical model of CARP and its new version is presented, considering factors such as delivery cost, lateness penalty and delivery time.
Geothermal drilling problems and their impact on cost
Carson, C.C.
1982-01-01
Historical data are presented that demonstrate the significance of unexpected problems. In extreme cases, trouble costs are the largest component of well costs or severe troubles can lead to abandonment of a hole. Drilling experiences from US geothermal areas are used to analyze the frequency and severity of various problems. In addition, average trouble costs are estimated based on this analysis and the relationship between trouble and depth is discussed. The most frequent drilling and completion problem in geothermal wells is lost circulation. This is especially true for resources in underpressured, fractured formations. Serious loss of circulation can occur during drilling - because of this, the producing portions of many wells are drilled with air or aerated drilling fluid and the resulting corrosion/erosion problems are tolerated - but it can also affect the cementing of well casing. Problems in bonding the casing to the formation result from many other causes as well, and are common in geothermal wells. Good bonds are essential because of the possibility of casing collapse due to thermal cycling during the life of the well. Several other problems are identified and their impacts are quantified and discussed.
Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M; Pierson, K.; Rixen, D.
1999-04-01
We report on the application of the one-level FETI method to the solution of a class of substructural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.
COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS
H. B. HUNT; M. V. MARATHE; R. E. STEARNS
2001-06-01
Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97]. Keywords: NP-hardness; Approximation Algorithms; PSPACE-hardness; Quantified and Stochastic Constraint Satisfaction Problems.
The inverse problems of wing panel manufacture processes
Oleinikov, A. I.; Bormotin, K. S.
2013-12-16
It is shown that inverse problems of steady-state creep bending of plates, in both geometrically linear and nonlinear formulations, can be represented in a variational formulation. Steady-state values of the obtained functionals, corresponding to the solutions of the problems of inelastic deformation and springback, are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics at hot-cold work: strong asymmetry between tensile and compressive behavior, and very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials in which the kind of stress state has an influence. The paper gives the basics of the developed computer-aided system for design, modeling, and electronic simulation targeting the manufacture of wing integral panels. The modeling results can be used to calculate the die tooling, determine panel processibility, and control panel rejection in the course of forming.
Real time detection and correction of distribution feeder operational problems
Subramanian, A.K.; Huang, J.C.
1995-12-31
The paper presents a new technique that detects and corrects distribution operational problems using closed-loop control of substation transformers, capacitors and reactors by an online computer. This allows the distribution system to be operated close to its capacity without sacrificing the quality of power supply. Such operations help defer the additional cost of installing new substations. The technique integrates the Distribution Feeder Analysis (DFA) and the Distribution Substation Control (DSC) functions to achieve this. The DFA function provides the topology and power flow results for the feeders using the substation real-time measurements. It does not require feeder section measurements. The real-time feeder results are used to detect any currently existing feeder operational problems, such as feeder section voltages and currents outside their limits. The detected feeder problems are transformed into substation distribution bus objectives and then corrected by the DSC function using controls available at the substation. The DSC function has been performing successfully for several years at Potomac Electric Power Company (PEPCO) in Washington, D.C. It uses a closed-loop control scheme that controls the substation transformer taps and shunt capacitor and reactor breakers and optimizes the substation operation. By combining the DFA and DSC functions into a single function, and with proper transformation of feeder problems into substation objectives, a new closed-loop control scheme for the substation controls is achieved. This scheme corrects the detected feeder problems and optimizes the substation operation. The technique has been implemented and tested using the actual substation and feeder models of PEPCO.
Economic penalties of problems and errors in solar energy systems
Raman, K.; Sparkes, H.R.
1983-01-01
Experience with a large number of installed solar energy systems in the HUD Solar Program has shown that a variety of problems and design/installation errors have occurred in many solar systems, sometimes resulting in substantial additional costs for repair and/or replacement. In this paper, the effect of problems and errors on the economics of solar energy systems is examined. A method is outlined for doing this in terms of selected economic indicators. The method is illustrated by a simple example of a residential solar DHW system. An example of an installed, instrumented solar energy system in the HUD Solar Program is then discussed. Detailed results are given for the effects of the problems and errors on the cash flow, cost of delivered heat, discounted payback period, and life-cycle cost of the solar energy system. Conclusions are drawn regarding the most suitable economic indicators for showing the effects of problems and errors in solar energy systems. A method is outlined for deciding on the maximum justifiable expenditure for maintenance on a solar energy system with problems or errors.
Cosmological moduli problem in large volume scenario and thermal inflation
Choi, Kiwoon; Park, Wan-Il; Shin, Chang Sub E-mail: wipark@kias.re.kr
2013-03-01
We show that in a large volume scenario of type IIB string or F-theory compactifications, single thermal inflation provides only a partial solution to the cosmological problem of the light volume modulus. We then clarify the conditions for double thermal inflation, a simple extension of the usual single thermal inflation scenario, to solve the cosmological moduli problem in the case of relatively light moduli masses. Using a specific example, we demonstrate that double thermal inflation can be realized in the large volume scenario in a natural manner, and that the problem of the light volume modulus can be solved for the whole relevant mass range. We also find that the right amount of baryon asymmetry and dark matter can be obtained via a late-time Affleck-Dine mechanism and the decays of the visible-sector NLSP to the flatino LSP.
A survey of problems in divertor and edge plasma theory
Boozer, A.; Braams, B.; Weitzner, H.; Cohen, R.; Hazeltine, R.; Hinton, F.; Houlberg, W.; Oktay, E.; Sadowski, W.; Post, D.; Sigmar, D.; Wootton, A.
1992-12-22
Theoretical physics problems related to divertor design are presented, organized by the region in which they occur. Some of the open questions in edge physics are presented from a theoretician's point of view. After a cursory sketch of the fluid models of the edge plasma and their numerical realization, the following topics are taken up: time-dependent problems, non-axisymmetric effects, anomalous transport in the scrape-off layer, edge kinetic theory, sheath effects and boundary conditions in divertors, electric field effects, atomic and molecular data issues, impurity transport in the divertor region, poloidally localized power dissipation (MARFEs and dense gas targets), helium ash removal, and neutral transport. The report ends with a summary of selected problems of particular significance and a brief bibliography of survey articles and related conference proceedings.
Operating experience review of service water system problems
Lam, P.
1989-01-01
In a recent paper, selected results of a comprehensive review and evaluation of service water system problems conducted by the Office for Analysis and Evaluation of Operational Data (AEOD) of the US Nuclear Regulatory Commission (NRC) were presented. The results of this review and evaluation indicated that service water system problems have significant safety implications. These system problems are attributable to a great variety of causes and have adverse impacts on a large number of safety-related systems and components. To provide additional feedback of operating experience, this paper presents an overview of the dominant mechanisms leading to service water system degradations and failures. The failures and degradations of service water systems observed in the 276 operating events are grouped into six general categories. The six general categories are (1) fouling due to various mechanisms, (2) single-failure and other design deficiencies, (3) flooding, (4) equipment failures, (5) personnel and procedural errors, and (6) seismic deficiencies.
Rekindle the Fire: Building Supercomputers to Solve Dynamic Problems
Studham, Scott S.
2004-02-16
Seymour Cray had a "Let's go to the moon" attitude when it came to building high-performance computers. His drive was to create architectures designed to solve the most challenging problems. Modern high-performance computer architects, however, seem to be focusing on building the largest floating-point-generation machines by using truckloads of commodity parts. Don't get me wrong; current clusters can solve a class of problems that are untouchable by any other system in the world, including the supercomputers of yesteryear. Many of the world's fastest clusters provide new insights into weather forecasting and our understanding of the fundamental sciences, and provide the ability to model our nuclear stockpiles. Let's call this class of problem a first-principles simulation, because the simulations are based on a fundamental physical understanding or model.
COAL-FIRED UTILITY BOILERS: SOLVING ASH DEPOSITION PROBLEMS
Christopher J. Zygarlicke; Donald P. McCollor; Steven A. Benson; Jay R. Gunderson
2001-04-01
The accumulation of slagging and fouling ash deposits in utility boilers has been a source of aggravation for coal-fired boiler operators for over a century. Many new developments in analytical, modeling, and combustion testing methods in the past 20 years have made it possible to identify root causes of ash deposition. A concise and comprehensive guidelines document has been assembled for solving ash deposition as related to coal-fired utility boilers. While this report accurately captures the current state of knowledge in ash deposition, note that substantial research and development is under way to more completely understand and mitigate slagging and fouling. Thus, while comprehensive, this document carries the title "interim," with the idea that future work will provide additional insight. Primary target audiences include utility operators and engineers who face plant inefficiencies and significant operational and maintenance costs that are associated with ash deposition problems. Pulverized and cyclone-fired coal boilers are addressed specifically, although many of the diagnostics and solutions apply to other boiler types. Logic diagrams, ash deposit types, and boiler symptoms of ash deposition are used to aid the user in identifying an ash deposition problem, diagnosing and verifying root causes, determining remedial measures to alleviate or eliminate the problem, and then monitoring the situation to verify that the problem has been solved. In addition to a step-by-step method for identifying and remediating ash deposition problems, this guideline document (Appendix A) provides descriptions of analytical techniques for diagnostic testing and gives extensive fundamental and practical literature references and addresses of organizations that can provide help in alleviating ash deposition problems.
Simple methods solve vacuum column problems using plant data
Golden, S.W.; Sloley, A.W.
1992-09-14
This paper reports that simple methods can be used to evaluate common vacuum column problems using actual field measurements. All that is required is an enthalpy table, a calculator, and an absolute pressure manometer, which can be purchased for about $100. The key to troubleshooting refinery crude or lube vacuum columns is basic plant data. Although many techniques may be used to increase cutpoint, the largest yield improvements on existing units can often be achieved simply by eliminating such problems as leaking collector trays or overflowing liquid distributors.
Tabu search techniques for large high-school timetabling problems
Schaerf, A.
1996-12-31
The high-school timetabling problem consists of assigning all the lectures of a high school to time periods in such a way that no teacher (or class) is involved in more than one lecture at a time and other side constraints are satisfied. The problem is NP-complete and is usually tackled using heuristic methods. This paper describes a solution algorithm (and its implementation) based on tabu search. The algorithm interleaves different types of moves and makes use of an adaptive relaxation of the hard constraints. The implementation of the algorithm has been successfully tested in some large high schools with various kinds of side constraints.
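A minimal sketch of such a tabu search loop, on an invented toy timetable (not Schaerf's implementation): lectures are (teacher, class) pairs, a move reschedules one lecture, and the tabu list forbids undoing a move for a fixed tenure:

```python
import random

def conflicts(assign, lectures, n_periods):
    """Count clashes: two lectures sharing a teacher or a class
    scheduled in the same period."""
    c, n = 0, len(lectures)
    for i in range(n):
        for j in range(i + 1, n):
            if assign[i] == assign[j] and (
                    lectures[i][0] == lectures[j][0] or
                    lectures[i][1] == lectures[j][1]):
                c += 1
    return c

def tabu_search(lectures, n_periods, iters=2000, tenure=7, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(n_periods) for _ in lectures]
    best, best_cost = list(assign), conflicts(assign, lectures, n_periods)
    tabu = {}   # (lecture, period) -> iteration until which the move is tabu
    for it in range(iters):
        if best_cost == 0:
            break
        # pick the best non-tabu move: reschedule one lecture
        move, move_cost = None, None
        for lec in range(len(lectures)):
            old = assign[lec]
            for p in range(n_periods):
                if p == old or tabu.get((lec, p), -1) > it:
                    continue
                assign[lec] = p
                cost = conflicts(assign, lectures, n_periods)
                if move_cost is None or cost < move_cost:
                    move, move_cost = (lec, old, p), cost
                assign[lec] = old
        lec, old, new = move
        assign[lec] = new
        tabu[(lec, old)] = it + tenure   # forbid moving straight back
        if move_cost < best_cost:
            best, best_cost = list(assign), move_cost
    return best, best_cost

# 6 lectures as (teacher, class) pairs, 3 periods; a clash-free
# timetable exists for this instance
lecs = [('T1','A'), ('T1','B'), ('T2','A'), ('T2','C'), ('T3','B'), ('T3','C')]
timetable, cost = tabu_search(lecs, 3)
print(cost)
```

Real timetabling adds the adaptive hard-constraint relaxation and richer move types described in the abstract; this sketch keeps only the core tabu mechanics.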
Channeling problem for charged particles produced by confining environment
Chuluunbaatar, O.; Gusev, A. A.; Derbov, V. L.; Krassovitskiy, P. M.; Vinitsky, S. I.
2009-05-15
The channeling problem produced by a confining environment, which leads to resonance scattering of charged particles via quasistationary states embedded in the continuum, is examined. A nonmonotonic dependence of physical parameters on collision energy and/or the confining environment, due to resonance transmission and total reflection effects, is confirmed; this can increase the rate of recombination processes. The reduction of the model for two identical charged ions to a boundary problem is considered, together with the asymptotic behavior of the solution in the vicinity of the pair-collision point and the results of R-matrix calculations. Tentative estimates of the enhancement factor and the total reflection effect are discussed.
How to Solve Schroedinger Problems by Approximating the Potential Function
Ledoux, Veerle; Van Daele, Marnix
2010-09-30
We give a survey of the efforts in the direction of solving the Schroedinger equation by using piecewise approximations of the potential function. Two types of approximating potentials have been considered in the literature: piecewise constant and piecewise linear functions. For polynomials of higher degree the approximating problem is not so easy to integrate analytically. This obstacle can be circumvented by using a perturbative approach to construct the solution of the approximating problem, leading to the so-called piecewise perturbation methods (PPM). We discuss the construction of a PPM in its most convenient form for applications and show that the different PPM versions (CPM, LPM) are in fact equivalent.
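The piecewise-constant idea can be illustrated with a toy shooting method: on each piece the equation psi'' = (V - E) psi has a closed-form solution, so the wavefunction is propagated piece by piece and an eigenvalue is located by bisection. This is a hedged sketch, not a CPM implementation; units (hbar^2/2m = 1) and the infinite-well test case are chosen for illustration:

```python
import math

def propagate(E, V, x_max=1.0):
    """Propagate (psi, psi') across equal-width piecewise-constant
    potential pieces V on [0, x_max], using the exact solution of
    psi'' = (V - E) psi on each piece.  Returns psi(x_max)."""
    h = x_max / len(V)
    psi, dpsi = 0.0, 1.0                 # psi(0) = 0, arbitrary slope
    for v in V:
        if E > v:                        # oscillatory piece
            k = math.sqrt(E - v)
            c, s = math.cos(k * h), math.sin(k * h)
            psi, dpsi = psi * c + dpsi * s / k, -psi * k * s + dpsi * c
        elif E < v:                      # evanescent piece
            k = math.sqrt(v - E)
            c, s = math.cosh(k * h), math.sinh(k * h)
            psi, dpsi = psi * c + dpsi * s / k, psi * k * s + dpsi * c
        else:                            # E == v: linear piece
            psi, dpsi = psi + dpsi * h, dpsi
    return psi                           # eigenvalue when psi(x_max) == 0

def eigenvalue(V, lo, hi, tol=1e-10):
    """Bisect on E in [lo, hi], assuming psi(x_max) changes sign once."""
    f_lo = propagate(lo, V)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = propagate(mid, V)
        if f_mid * f_lo > 0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Infinite well with V = 0: exact ground state is pi**2 = 9.8696...
E0 = eigenvalue([0.0] * 8, 5.0, 15.0)
print(E0)   # close to pi**2
```

Because the piecewise solution is exact for a constant potential, the V = 0 test reproduces pi**2 to the bisection tolerance regardless of the number of pieces.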
National Energy Software Center: benchmark problem book. Revision
none,
1985-12-01
Computational benchmarks are given for the following problems: (1) a finite-difference, diffusion-theory calculation of a highly nonseparable reactor; (2) iterative solutions for a multigroup two-dimensional neutron diffusion HTGR problem; (3) a reference solution to the two-group diffusion equation; (4) one-dimensional neutron transport transient solutions; (5) a test of the capabilities of multigroup multidimensional kinetics codes in a heavy-water reactor; (6) a test of the capabilities of multigroup neutron diffusion codes for an LMFBR; and (7) two-dimensional PWR models.
Problems of millipound thrust measurement. The "Hansen Suspension"
Carta, David G.
2014-03-31
Considered in detail are problems which led to the need for and use of the 'Hansen Suspension'. Also discussed are problems which are likely to be encountered in any low-level thrust measuring system. The methods of calibration and the accuracies involved are given careful attention. With all parameters optimized and calibration techniques perfected, the system was found capable of a resolution of 10 μlb. A comparison of thrust measurements made by the 'Hansen Suspension' with measurements of a less sophisticated device leads to some surprising results.
Navier-Stokes Solvers and Generalizations for Reacting Flow Problems
Elman, Howard C
2013-01-27
This is an overview of our accomplishments during the final term of this grant (1 September 2008 -- 30 June 2012). These fall mainly into three categories: fast algorithms for linear eigenvalue problems; solution algorithms and modeling methods for partial differential equations with uncertain coefficients; and preconditioning methods and solvers for models of computational fluid dynamics (CFD).
On the RA research reactor fuel management problems
Matausek, M.V.; Marinkovic, N.
1997-12-01
After 25 yr of operation, the Soviet-origin 6.5-MW heavy water RA research reactor was shut down in 1984. Basic facts about RA reactor operation, aging, reconstruction, and spent-fuel disposal have been presented and discussed in earlier papers. The following paragraphs present recent activities and results related to important fuel management problems.
DYNA3D Material Model 71 - Solid Element Test Problem
Zywicz, E
2008-01-24
A general phenomenological-based elasto-plastic nonlinear isotropic strain hardening material model was implemented in DYNA3D for use in solid, beam, truss, and shell elements. The constitutive model, Model 71, is based upon conventional J2 plasticity and affords optional temperature and rate dependence (visco-plasticity). The expressions for strain hardening, temperature dependence, and rate dependence allow it to represent a wide variety of material responses. Options to capture temperature changes due to adiabatic heating and thermal straining are incorporated into the constitutive framework as well. The verification problem developed for this constitutive model consists of four uniaxial right cylinders subject to constant true strain-rate boundary conditions. Three of the specimens have different constant strain rates imposed, while the fourth specimen is subjected to several strain rate jumps. The material parameters developed by Fehlmann (2005) for 21-6-9 Nitronic steel are utilized. As demonstrated below, the finite element (FE) simulations are in excellent agreement with the theoretical responses and indicate that the model is functioning as desired. Consequently, this problem serves as both a verification problem and a regression test problem for DYNA3D.
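A minimal one-dimensional analogue of such a J2-plasticity update with linear isotropic hardening can be sketched as an elastic-predictor/plastic-corrector loop. This is illustrative only (not Model 71 itself; the material constants are invented, and rate and temperature effects are omitted):

```python
def uniaxial_response(strains, E=200e3, sigma_y=250.0, H=10e3):
    """Strain-driven 1D elastoplastic update with linear isotropic
    hardening: elastic predictor, then return-mapping corrector.
    Stress units follow the inputs (here MPa)."""
    sigma, alpha, eps_prev = 0.0, 0.0, 0.0
    history = []
    for eps in strains:
        sigma_tr = sigma + E * (eps - eps_prev)        # elastic predictor
        f = abs(sigma_tr) - (sigma_y + H * alpha)      # yield function
        if f > 0.0:                                    # plastic corrector
            dgamma = f / (E + H)                       # consistency condition
            sigma = sigma_tr - (1.0 if sigma_tr > 0 else -1.0) * E * dgamma
            alpha += dgamma                            # hardening variable
        else:
            sigma = sigma_tr
        eps_prev = eps
        history.append(sigma)
    return history

# Monotonic tension to 0.4% strain: after yield, the stress-strain slope
# drops from E to the elastoplastic tangent E*H/(E+H)
hist = uniaxial_response([i * 1e-4 for i in range(1, 41)])
```

For linear hardening this return mapping is exact regardless of step size, so the final stress matches the closed-form elastoplastic line.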
Practical control strategy eliminates FCCU compressor surge problems
Campos, M.C.M.M.; Rodriques, P.S.B.
1993-01-11
This paper reports that the control system originally designed for the fluid catalytic cracking unit (FCCU) compressor at Petroleo Brasileiro SA's (Petrobras) Presidente Bernardes refinery, in Sao Paulo, Brazil, was inadequate. The system required almost permanent flow recirculation to prevent surge. An improved antisurge control strategy was implemented in mid-1990. Since then, the unit has operated without the former surge problems.
A VLSI structure for the deadlock avoidance problem
Bertolazzi, P.; Bongiovanni, G.
1985-11-01
In this paper the authors present two VLSI structures implementing the banker's algorithm for the deadlock avoidance problem, and derive the area × time² lower bound for such an algorithm. The first structure is based on the VLSI mesh of trees. The second structure is a modification of the first, and it approaches the theoretical lower bound more closely.
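For reference, the banker's algorithm that these structures implement can be sketched in software as a safety check (a plain sequential version, unrelated to the VLSI layouts discussed):

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: can all processes finish in some order
    without the outstanding needs ever exceeding available resources?"""
    work = list(available)
    need = [[m - a for m, a in zip(mx, al)]
            for mx, al in zip(maximum, allocation)]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion and release its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# Classic textbook instance (5 processes, 3 resource types): safe,
# e.g. via the completion order P1, P3, P4, P2, P0
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maxim = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
print(is_safe([3,3,2], alloc, maxim))   # -> True
```

With nothing available, the same instance is unsafe, since every process still needs at least one unit of some resource.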
EPA Environmental Justice Collaborative Problem-Solving Cooperative Agreement RFP
The U.S. Environmental Protection Agency (EPA) issued a request for proposals for the Environmental Justice Collaborative Problem-Solving (EJCPS) Cooperative Agreement to support community-based organizations in collaborating and partnering with industry, government, academia, and other stakeholders to develop and implement solutions that address local environmental and public health issues.
Waste site characterization and remediation: Problems in developing countries
Kalavapudi, M.; Iyengar, V.
1996-12-31
Increased industrial activities in developing countries have degraded the environment, and the impact on the environment is further magnified because of an ever-increasing population, the prime receptors. Independent of the geographical location, it is possible to adopt effective strategies to solve environmental problems. In the United States, waste characterization and remediation practices are commonly used for quantifying toxic contaminants in air, water, and soil. Previously, such procedures were extraneous, ineffective, and cost-intensive. Reconciliation between the government and stakeholders, reinforced by valid data analysis and environmental exposure assessments, has allowed the "Brownfields" approach to be successful. Certified reference materials and standard reference materials from the National Institute of Standards and Technology (NIST) are indispensable tools for solving environmental problems and help to validate data quality and meet the demands of legal metrology. Certified reference materials are commonly available, essential tools for developing good-quality secondary and in-house reference materials that also enhance analytical quality. This paper cites examples of environmental conditions in developing countries, i.e., industrial pollution problems in India, polluted beaches in Brazil, and deteriorating air quality in countries such as Korea, China, and Japan. The paper also highlights practical and effective approaches for remediating these problems. 23 refs., 7 figs., 1 tab.
Genetic algorithms and their use in Geophysical Problems
Parker, Paul B.
1999-04-01
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low mutation rate (about half of the inverse of the population size) is crucial for optimal results, but the choice of crossover method and rate does not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection method due to its simplicity and its autoscaling properties. However, if a proportional selection method such as roulette wheel selection is used, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters.
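A minimal GA in the style described, with tournament selection and a mutation rate of about half the inverse of the population size, can be sketched as follows. A toy "one-max" objective stands in for a geophysical misfit function; all parameter values are illustrative:

```python
import random

def genetic_search(fitness, n_bits, pop_size=40, generations=120,
                   crossover_rate=0.9, seed=1):
    """Minimal GA: size-2 tournament selection, one-point crossover,
    and a per-bit mutation rate of 0.5 / pop_size, echoing the
    low-mutation-rate recommendation above."""
    rng = random.Random(seed)
    mutation_rate = 0.5 / pop_size
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)      # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = list(p1)
            # rare bit flips (low mutation rate)
            nxt.append([b ^ 1 if rng.random() < mutation_rate else b
                        for b in child])
        pop = nxt
    return max(pop, key=fitness)

# "One-max" toy inversion: the fittest model is all ones
best = genetic_search(sum, 24)
```

A real inversion would replace `sum` with (the negative of) a data-misfit function and decode the bit string into physical model parameters.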
Complexity analysis of pipeline mapping problems in distributed heterogeneous networks
Lin, Ying; Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S
2009-04-01
Large-scale scientific applications require using various system resources to execute complex computing pipelines in distributed networks to support collaborative research. System resources are typically shared in the Internet or over dedicated connections based on their location, availability, capability, and capacity. Optimizing the network performance of computing pipelines in such distributed environments is critical to the success of these applications. We consider two types of large-scale distributed applications: (1) interactive applications where a single dataset is sequentially processed along a pipeline; and (2) streaming applications where a series of datasets continuously flow through a pipeline. The computing pipelines of these applications consist of a number of modules executed in a linear order in network environments with heterogeneous resources under different constraints. Our goal is to find an efficient mapping scheme that allocates the modules of a pipeline to network nodes for minimum end-to-end delay or maximum frame rate. We formulate the pipeline mappings in distributed environments as optimization problems and categorize them into six classes with different optimization goals and mapping constraints: (1) Minimum End-to-end Delay with No Node Reuse (MEDNNR), (2) Minimum End-to-end Delay with Contiguous Node Reuse (MEDCNR), (3) Minimum End-to-end Delay with Arbitrary Node Reuse (MEDANR), (4) Maximum Frame Rate with No Node Reuse or Share (MFRNNRS), (5) Maximum Frame Rate with Contiguous Node Reuse and Share (MFRCNRS), and (6) Maximum Frame Rate with Arbitrary Node Reuse and Share (MFRANRS). Here, 'contiguous node reuse' means that multiple contiguous modules along the pipeline may run on the same node and 'arbitrary node reuse' imposes no restriction on node reuse. Note that in interactive applications, a node can be reused but its resource is not shared. We prove that MEDANR is polynomially solvable and the rest are NP-complete. MEDANR, where either
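For intuition, a problem in the spirit of the polynomially solvable MEDANR variant admits a simple dynamic program over (module, node) states. The sketch below uses invented delay tables, not the paper's formulation, and computes the minimum end-to-end delay of a linear pipeline with arbitrary node reuse:

```python
def min_end_to_end_delay(proc, link):
    """DP for mapping a linear pipeline onto nodes with arbitrary node
    reuse: proc[m][n] is the compute delay of module m on node n, and
    link[a][b] is the transfer delay from node a to node b (0 on-node).
    Returns the minimum achievable end-to-end delay."""
    n_modules, n_nodes = len(proc), len(proc[0])
    # best[n] = minimum delay to finish the first module on node n
    best = [proc[0][n] for n in range(n_nodes)]
    for m in range(1, n_modules):
        best = [proc[m][n] + min(best[p] + link[p][n]
                                 for p in range(n_nodes))
                for n in range(n_nodes)]
    return min(best)

# Tiny example: 3 modules, 2 nodes
proc = [[4, 1],
        [2, 5],
        [1, 1]]
link = [[0, 3],
        [3, 0]]
print(min_end_to_end_delay(proc, link))   # -> 7 (e.g. all on node 0: 4+2+1)
```

The DP visits each (module, node) pair once with an inner minimum over predecessor nodes, so the cost is polynomial: O(modules × nodes²).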
Progress on PRONGHORN Application to NGNP Related Problems
Dana A. Knoll
2009-08-01
We are developing a multiphysics simulation tool for Very High-Temperature gas-cooled Reactors (VHTR). The simulation tool, PRONGHORN, takes advantage of the Multiphysics Object-Oriented Simulation library and is capable of solving multidimensional thermal-fluid and neutronics problems implicitly in parallel. Expensive Jacobian matrix formation is alleviated by the Jacobian-free Newton-Krylov method, and physics-based preconditioning is applied to improve the convergence. The initial development of PRONGHORN has been focused on the pebble bed core concept; however, extensions required to simulate prismatic cores are underway. In this progress report we highlight progress on the application of PRONGHORN to PBMR400 benchmark problems, the extension and application of PRONGHORN to prismatic core reactors, and progress on simulations of 3-D transients.
MODEL 9975 SHIPPING PACKAGE FABRICATION PROBLEMS AND SOLUTIONS
May, C; Allen Smith, A
2008-05-07
The Model 9975 Shipping Package is the latest in a series (9965, 9968, etc.) of radioactive material shipping packages that have been the mainstay for shipping radioactive materials for several years. The double containment vessels are relatively simple designs using pipe and pipe cap in conjunction with the Chalfont closure to provide a leak-tight vessel. The fabrication appears simple in nature, but the history of fabrication tells us there are pitfalls in the different fabrication methods and sequences. This paper will review the problems that have arisen during fabrication and precautions that should be taken to meet specifications and tolerances. The problems and precautions can also be applied to the Models 9977 and 9978 Shipping Packages.
Dynamic extension of the Simulation Problem Analysis Kernel (SPANK)
Sowell, E.F.; Buhl, W.F.
1988-07-15
The Simulation Problem Analysis Kernel (SPANK) is an object-oriented simulation environment for general simulation purposes. Among its unique features is use of the directed graph as the primary data structure, rather than the matrix. This allows straightforward use of graph algorithms for matching variables and equations, and reducing the problem graph for efficient numerical solution. The original prototype implementation demonstrated the principles for systems of algebraic equations, allowing simulation of steady-state, nonlinear systems (Sowell 1986). This paper describes how the same principles can be extended to include dynamic objects, allowing simulation of general dynamic systems. The theory is developed and an implementation is described. An example is taken from the field of building energy system simulation. 2 refs., 9 figs.
Raymond Davis Jr., Solar Neutrinos, and the Solar Neutrino Problems
U.S. Department of Energy (DOE) all webpages (Extended Search)
Raymond Davis, Jr., Solar Neutrinos, and the Solar Neutrino Problem. Raymond Davis, Jr., who conducted research in the Chemistry Department at Brookhaven National Laboratory (BNL) from 1948 through 1984, was awarded the 2002 Nobel Prize in Physics "for pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos."
A Lie algebraic approach to the Kondo problem
Rajeev, S.G.
2010-04-15
The Kondo problem is approached using the unitary Lie algebra of spin-singlet fermion bilinears. In the limit when the number of values of the spin N goes to infinity the theory approaches a classical limit, which still requires a renormalization. We determine the ground state of this renormalized theory. Then we construct a quantum theory around this classical limit, which amounts to recovering the case of finite N.
Exascale Computing Allows Scientists to Approach New Class of Problems |
U.S. Department of Energy (DOE) all webpages (Extended Search)
Princeton Plasma Physics Lab. Exascale Computing Allows Scientists to Approach New Class of Problems. By Gale Scott, March 19, 2012. From left are Venkatramani Balaji, Jeroen Tromp, and Bill Tang at the Visualization Laboratory, created by the Princeton Institute for Computational Science and Engineering (PICSciE), in the Lewis Library on main campus. (Photo by Elle Starkman, PPPL Office of Communications)
Combined approach to the inverse protein folding problem. Final report
Ruben A. Abagyan
2000-06-01
The main scientific contribution of the project "Combined approach to the inverse protein folding problem," submitted in 1996 and funded by the Department of Energy in 1997, is the formulation and development of the idea of the multilink recognition method for identification of functional and structural homologues of newly discovered genes. This idea became very popular after the authors first announced it and used it in prediction of the threading targets for the CASP2 competition (Critical Assessment of Structure Prediction).
Adaptive domain decomposition methods for advection-diffusion problems
Carlenzoli, C.; Quarteroni, A.
1995-12-31
Domain decomposition methods can perform poorly on advection-diffusion equations if diffusion is dominated by advection. Indeed, the hyperbolic part of the equations can affect the behavior of iterative schemes among subdomains, dramatically slowing down their rate of convergence. Taking into account the direction of the characteristic lines, we introduce suitable adaptive algorithms which are stable with respect to the magnitude of the convective field in the equations and very effective on real boundary value problems.
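The basic alternating Schwarz iteration underlying such methods can be sketched in one dimension. The example below uses an invented grid, overlap, and coefficients (with first-order upwind differences for the advective term), solves -eps*u'' + u' = 1 on two overlapping subdomains, and checks agreement with a single-domain solve:

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def advection_diffusion_schwarz(eps=0.05, n=80, overlap=10, sweeps=60):
    """Alternating Schwarz for -eps*u'' + u' = 1 on (0,1), u(0)=u(1)=0,
    upwind differences, two overlapping subdomains."""
    h = 1.0 / n
    lo = -eps / h**2 - 1.0 / h           # coefficient of u[i-1] (upwind)
    di = 2.0 * eps / h**2 + 1.0 / h      # coefficient of u[i]
    up = -eps / h**2                     # coefficient of u[i+1]

    def solve_block(i0, i1, left, right):
        # Dirichlet solve on interior nodes i0..i1 with given boundary values
        m = i1 - i0 + 1
        rhs = [1.0] * m
        rhs[0] -= lo * left
        rhs[-1] -= up * right
        return solve_tridiag([lo] * m, [di] * m, [up] * m, rhs)

    mid1, mid2 = n // 2 - overlap // 2, n // 2 + overlap // 2
    u = [0.0] * (n + 1)
    for _ in range(sweeps):
        u[1:mid2] = solve_block(1, mid2 - 1, 0.0, u[mid2])           # left
        u[mid1 + 1:n] = solve_block(mid1 + 1, n - 1, u[mid1], 0.0)   # right
    mono = [0.0] + solve_block(1, n - 1, 0.0, 0.0) + [0.0]  # one-domain ref
    err = max(abs(a - b) for a, b in zip(u, mono))
    return u, err

u, err = advection_diffusion_schwarz()
```

At convergence the Schwarz iterate reproduces the monolithic discrete solution exactly, since each subdomain solve uses the same stencil; the adaptive strategies of the paper concern choosing interface conditions when advection dominates.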
Tensile strengths of problem shales and clays. Master's thesis
Rechner, F.J.
1990-01-01
The greatest single expense faced by oil companies involved in the exploration for crude oil is that of drilling wells. The most abundant rock drilled is shale. Some of these shales cause wellbore stability problems during the drilling process. These can range from slow rate of penetration and high torque up to stuck pipe and hole abandonment. The mechanical integrity of the shale must be known when the shales are subjected to drilling fluids in order to develop an effective drilling plan.
Quality problems in waters used for drinking purposes in Italy
Funari, E.; Bastone, A.; Bottoni, P.; De Donno, D.; Donati, L.
1991-12-01
With a grant from the Italian Ministry of the Environment, the National Institute of Health (Istituto Superiore di Sanita) promoted and coordinated some activities aimed at determining the extent and the intensity of contamination of waters used for human consumption by some chemical agents, and describing causes and modalities of contamination and human health implications. The chemical agents examined were herbicides, nitrates, trihalomethanes, asbestos, manganese and fluoride. In this paper a first nationwide picture of these problems is reported.
Solving Petascale Public Health and Safety Problems Using Uintah | Argonne
U.S. Department of Energy (DOE) all webpages (Extended Search)
Leadership Computing Facility. Pressure profile of a deflagration to detonation transition in an array of tightly packed PBX9501 cylinders confined by symmetric boundaries on all sides. Detonation can be seen in red. Solving Petascale Public Health and Safety Problems Using Uintah. PI Name: Martin Berzins. PI Email: mb@sci.utah.edu
COLLOQUIUM: History, Applications, Numerical Values and Problems with the
U.S. Department of Energy (DOE) all webpages (Extended Search)
Calculation of EROI - Energy Return on (Energy) Investment | Princeton Plasma Physics Lab March 2, 2016, 4:15pm to 5:30pm Colloquia MBG Auditorium COLLOQUIUM: History, Applications, Numerical Values and Problems with the Calculation of EROI - Energy Return on (Energy) Investment Professor Charles Hall State University of NY College of Environmental Science and Forestry Plants and animals are subjected to fierce selective pressure to do the "right thing" energetically, that is to
General Solution of the Kenamond HE Problem 3
Kaul, Ann
2015-12-15
A general solution for programmed burn calculations of the light times produced by a single-point initiation of a single HE region surrounding an inert region has been developed. In contrast to the original solutions proposed in References 1 and 2, the detonator is no longer restricted to a location on a Cartesian axis and can be located at any point inside the HE region. This general solution has been implemented in the ExactPack suite of exact solvers for verification problems.
Geological problems in radioactive waste isolation - A world wide review
Witherspoon, P.A.
1991-06-01
The problem of isolating radioactive wastes from the biosphere presents specialists in the earth sciences with some of the most complicated problems they have ever encountered. This is especially true for high-level waste (HLW), which must be isolated in the underground and away from the biosphere for thousands of years. The most widely accepted method of doing this is to seal the radioactive materials in metal canisters that are enclosed by a protective sheath and placed underground in a repository that has been carefully constructed in an appropriate rock formation. Much new technology is being developed to solve the problems that have been raised, and there is a continuing need to publish the results of new developments for the benefit of all concerned. Table 1 presents a summary of the various formations under investigation according to the reports submitted for this world wide review. It can be seen that in those countries that are searching for repository sites, granitic and metamorphic rocks are the prevalent rock type under investigation. Six countries have developed underground research facilities that are currently in use. All of these investigations are in saturated systems below the water table, except the United States project, which is in the unsaturated zone of a fractured tuff.
Collaboration Results - Applying Technical Solutions To Environmental Remediation Problems
Boyd, G.; Fiore, J.; Walker, J.; DeRemer, C.; Wight, E.
2002-02-26
Within the Department of Energy's Office of Environmental Management (EM), the Office of Science and Technology (OST) identifies and develops innovative technologies that accelerate cleanup of high-priority environmental contamination problems and enable EM closure sites to meet closure schedules. OST manages an integrated research and development program that is essential to completing timely and cost-effective cleanup and stewardship of DOE sites. While innovative technologies can make significant contributions to the cleanup process, in some cases, EM has encountered unexpected barriers to their implementation. Technical obstacles are expected, but administrative challenges-such as regulatory, organizational, and stakeholder issues-must also be addressed. OST has found that collaborative needs identification and problem solving are essential components in overcoming these barriers. Collaboration helps EM meet its cleanup goals, close sites, and reduce the overall cost of cleanup at DOE sites nationwide. This paper presents examples of OST's collaboration efforts that expedite site closure and solve specific cleanup problems at EM sites.
The coincidence problem and interacting holographic dark energy
Karwan, Khamphee
2008-05-15
We study the dynamical behaviour of the interacting holographic dark energy model whose interaction term is Q = 3H(λ_d ρ_d + λ_c ρ_c), where ρ_d and ρ_c are the energy densities of dark energy and cold dark matter respectively. To satisfy the observational constraints from type Ia supernovae, the cosmic microwave background shift parameter and baryon acoustic oscillation measurements, if λ_c = λ_d or λ_d, λ_c > 0, the cosmic evolution will only reach the attractor in the future and the ratio ρ_c/ρ_d cannot be slowly varying at present. Since the cosmic attractor can be reached in the future even when the present values of the cosmological parameters do not satisfy the observational constraints, the coincidence problem is not really alleviated in this case. However, if λ_c ≠ λ_d and they are allowed to be negative, the ratio ρ_c/ρ_d can be slowly varying at present and the cosmic attractor can be reached near the present epoch. Hence, the alleviation of the coincidence problem is attainable in this case. The alleviation of the coincidence problem in this case is still attainable when confronting this model with Sloan Digital Sky Survey data.
Geological problems in radioactive waste isolation - second worldwide review
Witherspoon, P.A.
1996-09-01
The first worldwide review of the geological problems in radioactive waste isolation was published by Lawrence Berkeley National Laboratory in 1991. This review was a compilation of reports that had been submitted to a workshop held in conjunction with the 28th International Geological Congress that took place July 9-19, 1989 in Washington, D.C. Reports from 15 countries were presented at the workshop and four countries provided reports after the workshop, so that material from 19 different countries was included in the first review. It was apparent from the widespread interest in this first review that the problem of providing a permanent and reliable method of isolating radioactive waste from the biosphere is a topic of great concern among the more advanced, as well as the developing, nations of the world. This is especially the case in connection with high-level waste (HLW) after its removal from nuclear power plants. The general consensus is that adequate isolation can be accomplished by selecting an appropriate geologic setting and carefully designing the underground system with its engineered barriers. This document contains the Second Worldwide Review of Geological Problems in Radioactive Waste Isolation, dated September 1996.
Strengthened MILP formulation for certain gas turbine unit commitment problems
Pan, Kai; Guan, Yongpei; Watson, Jean -Paul; Wang, Jianhui
2015-05-22
In this study, we derive a strengthened MILP formulation for certain gas turbine unit commitment problems, in which the ramping rates are no smaller than the minimum generation amounts. This type of gas turbine can usually start up faster and has a larger ramping rate than traditional coal-fired power plants. Recently, the number of such gas turbines has increased significantly due to affordable gas prices and their scheduling flexibility to accommodate intermittent renewable energy generation. In this study, several new families of strong valid inequalities are developed to help reduce the computational time to solve these types of problems. Meanwhile, validity and facet-defining proofs are provided for certain inequalities. Finally, numerical experiments on a modified IEEE 118-bus system and power system data based on recent studies verify the effectiveness of applying our formulation to model and solve this type of gas turbine unit commitment problem, including reducing the computational time to obtain an optimal solution or obtaining a much smaller optimality gap, as compared to the default CPLEX, when the time limit is reached with no optimal solutions obtained.
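On a toy scale, the unit commitment trade-off (start-up costs versus fuel costs, subject to output bounds and ramp limits) can be illustrated by brute force rather than an MILP. Everything below, including the two-unit data and the discrete output grid, is invented for illustration:

```python
from itertools import product

def unit_commitment(demand, units, levels):
    """Exhaustive toy commitment: pick each unit's output per period from
    a discrete grid, respecting output bounds and ramp limits between
    consecutive on-periods, so that total output covers demand at minimum
    fuel plus start-up cost.  Brute force stands in for the MILP."""
    T = len(demand)
    best_cost, best_plan = float('inf'), None
    per_unit = list(product(levels, repeat=T))
    for plan in product(per_unit, repeat=len(units)):
        cost, feasible = 0.0, True
        for sched, u in zip(plan, units):
            prev = 0.0
            for g in sched:
                if g > 0 and not (u['pmin'] <= g <= u['pmax']):
                    feasible = False          # violates output bounds
                if g > 0 and prev > 0 and abs(g - prev) > u['ramp']:
                    feasible = False          # violates ramp limit
                if g > 0 and prev == 0:
                    cost += u['start']        # start-up cost
                cost += u['fuel'] * g
                prev = g
        if (feasible and
                all(sum(s[t] for s in plan) >= demand[t] for t in range(T))):
            if cost < best_cost:
                best_cost, best_plan = cost, plan
    return best_cost, best_plan

units = [dict(pmin=50, pmax=100, ramp=50, fuel=1.0, start=10.0),
         dict(pmin=50, pmax=100, ramp=50, fuel=2.0, start=5.0)]
cost, plan = unit_commitment([50, 100, 150], units, (0, 50, 100))
print(cost)   # -> 365.0: the cheap unit carries the load,
              #    the expensive one starts only for the peak
```

Note how the ramp check is skipped across off-to-on transitions, mirroring the paper's setting where ramping rates are at least the minimum generation amounts, so a unit can reach its minimum output immediately on start-up.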
Sign problem in Z-coefficient for particle emission angular distributi...
Office of Scientific and Technical Information (OSTI)
Sign problem in Z-coefficient for particle emission angular distributions Citation Details In-Document Search Title: Sign problem in Z-coefficient for particle emission angular ...
Problem Turned Into Performance for Solar Cells | U.S. DOE Office...
Problem Turned Into Performance for Solar Cells. Basic Energy Sciences (BES). Boundaries between crystalline grains - ...
Analysis of the Space Propulsion System Problem Using RAVEN
diego mandelli; curtis smith; cristian rabiti; andrea alfonsi
2014-06-01
This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte Carlo sampling of randomly distributed events and event-tree-based analysis. To facilitate input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN can also interface with several numerical codes, such as RELAP5 and RELAP-7, and with ad hoc system simulators. For the space propulsion system problem, an ad hoc simulator was developed in Python and interfaced to RAVEN. The simulator fully models both deterministic behaviors (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components and systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte Carlo). This analysis serves both to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As indicated in the scope of the benchmark problem, the results of the stochastic analysis are used to generate risk-informed insights, such as the conditions under which different strategies can be followed.
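The Monte Carlo reliability analysis described above can be sketched in miniature. The system layout and failure probabilities below are invented, not taken from the benchmark:

```python
import random

random.seed(1)

# Hypothetical per-mission failure probabilities (not from the benchmark).
P_LINE = 0.02       # distribution line
P_THRUSTER = 0.05   # each of two redundant thrusters

def mission_succeeds():
    # Series line feeding two redundant thrusters: the mission fails if the
    # line fails, or if both thrusters fail.
    line_ok = random.random() > P_LINE
    thrusters_ok = any(random.random() > P_THRUSTER for _ in range(2))
    return line_ok and thrusters_ok

N = 100_000
reliability = sum(mission_succeeds() for _ in range(N)) / N
print(round(reliability, 3))   # analytic value: 0.98 * (1 - 0.05**2) ≈ 0.978
```

A framework like RAVEN generalizes this loop: the simulator call replaces `mission_succeeds()`, and sampled parameters propagate input uncertainty rather than just component failures.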
Ethylene-Vinyl Acetate Potential Problems for Photovoltaic Packaging: Preprint
U.S. Department of Energy (DOE) all webpages (Extended Search)
Ethylene-Vinyl Acetate Potential Problems for Photovoltaic Packaging: Preprint. M.D. Kempe, G.J. Jorgensen, K.M. Terwilliger, T.J. McMahon, and C.E. Kennedy (National Renewable Energy Laboratory); T.T. Borek (Sandia National Laboratories). Presented at the 2006 IEEE 4th World Conference on Photovoltaic Energy Conversion (WCPEC-4), Waikoloa, Hawaii, May 7-12, 2006. Conference Paper NREL/CP-520-39915, May 2006.
Benefits, problems, and issues in open systems architectures
Emmerich, P.; Traynor, P.J.; Klein, S.A.; Fisher, M.T.; Burn, R.D.; Hoffman, R.; Castelli, G.
1994-02-01
This paper is sponsored by the Power System Control Centers Working Group and the Control Centers Open Systems Task Force. The intent of the paper is to focus industry attention on issues relating to open energy management systems (EMS). The short note papers address both benefits and problem areas relating to the current open systems environment. The issues considered herein should be weighed by any utility planning for a new EMS. They should also stimulate further thought on related topics that will ultimately affect EMS reliability and availability.
Viscous gravitational aether and the cosmological constant problem
Kuang, Xiaomei; Ling, Yi E-mail: yling@ncu.edu.cn
2009-10-01
Recently a notion of gravitational aether was advocated to solve the cosmological constant problem. Through the modification of the source of gravity, one finds that the effective Newton's constant is source dependent, providing a simple but consistent way to decouple gravity from the vacuum energy. However, in the original paper the ratio of the effective Newton's constants for pressureless dust and radiation has an upper bound of 0.75. In this paper we propose a scheme to loosen this bound by introducing a bulk viscosity for the gravitational aether, and expect this improvement will provide more room for matching predictions from this theoretical program with observational constraints.
Investigation of valve failure problems in LWR power plants
1980-04-01
An analysis of component failures from information in the computerized Nuclear Safety Information Center (NSIC) data bank shows that, for both PWR and BWR plants, valves are the component category most responsible for plant shutdowns, accounting for approximately 19.3% of light water reactor (LWR) power plant shutdowns. This investigation by Burns and Roe, Inc. shows that the greatest cause of shutdowns in LWRs due to valve failures is leakage from valve stem packing. Both BWR and PWR plants have stem leakage problems (BWRs, 21%; PWRs, 34%).
A class of self-similar hydrodynamics test problems
Ramsey, Scott D; Brown, Lowell S; Nelson, Eric M; Alme, Marv L
2010-12-08
We consider self-similar solutions to the gas dynamics equations. One such solution - a spherical geometry Gaussian density profile - has been analyzed in the existing literature, and a connection between it, a linear velocity profile, and a uniform specific internal energy profile has been identified. In this work, we assume the linear velocity profile to construct an entire class of self-similar solutions in both cylindrical and spherical geometry, of which the Gaussian form is one possible member. After completing the derivation, we present some results in the context of a test problem for compressible flow codes.
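One defining property of such a self-similar Gaussian profile, that total mass is invariant as the scale factor R(t) evolves, can be checked numerically. The unit-mass normalization below is an assumed illustrative form, not the paper's exact solution:

```python
import math

def mass(R, n=20000, r_max_factor=10.0):
    # Integrate rho(r) * 4*pi*r^2 dr by the trapezoid rule for a Gaussian
    # density rho = exp(-(r/R)^2) / (pi^1.5 * R^3), normalized to unit mass.
    r_max = r_max_factor * R
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        rho = math.exp(-(r / R) ** 2) / (math.pi ** 1.5 * R ** 3)
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho * 4 * math.pi * r * r * h
    return total

# Self-similar scaling R -> R(t): the integrated mass stays at 1.
print(mass(1.0), mass(3.0))
```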
Computational nuclear quantum many-body problem: The UNEDF project
Fann, George I [ORNL]
2013-01-01
The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.
Application of PDSLin to the magnetic reconnection problem
U.S. Department of Energy (DOE) all webpages (Extended Search)
Application of PDSLin to the magnetic reconnection problem. 2013 Comput. Sci. Disc. 6 014002 (http://iopscience.iop.org/1749-4699/6/1/014002)
The Million-Body Problem: Particle Simulations in Astrophysics
Rasio, Fred [Northwestern University]
2009-09-01
Computer simulations using particles play a key role in astrophysics. They are widely used to study problems across the entire range of astrophysical scales, from the dynamics of stars, gaseous nebulae, and galaxies, to the formation of the largest-scale structures in the universe. The 'particles' can be anything from elementary particles to macroscopic fluid elements, entire stars, or even entire galaxies. Using particle simulations as a common thread, this talk will present an overview of computational astrophysics research currently done in our theory group at Northwestern. Topics will include stellar collisions and the gravothermal catastrophe in dense star clusters.
Municipal garbage disposal: A problem we cannot ignore
Not Available
1989-01-01
In 1980 the US generated 150 million metric tons of municipal solid waste, and this figure is expected to increase to over 200 million metric tons by 1990. This Comment discusses the traditional approaches to waste management, as well as current options available for waste disposal and the federal environmental laws that impinge on these options. Next, the national dimensions of the garbage disposal problem, as epitomized by the garbage barge and the international export of waste generated by this country, are discussed. This Comment concludes with recommendations for a change in public policy to foster recycling, to tax non-biodegradable products, and to impose more stringent regulatory controls on solid waste disposal.
The "first" problem
Holsinger, R.F.
1995-02-01
This paper describes the first magnet design problem that Klaus and the author worked on together. At the time, over 30 years ago, Klaus was working as a plasma physicist in the Controlled Thermonuclear Research (CTR) Group, and the author was assigned from the Mechanical Engineering Department to help with the design of experimental equipment for various research projects. Klaus's primary research program was to develop a "plasma gun" for injecting plasma into "mirror machines." As described, the magnet design aspect of this plasma gun was a challenging task, and led to some innovations that were quite advanced at that time.
Practical approaches to field problems of stationary combustion systems
Lee, S.W.
1997-09-01
The CANMET Energy Technology Centre (CETC) business plan dictates collaboration with industrial clients and other government agencies to promote energy efficiency, health and safety, pollution reduction and productivity enhancement. The Advanced Combustion Technologies group of CETC provides consultation to numerous organizations in combustion-related areas by conducting laboratory and field investigations of fossil fuel-fired combustion equipment. CETC, with its modern research facilities and technical expertise, has taken this practical approach since the seventies and has assisted many organizations in overcoming field problems and in providing cost-saving measures and improved profit margins. This paper presents a few selected research projects conducted for industrial clients in North and Central America. The combustion systems investigated are mostly liquid fuel fired, with the exception of the utility boiler, which was coal-fired. The key areas involved include fuel quality, fuel storage/delivery system contamination, waste-derived oils, crude oil combustion, unacceptable pollutant emissions, ambient soot deposition, slagging, fouling, boiler component degradation, and particulate characterization. Some of the practical approaches taken to remedy these field problems on several combustion systems, including residential, commercial and industrial scale units, are discussed.
A scenario for inflationary magnetogenesis without strong coupling problem
Tasinato, Gianmassimo
2015-03-23
Cosmological magnetic fields pervade the entire universe, from small to large scales. Since they apparently extend into the intergalactic medium, it is tantalizing to believe that they have a primordial origin, possibly being produced during inflation. However, finding consistent scenarios for inflationary magnetogenesis is a challenging theoretical problem. The requirements to avoid an excessive production of electromagnetic energy, and to avoid entering a strong coupling regime characterized by large values for the electromagnetic coupling constant, typically allow one to generate only a tiny amplitude of magnetic field during inflation. We propose a scenario for building gauge-invariant models of inflationary magnetogenesis potentially free from these issues. The idea is to derivatively couple a dynamical scalar, not necessarily the inflaton, to fermionic and electromagnetic fields during the inflationary era. Such couplings give additional freedom to control the time-dependence of the electromagnetic coupling constant during inflation. This fact allows us to find conditions to avoid the strong coupling problems that affect many of the existing models of magnetogenesis. We do not need to rely on a particular inflationary set-up for developing our scenario, that might be applied to different realizations of inflation. On the other hand, specific requirements have to be imposed on the dynamics of the scalar derivatively coupled to fermions and electromagnetism, that we are able to satisfy in an explicit realization of our proposal.
On the numerical treatment of problems in atmospheric chemistry
Aro, C.J.
1995-09-01
Atmospheric chemical-radiative-transport (CRT) models are vital in performing research on atmospheric chemical change. Even with the enormous computing capability delivered by massively parallel systems, extended three-dimensional CRT simulations are still not computationally feasible. The major obstacle in a CRT model is the nonlinear ODE system describing the chemical kinetics in the model. These ODE systems are usually very stiff and account for anywhere from 75% to 90% of the CPU time required to run a CRT model. In this study, a simple class of explicit time-stepping methods is developed and demonstrated to be useful in treating chemical ODE systems without the use of a Jacobian matrix. These methods, called preconditioned time differencing methods, are tested on small mathematically idealized problems, box model problems, and full 2-D and 3-D CRT models. The methods are found to be both fast and memory efficient. Studies are performed on both vector and parallel systems. The preconditioned time differencing methods are established as a viable alternative to the more common backward differentiation formulas in terms of CPU speed across architectural platforms.
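Why stiffness forces something better than naive explicit stepping can be seen in a scalar caricature. The exponential update below merely stands in for the paper's preconditioned differencing idea; the rate constant and step size are invented:

```python
import math

# Scalar caricature of a stiff kinetics equation y' = -k*(y - y_eq).
# The rate constant and step size are invented to make the point.
k, y_eq = 1000.0, 1.0
dt, steps, y0 = 0.01, 100, 0.0     # note k*dt = 10, far beyond Euler's stability limit

def forward_euler(y):
    return y + dt * (-k * (y - y_eq))

def exponential_step(y):
    # Integrating the linear part exactly is one way a Jacobian-free
    # explicit-style scheme can take large stable steps.
    return y_eq + (y - y_eq) * math.exp(-k * dt)

y_fe, y_ex = y0, y0
for _ in range(steps):
    y_fe = forward_euler(y_fe)
    y_ex = exponential_step(y_ex)

print(y_fe, y_ex)   # Euler blows up; the exponential step sits at equilibrium
```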
STUDY OF THE RHIC BPM SMA CONNECTOR FAILURE PROBLEM
LIAW,C.; SIKORA, R.; SCHROEDER, R.
2007-06-25
About 730 BPMs are mounted on the RHIC CQS and triplet superconducting magnets. Semi-rigid coaxial cables bring the electrical signal from the BPM feedthroughs to the outside flanges at ambient temperature. Every year around 10 cables lose their signals during operation. The connection usually failed at the warm end of the cable: either the solder joint failed or the center conductor retracted out of the SMA connector. Finite element analyses were performed to understand the failure mechanism of the solder joint. The results showed that (1) the SMA center conductor can separate from the mating connector due to thermal retraction; (2) the maximum thermal stress at the warm-end solder joint can exceed the material strength of the Pb37/Sn63 solder; and (3) the magnet ramping frequency (~10 Hz) during machine startup can excite resonance in the coaxial cable and damage the solder joints, especially once a fracture has initiated. Test results confirmed that using a silver-bearing solder (a higher-strength material) and crimping the cable at locations close to the SMA connector (to prevent the center conductor from retracting) effectively resolve the connector failure problem.
Approximations of very weak solutions to boundary-value problems.
Berggren, Martin Olof
2003-03-01
Standard weak solutions to the Poisson problem on a bounded domain have square-integrable derivatives, which limits the admissible regularity of inhomogeneous data. The concept of solution may be further weakened in order to define solutions when data is rough, such as for inhomogeneous Dirichlet data that is only square-integrable over the boundary. Such very weak solutions satisfy a nonstandard variational form (u, v) = G(v). A Galerkin approximation combined with an approximation of the right-hand side G defines a finite-element approximation of the very weak solution. Applying conforming linear elements leads to a discrete solution equivalent to the text-book finite-element solution to the Poisson problem in which the boundary data is approximated by L2-projections. The L2 convergence rate of the discrete solution is O(h^s) for some s ∈ (0, 1/2) that depends on the shape of the domain, assuming a polygonal (two-dimensional) or polyhedral (three-dimensional) domain without slits and (only) square-integrable boundary data.
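A one-dimensional analogue of the text-book finite-element treatment mentioned above fits in a few lines. The boundary values and mesh size are invented; in 1D, linear elements reproduce the harmonic (linear) solution exactly:

```python
# 1D analogue: -u'' = 0 on (0,1) with Dirichlet data u(0)=2, u(1)=5,
# linear elements on a uniform mesh (boundary values assumed for illustration).
n = 8                      # interior nodes
h = 1.0 / (n + 1)
g0, g1 = 2.0, 5.0          # boundary data

# Stiffness matrix is tridiagonal [-1, 2, -1]/h; the rhs carries the boundary data.
a = [-1.0 / h] * n         # sub-diagonal
b = [2.0 / h] * n          # diagonal
c = [-1.0 / h] * n         # super-diagonal
rhs = [0.0] * n
rhs[0] += g0 / h
rhs[-1] += g1 / h

# Thomas algorithm (tridiagonal solve)
for i in range(1, n):
    m = a[i] / b[i - 1]
    b[i] -= m * c[i - 1]
    rhs[i] -= m * rhs[i - 1]
u = [0.0] * n
u[-1] = rhs[-1] / b[-1]
for i in range(n - 2, -1, -1):
    u[i] = (rhs[i] - c[i] * u[i + 1]) / b[i]

exact = [g0 + (g1 - g0) * (i + 1) * h for i in range(n)]
err = max(abs(x - y) for x, y in zip(u, exact))
print(err)
```

The subtleties the paper addresses (rough boundary data, L2-projection of the trace, fractional convergence rates) only appear in two and three dimensions; the 1D sketch just shows the mechanics of imposing Dirichlet data through the right-hand side.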
Expert systems applied to two problems in nuclear power plants
Kim, K.Y.
1988-01-01
This dissertation describes two prototype expert systems applied to two problems in nuclear power plants. One problem is spare parts inventory control, and the other is radionuclide release from containment during a severe accident. The expert system for spare parts inventory control can handle spare parts requirements not only in corrective, preventive, or predictive maintenance, but also when failure rates of components or parts are updated with new data. Costs and benefits of spare parts inventory acquisition are evaluated with qualitative attributes, such as spare part availability, to provide the inventory manager with an improved basis for decision making. This expert system is implemented with Intelligence/Compiler on an IBM-AT. The other expert system, for radionuclide release from containment, can estimate the magnitude, type, location, and time of release of radioactive materials from containment during a severe accident nearly in real time, based on actual measured physical parameters such as temperature and pressure inside the containment. This expert system can also check the validity of sensor data. It is implemented with KEE on a Symbolics LISP machine.
Wireless Sensor Networks - Node Localization for Various Industry Problems
Derr, Kurt; Manic, Milos
2015-06-01
Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS) that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrary shaped domains. AFECETS simulation results show that the algorithm 1) provides significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.
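AFECETS itself builds on advancing-front mesh generation and constrained Delaunay triangulation; as a much simpler stand-in, the coverage criterion alone can be illustrated with a greedy placement on a grid. The domain, radius, and greedy method below are invented, not the paper's algorithm:

```python
import math

# Not the AFECETS algorithm: a minimal greedy stand-in for the coverage
# criterion. Domain, grid, and sensing radius are invented for illustration.
r = 0.3                                                         # sensing radius
pts = [(i / 10, j / 10) for i in range(11) for j in range(11)]  # unit-square grid

sensors = []
uncovered = set(pts)
while uncovered:
    # Place the candidate that covers the most still-uncovered grid points.
    best = max(pts, key=lambda s: sum(math.dist(s, p) <= r for p in uncovered))
    sensors.append(best)
    uncovered = {p for p in uncovered if math.dist(best, p) > r}

print(len(sensors))
```

Greedy placement only targets criterion 1 (complete coverage) and crudely approximates 3 (few nodes); the mesh-based approach in the paper additionally controls node spacing and problem-dependent density.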
Concrete growth problems and remedial measures at TVA projects
Hammer, J.J.
1984-01-01
Most concrete structures are designed and detailed to provide for a volume decrease without excessive cracking. Occasionally, however, a concrete structure exhibits a long-term increase in volume termed concrete growth. Concrete growth may result from a variety of reactions, such as the hydration of unstable oxides included in the concrete mix, or the oxidation of minerals or from an outside attack of sulfates. The most important reaction creating concrete growth is that between minor alkali hydroxides from cement and the concrete aggregates. Two distinctly different harmful reactions have been recognized: the alkali-silicate and alkali-carbonate reactions. Concrete deteriorating from an alkali-aggregate reaction, regardless of the type, develops an obvious network of cracks called pattern or map cracking. These alkali-aggregate reactions and their accompanying concrete growth have presented numerous problems at TVA's Fontana Dam, Chickamauga Dam and lock, and Hiwassee Dam. Much has been learned about alkali-aggregate reaction since 1940. Most harmful reactions can now be prevented in proposed structures by interpreting the results of standard test methods. It is not possible, however, in existing structures to determine how far the growth phenomenon has progressed, how long the effects will have to be dealt with, or what the future effects will be. A program of close surveillance and monitoring is maintained at these projects, and problems are dealt with as they arise.
Statistical optimisation techniques in fatigue signal editing problem
Nopiah, Z. M.; Osman, M. H.; Baharin, N.; Abdullah, S.
2015-02-03
Success in fatigue signal editing is determined by the level of length reduction without compromising statistical constraints. A great reduction rate can be achieved by removing small amplitude cycles from the recorded signal. The long recorded signal sometimes renders the cycle-to-cycle editing process daunting. This has encouraged researchers to focus on the segment-based approach. This paper discusses joint application of the Running Damage Extraction (RDE) technique and single constrained Genetic Algorithm (GA) in fatigue signal editing optimisation. In the first section, the RDE technique is used to restructure and summarise the fatigue strain. This technique combines the overlapping window and fatigue strain-life models. It is designed to identify and isolate the fatigue events that exist in the variable amplitude strain data into different segments whereby the retention of statistical parameters and the vibration energy are considered. In the second section, the fatigue data editing problem is formulated as a constrained single optimisation problem that can be solved using the GA method. The GA produces the shortest edited fatigue signal by selecting appropriate segments from a pool of labelling segments. Challenges arise due to constraints on the segment selection by deviation level over three signal properties, namely cumulative fatigue damage, root mean square and kurtosis values. Experimental results over several case studies show that the idea of solving fatigue signal editing within a framework of optimisation is effective and automatic, and that the GA is robust for constrained segment selection.
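The constrained segment-selection step can be sketched with a miniature GA. The segment data, penalty weight, and GA settings below are invented, and only the damage constraint is kept; the real formulation also constrains root mean square and kurtosis:

```python
import random

random.seed(0)

# Hypothetical segments summarising a strain signal: (length, damage) pairs.
segments = [(random.randint(5, 50), random.random()) for _ in range(20)]
total_damage = sum(d for _, d in segments)
total_length = sum(l for l, _ in segments)

def fitness(bits):
    # Minimise retained length, penalising any shortfall below 90% damage retention.
    length = sum(l for (l, _), b in zip(segments, bits) if b)
    damage = sum(d for (_, d), b in zip(segments, bits) if b)
    shortfall = max(0.0, 0.9 * total_damage - damage)
    return length + 1000.0 * shortfall

def ga(pop_size=40, gens=200):
    # Seed with the trivially feasible keep-everything individual, plus random ones.
    pop = [[1] * len(segments)] + \
          [[random.randint(0, 1) for _ in segments] for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]               # elitist truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(segments))      # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(len(segments))] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = ga()
print(fitness(best), total_length)
```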
MATHEMATICAL MODELS OF HYSTERESIS (DYNAMIC PROBLEMS IN HYSTERESIS)
Professor Isaak Mayergoyz
2006-08-21
This research has further advanced the current state of the art in the areas of dynamic aspects of hysteresis and nonlinear large-scale magnetization dynamics. The results of this research will find important engineering applications in the areas of magnetic data storage technology and the emerging technology of “spintronics”. Our research efforts have been focused on the following tasks:
• Study of fast (pulse) precessional switching of magnetization in magnetic materials.
• Analysis of critical fields and critical angles for precessional switching of magnetization.
• Development of an inverse problem approach to the design of magnetic field pulses for precessional switching of magnetization.
• Study of magnetization dynamics induced by spin-polarized current injection.
• Construction of complete stability diagrams for spin-polarized current induced magnetization dynamics.
• Development of the averaging technique for the analysis of slow-time-scale magnetization dynamics.
• Study of thermal effects on magnetization dynamics using the theory of stochastic processes on graphs.
Diagnosing delivery problems in the White House Information Distribution System
Nahabedian, M.; Shrobe, H.
1996-12-31
As part of a collaboration with the White House Office of Media Affairs, members of the MIT Artificial Intelligence Laboratory designed a system, called COMLINK, which distributes a daily stream of documents released by the Office of Media Affairs. Approximately 4000 direct subscribers receive information from this service, but more than 100,000 people receive the information through redistribution channels. The information is distributed via Email and the World Wide Web. In such a large-scale distribution scheme, there is a constant problem of subscriptions becoming invalid because the user's Email account has terminated. This causes a backwash of hundreds of "bounced mail" messages per day which must be processed by the operators of the COMLINK system. To manage this annoying but necessary task, an expert system named BMES was developed to diagnose the failures of information delivery.
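A rule-based diagnosis of bounce messages, in the spirit of BMES, can be sketched as a pattern-to-diagnosis table. The patterns and labels here are invented for illustration, not taken from the actual system:

```python
# Hypothetical bounce-message patterns mapped to diagnoses; first match wins.
RULES = [
    ("user unknown", "terminated account"),
    ("mailbox full", "over quota"),
    ("host not found", "dead domain"),
    ("connection timed out", "unreachable server"),
]

def diagnose(bounce_text):
    text = bounce_text.lower()
    for pattern, diagnosis in RULES:
        if pattern in text:
            return diagnosis
    return "unknown failure"

print(diagnose("550 5.1.1 User unknown in virtual alias table"))
```

An operator-facing system would route each diagnosis to an action, e.g. unsubscribing terminated accounts automatically while queueing "unknown failure" messages for manual review.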
Monitoring the progress of anytime problem-solving
Hansen, E.A.; Zilberstein, S.
1996-12-31
Anytime algorithms offer a tradeoff between solution quality and computation time that has proved useful in applying artificial intelligence techniques to time-critical problems. To exploit this tradeoff, a system must be able to determine the best time to stop deliberation and act on the currently available solution. When the rate of improvement of solution quality is uncertain, monitoring the progress of the algorithm can improve the utility of the system. This paper introduces a technique for run-time monitoring of anytime algorithms that is sensitive to the variance of the algorithm's performance, the time-dependent utility of a solution, the ability of the run-time monitor to estimate the quality of the currently available solution, and the cost of monitoring. The paper examines the conditions under which the technique is optimal and demonstrates its applicability.
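In the simplest deterministic case, the stop-or-continue decision described above reduces to comparing the marginal gain in solution quality against the time cost. The quality curve and cost rate below are invented; the paper's technique additionally handles variance and monitoring cost:

```python
import math

def quality(t):
    # Diminishing-returns quality profile (invented for illustration).
    return 1.0 - math.exp(-0.5 * t)

COST_RATE = 0.1   # utility lost per unit of deliberation time (assumed)

def stop_time(dt=0.01, horizon=20.0):
    t = 0.0
    while t < horizon:
        marginal_gain = quality(t + dt) - quality(t)
        if marginal_gain < COST_RATE * dt:
            return t               # further deliberation no longer pays off
        t += dt
    return horizon

print(round(stop_time(), 2))
```

Analytically, deliberation stops where quality'(t) = COST_RATE, i.e. near t = 2·ln 5 ≈ 3.22 for this curve.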
First results, problems of French deep gasification program
Gaussens, P.
1983-01-01
The development of a technology for the gasification of deep coal reserves that are technically and economically not exploitable by classic mining methods was investigated. The principal problem is the very low permeability of the deep coal, which makes it necessary to create an artificial connection between the injection and production wells; this is done by hydrofracturing. The possibilities of an electrical connection are studied. Difficulties related to the spontaneous ignition of the coal and the creation of a backward combustion are revealed. Exploration of the factors that might limit the quality of the gas produced or the quantity of coal extracted per doublet is suggested, which should lead to criteria for site selection. Knowledge of the natural conditions of a site is essential for the decision and the selection of the operating method. This characterization can be obtained by using exploration methods such as coring, logging, and surface geophysics.
Ecology problems associated with geothermal development in California
Shinn, J.H.; Ireland, R.R.
1980-08-04
Geothermal power plants have the potential for supplying about 5% of the US electrical generating needs by 1985, and are even now supplying about one third of San Francisco's electricity. Investigations have shown that the typical geothermal field, such as the hot water resource of Imperial Valley, can be developed in an environmentally sound manner when proper considerations are made for ecosystem problems. Experimental evidence is presented pro and con for potential impacts due to habitat disturbance, powerline corridors, noise effects, trace element emissions from cooling towers, accidental brine discharges into aquatic or soil systems, competition for water, and H2S effects on vegetation. A mitigation and control strategy is recommended for each ecological issue, and it is shown where effects are likely to be irreversible.
Inverse problems in heterogeneous and fractured media using peridynamics
Turner, Daniel Z.; van Bloemen Waanders, Bart G.; Parks, Michael L.
2015-12-10
The following work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. Furthermore, this type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.
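The adjoint idea underlying the methodology above can be seen in a scalar caricature: one adjoint solve yields the gradient of the misfit with respect to the material parameter, verified here against a finite difference. All values are invented, and the real peridynamic operators are of course far richer than a scalar equation:

```python
# Scalar caricature of adjoint-based inversion: state equation m*u = f,
# misfit J(m) = 0.5*(u - d)^2. One adjoint solve gives dJ/dm without
# perturbing m. Load, data, and parameter guess are invented.
f, d = 2.0, 0.5          # "load" and observed data
m = 3.0                  # current material-parameter guess

u = f / m                            # forward solve
lam = -(u - d) / m                   # adjoint solve: m*lam = -(u - d)
grad_adjoint = lam * u               # dJ/dm from the Lagrangian

# Finite-difference check of the adjoint gradient
eps = 1e-6
J = lambda m_: 0.5 * (f / m_ - d) ** 2
grad_fd = (J(m + eps) - J(m - eps)) / (2 * eps)

print(grad_adjoint, grad_fd)
```

The payoff of the adjoint approach is that its cost is independent of the number of parameters, which is what makes inversion for spatially varying moduli (as in the DIC framework above) tractable.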
Geological aspects of the nuclear waste disposal problem
Laverov, N.P.; Omelianenko, B.L.; Velichkin, V.I.
1994-06-01
For the successful solution of the high-level waste (HLW) problem in Russia one must take into account such factors as the existence of the great volume of accumulated HLW, the large size and variety of geological conditions in the country, and the difficult economic conditions. The most efficient method of HLW disposal consists in the maximum use of protective capacities of the geological environment and in using inexpensive natural minerals for engineered barrier construction. In this paper, the principal trends of geological investigation directed toward the solution of HLW disposal are considered. One urgent practical aim is the selection of sites in deep wells in regions where the HLW is now held in temporary storage. The aim of long-term investigations into HLW disposal is to evaluate geological prerequisites for regional HLW repositories.
Economic analysis of model validation for a challenge problem
Paez, Paul J.; Paez, Thomas L.; Hasselman, Timothy K.
2016-02-19
It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet, there are certain situations in which testing only or no testing and no modeling may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit-cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. As a result, we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit-cost than a testing-only or no-modeling-and-no-testing option.
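The benefit-cost comparison can be made concrete with a toy expected-cost calculation over the kinds of options the paper weighs. All numbers below are invented:

```python
# Toy benefit-cost comparison (all numbers invented). Each option has an
# upfront cost and a residual probability of failure in service.
options = {
    "no model, no test": {"cost": 0.0,   "failure_prob": 0.20},
    "testing only":      {"cost": 100.0, "failure_prob": 0.05},
    "model + validate":  {"cost": 150.0, "failure_prob": 0.01},
}
FAILURE_LOSS = 2000.0   # assumed consequence of a failure in service

def expected_total_cost(opt):
    return opt["cost"] + opt["failure_prob"] * FAILURE_LOSS

best = min(options, key=lambda k: expected_total_cost(options[k]))
print(best, expected_total_cost(options[best]))
```

With these numbers, validation pays for itself; shrink the failure loss enough and the testing-only or do-nothing option wins, which is exactly the kind of crossover the paper's framework quantifies.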
Standard Problems for CFD Validation for NGNP - Status Report
Richard W. Johnson; Richard R. Schultz
2010-08-01
The U.S. Department of Energy (DOE) is conducting research and development to support the resurgence of nuclear power in the United States for both electrical power generation and production of process heat required for industrial processes such as the manufacture of hydrogen for use as a fuel in automobiles. The project is called the Next Generation Nuclear Plant (NGNP) Project, which is based on a Generation IV reactor concept called the very high temperature reactor (VHTR). The VHTR will be of the prismatic or pebble bed type; the former is considered herein. The VHTR will use helium as the coolant at temperatures ranging from 250°C to perhaps 1000°C. While computational fluid dynamics (CFD) has not previously been used for the safety analysis of nuclear reactors in the United States, it is being considered for existing and future reactors. It is fully recognized that CFD simulation codes will have to be validated for flow physics reasonably close to actual fluid dynamic conditions expected in normal operational and accident situations. The “Standard Problem” is an experimental data set that represents an important physical phenomenon or phenomena, whose selection is based on a phenomena identification and ranking table (PIRT) for the reactor in question. It will be necessary to build a database that contains a number of standard problems for use in validating CFD and systems analysis codes for the many physical problems that will need to be analyzed. The first two standard problems that have been developed for CFD validation consider flow in the lower plenum of the VHTR and bypass flow in the prismatic core. Both involve scaled models built from quartz and designed to be installed in the INL’s matched index of refraction (MIR) test facility. The MIR facility employs mineral oil as the working fluid at a constant temperature. At this temperature, the index of refraction of the mineral oil is the same as that of the quartz. This provides an advantage to the
Selected problems of baryon spectroscopy: Chiral soliton versus quark models
Kopeliovich, V. B.
2009-05-15
The inconsistency between the rigid rotator and bound state models at an arbitrary number of colors, the rigid rotator-soft rotator dilemma, and some other problems of baryon spectroscopy are discussed in the framework of the chiral soliton approach (CSA). Consequences of the comparison of CSA results with simple quark models are considered and the 1/N_c expansion for the effective strange antiquark mass is presented, as it follows from the CSA. Strong dependence of the effective strange antiquark mass on the SU(3) multiplet is required to fit the CSA predictions. The difference between 'good' and 'bad' diquark masses, which is about 100 MeV, is in reasonable agreement with other estimates. Multibaryons (hypernuclei) with strangeness are described and some states of interest are also predicted within the CSA.
Users view gas ruling: problems now, gains later
Hume, M.
1985-05-20
Although recent court decisions about natural gas carriage and discount sales programs may cause problems for users in the short term, in the long run they could provide open access on non-discriminatory terms. The programs will continue current operation until July, and the final impact of the decisions will depend upon the response of the Federal Energy Regulatory Commission (FERC) to the court's nullification of its programs. A lapse before establishing new rules could be disruptive. Arguments that the programs boost pipeline profits and unfairly aid dual-fuel users led to the court decision. Refusal to transport gas could generate anti-trust suits against pipelines. FERC expects to issue final rules by the end of summer.
A Geospatial Integrated Problem Solving Environment for Homeland Security Applications
Koch, Daniel B
2010-01-01
Effective planning, response, and recovery (PRR) involving terrorist attacks or natural disasters come with a vast array of information needs. Much of the required information originates from disparate sources in widely differing formats. However, one common attribute the information often possesses is physical location. The organization and visualization of this information can be critical to the success of the PRR mission. Organizing information geospatially is often the most intuitive for the user. In the course of developing a field tool for the U.S. Department of Homeland Security (DHS) Office for Bombing Prevention, a geospatial integrated problem solving environment software framework was developed by Oak Ridge National Laboratory. This framework has proven useful as well in a number of other DHS, Department of Defense, and Department of Energy projects. An overview of the software architecture, along with application examples, is presented.
Contributions to Sustainability by Communities and Individuals: Problems and Prospects
MacGregor, D.; Tonn, B.E.
1998-11-01
This report examines relationships between a comprehensive set of definitions of and viewpoints on the concept of sustainability and the abilities of communities and individuals in the United States to meet the behavioral prescriptions inherent in these definitions and viewpoints. This research is timely because sustainability is becoming a cornerstone of national and international environmental strategies designed to simultaneously achieve environmental, economic, and social goals. In the United States, many communities have adopted sustainability principles as the foundation for both their environmental protection efforts and their socioeconomic development initiatives. This research is important because it highlights serious problems communities and individuals may have in achieving sustainability expectations, and illustrates how much work is needed to help communities and individuals overcome numerous considerable and complex constraints to sustainability.
Expansion-loop enclosure resolves subsea line problems
Rich, S.K.; Alleyne, A.G.
1998-08-03
Recent design and construction of a Gulf of Mexico subsea pipeline illustrate the use of buried, enclosed expansion loops to resolve problems from expansion and upheaval buckling. Buried, subsea pipelines operating at high temperatures and pressures experience extreme compressive loads caused by the axial restraint of the soil. The high axial forces combined with imperfections in the seabed may overstress the pipeline or result in upheaval buckling. Typically, expansion loops, or doglegs, are installed to protect the pipeline risers from expansion and to alleviate axial forces. Buried expansion loops, however, are rendered virtually ineffective by the lateral restraint of the soil. Alternative methods to reduce expansion may increase the potential of upheaval buckling or overstressing the pipeline. Therefore, system design must consider expansion and upheaval buckling together. Discussed here are methods of prevention and control of expansion and upheaval buckling, evaluating the impact on the overall system.
The 2014 Sandia Verification and Validation Challenge: Problem statement
Hu, Kenneth; Orient, George
2016-01-18
This paper presents a case study in utilizing information from experiments, models, and verification and validation (V&V) to support a decision. It consists of a simple system with data and models provided, plus a safety requirement to assess. The goal is to pose a problem that is flexible enough to allow challengers to demonstrate a variety of approaches, but constrained enough to focus attention on a theme. This was accomplished by providing a good deal of background information in addition to the data, models, and code, but directing the participants' activities with specific deliverables. In this challenge, the theme is how to gather and present evidence about the quality of model predictions, in order to support a decision. This case study formed the basis of the 2014 Sandia V&V Challenge Workshop and this resulting special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
A More General Solution of the Kenamond HE Problem 2
Kaul, Ann
2015-12-15
A more general solution for programmed burn calculations of the light times produced by an unobstructed line-of-sight, multi-point initiation of a composite HE region has been developed. The equations describing the interfaces between detonation fronts have also been included. In contrast to the original solutions proposed in References 1 and 2, four of the detonators are no longer restricted to specific locations on a Cartesian axis and can be located at any point inside the HE region. For the proposed solution, one detonator must be located at the origin. The more general solution for any locations on the 2D y-axis or 3D z-axis has been implemented in the ExactPack suite of exact solvers for verification problems. It could easily be changed to the most general case outlined above.
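For the single-material, unobstructed line-of-sight case, the programmed-burn light time at a point is simply the earliest arrival over all detonators. The sketch below illustrates that kernel only; it does not reproduce the composite-HE interface equations the abstract refers to, and the geometry and detonation speed are invented for the example.

```python
import math

def light_time(point, detonators, det_speed):
    """Programmed-burn light time at `point`: earliest arrival over all
    detonators, assuming an unobstructed line of sight through a single
    HE region with constant detonation speed. (The composite-HE case
    additionally needs the detonation-front interface equations.)"""
    return min(math.dist(point, d) / det_speed for d in detonators)

# Two detonators; one sits at the origin, as the proposed solution requires.
dets = [(0.0, 0.0), (4.0, 0.0)]
t = light_time((1.0, 0.0), dets, det_speed=2.0)  # nearer detonator wins: 0.5
```

The interface between two detonation fronts is then the locus where two of these per-detonator arrival times are equal.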
Development and Implementation of Radiation-Hydrodynamics Verification Test Problems
Marcath, Matthew J.; Wang, Matthew Y.; Ramsey, Scott D.
2012-08-22
Analytic solutions to the radiation-hydrodynamic equations are useful for verifying any large-scale numerical simulation software that solves the same set of equations. The one-dimensional, spherically symmetric Coggeshall No. 9 and No. 11 analytic solutions, cell-averaged over a uniform grid, have been developed to analyze the corresponding solutions from the Los Alamos National Laboratory Eulerian Applications Project radiation-hydrodynamics code xRAGE. These Coggeshall solutions have been shown to be independent of heat conduction, providing a unique opportunity for comparison with xRAGE solutions with and without the heat conduction module. Solution convergence was analyzed based on radial step size. Since no shocks are involved in either problem and the solutions are smooth, second-order convergence was expected for both cases. The global L1 errors were used to estimate the convergence rates with and without the heat conduction module implemented.
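Estimating a convergence rate from global errors on two grids is a standard calculation: the observed order is the log-ratio of errors over the log-ratio of step sizes. A minimal sketch (the error values below are manufactured, not from the xRAGE study):

```python
import math

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    """Observed convergence rate from global (e.g. L1) errors on two grids:
    p = log(e_c / e_f) / log(h_c / h_f)."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Manufactured errors behaving like e = C * h**2 recover p = 2,
# the rate expected for these smooth, shock-free problems.
p = observed_order(0.1, 3.0e-4, 0.05, 7.5e-5)
```

In practice one computes this over a sequence of radial step sizes and checks that the observed order approaches the formal order of the scheme.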
Problems of organizing zero-effluent production in coking plants
Maiskii, S.V.; Kagasov, V.M.
1981-01-01
The basic method of protecting the environment against pollution by coking plants in the future must be the organization of zero-waste production cycles. Problems associated with the elimination of effluent are considered. In the majority of plants at present, the phenolic effluent formed during coal carbonization and chemical product processing is completely utilized within the plant as a coke quenching medium (the average rate of phenolic effluent formation is 0.4 m³/ton of dry charge, which equals the irrecoverable water losses in coke quenching operations). However, the increasing adoption of dry coke cooling is inevitably associated with increasing volumes of surplus effluent which cannot be disposed of in coke quenching towers. As a result of experiments it was concluded that: 1. The utilization of phenolic effluent in closed-cycle water-cooling systems does not entirely solve the effluent disposal problem. The volume of surplus effluent depends on the volume originally formed, the rate of water consumption in circulation, and the time of year. In order to dispose of surplus effluent, wet quenching must be retained for a proportion of the coke produced. 2. The greatest hazards in utilizing phenolic effluent in closed-cycle water-cooling systems are corrosion and the build-up of suspended solids. The water must be filtered and biochemically purified before it is fed into the closed-cycle water-cooling systems. The total ammonia content after purification should not exceed 100 to 150 mg/l. 3. Stormwater and snowmelt can be used in closed-cycle water supply systems after purification. 4. The realization of zero-effluent conditions in existing plants will require modifications to the existing water supply systems.
How does pressure gravitate? Cosmological constant problem confronts observational cosmology
Narimani, Ali; Scott, Douglas; Afshordi, Niayesh E-mail: nafshordi@pitp.ca
2014-08-01
An important and long-standing puzzle in the history of modern physics is the gross inconsistency between theoretical expectations and cosmological observations of the vacuum energy density, by at least 60 orders of magnitude, otherwise known as the cosmological constant problem. A characteristic feature of vacuum energy is that it has a pressure with the same amplitude, but opposite sign to its energy density, while all the precision tests of General Relativity are either in vacuum, or for media with negligible pressure. Therefore, one may wonder whether an anomalous coupling to pressure might be responsible for decoupling vacuum from gravity. We test this possibility in the context of the Gravitational Aether proposal, using current cosmological observations, which probe the gravity of relativistic pressure in the radiation era. Interestingly, we find that the best fit for anomalous pressure coupling is about half-way between General Relativity (GR) and Gravitational Aether (GA), if we include Planck together with WMAP and BICEP2 polarization cosmic microwave background (CMB) observations. Taken at face value, this data combination excludes both GR and GA at around the 3σ level. However, including higher resolution CMB observations ("highL") or baryonic acoustic oscillations (BAO) pushes the best fit closer to GR, excluding the Gravitational Aether solution to the cosmological constant problem at the 4-5σ level. This constraint effectively places a limit on the anomalous coupling to pressure in the parametrized post-Newtonian (PPN) expansion, ζ₄ = 0.105 ± 0.049 (+highL CMB), or ζ₄ = 0.066 ± 0.039 (+BAO). These represent the most precise measurements of this parameter to date, indicating a mild tension with GR (for ΛCDM including tensors, with ζ₄ = 0), and also among different data sets.
Nuclear disarmament, disposal of military plutonium and international security problems
Slipchenko, V.S.; Rybatchenkov, V.
1995-12-31
One of the major issues of the current debate deals with the question: what does real nuclear disarmament actually involve? It becomes more and more obvious for many experts that it can no longer be limited to the reduction or elimination of delivery vehicles alone, but must necessarily cover the warheads and the fissile materials recovered from them, which should totally or partially be committed to peaceful use and placed under appropriate international safeguards, thus precluding their re-use as weapons. There are various options as to how to solve the problems of disposal of fissile materials released from weapons. The optimal choice can only be made on the basis of a thorough study. This study should treat the disposal of weapon-grade plutonium and weapon-grade uranium as separate problems. The possible options for plutonium disposition currently discussed are as follows: (a) Storage in a form or under conditions not suitable for use in the production of new types of nuclear weapons. This option seems to be most natural and inevitable at the first phase, subject to determination of storage period, volume, and technology. Besides, the requirements of the international nuclear weapons nonproliferation regime could be met easily. Safe, secure, and controlled temporary storage may provide an appropriate solution of disposal of weapon-grade plutonium in the near future. (b) Energy utilization (conversion) of weapon-grade plutonium. The most efficient option of utilization of plutonium appears to be for nuclear power generation. This option does not exclude storage, but considers it as a temporary phase, which can, however, be a prolonged one: its length is determined by the political decisions made and possibilities existing to transfer plutonium for processing.
Problems with propagation and time evolution in f(T) gravity (Journal...
Office of Scientific and Technical Information (OSTI)
Authors: Ong, Yen...
Solving The Long-Standing Problem Of Low-Energy Nuclear Reactions...
Office of Scientific and Technical Information (OSTI)
Solving The Long-Standing Problem Of Low-Energy Nuclear Reactions At The Highest ...
Data-aware distributed scientific computing for big-data problems...
Office of Scientific and Technical Information (OSTI)
Data-aware distributed scientific computing for big-data problems in bio-surveillance
Cloud Feedbacks on Climate: A Challenging Scientific Problem
Norris, Joel
2010-05-10
One reason it has been difficult to develop suitable social and economic policies to address global climate change is that projected global warming during the coming century has a large uncertainty range. The primary physical cause of this large uncertainty range is lack of understanding of the magnitude and even sign of cloud feedbacks on the climate system. If Earth's cloudiness responded to global warming by reflecting more solar radiation back to space or allowing more terrestrial radiation to be emitted to space, this would mitigate the warming produced by increased anthropogenic greenhouse gases. Contrastingly, a cloud response that reduced solar reflection or terrestrial emission would exacerbate anthropogenic greenhouse warming. It is likely that a mixture of responses will occur depending on cloud type and meteorological regime, and at present, we do not know what the net effect will be. This presentation will explain why cloud feedbacks have been a challenging scientific problem from the perspective of theory, modeling, and observations. Recent research results on observed multidecadal cloud-atmosphere-ocean variability over the Pacific Ocean will also be shown, along with suggestions for future research.
Closing nuclear fuel cycle with fast reactors: problems and prospects
Shadrin, A.; Dvoeglazov, K.; Ivanov, V.
2013-07-01
The closed nuclear fuel cycle (CNFC) with fast reactors (FR) is the most promising path for nuclear energy development because it prevents spent nuclear fuel (SNF) accumulation and minimizes radwaste volume through minor actinide (MA) transmutation. CNFC with FR requires the elaboration of safe, environmentally acceptable, and economically effective methods of treating SNF with high burn-up and short cooling time. The up-to-date industrially implemented SNF reprocessing technologies based on hydrometallurgical methods are not suitable for the reprocessing of SNF with high burn-up and short cooling time. The alternative dry methods (such as electrorefining in molten salts or fluoride technologies) applicable to such SNF reprocessing have not been implemented at industrial scale, so the cost of SNF reprocessing by means of dry technologies can hardly be estimated. Another problem of dry technologies is the recovery of fissionable materials pure enough for dense fuel fabrication. A combination of technical solutions drawing on hydrometallurgical and dry (pyro-)technologies is proposed, and it appears to be a promising route to an economically, ecologically, and socially acceptable technology for FR SNF management. This paper discusses the main principles of combining dry and aqueous operations that would likely provide safety and economic efficiency of FR SNF reprocessing. (authors)
Solving Inverse Detection Problems Using Passive Radiation Signatures
Favorite, Jeffrey A.; Armstrong, Jerawan C.; Vaquer, Pablo A.
2012-08-15
The ability to reconstruct an unknown radioactive object based on its passive gamma-ray and neutron signatures is very important in homeland security applications. Often in the analysis of unknown radioactive objects, for simplicity or speed or because there is no other information, they are modeled as spherically symmetric regardless of their actual geometry. In this presentation we discuss the accuracy and implications of this approximation for decay gamma rays and for neutron-induced gamma rays. We discuss an extension of spherical raytracing (for uncollided fluxes) that allows it to be used when the exterior shielding is flat or cylindrical. We revisit some early results in boundary perturbation theory, showing that the Roussopoulos estimate is the correct one to use when the quantity of interest is the flux or leakage on the boundary. We apply boundary perturbation theory to problems in which spherically symmetric systems are perturbed in asymmetric nonspherical ways. We apply mesh adaptive direct search (MADS) algorithms to object reconstructions. We present a benchmark test set that may be used to quantitatively evaluate inverse detection methods.
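For the spherically symmetric approximation, the uncollided flux from a central point source reduces to a radial attenuation integral through the nested shells. The sketch below shows only that textbook radial-ray case, not the flat/cylindrical extension the talk describes; the shell radii and cross sections are illustrative assumptions.

```python
import math

def uncollided_flux(source, radii, sigmas, r_detector):
    """Uncollided point-source flux at r_detector outside nested spherical
    shells (radii ascending; sigmas = total macroscopic cross section of
    each shell), for an isotropic source at the center:
        phi = S * exp(-tau) / (4 * pi * r^2),
    with tau the optical depth accumulated along the radial ray."""
    tau, r_prev = 0.0, 0.0
    for r, sig in zip(radii, sigmas):
        r_out = min(r, r_detector)
        if r_out > r_prev:
            tau += sig * (r_out - r_prev)   # path length through this shell
        r_prev = r
    return source * math.exp(-tau) / (4.0 * math.pi * r_detector**2)

# Two shells: 0-1 cm with sigma 0.5/cm, 1-2 cm with sigma 0.1/cm; detector at 5 cm.
phi = uncollided_flux(1.0, [1.0, 2.0], [0.5, 0.1], 5.0)
```

For off-center sources or non-spherical exterior shielding, the same accumulation runs over chord lengths along each actual ray, which is where the extended raytracing comes in.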
On the robust optimization to the uncertain vaccination strategy problem
Chaerani, D.; Anggriani, N.; Firdaniza
2014-02-21
In order to prevent an epidemic of infectious disease, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1. This means that even as we make the vaccination coverage as small as possible, we must still confine the epidemic to the small number of people who are already infected. In this paper, we discuss the case of vaccination strategy in terms of minimizing vaccination coverage when the basic reproduction number is assumed to be an uncertain parameter lying between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al. (see [9]) proposed an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the result can be obtained by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
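The key mechanical fact behind the ellipsoidal robust counterpart is that the worst case of an uncertain linear constraint over a unit-ball perturbation has a closed form: sup over ||u|| ≤ 1 of (a + Pu)ᵀx equals aᵀx + ||Pᵀx||, a second-order cone constraint. The sketch below verifies that identity numerically; it is a generic illustration, not the authors' vaccination model, and the vectors a, P, x are invented.

```python
import numpy as np

def robust_lhs(a, P, x):
    """Worst-case value of (a + P u)^T x over the ellipsoidal uncertainty
    set ||u|| <= 1. The robust counterpart replaces the uncertain linear
    constraint (a + P u)^T x <= b with a^T x + ||P^T x|| <= b."""
    return float(a @ x + np.linalg.norm(P.T @ x))

a = np.array([1.0, 2.0])
P = 0.1 * np.eye(2)        # shape of the ellipsoid
x = np.array([3.0, 4.0])
worst = robust_lhs(a, P, x)  # 11 + 0.1 * ||x|| = 11.5

# Sanity check: no sampled point inside the uncertainty ball exceeds it.
rng = np.random.default_rng(0)
samples = [(a + P @ u) @ x for u in rng.normal(size=(1000, 2))
           if np.linalg.norm(u) <= 1.0]
```

Because the worst case is a norm, the robust model stays convex and is solvable in polynomial time, which is the guarantee the abstract invokes.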
Solution accelerators for large scale 3D electromagnetic inverse problems
Newman, Gregory A.; Boggs, Paul T.
2004-04-05
We provide a framework for preconditioning nonlinear 3D electromagnetic inverse scattering problems using nonlinear conjugate gradient (NLCG) and limited memory (LM) quasi-Newton methods. Key to our approach is the use of an approximate adjoint method that allows for an economical approximation of the Hessian that is updated at each inversion iteration. Using this approximate Hessian as a preconditioner, we show that the preconditioned NLCG iteration converges significantly faster than the non-preconditioned iteration, as well as converging to a data misfit level below that observed for the non-preconditioned method. Similar conclusions are also observed for the LM iteration; preconditioned with the approximate Hessian, the LM iteration converges faster than the non-preconditioned version. At this time, however, we see little difference between the convergence performance of the preconditioned LM scheme and the preconditioned NLCG scheme. A possible reason for this outcome is the behavior of the line search within the LM iteration. It was anticipated that, near convergence, a step size of one would be approached, but what was observed, instead, were step lengths that were nowhere near one. We provide some insights into the reasons for this behavior and suggest further research that may improve the performance of the LM methods.
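The structure of a preconditioned conjugate-gradient iteration is easiest to see on the quadratic model, where the line search is exact. The sketch below is that quadratic analogue with a simple Jacobi (diagonal) preconditioner standing in for the approximate Hessian; it is a generic illustration, not the authors' electromagnetic code, and the test matrix is invented.

```python
import numpy as np

def pcg_minimize(A, b, M_inv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients on J(x) = 0.5 x^T A x - b^T x.
    The nonlinear (NLCG) iteration has the same skeleton, with an
    approximate inverse Hessian playing the role of M_inv and a line
    search replacing the exact alpha below."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual = negative gradient
    z = M_inv @ r                  # preconditioned residual
    d = z.copy()
    for _ in range(maxit):
        Ad = A @ d
        alpha = (r @ z) / (d @ Ad)         # exact line search on quadratic
        x = x + alpha * d
        r_new = r - alpha * Ad
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)   # Fletcher-Reeves form
        d = z_new + beta * d
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg_minimize(A, b, np.diag(1.0 / np.diag(A)))  # Jacobi preconditioner
```

A good preconditioner clusters the eigenvalues of M⁻¹A near one, which is what speeds the iteration; the approximate adjoint-based Hessian in the paper serves that role for the full nonlinear problem.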
Recent results and persisting problems in modeling flow induced coalescence
Fortelný, I. E-mail: juza@imc.cas.cz; Jůza, J. E-mail: juza@imc.cas.cz
2014-05-15
The contribution summarizes recent results on the description of flow-induced coalescence in immiscible polymer blends and addresses problems that still await solution. The theory of coalescence, based on switching between equations for matrix drainage between spherical or deformed droplets, provides good agreement with more complicated modeling and with available experimental data for the probability, P_c, that a collision of droplets will be followed by their fusion. A new equation for describing matrix drainage between deformed droplets, applicable over the whole range of viscosity ratios, p, of droplet to matrix, is proposed. The theory makes it possible to consider the effect of matrix elasticity on coalescence. P_c decreases with the matrix relaxation time, but the decrease is not pronounced for relaxation times typical of most commercial polymers. Modeling of flow-induced coalescence in concentrated systems is needed to predict the dependence of coalescence rate on the volume fraction of droplets. The effect of droplet anisometry on P_c should be studied for a better understanding of coalescence in flow fields with high and moderate deformation rates. A reliable description of coalescence in mixing and processing devices requires proper modeling of complex flow fields.
Inverse problems in heterogeneous and fractured media using peridynamics
Turner, Daniel Z.; van Bloemen Waanders, Bart G.; Parks, Michael L.
2015-12-10
The following work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measuredmore » values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. Furthermore, this type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.« less
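The core mechanics of an adjoint-based inversion can be shown on a toy problem: a small discretized state equation K(m)u = f with one scalar material parameter, a least-squares misfit, and one extra linear solve (the adjoint) that yields the gradient. This is a generic sketch of the adjoint pattern, not the peridynamic formulation of the paper; the matrices and data below are invented.

```python
import numpy as np

# Toy analogue of adjoint-based gradient computation for an inverse problem:
# state equation K(m) u = f, misfit J = 0.5 ||u - d||^2, and adjoint
# equation K(m)^T lam = u - d giving dJ/dm = -lam^T (dK/dm) u.

K0 = np.array([[2.0, -1.0], [-1.0, 2.0]])
K1 = np.eye(2)                    # dK/dm for K(m) = K0 + m * K1
f = np.array([1.0, 0.0])          # forcing
d = np.array([0.3, 0.1])          # "observed" data (invented)

def misfit_and_gradient(m):
    K = K0 + m * K1
    u = np.linalg.solve(K, f)             # forward (state) solve
    lam = np.linalg.solve(K.T, u - d)     # adjoint solve
    J = 0.5 * np.dot(u - d, u - d)
    dJdm = -lam @ (K1 @ u)                # adjoint gradient
    return J, dJdm

J, g = misfit_and_gradient(0.5)

# Finite-difference check of the adjoint gradient:
eps = 1e-6
g_fd = (misfit_and_gradient(0.5 + eps)[0]
        - misfit_and_gradient(0.5 - eps)[0]) / (2 * eps)
```

The point of the adjoint approach is that the gradient costs one extra solve regardless of how many parameters m has, which is what makes it attractive for heterogeneous media with spatially varying parameters.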
Thick-Restart Lanczos Method for Symmetric Eigenvalue Problems
Energy Science and Technology Software Center (OSTI)
1999-01-01
This software package implements the thick-restart Lanczos method. It can be used on either a single address space machine or a distributed parallel machine. The user can choose to implement or use a matrix-vector multiplication routine in any form convenient. Most of the arithmetic computations in the software are done through calls to BLAS and LAPACK. The software is written in Fortran 90. Because Fortran 90 offers many utility functions, such as dynamic memory management, timing functions, and a random number generator, the program is easily portable to different machines without modifying the source code. It can also easily be accessed from other languages such as C or C++. Since the software is highly modularized, it is relatively easy to adapt it to different types of situations. For example, if the eigenvalue problem has some symmetry and only a portion of the physical domain is discretized, then the dot-product routine needs to be modified. In this software, this modification is limited to one subroutine. It can also be instructed to write checkpoint files so that it can be restarted at a later time.
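The kernel that a thick-restart scheme repeatedly restarts is the basic Lanczos tridiagonalization: build an orthonormal Krylov basis, form the tridiagonal projection T, and take its eigenvalues as Ritz approximations. The sketch below shows only that unrestarted kernel in Python (the package itself is Fortran 90 and adds the restart logic with retained Ritz vectors); the test matrix is invented.

```python
import numpy as np

def lanczos_ritz(matvec, n, k, rng):
    """k-step Lanczos tridiagonalization of a symmetric operator given by
    `matvec`, with full reorthogonalization for numerical safety.
    Returns the Ritz values (eigenvalues of the projected tridiagonal T).
    A thick-restart scheme would keep selected Ritz vectors and restart
    this loop instead of growing k without bound."""
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q = rng.normal(size=n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = matvec(Q[:, j])
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full reorthogonalization against all previous basis vectors.
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

A = np.diag([1.0, 2.0, 3.0, 10.0])
ritz = lanczos_ritz(lambda v: A @ v, n=4, k=4, rng=np.random.default_rng(1))
```

With k equal to the matrix dimension the Ritz values reproduce the spectrum exactly; in practice k stays small and restarting keeps the memory footprint bounded while converging the extreme eigenvalues first.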
The conflict of interest problem in EIS preparation
Hansen, R.P. [Hansen Environmental Consultants, Englewood, CO (United States); Wolff, T.A. [Sandia National Labs., Albuquerque, NM (United States); McCold, L.N. [Oak Ridge National Lab., TN (United States)
1997-05-01
The National Environmental Policy Act (NEPA) requires that federal agencies prepare environmental impact statements (EISs) on proposals for major Federal action significantly affecting the quality of the human environment. The Council on Environmental Quality (CEQ) regulations require that EISs be prepared directly by the lead agency or a contractor it selects. EIS contractors must execute a disclosure statement specifying that they have "no financial or other interest" in the outcome of the project. The intent of the "conflict of interest" prohibition is to ensure that the EIS is defensible, free of self-serving bias, and credible to the public. Those coming to the federal government for money, permits, or project approvals must not be placed in the position of analyzing the environmental consequences of their own proposals. This paper analyzes the conflict of interest problem faced by government contractors who maintain and operate government-owned or -controlled facilities for which EISs are required. In the US Department of Energy (DOE) system, these are referred to as "M and O" contractors. It also examines organizational conflicts presented by current or prospective government contractors who have a financial or other interest in the outcome of a project or program for which an EIS is prepared. In responding to these and related questions, the paper discusses and interprets the CEQ regulations and guidance on EIS-preparation conflict of interest as well as leading federal court opinions. It also distinguishes "preparers" from "participants" in the EIS preparation process.
A Baryonic Solution to the Missing Satellites Problem
Brooks, Alyson M.; Kuhlen, Michael; Zolotov, Adi; Hooper, Dan
2013-03-01
It has been demonstrated that the inclusion of baryonic physics can alter the dark matter densities in the centers of low-mass galaxies, making the central dark matter slope more shallow than predicted in pure cold dark matter simulations. This flattening of the dark matter profile can occur in the most luminous subhalos around Milky Way mass galaxies. Zolotov et al. have suggested a correction to be applied to the central masses of dark matter-only satellites in order to mimic the effect of (1) the flattening of the dark matter cusp due to supernova feedback in luminous satellites and (2) enhanced tidal stripping due to the presence of a baryonic disk. In this paper, we apply this correction to the z = 0 subhalo masses from the high resolution, dark matter-only Via Lactea II (VL2) simulation, and find that the number of massive subhalos is dramatically reduced. After adopting a stellar mass to halo mass relationship for the VL2 halos, and identifying subhalos that are (1) likely to be destroyed by stripping and (2) likely to have star formation suppressed by photo-heating, we find that the number of massive, luminous satellites around a Milky Way mass galaxy is in agreement with the number of observed satellites around the Milky Way or M31. We conclude that baryonic processes have the potential to solve the missing satellites problem.
James, T.; Loftis, J.
1990-07-01
The Directorate of Personal Property of the Military Traffic Management Command (MTMC) requested that Oak Ridge National Laboratory (ORNL) design a prototype decision support system, the Worldwide Household Goods Information System for Transportation Modernization (WHIST-MOD). This decision support system will automate current tasks and provide analysis tools for evaluating the Personal Property Program, predicting impacts to the program, and planning modifications to the program to meet the evolving needs of military service members and the transportation industry. The system designed by ORNL consists of three application modules: system dictionary applications, data acquisition and administration applications, and user applications. The development of the user applications module is divided into two phases. Round 1 is the data selection front-end interface, and Round 2 is the output or back-end interface. This report describes the prototyped front-end interface for the user application module. It discusses user requirements and the prototype design. The information contained in this report is the product of in-depth interviews with MTMC staff, prototype meetings with the users, and the research and design work conducted at ORNL. 18 figs., 2 tabs.
Freeze protection problems and experiences in the HUD solar residential demonstration program
Sparkes, H.R.; Raman, K.; Trivedi, J.
1983-01-01
The different kinds of freeze-up problems in solar energy systems are outlined, and methods of providing freeze protection are briefly discussed. These problems are illustrated by a few selected examples from the HUD Solar Residential Demonstration Program, which show the consequences and cost of freeze-up problems and the importance of protecting solar systems against them.
The final-parsec problem in nonspherical galaxies revisited
Vasiliev, Eugene [Lebedev Physical Institute, Moscow (Russian Federation); Antonini, Fabio [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, Ontario (Canada); Merritt, David, E-mail: eugvas@lpi.ru, E-mail: merritt@astro.rit.edu, E-mail: antonini@cita.utoronto.ca [School of Physics and Astronomy and Center for Computational Relativity and Gravitation, Rochester Institute of Technology, Rochester, NY 14623 (United States)
2014-04-20
We consider the evolution of supermassive black hole binaries at the center of spherical, axisymmetric, and triaxial galaxies, using direct N-body integrations as well as analytic estimates. We find that the rates of binary hardening exhibit a significant N-dependence in all the models, at least for N in the investigated range of 10^5 ≤ N ≤ 10^6. Binary hardening rates are also substantially lower than would be expected if the binary 'loss cone' remained 'full', as it would be if the orbits supplying stars to the binary were being efficiently replenished. The difference in binary hardening rates between the spherical and nonspherical models is less than a factor of two even in the simulations with the largest N. By studying the orbital populations of our models, we conclude that the rate of supply of stars to the binary via draining of centrophilic orbits is indeed expected to be much lower than the full-loss-cone rate, consistent with our simulations. We argue that the binary's evolution in the simulations is driven in roughly equal amounts by collisional and collisionless effects, even at the highest N-values currently accessible. While binary hardening rates would probably reach a limiting value for large N, our results suggest that we cannot approach that rate with currently available algorithms and computing hardware. The extrapolation of results from N-body simulations to real galaxies is therefore not straightforward, casting doubt on recent claims that triaxiality or axisymmetry alone are capable of solving the final-parsec problem in gas-free galaxies.
Natural gas production problems : solutions, methodologies, and modeling.
Rautman, Christopher Arthur; Herrin, James M.; Cooper, Scott Patrick; Basinski, Paul M.; Olsson, William Arthur; Arnold, Bill Walter; Broadhead, Ronald F.; Knight, Connie D.; Keefe, Russell G.; McKinney, Curt; Holm, Gus; Holland, John F.; Larson, Rich; Engler, Thomas W.; Lorenz, John Clay
2004-10-01
Natural gas is a clean fuel that will be the most important domestic energy resource for the first half of the 21st century. Ensuring a stable supply is essential for our national energy security. The research we have undertaken will maximize the extractable volume of gas while minimizing the environmental impact of surface disturbances associated with drilling and production. This report describes a methodology for comprehensive evaluation and modeling of the total gas system within a basin, focusing on problematic horizontal fluid flow variability. This has been accomplished through extensive use of geophysical, core (rock sample) and outcrop data to interpret and predict directional flow and production trends. Side benefits include reduced environmental impact of drilling due to a reduced number of required wells for resource extraction. These results have been accomplished through a cooperative and integrated systems approach involving industry, government, academia and a multi-organizational team within Sandia National Laboratories. Industry has provided essential in-kind support to this project in the forms of extensive core data, production data, maps, seismic data, production analyses, engineering studies, plus equipment and staff for obtaining geophysical data. This approach provides innovative ideas and technologies to bring new resources to market and to reduce the overall environmental impact of drilling. More importantly, the products of this research are not location specific but can be extended to other areas of gas production throughout the Rocky Mountain area. Thus this project is designed to solve problems associated with natural gas production at developing sites, or at old sites under redevelopment.
Divertors for Helical Devices: Concepts, Plans, Results, and Problems
Koenig, R.; Grigull, P.; McCormick, K.
2004-07-15
With Large Helical Device (LHD) and Wendelstein 7-X (W7-X), the development of helical devices is now taking a large step forward on the path to a steady-state fusion reactor. Important issues that need to be settled in these machines are particle flux and heat control and the impact of divertors on plasma performance in future continuously burning fusion plasmas. The divertor concepts that will initially be explored in these large machines were prepared in smaller-scale devices like Heliotron E, Compact Helical System (CHS), and Wendelstein 7-AS (W7-AS). While advanced divertor scenarios relevant for W7-X were already studied in W7-AS, other smaller-scale experiments like Heliotron-J, CHS, and National Compact Stellarator Experiment will be used for the further development of divertor concepts. The two divertor configurations that are being investigated are the helical and the island divertor, as well as the local island divertor, which was successfully demonstrated on CHS and just went into operation on LHD. At present, on its route to a fully closed helical divertor, LHD operates in an open helical divertor configuration. W7-X will be equipped right from the start with an actively cooled discrete island divertor that will allow quasi-continuous operation. The divertor design is very similar to the one explored on W7-AS. For sufficiently large island sizes and not too long field line connection lengths, this divertor gives access to a partially detached quasi-steady-state operating scenario in a newly found high-density H-mode operating regime, which benefits from high energy and low impurity confinement times, with edge radiation levels of up to 90% and sufficient neutral compression in the subdivertor region (>10) for active pumping. The basic physics of the different divertor concepts and associated implementation problems, like asymmetries due to drifts, accessibility of essential operating scenarios, toroidal asymmetries due to symmetry breaking error fields
Is EIA part of the wind power planning problem?
Smart, Duncan Ewan; Stojanovic, Timothy A.; Warren, Charles R.
2014-11-15
This research evaluates the importance and effectiveness of Environmental Impact Assessment (EIA) within wind farm planning debates, drawing on insights from case studies in Scotland. Despite general public support for renewable energy on the grounds that it is needed to tackle climate change and implement sustainable development, many proposed wind farms encounter significant resistance. The importance of planning issues and EIA processes has arguably been overlooked within recent wind farm social acceptability discourse. Through semi-structured interviews with key stakeholders and textual analysis of EIA documents, the characteristics of EIA are assessed in terms of its perceived purpose and performance. The data show that whilst respondents perceive EIA to be important, they express concerns about bias and about the inability of EIA to address climate change and wind farm decommissioning issues adequately. Furthermore, the research identifies key issues which impede the effectiveness of EIA, and reveals differences between theoretical and practical framings of EIA. The paper questions the assumption that EIA is a universally applicable tool, and argues that its effectiveness should be analysed in the context of specific development sectors. The article concludes by reviewing whether the recently amended EIA Directive (2014/52/EU) could resolve identified problems within national EIA practice. - Highlights: • Evaluation of EIA for onshore wind farm planning in Scotland. • EIA is important for multiple aspects of onshore wind farm planning. • Multiple substantive deficiencies of relevance to wind farm planning exist in EIA. • Further research into EIA effectiveness for specific development types is required. • Directive 2014/52/EU may improve EIA effectiveness within wind farm planning.
Finite element analyses for seismic shear wall international standard problem
Park, Y.J.; Hofmayer, C.H.
1998-04-01
Two identical reinforced concrete (RC) shear walls, which consist of web, flanges and massive top and bottom slabs, were tested up to ultimate failure under earthquake motions at the Nuclear Power Engineering Corporation's (NUPEC) Tadotsu Engineering Laboratory, Japan. NUPEC provided the dynamic test results to the OECD (Organization for Economic Cooperation and Development), Nuclear Energy Agency (NEA) for use as an International Standard Problem (ISP). The shear walls were intended to be part of a typical reactor building. One of the major objectives of the Seismic Shear Wall ISP (SSWISP) was to evaluate various seismic analysis methods for concrete structures used for design and seismic margin assessment. It also offered a unique opportunity to assess the state-of-the-art in nonlinear dynamic analysis of reinforced concrete shear wall structures under severe earthquake loadings. As a participant of the SSWISP workshops, Brookhaven National Laboratory (BNL) performed finite element analyses under the sponsorship of the U.S. Nuclear Regulatory Commission (USNRC). Three types of analysis were performed, i.e., monotonic static (push-over), cyclic static and dynamic analyses. Additional monotonic static analyses were performed by two consultants, F. Vecchio of the University of Toronto (UT) and F. Filippou of the University of California at Berkeley (UCB). The analysis results by BNL and the consultants were presented during the second workshop in Yokohama, Japan in 1996. A total of 55 analyses were presented during the workshop by 30 participants from 11 different countries. The major findings on the presented analysis methods, as well as engineering insights regarding the applicability and reliability of the FEM codes, are described in detail in this report. 16 refs., 60 figs., 16 tabs.
Mitigating strategies for CO2 problems
Lave, L B
1980-08-01
Vast uncertainties surround the ability to predict the social effects of increased carbon dioxide concentrations in the atmosphere during the next century; fossil fuel combustion rates will change, predicting global climate changes is difficult, and predicting the resulting social reactions to these changes is essentially impossible. Unfortunately, the effects of carbon dioxide are likely to be insidious and difficult to connect to climate change. Myriad effects, both good and bad, are unlikely to be recognized as caused by carbon dioxide. Conscious adaptation policies have the government or other social institutions act directly to mandate changes in behavior through laws, fines, or subsidies. Unfortunately, such actions cannot be tailored to achieve precise objectives; they are blunt tools that should be used only for important goals, and then sparingly. Unconscious adaptation takes place through behavioral changes induced by the marketplace or social institutions. These mechanisms can be swift and powerful, but are difficult to manipulate. Actions such as monitoring climate change and taking care to inform important groups of the current state of knowledge on carbon dioxide-induced climate change, along with contingency planning and the development of nonfossil fuel technologies, can help to speed adaptation. More important are plans which would set unconscious adaptation into motion, such as plans to disseminate information on the problem and on behavior which will help individuals or firms. Of greatest importance is having a society that can quickly perceive and adapt to the new regime. This means a strong economy, high scientific and engineering capabilities, a well-educated population, and a more flexible, resilient capital stock. Carbon dioxide can serve as a catalyst in promoting policies that are justified for a host of reasons.
Office of Energy Efficiency and Renewable Energy (EERE) security advisories (indexed site):
V-144: HP Printers Let Remote Users Access Files on the Printer (April 29, 2013)
V-115: Apple iOS Bugs Let Local Users Gain Elevated Privileges (March 20, 2013)
V-113: Apple Safari Bugs Let Remote Users Execute Arbitrary Code (March 18, 2013)
V-127: Samba Bug Lets Remote Authenticated Users Modify Files (April 5, 2013)
V-119: IBM Security AppScan Enterprise Multiple Vulnerabilities (March 26, 2013)
V-222: SUSE update for Filezilla (August 20, 2013)
V-191: Apple Mac OS X Multiple Vulnerabilities (July 3, 2013)
V-212: Samba smbd CPU Processing Loop Lets Remote Users Deny Service (August 6, 2013)
T-575: OpenLDAP back-ndb Lets Remote Users Authenticate Without a Valid Password (March 11, 2011)
T-731: Symantec IM Manager Code Injection Vulnerability (September 30, 2011)
T-685: Cisco Warranty CD May Load Malware From a Remote Site (August 5, 2011)
V-045: Adobe ColdFusion Lets Local Users Bypass Sandbox Restrictions (December 12, 2012)
V-110: Adobe Flash Player Bugs Let Remote Users Execute Arbitrary Code (March 13, 2013)
V-138: Red Hat update for icedtea-web (April 19, 2013)
TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems
Moridis, G.J.; Pruess , K.
1992-11-01
The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications. Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.
A Branch and Bound Approach for Truss Topology Design Problems with Valid Inequalities
Cerveira, Adelaide; Agra, Agostinho; Bastos, Fernando; Varum, Humberto
2010-09-30
One of the classical problems in the structural optimization field is the Truss Topology Design Problem (TTDP), which deals with the selection of the optimal configuration of structural systems for applications in mechanical, civil, and aerospace engineering, among others. In this paper we consider a TTDP where the goal is to find the stiffest truss, under a given load and with a bound on the total volume. The design variables are the cross-section areas of the truss bars, which must be chosen from a given finite set. This results in a large-scale non-convex problem with discrete variables, which can be formulated as a Semidefinite Programming Problem (SDP problem) with binary variables. We propose a branch and bound algorithm to solve this problem. A binary formulation of the problem is considered in order to take advantage of its structure, which admits a Knapsack problem as a subproblem. Thus, to improve the performance of the branch and bound, valid inequalities for the Knapsack problem are included at each step.
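The branch-and-bound pattern with bound-based pruning that the abstract describes can be illustrated on the Knapsack subproblem itself. The sketch below is a generic, hedged Python illustration (the instance data are arbitrary), not the paper's SDP-based algorithm: the fractional-relaxation upper bound prunes branches that cannot beat the incumbent.

```python
def knapsack_bb(values, weights, capacity):
    """0/1 knapsack by branch and bound; the LP (fractional) relaxation
    supplies the upper bound used to prune the search tree."""
    n = len(values)
    # sort items by value density so the fractional bound is tight
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(i, cap, val):
        # fractional-relaxation upper bound from item i onward
        for j in range(i, n):
            if w[j] <= cap:
                cap -= w[j]; val += v[j]
            else:
                return val + v[j] * cap / w[j]
        return val

    best = 0
    stack = [(0, capacity, 0)]           # (next item, remaining cap, value)
    while stack:
        i, cap, val = stack.pop()
        best = max(best, val)
        if i == n or bound(i, cap, val) <= best:
            continue                     # prune: bound cannot beat incumbent
        if w[i] <= cap:                  # branch: take item i
            stack.append((i + 1, cap - w[i], val + v[i]))
        stack.append((i + 1, cap, val))  # branch: skip item i
    return best

result = knapsack_bb([60, 100, 120], [10, 20, 30], 50)   # classic instance
```

For the instance shown, the optimum takes the second and third items for a total value of 220 within the weight budget of 50.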
New approach for the solution of optimal control problems on parallel machines. Doctoral thesis
Stech, D.J.
1990-01-01
This thesis develops a highly parallel solution method for nonlinear optimal control problems. Balakrishnan's epsilon method is used in conjunction with the Rayleigh-Ritz method to convert the dynamic optimization of the optimal control problem into a static optimization problem. Walsh functions and orthogonal polynomials are used as basis functions to implement the Rayleigh-Ritz method. The resulting static optimization problem is solved using matrix operations which have well-defined massively parallel solution methods. To demonstrate the method, a variety of nonlinear optimal control problems are solved. The nonlinear Rayleigh problem with quadratic cost and the nonlinear van der Pol problem with quadratic cost and terminal constraints on the states are solved in both serial and parallel on an eight-processor Intel Hypercube. The solutions using both Walsh functions and Legendre polynomials as basis functions are given. In addition to these problems, which are solved in parallel, a more complex nonlinear minimum time optimal control problem and a nonlinear optimal control problem with an inequality constraint on the control are solved. Results show the method converges quickly, even from relatively poor initial guesses for the nominal trajectories.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
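The central idea — reorganizing the arithmetic so that work shared by many observation vectors is done once — has a simple unconstrained analogue: factor the design matrix a single time and reuse the factorization for every right-hand side. The NumPy sketch below (with made-up random data) illustrates only that reuse principle; it is not the constrained combinatorial algorithm itself, which additionally groups columns by their active constraint sets.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))       # one design matrix ...
B = rng.standard_normal((100, 1000))    # ... many observation vectors

# reorganized: factor A once (reduced QR), then solve all 1000
# least-squares problems with a single triangular solve
Q, R = np.linalg.qr(A)                  # Q is 100x5, R is 5x5 upper triangular
X = np.linalg.solve(R, Q.T @ B)         # all solutions at once, shape (5, 1000)

# sanity check against a per-column reference solve
x0, *_ = np.linalg.lstsq(A, B[:, 0], rcond=None)
```

The per-column approach would repeat the O(mn^2) factorization 1000 times; the reorganized form pays it once, which is the same flavor of saving the combinatorial algorithm obtains for the constrained case.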
CUERVO: A finite element computer program for nonlinear scalar transport problems
Sirman, M.B.; Gartling, D.K.
1995-11-01
CUERVO is a finite element code that is designed for the solution of multi-dimensional field problems described by a general nonlinear, advection-diffusion equation. The code is also applicable to field problems described by diffusion, Poisson or Laplace equations. The finite element formulation and the associated numerical methods used in CUERVO are outlined here; detailed instructions for use of the code are also presented. Example problems are provided to illustrate the use of the code.
Luo Yousong
2010-06-15
In this paper we derive a necessary optimality condition for a local optimal solution of some control problems. These optimal control problems are governed by a semi-linear Venttsel boundary value problem of a linear elliptic equation. The control is applied to the state equation via the boundary, and a functional of the control, together with the solution of the state equation under that control, is minimized. A constraint on the solution of the state equation is also considered.
On the Value Function of Weakly Coercive Problems in Nonlinear Stochastic Control
Motta, Monica; Sartori, Caterina
2011-08-15
In this paper we investigate via a dynamic programming approach some nonlinear stochastic control problems where the control set is unbounded and a classical coercivity hypothesis is replaced by some weaker assumptions. We prove that these problems can be approximated by finite fuel problems; show the continuity of the relative value functions and characterize them as unique viscosity solutions of a quasi-variational inequality with suitable boundary conditions.
Willert, Jeffrey; Park, H.; Taitano, William
2015-10-12
High-order/low-order (or moment-based acceleration) algorithms have been used to significantly accelerate the solution to the neutron transport k-eigenvalue problem over the past several years. Recently, the nonlinear diffusion acceleration algorithm has been extended to solve fixed-source problems with anisotropic scattering sources. In this paper, we demonstrate that we can extend this algorithm to k-eigenvalue problems in which the scattering source is anisotropic and a significant acceleration can be achieved. Lastly, we demonstrate that the low-order, diffusion-like eigenvalue problem can be solved efficiently using a technique known as nonlinear elimination.
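The k-eigenvalue problem that such high-order/low-order schemes accelerate is classically solved by power (outer) iteration. The sketch below shows that textbook baseline on a two-group, infinite-medium toy problem; the loss matrix `L` and fission matrix `F` are invented illustrative numbers, not data from the paper, and the sketch is the unaccelerated iteration, not the nonlinear diffusion acceleration algorithm itself.

```python
import numpy as np

# invented two-group data: L = losses (absorption + downscatter 1->2),
# F = fission production (all fission neutrons born in group 1)
L = np.array([[ 0.030, 0.000],
              [-0.020, 0.080]])
F = np.array([[ 0.005, 0.120],
              [ 0.000, 0.000]])

# power iteration on L phi = (1/k) F phi, i.e. on the operator L^-1 F
phi = np.ones(2)
k = 1.0
for _ in range(50):
    psi = np.linalg.solve(L, F @ phi)      # one "transport solve" per outer
    k = (F @ psi).sum() / (F @ phi).sum()  # eigenvalue update from fission rates
    phi = psi / np.linalg.norm(psi)        # normalize the flux iterate
```

Each outer iteration costs one full solve with `L`; moment-based acceleration replaces most of these expensive outers with cheap low-order (diffusion-like) eigenvalue solves, which is the speedup the abstract reports.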
FELIX: advances in modeling forward and inverse ice-sheet problems
Office of Scientific and Technical Information (OSTI) (indexed site)
Salinger, Andrew G.; Perego, Mauro; Hoffman, Matthew; Leng, Wei; ... (Abstract not provided.)
Solving The Long-Standing Problem Of Low-Energy Nuclear Reactions At The Highest Microscopic Level: Annual Continuation And Progress Report
Office of Scientific and Technical Information (OSTI) (indexed site)
The Finite Horizon Optimal Multi-Modes Switching Problem: The Viscosity Solution Approach
El Asri, Brahim; Hamadene, Said
2009-10-15
In this paper we show existence and uniqueness of a solution for a system of m variational partial differential inequalities with inter-connected obstacles. This system is the deterministic version of the Verification Theorem of the Markovian optimal m-states switching problem. The switching cost functions are arbitrary. This problem is related to the valuation of firms in a financial market.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
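The MLSL-plus-local-search pattern can be sketched generically: launch derivative-free local searches from many random starting points and merge near-duplicate minima, so that several distinct solutions of a multimodal objective are recovered. In the hedged Python sketch below, a simple compass/pattern search stands in for MADS, and the two-minimum test function, the merge radius, and all other numbers are illustrative assumptions, not the paper's algorithm or data.

```python
import random

def compass_search(f, x0, step=0.5, tol=1e-6):
    """Tiny derivative-free local search (compass/pattern search, a simple
    relative of MADS): poll +/- step along each coordinate, shrink on failure."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2.0
    return x, fx

def multistart(f, starts, radius=0.05):
    """MLSL-flavored multistart: run local searches from many points and
    keep one representative per distinct basin (merge within `radius`)."""
    found = []
    for x0 in starts:
        x, fx = compass_search(f, x0)
        if all(max(abs(a - b) for a, b in zip(x, y)) > radius for y, _ in found):
            found.append((x, fx))
    return found

f = lambda p: (p[0] ** 2 - 1) ** 2 + p[1] ** 2   # two minima: (+/-1, 0)
random.seed(0)
starts = [(random.uniform(-2, 2), random.uniform(-1, 1)) for _ in range(20)]
minima = multistart(f, starts)
```

For this test function the multistart recovers both minima near (+1, 0) and (-1, 0), whereas a single local search would return only the basin of its starting point.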
A unified solution to the small scale problems of the ΛCDM model
Popolo, A. Del; Lima, J.A.S.; Fabris, Júlio C.; Rodrigues, Davi C. E-mail: limajas@astro.iag.usp.br E-mail: davi.rodrigues@cosmo-ufes.org
2014-04-01
We study, by means of the model proposed in Del Popolo (2009), the effect of baryon physics on the small scale problems of the ΛCDM model. We show that, using this model, the cusp/core problem, the missing satellite problem (MSP), the Too Big to Fail (TBTF) problem, and the angular momentum catastrophe can be reconciled with observations. Concerning the cusp/core problem, the interaction between dark matter (DM) and baryonic clumps of 1% of the mass of the halo, through dynamical friction (DF), is able to flatten the inner cusp of the density profiles. We moreover assume that haloes form primarily through quiescent accretion, in agreement with the spherical collapse model (SCM)-secondary infall model (SIM) prescriptions. The results of this paper follow from the two assumptions above. Concerning the MSP and TBTF problem, applying to the Via Lactea II (VL2) subhaloes a series of corrections similar to those of Brooks et al. (2013), namely applying a Zolotov et al. (2012)-like correction obtained with our model, and further correcting for the UV heating and tidal stripping, we obtain that the number of massive, luminous satellites is in agreement with the number observed in the MW. The model also produces an angular momentum distribution in agreement with observations, that is, with the distribution of the angular spin parameter and angular momentum of the dwarfs studied by van den Bosch, Burkert, and Swaters (2001). In conclusion, the small scale problems of the ΛCDM model can all be solved by introducing baryon physics.
Optimization problems in natural gas transportation systems. A state-of-the-art review
Ríos-Mercado, Roger Z.; Borraz-Sánchez, Conrado
2015-03-24
Our paper provides a review of the most relevant research works conducted to solve natural gas transportation problems via pipeline systems. The literature reveals three major groups of gas pipeline systems, namely gathering, transmission, and distribution systems. In this work, we aim at presenting a detailed discussion of the efforts made in optimizing natural gas transmission lines. There is certainly a vast amount of research done over the past few years on many decision-making problems in the natural gas industry and, specifically, in pipeline network optimization. In this work, we present a state-of-the-art survey focusing on specific categories that include short-term basis storage (line-packing problems), gas quality satisfaction (pooling problems), and compressor station modeling (fuel cost minimization problems). We also discuss both steady-state and transient optimization models, highlighting the modeling aspects and the most relevant solution approaches known to date. Although the literature on natural gas transmission system problems is quite extensive, this is, to the best of our knowledge, the first comprehensive review or survey covering this specific research area on natural gas transmission from an operations research perspective. Furthermore, this paper includes a discussion of the most important and promising research areas in this field. Hence, our paper can serve as a useful tool to gain insight into the evolution of the many real-life applications and most recent advances in solution methodologies arising from this exciting and challenging research area of decision-making problems.
Time-dependent finite-element models of phase-change problems with moving heat sources
Westerberg, K.W.; Wiklof, C.; Finlayson, B.A. (Dept. of Chemical Engineering)
1994-03-01
A mathematical model is developed for melting of a multilayered medium while a heat source traverses one boundary. The finite-element method uses moving meshes, front-tracking using spines, an automatic time-step algorithm, and an efficient solution of the linearized equations. A novel solution method allows the fixed-mesh code to work unchanged but allows a moving mesh in other problems. The finite-element method is applied when the heater mesh moves with respect to the multilayered medium mesh. The same technique allows parallel processing for finite-element codes. The model is applied to several test problems and then to the title problem.
Mixed constraint satisfaction: A framework for decision problems under incomplete knowledge
Fargier, H.; Lang, J.; Schiex, T.
1996-12-31
Constraint satisfaction is a powerful tool for representing and solving decision problems with complete knowledge about the world. We extend the CSP framework so as to represent decision problems under incomplete knowledge. The basis of the extension consists in a distinction between controllable and uncontrollable variables - hence the terminology "mixed CSP" - and a "solution" actually gives a conditional decision. We study the complexity of deciding the consistency of a mixed CSP. As the problem is generally intractable, we propose an algorithm for finding an approximate solution.
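The controllable/uncontrollable split described above can be illustrated with a brute-force consistency check: a mixed CSP is consistent when, for every assignment of the uncontrollable variables, some assignment of the controllable variables satisfies the constraints, and the resulting mapping is a conditional decision. The toy problem and all names below are illustrative sketches, not taken from the paper:

```python
from itertools import product

def mixed_csp_consistent(unc_domains, ctl_domains, constraint):
    """Brute-force consistency check for a mixed CSP. Returns a
    conditional decision (a dict: world -> decision) if for every
    assignment of the uncontrollable variables there exists a
    satisfying assignment of the controllable ones, else None."""
    policy = {}
    for world in product(*unc_domains):
        for decision in product(*ctl_domains):
            if constraint(world, decision):
                policy[world] = decision
                break
        else:
            return None  # some world admits no satisfying decision
    return policy

# Toy problem: uncontrollable weather in {sun, rain}; controllable
# gear in {umbrella, none}; constraint: stay dry.
dry = lambda w, d: w[0] == "sun" or d[0] == "umbrella"
policy = mixed_csp_consistent([["sun", "rain"]], [["umbrella", "none"]], dry)
```

Note the exponential enumeration: this brute-force check reflects why deciding consistency is generally intractable, motivating the approximate algorithm the abstract mentions.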
Haber, Eldad
2014-03-17
The focus of research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at zero frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results of the research were also applied to the problem of image registration.
THERM3D -- A boundary element computer program for transient heat conduction problems
Ingber, M.S.
1994-02-01
The computer code THERM3D implements the direct boundary element method (BEM) to solve transient heat conduction problems in arbitrary three-dimensional domains. This particular implementation of the BEM avoids performing time-consuming domain integrations by approximating a "generalized forcing function" in the interior of the domain with the use of radial basis functions. An approximate particular solution is then constructed, and the original problem is transformed into a sequence of Laplace problems. The code is capable of handling a large variety of boundary conditions including isothermal, specified flux, convection, radiation, and combined convection and radiation conditions. The computer code is benchmarked by comparisons with analytic and finite element results.
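The first step of the approach above, fitting the interior forcing function with radial basis functions, can be sketched as follows. The conic RBF phi(r) = 1 + r is a common choice in dual-reciprocity-style BEM, but the specific basis used by THERM3D is not stated here, so treat that choice, and the toy data, as assumptions:

```python
import numpy as np

def rbf_fit(points, f_vals, phi=lambda r: 1.0 + r):
    """Solve the interpolation system A c = f, where
    A[i, j] = phi(|x_i - x_j|), for the RBF coefficients c."""
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    return np.linalg.solve(phi(r), f_vals)

def rbf_eval(points, coeffs, x, phi=lambda r: 1.0 + r):
    """Evaluate the fitted RBF expansion at a point x."""
    r = np.linalg.norm(x[None, :] - points, axis=1)
    return phi(r) @ coeffs

# Interpolate the forcing function f(x) = 1 + 2x at three nodes.
nodes = np.array([[0.0], [1.0], [2.0]])
coeffs = rbf_fit(nodes, np.array([1.0, 3.0, 5.0]))
```

With the coefficients in hand, each basis function's known particular solution would be superposed to build the approximate particular solution, leaving only Laplace problems for the boundary elements, which is the step the abstract describes.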
Doorway state expansion approach to coupled channels problems and application to heavy ions
Breitschaft, A. M.S.; Canto, L. F.; Schechter, H.; Hussein, M. S.; Moniz, Ernest J.
1994-08-01
The doorway expansion method is extended to coupled channels problems in low energy heavy ion collisions. As a test, it is applied to an exactly soluble model and the convergence problem is discussed. The method is then applied to heavy ion elastic scattering due to the optical potential and to a simple coupled channels problem. In both cases very good convergence is reached with six doorway states. The calculation with a single doorway is shown to be much better than the DWBA. 9 refs., 8 figs., 1 tab.
Open problems in condensed matter physics, 1987 (Conference)
Office of Scientific and Technical Information (OSTI)
The 1970's and 1980's can be considered the third stage in the explosive development of condensed matter physics. After the very intensive research of the 1930's and 1940's, which followed the formulation of quantum mechanics, and the path-breaking activity of the 1950's and 1960's, the problems being faced now are much more complex and not always
Validity of equation-of-motion approach to Kondo problem in the large N limit (Journal Article)
Office of Scientific and Technical Information (OSTI)
The Anderson impurity model for the Kondo problem is investigated for arbitrary orbit-spin degeneracy N of the magnetic impurity by the equation of motion method (EOM). By employing a new decoupling scheme, a self-consistent equation for the one-particle
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for the dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations, which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to associated anchor point motion static problems are not included.
Solution of basic operational problems of water-development works at the Votkinsk hydroproject
Deev, A. P.; Borisevich, L. A.; Fisenko, V. F.
2012-11-15
Basic operational problems of water-development works at the Votkinsk HPP are examined. Measures for restoration of normal safety conditions for the water-development works at the HPP, which had been taken during service, are presented.
Casting Annotation as an Optimization Problem (2010 JGI/ANL HPC Workshop)
Overbeek, Ross
2010-01-25
Ross Overbeek of the Fellowship for Interpretation of Genomes gives a presentation on "Casting Annotation as an Optimization Problem" at the JGI/Argonne HPC Workshop on January 25, 2010.
On the range of applicability of Baker's approach to the frame problem
Kartha, G.N.
1996-12-31
We investigate the range of applicability of Baker's approach to the frame problem using an action language. We show that for temporal projection and deterministic domains, Baker's approach gives the intuitively expected results.
SEACAS Theory Manuals: Part 1. Problem Formulation in Nonlinear Solid Mechanics
Attaway, S.W.; Laursen, T.A.; Zadoks, R.I.
1998-08-01
This report gives an introduction to the basic concepts and principles involved in the formulation of nonlinear problems in solid mechanics. By way of motivation, the discussion begins with a survey of some of the important sources of nonlinearity in solid mechanics applications, using wherever possible simple one dimensional idealizations to demonstrate the physical concepts. This discussion is then generalized by presenting generic statements of initial/boundary value problems in solid mechanics, using linear elasticity as a template and encompassing such ideas as strong and weak forms of boundary value problems, boundary and initial conditions, and dynamic and quasistatic idealizations. The notational framework used for the linearized problem is then extended to account for finite deformation of possibly inelastic solids, providing the context for the descriptions of nonlinear continuum mechanics, constitutive modeling, and finite element technology given in three companion reports.
Solving The Long-Standing Problem Of Nuclear Reactions At The Highest Microscopic Level: Annual Continuation And Progress Report
Office of Scientific and Technical Information (OSTI)
Spin chains and Arnold's problem on the Gauss-Kuz'min statistics for quadratic irrationals
Ustinov, Alexey V
2013-05-31
New results related to number theoretic model of spin chains are proved. We solve Arnold's problem on the Gauss-Kuz'min statistics for quadratic irrationals. Bibliography: 24 titles.
FLAG Simulations of the Elasticity Test Problem of Gavrilyuk et al.
Kamm, James R.; Runnels, Scott R.; Canfield, Thomas R.; Carney, Theodore C.
2014-04-23
This report contains a description of the impact problem used to compare hypoelastic and hyperelastic material models, as described by Gavrilyuk, Favrie & Saurel. That description is used to set up hypoelastic simulations in the FLAG hydrocode.
Solution of dynamic contact problems by implicit/explicit methods. Final report
Salveson, M.W.; Taylor, R.L.
1996-10-14
The solution of dynamic contact problems within an explicit finite element program such as the LLNL DYNA programs is addressed in the report. The approach is to represent the solution for the deformation of bodies using the explicit algorithm but to solve the contact part of the problem using an implicit approach. Thus, the contact conditions at the next solution state are considered when computing the acceleration state for each explicit time step.
Garrett-Price, B.A.; Smith, S.A.; Watts, R.L.
1984-02-01
A comprehensive overview of heat exchanger fouling in the manufacturing industries is provided. Specifically, this overview addresses: the characteristics of industrial fouling problems; the mitigation and accommodation techniques currently used by industry; and the types and magnitude of costs associated with industrial fouling. A detailed review of the fouling problems, costs and mitigation techniques is provided for the food, textile, pulp and paper, chemical, petroleum, cement, glass and primary metals industries.
Improved time-space method for 3-D heat transfer problems including global warming
Saitoh, T.S.; Wakashima, Shinichiro
1999-07-01
In this paper, the Time-Space Method (TSM), which has been proposed for solving general heat transfer and fluid flow problems, was improved in order to cover global and urban warming. The TSM is effective in almost all transient heat transfer and fluid flow problems, and has already been applied to 2-D melting problems (or moving boundary problems). The computer running time is reduced to only 1/100th--1/1000th of that of existing schemes for 2-D and 3-D problems. However, in order to apply it to much larger-scale problems, for example, global warming, urban warming, and general ocean circulation, the SOR method (or other iterative methods) in four dimensions is somewhat tedious and prohibitively slow. Motivated by the above situation, the authors improved the iteration speed of the previous TSM by introducing the following ideas: (1) Timewise chopping: the time domain is chopped into small pieces to save memory; (2) Adaptive iteration: converged regions are eliminated from further iteration; (3) Internal selective iteration: equations with slow iteration speed in the iterative procedure are selectively iterated to accelerate overall convergence; and (4) False transient integration: a false transient term is added to the Poisson-type equation and the resulting equation is treated as parabolic. By adopting the above improvements, higher-order finite difference schemes, and a hybrid mesh, the computer running time of the TSM is reduced to some 1/4600th of that of the conventional explicit method for a typical 3-D natural convection problem in a closed cavity. The proposed TSM will be more efficacious for large-scale environmental problems, such as global warming, urban warming, and general ocean circulation, in which a tremendous computing time would otherwise be required.
Problems and Solutions: Training Disaster Organizations of the Use of PV
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
This program guide outlines the application and review procedures for obtaining the necessary permit(s) to install a solar energy system for a new or existing residential building. The guide also describes what system siting or design elements may trigger the need for additional plan review.
Mathematical and computational modeling of the diffraction problems by discrete singularities method
Nesvit, K. V.
2014-11-12
The main objective of this study is to reduce the boundary-value problems of wave scattering and diffraction on plane-parallel structures to singular or hypersingular integral equations. For these cases we use the method of parametric representations of integral and pseudo-differential operators. Numerical results for model scattering problems on periodic and boundary gratings, and on gratings above a flat screen reflector, are presented in this paper.
Non-homogeneous solutions of a Coulomb Schrödinger equation as basis set for scattering problems
Del Punta, J. A.; Ambrosio, M. J.; Gasaneo, G.; Zaytsev, S. A.; Ancarani, L. U.
2014-05-15
We introduce and study two-body Quasi Sturmian functions which are proposed as basis functions for applications in three-body scattering problems. They are solutions of a two-body non-homogeneous Schrödinger equation. We present different analytic expressions, including asymptotic behaviors, for the pure Coulomb potential with a driven term involving either Slater-type or Laguerre-type orbitals. The efficiency of Quasi Sturmian functions as a basis set is numerically illustrated through a two-body scattering problem.
Not Available
1980-06-01
The objective of this investigation was to identify, analyze, and suggest solutions to ventilation problems of the following mining systems proposed for use in western thick seams: multiple lift longwall; single pass longwall with face height in the range of 12 to 19 feet; and longwall sublevel caving. To reach this objective, background information on the regulations and ventilation practices relevant to the three methods was reviewed. This was followed by an identification of ventilation problems, including the sources and quantities of methane emissions, respirable coal dust, self ignition, and self heating. The problems were then analyzed to determine the probability of occurrence, the cause of each problem, and its consequences. Having analyzed these problems, solutions to them were described. The major finding of this effort was that, while certain ventilation difficulties peculiar to these three methods can be isolated, in general, seam-specific conditions play a larger role in determining the success of ventilation than does the method used. The major difficulties to be faced by these novel methods are the same as those faced by conventional longwalls. Research efforts should proceed on that basis.
Literature survey and documentation on organic solid deposition problem. Status report
Chung, Ting-Horng
1993-12-01
Organic solid deposition is often a major problem in petroleum production and processing. Recently, this problem has attracted more attention because operating costs have become more critical to the profit of oil production. Also, in miscible gas flooding, asphaltene deposition often occurs in the wellbore region after gas breakthrough and causes plugging. The organic deposition problem is particularly serious in offshore oil production. Cooling of crude oil when it flows through long-distance pipelines under sea water may cause organic deposition in the pipeline and result in plugging. NIPER's Gas EOR Research Project has been devoted to the study of the organic solid deposition problem for three years. NIPER has received many requests for technical support. Recently, the DeepStar project committee on thermo-technology development and standardization has asked NIPER to provide them with NIPER's expertise and experience. To assist the oil industry, NIPER is preparing a state-of-the-art review of the technical development for the organic deposition problem. In the first quarter, this project completed a literature survey and documentation. A total of 258 publications (114 on wax, 124 on asphaltene, and 20 on related subjects) were collected and categorized. The literature survey was focused on two subjects: wax and asphaltene. The subjects of bitumen, asphalt, and heavy oil are not included. Also, the collected publications are mostly related to production problems.
Problems with numerical techniques: Application to mid-loop operation transients
Bryce, W.M.; Lillington, J.N.
1997-07-01
There has been an increasing need to consider accidents at shutdown which have been shown in some PSAs to provide a significant contribution to overall risk. In the UK experience has been gained at three levels: (1) Assessment of codes against experiments; (2) Plant studies specifically for Sizewell B; and (3) Detailed review of modelling to support the plant studies for Sizewell B. The work has largely been carried out using various versions of RELAP5 and SCDAP/RELAP5. The paper details some of the problems that have needed to be addressed. It is believed by the authors that these kinds of problems are probably generic to most of the present generation system thermal-hydraulic codes for the conditions present in mid-loop transients. Thus as far as possible these problems and solutions are proposed in generic terms. The areas addressed include: condensables at low pressure, poor time step calculation detection, water packing, inadequate physical modelling, numerical heat transfer and mass errors. In general single code modifications have been proposed to solve the problems. These have been very much concerned with means of improving existing models rather than by formulating a completely new approach. They have been produced after a particular problem has arisen. Thus, and this has been borne out in practice, the danger is that when new transients are attempted, new problems arise which then also require patching.
A point implicit time integration technique for slow transient flow problems
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-05-01
We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (that can be located at cell centers, cell edges, or cell nodes) implicitly and the rest of the information related to same or other variables are handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function(s) evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently; and its implementation is very robust.
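For a single stiff scalar equation du/dt = -k u + s, the point implicit idea above reduces to a closed-form pointwise update: the variable of interest is treated implicitly, the rest explicitly, and no implicit iteration is needed. The scalar model problem below is an illustrative sketch, not a problem from the paper:

```python
# Point implicit update for du/dt = -k*u + s: the variable of
# interest u is implicit, the source s explicit. The implicit part
# is solved pointwise in closed form, so the step is iteration-free
# and stable for any time step size, as a fully implicit method is.
def point_implicit_step(u, dt, k, s):
    return (u + dt * s) / (1.0 + dt * k)

# With dt = 10, far beyond the forward Euler stability limit dt < 2/k,
# the solution still relaxes smoothly to the steady state s/k = 0.5.
u = 1.0
for _ in range(50):
    u = point_implicit_step(u, dt=10.0, k=1.0, s=0.5)
```

This also shows the trade-off the abstract notes: such large steps are fine for a slow transient approaching steady state, but would be time-inaccurate for a fast transient.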
The impedance problem of wave diffraction by a strip with higher order boundary conditions
Castro, L. P.; Simões, A. M.
2013-10-17
This work analyses an impedance boundary-transmission problem for the Helmholtz equation originated by a problem of wave diffraction by an infinite strip with higher order imperfect boundary conditions. A constructive approach of operator relations is built, which allows a transparent interpretation of the problem in an operator theory framework. In particular, different types of operator relations are exhibited for different types of operators acting between Lebesgue and Sobolev spaces on a finite interval and the positive half-line. All this has consequences for the understanding of the structure of this type of problem. In particular, a Fredholm characterization of the problem is obtained in terms of the initial space order parameters.
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model
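The implicit capture technique mentioned above can be sketched for a single history: instead of terminating the particle when it would be absorbed, each collision deposits the absorbed fraction of the particle's weight as a tally, and the history continues with the surviving weight, so every history keeps contributing deep into the problem. The cross-section values below are arbitrary illustrations:

```python
def implicit_capture_history(n_collisions, sigma_s, sigma_t):
    """One history with implicit capture (survival biasing): at each
    collision the absorbed fraction of the current weight is tallied
    and the particle survives with the scattered fraction."""
    w, absorbed = 1.0, 0.0
    p_scatter = sigma_s / sigma_t
    for _ in range(n_collisions):
        absorbed += w * (1.0 - p_scatter)  # deposit absorbed weight
        w *= p_scatter                     # survive with reduced weight
    return w, absorbed

# Weight is conserved: surviving weight + absorbed tally = 1.
w, absorbed = implicit_capture_history(100, sigma_s=0.8, sigma_t=1.0)
```

In practice the weight decays geometrically, which is why weight windows (as in the hybrid method above) are still needed to control very low-weight histories.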
Alcouffe, R.E.
1985-01-01
A difficult class of problems for the discrete-ordinates neutral particle transport method is to accurately compute the flux due to a spatially localized source. Because the transport equation is solved for discrete directions, the so-called ray effect causes the flux at space points far from the source to be inaccurate. Thus, in general, discrete ordinates would not be the method of choice to solve such problems; it is better suited to calculating problems with significant scattering. The Monte Carlo method is suited to localized source problems, particularly if the amount of collisional interaction is minimal. However, if there are many scattering collisions and the flux at all space points is desired, then the Monte Carlo method becomes expensive. To take advantage of the attributes of both approaches, we have devised a first collision source method to combine the Monte Carlo and discrete-ordinates solutions. That is, particles are tracked from the source to their first scattering collision and tallied to produce a source for the discrete-ordinates calculation. A scattered flux is then computed by discrete ordinates, and the total flux is the sum of the Monte Carlo and discrete-ordinates calculated fluxes. In this paper, we present calculational results using the MCNP and TWODANT codes for selected two-dimensional problems that show the effectiveness of this method.
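The first collision source idea can be sketched in 1-D: particles from a point source stream to their first collision site, sampled from the exponential free-flight distribution, and the binned collision density becomes the source driving the deterministic scattered-flux solve. The slab setup and parameter values below are illustrative only, not from the paper:

```python
import math
import random

def first_collision_source(n_particles, sigma_t, edges, seed=1):
    """Bin first-collision sites of particles streaming from x = 0:
    the distance to first collision is sampled from the exponential
    distribution with mean free path 1/sigma_t, and the per-bin
    collision fractions form the first collision source that would
    drive a discrete-ordinates calculation of the scattered flux."""
    rng = random.Random(seed)
    bins = [0.0] * (len(edges) - 1)
    for _ in range(n_particles):
        x = -math.log(rng.random()) / sigma_t  # distance to first collision
        for i in range(len(bins)):
            if edges[i] <= x < edges[i + 1]:
                bins[i] += 1.0 / n_particles
                break
    return bins

src = first_collision_source(20000, sigma_t=1.0, edges=[0.0, 1.0, 2.0, 5.0])
```

Because the uncollided leg is handled analytically per particle, the ray effect of the localized source never enters the discrete-ordinates sweep, which only sees the smooth, distributed first collision source.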
Simulating variable source problems via post processing of individual particle tallies
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
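The post-processing idea above can be sketched as importance reweighting of recorded per-particle tallies: each contribution is rescaled by the ratio of the new to the original source probability at its sampled source coordinate, so a new source spectrum is evaluated in seconds instead of re-running the transport. All names and the toy spectra below are illustrative assumptions:

```python
def reweight(records, p_old, p_new):
    """Re-evaluate a tally for a new source spectrum by scaling each
    recorded per-particle contribution by p_new(E) / p_old(E) at the
    particle's sampled source energy E. No re-transport is needed."""
    return sum(t * p_new(E) / p_old(E) for E, t in records)

# Records: (source energy, tally contribution) pairs from a single
# Monte Carlo run whose source was uniform on [0, 2).
records = [(0.5, 1.0), (1.5, 2.0)]
uniform = lambda E: 0.5
low_biased = lambda E: 0.75 if E < 1.0 else 0.25  # still normalized on [0, 2)
result = reweight(records, uniform, low_biased)
```

The requirement is that the original source probability be nonzero wherever the new one is, the same support condition familiar from importance sampling.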
Identification of significant problems related to light water reactor piping systems
None
1980-07-01
Work on the project was divided into three tasks. In Task 1, past surveys of LWR piping system problems and recent Licensee Event Report summaries are studied to identify the significant problems of LWR piping systems and the primary causes of these problems. Pipe cracking is identified as the most recurring problem and is mainly caused by pipe vibration arising from operating pump-pipe resonance, fluid-flow fluctuations, and vibration of pipe supports. Research relevant to the identified piping system problems is evaluated. Task 2 studies identify typical LWR piping systems and the current loads and load combinations used in the design of these systems. Definitions of loads are reviewed. In Task 3, a comparative study is carried out on the use of nonlinear analysis methods in the design of LWR piping systems. The study concludes that the current linear-elastic methods of analysis may not predict accurately the behavior of piping systems under seismic loads and may, under certain circumstances, result in nonconservative designs. Gaps at piping supports are found to have a significant effect on the response of the piping systems.
Jacobs, D.G.; Epler, J.S.; Rose, R.R.
1980-03-01
A review of problems encountered in the shallow land burial of low-level radioactive wastes has been made in support of the technical aspects of the National Low-Level Waste (LLW) Management Research and Development Program being administered by the Low-Level Waste Management Program Office, Oak Ridge National Laboratory. The operating histories of burial sites at six major DOE and five commercial facilities in the US have been examined and several major problems identified. The problems experienced at the sites have been grouped into general categories dealing with site development, waste characterization, operation, and performance evaluation. Based on this grouping of the problems, a number of major technical issues have been identified which should be incorporated into program plans for further research and development. For each technical issue a discussion is presented relating the issue to a particular problem, identifying some recent or current related research, and suggesting further work necessary for resolving the issue. Major technical issues which have been identified include the need for improved water management, further understanding of the effect of chemical and physical parameters on radionuclide migration, more comprehensive waste records, improved programs for performance monitoring and evaluation, development of better predictive capabilities, evaluation of space utilization, and improved management control.
A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem
Willert, Jeffrey; Park, H.; Knoll, D.A.
2014-10-01
Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton–Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
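The baseline that both families of methods above accelerate is plain power iteration for the dominant eigenvalue. A minimal dense-matrix sketch follows; in a transport k-eigenvalue solver the matrix-vector product would be a full transport sweep, which is exactly the expensive step the accelerated schemes try to avoid repeating. The test matrix is illustrative:

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_it=1000):
    """Unaccelerated power iteration: repeatedly apply A and
    normalize; the Rayleigh-quotient-like ratio converges to the
    dominant eigenvalue and x to its eigenvector."""
    x = np.ones(A.shape[0])
    k = 1.0
    for _ in range(max_it):
        y = A @ x
        k_new = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)
        if abs(k_new - k) < tol:
            return k_new, x
        k = k_new
    return k, x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
k, x = power_iteration(A)
```

Convergence degrades as the dominance ratio approaches one, which is common in reactor criticality problems and motivates the JFNK/NKA and HOLO reformulations discussed in the abstract.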
Optimization-based additive decomposition of weakly coercive problems with applications
Bochev, Pavel B.; Ridzal, Denis
2016-01-27
In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.
Stathopoulos, A.; Fischer, C.F.; Saad, Y.
1994-12-31
The solution of the large, sparse, symmetric eigenvalue problem, Ax = λx, is central to many scientific applications. Among the many iterative methods that attempt to solve this problem, the Lanczos and the Generalized Davidson (GD) methods are the most widely used. The Lanczos method builds an orthogonal basis for the Krylov subspace, from which the required eigenvectors are approximated through a Rayleigh-Ritz procedure. Each Lanczos iteration is economical to compute, but the number of iterations may grow significantly for difficult problems. The GD method can be considered a preconditioned version of Lanczos. In each step the Rayleigh-Ritz procedure is solved and explicit orthogonalization of the preconditioned residual, (M − λI)⁻¹(A − λI)x, is performed. The GD method thus attempts to improve convergence and robustness at the expense of a more complicated step.
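The Lanczos/Rayleigh-Ritz procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production solver: full reorthogonalization is used for robustness, and a rank-one spike is added to the random test matrix purely so that its extreme eigenvalue is well separated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
B = rng.standard_normal((n, n))
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
A = (B + B.T) / 2 + 50.0 * np.outer(v, v)  # symmetric, well-separated top eigenvalue

m = 40                                      # Krylov subspace dimension
Q = np.zeros((n, m))
alpha = np.zeros(m)
beta = np.zeros(m)
q = rng.standard_normal(n)
Q[:, 0] = q / np.linalg.norm(q)
for j in range(m):
    w = A @ Q[:, j]
    alpha[j] = Q[:, j] @ w
    w -= alpha[j] * Q[:, j]
    if j > 0:
        w -= beta[j - 1] * Q[:, j - 1]
    w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
    if j + 1 < m:
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]

# Rayleigh-Ritz: eigenvalues of the tridiagonal projection T approximate
# the extreme eigenvalues of A.
T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
ritz = np.linalg.eigvalsh(T)
```

The GD method replaces the three-term recurrence with a preconditioned residual expansion of the subspace, trading a cheaper iteration for fewer, more expensive steps.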
Voila: A visual object-oriented iterative linear algebra problem solving environment
Edwards, H.C.; Hayes, L.J.
1994-12-31
Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.
A study on periodic solutions for the circular restricted three-body problem
Gao, F. B.; Zhang, W.
2014-12-01
For the circular restricted three-body problem (CR3BP) in the inertial frame, we interpret the fact that there is no non-trivial 2π-periodic solution of the problem's homogeneous system. Furthermore, based on Reissig's theory, the existence of periodic solutions for the CR3BP is proved rigorously by using the above fact in conjunction with an a priori estimate. It is significant that the existence of periodic solutions of the CR3BP is mainly influenced by factors such as initial values, primary masses, and selection of the problem's control function. In addition, it is notable that the analytic proof of Poincaré's first class solutions is addressed for all values of the mass parameter in the interval (0, 1), the value of which must be sufficiently small according to previously published literature.
Efficient solutions to the NDA-NCA low-order eigenvalue problem
Willert, J. A.; Kelley, C. T.
2013-07-01
Recent algorithmic advances combine moment-based acceleration and Jacobian-Free Newton-Krylov (JFNK) methods to accelerate the computation of the dominant eigenvalue in a k-eigenvalue calculation. In particular, NDA-NCA [1] builds a sequence of low-order (LO) diffusion-based eigenvalue problems whose solutions converge to the true eigenvalue solution. Within NDA-NCA, the solution to the LO k-eigenvalue problem is computed by solving a system of nonlinear equations using some variant of Newton's method. We show that we can speed up the solution to the LO problem dramatically by abandoning the JFNK method and exploiting the structure of the Jacobian matrix. (authors)
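The idea of replacing a Jacobian-free method with an explicitly assembled, structured Jacobian can be sketched on a toy low-order eigenvalue system. The matrices, normalization constraint, and warm start below are illustrative assumptions, not the NDA-NCA discretization:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
# Illustrative LO system: D stands in for the diffusion operator (built as an
# M-matrix so the dominant eigenvalue is real), F for the fission matrix.
D = np.diag(rng.uniform(2.0, 3.0, n)) - 0.02 * rng.random((n, n))
F = rng.random((n, n)) + np.eye(n)

# Residual of the nonlinear system D*phi = (1/k) F*phi with phi.sum() = 1.
def residual(x):
    phi, k = x[:-1], x[-1]
    return np.concatenate([D @ phi - (F @ phi) / k, [phi.sum() - 1.0]])

# Explicit Jacobian, exploiting the block structure instead of a JFNK probe.
def jacobian(x):
    phi, k = x[:-1], x[-1]
    J = np.zeros((n + 1, n + 1))
    J[:n, :n] = D - F / k            # d(residual)/d(phi)
    J[:n, n] = (F @ phi) / k**2      # d(residual)/d(k)
    J[n, :n] = 1.0                   # normalization row
    return J

# Warm start with a few power iterations, then full Newton steps.
phi = np.ones(n) / n
for _ in range(5):
    w = np.linalg.solve(D, F @ phi)
    k = w.sum() / phi.sum()
    phi = w / w.sum()
x = np.concatenate([phi, [k]])
for _ in range(30):
    step = np.linalg.solve(jacobian(x), residual(x))
    x = x - step
    if np.linalg.norm(step) < 1e-13:
        break
k = x[-1]
```

With the Jacobian assembled once per step, each Newton iteration is a single structured linear solve, which is the kind of saving the abstract attributes to abandoning JFNK.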
Computation of 3-D current driven skin effect problems using a current vector potential
Biro, O.; Preis, K.; Renhart, W.; Vrisk, G.; Richter, K.R.
1993-03-01
A finite element formulation of current driven eddy current problems in terms of a current vector potential and a magnetic scalar potential is developed. Since the traditional T-Ω method enforces zero net current in conductors, an impressed current vector potential T₀ is introduced in both conducting and nonconducting regions, describing an arbitrary current distribution with the prescribed net current in each conductor. The function T₀ is represented by means of edge elements, while nodal elements are used to approximate the current vector potential and the magnetic scalar potential. The tangential component of T is set to zero on the conductor/nonconductor interfaces. The method is validated by computing the solution to an axisymmetric problem. Some problems involving a coil with several turns wound around an iron core are solved.
Solving Large Scale Nonlinear Eigenvalue Problem in Next-Generation Accelerator Design
Liao, Ben-Shan; Bai, Zhaojun; Lee, Lie-Quan; Ko, Kwok; /SLAC
2006-09-28
A number of numerical methods, including inverse iteration, the method of successive linear problems, and the nonlinear Arnoldi algorithm, are studied in this paper to solve a large scale nonlinear eigenvalue problem arising from the finite element analysis of resonant frequencies and external Q_e values of a waveguide loaded cavity in the next-generation accelerator design. The authors present a nonlinear Rayleigh-Ritz iterative projection algorithm, NRRIT for short, and demonstrate that it is the most promising approach for a model-scale cavity design. The NRRIT algorithm is an extension of the nonlinear Arnoldi algorithm due to Voss. Computational challenges of solving such a nonlinear eigenvalue problem for a full scale cavity design are outlined.
Effective-medium model of wire metamaterials in the problems of radiative heat transfer
Mirmoosa, M. S.; Nefedov, I. S.; Simovski, C. R.; Rüting, F.
2014-06-21
In the present work, we check the applicability of the effective medium model (EMM) to the problems of radiative heat transfer (RHT) through so-called wire metamaterials (WMMs), composites comprising parallel arrays of metal nanowires. It is explained why this problem is so important for the development of prospective thermophotovoltaic (TPV) systems. Previous studies of the applicability of the EMM to WMMs targeted the imaging applications of WMMs. The analogous study referring to the transfer of radiative heat is a separate problem that deserves extended investigation. We show that WMMs with practically realizable design parameters transmit radiative heat as effectively homogeneous media. The existing EMM is an adequate tool for qualitative prediction of the magnitude of transferred radiative heat and of its effective frequency band.
Some problems in sequencing and scheduling utilizing branch and bound algorithms
Gim, B.
1988-01-01
This dissertation deals with branch and bound algorithms which are applied to the two-machine flow-shop problem with sparse precedence constraints and the optimal sequencing and scheduling of multiple feedstocks in a batch-type digester problem. The problem studied here is to find a schedule which minimizes the maximum flow time with the requirement that the schedule does not violate a set of sparse precedence constraints. This research provides a branch and bound algorithm which employs a lower bounding rule and is based on an adjustment of the sequence obtained by applying Johnson's algorithm. It is demonstrated that this lower bounding procedure in conjunction with Kurisu's branching rule is effective for the sparse precedence constraints problem case. Biomass to methane production systems have the potential of supplying 25% of the national gas demand. The optimal operation of a batch digester system requires the sequencing and scheduling of all batches from multiple feedstocks during a fixed time horizon. A significant characteristic of these systems is that the feedstock decays in storage before use in the digester system. The operational problem is to determine the time to allocate to each batch of several feedstocks and then sequence the individual batches so as to maximize biogas production for a single batch type digester over a fixed planning horizon. This research provides a branch and bound algorithm for sequencing and a two-step hierarchical dynamic programming procedure for time allocation scheduling. An efficient heuristic algorithm is developed for large problems and demonstrated to yield excellent results.
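Johnson's algorithm, which the lower bounding rule above adjusts, solves the unconstrained two-machine flow-shop problem exactly. A minimal sketch follows; precedence constraints and the branch and bound wrapper are beyond this illustration, and the job data are arbitrary:

```python
def johnson(jobs):
    """Johnson's rule: jobs with p1 <= p2 go first in increasing p1,
    the rest go last in decreasing p2. Returns an optimal order of job indices."""
    first = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])
    last = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                  key=lambda i: jobs[i][1], reverse=True)
    return first + last

def makespan(jobs, order):
    """Maximum flow time of a given order on the two machines."""
    t1 = t2 = 0
    for i in order:
        a, b = jobs[i]
        t1 += a               # machine 1 finishes job i at time t1
        t2 = max(t1, t2) + b  # machine 2 starts once both job and machine are ready
    return t2

# Example instance: (machine-1 time, machine-2 time) per job.
jobs = [(3, 2), (1, 4), (5, 1), (2, 6), (4, 3), (6, 5)]
order = johnson(jobs)
```

With sparse precedence constraints, the Johnson order may be infeasible, which is exactly why the dissertation adjusts it inside a branch and bound search.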
Boyarinov, V. F.; Kondrushin, A. E.; Fomichenko, P. A.
2012-07-01
Finite-difference time-dependent equations of Surface Harmonics method have been obtained for plane geometry. Verification of these equations has been carried out by calculations of tasks from 'Benchmark Problem Book ANL-7416'. The capacity and efficiency of the Surface Harmonics method have been demonstrated by solution of the time-dependent neutron transport equation in diffusion approximation. The results of studies showed that implementation of Surface Harmonics method for full-scale calculations will lead to a significant progress in the efficient solution of the time-dependent neutron transport problems in nuclear reactors. (authors)
Hyperelliptic curves for multichannel quantum wires and the multichannel Kondo problem
Fendley, P.; Saleur, H.
1999-10-01
We study the current in a multichannel quantum wire and the magnetization in the multichannel Kondo problem. We show that at zero temperature they can be written simply in terms of contour integrals over a (two-dimensional) hyperelliptic curve. This allows one to easily demonstrate the existence of weak-coupling to strong-coupling dualities. In the Kondo problem, the curve is the same for under- and over-screened cases; the only change is in the contour. © 1999 The American Physical Society
Weekday and Weekend Air Pollutant Levels in Ozone Problem Areas in the U.S.
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
2005 Diesel Engine Emissions Reduction (DEER) Conference presentation: 2005_deer_lawson.pdf (221.69 KB).
Class of model problems in three-body quantum mechanics that admit exact solutions
Takibayev, N. Zh.
2008-03-15
An approach to solving scattering problems in three-body systems for cases where the mass of one of the particles is extremely small in relation to the masses of the other two particles and where the pair potentials of interaction between the particles involved are separable is developed. Exact analytic solutions to such model problems are found for the scattering of a light particle on two fixed centers and on two interacting heavy particles. It is shown that new resonances and a dynamical resonance enhancement may appear in a three-body system.
Fission theory of binary stars. III. The formulation of the bifurcation problem
Lebovitz, N.R.
1983-12-01
A family of compressible Riemann ellipsoids is taken as the known, unperturbed solution of the equations governing secular evolution of an inviscid fluid mass. The problem of the evolution of figures that depart slightly from the ellipsoidal family is discussed in perturbation form, with special attention to the bifurcation of a nonellipsoidal family from a critical Riemann ellipsoid. The similarities to and differences from the classical fission theory of incompressible liquids are discussed, as are physical assumptions and mathematical techniques needed in treating the present problem.
An Implementation and Evaluation of the AMLS Method for Sparse Eigenvalue Problems
Gao, Weiguo; Li, Xiaoye S.; Yang, Chao; Bai, Zhaojun
2006-02-14
We describe an efficient implementation and present a performance study of an algebraic multilevel sub-structuring (AMLS) method for sparse eigenvalue problems. We assess the time and memory requirements associated with the key steps of the algorithm, and compare it with the shift-and-invert Lanczos algorithm in computational cost. Our eigenvalue problems come from two very different application areas: the accelerator cavity design and the normal mode vibrational analysis of the polyethylene particles. We show that the AMLS method, when implemented carefully, is very competitive with the traditional method in broad application areas, especially when large numbers of eigenvalues are sought.
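The shift-and-invert Lanczos baseline that AMLS is compared against is available off the shelf. A sketch using SciPy's `eigsh` on a stock 1-D Laplacian test matrix (an illustrative stand-in, not the paper's cavity or polymer problems):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 500
# 1-D Laplacian: a standard sparse symmetric eigenvalue test matrix.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Shift-and-invert Lanczos: eigsh factors (A - sigma*I) once and runs Lanczos
# with its inverse, so the eigenvalues nearest the shift converge quickly.
vals = np.sort(eigsh(A, k=4, sigma=0.0, which="LM", return_eigenvectors=False))

# Known spectrum of the tridiagonal (-1, 2, -1) matrix, for comparison.
exact = 4.0 * np.sin(np.arange(1, 5) * np.pi / (2 * (n + 1))) ** 2
```

The cost of the single sparse factorization is what AMLS tries to beat when many eigenvalues are needed.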
First conference on ground control problems in the Illinois Coal Basin: proceedings
Chugh, Y. P.; Van Besien, A.
1980-06-01
The first conference on ground control problems in the Illinois Coal Basin was held at the Southern Illinois University at Carbondale, Illinois, August 22-24, 1979. Twenty-one papers from the proceedings have been entered individually into EDB; one had been entered previously from other sources. (LTN)
FORIG: a modification of the ORIGEN2 isotope-generation and depletion code for fusion problems
Blink, J.A.
1982-03-03
This report describes how to use the FORIG computer code to solve isotope-generation and depletion problems in fusion and fission reactors. FORIG is an adaptation of ORIGEN2 to run on a Cray-1 computer, and to accept more extensive activation cross sections.
A POD reduced order model for resolving angular direction in neutron/photon transport problems
Buchan, A.G.; Calloo, A.A.; Goffin, M.G.; Dargaville, S.; Fang, F.; Pain, C.C.; Navon, I.M.
2015-09-01
This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems; one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.
A comparison study of optimization methods for the bipartite matching problem (BMP)
Goldstein, M.; Toomarian, N.; Barhen, J.
1988-01-01
A comparison study of optimization methods for the bipartite matching problem (BMP) with independent random distances between the points is presented. It is concluded that a variant of simulated annealing, developed at ORNL, is the most promising for the BMP. 8 refs., 1 fig., 1 tab.
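A minimal sketch of simulated annealing on a BMP instance with independent random distances, using pairwise swaps of assignments. The move set, cooling schedule, and parameters here are generic illustrative choices, not the ORNL variant the study recommends:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12
D = rng.random((n, n))   # independent random distances between the two point sets

def cost(perm):
    """Total distance of the matching that pairs point i with point perm[i]."""
    return D[np.arange(n), perm].sum()

# Simulated annealing over permutations: propose a swap of two assignments,
# always accept downhill moves, accept uphill moves with Boltzmann probability.
perm = rng.permutation(n)
best = perm.copy()
T = 1.0
for _ in range(20000):
    i, j = rng.integers(n, size=2)
    cand = perm.copy()
    cand[i], cand[j] = cand[j], cand[i]
    delta = cost(cand) - cost(perm)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        perm = cand
        if cost(perm) < cost(best):
            best = perm.copy()
    T *= 0.9995   # geometric cooling schedule
```

For a sanity check, `scipy.optimize.linear_sum_assignment` solves such an instance exactly, so the annealed matching can be compared against the true optimum.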
Farfan, E. B.; Jannik, G. T.; Marra, J. C.; Oskolkov, B. Ya.; Bondarkov, M. D.; Gaschak, S. P.; Maksymenko, A. M.; Maksymenko, V. M.; Martynenko, V. I.
2009-11-09
Decommissioning of nuclear power plants and other nuclear fuel cycle facilities has been an imperative issue lately. There exist significant experience and generally accepted recommendations on remediation of lands with residual radioactive contamination; however, there are hardly any such recommendations on remediation of cooling ponds that, in most cases, are fairly large water reservoirs. The literature only describes remediation of minor reservoirs containing radioactive silt (a complete closure followed by preservation) or small water reservoirs resulting in reestablishing natural water flows. Problems associated with remediation of river reservoirs resulting in flooding of vast agricultural areas also have been described. In addition, the severity of environmental and economic problems related to the remedial activities is shown to exceed any potential benefits of these activities. One of the large, highly contaminated water reservoirs that require either remediation or closure is Karachay Lake near the MAYAK Production Association in the Chelyabinsk Region of Russia where liquid radioactive waste had been deep well injected for a long period of time. Backfilling of Karachay Lake is currently in progress. It should be noted that secondary environmental problems associated with its closure are considered to be of less importance since sustaining Karachay Lake would have presented a much higher radiological risk. Another well-known highly contaminated water reservoir is the Chernobyl Nuclear Power Plant (ChNPP) Cooling Pond, decommissioning of which is planned for the near future. This study summarizes the environmental problems associated with the ChNPP Cooling Pond decommissioning.
Technical considerations and problems associated with long-term storage of low-level waste
Siskind, B.
1991-12-31
If a state or regional compact does not have adequate disposal capacity for low-level radioactive waste (LLRW), then extended storage of certain LLRW may be necessary. The Nuclear Regulatory Commission (NRC) contracted with Brookhaven National Laboratory (BNL) several years ago (1984--86) to address the technical issues of extended storage. The dual objectives of this study were (1) to provide practical technical assessments for NRC to consider in evaluating specific proposals for extended storage and (2) to help ensure adequate consideration by NRC, Agreement States, and licensees of potential problems that may arise from existing or proposed extended storage practices. In this summary of that study, the circumstances under which extended storage of LLRW would most likely result in problems during or after the extended storage period are considered and possible mitigative measures to minimize these problems are discussed. These potential problem areas include: (1) the degradation of carbon steel and polyethylene containers during storage and the subsequent need for repackaging (resulting in increased occupational exposure), (2) the generation of hazardous gases during storage, and (3) biodegradative processes in LLRW.
Computing confidence intervals on solution costs for stochastic grid generation expansion problems.
Woodruff, David L.; Watson, Jean-Paul
2010-12-01
A range of core operations and planning problems for the national electrical grid are naturally formulated and solved as stochastic programming problems, which minimize expected costs subject to a range of uncertain outcomes relating to, for example, uncertain demands or generator output. A critical decision issue relating to such stochastic programs is: How many scenarios are required to ensure a specific error bound on the solution cost? Scenarios are the key mechanism used to sample from the uncertainty space, and the number of scenarios drives computational difficulty. We explore this question in the context of a long-term grid generation expansion problem, using a bounding procedure introduced by Mak, Morton, and Wood. We discuss experimental results using problem formulations independently minimizing expected cost and down-side risk. Our results indicate that we can use a surprisingly small number of scenarios to yield tight error bounds in the case of expected cost minimization, which has key practical implications. In contrast, error bounds in the case of risk minimization are significantly larger, suggesting more research is required in this area in order to achieve rigorous solutions for decision makers.
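The batching idea behind the Mak, Morton, and Wood bounding procedure can be sketched on a toy stochastic program. The newsvendor model below is an illustrative stand-in for the expansion problem; the batch counts, sample sizes, and demand distribution are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy newsvendor stand-in for a stochastic planning problem: choose capacity x
# to minimize c*x + E[p * max(d - x, 0)] under uncertain demand d.
c, p = 1.0, 4.0

def solve_saa(demands):
    """Sample-average optimum: the (1 - c/p) quantile of the sampled demands."""
    x = np.quantile(demands, 1.0 - c / p)
    return x, c * x + p * np.maximum(demands - x, 0.0).mean()

# Batched bounds: the mean of independently sampled optimal values is a
# statistical lower bound on the true optimum, while the sampled cost of a
# fixed candidate decision is an upper bound.
n_batches, n_scen = 30, 200
lower = np.array([solve_saa(rng.exponential(10.0, n_scen))[1]
                  for _ in range(n_batches)])
x_hat = solve_saa(rng.exponential(10.0, 5000))[0]   # candidate decision
upper = np.array([c * x_hat
                  + p * np.maximum(rng.exponential(10.0, n_scen) - x_hat, 0.0).mean()
                  for _ in range(n_batches)])

# Normal-approximation confidence interval on the optimality gap.
gap = upper.mean() - lower.mean()
half_width = 1.96 * np.sqrt(upper.var(ddof=1) / n_batches
                            + lower.var(ddof=1) / n_batches)
```

The scenario-count question the paper studies is how large n_scen must be before the interval [gap − half_width, gap + half_width] becomes tight enough to certify the candidate solution.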
Dovrolis, Konstantinos
2014-04-15
We present the development of a middleware service, called Pythia, that is able to detect, localize, and diagnose performance problems in the network paths that interconnect research sites that are of interest to DOE. The proposed service can analyze perfSONAR data collected from all participating sites.
Piping benchmark problems for the ABB/CE System 80+ Standardized Plant
Bezler, P.; DeGrassi, G.; Braverman, J.; Wang, Y.K.
1994-07-01
To satisfy the need for verification of the computer programs and modeling techniques that will be used to perform the final piping analyses for the ABB/Combustion Engineering System 80+ Standardized Plant, three benchmark problems were developed. The problems are representative piping systems subjected to representative dynamic loads, with solutions developed using the methods being proposed for the analysis of the System 80+ standard design. The combined license licensees will be required to demonstrate that their solutions to these problems are in agreement with the benchmark problem set. The first System 80+ piping benchmark is a uniform support motion response spectrum solution for one section of the feedwater piping subjected to safe shutdown seismic loads. The second System 80+ piping benchmark is a time history solution for the feedwater piping subjected to the transient loading induced by a water hammer. The third System 80+ piping benchmark is a time history solution of the pressurizer surge line subjected to the accelerations induced by a main steam line pipe break. The System 80+ reactor is an advanced PWR type.
T. Downar
2009-03-31
The overall objective of the work here has been to eliminate the approximations used in current resonance treatments by developing continuous energy multi-dimensional transport calculations for problem dependent self-shielding calculations. The work here builds on the existing resonance treatment capabilities in the ORNL SCALE code system.
Honea, R.B.; Baxter, F.P.
1984-07-01
In 1977 Congress passed the Surface Mining Control and Reclamation Act, which provided for the abatement of abandoned mine land (AML) problems through a reclamation program funded by a severance tax on current mining. AML was defined as any land, including associated buildings, equipment, and affected areas, that was no longer being used for coal mining by August 1977. This act also created the Office of Surface Mining (OSM) in the Department of the Interior to administer the AML program and to assume other regulatory and research responsibilities. This report documents the design, implementation, and results of a national inventory of the most serious problems associated with past mining practices. One of the objectives of the inventory was to help OSM and the participating states locate, identify, and rank AML problems and estimate their reclamation costs. Other objectives were to encourage states and Indian tribes to collect such data and to provide OSM with the information necessary to guide its decision-making processes and to quantify the progress of the reclamation program. Because only limited funds were available to design and implement the national inventory and because the reclamation fund established by the Act may never be sufficient to correct all AML problems, OSM has focused on only the top-priority problems. It is stressed that this is not an inventory of AML features but rather an inventory of AML impacts. It should be noted that the data and analysis contained in this report are based on a data collection effort conducted by the states, Indian tribes, and OSM contractors between 1979 and mid-1982.
Greg L. Hollinger
2014-06-01
Background: The current rules in the nuclear section of the ASME Boiler and Pressure Vessel (B&PV) Code, Section III, Subsection NH for the evaluation of strain limits and creep-fatigue damage using simplified methods based on elastic analysis have been deemed inappropriate for Alloy 617 at temperatures above 1200°F (650°C) [1]. To address this issue, proposed code rules have been developed which are based on the use of elastic-perfectly plastic (E-PP) analysis methods and which are expected to be applicable to very high temperatures. The proposed rules for strain limits and creep-fatigue evaluation were initially documented in the technical literature [2, 3], and have recently been revised to incorporate comments and simplify their application. The revised code cases have been developed. Task Objectives: The goal of the Sample Problem task is to exercise these code cases through example problems to demonstrate their feasibility and, also, to identify potential corrections and improvements should problems be encountered. This will provide input to the development of technical background documents for consideration by the applicable B&PV committees considering these code cases for approval. This task has been performed by Hollinger and Pease of Becht Engineering Co., Inc., Nuclear Services Division, and a report detailing the results of the E-PP analyses conducted on example problems per the procedures of the E-PP strain limits and creep-fatigue draft code cases is enclosed as Enclosure 1. Conclusions: The feasibility of the application of the E-PP code cases has been demonstrated through example problems that consist of realistic geometry (a nozzle attached to a semi-hemispheric shell with a circumferential weld), loads (pressure; pipe reaction load applied at the end of the nozzle, including axial and shear forces, bending and torsional moments; through-wall transient temperature gradient), and design and operating conditions (Levels A, B and C).
A METHOD FOR SELECTING SOFTWARE FOR DYNAMIC EVENT ANALYSIS I: PROBLEM SELECTION
J. M. Lacy; S. R. Novascone; W. D. Richins; T. K. Larson
2007-08-01
New nuclear power reactor designs will require resistance to a variety of possible malevolent attacks, as well as traditional dynamic accident scenarios. The design/analysis team may be faced with a broad range of phenomena including air and ground blasts, high-velocity penetrators or shaped charges, and vehicle or aircraft impacts. With a host of software tools available to address these high-energy events, the analysis team must evaluate and select the software most appropriate for their particular set of problems. The accuracy of the selected software should then be validated with respect to the phenomena governing the interaction of the threat and structure. In this paper, we present a method for systematically comparing current high-energy physics codes for specific applications in new reactor design. Several codes are available for the study of blast, impact, and other shock phenomena. Historically, these packages were developed to study specific phenomena such as explosives performance, penetrator/target interaction, or accidental impacts. As developers generalize the capabilities of their software, legacy biases and assumptions can remain that could affect the applicability of the code to other processes and phenomena. R&D institutions generally adopt one or two software packages and use them almost exclusively, performing benchmarks on a single-problem basis. At the Idaho National Laboratory (INL), new comparative information was desired to permit researchers to select the best code for a particular application by matching its characteristics to the physics, materials, and rate scale (or scales) representing the problem at hand. A study was undertaken to investigate the comparative characteristics of a group of shock and high-strain rate physics codes including ABAQUS, LS-DYNA, CTH, ALEGRA, ALE-3D, and RADIOSS. A series of benchmark problems were identified to exercise the features and capabilities of the subject software. To be useful, benchmark problems
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithms development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS [new version is version 3]. As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the
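A minimal sketch of the kind of preconditioned Krylov iteration this project studies: a conjugate-gradient solve with a Jacobi (diagonal) preconditioner standing in for the ILU-type factorizations discussed above. This is illustrative pure Python, not the pARMS library; the model matrix is a small 1-D Laplacian.

```python
# Preconditioned conjugate gradient with a Jacobi preconditioner M = diag(A).
# The diagonal preconditioner is a deliberately simple stand-in for the
# ILU-type factorizations the report studies -- this is NOT the pARMS code.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def pcg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b (A symmetric positive definite) by preconditioned CG."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                          # r = b - A*0
    z = [ri / A[i][i] for i, ri in enumerate(r)]      # z = M^{-1} r
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = [ri / A[i][i] for i, ri in enumerate(r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# SPD model problem: 1-D Laplacian (tridiagonal 2, -1)
n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [1.0] * n
x = pcg(A, b)
residual = max(abs(bi - axi) for bi, axi in zip(b, matvec(A, x)))
print(residual < 1e-8)  # True: CG converges in at most n steps here
```

For a well-conditioned matrix like this one the Jacobi preconditioner suffices; the report's work on ILU variants, shifting, and multilevel coarsening targets the indefinite and discontinuous-coefficient cases where it does not.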
MFIX simulation of NETL/PSRI challenge problem of circulating fluidized bed
Li, Tingwen; Dietiker, Jean-François; Shahnam, Mehrdad
2012-12-01
In this paper, numerical simulations of the NETL/PSRI challenge problem of a circulating fluidized bed (CFB) using the open-source code Multiphase Flow with Interphase eXchange (MFIX) are reported. Two rounds of simulation results are reported, including the first-round blind test and the second-round modeling refinement. Three-dimensional high-fidelity simulations are conducted to model a 12-inch diameter pilot-scale CFB riser. Detailed comparisons between numerical results and experimental data are made with respect to the axial pressure gradient profile and the radial profiles of solids velocity and solids mass flux along different radial directions at various elevations, for operating conditions covering different fluidization regimes. Overall, the numerical results show that CFD can predict the complex gas–solids flow behavior in the CFB riser reasonably well. In addition, lessons learnt from modeling this challenge problem are presented.
Grey transport acceleration method for time-dependent radiative transfer problems
Larsen, E.
1988-10-01
A new iterative method for solving the time-dependent multifrequency radiative transfer equations is described. The method is applicable to semi-implicit time discretizations that generate a linear steady-state multifrequency transport problem with pseudo-scattering within each time step. The standard "lambda" iteration method is shown to often converge slowly for such problems, and the new grey transport acceleration (GTA) method, based on accelerating the lambda method by employing a grey, or frequency-independent, transport equation, is developed. The GTA method is shown, theoretically by an iterative Fourier analysis and experimentally by numerical calculations, to converge significantly faster than the lambda method. In addition, the GTA method is conceptually simple to implement for general differencing schemes, on either Eulerian or Lagrangian meshes. Copyright 1988 Academic Press, Inc.
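The slow convergence that motivates GTA can be seen in a toy model. The scalar fixed-point iteration below is a caricature of one transport "lambda" sweep, whose error contracts only by the effective scattering ratio c per sweep; it is not the GTA scheme itself, and the numbers are illustrative.

```python
# Toy illustration of why "lambda" (source) iteration converges slowly when
# the effective scattering ratio c is close to 1: the error contracts only
# by a factor of c per sweep. The scalar model phi <- c*phi + q is a
# caricature of the transport iteration, not the GTA scheme.

def lambda_iterate(c, q, tol=1e-8):
    """Count sweeps needed for the fixed-point iteration phi <- c*phi + q."""
    exact = q / (1.0 - c)          # the fixed point (steady-state intensity)
    phi, sweeps = 0.0, 0
    while abs(phi - exact) > tol * abs(exact):
        phi = c * phi + q          # one "lambda" sweep
        sweeps += 1
    return sweeps

# Nearly scattering-dominated problems (c -> 1) need far more sweeps,
# which is what the grey acceleration is designed to remedy.
print(lambda_iterate(0.5, 1.0) < lambda_iterate(0.99, 1.0))  # True
```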
A BDDC Algorithm with Deluxe Scaling for Three-Dimensional H (curl) Problems
Dohrmann, Clark R.; Widlund, Olof B.
2015-04-28
In our paper, we present and analyze a BDDC algorithm for a class of elliptic problems in the three-dimensional H(curl) space. Compared with existing results, our condition number estimate requires fewer assumptions and also involves two fewer powers of log(H/h), making it consistent with optimal estimates for other elliptic problems. Here, H/h is the maximum of Hi/hi over all subdomains, where Hi and hi are the diameter and the smallest element diameter for the subdomain Ωi. The analysis makes use of two recent developments. The first is our new approach to averaging across the subdomain interfaces, while the second is a new technical tool that allows arguments involving trace classes to be avoided. Furthermore, numerical examples are presented to confirm the theory and demonstrate the importance of the new averaging approach in certain cases.
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
The Center for Computational Sciences and Engineering (CCSE) develops and applies advanced computational methodologies to solve large-scale scientific and engineering problems arising in the Department of Energy (DOE) mission areas involving energy, environmental, and industrial technology. The primary focus is in the application of structured-grid finite difference methods on adaptive grid hierarchies for compressible, incompressible, and low Mach number flows. The diverse range of scientific applications that drive the research typically involve a large range of spatial and temporal scales (e.g. turbulent reacting flows) and require the use of extremely large computing hardware, such as the 153,000-core computer, Hopper, at NERSC. The CCSE approach to these problems centers on the development and application of advanced algorithms that exploit known separations in scale; for many of the application areas this results in algorithms that are several orders of magnitude more efficient than traditional simulation approaches.
Large neighborhood search for the double traveling salesman problem with multiple stacks
Bent, Russell W; Van Hentenryck, Pascal
2009-01-01
This paper considers a complex real-life short-haul/long-haul pickup and delivery application. The problem can be modeled as a double traveling salesman problem (TSP) in which the pickups and the deliveries happen in the first and second TSPs respectively. Moreover, the application features multiple stacks in which the items must be stored, and the pickups and deliveries must take place in reverse (LIFO) order for each stack. The goal is to minimize the total travel time while satisfying these constraints. This paper presents a large neighborhood search (LNS) algorithm which improves the best-known results on 65% of the available instances and is always within 2% of the best-known solutions.
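The LIFO stacking constraint described above can be sketched as a feasibility check: items loaded into a stack during the pickup tour may only be delivered in reverse order. The function and data below are hypothetical illustrations, not taken from the paper's benchmark instances.

```python
# Hypothetical sketch of the LIFO constraint in the double-TSP application:
# items loaded into a stack on the first (pickup) tour must be delivered in
# reverse (last-in, first-out) order on the second (delivery) tour.

def lifo_feasible(pickup_order, delivery_order, stack_of):
    """Check that each stack's items are delivered in reverse pickup order."""
    stacks = {}
    for item in pickup_order:            # simulate loading on the first TSP
        stacks.setdefault(stack_of[item], []).append(item)
    for item in delivery_order:          # replay unloading on the second TSP
        s = stacks.get(stack_of[item], [])
        if not s or s[-1] != item:       # only the stack top is accessible
            return False
        s.pop()
    return all(not s for s in stacks.values())

stack_of = {"a": 0, "b": 0, "c": 1}      # two items on stack 0, one on stack 1
print(lifo_feasible(["a", "b", "c"], ["b", "a", "c"], stack_of))  # True
print(lifo_feasible(["a", "b", "c"], ["a", "b", "c"], stack_of))  # False
```

An LNS algorithm for this problem would call such a check (or an incremental version of it) each time a removed request is reinserted into a candidate tour.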
Self-similar solution of the problem of consolidation and thawing of frozen soil
Klement'ev, A.F.; Klement'eva, E.A.
1988-10-01
This article presents a new mathematical model of the process of thawing of frozen soil taking consolidation into account. Two solutions were obtained: the self-similar solution for the unidimensional biphase problem and an approximate analytical solution for the simplified single-phase problem. A comparison with the results of physical modeling showed that the method is fairly effective in the case of warm permafrost. The mean error in predicting the position of the interface between the thawed and frozen zones for different soils over a period of one to ten years amounted to 20.9%. The use of the method of the All-Union Research Institute of Pipeline Construction yielded an error of 31.6% and the method of the All-Union Research Institute of the Gas Industry an error of 39.6% by comparison.
Koehler, M.D.; Marrs, J.A.
1990-01-01
As national leaders become increasingly aware of the environmental risks that modern technology adds to existing natural environmental problems, they have begun to search for ways to prioritize the risks they face. Several experts in risk assessment, including Professor Gordon Goodman of the Stockholm Environmental Institute, researchers at Clark University's Center for Environment, Technology, Development (CENTED), and the United States Environmental Protection Agency, have already developed some hazard characterization taxonomies that attempt to fill this need. The Kennedy School of Government (KSG) taxonomy is the next iteration of taxonomies designed to characterize environmental problems. The purpose of this Policy Analysis Exercise (PAE) is to test and evaluate the KSG taxonomy. In order to accomplish these goals, the United States and India are presented as case studies. The final section of this PAE provides recommendations to policy makers who use the KSG taxonomy.
Development of a CFD Analysis Plan for the first VHTR Standard Problem
Richard W. Johnson
2008-09-01
Data from a scaled model of a portion of the lower plenum of the helium-cooled very high temperature reactor (VHTR) are under consideration for acceptance as a computational fluid dynamics (CFD) validation data set or standard problem. A CFD analysis will help determine if the scaled model is a suitable geometry for validation data. The present article describes the development of an analysis plan for the CFD model. The plan examines the boundary conditions that should be used, the extent of the computational domain that should be included and which turbulence models need not be examined against the data. Calculations are made for a closely related 2D geometry to address these issues. It was found that a CFD model that includes only the inside of the scaled model in its computational domain is adequate for CFD calculations. The realizable k–ε model was found not to be suitable for this problem because it did not predict vortex-shedding.
Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit
Merzari, E.; Shemon, E. R.; Yu, Y. Q.; Thomas, J. W.; Obabko, A.; Jain, Rajeev; Mahadevan, Vijay; Tautges, Timothy; Solberg, Jerome; Ferencz, Robert Mark; Whitesides, R.
2015-12-21
This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully-integrated simulation.
Viscosity Solutions of Systems of PDEs with Interconnected Obstacles and Switching Problem
Hamadene, S.; Morlais, M. A.
2013-04-15
This paper deals with existence and uniqueness of a solution in viscosity sense, for a system of m variational partial differential inequalities with inter-connected obstacles. A particular case is the Hamilton-Jacobi-Bellman system of the Markovian stochastic optimal m-states switching problem. The switching cost functions depend on (t,x). The main tool is the notion of systems of reflected backward stochastic differential equations with oblique reflection.
Modeling of Gap Closure in Uranium-Zirconium Alloy Metal Fuel - A Test Problem
Simunovic, Srdjan; Ott, Larry J; Gorti, Sarma B; Nukala, Phani K; Radhakrishnan, Balasubramaniam; Turner, John A
2009-10-01
Uranium based binary and ternary alloy fuel is a possible candidate for advanced fast spectrum reactors with long refueling intervals and reduced linear heat rating [1]. An important metal fuel issue that can impact the fuel performance is the fuel-cladding gap closure, and fuel axial growth. The dimensional change in the fuel during irradiation is due to a superposition of the thermal expansion of the fuel due to heating, volumetric changes due to possible phase transformations that occur during heating and the swelling due to fission gas retention. The volumetric changes due to phase transformation depend both on the thermodynamics of the alloy system and the kinetics of phase change reactions that occur at the operating temperature. The nucleation and growth of fission gas bubbles that contributes to fuel swelling is also influenced by the local fuel chemistry and the microstructure. Once the fuel expands and contacts the clad, expansion in the radial direction is constrained by the clad, and the overall deformation of the fuel clad assembly depends upon the dynamics of the contact problem. The neutronics portion of the problem is also inherently coupled with microstructural evolution in terms of constituent redistribution and phase transformation. Because of the complex nature of the problem, a series of test problems have been defined with increasing complexity with the objective of capturing the fuel-clad interaction in complex fuels subjected to a wide range of irradiation and temperature conditions.
A General Optimality Conditions for Stochastic Control Problems of Jump Diffusions
Bahlali, Seid; Chala, Adel
2012-02-15
We consider a stochastic control problem where the system is governed by a nonlinear stochastic differential equation with jumps. The control is allowed to enter into both diffusion and jump terms. By only using the first order expansion and the associated adjoint equation, we establish necessary as well as sufficient optimality conditions for relaxed controls, which are measure-valued processes.
Topology-Aware Mappings for Large-Scale Eigenvalue Problems | Argonne
U.S. Department of Energy (DOE) all webpages (Extended Search)
Authors: Aktulga, H.M., Yang, C., Ng, E.G., Maris, P., Vary, J.P. Obtaining highly accurate predictions for properties of light atomic nuclei using the Configuration Interaction (CI) approach requires computing the lowest eigenvalues and associated eigenvectors of a large many-body nuclear Hamiltonian matrix, Ĥ. Since Ĥ is a large sparse matrix, a parallel iterative eigensolver designed for
New technologies address the problem areas of coiled-tubing cementing
Carpenter, R.B.
1992-05-01
Coiled-tubing cementing has been practiced successfully on the Alaskan North Slope for several years. This paper discusses the special problems faced when this technology was applied to offshore U.S. gulf coast operations. The innovative solutions and procedures developed to improve the economic and technical success of coiled-tubing cementing are also discussed. Comparative laboratory and computer studies, as well as field case histories, will be presented to show the economic merit of this technology.
Antitrust Enforcement in the Electricity and Gas Industries: Problems and Solutions for the EU
Leveque, Francois
2006-06-15
Antitrust enforcement in the electricity and gas industries raises specific problems that call for specific solutions. Among the issues: How can the anticompetitive effects of mergers be assessed in a changing regulatory environment? Should long-term agreements in energy purchasing be prohibited? What are the benefits of preventive action such as competition advocacy and market surveillance committees? Should Article 82 (a) of the EC Treaty be used to curb excessive pricing? (author)
Not Available
1980-03-01
The potential and existing problems concerning the interface between US electric utilities and cogenerators are considered by region. Also considered are regulatory barriers, rates and contracts, economic feasibility, and impact on system planning. Finally, the impact of the National Energy Act on the marketability potential of cogeneration is reviewed. The three appendixes summarize the utility meetings on cogeneration held in Washington, DC, Los Angeles, and Chicago.
Energy Problem Is Something That We Have to Face Now | Center for
U.S. Department of Energy (DOE) all webpages (Extended Search)
Energy Problem Is Something That We Have to Face Now 13 Mar 2014 Marely Tejeda is a graduate student working with Professors Ana Moore and Vladimiro Mujica on design and testing of the water splitting cells. Marely is involved in the synthesis of the artificial reaction centers creating high potential porphyrins that have enough energy to
U.S. Department of Energy (DOE) all webpages (Extended Search)
2 - Advances in Reactor Physics - Linking Research, Industry, and Education Knoxville, Tennessee, USA, April 15-20, 2012, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2012) CONSTRUCTION OF ACCURACY-PRESERVING SURROGATE FOR THE EIGENVALUE RADIATION DIFFUSION AND/OR TRANSPORT PROBLEM Congjian Wang and Hany S. Abdel-Khalik Department of Nuclear Engineering North Carolina State University Raleigh, NC 27695 cwang21@ncsu.edu ; abdelkhalik@ncsu.edu ABSTRACT The construction of surrogate
1. What is the problem? Lack of secure supply chains for some raw materials cri
U.S. Department of Energy (DOE) all webpages (Extended Search)
1. What is the problem? Lack of secure supply chains for some raw materials critical to clean energy technologies hinders U.S. manufacturing and energy security. These critical materials (a) provide essential and specialized properties to advanced engineered products or systems for which there are no easy substitutes and (b) are subject to supply risk. Rare-earth elements, with essential roles in high-efficiency motors and advanced lighting, are the most prominent of the critical materials today.
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
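The variance-based ranking mentioned above can be sketched with a crude double-loop estimator of a first-order Sobol-style index. The two-input model and the estimator below are illustrative only; they are not the challenge problem's actual model or the authors' code.

```python
import random

# Crude sketch of a variance-based (Sobol-style) first-order sensitivity
# index, the kind of ranking used to order epistemic and aleatory inputs.
# Double-loop sampling: fix X_i, average over the other input, then take
# the variance of the conditional means.

def first_order_index(f, i, n_outer=200, n_inner=200, seed=0):
    """Estimate S_i = Var(E[Y | X_i]) / Var(Y), inputs uniform on [0, 1]^2."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                     # fix the i-th input
        ys = []
        for _ in range(n_inner):
            x = [rng.random(), rng.random()]  # resample the other input(s)
            x[i] = xi
            ys.append(f(x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    mean_y = sum(all_y) / len(all_y)
    var_y = sum((y - mean_y) ** 2 for y in all_y) / len(all_y)
    var_cond = sum((m - mean_y) ** 2 for m in cond_means) / n_outer
    return var_cond / var_y

f = lambda x: 5.0 * x[0] + 0.5 * x[1]         # x[0] dominates by construction
print(first_order_index(f, 0) > first_order_index(f, 1))  # True
```

In practice such indices are computed with far more efficient estimators (e.g. Saltelli-type sampling or polynomial surrogates), but the ranking principle is the same.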
The Energy Problem: What the Helios Project Can Do About it (LBNL Science at the Theater)
Chu, Steven
2011-04-28
The energy problem is one of the most important issues that science and technology has to solve. Nobel laureate and Berkeley Lab Director Steven Chu proposes an aggressive research program to transform the existing and future energy systems of the world away from technologies that emit greenhouse gases. Berkeley Lab's Helios Project concentrates on renewable fuels, such as biofuels, and solar technologies, including a new generation of solar photovoltaic cells and the conversion of electricity into chemical storage to meet future demand.
Relationship Between MP and DPP for the Stochastic Optimal Control Problem of Jump Diffusions
Shi, Jingtao; Wu, Zhen
2011-04-15
This paper is concerned with the stochastic optimal control problem of jump diffusions. The relationship between stochastic maximum principle and dynamic programming principle is discussed. Without involving any derivatives of the value function, relations among the adjoint processes, the generalized Hamiltonian and the value function are investigated by employing the notions of semijets evoked in defining the viscosity solutions. Stochastic verification theorem is also given to verify whether a given admissible control is optimal.
Sandia researcher turns "problem" of nonlinear capacitors into a solution |
National Nuclear Security Administration (NNSA)
National Nuclear Security Administration (NNSA) researcher turns "problem" of nonlinear capacitors into a solution. Friday, January 22, 2016 - 2:00am NNSA Blog. Sandia National Laboratories' researcher Juan Elizondo-Decanini holds two compact, high-voltage nonlinear transmission lines. He leads a project on nonlinear behavior in materials - behavior that's usually shunned because it's so unpredictable. (Photo by Randy Montoya) Sandia National Laboratories' Juan Elizondo-Decanini
Cesari, G.
1994-12-31
The aim of this paper is to analyze experimentally the quality of the solution obtained with dissection algorithms applied to the geometric Traveling Salesman Problem. Starting from Karp's results, we apply a divide-and-conquer strategy, first dividing the plane into subregions where we calculate optimal subtours and then merging these subtours to obtain the final tour. The analysis is restricted to problem instances where points are uniformly distributed in the unit square. For relatively small sets of cities we analyze the quality of the solution by calculating the length of the optimal tour and by comparing it with our approximate solution. When the problem instance is too large we perform an asymptotic analysis, estimating the length of the optimal tour. We apply the same dissection strategy also to classical heuristics by calculating approximate subtours and comparing the results with the average quality of the heuristic. Our main result is the estimate of the rate of convergence of the approximate solution to the optimal solution as a function of the number of dissection steps, of the criterion used for the plane division, and of the quality of the subtours. We have implemented our programs on MUSIC (MUlti Signal processor system with Intelligent Communication), a Single-Program-Multiple-Data parallel computer with distributed memory developed at ETH Zurich.
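The dissection idea can be sketched with a simple strip heuristic: partition the unit square into vertical strips, order the points within each strip, and concatenate the strips into one approximate tour. This is an illustration of the divide-and-conquer principle, not the paper's (or Karp's) algorithm as actually implemented.

```python
import math
import random

# Strip-heuristic sketch of dissection for the geometric TSP: points are
# binned into vertical strips of the unit square, sorted within each strip,
# and concatenated. Serpentine (alternating) ordering keeps the junctions
# between adjacent strips short.

def strip_tour(points, n_strips):
    strips = [[] for _ in range(n_strips)]
    for p in points:
        k = min(int(p[0] * n_strips), n_strips - 1)
        strips[k].append(p)
    tour = []
    for k, strip in enumerate(strips):
        strip.sort(key=lambda p: p[1], reverse=(k % 2 == 1))  # serpentine
        tour.extend(strip)
    return tour

def tour_length(tour):
    return sum(math.dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(400)]
print(tour_length(strip_tour(pts, 20)) < tour_length(pts))  # True
```

For n uniform points, the optimal tour length grows like a constant times sqrt(n), which is why the paper's asymptotic analysis can estimate the optimum for instances too large to solve exactly.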
Klicker, Kyle R.; Singhal, Mudita; Stephan, Eric G.; Trease, Lynn L.; Gracio, Deborah K.
2004-06-22
Biologists and bioinformaticists face the ever-increasing challenge of managing large datasets queried from diverse data sources. Genomics and proteomics databases such as the National Center for Biotechnology Information (NCBI), Kyoto Encyclopedia of Genes and Genomes (KEGG), and the European Molecular Biology Laboratory (EMBL) are becoming the standard biological data department stores that biologists visit on a regular basis to obtain the supplies necessary for conducting their research. However, much of the data that biologists retrieve from these databases needs to be further managed and organized in a meaningful way so that the researcher can focus on the problem that they are trying to investigate and share their data and findings with other researchers. We are working towards developing a problem-solving environment called the Computational Cell Environment (CCE) that provides connectivity to these diverse data stores and provides data retrieval, management, and analysis through all aspects of biological study. In this paper we discuss the system and database design of CCE. We also outline a few problems encountered at various stages of its development and the design decisions taken to resolve them.
On the look-up tables for the critical heat flux in tubes (history and problems)
Kirillov, P.L.; Smogalev, I.P.
1995-09-01
The complication of the critical heat flux (CHF) problem for boiling in channels is caused by the large number of variable factors and the variety of two-phase flows. The existence of several hundred correlations for the prediction of CHF demonstrates the unsatisfactory state of this problem. The phenomenological CHF models can provide only qualitative predictions of CHF, primarily in annular-dispersed flow. The CHF look-up tables, which cover the results of numerous experiments, have received more recognition in the last 15 years. These tables are based on the statistical averaging of CHF values for each range of pressure, mass flux, and quality. The CHF values for regions where no experimental data are available are obtained by extrapolation. The correction of these tables to account for the diameter effect is a complicated problem. There are ranges of conditions where simple correlations cannot produce reliable results; therefore, the diameter effect on CHF needs additional study. The modification of look-up table data for CHF in tubes to predict CHF in rod bundles must include a method to take into account the nonuniformity of quality over a rod bundle cross section.
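Mechanically, querying a CHF look-up table amounts to linear interpolation of the tabulated values along each of the pressure, mass-flux, and quality axes. The one-dimensional slice below uses fabricated numbers with no physical meaning; it only illustrates the table mechanics.

```python
from bisect import bisect_right

# Illustrative sketch of how a CHF look-up table is queried: tabulated CHF
# values on a (pressure, mass flux, quality) grid are interpolated linearly
# in each coordinate. The values here are made up for demonstration.

def interp1(grid, values, x):
    """Piecewise-linear interpolation of tabulated values on a sorted grid."""
    j = min(max(bisect_right(grid, x), 1), len(grid) - 1)
    x0, x1 = grid[j - 1], grid[j]
    t = (x - x0) / (x1 - x0)
    return (1.0 - t) * values[j - 1] + t * values[j]

quality_grid = [0.0, 0.5, 1.0]   # vapor quality axis of one table slice
chf_row = [4.0, 2.5, 1.0]        # made-up CHF values, MW/m^2

print(interp1(quality_grid, chf_row, 0.25))  # 3.25, midway between 4.0 and 2.5
```

A full table query repeats this interpolation along the pressure and mass-flux axes; the diameter correction discussed above is then applied as a separate multiplicative factor.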
Fradkin, Eduardo; Maldacena, Juan; Chatterjee, Lali; Davenport, James W
2015-02-02
On February 2, 2015, the Offices of High Energy Physics (HEP) and Basic Energy Sciences (BES) convened a Round Table discussion among a group of physicists on ‘Common Problems in Condensed Matter and High Energy Physics’. This was motivated by the realization that both fields deal with quantum many-body problems, share many of the same challenges, use quantum field theoretical approaches, and have productively interacted in the past. The meeting brought together physicists with intersecting interests to explore recent developments and identify possible areas of collaboration. ... Several topics were identified as offering great opportunity for discovery and advancement in both condensed matter physics and particle physics research. These included topological phases of matter, the use of entanglement as a tool to study nontrivial quantum systems in condensed matter and gravity, the gauge-gravity duality, non-Fermi liquids, the interplay of transport and anomalies, and strongly interacting disordered systems. Many of the condensed matter problems are realizable in laboratory experiments, where new methods beyond the usual quasi-particle approximation are needed to explain the observed exotic and anomalous results. Tools and techniques such as lattice gauge theories, numerical simulations of many-body systems, and tensor networks are seen as valuable to both communities and will likely benefit from collaborative development.
Chickamauga Hydro Unit 3: History of problems, application of new technology and corrective actions
Miller, L.J. III; Thompson, D.W.
1995-12-31
Chickamauga Unit 3 was placed in commercial operation in 1940 and has been in operation for over fifty years. During the history of the dam, concrete growth has been the source of alignment problems with all of the turbines and generators. This problem has resulted in difficulty in maintaining the minimum clearance between the rotating and stationary components of the unit. Disassembly of the units has been necessary to restore these minimum clearances. Over the years several potentially damaging problems have plagued this unit. In November of 1992 a Rotor Mounted Scanner (RMS), manufactured by MCM Enterprise Limited of Bellevue, Washington, was installed on this unit. The use of state-of-the-art technology has provided information which allowed operators to prevent an in-service failure when the air gap became dangerously small. Adjustments were made in the operation of the unit to minimize the temperature cycles. This change allowed the continued operation of the unit for an additional seven months to a planned outage. The turbine was scheduled to be replaced due to worn bushings in the trunnion of the Kaplan-type turbine. The information from the RMS was also used to formulate corrective actions that were taken during the planned outage. The findings made during the outage and corrective actions for continued dependable service will be discussed.
Issue Paper Potential Water Availability Problems Associated with Geothermal Energy Operations
1982-02-19
The report is the first to study and discuss the effect of water supply problems on geothermal development. Geothermal energy resources have the potential of making a significant contribution to the U.S. energy supply situation, especially at the regional and local levels where the resources are located. A significant issue of concern is the availability and cost of water for use in a geothermal power operation, primarily because geothermal power plants require large quantities of water for cooling, sludge handling and the operation of environmental control systems. On a per unit basis, geothermal power plants, because of their inherent high heat rejection rates, have cooling requirements several times greater than the conventional fossil fuel plants and therefore the supply of water is a critical factor in the planning, designing, and siting of geothermal power plants. However, no studies have been specifically performed to identify the water requirements of geothermal power plants, the underlying causes of water availability problems, and available techniques to alleviate some of these problems. There is no cost data included in the report. The report includes some descriptions of known geothermal areas. [DJE-2005]
Backyard waste management - problems and benefits of individuals managing their solid waste at home
Whalen, M.
1995-05-01
The problems and benefits of individuals managing their solid wastes at home are surveyed. The survey indicates that as the population rises, people tend to burn only the combustible portions of their waste. Some communities have limited ordinances that ban the burning of raw garbage, but other municipalities allow residents to burn all of their wastestream, even though some materials are not combustible and cannot be burned. Potential environmental effects involve both the ash residue and the air emissions. While selective burning can reduce some of the environmental hazards, these would probably be only marginally less than the impacts of burning everything. The study clearly indicates that the environmental problems of burn barrels are not insignificant. However, the attitudes and motivations of those who burn waste will have to be addressed by the communities that attempt, or should attempt, to control this problem. These motivations include avoidance of waste collection costs, the availability of trash cartage services, and habit. Habit is probably as strong a motivation as cost avoidance and ease of collection combined. Residents have often burned trash for several generations and regard the practice as a "god-given right."
Test Problem: Tilted Rayleigh-Taylor for 2-D Mixing Studies
Andrews, Malcolm J.; Livescu, Daniel; Youngs, David L.
2012-08-14
The 'tilted-rig' test problem originates from a series of experiments (Smeeton & Youngs, 1987; Youngs, 1989) performed at AWE in the late 1980s, which followed from the 'rocket-rig' experiments (Burrows et al., 1984; Read & Youngs, 1983) and exploratory experiments performed at Imperial College (Andrews, 1986; Andrews and Spalding, 1990). A schematic of the experiment is shown in Figure 1; the apparatus comprises a tank filled with light fluid above heavy, which is then 'tilted' on one side, thus presenting an 'angled interface' to the acceleration history due to the rockets. Details of the configuration given in the next chapter include the fluids, dimensions, and other details necessary to simulate the experiment. Figure 2 shows results from two experiments: Case 110 (which is the source for this test problem), with an Atwood number of 0.5, and Case 115 (a secondary source described in Appendix B), with an Atwood number of 0.9. Inspection of the photograph in Figure 2 (the main experimental diagnostic) for Case 110 reveals two main areas of mix development: 1) a large-scale overturning motion that produces a rising plume (spike) on the left and a falling plume (bubble) on the right, which are almost symmetric; and 2) a Rayleigh-Taylor driven central mixing region that has a large-scale rotation associated with the rising and falling plumes, and also experiences lateral strain due to stretching of the interface by the plumes, and shear across the interface due to the upper fluid moving downward and to the right and the lower fluid moving upward and to the left. Case 115 is similar but differs by a much larger Atwood number of 0.9 that drives a strong asymmetry between the heavy spike penetration on the left side and the light bubble penetration on the right side. Case 110 is chosen as the source for the present test problem because the fluids have low surface tension (unlike Case 115) due to the addition of a surfactant, the asymmetry is small (no need for fine grids for the spike), and extensive experimental data are available.
Thigpen, L.; Peterson, J.C.
1983-08-01
This report provides instructions on the use of the DYNALK computer program to generate boundary conditions for a soil island used in soil-structure interaction problems. DYNALK converts temporal motions from 2-D TENSOR calculations into appropriate three-dimensional boundary conditions for a DYNA3D soil-structure interaction problem. The program is operational on the CRAY-1 computer.
Bazalii, B V; Degtyarev, S P
2013-07-31
An elliptic boundary-value problem for second-order equations with nonnegative characteristic form is investigated in the situation when there is a weak degeneracy on the boundary of the domain. A priori estimates are obtained for solutions and the problem is proved to be solvable in some weighted Hölder spaces. Bibliography: 18 titles.
Causes of Indoor Air Quality Problems in Schools: Summary of Scientific Research
Bayer, C.W.
2001-02-22
In the modern urban setting, most individuals spend about 80% of their time indoors and are therefore exposed to the indoor environment to a much greater extent than to the outdoors (Lebowitz 1992). Concomitant with this increased habitation in urban buildings, there have been numerous reports of adverse health effects related to indoor air quality (IAQ) (sick buildings). Most of these buildings were built in the last two decades and were constructed to be energy-efficient. The quality of air in the indoor environment can be altered by a number of factors: release of volatile compounds from furnishings, floor and wall coverings, and other finishing materials or machinery; inadequate ventilation; poor temperature and humidity control; re-entrainment of outdoor volatile organic compounds (VOCs); and the contamination of the indoor environment by microbes (particularly fungi). Armstrong Laboratory (1992) found that the three most frequent causes of IAQ problems are (1) inadequate design and/or maintenance of the heating, ventilation, and air-conditioning (HVAC) system, (2) a shortage of fresh air, and (3) lack of humidity control. A similar study by the National Institute for Occupational Safety and Health (NIOSH 1989) recognized inadequate ventilation as the most frequent source of IAQ problems in the work environment (52% of the time). Poor IAQ due to microbial contamination can be the result of the complex interactions of physical, chemical, and biological factors. Harmful fungal populations, once established in the HVAC system or occupied space of a modern building, may episodically produce or intensify what is known as sick building syndrome (SBS) (Cummings and Withers 1998). Indeed, SBS caused by fungi may be more enduring and recalcitrant to treatment than SBS from multiple chemical exposures (Andrae 1988). An understanding of the microbial ecology of the indoor environment is crucial to ultimately resolving many IAQ problems. The incidence of SBS related to multiple
Studies in nonlinear problems of energy. Progress report, January 1, 1992--December 31, 1992
Matkowsky, B.J.
1992-07-01
Emphasis has been on combustion and flame propagation. The research program covered modeling, analysis, and computation of combustion phenomena, with emphasis on the transition from laminar to turbulent combustion. Nonlinear dynamics and pattern formation were investigated in the transition. Stability of combustion waves and transitions to complex waves are described. Combustion waves possess large activation energies, so that chemical reactions are significant only in thin layers, or reaction zones. In the limit of infinite activation energy, the zones shrink to moving surfaces (fronts), which must be found during the analysis, so that the problems are moving free boundary problems. The analytical studies are carried out for the limiting case with fronts, while the numerical studies are carried out for finite, though large, activation energy. Accurate resolution of the solution in the reaction zones is essential; otherwise false predictions of the dynamics are possible. Since the reaction zones move, adaptive pseudo-spectral methods were developed. The approach is based on a synergism of analytical and computational methods. The numerical computations build on and extend the analytical information. Furthermore, analytical solutions serve as benchmarks for testing the accuracy of the computation. Finally, ideas from analysis (singular perturbation theory) have induced new approaches to computations, and the computational results suggest new analyses to be considered. Among the recent interesting results was spatio-temporal chaos in combustion. One goal is the extension of the adaptive pseudo-spectral methods to adaptive domain decomposition methods. Efforts have begun to develop such methods for problems with multiple reaction zones, corresponding to problems with more complex, and more realistic, chemistry. Other topics included stochastics, oscillators, hysteretic Josephson junctions, the DC SQUID, Markov jumps, the laser with saturable absorber, chemical physics, Brownian movement, combustion synthesis, etc.
Everett, K.R.
1994-12-31
Ecological problems in many regions on Earth are the result of increasing technological pressure on the environment. These problems concern many of us and cause mankind to unite in order to search for means to protect the environment. Scientists especially are responsible for the protection of the biosphere. The objective of this conference was to discuss the results of studies on the present condition of the environment in the Far North, where industrial pressure is increasing. The participants of this conference also suggested various necessary measures for the protection of the region and restoration of its disturbed sites. The specific structural characteristics of the environment of the Far North, tundra and northern taiga, cause its fragility and vulnerability to anthropogenic impact. The destruction of the thin, weak layer of soil and vegetation cover changes the thermal balance and thus causes the development of erosion processes, which in turn increase the zone of direct technogenic destruction. Self-restoration processes in this harsh climate are usually slow. The preservation of ecological integrity in the Far North is essential for the stability of the biosphere of the planet. The specifics of the natural conditions must be taken into account so that man will be able to develop means of intensive agro-technology that can speed up the restoration of the biocenosis in the damaged areas. The extended abstracts of the conference reports that constitute this volume contain both theoretical discussions of problems of recultivation and accounts of experimental studies and applied explorations.
Domain decomposition based iterative methods for nonlinear elliptic finite element problems
Cai, X.C.
1994-12-31
The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
Kim, D.; Ghanem, R.
1994-12-31
A multigrid solution technique for solving a materially nonlinear problem with the finite element method in a visual programming environment is discussed. The nonlinear equation of equilibrium is linearized to incremental form using the Newton-Raphson technique; a multigrid solution technique is then used to solve the linear equations at each Newton-Raphson step. In the process, adaptive mesh refinement, which is based on the bisection of a pair of triangles, is used to form the grid hierarchy for multigrid iteration. The solution process is implemented in a visual programming environment with distributed computing capability, which enables more intuitive understanding of the solution process and more effective use of resources.
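The linearize-then-solve loop described in this abstract can be sketched in miniature. The following is a hypothetical scalar illustration, not the authors' finite element implementation: Newton-Raphson linearizes the nonlinear equilibrium equation, and a linear solve (which in the paper's setting would be a multigrid cycle over the adaptive grid hierarchy) is performed at each step. The spring stiffness model k(u) and all coefficients below are invented for illustration.

```python
def newton(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: linearize r(u)=0 and solve J*du = -r each step."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            break
        # In an FE code this scalar division is a linear system solve,
        # performed in the paper's approach by multigrid iteration.
        u -= r / jacobian(u)
    return u

# Illustrative material-nonlinear "spring": k(u) = k0*(1 + beta*u**2), solve k(u)*u = f
k0, beta, f = 100.0, 0.05, 250.0
res = lambda u: k0 * u * (1 + beta * u**2) - f
jac = lambda u: k0 * (1 + 3 * beta * u**2)   # tangent stiffness dr/du
u = newton(res, jac, u0=1.0)
```

Near the solution the iteration converges quadratically, which is why an accurate tangent (Jacobian) matters even when the inner linear solves are approximate.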
Asymptotic solution of light transport problems in optically thick luminescent media
?ahin-Biryol, Derya Ilan, Boaz
2014-06-15
We study light transport in optically thick luminescent random media. Using radiative transport theory for luminescent media and applying asymptotic and computational methods, a corrected diffusion approximation is derived with the associated boundary conditions and boundary layer solution. The accuracy of this approach is verified for a plane-parallel slab problem. In particular, the reduced system models accurately the effect of reabsorption. The impacts of varying the Stokes shift and using experimentally measured luminescence data are explored in detail. The results of this study have application to the design of luminescent solar concentrators, fluorescence medical imaging, and optical cooling using anti-Stokes fluorescence.
Lee, H.; Lee, D.
2013-07-01
This paper presents a new hybrid method of continuous energy Monte Carlo (MC) and multi-group Method of Characteristics (MOC). For a continuous energy neutron transport analysis, the hybrid method employs continuous energy MC for the resonance energy range, to treat the resonances accurately, and multi-group MOC for the high and low energy ranges, for efficiency. A numerical test with a model problem confirms that the hybrid method can produce results consistent with both the reference continuous energy MC-only calculation and the multi-group MOC-only calculation. (authors)
Spherical cavity-expansion forcing function in PRONTO 3D for application to penetration problems
Warren, T.L.; Tabbara, M.R.
1997-05-01
In certain penetration events the primary mode of deformation of the target can be approximated by known analytical expressions. In the context of an analysis code, this approximation eliminates the need for modeling the target as well as the need for a contact algorithm. This technique substantially reduces execution time. In this spirit, a forcing function which is derived from a spherical-cavity expansion analysis has been implemented in PRONTO 3D. This implementation is capable of computing the structural and component responses of a projectile due to three dimensional penetration events. Sample problems demonstrate good agreement with experimental and analytical results.
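The idea of a cavity-expansion forcing function is that the target is never meshed: the normal stress on the projectile surface is given by an analytical law, commonly a polynomial in the local normal velocity, and integrated over the nose to obtain the structural loading. The sketch below assumes a quadratic stress law and a hemispherical nose; the coefficients A, B, C are illustrative placeholders, not values from the PRONTO 3D implementation.

```python
import math

def cavity_expansion_stress(v_n, A=350e6, B=0.0, C=1.2e6):
    # Normal stress (Pa) on the target surface from a spherical cavity-expansion
    # law sigma_n = A + B*v_n + C*v_n**2; A, B, C are illustrative fit constants.
    return A + B * v_n + C * v_n**2 if v_n > 0 else 0.0

def axial_force_hemisphere(V, radius, n=200):
    # Integrate sigma_n over a hemispherical nose moving axially at speed V.
    # The surface-normal velocity at polar angle theta is V*cos(theta).
    F = 0.0
    dtheta = (math.pi / 2) / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        v_n = V * math.cos(theta)
        sigma = cavity_expansion_stress(v_n)
        dA = 2 * math.pi * radius**2 * math.sin(theta) * dtheta  # ring area element
        F += sigma * math.cos(theta) * dA                        # axial component
    return F
```

Because the target response is closed-form, no target mesh or contact search is needed, which is the source of the execution-time savings the abstract describes.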
Two-dimensional lift-up problem for a rigid porous bed
Chang, Y.; Huang, L. H.; Yang, F. P. Y.
2015-05-15
The present study analytically reinvestigates the two-dimensional lift-up problem for a rigid porous bed that was studied by Mei, Yeung, and Liu [“Lifting of a large object from a porous seabed,” J. Fluid Mech. 152, 203 (1985)]. Mei, Yeung, and Liu proposed a model that treats the bed as a rigid porous medium and performed relevant experiments. In their model, they assumed the gap flow comes from the periphery of the gap, and there is a shear layer in the porous medium; the flow in the gap is described by adhesion approximation [D. J. Acheson, Elementary Fluid Dynamics (Clarendon, Oxford, 1990), pp. 243-245.] and the pore flow by Darcy’s law, and the slip-flow condition proposed by Beavers and Joseph [“Boundary conditions at a naturally permeable wall,” J. Fluid Mech. 30, 197 (1967)] is applied to the bed interface. In this problem, however, the gap flow initially mainly comes from the porous bed, and the shear layer may not exist. Although later the shear effect becomes important, the empirical slip-flow condition might not physically respond to the shear effect, and the existence of the vertical velocity affects the situation so greatly that the slip-flow condition might not be appropriate. In contrast, the present study proposes a more general model for the problem, applying Stokes flow to the gap, the Brinkman equation to the porous medium, and Song and Huang’s [“Laminar poroelastic media flow,” J. Eng. Mech. 126, 358 (2000)] complete interfacial conditions to the bed interface. The exact solution to the problem is found and fits Mei’s experiments well. The breakout phenomenon is examined for different soil beds, mechanics that cannot be illustrated by Mei’s model are revealed, and the theoretical breakout times obtained using Mei’s model and our model are compared. The results show that the proposed model is more compatible with physics and provides results that are more precise.
Analysis of forward and inverse problems in chemical dynamics and spectroscopy
Rabitz, H.
1993-12-01
The overall scope of this research concerns the development and application of forward and inverse analysis tools for problems in chemical dynamics and chemical kinetics. The chemical dynamics work is specifically associated with relating features in potential surfaces and resultant dynamical behavior. The analogous inverse research aims to provide stable algorithms for extracting potential surfaces from laboratory data. In the case of chemical kinetics, the focus is on the development of systematic means to reduce the complexity of chemical kinetic models. Recent progress in these directions is summarized below.
Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas
2014-12-10
One of the most important factors in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas, corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities, by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms, and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
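A genetic algorithm for maintenance planning of the kind the abstract mentions can be sketched as follows. This is a toy illustration, not the authors' framework: the chromosome encodes whether PM is performed in each period, and the cost function (here an invented one trading a fixed PM cost against a CM risk that grows with the gap since the last PM) stands in for the discrete event simulation that would evaluate a real plan.

```python
import random

def ga_maintenance(cost, n_periods=12, pop=30, gens=60, seed=1):
    # Tiny elitist genetic algorithm: chromosome[i] = 1 means do PM in period i;
    # 'cost' maps a schedule to a total cost (a black box, e.g. a simulation).
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_periods)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)             # ascending: cheapest plans first
        survivors = population[: pop // 2]    # elitism keeps the best half
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_periods)
            child = a[:cut] + b[cut:]         # one-point crossover
            i = rng.randrange(n_periods)
            child[i] ^= rng.random() < 0.1    # occasional bit-flip mutation
            children.append(child)
        population = survivors + children
    return min(population, key=cost)

# Invented cost model: each PM costs 1; CM risk grows as gap**2/8 since last PM.
def example_cost(s):
    total, gap = 0.0, 0
    for bit in s:
        gap = 0 if bit else gap + 1
        total += bit * 1.0 + gap * gap / 8.0
    return total

best = ga_maintenance(example_cost)
```

The found schedule spaces PM actions out rather than doing PM every period (cost 12.0 for the all-ones plan) or never (cost 81.25 for the all-zeros plan).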
Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem
Shemon, E. R.; Grudzinski, J. J.; Lee, C. H.; Thomas, J. W.; Yu, Y. Q.
2015-12-21
This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework which allows feedback from neutronics, fluid mechanics, and mechanical deformation in a compatible format.
1998-04-01
The overarching goal of the Groundwater Grand Challenge component of the Partnership in Computational Science (PICS) was to develop and establish the massively parallel approach for the description of groundwater flow and transport and to address the problem of uncertainties in the data and its interpretation. This necessitated the development of innovative algorithms and the implementation of massively parallel computational tools to provide a suite of simulators for groundwater flow and transport in heterogeneous media. This report summarizes the activities and deliverables of the Groundwater Grand Challenge project funded through the High Performance Computing grand challenge program of the Department of Energy from 1995 through 1997.
A NEPA compliance strategy plan for providing programmatic coverage to agency problems
Eccleston, C.H.
1994-04-01
The National Environmental Policy Act (NEPA) of 1969 requires that all federal actions be reviewed before a final decision is made to pursue a proposed action or one of its reasonable alternatives. The NEPA process is expected to begin early in the planning process. This paper discusses an approach for providing efficient and comprehensive NEPA coverage to large-scale programs. Particular emphasis has been given to identifying bottlenecks and developing workarounds to such problems. Specifically, the strategy is designed to meet four specific goals: (1) provide comprehensive coverage, (2) reduce compliance cost/time, (3) prevent project delays, and (4) reduce document obsolescence.
Gribok, Andrei V.; Attieh, Ibrahim K.; Hines, J. Wesley; Uhrig, Robert E.
2001-04-15
Inferential sensing is a method that can be used to evaluate parameters of a physical system based on a set of measurements related to these parameters. The most common method of inferential sensing uses mathematical models to infer a parameter value from correlated sensor values. However, since inferential sensing is an inverse problem, it can produce inconsistent results due to minor perturbations in the data. This research shows that regularization can be used in inferential sensing to produce consistent results. Data from Florida Power Corporation's Crystal River nuclear power plant (NPP) are used to give an important example of monitoring NPP feedwater flow rate.
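The instability and its regularized cure described in this abstract can be demonstrated with a small least-squares sketch. This is an illustrative example, not the plant model from the paper: two nearly collinear "sensor" channels make the normal equations ill-conditioned, so the unregularized inverse solution has huge, physically meaningless weights, while a Tikhonov (ridge) penalty stabilizes the estimate. The data values and the penalty weight lam are invented for illustration.

```python
def ridge_2d(X, y, lam):
    # Closed-form ridge solution w = (X^T X + lam*I)^(-1) X^T y
    # for a two-feature design matrix, using an explicit 2x2 inverse.
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    g0 = sum(x[0] * yi for x, yi in zip(X, y))
    g1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Two nearly collinear sensor channels -> an ill-posed inverse problem
X = [(1.0, 1.001), (2.0, 1.999), (3.0, 3.002), (4.0, 3.998)]
y = [2.1, 3.9, 6.2, 7.9]
w_ols   = ridge_2d(X, y, lam=0.0)   # unstable: large opposite-sign weights
w_ridge = ridge_2d(X, y, lam=0.1)   # regularized: weights near (1, 1)
```

With lam = 0 the tiny determinant amplifies the noise into weights of magnitude near 80; a modest penalty returns the sensible estimate that each channel contributes about equally.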
State-of-the-art review of materials-related problems in flue gas desulfurization systems
Maiya, P. S.
1980-10-01
This report characterizes the chemical and mechanical environments to which the structural components used in flue-gas desulfurization (FGD) are exposed. It summarizes the necessary background information pertinent to various FGD processes currently in use, with particular emphasis on lime/limestone scrubbing technology, so that the materials problems and processing variables encountered in FGD systems can be better defined and appreciated. The report also describes the materials currently used and their performance to date in existing wet scrubbers. There is little doubt that with more extensive use of coal and flue-gas scrubbers by utilities and other segments of private industry, a better understanding of the material failure mechanisms, performance limitations, and potential problem areas is required for the design of more reliable and cost-effective FGD systems. To meet the above objectives, a materials evaluation program is proposed. The important experimental variables and the number of tests required to evaluate a given material are discussed. 55 references, 9 figures, 6 tables.
Mimetic finite difference method for the stokes problem on polygonal meshes
Lipnikov, K; Beirao Da Veiga, L; Gyrya, V; Manzini, G
2009-01-01
Various approaches to extend finite element methods to non-traditional elements (pyramids, polyhedra, etc.) have been developed over the last decade. Building basis functions for such elements is a challenging task and may require extensive geometry analysis. The mimetic finite difference (MFD) method has many similarities with low-order finite element methods. Both methods try to preserve fundamental properties of physical and mathematical models. The essential difference is that the MFD method uses only the surface representation of discrete unknowns to build stiffness and mass matrices. Since no extension inside the mesh element is required, practical implementation of the MFD method is simple for polygonal meshes that may include degenerate and non-convex elements. In this article, we develop a MFD method for the Stokes problem on arbitrary polygonal meshes. The method is constructed for tensor coefficients, which will allow it to be applied to the linear elasticity problem. The numerical experiments show second-order convergence for the velocity variable and first-order convergence for the pressure.
Efficient multilevel finite-element approach to three-dimensional phase-change problems
Lee, R.T.; Chiou, W.Y.
1997-01-01
A finite-element (FE) formulation suitable for a multigrid algorithm in solving three-dimensional phase-change problems is described. This formulation is based on the averaged specific heat model. The algorithm has proved to be very useful for large problems, where the computational complexity can be reduced from O(n^3) to O(n ln n) with high storage efficiency on a personal computer. To evaluate the accuracy of the present algorithm, the numerical results for a larger slenderness ratio are compared with previous analytical solutions. Results show that the numerical solutions at the symmetric surface of the long axis are in very good agreement with the two-dimensional exact solutions for a slenderness ratio of 5. The maximal and average absolute errors are insensitive to the magnitudes of the time steps and freezing-temperature intervals when the time step is less than 0.01 s. Consequently, a larger time step can be used to save computing time and retain the same order of accuracy. This algorithm is also applicable to pure metals and alloys that exhibit a very large or small (or zero) freezing-temperature interval.
Pre-test CFD Calculations for a Bypass Flow Standard Problem
Rich Johnson
2011-11-01
The bypass flow in a prismatic high temperature gas-cooled reactor (HTGR) is the flow that occurs between adjacent graphite blocks. Gaps exist between blocks due to variances in their manufacture and installation and because of the expansion and shrinkage of the blocks from heating and irradiation. Although the temperature of fuel compacts and graphite is sensitive to the presence of bypass flow, there is great uncertainty in the level and effects of the bypass flow. The Next Generation Nuclear Plant (NGNP) program at the Idaho National Laboratory has undertaken to produce experimental data of isothermal bypass flow between three adjacent graphite blocks. These data are intended to provide validation for computational fluid dynamic (CFD) analyses of the bypass flow. Such validation data sets are called Standard Problems in the nuclear safety analysis field. Details of the experimental apparatus as well as several pre-test calculations of the bypass flow are provided. Pre-test calculations are useful in examining the nature of the flow and to see if there are any problems associated with the flow and its measurement. The apparatus is designed to be able to provide three different gap widths in the vertical direction (the direction of the normal coolant flow) and two gap widths in the horizontal direction. It is expected that the vertical bypass flow will range from laminar to transitional to turbulent flow for the different gap widths that will be available.
Parthan, Shantha R.; Milke, Mark W.; Wilson, David C.; Cocks, John H.
2012-03-15
Highlights: We review cost estimation approaches for solid waste management; the unit cost method and benchmarking techniques are used in industrialising regions (IR); variety in scope, quality and stakeholders makes cost estimation challenging in IR; integrating waste flow and cost models using cost functions improves cost planning. - Abstract: The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories - the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approach to estimate their SWM costs. The models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.
Meyer, M.A.; Booker, J.M.
1990-01-01
Expert opinion is frequently used in probabilistic safety assessment (PSA), particularly in estimating low-probability events. In this paper, we discuss some of the common problems encountered in eliciting and analyzing expert opinion data and offer solutions or recommendations. The problems are: experts are not naturally Bayesian - people fail to update their existing information to account for new information as it becomes available, as would be predicted by the Bayesian philosophy; experts cannot be fully calibrated - to calibrate experts, the feedback from the known quantities must be immediate, frequent, and specific to the task; experts are limited to about 7 ± 2 in the number of things that they can mentally juggle at a time; data gatherers and analysts can introduce bias by unintentionally altering the expert's thinking or answers; the level of detail of the data, or granularity, can affect the analyses; and the conditioning effect poses difficulties in gathering and analyzing the expert data. The data that the expert gives can be conditioned on a variety of factors that can affect the analysis and the interpretation of the results. 31 refs.
Kaushik, D. K.; Keyes, D. E.; Smith, B. F.
1999-02-24
We review and extend to the compressible regime an earlier parallelization of an implicit incompressible unstructured Euler code [9], and solve for flow over an M6 wing in subsonic, transonic, and supersonic regimes. While the parallelization philosophy of the compressible case is identical to the incompressible, we focus here on the nonlinear and linear convergence rates, which vary in different physical regimes, and on comparing the performance of currently important computational platforms. Multiple-scale problems should be marched out at desired accuracy limits, and not held hostage to often more stringent explicit stability limits. In the context of inviscid aerodynamics, this means evolving transient computations on the scale of the convective transit time, rather than the acoustic transit time, or solving steady-state problems with local CFL numbers approaching infinity. Whether time-accurate or steady, we employ Newton's method on each (pseudo-) timestep. The coupling of analysis with design in aerodynamic practice is another motivation for implicitness. Design processes that make use of sensitivity derivatives and the Hessian matrix require operations with the Jacobian matrix of the state constraints (i.e., of the governing PDE system); if the Jacobian is available for design, it may be employed with advantage in a nonlinearly implicit analysis, as well.
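The idea of "local CFL numbers approaching infinity" for steady-state problems can be illustrated with pseudo-transient continuation. The scalar sketch below uses one common heuristic, switched evolution relaxation (SER), in which the pseudo-timestep grows as the nonlinear residual falls, so the update tends toward a pure Newton step; it is an illustration of the general technique, not the authors' code, and the toy equation g(u) = u**3 - 8 is invented.

```python
def ptc_solve(g, dg, u0, cfl0=1.0, tol=1e-10, max_iter=100):
    # Pseudo-transient continuation: solve (1/dt + g'(u)) du = -g(u).
    # As the pseudo-timestep dt (tracked here as 'cfl') grows without bound,
    # the update approaches Newton's method.
    u, cfl = u0, cfl0
    r_prev = abs(g(u))
    for _ in range(max_iter):
        r = abs(g(u))
        if r < tol:
            break
        du = -g(u) / (1.0 / cfl + dg(u))
        u += du
        cfl = min(cfl * r_prev / max(r, 1e-300), 1e8)  # SER: grow CFL as residual drops
        r_prev = r
    return u

# Toy steady-state problem: g(u) = u**3 - 8 has the root u = 2
u = ptc_solve(lambda u: u**3 - 8.0, lambda u: 3 * u**2, u0=5.0)
```

Early iterations take damped (small-CFL) steps that keep a poor initial guess from diverging; once the residual shrinks, the CFL explodes and convergence becomes Newton-like.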
OTEC cold water pipe design for problems caused by vortex-excited oscillations
Griffin, O. M.
1980-03-14
Vortex-excited oscillations of marine structures result in reduced fatigue life, large hydrodynamic forces and induced stresses, and sometimes lead to structural damage and to destructive failures. The cold water pipe of an OTEC plant is nominally a bluff, flexible cylinder with a large aspect ratio (L/D = length/diameter), and is likely to be susceptible to resonant vortex-excited oscillations. The objective of this report is to survey recent results pertaining to the vortex-excited oscillations of structures in general and to consider the application of these findings to the design of the OTEC cold water pipe. Practical design calculations are given as examples throughout the various sections of the report. This report is limited in scope to the problems of vortex shedding from bluff, flexible structures in steady currents and the resulting vortex-excited oscillations. The effects of flow non-uniformities, surface roughness of the cylinder, and inclination to the incident flow are considered in addition to the case of a smooth cylinder in a uniform stream. Emphasis is placed upon design procedures, hydrodynamic coefficients applicable in practice, and the specification of structural response parameters relevant to the OTEC cold water pipe. There are important problems associated with the shedding of vortices from cylinders in waves and from the combined action of waves and currents, but these complex fluid/structure interactions are not considered in this report.
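The first screening step in a design calculation of the kind this report describes is the Strouhal relation: the vortex shedding frequency of a circular cylinder is f = St*U/D, with St near 0.2 in the subcritical Reynolds-number range, and resonant (lock-in) response is a risk when f falls close to a structural natural frequency. The sketch below is a minimal screening check; the +/-30% lock-in band is an illustrative assumption, not a value taken from this report.

```python
def shedding_frequency(U, D, strouhal=0.2):
    # Vortex shedding frequency (Hz) from the Strouhal relation f = St*U/D,
    # with U the current speed (m/s) and D the cylinder diameter (m).
    return strouhal * U / D

def lock_in_risk(U, D, f_natural, band=0.3):
    # Flag resonance risk when the shedding frequency falls within an
    # assumed +/-30% band around the structural natural frequency.
    f = shedding_frequency(U, D)
    return abs(f - f_natural) <= band * f_natural
```

For a pipe of OTEC-like diameter, even modest currents place the shedding frequency in the sub-hertz range, which is why the low natural frequencies of a long flexible pipe make lock-in a design concern.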
Dick, R.D.; Fourney, W.L.; Young, C.
1985-01-01
During 1981 and 1982, an extensive oil shale fragmentation research program was conducted at the Anvil Points Mine near Rifle, Colorado. The primary goals were to investigate the factors involved in adequate fragmentation of oil shale and to evaluate the feasibility of using the modified in situ (MIS) retort method for recovery of oil from oil shale. The field test program included single-deck, single-borehole experiments to obtain basic fragmentation data; multiple-deck, multiple-borehole experiments to evaluate some practical aspects of developing an in situ retort; and the development of a variety of instrumentation techniques to diagnose the blast event. This paper discusses some explosive engineering problems encountered, such as electric cap performance in complex blasting patterns, explosive and stem performance in a variety of configurations from the simple to the complex, and the difficulties experienced when reversing the direction of throw of the oil shale in a subscale retort configuration. These problems need solutions before an adequate MIS retort can be created in a single-blast event and even before an experimental mini-retort can be formed. 6 references, 7 figures, 3 tables.
Hamilton, L.D.; Meinhold, A.F.; Baxter, S.L.; Holtzman, S.; Morris, S.C.; Pardi, R.; Rowe, M.D.; Sun, C.; Anspaugh, L.; Layton, D.
1993-03-01
Two important environmental problems at the USDOE Fernald Environmental Management Project (FEMP) facility in Fernald, Ohio were studied in this human health risk assessment. The problems studied were radon emissions from the K-65 waste silos and offsite contamination of ground water with uranium. Waste from the processing of pitchblende ore is stored in the K-65 silos at the FEMP. Radium-226 in the waste decays to radon gas, which escapes to the outside atmosphere. The concern is an increased lung cancer risk for nearby residents associated with radon exposure. Monitoring data and a Gaussian plume transport model were used to develop a source term and predict exposure and risk to fenceline residents, residents within 1 and 5 miles of the silos, and residents of Hamilton and Cincinnati, Ohio. Two release scenarios were studied: the routine release of radon from the silos and an accidental loss of integrity of one silo dome. Exposure parameters and risk factors were described as distributions. Risks associated with natural background radon concentrations were also estimated.
Coupled discrete element and finite volume solution of two classical soil mechanics problems
Chen, Feng; Drumm, Eric; Guiochon, Georges A
2011-01-01
One-dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results are compared with the analytical solutions. The two-phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow-DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.
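The Ergun [12] particle-fluid coupling mentioned in the abstract above can be sketched as follows. This is a hedged illustration: the 150 and 1.75 coefficients are the standard Ergun constants, but the function name and the example inputs (a water-like fluid through a sand-like bed) are invented here, not taken from the paper.

```python
# Minimal sketch of the Ergun (1952) packed-bed pressure-drop relation,
# often used to quantify fluid drag on a bed of particles in coupled
# FVM/DEM codes. Parameter values below are illustrative only.

def ergun_pressure_gradient(u, eps, d, mu, rho):
    """Pressure gradient (Pa/m) across a packed bed of spheres.

    u   -- superficial fluid velocity (m/s)
    eps -- porosity (fluid volume fraction)
    d   -- particle diameter (m)
    mu  -- fluid dynamic viscosity (Pa*s)
    rho -- fluid density (kg/m^3)
    """
    # viscous (Darcy-like) term, linear in velocity
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / (eps ** 3 * d ** 2) * u
    # inertial (Forchheimer-like) term, quadratic in velocity
    inertial = 1.75 * rho * (1.0 - eps) / (eps ** 3 * d) * u ** 2
    return viscous + inertial

# Example: slow water-like flow through a sand-like bed (invented numbers)
dp_dx = ergun_pressure_gradient(u=1e-3, eps=0.4, d=1e-3, mu=1e-3, rho=1000.0)
```

At this low velocity the viscous term dominates, which is the regime where the choice of fluid time step in a coupled code is least restrictive.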
Solving iTOUGH2 simulation and optimization problems using the PEST protocol
Finsterle, S.A.; Zhang, Y.
2011-02-01
The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.
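As a concrete illustration of the PEST-style files described above, a template file mirrors a model input file, with each adjustable parameter replaced by a delimited slot; the first line declares the marker character. The file content, the parameter names perm1/poro1, and the field widths below are hypothetical, while the ptf header and paired-delimiter markers follow the documented PEST conventions:

```
ptf @
permeability   @perm1        @
porosity       @poro1        @
```

A matching instruction file begins with a pif line (e.g. pif @) and contains directives such as l1 [obs1]10:24, which tell the optimization routines to read the model-calculated value for observation obs1 from columns 10 to 24 of the next output line.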
Extended theory of the Taylor problem in the plasmoid-unstable regime
Comisso, L.; Grasso, D.; Waelbroeck, F. L.
2015-04-15
A fundamental problem of forced magnetic reconnection has been solved taking into account the plasmoid instability of thin reconnecting current sheets. In this problem, the reconnection is driven by a small amplitude boundary perturbation in a tearing-stable slab plasma equilibrium. It is shown that the evolution of the magnetic reconnection process depends on the external source perturbation and the microscopic plasma parameters. Small perturbations lead to a slow nonlinear Rutherford evolution, whereas larger perturbations can lead to either a stable Sweet-Parker-like phase or a plasmoid phase. An expression for the threshold perturbation amplitude required to trigger the plasmoid phase is derived, as well as an analytical expression for the reconnection rate in the plasmoid-dominated regime. Visco-resistive magnetohydrodynamic simulations complement the analytical calculations. The plasmoid formation plays a crucial role in allowing fast reconnection in a magnetohydrodynamical plasma, and the presented results suggest that it may occur and have profound consequences even if the plasma is tearing-stable.
Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; Bhattacharjee, A.; Stanier, Adam; Daughton, William; Wang, Liang; Germaschewski, Kai
2015-11-05
As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large scale particle-in-cell simulations of island coalescence have shown that the time averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.
Inverse transport problem solvers based on regularized and compressive sensing techniques
Cheng, Y.; Cao, L.; Wu, H.; Zhang, H.
2012-07-01
Based on direct exposure measurements from flash radiographic images, a regularization-based method and a compressive sensing (CS)-based method for the inverse transport equation are presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. With a large number of measurements, a least-squares method is utilized to complete the reconstruction. Owing to the ill-posedness of the inverse problem, a regularization algorithm is employed: the Tikhonov method is applied with an appropriate posterior regularization parameter to obtain a meaningful solution. However, it is often very costly to obtain enough measurements. With limited measurements, the CS sparse reconstruction technique Orthogonal Matching Pursuit (OMP) is applied to obtain the sparse coefficients by solving an optimization problem. This paper constructs and takes the forward projection matrix, rather than a Gaussian random matrix, as the measurement matrix. In the CS-based algorithm, Fourier expansion and wavelet expansion are adopted to convert an underdetermined system to a well-posed system. Simulations and numerical results of the regularized method with an appropriate regularization parameter and of the CS-based method agree well with the reference values; furthermore, both methods avoid amplifying the noise. (authors)
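The OMP step named above can be sketched generically. This is a hedged, textbook version of Orthogonal Matching Pursuit run on a toy orthonormal dictionary, not the paper's forward projection matrix; all names and numbers are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x with y ~= A @ x via Orthogonal Matching Pursuit."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # greedily pick the dictionary column most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # orthogonal projection: re-fit all selected coefficients at once
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Tiny sanity check on an orthonormal dictionary (illustrative)
A = np.eye(5)
y = np.array([0.0, 2.0, 0.0, -1.0, 0.0])
x_hat = omp(A, y, 2)
```

With an identity dictionary the greedy atom selection reduces to picking the largest residual entries, so the 2-sparse signal is recovered exactly.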
SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy
Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT
2014-06-01
Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is being investigated as a neurosurgical intervention for oncological applications throughout the body in active post-market studies. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, mu_eff. A parameter reduction reduces the amount of optical parameter-space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue where closed-form Green's functions represent the exact solution. Third, the training data was temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) from seven different patients. Results: To three significant figures, the descriptive statistics for mu_eff were 1470 m^-1 mean, 1360 m^-1 median, 369 m^-1 standard deviation, 933 m^-1 minimum, and 2260 m^-1 maximum. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates an increased precision in the characterization of the optical parameters needed to plan MRgLITT procedures.
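The inverse-problem pattern in this abstract (fit an effective attenuation coefficient by gradient-based minimization of a data-model misfit) can be sketched in miniature. Hedged assumptions: the "physics model" here is a toy Beer-Lambert exponential, not the paper's Green's-function bioheat solution; a Gauss-Newton update stands in for the quasi-Newton search; and all numbers are illustrative.

```python
import numpy as np

# Synthetic "training data" from a toy exponential attenuation model.
r = np.linspace(0.001, 0.02, 50)      # radial sample positions (m)
mu_true = 1470.0                      # "true" coefficient (1/m), cf. the quoted mean
data = np.exp(-mu_true * r)           # noiseless synthetic measurements

mu = 1000.0                           # initial guess
for _ in range(50):
    model = np.exp(-mu * r)
    resid = model - data              # data-model misfit residuals
    jac = -r * model                  # d(model)/d(mu) for each sample
    mu -= (jac @ resid) / (jac @ jac) # Gauss-Newton step on the 1-D parameter
```

For a noiseless single-parameter fit like this, the iteration converges rapidly to the generating coefficient; real temperature data would of course leave a nonzero residual and model bias, as the abstract notes.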
Nash, Stephen G.
2013-11-11
The research focuses on the modeling and optimization of nanoporous materials. In the systems with hierarchical structure that we consider, the physics changes as the scale of the problem is reduced, and it can be important to account for physics at the fine level to obtain accurate approximations at coarser levels. For example, nanoporous materials hold promise for energy production and storage. A significant issue is the fabrication of channels within these materials to allow rapid diffusion through the material. One goal of our research is to apply optimization methods to the design of nanoporous materials. Such problems are large and challenging, with hierarchical structure that we believe can be exploited, and with a large range of important scales, down to atomistic. This requires research on large-scale optimization for systems that exhibit different physics at different scales, and the development of algorithms applicable to designing nanoporous materials for many important applications in energy production, storage, distribution, and use. Our research has two major thrusts. The first is hierarchical modeling: we develop and study hierarchical optimization models for nanoporous materials. The models have hierarchical structure and attempt to balance the conflicting aims of model fidelity and computational tractability. In addition, we analyze the general hierarchical model, as well as the specific application models, to determine their properties, particularly those properties that are relevant to the hierarchical optimization algorithms. The second thrust is to develop, analyze, and implement a class of hierarchical optimization algorithms and apply them to the hierarchical models we have developed. We adapted and extended the optimization-based multigrid algorithms of Lewis and Nash to the optimization models exemplified by the hierarchical optimization model. This class of multigrid algorithms has been shown to be a powerful tool for
The nonlinear characteristic scheme for X-Y geometry transport problems
Walters, W.F.; Wareing, T.A.; Marr, D.R.
1995-12-31
The Nonlinear Characteristic (NC) numerical scheme for solving the discrete ordinates form of the transport equation is derived for X-Y geometry. The NC scheme is based on the analytic solution of the discrete-ordinate transport equation in each mesh cell. The driving source for the transport equation is represented by a three-moment preserving, strictly positive, exponential distribution obtained using information theory methods. The analysis of two test problems demonstrates the superior behavior of the NC scheme as compared to other numerical schemes currently used to solve the transport equation. The NC scheme is found to be strictly positive and accurate on meshes where other methods yield either negative and/or remarkably inaccurate results.
Revenue and earnings performance masked continuing investor-owned utility problems
Lincicome, R.A.
1983-06-01
The 1982 increase in revenues and net income for the top 100 electric utilities is misleading because the figure is distorted by the allowance for funds used during construction (AFUDC), which overstates the real dollar strength of most investor-owned utilities. A random sampling of profit and loss statements shows that companies heavily involved in plant construction can have AFUDC over 100% of net income. The average is 50% of utility earnings, while cash dividends run 75% of earnings. The problem is short-term, however, and will diminish as construction is completed. A summary of utility performance presents earnings growth statistics, sales data and comparisons, financial statistics, and income statistics and comparisons. A summary financial table lists the 100 utilities in alphabetical order. 7 tables. (DCK)
Solving the Self-Interaction Problem in Kohn-Sham Density Functional Theory. Application to Atoms
Daene, M.; Gonis, A.; Nicholson, D. M.; Stocks, G. M.
2014-10-14
Previously, we proposed a computational methodology that addresses the elimination of the self-interaction error from the Kohn–Sham formulation of the density functional theory. We demonstrated how the exchange potential can be obtained, and presented results of calculations for atomic systems up to Kr carried out within a Cartesian coordinate system. In our paper, we provide complete details of this self-interaction free method formulated in spherical coordinates based on the explicit equidensity basis ansatz. We also prove analytically that derivatives obtained using this method satisfy the Virial theorem for spherical orbitals, where the problem can be reduced to one dimension. We present the results of calculations of ground-state energies of atomic systems throughout the periodic table carried out within the exchange-only mode.
MARS-KS code validation activity through the atlas domestic standard problem
Choi, K. Y.; Kim, Y. S.; Kang, K. H.; Park, H. S.; Cho, S.
2012-07-01
The 2nd Domestic Standard Problem (DSP-02) exercise using the ATLAS integral effect test data was executed to transfer the integral effect test data to domestic nuclear industries and to contribute to improving the safety analysis methodology for PWRs. A small break loss of coolant accident with a 6-inch break at the cold leg was determined as the target scenario by considering its technical importance and by incorporating interests from participants. Ten calculation results using the MARS-KS code were collected; major prediction results were described qualitatively, and code prediction accuracy was assessed quantitatively using the FFTBM. In addition, special code assessment activities were carried out to identify areas where model improvement is required in the MARS-KS code. The lessons from DSP-02 and recommendations to code developers are described in this paper. (authors)
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
Enhancements of branch and bound methods for the maximal constraint satisfaction problem
Wallace, R.J.
1996-12-31
Two methods are described for enhancing performance of branch and bound methods for overconstrained CSPs. These methods improve either the upper or lower bound, respectively, during search, so the two can be combined. Upper bounds are improved by using heuristic repair methods before search to find a good solution quickly, whose cost is used as the initial upper bound. The method for improving lower bounds is an extension of directed arc consistency preprocessing, used in conjunction with forward checking. After computing directed arc consistency counts, inferred counts are computed for all values based on minimum counts for values of adjacent variables that are later in the search order. This inference process can be iterated, so that counts are cascaded from the end to the beginning of the search order, to augment the initial counts. Improvements in time and effort are demonstrated for both techniques using random problems.
A hybrid approach to the neutron transport K-eigenvalue problem using NDA-based algorithms
Willert, J. A.; Kelley, C. T.; Knoll, D. A.; Park, H.
2013-07-01
In order to provide more physically accurate solutions to the neutron transport equation, it has become increasingly popular to use Monte Carlo simulation to model nuclear reactor dynamics. These Monte Carlo methods can be extremely expensive, so we turn to a class of methods known as hybrid methods, which combine known deterministic and stochastic techniques to solve the transport equation. In our work, we show that we can simulate the action of a transport sweep using a Monte Carlo simulation in order to solve the k-eigenvalue problem. We accelerate the solution using nonlinear diffusion acceleration (NDA) as in [1,2]. Our work extends the results in [1] to use Monte Carlo simulation as the high-order solver. (authors)
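The deterministic backbone of such k-eigenvalue solvers can be sketched as a power (source) iteration on L phi = (1/k) F phi. This is a hedged stand-in: the "transport sweep" is abstracted as a plain linear solve on invented 2x2 loss and fission matrices, with no Monte Carlo sweep and no NDA acceleration as in the paper.

```python
import numpy as np

def k_power_iteration(L, F, tol=1e-10, max_iter=500):
    """Solve L @ phi = (1/k) F @ phi for the dominant (k, phi) pair."""
    phi = np.ones(L.shape[0])
    k = 1.0
    for _ in range(max_iter):
        # one "sweep": invert the loss operator against the scaled fission source
        phi_new = np.linalg.solve(L, F @ phi / k)
        # update the eigenvalue from the ratio of successive fission sources
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
        phi_new /= np.linalg.norm(phi_new)
        if abs(k_new - k) < tol:
            return k_new, phi_new
        k, phi = k_new, phi_new
    return k, phi

# Illustrative stand-ins: with L = I, k_eff is the largest eigenvalue of F.
L = np.eye(2)
F = np.diag([2.0, 1.0])
k_eff, phi = k_power_iteration(L, F)
```

The slow convergence of this plain iteration for realistic dominance ratios is precisely what acceleration schemes such as NDA are designed to overcome.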
Worldwide assessment of steam-generator problems in pressurized-water-reactor nuclear power plants
Woo, H.H.; Lu, S.C.
1981-09-15
The objective is to assess the reliability of steam generators of pressurized water reactor (PWR) power plants in the United States and abroad. The assessment is based on operating experience of both domestic and foreign PWR plants. The approach taken is to collect and review papers and reports available from the literature, as well as information obtained by contacting research institutes both here and abroad. This report presents the results of the assessment. It contains a general background of PWR plant operations, plant types, and materials used in PWR plants. A review of the worldwide distribution of PWR plants is also given. The report describes in detail the degradation problems discovered in PWR steam generators: their causes, their impacts on the performance of steam generators, and the actions to mitigate and avoid them. One chapter is devoted to operating experience of PWR steam generators in foreign countries. Another discusses improvements in future steam generator design.
In Situ Airborne Instrumentation: Addressing and Solving Measurement Problems in Ice Clouds
Baumgardner, Darrel; Kok, Greg; Avallone, L.; Bansemer, A.; Borrmann, S.; Brown, P.; Bundke, U.; Chuang, P. Y.; Cziczo, D.; Field, P.; Gallagher, M.; Gayet, J.-F.; Korolev, A.; Kraemer, M.; McFarquhar, G.; Mertes, S.; Moehler, O.; Lance, S.; Lawson, P.; Petters, M. D.; Pratt, K.; Roberts, G.; Rogers, D.; Stetzer, O.; Stith, J.; Strapp, W.; Twohy, C.; Wendisch, M.
2012-02-01
A meeting of 31 international experts on in situ measurements from aircraft was held to identify unresolved questions concerning ice formation and evolution in ice clouds, assess the current state of instrumentation that can address these problems, introduce emerging technology that may overcome current measurement issues, and recommend future courses of action that can improve our understanding of ice cloud microphysical processes and their impact on the environment. The meeting proceedings and outcome have been described in detail in a manuscript submitted to the Bulletin of the American Meteorological Society (BAMS) on March 24, 2011. This paper is currently under review. The remainder of this summary, in the following pages, is the text of the BAMS article. A technical note that will be published by the National Center for Atmospheric Research is currently underway and is expected to be published before the end of the year.
Drift problems in the automatic analysis of gamma-ray spectra using associative memory algorithms
Olmos, P.; Diaz, J.C.; Perez, J.M.; Aguayo, P.; Gomez, P.; Rodellar, V.
1994-06-01
Perturbations affecting nuclear radiation spectrometers during their operation frequently spoil the accuracy of automatic analysis methods. One of the problems usually found in practice refers to fluctuations in the spectrum gain and zero, produced by drifts in the detector and nuclear electronics. The pattern acquired under these conditions may be significantly different from that expected with stable instrumentation, thus complicating the identification and quantification of the radionuclides present in it. In this work, the performance of Associative Memory algorithms when dealing with spectra affected by drifts is explored, assuming a linear energy-calibration function. The formulation of the extended algorithm, constructed to quantify the possible presence of drifts in the spectrometer, is deduced, and the results obtained from its application to several practical cases are discussed.
Technical problems to be solved before the solid oxide fuel cell will be commercialized
Bagger, C.; Hendriksen, P.V.; Mogensen, M.
1996-12-31
The problems which must be solved before SOFC systems are competitive with today's power production technology are both technical and economic in nature. The cost of SOFC stacks at the 25 kW level today is about 30,000 ECU/kW, and it must come down to about 500 ECU/kW. The allowable cost of a SOFC system is anticipated to be around 1500 ECU/kW. As part of the Danish SOFC program (DK-SOFC), a 0.5 kW stack was built and tested during the second half of 1995. Based upon the experience gained, an economic analysis has been made. The tools required to approach an economically acceptable solution are outlined below.
Examination of eastern oil shale disposal problems - the Hope Creek field study
Koppenaal, D.W.; Kruspe, R.R.; Robl, T.L.; Cisler, K.; Allen, D.L.
1985-02-01
A field-based study of problems associated with the disposal of processed Eastern oil shale was initiated in mid-1983 at a private research site in Montgomery County, Kentucky. The study (known as the Hope Creek Spent Oil Shale Disposal Project) is designed to provide information on the geotechnical, revegetation/reclamation, and leachate generation and composition characteristics of processed Kentucky oil shales. The study utilizes processed oil shale materials (retorted oil shale and reject raw oil shale fines) obtained from a pilot plant run of Kentucky oil shale using the travelling grate retort technology. Approximately 1000 tons of processed oil shale were returned to Kentucky for the purpose of the study. The study, composed of three components, is described. The effort to date has concentrated on site preparation and the construction and implementation of the field study research facilities. These endeavors are described and the project direction in the future years is defined.
The people problems of NEPA: Social impact assessment and the role of public involvement
Carnes, S.A.
1989-12-31
This chapter of the book "The Scientific Challenges of NEPA" discusses the people problems of NEPA, social impact assessment, and the role of public involvement in NEPA. When Congress passed the National Environmental Policy Act (NEPA) in 1969, there was little guidance on the preparation of environmental impact statements (EIS) and the role of the public in the NEPA process. Excepting the statutory language of NEPA, which referred to impacts on the human environment, nowhere was this more evident than with respect to people. Questions such as what impacts on people should be assessed, how impacts on people should be assessed, and how people, including but not limited to those persons potentially impacted, should be involved in the assessment itself as well as in NEPA's associated administrative processes, were simply not addressed.
Problems of laminar-turbulent transition control in a boundary layer
Fedorov, A.V.; Levchenko, V.I.; Tumin, A.M. (Moscow Physical-Technical Institute)
1991-03-01
This overview of laminar-turbulent transition control compares different control methods for swept-wing flows. The types of unstable disturbances in the boundary layer are listed, and flow stabilization is described in terms of small disturbances. Control of the transition zone is based on the description of background disturbances, their transformation into instability waves, and their linear and nonlinear amplification. Specific references cite applications to Tollmien-Schlichting waves, crossflow instability near an aircraft's leading edge, and unstable disturbances in a boundary layer over a curved surface. Methods of active control or wave cancellation to deal with the problem are listed, including localized periodic heating, the introduction of vibrations, and the use of suction-blowing. The results of the comparative overview are of interest for aircraft and other aerospace applications to reduce drag and improve fuel efficiency. 111 refs.
The EGS4 Code System: Solution of Gamma-ray and Electron Transport Problems
DOE R&D Accomplishments [OSTI]
Nelson, W. R.; Namito, Yoshihito
1990-03-01
In this paper we present an overview of the EGS4 Code System -- a general purpose package for the Monte Carlo simulation of the transport of electrons and photons. During the last 10-15 years EGS has been widely used to design accelerators and detectors for high-energy physics. More recently the code has been found to be of tremendous use in medical radiation physics and dosimetry. The problem-solving capabilities of EGS4 will be demonstrated by means of a variety of practical examples. To facilitate this review, we will take advantage of a new add-on package, called SHOWGRAF, to display particle trajectories in complicated geometries. These are shown as 2-D laser pictures in the written paper and as photographic slides of a 3-D high-resolution color monitor during the oral presentation. 11 refs., 15 figs.
Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost
Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa
2015-02-15
This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.
Electromagnetic scattering problems -Numerical issues and new experimental approaches of validation
Geise, Robert; Neubauer, Bjoern; Zimmer, Georg
2015-03-10
Electromagnetic scattering problems, i.e. the question of how radiated energy spreads when impinging on an object, are an essential part of wave propagation. Though Maxwell's differential equations, the starting point, are actually quite simple, the integral formulation of an object's boundary conditions, i.e. the solution for the unknown induced currents, can in most cases only be obtained numerically. As a timely topic of practical importance, the scattering of rotating wind turbines is discussed, the numerical description of which is still based on rigorous approximations of yet unspecified accuracy. In this context the issue of validating numerical solutions is addressed, both with reference simulations and in particular with the experimental approach of scaled measurements. For the latter, the idea of an incremental validation is proposed, allowing a step-by-step validation of the new mathematical models required in scattering theory.
Parabolic Sturmians approach to the three-body continuum Coulomb problem
Zaytsev, S. A.; Popov, Yu. V.; Piraux, B.
2013-03-15
The three-body continuum Coulomb problem is treated in terms of generalized parabolic coordinates. Approximate solutions are expressed in the form of a Lippmann-Schwinger-type equation, where the Green's function includes the leading term of the kinetic energy and the total potential energy, whereas the potential contains the non-orthogonal part of the kinetic energy operator. As a test of this approach, the integral equation for the (e⁻, e⁻, He²⁺) system has been solved numerically by using the parabolic Sturmian basis representation of the (approximate) potential. Convergence of the expansion coefficients of the solution has been obtained as the basis set used to describe the potential is enlarged.
The d-edge shortest-path problem for a Monge graph
Bein, W.W.; Larmore, L.L.; Park, J.K.
1992-07-14
A complete edge-weighted directed graph on vertices 1, 2, ..., n that assigns cost c(i,j) to the edge (i,j) is called Monge if its edge costs form a Monge array, i.e., for all i < k and j < l, c[i,j] + c[k,l] ≤ c[i,l] + c[k,j]. One reason Monge graphs are interesting is that shortest paths can be computed quite quickly in such graphs. In particular, Wilber showed that the shortest path from vertex 1 to vertex n of a Monge graph can be computed in O(n) time, and Aggarwal, Klawe, Moran, Shor, and Wilber showed that the shortest d-edge 1-to-n path (i.e., the shortest path among all 1-to-n paths with exactly d edges) can be computed in O(dn) time. This paper's contribution is a new algorithm for the latter problem. Assuming 0 ≤ c[i,j] ≤ U and c[i,j+1] + c[i+1,j] − c[i,j] − c[i+1,j+1] ≥ L > 0 for all i and j, our algorithm runs in O(n(1 + lg(U/L))) time. Thus, when d ≫ 1 + lg(U/L), our algorithm represents a significant improvement over Aggarwal et al.'s O(dn)-time algorithm. We also present several applications of our algorithm; they include length-limited Huffman coding, finding the maximum-perimeter d-gon inscribed in a given convex n-gon, and a digital-signal-compression problem.
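As a rough illustration of the setting (not the paper's O(n(1 + lg(U/L)))-time algorithm, whose details the abstract does not give), the sketch below checks the Monge condition via the equivalent adjacent-2×2 test and computes the shortest exactly-d-edge path with the naive O(dn²) dynamic program that the cited results accelerate. The cost matrix and 0-based vertex indexing are illustrative assumptions.

```python
import math

def is_monge(c):
    """Check the Monge condition c[i][j] + c[k][l] <= c[i][l] + c[k][j]
    for all i < k, j < l, via the equivalent adjacent-2x2 test."""
    n = len(c)
    for i in range(n - 1):
        for j in range(n - 1):
            if c[i][j] + c[i + 1][j + 1] > c[i][j + 1] + c[i + 1][j]:
                return False
    return True

def d_edge_shortest_path(c, d):
    """Naive O(d*n^2) DP: cheapest path from vertex 0 to n-1 using
    exactly d edges (self-loops excluded).  The paper's algorithm
    improves on this baseline by exploiting the Monge structure."""
    n = len(c)
    dist = [0.0 if v == 0 else math.inf for v in range(n)]
    for _ in range(d):
        dist = [min(dist[u] + c[u][v] for u in range(n) if u != v)
                for v in range(n)]
    return dist[n - 1]

# Squared-step costs form a Monge array.
c = [[(i - j) ** 2 for j in range(4)] for i in range(4)]
best = d_edge_shortest_path(c, 3)   # path 0 -> 1 -> 2 -> 3 costs 1+1+1 = 3
```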
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well-known radiation transport code, and real high energy radiographs taken at two U.S. Department of Energy
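The abstract's hierarchical Bayesian models are not specified in detail; as a loose sketch of the core ingredient they share, the toy example below combines a Poisson photon-count likelihood with a simple smoothing prior and draws posterior samples for a 1D deconvolution by random-walk Metropolis. The blur kernel, signal, prior scale, and proposal width are all invented for illustration, not taken from the work described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: blur a 1D signal with a known kernel (a hypothetical
# stand-in for the radiographic imaging operator).
kernel = np.array([0.25, 0.5, 0.25])
x_true = np.array([5.0, 20.0, 40.0, 20.0, 5.0])
data = rng.poisson(np.convolve(x_true, kernel, mode="same"))

def log_post(x):
    """Poisson log-likelihood plus a weak Gaussian-difference smoothing
    prior; non-positive signals are rejected outright."""
    if np.any(x <= 0):
        return -np.inf
    mu = np.convolve(x, kernel, mode="same")
    loglik = np.sum(data * np.log(mu) - mu)
    logprior = -0.5 * np.sum(np.diff(x) ** 2) / 100.0
    return loglik + logprior

# Random-walk Metropolis over the unknown signal.
x = np.full_like(x_true, data.mean())
lp = log_post(x)
samples = []
for _ in range(5000):
    prop = x + rng.normal(scale=1.0, size=x.size)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        x, lp = prop, lp_prop
    samples.append(x.copy())
post_mean = np.mean(samples[2000:], axis=0)    # discard burn-in
```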
Some social and economic problems, tasks and purposes of nuclear power in Russia
Adamov, E.O.; Bryunin, S.V.; Orlov, V.V.
1996-08-01
The complicated economic situation in Russian power generation is manifested in a low efficiency of power utilization and in reduced generation and reduced mining of energy resources. Primary energy production per capita in Russia is approximately 50% higher than the Western European average, and approximately the same amount of electric power is generated. But per unit value of gross domestic product (GDP), consumption is 3.0 and 2.7 times higher, respectively. The amount of diverse pollutants released to the atmosphere per unit of GDP is about 3.0 times higher. Restructuring of the Russian economy and modernization of its power generation, which is also a matter of concern to the international community, will improve these indices, though it will require considerable time and expense. A number of aspects should be emphasized: (1) energy policy is to be considered in the context of the general economic situation, as well as a key element in solving long-term social problems and a basis for Russia's integration into the world economy; (2) the comparatively large resources of fossil fuel are to be considered national wealth and, strategically, reduction of their consumption for energy generation and export purposes should be envisaged; (3) reactor technologies that do not rule out the possibility of recurrence of the gravest accidents (reactivity-type accidents and those involving loss of coolant) cannot be put at the foundation of large-scale nuclear power (NP); (4) the nonproliferation conditions now in use have failed to prevent the propagation of nuclear weapons to new states and should be replaced by more effective ones; (5) for a country where the NP share in the fuel and energy balance is only slightly above 3%, a purely evolutionary course of development is not the only feasible one; (6) an expanding scale of high-level waste disposal is unacceptable in principle; (7) a radical solution of the growing ecological problems all over the world, including global warming of the climate, is unthinkable without NP development.
The Network Completion Problem: Inferring Missing Nodes and Edges in Networks
Kim, M; Leskovec, J
2011-11-14
Network structures, such as social networks, web graphs and networks from systems biology, play important roles in many areas of science and in our everyday lives. In order to study such networks one needs to first collect reliable large-scale network data. While social and information networks have become ubiquitous, the challenge of collecting complete network data still persists. Often the collected network data is incomplete, with nodes and edges missing. Commonly, only a part of the network can be observed, and we would like to infer the unobserved part. We address this issue by studying the Network Completion Problem: given a network with missing nodes and edges, can we complete the missing part? We cast the problem in the Expectation Maximization (EM) framework, where we use the observed part of the network to fit a model of network structure, then estimate the missing part of the network using the model, re-estimate the parameters, and so on. We combine EM with the Kronecker graphs model and design a scalable Metropolized Gibbs sampling approach that allows for the estimation of the model parameters as well as inference about the missing nodes and edges of the network. Experiments on synthetic and several real-world networks show that our approach can effectively recover the network even when about half of the nodes in the network are missing. Our algorithm outperforms not only classical link-prediction approaches but also a state-of-the-art stochastic block modeling approach. Furthermore, our algorithm easily scales to networks with tens of thousands of nodes.
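A minimal sketch of the Kronecker graphs component of the approach, with a hand-picked 2×2 initiator matrix standing in for the parameters the paper would fit via EM (the initiator values here are assumptions, not fitted ones):

```python
import numpy as np

# Hypothetical 2x2 Kronecker initiator matrix of edge probabilities.
theta = np.array([[0.9, 0.5],
                  [0.5, 0.1]])

def kronecker_probability_matrix(theta, k):
    """k-th Kronecker power of the initiator: entry (u, v) is the
    probability of edge u -> v in a 2^k-node stochastic Kronecker graph."""
    P = theta.copy()
    for _ in range(k - 1):
        P = np.kron(P, theta)
    return P

def sample_graph(P, rng):
    """Draw one realization of the adjacency matrix."""
    return (rng.random(P.shape) < P).astype(int)

P = kronecker_probability_matrix(theta, 3)    # 8 x 8 edge probabilities
A = sample_graph(P, np.random.default_rng(1)) # one sampled network
```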
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
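A rough sketch of the spline-interpolation variant of the mesh-free transfer, using a compactly supported Wendland C2 radial basis function in plain NumPy. The kernel choice, support radius, and random point clouds are illustrative assumptions; the paper's parallel sparse-linear-algebra implementation is not reproduced here.

```python
import numpy as np

def wendland_c2(r, support):
    """Wendland C2 compactly supported radial basis function (valid in 3D)."""
    q = r / support
    return np.where(q < 1.0, (1.0 - q) ** 4 * (4.0 * q + 1.0), 0.0)

def rbf_transfer(src_pts, src_vals, tgt_pts, support=0.5):
    """Spline-style RBF transfer: solve K w = f on the source point
    cloud, then evaluate the interpolant at the target points.
    No mesh connectivity is needed on either side."""
    d_ss = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    K = wendland_c2(d_ss, support)          # symmetric positive definite
    w = np.linalg.solve(K, src_vals)
    d_ts = np.linalg.norm(tgt_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    return wendland_c2(d_ts, support) @ w

# Transfer a smooth field between two mismatched 3D point clouds.
rng = np.random.default_rng(2)
src = rng.random((200, 3))
tgt = rng.random((50, 3))
f = lambda p: np.sin(p[:, 0]) + p[:, 1] * p[:, 2]
approx = rbf_transfer(src, f(src), tgt)
err = np.max(np.abs(approx - f(tgt)))
```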
LAGRANGE SOLUTIONS TO THE DISCRETE-TIME GENERAL THREE-BODY PROBLEM
Minesaki, Yukitaka
2013-03-15
There is no known integrator that yields exact orbits for the general three-body problem (G3BP). It is difficult to verify whether a numerical procedure yields the correct solutions to the G3BP because doing so requires knowledge of all 11 conserved quantities, whereas only six are known. Without tracking all of the conserved quantities, it is possible to show that the discrete general three-body problem (d-G3BP) yields the correct orbits corresponding to Lagrange solutions of the G3BP. We show that the d-G3BP yields the correct solutions to the G3BP for two special cases: the equilateral triangle and collinear configurations. For the triangular solution, we use the fact that the solution to the three-body case is a superposition of the solutions to the three two-body cases, and we show that the three bodies maintain the same relative distances at all times. To obtain the collinear solution, we assume a specific permutation of the three bodies arranged along a straight rotating line, and we show that the d-G3BP maintains the same distance ratio between two bodies as in the G3BP. Proving that the d-G3BP solutions for these cases are equivalent to those of the G3BP makes it likely that the d-G3BP and G3BP solutions are equivalent in other cases. To our knowledge, this is the first work that proves the equivalence of the discrete solutions and the Lagrange orbits.
Scoping survey of perceived concerns, issues, and problems for near-surface disposal of FUSRAP waste
Robinson, J.E.; Gilbert, T.L.
1982-12-01
This report is a scoping summary of concerns, issues, and perceived problems for near-surface disposal of radioactive waste, based on a survey of the current literature. Near-surface disposal means land burial in or within 15 to 20 m of the earth's surface. It includes shallow land burial (burial in trenches, typically about 6 m deep with a 2-m cap and cover) and some intermediate-depth land burial (e.g., trenches and cap similar to shallow land burial, but placed below 10 to 15 m of clean soil). Proposed solutions to anticipated problems are also discussed. The purpose of the report is to provide a better basis for identifying and evaluating the environmental impacts and related factors that must be analyzed and compared in assessing candidate near-surface disposal sites for FUSRAP waste. FUSRAP wastes are of diverse types, and their classification for regulatory purposes is not yet fixed. Most of this waste may be characterized as low-activity bulk solid waste, similar to mill tailings but with somewhat lower average specific activity. It may also qualify as Class A segregated waste under the proposed 10 CFR 61 rules, but the parent radionuclides of concern in FUSRAP (primarily U-238 and Th-232) have longer half-lives than the radionuclides of concern in most low-level waste. Most of the references reviewed deal with low-level waste or mill tailings, since there is as yet very little literature in the public domain on FUSRAP per se.
Yang, W.; Wu, H.; Cao, L.
2012-07-01
More and more MOX fuel has been used all over the world in the past several decades. Compared with UO₂ fuel, it presents some new features: the neutron spectrum is harder, and more resonance interference effects arise within the resonance energy range because the MOX fuel contains more resonant nuclides. In this paper, the wavelet scaling function expansion method is applied to study the resonance behavior of plutonium isotopes within MOX fuel. The wavelet scaling function expansion continuous-energy self-shielding method was developed recently and has been validated and verified by comparison with Monte Carlo calculations. In this method, continuous-energy cross sections are utilized within the resonance range, which means it is capable of solving problems with serious resonance interference effects without iterative calculations. The method is therefore naturally suited to the MOX fuel resonance calculation problem. Furthermore, plutonium isotopes exhibit strong oscillations of the total cross section within the thermal energy range, especially ²⁴⁰Pu and ²⁴²Pu. To take the thermal resonance effect of plutonium isotopes into consideration, the wavelet scaling function expansion continuous-energy resonance calculation code WAVERESON is enhanced by applying the free-gas scattering kernel to obtain the continuous-energy scattering source within the thermal energy range (2.1 eV to 4.0 eV), in contrast to the resonance energy range, in which the elastic scattering kernel is utilized. Finally, all WAVERESON results are compared with MCNP calculations. (authors)
Solution of elastic-plastic stress analysis problems by the P-version of the finite element method
Szabo, B.A.; Holzer, S.M.; Actis, R.L.
1995-12-31
The solution of small-strain elastic-plastic stress analysis problems by the p-version of the finite element method is discussed. The formulation is based on the deformation theory of plasticity and the displacement method. Practical realization of controlling discretization errors for elastic-plastic problems is the main focus of the paper. Numerical examples, which include comparisons between the deformation and incremental theories of plasticity under tight control of discretization errors, are presented.
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
Turinsky, Paul
2015-02-09
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena that differ in prediction fidelity. If the highest-fidelity model is judged to always provide or exceed the desired fidelity, and if one can determine the difference in a Quantity of Interest (QoI) between the highest-fidelity model and lower-fidelity models, then one can select the lowest-fidelity model that still delivers the desired accuracy in the QoI. Assuming lower-fidelity models require fewer computational resources, computational efficiency can be realized in this manner, provided the QoI value can be evaluated accurately and efficiently. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest-fidelity model evaluated using the solution from lower-fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest-fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower-fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest-fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower-fidelity thermal-hydraulic model was based upon the same equations as used for the highest-fidelity model but now with coarse spatial
Dismantling of Radium-226 Coal Level Gauges: Encountered Problems and How to Solve
Punnachaiya, M.; Nuanjan, P.; Moombansao, K.; Sawangsri, T.; Pruantonsai, P.; Srichom, K.
2006-07-01
This paper describes techniques for dismantling disused sealed Radium-226 (Ra-226) coal level gauges for which the source specifications and documents were not available, including problems that occurred during the dismantling stage and the decision making involved in overcoming those obstacles. The 2 mCi (20 pieces), 6 mCi (20 pieces) and 6.6 mCi (30 pieces) Ra-226 hemispherically shaped, lead-filled coal level gauges had been used in industrial applications for electric power generation. All sources needed to be dismantled for further conditioning as requested by the International Atomic Energy Agency (IAEA). One of the 2 mCi Ra-226 sources was dismantled under the supervision of an IAEA expert. Before the conditioning period, each of the 6 mCi and 6.6 mCi sources was dismantled and inspected. It was found that the coal level gauges contained two different source types: a sealed cylindrical source (2 cm diameter x 2 cm length) locked with a spring in a lead housing for the 2 mCi and 6.6 mCi gauges, while the 6 mCi gauge had a capsule embedded inside a source holder stud assembly in a lead-filled housing. Dismantling the Ra-226 coal level gauges comprised six operational steps: confirmation of the surface dose rate for each source activity, calculation of the working time within the effective occupational dose limit, cutting the weld of the lead container with an electrical blade, confirmation of the size of the embedded Ra-226 capsule using a radiation scanning technique and gamma radiography, automatic sawing of the source holder stud assembly, and transfer of the source to storage in a lead safe box. The embedded length of the 6 mCi Ra-226 capsule in its 2 cm diameter x 14.7 cm long stud assembly was identified; the results from the scanning technique and radiographic film revealed an embedded source length of about 2 cm, so all the 6 mCi sources were safely cut at 3 cm using the automatic saw. Another problem was that one of the 6.6 mCi spring-type sources stuck inside its housing because the spring was deformed and there was
Fort, James A.; Cuta, Judith M.; Bajwa, C.; Baglietto, E.
2010-07-18
In the United States, commercial spent nuclear fuel is typically moved from spent fuel pools to outdoor dry storage pads within a transfer cask system that provides radiation shielding to protect personnel and the surrounding environment. The transfer casks are cylindrical steel enclosures with integral gamma and neutron radiation shields. Since the transfer cask system must be passively cooled, decay heat removal from the spent nuclear fuel canister is limited by the rate of heat transfer through the cask components and by natural convection from the transfer cask surface. The primary mode of heat transfer within the transfer cask system is conduction, but some cask designs incorporate a liquid neutron shield tank surrounding the transfer cask structural shell. In these systems, accurate prediction of natural convection within the neutron shield tank is an important part of assessing the overall thermal performance of the transfer cask system. The large-scale geometry of the neutron shield tank, which is typically an annulus approximately 2 meters in diameter but only 10-15 cm in thickness, and the relatively small-scale velocities (typically less than 5 cm/s) represent a wide range of spatial and temporal scales that make this a challenging problem for computational fluid dynamics (CFD) modeling. Relevant experimental data at these scales are not available in the literature, but some recent modeling studies offer insights into numerical issues and solutions; however, the geometries in these studies, and those of the smaller-scale experimental data in the literature, all have large annular gaps that are not prototypic of the transfer cask neutron shield. This paper proposes that there may be reliable CFD approaches to the transfer cask problem, specifically coupled steady-state solvers or unsteady simulations; however, both of these solutions take significant computational effort. Segregated (uncoupled) steady-state solvers that were tested did not
Infinitely many solutions of a quasilinear elliptic problem with an oscillatory potential
Omari, P.; Zanolin, F.
1996-12-31
Let Ω be a bounded domain in ℝᴺ, with N ≥ 1, having a smooth boundary ∂Ω. We denote by A the quasilinear elliptic second-order differential operator defined by Au = div(a(|∇u|²)∇u). We suppose that the function a: [0,+∞) → (0,+∞) is of class C¹ and satisfies the following ellipticity and growth conditions of Leray-Lions type (cf. e.g. [22]): there are constants λ, Λ, Γ > 0, K ∈ [0,1] and p ∈ [1,+∞) such that, for every s > 0, λ(K+s)^(p−2) ≤ a(s²) ≤ Λ(K+s)^(p−2) and (λ − 1/2) a(s) ≤ a′(s) s ≤ Γ a(s). Hence we can define, for each s ≥ 0, the function A(s) = ∫₀ˢ a(ξ) dξ. Let us consider the Dirichlet problem −Au = μ(x)g(u) + h(x) in Ω, u = 0 on ∂Ω, where g: ℝ → ℝ is continuous and μ, h ∈ L^∞(Ω), with μ₀ = ess inf_Ω μ > 0. We also set G(s) = ∫₀ˢ g(ξ) dξ, for all s ∈ ℝ. By a solution of the Dirichlet problem we mean a function u ∈ W₀^{1,p}(Ω) ∩ L^∞(Ω) such that ∫_Ω a(|∇u|²)∇u·∇w dx = ∫_Ω μ g(u) w dx + ∫_Ω h w dx, for every w ∈ W₀^{1,p}(Ω), where p is the exponent appearing in the growth condition above. The aim of this paper is to prove the existence of infinitely many solutions of this problem when the potential G(s) exhibits an oscillatory behaviour at infinity. 22 refs.
Hart, S. W. D.; Maldonado, G. Ivan; Celik, Cihangir; Leal, Luiz C
2014-01-01
For many Monte Carlo codes, cross sections are generally only created at a set of predetermined temperatures. This causes an increase in error as one moves further and further away from these temperatures in the Monte Carlo model. This paper discusses recent progress in the SCALE Monte Carlo module KENO to create problem-dependent, Doppler-broadened cross sections. Currently only broadening of the 1D cross sections and probability tables is addressed. The approach uses a finite difference method to calculate the temperature-dependent cross sections for the 1D data, and a simple linear-logarithmic interpolation in the square root of temperature for the probability tables. Work is also ongoing to address broadening of the S(α,β) tables. With the current approach the temperature-dependent cross sections are Doppler broadened before transport starts, and, for all but a few isotopes, the impact on cross-section loading is negligible. Results can be compared with those obtained by using multigroup libraries, as KENO currently interpolates on the multigroup cross sections to determine temperature-dependent cross sections. Current results compare favorably with these expected results.
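The square-root-of-temperature interpolation described for the probability tables can be sketched as follows. This is a minimal reading of the abstract's description, not KENO's actual interface; the function name and sample temperatures/values are illustrative.

```python
import math

def interp_prob_table(T, T1, T2, p1, p2):
    """Linear-logarithmic interpolation in sqrt(T): log(p) is taken to be
    linear in sqrt(T) between two library temperatures T1 < T2 (in K),
    where p1, p2 are the tabulated probability-table values at T1, T2."""
    s, s1, s2 = math.sqrt(T), math.sqrt(T1), math.sqrt(T2)
    frac = (s - s1) / (s2 - s1)
    return math.exp((1.0 - frac) * math.log(p1) + frac * math.log(p2))

# Evaluate at an intermediate temperature between two library points.
p_600 = interp_prob_table(600.0, 300.0, 900.0, 2.0, 8.0)
```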
Smith, Kyle K. G.; Poulsen, Jens Aage; Nyman, Gunnar; Rossky, Peter J.
2015-06-28
We develop two classes of quasi-classical dynamics that are shown to conserve the initial quantum ensemble when used in combination with the Feynman-Kleinert approximation of the density operator. These dynamics are used to improve the Feynman-Kleinert implementation of the classical Wigner approximation for the evaluation of quantum time correlation functions known as Feynman-Kleinert linearized path-integral. As shown, both classes of dynamics are able to recover the exact classical and high temperature limits of the quantum time correlation function, while a subset is able to recover the exact harmonic limit. A comparison of the approximate quantum time correlation functions obtained from both classes of dynamics is made with the exact results for the challenging model problems of the quartic and double-well potentials. It is found that these dynamics provide a great improvement over the classical Wigner approximation, in which purely classical dynamics are used. In a special case, our first method becomes identical to centroid molecular dynamics.
Fuel cells provide a revenue-generating solution to power quality problems
King, J.M. Jr.
1996-03-01
Electric power quality and reliability are becoming increasingly important as computers and microprocessors assume a larger role in commercial, health care and industrial buildings and processes. At the same time, constraints on transmission and distribution of power from central stations are making local areas vulnerable to low voltage, load addition limitations, power quality and power reliability problems. Many customers currently utilize some form of premium power in the form of standby generators and/or UPS systems. These include customers where continuous power is required for health, safety or security reasons (hospitals, nursing homes, places of public assembly, air traffic control, military installations, telecommunications, etc.). They also include customers with industrial or commercial processes that cannot tolerate an interruption of power because of product loss or equipment damage. The paper discusses the use of the PC25 fuel cell power plant for backup and parallel power supplies for critical industrial applications. Several PC25 installations are described: the use of propane in a PC25; the use by rural cooperatives; and a demonstration of PC25 technology using landfill gas.
Electromagnetic Extended Finite Elements for High-Fidelity Multimaterial Problems LDRD Final Report.
Siefert, Christopher; Bochev, Pavel Blagoveston; Kramer, Richard Michael Jack; Voth, Thomas Eugene; Cox, James
2014-09-01
Surface effects are critical to the accurate simulation of electromagnetics (EM) as current tends to concentrate near material surfaces. Sandia EM applications, which include exploding bridge wires for detonator design, electromagnetic launch of flyer plates for material testing and gun design, lightning blast-through for weapon safety, electromagnetic armor, and magnetic flux compression generators, all require accurate resolution of surface effects. These applications operate in a large deformation regime, where body-fitted meshes are impractical and multimaterial elements are the only feasible option. State-of-the-art methods use various mixture models to approximate the multi-physics of these elements. The empirical nature of these models can significantly compromise the accuracy of the simulation in this very important surface region. We propose to substantially improve the predictive capability of electromagnetic simulations by removing the need for empirical mixture models at material surfaces. We do this by developing an eXtended Finite Element Method (XFEM) and an associated Conformal Decomposition Finite Element Method (CDFEM) which satisfy the physically required compatibility conditions at material interfaces. We demonstrate the effectiveness of these methods for diffusion and diffusion-like problems on node, edge and face elements in 2D and 3D. We also present preliminary work on h-hierarchical elements and remap algorithms.
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
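The Richardson-extrapolation baseline that MNP/defect correction is compared against can be sketched briefly. This is the generic textbook formula, not the paper's implementation; the sample values are contrived so the assumed error model holds exactly.

```python
def richardson_error_estimate(f_coarse, f_fine, r, p):
    """Richardson-extrapolation estimate of the discretization error on
    the fine grid, given solutions on two grids with refinement ratio r
    and a method of formal order of accuracy p."""
    return (f_coarse - f_fine) / (r ** p - 1.0)

# Example: a second-order method (p=2) on grids with spacing h and h/2
# (r=2), applied to a quantity whose exact value is 1.0.
f_h = 1.0 + 0.04      # coarse-grid error C*h^2       = 0.04
f_h2 = 1.0 + 0.01     # fine-grid error   C*(h/2)^2   = 0.01
est = richardson_error_estimate(f_h, f_h2, r=2, p=2)
# est recovers the fine-grid error for this idealized case
```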
Thin film polycrystalline silicon: Promise and problems in displays and solar cells
Fonash, S.J.
1995-08-01
Thin film polycrystalline Si (poly-Si) with its carrier mobilities, potentially good stability, low intragrain defect density, compatibility with silicon processing, and ease of doping activation is an interesting material for "macroelectronics" applications such as TFTs for displays and solar cells. The poly-Si films needed for these applications can be ultra-thin: in the 500 Å to 1000 Å thickness range for flat panel display TFTs and in the 4 μm to 10 μm thickness range for solar cells. Because the films needed for these microelectronics applications can be so thin, an effective approach to producing the films is that of crystallizing a-Si precursor material. Unlike cast materials, poly-Si films made this way can be produced using low temperature processing. Unlike deposited poly-Si films, these crystallized poly-Si films can have grain widths that are much larger than the film thickness and almost atomically smooth surfaces. This thin film poly-Si crystallized from a-Si precursor films, and its promise and problems for TFTs and solar cells, is the focus of this discussion.
Potential problem areas: extended storage of low-level radioactive waste
Siskind, B.
1985-01-01
If a state or regional compact does not have adequate disposal capacity for low-level radioactive waste (LLRW), then extended storage of certain LLRW may be necessary. The Nuclear Regulatory Commission (NRC) has contracted with Brookhaven National Laboratory to address the technical issues of extended storage. The dual objectives of this study are (1) to provide practical technical assessments for NRC to consider in evaluating specific proposals for extended storage and (2) to help ensure adequate consideration by NRC, Agreement States, and licensees of potential problems that may arise from existing or proposed extended storage practices. Storage alternatives are considered in order to characterize the likely storage environments for these wastes. In particular, the range of storage alternatives considered and being implemented by the nuclear power plant utilities is described. The properties of the waste forms and waste containers are discussed. An overview is given of the performance of the waste package and its contents during storage (e.g., radiolytic gas generation, corrosion) and of the effects of extended storage on the performance of the waste package after storage (e.g., radiation-induced embrittlement of polyethylene, the weakening of steel containers by corrosion). Additional information and actions required to address these concerns, including possible mitigative measures, are discussed. 26 refs., 1 tab.
Not Available
1994-06-01
Air pollution in Mexico City has increased along with the growth of the city, the movement of its population, and the growth of employment created by industry. The main cause of pollution in the city is energy consumption. Therefore, it is necessary to take into account the city's economic development and its prospects when considering the technological relationships between well-being and energy consumption. Air pollution in the city from dust and other particles suspended in the air is an old problem. However, pollution as we know it today began about 50 years ago with the growth of industry, transportation, and population. The level of well-being attained in Mexico City implies a high energy use that necessarily affects the valley's natural air quality. However, the pollution has grown so fast that the City must act urgently on three fronts: first, following a comprehensive strategy, transform the economic foundation of the city with nonpolluting activities to replace the old industries; second, halt pollution growth through the development of better technologies; and third, use better fuels, emission controls, and protection of wooded areas.
MPSalsa: a finite element computer program for reacting flow problems. Part 2 - user's guide
Salinger, A.; Devine, K.; Hennigan, G.; Moffat, H.
1996-09-01
This manual describes the use of MPSalsa, an unstructured finite element (FE) code for solving chemically reacting flow problems on massively parallel computers. MPSalsa has been written to enable the rigorous modeling of the complex geometry and physics found in engineering systems that exhibit coupled fluid flow, heat transfer, mass transfer, and detailed reactions. In addition, considerable effort has been made to ensure that the code makes efficient use of the computational resources of massively parallel (MP), distributed memory architectures in a way that is nearly transparent to the user. The result is the ability to simultaneously model both three-dimensional geometries and flow as well as detailed reaction chemistry in a timely manner on MP computers, an ability we believe to be unique. MPSalsa has been designed to allow the experienced researcher considerable flexibility in modeling a system. Any combination of the momentum equations, energy balance, and an arbitrary number of species mass balances can be solved. The physical and transport properties can be specified as constants, as functions, or taken from the Chemkin library and associated database. Any of the standard set of boundary conditions and source terms can be adapted by writing user functions, for which templates and examples exist.
Problems with the quantitative spectroscopic analysis of oxygen rich Czech coals
Pavlikova, H.; Machovic, V.; Cerny, J.; Sebestova, E.
1995-12-01
Solid state NMR and FTIR spectroscopies are two main methods used for the structural analysis of coals and their various products. Obtaining quantitative parameters from coals, such as aromaticity (f_a), by the above mentioned methods can be a rather difficult task. Coal samples of various rank were chosen for the quantitative NMR, FTIR and EPR analyses. The aromaticity was obtained by the FTIR, ¹³C CP/MAS and SP/MAS NMR experiments. The content of radicals and saturation characteristics of coals were measured by EPR spectroscopy. The following problems have been discussed: 1. The relationship between the amount of free radicals (N_g) and f_a by NMR. 2. The f_a obtained by solid state NMR and FTIR spectroscopies. 3. The differences between the f_a measured by CP and SP/NMR experiments. 4. The relationship between the content of oxygen groups and the saturation responses of coals. The reliability of our results was checked by measuring the structural parameters of Argonne premium coals.
A novel solution to the gated x-ray detector gain droop problem
Oertel, J. A.; Archuleta, T. N.
2014-11-15
Microchannel plate (MCP), microstrip transmission line based, gated x-ray detectors used at the premier ICF laser facilities have a drop in gain as a function of microstrip length that can be greater than 50% over 40 mm. These losses are due to ohmic losses in a microstrip coating that is less than the optimum electrical skin depth. The electrical skin depth for a copper transmission line at 3 GHz is 1.2 μm while the standard microstrip coating thickness is roughly half a single skin depth. Simply increasing the copper coating thickness would begin filling the MCP pores and limit the number of secondary electrons created in the MCP. The current coating thickness represents a compromise between gain and ohmic loss. We suggest a novel solution to the loss problem by overcoating the copper transmission line with five electrical skin depths (∼6 μm) of beryllium. Beryllium is reasonably transparent to x-rays above 800 eV and would improve the carrier current on the transmission line. The net result should be an optically flat photocathode response with almost no measurable loss in voltage along the transmission line.
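The skin-depth figures quoted above can be checked against the standard formula δ = sqrt(ρ / (π f μ)). The sketch below is illustrative (the handbook value for copper resistivity is my assumption, not from the abstract) and reproduces both the ~1.2 μm depth at 3 GHz and the ~6 μm five-skin-depth overcoat:

```python
import math

RHO_CU = 1.68e-8           # copper resistivity, ohm*m (handbook value)
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(rho, freq, mu_r=1.0):
    """Electrical skin depth delta = sqrt(rho / (pi * f * mu)), in metres."""
    return math.sqrt(rho / (math.pi * freq * mu_r * MU_0))

delta = skin_depth(RHO_CU, 3e9)
print(f"Cu skin depth at 3 GHz: {delta * 1e6:.2f} um")   # about 1.2 um
print(f"five skin depths:       {5 * delta * 1e6:.1f} um")  # about 6 um
```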
Bronnikov, K. A.; Meierovich, B. E.
2008-02-15
We consider (d₀ + 2)-dimensional configurations with global strings in two extra dimensions and a flat metric in d₀ dimensions, endowed with a warp factor e^{2γ} depending on the distance l from the string center. All possible regular solutions of the field equations are classified by the behavior of the warp factor and the extradimensional circular radius r(l). Solutions with r → ∞ and r → const > 0 as l → ∞ are interpreted in terms of thick brane-world models. Solutions with r → 0 as l → l_c > 0, i.e., those with a second center, are interpreted as either multibrane systems (which are appropriate for large enough distances l_c between the centers) or as Kaluza-Klein-type configurations with extra dimensions invisible due to their smallness. In the case of the Mexican-hat symmetry-breaking potential, we build the full map of regular solutions on the (ε, Γ) parameter plane, where ε acts as an effective cosmological constant and Γ characterizes the gravitational field strength. The trapping properties of candidate brane worlds for test scalar fields are discussed. Good trapping properties for massive fields are found for models with increasing warp factors. Kaluza-Klein-type models are shown to have nontrivial warp factor behaviors, leading to matter particle mass spectra that seem promising from the standpoint of hierarchy problems.
The Problem with Continuity of Knowledge in Enrichment Plant Process Monitoring
Curtis, Michael M.
2009-08-01
It has been three years since the new Gas Centrifuge Enrichment Plant (GCEP) Model Safeguards Approach was approved for implementation by the International Atomic Energy Agency's Department of Safeguards. Among its recommendations are safeguard measures that place greater emphasis on instrumentation in the process area (Cooley 2007). Irrespective of the compelling technologies, an often overlooked impediment to the application of such instrumentation is maintenance of continuity of knowledge (CoK) on material that has been identified as abnormal. Any instrument purporting to identify problems in the process area should include some means of containing or monitoring that material until International Atomic Energy Agency (IAEA) inspectors can arrive to confirm the discrepancy. If no containment or surveillance is employed in the interim, and no discrepancy or anomaly is subsequently uncovered in storage cylinders, it is unclear what follow-up action inspectors can take. Some CoK measures have been proposed, but they usually involve an array of cameras or host-applied seals, options that may require a backup system of their own.
Evaluation of vectorized Monte Carlo algorithms on GPUs for a neutron Eigenvalue problem
Du, X.; Liu, T.; Ji, W.; Xu, X. G.; Brown, F. B.
2013-07-01
Conventional Monte Carlo (MC) methods for radiation transport computations are 'history-based', which means that one particle history at a time is tracked. Simulations based on such methods suffer from thread divergence on the graphics processing unit (GPU), which severely affects the performance of GPUs. To circumvent this limitation, event-based vectorized MC algorithms can be utilized. A versatile software test-bed, called ARCHER - Accelerated Radiation-transport Computations in Heterogeneous Environments - was used for this study. ARCHER facilitates the development and testing of an MC code based on the vectorized MC algorithm implemented on GPUs by using NVIDIA's Compute Unified Device Architecture (CUDA). The ARCHER_GPU code was designed to solve a neutron eigenvalue problem and was tested on an NVIDIA Tesla M2090 Fermi card. We found that although the vectorized MC method significantly reduces the occurrence of divergent branching and enhances the warp execution efficiency, the overall simulation speed is ten times slower than the conventional history-based MC method on GPUs. By analyzing detailed GPU profiling information from ARCHER, we discovered that the main reason was the large amount of global memory transactions, causing severe memory access latency. Several possible solutions to alleviate the memory latency issue are discussed. (authors)
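The history-based versus event-based distinction can be illustrated with a toy slab-transmission problem. This is not ARCHER code; the one-dimensional physics, parameters, and function names are all illustrative. The history-based version carries each particle to completion inside a divergent inner loop; the event-based version applies one event type to the whole surviving batch per sweep, which is the structure that maps onto coherent GPU warps:

```python
import math
import random

def history_based(n, sigma_t=1.0, absorb_p=0.5, thickness=2.0, seed=1):
    """Track one particle at a time from birth to absorption or leakage."""
    rng = random.Random(seed)
    leaked = 0
    for _ in range(n):
        x = 0.0
        while True:
            x += -math.log(rng.random()) / sigma_t  # sample a flight distance
            if x >= thickness:                      # leaked through the slab
                leaked += 1
                break
            if rng.random() < absorb_p:             # absorbed at the collision
                break
    return leaked / n

def event_based(n, sigma_t=1.0, absorb_p=0.5, thickness=2.0, seed=2):
    """Apply one event type to the whole surviving batch per sweep; on a GPU,
    each sweep is uniform work and avoids per-history divergent branching."""
    rng = random.Random(seed)
    xs = [0.0] * n
    leaked = 0
    while xs:
        # event 1: advance every live particle by a sampled flight distance
        xs = [x + -math.log(rng.random()) / sigma_t for x in xs]
        # event 2: remove the particles that leaked through the slab
        alive = [x for x in xs if x < thickness]
        leaked += len(xs) - len(alive)
        # event 3: apply absorption to the remaining collided particles
        xs = [x for x in alive if rng.random() >= absorb_p]
    return leaked / n

t_hist = history_based(20000)
t_event = event_based(20000)
print(f"transmission: history-based {t_hist:.3f}, event-based {t_event:.3f}")
```

Both versions sample the same stochastic process, so their transmission estimates agree to within statistical noise; as the abstract notes, the practical GPU performance of the event-based layout then hinges on memory-access patterns, not just branch coherence.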
Problem-free time-dependent variational principle for open quantum systems
Joubert-Doriol, Loïc; Izmaylov, Artur F.
2015-04-07
Methods of quantum nuclear wave-function dynamics have become very efficient in simulating large isolated systems using the time-dependent variational principle (TDVP). However, a straightforward extension of the TDVP to the density matrix framework gives rise to methods that do not conserve the energy in the isolated system limit and the total system population for open systems where only energy exchange with the environment is allowed. These problems arise when the system density is in a mixed state and is simulated using an incomplete basis. Thus, the basis set incompleteness, which is inevitable in practical calculations, creates artificial channels for energy and population dissipation. To overcome this unphysical behavior, we have introduced a constrained Lagrangian formulation of TDVP applied to a non-stochastic open system Schrödinger equation [L. Joubert-Doriol, I. G. Ryabinkin, and A. F. Izmaylov, J. Chem. Phys. 141, 234112 (2014)]. While our formulation can be applied to any variational ansatz for the system density matrix, derivation of working equations and numerical assessment is done within the variational multiconfiguration Gaussian approach for a two-dimensional linear vibronic coupling model system interacting with a harmonic bath.
Ross, W.A.
1994-07-01
The environmental impact statement (EIS) system of the Philippines is reviewed, identifying progress made in its effective implementation since 1986. Improvement in coverage is noted and real commitment to good environmental impact assessment (EIA) practice is found in those responsible for the EIS system. Project proponents show a modest acceptance of the system. Major problems remaining are: (1) the EIS system is seen as a bureaucratic requirement needed to obtain project approvals; (2) political interference determines the outcome of some environmental reviews; (3) questionable practices by public servants serve to discredit the system; and (4) the treatment of projects in environmentally critical areas is less than satisfactory. Based on the principle that it is essential to establish a credible process seen to work effectively by the public, politicians, the government bureaucracy, and proponents, suggestions for improvement are made. They deal with the treatment of EISs for projects already under construction, EIA training courses, and simple adjustments to the EIS system to focus it on the most important projects.
Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.
Smith, William S.; Bull, Jeffrey S.; Wilcox, Trevor; Bos, Randall J.; Shao, Xuan-Min; Goorley, John T.; Costigan, Keeley R.
2012-08-13
In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation from the source region outward would undergo complicated transmission, reflection, and diffraction processes. EMP simulation for an electrically large urban environment: (1) a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach; and (2) FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy. We use a higher-order low-dispersion, isotropic FDTD algorithm for EMP propagation.
Solutions for resistance-after-fire problems in an electric match
Heckes, A. A.; Montoya, A. P.
1980-01-01
Current leakage in an electric match after firing is a problem if it drains power that can be used elsewhere or if it induces unwanted fluctuations in other electrical circuits. Two novel techniques are described that significantly reduce the resistance-after-fire (RAF) sensitivity of a Ti/KClO₄-loaded electric match used to ignite the pyrotechnic materials in a thermal battery. In the first technique, a thin (less than 10 μm thick) film insulator, such as Parylene or SiO₂, is vapor deposited within the match cavity prior to the loading of the pyrotechnic. The insulator tends to smooth the cavity surface as an aid to ejection of firing residues and to decrease the exposed metal surface area to prevent pin-to-pin short circuits. The second technique involves placing a length of heat-shrinkable tubing on the match extending from the output end so that the shrink tubing is activated by the heat of the match and the thermal battery when fired. The shrinkage of the tubing effectively decreases the cross-sectional area for mass and heat transfer from the battery back into the match. 5 figures, 2 tables.
Adiabatic representation in the three-body problem with Coulomb interaction
Vinitskii, S.I.; Ponomarev, L.I.
1982-11-01
An effective method for solving the three-body problem with Coulomb interaction is presented systematically. The essential feature of the method is an expansion of the wave function of the three-particle system with respect to an adiabatic basis and reduction of the original Schrödinger equation to a system of ordinary differential equations. Convergence of the adiabatic expansion is ensured not only by the smallness of the ratio of the particle masses but also by the smallness of the nondiagonal matrix elements of the kinetic-energy operator of particles of the same charge. The possibilities of the method are demonstrated by the example of the calculation of the energies and wave functions of all states of the μ-mesic molecules of the hydrogen isotopes and the e⁻e⁻e⁺ system. The method is equally suitable for calculating the ground state and the excited states of a three-particle system. This is particularly important in the calculation of the energies of the weakly bound states of the mesic molecules ddμ and dtμ, knowledge of which is needed to describe the processes of muonic catalysis of nuclear fusion reactions.
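The structure of the adiabatic expansion described above can be sketched in schematic form; the symbols below are generic (fast coordinates r, slow coordinate R, reduced mass M), not the paper's exact notation:

```latex
% Expansion of the three-body wave function over an adiabatic basis
% \phi_n(\mathbf{r}; R), depending parametrically on the slow coordinate R:
\Psi(\mathbf{r}, R) = \sum_n \phi_n(\mathbf{r}; R)\, \chi_n(R)
% Substitution into the Schr\"odinger equation reduces it to a system of
% coupled ordinary differential equations for the functions \chi_n(R):
\left[ -\frac{1}{2M} \frac{d^2}{dR^2} + E_n(R) - E \right] \chi_n(R)
  = \sum_m U_{nm}(R)\, \chi_m(R)
% where E_n(R) are the adiabatic terms and U_{nm}(R) are the nonadiabatic
% couplings generated by the kinetic-energy operator; the convergence
% argument in the abstract concerns the smallness of the off-diagonal U_{nm}.
```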
On a solution to the problem of the poor cyclic fatigue resistance of bulk metallic glasses
Launey, Maximilien E.; Hofmann, Douglas C.; Johnson, William L.; Ritchie, Robert O.
2009-01-09
The recent development of metallic glass-matrix composites represents a particular milestone in engineering materials for structural applications owing to their remarkable combinations of strength and toughness. However, metallic glasses are highly susceptible to cyclic fatigue damage and previous attempts to solve this problem have been largely disappointing. Here we propose and demonstrate a microstructural design strategy to overcome this limitation by matching the microstructural length scales (of the second phase) to mechanical crack-length scales. Specifically, semi-solid processing is used to optimize the volume fraction, morphology, and size of second phase dendrites to confine any initial deformation (shear banding) to the glassy regions separating dendrite arms having length scales of ≈ 2 μm, i.e., to less than the critical crack size for failure. Confinement of the damage to such interdendritic regions results in enhancement of fatigue lifetimes and increases the fatigue limit by an order of magnitude, making these 'designed' composites as resistant to fatigue damage as high-strength steels and aluminum alloys. These design strategies can be universally applied to any other metallic glass systems.